Sample records for optimal monitoring network

  1. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    PubMed

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, and the Hun River was taken as a validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original one. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. The method was therefore judged feasible, efficient, and economical.
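
The Euclidean-distance step used to flag duplicated sampling sites can be illustrated with a minimal sketch; the site names and indicator values below are hypothetical, not the Hun River data:

```python
import math

# Hypothetical monthly means of three indicators (NH4+-N, COD, BOD) per site.
sites = {
    "S1": [1.2, 18.0, 4.1],
    "S2": [1.3, 18.5, 4.0],   # very similar to S1 -> candidate for removal
    "S3": [0.3, 9.0, 2.0],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pairwise distances; a small distance indicates duplicated monitoring effort.
pairs = sorted(
    (euclidean(sites[i], sites[j]), i, j)
    for i in sites for j in sites if i < j
)
most_redundant = pairs[0][1:]   # pair with the smallest distance
```

Sites whose indicator vectors sit closest together carry nearly the same information, which is the intuition behind merging or dropping them.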

  2. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with genetic algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contaminant transport simulations. The simulation results are introduced as input to the optimization model, which uses a GA to identify the optimal monitoring network design from several candidate monitoring locations. The design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique capable of finding the optimal solution for many complex problems. In this study, the GA approach capable of finding the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions is discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, appropriate GA parameter values must be specified. The sensitivity of GA parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of the monitoring network design problem.
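
To make the binary-variable GA formulation concrete, here is a minimal sketch with hypothetical detection data; the `DETECT` matrix, budget, and GA settings are illustrative assumptions, not values from the paper:

```python
import random

random.seed(42)

# Hypothetical: DETECT[p][w] = 1 if candidate well w detects plume realization p.
DETECT = [
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0, 0],
]
N_WELLS, BUDGET = 6, 3

def fitness(bits):
    """Fraction of plume realizations detected; infeasible designs get -1."""
    if sum(bits) > BUDGET:
        return -1.0
    hits = sum(any(d and b for d, b in zip(row, bits)) for row in DETECT)
    return hits / len(DETECT)

def evolve(pop_size=30, gens=40, p_cross=0.8, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_WELLS)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        nxt = [pop[0][:], pop[1][:]]            # elitism: keep the two best
        while len(nxt) < pop_size:
            a, b = random.sample(pop[:10], 2)   # parents drawn from fitter part
            cut = random.randrange(1, N_WELLS)  # one-point crossover
            child = a[:cut] + b[cut:] if random.random() < p_cross else a[:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Sweeping the `p_cross`, `p_mut`, seed, and elitism arguments mirrors the kind of parameter sensitivity analysis the abstract discusses.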

  3. Optimal Design of River Monitoring Network in Taizihe River by Matter Element Analysis

    PubMed Central

    Wang, Hui; Liu, Zhe; Sun, Lina; Luo, Qing

    2015-01-01

    The objective of this study is to optimize the river monitoring network of the Taizihe River, Northeast China. The state of the network and the water characteristics were studied in this work. Water samples were collected once a month from January 2009 to December 2010 at seventeen sites. Furthermore, 16 monitoring indexes were analyzed in the field and laboratory. The pH value of the surface water samples ranged from 6.83 to 9.31, and the average concentrations of NH4+-N, chemical oxygen demand (COD), volatile phenol, and total phosphorus (TP) were found to decrease significantly; the water quality of the river improved from 2009 to 2010. Through the calculation of data availability and the correlation between adjacent sections, it was found that the present monitoring network was inefficient and that optimization was indispensable. To improve the situation, matter element analysis and gravity distance were applied to the optimization of the river monitoring network and proved to be useful methods for optimizing river quality monitoring networks. The number of monitoring sections was cut from 17 to 13, making the monitoring network more cost-effective after optimization. The results of this study could be used in developing effective management strategies to improve the environmental quality of the Taizihe River. They also show that the proposed model can be effectively used for the optimal design of monitoring networks in river systems. PMID:26023785
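
The adjacent-section correlation check that motivated the optimization can be sketched as follows; the section names and COD series are hypothetical:

```python
def pearson(x, y):
    # Pearson correlation coefficient of two equally long series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Hypothetical monthly COD series at three adjacent river sections.
cod = {
    "sec1": [20, 22, 25, 30, 28, 24],
    "sec2": [21, 23, 26, 31, 29, 25],   # tracks sec1 almost exactly
    "sec3": [10, 35, 15, 40, 12, 38],
}
r12 = pearson(cod["sec1"], cod["sec2"])
r23 = pearson(cod["sec2"], cod["sec3"])
redundant = r12 > 0.95   # adjacent sections carrying near-identical information
```

A correlation close to 1 between neighbouring sections suggests one of them adds little information and is a candidate for removal.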

  4. Optimal Design of Multitype Groundwater Monitoring Networks Using Easily Accessible Tools.

    PubMed

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang

    2016-11-01

    Monitoring networks are expensive to establish and to maintain. In this paper, we extend an existing data-worth estimation method from the suite of PEST utilities with a global optimization method for optimal sensor placement (called optimal design) in groundwater monitoring networks. Design optimization can include multiple simultaneous sensor locations and multiple sensor types, and both location and sensor type are treated simultaneously as decision variables. Our method combines linear uncertainty quantification and a modified genetic algorithm for discrete multilocation, multitype search. The efficiency of the global optimization is enhanced by an archive of past samples and parallel computing. We demonstrate our methodology for a groundwater monitoring network at the Steinlach experimental site, south-western Germany, which has been established to monitor river-groundwater exchange processes. The target of optimization is the best possible exploration for minimum variance in predicting the mean travel time of the hyporheic exchange. Our results demonstrate that the information gain of monitoring network designs can be explored efficiently and with easily accessible tools prior to taking new field measurements or installing additional measurement points. The proposed methods proved to be efficient and can be applied for model-based optimal design of any type of monitoring network in approximately linear systems. Our key contributions are (1) the use of easy-to-implement tools for an otherwise complex task and (2) the consideration of data-worth interdependencies in the simultaneous optimization of multiple sensor locations and sensor types. © 2016, National Ground Water Association.
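
A toy version of linear data worth for simultaneous multilocation, multitype selection (not the PEST-based implementation; the prior, sensitivities, and sensor properties below are invented for illustration):

```python
from itertools import combinations

# Two model parameters with an uncorrelated prior covariance (hypothetical).
P0 = [[4.0, 0.0], [0.0, 1.0]]
g = [1.0, 2.0]   # sensitivity of the prediction (e.g. mean travel time)

# Candidate sensors: (sensitivity row h, noise variance r); two sensor "types"
# with different noise levels, all numbers hypothetical.
CANDIDATES = {
    "head@A": ([1.0, 0.0], 0.5),
    "head@B": ([0.8, 0.3], 0.5),
    "temp@A": ([0.2, 1.0], 0.1),
}

def kalman_update(P, h, r):
    # P' = P - P h^T (h P h^T + r)^(-1) h P   (single scalar measurement)
    Ph = [P[0][0] * h[0] + P[0][1] * h[1], P[1][0] * h[0] + P[1][1] * h[1]]
    s = h[0] * Ph[0] + h[1] * Ph[1] + r
    return [[P[i][j] - Ph[i] * Ph[j] / s for j in range(2)] for i in range(2)]

def pred_var(P):
    # variance of the prediction z = g . m under parameter covariance P
    return sum(g[i] * P[i][j] * g[j] for i in range(2) for j in range(2))

def design_variance(names):
    P = [row[:] for row in P0]
    for n in names:
        P = kalman_update(P, *CANDIDATES[n])
    return pred_var(P)

# Exhaustive search over all two-sensor designs (small enough here; the paper
# uses a modified genetic algorithm for realistic problem sizes).
best = min(combinations(CANDIDATES, 2), key=design_variance)
```

Because each update conditions on the previous one, the worth of a sensor depends on which others are selected, which is the interdependency the abstract highlights.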

  5. Optimization of water-level monitoring networks in the eastern Snake River Plain aquifer using a kriging-based genetic algorithm method

    USGS Publications Warehouse

    Fisher, Jason C.

    2013-01-01

    Long-term groundwater monitoring networks can provide essential information for the planning and management of water resources. Budget constraints in water resource management agencies often mean a reduction in the number of observation wells included in a monitoring network. A network design tool, distributed as an R package, was developed to determine which wells to exclude from a monitoring network because they add little or no beneficial information. A kriging-based genetic algorithm method was used to optimize the monitoring network. The algorithm was used to find the set of wells whose removal leads to the smallest increase in the weighted sum of the (1) mean standard error at all nodes in the kriging grid where the water table is estimated, (2) root-mean-squared-error between the measured and estimated water-level elevation at the removed sites, (3) mean standard deviation of measurements across time at the removed sites, and (4) mean measurement error of wells in the reduced network. The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The network design tool was applied to optimize two observation well networks monitoring the water table of the eastern Snake River Plain aquifer, Idaho; these networks include the 2008 Federal-State Cooperative water-level monitoring network (Co-op network) with 166 observation wells, and the 2008 U.S. Geological Survey-Idaho National Laboratory water-level monitoring network (USGS-INL network) with 171 wells. Each water-level monitoring network was optimized five times: by removing (1) 10, (2) 20, (3) 40, (4) 60, and (5) 80 observation wells from the original network. 
An examination of the trade-offs associated with changes in the number of wells to remove indicates that 20 wells can be removed from the Co-op network with a relatively small degradation of the estimated water table map, and 40 wells can be removed from the USGS-INL network before the water table map degradation accelerates. The optimal network designs indicate the robustness of the network design tool. Observation wells were removed from high well-density areas of the network while retaining the spatial pattern of the existing water-table map.
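
The kriging side of the well-removal idea, scoring each candidate removal by the increase in mean kriging standard error, can be sketched with simple kriging and a hypothetical exponential covariance model (coordinates and parameters are illustrative; the actual tool is an R package combining kriging with a genetic algorithm and a four-term weighted objective):

```python
import math

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

SILL, RANGE = 1.0, 5.0
def cov(p, q):
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    return SILL * math.exp(-d / RANGE)   # exponential covariance model

def mean_sk_std(well_xy, grid):
    # simple-kriging standard error averaged over the estimation grid
    K = [[cov(a, b) for b in well_xy] for a in well_xy]
    total = 0.0
    for x in grid:
        k = [cov(x, w) for w in well_xy]
        lam = solve(K, k)
        var = SILL - sum(l * ki for l, ki in zip(lam, k))
        total += math.sqrt(max(var, 0.0))
    return total / len(grid)

# Hypothetical well coordinates; W2 sits right next to W1 (redundant).
wells = {"W1": (0, 0), "W2": (0.5, 0), "W3": (8, 0), "W4": (4, 6)}
grid = [(i, j) for i in range(0, 9, 2) for j in range(0, 7, 2)]

base = mean_sk_std(list(wells.values()), grid)
increase = {
    name: mean_sk_std([p for n, p in wells.items() if n != name], grid) - base
    for name in wells
}
best_removal = min(increase, key=increase.get)   # cheapest well to drop
```

Removing the near-duplicate well barely degrades the estimated water-table map, which is exactly the trade-off the report examines for 10 to 80 removed wells.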

  6. Optimal design of monitoring networks for multiple groundwater quality parameters using a Kalman filter: application to the Irapuato-Valle aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S; González-Hita, L; Cardona, A; Mora-Rodríguez, J

    2016-01-01

    A new method for the optimal design of groundwater quality monitoring networks is introduced in this paper. Various indicator parameters were considered simultaneously and tested for the Irapuato-Valle aquifer in Mexico. The steps followed in the design were (1) establishment of the monitoring network objectives, (2) definition of a groundwater quality conceptual model for the study area, (3) selection of the parameters to be sampled, and (4) selection of a monitoring network by choosing the well positions that minimize the estimate error variance of the selected indicator parameters. Equal weight for each parameter was given to most of the aquifer positions, and a higher weight to priority zones. The objective of the monitoring network in this application was to obtain a general reconnaissance of the water quality, including water types, water origin, and first indications of contamination. Water quality indicator parameters were chosen in accordance with this objective, and the optimal monitoring sites were selected to obtain a low-uncertainty estimate of these parameters for the entire aquifer, with higher certainty in priority zones. The optimal monitoring network was selected using a combination of geostatistical methods, a Kalman filter, and a heuristic optimization method. Results show that when monitoring the 69 locations with the highest priority order (the optimal monitoring network), the joint average standard error in the study area for all the groundwater quality parameters was approximately 90% of that obtained with the 140 available sampling locations (the set of pilot wells). This demonstrates that an optimal design can help to reduce monitoring costs by avoiding redundancy in data acquisition.
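
A minimal sketch of Kalman-filter-based site selection: each candidate measurement updates the error covariance, and sites are chosen greedily to minimize the total estimate error variance (the 1-D transect, covariance model, and noise level are assumptions for illustration, not the Irapuato-Valle data):

```python
import math

# Hypothetical 1-D transect of estimation points; the prior covariance decays
# with distance (exponential model), R is the measurement noise variance.
PTS = [0.0, 1.0, 2.0, 6.0, 7.0]
R = 0.05
P = [[math.exp(-abs(a - b) / 3.0) for b in PTS] for a in PTS]

def measure(P, i, r=R):
    # Kalman update for a direct (unit-gain) measurement of component i
    s = P[i][i] + r
    return [[P[a][b] - P[a][i] * P[i][b] / s for b in range(len(P))]
            for a in range(len(P))]

def trace(P):
    # total estimate error variance over all estimation points
    return sum(P[i][i] for i in range(len(P)))

chosen, cur = [], P
for _ in range(2):   # pick two monitoring wells greedily
    i = min((k for k in range(len(PTS)) if k not in chosen),
            key=lambda k: trace(measure(cur, k)))
    chosen.append(i)
    cur = measure(cur, i)
```

The greedy ordering naturally yields a priority ranking of locations, analogous to the paper's priority-ordered 69 locations.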

  7. Never Use the Complete Search Space: a Concept to Enhance the Optimization Procedure for Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Bode, F.; Reuschen, S.; Nowak, W.

    2015-12-01

    Drinking-water well catchments include many potential sources of contamination, such as gas stations or agriculture. Finding optimal positions for early-warning monitoring wells is challenging because various parameters (and their uncertainties) influence the reliability and optimality of any suggested monitoring location or monitoring network. The overall goal of this project is to develop and establish a concept to assess, design, and optimize early-warning systems within well catchments. Such optimal monitoring networks need to balance three competing objectives: a high detection probability, which can be reached by maximizing the "field of vision" of the monitoring network; a long early-warning time, such that there is enough time left to install countermeasures after first detection; and the overall operating costs of the monitoring network, which should ideally be reduced to a minimum. The method is based on numerical simulation of flow and transport in heterogeneous porous media, coupled with geostatistics and Monte Carlo simulations or scenario analyses for real data, respectively, wrapped up within the framework of formal multi-objective optimization using a genetic algorithm. In order to speed up the optimization process and to better explore the Pareto front, we developed a concept that forces the algorithm to search only in regions of the search space where promising solutions can be expected. We show how to define these regions beforehand, using knowledge of the optimization problem, but also how to define them independently of problem attributes. With that, our method can be used with and/or without detailed knowledge of the objective functions. In summary, our study helps to improve optimization results in less optimization time through meaningful restrictions of the search space. These restrictions can be made independently of the optimization problem, but also in a problem-specific manner.

  8. Application of SNODAS and hydrologic models to enhance entropy-based snow monitoring network design

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin; Razavi, Tara; Tapsoba, Dominique; Gobena, Adam; Weber, Frank; Pietroniro, Alain

    2018-06-01

    Snow has a unique characteristic in the water cycle: snow falls throughout the winter season, but the discharge from snowmelt is typically delayed until the melting period and occurs over a relatively short period. Therefore, reliable observations from an optimal snow monitoring network are necessary for efficient management of snowmelt water for flood prevention and hydropower generation. Dual Entropy and Multiobjective Optimization is applied to design snow monitoring networks in the La Grande River Basin in Québec and the Columbia River Basin in British Columbia. While the networks are optimized to have the maximum amount of information with minimum redundancy based on entropy concepts, this study extends traditional entropy applications to hydrometric network design by introducing several improvements. First, several data quantization cases and their effects on the snow network design problems were explored. Second, the applicability of the Snow Data Assimilation System (SNODAS) products as synthetic datasets for potential stations was demonstrated in the design of the snow monitoring network of the Columbia River Basin. Third, beyond finding the Pareto-optimal networks from the entropy-based multi-objective optimization, the networks obtained for the La Grande River Basin were further evaluated by applying three hydrologic models. The calibrated hydrologic models simulated discharges using the updated snow water equivalent data from the Pareto-optimal networks. The model performances for high flows were then compared to determine the best optimal network for enhanced spring runoff forecasting.
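
The entropy concepts involved (marginal entropy, joint entropy, and transinformation after data quantization) can be sketched as follows; the SWE-like series and the quantization step are hypothetical:

```python
import math
from collections import Counter
from itertools import combinations

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def quantize(series, step=10.0):
    # class-width quantization; a coarser step merges more observations
    return tuple(round(v / step) for v in series)

# Hypothetical SWE-like series at three candidate stations; B nearly duplicates A.
swe = {
    "A": [12, 14, 23, 21, 32, 34, 41, 44],
    "B": [11, 13, 24, 22, 31, 33, 42, 43],
    "C": [9, 19, 11, 21, 8, 22, 12, 18],
}
q = {k: quantize(v) for k, v in swe.items()}

def joint_entropy(keys):
    return entropy(list(zip(*(q[k] for k in keys))))

# Transinformation (mutual information) measures redundancy between stations.
t_ab = entropy(q["A"]) + entropy(q["B"]) - joint_entropy(["A", "B"])
t_ac = entropy(q["A"]) + entropy(q["C"]) - joint_entropy(["A", "C"])

# A maximum-information pair: the one with the largest joint entropy.
best_pair = max(combinations(q, 2), key=joint_entropy)
```

Choosing a different `step` changes the class counts and hence the entropies, which is why the quantization cases studied in the paper matter for the resulting designs.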

  9. A combined geostatistical-optimization model for the optimal design of a groundwater quality monitoring network

    NASA Astrophysics Data System (ADS)

    Kolosionis, Konstantinos; Papadopoulou, Maria P.

    2017-04-01

    Monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation due to extensive agricultural activities. In this work, a simulation-optimization framework is developed based on heuristic optimization methodologies and geostatistical modeling approaches to obtain an optimal design for a groundwater quality monitoring network. Groundwater quantity and quality data obtained from 43 existing observation locations at 3 different hydrological periods in the Mires basin in Crete, Greece, are used in the proposed framework with regression kriging to develop the spatial distribution of nitrate concentration in the aquifer of interest. Based on the existing groundwater quality mapping, the proposed optimization tool determines a cost-effective observation well network that contributes significant information to water managers and authorities. Observation wells that add little or no beneficial information to the groundwater level and quality mapping of the area can be eliminated using estimation uncertainty and statistical error metrics without affecting the assessment of groundwater quality. Given the high maintenance cost of groundwater monitoring networks, the proposed tool could be used by water regulators in the decision-making process to obtain an efficient network design.

  10. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China

    PubMed Central

    Chen, Kai; Ni, Minjie; Wang, Jun; Huang, Dongren; Chen, Huorong; Wang, Xiao; Liu, Mengyang

    2016-01-01

    Environmental monitoring is fundamental for assessing environmental quality and for fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring faces many problems and challenges: monitoring information cannot be linked up with evaluation, monitoring data cannot adequately reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be well developed and implemented by policy makers. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed monitoring network optimization was appropriate for environmental monitoring in Quanzhou Bay and might provide technical support for coastal management and pollutant reduction in similar areas. PMID:27777951

  11. Optimization of a Coastal Environmental Monitoring Network Based on the Kriging Method: A Case Study of Quanzhou Bay, China.

    PubMed

    Chen, Kai; Ni, Minjie; Cai, Minggang; Wang, Jun; Huang, Dongren; Chen, Huorong; Wang, Xiao; Liu, Mengyang

    2016-01-01

    Environmental monitoring is fundamental for assessing environmental quality and for fulfilling protection and management measures under permit conditions. However, coastal environmental monitoring faces many problems and challenges: monitoring information cannot be linked up with evaluation, monitoring data cannot adequately reflect the current coastal environmental condition, and monitoring activities are limited by cost constraints. For these reasons, protection and management measures cannot be well developed and implemented by policy makers. In this paper, Quanzhou Bay in southeastern China was selected as a case study, and the kriging method and a geographic information system were employed to evaluate and optimize the existing monitoring network in a semienclosed bay. This study used coastal environmental monitoring data from 15 sites (including COD, DIN, and PO4-P) to analyze the water quality from 2009 to 2012 by applying the Trophic State Index. The monitoring network in Quanzhou Bay was evaluated and optimized, with the number of sites increased from 15 to 24 and the monitoring precision improved by 32.9%. The results demonstrated that the proposed monitoring network optimization was appropriate for environmental monitoring in Quanzhou Bay and might provide technical support for coastal management and pollutant reduction in similar areas.

  12. Optimal redistribution of an urban air quality monitoring network using atmospheric dispersion model and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Hao, Yufang; Xie, Shaodong

    2018-03-01

    Air quality monitoring networks play a significant role in identifying the spatiotemporal patterns of air pollution, and they need to be deployed efficiently, with a minimum number of sites. The revision and optimal adjustment of existing monitoring networks is crucial for cities that have undergone rapid urban expansion and experience temporal variations in pollution patterns. An approach based on the Weather Research and Forecasting-California PUFF (WRF-CALPUFF) model and a genetic algorithm (GA) was developed to design an optimal monitoring network. Maximization of coverage with minimum overlap and the ability to detect violations of standards were adopted as the design objectives for the redistributed networks. The non-dominated sorting genetic algorithm was applied to optimize the network size and site locations simultaneously for Shijiazhuang, one of the most polluted cities in China. The assessment of the current network identified insufficient spatial coverage of SO2 and NO2 monitoring for the expanding city. The optimization results showed that significant improvements in multiple objectives were achieved by redistributing the original network: efficient coverage of the resulting designs improved to 60.99% and 76.06% of the urban area for SO2 and NO2, respectively. A redistributed multi-pollutant design comprising 8 sites was also proposed, with its spatial representation covering 52.30% of the urban area and the overlapped areas decreased by 85.87% compared with the original network. The ability to detect violations of standards was not improved as much as the other two objectives, owing to the conflicting nature of the multiple objectives. Additionally, the results demonstrated that the algorithm was slightly sensitive to the parameter settings, with the number of generations presenting the most significant effect. Overall, our study presents an effective and feasible procedure for air quality network optimization at a city scale.
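
The "maximum coverage with minimum overlap" objective can be sketched on a toy grid; the site coordinates, grid, and coverage radius are illustrative assumptions, not the WRF-CALPUFF footprints used in the paper:

```python
# Coverage/overlap bookkeeping for candidate sites on a grid: each site
# "covers" cells within a radius; designs are scored by covered fraction
# and by the number of doubly covered cells. Geometry is hypothetical.
GRID = [(x, y) for x in range(10) for y in range(10)]
RADIUS = 3.0

def footprint(site):
    sx, sy = site
    return {(x, y) for x, y in GRID
            if (x - sx) ** 2 + (y - sy) ** 2 <= RADIUS ** 2}

def score(sites):
    fps = [footprint(s) for s in sites]
    covered = set().union(*fps)
    overlap = sum(len(a & b) for i, a in enumerate(fps) for b in fps[i + 1:])
    return len(covered) / len(GRID), overlap

spread = [(2, 2), (7, 2), (2, 7), (7, 7)]    # well-distributed design
clumped = [(4, 4), (5, 4), (4, 5), (5, 5)]   # redundant, clustered design
cov_spread, ov_spread = score(spread)
cov_clump, ov_clump = score(clumped)
```

A multi-objective GA such as NSGA-II would then trade these two scores off against the violation-detection objective across candidate designs.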

  13. Optimization of hydrometric monitoring network in urban drainage systems using information theory.

    PubMed

    Yazdi, J

    2017-10-01

    Regular and continuous monitoring of urban runoff, in both quality and quantity aspects, is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for the site selection of new discharge stations in urban areas based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information content and shared information. The research also identified the most frequently selected sites on the Pareto front, which may help decision makers prioritize gauge installation at those locations of the network.

  14. Optimization of a large-scale microseismic monitoring network in northern Switzerland

    NASA Astrophysics Data System (ADS)

    Kraft, Toni; Mignan, Arnaud; Giardini, Domenico

    2013-10-01

    We have developed a network optimization method for regional-scale microseismic monitoring networks and applied it to optimize the densification of the existing seismic network in northeastern Switzerland. The new network will build the backbone of a 10-yr study on the neotectonic activity of this area that will help to better constrain the seismic hazard imposed on nuclear power plants and waste repository sites. This task defined the requirements regarding location precision (0.5 km in epicentre and 2 km in source depth) and detection capability [magnitude of completeness Mc = 1.0 (ML)]. The goal of the optimization was to find the geometry and size of the network that met these requirements. Existing stations in Switzerland, Germany and Austria were considered in the optimization procedure. We based the optimization on the simulated annealing approach proposed by Hardt & Scherbaum, which aims to minimize the volume of the error ellipsoid of the linearized earthquake location problem (D-criterion). We have extended their algorithm to: calculate traveltimes of seismic body waves using a finite difference ray tracer and the 3-D velocity model of Switzerland, calculate seismic body-wave amplitudes at arbitrary stations assuming the Brune source model and using scaling and attenuation relations recently derived for Switzerland, and estimate the noise level at arbitrary locations within Switzerland using a first-order ambient seismic noise model based on 14 land-use classes defined by the EU-project CORINE and open GIS data. We calculated optimized geometries for networks with 10-35 added stations and tested the stability of the optimization result by repeated runs with changing initial conditions. Further, we estimated the attainable magnitude of completeness (Mc) for the different sized optimal networks using the Bayesian Magnitude of Completeness (BMC) method introduced by Mignan et al. 
The algorithm developed in this study is also applicable to smaller optimization problems, for example, small local monitoring networks. Possible applications are volcano monitoring, the surveillance of induced seismicity associated with geotechnical operations and many more. Our algorithm is especially useful to optimize networks in populated areas with heterogeneous noise conditions and if complex velocity structures or existing stations have to be considered.
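
A reduced sketch of the D-criterion and simulated-annealing station selection for a 2-D epicentre problem (the wave speed, candidate ring, and annealing schedule are invented; the study's version adds 3-D ray tracing, amplitude scaling, and ambient-noise modeling):

```python
import math, random

random.seed(1)
V = 3.5                                  # assumed uniform wave speed, km/s
SOURCE = (0.0, 0.0)                      # linearization point (trial epicentre)

def jac_row(st):
    """d(traveltime)/d(x0, y0) for one station, straight-ray approximation."""
    dx, dy = SOURCE[0] - st[0], SOURCE[1] - st[1]
    d = math.hypot(dx, dy) or 1e-9
    return (dx / (V * d), dy / (V * d))

def d_criterion(stations):
    # det(G^T G); a larger determinant means a smaller location-error ellipsoid
    g = [jac_row(s) for s in stations]
    a = sum(r[0] * r[0] for r in g)
    b = sum(r[0] * r[1] for r in g)
    c = sum(r[1] * r[1] for r in g)
    return a * c - b * b

# 16 candidate stations on a 30 km ring around the trial epicentre.
CANDIDATES = [(30 * math.cos(i * math.pi / 8), 30 * math.sin(i * math.pi / 8))
              for i in range(16)]

def anneal(k=4, steps=500, temp=1.0, cool=0.99):
    cur = random.sample(range(len(CANDIDATES)), k)
    cur_val = d_criterion([CANDIDATES[i] for i in cur])
    best, best_val = cur[:], cur_val
    for _ in range(steps):
        nxt = cur[:]
        nxt[random.randrange(k)] = random.choice(
            [i for i in range(len(CANDIDATES)) if i not in cur])
        val = d_criterion([CANDIDATES[i] for i in nxt])
        if val > cur_val or random.random() < math.exp((val - cur_val) / temp):
            cur, cur_val = nxt, val
        if cur_val > best_val:
            best, best_val = cur[:], cur_val
        temp *= cool
    return best, best_val

best_net, best_val = anneal()
```

Collinear station geometries give a singular GᵀG (zero determinant), which is why azimuthal coverage around the source dominates the optimized designs.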

  15. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    PubMed

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    A Wireless Sensor Network (WSN) is a network formed of a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area; examples include temperature, water level, and pressure monitoring, health care, and various military applications. Sensor nodes are mostly equipped with self-contained battery power, through which they perform their operations and communicate with neighboring nodes. To maximize the lifetime of wireless sensor networks, energy conservation measures are essential for improving their performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for Wireless Sensor Networks, in which clustering and cluster head selection are performed using a Particle Swarm Optimization (PSO) algorithm so as to minimize the power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.
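
A minimal PSO sketch for cluster-head placement, using the sum of squared node-to-head distances as an energy proxy (the node layout, swarm settings, and cost function are illustrative assumptions, not the EPSO-CEO algorithm itself):

```python
import random

random.seed(7)

# Hypothetical node layout; transmission energy grows with squared distance,
# so the proxy cost is the sum of squared node-to-nearest-head distances.
NODES = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(40)]
K = 2   # number of cluster heads

def cost(x):
    heads = [(x[2 * i], x[2 * i + 1]) for i in range(K)]
    return sum(min((nx - hx) ** 2 + (ny - hy) ** 2 for hx, hy in heads)
               for nx, ny in NODES)

def pso(n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = 2 * K
    xs = [[random.uniform(0, 100) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                  # personal bests
    gbest = min(pbest, key=cost)[:]             # global best
    start_cost = cost(gbest)
    for _ in range(iters):
        for p in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[p][d] = (w * vs[p][d]
                            + c1 * r1 * (pbest[p][d] - xs[p][d])
                            + c2 * r2 * (gbest[d] - xs[p][d]))
                xs[p][d] += vs[p][d]
            if cost(xs[p]) < cost(pbest[p]):
                pbest[p] = xs[p][:]
        gbest = min(pbest + [gbest], key=cost)[:]
    return gbest, start_cost

heads, start_cost = pso()
```

Because the global best is only ever replaced by a better particle, the final cost is guaranteed not to exceed the initial one; in a real WSN the cost function would also weigh residual node energy.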

  16. Design of a sensor network for structural health monitoring of a full-scale composite horizontal tail

    NASA Astrophysics Data System (ADS)

    Gao, Dongyue; Wang, Yishou; Wu, Zhanjun; Rahim, Gorgin; Bai, Shengbao

    2014-05-01

    The detection capability of a given structural health monitoring (SHM) system strongly depends on its sensor network placement. In order to minimize the number of sensors while maximizing the detection capability, optimal design of the PZT sensor network placement is necessary for SHM of a full-scale composite horizontal tail. In this study, the sensor network optimization was simplified to the problem of determining the sensor array placement between stiffeners that achieves the desired coverage rate. First, an analysis of the structural layout and load distribution of a composite horizontal tail was performed, and the constraint conditions of the optimal design were presented. Then, the SHM algorithm for the composite horizontal tail under static load was proposed. Based on the given SHM algorithm, a sensor network was designed for the full-scale composite horizontal tail structure. Effective profiles of cross-stiffener paths (CRPs) and uncross-stiffener paths (URPs) were estimated by a Lamb wave propagation experiment on a multi-stiffener composite specimen. Based on the coverage rate and the redundancy requirements, a seven-sensor array network was chosen as the optimal sensor network for each airfoil. Finally, a preliminary SHM experiment was performed on a typical composite aircraft structure component, and the reliability of the SHM result for a composite horizontal tail structure under static load was validated; in the results, the red zone represented delamination damage. The detection capability of the optimized sensor network was verified by SHM of a full-scale composite horizontal tail; all the diagnosis results were obtained within two minutes. The results showed that all the damage in the monitoring region was covered by the sensor network.

  17. Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Rajeeva; Kumar, Aditya; Dai, Dan

    2012-12-31

    This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network, for online condition monitoring. In particular, the focus in this work is to develop software tools for optimal sensor placement (OSP) and use these tools to design optimal sensor network configuration for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developedmore » will be applicable to sensing system design for online condition monitoring for broad range of applications. The overall approach consists of (i) defining condition monitoring requirement in terms of OSP and mapping these requirements in mathematical terms for OSP algorithm, (ii) analyzing trade-off of alternate OSP algorithms, down selecting the most relevant ones and developing them for IGCC applications (iii) enhancing the gasifier and RSC models as required by OSP algorithms, (iv) applying the developed OSP algorithm to design the optimal sensor network required for the condition monitoring of an IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are desired precision for the monitoring variables (e.g. refractory wear) and reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on extensive literature survey two formulations were identified as being relevant to OSP for condition monitoring; one based on LMI formulation and the other being standard INLP formulation. 
Various algorithms to solve these two formulations were developed and validated. For a given OSP problem, the computational efficiency largely depends on the "size" of the problem, so a simplified 1-D gasifier model assuming axial and azimuthal symmetry was initially used to test the various OSP algorithms. Finally, these algorithms were used to design the optimal sensor network for condition monitoring of IGCC gasifier refractory wear and RSC fouling. The sensor types and locations obtained as the solution to the OSP problem were validated using a model-based sensing approach. The OSP algorithm has been developed in modular form and packaged as a software tool for OSP design, in which a designer can explore various OSP design algorithms in a user-friendly way. The OSP software tool is implemented in Matlab/Simulink© in-house; it also uses a few optimization routines that are freely available on the World Wide Web. In addition, a modular Extended Kalman Filter (EKF) block has been developed in Matlab/Simulink© that can be used for model-based sensing of important process variables that are not directly measured, by combining the online sensors with model-based estimation once the hardware sensors and their locations have been finalized. The OSP algorithm details and the results of applying these algorithms to obtain optimal sensor locations for condition monitoring of gasifier refractory wear and the RSC fouling profile are summarized in this final report.
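The precision-constrained, cost-minimizing selection described above can be illustrated with a toy sketch. This is a greedy relaxation, not the report's actual LMI or INLP formulations: for a scalar monitored variable observed by independent sensors, a Kalman-style posterior variance is 1/(1/prior + Σ 1/noise_i), and the heuristic adds the sensor with the best information-per-cost ratio until a precision target is met. The sensor names, variances, and costs are invented for illustration.

```python
def posterior_variance(prior_var, noise_vars):
    # Independent sensors of a scalar state: information (inverse
    # variance) simply adds up.
    info = 1.0 / prior_var + sum(1.0 / v for v in noise_vars)
    return 1.0 / info

def greedy_osp(candidates, prior_var, target_var):
    """candidates: list of (name, noise_variance, cost).
    Add the sensor with the highest information per unit cost until
    the posterior variance meets the precision target."""
    chosen = []
    while posterior_variance(prior_var, [v for _, v, _ in chosen]) > target_var:
        remaining = [c for c in candidates if c not in chosen]
        if not remaining:
            return None  # precision target infeasible with this pool
        chosen.append(max(remaining, key=lambda c: (1.0 / c[1]) / c[2]))
    return chosen

# Hypothetical refractory thermocouples: (name, noise variance, cost).
sensors = [("TC-top", 4.0, 3.0), ("TC-mid", 1.0, 5.0), ("TC-bot", 2.0, 2.0)]
network = greedy_osp(sensors, prior_var=10.0, target_var=0.7)
```

A real OSP solution would also impose the reliability constraint (tolerance to expected sensor failures) and optimize the subset globally rather than greedily.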

  18. A Risk-Based Multi-Objective Optimization Concept for Early-Warning Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Bode, F.; Loschko, M.; Nowak, W.

    2014-12-01

    Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. A one-to-one monitoring situation for each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts can help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our objectives are to maximize the probability of detecting all contaminations and the early warning time, and to minimize the installation and operating costs of the monitoring network. A qualitative risk ranking is used to prioritize the known risk sources for monitoring. The unknown risk sources can neither be located nor ranked; instead, we represent them by a virtual line of risk sources surrounding the production well. We classify risk sources into four categories: severe, medium, and tolerable for known risk sources, and an extra category for the unknown ones. With that, early warning time and detection probability become individual objectives for each risk class. Thus, decision makers can identify monitoring networks which are valid for controlling the top risk sources, and evaluate the capabilities (or search for a least-cost upgrade) to also cover medium, tolerable, and unknown risk sources. Monitoring networks which are valid for the remaining risk also cover all other risk sources, but the early-warning time suffers. The data provided for the optimization algorithm are calculated in a preprocessing step by a flow and transport model. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulations.
To avoid numerical dispersion during the transport simulations we use the particle-tracking random walk method.

  19. Geostatistics-based groundwater-level monitoring network design and its application to the Upper Floridan aquifer, USA.

    PubMed

    Bhat, Shirish; Motz, Louis H; Pathak, Chandra; Kuebler, Laura

    2015-01-01

    A geostatistical method was applied to optimize an existing groundwater-level monitoring network in the Upper Floridan aquifer for the South Florida Water Management District in the southeastern United States. Analyses were performed to determine suitable numbers and locations of monitoring wells that will provide equivalent or better quality groundwater-level data compared to an existing monitoring network. Ambient, unadjusted groundwater heads were expressed as salinity-adjusted heads based on the density of freshwater, well screen elevations, and temperature-dependent saline groundwater density. The optimization of the numbers and locations of monitoring wells is based on a pre-defined groundwater-level prediction error. The newly developed network combines an existing network with the addition of new wells that will result in a spatial distribution of groundwater monitoring wells that better defines the regional potentiometric surface of the Upper Floridan aquifer in the study area. The network yields groundwater-level predictions that differ significantly from those produced using the existing network. The newly designed network will reduce the mean prediction standard error by 43% compared to the existing network. The adoption of a hexagonal grid network for the South Florida Water Management District is recommended to achieve both a uniform level of information about groundwater levels and the minimum required accuracy. It is customary to install more monitoring wells for observing groundwater levels and groundwater quality as groundwater development progresses. However, budget constraints often force water managers to implement cost-effective monitoring networks. In this regard, this study provides guidelines to water managers concerned with groundwater planning and monitoring.
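The salinity adjustment mentioned above can be made concrete with the standard equivalent-freshwater-head formula from variable-density hydrogeology; the District's exact procedure may differ, and the numbers below are illustrative only.

```python
# Equivalent freshwater head: hf = z + (rho_s / rho_f) * (hp - z),
# where hp is the measured point-water head and z the elevation of
# the measurement point (screen midpoint). This makes heads measured
# in saline parts of the aquifer comparable to freshwater heads.
def freshwater_head(hp, z, rho_saline, rho_fresh=1000.0):
    """hp, z in meters; densities in kg/m^3."""
    return z + (rho_saline / rho_fresh) * (hp - z)

# Illustrative: head of 15 m at a screen at -50 m in 1025 kg/m^3 water.
hf = freshwater_head(15.0, -50.0, 1025.0)
```

With freshwater density the formula reduces to the measured head itself, which is a quick sanity check on any implementation.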

  20. Optimization of deformation monitoring networks using finite element strain analysis

    NASA Astrophysics Data System (ADS)

    Alizadeh-Khameneh, M. Amin; Eshagh, Mehdi; Jensen, Anna B. O.

    2018-04-01

    An optimal design of a geodetic network can fulfill the requested precision and reliability of the network and decrease the expenses of its execution by removing unnecessary observations. The role of optimal design is highlighted in deformation monitoring networks due to the repeatability of these networks. The core design problem is how to define the precision and reliability criteria. This paper proposes a solution where the precision criterion is defined based on the precision of the deformation parameters, i.e., the precision of strain and differential rotations. A strain analysis can be performed to obtain information about the possible deformation of a deformable object. In this study, we split an area into a number of three-dimensional finite elements with the help of the Delaunay triangulation and performed the strain analysis on each element. According to the obtained precision of the deformation parameters in each element, the precision criterion for displacement detection at each network point is then determined. The developed criterion is implemented to optimize the observations from the Global Positioning System (GPS) in the Skåne monitoring network in Sweden. The network was established in 1989 and straddles the Tornquist zone, one of the most active faults in southern Sweden. The numerical results show that 17 out of the 21 possible GPS baseline observations are sufficient to detect a minimum displacement of 3 mm at each network point.
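The per-element strain computation can be sketched for the planar case with the standard constant-strain-triangle formulas. This is a 2-D simplification of the paper's 3-D finite elements, and the displacement field below is invented to verify the recovery of a uniform strain.

```python
def triangle_strain(xy, uv):
    """xy: three (x, y) vertices; uv: their (u, v) displacements.
    Returns (exx, eyy, gamma_xy, rotation) for a linear
    (constant-strain) triangle."""
    (x1, y1), (x2, y2), (x3, y3) = xy
    twoA = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    b = (y2 - y3, y3 - y1, y1 - y2)   # shape-function x-derivatives * 2A
    c = (x3 - x2, x1 - x3, x2 - x1)   # shape-function y-derivatives * 2A
    du_dx = sum(bi * u for bi, (u, _) in zip(b, uv)) / twoA
    du_dy = sum(ci * u for ci, (u, _) in zip(c, uv)) / twoA
    dv_dx = sum(bi * v for bi, (_, v) in zip(b, uv)) / twoA
    dv_dy = sum(ci * v for ci, (_, v) in zip(c, uv)) / twoA
    return du_dx, dv_dy, du_dy + dv_dx, 0.5 * (dv_dx - du_dy)

# A uniform field u = 0.001*x, v = -0.0005*y should be recovered exactly.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
disp = [(0.001 * x, -0.0005 * y) for x, y in tri]
exx, eyy, gxy, rot = triangle_strain(tri, disp)
```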

  1. A proposal of optimal sampling design using a modularity strategy

    NASA Astrophysics Data System (ADS)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is called sampling design, and it has historically been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been addressed by considering optimal network segmentation and the modularity index within a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
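The classic (Newman) modularity on which the WDN-oriented and sampling-oriented indices build can be computed directly from an edge list. The toy pipe graph below is invented, and the paper's indices add WDN-specific pipe weights on top of this basic idea.

```python
from collections import defaultdict

def modularity(edges, community):
    """Newman modularity Q = sum_c [m_c/m - (d_c/(2m))^2] for an
    unweighted, undirected graph. edges: (u, v) pairs; community:
    mapping node -> community label."""
    m = len(edges)
    internal = defaultdict(int)    # intra-community edge counts m_c
    degree_sum = defaultdict(int)  # total degree per community d_c
    for u, v in edges:
        degree_sum[community[u]] += 1
        degree_sum[community[v]] += 1
        if community[u] == community[v]:
            internal[community[u]] += 1
    return sum(internal[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles of "pipes" joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
parts = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, parts)
```

A segmentation that cuts the bridge edge scores high modularity, which is exactly why conceptual cuts are good candidate locations for gates or meters.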

  2. Putting Man in the Machine: Exploiting Expertise to Enhance Multiobjective Design of Water Supply Monitoring Network

    NASA Astrophysics Data System (ADS)

    Bode, F.; Nowak, W.; Reed, P. M.; Reuschen, S.

    2016-12-01

    Drinking-water well catchments need effective early-warning monitoring networks. Groundwater supply wells in complex urban environments are in close proximity to a myriad of potential industrial pollutant sources that could irreversibly damage their source aquifers. These urban environments pose fiscal and physical challenges to designing monitoring networks. Ideal early-warning monitoring networks would satisfy three objectives: to detect (1) all potential contaminations within the catchment, (2) as early as possible before they reach the pumping wells, (3) while minimizing costs. Obviously, the ideal case is nonexistent, so we search for tradeoffs using multiobjective optimization. The challenge of this optimization problem is the high number of potential monitoring-well positions (the search space) and the non-linearity of the underlying groundwater flow-and-transport problem. This study evaluates (1) different ways to restrict the search space efficiently, with and without expert knowledge, (2) different methods to represent the search space during the optimization, and (3) the influence of incremental increases in uncertainty in the system. Conductivity, regional flow direction, and potential source locations are explored as key uncertainties. We show the need for and the benefit of our methods by comparing optimized monitoring networks for different uncertainty levels with networks that seek to effectively exploit expert knowledge. The study's main contributions are the different approaches to restricting and representing the search space. The restriction algorithms are based on a point-wise comparison of decision elements of the search space. The representation of the search space can be either binary or continuous; for both cases, the search space must be adjusted properly.
Our results show the benefits and drawbacks of binary versus continuous search space representations and the high potential of automated search space restriction algorithms for high-dimensional, highly non-linear optimization problems.

  3. Extending Resolution of Fault Slip With Geodetic Networks Through Optimal Network Design

    NASA Astrophysics Data System (ADS)

    Sathiakumar, Sharadha; Barbot, Sylvain Denis; Agram, Piyush

    2017-12-01

    Geodetic networks consisting of high precision and high rate Global Navigation Satellite Systems (GNSS) stations continuously monitor seismically active regions of the world. These networks measure surface displacements and the amount of geodetic strain accumulated in the region and give insight into the seismic potential. SuGar (Sumatra GPS Array) in Sumatra, GEONET (GNSS Earth Observation Network System) in Japan, and PBO (Plate Boundary Observatory) in California are some examples of established networks around the world that are constantly expanding with the addition of new stations to improve the quality of measurements. However, installing new stations to existing networks is tedious and expensive. Therefore, it is important to choose suitable locations for new stations to increase the precision obtained in measuring the geophysical parameters of interest. Here we describe a methodology to design optimal geodetic networks that augment the existing system and use it to investigate seismo-tectonics at convergent and transform boundaries considering land-based and seafloor geodesy. The proposed network design optimization would be pivotal to better understand seismic and tsunami hazards around the world. Land-based and seafloor networks can monitor fault slip around subduction zones with significant resolution, but transform faults are more challenging to monitor due to their near-vertical geometry.

  4. A geostatistical methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer.

    PubMed

    Júnez-Ferreira, H E; Herrera, G S

    2013-04-01

    This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. A sequential optimization method is used that, at each step, selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer, with the objective of selecting, from a set of monitoring positions and times, those that minimize the spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program, which consists of 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
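The static-Kalman-plus-sequential-selection idea can be sketched on a small covariance matrix: each scalar measurement updates the head covariance, and the next monitoring point is the one that most reduces the total variance. The 3×3 covariance and noise level below are invented; the real method operates on a geostatistically derived space-time covariance.

```python
def kalman_update(P, k, r):
    """Covariance update for a direct measurement of component k with
    noise variance r: P' = P - (P e_k)(P e_k)^T / (P_kk + r)."""
    n = len(P)
    col = [P[i][k] for i in range(n)]
    denom = P[k][k] + r
    return [[P[i][j] - col[i] * col[j] / denom for j in range(n)]
            for i in range(n)]

def trace(P):
    return sum(P[i][i] for i in range(len(P)))

def sequential_design(P, r, n_picks):
    # At each step, pick the point whose measurement minimizes the
    # resulting total variance, then commit the update.
    picks = []
    for _ in range(n_picks):
        k = min(range(len(P)), key=lambda j: trace(kalman_update(P, j, r)))
        picks.append(k)
        P = kalman_update(P, k, r)
    return picks, P

# Invented head covariance at three wells; well 0 correlates with both
# of the others, so it is the most informative first choice.
P0 = [[4.0, 2.0, 1.0], [2.0, 3.0, 0.5], [1.0, 0.5, 2.0]]
picks, P1 = sequential_design(P0, r=1.0, n_picks=2)
```

Points whose measurements stop reducing the variance appreciably are, in the paper's terms, redundant.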

  5. Evaluation of groundwater levels in the South Platte River alluvial aquifer, Colorado, 1953-2012, and design of initial well networks for monitoring groundwater levels

    USGS Publications Warehouse

    Wellman, Tristan

    2015-01-01

    A network of candidate monitoring wells was proposed to initiate a regional monitoring program. Consistent monitoring and analysis of groundwater levels will be needed for informed decisions to optimize beneficial use of water and to limit high groundwater levels in susceptible areas. Finalization of the network will require future field reconnaissance to assess local site conditions and discussions with State authorities.

  6. OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS

    EPA Science Inventory

    The Optimal Well Locator ( OWL) program was designed and developed by USEPA to be a screening tool to evaluate and optimize the placement of wells in long term monitoring networks at small sites. The first objective of the OWL program is to allow the user to visualize the change ...

  7. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

    This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and the results of a calibrated numerical water quality simulation model. With reference to the value of information theory, the water quality of every checkpoint, with a specific prior probability, differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. In the next step, the stations with maximum VOI are selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies are partially different, in the next step they are combined using a weighting method. Finally, the optimal sampling interval and the locations of the WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in southwestern Iran.
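The Bayes-updating step behind VOI can be illustrated with a minimal two-state decision problem (pollutant present/absent, act/ignore). The prior, sensor error rates, and losses below are invented; VOI is the drop in expected loss that taking the sample buys.

```python
def voi(prior, sens, spec, loss_treat, loss_miss):
    """Value of sampling one checkpoint. prior: P(polluted);
    sens/spec: sampling sensitivity and specificity; loss_treat: cost
    of acting; loss_miss: cost of ignoring a real pollution event."""
    # Expected loss of the best action without any sample.
    no_info = min(loss_treat, prior * loss_miss)
    # Posterior for each sample outcome via Bayes' theorem.
    p_pos = sens * prior + (1 - spec) * (1 - prior)
    post_pos = sens * prior / p_pos
    post_neg = (1 - sens) * prior / (1 - p_pos)
    # Outcome-weighted expected loss of the best posterior action.
    with_info = (p_pos * min(loss_treat, post_pos * loss_miss)
                 + (1 - p_pos) * min(loss_treat, post_neg * loss_miss))
    return no_info - with_info

v = voi(prior=0.2, sens=0.9, spec=0.95, loss_treat=10.0, loss_miss=100.0)
```

A sample that cannot change the optimal action has zero VOI, which is why stations are ranked by it: budget goes where information actually alters decisions.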

  8. Locations of Sampling Stations for Water Quality Monitoring in Water Distribution Networks.

    PubMed

    Rathi, Shweta; Gupta, Rajesh

    2014-04-01

    Water quality needs to be monitored in water distribution networks (WDNs) at salient locations to assure the safe quality of the water supplied to consumers. Such monitoring stations (MSs) provide warning against any accidental contamination. Various objectives, such as demand coverage, time to detection, volume of water contaminated before detection, extent of contamination, expected population affected prior to detection, and detection likelihood, have been considered independently or jointly in determining the optimal number and locations of MSs in WDNs. "Demand coverage," defined as the percentage of network demand monitored by a particular monitoring station, is a simple measure for locating MSs. Several methods based on the formulation of a coverage matrix using a pre-specified coverage criterion and optimization have been suggested. The coverage criterion is defined as the minimum percentage of the total flow received at a monitoring station that must have passed through an upstream node for that node to be counted as covered by the station. The number of monitoring stations increases with the value of the coverage criterion; thus, the design of the monitoring stations becomes subjective. A simple methodology is proposed herein which iteratively selects MSs in priority order to achieve a targeted demand coverage. The proposed methodology provided the same number and locations of MSs for an illustrative network as an optimization method did. Further, the proposed method is simple and avoids the subjectivity that could arise from the choice of coverage criterion. The application of the methodology is also shown on the WDN of the Dharampeth zone (Nagpur city WDN in Maharashtra, India), which has 285 nodes and 367 pipes.
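A priority-wise iterative selection of this kind can be sketched as a greedy weighted set cover: at each step, place the station that adds the most uncovered demand, and stop once the targeted demand coverage is reached. The node demands and coverage sets below are invented; in practice the coverage sets come from the coverage matrix built with the network's flow directions.

```python
def select_stations(coverage, demand, target):
    """coverage: station -> set of nodes it monitors; demand: node ->
    demand; target: required fraction of total demand covered."""
    total = sum(demand.values())
    covered, stations = set(), []
    while sum(demand[n] for n in covered) < target * total:
        gain = lambda s: sum(demand[n] for n in coverage[s] - covered)
        best = max(coverage, key=gain)
        if gain(best) == 0:
            break  # target unreachable with these candidates
        stations.append(best)
        covered |= coverage[best]
    return stations, covered

coverage = {"A": {1, 2}, "B": {3}, "C": {3, 4}}
demand = {1: 10.0, 2: 20.0, 3: 30.0, 4: 40.0}
stations, covered = select_stations(coverage, demand, target=0.9)
```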

  9. Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness

    NASA Astrophysics Data System (ADS)

    Julich, R. J.

    2004-05-01

    The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells to maximize important data collection and to minimize the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: (1) what is the optimal design strategy for a genetic algorithm to optimize this problem domain; (2) how consistent are the solutions over several optimization runs; and (3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches.
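The two-part fitness can be sketched on a toy problem. Here the subset uncertainty is a made-up proxy (the root of the "unexplained" variance left by omitted wells) standing in for the paper's linear statistical inference on the Death Valley model, and an exhaustive search over a five-well candidate pool stands in for the GA itself.

```python
from itertools import product
from math import sqrt

# Invented per-well variance contributions; omitting a well leaves its
# contribution unexplained.
VAR = [5.0, 1.0, 0.5, 0.2, 0.1]

def uncertainty(mask):
    return sqrt(sum(v for keep, v in zip(mask, VAR) if not keep))

def fitness(mask, penalty=0.4):
    # Part 1: stay close to the all-observations uncertainty (0 here).
    # Part 2: penalize network size, a stand-in for monitoring cost.
    return (-abs(uncertainty(mask) - uncertainty((1,) * len(VAR)))
            - penalty * sum(mask))

# Exhaustive search over all 2^5 well subsets (a GA would search this
# space stochastically for larger pools).
best = max(product((0, 1), repeat=len(VAR)), key=fitness)
```

The optimum keeps only the wells whose variance contribution outweighs the per-well penalty, which is precisely the accuracy-versus-cost tradeoff the GA fitness encodes.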

  10. Research Trends in Wireless Visual Sensor Networks When Exploiting Prioritization

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo

    2015-01-01

    The development of wireless sensor networks for control and monitoring functions has created a vibrant investigation scenario, where many critical topics, such as communication efficiency and energy consumption, have been investigated in the past few years. However, when sensors are endowed with low-power cameras for visual monitoring, a new scope of challenges is raised, demanding new research efforts. In this context, the resource-constrained nature of sensor nodes has demanded the use of prioritization approaches as a practical mechanism to lower the transmission burden of visual data over wireless sensor networks. Many works in recent years have considered local-level prioritization parameters to enhance the overall performance of those networks, but global-level policies can potentially achieve better results in terms of visual monitoring efficiency. In this paper, we make a broad review of some recent works on priority-based optimizations in wireless visual sensor networks. Moreover, we envisage some research trends when exploiting prioritization, potentially fostering the development of promising optimizations for wireless sensor networks composed of visual sensors. PMID:25599425

  11. Perceptual tools for quality-aware video networks

    NASA Astrophysics Data System (ADS)

    Bovik, A. C.

    2014-01-01

    Monitoring and controlling the quality of the viewing experience of videos transmitted over increasingly congested networks (especially wireless networks) is a pressing problem, owing to rapid advances in video-centric mobile communication and display devices that are straining the capacity of the network infrastructure. New developments in automatic perceptual video quality models offer tools that have the potential to be used to perceptually optimize wireless video, leading to more efficient video data delivery and better received quality. In this talk I will review key perceptual principles that are, or could be, used to create effective video quality prediction models, and leading quality prediction models that utilize these principles. The goal is to be able to monitor and perceptually optimize video networks by making them "quality-aware."

  12. Designing optimal greenhouse gas observing networks that consider performance and cost

    DOE PAGES

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.; ...

    2015-06-16

    Emission rates of greenhouse gases (GHGs) entering into the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may be in conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex, and clearly shows the tradeoffs between performance and cost, and the diminishing returns in trading one for the other. Without difficulty, our method can be extended to design optimal networks to monitor two or more GHGs with different emissions patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
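The Pareto-frontier extraction at the heart of such a multiobjective search can be sketched directly: a network configuration is kept if no other configuration is at least as accurate and at least as cheap, with at least one inequality strict. The (accuracy, cost) pairs below are invented.

```python
def pareto_front(points):
    """points: (accuracy, cost) pairs; maximize accuracy, minimize
    cost. Returns the non-dominated subset (the Pareto frontier)."""
    def dominated(p):
        return any(q[0] >= p[0] and q[1] <= p[1]
                   and (q[0] > p[0] or q[1] < p[1]) for q in points)
    return [p for p in points if not dominated(p)]

# Hypothetical candidate networks evaluated on both objectives.
nets = [(0.90, 10.0), (0.80, 5.0), (0.85, 7.0), (0.70, 6.0), (0.80, 8.0)]
front = pareto_front(nets)
```

Plotting the surviving points cost-versus-accuracy shows exactly the convex tradeoff curve and diminishing returns described in the abstract.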

  13. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

    Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this method requires many cameras, which leads to large overlapping monitoring areas and higher costs, resulting in waste. In order to reduce the monitoring cost and address the difficult problem of finding, identifying, and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation experiment results demonstrate that the proposed method has good performance.

  14. Artificial Neural Networks Applications: from Aircraft Design Optimization to Orbiting Spacecraft On-board Environment Monitoring

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Lin, Paul P.

    2002-01-01

    This paper reviews some recent applications of artificial neural networks taken from various works performed by the authors over the last four years at the NASA Glenn Research Center. The paper focuses mainly on two areas. The first is the application of artificial neural networks in the design and optimization of aircraft/engine propulsion systems to shorten the overall design cycle; out of that specific application, a generic design tool was developed that can be used for most design optimization processes. The second is the application of artificial neural networks in monitoring the microgravity quality onboard the International Space Station, using on-board accelerometers for data acquisition. These two different applications are reviewed to show the broad applicability of artificial intelligence in various disciplines. The intent of this paper is not to give in-depth details of these two applications, but to show the need to combine different artificial intelligence techniques or algorithms in order to design an optimized or versatile system.

  15. OPTIMAL WELL LOCATOR (OWL): A SCREENING TOOL FOR EVALUATING LOCATIONS OF MONITORING WELLS: USER'S GUIDE VERSION 1.2

    EPA Science Inventory

    The Optimal Well Locator ( OWL) program was designed and developed by USEPA to be a screening tool to evaluate and optimize the placement of wells in long term monitoring networks at small sites. The first objective of the OWL program is to allow the user to visualize the change ...

  16. Optimization of monitoring networks based on uncertainty quantification of model predictions of contaminant transport

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; Harp, D.

    2010-12-01

    The process of decision making to protect groundwater resources requires a detailed estimation of the uncertainties in model predictions. Various uncertainties associated with modeling a natural system contribute to the uncertainties in the model predictions, such as (1) measurement and computational errors, (2) uncertainties in the conceptual model and model-parameter estimates, and (3) simplifications in the model setup and in the numerical representation of governing processes. Due to this combination of factors, the sources of predictive uncertainty are generally difficult to quantify individually. Decision support related to the optimal design of monitoring networks requires (1) detailed analyses of the existing uncertainties in model predictions of groundwater flow and contaminant transport, and (2) optimization of the proposed monitoring network locations in terms of their efficiency in detecting contaminants and providing early warning. We apply existing and newly proposed methods to quantify predictive uncertainties and to optimize well locations. An important aspect of the analysis is the application of a newly developed optimization technique based on the coupling of Particle Swarm and Levenberg-Marquardt optimization methods, which proved to be robust and computationally efficient. These techniques and algorithms are bundled in a software package called MADS. MADS (Model Analyses for Decision Support) is an object-oriented code that is capable of performing various types of model analyses and supporting model-based decision making. The code can be executed under different computational modes, which include (1) sensitivity analyses (global and local), (2) Monte Carlo analysis, (3) model calibration, (4) parameter estimation, (5) uncertainty quantification, and (6) model selection.
The code can be externally coupled with any existing model simulator through integrated modules that read/write input and output files using a set of template and instruction files (consistent with the PEST I/O protocol). MADS can also be internally coupled with a series of built-in analytical simulators. MADS provides functionality to work directly with existing control files developed for the code PEST (Doherty 2009). To perform the computational modes mentioned above, the code utilizes (1) advanced Latin-Hypercube sampling techniques (including Improved Distributed Sampling), (2) various gradient-based Levenberg-Marquardt optimization methods, (3) advanced global optimization methods (including Particle Swarm Optimization), and (4) a selection of alternative objective functions. The code has been successfully applied to perform various model analyses related to environmental management of real contamination sites. Examples include source identification problems, quantification of uncertainty, model calibration, and optimization of monitoring networks. The methodology and software codes are demonstrated using synthetic and real case studies where monitoring networks are optimized taking into account the uncertainty in model predictions of contaminant transport.
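Of the global optimizers mentioned, Particle Swarm Optimization is compact enough to sketch in full. This is textbook PSO applied to a toy quadratic "calibration" objective, not the MADS coupling with Levenberg-Marquardt; all parameters and the objective are invented for illustration.

```python
import random

def pso(f, lo, hi, dim=2, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over [lo, hi]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]          # each particle's best position
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia plus cognitive and social attraction terms.
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pval[i]:
                pbest[i], pval[i] = X[i][:], val
                if val < gval:
                    gbest, gval = X[i][:], val
    return gbest, gval

# Toy objective: a shifted sphere with its optimum at (1, -2).
sphere = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best, best_val = pso(sphere, -5.0, 5.0)
```

Gradient-based methods such as Levenberg-Marquardt refine quickly near a minimum, which is why coupling them with a global searcher like PSO is attractive for rough, multi-modal calibration surfaces.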

  17. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers.

    PubMed

    ZoroufchiBenis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza

    2015-01-01

    Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of the availability of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. A cluster of contiguous grids that exceed a threshold value was used to calculate the station dosage. The best configuration of the AQMN was selected based on the ratio of a station's dosage to the total dosage in the network. Six monitoring stations were needed to detect the pollutant concentrations around the study area for estimating the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the locations of monitoring stations. The optimal AQMN enables authorities to implement an effective program of air quality management for protecting human health.

  18. Optimal Design of Air Quality Monitoring Network and its Application in an Oil Refinery Plant: An Approach to Keep Health Status of Workers

    PubMed Central

    ZoroufchiBenis, Khaled; Fatehifar, Esmaeil; Ahmadi, Javad; Rouhi, Alireza

    2015-01-01

    Background: Industrial air pollution is a growing challenge to human health, especially in developing countries, where there is no systematic monitoring of air pollution. Given the importance of valid information on population exposure to air pollutants, it is important to design an optimal Air Quality Monitoring Network (AQMN) for assessing population exposure to air pollution and predicting the magnitude of the health risks to the population. Methods: A multi-pollutant method (implemented as a MATLAB program) was explored for configuring an AQMN to detect the highest level of pollution around an oil refinery plant. The method ranks potential monitoring sites (grids) according to their ability to represent the ambient concentration. Clusters of contiguous grids that exceed a threshold value were used to calculate the station dosage, and the best AQMN configuration was selected based on the ratio of a station's dosage to the total dosage in the network. Results: Six monitoring stations were needed to detect the pollutant concentrations around the study area and estimate the level and distribution of exposure in the population, with a total network efficiency of about 99%. An analysis of the design procedure showed that wind regimes have the greatest effect on the location of monitoring stations. Conclusion: The optimal AQMN enables authorities to implement an effective air quality management program for protecting human health. PMID:26933646

  19. A New Wavelength Optimization and Energy-Saving Scheme Based on Network Coding in Software-Defined WDM-PON Networks

    NASA Astrophysics Data System (ADS)

    Ren, Danping; Wu, Shanshan; Zhang, Lijing

    2016-09-01

    In view of the global control and flexible monitoring capabilities of software-defined networking (SDN), we propose a new optical access network architecture for Wavelength Division Multiplexing-Passive Optical Network (WDM-PON) systems based on SDN. Network coding (NC) is also applied in this architecture to enhance the utilization of wavelength resources and reduce light source costs. Simulation results show that this scheme can optimize the throughput of the WDM-PON network and greatly reduce the system time delay and energy consumption.
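    The basic benefit of network coding, one coded transmission serving several receivers, can be shown with the classic XOR example (a generic illustration, not the specific WDM-PON scheme of the paper):

```python
def xor_bytes(a, b):
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# The OLT broadcasts p1 XOR p2 in a single coded frame; a receiver that
# already holds its own packet recovers the other one by XORing again.
p1, p2 = b"ONU1DATA", b"ONU2DATA"
coded = xor_bytes(p1, p2)
recovered_p2 = xor_bytes(coded, p1)
```

    One coded frame thus replaces two separate transmissions, which is where the wavelength and energy savings come from.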

  20. Network Optimization for Induced Seismicity Monitoring in Urban Areas

    NASA Astrophysics Data System (ADS)

    Kraft, T.; Husen, S.; Wiemer, S.

    2012-12-01

    With the global challenge to satisfy an increasing demand for energy, geological energy technologies have received growing attention and have been initiated in or close to urban areas in the past several years. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, now pose a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential to the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring at an early stage of a project, because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows us to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts.
    The algorithm is based on D-optimal experimental design, which aims to minimize the error ellipsoid of the linearized location problem. Optimization for additional criteria (e.g., focal mechanism determination or installation costs) can be included. We consider a 3D seismic velocity model, a European ambient seismic noise model derived from high-resolution land-use data, and existing seismic stations in the vicinity of the geotechnical site. Using this algorithm we are able to find the optimal geometry and size of the seismic monitoring network that meets predefined, application-oriented performance criteria. In this talk we will focus on optimal network geometries for deep geothermal projects of the EGS and hydrothermal type, and discuss the requirements for basic seismic surveillance and for high-resolution reservoir monitoring and characterization.
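    The D-optimal criterion can be illustrated in miniature: for a homogeneous half-space, choose the station subset maximizing det(GᵀG), where G is the Jacobian of travel times with respect to the source coordinates (an illustrative sketch; coordinates, velocity and the exhaustive search are simplifications of the authors' method, and the origin-time parameter is ignored):

```python
import itertools
import math

def jacobian_row(station, source, v=3.5):
    """Partial derivatives of the travel time with respect to the source
    coordinates (x, y, z) in a constant-velocity medium."""
    d = math.dist(station, source)
    return [(s - q) / (d * v) for s, q in zip(source, station)]

def det3(m):
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def d_optimal_network(candidates, source, k):
    """Exhaustively pick the k-station subset maximizing det(G^T G), i.e.
    minimizing the volume of the linearized location error ellipsoid."""
    best, best_det = None, -1.0
    for subset in itertools.combinations(candidates, k):
        G = [jacobian_row(st, source) for st in subset]
        GtG = [[sum(row[i] * row[j] for row in G) for j in range(3)]
               for i in range(3)]
        d = det3(GtG)
        if d > best_det:
            best, best_det = subset, d
    return best, best_det

# four azimuthally distributed surface stations plus one redundant
# station near the network center; source at 3 km depth
stations = [(-2, 0, 0), (2, 0, 0), (0, -2, 0), (0, 2, 0), (0.1, 0, 0)]
net, det = d_optimal_network(stations, source=(0.0, 0.0, 3.0), k=4)
```

    The four azimuthally distributed stations win over any subset that includes the near-center station, since redundant geometry inflates the error ellipsoid.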

  1. Optimizing Seismic Monitoring Networks for EGS and Conventional Geothermal Projects

    NASA Astrophysics Data System (ADS)

    Kraft, Toni; Herrmann, Marcus; Bethmann, Falko; Stefan, Wiemer

    2013-04-01

    In the past several years, geological energy technologies have received growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, now pose a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential for the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring at an early stage of a project, because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. We have developed an optimization algorithm for seismic monitoring networks in urban areas that allows us to design and evaluate seismic network geometries for arbitrary geotechnical operation layouts.
    The algorithm is based on D-optimal experimental design, which aims to minimize the error ellipsoid of the linearized location problem. Optimization for additional criteria (e.g., focal mechanism determination or installation costs) can be included. We consider a 3D seismic velocity model, a European ambient seismic noise model derived from high-resolution land-use data, and existing seismic stations in the vicinity of the geotechnical site. Additionally, we account for the attenuation of the seismic signal with travel time and for the variation of ambient seismic noise with depth, so that borehole station networks are treated correctly. Using this algorithm we are able to find the optimal geometry and size of the seismic monitoring network that meets predefined, application-oriented performance criteria. This talk will focus on optimal network geometries for deep geothermal projects of the EGS and hydrothermal type, and discuss the requirements for basic seismic surveillance and high-resolution reservoir monitoring and characterization.

  2. Modeling the BOD of Danube River in Serbia using spatial, temporal, and input variables optimized artificial neural network models.

    PubMed

    Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V

    2016-05-01

    This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
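    A general regression neural network of the kind used here is, at its core, a Gaussian-kernel weighted average of the training targets (the Nadaraya-Watson estimator); a minimal sketch with toy data (not the Danube data or the actual GRNN-1/GRNN-2 models):

```python
import math

def grnn_predict(x, train_X, train_y, sigma=1.0):
    """GRNN prediction: each training sample votes for its target with a
    Gaussian weight that decays with distance from the query point."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2 * sigma ** 2)) for xi in train_X]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# toy 1-D example: a BOD-like target increasing with a single input
X = [(0.0,), (1.0,), (2.0,), (3.0,)]
y = [2.0, 3.0, 5.0, 8.0]
pred = grnn_predict((1.5,), X, y, sigma=0.5)
```

    The prediction is always a convex combination of the training targets, so it interpolates smoothly between observed BOD values; the smoothing parameter sigma plays the role tuned during model optimization.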

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lucas, D. D.; Yver Kwok, C.; Cameron-Smith, P.

    Emission rates of greenhouse gases (GHGs) entering the atmosphere can be inferred using mathematical inverse approaches that combine observations from a network of stations with forward atmospheric transport models. Some locations for collecting observations are better than others for constraining GHG emissions through the inversion, but the best locations for the inversion may be inaccessible or limited by economic and other non-scientific factors. We present a method to design an optimal GHG observing network in the presence of multiple objectives that may conflict with each other. As a demonstration, we use our method to design a prototype network of six stations to monitor summertime emissions in California of the potent GHG 1,1,1,2-tetrafluoroethane (CH2FCF3, HFC-134a). We use a multiobjective genetic algorithm to evolve network configurations that seek to jointly maximize the scientific accuracy of the inferred HFC-134a emissions and minimize the associated costs of making the measurements. The genetic algorithm effectively determines a set of "optimal" observing networks for HFC-134a that satisfy both objectives (i.e., the Pareto frontier). The Pareto frontier is convex and clearly shows the trade-offs between performance and cost, and the diminishing returns in trading one for the other. Our method can readily be extended to design optimal networks to monitor two or more GHGs with different emission patterns, or to incorporate other objectives and constraints that are important in the practical design of atmospheric monitoring networks.
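    The Pareto frontier that the genetic algorithm approximates can be illustrated with a plain dominance filter over candidate network configurations (hypothetical accuracy/cost values):

```python
def dominates(q, p):
    """q dominates p if q is at least as accurate and at most as costly,
    and strictly better in one of the two objectives."""
    return (q[0] >= p[0] and q[1] <= p[1]
            and (q[0] > p[0] or q[1] < p[1]))

def pareto_front(configs):
    """Keep only the non-dominated (accuracy, cost) configurations."""
    return [p for p in configs
            if not any(dominates(q, p) for q in configs)]

# (emission-estimate accuracy, measurement cost) per candidate network
configs = [(0.90, 100), (0.80, 60), (0.85, 90), (0.80, 70), (0.95, 200)]
front = pareto_front(configs)
```

    The configuration (0.80, 70) is dominated: the same accuracy is available for less money, which is exactly the trade-off structure the frontier exposes to the network designer.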

  4. Designing optimized multi-species monitoring networks to detect range shifts driven by climate change: a case study with bats in the North of Portugal.

    PubMed

    Amorim, Francisco; Carvalho, Sílvia B; Honrado, João; Rebelo, Hugo

    2014-01-01

    Here we develop a framework to design multi-species monitoring networks using species distribution models and conservation planning tools to optimize the location of monitoring stations for detecting potential range shifts driven by climate change. We focused on seven bat species in Northern Portugal (Western Europe). Maximum entropy modelling was used to predict the likely occurrence of those species under present and future climatic conditions. By comparing present and future predicted distributions, we identified areas where each species is likely to gain, lose or maintain suitable climatic space. We then used a decision support tool (the Marxan software) to design three optimized monitoring networks considering: a) changes in species' likely occurrence, b) species conservation status, and c) level of volunteer commitment. For present climatic conditions, species distribution models revealed that areas suitable for most species occur in the north-eastern part of the region; however, areas predicted to become climatically suitable in the future shifted towards the west. The three simulated monitoring networks, adaptable to an unpredictable level of volunteer commitment, included 28, 54 and 110 sampling locations, respectively, distributed across the study area and covering the full range of conditions under which species range shifts may occur. Our results show that our framework outperforms the traditional approach, which considers only current species ranges, in allocating monitoring stations across different categories of predicted shifts in species distributions. This study presents a straightforward framework to design monitoring schemes aimed specifically at testing hypotheses about where and when species ranges may shift with climatic changes, while also ensuring surveillance of general population trends.
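    Marxan itself uses simulated annealing, but the underlying site-selection idea, covering every species-by-shift-category target with few stations, can be sketched as a greedy set cover (hypothetical sites and targets):

```python
def greedy_cover(sites, targets):
    """Greedy set cover: repeatedly add the site covering the most
    still-unmet (species, shift-category) targets."""
    unmet, chosen = set(targets), []
    while unmet:
        best = max(sites, key=lambda s: len(sites[s] & unmet))
        if not sites[best] & unmet:
            break  # remaining targets cannot be covered by any site
        chosen.append(best)
        unmet -= sites[best]
    return chosen

# hypothetical sampling locations and the targets each one covers
sites = {
    "s1": {("bat1", "gain"), ("bat2", "lose")},
    "s2": {("bat1", "lose")},
    "s3": {("bat1", "gain"), ("bat1", "lose"), ("bat2", "maintain")},
}
targets = {("bat1", "gain"), ("bat1", "lose"),
           ("bat2", "lose"), ("bat2", "maintain")}
picked = greedy_cover(sites, targets)
```

    Two sites suffice here because the greedy step prefers the location that covers the most distinct shift categories at once; Marxan's annealing explores the same covering trade-off with cost penalties.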

  5. How to Decide? Multi-Objective Early-Warning Monitoring Networks for Water Suppliers

    NASA Astrophysics Data System (ADS)

    Bode, Felix; Loschko, Matthias; Nowak, Wolfgang

    2015-04-01

    Groundwater is a resource for drinking water and hence needs to be protected from contamination. However, many well catchments include an inventory of known and unknown risk sources, which cannot be eliminated, especially in urban regions. As a matter of risk control, all these risk sources should be monitored. One-to-one monitoring of each risk source would lead to a cost explosion and is even impossible for unknown risk sources. However, smart optimization concepts can help to find promising low-cost monitoring network designs. In this work we develop a concept to plan monitoring networks using multi-objective optimization. Our objectives are to maximize the probability of detecting all contaminations, to maximize the early-warning time before detected contaminations reach the drinking water well, and to minimize the installation and operating costs of the monitoring network. Using multi-objective optimization, we avoid having to weight these objectives into a single objective function. The objectives are clearly competing, and it is impossible to know their mutual trade-offs beforehand: each catchment differs in many respects, and it is hardly possible to transfer knowledge between geological formations and risk inventories. To make our optimization results more specific to the type of risk inventory in different catchments, we prioritize all known risk sources. Because the required data are lacking, quantitative risk ranking is impossible; instead, we use a qualitative risk ranking to prioritize the known risk sources for monitoring. Additionally, we allow for the existence of unknown risk sources that are totally uncertain in location and inherent risk. They can therefore neither be located nor ranked; instead, we represent them by a virtual line of risk sources surrounding the production well.
    We classify risk sources into four categories: severe, medium and tolerable for known risk sources, and an extra category for the unknown ones. Early-warning time and detection probability thus become individual objectives for each risk class, so decision makers can identify monitoring networks valid for controlling the top risk sources, and evaluate the capabilities (or search for least-cost upgrades) to also cover moderate, tolerable and unknown risk sources. Monitoring networks that are valid for the remaining risk also cover all other risk sources, but only with relatively poor early-warning times. The data provided to the optimization algorithm are calculated in a preprocessing step by a flow and transport model, which simulates which potential contaminant plumes from the risk sources would be detectable, where and when, by all candidate positions for monitoring wells. Uncertainties due to hydro(geo)logical phenomena are taken into account by Monte-Carlo simulation, including uncertainty in the ambient groundwater flow direction, uncertainty of the conductivity field, and different scenarios for the pumping rates of the production wells. To avoid numerical dispersion, we use particle-tracking random-walk methods when simulating transport.
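    Given such Monte-Carlo transport runs, the detection-probability objective for a candidate network reduces to a simple count over realizations; a schematic sketch (well names and plume sets are invented):

```python
def detection_probability(network, plumes):
    """Fraction of Monte-Carlo plume realizations detected by at least one
    well of the network; each plume is represented by the set of candidate
    wells that would intercept it (precomputed by transport simulation)."""
    hits = sum(1 for plume in plumes if plume & set(network))
    return hits / len(plumes)

# hypothetical realizations over candidate wells w1..w4; the empty set is
# a plume that escapes every candidate position
plumes = [{"w1", "w2"}, {"w2"}, {"w3"}, {"w2", "w4"}, set()]
p = detection_probability(["w2", "w3"], plumes)
```

    The multi-objective search then trades this probability against early-warning time and cost for each risk class.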

  6. A Movement-Assisted Deployment of Collaborating Autonomous Sensors for Indoor and Outdoor Environment Monitoring

    PubMed Central

    Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał

    2016-01-01

    Using mobile robots or unmanned vehicles to assist the optimal deployment of wireless sensors in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the application of numerical optimization and computer simulation techniques to the on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a connectivity-maintaining mobility model that uses the concept of a virtual potential function to calculate the motion trajectories of the platforms carrying sensors. Their quality and utility have been demonstrated through simulation experiments and are discussed in the final part of the paper. PMID:27649186
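    The virtual potential function concept, attraction beyond an equilibrium distance and repulsion inside it, can be sketched as follows (illustrative constants, not the paper's algorithms):

```python
import math

def potential_force(pos, neighbors, d0=1.0, k=0.5):
    """Net virtual force on a node: each neighbor attracts when farther
    than the equilibrium distance d0 (maintaining connectivity) and repels
    when closer (spreading coverage)."""
    fx = fy = 0.0
    for nx, ny in neighbors:
        dx, dy = nx - pos[0], ny - pos[1]
        d = math.hypot(dx, dy) or 1e-9   # avoid division by zero
        mag = k * (d - d0) / d           # > 0 pulls toward the neighbor
        fx += mag * dx
        fy += mag * dy
    return fx, fy

pull = potential_force((0.0, 0.0), [(2.0, 0.0)])   # far neighbor attracts
push = potential_force((0.0, 0.0), [(0.5, 0.0)])   # near neighbor repels
```

    Integrating these forces over time yields motion trajectories in which nodes settle near the equilibrium spacing, which is the mechanism behind the connectivity-maintaining mobility model.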

  7. A Movement-Assisted Deployment of Collaborating Autonomous Sensors for Indoor and Outdoor Environment Monitoring.

    PubMed

    Niewiadomska-Szynkiewicz, Ewa; Sikora, Andrzej; Marks, Michał

    2016-09-14

    Using mobile robots or unmanned vehicles to assist the optimal deployment of wireless sensors in a working space can significantly enhance the capability to investigate unknown environments. This paper addresses the application of numerical optimization and computer simulation techniques to the on-line calculation of a wireless sensor network topology for monitoring and tracking purposes. We focus on the design of a self-organizing and collaborative mobile network that enables continuous data transmission to the data sink (base station) and automatically adapts its behavior to changes in the environment to achieve a common goal. The pre-defined and self-configuring approaches to the mobile-based deployment of sensors are compared and discussed. A family of novel algorithms for the optimal placement of mobile wireless devices for permanent monitoring of indoor and outdoor dynamic environments is described. They employ a connectivity-maintaining mobility model that uses the concept of a virtual potential function to calculate the motion trajectories of the platforms carrying sensors. Their quality and utility have been demonstrated through simulation experiments and are discussed in the final part of the paper.

  8. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks.

    PubMed

    Puente Fernández, Jesús Antonio; García Villalba, Luis Javier; Kim, Tai-Hoon

    2018-04-03

    Prediction systems face challenges on two fronts: the relation between video quality and observed session features on the one hand, and the dynamic changes in video quality on the other. Software-Defined Networking (SDN) is a new network architecture concept that separates the control plane (controller) from the data plane (switches) in network devices. Through the southbound interface, it is possible to deploy monitoring tools that obtain the network status and retrieve collections of statistics. Achieving the most accurate statistics therefore depends on the strategy used for monitoring and for requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flow in SDN networks. The algorithm groups network switches into clusters according to their number of ports and applies different monitoring techniques to each cluster. This grouping avoids monitoring queries to switches with common characteristics and thereby omits redundant information. In this way, the proposal decreases the number of monitoring queries to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar accuracy while decreasing the number of queries to the switches.
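    The grouping idea is straightforward to sketch: switches with the same port count share one representative query (an illustrative simplification of the proposed clustering):

```python
from collections import defaultdict

def cluster_switches(switch_ports):
    """Group switches by port count; polling one representative per
    cluster replaces per-switch polling for similar devices."""
    clusters = defaultdict(list)
    for name, ports in switch_ports.items():
        clusters[ports].append(name)
    return dict(clusters)

# hypothetical topology: five switches, three distinct port counts
switches = {"s1": 4, "s2": 8, "s3": 4, "s4": 8, "s5": 24}
clusters = cluster_switches(switches)
queries_saved = len(switches) - len(clusters)  # one query per cluster
```

    Here three queries replace five, and the savings grow with topology size and homogeneity, which is the load reduction the paper measures against traditional per-switch polling.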

  9. Evaluating data worth for ground-water management under uncertainty

    USGS Publications Warehouse

    Wagner, B.J.

    1999-01-01

    A decision framework is presented for assessing the value of ground-water sampling within the context of ground-water management under uncertainty. The framework couples two optimization models, a chance-constrained ground-water management model and an integer-programming sampling network design model, to identify optimal pumping and sampling strategies. The methodology consists of four steps: (1) the optimal ground-water management strategy for the present level of model uncertainty is determined using the chance-constrained management model; (2) for a specified data collection budget, the monitoring network design model identifies, prior to data collection, the sampling strategy that will minimize model uncertainty; (3) the optimal ground-water management strategy is recalculated on the basis of the projected model uncertainty after sampling; and (4) the worth of the monitoring strategy is assessed by comparing the value of the sample information (i.e., the projected reduction in management costs) with the cost of data collection. Steps 2-4 are repeated for a series of data collection budgets, producing a suite of management/monitoring alternatives, from which the best alternative can be selected. A hypothetical example demonstrates the methodology's ability to identify the ground-water sampling strategy with greatest net economic benefit for ground-water management.
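    Steps 2-4, repeated over a series of budgets, amount to a net-benefit comparison; a schematic sketch (the management-cost values are hypothetical stand-ins for the outputs of the coupled optimization models):

```python
def best_monitoring_plan(budgets, mgmt_cost_after, baseline_cost):
    """Worth of a sampling strategy = projected reduction in management
    cost minus the cost of collecting the data; return the budget with
    the greatest net benefit. mgmt_cost_after maps each budget to the
    projected management cost after sampling at that budget."""
    plans = [(b, (baseline_cost - mgmt_cost_after[b]) - b) for b in budgets]
    return max(plans, key=lambda t: t[1])

budget, net = best_monitoring_plan(
    budgets=[10, 20, 40],
    mgmt_cost_after={10: 95, 20: 70, 40: 60},
    baseline_cost=100)
```

    In this toy suite the mid-sized budget wins: the smallest budget buys too little uncertainty reduction, while the largest one costs more than the management savings it produces.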

  10. Wavelet decomposition and radial basis function networks for system monitoring

    NASA Astrophysics Data System (ADS)

    Ikonomopoulos, A.; Endou, A.

    1998-10-01

    Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.

  11. Induced Seismicity Related to Hydrothermal Operation of Geothermal Projects in Southern Germany - Observations and Future Directions

    NASA Astrophysics Data System (ADS)

    Megies, T.; Kraft, T.; Wassermann, J. M.

    2015-12-01

    Geothermal power plants in Southern Germany are operated hydrothermally and at low injection pressures in a seismically inactive region of very low seismic hazard. For that reason, permit authorities initially imposed no monitoring requirements on the operating companies. After a series of events perceived by local residents, a scientific monitoring survey was conducted over several years, revealing several hundred induced earthquakes at one project site. We summarize results from monitoring at this site, including absolute locations in a local 3D velocity model, relocations using double-difference and master-event methods, and focal mechanism determinations that show a clear association with fault structures in the reservoir that extend down into the underlying crystalline basement. To better constrain the shear-wave velocity models, which have a strong influence on hypocentral depth estimates, several different approaches to estimating layered vp/vs models are employed. Results from these studies have prompted permit authorities to start imposing minimal monitoring requirements. Since some of these geothermal projects are separated by only a few kilometers, we investigate the capabilities of an optimized network combining the monitoring resources of six neighboring well doublets into a joint network. The optimization takes into account the background noise conditions, which are highly heterogeneous in this local-scale urban environment, and the feasibility of potential monitoring sites, removing non-viable sites before the optimization procedure. First results from the actual network realization show good detection capabilities for small microearthquakes despite the minimal instrumentation effort, demonstrating the benefits of well-coordinated monitoring.

  12. Optimal design of hydrometric monitoring networks with dynamic components based on Information Theory

    NASA Astrophysics Data System (ADS)

    Alfonso, Leonardo; Chacon, Juan; Solomatine, Dimitri

    2016-04-01

    The EC-FP7 WeSenseIt project proposes the development of a Citizen Observatory of Water, aiming to enhance environmental monitoring and forecasting with the help of citizens equipped with low-cost sensors and personal devices such as smartphones and smart umbrellas. Citizen Observatories may thus complement the limited availability of data in terms of spatial and temporal density, which is of interest, among other areas, for improving hydraulic and hydrological models. At this point, the following question arises: how can citizens who are part of a citizen observatory be optimally guided so that the data they collect and send are useful for improving modelling and water management? This research proposes a new methodology to identify the optimal location and timing of potential observations coming from moving sensors of hydrological variables. The methodology is based on Information Theory, which has been widely used in hydrometric monitoring design [1-4]. In particular, it uses the concept of Joint Entropy as a measure of the amount of information contained in a set of random variables, which in our case correspond to the time series of hydrological variables captured at given locations in a catchment. The methodology presented is a step forward in the state of the art because it solves the multi-objective optimisation problem of simultaneously obtaining the minimum number of informative, non-redundant sensors needed at a given time, so that the best configuration of monitoring sites is found at every particular moment. To this end, the existing algorithms have been improved to make them efficient. The method is applied to cases in The Netherlands, the UK and Italy and proves to have great potential to complement existing in-situ monitoring networks. [1] Alfonso, L., A. Lobbrecht, and R. Price (2010a), Information theory-based approach for location of monitoring water level gauges in polders, Water Resources Research, 46(3), W03528. [2] Alfonso, L., A. Lobbrecht, and R. Price (2010b), Optimization of water level monitoring network in polder systems using information theory, Water Resources Research, 46(12), W12553, doi:10.1029/2009WR008953. [3] Alfonso, L., L. He, A. Lobbrecht, and R. Price (2013), Information theory applied to evaluate the discharge monitoring network of the Magdalena River, Journal of Hydroinformatics, 15(1), 211-228. [4] Alfonso, L., E. Ridolfi, S. Gaytan-Aguilar, F. Napolitano, and F. Russo (2014), Ensemble Entropy for Monitoring Network Design, Entropy, 16(3), 1365-1375.
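    The greedy joint-entropy selection underlying such methods can be sketched on discretized series (a toy illustration, not the WeSenseIt implementation):

```python
import math

def joint_entropy(series_list):
    """Shannon entropy (bits) of the joint distribution of discretized
    sensor time series, estimated from the relative frequencies of the
    observed value tuples."""
    tuples = list(zip(*series_list))
    n = len(tuples)
    counts = {}
    for t in tuples:
        counts[t] = counts.get(t, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def greedy_monitor_set(candidates, k):
    """Add, one at a time, the sensor whose inclusion maximizes the joint
    entropy of the selected set (most information, least redundancy)."""
    chosen = []
    for _ in range(k):
        best = max((s for s in candidates if s not in chosen),
                   key=lambda s: joint_entropy(
                       [candidates[c] for c in chosen] + [candidates[s]]))
        chosen.append(best)
    return chosen

# discretized (binned) water levels at three hypothetical gauges;
# g3 duplicates g1 and therefore adds no information
gauges = {"g1": [0, 0, 1, 1], "g2": [0, 1, 0, 1], "g3": [0, 0, 1, 1]}
picked = greedy_monitor_set(gauges, 2)
```

    The redundant gauge is skipped because adding it leaves the joint entropy unchanged, which is precisely the non-redundancy criterion of the monitoring design.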

  13. An ensemble-based algorithm for optimizing the configuration of an in situ soil moisture monitoring network

    NASA Astrophysics Data System (ADS)

    De Vleeschouwer, Niels; Verhoest, Niko E. C.; Gobeyn, Sacha; De Baets, Bernard; Verwaeren, Jan; Pauwels, Valentijn R. N.

    2015-04-01

    The continuous monitoring of soil moisture in a permanent network can yield an interesting data product for use in hydrological modeling. Major advantages of in situ observations compared to remote sensing products are the potential vertical extent of the measurements, the smaller temporal resolution of the observation time series, the smaller impact of land cover variability on the observation bias, etc. However, two major disadvantages are the typically small integration volume of in situ measurements, and the often large spacing between monitoring locations. This causes only a small part of the modeling domain to be directly observed. Furthermore, the spatial configuration of the monitoring network is typically non-dynamic in time. Generally, e.g. when applying data assimilation, maximizing the observed information under given circumstances will lead to a better qualitative and quantitative insight of the hydrological system. It is therefore advisable to perform a prior analysis in order to select those monitoring locations which are most predictive for the unobserved modeling domain. This research focuses on optimizing the configuration of a soil moisture monitoring network in the catchment of the Bellebeek, situated in Belgium. A recursive algorithm, strongly linked to the equations of the Ensemble Kalman Filter, has been developed to select the most predictive locations in the catchment. The basic idea behind the algorithm is twofold. On the one hand a minimization of the modeled soil moisture ensemble error covariance between the different monitoring locations is intended. This causes the monitoring locations to be as independent as possible regarding the modeled soil moisture dynamics. On the other hand, the modeled soil moisture ensemble error covariance between the monitoring locations and the unobserved modeling domain is maximized. The latter causes a selection of monitoring locations which are more predictive towards unobserved locations. 
The main factors that influence the outcome of the algorithm are the choice of the hydrological model, the uncertainty model applied for ensemble generation, and the general wetness of the catchment over the period for which the error covariance is computed. In this research the influence of the latter two is examined in more depth. Furthermore, the optimal network configuration resulting from the newly developed algorithm is compared to network configurations obtained by two other algorithms. The first is based on a temporal stability analysis of the modeled soil moisture in order to identify catchment-representative monitoring locations with regard to average conditions. The second involves the clustering of available spatially distributed data (e.g. land cover and soil maps) that are not obtained by hydrological modeling.
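The covariance-based selection idea can be sketched in a few lines. The scoring rule below (mean absolute ensemble covariance to the remaining domain, minus a redundancy penalty toward already-chosen stations) is a simplified stand-in for the paper's EnKF-derived criterion, and all data are synthetic:

```python
# Minimal sketch, assuming a synthetic soil moisture ensemble at 12
# candidate locations; the scoring rule is an illustrative simplification,
# not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_ens, n_cand = 50, 12                 # ensemble members, candidate locations
ens = rng.normal(size=(n_ens, n_cand))
ens[:, 1] = ens[:, 0] + 0.05 * rng.normal(size=n_ens)  # two nearly redundant sites

C = np.cov(ens, rowvar=False)          # ensemble error covariance between candidates

def greedy_select(C, k):
    """Pick k stations: high coupling to the rest of the domain,
    low coupling to stations already selected."""
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for j in range(C.shape[0]):
            if j in chosen:
                continue
            others = [i for i in range(C.shape[0]) if i != j and i not in chosen]
            gain = np.mean(np.abs(C[j, others]))           # predictive for unobserved
            penalty = np.mean(np.abs(C[j, chosen])) if chosen else 0.0
            if gain - penalty > best_score:
                best, best_score = j, gain - penalty
        chosen.append(best)
    return chosen

sites = greedy_select(C, 3)
print(sites)
```

The redundancy penalty is what keeps the two nearly identical candidate sites from both being selected.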

  14. Clustering and Flow Conservation Monitoring Tool for Software Defined Networks

    PubMed Central

    Puente Fernández, Jesús Antonio

    2018-01-01

    Prediction systems face challenges on two fronts: capturing the relation between video quality and observed session features, and tracking dynamic changes in video quality. Software-Defined Networking (SDN) is a new network architecture concept that separates the control plane (controller) from the data plane (switches) in network devices. Through the southbound interface, it is possible to deploy monitoring tools that obtain the network status and collect statistics. Achieving the most accurate statistics therefore depends on the strategy used for monitoring and for requesting information from network devices. In this paper, we propose an enhanced algorithm for requesting statistics to measure traffic flow in SDN networks. The algorithm groups network switches into clusters according to their number of ports and applies different monitoring techniques to each cluster. This grouping avoids monitoring queries to switches with common characteristics and thereby omits redundant information. In this way, the proposal decreases the number of monitoring queries sent to switches, improving network traffic and preventing switch overload. We have tested our optimization in a video streaming simulation using different types of videos. The experiments and a comparison with traditional monitoring techniques demonstrate the feasibility of our proposal, which maintains similar measurement quality while decreasing the number of queries to the switches. PMID:29614049
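The clustering idea can be illustrated with a toy sketch; the grouping rule (identical port counts) and the switch data below are invented for illustration and do not reproduce the paper's algorithm:

```python
# Illustrative sketch: group switches by port count and poll only one
# representative per group, reusing its statistics for the rest.
from collections import defaultdict

switches = {"s1": 4, "s2": 4, "s3": 8, "s4": 8, "s5": 48}  # switch -> port count

def cluster_by_ports(switches):
    clusters = defaultdict(list)
    for name, ports in sorted(switches.items()):
        clusters[ports].append(name)
    return dict(clusters)

def query_plan(clusters):
    # poll only the first switch of each cluster, omitting redundant queries
    return [members[0] for members in clusters.values()]

clusters = cluster_by_ports(switches)
polled = query_plan(clusters)
print(clusters)   # {4: ['s1', 's2'], 8: ['s3', 's4'], 48: ['s5']}
print(polled)     # ['s1', 's3', 's5'] -> 3 queries instead of 5
```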

  15. A Great Lakes atmospheric mercury monitoring network: evaluation and design

    USGS Publications Warehouse

    Risch, Martin R.; Kenski, Donna M.; David, A.

    2014-01-01

    As many as 51 mercury (Hg) wet-deposition-monitoring sites from 4 networks were operated in 8 USA states and Ontario, Canada in the North American Great Lakes Region from 1996 to 2010. By 2013, 20 of those sites were no longer in operation and approximately half the geographic area of the Region was represented by a single Hg-monitoring site. In response, a Great Lakes Atmospheric Mercury Monitoring (GLAMM) network is needed as a framework for regional collaboration in Hg-deposition monitoring. The purpose of the GLAMM network is to detect changes in regional atmospheric Hg deposition related to changes in Hg emissions. An optimized design for the network was determined to be a minimum of 21 sites in a representative and approximately uniform geographic distribution. A majority of the active and historic Hg-monitoring sites in the Great Lakes Region are part of the National Atmospheric Deposition Program (NADP) Mercury Deposition Network (MDN) in North America and the GLAMM network is planned to be part of the MDN. To determine an optimized network design, active and historic Hg-monitoring sites in the Great Lakes Region were evaluated with a rating system of 21 factors that included characteristics of the monitoring locations and interpretations of Hg data. Monitoring sites were rated according to the number of Hg emissions sources and annual Hg emissions in a geographic polygon centered on each site. Hg-monitoring data from the sites were analyzed for long-term averages in weekly Hg concentrations in precipitation and weekly Hg-wet deposition, and for significant temporal trends in Hg concentrations and Hg deposition. A cluster analysis method was used to group sites with similar variability in their Hg data in order to identify sites that were unique for explaining Hg data variability in the Region. 
The network design included locations in protected natural areas, urban areas, Great Lakes watersheds, and in proximity to areas with a high density of annual Hg emissions and areas with high average weekly Hg wet deposition. In a statistical analysis, relatively strong, positive correlations in the wet deposition of Hg and sulfate were shown for co-located NADP Hg-monitoring and acid-rain monitoring sites in the Region. This finding indicated that efficiency in regional Hg monitoring can be improved by adding new Hg monitoring to existing NADP acid-rain monitoring sites. Implementation of the GLAMM network design will require Hg-wet-deposition monitoring to be: (a) continued at 12 MDN sites active in 2013 and (b) restarted or added at 9 NADP sites where it is absent in 2013. Ongoing discussions between the states in the Great Lakes Region, the Lake Michigan Air Directors Consortium (a regional planning entity), the NADP, the U.S. Environmental Protection Agency, and the U.S. Geological Survey are needed for coordinating the GLAMM network.

  16. Application of Artificial Neural Networks to the Design of Turbomachinery Airfoils

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri

    1997-01-01

    Artificial neural networks are widely used in engineering applications such as control, pattern recognition, plant modeling, and condition monitoring, to name just a few. In this seminar we will explore the possibility of applying neural networks to aerodynamic design, in particular the design of turbomachinery airfoils. The principal idea behind this effort is to represent the design space using a neural network (within some parameter limits), and then to employ an optimization procedure to search this space for a solution that exhibits optimal performance characteristics. Results obtained for design problems in two spatial dimensions will be presented.
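As a minimal illustration of surrogate-based design optimization, the sketch below fits a cheap model of a made-up performance function over one design parameter and then searches the surrogate for the optimum; a quadratic fit stands in for the neural network purely for brevity, and the "loss" function is invented:

```python
# Toy surrogate-based design optimization: sample an expensive objective at
# a few points, fit a cheap surrogate, then search the surrogate.
import numpy as np

def loss(x):                        # hypothetical aerodynamic loss vs. a shape parameter
    return (x - 0.3) ** 2 + 0.01

samples = np.linspace(0.0, 1.0, 9)                   # a few expensive "simulations"
coeffs = np.polyfit(samples, loss(samples), deg=2)   # surrogate of the design space

grid = np.linspace(0.0, 1.0, 1001)                   # cheap search on the surrogate
best = grid[np.argmin(np.polyval(coeffs, grid))]
print(round(best, 3))               # recovers the optimum near 0.3
```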

  17. A New Network Modeling Tool for the Ground-based Nuclear Explosion Monitoring Community

    NASA Astrophysics Data System (ADS)

    Merchant, B. J.; Chael, E. P.; Young, C. J.

    2013-12-01

    Network simulations have long been used to assess the performance of monitoring networks in detecting events, for purposes such as planning station deployments and evaluating network resilience to outages. The standard tool has been the SAIC-developed NetSim package. With correct parameters, NetSim can produce useful simulations; however, the package has several shortcomings: an older language (FORTRAN), an emphasis on seismic monitoring with limited support for other technologies, limited documentation, and a limited parameter set. Thus, we are developing NetMOD (Network Monitoring for Optimal Detection), a Java-based tool designed to assess the performance of ground-based networks. NetMOD's advantages include: it is coded in a modern, multi-platform language; it exploits modern computing hardware (e.g. multi-core processors); it incorporates monitoring technologies other than seismic; and it includes a well-validated default parameter set for the IMS stations. NetMOD is designed to be extendable through a plugin infrastructure, so new phenomenological models can be added. Development of the Seismic Detection Plugin is being pursued first; seismic location and infrasound and hydroacoustic detection plugins will follow. As an open-release package, NetMOD can provide a common tool that the monitoring community can use to produce assessments of monitoring networks and to verify assessments made by others.
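As a back-of-envelope example of the kind of quantity such network-performance simulations compute, the sketch below evaluates the probability that at least k stations detect an event, assuming independent per-station detection probabilities (the values are made up):

```python
# Probability that at least k of n independent stations detect an event,
# by enumerating all detect/miss outcomes (fine for small n).
from itertools import product

def p_at_least_k(probs, k):
    total = 0.0
    for outcome in product([0, 1], repeat=len(probs)):
        if sum(outcome) >= k:
            term = 1.0
            for hit, q in zip(outcome, probs):
                term *= q if hit else (1.0 - q)
            total += term
    return total

station_probs = [0.9, 0.8, 0.7, 0.6]   # hypothetical per-station values
p3 = p_at_least_k(station_probs, 3)
print(round(p3, 4))                    # 0.7428
```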

  18. An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida

    USGS Publications Warehouse

    Gain, W.S.

    1997-01-01

    Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
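The iterative benefit-cost optimization can be sketched as a greedy loop that repeatedly funds the upgrade with the best ratio of uncertainty reduction to cost until a budget is exhausted; the site labels, reductions, and costs below are invented for illustration and are not the report's values:

```python
# Greedy benefit-cost selection sketch: fund upgrades by best ratio of
# std-error reduction to cost, stopping when the budget would be exceeded.
def optimize(upgrades, budget):
    chosen, spent = [], 0.0
    remaining = list(upgrades)
    while remaining:
        best = max(remaining, key=lambda u: u["reduction"] / u["cost"])
        if spent + best["cost"] > budget:
            break
        chosen.append(best["site"])
        spent += best["cost"]
        remaining.remove(best)
    return chosen, spent

upgrades = [  # hypothetical std-error reduction (t/yr) vs. annual cost ($)
    {"site": "inflow-A", "reduction": 40.0, "cost": 60000},
    {"site": "inflow-B", "reduction": 15.0, "cost": 50000},
    {"site": "inflow-C", "reduction": 25.0, "cost": 30000},
]
chosen, spent = optimize(upgrades, budget=100000)
print(chosen, spent)
```

Here inflow-C (best ratio) is funded first, then inflow-A; inflow-B would exceed the budget.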

  19. Wireless Sensor Network for Electric Transmission Line Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alphenaar, Bruce

    Generally, federal agencies tasked to oversee power grid reliability are dependent on data from grid infrastructure owners and operators in order to obtain a basic level of situational awareness. Since there are many owners and operators involved in the day-to-day functioning of the power grid, the task of accessing, aggregating and analyzing grid information from these sources is not a trivial one. Seemingly basic tasks such as synchronizing data timestamps between many different data providers and sources can be difficult, as evidenced during the post-event analysis of the August 2003 blackout. In this project we investigate the efficacy and cost effectiveness of deploying a network of wireless power line monitoring devices as a method of independently monitoring key parts of the power grid as a complement to the data which is currently available to federal agencies from grid system operators. Such a network is modeled on proprietary power line monitoring technologies and networks invented, developed and deployed by Genscape, a Louisville, Kentucky-based real-time energy information provider. Genscape measures transmission line power flow using measurements of electromagnetic fields under overhead high voltage transmission power lines in the United States and Europe. Opportunities for optimization of the commercial power line monitoring technology were investigated in this project to enable lower power consumption, lower cost and improvements to measurement methodologies. These optimizations were performed in order to better enable the use of wireless transmission line monitors in large network deployments (perhaps covering several thousand power lines) for federal situational awareness needs. Power consumption and cost reduction were addressed by developing a power line monitor using a low power, low cost wireless telemetry platform known as the ''Mote''. Motes were first developed as smart sensor nodes in wireless mesh networking applications. 
On such a platform, it has been demonstrated in this project that wireless monitoring units can effectively deliver real-time transmission line power flow information for less than $500 per monitor. The data delivered by such a monitor has during the course of the project been integrated with a national grid situational awareness visualization platform developed by Oak Ridge National Laboratory. Novel vibration energy scavenging methods based on piezoelectric cantilevers were also developed as a proposed method to power such monitors, with a goal of further cost reduction and large-scale deployment. Scavenging methods developed during the project resulted in 50% greater power output than conventional cantilever-based vibrational energy scavenging devices typically used to power smart sensor nodes. Lastly, enhanced and new methods for electromagnetic field sensing using multi-axis magnetometers and infrared reflectometry were investigated for potential monitoring applications in situations with a high density of power lines or high levels of background 60 Hz noise in order to isolate power lines of interest from other power lines in close proximity. The goal of this project was to investigate and demonstrate the feasibility of using small form factor, highly optimized, low cost, low power, non-contact, wireless electric transmission line monitors for delivery of real-time, independent power line monitoring for the US power grid. 
The project was divided into three main types of activity as follows: (1) research into expanding the range of applications for non-contact power line monitoring to enable large-scale, low-cost sensor network deployments (Tasks 1, 2); (2) optimization of individual sensor hardware components to reduce size, cost and power consumption, and testing in a pilot field study (Tasks 3, 5); and (3) demonstration of the feasibility of using the data from the network of power line monitors via a range of custom-developed alerting and data visualization applications to deliver real-time information to federal agencies and others tasked with grid reliability (Tasks 6, 8).

  20. Assessing and optimizing infrasound network performance: application to remote volcano monitoring

    NASA Astrophysics Data System (ADS)

    Tailpied, D.; LE Pichon, A.; Marchetti, E.; Kallel, M.; Ceranna, L.

    2014-12-01

    Infrasound is an efficient monitoring technique for remotely detecting and characterizing explosive sources such as volcanoes. Simulation methods incorporating realistic source and propagation effects have been developed to quantify the detection capability of any network. These methods can also be used to optimize the network configuration (number of stations, geographical locations) in order to reduce detection thresholds, taking into account seasonal effects on infrasound propagation. Recent studies have shown that remote infrasound observations can provide useful information about the eruption chronology and the released acoustic energy. Comparisons with near-field recordings make it possible to evaluate the potential of these observations to better constrain source parameters when other monitoring techniques (satellite, seismic, gas) are not available or cannot be applied. Because of its regular activity, the well-instrumented Mount Etna is a unique repetitive natural source in Europe for testing and optimizing detection and simulation methods. The closest infrasound station that is part of the International Monitoring System is located in Tunisia (IS48). In summer, during the downwind season, it allows unambiguous identification of signals associated with Etna eruptions. Under the European ARISE project (Atmospheric dynamics InfraStructure in Europe, FP7/2007-2013), experimental arrays have been installed in order to characterize infrasound propagation over different ranges of distance and direction. In addition, a small-aperture array, set up on the flank of the volcano by the University of Firenze, has been operating since 2007. Such an experimental setting offers an opportunity to address the societal benefits that can be achieved through routine infrasound monitoring.

  1. Sniffer Channel Selection for Monitoring Wireless LANs

    NASA Astrophysics Data System (ADS)

    Song, Yuan; Chen, Xian; Kim, Yoo-Ah; Wang, Bing; Chen, Guanling

    Wireless sniffers are often used to monitor APs in wireless LANs (WLANs) for network management, fault detection, traffic characterization, and optimizing deployment. It is cost effective to deploy single-radio sniffers that can monitor multiple nearby APs. However, since nearby APs often operate on orthogonal channels, a sniffer needs to switch among multiple channels to monitor its nearby APs. In this paper, we formulate and solve two optimization problems on sniffer channel selection. Both problems require that each AP be monitored by at least one sniffer. In addition, one optimization problem requires minimizing the maximum number of channels that a sniffer listens to, and the other requires minimizing the total number of channels that the sniffers listen to. We propose a novel LP-relaxation based algorithm, and two simple greedy heuristics for the above two optimization problems. Through simulation, we demonstrate that all the algorithms are effective in achieving their optimization goals, and the LP-based algorithm outperforms the greedy heuristics.
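In the spirit of the greedy heuristics described (not the paper's exact algorithms), the sketch below handles the minimize-total-channels variant by repeatedly assigning the (sniffer, channel) pair that monitors the most still-uncovered APs; the topology and channel numbers are invented:

```python
# Greedy set-cover-style sketch for sniffer channel selection.
ap_channel = {"A": 1, "B": 6, "C": 6, "D": 11}      # AP -> operating channel
in_range = {"x": {"A", "B", "C"}, "y": {"C", "D"}}  # sniffer -> audible APs

def greedy_channels(ap_channel, in_range):
    """Pick (sniffer, channel) pairs until every AP is monitored."""
    uncovered = set(ap_channel)
    assignment = []
    while uncovered:
        pairs = [(s, ap_channel[a]) for s, aps in in_range.items() for a in aps]
        best = max(pairs, key=lambda sc: len(
            {a for a in in_range[sc[0]] if ap_channel[a] == sc[1]} & uncovered))
        uncovered -= {a for a in in_range[best[0]] if ap_channel[a] == best[1]}
        assignment.append(best)
    return assignment

plan = greedy_channels(ap_channel, in_range)
print(plan)   # sniffer x listens on channels 6 and 1; sniffer y on channel 11
```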

  2. Applying model abstraction techniques to optimize monitoring networks for detecting subsurface contaminant transport

    USDA-ARS?s Scientific Manuscript database

    Improving strategies for monitoring subsurface contaminant transport includes performance comparison of competing models, developed independently or obtained via model abstraction. Model comparison and parameter discrimination involve specific performance indicators selected to better understand s...

  3. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

    This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: the number of monitoring sites, the number of years of historical monitoring data, and the number of input water quality parameters used. A Box-Behnken three-factor, three-level experimental design was applied for simultaneous spatial, temporal, and input-variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites upstream of Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) that covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO₄³⁻, which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), well known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R² ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
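The three-factor Box-Behnken layout referred to above can be written down directly: for each pair of factors, take the four (±1) corners with the third factor held at its mid level, then add a centre point:

```python
# Generate the three-factor Box-Behnken design in coded (-1, 0, +1) units.
from itertools import combinations, product

def box_behnken_3():
    runs = []
    for i, j in combinations(range(3), 2):      # each pair of factors
        for a, b in product((-1, 1), repeat=2): # four corners of that pair
            run = [0, 0, 0]
            run[i], run[j] = a, b
            runs.append(tuple(run))
    runs.append((0, 0, 0))                      # centre point
    return runs

design = box_behnken_3()
print(len(design))   # 13 runs instead of 3**3 = 27 full-factorial runs
```

(Practical designs often replicate the centre point; a single centre run is shown for brevity.)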

  4. Two years of LCOGT operations: the challenges of a global observatory

    NASA Astrophysics Data System (ADS)

    Volgenau, Nikolaus; Boroson, Todd

    2016-07-01

    With 18 telescopes distributed over 6 sites, and more telescopes being added in 2016, the Las Cumbres Observatory Global Telescope Network is a unique resource for time-domain astronomy. The Network's continuous coverage of the night sky, and the optimization of the observing schedule over all sites simultaneously, have enabled LCOGT users to produce significant science results. However, practical challenges to maximizing the Network's science output remain. The Network began providing observations for members of its Science Collaboration and other partners in May 2014. In the two years since then, LCOGT has made a number of improvements to increase the Network's science yield. We also now have two years' experience monitoring observatory performance; effective monitoring of an observatory that spans the globe is a complex enterprise. Here, we describe some of LCOGT's efforts to monitor the Network, assess the quality of science data, and improve communication with our users.

  5. Recognition physical activities with optimal number of wearable sensors using data mining algorithms and deep belief network.

    PubMed

    Al-Fatlawi, Ali H; Fatlawi, Hayder K; Sai Ho Ling

    2017-07-01

    Monitoring daily physical activities benefits the health care field in several ways, in particular with the development of wearable sensors. This paper adopts effective methods to calculate the optimal number of necessary sensors and to build a reliable, high-accuracy monitoring system. Three data mining algorithms, namely Decision Tree, Random Forest and the PART algorithm, have been applied for the sensor selection process. Furthermore, a deep belief network (DBN) has been investigated to recognise 33 physical activities effectively. The results indicated that the proposed method is reliable, with an overall accuracy of 96.52%, and that the number of sensors is minimised from nine to six.
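A small stand-in for the sensor-selection step: rank candidate "sensors" by random-forest feature importance on synthetic activity data and keep the top-ranked ones. The data, sensor count, and labeling rule are fabricated for illustration:

```python
# Sensor selection via random-forest feature importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))                # 5 candidate "sensor" channels
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # only sensors 0 and 3 matter

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
keep = sorted(int(i) for i in ranking[:2])   # retain the top-ranked sensors
print(keep)
```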

  6. The use of hierarchical clustering for the design of optimized monitoring networks

    NASA Astrophysics Data System (ADS)

    Soares, Joana; Makar, Paul Andrew; Aklilu, Yayne; Akingunola, Ayodeji

    2018-05-01

    Associativity analysis is a powerful tool to deal with large-scale datasets by clustering the data on the basis of (dis)similarity and can be used to assess the efficacy and design of air quality monitoring networks. We describe here our use of Kolmogorov-Zurbenko filtering and hierarchical clustering of NO2 and SO2 passive and continuous monitoring data to analyse and optimize air quality networks for these species in the province of Alberta, Canada. The methodology applied in this study assesses dissimilarity between monitoring station time series based on two metrics: 1 - R, R being the Pearson correlation coefficient, and the Euclidean distance; we find that both should be used in evaluating monitoring site similarity. We have combined the analytic power of hierarchical clustering with the spatial information provided by deterministic air quality model results, using the gridded time series of model output as potential station locations, as a proxy for assessing monitoring network design and for network optimization. We demonstrate that clustering results depend on the air contaminant analysed, reflecting the difference in the respective emission sources of SO2 and NO2 in the region under study. Our work shows that much of the signal identifying the sources of NO2 and SO2 emissions resides in shorter timescales (hourly to daily) due to short-term variation of concentrations and that longer-term averages in data collection may lose the information needed to identify local sources. However, the methodology identifies stations mainly influenced by seasonality, if larger timescales (weekly to monthly) are considered. We have performed the first dissimilarity analysis based on gridded air quality model output and have shown that the methodology is capable of generating maps of subregions within which a single station will represent the entire subregion, to a given level of dissimilarity. 
We have also shown that our approach is capable of identifying different sampling methodologies as well as outliers (stations' time series which are markedly different from all others in a given dataset).
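The 1 − R dissimilarity clustering can be illustrated on synthetic station time series (two correlated stations plus one dissimilar station), using SciPy's hierarchical clustering; this is a minimal sketch, not the paper's full KZ-filtered workflow:

```python
# Hierarchical clustering of station time series with 1 - R dissimilarity.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
t = np.arange(200)
s1 = np.sin(t / 10) + 0.1 * rng.normal(size=t.size)
s2 = np.sin(t / 10) + 0.1 * rng.normal(size=t.size)   # similar to s1
s3 = rng.normal(size=t.size)                          # dissimilar station

D = 1.0 - np.corrcoef([s1, s2, s3])     # dissimilarity: 1 - Pearson R
np.fill_diagonal(D, 0.0)

Z = linkage(squareform(D, checks=False), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)    # s1 and s2 share a cluster; s3 stands alone
```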

  7. Long-Term Groundwater Monitoring Optimization, Clare Water Supply Superfund Site, Permeable Reactive Barrier and Soil Remedy Areas, Clare, Michigan

    EPA Pesticide Factsheets

    This report contains a review of the long-term groundwater monitoring network for the Permeable Reactive Barrier (PRB) and Soil Remedy Areas at the Clare Water Supply Superfund Site in Clare, Michigan.

  8. [The therapeutic drug monitoring network server of tacrolimus for Chinese renal transplant patients].

    PubMed

    Deng, Chen-Hui; Zhang, Guan-Min; Bi, Shan-Shan; Zhou, Tian-Yan; Lu, Wei

    2011-07-01

    This study develops a therapeutic drug monitoring (TDM) network server of tacrolimus for Chinese renal transplant patients, which helps doctors manage patients' information and provides three levels of prediction. The database management system MySQL was employed to build and manage the database of patient and doctor information, and hypertext markup language (HTML) and JavaServer Pages (JSP) technology were employed to construct the network server for database management. Based on the population pharmacokinetic model of tacrolimus for Chinese renal transplant patients, the above programming languages were used to construct the population prediction and subpopulation prediction modules. Based on the Bayesian principle and maximization of the posterior probability function, an objective function was established and minimized by an optimization algorithm to estimate a patient's individual pharmacokinetic parameters. Testing showed that the network server provides the basic functions for database management and the three levels of prediction needed to help doctors optimize the tacrolimus regimen for Chinese renal transplant patients.
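The Bayesian (MAP) estimation step can be sketched schematically: combine a population prior on a pharmacokinetic parameter with a patient's observed concentrations and minimize the negative log posterior. The one-parameter steady-state model, prior, and data below are invented toy values, not the paper's tacrolimus model:

```python
# MAP estimation of an individual clearance from a population prior and
# a few observed steady-state concentrations (all values are toy numbers).
import numpy as np
from scipy.optimize import minimize_scalar

dose_rate = 5.0                       # hypothetical mg/h infusion rate
obs = np.array([8.0, 9.5, 8.5])       # observed steady-state concentrations

cl_pop, omega = 0.7, 0.3              # prior: ln(CL) ~ N(ln 0.7, 0.3**2)
sigma = 1.0                           # residual (measurement) error SD

def neg_log_posterior(cl):
    pred = dose_rate / cl             # steady state: C = rate / CL
    loglik = -0.5 * np.sum(((obs - pred) / sigma) ** 2)
    logprior = -0.5 * ((np.log(cl) - np.log(cl_pop)) / omega) ** 2
    return -(loglik + logprior)

res = minimize_scalar(neg_log_posterior, bounds=(0.1, 2.0), method="bounded")
cl_map = res.x
print(round(cl_map, 3))
```

The MAP estimate lands between the pure maximum-likelihood value (about 0.58 here) and the population prior mode (0.7), weighted by their relative uncertainties.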

  9. Monitoring Churn in Wireless Networks

    NASA Astrophysics Data System (ADS)

    Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger

    Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.

  10. Mobile Wireless Sensor Networks for Advanced Soil Sensing and Ecosystem Monitoring

    NASA Astrophysics Data System (ADS)

    Mollenhauer, Hannes; Schima, Robert; Remmler, Paul; Mollenhauer, Olaf; Hutschenreuther, Tino; Toepfer, Hannes; Dietrich, Peter; Bumberger, Jan

    2015-04-01

    For an adequate characterization of ecosystems, it is necessary to detect individual processes with suitable monitoring strategies and methods. Due to the natural complexity of all environmental compartments, single-point or temporally and spatially fixed measurements are mostly insufficient for an adequate representation. The application of mobile wireless sensor networks for soil and atmosphere sensing offers significant benefits, because the sensor distribution, the sensor types, and the sample rate can easily be adjusted to the local test conditions (e.g. by using optimization approaches or event-triggering modes). This can be essential for the monitoring of heterogeneous and dynamic environmental systems and processes. One significant advantage of mobile ad hoc wireless sensor networks is their self-organizing behavior: the network autonomously initializes and optimizes itself. Satellite-based localization yields a major reduction in installation and operation costs and time. In addition, single-point measurements with one sensor are significantly improved by measuring continuously at several optimized points. Because analog and digital signal processing and computation are performed in the sensor nodes, close to the sensors, the amount of data to be transmitted can be reduced significantly, which leads to better energy management of the nodes. Furthermore, the miniaturization of the nodes and energy harvesting are current topics under investigation. First results of field measurements are given to present the potentials and limitations of this application in environmental science. In particular, in-situ data with numerous specific soil and atmosphere parameters per sensor node (more than 25) recorded over several days illustrate the high performance of this system for advanced soil sensing and soil-atmosphere interaction monitoring. 
Moreover, investigations of biotic and abiotic process interactions and the optimization of sensor positioning for measuring soil moisture are within the scope of this work, and initial results on these issues will be presented.

  11. Long-Term Monitoring Network Optimization Evaluation for Operable Unit 2, Bunker Hill Mining and Metallurgical Complex Superfund Site, Idaho

    EPA Pesticide Factsheets

    This report presents a description and evaluation of the ground water and surface water monitoring program associated with the Bunker Hill Mining and Metallurgical Complex Superfund Site (Bunker Hill) Operable Unit (OU) 2.

  12. Designing optimal greenhouse gas monitoring networks for Australia

    NASA Astrophysics Data System (ADS)

    Ziehn, T.; Law, R. M.; Rayner, P. J.; Roff, G.

    2016-01-01

    Atmospheric transport inversion is commonly used to infer greenhouse gas (GHG) flux estimates from concentration measurements. The optimal location of ground-based observing stations that supply these measurements can be determined by network design. Here, we use a Lagrangian particle dispersion model (LPDM) in reverse mode together with a Bayesian inverse modelling framework to derive optimal GHG observing networks for Australia. This extends the network design for carbon dioxide (CO2) performed by Ziehn et al. (2014) to also minimise the uncertainty on the flux estimates for methane (CH4) and nitrous oxide (N2O), both individually and in a combined network using multiple objectives. Optimal networks are generated by adding up to five new stations to the base network, which is defined as two existing stations, Cape Grim and Gunn Point, in southern and northern Australia respectively. The individual networks for CO2, CH4 and N2O and the combined observing network show large similarities because the flux uncertainties for each GHG are dominated by regions of biologically productive land. There is little penalty, in terms of flux uncertainty reduction, for the combined network compared to individually designed networks. The location of the stations in the combined network is sensitive to variations in the assumed data uncertainty across locations. A simple assessment of economic costs has been included in our network design approach, considering both establishment and maintenance costs. Our results suggest that, while site logistics change the optimal network, there is only a small impact on the flux uncertainty reductions achieved with increasing network size.
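A toy version of the Bayesian network-design calculation: each candidate station contributes a row to the flux-to-concentration Jacobian H, and stations are greedily added to minimize the total posterior flux uncertainty trace((P0⁻¹ + HᵀR⁻¹H)⁻¹). All matrices here are synthetic stand-ins, not the LPDM sensitivities of the paper:

```python
# Greedy Bayesian network design on a synthetic flux-inversion problem.
import numpy as np

rng = np.random.default_rng(3)
n_flux, n_cand = 6, 8
P0 = np.eye(n_flux)                         # prior flux error covariance
r = 0.1                                     # observation error variance
H_cand = rng.normal(size=(n_cand, n_flux))  # sensitivity row per candidate

def post_trace(rows):
    """Total posterior flux uncertainty after observing the given rows."""
    info = np.linalg.inv(P0)
    for h in rows:
        info = info + np.outer(h, h) / r    # information update per station
    return np.trace(np.linalg.inv(info))

chosen = []
for _ in range(3):                          # add three stations to the base
    scores = [(post_trace([H_cand[i] for i in chosen + [j]]), j)
              for j in range(n_cand) if j not in chosen]
    chosen.append(min(scores)[1])

print(chosen, round(post_trace([H_cand[i] for i in chosen]), 3))
```

Each added station can only reduce the posterior trace, so the greedy loop yields a monotonically shrinking flux uncertainty.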

  13. Assessment of water quality monitoring for the optimal sensor placement in lake Yahuarcocha using pattern recognition techniques and geographical information systems.

    PubMed

    Jácome, Gabriel; Valarezo, Carla; Yoo, Changkyoo

    2018-03-30

    Pollution and the eutrophication process are increasing in lake Yahuarcocha and constant water quality monitoring is essential for a better understanding of the patterns occurring in this ecosystem. In this study, key sensor locations were determined using spatial and temporal analyses combined with geographical information systems (GIS) to assess the influence of weather features, anthropogenic activities, and other non-point pollution sources. A water quality monitoring network was established to obtain data on 14 physicochemical and microbiological parameters at each of seven sample sites over a period of 13 months. A spatial and temporal statistical approach using pattern recognition techniques, such as cluster analysis (CA) and discriminant analysis (DA), was employed to classify and identify the most important water quality parameters in the lake. The original monitoring network was reduced to four optimal sensor locations based on a fuzzy overlay of the interpolations of concentration variations of the most important parameters.
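The cluster-analysis (CA) step that reduces seven sampling sites to four can be illustrated with a simple single-linkage agglomerative clustering of sites by their standardized water-quality profiles; one representative per cluster then remains in the network. The site names and parameter values below are invented, not data from the Yahuarcocha study.

```python
# Toy illustration of the CA step: group sampling sites by standardized
# water-quality profiles, stopping at k clusters. All values are invented.
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def agglomerate(profiles, k):
    """Single-linkage agglomerative clustering down to k clusters."""
    clusters = [[name] for name in profiles]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(profiles[a], profiles[b])
                               for a in clusters[ij[0]] for b in clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return clusters

sites = {                     # standardized [turbidity, total-P, DO] per site
    "S1": [0.9, 1.1, -0.8], "S2": [1.0, 0.9, -1.0],
    "S3": [-0.5, -0.4, 0.7], "S4": [-0.6, -0.6, 0.9],
    "S5": [0.1, 0.0, 0.1], "S6": [0.0, 0.2, 0.0], "S7": [1.1, 1.0, -0.9],
}
groups = [sorted(c) for c in agglomerate(sites, k=4)]
print(sorted(groups))
```

Sites with near-identical profiles collapse into one cluster, so a single sensor can represent each group — the same redundancy argument the study uses to cut the network to four locations.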

  14. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    PubMed

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  15. Optimization of Emissions Sensor Networks Incorporating Tradeoffs Between Different Sensor Technologies

    NASA Astrophysics Data System (ADS)

    Nicholson, B.; Klise, K. A.; Laird, C. D.; Ravikumar, A. P.; Brandt, A. R.

    2017-12-01

In order to comply with current and future methane emissions regulations, natural gas producers must develop emissions monitoring strategies for their facilities. In addition, regulators must develop air monitoring strategies over wide areas incorporating multiple facilities. However, in both of these cases, only a limited number of sensors can be deployed. With a wide variety of sensors to choose from in terms of cost, precision, accuracy, spatial coverage, location, orientation, and sampling frequency, it is difficult to design robust monitoring strategies for different scenarios while systematically considering the tradeoffs between different sensor technologies. In addition, the geography, weather, and other site specific conditions can have a large impact on the performance of a sensor network. In this work, we demonstrate methods for calculating optimal sensor networks. Our approach can incorporate tradeoffs between vastly different sensor technologies, optimize over typical wind conditions for a particular area, and consider different objectives such as time to detection or geographic coverage. We do this by pre-computing site specific scenarios and using them as input to a mixed-integer, stochastic programming problem that solves for a sensor network that maximizes the effectiveness of the detection program. Our methods and approach have been incorporated within an open source Python package called Chama with the goal of providing facility operators and regulators with tools for designing more effective and efficient monitoring systems. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.
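The pre-computed-scenario formulation above can be sketched with a small stand-in: each leak scenario has a detection time for every candidate sensor (infinite if that sensor never detects it), and a sensor set is scored by the probability-weighted expected time to detection. Chama solves this as a mixed-integer program; the greedy loop below only illustrates the objective, and every number and sensor name is invented.

```python
# Simplified stand-in for scenario-based sensor placement: minimize the
# expected time to detection over weighted leak scenarios. Invented data.
INF = float("inf")

def expected_detection_time(chosen, det_time, weights):
    """Weighted expected time until the first chosen sensor detects each scenario."""
    return sum(w * min((det_time[s][sc] for s in chosen), default=INF)
               for sc, w in weights.items())

def greedy_place(det_time, weights, budget):
    """Greedily add the sensor giving the best marginal objective improvement."""
    chosen = []
    for _ in range(budget):
        best = min((s for s in det_time if s not in chosen),
                   key=lambda s: expected_detection_time(chosen + [s], det_time, weights))
        chosen.append(best)
    return chosen

det_time = {   # detection time (minutes) of each scenario by each sensor
    "fixed_A":  {"sc1": 5,   "sc2": INF, "sc3": 60},
    "fixed_B":  {"sc1": INF, "sc2": 8,   "sc3": 45},
    "mobile_C": {"sc1": 20,  "sc2": 25,  "sc3": 10},
}
weights = {"sc1": 0.5, "sc2": 0.3, "sc3": 0.2}   # scenario probabilities
picked = greedy_place(det_time, weights, budget=2)
print(picked, expected_detection_time(picked, det_time, weights))
```

The tradeoff between sensor technologies shows up directly: the broad-coverage "mobile_C" is chosen first because it detects every scenario, and a faster but narrower fixed sensor is added second.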

  16. Environmental Monitoring Networks Optimization Using Advanced Active Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail; Volpi, Michele; Copa, Loris

    2010-05-01

The problem of environmental monitoring networks optimization (MNO) is one of the basic and fundamental tasks in spatio-temporal data collection, analysis, and modeling. There are several approaches to this problem, which can be considered as the design or redesign of a monitoring network by applying some optimization criteria. The most developed and widespread methods are based on geostatistics (the family of kriging models, conditional stochastic simulations). In geostatistics the variance is mainly used as an optimization criterion, which has both advantages and drawbacks. In the present research we study the application of advanced techniques from statistical learning theory (SLT) - support vector machines (SVM) - and consider the optimization of monitoring networks when dealing with a classification problem (data are discrete values/classes: hydrogeological units, soil types, pollution decision levels, etc.). SVM is a universal nonlinear modeling tool for classification problems in high dimensional spaces. The SVM solution maximizes the decision boundary between classes and has good generalization properties for noisy data. The sparse solution of SVM is based on support vectors - data which contribute to the solution with nonzero weights. Fundamentally, MNO for classification problems can be considered as the task of selecting new measurement points which increase the quality of spatial classification and reduce the testing error (the error on new independent measurements). In SLT this is a typical problem of active learning - the selection of new unlabelled points which efficiently reduce the testing error. A classical approach to active learning (margin sampling) is to sample the points closest to the classification boundary. This solution is suboptimal when points (or generally the dataset) are redundant for the same class.
In the present research we propose and study two new advanced methods of active learning adapted to the solution of the MNO problem: 1) hierarchical top-down clustering in an input space in order to remove redundancy when data are clustered, and 2) a general method (independent of the classifier) which gives posterior probabilities that can be used to define the classifier confidence and corresponding proposals for new measurement points. The basic ideas and procedures are explained by applying simulated data sets. The real case study deals with the analysis and mapping of soil types, which is a multi-class classification problem. Maps of soil types are important for the analysis and 3D modeling of heavy metal migration in soil and risk prediction mapping. The results obtained demonstrate the high quality of SVM mapping and the efficiency of monitoring network optimization using active learning approaches. The research was partly supported by SNSF projects No. 200021-126505 and 200020-121835.
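The margin-sampling baseline that the abstract takes as its starting point can be sketched in a few lines: given a linear classifier f(x) = w·x + b, the next monitoring points are the unlabelled candidates closest to the decision boundary f(x) = 0. The weights below stand in for a trained SVM and the candidate locations are invented; a real study would use the fitted model's coefficients.

```python
# Margin sampling in its simplest form: rank unlabelled candidates by their
# distance to a linear decision boundary and pick the closest n. The weights
# w, b stand in for a trained linear SVM; all numbers are illustrative.
def margin(x, w, b):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm = sum(wi * wi for wi in w) ** 0.5
    return abs(score) / norm           # distance to the boundary f(x) = 0

def margin_sampling(candidates, w, b, n):
    return sorted(candidates, key=lambda p: margin(candidates[p], w, b))[:n]

w, b = [1.0, -1.0], 0.0               # stand-in for a trained classifier
candidates = {                         # unlabelled candidate locations
    "p1": [0.9, 1.0], "p2": [3.0, 0.1], "p3": [1.2, 1.1], "p4": [-2.0, 2.5],
}
selected = margin_sampling(candidates, w, b, n=2)
print(selected)
```

Points far from the boundary ("p2", "p4") are skipped even if they are numerous, which is exactly the redundancy problem the abstract's clustering extension is designed to fix.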

  17. Volcano Monitoring: A Case Study in Pervasive Computing

    NASA Astrophysics Data System (ADS)

    Peterson, Nina; Anusuya-Rangappa, Lohith; Shirazi, Behrooz A.; Song, Wenzhan; Huang, Renjie; Tran, Daniel; Chien, Steve; Lahusen, Rick

Recent advances in wireless sensor network technology have provided robust and reliable solutions for sophisticated pervasive computing applications such as environmental monitoring of inhospitable terrain. We present a case study for developing a real-time pervasive computing system, called OASIS for optimized autonomous space in situ sensor-web, which combines ground assets (a sensor network) and space assets (NASA's Earth Observing-1 (EO-1) satellite) to monitor volcanic activity at Mount St. Helens. OASIS's primary goals are: to integrate complementary space and in situ ground sensors into an interactive and autonomous sensorweb, to optimize power and communication resource management of the sensorweb, and to provide mechanisms for seamless and scalable fusion of future space and in situ components. The OASIS in situ ground sensor network development addresses issues related to power management, bandwidth management, quality of service management, topology and routing management, and test-bed design. The space segment development consists of EO-1 architectural enhancements, feedback of EO-1 data into the in situ component, command and control integration, data ingestion and dissemination, and field demonstrations.

  18. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Kim, Y.; Hwang, T.; Vose, J. M.; Martin, K. L.; Band, L. E.

    2016-12-01

Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have enabled the conversion of satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has not diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are therefore becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method is able to determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the choice of quantization method should be considered carefully because the rankings and optimal networks are subject to change accordingly.
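The joint-entropy objective used in this design method can be shown in miniature: discretize each station's record, then greedily add the station that maximizes the joint entropy of the selected set. The four stations and their quantized series below are invented; the quantization sensitivity the abstract warns about is exactly the choice of how these series are binned.

```python
# Entropy-based station selection: greedily maximize the joint entropy of
# discretized records. Station names and series are invented examples.
import math
from collections import Counter

def joint_entropy(records, stations):
    """Shannon entropy (bits) of the joint discretized series of `stations`."""
    length = len(next(iter(records.values())))
    tuples = [tuple(records[s][t] for s in stations) for t in range(length)]
    n = len(tuples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(tuples).values())

def greedy_max_entropy(records, n_pick):
    chosen = []
    for _ in range(n_pick):
        best = max((s for s in records if s not in chosen),
                   key=lambda s: joint_entropy(records, chosen + [s]))
        chosen.append(best)
    return chosen

records = {                    # quantized (2-bin) series per station
    "G1": [0, 0, 1, 1, 0, 1, 0, 1],
    "G2": [0, 0, 1, 1, 0, 1, 0, 1],   # duplicates G1: adds no information
    "G3": [1, 0, 0, 1, 1, 0, 0, 1],
    "G4": [0, 1, 0, 0, 1, 1, 0, 1],
}
picked = greedy_max_entropy(records, n_pick=2)
print(picked)
```

The duplicate station "G2" is never picked: it adds zero joint entropy beyond "G1", which is the redundancy-avoidance behavior the abstract describes for transferable hydrologic information.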

  19. Integrated design of multivariable hydrometric networks using entropy theory with a multiobjective optimization approach

    NASA Astrophysics Data System (ADS)

    Keum, J.; Coulibaly, P. D.

    2017-12-01

Obtaining quality hydrologic observations is the first step towards successful water resources management. While remote sensing techniques have enabled the conversion of satellite images of the Earth's surface into hydrologic data, the importance of ground-based observations has not diminished, because in-situ data are often highly accurate and can be used to validate remote measurements. Efficient hydrometric networks are therefore becoming more important for obtaining as much information as possible with minimum redundancy. The World Meteorological Organization (WMO) has recommended a guideline for minimum hydrometric network density based on physiography; however, this guideline is not for optimum network design but for avoiding serious deficiencies in a network. Moreover, all hydrologic variables are interconnected within the hydrologic cycle, yet monitoring networks have typically been designed individually. This study proposes an integrated network design method using entropy theory with a multiobjective optimization approach. Specifically, a precipitation network and a streamflow network in a semi-urban watershed in Ontario, Canada were designed simultaneously by maximizing joint entropy, minimizing total correlation, and maximizing the conditional entropy of the streamflow network given the precipitation network. Compared with typical individual network designs, the proposed method is able to determine more efficient optimal networks by avoiding redundant stations whose hydrologic information is transferable. Additionally, four quantization cases were applied in the entropy calculations to assess their implications for the station rankings and the optimal networks. The results showed that the choice of quantization method should be considered carefully because the rankings and optimal networks are subject to change accordingly.

  20. Correlation analysis on real-time tab-delimited network monitoring data

    DOE PAGES

    Pan, Aditya; Majumdar, Jahin; Bansal, Abhay; ...

    2016-01-01

End-to-end performance monitoring in the Internet, also called PingER, is part of SLAC National Accelerator Laboratory's research program. It was created to answer the growing need to monitor the network, both to analyze current performance and to designate resources to optimize execution between research centers and the universities and institutes co-operating on present and future operations. The monitoring support reflects the broad geographical area of the collaborations and requires a comprehensive number of research and financial channels. The data retrieval architecture and the methodology of interpretation have emerged over numerous years. Analyzing this data is the main challenge due to its high volume. Finally, by using correlation analysis, we can draw crucial conclusions about how the network data affect the performance of the hosts and how this varies from country to country.

  1. Optimal distribution of borehole geophones for monitoring CO2-injection-induced seismicity

    NASA Astrophysics Data System (ADS)

    Huang, L.; Chen, T.; Foxall, W.; Wagoner, J. L.

    2016-12-01

The U.S. DOE initiative, National Risk Assessment Partnership (NRAP), aims to develop quantitative risk assessment methodologies for carbon capture, utilization and storage (CCUS). As part of the tasks of the Strategic Monitoring Group of NRAP, we develop a tool for the optimal design of a borehole geophone distribution for monitoring CO2-injection-induced seismicity. The tool involves a number of steps, including building a geophysical model for a given CO2 injection site, defining target monitoring regions within CO2-injection/migration zones, generating synthetic seismic data, specifying acceptable uncertainties in input data, and determining the optimal distribution of borehole geophones. We use a synthetic geophysical model as an example to demonstrate the capability of our new tool to design an optimal/cost-effective passive seismic monitoring network using borehole geophones. The model is built based on the geologic features found at the Kimberlina CCUS pilot site located in the southern San Joaquin Valley, California. This tool can provide CCUS operators with a guideline for cost-effective microseismic monitoring of geologic carbon storage and utilization.

  2. Monitoring air quality in mountains: Designing an effective network

    USGS Publications Warehouse

    Peterson, D.L.

    2000-01-01

A quantitatively robust yet parsimonious air-quality monitoring network in mountainous regions requires special attention to relevant spatial and temporal scales of measurement and inference. The design of monitoring networks should focus on the objectives required by public agencies, namely: 1) determine if some threshold has been exceeded (e.g., for regulatory purposes), and 2) identify spatial patterns and temporal trends (e.g., to protect natural resources). A short-term, multi-scale assessment to quantify spatial variability in air quality is a valuable asset in designing a network, in conjunction with an evaluation of existing data and simulation-model output. A recent assessment in Washington state (USA) quantified spatial variability in tropospheric ozone distribution ranging from a single watershed to the western third of the state. Spatial and temporal coherence in ozone exposure modified by predictable elevational relationships (1.3 ppbv ozone per 100 m elevation gain) extends from urban areas to the crest of the Cascade Range. This suggests that a sparse network of permanent analyzers is sufficient at all spatial scales, with the option of periodic intensive measurements to validate network design. It is imperative that agencies cooperate in the design of monitoring networks in mountainous regions to optimize data collection and financial efficiencies.

  3. A sensor network based virtual beam-like structure method for fault diagnosis and monitoring of complex structures with Improved Bacterial Optimization

    NASA Astrophysics Data System (ADS)

    Wang, H.; Jing, X. J.

    2017-02-01

This paper proposes a novel method for the fault diagnosis of complex structures based on an optimized virtual beam-like structure approach. A complex structure can be regarded as a combination of numerous virtual beam-like structures, considering the vibration transmission path from vibration sources to each sensor. The structural 'virtual beam' consists of a sensor chain automatically obtained by an Improved Bacterial Optimization Algorithm (IBOA). The biologically inspired optimization method (i.e. IBOA) is proposed for solving the discrete optimization problem associated with the selection of the optimal virtual beam for fault diagnosis. This novel virtual beam-like-structure approach requires little prior knowledge. Neither does it require stationary response data, nor is it confined to a specific structure design. It is easy to implement within a sensor network attached to the monitored structure. The proposed fault diagnosis method has been tested on the detection of loosened screws located at varying positions in a real satellite-like model. Compared with empirical methods, the proposed virtual beam-like structure method has proved to be very effective and more reliable for fault localization.

  4. Advanced Performance Modeling with Combined Passive and Active Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dovrolis, Constantine; Sim, Alex

    2015-04-15

To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources through active measurement collection. Performance measurements collected by active probing are used judiciously to improve the accuracy of predictions.

  5. Wireless sensor placement for structural monitoring using information-fusing firefly algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Guang-Dong; Yi, Ting-Hua; Xie, Mei-Xi; Li, Hong-Nan

    2017-10-01

    Wireless sensor networks (WSNs) are promising technology in structural health monitoring (SHM) applications for their low cost and high efficiency. The limited wireless sensors and restricted power resources in WSNs highlight the significance of optimal wireless sensor placement (OWSP) during designing SHM systems to enable the most useful information to be captured and to achieve the longest network lifetime. This paper presents a holistic approach, including an optimization criterion and a solution algorithm, for optimally deploying self-organizing multi-hop WSNs on large-scale structures. The combination of information effectiveness represented by the modal independence and the network performance specified by the network connectivity and network lifetime is first formulated to evaluate the performance of wireless sensor configurations. Then, an information-fusing firefly algorithm (IFFA) is developed to solve the OWSP problem. The step sizes drawn from a Lévy distribution are adopted to drive fireflies toward brighter individuals. Following the movement with Lévy flights, information about the contributions of wireless sensors to the objective function as carried by the fireflies is fused and applied to move inferior wireless sensors to better locations. The reliability of the proposed approach is verified via a numerical example on a long-span suspension bridge. The results demonstrate that the evaluation criterion provides a good performance metric of wireless sensor configurations, and the IFFA outperforms the simple discrete firefly algorithm.

  6. Node Redeployment Algorithm Based on Stratified Connected Tree for Underwater Sensor Networks

    PubMed Central

    Liu, Jun; Jiang, Peng; Wu, Feng; Yu, Shanen; Song, Chunyue

    2016-01-01

During underwater sensor network (UWSN) operation, node drift with the water environment causes network topology changes. Periodic examination and adjustment of node locations are needed to maintain good network monitoring quality for as long as possible. In this paper, a node redeployment algorithm based on a stratified connected tree for UWSNs is proposed. At every network adjustment moment, self-examination and adjustment of node locations are performed first. If a node is outside the monitored space, it returns along a straight line to the last location recorded in its memory. Later, the network topology is stratified into a connected tree that takes the sink node as the root node by broadcasting ready information level by level, which can improve the network connectivity rate. Finally, considering the network coverage rate, the connectivity rate, and node movement distance together, the sink node performs centralized optimization on the locations of leaf nodes in the stratified connected tree. Simulation results show that the proposed redeployment algorithm can not only keep as many nodes as possible in the monitored space and maintain good network coverage and connectivity rates during network operation, but also reduce node movement distance during redeployment and prolong the network lifetime. PMID:28029124
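The level-by-level "ready" broadcast that builds the stratified connected tree is, at its core, a breadth-first traversal from the sink: each node joins the tree at the first level at which it hears a connected neighbour. The adjacency list below is an invented topology, not one from the paper.

```python
# BFS stratification of a sensor topology into a connected tree rooted at
# the sink, mirroring the level-by-level "ready" broadcast. Invented graph.
from collections import deque

def stratify(adjacency, sink):
    """Return parent pointers and level (stratum) of each reachable node."""
    parent, level = {sink: None}, {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in level:           # first "ready" message heard wins
                parent[v], level[v] = u, level[u] + 1
                queue.append(v)
    return parent, level

adjacency = {
    "sink": ["n1", "n2"],
    "n1": ["sink", "n3"],
    "n2": ["sink", "n3", "n4"],
    "n3": ["n1", "n2"],
    "n4": ["n2"],
}
parent, level = stratify(adjacency, "sink")
print(level)
```

Every reachable node ends up with exactly one parent, so the result is a spanning tree; leaf nodes (here "n3" and "n4") are then the candidates for the centralized location optimization described in the abstract.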

  7. Citizen Science Seismic Stations for Monitoring Regional and Local Events

    NASA Astrophysics Data System (ADS)

    Zucca, J. J.; Myers, S.; Srikrishna, D.

    2016-12-01

    The earth has tens of thousands of seismometers installed on its surface or in boreholes that are operated by many organizations for many purposes including the study of earthquakes, volcanos, and nuclear explosions. Although global networks such as the Global Seismic Network and the International Monitoring System do an excellent job of monitoring nuclear test explosions and other seismic events, their thresholds could be lowered with the addition of more stations. In recent years there has been interest in citizen-science approaches to augment government-sponsored monitoring networks (see, for example, Stubbs and Drell, 2013). A modestly-priced seismic station that could be purchased by citizen scientists could enhance regional and local coverage of the GSN, IMS, and other networks if those stations are of high enough quality and distributed optimally. In this paper we present a minimum set of hardware and software specifications that a citizen seismograph station would need in order to add value to global networks. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  8. Cluster categorization of urban roads to optimize their noise monitoring.

    PubMed

    Zambon, G; Benocci, R; Brambilla, G

    2016-01-01

Road traffic in urban areas is recognized to be associated with urban mobility and public health, and it is often the main source of noise pollution. Lately, noise maps have been considered a powerful tool to estimate population exposure to environmental noise, but they need to be validated against measured noise data. The project Dynamic Acoustic Mapping (DYNAMAP), co-funded in the framework of the LIFE 2013 program, aims to develop a statistically based method to optimize the choice and number of monitoring sites and to automate noise map updates using data retrieved from a low-cost monitoring network. Indeed, the first objective should improve on spatial sampling based on the legislative road classification, as that classification reflects mainly the geometrical characteristics of a road rather than its noise emission. The present paper describes the statistical approach of the methodology under development and the results of its preliminary application to a limited sample of roads in the city of Milan. The resulting categorization of roads, based on clustering the 24 hourly LAeq,h values, looks promising for optimizing the spatial sampling of noise monitoring, providing a more efficient description of the noise pollution due to complex urban road networks than the legislative road classification.

  9. SOM neural network fault diagnosis method of polymerization kettle equipment optimized by improved PSO algorithm.

    PubMed

    Wang, Jie-sheng; Li, Shu-xia; Gao, Jie

    2014-01-01

To meet the real-time fault diagnosis and optimization monitoring requirements of the polymerization kettle in the polyvinyl chloride (PVC) resin production process, a fault diagnosis strategy based on the self-organizing map (SOM) neural network is proposed. Firstly, a mapping between the polymerization process data and the fault patterns is established by analyzing the production technology of the polymerization kettle equipment. The particle swarm optimization (PSO) algorithm with a new dynamical adjustment method for the inertia weights is adopted to optimize the structural parameters of the SOM neural network. Fault pattern classification for the polymerization kettle equipment then realizes the nonlinear mapping from the symptom set to the fault set. Finally, fault diagnosis simulation experiments are conducted using industrial on-site historical data from the polymerization kettle, and the simulation results show that the proposed PSO-SOM fault diagnosis strategy is effective.
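The dynamically adjusted inertia weight mentioned above can be illustrated with a standard PSO loop in which the inertia decreases linearly over the iterations (the common Shi-Eberhart scheme; the abstract's specific adjustment rule is not given, so this is a stand-in). The SOM training itself is omitted; here PSO just minimizes a simple test function in place of the SOM parameter objective.

```python
# PSO with a linearly decreasing inertia weight (a stand-in for the paper's
# dynamical adjustment), minimizing the sphere function as a toy objective.
import random

def pso(objective, bounds, n_particles=20, iters=60, w_max=0.9, w_min=0.4, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=objective)[:]        # global best
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)   # dynamic inertia
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 2.0 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, bounds=[(-5, 5), (-5, 5)])
print(best, sphere(best))
```

A large early inertia favors exploration of the parameter space; the shrinking inertia then lets the swarm settle near the best solution found, which is the intended effect of the dynamic adjustment.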

  10. Monitoring of physiological parameters from multiple patients using wireless sensor network.

    PubMed

    Yuce, Mehmet R; Ng, Peng Choong; Khan, Jamil Y

    2008-10-01

This paper presents a wireless sensor network system that can monitor physiological parameters from multiple patient bodies. The system uses the Medical Implant Communication Service band between the sensor nodes and a remote central control unit (CCU) that behaves as a base station. The CCU communicates with another network standard (the Internet or a mobile network) for long-distance data transfer. The proposed system offers mobility to patients and flexibility for medical staff to obtain patients' physiological data on demand via the Internet. A prototype sensor network, including hardware, firmware and software designs, has been implemented and tested. The developed system has been optimized for power consumption by using bidirectional communication to put the nodes to sleep when there is no data to transmit.

  11. mHealthMon: toward energy-efficient and distributed mobile health monitoring using parallel offloading.

    PubMed

    Ahnn, Jong Hoon; Potkonjak, Miodrag

    2013-10-01

Although mobile health monitoring, in which mobile sensors continuously gather, process, and update sensor readings (e.g. vital signs) from a patient's sensors, is emerging, little effort has been invested in the energy-efficient management of sensor information gathering and processing. Mobile health monitoring with a focus on energy consumption may instead be holistically analyzed and systematically designed as a global solution to optimization subproblems. This paper presents an attempt to decompose the very complex mobile health monitoring system, where each layer in the system corresponds to a decomposed subproblem and the interfaces between them are quantified as functions of the optimization variables in order to orchestrate the subproblems. We propose a distributed and energy-saving mobile health platform, called mHealthMon, where mobile users publish/access sensor data via a cloud computing-based distributed P2P overlay network. The key objective is to satisfy the mobile health monitoring application's quality of service requirements by modeling each subsystem: mobile clients with medical sensors, the wireless network medium, and distributed cloud services. Through simulations based on experimental data, we show that the proposed system can be up to 10.1 times more energy-efficient and 20.2 times faster than a standalone mobile health monitoring application in various mobile health monitoring scenarios applying a realistic mobility model.

  12. NetMOD version 1.0 user's manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merchant, Bion John

    2014-01-01

NetMOD (Network Monitoring for Optimal Detection) is a Java-based software package for conducting simulation of seismic networks. Specifically, NetMOD simulates the detection capabilities of seismic monitoring networks. Network simulations have long been used to study network resilience to station outages and to determine where additional stations are needed to reduce monitoring thresholds. NetMOD makes use of geophysical models to determine the source characteristics, signal attenuation along the path between the source and station, and the performance and noise properties of the station. These geophysical models are combined to simulate the relative amplitudes of signal and noise that are observed at each of the stations. From these signal-to-noise ratios (SNR), the probability of detection can be computed given a detection threshold. This manual describes how to configure and operate NetMOD to perform seismic detection simulations. In addition, NetMOD is distributed with a simulation dataset for the Comprehensive Nuclear-Test-Ban Treaty Organization (CTBTO) International Monitoring System (IMS) seismic network for the purpose of demonstrating NetMOD's capabilities and providing user training. The tutorial sections of this manual use this dataset when describing how to perform the steps involved when running a simulation.
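The SNR-to-detection-probability step described above can be sketched with a small model: assume each station's measured SNR is Gaussian around its predicted value, convert that to a per-station detection probability against a threshold, and require that at least k stations detect for a network detection. This is a generic illustration of the calculation, not NetMOD's actual implementation; the SNR values, threshold, and noise spread are invented.

```python
# Toy detection-probability calculation: per-station probability from a
# Gaussian SNR model, then network probability that >= k stations detect.
# All numbers are invented; NetMOD's actual models are more detailed.
import math
from itertools import combinations

def p_detect(snr_db, threshold_db, sigma_db=1.5):
    """P(measured SNR exceeds threshold), Gaussian spread around predicted SNR."""
    return 0.5 * (1 + math.erf((snr_db - threshold_db) / (sigma_db * math.sqrt(2))))

def p_network(p_stations, n_required):
    """P(at least n_required of the independent stations detect)."""
    n = len(p_stations)
    total = 0.0
    for k in range(n_required, n + 1):
        for idx in combinations(range(n), k):
            prob = 1.0
            for i in range(n):
                prob *= p_stations[i] if i in idx else 1 - p_stations[i]
            total += prob
    return total

snrs = [14.2, 11.0, 9.1, 16.5, 7.8]   # predicted SNR (dB) at five stations
probs = [p_detect(s, threshold_db=10.0) for s in snrs]
print(round(p_network(probs, n_required=3), 3))
```

Lowering the threshold or adding stations raises the network detection probability, which is exactly the kind of what-if question (station outages, added stations) the simulations are used to answer.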

  13. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    PubMed Central

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-01-01

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper. PMID:27873941

  14. Multicriteria relocation analysis of an off-site radioactive monitoring network for a nuclear power plant.

    PubMed

    Chang, Ni-Bin; Ning, Shu-Kuang; Chen, Jen-Chang

    2006-08-01

    Due to increasing environmental consciousness in most countries, every utility that owns a commercial nuclear power plant has been required to have both an on-site and an off-site emergency response plan since the 1980s. A radiation monitoring network, viewed as part of the emergency response plan, can provide information regarding the radiation dosage emitted from a nuclear power plant during regular operation and/or abnormal measurements in an emergency event. Such monitoring information may help field operators and decision-makers provide accurate responses or make decisions to protect public health and safety. This study conducts an integrated simulation and optimization analysis to identify a relocation strategy for a long-term, regular off-site monitoring network at a nuclear power plant. The planning goal is to downsize the current monitoring network while maintaining its monitoring capacity as much as possible. The monitoring sensors considered in this study include both thermoluminescent dosimeters (TLD) and air sampling systems (AP). The network is designed to regularly detect cumulative radionuclide concentrations, the frequency of violations, and the population possibly affected by long-term impacts in the surrounding area, while it can also be used in an accidental release event. With the aid of the calibrated Industrial Source Complex-Plume Rise Model Enhancements (ISC-PRIME) simulation model to track the possible radionuclide diffusion, dispersion, transport, and transformation processes in the atmospheric environment, a multiobjective evaluation process can be applied to screen monitoring stations for the nuclear power plant located on the Hengchun Peninsula, South Taiwan. To account for multiple objectives, this study calculated preference weights to linearly combine the objective functions, leading to decision-making with exposure assessment in an optimization context.
Final suggestions should be useful for narrowing the set of scenarios that decision-makers need to consider in this relocation process.
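
    The preference-weight step above is a linear weighted-sum scalarization, which can be sketched in a few lines. The candidate configurations, objective values, and weights below are invented for illustration; the paper's actual objectives come from ISC-PRIME simulations and exposure assessment.

```python
def weighted_score(objectives, weights):
    # Linear weighted-sum scalarization of normalized objective values;
    # the preference weights are assumed to sum to 1.
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(w * x for w, x in zip(weights, objectives))

# Hypothetical downsizing options, scored on (monitoring capacity,
# violation-detection frequency, affected-population coverage), all in [0, 1].
candidates = {
    "keep_all_stations": (1.00, 1.00, 1.00),
    "remove_two":        (0.93, 0.95, 0.90),
    "remove_four":       (0.78, 0.80, 0.70),
}
weights = (0.5, 0.3, 0.2)
ranking = sorted(candidates,
                 key=lambda k: weighted_score(candidates[k], weights),
                 reverse=True)
```

    In a real relocation study, the scalarized score of each reduced network would then be traded off against the cost saved by removing stations.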

  15. Research on the application of vehicle network in optimization of automobile supply chain

    NASA Astrophysics Data System (ADS)

    Jing, Xuelei; Jia, Baoxian

    2017-09-01

    The Internet of Vehicles (intelligent transportation) is among four key development areas with great potential, alongside environmental monitoring, goods tracking, and the smart grid, and is a core supporting technology for many applications. To improve the adaptability of data distribution so that it can be used in urban, rural, highway, and other vehicular networking scenarios, this study tests technical means to accurately estimate the parameter indicators of different vehicular network scenes, and then applies different distribution strategies to the different scenarios. Taking into account the limited nature of data distribution in vehicular networks, the paper uses a customer-oriented idea to optimize the simulation

  16. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  17. Displacement back analysis for a high slope of the Dagangshan Hydroelectric Power Station based on BP neural network and particle swarm optimization.

    PubMed

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presented a displacement back analysis for the slope using an artificial neural network model (ANN) and particle swarm optimization model (PSO). A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the backpropagation (BP) network model and determine suitable values of the mechanical parameters. Then the elastic moduli of the rock masses were obtained according to the monitoring displacement data at different excavation stages, and the BP neural network model was proved to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for rock mechanical parameters determination and instability investigation of rock slopes.
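
    The PSO stage described above (seeding the BP network's weights before back analysis) can be sketched with a minimal particle swarm optimizer. This is not the paper's implementation: the swarm parameters are generic textbook values, and the toy quadratic loss below merely stands in for the ANN's mapping error between mechanical parameters and monitored displacements.

```python
import random

def pso_minimize(loss, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer over a real-valued vector."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, acceleration
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy stand-in for the network's mapping error; the real loss would
# evaluate the trained BP model against monitored displacements.
target = [0.3, -0.7, 0.5]
initial_weights, err = pso_minimize(
    lambda p: sum((a - b) ** 2 for a, b in zip(p, target)), dim=3)
```

    The vector returned by `pso_minimize` would then initialize the BP network's weights and thresholds before gradient-based training continues.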

  18. Displacement Back Analysis for a High Slope of the Dagangshan Hydroelectric Power Station Based on BP Neural Network and Particle Swarm Optimization

    PubMed Central

    Liang, Zhengzhao; Gong, Bin; Tang, Chunan; Zhang, Yongbin; Ma, Tianhui

    2014-01-01

    The right bank high slope of the Dagangshan Hydroelectric Power Station is located in complicated geological conditions with deep fractures and unloading cracks. How to obtain the mechanical parameters and then evaluate the safety of the slope are the key problems. This paper presented a displacement back analysis for the slope using an artificial neural network model (ANN) and particle swarm optimization model (PSO). A numerical model was established to simulate the displacement increment results, acquiring training data for the artificial neural network model. The backpropagation ANN model was used to establish a mapping function between the mechanical parameters and the monitoring displacements. The PSO model was applied to initialize the weights and thresholds of the backpropagation (BP) network model and determine suitable values of the mechanical parameters. Then the elastic moduli of the rock masses were obtained according to the monitoring displacement data at different excavation stages, and the BP neural network model was proved to be valid by comparing the measured displacements, the displacements predicted by the BP neural network model, and the numerical simulation using the back-analyzed parameters. The proposed model is useful for rock mechanical parameters determination and instability investigation of rock slopes. PMID:25140345

  19. Effects of Sampling and Spatio/Temporal Granularity in Traffic Monitoring on Anomaly Detectability

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Kawahara, Ryoichi; Mori, Tatsuya; Kondoh, Tsuyoshi; Asano, Shoichiro

    We quantitatively evaluate how sampling and spatio/temporal granularity in traffic monitoring affect the detectability of anomalous traffic. Those parameters also affect the monitoring burden, so network operators face a trade-off between monitoring burden and detectability and need to know the optimal parameter values. We derive equations to calculate the false positive ratio and false negative ratio for given values of the sampling rate, granularity, statistics of normal traffic, and volume of anomalies to be detected. Specifically, assuming that the normal traffic has a Gaussian distribution, which is parameterized by its mean and standard deviation, we analyze how sampling and monitoring granularity change these distribution parameters. This analysis is based on observation of backbone traffic, which exhibits spatially uncorrelated and temporally long-range-dependent behavior. Then we derive the equations for detectability. With those equations, we can answer practical questions that arise in actual network operations: what sampling rate to set in order to find a given volume of anomaly, or, if that sampling rate is too high for actual operation, what granularity is optimal for finding the anomaly given a lower limit on the sampling rate.
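
    A simplified version of this calculation can be sketched numerically. The model below is an assumption-laden stand-in for the paper's derivation: normal traffic is N(mu, sigma^2), packet sampling at rate r scales the mean and adds a binomial variance term, and the detection threshold is set at mean + k standard deviations of the sampled traffic. The paper additionally accounts for long-range dependence, which this sketch omits.

```python
import math

def Q(x):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def detectability(mu, sigma, anomaly, rate, k=3.0):
    # Sampled normal traffic: mean scaled by the rate, variance combining
    # the scaled traffic variance with binomial sampling noise (assumed).
    mu_s = rate * mu
    sigma_s = math.sqrt(rate ** 2 * sigma ** 2 + rate * (1.0 - rate) * mu)
    threshold = mu_s + k * sigma_s
    fpr = Q(k)                               # normal traffic exceeds threshold
    fnr = 1.0 - Q((threshold - (mu_s + rate * anomaly)) / sigma_s)
    return fpr, fnr                          # fnr: anomaly stays under threshold

# Same anomaly volume, full sampling vs 1% sampling.
fpr_full, fnr_full = detectability(mu=1000.0, sigma=50.0, anomaly=500.0, rate=1.0)
fpr_low, fnr_low = detectability(mu=1000.0, sigma=50.0, anomaly=500.0, rate=0.01)
```

    With a k-sigma threshold the false positive ratio is fixed, while sparser sampling inflates the relative noise and drives the false negative ratio up, which is exactly the trade-off the paper quantifies.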

  20. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications

    PubMed Central

    2018-01-01

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When monitoring systems based on wireless sensor networks are deployed, the sensing and transmission configurations of sensor nodes may be adjusted to exploit the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, many people spontaneously post information in social media about events being observed, and such information may be mined and processed for the detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably Twitter, and to assign sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events. PMID:29614060

  1. TwitterSensing: An Event-Based Approach for Wireless Sensor Networks Optimization Exploiting Social Media in Smart City Applications.

    PubMed

    Costa, Daniel G; Duran-Faundez, Cristian; Andrade, Daniel C; Rocha-Junior, João B; Peixoto, João Paulo Just

    2018-04-03

    Modern cities are subject to periodic or unexpected critical events, which may bring economic losses or even put people in danger. When monitoring systems based on wireless sensor networks are deployed, the sensing and transmission configurations of sensor nodes may be adjusted to exploit the relevance of the considered events, but efficient detection and classification of events of interest may be hard to achieve. In Smart City environments, many people spontaneously post information in social media about events being observed, and such information may be mined and processed for the detection and classification of critical events. This article proposes an integrated approach to detect and classify events of interest posted in social media, notably Twitter, and to assign sensing priorities to source nodes. By doing so, wireless sensor networks deployed in Smart City scenarios can be optimized for higher efficiency when monitoring areas under the influence of the detected events.
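
    The final step, turning classified events into sensing priorities, can be sketched as below. Everything here is a deliberately crude stand-in: the severity table, the keyword matcher, and the area-to-node mapping are invented for illustration, whereas the article's pipeline uses much richer mining and classification of Twitter posts.

```python
# Hypothetical severity classes for events inferred from posts.
SEVERITY = {"fire": 3, "flood": 3, "accident": 2, "congestion": 1}

def classify(post):
    """Return the highest severity of any keyword found in the post."""
    return max((SEVERITY.get(w, 0) for w in post.lower().split()), default=0)

def assign_priorities(posts, nodes_by_area):
    """Map the highest detected event severity per area onto the sensing
    priority of every sensor node deployed in that area."""
    prio = {area: 0 for area in nodes_by_area}
    for area, text in posts:
        if area in prio:
            prio[area] = max(prio[area], classify(text))
    return {node: prio[area]
            for area, nodes in nodes_by_area.items() for node in nodes}

priorities = assign_priorities(
    [("downtown", "Huge fire near the station"),
     ("harbor", "some congestion on the pier")],
    {"downtown": ["s1", "s2"], "harbor": ["s3"]})
```

    Nodes covering the area of a severe event would then raise their sensing and transmission rates, while nodes elsewhere keep a low-power configuration.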

  2. Optimal Sensor Fusion for Structural Health Monitoring of Aircraft Composite Components

    DTIC Science & Technology

    2011-09-01

    sensor networks combine or fuse different types of sensors. Fiber Bragg Grating (FBG) sensors can be inserted in layers of composite structures to...consideration. This paper describes an example of optimal sensor fusion, which combines FBG sensors and PZT sensors. Optimal sensor fusion tries to find...Fiber Bragg Grating (FBG) sensors can be inserted in layers of composite structures to provide local damage detection, while surface mounted

  3. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    NASA Astrophysics Data System (ADS)

    Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provides multiple Gbps flows from Cray X1 to external hosts.

  4. Optimisation of groundwater level monitoring networks using geostatistical modelling based on the Spartan family variogram and a genetic algorithm method

    NASA Astrophysics Data System (ADS)

    Parasyris, Antonios E.; Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2016-04-01

    Groundwater level monitoring networks provide essential information for water resources management, especially in areas with significant groundwater exploitation for agricultural and domestic use. Given the high maintenance costs of these networks, the development of tools which regulators can use for efficient network design is essential. In this work, a monitoring network optimisation tool is presented. The network optimisation tool couples geostatistical modelling based on the Spartan family variogram with a genetic algorithm method and is applied to the Mires basin in Crete, Greece, an area of high socioeconomic and agricultural interest which suffers from groundwater overexploitation leading to a dramatic decrease of groundwater levels. The purpose of the optimisation tool is to determine which wells to exclude from the monitoring network because they add little or no beneficial information to groundwater level mapping of the area. Unlike previous relevant investigations, the network optimisation tool presented here uses Ordinary Kriging with the recently established non-differentiable Spartan variogram for groundwater level mapping, which, based on a previous geostatistical study in the area, leads to optimal groundwater level mapping. Seventy boreholes operate in the area for groundwater abstraction and water level monitoring. The Spartan variogram overall gives the most accurate groundwater level estimates, followed closely by the power-law model. The geostatistical model is coupled to an integer genetic algorithm method programmed in MATLAB 2015a. The algorithm is used to find the set of wells whose removal leads to the minimum error between the original water level mapping using all the available wells in the network and the groundwater level mapping using the reduced well network (error is defined as the 2-norm of the difference between the original mapping matrix with 70 wells and the mapping matrix of the reduced well network).
The solution to the optimization problem (the best wells to retain in the monitoring network) depends on the total number of wells removed; this number is a management decision. The water level monitoring network of Mires basin has been optimized six times, by removing 5, 8, 12, 15, 20 and 25 wells from the original network. In order to achieve the optimum solution in the minimum possible computational time, a stall-generations criterion was set for each optimisation scenario. An improvement made to the classic genetic algorithm was changing the mutation and crossover fractions with respect to the change of the mean fitness value. This introduces randomness into reproduction when the solution converges, to avoid local minima, or more educated reproduction (a higher crossover ratio) when there is a larger change in the mean fitness value. The integer genetic algorithm in MATLAB 2015a restricts the addition of custom selection and crossover-mutation functions. Therefore, custom population and crossover-mutation-selection functions have been created to set the initial population type to custom and to allow changing the mutation/crossover probability with respect to the convergence of the genetic algorithm, thus achieving higher accuracy. The application of the network optimisation tool to Mires basin indicates that 25 wells can be removed with a relatively small deterioration of the groundwater level map. The results indicate the robustness of the network optimisation tool: wells were removed from high well-density areas while preserving the spatial pattern of the original groundwater level map. Varouchakis, E. A. and D. T. Hristopulos (2013). "Improvement of groundwater level prediction in sparsely gauged basins using physical laws and local geographic features as auxiliary variables." Advances in Water Resources 52: 34-49.
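
    The subset-selection GA described above can be sketched compactly. This is not the paper's MATLAB implementation: the kriging-based 2-norm error is replaced here by a crude nearest-neighbour surrogate (a removed well costs the distance to its nearest kept neighbour), and the crossover/mutation operators are minimal illustrations of fixed-size subset encoding.

```python
import math
import random

def mapping_error(removed, wells):
    """Surrogate for the kriging mapping error: information lost when a
    well is removed, measured by the distance to its nearest kept well."""
    kept = [w for i, w in enumerate(wells) if i not in removed]
    return sum(min(math.hypot(wells[i][0] - kx, wells[i][1] - ky)
                   for kx, ky in kept)
               for i in removed)

def ga_select_removal(wells, k, pop=30, gens=60, seed=1):
    """Tiny integer GA choosing which k wells to remove (fixed-size subset
    encoding with elitism, union crossover and swap mutation)."""
    rng = random.Random(seed)
    n = len(wells)
    population = [frozenset(rng.sample(range(n), k)) for _ in range(pop)]
    best = min(population, key=lambda ind: mapping_error(ind, wells))
    for _ in range(gens):
        nxt = [best]                                   # elitism
        while len(nxt) < pop:
            a, b = rng.sample(population, 2)
            child = set(rng.sample(sorted(a | b), k))  # crossover: mix parents
            if rng.random() < 0.3:                     # mutation: swap one well
                child.discard(rng.choice(sorted(child)))
                while len(child) < k:
                    child.add(rng.randrange(n))
            nxt.append(frozenset(child))
        population = nxt
        best = min(population, key=lambda ind: mapping_error(ind, wells))
    return best

# Dense cluster near the origin plus two isolated wells: removals should
# come from the cluster, where neighbours carry redundant information.
wells = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (10.0, 0.0)]
removed = ga_select_removal(wells, k=2)
```

    In the paper, the fitness of each candidate subset would instead re-run the Ordinary Kriging mapping and take the 2-norm difference against the full 70-well map.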

  5. Approaching the design of a failsafe turbine monitor with simple microcontroller blocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zapolin, R.E.

    1995-12-31

    The proper approach to early instrumentation design for tasks like failsafe turbine monitoring permits meeting requirements without resorting to traditional complex special-purpose electronics. Instead, a small network of basic microcontroller building blocks can split the effort, with each block optimized for its portion of the overall system. This paper discusses approaching design by partitioning intricate system specifications to permit each block to be optimized to the safety level appropriate for its portion of the overall task, while retaining the production and reliability advantages of having common simple modules. It illustrates that approach with a modular microcontroller-based speed monitor which met user needs for the latest in power plant monitoring equipment.

  6. QoS-aware health monitoring system using cloud-based WBANs.

    PubMed

    Almashaqbeh, Ghada; Hayajneh, Thaier; Vasilakos, Athanasios V; Mohd, Bassam J

    2014-10-01

    Wireless Body Area Networks (WBANs) are amongst the best options for remote health monitoring. However, as standalone systems, WBANs have many limitations due to the large amount of processed data, the mobility of monitored users, and the network coverage area. Integrating WBANs with cloud computing provides effective solutions to these problems and improves the performance of WBAN-based systems. Accordingly, in this paper we propose a cloud-based real-time remote health monitoring system for tracking the health status of non-hospitalized patients while they practice their daily activities. Compared with existing cloud-based WBAN frameworks, we divide the cloud into a local one, which includes the monitored users and local medical staff, and a global one that includes the outside world. The performance of the proposed framework is optimized by reducing congestion, interference, and data delivery delay while supporting user mobility. Several novel techniques and algorithms are proposed to accomplish our objective. First, the concept of data classification and aggregation is utilized to avoid clogging the network with unnecessary data traffic. Second, a dynamic channel assignment policy is developed to distribute the WBANs associated with the users over the available frequency channels to manage interference. Third, a delay-aware routing metric is proposed for use by the local cloud in its multi-hop communication to speed up the reporting of health-related data. Fourth, the delay-aware metric is further utilized by the association protocols used by the WBANs to connect with the local cloud. Finally, the system with all the proposed techniques and algorithms is evaluated using extensive ns-2 simulations. The simulation results show the superior performance of the proposed architecture in optimizing the end-to-end delay, handling increased interference levels, maximizing the network capacity, and tracking user mobility.
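
    The delay-aware routing idea (pick multi-hop paths through the local cloud that minimize accumulated delay) can be illustrated with a generic shortest-path computation. This sketch is not the paper's metric: it simply runs Dijkstra over per-link delay estimates, with the node names and delays invented for the example.

```python
import heapq

def min_delay_path(links, src, dst):
    """Dijkstra over per-link delay estimates (a generic stand-in for a
    delay-aware routing metric); returns (path, total_delay)."""
    graph = {}
    for u, v, delay in links:          # undirected links
        graph.setdefault(u, []).append((v, delay))
        graph.setdefault(v, []).append((u, delay))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                   # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical local-cloud topology: a WBAN reporting to a gateway.
path, delay = min_delay_path(
    [("wban", "relay1", 5.0), ("wban", "relay2", 2.0),
     ("relay1", "gw", 1.0), ("relay2", "gw", 9.0)], "wban", "gw")
```

    Note that the minimum-delay route goes through the slower first hop (`relay1`), since end-to-end delay, not the next hop, is what the metric optimizes.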

  7. An Architecture for SCADA Network Forensics

    NASA Astrophysics Data System (ADS)

    Kilpatrick, Tim; Gonzalez, Jesus; Chandia, Rodrigo; Papa, Mauricio; Shenoi, Sujeet

    Supervisory control and data acquisition (SCADA) systems are widely used in industrial control and automation. Modern SCADA protocols often employ TCP/IP to transport sensor data and control signals. Meanwhile, corporate IT infrastructures are interconnecting with previously isolated SCADA networks. The use of TCP/IP as a carrier protocol and the interconnection of IT and SCADA networks raise serious security issues. This paper describes an architecture for SCADA network forensics. In addition to supporting forensic investigations of SCADA network incidents, the architecture incorporates mechanisms for monitoring process behavior, analyzing trends and optimizing plant performance.

  8. Demonstration of application-driven network slicing and orchestration in optical/packet domains: on-demand vDC expansion for Hadoop MapReduce optimization.

    PubMed

    Kong, Bingxin; Liu, Siqi; Yin, Jie; Li, Shengru; Zhu, Zuqing

    2018-05-28

    Nowadays, it is common for service providers (SPs) to leverage hybrid clouds to improve the quality-of-service (QoS) of their Big Data applications. However, for achieving guaranteed latency and/or bandwidth in its hybrid cloud, an SP might desire to have a virtual datacenter (vDC) network, in which it can manage and manipulate the network connections freely. To address this requirement, we design and implement a network slicing and orchestration (NSO) system that can create and expand vDCs across optical/packet domains on-demand. Considering Hadoop MapReduce (M/R) as the use-case, we describe the proposed architectures of the system's data, control and management planes, and present the operation procedures for creating, expanding, monitoring and managing a vDC for M/R optimization. The proposed NSO system is then realized in a small-scale network testbed that includes four optical/packet domains, and we conduct experiments in it to demonstrate the whole operations of the data, control and management planes. Our experimental results verify that application-driven on-demand vDC expansion across optical/packet domains can be achieved for M/R optimization, and after being provisioned with a vDC, the SP using the NSO system can fully control the vDC network and further optimize the M/R jobs in it with network orchestration.

  9. Survey on Monitoring and Quality Controlling of the Mobile Biosignal Delivery.

    PubMed

    Pawar, Pravin A; Edla, Damodar R; Edoh, Thierry; Shinde, Vijay; van Beijnum, Bert-Jan

    2017-10-31

    A Mobile Patient Monitoring System (MPMS) acquires a patient's biosignals and transmits them over a wireless network connection to a decision-making module or healthcare professional for the assessment of the patient's condition. A variety of wireless network technologies, such as wireless personal area networks (e.g., Bluetooth), mobile ad-hoc networks (MANET), and infrastructure-based networks (e.g., WLAN and cellular networks), are in practice for biosignal delivery. The wireless network quality-of-service (QoS) requirements of biosignal delivery are mainly specified in terms of required bandwidth, acceptable delay, and tolerable error rate. An important research challenge in the MPMS is how to satisfy the QoS requirements of biosignal delivery in an environment characterized by patient mobility, the deployment of multiple wireless network technologies, and variable QoS characteristics of the wireless networks. QoS requirements are mainly application specific, while the available QoS largely depends on the QoS provided by the wireless network in use. QoS provisioning refers to providing support for improving the QoS experience of networked applications. In resource-poor conditions, application adaptation may also be required to make maximum use of the available wireless network QoS. This paper presents a survey of recent developments in the area of QoS provisioning for the MPMS. In particular, our contributions are as follows: (1) an overview of wireless networks and the network QoS requirements of biosignal delivery; (2) a survey of wireless networks' QoS performance evaluation for the transmission of biosignals; and (3) a survey of QoS provisioning mechanisms for biosignal delivery in the MPMS. We also propose integrating end-to-end QoS monitoring and QoS provisioning strategies in a mobile patient monitoring system infrastructure to support the optimal delivery of biosignals to healthcare professionals.
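
    The requirement matching described above (bandwidth, delay, error rate per biosignal versus the QoS a network offers) can be sketched as a simple feasibility check plus network selection. The requirement figures and the available networks below are illustrative assumptions, not values from the survey; real figures vary by device and encoding.

```python
# Illustrative per-signal QoS requirements (assumed values).
REQUIREMENTS = {
    "ecg":  {"bw_kbps": 12.0, "delay_ms": 250.0, "loss_pct": 1.0},
    "spo2": {"bw_kbps": 1.0,  "delay_ms": 500.0, "loss_pct": 2.0},
}

def feasible(signal, network):
    """True if the network's offered QoS meets the signal's requirements."""
    r = REQUIREMENTS[signal]
    return (network["bw_kbps"] >= r["bw_kbps"]
            and network["delay_ms"] <= r["delay_ms"]
            and network["loss_pct"] <= r["loss_pct"])

def select_network(signal, networks):
    """Pick the feasible network with the most bandwidth headroom;
    return None when no available network satisfies the requirements."""
    candidates = [n for n in networks if feasible(signal, n)]
    return max(candidates, key=lambda n: n["bw_kbps"], default=None)

# Hypothetical networks currently visible to the patient's device.
NETWORKS = [
    {"name": "wlan",     "bw_kbps": 700.0,  "delay_ms": 100.0, "loss_pct": 0.5},
    {"name": "cellular", "bw_kbps": 5000.0, "delay_ms": 400.0, "loss_pct": 1.5},
]
```

    When no network is feasible, this is exactly the resource-poor case where the survey's application-adaptation strategies (e.g., reducing the sampling rate) come into play.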

  10. Optimizing Observation Networks Combining Ships of Opportunity, Gliders, Moored Buoys and FerryBox in the Bay of Biscay and English Channel

    NASA Astrophysics Data System (ADS)

    Charria, G.; Lamouroux, J.; De Mey, P. J.; Raynaud, S.; Heyraud, C.; Craneguy, P.; Dumas, F.; Le Henaff, M.

    2016-02-01

    Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of the processes involved requires adapting observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. An efficient way to measure the hydrological content of the water column over the continental shelf is to use ships of opportunity. In the French observation strategy, the RECOPESCA program, as a component of the High frequency Observation network for the environment in coastal SEAs (HOSEA), aims to collect environmental observations from sensors attached to fishing nets. In the present study, we assess that network's performance using the ArM method (Le Hénaff et al., 2009). A reference network, based on fishing vessel observations in 2008, is assessed using that method. Moreover, three scenarios are also analyzed: one based on the reference network, a denser network in 2010, and a fictive network aggregated from a pluri-annual collection of profiles. Two other observational network design experiments have been implemented for the spring season in two regions: 1) the Loire River plume (northern part of the Bay of Biscay), to explore different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity, and 2) the Western English Channel, using a glider below FerryBox measurements. These experiments, combining existing and future observing systems as well as numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (e.g.
using gliders), the efficiency of the surface high frequency sampling from FerryBoxes in macrotidal regions and the importance of sampling key regions instead of increasing the number of Voluntary Observing Ships.

  11. Energy optimization in mobile sensor networks

    NASA Astrophysics Data System (ADS)

    Yu, Shengwei

    Mobile sensor networks are considered to consist of a network of mobile robots, each of which has computation, communication, and sensing capabilities. Energy efficiency is a critical issue in mobile sensor networks, especially since mobility (i.e., locomotion control), routing (i.e., communications), and sensing are unique characteristics of mobile robots relevant to energy optimization. This thesis focuses on the problem of energy optimization of mobile robotic sensor networks, and the research results can be extended to the energy optimization of a network of mobile robots that monitors the environment, or a team of mobile robots that transports materials from station to station in a manufacturing environment. On the energy optimization of mobile robotic sensor networks, our research focuses on the investigation and development of distributed optimization algorithms that exploit the mobility of robotic sensor nodes for network lifetime maximization. In particular, the thesis studies these five problems: 1. Network-lifetime maximization by controlling positions of networked mobile sensor robots based on local information with distributed optimization algorithms; 2. Lifetime maximization of mobile sensor networks with energy harvesting modules; 3. Lifetime maximization using joint design of mobility and routing; 4. Optimal control for network energy minimization; 5. Network lifetime maximization in mobile visual sensor networks. In addressing the first problem, we consider only the mobility strategies of the robotic relay nodes in a mobile sensor network in order to maximize its network lifetime. By using variable substitutions, the original problem is converted into a convex problem, and a variant of the sub-gradient method for saddle-point computation is developed for solving it. An optimal solution is obtained by this method.
Computer simulations show that the mobility of robotic sensors can significantly prolong the lifetime of the whole robotic sensor network while consuming a negligible amount of energy for mobility. For the second problem, the formulation is extended to accommodate mobile robotic nodes with energy harvesting capability, which makes it a non-convex optimization problem. The non-convexity issue is tackled using the existing sequential convex approximation method, based on which we propose a novel procedure of modified sequential convex approximation with fast convergence speed. For the third problem, the proposed procedure is used to solve another challenging non-convex problem, which results in utilizing mobility and routing simultaneously in mobile robotic sensor networks to prolong the network lifetime. The results indicate that the joint design of mobility and routing has an edge over other methods in prolonging network lifetime, which also justifies the use of mobility in mobile sensor networks for energy efficiency purposes. For the fourth problem, we include the dynamics of the robotic nodes in the problem by modeling the networked robotic system using hybrid systems theory. A novel distributed method for the networked hybrid system is used to solve for the optimal moving trajectories of the robotic nodes and the optimal network links, which are not answered by previous approaches. Finally, the fact that mobility is more effective in prolonging network lifetime for a data-intensive network leads us to apply our methods to mobile visual sensor networks, which are useful in many applications. We investigate the joint design of mobility, data routing, and encoding power to help improve the video quality while maximizing the network lifetime. This study leads to a better understanding of the role mobility can play in data-intensive surveillance sensor networks.
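
    Why relay mobility prolongs lifetime can be seen in a one-dimensional toy model, which is far simpler than the thesis's convex formulation: with energy per bit growing as distance squared, the min-lifetime-maximizing relay position equalizes the hop distances. The radio constants and positions below are illustrative assumptions.

```python
def node_lifetime(battery, d, alpha=1e-3, exponent=2):
    """Lifetime of a node transmitting over distance d, assuming energy per
    bit grows as d**exponent (a standard first-order radio model)."""
    return battery / (alpha * d ** exponent)

def network_lifetime(positions, battery=1.0):
    """Lifetime of a linear source -> relay -> sink chain: the minimum node
    lifetime, each node transmitting to the next position in the chain."""
    hops = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return min(node_lifetime(battery, d) for d in hops)

# Sweep the relay position between source (x=0) and sink (x=10): the
# lifetime-maximizing position equalizes the two hop distances.
best_x = max((x * 0.1 for x in range(1, 100)),
             key=lambda x: network_lifetime([0.0, x, 10.0]))
```

    Moving the relay from a poor position (e.g., x = 2) to the midpoint more than doubles the bottleneck node's lifetime in this model, which is the intuition behind optimizing relay positions in the thesis.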

  12. A wireless medical monitoring over a heterogeneous sensor network.

    PubMed

    Yuce, Mehmet R; Ng, Peng Choong; Lee, Chin K; Khan, Jamil Y; Liu, Wentai

    2007-01-01

    This paper presents a heterogeneous sensor network system that can monitor physiological parameters from multiple patients by means of different communication standards. The system uses the recently opened medical band, MICS (Medical Implant Communication Service), between the sensor nodes and a remote central control unit (CCU) that behaves as a base station. The CCU communicates over another network standard (the Internet or a mobile network) for long-distance data transfer. The proposed system offers mobility to patients and flexibility to medical staff, who can obtain a patient's physiological data on demand via the Internet. A prototype sensor network including hardware, firmware and software designs has been implemented and tested by incorporating temperature and pulse-rate sensors on the nodes. The developed system has been optimized for power consumption by putting the nodes to sleep, via bidirectional communication, whenever no data transfer is taking place.

  13. Detection and Monitoring of Improvised Explosive Device Education Networks through the World Wide Web

    DTIC Science & Technology

    2009-06-01

    search engines are not up to this task, as they have been optimized to catalog information quickly and efficiently for user ease of access while promoting retail commerce at the same time. This thesis presents a performance analysis of a new search engine algorithm designed to help find IED education networks using the Nutch open-source search engine architecture. It reveals which web pages are more important via references from other web pages regardless of domain. In addition, this thesis discusses potential evaluation and monitoring techniques to be used in conjunction

  14. An Embedded Wireless Sensor Network with Wireless Power Transmission Capability for the Structural Health Monitoring of Reinforced Concrete Structures.

    PubMed

    Gallucci, Luca; Menna, Costantino; Angrisani, Leopoldo; Asprone, Domenico; Moriello, Rosario Schiano Lo; Bonavolontà, Francesco; Fabbrocino, Francesco

    2017-11-07

    Maintenance strategies based on structural health monitoring can provide effective support in the optimization of scheduled repair of existing structures, thus enabling their lifetime to be extended. With specific regard to reinforced concrete (RC) structures, the state of the art still lacks an efficient and cost-effective technique capable of monitoring material properties continuously over the lifetime of a structure. Current solutions typically measure the required mechanical variables either indirectly but economically, or directly but expensively. Moreover, most of the proposed solutions can only be implemented by means of manual activation, making the monitoring inefficient and thus poorly supported. This paper proposes a structural health monitoring system based on a wireless sensor network (WSN) that enables the automatic monitoring of a complete structure. The network includes wireless distributed sensors embedded in the structure itself, and follows the monitoring-based maintenance (MBM) approach, with its ABCDE paradigm, namely: accuracy, benefit, compactness, durability, and easiness of operations. The system is structured at the node level, with a network architecture that funnels all node data to a central unit. Human control is completely unnecessary until the periodic evaluation of the collected data. Several tests are conducted in order to characterize the system from a metrological point of view and assess its performance and effectiveness in real RC conditions.

  15. Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks

    PubMed Central

    Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav

    2017-01-01

    Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
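
    The co-integration-based node selection can be caricatured with a much simpler stand-in: below, a node is declared redundant when a kept node's series predicts it within an error bound via ordinary least squares. The series, node names and tolerance are hypothetical, and the OLS residual check is only a rough proxy for a proper co-integration test:

```python
import math

def fit_linear(x, y):
    """Ordinary least squares fit y ~ a*x + b (closed form)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

def redundant_nodes(series, tol=0.5):
    """Greedy pass: a node is redundant if a kept node's series predicts it
    within `tol` mean absolute error (a stand-in for a co-integration test)."""
    keep, drop = [], []
    for name, y in series.items():
        for kept in keep:
            a, b = fit_linear(series[kept], y)
            mae = sum(abs(a * xi + b - yi) for xi, yi in zip(series[kept], y)) / len(y)
            if mae <= tol:
                drop.append(name)
                break
        else:
            keep.append(name)
    return keep, drop

# Hypothetical temperature series from four sensor positions.
base = [20 + 0.1 * t for t in range(50)]
series = {
    "n1": base,
    "n2": [2 * v - 15 for v in base],                                 # linear in n1 -> redundant
    "n3": [v + (0.3 if t % 7 == 0 else -0.3) for t, v in enumerate(base)],  # noisy but tracks n1
    "n4": [22 + 5 * math.sin(t / 3) for t in range(50)],              # independent -> must be kept
}
keep, drop = redundant_nodes(series, tol=0.5)
```

Only the retained nodes would need long-term deployment; the dropped ones can be reconstructed from the kept series within the error bound.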

  16. On the Optimization of a Probabilistic Data Aggregation Framework for Energy Efficiency in Wireless Sensor Networks.

    PubMed

    Kafetzoglou, Stella; Aristomenopoulos, Giorgos; Papavassiliou, Symeon

    2015-08-11

    Among the key aspects of the Internet of Things (IoT) is the integration of heterogeneous sensors in a distributed system that performs actions on the physical world based on environmental information gathered by sensors and application-related constraints and requirements. Numerous applications of Wireless Sensor Networks (WSNs) have appeared in various fields, from environmental monitoring to tactical fields and home healthcare, promising to change our quality of life and facilitating the vision of sensor-network-enabled smart cities. Given the enormous requirements that emerge in such a setting, both in terms of data and energy, data aggregation appears to be a key element in reducing the amount of traffic in wireless sensor networks and achieving energy conservation. Probabilistic frameworks have been introduced as operationally efficient and effective solutions for data aggregation in distributed sensor networks. In this work, we introduce an overall optimization approach that improves and complements such frameworks by identifying the optimal probability for a node to aggregate packets as well as the optimal period that a node should wait before performing aggregation, so as to minimize the overall energy consumption while satisfying certain imposed delay constraints. Primal-dual decomposition is employed to solve the corresponding optimization problem, while simulation results demonstrate the operational efficiency of the proposed approach under different traffic and topology scenarios.

  17. Optimizing hidden layer node number of BP network to estimate fetal weight

    NASA Astrophysics Data System (ADS)

    Su, Juan; Zou, Yuanwen; Lin, Jiangli; Wang, Tianfu; Li, Deyu; Xie, Tao

    2007-12-01

    The ultrasonic estimation of fetal weight before delivery is of great significance in obstetrical practice. Estimating fetal weight more accurately is crucial for prenatal care, obstetrical treatment, choosing appropriate delivery methods, monitoring fetal growth and reducing the risk of newborn complications. In this paper, we introduce a method that combines golden section search and an artificial neural network (ANN) to estimate fetal weight. The golden section search is employed to optimize the hidden layer node number of the back propagation (BP) neural network. The method greatly improves the accuracy of fetal weight estimation, while avoiding the choice of the hidden layer node number by subjective experience. The estimation coincidence rate achieves 74.19%, and the mean absolute error is 185.83 g.
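
    A golden-section search of the kind used above to choose the hidden layer node number can be sketched as follows; the validation-error function, its sweet spot and the search bounds are hypothetical stand-ins for training and evaluating the actual BP network:

```python
def golden_section_min(f, a, b, tol=1e-3):
    """Golden-section search for the minimum of a unimodal function f on [a, b]."""
    phi = (5 ** 0.5 - 1) / 2  # ~0.618, the inverse golden ratio
    x1 = b - phi * (b - a)
    x2 = a + phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - phi * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# Hypothetical validation error as a function of hidden node count:
# assume it is unimodal, with a sweet spot between too few and too many nodes.
val_error = lambda n: (n - 11.0) ** 2 + 3.0
best = golden_section_min(val_error, 2, 30)
best_nodes = round(best)  # train the BP network with this many hidden nodes
```

Each iteration shrinks the bracketing interval by the golden ratio while reusing one of the two previous function evaluations, which matters when each evaluation means training a network.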

  18. Design of a monitoring network over France in case of a radiological accidental release

    NASA Astrophysics Data System (ADS)

    Abida, Rachid; Bocquet, Marc; Vercauteren, Nikki; Isnard, Olivier

    The Institute of Radiation Protection and Nuclear Safety (France) is planning the set-up of an automatic nuclear aerosol monitoring network over the French territory. Each of the stations will be able to automatically sample the air aerosol content and provide activity concentration measurements on several radionuclides. This should help monitor the set of nuclear power plants in France and neighbouring countries, and help evaluate the impact of a radiological incident occurring at one of these nuclear facilities. This paper is devoted to the spatial design of such a network. Here, any potential network is judged on its ability to extrapolate activity concentrations measured at the network stations over the whole domain. The performance of a network is quantitatively assessed through a cost function that measures the discrepancy between the extrapolation and the true concentration fields. These true fields are obtained by computing a database of dispersion accidents over one year of meteorology, originating from 20 French nuclear sites. A close-to-optimal network is then sought using simulated annealing optimisation. The results emphasise the importance of the cost function in the design of a network aimed at monitoring an accidental dispersion. Several choices of norm used in the cost function are studied and lead to different designs. The influence of the number of stations is discussed. A comparison with a purely geometric approach, which does not involve simulations with a chemistry-transport model, is also performed.
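
    A minimal sketch of station selection by simulated annealing, assuming a purely geometric stand-in cost (total squared distance from each grid point to its nearest station) in place of the paper's dispersion-based cost function; the grid size, cooling schedule and step count are illustrative:

```python
import math, random

def coverage_cost(stations, points):
    """Geometric proxy cost: total squared distance from each point to its nearest station."""
    return sum(min((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2 for s in stations) for p in points)

def anneal(points, k, steps=2000, t0=1.0, seed=0):
    """Simulated annealing over k-station subsets: perturb one station at a time,
    always accept improvements, and accept worsenings with a temperature-
    dependent probability that shrinks as the schedule cools."""
    rng = random.Random(seed)
    current = rng.sample(points, k)
    cost = coverage_cost(current, points)
    best, best_cost = list(current), cost
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9               # linear cooling schedule
        cand = list(current)
        cand[rng.randrange(k)] = rng.choice(points)   # move one station
        c = coverage_cost(cand, points)
        if c < cost or rng.random() < math.exp((cost - c) / t):
            current, cost = cand, c
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best, best_cost

grid = [(x, y) for x in range(10) for y in range(10)]  # candidate station sites
stations, cost = anneal(grid, k=4)
```

In the paper's setting, `coverage_cost` would be replaced by the discrepancy between the extrapolated and true concentration fields over the accident database.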

  19. Disease Surveillance on Complex Social Networks.

    PubMed

    Herrera, Jose L; Srinivasan, Ravi; Brownstein, John S; Galvani, Alison P; Meyers, Lauren Ancel

    2016-07-01

    As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors (sampling the most connected, random, and friends of random individuals) in three complex social networks: a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals (early and accurate detection of epidemic emergence and peak, and general situational awareness) we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information.
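
    The friends-of-random strategy is simple to sketch: sample random individuals, then nominate one friend of each, exploiting the friendship paradox (your friends have more friends than you do, on average). The toy contact network below (one hub plus a ring of acquaintances) and all parameters are illustrative:

```python
import random

def friends_of_random(adj, k, seed=0):
    """Pick k sensors: sample k random individuals, then nominate one random
    friend of each (the nominated friend has above-average degree on average)."""
    rng = random.Random(seed)
    people = sorted(adj)
    sensors = []
    for person in rng.sample(people, k):
        sensors.append(rng.choice(sorted(adj[person])))
    return sensors

# Toy contact network: one hub (node 0) touching everyone, plus a ring of acquaintances.
n = 12
adj = {i: set() for i in range(n)}
for i in range(1, n):
    adj[0].add(i); adj[i].add(0)      # hub edges
for i in range(1, n):
    j = 1 + (i % (n - 1))
    adj[i].add(j); adj[j].add(i)      # ring among the leaves

sensors = friends_of_random(adj, k=4)
```

No global knowledge of the network is needed: each sampled person only has to name one contact, which is what makes the strategy practical for real surveillance.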

  20. Fault identification and localization for Ethernet Passive Optical Network using L-band ASE source and various types of fiber Bragg grating

    NASA Astrophysics Data System (ADS)

    Naim, Nani Fadzlina; Bakar, A. Ashrif A.; Ab-Rahman, Mohammad Syuhaimi

    2018-01-01

    This paper presents a centralized fault identification and localization technique for an Ethernet Passive Optical Access Network. The technique employs an L-band Amplified Spontaneous Emission (ASE) source as the monitoring source and various fiber Bragg gratings (FBGs) as fiber identifiers. An FBG with a unique combination of Bragg wavelength, reflectivity and bandwidth is inserted at each distribution fiber. The FBG reflection spectrum is analyzed using an optical spectrum analyzer (OSA) to monitor the condition of the distribution fiber. Distinct FBG reflection spectra are employed to optimize the limited bandwidth of the monitoring source, thus allowing more fibers to be monitored; essentially, one Bragg wavelength is shared by two distinct FBGs with different reflectivity and bandwidth. The experimental results show that the system is capable of monitoring up to 32 customers with an OSNR value of ∼1.2 dB and a received monitoring power of -24 dBm. This centralized and simple monitoring technique yields a low-power, cost-efficient system with low bandwidth requirements.

  1. QoS and energy aware cooperative routing protocol for wildfire monitoring wireless sensor networks.

    PubMed

    Maalej, Mohamed; Cherif, Sofiane; Besbes, Hichem

    2013-01-01

    Wireless sensor networks (WSNs) are presented as a suitable solution for wildfire monitoring. However, this application requires a WSN design that accounts for network lifetime and for the shadowing effect generated by trees in the forest environment. Cooperative communication is a promising technique for WSNs, in which each hop uses the resources of multiple nodes to transmit data; by sharing resources between nodes, the transmission quality is enhanced. In this paper, we use reinforcement learning by opponent modeling to optimize a cooperative communication protocol based on RSSI and node energy consumption in a competitive context (RSSI/energy-CC), that is, an energy- and quality-of-service-aware cooperative routing protocol. Simulation results show that the proposed algorithm performs well in terms of network lifetime, packet delay, and energy consumption.

  2. Design and development of a wireless sensor network to monitor snow depth in multiple catchments in the American River basin, California: hardware selection and sensor placement techniques

    NASA Astrophysics Data System (ADS)

    Kerkez, B.; Rice, R.; Glaser, S. D.; Bales, R. C.; Saksa, P. C.

    2010-12-01

    A 100-node wireless sensor network (WSN) was designed for the purpose of monitoring snow depth in two watersheds, spanning 3 km² in the American River basin, in the central Sierra Nevada of California. The network will be deployed as a prototype project that will become a core element of a larger water information system for the Sierra Nevada. The site conditions range from mid-elevation forested areas to sub-alpine terrain with light forest cover. Extreme temperature and humidity fluctuations, along with heavy rain and snowfall events, create particularly challenging conditions for wireless communications. We show how statistics gathered from a previously deployed 60-node WSN, located in the Southern Sierra Critical Zone Observatory, were used to inform the design. We adapted robust network hardware, manufactured by Dust Networks for highly demanding industrial monitoring, and added linear amplifiers to the radios to improve transmission distances. We also designed a custom data-logging board to interface the WSN hardware with snow-depth sensors. Due to the large distance between sensing locations, and the complexity of the terrain, we analyzed network statistics to select the locations of repeater nodes, to create a redundant and reliable mesh. This optimized network topology will maximize transmission distances, while ensuring power-efficient network operations throughout harsh winter conditions. At least 30 of the 100 nodes will actively sense snow depth, while the remainder will act as sensor-ready repeaters in the mesh. Data from a previously conducted snow survey were used to create a Gaussian Process model of snow depth; variance estimates produced by this model were used to suggest near-optimal locations for snow-depth sensors to measure the variability across a 1 km² grid. We compare the locations selected by the sensor placement algorithm to those made through expert opinion, and offer explanations for differences resulting from each approach.
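
    A greedy stand-in for the variance-driven placement step can be sketched as follows. Instead of the full Gaussian Process posterior variance, the proxy below keeps only the nearest-sensor kernel term (so it needs no matrix inversion), and the grid, kernel length scale and sensor count are hypothetical:

```python
import math

def k(p, q, length=2.0):
    """Squared-exponential covariance between locations p and q."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-d2 / (2 * length ** 2))

def greedy_variance_placement(candidates, m):
    """Greedily add the candidate with the largest variance proxy, where
    1 - max_s k(x, s) keeps only the nearest-sensor term of the true
    GP posterior variance."""
    chosen = [candidates[0]]  # seed with an arbitrary first sensor
    while len(chosen) < m:
        def proxy(x):
            return 1.0 - max(k(x, s) for s in chosen)
        chosen.append(max((c for c in candidates if c not in chosen), key=proxy))
    return chosen

grid = [(x, y) for x in range(6) for y in range(6)]  # candidate sensor locations
sensors = greedy_variance_placement(grid, m=4)
```

With a prior unit variance, the proxy drives new sensors toward the locations least explained by those already placed, which is the intuition behind variance-based placement even when the full posterior is used.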

  3. Analysis of Spatial Autocorrelation for Optimal Observation Network in Korea

    NASA Astrophysics Data System (ADS)

    Park, S.; Lee, S.; Lee, E.; Park, S. K.

    2016-12-01

    Many studies have been conducted to improve the prediction of high-impact weather, such as THORPEX (The Observing System Research and Predictability Experiment), FASTEX (Fronts and Atlantic Storm-Track Experiment), NORPEX (North Pacific Experiment), WSR/NOAA (Winter Storm Reconnaissance), and DOTSTAR (Dropwindsonde Observations for Typhoon Surveillance near the TAiwan Region). One of the most important objectives in these studies is to quantify the effect of observations on forecasts and to establish an optimal observation network. However, such studies are lacking for Korea, although the Korean peninsula has highly complex terrain that makes its weather phenomena difficult to predict. Building an optimal future observation network is necessary to increase the utilization of numerical weather prediction and to improve the monitoring, tracking and prediction of high-impact weather in Korea. Therefore, we perform a preliminary study to understand the spatial scale for an expansion of the observation system through Spatial Autocorrelation (SAC) analysis. In addition, we will develop a testbed system to design an optimal observation network. The analysis is conducted with Automatic Weather System (AWS) rainfall data, global upper-air grid observations (i.e., temperature, pressure, humidity), and Himawari satellite data (i.e., water vapor) for Korea during 2013-2015. This study will provide a guideline for constructing an observation network that improves not only weather prediction skill but also cost-effectiveness.

  4. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    PubMed

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern together with a new method proposed to assign a removal priority number to each pre-existing station. To design the temporal sampling, a new approach is applied that accounts for uncertainty caused by lack of information: different time lag values are tested against another source of information, namely the simulation results of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in the available monitoring data, the flexibility of the BME interpolation technique is exploited by applying soft data, improving the accuracy of the calculations. The methodology is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations on a regular hexagonal grid of side length 3600 m is proposed, with a time lag of 5 weeks between samples. Since the variance estimation errors of the BME method are almost identical for the redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing network of 52 stations with monthly sampling frequency.
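
    The hexagonal gridding pattern for spatial sampling can be sketched directly; the 20 km × 20 km domain below is hypothetical, with the paper's 3600 m side length used as the spacing parameter:

```python
import math

def hex_grid(x_min, x_max, y_min, y_max, side):
    """Centers of a hexagonal sampling grid: alternate rows are offset by half
    a spacing; horizontal spacing is side*sqrt(3), vertical spacing side*1.5."""
    dx = side * math.sqrt(3)
    dy = side * 1.5
    points, row = [], 0
    y = y_min
    while y <= y_max:
        x = x_min + (dx / 2 if row % 2 else 0)  # stagger odd rows
        while x <= x_max:
            points.append((x, y))
            x += dx
        y += dy
        row += 1
    return points

# e.g. candidate stations on a 3600 m hexagonal grid over a 20 km x 20 km plain
stations = hex_grid(0, 20000, 0, 20000, 3600)
```

A hexagonal layout gives the most uniform nearest-station distance for a given station count, which is why it is a common choice for spatial sampling designs.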

  5. Optimal Performance Monitoring of Hybrid Mid-Infrared Wavelength MIMO Free Space Optical and RF Wireless Networks in Fading Channels

    NASA Astrophysics Data System (ADS)

    Schmidt, Barnet Michael

    An optimal performance monitoring metric for a hybrid free space optical and radio-frequency (RF) wireless network, the Outage Capacity Objective Function, is analytically developed and studied. Current and traditional methods of performance monitoring of both optical and RF wireless networks are centered on measurement of physical layer parameters, the most common being signal-to-noise ratio, error rate, Q factor, and eye diagrams, occasionally combined with link-layer measurements such as data throughput, retransmission rate, and/or lost packet rate. Network management systems frequently attempt to predict or forestall network failures by observing degradations of these parameters and to attempt mitigation (such as offloading traffic, increasing transmitter power, reducing the data rate, or combinations thereof) prior to the failure. These methods are limited by the frequent low sensitivity of the physical layer parameters to the atmospheric optical conditions (measured by optical signal-to-noise ratio) and the radio frequency fading channel conditions (measured by signal-to-interference ratio). As a result of low sensitivity, measurements of this type frequently are unable to predict impending failures sufficiently in advance for the network management system to take corrective action prior to the failure. We derive and apply an optimal measure of hybrid network performance based on the outage capacity of the hybrid optical and RF channel, the outage capacity objective function. The objective function provides high sensitivity and reliable failure prediction, and considers both the effects of atmospheric optical impairments on the performance of the free space optical segment as well as the effect of RF channel impairments on the radio frequency segment. The radio frequency segment analysis considers the three most common RF channel fading statistics: Rayleigh, Ricean, and Nakagami-m. 
The novel application of information theory to the underlying physics of the gamma-gamma optical channel and radio fading channels in determining the joint hybrid channel outage capacity provides the best performance estimate under any given set of operating conditions. It is shown that, unlike traditional physical layer performance monitoring techniques, the objective function based upon the outage capacity of the hybrid channel, at any combination of OSNR and SIR, is able to predict channel degradation and failure well in advance of the actual outage. An outage in the information-theoretic definition occurs when the offered load exceeds the outage capacity under the current conditions of OSNR and SIR. The optical channel is operated at the "long" mid-infrared wavelength of 10,000 nm, which provides improved resistance to scattering compared to shorter wavelengths such as 1550 nm.
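
    For the RF segment, the epsilon-outage capacity of a Rayleigh fading channel (one of the three fading statistics considered) can be estimated by Monte Carlo as the rate sustained in a fraction 1 - epsilon of fading states; the SNR values, outage probability and trial count below are illustrative:

```python
import math, random

def outage_capacity(snr_db, epsilon=0.01, trials=20000, seed=7):
    """Monte-Carlo epsilon-outage capacity of a Rayleigh fading channel.
    Rayleigh amplitude fading means the channel power gain is exponentially
    distributed; the epsilon-quantile of the instantaneous rate log2(1 + SNR*h)
    is the largest rate whose outage probability stays below epsilon."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    rates = sorted(math.log2(1 + snr * rng.expovariate(1.0)) for _ in range(trials))
    return rates[int(epsilon * trials)]  # epsilon-quantile of instantaneous rate
```

An information-theoretic outage is then declared whenever the offered load exceeds this quantile under the current channel conditions, matching the definition used in the abstract.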

  6. Development of a wireless sensor network for individual monitoring of panels in a photovoltaic plant.

    PubMed

    Prieto, Miguel J; Pernía, Alberto M; Nuño, Fernando; Díaz, Juan; Villegas, Pedro J

    2014-01-30

    With photovoltaic (PV) systems proliferating in the last few years due to the high prices of fossil fuels and pollution issues, among others, it is extremely important to monitor the efficiency of these plants and optimize the energy production process. This will also result in improvements related to the maintenance and security of the installation. In order to do so, the main parameters in the plant must be continuously monitored so that the appropriate actions can be carried out. This monitoring should not only be carried out at a global level, but also at panel-level, so that a better understanding of what is actually happening in the PV plant can be obtained. This paper presents a system based on a wireless sensor network (WSN) that includes all the components required for such monitoring as well as a power supply obtaining the energy required by the sensors from the photovoltaic panels. The system proposed succeeds in identifying all the nodes in the network and provides real-time monitoring while tracking efficiency, features, failures and weaknesses from a single cell up to the whole infrastructure. Thus, the decision-making process is simplified, which contributes to reducing failures, wastes and, consequently, costs.

  7. A Survey on Multimedia-Based Cross-Layer Optimization in Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Guedes, Luiz Affonso

    2011-01-01

    Visual sensor networks (VSNs) comprised of battery-operated electronic devices endowed with low-resolution cameras have expanded the applicability of a series of monitoring applications. Those types of sensors are interconnected by ad hoc error-prone wireless links, imposing stringent restrictions on available bandwidth, end-to-end delay and packet error rates. In such context, multimedia coding is required for data compression and error-resilience, also ensuring energy preservation over the path(s) toward the sink and improving the end-to-end perceptual quality of the received media. Cross-layer optimization may enhance the expected efficiency of VSNs applications, disrupting the conventional information flow of the protocol layers. When the inner characteristics of the multimedia coding techniques are exploited by cross-layer protocols and architectures, higher efficiency may be obtained in visual sensor networks. This paper surveys recent research on multimedia-based cross-layer optimization, presenting the proposed strategies and mechanisms for transmission rate adjustment, congestion control, multipath selection, energy preservation and error recovery. We note that many multimedia-based cross-layer optimization solutions have been proposed in recent years, each one bringing a wealth of contributions to visual sensor networks. PMID:22163908

  8. Detecting event-related changes in organizational networks using optimized neural network models.

    PubMed

    Li, Ze; Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers the correlation and could be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes could be effectively useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods using two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has a higher precision but also has a better robustness than the previously used techniques.
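
    A plain genetic algorithm of the kind used to tune the BPNNs can be sketched as follows; the bit encoding, the stand-in fitness (an assumed validation-score sweet spot rather than a trained network) and all GA parameters are hypothetical:

```python
import random

def ga_optimize(fitness, n_bits, pop_size=20, gens=40, p_mut=0.05, seed=1):
    """Plain genetic algorithm: tournament selection, one-point crossover,
    per-bit flip mutation, tracking the best individual seen so far."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)        # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                      # one-point crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
        cand = max(pop, key=fitness)
        if fitness(cand) > fitness(best):
            best = cand
    return best

# Hypothetical fitness: decode 8 bits as a hidden-layer size and score it
# against an assumed sweet spot (a stand-in for a BPNN's validation accuracy).
def fitness(bits):
    size = int("".join(map(str, bits)), 2)
    return -abs(size - 100)

best = ga_optimize(fitness, n_bits=8)
```

In the paper's setting, each fitness evaluation would train and validate a BPNN with the decoded hyperparameters; PSO replaces the crossover/mutation step with velocity-based updates over the same encoding.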

  9. Detecting event-related changes in organizational networks using optimized neural network models

    PubMed Central

    Li, Ze; Sun, Duoyong; Zhu, Renqi; Lin, Zihan

    2017-01-01

    Organizational external behavior changes are caused by the internal structure and interactions. External behaviors are also known as the behavioral events of an organization. Detecting event-related changes in organizational networks could efficiently be used to monitor the dynamics of organizational behaviors. Although many different methods have been used to detect changes in organizational networks, these methods usually ignore the correlation between the internal structure and external events. Event-related change detection considers the correlation and could be used for event recognition based on social network modeling and supervised classification. Detecting event-related changes could be effectively useful in providing early warnings and faster responses to both positive and negative organizational activities. In this study, event-related change in an organizational network was defined, and artificial neural network models were used to quantitatively determine whether and when a change occurred. To achieve a higher accuracy, Back Propagation Neural Networks (BPNNs) were optimized using Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). We showed the feasibility of the proposed method by comparing its performance with that of other methods using two cases. The results suggested that the proposed method could identify organizational events based on a correlation between the organizational networks and events. The results also suggested that the proposed method not only has a higher precision but also has a better robustness than the previously used techniques. PMID:29190799

  10. Contamination movement around a permeable reactive barrier at Solid Waste Management Unit 12, Naval Weapons Station Charleston, North Charleston, South Carolina, 2009

    USGS Publications Warehouse

    Vroblesky, Don A.; Petkewich, Matthew D.; Conlon, Kevin J.

    2010-01-01

    The ability to monitor the fate and behavior of the plume in the forest is severely limited because the present axis of maximum contamination in that area bypasses all but one of the existing monitoring wells (12MW-12S). Moreover, the 2009 data indicate that there are no optimally placed sentinel wells in the probable path of contaminant transport. Thus, the monitoring network is no longer adequate to monitor the groundwater contamination downgradient from the PRB.

  11. Disease Surveillance on Complex Social Networks

    PubMed Central

    Herrera, Jose L.; Srinivasan, Ravi; Brownstein, John S.; Galvani, Alison P.; Meyers, Lauren Ancel

    2016-01-01

    As infectious disease surveillance systems expand to include digital, crowd-sourced, and social network data, public health agencies are gaining unprecedented access to high-resolution data and have an opportunity to selectively monitor informative individuals. Contact networks, which are the webs of interaction through which diseases spread, determine whether and when individuals become infected, and thus who might serve as early and accurate surveillance sensors. Here, we evaluate three strategies for selecting sensors—sampling the most connected, random, and friends of random individuals—in three complex social networks—a simple scale-free network, an empirical Venezuelan college student network, and an empirical Montreal wireless hotspot usage network. Across five different surveillance goals—early and accurate detection of epidemic emergence and peak, and general situational awareness—we find that the optimal choice of sensors depends on the public health goal, the underlying network and the reproduction number of the disease (R0). For diseases with a low R0, the most connected individuals provide the earliest and most accurate information about both the onset and peak of an outbreak. However, identifying network hubs is often impractical, and they can be misleading if monitored for general situational awareness, if the underlying network has significant community structure, or if R0 is high or unknown. Taking a theoretical approach, we also derive the optimal surveillance system for early outbreak detection but find that real-world identification of such sensors would be nearly impossible. By contrast, the friends-of-random strategy offers a more practical and robust alternative. It can be readily implemented without prior knowledge of the network, and by identifying sensors with higher than average, but not the highest, epidemiological risk, it provides reasonably early and accurate information. PMID:27415615

  12. A data acquisition protocol for a reactive wireless sensor network monitoring application.

    PubMed

    Aderohunmu, Femi A; Brunelli, Davide; Deng, Jeremiah D; Purvis, Martin K

    2015-04-30

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world wildfire-monitoring testbed as a use case. Results from our in-house deployment of 15 nodes are favorable: on average, communication is reduced by over 50% compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain.
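    The reactive prediction-plus-adaptive-sampling idea can be illustrated with a toy sketch (not the SWIFTNET implementation; the thresholds and intervals are hypothetical): a node transmits only when its last-value prediction misses, and shortens its sampling interval after a miss so that fast-changing conditions are tracked:

```python
def adaptive_sampler(readings, err_threshold=2.0,
                     min_interval=15, max_interval=300):
    """Sketch of a reactive acquisition scheme (parameters hypothetical):
    a node transmits only when its last-value prediction misses by more
    than err_threshold, and halves the sampling interval after a miss so
    that fast-changing conditions (e.g. a spreading fire) are tracked."""
    interval = max_interval
    predicted = readings[0]
    transmissions = []          # (sample_index, value, interval) actually sent
    for i, value in enumerate(readings[1:], start=1):
        if abs(value - predicted) > err_threshold:
            transmissions.append((i, value, interval))
            interval = max(min_interval, interval // 2)   # react: sample faster
            predicted = value                             # resync the model
        else:
            interval = min(max_interval, interval * 2)    # relax: save energy
    return transmissions

# Stable readings followed by a sudden temperature spike:
data = [20.0, 20.1, 20.2, 20.1, 35.0, 50.0, 60.0]
sent = adaptive_sampler(data)
print(sent)  # only the spike samples are transmitted
```

    The stable prefix generates no traffic at all; the spike triggers three transmissions while the interval drops from 300 toward the 15 s floor, which is the reactiveness/energy trade the abstract describes.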

  13. A Data Acquisition Protocol for a Reactive Wireless Sensor Network Monitoring Application

    PubMed Central

    Aderohunmu, Femi A.; Brunelli, Davide; Deng, Jeremiah D.; Purvis, Martin K.

    2015-01-01

    Limiting energy consumption is one of the primary aims for most real-world deployments of wireless sensor networks. Unfortunately, attempts to optimize energy efficiency are often in conflict with the demand for network reactiveness to transmit urgent messages. In this article, we propose SWIFTNET: a reactive data acquisition scheme. It is built on the synergies arising from a combination of data reduction methods and energy-efficient data compression schemes. In particular, it combines compressed sensing, data prediction and adaptive sampling strategies. We show how this approach dramatically reduces the amount of unnecessary data transmission in environmental monitoring and surveillance networks. SWIFTNET targets any monitoring application that requires high reactiveness with aggressive data collection and transmission. To test the performance of this method, we present a real-world wildfire-monitoring testbed as a use case. Results from our in-house deployment of 15 nodes are favorable: on average, communication is reduced by over 50% compared with a default adaptive prediction method, without any loss in accuracy. In addition, SWIFTNET is able to guarantee reactiveness by adjusting the sampling interval from 5 min down to 15 s in our application domain. PMID:25942642

  14. Power optimization in body sensor networks: the case of an autonomous wireless EMG sensor powered by PV-cells.

    PubMed

    Penders, J; Pop, V; Caballero, L; van de Molengraft, J; van Schaijk, R; Vullers, R; Van Hoof, C

    2010-01-01

    Recent advances in ultra-low-power circuits and energy harvesters are making self-powered body sensor nodes a reality. Power optimization at the system and application level is crucial to achieving ultra-low power consumption for the entire system. This paper reviews system-level power optimization techniques and illustrates their impact in the case of autonomous wireless EMG monitoring. The resulting prototype, an autonomous wireless EMG sensor powered by PV-cells, is presented.

  15. Assessment of an air pollution monitoring network to generate urban air pollution maps using Shannon information index, fuzzy overlay, and Dempster-Shafer theory, A case study: Tehran, Iran

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Sheikhian, Hossein; Bigdeli, Behnaz

    2017-10-01

    Air pollution assessment is an imperative part of megacity planning and control. Hence, a new comprehensive approach for air pollution monitoring and assessment is introduced in this research. It comprises three main stages: optimizing the existing air pollutant monitoring network, locating new stations to complete the coverage of the existing network, and, finally, generating an air pollution map. In the first stage, the Shannon information index was used to find less informative stations as candidates for removal. Then, a methodology was proposed to determine the areas that are not sufficiently covered by the current network; these areas are candidates for new monitoring stations. The current air pollution monitoring network of Tehran was used as a case study, where the air pollution problem has been worsened by the huge population, the considerable influx of commuters and topographic barriers. In this regard, O3, NO, NO2, NOx, CO, PM10, and PM2.5 were considered the main pollutants of Tehran. The optimization stage concluded that all 16 active monitoring stations should be preserved. Analysis showed that about 35% of Tehran's area is not properly covered by monitoring stations and about 30% of the area needs additional stations. Winter in Tehran consistently sees the most severe air pollution of the year. Hence, to produce the air pollution map of Tehran, three months of winter measurements of the above pollutants, repeated over five years in the same period, were selected and extended to the entire area using the kriging method. Experts specified the contribution of each pollutant to overall air pollution, and their rankings were aggregated by a fuzzy-overlay process. The resulting maps revealed a critical air pollution situation in the study area: more than 45% of the city faced high pollution in the study period, while less than 10% of the area showed low pollution. This situation confirms the need for effective plans to mitigate the severity of the problem. In addition, an effort was made to check the plausibility of the resulting air pollution map against the urban, cultural, and environmental characteristics of Tehran, which also confirmed the results.
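    The use of the Shannon information index to rank stations in the first stage can be sketched as follows (a toy illustration, not the authors' code; the binning scheme and the example series are hypothetical). A station whose readings are nearly constant carries little information and becomes a removal candidate:

```python
from collections import Counter
from math import log2

def shannon_index(measurements, n_bins=10):
    """Shannon information content of one station's time series.
    Values are binned, then H = -sum(p * log2 p) over the bin
    frequencies. A station whose readings concentrate in few bins
    carries little information (removal thresholds are hypothetical)."""
    lo, hi = min(measurements), max(measurements)
    width = (hi - lo) / n_bins or 1.0          # guard against a flat series
    bins = Counter(min(int((v - lo) / width), n_bins - 1) for v in measurements)
    n = len(measurements)
    return -sum((c / n) * log2(c / n) for c in bins.values())

# A variable station vs. a nearly constant one:
variable = [10, 35, 60, 20, 80, 45, 95, 15, 70, 55]
constant = [50, 50, 51, 50, 50, 49, 50, 50, 50, 50]
print(shannon_index(variable) > shannon_index(constant))  # True
```

    Ranking all stations by this index and inspecting the low scorers reproduces the spirit of the optimization stage, which in Tehran's case retained all 16 stations.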

  16. Automatic Phase Picker for Local and Teleseismic Events Using Wavelet Transform and Simulated Annealing

    NASA Astrophysics Data System (ADS)

    Gaillot, P.; Bardaine, T.; Lyon-Caen, H.

    2004-12-01

    In recent years, various automatic phase pickers based on the wavelet transform have been developed. The main motivations for using the wavelet transform are that wavelets are excellent at finding the characteristics of transient signals, they have good time resolution at all periods, and they are easy to program for fast execution. The time-scale properties and flexibility of wavelets thus allow detection of P and S phases in a broad frequency range, making their use possible in various contexts. However, directly applying an automatic picking program to a context or network other than the one for which it was initially developed quickly becomes tedious. In fact, independently of the strategy involved in automatic picking algorithms (window average, autoregressive, beamforming, optimization filtering, neural network), all developed algorithms use different parameters that depend on the objective of the seismological study, the region and the seismological network. Classically, these parameters are defined manually by trial and error or calibrated in a learning stage. In order to facilitate this laborious process, we have developed an automated method that provides optimal parameters for the picking programs. The set of parameters can be explored using simulated annealing, a generic name for a family of optimization algorithms based on the principle of stochastic relaxation. The optimization process amounts to systematically modifying an initial realization so as to decrease the value of the objective function, bringing the realization acceptably close to the target statistics.
Different formulations of the optimization problem (objective function) are discussed using (1) world seismicity data recorded by the French national seismic monitoring network (ReNass), (2) regional seismicity data recorded in the framework of the Corinth Rift Laboratory (CRL) experiment, (3) induced seismicity data from the gas field of Lacq (Western Pyrenees), and (4) micro-seismicity data from glacier monitoring. The developed method is discussed and tested using our wavelet version of the standard STA-LTA algorithm.
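    The simulated-annealing search described in this record can be sketched generically (a toy illustration over one picker parameter, not the authors' implementation; the misfit function and all constants are hypothetical). Accepting a worse candidate with probability exp(-delta/T) is what lets the search escape local minima of the objective:

```python
import math, random

def simulated_annealing(objective, init, step=0.5, t0=1.0,
                        cooling=0.95, n_iter=300, seed=1):
    """Generic simulated annealing over one parameter (the same scheme
    extends to a parameter vector). A worse candidate is accepted with
    probability exp(-delta/T), so the search can escape local minima of
    the objective (e.g. misfit to hand-picked arrival times)."""
    rng = random.Random(seed)
    x, fx, temp = init, objective(init), t0
    best_x, best_f = x, fx
    for _ in range(n_iter):
        cand = x + rng.uniform(-step, step)
        fc = objective(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc                 # accept (always if better)
            if fx < best_f:
                best_x, best_f = x, fx       # remember the best ever seen
        temp *= cooling                      # geometric cooling schedule
    return best_x, best_f

# Hypothetical picker-parameter misfit with local minima; minimum near x = 3.
misfit = lambda x: (x - 3.0) ** 2 + 0.5 * math.sin(8.0 * x)
x_opt, f_opt = simulated_annealing(misfit, init=0.0)
print(round(x_opt, 2), round(f_opt, 3))
```

    In the paper's setting the objective would compare automatic picks against a reference catalogue for a given network, which is why the tuned parameters differ between the ReNass, CRL, Lacq, and glacier datasets.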

  17. Effective Network Management via System-Wide Coordination and Optimization

    DTIC Science & Technology

    2010-08-01

    Srinath Sridhar, Matthew Streeter, Jimeng Sun, Michael Tschantz, Rangarajan Vasudevan, Vijay Vasudevan, Gaurav Veda, Shobha Venkataraman, Justin... Sharma and Byers [150] suggest the use of Bloom filters. While minimizing redundant measurements is a common high-level theme between cSamp and their...NSDI, 2004. [150] M. R. Sharma and J. W. Byers. Scalable Coordination Techniques for Distributed Network Monitoring. In Proc. of PAM, 2005. [151] S

  18. An Embedded Wireless Sensor Network with Wireless Power Transmission Capability for the Structural Health Monitoring of Reinforced Concrete Structures

    PubMed Central

    Gallucci, Luca; Menna, Costantino; Angrisani, Leopoldo; Asprone, Domenico

    2017-01-01

    Maintenance strategies based on structural health monitoring can provide effective support in the optimization of scheduled repair of existing structures, thus enabling their lifetime to be extended. With specific regard to reinforced concrete (RC) structures, the state of the art still lacks an efficient and cost-effective technique capable of monitoring material properties continuously over the lifetime of a structure. Current solutions typically measure the required mechanical variables either indirectly but economically, or directly but expensively. Moreover, most of the proposed solutions can only be implemented by means of manual activation, making the monitoring inefficient and thus poorly supported. This paper proposes a structural health monitoring system based on a wireless sensor network (WSN) that enables the automatic monitoring of a complete structure. The network includes wireless distributed sensors embedded in the structure itself, and follows the monitoring-based maintenance (MBM) approach, with its ABCDE paradigm, namely: accuracy, benefit, compactness, durability, and easiness of operations. The system is structured at the node level, with a network architecture that enables all node data to converge in a central unit. Human control is completely unnecessary until the periodic evaluation of the collected data. Several tests are conducted in order to characterize the system from a metrological point of view and to assess its performance and effectiveness in real RC conditions. PMID:29112128

  19. Energy Harvesting Based Body Area Networks for Smart Health.

    PubMed

    Hao, Yixue; Peng, Limei; Lu, Huimin; Hassan, Mohammad Mehedi; Alamri, Atif

    2017-07-10

    Body area networks (BANs) are composed of a great number of ultra-low-power wearable devices, which constantly monitor physiological signals of the human body and thus realize intelligent monitoring. However, the collection and transfer of human body signals consume energy, and given the comfort demands on wearable devices, both the size and the capacity of a wearable device's battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing BAN energy efficiency remains a challenging problem. In this paper, we therefore propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, we first formulate the energy-efficiency optimization problem of dividing time between wireless energy transfer and wireless information transfer, taking into account energy harvesting in the BAN and the time limits of human body signal transfer. Second, we convert this into a convex optimization problem under a linear constraint and propose a closed-form solution. Finally, simulation results show that when the amount of data acquired by the wearable devices is small, the circuit and signal-acquisition energy accounts for most of the consumption, whereas when the amount of data is large, the signal-transfer energy is decisive.
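    The time-division idea can be illustrated with a simplified harvest-then-transmit model (a sketch under an assumed logarithmic rate law and hypothetical parameter values, not the paper's exact formulation or closed form). A fraction tau of each slot receives wireless energy; the remainder transmits with power proportional to the harvested energy, and the resulting rate is concave in tau, so a ternary search finds the optimum:

```python
from math import log2

def best_time_split(harvest_rate=2.0, gain=5.0, tol=1e-6):
    """Ternary search for the optimal time split in a harvest-then-
    transmit slot (all parameters hypothetical). Throughput:
        R(tau) = (1 - tau) * log2(1 + gain * harvest_rate * tau / (1 - tau))
    R is concave in tau on (0, 1), so ternary search finds its maximum."""
    def rate(tau):
        return (1 - tau) * log2(1 + gain * harvest_rate * tau / (1 - tau))
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if rate(m1) < rate(m2):
            lo = m1        # maximum lies right of m1
        else:
            hi = m2        # maximum lies left of m2
    tau = (lo + hi) / 2
    return tau, rate(tau)

tau, r = best_time_split()
print(round(tau, 3), round(r, 3))
```

    The paper instead derives a closed-form solution to its convex program; the numerical search here merely shows why a unique optimal split exists.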

  20. Energy Harvesting Based Body Area Networks for Smart Health

    PubMed Central

    Hao, Yixue; Peng, Limei; Alamri, Atif

    2017-01-01

    Body area networks (BANs) are composed of a great number of ultra-low-power wearable devices, which constantly monitor physiological signals of the human body and thus realize intelligent monitoring. However, the collection and transfer of human body signals consume energy, and given the comfort demands on wearable devices, both the size and the capacity of a wearable device's battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing BAN energy efficiency remains a challenging problem. In this paper, we therefore propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, we first formulate the energy-efficiency optimization problem of dividing time between wireless energy transfer and wireless information transfer, taking into account energy harvesting in the BAN and the time limits of human body signal transfer. Second, we convert this into a convex optimization problem under a linear constraint and propose a closed-form solution. Finally, simulation results show that when the amount of data acquired by the wearable devices is small, the circuit and signal-acquisition energy accounts for most of the consumption, whereas when the amount of data is large, the signal-transfer energy is decisive. PMID:28698501

  1. Tool wear modeling using abductive networks

    NASA Astrophysics Data System (ADS)

    Masory, Oren

    1992-09-01

    A tool wear model based on abductive networks, which consist of a network of `polynomial' nodes, is described. The model relates the cutting parameters, components of the cutting force, and machining time to flank wear. Thus, real-time measurements of the cutting force can be used to monitor the machining process. The model is obtained by a training process in which the connectivity between the network's nodes and the polynomial coefficients of each node are determined by optimizing a performance criterion. Actual wear measurements of coated and uncoated carbide inserts were used for training and evaluating the established model.

  2. Optimal Control of Connected and Automated Vehicles at Roundabouts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Liuhui; Malikopoulos, Andreas; Rios-Torres, Jackeline

    Connectivity and automation in vehicles provide an intriguing opportunity for users to better monitor transportation network conditions and make better operating decisions, improving safety and reducing pollution, energy consumption, and travel delays. This study investigates the implications of optimally coordinating vehicles that are wirelessly connected to each other and to infrastructure in roundabouts to achieve smooth traffic flow without stop-and-go driving. We apply an optimization framework and an analytical solution that allows optimal coordination of vehicles for merging in such traffic scenarios. The efficiency of the proposed approach is validated through simulation, and it is shown that coordination of vehicles can reduce total travel time by 3-49% and fuel consumption by 2-27%, depending on the traffic level. In addition, network throughput is improved by up to 25% due to the elimination of stop-and-go driving behavior.

  3. Fluid status monitoring with a wireless network to reduce cardiovascular-related hospitalizations and mortality in heart failure: rationale and design of the OptiLink HF Study (Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink).

    PubMed

    Brachmann, Johannes; Böhm, Michael; Rybak, Karin; Klein, Gunnar; Butter, Christian; Klemm, Hanno; Schomburg, Rolf; Siebermair, Johannes; Israel, Carsten; Sinha, Anil-Martin; Drexler, Helmut

    2011-07-01

    The Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink (OptiLink HF) study is designed to investigate whether OptiVol fluid status monitoring with an automatically generated wireless CareAlert notification via the CareLink Network can reduce all-cause death and cardiovascular hospitalizations in an HF population, compared with standard clinical assessment. Methods: Patients with newly implanted or replacement cardioverter-defibrillator devices with or without cardiac resynchronization therapy, who have chronic HF in New York Heart Association class II or III and a left ventricular ejection fraction ≤35% will be eligible to participate. Following device implantation, patients are randomized to either OptiVol fluid status monitoring through CareAlert notification or regular care (OptiLink 'on' vs. 'off'). The primary endpoint is a composite of all-cause death or cardiovascular hospitalization. It is estimated that 1000 patients will be required to demonstrate superiority of the intervention group to reduce the primary outcome by 30% with 80% power. The OptiLink HF study is designed to investigate whether early detection of congestion reduces mortality and cardiovascular hospitalization in patients with chronic HF. The study is expected to close recruitment in September 2012 and to report first results in May 2014.

  4. Design of smart sensing components for volcano monitoring

    USGS Publications Warehouse

    Xu, M.; Song, W.-Z.; Huang, R.; Peng, Y.; Shirazi, B.; LaHusen, R.; Kiely, A.; Peterson, N.; Ma, A.; Anusuya-Rangappa, L.; Miceli, M.; McBride, D.

    2009-01-01

    In a volcano monitoring application, various geophysical and geochemical sensors generate continuous high-fidelity data, and there is a compelling need for real-time raw data for volcano eruption prediction research. This requires the network to support network-synchronized sampling, online configurable sensing and situation awareness, which pose significant challenges for sensing component design. Ideally, resource usage should be driven by the environment and node situations, and data quality optimized under resource constraints. In this paper, we present our smart sensing component design, including hybrid time synchronization, configurable sensing, and situation awareness. Both design details and evaluation results are presented to show their efficiency. Although the presented design is for a volcano monitoring application, its design philosophy and framework can also apply to other similar applications and platforms. © 2009 Elsevier B.V.

  5. Robust Multi Sensor Classification via Jointly Sparse Representation

    DTIC Science & Technology

    2016-03-14

    Keywords: rank, sensor network, dictionary learning. ...with ultrafast laser pulses, Optics Express, (04 2015): 10521. Xiaoxia Sun, Nasser M. Nasrabadi, Trac D. Tran. Task-Driven Dictionary Learning...in dictionary design, compressed sensors design, and optimization in sparse recovery also helps. We are able to advance the state of the art

  6. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety of the network and provide greater convenience and efficiency for operators and managers, a system for remotely monitoring and managing switches has been designed and implemented using advanced network technology and existing network resources. A fast Internet Protocol camera (FS IP Camera) is selected, which has a 32-bit RISC embedded processor and supports a number of protocols. An optimized image compression algorithm, Motion-JPEG, is adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and managing system is designed and implemented according to the current infrastructure of the network and switches, and the control and administration software is designed accordingly. The dynamic webpage Java Server Pages (JSP) development platform is used in the system, and an SQL (Structured Query Language) Server database stores and serves image information, network messages and users' data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, so that multiple operating systems (UNIX, Linux and Windows) are supported. The application of the system can greatly reduce manpower cost and allows problems to be found and solved quickly.

  7. Optimized autonomous space in-situ sensor web for volcano monitoring

    USGS Publications Warehouse

    Song, W.-Z.; Shirazi, B.; Huang, R.; Xu, M.; Peterson, N.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.; Kedar, S.; Chien, S.; Webb, F.; Kiely, A.; Doubleday, J.; Davies, A.; Pieri, D.

    2010-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) has developed a prototype of a dynamic and scalable hazard monitoring sensor-web and applied it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) has two-way communication capability between ground and space assets, uses both space and ground data for optimal allocation of limited bandwidth resources on the ground, and uses smart management of competing demands for limited space assets. It also enables scalability and seamless infusion of future space and in-situ assets into the sensor-web. The space and in-situ control components of the system are integrated such that each element is capable of autonomously tasking the other. The ground in-situ network was deployed into the craters and around the flanks of Mount St. Helens in July 2009, and linked to the command and control of the Earth Observing One (EO-1) satellite. © 2010 IEEE.

  8. Model-based evaluation of subsurface monitoring networks for improved efficiency and predictive certainty of regional groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, M. J.; Wöhling, Th.; Moore, C. R.; Dann, R.; Scott, D. M.; Close, M.

    2012-04-01

    Groundwater resources worldwide are increasingly under pressure. Demands from different local stakeholders add to the challenge of managing this resource. In response, groundwater models have become popular to make predictions about the impact of different management strategies and to estimate possible impacts of changes in climatic conditions. These models can assist to find optimal management strategies that comply with the various stakeholder needs. Observations of the states of the groundwater system are essential for the calibration and evaluation of groundwater flow models, particularly when they are used to guide the decision making process. On the other hand, installation and maintenance of observation networks are costly. Therefore it is important to design monitoring networks carefully and cost-efficiently. In this study, we analyse the Central Plains groundwater aquifer (~ 4000 km2) between the Rakaia and Waimakariri rivers on the Eastern side of the Southern Alps in New Zealand. The large sedimentary groundwater aquifer is fed by the two alpine rivers and by recharge from the land surface. The area is mainly under agricultural land use and large areas of the land are irrigated. The other major water use is the drinking water supply for the city of Christchurch. The local authority in the region, Environment Canterbury, maintains an extensive groundwater quantity and quality monitoring programme to monitor the effects of land use and discharges on groundwater quality, and the suitability of the groundwater for various uses, especially drinking-water supply. Current and projected irrigation water demand has raised concerns about possible impacts on groundwater-dependent lowland streams. We use predictive uncertainty analysis and the Central Plains steady-state groundwater flow model to evaluate the worth of pressure head observations in the existing groundwater well monitoring network. 
The data worth of particular observations is dependent on the problem-specific prediction target under consideration. Therefore, the worth of individual observation locations may differ for different prediction targets. Our evaluation is based on predictions of lowland stream discharge resulting from changes in land use and irrigation in the upper Central Plains catchment. In our analysis, we adopt the model predictive uncertainty analysis method by Moore and Doherty (2005) which accounts for contributions from both measurement errors and uncertain structural heterogeneity. The method is robust and efficient due to a linearity assumption in the governing equations and readily implemented for application in the model-independent parameter estimation and uncertainty analysis toolkit PEST (Doherty, 2010). The proposed methods can be applied not only for the evaluation of monitoring networks, but also for the optimization of networks, to compare alternative monitoring strategies, as well as to identify best cost-benefit monitoring design even prior to any data acquisition.
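    The data-worth calculation can be illustrated with a toy first-order (linearized) version of the Moore and Doherty-style analysis (two parameters, one candidate observation at a time; all numbers and well names are hypothetical). The worth of an observation is the drop in predictive variance it would produce if assimilated:

```python
def predictive_variance(C, y, x=None, noise_var=0.1):
    """Linearized predictive variance of s = y·p with prior parameter
    covariance C, optionally conditioned on one observation o = x·p + e:
        var_post = y'Cy - (y'Cx)^2 / (x'Cx + noise_var)
    (a toy two-parameter stand-in for the full PEST-style analysis)."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    mat = lambda M, v: [dot(row, v) for row in M]
    var = dot(y, mat(C, y))                       # prior predictive variance
    if x is not None:
        var -= dot(y, mat(C, x)) ** 2 / (dot(x, mat(C, x)) + noise_var)
    return var

C = [[1.0, 0.3], [0.3, 2.0]]     # prior covariance of two recharge parameters
y = [0.5, 1.5]                   # sensitivity of the lowland-stream prediction
wells = {"well_A": [1.0, 0.1],   # head sensitivities of candidate wells
         "well_B": [0.2, 1.0]}
base = predictive_variance(C, y)
worth = {w: base - predictive_variance(C, y, x) for w, x in wells.items()}
print(max(worth, key=worth.get))  # prints "well_B": it best informs y
```

    Because the data worth depends on y, a different prediction target would rank the same wells differently, which is exactly the point made in the record above.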

  9. INFORMAS (International Network for Food and Obesity/non-communicable diseases Research, Monitoring and Action Support): overview and key principles.

    PubMed

    Swinburn, B; Sacks, G; Vandevijvere, S; Kumanyika, S; Lobstein, T; Neal, B; Barquera, S; Friel, S; Hawkes, C; Kelly, B; L'abbé, M; Lee, A; Ma, J; Macmullan, J; Mohan, S; Monteiro, C; Rayner, M; Sanders, D; Snowdon, W; Walker, C

    2013-10-01

    Non-communicable diseases (NCDs) dominate disease burdens globally and poor nutrition increasingly contributes to this global burden. Comprehensive monitoring of food environments, and evaluation of the impact of public and private sector policies on food environments is needed to strengthen accountability systems to reduce NCDs. The International Network for Food and Obesity/NCDs Research, Monitoring and Action Support (INFORMAS) is a global network of public-interest organizations and researchers that aims to monitor, benchmark and support public and private sector actions to create healthy food environments and reduce obesity, NCDs and their related inequalities. The INFORMAS framework includes two 'process' modules, that monitor the policies and actions of the public and private sectors, seven 'impact' modules that monitor the key characteristics of food environments and three 'outcome' modules that monitor dietary quality, risk factors and NCD morbidity and mortality. Monitoring frameworks and indicators have been developed for 10 modules to provide consistency, but allowing for stepwise approaches ('minimal', 'expanded', 'optimal') to data collection and analysis. INFORMAS data will enable benchmarking of food environments between countries, and monitoring of progress over time within countries. Through monitoring and benchmarking, INFORMAS will strengthen the accountability systems needed to help reduce the burden of obesity, NCDs and their related inequalities. © 2013 The Authors. Obesity Reviews published by John Wiley & Sons Ltd on behalf of the International Association for the Study of Obesity.

  10. Analysis of radio wave propagation for ISM 2.4 GHz Wireless Sensor Networks in inhomogeneous vegetation environments.

    PubMed

    Azpilicueta, Leire; López-Iturri, Peio; Aguirre, Erik; Mateo, Ignacio; Astrain, José Javier; Villadangos, Jesús; Falcone, Francisco

    2014-12-10

    The use of wireless networks has experienced exponential growth due to the improvements in terms of battery life and low consumption of the devices. However, it is compulsory to conduct previous radio propagation analysis when deploying a wireless sensor network. These studies are necessary to perform an estimation of the range coverage, in order to optimize the distance between devices in an actual network deployment. In this work, the radio channel characterization for ISM 2.4 GHz Wireless Sensor Networks (WSNs) in an inhomogeneous vegetation environment has been analyzed. This analysis allows designing environment monitoring tools based on ZigBee and WiFi where WSN and smartphones cooperate, providing rich and customized monitoring information to users in a friendly manner. The impact of topology as well as morphology of the environment is assessed by means of an in-house developed 3D Ray Launching code, to emulate the realistic operation in the framework of the scenario. Experimental results gathered from a measurement campaign conducted by deploying a ZigBee Wireless Sensor Network, are analyzed and compared with simulations in this paper. The scenario where this network is intended to operate is a combination of buildings and diverse vegetation species. To gain insight in the effects of radio propagation, a simplified vegetation model has been developed, considering the material parameters and simplified geometry embedded in the simulation scenario. An initial location-based application has been implemented in a real scenario, to test the functionality within a context aware scenario. The use of deterministic tools can aid to know the impact of the topological influence in the deployment of the optimal Wireless Sensor Network in terms of capacity, coverage and energy consumption, making the use of these systems attractive for multiple applications in inhomogeneous vegetation environments.

  11. Development of a Wireless Sensor Network for Individual Monitoring of Panels in a Photovoltaic Plant

    PubMed Central

    Prieto, Miguel J.; Pernía, Alberto M.; Nuño, Fernando; Díaz, Juan; Villegas, Pedro J.

    2014-01-01

    With photovoltaic (PV) systems proliferating in the last few years due to the high prices of fossil fuels and pollution issues, among others, it is extremely important to monitor the efficiency of these plants and optimize the energy production process. This will also result in improvements related to the maintenance and security of the installation. In order to do so, the main parameters in the plant must be continuously monitored so that the appropriate actions can be carried out. This monitoring should not only be carried out at a global level, but also at panel-level, so that a better understanding of what is actually happening in the PV plant can be obtained. This paper presents a system based on a wireless sensor network (WSN) that includes all the components required for such monitoring as well as a power supply obtaining the energy required by the sensors from the photovoltaic panels. The system proposed succeeds in identifying all the nodes in the network and provides real-time monitoring while tracking efficiency, features, failures and weaknesses from a single cell up to the whole infrastructure. Thus, the decision-making process is simplified, which contributes to reducing failures, wastes and, consequently, costs. PMID:24487622

  12. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2013-10-01

    Based on rainfall intensity-duration-frequency (IDF) curves fitted at several locations of a given area, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimization can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables, and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and the rainfall variogram structure, using a variance-reduction method. Hydrological variability was taken into account by implementing several return periods in the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed to compute the robust objective function; a short- and a long-term horizon were studied, and optimal networks were identified for each. The method was applied to northern Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau for 14 tipping-bucket rain gauges, with recording periods running from 1962 to 2001, depending on the station. The study concerns a hypothetical augmentation of the network as configured in 1973, a very significant year in Tunisia because of the exceptional regional flood of March 1973. That network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for minimum spatial density. It is therefore virtually augmented by 25, 50, 100 and 160%, the last being the rate that would meet WMO requirements. Results suggest that for a given augmentation level, the robust networks remain largely stable across the two time horizons.
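    The variance-reduction step solved by simulated annealing can be sketched as follows. This is a toy surrogate, not the paper's kriging-based objective: the hypothetical `mean_variance_proxy` scores a candidate network by the mean squared distance from grid points to the nearest gauge, standing in for the mean spatial kriging variance.

```python
import math
import random

def mean_variance_proxy(network, grid):
    """Surrogate for mean kriging variance: average squared distance
    from each grid point to its nearest gauge (illustrative stand-in
    for the variogram-based variance in the paper)."""
    total = 0.0
    for g in grid:
        total += min((g[0] - s[0]) ** 2 + (g[1] - s[1]) ** 2 for s in network)
    return total / len(grid)

def anneal(existing, candidates, k_new, grid, t0=1.0, cooling=0.995,
           steps=2000, seed=1):
    """Simulated annealing: repeatedly swap one chosen new site for a random
    candidate, accepting worse moves with Boltzmann probability."""
    rng = random.Random(seed)
    chosen = rng.sample(candidates, k_new)
    best = cur = mean_variance_proxy(existing + chosen, grid)
    best_set = list(chosen)
    t = t0
    for _ in range(steps):
        trial = list(chosen)
        trial[rng.randrange(k_new)] = rng.choice(candidates)
        val = mean_variance_proxy(existing + trial, grid)
        if val < cur or rng.random() < math.exp((cur - val) / max(t, 1e-12)):
            chosen, cur = trial, val
            if cur < best:
                best, best_set = cur, list(chosen)
        t *= cooling
    return best_set, best

# Toy augmentation: one existing gauge, choose 2 new sites on a 5x5 grid.
existing = [(0.0, 0.0)]
candidates = [(float(x), float(y)) for x in range(5) for y in range(5)]
grid = candidates
new_sites, obj = anneal(existing, candidates, k_new=2, grid=grid, steps=500)
```

A robust variant would evaluate the proxy for several return periods and minimize the worst (or weighted) case, as the paper does with its IDF-based objective.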

  13. Optimizing network connectivity for mobile health technologies in sub-Saharan Africa.

    PubMed

    Siedner, Mark J; Lankowski, Alexander; Musinga, Derrick; Jackson, Jonathon; Muzoora, Conrad; Hunt, Peter W; Martin, Jeffrey N; Bangsberg, David R; Haberer, Jessica E

    2012-01-01

    Mobile health (mHealth) technologies hold great promise for improving healthcare delivery in resource-limited settings, but network reliability across large catchment areas can be a major challenge. We performed an analysis of network failure frequency as part of a study of real-time adherence monitoring in rural Uganda. We hypothesized that adding short messaging service (SMS) to the standard cellular data modality (GPRS) would reduce network disruptions and improve transmission of data. Participants were enrolled in a study of real-time adherence monitoring in southwest Uganda. In June 2011, we began using Wisepill devices that transmit data each time the pill bottle is opened. We defined network failures as medication interruptions of >48 hours duration that were transmitted when network connectivity was re-established. During the course of the study, we upgraded devices from GPRS to GPRS+SMS compatibility. We compared network failure rates between GPRS and GPRS+SMS periods and created geospatial maps to graphically demonstrate patterns of connectivity. One hundred fifty-seven participants met the inclusion criteria of seven days of GPRS and seven days of GPRS+SMS observation time. Seventy-three percent were female, median age was 40 years (IQR 33-46), 39% reported >1-hour travel time to clinic and 17% had home electricity. One hundred one had GPS coordinates recorded and were included in the geospatial maps. The median numbers of network failures per person-month for the GPRS and GPRS+SMS modalities were 1.5 (IQR 1.0-2.2) and 0.3 (IQR 0-0.9), respectively (mean difference 1.2, 95% CI 1.0-1.3, p < 0.0001). Improvements in network connectivity were notable throughout the region. Study costs increased by approximately US$1 per person-month. Adding SMS to standard GPRS cellular network connectivity can significantly reduce network connection failures for mobile health applications in remote areas. Projects depending on mobile health data in resource-limited settings should consider this upgrade to optimize mHealth applications.

  14. Optimizing Environmental Monitoring Networks with Direction-Dependent Distance Thresholds.

    ERIC Educational Resources Information Center

    Hudak, Paul F.

    1993-01-01

    In the direction-dependent approach to location modeling developed herein, the distance within which a point of demand can find service from a facility depends on direction of measurement. The utility of the approach is illustrated through an application to groundwater remediation. (Author/MDH)

  15. A data fusion-based methodology for optimal redesign of groundwater monitoring networks

    NASA Astrophysics Data System (ADS)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    In this paper, a new data fusion-based methodology is presented for spatio-temporal (S-T) redesigning of Groundwater Level Monitoring Networks (GLMNs). The kriged maps of three different criteria (i.e. marginal entropy of water table levels, estimation error variances of mean values of water table levels, and estimation values of long-term changes in water level) are combined to delineate monitoring sub-areas of high and low priority, so that a different spatial pattern can be considered for each sub-area. The best spatial sampling scheme is selected by applying a new method in which a regular hexagonal gridding pattern and the Thiessen polygon approach are utilized in the sub-areas of high and low monitoring priority, respectively. An Artificial Neural Network (ANN) and an S-T kriging model are used to simulate water level fluctuations. To improve the accuracy of the predictions, results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine the two stations with maximum information value in the high- and low-priority sub-areas. The observed groundwater level data of these two stations are used to assess the power of trend detection and to estimate the periodic fluctuations and mean values of the stationary components, which in turn determine non-uniform sampling frequencies for the sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively, in the sub-areas of high and low monitoring priority leads to a more efficient monitoring network than the existing one containing 52 monitoring stations and monthly temporal sampling.
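    The fusion of ANN and S-T kriging predictions can be illustrated with inverse-variance weighting, a common data-fusion rule; the abstract does not specify the paper's exact fusion technique, so the form below is an assumption for illustration only.

```python
def fuse(pred_ann, var_ann, pred_krig, var_krig):
    """Inverse-variance weighting of two independent predictors: each
    estimate is weighted by the reciprocal of its error variance, and the
    fused variance is smaller than either input variance."""
    w_ann = 1.0 / var_ann
    w_krig = 1.0 / var_krig
    fused = (w_ann * pred_ann + w_krig * pred_krig) / (w_ann + w_krig)
    fused_var = 1.0 / (w_ann + w_krig)
    return fused, fused_var

# Hypothetical water-level predictions (m) with error variances:
est, var = fuse(10.0, 1.0, 20.0, 4.0)  # est = 12.0, var = 0.8
```

The fused estimate leans toward the lower-variance predictor, which is the intuition behind combining a data-driven ANN with a geostatistical model.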

  16. A Crowdsensing Based Analytical Framework for Perceptional Degradation of OTT Web Browsing.

    PubMed

    Li, Ke; Wang, Hai; Xu, Xiaolong; Du, Yu; Liu, Yuansheng; Ahmad, M Omair

    2018-05-15

    Service perception analysis is crucial for understanding both user experience and network quality, as well as for the maintenance and optimization of mobile networks. Given the rapid development of the mobile Internet and over-the-top (OTT) services, the conventional network-centric mode of network operation and maintenance is no longer effective. Therefore, developing an approach to evaluate and optimize users' service perceptions has become increasingly important. Meanwhile, the development of a new sensing paradigm, mobile crowdsensing (MCS), makes it possible to evaluate and analyze the user's OTT service perception from the end-user's point of view rather than from the network side. In this paper, the key factors that impact users' end-to-end OTT web browsing service perception are analyzed by monitoring crowdsourced user perceptions. The intrinsic relationships among the key factors and the interactions between key quality indicators (KQIs) are evaluated from several perspectives. Moreover, an analytical framework of perceptional degradation and a detailed algorithm are proposed whose goal is to identify the major factors that impact the perceptional degradation of the web browsing service, as well as the significance of their contributions. Finally, a case study is presented to show the effectiveness of the proposed method using a dataset crowdsensed from a large number of smartphone users in a real mobile network. The proposed analytical framework forms a valuable solution for mobile network maintenance and optimization and can help improve web browsing service perception and network quality.

  17. Recent developments in tissue-type imaging (TTI) for planning and monitoring treatment of prostate cancer.

    PubMed

    Feleppa, Ernest J; Porter, Christopher R; Ketterling, Jeffrey; Lee, Paul; Dasgupta, Shreedevi; Urban, Stella; Kalisz, Andrew

    2004-07-01

    Because current methods of imaging prostate cancer are inadequate, biopsies cannot be effectively guided and treatment cannot be effectively planned and targeted. Therefore, our research is aimed at ultrasonically characterizing cancerous prostate tissue so that we can image it more effectively and thereby provide improved means of detecting, treating and monitoring prostate cancer. We base our characterization methods on spectrum analysis of radiofrequency (rf) echo signals combined with clinical variables such as prostate-specific antigen (PSA). Tissue typing using these parameters is performed by artificial neural networks. We employed and evaluated different approaches to data partitioning into training, validation, and test sets and different neural network configuration options. In this manner, we sought to determine what neural network configuration is optimal for these data and also to assess possible bias that might exist due to correlations among different data entries among the data for a given patient. The classification efficacy of each neural network configuration and data-partitioning method was measured using relative-operating-characteristic (ROC) methods. Neural network classification based on spectral parameters combined with clinical data generally produced ROC-curve areas of 0.80 compared to curve areas of 0.64 for conventional transrectal ultrasound imaging combined with clinical data. We then used the optimal neural network configuration to generate lookup tables that translate local spectral parameter values and global clinical-variable values into pixel values in tissue-type images (TTIs). TTIs continue to show cancerous regions successfully, and may prove to be particularly useful clinically in combination with other ultrasonic and nonultrasonic methods, e.g., magnetic-resonance spectroscopy.
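    The ROC-curve areas reported above can be computed from classifier scores via the rank-sum identity: the area equals the probability that a randomly chosen positive sample scores above a randomly chosen negative one. A minimal sketch (the sample scores are illustrative, not the study's data):

```python
def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    count score pairs where the positive sample outranks the negative one,
    crediting ties with half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical network outputs for cancerous vs. benign regions:
auc = roc_auc([0.8, 0.4], [0.3, 0.6])  # 0.75
```

An AUC of 0.80 versus 0.64, as in the study, means the neural-network classifier ranks a cancerous region above a benign one far more reliably than conventional imaging does.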

  18. Recent Developments in Tissue-type Imaging (TTI) for Planning and Monitoring Treatment of Prostate Cancer

    PubMed Central

    Feleppa, Ernest J.; Porter, Christopher R.; Ketterling, Jeffrey; Lee, Paul; Dasgupta, Shreedevi; Urban, Stella; Kalisz, Andrew

    2006-01-01

    Because current methods of imaging prostate cancer are inadequate, biopsies cannot be effectively guided and treatment cannot be effectively planned and targeted. Therefore, our research is aimed at ultrasonically characterizing cancerous prostate tissue so that we can image it more effectively and thereby provide improved means of detecting, treating and monitoring prostate cancer. We base our characterization methods on spectrum analysis of radio frequency (rf) echo signals combined with clinical variables such as prostate-specific antigen (PSA). Tissue typing using these parameters is performed by artificial neural networks. We employed and evaluated different approaches to data partitioning into training, validation, and test sets and different neural network configuration options. In this manner, we sought to determine what neural network configuration is optimal for these data and also to assess possible bias that might exist due to correlations among different data entries among the data for a given patient. The classification efficacy of each neural network configuration and data-partitioning method was measured using relative-operating-characteristic (ROC) methods. Neural network classification based on spectral parameters combined with clinical data generally produced ROC-curve areas of 0.80 compared to curve areas of 0.64 for conventional transrectal ultrasound imaging combined with clinical data. We then used the optimal neural network configuration to generate lookup tables that translate local spectral parameter values and global clinical-variable values into pixel values in tissue-type images (TTIs). TTIs continue to show cancerous regions successfully, and may prove to be particularly useful clinically in combination with other ultrasonic and nonultrasonic methods, e.g., magnetic-resonance spectroscopy. PMID:15754797

  19. Multi-channel multi-radio using 802.11 based media access for sink nodes in wireless sensor networks.

    PubMed

    Campbell, Carlene E-A; Khan, Shafiullah; Singh, Dhananjay; Loo, Kok-Keong

    2011-01-01

    Next-generation surveillance and multimedia systems will be increasingly deployed as wireless sensor networks in order to monitor parks, public places, and business premises. The convergence of data and telecommunication over IP-based networks has paved the way for wireless networks, and functions are becoming more intertwined by the compelling force of innovation and technology. For example, many closed-circuit TV premises surveillance systems now rely on transmitting their images and data over IP networks instead of standalone video circuits. In the future, these systems will increasingly rely on wireless networks, and on IEEE 802.11 networks in particular. However, limited non-overlapping channels, delay, and congestion will cause problems at sink nodes. In this paper we provide necessary conditions to verify the feasibility of the round-robin technique at the sink nodes of these networks, using a technique to regulate multi-radio, multi-channel assignment. We demonstrate through simulations that a dynamic channel assignment scheme using a multi-radio, multi-channel configuration at a single sink node can perform close to optimal on average, while multiple-sink-node assignment also performs well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and optimizing different performance objectives.
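    The round-robin regulation of multi-radio, multi-channel assignment at a sink node can be sketched as below; the radio names and the 802.11b/g non-overlapping channel set {1, 6, 11} are illustrative, not taken from the paper's simulation setup.

```python
from itertools import cycle

def round_robin_assign(radios, channels):
    """Assign channels to sink-node radios in round-robin order, wrapping
    around when there are more radios than non-overlapping channels."""
    ch = cycle(channels)
    return {radio: next(ch) for radio in radios}

# Four sink radios sharing the three 2.4 GHz non-overlapping channels:
assign = round_robin_assign(["r0", "r1", "r2", "r3"], [1, 6, 11])
# r3 wraps back to channel 1, so r0 and r3 would contend
```

The wrap-around is exactly where contention reappears, which is why the paper's dynamic scheme re-evaluates assignments rather than fixing them statically.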

  20. Multi-Channel Multi-Radio Using 802.11 Based Media Access for Sink Nodes in Wireless Sensor Networks

    PubMed Central

    Campbell, Carlene E.-A.; Khan, Shafiullah; Singh, Dhananjay; Loo, Kok-Keong

    2011-01-01

    Next-generation surveillance and multimedia systems will be increasingly deployed as wireless sensor networks in order to monitor parks, public places, and business premises. The convergence of data and telecommunication over IP-based networks has paved the way for wireless networks, and functions are becoming more intertwined by the compelling force of innovation and technology. For example, many closed-circuit TV premises surveillance systems now rely on transmitting their images and data over IP networks instead of standalone video circuits. In the future, these systems will increasingly rely on wireless networks, and on IEEE 802.11 networks in particular. However, limited non-overlapping channels, delay, and congestion will cause problems at sink nodes. In this paper we provide necessary conditions to verify the feasibility of the round-robin technique at the sink nodes of these networks, using a technique to regulate multi-radio, multi-channel assignment. We demonstrate through simulations that a dynamic channel assignment scheme using a multi-radio, multi-channel configuration at a single sink node can perform close to optimal on average, while multiple-sink-node assignment also performs well. The methods proposed in this paper can be a valuable tool for network designers in planning network deployment and optimizing different performance objectives. PMID:22163883

  1. Optimization of a Time-Lapse Gravity Network for Carbon Sequestration

    NASA Astrophysics Data System (ADS)

    Appriou, D.; Strickland, C. E.; Ruprecht Yonkofski, C. M.

    2017-12-01

    The objective of this study is to evaluate what a comprehensive, optimal, state-of-the-art gravity monitoring network would look like that meets the UIC Class VI regulation and ensures that 90% of the injected CO2 remains underground. Time-lapse gravity surveys have a long history of effective application in monitoring temporal density changes in the subsurface, and gravity measurements have been used for decades across a wide range of applications. Interest in time-lapse gravity surveys for monitoring carbon sequestration sites is more recent. Successful deployment at such sites depends upon a combination of favorable conditions, such as the reservoir geometry, depth, thickness, the density change over time induced by the CO2 injection, and the location of the instrument. In most cases, the density changes induced by the CO2 plume are not detectable from the surface, but borehole gravimeters can provide excellent results. In the framework of the National Risk Assessment Partnership (NRAP) funded by the Department of Energy, the effectiveness of gravity monitoring of a CO2 storage site has been assessed using multiple synthetic scenarios implemented on a community model developed for the Kimberlina site (e.g., fault leakage scenarios, borehole leakage). The Kimberlina carbon sequestration project was a pilot project in the southern San Joaquin Valley, California, which aimed to inject 250,000 t CO2/yr safely for four years. Although the project was cancelled in 2012, the site characterization efforts resulted in the development of a geologic model. In this study, we present the results of time-lapse gravity monitoring applied to different multiphase flow and reactive transport models developed by Lawrence Berkeley National Laboratory (i.e., no leakage, permeable fault zone, wellbore leakage). Our monitoring approach considers an ideal network, consisting of multiple vertical and horizontal instrumented boreholes that could be used to track the CO2 plume and potential leaks. A preliminary cost estimate will also be provided.

  2. Establishment of Stereo Multi-sensor Network for Giant Landslide Monitoring and its Deploy in Xishan landslide, Sichuan, China.

    NASA Astrophysics Data System (ADS)

    Liu, C.; Lu, P.; WU, H.

    2015-12-01

    Landslides are among the most destructive natural disasters, severely affecting human lives as well as the safety of personal property and public infrastructure. Monitoring and predicting landslide movements can maintain an adequate safety level for people in such situations. This paper presents a newly developed Stereo Multi-sensor Landslide Monitoring Network (SMSLMN) based on a uniform temporal geo-reference. As early as 2003, the SAMOA (Surveillance et Auscultation des Mouvements de Terrain Alpins) project was put forward as a plan for monitoring landslide movements; however, it did not establish a stereo observation network fully covering the surface and interior of a landslide. The SMSLMN integrates various sensors, including space-borne, airborne, in-situ and underground sensors, which can quantitatively monitor the slide body and obtain precursory information on movement at high frequency and high resolution. The whole network has been deployed at the Xishan landslide, Sichuan, P.R. China. According to the characteristics of the stereo monitoring sensors, observation-capability indicators were proposed for the different sensors in order to obtain optimal sensor combinations and observation strategies. Meanwhile, adaptive networking and reliable data communication methods were developed to support intelligent observation and sensor data transmission. Key technologies, such as signal amplification and intelligence extraction, adaptive adjustment of data access frequency, and synchronization control across different sensors, were developed to overcome the problems of a complex observation environment. The collaboratively observed data have been transferred to a remote data center thousands of miles from the landslide site. These data were introduced into a landslide stability analysis model, and preliminary conclusions are presented at the end of the paper.

  3. Reliability and availability evaluation of Wireless Sensor Networks for industrial applications.

    PubMed

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach to evaluating permanent faults prevents system designers from making decisions that minimize such occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks whose reliability and availability requirements are to be optimized.
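    For independent permanent faults, evaluating such a fault tree reduces to combining child unavailabilities through AND and OR gates; the minimal sketch below uses illustrative failure probabilities and does not reproduce the paper's automatic tree-generation step.

```python
def and_gate(unavailabilities):
    """AND gate: the subsystem fails only if all children fail
    (models redundancy); unavailabilities are independent probabilities."""
    p = 1.0
    for q in unavailabilities:
        p *= q
    return p

def or_gate(unavailabilities):
    """OR gate: the subsystem fails if any child fails
    (models a series dependency)."""
    p = 1.0
    for q in unavailabilities:
        p *= (1.0 - q)
    return 1.0 - p

# Hypothetical topology: two redundant routers (AND) feeding one gateway (OR).
q_routers = and_gate([0.01, 0.01])       # redundancy drives this to 1e-4
q_system = or_gate([q_routers, 0.001])   # the single gateway now dominates
availability = 1.0 - q_system
```

Nesting these two gate functions over an arbitrary tree is enough to evaluate any topology with independent faults, which is the quantitative core the paper's generated fault trees feed into.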

  4. Reliability and Availability Evaluation of Wireless Sensor Networks for Industrial Applications

    PubMed Central

    Silva, Ivanovitch; Guedes, Luiz Affonso; Portugal, Paulo; Vasques, Francisco

    2012-01-01

    Wireless Sensor Networks (WSN) currently represent the best candidate to be adopted as the communication solution for the last-mile connection in process control and monitoring applications in industrial environments. Most of these applications have stringent dependability (reliability and availability) requirements, as a system failure may result in economic losses, put people in danger or lead to environmental damage. Among the different types of faults that can lead to a system failure, permanent faults on network devices have a major impact. They can hamper communications over long periods of time and consequently disturb, or even disable, control algorithms. The lack of a structured approach to evaluating permanent faults prevents system designers from making decisions that minimize such occurrences. In this work we propose a methodology based on the automatic generation of a fault tree to evaluate the reliability and availability of Wireless Sensor Networks when permanent faults occur on network devices. The proposal supports any topology, different levels of redundancy, network reconfigurations, criticality of devices and arbitrary failure conditions. The proposed methodology is particularly suitable for the design and validation of Wireless Sensor Networks whose reliability and availability requirements are to be optimized. PMID:22368497

  5. A spectral profile multiplexed FBG sensor network with application to strain measurement in a Kevlar woven fabric

    NASA Astrophysics Data System (ADS)

    Guo, Guodong; Hackney, Drew; Pankow, Mark; Peters, Kara

    2017-04-01

    A spectral-profile division multiplexed fiber Bragg grating (FBG) sensor network is described in this paper. The unique spectral profile of each sensor in the network is identified as a distinct feature to be interrogated, and spectrum overlap is allowed under working conditions; thus, a specific wavelength window does not need to be allocated to each sensor as in a wavelength division multiplexed (WDM) network. When the sensors are serially connected in the network, the spectral output is expressed through a truncated series. To track the wavelength shift of each sensor, the identification problem is transformed into a nonlinear optimization problem, which is then solved by a modified dynamic multi-swarm particle swarm optimizer (DMS-PSO). To demonstrate the application of the developed network, a network consisting of four FBGs was integrated into a Kevlar woven fabric placed under a quasi-static load imposed by an impactor head. Because of the substantial radial strain in the fabric, the spectra of the different FBGs were found to overlap during loading. With the developed interrogation method, the overlapping spectra can be distinguished, and thus the wavelength shift of each sensor can be monitored.
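    The particle-swarm core of such an interrogation method can be sketched as a plain single-swarm PSO; the paper's modified dynamic multi-swarm variant (DMS-PSO) and its spectral-mismatch objective are not reproduced here, so the quadratic objective below is a toy stand-in for fitting the measured summed spectrum.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=200, seed=0):
    """Global-best PSO: each particle is pulled toward its personal best and
    the swarm best, with an inertia-weighted velocity update."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = list(pbest[g]), pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = list(pos[i]), val
                if val < gbest_val:
                    gbest, gbest_val = list(pos[i]), val
    return gbest, gbest_val

# Toy stand-in for the spectral-mismatch objective: recover two hypothetical
# Bragg wavelength shifts (nm) that minimize a quadratic mismatch.
target = [0.35, -0.12]
obj = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
shifts, err = pso_minimize(obj, [(-1.0, 1.0), (-1.0, 1.0)])
```

In the real problem the objective compares the modeled truncated-series spectrum against the measured one, which is multimodal when profiles overlap; this is what motivates the multi-swarm modification.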

  6. Remote Autonomous Sensor Networks: A Study in Redundancy and Life Cycle Costs

    NASA Astrophysics Data System (ADS)

    Ahlrichs, M.; Dotson, A.; Cenek, M.

    2017-12-01

    The remoteness of the United States-Canada border and its extreme seasonal shifts have made monitoring much of the area impossible using conventional techniques. Currently, the United States has large gaps in its ability to detect movement on an as-needed basis in remote areas. The proposed autonomous sensor network aims to meet that need with a product that is low cost, robust, and deployable on an as-needed basis for short-term monitoring events, detecting radio-frequency and acoustic disturbances. This project aims to validate the proposed design and offer optimization strategies by building a redundancy model and performing a Life Cycle Assessment (LCA). The model will incorporate topographical, meteorological, and land cover datasets to estimate sensor loss over a three-month period, ensuring that the remaining network does not have significant coverage gaps that would preclude receiving and transmitting data. The LCA will investigate the materials used to create the sensors, estimate the total environmental energy used to create the network, and suggest alternative materials and distribution methods that can lower this cost. The platform can function as a stand-alone monitoring network or add spatial and temporal resolution to existing monitoring networks. This study aims to create a framework for determining whether a sensor's design and distribution are appropriate for the target environment. The LCA component will seek to answer whether the data a proposed sensor network will collect outweighs the environmental damage resulting from its deployment. Furthermore, as the Arctic continues to thaw and economic development grows, the methodology described in this paper will serve as a guidance document to ensure that future sensor networks have minimal impact on these pristine areas.

  7. Using Differential Evolution to Optimize Learning from Signals and Enhance Network Security

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmer, Paul K; Temple, Michael A; Buckner, Mark A

    2011-01-01

    Computer and communication network attacks are commonly orchestrated through Wireless Access Points (WAPs). This paper summarizes proof-of-concept research aimed at developing a physical-layer Radio Frequency (RF) air monitoring capability to limit unauthorized WAP access and improve network security. This is done using Differential Evolution (DE) to optimize the performance of a Learning from Signals (LFS) classifier implemented with RF Distinct Native Attribute (RF-DNA) fingerprints. Performance of the resulting DE-optimized LFS classifier is demonstrated using 802.11a WiFi devices under the most challenging conditions of intra-manufacturer classification, i.e., using emissions of like-model devices that differ only in serial number. Using identical classifier input features, performance of the DE-optimized LFS classifier is assessed relative to a Multiple Discriminant Analysis / Maximum Likelihood (MDA/ML) classifier that has been used for previous demonstrations. The comparative assessment is made using both Time Domain (TD) and Spectral Domain (SD) fingerprint features. For all combinations of classifier type, feature type, and signal-to-noise ratio considered, results show that the DE-optimized LFS classifier with TD features is superior and provides up to 20% improvement in classification accuracy with proper selection of DE parameters.
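    The family of algorithm tuned in the paper can be sketched as classic DE/rand/1/bin; the LFS classifier and RF-DNA features are not reproduced here, so a simple sphere function stands in for classification error, and the F/CR values are generic textbook defaults, not the paper's tuned parameters.

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           gens=150, seed=0):
    """DE/rand/1/bin: mutate with a scaled difference of two random members
    added to a third, binomially cross over with the parent, and keep the
    trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantees at least one mutated gene
            trial = []
            for d in range(dim):
                if rng.random() < CR or d == j_rand:
                    v = pop[a][d] + F * (pop[b][d] - pop[c][d])
                    v = min(max(v, bounds[d][0]), bounds[d][1])
                else:
                    v = pop[i][d]
                trial.append(v)
            tf = f(trial)
            if tf <= fit[i]:
                pop[i], fit[i] = trial, tf
        # greedy selection keeps the population monotonically improving
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

# Toy objective standing in for classifier error:
sphere = lambda x: sum(v * v for v in x)
x_best, err = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
```

In the paper's setting, `f` would be the (negated) classification accuracy of the LFS classifier for a given parameter vector, which is exactly where the choice of F, CR, and population size matters.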

  8. Optimal dynamic voltage scaling for wireless sensor nodes with real-time constraints

    NASA Astrophysics Data System (ADS)

    Cassandras, Christos G.; Zhuang, Shixin

    2005-11-01

    Sensors are increasingly embedded in manufacturing systems and wirelessly networked to monitor and manage operations ranging from process and inventory control to tracking equipment and even post-manufacturing product monitoring. In building such sensor networks, a critical issue is the limited and hard to replenish energy in the devices involved. Dynamic voltage scaling is a technique that controls the operating voltage of a processor to provide desired performance while conserving energy and prolonging the overall network's lifetime. We consider such power-limited devices processing time-critical tasks which are non-preemptive, aperiodic and have uncertain arrival times. We treat voltage scaling as a dynamic optimization problem whose objective is to minimize energy consumption subject to hard or soft real-time execution constraints. In the case of hard constraints, we build on prior work (which engages a voltage scaling controller at task completion times) by developing an intra-task controller that acts at all arrival times of incoming tasks. We show that this optimization problem can be decomposed into two simpler ones whose solution leads to an algorithm that does not actually require solving any nonlinear programming problems. In the case of soft constraints, this decomposition must be partly relaxed, but it still leads to a scalable (linear in the number of tasks) algorithm. Simulation results are provided to illustrate performance improvements in systems with intra-task controllers compared to uncontrolled systems or those using inter-task control.
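    For a convex power model, the energy-optimal schedule for a single hard-deadline task runs at the slowest feasible processor speed. The sketch below captures only that baseline intuition; the coefficient k and the simple energy-per-task model proportional to f² are assumptions, and the paper's intra-task controller for uncertain, aperiodic arrivals is considerably richer.

```python
def optimal_speed(cycles, deadline_s, f_min, f_max, k=1e-18):
    """With power roughly cubic in frequency, energy per task scales with f^2,
    so the energy-optimal choice is the slowest frequency that still meets
    the hard deadline (k is an illustrative J-per-cycle-per-Hz^2 coefficient)."""
    f_needed = cycles / deadline_s            # Hz required to just finish on time
    if f_needed > f_max:
        raise ValueError("deadline infeasible even at f_max")
    f = max(f_needed, f_min)
    energy = k * cycles * f ** 2
    return f, energy

# Hypothetical task: 1e6 cycles due in 1 s on a 0.5-2 MHz processor.
f, e = optimal_speed(1e6, 1.0, 5e5, 2e6)  # runs at 1 MHz, the slowest feasible speed
```

With multiple queued tasks and uncertain arrivals, the single-task rule no longer applies directly, which is what the paper's decomposition of the dynamic optimization addresses.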

  9. Fluid status monitoring with a wireless network to reduce cardiovascular-related hospitalizations and mortality in heart failure: rationale and design of the OptiLink HF Study (Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink)

    PubMed Central

    Brachmann, Johannes; Böhm, Michael; Rybak, Karin; Klein, Gunnar; Butter, Christian; Klemm, Hanno; Schomburg, Rolf; Siebermair, Johannes; Israel, Carsten; Sinha, Anil-Martin; Drexler, Helmut

    2011-01-01

    Aims The Optimization of Heart Failure Management using OptiVol Fluid Status Monitoring and CareLink (OptiLink HF) study is designed to investigate whether OptiVol fluid status monitoring with an automatically generated wireless CareAlert notification via the CareLink Network can reduce all-cause death and cardiovascular hospitalizations in an HF population, compared with standard clinical assessment. Methods Patients with newly implanted or replacement cardioverter-defibrillator devices, with or without cardiac resynchronization therapy, who have chronic HF in New York Heart Association class II or III and a left ventricular ejection fraction ≤35% will be eligible to participate. Following device implantation, patients are randomized to either OptiVol fluid status monitoring through CareAlert notification or regular care (OptiLink 'on' vs. 'off'). The primary endpoint is a composite of all-cause death or cardiovascular hospitalization. It is estimated that 1000 patients will be required to demonstrate, with 80% power, that the intervention reduces the primary outcome by 30%. Conclusion The OptiLink HF study is designed to investigate whether early detection of congestion reduces mortality and cardiovascular hospitalization in patients with chronic HF. The study is expected to close recruitment in September 2012 and to report first results in May 2014. ClinicalTrials.gov Identifier: NCT00769457 PMID:21555324

  10. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  11. Establishing a Multi-spatial Wireless Sensor Network to Monitor Nitrate Concentrations in Soil Moisture

    NASA Astrophysics Data System (ADS)

    Haux, E.; Busek, N.; Park, Y.; Estrin, D.; Harmon, T. C.

    2004-12-01

    The use of reclaimed wastewater for irrigation in agriculture can be a significant source of nutrients, in particular nitrogen species, but its use raises concerns for groundwater, riparian, and surface water quality. A 'smart' technology would have the ability to measure wastewater nutrients as they enter the irrigation system, monitor their transport in situ, and optimally control inputs with little human intervention, all in real time. Soil heterogeneity and economic issues require, however, a balance between cost and the spatial and temporal scales of the monitoring effort. To strike this balance, a wireless embedded sensor network, deployed vertically across the soil horizon, collects, processes, and transmits sensor data. The network consists of several networked nodes or 'pylons', each outfitted with an array of sensors measuring humidity, temperature, precipitation, soil moisture, and aqueous nitrate concentrations. Individual sensor arrays are controlled by a MICA2 mote (Crossbow Technology Inc., San Jose, CA) programmed with TinyOS (University of California, Berkeley, CA) and a Stargate (Crossbow Technology Inc., San Jose, CA) base station that transmits data over GPRS. Results are reported for the construction and testing of a prototype pylon on the benchtop and in the field.

  12. Design and performance of an integrated ground and space sensor web for monitoring active volcanoes.

    NASA Astrophysics Data System (ADS)

    Lahusen, Richard; Song, Wenzhan; Kedar, Sharon; Shirazi, Behrooz; Chien, Steve; Doubleday, Joshua; Davies, Ashley; Webb, Frank; Dzurisin, Dan; Pallister, John

    2010-05-01

    An interdisciplinary team of computer, earth and space scientists collaborated to develop a sensor web system for rapid deployment at active volcanoes. The primary goals of this Optimized Autonomous Space In situ Sensorweb (OASIS) are to: 1) integrate complementary space and in situ (ground-based) elements into an interactive, autonomous sensor web; 2) advance sensor web power and communication resource management technology; and 3) enable scalability for the seamless addition of sensors and other satellites into the sensor web. This three-year project began with a rigorous multidisciplinary interchange that resulted in the definition of system requirements to guide the design of the OASIS network and to achieve the stated project goals. Based on those guidelines, we have developed fully self-contained in situ nodes that integrate GPS, seismic, infrasonic and lightning (ash) detection sensors. The nodes in the wireless sensor network are linked to the ground control center through a mesh network that is highly optimized for remote geophysical monitoring. OASIS also features autonomous bidirectional interaction between ground nodes and instruments on the EO-1 space platform through continuous analysis and messaging capabilities at the command and control center. Data from both the in situ sensors and satellite-borne hyperspectral imaging sensors stream into a common database for real-time visualization and analysis by earth scientists. We have successfully completed a field deployment of 15 nodes within the crater and on the flanks of Mount St. Helens, Washington. The deployment demonstrated that sensor web technology facilitates rapid network installation and that real-time continuous data acquisition is achievable. We are now optimizing component performance and improving user interaction for additional deployments at erupting volcanoes in 2010.

  13. Development of a method of robust rain gauge network optimization based on intensity-duration-frequency results

    NASA Astrophysics Data System (ADS)

    Chebbi, A.; Bargaoui, Z. K.; da Conceição Cunha, M.

    2012-12-01

    Based on rainfall intensity-duration-frequency (IDF) curves, a robust optimization approach is proposed to identify the best locations to install new rain gauges. The advantage of robust optimization is that the resulting design solutions yield networks which behave acceptably under hydrological variability. Robust optimisation can overcome the problem of selecting representative rainfall events when building the optimization process. This paper reports an original approach based on Montana IDF model parameters. The latter are assumed to be geostatistical variables and their spatial interdependence is taken into account through the adoption of cross-variograms in the kriging process. The problem of optimally locating a fixed number of new monitoring stations based on an existing rain gauge network is addressed. The objective function is based on the mean spatial kriging variance and rainfall variogram structure using a variance-reduction method. Hydrological variability was taken into account by considering and implementing several return periods to define the robust objective function. Variance minimization is performed using a simulated annealing algorithm. In addition, knowledge of the time horizon is needed for the computation of the robust objective function. A short and a long term horizon were studied, and optimal networks are identified for each. The method developed is applied to north Tunisia (area = 21 000 km2). Data inputs for the variogram analysis were IDF curves provided by the hydrological bureau and available for 14 tipping bucket type rain gauges. The recording period was from 1962 to 2001, depending on the station. The study concerns an imaginary network augmentation based on the network configuration in 1973, which is a very significant year in Tunisia because there was an exceptional regional flood event in March 1973. 
    This network consisted of 13 stations and did not meet World Meteorological Organization (WMO) recommendations for minimum spatial density. It is therefore proposed to virtually augment it by 25, 50, 100 and 160%, the last being the rate that would meet WMO requirements. Results suggest that, for a given augmentation, robust networks remain stable overall for the two time horizons.
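
    The core loop of such a placement search can be sketched with simulated annealing. This is a hedged toy illustration, not the study's implementation: the mean squared distance to the nearest gauge stands in for the mean kriging variance objective, and the grid, candidate set, and cooling schedule are invented for the example:

```python
import math, random

# Toy simulated-annealing search for k new rain-gauge locations. The real
# objective in the paper is the mean spatial kriging variance from the IDF
# variograms; here mean squared distance to the nearest gauge is a stand-in.
random.seed(1)

existing = [(2.0, 2.0), (8.0, 7.0)]                       # current network
candidates = [(x, y) for x in range(10) for y in range(10)]
grid = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]

def objective(new_sites):
    """Mean squared distance from each grid point to its nearest gauge."""
    gauges = existing + list(new_sites)
    total = 0.0
    for p in grid:
        total += min((p[0] - g[0]) ** 2 + (p[1] - g[1]) ** 2 for g in gauges)
    return total / len(grid)

def anneal(k=3, steps=2000, t0=5.0):
    state = random.sample(candidates, k)
    val = best_val = objective(state)
    best = list(state)
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
        neighbor = list(state)
        neighbor[random.randrange(k)] = random.choice(candidates)
        nval = objective(neighbor)
        # accept improvements always, worsenings with Boltzmann probability
        if nval < val or random.random() < math.exp((val - nval) / t):
            state, val = neighbor, nval
            if val < best_val:
                best, best_val = list(state), val
    return best, best_val

sites, var = anneal()
```

    Running the robust variant simply means evaluating `objective` under several return periods (variogram sets) and annealing on the aggregate.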

  14. A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks

    PubMed Central

    Gil, Joon-Min; Han, Youn-Hee

    2011-01-01

    As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387
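
    The cover-set idea can be illustrated with a minimal genetic algorithm. This is an assumed toy instance, not the authors' MSCD implementation: the coverage sets, fitness shaping, and GA parameters are all invented for the sketch:

```python
import random

# Toy GA searching for a small "cover set": a subset of directional sensors
# that still observes every target, so the remaining sensors can sleep.
# Chromosome = binary activation vector over sensors.
random.seed(7)

N_SENSORS, TARGETS = 12, set(range(6))
# Hypothetical sensing sectors: which targets each sensor can observe.
coverage = [set(random.sample(sorted(TARGETS), random.randint(1, 3)))
            for _ in range(N_SENSORS)]
coverage[0] = set(TARGETS)   # guarantee the toy instance is feasible

def fitness(bits):
    active = [coverage[i] for i, b in enumerate(bits) if b]
    seen = set().union(*active) if active else set()
    if seen != TARGETS:
        return -100 + len(seen)       # infeasible: reward partial coverage
    return N_SENSORS - sum(bits)      # feasible: fewer active sensors is better

def evolve(pop_size=30, gens=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(N_SENSORS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)                  # tournament 1
            p1 = a if fitness(a) >= fitness(b) else b
            c, d = random.sample(pop, 2)                  # tournament 2
            p2 = c if fitness(c) >= fitness(d) else d
            cut = random.randrange(1, N_SENSORS)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
active = [i for i, b in enumerate(best) if b]
```

    The paper's scheme repeats this search to produce multiple disjoint-in-time cover sets and rotates through them to stretch network lifetime.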

  15. Secure and Time-Aware Communication of Wireless Sensors Monitoring Overhead Transmission Lines.

    PubMed

    Mazur, Katarzyna; Wydra, Michal; Ksiezopolski, Bogdan

    2017-07-11

    Existing transmission power grids suffer from high maintenance costs and scalability issues, along with a lack of effective and secure system monitoring. To address these problems, we propose to use Wireless Sensor Networks (WSNs) as a technology to achieve energy-efficient, reliable, and low-cost remote monitoring of transmission grids. With WSNs, the smart grid enables both utilities and customers to monitor, predict and manage energy usage effectively and to react to possible power grid disturbances in a timely manner. However, the increased application of WSNs also introduces new security challenges, especially related to privacy, connectivity, and security management, repeatedly causing unpredicted expenditures. In monitoring the status of the power system, a large number of sensors generates a massive amount of sensitive data. In order to build an effective Wireless Sensor Network (WSN) for a smart grid, we focus on designing a methodology for efficient and secure delivery of the data measured on transmission lines. We perform a set of simulations in which we examine different routing algorithms, security mechanisms and WSN deployments in order to select parameters that fulfill their role and ensure security without affecting delivery time. Furthermore, we analyze the optimal placement of direct wireless links, aiming at minimizing time delays, balancing network performance and decreasing deployment costs.

  16. Secure and Time-Aware Communication of Wireless Sensors Monitoring Overhead Transmission Lines

    PubMed Central

    Mazur, Katarzyna; Wydra, Michal; Ksiezopolski, Bogdan

    2017-01-01

    Existing transmission power grids suffer from high maintenance costs and scalability issues, along with a lack of effective and secure system monitoring. To address these problems, we propose to use Wireless Sensor Networks (WSNs) as a technology to achieve energy-efficient, reliable, and low-cost remote monitoring of transmission grids. With WSNs, the smart grid enables both utilities and customers to monitor, predict and manage energy usage effectively and to react to possible power grid disturbances in a timely manner. However, the increased application of WSNs also introduces new security challenges, especially related to privacy, connectivity, and security management, repeatedly causing unpredicted expenditures. In monitoring the status of the power system, a large number of sensors generates a massive amount of sensitive data. In order to build an effective Wireless Sensor Network (WSN) for a smart grid, we focus on designing a methodology for efficient and secure delivery of the data measured on transmission lines. We perform a set of simulations in which we examine different routing algorithms, security mechanisms and WSN deployments in order to select parameters that fulfill their role and ensure security without affecting delivery time. Furthermore, we analyze the optimal placement of direct wireless links, aiming at minimizing time delays, balancing network performance and decreasing deployment costs. PMID:28696390

  17. Bank supervision using the Threshold-Minimum Dominating Set

    NASA Astrophysics Data System (ADS)

    Gogas, Periklis; Papadimitriou, Theophilos; Matthaiou, Maria-Artemis

    2016-06-01

    An optimized, healthy and stable banking system resilient to financial crises is a prerequisite for sustainable growth. Minimization of (a) the associated systemic risk and (b) the propagation of contagion in the case of a banking crisis are necessary conditions to achieve this goal. Central Banks are in charge of this significant undertaking via a close and detailed monitoring of the banking network. In this paper, we propose the use of an auxiliary supervision/monitoring system that is both efficient with respect to the required resources and can promptly identify a set of banks that are in distress so that immediate and appropriate action can be taken by the supervising authority. We use the network defined by the interrelations between banking institutions employing tools from Complex Networks theory for an efficient management of the entire banking network. In doing so, we introduce the Threshold Minimum Dominating Set (T-MDS). The T-MDS is used to identify the smallest and most efficient subset of banks that can be used as (a) sensors of distress of a manifesting banking crisis and (b) provide a path of possible contagion. We propose the use of this method as a supplementary monitoring tool in the arsenal of a Central Bank. Our dataset includes the 122 largest American banks in terms of their interbank loans. The empirical results show that when the T-MDS methodology is applied, we can have an efficient supervision of the whole banking network, by monitoring just a subset of 47 banks.
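
    The selection of monitor banks can be sketched with the standard greedy dominating-set heuristic applied after thresholding weak links. The interbank graph, threshold value, and tie-breaking below are illustrative assumptions; the paper's exact T-MDS construction may differ:

```python
# Greedy sketch of the Threshold-Minimum Dominating Set idea: drop interbank
# links below an exposure threshold, then repeatedly pick the bank that
# dominates the most still-uncovered banks until every bank is a monitor or
# adjacent to one. The weighted graph below is hypothetical.

# (bank_a, bank_b, exposure) -- illustrative interbank-loan exposures
edges = [("A", "B", 9.0), ("A", "C", 4.0), ("B", "D", 7.0),
         ("C", "D", 1.5), ("D", "E", 6.0), ("E", "F", 8.0)]

def t_mds(edges, threshold):
    adj = {}
    for a, b, w in edges:
        adj.setdefault(a, set())
        adj.setdefault(b, set())
        if w >= threshold:               # keep only significant exposures
            adj[a].add(b)
            adj[b].add(a)
    uncovered, monitors = set(adj), []
    while uncovered:
        # bank whose closed neighborhood covers the most uncovered banks
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        monitors.append(best)
        uncovered -= {best} | adj[best]
    return monitors

monitors = t_mds(edges, threshold=5.0)
print(monitors)
```

    A bank isolated above the threshold (here "C") must monitor itself, which is how the method flags institutions whose distress would otherwise go unobserved by the subset.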

  18. The control of a parallel hybrid-electric propulsion system for a small unmanned aerial vehicle using a CMAC neural network.

    PubMed

    Harmon, Frederick G; Frank, Andrew A; Joshi, Sanjay S

    2005-01-01

    A Simulink model, a propulsion energy optimization algorithm, and a CMAC controller were developed for a small parallel hybrid-electric unmanned aerial vehicle (UAV). The hybrid-electric UAV is intended for military, homeland security, and disaster-monitoring missions involving intelligence, surveillance, and reconnaissance (ISR). The Simulink model is a forward-facing simulation program used to test different control strategies. The flexible energy optimization algorithm for the propulsion system allows relative importance to be assigned between the use of gasoline, electricity, and recharging. A cerebellar model arithmetic computer (CMAC) neural network approximates the energy optimization results and is used to control the parallel hybrid-electric propulsion system. The hybrid-electric UAV with the CMAC controller uses 67.3% less energy than a two-stroke gasoline-powered UAV during a 1-h ISR mission and 37.8% less energy during a longer 3-h ISR mission.
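
    A minimal CMAC can be sketched as several offset tilings trained by LMS updates; the controller then reads the table instead of re-running the optimizer online. Everything below (tile width, number of tilings, the toy sine target) is an assumption for illustration, not the paper's controller:

```python
import math

# Simplified CMAC sketch: each of several offset "tilings" quantizes the
# input, every active tile contributes a learned weight, and the prediction
# is the sum of active weights. LMS updates spread the error over the
# active tiles. All hyperparameters here are illustrative.

class CMAC:
    def __init__(self, n_tilings=8, tile_width=0.1, lr=0.2):
        self.n, self.w, self.lr = n_tilings, tile_width, lr
        self.tables = [{} for _ in range(n_tilings)]

    def _tiles(self, x):
        # each tiling is shifted by a fraction of the tile width
        return [int((x + t * self.w / self.n) // self.w) for t in range(self.n)]

    def predict(self, x):
        return sum(self.tables[t].get(i, 0.0)
                   for t, i in enumerate(self._tiles(x)))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in enumerate(self._tiles(x)):
            self.tables[t][i] = self.tables[t].get(i, 0.0) + self.lr * err / self.n

# Train on a toy stand-in for a pre-computed optimization surface.
cmac = CMAC()
for _ in range(300):
    for k in range(50):
        x = k / 50
        cmac.train(x, math.sin(2 * math.pi * x))
```

    The appeal for an onboard controller is the lookup cost: a prediction is just a handful of table reads and one sum, regardless of how expensive the offline energy optimization was.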

  19. High density ozone monitoring using gas sensitive semi-conductor sensors in the Lower Fraser Valley, British Columbia.

    PubMed

    Bart, Mark; Williams, David E; Ainslie, Bruce; McKendry, Ian; Salmond, Jennifer; Grange, Stuart K; Alavi-Shoshtari, Maryam; Steyn, Douw; Henshaw, Geoff S

    2014-04-01

    A cost-efficient technology for accurate surface ozone monitoring using gas-sensitive semiconducting oxide (GSS) technology, solar power, and automated cell-phone communications was deployed and validated in a 50 sensor test-bed in the Lower Fraser Valley of British Columbia, over 3 months from May-September 2012. Before field deployment, the entire set of instruments was colocated with reference instruments for at least 48 h, comparing hourly averaged data. The standard error of estimate over a typical range 0-50 ppb for the set was 3 ± 2 ppb. Long-term accuracy was assessed over several months by colocation of a subset of ten instruments each at a different reference site. The differences (GSS-reference) of hourly average ozone concentration were normally distributed with mean -1 ppb and standard deviation 6 ppb (6000 measurement pairs). Instrument failures in the field were detected using network correlations and consistency checks on the raw sensor resistance data. Comparisons with modeled spatial O3 fields demonstrate the enhanced monitoring capability of a network that was a hybrid of low-cost and reference instruments, in which GSS sensors are used both to increase station density within a network as well as to extend monitoring into remote areas. This ambitious deployment exposed a number of challenges and lessons, including the logistical effort required to deploy and maintain sites over a summer period, and deficiencies in cell phone communications and battery life. Instrument failures at remote sites suggested that redundancy should be built into the network (especially at critical sites) as well as the possible addition of a "sleep-mode" for GSS monitors. At the network design phase, a more objective approach to optimize interstation distances, and the "information" content of the network is recommended. 
This study has demonstrated the utility and affordability of the GSS technology for a variety of applications, and the effectiveness of this technology as a means to substantially and economically extend the coverage of an air quality monitoring network. Low-cost, neighborhood-scale networks that produce reliable data can be envisaged.

  20. Distributed wireless sensing for methane leak detection technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente; van Kessel, Theodor

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors; it aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even 1 hour of measurements with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long-term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.

  1. Distributed wireless sensing for fugitive methane leak detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors; it aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even 1 hour of measurements with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long-term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.

  2. Distributed wireless sensing for fugitive methane leak detection

    DOE PAGES

    Klein, Levente J.; van Kessel, Theodore; Nair, Dhruv; ...

    2017-12-11

    Large scale environmental monitoring requires dynamic optimization of data transmission, power management, and distribution of the computational load. In this work, we demonstrate the use of a wireless sensor network for detection of chemical leaks on oil and gas well pads. The sensor network consists of chemi-resistive and wind sensors; it aggregates all the data and transmits it to the cloud for further analytics processing. The sensor network data is integrated with an inversion model to identify leak location and quantify leak rates. We characterize the sensitivity and accuracy of such a system under multiple well-controlled methane release experiments. It is demonstrated that even 1 hour of measurements with 10 sensors localizes leaks within 1 m and determines leak rate with an accuracy of 40%. This integrated sensing and analytics solution is currently being refined into a robust system for long-term remote monitoring of methane leaks, generation of alarms, and tracking regulatory compliance.

  3. Investigation on the use of artificial neural networks to overcome the effects of environmental and operational changes on guided waves monitoring

    NASA Astrophysics Data System (ADS)

    El Mountassir, M.; Yaacoubi, S.; Dahmene, F.

    2015-07-01

    Intelligent feature extraction and advanced signal processing techniques are necessary for a better interpretation of ultrasonic guided wave signals, whether in structural health monitoring (SHM) or in nondestructive testing (NDT). Such signals are characterized by multi-modal and dispersive components. In addition, in SHM these signals are highly vulnerable to environmental and operational conditions (EOCs) and can be severely affected by them. In this paper we investigate the use of an Artificial Neural Network (ANN) to overcome these effects and to provide a reliable damage detection method with a minimum of false indications. An experimental case study (a full-scale pipe) is presented. Damage sizes were increased and their shapes modified in successive steps. Various parameters, such as the number of inputs and the number of hidden neurons, were studied to find the optimal configuration of the neural network.

  4. Multiobjective design of aquifer monitoring networks for optimal spatial prediction and geostatistical parameter estimation

    NASA Astrophysics Data System (ADS)

    Alzraiee, Ayman H.; Bau, Domenico A.; Garcia, Luis A.

    2013-06-01

    Effective sampling of hydrogeological systems is essential in guiding groundwater management practices. Optimal sampling of groundwater systems has previously been formulated based on the assumption that heterogeneous subsurface properties can be modeled using a geostatistical approach. Therefore, the monitoring schemes have been developed to concurrently minimize the uncertainty in the spatial distribution of systems' states and parameters, such as the hydraulic conductivity K and the hydraulic head H, and the uncertainty in the geostatistical model of system parameters using a single objective function that aggregates all objectives. However, it has been shown that the aggregation of possibly conflicting objective functions is sensitive to the adopted aggregation scheme and may lead to distorted results. In addition, the uncertainties in geostatistical parameters affect the uncertainty in the spatial prediction of K and H according to a complex nonlinear relationship, which has often been ineffectively evaluated using a first-order approximation. In this study, we propose a multiobjective optimization framework to assist the design of monitoring networks of K and H with the goal of optimizing their spatial predictions and estimating the geostatistical parameters of the K field. The framework stems from the combination of a data assimilation (DA) algorithm and a multiobjective evolutionary algorithm (MOEA). The DA algorithm is based on the ensemble Kalman filter, a Monte-Carlo-based Bayesian update scheme for nonlinear systems, which is employed to approximate the posterior uncertainty in K, H, and the geostatistical parameters of K obtained by collecting new measurements. Multiple MOEA experiments are used to investigate the trade-off among design objectives and identify the corresponding monitoring schemes. The methodology is applied to design a sampling network for a shallow unconfined groundwater system located in Rocky Ford, Colorado. 
Results indicate that the effect of uncertainties associated with the geostatistical parameters on the spatial prediction might be significantly alleviated (by up to 80% of the prior uncertainty in K and by 90% of the prior uncertainty in H) by sampling evenly distributed measurements with a spatial measurement density of more than 1 observation per 60 m × 60 m grid block. In addition, exploration of the interaction of objective functions indicates that the ability of head measurements to reduce the uncertainty associated with the correlation scale is comparable to the effect of hydraulic conductivity measurements.
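
    The variance-reduction logic that underlies such designs can be shown in scalar form: observing at a site s lowers the prediction variance at every location x by cov(x, s)^2 / (var(s) + noise), the same update the ensemble Kalman filter performs with sample covariances. The exponential covariance model and 1-D grid below are assumptions, not the study's calibrated geostatistics:

```python
import math

# Toy illustration of monitoring-network design by variance reduction:
# score each candidate observation site by the mean posterior variance it
# would leave over the prediction grid, then pick the best one greedily.

def cov(a, b, sill=1.0, length=3.0):
    """Exponential covariance model (assumed for illustration)."""
    return sill * math.exp(-abs(a - b) / length)

grid = [i * 0.5 for i in range(20)]    # 1-D prediction locations
noise = 0.1                            # measurement-error variance

def mean_posterior_variance(site):
    total = 0.0
    for x in grid:
        prior = cov(x, x)
        # scalar Kalman/kriging update after one observation at `site`
        total += prior - cov(x, site) ** 2 / (cov(site, site) + noise)
    return total / len(grid)

# Greedy choice of the single most informative new observation site:
best = min(grid, key=mean_posterior_variance)
```

    Repeating the greedy pick with the updated covariance gives a simple sequential design; the paper instead couples the full ensemble update to a multiobjective evolutionary search.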

  5. Reconnecting Stochastic Methods With Hydrogeological Applications: A Utilitarian Uncertainty Analysis and Risk Assessment Approach for the Design of Optimal Monitoring Networks

    NASA Astrophysics Data System (ADS)

    Bode, Felix; Ferré, Ty; Zigelli, Niklas; Emmert, Martin; Nowak, Wolfgang

    2018-03-01

    Collaboration between academics and practitioners promotes knowledge transfer between research and industry, with both sides benefiting greatly. However, academic approaches are often not feasible given real-world limits on time, cost and data availability, especially for risk and uncertainty analyses. Although the need for uncertainty quantification and risk assessment are clear, there are few published studies examining how scientific methods can be used in practice. In this work, we introduce possible strategies for transferring and communicating academic approaches to real-world applications, countering the current disconnect between increasingly sophisticated academic methods and methods that work and are accepted in practice. We analyze a collaboration between academics and water suppliers in Germany who wanted to design optimal groundwater monitoring networks for drinking-water well catchments. Our key conclusions are: to prefer multiobjective over single-objective optimization; to replace Monte-Carlo analyses by scenario methods; and to replace data-hungry quantitative risk assessment by easy-to-communicate qualitative methods. For improved communication, it is critical to set up common glossaries of terms to avoid misunderstandings, use striking visualization to communicate key concepts, and jointly and continually revisit the project objectives. Ultimately, these approaches and recommendations are simple and utilitarian enough to be transferred directly to other practical water resource related problems.

  6. Multi-objective Decision Based Available Transfer Capability in Deregulated Power System Using Heuristic Approaches

    NASA Astrophysics Data System (ADS)

    Pasam, Gopi Krishna; Manohar, T. Gowri

    2016-09-01

    Determination of available transfer capability (ATC) requires experience, intuition, and exact judgment in order to meet several significant aspects of the deregulated environment. Based on these points, this paper proposes two heuristic approaches to compute ATC. The first proposed heuristic algorithm integrates five methods, known as continuation repeated power flow, repeated optimal power flow, radial basis function neural network, back-propagation neural network, and adaptive neuro-fuzzy inference system, to obtain ATC. The second proposed heuristic model is used to obtain multiple ATC values. Out of these, a specific ATC value is selected based on social, economic, and deregulated-environment constraints and on specific applications such as optimization, on-line monitoring, and ATC forecasting; this is known as multi-objective decision based optimal ATC. The validity of the results obtained through these proposed methods is scrupulously verified on various buses of the IEEE 24-bus reliable test system. The results and conclusions presented in this paper are useful for the planning, operation, and maintenance of reliable power in any power system and for its monitoring in an on-line deregulated environment. In this way, the proposed heuristic methods contribute a practical approach to assessing multi-objective ATC using integrated methods.

  7. Remote Control and Monitoring of VLBI Experiments by Smartphones

    NASA Astrophysics Data System (ADS)

    Ruztort, C. H.; Hase, H.; Zapata, O.; Pedreros, F.

    2012-12-01

    For the remote control and monitoring of VLBI operations, we developed software optimized for smartphones. This is a new tool based on a client-server architecture with a Web interface optimized for smartphone screens and cellphone networks. The server uses variables of the Field System and station-specific parameters stored in shared memory. The client, which runs on the smartphone through a Web interface, analyzes and visualizes the current status of the radio telescope, receiver, schedule, and recorder. In addition, it allows commands to be sent remotely to the Field System computer and displays the log entries. The user has full access to the entire operation process, which is important in emergency cases. The software also integrates a webcam interface.

  8. Toward a highly integrated probe for improving wireless network quality

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Song, Aiguo; Wu, Zhenyang; Pan, Zhiwen; You, Xiaohu

    2016-10-01

    Quality of service and customer perception are the focus of the telecommunications industry. This paper proposes a low-cost approach to the acquisition of terminal data, collected from LTE networks with the application of a soft probe based on the Java language. The soft probe supports fast calls in the form of a referenced library and can be integrated into various Android-based applications to automatically monitor any exception event in the network. Soft probe-based acquisition of terminal data has the advantages of low cost and applicability at large scale. Experiments show that a soft probe can efficiently obtain terminal network data. With this method, the quality of service of LTE networks can be determined from the acquired wireless data. This work contributes to efficient network optimization and the analysis of abnormal network events.

  9. Optimal wide-area monitoring and nonlinear adaptive coordinating neurocontrol of a power system with wind power integration and multiple FACTS devices.

    PubMed

    Qiao, Wei; Venayagamoorthy, Ganesh K; Harley, Ronald G

    2008-01-01

    Wide-area coordinating control is becoming an important issue and a challenging problem in the power industry. This paper proposes a novel optimal wide-area coordinating neurocontrol (WACNC), based on wide-area measurements, for a power system with power system stabilizers, a large wind farm and multiple flexible ac transmission system (FACTS) devices. An optimal wide-area monitor (OWAM), which is a radial basis function neural network (RBFNN), is designed to identify the input-output dynamics of the nonlinear power system. Its parameters are optimized through particle swarm optimization (PSO). Based on the OWAM, the WACNC is then designed by using the dual heuristic programming (DHP) method and RBFNNs, while considering the effect of signal transmission delays. The WACNC operates at a global level to coordinate the actions of local power system controllers. Each local controller communicates with the WACNC, receives remote control signals from the WACNC to enhance its dynamic performance and therefore helps improve system-wide dynamic and transient performance. The proposed control is verified by simulation studies on a multimachine power system.
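
    The PSO step used to tune the monitor's parameters can be sketched on a stand-in objective. The swarm parameters and the quadratic test function below are assumptions for illustration; in the paper, PSO optimizes RBFNN parameters of the wide-area monitor:

```python
import random

# Minimal particle swarm optimization sketch: each particle tracks its own
# best position (pbest) and is pulled toward the swarm's best (gbest) via
# the standard velocity/position update rule.
random.seed(3)

def sphere(p):
    """Stand-in objective; the paper's objective is RBFNN identification error."""
    return sum(x * x for x in p)

def pso(dim=4, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    gbest = list(min(pbest, key=sphere))
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = list(pos[i])
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = list(pbest[i])
    return gbest

best = pso()
```

    PSO is derivative-free, which is why it suits tuning neural-network monitors whose objective is only available through simulation.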

  10. Design and implementation of smart sensor nodes for wireless disaster monitoring systems

    NASA Astrophysics Data System (ADS)

    Chen, Yih-Fan; Wu, Wen-Jong; Chen, Chun-Kuang; Wen, Chih-Min; Jin, Ming-Hui; Gau, Chung-Yun; Chang, Chih-Chie; Lee, Chih-Kung

    2004-07-01

    A newly developed smart sensor node that can monitor the safety of temporary structures such as scaffolds at construction sites is detailed in this paper. The design methodology and its trade-offs, as well as its influence on the optimization of sensor networks, are examined. The potential impact on civil engineering construction sites, environmental and natural disaster pre-warning issues, etc., all of which are foundations of smart sensor nodes and corresponding smart sensor networks, is also presented. To minimize the power requirements in order to achieve a truly wireless system, both in terms of signal and power, a sensor node was designed by adopting an 8051-based micro-controller, an ISM band RF transceiver, and an auto-balanced strain gage signal conditioner. With the built-in RF transceiver, all measurement data can be transmitted to a local control center for data integrity, security, central monitoring, and full-scale analysis. As a battery is the only well-established power source and there is a strong desire to eliminate the need to install bulky power lines, this system design includes a battery-powered core with optimal power efficiency. To further extend the service life of the built-in power source, a power control algorithm has been embedded in the microcontroller of each sensor node. The entire system has been verified by experimental tests on full-scale scaffold monitoring. The results show that this system provides a practical method to monitor structural safety in real time and possesses the potential of reducing maintenance costs significantly. The design of the sensor node, the central control station, and the integration of several kinds of wireless communication protocols, all of which are successfully integrated to demonstrate the capabilities of this newly developed system, are detailed. Potential impact on the network topology is briefly examined as well.

  11. Evaluation of neural network modeling to calculate well-watered leaf temperature of wine grape

    USDA-ARS?s Scientific Manuscript database

    Mild to moderate water stress is desirable in wine grape for controlling vine vigor and optimizing fruit yield and quality, but precision irrigation management is hindered by the lack of a reliable method to easily quantify and monitor vine water status. The crop water stress index (CWSI) that effec...

  12. Control of water distribution networks with dynamic DMA topology using strictly feasible sequential convex programming

    NASA Astrophysics Data System (ADS)

    Wright, Robert; Abraham, Edo; Parpas, Panos; Stoianov, Ivan

    2015-12-01

    The operation of water distribution networks (WDNs) with a dynamic topology is a recently pioneered approach for the advanced management of District Metered Areas (DMAs) that integrates novel developments in hydraulic modeling, monitoring, optimization, and control. A common practice for leakage management is the sectorization of WDNs into small zones, called DMAs, by permanently closing isolation valves. This enables water companies to identify bursts and estimate leakage levels by measuring the inlet flow for each DMA. However, permanently closing valves creates a number of problems, including reduced resilience to failure and suboptimal pressure management. By introducing a dynamic topology to these zones, these disadvantages can be eliminated while still retaining the DMA structure for leakage monitoring. In this paper, a novel optimization method based on sequential convex programming (SCP) is outlined for the control of a dynamic topology with the objective of reducing average zone pressure (AZP). A key attribute for control optimization is reliable convergence. To achieve this, the SCP method we propose guarantees that each optimization step is strictly feasible, resulting in improved convergence properties. By using a null space algorithm for hydraulic analyses, the required computations are also significantly reduced. The optimized control is actuated on a real WDN operated with a dynamic topology. This unique experimental program incorporates a number of technologies set up with the objective of investigating pioneering developments in WDN management. Preliminary results indicate AZP reductions for a dynamic topology of up to 6.5% over optimally controlled fixed-topology DMAs. This article was corrected on 12 JAN 2016. See the end of the full text for details.

  13. Many-objective Groundwater Monitoring Network Design Using Bias-Aware Ensemble Kalman Filtering and Evolutionary Optimization

    NASA Astrophysics Data System (ADS)

    Kollat, J. B.; Reed, P. M.

    2009-12-01

    This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. 
The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically, our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single- and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
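    The EnKF step described in this record conditions ensemble predictions on uncertain measurements. As a hedged sketch only, a plain (not bias-aware) scalar EnKF analysis step in the perturbed-observations formulation, with invented numbers, could look like this:

```python
import random
from statistics import mean, variance

def enkf_update(ensemble, obs, obs_var, rnd):
    """One EnKF analysis step for a scalar state observed directly (H = 1)."""
    p = variance(ensemble)          # forecast ensemble variance
    k = p / (p + obs_var)           # Kalman gain
    # Perturbed-observations form: each member assimilates a noisy copy of obs,
    # so the analysis ensemble keeps a statistically consistent spread.
    return [x + k * (obs + rnd.gauss(0.0, obs_var ** 0.5) - x) for x in ensemble]

rnd = random.Random(0)
# Hypothetical forecast ensemble of, say, tracer mass at one well.
prior = [rnd.gauss(5.0, 2.0) for _ in range(500)]
post = enkf_update(prior, obs=8.0, obs_var=0.5, rnd=rnd)
# The analysis mean moves toward the observation and the spread shrinks.
```

    The bias-aware variant used in ASSIST additionally augments the state with a systematic-error term; that extension is omitted here.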

  14. On-line dynamic monitoring automotive exhausts: using BP-ANN for distinguishing multi-components

    NASA Astrophysics Data System (ADS)

    Zhao, Yudi; Wei, Ruyi; Liu, Xuebin

    2017-10-01

    Remote sensing-Fourier Transform infrared spectroscopy (RS-FTIR) is one of the most important technologies in atmospheric pollutant monitoring. It is very appropriate for on-line dynamic remote sensing monitoring of air pollutants, especially automotive exhausts. However, their absorption spectra are often seriously overlapped in the atmospheric infrared window bands, i.e., the MWIR band (3-5 μm). The Artificial Neural Network (ANN) is an algorithm based on the theory of biological neural networks that avoids constructing complex partial differential equations. Owing to its strong performance in nonlinear mapping and fitting, in this paper we utilize the Back Propagation-Artificial Neural Network (BP-ANN) to quantitatively analyze the concentrations of four typical automotive exhaust components: CO, NO, NO2 and SO2. We extracted the original data for these gases from the HITRAN database, most of which virtually overlap, and established a mixed multi-component simulation environment. Based on the Beer-Lambert law, concentrations can be retrieved from the absorbance of spectra. Parameters including the learning rate, momentum factor, number of hidden nodes and number of iterations were obtained when the BP network was trained with 80 groups of input data. By tuning these parameters, the network can be optimized to retrieve the concentrations with higher precision. This BP-ANN method proves to be an effective and promising algorithm for multi-component analysis of automotive exhausts.
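    The Beer-Lambert relation underlying the retrieval above is A = eps * c * L. For a hypothetical two-gas, two-band case with invented absorption coefficients (not HITRAN values), the linear part of the retrieval reduces to solving a 2x2 system; the BP-ANN in the paper handles the harder, strongly overlapped many-component case.

```python
def retrieve_concentrations(absorbance, eps, path_len):
    """Invert Beer-Lambert A = eps * c * L for two gases at two bands."""
    (a11, a12), (a21, a22) = eps
    det = (a11 * a22 - a12 * a21) * path_len   # determinant of eps, scaled by L
    b1, b2 = absorbance
    c1 = (a22 * b1 - a12 * b2) / det           # Cramer's rule, gas 1
    c2 = (a11 * b2 - a21 * b1) / det           # Cramer's rule, gas 2
    return c1, c2

# Invented absorption coefficients for two gases at two overlapping bands.
eps = [[2.0, 0.5],
       [0.4, 1.5]]
L = 10.0                  # optical path length (arbitrary units)
c_true = (0.3, 0.7)       # true concentrations used to synthesize absorbance
A = [L * (eps[0][0] * c_true[0] + eps[0][1] * c_true[1]),
     L * (eps[1][0] * c_true[0] + eps[1][1] * c_true[1])]
c_est = retrieve_concentrations(A, eps, L)
```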

  15. MicroRadarNet: A network of weather micro radars for the identification of local high resolution precipitation patterns

    NASA Astrophysics Data System (ADS)

    Turso, S.; Paolella, S.; Gabella, M.; Perona, G.

    2013-01-01

    In this paper, MicroRadarNet, a novel micro radar network for continuous, unattended meteorological monitoring, is presented. Key aspects and constraints are introduced. Specific design strategies are highlighted, leading to the technological implementation of this wireless, low-cost, low-power-consumption sensor network. Raw spatial and temporal datasets are processed on-board in real time, featuring a consistent evaluation of the signals from the sensors and optimizing the data loads to be transmitted. Network servers perform the final post-processing steps on the data streams coming from each unit. Final network products are meteorological mappings of weather events, monitored with high spatial and temporal resolution, and ultimately served to the end user through any Web browser. This networked approach is shown to yield an appreciable reduction in overall operational costs, including management and maintenance, compared to the traditional long-range monitoring strategy. Adoption of the TITAN storm identification and nowcasting engine is also evaluated here for in-loop integration within the MicroRadarNet data processing chain. A brief description of the engine workflow is provided, to present preliminary feasibility results and performance estimates. The outcomes were not easily predictable, given the relevant operational differences between a Western Alps micro radar scenario and the long-range radar context in the Denver region of Colorado. Finally, positive results from a set of case studies are discussed, motivating further refinements and integration activities.

  16. Implementation and Analysis of a Wireless Sensor Network-Based Pet Location Monitoring System for Domestic Scenarios

    PubMed Central

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leyre; Astrain, José Javier; Villadangos, Jesús; Santesteban, Daniel; Falcone, Francisco

    2016-01-01

    The flexibility of new-age wireless networks and the variety of sensors to measure a high number of variables lead to new scenarios where anything can be monitored by small electronic devices, thereby implementing Wireless Sensor Networks (WSN). Thanks to ZigBee, RFID or WiFi networks, the precise location of humans or animals as well as some biological parameters can be known in real time. However, since wireless sensors must be attached to biological tissues, which are highly dispersive, the propagation of electromagnetic waves must be studied to deploy an efficient and well-working network. The main goal of this work is to study the influence of wireless channel limitations on the operation of a specific pet monitoring system, validated at the physical channel as well as at the functional level. In this sense, radio wave propagation produced by ZigBee devices operating in the ISM 2.4 GHz band is studied through an in-house developed 3D Ray Launching simulation tool, in order to analyze coverage/capacity relations for the optimal system selection as well as deployment strategy in terms of number of transceivers and location. Furthermore, a simplified dog model is developed for the simulation code, considering not only its morphology but also its dielectric properties. Relevant wireless channel information such as power distribution, power delay profile and delay spread graphs are obtained, providing an extensive wireless channel analysis. A functional dog monitoring system is presented, operating over the implemented ZigBee network and providing real-time information to Android-based devices. The proposed system can be scaled in order to consider different types of domestic pets as well as new user-based functionalities. PMID:27589751

  17. Implementation and Analysis of a Wireless Sensor Network-Based Pet Location Monitoring System for Domestic Scenarios.

    PubMed

    Aguirre, Erik; Lopez-Iturri, Peio; Azpilicueta, Leyre; Astrain, José Javier; Villadangos, Jesús; Santesteban, Daniel; Falcone, Francisco

    2016-08-30

    The flexibility of new-age wireless networks and the variety of sensors to measure a high number of variables lead to new scenarios where anything can be monitored by small electronic devices, thereby implementing Wireless Sensor Networks (WSN). Thanks to ZigBee, RFID or WiFi networks, the precise location of humans or animals as well as some biological parameters can be known in real time. However, since wireless sensors must be attached to biological tissues, which are highly dispersive, the propagation of electromagnetic waves must be studied to deploy an efficient and well-working network. The main goal of this work is to study the influence of wireless channel limitations on the operation of a specific pet monitoring system, validated at the physical channel as well as at the functional level. In this sense, radio wave propagation produced by ZigBee devices operating in the ISM 2.4 GHz band is studied through an in-house developed 3D Ray Launching simulation tool, in order to analyze coverage/capacity relations for the optimal system selection as well as deployment strategy in terms of number of transceivers and location. Furthermore, a simplified dog model is developed for the simulation code, considering not only its morphology but also its dielectric properties. Relevant wireless channel information such as power distribution, power delay profile and delay spread graphs are obtained, providing an extensive wireless channel analysis. A functional dog monitoring system is presented, operating over the implemented ZigBee network and providing real-time information to Android-based devices. The proposed system can be scaled in order to consider different types of domestic pets as well as new user-based functionalities.

  18. Establish a Data Transmission Platform of the Rig Based on the Distributed Network

    NASA Astrophysics Data System (ADS)

    Bao, Zefu; Li, Tao

    In order to achieve real-time, closed-loop feedback control of rig information while saving money and labor, we designed a distributed network data platform. By deploying the platform in oil drilling operations, data from each device of the rig is conveyed over the simplest possible route in a timely manner. The design transfers network data via PA, which allows optimal use of rig control. To realize this idea, on-site cabling was first carried out and a data transmission module was established in the rig monitoring system. The results of standard field application show that the platform solves the problem of rig control.

  19. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    NASA Astrophysics Data System (ADS)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
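    The entropy criteria above rest on the differential entropy of a multivariate normal, H = 0.5 * ln((2*pi*e)^k * det(Sigma)). A small stand-alone sketch, using an invented four-station covariance (not the Neuse River data), shows how candidate station subsets can be scored by this entropy so that redundant stations lose out:

```python
import math
from itertools import combinations

def det(m):
    """Determinant via Gaussian elimination with partial pivoting."""
    a = [row[:] for row in m]
    n, d = len(a), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def entropy(cov, idx):
    """Differential entropy of the Gaussian restricted to stations in idx."""
    sub = [[cov[i][j] for j in idx] for i in idx]
    k = len(idx)
    return 0.5 * (k * math.log(2 * math.pi * math.e) + math.log(det(sub)))

# Hypothetical station covariance; stations 0 and 1 are highly redundant.
cov = [[1.00, 0.95, 0.10, 0.20],
       [0.95, 1.00, 0.10, 0.20],
       [0.10, 0.10, 1.00, 0.30],
       [0.20, 0.20, 0.30, 1.00]]
best = max(combinations(range(4), 2), key=lambda s: entropy(cov, s))
# The most informative pair avoids the redundant 0-1 combination.
```

    The paper's framework additionally folds in violation-specific entropies and a multi-attribute decision step; this sketch covers only the total-system-entropy idea.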

  20. Adaptive critics for dynamic optimization.

    PubMed

    Kulkarni, Raghavendra V; Venayagamoorthy, Ganesh Kumar

    2010-06-01

    A novel action-dependent adaptive critic design (ACD) is developed for dynamic optimization. The proposed combination of a particle swarm optimization-based actor and a neural network critic is demonstrated through dynamic sleep scheduling of wireless sensor motes for wildlife monitoring. The objective of the sleep scheduler is to dynamically adapt the sleep duration to the node's battery capacity and the movement pattern of animals in its environment, in order to obtain uniformly spaced snapshots of an animal along its trajectory. Simulation results show that the sleep time determined by the actor-critic yields superior quality of sensory data acquisition and enhanced node longevity. Copyright 2010 Elsevier Ltd. All rights reserved.

  1. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    PubMed

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are striving toward real-time monitoring models to care for patients anytime and anywhere through ECG signals. However, there are several limitations, such as data distortion and limited bandwidth, in wireless communications. In order to overcome such limitations, this research focuses on compression. Little research has been done on compression algorithms specialized for ECG data transmission in real-time monitoring wireless networks, and existing algorithms are not well suited to ECG signals. Therefore, this paper presents an improved algorithm, EDLZW, for efficient ECG data transmission. Results showed that the EDLZW compression ratio was 8.66, a performance about 4 times better than other compression methods widely used today.
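    The EDLZW algorithm itself is not specified in this record, but it builds on the classic LZW dictionary coder. A plain-LZW sketch over a synthetic quasi-periodic byte signal (standing in for an ECG trace) illustrates how the growing dictionary exploits repetition; the 8.66 ratio quoted above is the paper's result, not this sketch's.

```python
def lzw_encode(data: bytes):
    """Plain LZW over bytes; returns a list of dictionary codes."""
    table = {bytes([i]): i for i in range(256)}  # seed with single bytes
    out, w = [], b""
    for ch in data:
        wc = w + bytes([ch])
        if wc in table:
            w = wc                     # extend the current phrase
        else:
            out.append(table[w])       # emit the longest known phrase
            table[wc] = len(table)     # learn the new phrase
            w = bytes([ch])
    if w:
        out.append(table[w])
    return out

# A quasi-periodic signal (ECG-like repetition) compresses well under LZW.
signal = bytes([100, 120, 180, 120, 100, 90] * 200)
codes = lzw_encode(signal)
ratio = len(signal) / len(codes)   # idealized: one input byte vs one code
```

    A real comparison would account for code widths (codes above 255 need more than 8 bits), which this idealized ratio ignores.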

  2. A multipath routing protocol based on clustering and ant colony optimization for wireless sensor networks.

    PubMed

    Yang, Jing; Xu, Mai; Zhao, Wei; Xu, Baoguo

    2010-01-01

    For monitoring burst events in a kind of reactive wireless sensor networks (WSNs), a multipath routing protocol (MRP) based on dynamic clustering and ant colony optimization (ACO) is proposed. Such an approach can maximize the network lifetime and reduce the energy consumption. An important attribute of WSNs is their limited power supply, and therefore some metrics (such as energy consumption of communication among nodes, residual energy, path length) were considered as very important criteria while designing routing in the MRP. Firstly, a cluster head (CH) is selected among nodes located in the event area according to some parameters, such as residual energy. Secondly, an improved ACO algorithm is applied in the search for multiple paths between the CH and sink node. Finally, the CH dynamically chooses a route to transmit data with a probability that depends on many path metrics, such as energy consumption. The simulation results show that MRP can prolong the network lifetime, as well as balance of energy consumption among nodes and reduce the average energy consumption effectively.
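    The probabilistic route choice at the heart of ACO picks path i with probability proportional to tau_i^alpha * eta_i^beta, followed by pheromone evaporation and deposit. A minimal sketch with three hypothetical CH-to-sink paths follows; the parameter values and the quality measure are illustrative, not MRP's.

```python
import random

def choose_path(pheromone, heuristic, alpha=1.0, beta=2.0, rnd=random):
    """Pick a route index with probability proportional to tau^alpha * eta^beta."""
    weights = [t ** alpha * h ** beta for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    r, acc = rnd.random() * total, 0.0
    for i, w in enumerate(weights):    # roulette-wheel selection
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def evaporate_and_deposit(pheromone, chosen, quality, rho=0.1):
    """Standard ACO update: evaporate all trails, reinforce the chosen one."""
    pheromone = [(1 - rho) * t for t in pheromone]
    pheromone[chosen] += quality
    return pheromone

# Three candidate CH-to-sink paths; heuristic eta = 1 / (energy cost).
tau = [1.0, 1.0, 1.0]
eta = [1 / 5.0, 1 / 2.0, 1 / 8.0]   # path 1 is the cheapest
rnd = random.Random(1)
for _ in range(200):
    i = choose_path(tau, eta, rnd=rnd)
    tau = evaporate_and_deposit(tau, i, quality=eta[i])
# Pheromone concentrates on the low-cost path over the rounds.
```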

  3. Detection Optimization of the Progressive Multi-Channel Correlation Algorithm Used in Infrasound Nuclear Treaty Monitoring

    DTIC Science & Technology

    2013-03-01

    ... on the globe. In its mission to achieve information superiority, AFTAC has historically combined data garnered from seismic and infrasound networks ... to improve location estimates for nuclear events. For instance, underground explosions produce seismic waves that can couple into the atmosphere.

  4. Analysis on energy consumption index system of thermal power plant

    NASA Astrophysics Data System (ADS)

    Qian, J. B.; Zhang, N.; Li, H. F.

    2017-05-01

    Against the background of increasingly tight resource constraints, energy conservation is a realistic way to ease energy supply contradictions, and reducing the energy consumption of thermal power plants has become an inevitable direction of development. Building a thermal power “small index” monitoring and optimization management system with computer network technology is both an application of information technology in power plants and a response to the competitive requirements of the electricity market. This paper first describes the state of research on thermal power saving theory, then attempts to establish the small index system and build a “small index” monitoring and optimization management system for a thermal power plant, and finally elaborates the key issues concerning small technical and economic indicators of thermal power plants that remain to be studied and resolved.

  5. Design of online monitoring and forecasting system for electrical equipment temperature of prefabricated substation based on WSN

    NASA Astrophysics Data System (ADS)

    Qi, Weiran; Miao, Hongxia; Miao, Xuejiao; Xiao, Xuanxuan; Yan, Kuo

    2016-10-01

    In order to ensure the safe and stable operation of prefabricated substations, a temperature sensing subsystem, a remote temperature monitoring and management subsystem, and a forecasting subsystem are designed in this paper. The wireless temperature sensing subsystem, which consists of a temperature sensor and an MCU, sends the electrical equipment temperature to the remote monitoring center over a wireless sensor network. The remote monitoring center realizes remote monitoring and prediction through the monitoring and management subsystem and the forecasting subsystem. The monitoring and management subsystem provides real-time monitoring of power equipment temperature, a history query database, user management, password settings, and related functions. In the temperature forecasting subsystem, the chaotic character of the temperature data is first verified and the phase space is reconstructed; then Support Vector Machine - Particle Swarm Optimization (SVM-PSO) is used to predict the temperature of the power equipment in prefabricated substations. The simulation results show that SVM-PSO achieves higher prediction accuracy than traditional methods.
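    The phase-space reconstruction mentioned above is typically done by Takens delay embedding. A tiny sketch with invented temperature readings follows; the embedding dimension and delay are placeholders (in practice they are chosen by methods such as false nearest neighbors and mutual information).

```python
def delay_embed(series, dim=3, tau=2):
    """Takens delay embedding: x_t -> (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    start = (dim - 1) * tau
    return [[series[t - k * tau] for k in range(dim)]
            for t in range(start, len(series))]

# Hypothetical temperature readings from one cabinet sensor.
temps = [20.0, 20.5, 21.1, 21.8, 22.0, 21.6, 21.0, 20.4, 20.1, 20.3]
vectors = delay_embed(temps, dim=3, tau=2)
# Each embedded vector becomes one training sample for the SVM regressor,
# with the next reading serving as its prediction target.
```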

  6. A Power-Efficient Clustering Protocol for Coal Mine Face Monitoring with Wireless Sensor Networks Under Channel Fading Conditions

    PubMed Central

    Ren, Peng; Qian, Jiansheng

    2016-01-01

    This study proposes a novel power-efficient and anti-fading cross-layer clustering protocol tailored to the time-varying fading characteristics of channels in the monitoring of coal mine faces with wireless sensor networks. The number of active sensor nodes and a sliding window are set up such that the optimal number of cluster heads (CHs) is selected in each round. Based on a stable expected number of CHs, CH selection assesses both the channel efficiency between nodes and the base station, explored using a probe frame, and the nodes' surplus energy. Moreover, the sending power of a node in different periods is regulated by the signal fade margin method. The simulation results demonstrate that, compared with several common algorithms, the power-efficient and fading-aware clustering with a cross-layer (PEAFC-CL) protocol features a stable network topology and adaptability under time-varying signal fading, which effectively prolongs the lifetime of the network and reduces packet loss, making it more applicable to the complex and variable environment of a coal mine face. PMID:27338380

  7. Technology Developments Integrating a Space Network Communications Testbed

    NASA Technical Reports Server (NTRS)

    Kwong, Winston; Jennings, Esther; Clare, Loren; Leang, Dee

    2006-01-01

    As future manned and robotic space explorations missions involve more complex systems, it is essential to verify, validate, and optimize such systems through simulation and emulation in a low cost testbed environment. The goal of such a testbed is to perform detailed testing of advanced space and ground communications networks, technologies, and client applications that are essential for future space exploration missions. We describe the development of new technologies enhancing our Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) that enable its integration in a distributed space communications testbed. MACHETE combines orbital modeling, link analysis, and protocol and service modeling to quantify system performance based on comprehensive considerations of different aspects of space missions. It can simulate entire networks and can interface with external (testbed) systems. The key technology developments enabling the integration of MACHETE into a distributed testbed are the Monitor and Control module and the QualNet IP Network Emulator module. Specifically, the Monitor and Control module establishes a standard interface mechanism to centralize the management of each testbed component. The QualNet IP Network Emulator module allows externally generated network traffic to be passed through MACHETE to experience simulated network behaviors such as propagation delay, data loss, orbital effects and other communications characteristics, including entire network behaviors. We report a successful integration of MACHETE with a space communication testbed modeling a lunar exploration scenario. This document is the viewgraph slides of the presentation.

  8. Long-term Monitoring Program Optimization for Chlorinated Volatile Organic Compound Plume, Naval Air Station Brunswick, Maine

    NASA Astrophysics Data System (ADS)

    Calderone, G. M.

    2006-12-01

    A long-term monitoring program was initiated in 1995 at 6 sites at NAS Brunswick, including 3 National Priorities List (Superfund) sites. Primary contaminants of concern include chlorinated volatile organic compounds, including tetrachloroethane, trichloroethene, and vinyl chloride, in addition to metals. More than 80 submersible pumping systems were installed to facilitate sample collection utilizing the low-flow sampling technique. Long-term monitoring of the groundwater is conducted to assess the effectiveness of remedial measures, and monitor changes in contaminant concentrations in the Eastern Plume Operable Unit. Long-term monitoring program activities include quarterly groundwater sampling and analysis at more than 90 wells across 6 sites; surface water, sediment, seep, and leachate sampling and analysis at 3 sites; landfill gas monitoring; well maintenance; engineering inspections of landfill covers and other sites or evidence of stressed vegetation; water level gauging; and treatment plant sampling and analysis. Significant cost savings were achieved by optimizing the sampling network and reducing sampling frequency from quarterly to semi-annual or annual sampling. As part of an ongoing optimization effort, a geostatistical assessment of the Eastern Plume was conducted at the Naval Air Station, Brunswick, Maine. The geostatistical assessment used 40 monitoring points and analytical data collected over 3 years. For this geostatistical assessment, EA developed and utilized a database of analytical results generated during 3 years of long-term monitoring which was linked to a Geographic Information System to enhance data visualization capacity. The Geographic Information System included themes for groundwater volatile organic compound concentration, groundwater flow directions, shallow and deep wells, and immediate access to point-specific analytical results. 
This statistical analysis has been used by the site decision-maker and its conclusions supported a significant reduction in the Long-Term Monitoring Program.

  9. Achieving Crossed Strong Barrier Coverage in Wireless Sensor Network.

    PubMed

    Han, Ruisong; Yang, Wei; Zhang, Li

    2018-02-10

    Barrier coverage has been widely used to detect intrusions in wireless sensor networks (WSNs). It can fulfill the monitoring task while extending the lifetime of the network. Though barrier coverage in WSNs has been intensively studied in recent years, previous research failed to consider the problem of intrusion in transversal directions. If an intruder knows the deployment configuration of sensor nodes, then there is a high probability that it may traverse the whole target region from particular directions, without being detected. In this paper, we introduce the concept of crossed barrier coverage that can overcome this defect. We prove that the problem of finding the maximum number of crossed barriers is NP-hard and integer linear programming (ILP) is used to formulate the optimization problem. The branch-and-bound algorithm is adopted to determine the maximum number of crossed barriers. In addition, we also propose a multi-round shortest path algorithm (MSPA) to solve the optimization problem, which works heuristically to guarantee efficiency while maintaining near-optimal solutions. Several conventional algorithms for finding the maximum number of disjoint strong barriers are also modified to solve the crossed barrier problem and for the purpose of comparison. Extensive simulation studies demonstrate the effectiveness of MSPA.
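    MSPA's full details are in the paper; as a simplified stand-in, the greedy idea of repeatedly extracting a shortest boundary-to-boundary path and consuming its sensor nodes can be sketched as follows (toy graph, hop-count metric, node-disjoint paths only, without the crossed-barrier constraint):

```python
from collections import deque

def bfs_path(adj, src, dst, blocked):
    """Shortest hop-count path from src to dst avoiding blocked nodes."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:                       # reconstruct path back to src
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev and v not in blocked:
                prev[v] = u
                q.append(v)
    return None                            # disconnected

def count_disjoint_barriers(adj, src, dst):
    """Greedy multi-round extraction of node-disjoint src-dst paths."""
    blocked, barriers = set(), []
    while True:
        p = bfs_path(adj, src, dst, blocked)
        if p is None:
            return barriers
        barriers.append(p)
        blocked.update(p[1:-1])   # sensors are consumed; endpoints are virtual

# Virtual left/right boundary nodes L and R; sensors a-f form two chains.
adj = {
    "L": ["a", "d"], "R": ["c", "f"],
    "a": ["L", "b"], "b": ["a", "c"], "c": ["b", "R"],
    "d": ["L", "e"], "e": ["d", "f"], "f": ["e", "R"],
}
barriers = count_disjoint_barriers(adj, "L", "R")
```

    This greedy scheme matches the "near-optimal" flavor described above: each round takes the currently shortest path, which may not always yield the true maximum number of disjoint barriers on adversarial graphs.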

  10. Statistical process control using optimized neural networks: a case study.

    PubMed

    Addeh, Jalil; Ebrahimzadeh, Ata; Azarbad, Milad; Ranaee, Vahid

    2014-09-01

    The most common statistical process control (SPC) tools employed for monitoring process changes are control charts. A control chart demonstrates that the process has altered by generating an out-of-control signal. This study investigates the design of an accurate system for control chart pattern (CCP) recognition in two aspects. First, an efficient system is introduced that includes two main modules: a feature extraction module and a classifier module. In the feature extraction module, a proper set of shape features and statistical features is proposed as the efficient characteristics of the patterns. In the classifier module, several neural networks, such as the multilayer perceptron, probabilistic neural network and radial basis function network, are investigated. Based on an experimental study, the best classifier is chosen in order to recognize the CCPs. Second, a hybrid heuristic recognition system is introduced based on the cuckoo optimization algorithm (COA) to improve the generalization performance of the classifier. The simulation results show that the proposed algorithm has high recognition accuracy. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
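    The out-of-control signal mentioned above is classically generated by 3-sigma control limits. A minimal Shewhart-style sketch with invented measurements follows; the paper classifies richer CCP shapes (trends, cycles, shifts), whereas this only flags limit violations.

```python
from statistics import mean, stdev

def out_of_control(samples, window=20):
    """Flag points beyond 3-sigma limits estimated from an in-control window."""
    base = samples[:window]
    mu, sigma = mean(base), stdev(base)
    ucl, lcl = mu + 3 * sigma, mu - 3 * sigma   # upper/lower control limits
    return [i for i, x in enumerate(samples) if x > ucl or x < lcl]

# In-control noise around 10.0, then a sustained upward shift at index 25.
data = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.3, 9.7, 10.1, 9.9,
        10.2, 9.8, 10.0, 10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0,
        10.1, 9.9, 10.0, 10.2, 9.8, 11.5, 11.6, 11.4, 11.7, 11.5]
alarms = out_of_control(data)
```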

  11. A Workflow-based Intelligent Network Data Movement Advisor with End-to-end Performance Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Michelle M.; Wu, Chase Q.

    2013-11-07

    Next-generation eScience applications often generate large amounts of simulation, experimental, or observational data that must be shared and managed by collaborative organizations. Advanced networking technologies and services have been rapidly developed and deployed to facilitate such massive data transfers. However, these technologies and services have not been fully utilized, mainly because their use typically requires significant domain knowledge and, in many cases, application users are not even aware of their existence. By leveraging the functionalities of an existing Network-Aware Data Movement Advisor (NADMA) utility, we propose a new Workflow-based Intelligent Network Data Movement Advisor (WINDMA) with end-to-end performance optimization for this DOE-funded project. This WINDMA system integrates three major components: resource discovery, data movement, and status monitoring, and supports the sharing of common data movement workflows through account and database management. The system provides a web interface and interacts with existing data/space management and discovery services such as Storage Resource Management, transport methods such as GridFTP and GlobusOnline, and network resource provisioning brokers such as ION and OSCARS. We demonstrate the efficacy of the proposed transport-support workflow system in several use cases based on its implementation and deployment in DOE wide-area networks.

  12. The Management and Security Expert (MASE)

    NASA Technical Reports Server (NTRS)

    Miller, Mark D.; Barr, Stanley J.; Gryphon, Coranth D.; Keegan, Jeff; Kniker, Catherine A.; Krolak, Patrick D.

    1991-01-01

    The Management and Security Expert (MASE) is a distributed expert system that monitors the operating systems and applications of a network. It is capable of gleaning the information provided by the different operating systems in order to optimize hardware and software performance; recognize potential hardware and/or software failures and either repair the problem before it becomes an emergency or notify the systems manager of the problem; and monitor applications and known security holes for indications of an intruder or virus. MASE can eliminate much of the guesswork of system management.

  13. Optimizing Placement of Weather Stations: Exploring Objective Functions of Meaningful Combinations of Multiple Weather Variables

    NASA Astrophysics Data System (ADS)

    Snyder, A.; Dietterich, T.; Selker, J. S.

    2017-12-01

    Many regions of the world lack ground-based weather data due to inadequate or unreliable weather station networks. For example, most countries in Sub-Saharan Africa have unreliable, sparse networks of weather stations. The absence of these data can have consequences on weather forecasting, prediction of severe weather events, agricultural planning, and climate change monitoring. The Trans-African Hydro-Meteorological Observatory (TAHMO.org) project seeks to address these problems by deploying and operating a large network of weather stations throughout Sub-Saharan Africa. To design the TAHMO network, we must determine where to place weather stations within each country. We should consider how we can create accurate spatio-temporal maps of weather data and how to balance the desired accuracy of each weather variable of interest (precipitation, temperature, relative humidity, etc.). We can express this problem as a joint optimization of multiple weather variables, given a fixed number of weather stations. We use reanalysis data as the best representation of the "true" weather patterns that occur in the region of interest. For each possible combination of sites, we interpolate the reanalysis data between selected locations and calculate the mean average error between the reanalysis ("true") data and the interpolated data. In order to formulate our multi-variate optimization problem, we explore different methods of weighting each weather variable in our objective function. These methods include systematic variation of weights to determine which weather variables have the strongest influence on the network design, as well as combinations targeted for specific purposes. For example, we can use computed evapotranspiration as a metric that combines many weather variables in a way that is meaningful for agricultural and hydrological applications. We compare the errors of the weather station networks produced by each optimization problem formulation. 
We also compare these errors to those of manually designed weather station networks in West Africa, planned by the respective host-country's meteorological agency.
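    As a toy, single-variable illustration of the site-selection loop described above (the real problem interpolates reanalysis fields in two dimensions and jointly weights several weather variables), a greedy placement that minimizes mean absolute interpolation error might look like this; the function name and nearest-neighbour interpolator are illustrative stand-ins for the study's method.

```python
def greedy_station_placement(xs, field, n_stations):
    """Greedily pick station sites minimizing mean absolute interpolation error.

    xs: candidate site coordinates; field: "true" (e.g. reanalysis) values
    at those coordinates. Interpolation here is nearest-neighbour -- a
    stand-in for the kriging or spline maps a real design study would use.
    """
    def mae(chosen):
        err = 0.0
        for x, f in zip(xs, field):
            # estimate the field at x from the nearest chosen station
            nearest = min(chosen, key=lambda i: abs(xs[i] - x))
            err += abs(field[nearest] - f)
        return err / len(xs)

    chosen = []
    remaining = list(range(len(xs)))
    for _ in range(n_stations):
        # add the candidate whose inclusion lowers the error most
        best = min(remaining, key=lambda i: mae(chosen + [i]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen), mae(chosen)
```

    On a field with two flat regimes, two stations placed greedily (one per regime) already reconstruct the field exactly.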

  14. A Differential Evolution-Based Routing Algorithm for Environmental Monitoring Wireless Sensor Networks

    PubMed Central

    Li, Xiaofang; Xu, Lizhong; Wang, Huibin; Song, Jie; Yang, Simon X.

    2010-01-01

    The traditional Low Energy Adaptive Cluster Hierarchy (LEACH) routing protocol is a clustering-based protocol. The uneven selection of cluster heads results in premature death of cluster heads and premature blind nodes inside the clusters, thus reducing the overall lifetime of the network. With a full consideration of information on energy and distance distribution of neighboring nodes inside the clusters, this paper proposes a new routing algorithm based on differential evolution (DE) to improve the LEACH routing protocol. To meet the requirements of monitoring applications in outdoor environments such as the meteorological, hydrological and wetland ecological environments, the proposed algorithm uses the simple and fast search features of DE to optimize the multi-objective selection of cluster heads and prevent blind nodes for improved energy efficiency and system stability. Simulation results show that the proposed new LEACH routing algorithm has better performance, effectively extends the working lifetime of the system, and improves the quality of the wireless sensor networks. PMID:22219670
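    The abstract does not specify the DE variant used, so as an illustration here is a minimal DE/rand/1/bin minimizer; in a LEACH-style protocol, the objective function (hypothetical here) would score a candidate cluster-head choice by residual node energy and intra-cluster distances.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=0):
    """Minimal DE/rand/1/bin minimizer.

    f: objective to minimize; bounds: list of (lo, hi) per dimension.
    """
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    costs = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct partners for the mutant vector
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    v = min(max(v, lo), hi)   # clamp to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            cost = f(trial)
            if cost <= costs[i]:              # greedy one-to-one selection
                pop[i], costs[i] = trial, cost
    best = min(range(pop_size), key=costs.__getitem__)
    return pop[best], costs[best]
```

    On a 2-D sphere function the sketch converges to near zero within the default budget.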

  15. An optimization model for the US Air-Traffic System

    NASA Technical Reports Server (NTRS)

    Mulvey, J. M.

    1986-01-01

    A systematic approach for monitoring U.S. air traffic was developed in the context of system-wide planning and control. Toward this end, a network optimization model with nonlinear objectives was chosen as the central element in the planning/control system. The network representation was selected because: (1) it provides a comprehensive structure for depicting essential aspects of the air traffic system, (2) it can be solved efficiently for large-scale problems, and (3) the design can be easily communicated to non-technical users through computer graphics. Briefly, the network planning models consider the flow of traffic through a graph as the basic structure. Nodes depict locations and time periods for either individual planes or aggregated groups of airplanes. Arcs define variables as actual airplanes flying through space or as delays across time periods. As such, a special case of the network can be used to model the so-called flow control problem. Due to the large number of interacting variables and the difficulty of subdividing the problem into relatively independent subproblems, an integrated model was designed to depict the entire high-level (above 29,000 feet) jet route system for the 48 contiguous states. As a first step in demonstrating the concept's feasibility, a nonlinear risk/cost model was developed for the Indianapolis airspace. The nonlinear network program NLPNETG was employed in solving the resulting test cases. This optimization program uses the Truncated-Newton method (quadratic approximation) to determine the search direction at each iteration of the nonlinear algorithm. It was shown that aircraft could be re-routed in an optimal fashion whenever traffic congestion increased beyond an acceptable level, as measured by the nonlinear risk function.

  16. Guidance on home blood pressure monitoring: A statement of the HOPE Asia Network.

    PubMed

    Kario, Kazuomi; Park, Sungha; Buranakitjaroen, Peera; Chia, Yook-Chin; Chen, Chen-Huan; Divinagracia, Romeo; Hoshide, Satoshi; Shin, Jinho; Siddique, Saulat; Sison, Jorge; Soenarta, Arieska Ann; Sogunuru, Guru Prasad; Tay, Jam Chin; Turana, Yuda; Wong, Lawrence; Zhang, Yuqing; Wang, Ji-Guang

    2018-03-01

    Hypertension is an important modifiable cardiovascular risk factor and a leading cause of death throughout Asia. Effective prevention and control of hypertension in the region remain a significant challenge despite the availability of several regional and international guidelines. Out-of-office measurement of blood pressure (BP), including home BP monitoring (HBPM), is an important hypertension management tool. Home BP is better than office BP for predicting cardiovascular risk and HBPM should be considered for all patients with office BP ≥ 130/85 mm Hg. It is important that HBPM is undertaken using a validated device and patients are educated about how to perform HBPM correctly. During antihypertensive therapy, monitoring of home BP control and variability is essential, especially in the morning. This is because HBPM can facilitate the choice of individualized optimal therapy. The evidence and practice points in this document are based on the Hypertension Cardiovascular Outcome Prevention and Evidence (HOPE) Asia Network expert panel consensus recommendations for HBPM in Asia. ©2018 Wiley Periodicals, Inc.

  17. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    USGS Publications Warehouse

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
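    The iterative "most dissimilar site" selection can be approximated, as a rough sketch, by a greedy max-min rule over normalized environmental feature vectors. The study's actual ranking uses MaxEnt models over the four factors; the function below is an illustrative stand-in, and all names are assumptions.

```python
def select_dissimilar_sites(features, k):
    """Greedy max-min selection of k environmentally dissimilar sites.

    features: list of per-site environment vectors (e.g. temperature,
    precipitation, elevation), pre-normalized to comparable scales.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # start from the site farthest from the environmental centroid
    centroid = [sum(col) / len(features) for col in zip(*features)]
    chosen = [max(range(len(features)),
                  key=lambda i: dist(features[i], centroid))]
    while len(chosen) < k:
        # add the site whose nearest already-chosen site is farthest away
        nxt = max((i for i in range(len(features)) if i not in chosen),
                  key=lambda i: min(dist(features[i], features[j])
                                    for j in chosen))
        chosen.append(nxt)
    return chosen
```

    Given two near-duplicate sites and two outliers, the rule keeps the outliers and skips the duplicate, which is the behavior the iterative selection relies on.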

  18. Optimizing a Sensor Network with Data from Hazard Mapping Demonstrated in a Heavy-Vehicle Manufacturing Facility.

    PubMed

    Berman, Jesse D; Peters, Thomas M; Koehler, Kirsten A

    2018-05-28

    To design a method that uses preliminary hazard mapping data to optimize the number and location of sensors within a network for a long-term assessment of occupational concentrations, while preserving temporal variability, accuracy, and precision of predicted hazards. Particle number concentrations (PNCs) and respirable mass concentrations (RMCs) were measured with direct-reading instruments in a large heavy-vehicle manufacturing facility at 80-82 locations during 7 mapping events, stratified by day and season. Using kriged hazard mapping, a statistical approach identified optimal orders for removing locations to capture temporal variability and high prediction precision of PNC and RMC concentrations. We compared optimal-removal, random-removal, and least-optimal-removal orders to bound prediction performance. The temporal variability of PNC was found to be higher than RMC with low correlation between the two particulate metrics (ρ = 0.30). Optimal-removal orders resulted in more accurate PNC kriged estimates (root mean square error [RMSE] = 49.2) at sample locations compared with random-removal order (RMSE = 55.7). For estimates at locations having concentrations in the upper 10th percentile, the optimal-removal order preserved average estimated concentrations better than random- or least-optimal-removal orders (P < 0.01). However, estimated average concentrations using an optimal-removal were not statistically different than random-removal when averaged over the entire facility. No statistical difference was observed for optimal- and random-removal methods for RMCs that were less variable in time and space than PNCs. Optimized removal performed better than random-removal in preserving high temporal variability and accuracy of hazard map for PNC, but not for the more spatially homogeneous RMC. 
These results can be used to reduce the number of locations used in a network of static sensors for long-term monitoring of hazards in the workplace, without sacrificing prediction performance.
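    Kriging itself requires a fitted variogram, but the bookkeeping of scoring a removal order can be sketched with a simpler interpolator. Here inverse-distance weighting stands in for the kriged estimates, coordinates are one-dimensional for brevity, and all names are illustrative assumptions.

```python
def idw_predict(x, stations, p=2):
    """Inverse-distance-weighted estimate at x from (location, value) pairs."""
    num = den = 0.0
    for loc, val in stations:
        d = abs(x - loc)
        if d == 0:
            return val
        w = d ** -p
        num += w * val
        den += w
    return num / den

def removal_rmse(coords, values, removed):
    """RMSE of estimates at removed sites, predicted from the kept sites.

    Comparing this score across candidate removal orders is the essence of
    choosing an optimal order over a random one.
    """
    kept = [(c, v) for i, (c, v) in enumerate(zip(coords, values))
            if i not in removed]
    errs = [(idw_predict(coords[i], kept) - values[i]) ** 2 for i in removed]
    return (sum(errs) / len(errs)) ** 0.5
```

    On a linear field, removing an interior site costs nothing (its value is recovered exactly), while removing an edge site forces extrapolation and a nonzero error, which is why removal order matters.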

  19. Geodetic Network Design and Optimization on the Active Tuzla Fault (Izmir, Turkey) for Disaster Management

    PubMed Central

    Halicioglu, Kerem; Ozener, Haluk

    2008-01-01

    Both seismological and geodynamic research emphasize that the Aegean Region, which comprises the Hellenic Arc, the Greek mainland and Western Turkey is the most seismically active region in Western Eurasia. The convergence of the Eurasian and African lithospheric plates forces a westward motion on the Anatolian plate relative to the Eurasian one. Western Anatolia is a valuable laboratory for Earth Science research because of its complex geological structure. Izmir is a large city in Turkey with a population of about 2.5 million that is at great risk from big earthquakes. Unfortunately, previous geodynamics studies performed in this region are insufficient or cover large areas instead of specific faults. The Tuzla Fault, which is aligned trending NE–SW between the town of Menderes and Cape Doganbey, is an important fault in terms of seismic activity and its proximity to the city of Izmir. This study aims to perform a large scale investigation focusing on the Tuzla Fault and its vicinity for better understanding of the region's tectonics. In order to investigate the crustal deformation along the Tuzla Fault and Izmir Bay, a geodetic network has been designed and optimizations were performed. This paper suggests a schedule for a crustal deformation monitoring study which includes research on the tectonics of the region, network design and optimization strategies, theory and practice of processing. The study is also open for extension in terms of monitoring different types of fault characteristics. A one-dimensional fault model with two parameters – standard strike-slip model of dislocation theory in an elastic half-space – is formulated in order to determine which sites are suitable for the campaign based geodetic GPS measurements. Geodetic results can be used as a background data for disaster management systems. PMID:27873783

  1. National Seismic Network of Georgia

    NASA Astrophysics Data System (ADS)

    Tumanova, N.; Kakhoberashvili, S.; Omarashvili, V.; Tserodze, M.; Akubardia, D.

    2016-12-01

    Georgia, as part of the Southern Caucasus, is a tectonically active and structurally complex region. It is one of the most active segments of the Alpine-Himalayan collision belt. The deformation and the associated seismicity are due to the continent-continent collision between the Arabian and Eurasian plates. Seismic monitoring of the country and the quality of seismic data are the major tools for rapid response policy, population safety, basic scientific research and, ultimately, the sustainable development of the country. The National Seismic Network of Georgia has been developing since the end of the 19th century; the digital era of the network started in 2003. Currently, continuous data streams from 25 stations are acquired and analyzed in real time. The data are combined to calculate rapid locations and magnitudes for earthquakes. Information on larger events (Ml>=3.5) is simultaneously transferred to the website of the monitoring center and to the relevant governmental agencies. To improve rapid earthquake location and magnitude estimation, the seismic network was enhanced by installing 7 new stations. Each new station is equipped with coupled broadband and strong-motion seismometers as well as a permanent GPS system. To select the sites for the 7 new base stations, we used standard network optimization techniques, taking into account the geometry of the existing seismic network and the topographic conditions of each site. For each site we studied the local geology (Vs30 was mandatory for each site), the local noise level, and the seismic vault construction parameters. Because of the country's relief, some stations were installed in the high mountains and are not accessible in winter due to heavy snow. To secure online data transmission we used satellite links as well as cellular data coverage from different local providers. As a result, earthquake locations and event magnitudes have already improved. We analyzed data from each station to calculate the signal-to-noise ratio; comparison with the existing stations showed that the signal-to-noise ratio of the new stations is much better. The National Seismic Network of Georgia plans to install more stations to further improve network coverage.

  2. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

    This study applied a machine vision-based monitoring system to optimize the drying process of cassava chips. The objective is to propose a fish swarm intelligence (FSI) optimization algorithm to find the most significant set of image features for predicting the water content of cassava chips during drying using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-objective optimization (MOO) was used, consisting of prediction-accuracy maximization and feature-subset-size minimization. The results showed that the best feature subset consisted of grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. This feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 between measured and predicted data of 0.9.

  3. Global Detection of Live Virtual Machine Migration Based on Cellular Neural Networks

    PubMed Central

    Xie, Kang; Yang, Yixian; Zhang, Ling; Jing, Maohua; Xin, Yang; Li, Zhongxian

    2014-01-01

    To meet the demands of monitoring the operation of large-scale, autoscaling, and heterogeneous virtual resources in existing cloud computing, a new live virtual machine (VM) migration detection algorithm based on cellular neural networks (CNNs) is presented. By analyzing the detection process, the CNN parameter relationship is mapped to an optimization problem, which is solved with an improved particle swarm optimization algorithm based on bubble sort. Experimental results demonstrate that the proposed method can display the VM migration process intuitively. Compared with the best-fit heuristic algorithm, this approach reduces the processing time, and the evidence indicates that the new approach is amenable to parallel and analog very large scale integration (VLSI) implementation, allowing VM migration detection to be performed better. PMID:24959631

  5. Airborne Detection and Tracking of Geologic Leakage Sites

    NASA Astrophysics Data System (ADS)

    Jacob, Jamey; Allamraju, Rakshit; Axelrod, Allan; Brown, Calvin; Chowdhary, Girish; Mitchell, Taylor

    2014-11-01

    Safe storage of CO2 to reduce greenhouse gas emissions without adversely affecting energy use or hindering economic growth requires development of monitoring technology that is capable of validating storage permanence while ensuring the integrity of sequestration operations. Soil gas monitoring has difficulty accurately distinguishing gas flux signals related to leakage from those associated with meteorologically driven changes of soil moisture and temperature. Integrated ground and airborne monitoring systems are being deployed that are capable of directly detecting CO2 concentration in storage sites. Two complementary approaches to detecting leaks in carbon sequestration fields are presented. The first approach focuses on reducing the network communication required for fusing individual Gaussian process (GP) CO2 sensing models into a global GP CO2 model; this GP fusion approach learns how to optimally allocate the static and mobile sensors. The second approach leverages a hierarchical GP-sigmoidal Gaussian Cox process for airborne predictive mission planning to optimally reduce the entropy of the global CO2 model. Results from both approaches will be presented.

  6. 2007 Beyond SBIR Phase II: Bringing Technology Edge to the Warfighter

    DTIC Science & Technology

    2007-08-23

    Systems Trade-Off Analysis and Optimization; Verification and Validation; On-Board Diagnostics and Self-Healing; Security and Anti-Tampering; Rapid...verification; Safety and reliability analysis of flight and mission critical systems; On-Board Diagnostics and Self-Healing; Model-based monitoring and...self-healing; On-board diagnostics and self-healing; Autonomic computing; Network intrusion detection and prevention; Anti-Tampering and Trust

  7. Evaluation of neural network modeling to predict non-water-stressed leaf temperature in wine grape for calculation of crop water stress index

    USDA-ARS?s Scientific Manuscript database

    Precision irrigation management in wine grape production is hindered by the lack of a reliable method to easily quantify and monitor vine water status. Mild to moderate water stress is desirable in wine grape for controlling vine vigor and optimizing fruit yield and quality. A crop water stress ind...

  8. Optimizing the real-time automatic location of the events produced in Romania using an advanced processing system

    NASA Astrophysics Data System (ADS)

    Neagoe, Cristian; Grecu, Bogdan; Manea, Liviu

    2016-04-01

    The National Institute for Earth Physics (NIEP) operates a real-time seismic network designed to monitor the seismic activity on Romanian territory, which is dominated by the intermediate-depth earthquakes (60-200 km) of the Vrancea area. The ability to reduce the impact of earthquakes on society depends on the existence of a large number of high-quality observational data. The development of the network in recent years and an advanced seismic acquisition system are crucial to achieving this objective. The software package used to perform the automatic real-time locations is SeisComP3. An accurate choice of the SeisComP3 setting parameters is necessary to ensure the best performance of the real-time system, i.e., the most accurate locations for the earthquakes while avoiding false events. The aim of this study is to optimize the algorithms of the real-time system that detect and locate earthquakes in the monitored area. This goal is pursued by testing different parameters (e.g., STA/LTA ratios, filters applied to the waveforms) on a data set of earthquakes representative of the local seismicity. The results are compared with the locations from the Romanian catalogue ROMPLUS.
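    The STA/LTA detector mentioned among the tuned parameters compares a short-term and a long-term average of signal amplitude and triggers when their ratio exceeds a threshold. A minimal sketch follows; window lengths and threshold are illustrative, not NIEP's operational settings, and production systems use recursive formulations for efficiency.

```python
def sta_lta_trigger(samples, n_sta, n_lta, threshold):
    """Classic STA/LTA detector.

    Returns the sample indices where the ratio of the short-term average
    (over n_sta samples) to the long-term average (over n_lta samples)
    of |amplitude| reaches the threshold.
    """
    triggers = []
    for i in range(n_lta, len(samples)):
        sta = sum(abs(s) for s in samples[i - n_sta:i]) / n_sta
        lta = sum(abs(s) for s in samples[i - n_lta:i]) / n_lta
        if lta > 0 and sta / lta >= threshold:
            triggers.append(i)
    return triggers
```

    On a synthetic trace of unit-amplitude noise followed by a tenfold amplitude jump, the detector fires a few samples after the jump, once the short window is dominated by the stronger signal.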

  9. The use of UNIX in a real-time environment

    NASA Technical Reports Server (NTRS)

    Luken, R. D.; Simons, P. C.

    1986-01-01

    This paper describes a project to evaluate the feasibility of using commercial off-the-shelf hardware and the UNIX operating system to implement a real-time control and monitor system. A functional subset of the Checkout, Control and Monitor System was chosen as the test bed for the project. The project consists of three separate architecture implementations: a local area bus network, a star network, and a central host. The motivation for this project stemmed from the need to find a way to implement real-time systems without the cost burden of developing and maintaining custom hardware and unique software. This had always been accepted as the only option because of the need to optimize the implementation for performance. However, with the cost/performance of today's hardware, the inefficiencies of high-level languages and portable operating systems can be effectively overcome.

  10. Power Allocation Based on Data Classification in Wireless Sensor Networks

    PubMed Central

    Wang, Houlian; Zhou, Gongbo

    2017-01-01

    Limited node energy in wireless sensor networks is a crucial factor affecting the monitoring of equipment operation and working conditions in coal mines. In addition, due to heterogeneous nodes and different data acquisition rates, the number of packets arriving in a queue network can differ, which may lead to some queue lengths reaching their maximum value earlier than others. To tackle these two problems, an optimal power allocation strategy based on classified data is proposed in this paper. Arriving data are classified into dissimilar classes depending on the number of arriving packets. The problem is formulated as a Lyapunov drift optimization with the objective of minimizing the weighted sum of average power consumption and average data class. As a result, a suboptimal distributed algorithm that requires no knowledge of system statistics is presented. The simulations, conducted for both the perfect channel state information (CSI) case and the imperfect CSI case, reveal that the utility can be pushed arbitrarily close to optimal by increasing the parameter V, at the cost of a corresponding growth in the average delay, and that the tunable parameter W and the classification method inside the utility function can trade power optimality for an increased average data class. These results show that data in a high class have priority over data in a low class, and that energy consumption can be minimized by this resource allocation strategy. PMID:28498346

  11. Medical informatics in medical research - the Severe Malaria in African Children (SMAC) Network's experience.

    PubMed

    Olola, C H O; Missinou, M A; Issifou, S; Anane-Sarpong, E; Abubakar, I; Gandi, J N; Chagomerana, M; Pinder, M; Agbenyega, T; Kremsner, P G; Newton, C R J C; Wypij, D; Taylor, T E

    2006-01-01

    Computers are widely used for data management in clinical trials in developed countries, unlike in developing countries. Dependable systems are vital for data management and medical decision making in clinical research, and monitoring and evaluation of data management are critical. In this paper we describe the database structures and procedures of systems used to implement, coordinate, and sustain data management in Africa. We outline major lessons, challenges and successes, and recommendations to improve the application of medical informatics in biomedical research in sub-Saharan Africa. A consortium of experienced research units at five sites in Africa studying children with severe disease formed a new clinical trials network, Severe Malaria in African Children (SMAC). In December 2000, the network introduced an observational study involving these hospital-based sites. After prototyping, relational database management systems were implemented for data entry and verification, data submission, and quality assurance monitoring. Between 2000 and 2005, 25,858 patients were enrolled. Failure to meet data submission deadlines and data entry errors correlated positively (correlation coefficient, r = 0.82), with more errors occurring when data were submitted late. Data submission lateness correlated inversely with hospital admissions (r = -0.62). Developing and sustaining a dependable DBMS, with ongoing modifications to optimize data management, is crucial for clinical studies. Monitoring and communication systems are vital for good data management in multi-center networks. Data timeliness is associated with data quality and hospital admissions.

  12. [Telemetry in the clinical setting].

    PubMed

    Hilbel, Thomas; Helms, Thomas M; Mikus, Gerd; Katus, Hugo A; Zugck, Christian

    2008-09-01

    Telemetric cardiac monitoring was invented in 1949 by Norman J Holter, and its clinical use began in the early 1960s. In the hospital, biotelemetry allows early mobilization of patients with cardiovascular risk and addresses the need for arrhythmia or oxygen saturation monitoring. Nowadays telemetry uses either vendor-specific UHF-band broadcasting or standardized Wi-Fi network technology in the digital ISM band (Industrial, Scientific, and Medical band). Modern telemetry radio transmitters can measure and send multiple physiological parameters such as multi-channel ECG, NIBP and oxygen saturation. Continuous measurement of oxygen saturation is mandatory for the remote monitoring of patients with cardiac pacemakers. True 12-lead ECG systems of diagnostic quality are an advantage for monitoring patients with chest pain syndromes or in drug testing wards. Modern systems are lightweight and deliver maximum carrying comfort thanks to optimized cable design. Important for system selection is a sophisticated detection algorithm with maximum reduction of artifacts. Home monitoring of implantable cardiac devices with telemetric functionality is becoming popular because it allows remote diagnosis of proper device function as well as optimization of device settings. Continuous real-time monitoring at home for patients with chronic disease may become possible in Europe using Digital Video Broadcasting - Terrestrial (DVB-T) technology, but is not yet available.

  13. Contrast research of CDMA and GSM network optimization

    NASA Astrophysics Data System (ADS)

    Wu, Yanwen; Liu, Zehong; Zhou, Guangyue

    2004-03-01

    With the development of mobile telecommunication networks, CDMA users have raised their expectations of network service quality, while operators have shifted their network management objective from signal coverage to performance improvement. Consequently, the rational layout and optimization of the mobile telecommunication network, sensible configuration of network resources, improvement of service quality, and strengthening of the enterprise's core competitiveness have all become concerns of the operating companies. This paper first examines the workflow of CDMA network optimization, and then discusses some of its key points, such as PN code assignment and soft handover calculation. Since GSM is a cellular mobile telecommunication system similar to CDMA, the paper also presents a detailed comparison of CDMA and GSM network optimization, covering both their similarities and their differences. In conclusion, network optimization is a long-term task that runs through the whole process of network construction. Through adjustment of network hardware (BTS equipment, RF systems, etc.) and software (parameter, configuration, and capacity optimization, etc.), network optimization can improve the performance and service quality of the network.

  14. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran

    As the distribution grid moves toward a tightly monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase operators' situational awareness of the system. Here, focusing on micro-phasor measurement unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Because of the key role of the μPMU devices in our architecture, we also describe a source-constrained optimal μPMU placement that finds the best locations for the devices with respect to our rules. The effectiveness of the proposed methods is tested on synthetic and real μPMU data.

  15. Anomaly Detection Using Optimally-Placed μPMU Sensors in Distribution Grids

    DOE PAGES

    Jamei, Mahdi; Scaglione, Anna; Roberts, Ciaran; ...

    2017-10-25

    As the distribution grid moves toward a tightly monitored network, it is important to automate the analysis of the enormous amount of data produced by the sensors to increase operators' situational awareness of the system. Here, focusing on micro-phasor measurement unit (μPMU) data, we propose a hierarchical architecture for monitoring the grid and establish a set of analytics and sensor fusion primitives for the detection of abnormal behavior in the control perimeter. Because of the key role of the μPMU devices in our architecture, we also describe a source-constrained optimal μPMU placement that finds the best locations for the devices with respect to our rules. The effectiveness of the proposed methods is tested on synthetic and real μPMU data.
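
    The source-constrained placement problem can be illustrated with a simple greedy budget-constrained coverage heuristic, an assumed stand-in for the paper's actual placement rules; the candidate locations and observability sets below are hypothetical:

```python
# Greedy budget-constrained sensor placement: repeatedly pick the candidate
# location that observes the most yet-uncovered buses. Illustrative only --
# the bus names and coverage sets are made up, not from the paper.
def greedy_placement(coverage, budget):
    """coverage: dict mapping candidate location -> set of observed buses."""
    chosen, covered = [], set()
    for _ in range(budget):
        best = max(coverage, key=lambda loc: len(coverage[loc] - covered))
        if not coverage[best] - covered:
            break  # nothing new can be covered
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

coverage = {
    "bus1": {1, 2, 3},
    "bus4": {3, 4, 5, 6},
    "bus7": {6, 7},
}
sites, seen = greedy_placement(coverage, budget=2)
```

    With a budget of two devices, the heuristic first takes the location with the largest coverage set, then the one adding the most new buses.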

  16. Design of a monitor and simulation terminal (master) for space station telerobotics and telescience

    NASA Technical Reports Server (NTRS)

    Lopez, L.; Konkel, C.; Harmon, P.; King, S.

    1989-01-01

    Based on Space Station and planetary spacecraft communication time delays and bandwidth limitations, it will be necessary to develop an intelligent, general purpose ground monitor terminal capable of sophisticated data display and control of on-orbit facilities and remote spacecraft. The basic elements that make up a Monitor and Simulation Terminal (MASTER) include computer overlay video, data compression, forward simulation, mission resource optimization and high level robotic control. Hardware and software elements of a MASTER are being assembled for testbed use. Applications of Neural Networks (NNs) to some key functions of a MASTER are also discussed. These functions are overlay graphics adjustment, object correlation and kinematic-dynamic characterization of the manipulator.

  17. Optimized Autonomous Space In-situ Sensor-Web for volcano monitoring

    USGS Publications Warehouse

    Song, W.-Z.; Shirazi, B.; Kedar, S.; Chien, S.; Webb, F.; Tran, D.; Davis, A.; Pieri, D.; LaHusen, R.; Pallister, J.; Dzurisin, D.; Moran, S.; Lisowski, M.

    2008-01-01

    In response to NASA's announced requirement for Earth hazard monitoring sensor-web technology, a multidisciplinary team involving sensor-network experts (Washington State University), space scientists (JPL), and Earth scientists (USGS Cascade Volcano Observatory (CVO)) is developing a prototype dynamic and scalable hazard monitoring sensor-web and applying it to volcano monitoring. The combined Optimized Autonomous Space In-situ Sensor-web (OASIS) will have two-way communication capability between ground and space assets, will use both space and ground data for optimal allocation of limited power and bandwidth resources on the ground, and will use smart management of competing demands for limited space assets. It will also enable scalability and seamless infusion of future space and in-situ assets into the sensor-web. The prototype will focus on volcano hazard monitoring at Mount St. Helens, which has been active since October 2004, although the system is designed to be flexible and easily configurable for many other applications as well. The primary goals of the project are: 1) integrating complementary space (i.e., Earth Observing One (EO-1) satellite) and in-situ (ground-based) elements into an interactive, autonomous sensor-web; 2) advancing sensor-web power and communication resource management technology; and 3) enabling scalability for seamless infusion of future space and in-situ assets into the sensor-web. To meet these goals, we are developing: 1) a test-bed in-situ array with smart sensor nodes capable of making autonomous data acquisition decisions; 2) an efficient self-organization algorithm for the sensor-web topology to support efficient data communication and command and control; 3) smart bandwidth allocation algorithms in which sensor nodes autonomously determine packet priorities based on mission needs and local bandwidth information in real time; and 4) remote network management and reprogramming tools.
The space and in-situ control components of the system will be integrated such that each element is capable of autonomously tasking the other. Sensor-web data acquisition and dissemination will be accomplished through the use of the Open Geospatial Consortium Sensorweb Enablement protocols. The three-year project will demonstrate end-to-end system performance with the in-situ test-bed at Mount St. Helens and NASA's EO-1 platform. ©2008 IEEE.
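
    The priority-driven bandwidth allocation described above can be sketched as a priority queue drained against a per-cycle byte budget. This is an assumed scheme for illustration, not the OASIS implementation; packet names and sizes are invented:

```python
import heapq

# Each node tags packets with a mission priority (lower = more urgent);
# the link sends the highest-priority packets first until the per-cycle
# bandwidth budget is spent.
def drain(queue, budget_bytes):
    sent = []
    while queue and budget_bytes >= queue[0][2]:
        prio, name, size = heapq.heappop(queue)
        budget_bytes -= size
        sent.append(name)
    return sent

packets = []  # (priority, name, size in bytes)
for p in [(0, "seismic-event", 400), (2, "battery-status", 100),
          (1, "gps-fix", 200), (2, "housekeeping", 300)]:
    heapq.heappush(packets, p)

sent = drain(packets, budget_bytes=700)  # urgent data goes out first
```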

  18. Application of a moment tensor inversion code developed for mining-induced seismicity to fracture monitoring of civil engineering materials

    NASA Astrophysics Data System (ADS)

    Linzer, Lindsay; Mhamdi, Lassaad; Schumacher, Thomas

    2015-01-01

    A moment tensor inversion (MTI) code originally developed to compute source mechanisms from mining-induced seismicity data is now being used in the laboratory in a civil engineering research environment. Quantitative seismology methods designed for geological environments are being tested with the aim of developing techniques to assess and monitor fracture processes in structural concrete members such as bridge girders. In this paper, we highlight aspects of the MTI_Toolbox programme that make it applicable to performing inversions on acoustic emission (AE) data recorded by networks of uniaxial sensors. The influence of the configuration of a seismic network on the conditioning of the least-squares system and subsequent moment tensor results for a real, 3-D network are compared to a hypothetical 2-D version of the same network. This comparative analysis is undertaken for different cases: for networks consisting entirely of triaxial or uniaxial sensors; for both P and S-waves, and for P-waves only. The aim is to guide the optimal design of sensor configurations where only uniaxial sensors can be installed. Finally, the findings of recent laboratory experiments where the MTI_Toolbox has been applied to a concrete beam test are presented and discussed.

  19. Effects of spatial configuration of imperviousness and green infrastructure networks on hydrologic response in a residential sewershed

    NASA Astrophysics Data System (ADS)

    Lim, Theodore C.; Welty, Claire

    2017-09-01

    Green infrastructure (GI) is an approach to stormwater management that promotes natural processes of infiltration and evapotranspiration, reducing surface runoff to conventional stormwater drainage infrastructure. As more urban areas incorporate GI into their stormwater management plans, greater understanding is needed on the effects of spatial configuration of GI networks on hydrological performance, especially in the context of potential subsurface and lateral interactions between distributed facilities. In this research, we apply a three-dimensional, coupled surface-subsurface, land-atmosphere model, ParFlow.CLM, to a residential urban sewershed in Washington DC that was retrofitted with a network of GI installations between 2009 and 2015. The model was used to test nine additional GI and imperviousness spatial network configurations for the site and was compared with monitored pipe-flow data. Results from the simulations show that GI located in higher flow-accumulation areas of the site intercepted more surface runoff, even during wetter and multiday events. However, a comparison of the differences between scenarios and levels of variation and noise in monitored data suggests that the differences would only be detectable between the most and least optimal GI/imperviousness configurations.

  20. Real-Time and Secure Wireless Health Monitoring

    PubMed Central

    Dağtaş, S.; Pekhteryev, G.; Şahinoğlu, Z.; Çam, H.; Challa, N.

    2008-01-01

    We present a framework for a wireless health monitoring system using wireless networks such as ZigBee. Vital signals are collected and processed using a 3-tiered architecture. The first stage is the mobile device carried on the body that runs a number of wired and wireless probes. This device is also designed to perform some basic processing such as the heart rate and fatal failure detection. At the second stage, further processing is performed by a local server using the raw data transmitted by the mobile device continuously. The raw data is also stored at this server. The processed data as well as the analysis results are then transmitted to the service provider center for diagnostic reviews as well as storage. The main advantages of the proposed framework are (1) the ability to detect signals wirelessly within a body sensor network (BSN), (2) low-power and reliable data transmission through ZigBee network nodes, (3) secure transmission of medical data over BSN, (4) efficient channel allocation for medical data transmission over wireless networks, and (5) optimized analysis of data using an adaptive architecture that maximizes the utility of processing and computational capacity at each platform. PMID:18497866

  1. Energy efficient wireless sensor network for structural health monitoring using distributed embedded piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Li, Peng; Olmi, Claudio; Song, Gangbing

    2010-04-01

    Piezoceramic-based transducers are widely researched and used in structural health monitoring (SHM) systems due to the piezoceramic material's inherent advantage of dual sensing and actuation. Wireless sensor network (WSN) technology benefits piezoceramic-based structural health monitoring systems by allowing easy and flexible installation, low system cost, and increased robustness over wired systems. However, piezoceramic wireless SHM systems still face some drawbacks. One is that piezoceramic-based SHM requires relatively high computational capability to calculate damage information, whereas battery-powered WSN sensor nodes have strict power consumption limits and hence limited computational power. In addition, commonly used centralized processing networks require wireless sensors to transmit all data back to the network coordinator for analysis; this procedure is problematic for piezoceramic-based SHM applications as it is neither energy efficient nor robust. In this paper, we aim to solve these problems with a distributed wireless sensor network for piezoceramic-based structural health monitoring systems. Three important issues are addressed to reach optimized energy efficiency: the power system, waking from sleep on impact detection, and local data processing. Instead of the swept-sine excitation used in earlier research, several sine frequencies were applied in sequence to excite the concrete structure. The wireless sensors record the sine excitations and compute the time-domain energy for each sine frequency locally to detect energy changes. By comparing the data of the damaged concrete frame with the healthy data, we are able to extract damage information for the concrete frame. A relatively powerful wireless microcontroller was used to carry out the sampling and distributed data processing in real time.
The distributed wireless network dramatically reduced the data transmission between the wireless sensors and the coordinator, which in turn reduced the power consumption of the overall system.

  2. A Gradient Optimization Approach to Adaptive Multi-Robot Control

    DTIC Science & Technology

    2009-09-01

    implemented for deploying a group of three flying robots with downward facing cameras to monitor an environment on the ground. Thirdly, the multi-robot...theoretically proven, and implemented on multi-robot platforms. Thesis Supervisor: Daniela Rus Title: Professor of Electrical Engineering and Computer...often nonlinear, and they are coupled through a network which changes over time. Thirdly, implementing multi-robot controllers requires maintaining mul

  3. Finite Energy and Bounded Actuator Attacks on Cyber-Physical Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djouadi, Seddik M; Melin, Alexander M; Ferragut, Erik M

    As control system networks are connected to enterprise-level networks for remote monitoring, operation, and system-wide performance optimization, these same connections provide vulnerabilities that can be exploited by malicious actors for attack, financial gain, and theft of intellectual property. Much effort in cyber-physical system (CPS) protection has focused on protecting the borders of the system through traditional information security techniques. Less effort has been applied to protecting cyber-physical systems from intelligent attacks launched after an attacker has defeated the information security protections to gain access to the control system. In this paper, attacks on actuator signals are analyzed from a system-theoretic context. The threat surface is classified into finite-energy and bounded attacks, two broad classes that encompass a large range of potential attacks. The effect of these attacks on linear quadratic (LQ) control is analyzed, and the optimal actuator attacks for both finite- and infinite-horizon LQ control are derived, yielding the worst-case attack signals. The closed-loop system under the optimal attack signals is given, and a numerical example illustrating the effect of an optimal bounded attack is provided.
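
    The finite-horizon LQ controller that such attacks target can be sketched with a scalar backward Riccati recursion. The values below are illustrative; this shows the nominal controller only, not the paper's attack derivation:

```python
# Scalar finite-horizon LQ regulator via backward Riccati recursion.
# Dynamics x_{k+1} = a*x_k + b*u_k, cost sum(q*x^2 + r*u^2) + qf*x_N^2.
def lq_gains(a, b, q, r, qf, horizon):
    p = qf
    gains = []
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)   # optimal feedback gain
        p = q + a * p * a - a * p * b * k   # Riccati update
        gains.append(k)
    gains.reverse()                         # gains[0] applies at step 0
    return gains

gains = lq_gains(a=1.1, b=0.5, q=1.0, r=0.1, qf=1.0, horizon=20)

# Simulate the closed loop from x0 = 1.0 with u_k = -gains[k] * x_k:
# the open-loop-unstable state (a = 1.1) is regulated toward 0.
x = 1.0
for k in gains:
    x = 1.1 * x + 0.5 * (-k * x)
```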

  4. Deployment-based lifetime optimization model for homogeneous Wireless Sensor Network under retransmission.

    PubMed

    Li, Ruiying; Liu, Xiaoxi; Xie, Wei; Huang, Ning

    2014-12-10

    Sensor-deployment-based lifetime optimization is one of the most effective methods of prolonging the lifetime of a Wireless Sensor Network (WSN) by reducing the distance-sensitive energy consumption. In this paper, data retransmission, a major consumption factor usually neglected in previous work, is considered. For a homogeneous WSN monitoring a circular target area with a centered base station, a sensor deployment model based on regular hexagonal grids is analyzed. To maximize the WSN lifetime, optimization models for both uniform and non-uniform deployment schemes are proposed subject to constraints on coverage, connectivity and successful transmission rate. Based on the data transmission analysis in a data-gathering cycle, the WSN lifetime in the model can be obtained by quantifying the energy consumption at each sensor location. The results of case studies show that it is meaningful to consider data retransmission in lifetime optimization. In particular, our investigations indicate that, for the same lifetime requirement, the number of sensors needed in a non-uniform topology is much smaller than in a uniform one. Finally, compared with a random scheme, simulation results further verify the advantage of our deployment model.
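
    The role of retransmission in the energy budget can be illustrated with a toy geometric-retry model; the radio constants and hop counts here are assumed for illustration, not taken from the paper's formulation:

```python
# Expected per-packet energy along a multi-hop path when each hop succeeds
# with probability p and failed sends are retried until they succeed.
def expected_energy(hops, p_success, e_tx=50e-6, e_rx=25e-6):
    """Geometric retries: a hop takes 1/p transmissions on average."""
    expected_tx_per_hop = 1.0 / p_success
    return hops * expected_tx_per_hop * (e_tx + e_rx)

perfect = expected_energy(hops=4, p_success=1.0)
lossy   = expected_energy(hops=4, p_success=0.8)
overhead = lossy / perfect - 1.0  # retransmission inflates energy by 25% here
```

    Ignoring this overhead underestimates per-node drain, which is why a lifetime model that neglects retransmission over-promises network lifetime.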

  5. A review on existing OSSEs and their implications on European marine observation requirements

    NASA Astrophysics Data System (ADS)

    She, Jun

    2017-04-01

    Marine observations are essential for understanding marine processes and improving forecast quality, but they are also expensive. It has therefore always been important to optimize the sampling schemes of marine observational networks so that the value of the observations can be maximized and the cost lowered. The Observing System Simulation Experiment (OSSE) is an efficient tool for assessing the impact of proposed future sampling schemes on reconstructing and forecasting ocean and ecosystem conditions. In this study, existing OSSE results from EU projects (such as JERICO, OPEC, SANGOMA, E-AIMS and AtlantOS), institutional studies and review papers are collected and analyzed by region (Arctic, Baltic, North Atlantic, Mediterranean Sea and Black Sea) and by instrument/variable. The preliminary results show significant gaps in OSSE coverage across regions and instruments. Among the existing OSSEs, Argo (Bio-Argo and Deep Sea Argo), gliders and FerryBox are the most frequently investigated instruments. Although many of the OSSEs are dedicated to very specific monitoring strategies and are not sufficiently comprehensive to support solid recommendations for optimizing the existing networks, the detailed findings on future marine observation requirements from the OSSEs will be summarized in the presentation. Recommendations for systematic OSSEs for optimizing European marine observation networks are also given.

  6. EPMOSt: An Energy-Efficient Passive Monitoring System for Wireless Sensor Networks

    PubMed Central

    Garcia, Fernando P.; Andrade, Rossana M. C.; Oliveira, Carina T.; de Souza, José Neuman

    2014-01-01

    Monitoring systems are important for debugging and analyzing Wireless Sensor Networks (WSN). In passive monitoring, a monitoring network needs to be deployed in addition to the network to be monitored, named the target network. The monitoring network captures and analyzes packets transmitted by the target network. An energy-efficient passive monitoring system is necessary when we need to monitor a WSN in a real scenario because the lifetime of the monitoring network is extended and, consequently, the target network benefits from the monitoring for a longer time. In this work, we have identified, analyzed and compared the main passive monitoring systems proposed for WSN. During our research, we did not identify any passive monitoring system for WSN that aims to reduce the energy consumption of the monitoring network. Therefore, we propose an Energy-efficient Passive MOnitoring SysTem for WSN named EPMOSt that provides monitoring information using a Simple Network Management Protocol (SNMP) agent. Thus, any management tool that supports the SNMP protocol can be integrated with this monitoring system. Experiments with real sensors were performed in several scenarios. The results obtained show the energy efficiency of the proposed monitoring system and the viability of using it to monitor WSN in real scenarios. PMID:24949639

  7. Model-Based Sensor Placement for Component Condition Monitoring and Fault Diagnosis in Fossil Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mobed, Parham; Pednekar, Pratik; Bhattacharyya, Debangsu

    Design and operation of energy-producing, near "zero-emission" coal plants has become a national imperative. This report on model-based sensor placement describes a transformative two-tier approach to identify the optimal placement, number, and type of sensors for condition monitoring and fault diagnosis in fossil energy system operations. The algorithms are tested on a high-fidelity model of an integrated gasification combined cycle (IGCC) plant. For a condition monitoring network, whether equipment should be considered at a unit level or a systems level depends upon the criticality of the process equipment, its likelihood of failure, and the level of resolution desired for any specific failure. Because a high-fidelity model is available at the unit level, a sensor network can be designed to monitor the spatial profile of the states and estimate fault severity levels. In an IGCC plant, besides the gasifier, the sour water-gas shift (WGS) reactor plays an important role. In view of this, condition monitoring of the sour WGS reactor is considered at the unit level, while a detailed plant-wide model of the gasification island, including the sour WGS reactor and the Selexol process, is considered for fault diagnosis at the system level. Finally, the developed algorithms unify the two levels and identify an optimal sensor network that maximizes the effectiveness of the overall system-level fault diagnosis and component-level condition monitoring. This work could have a major impact on the design and operation of future fossil energy plants, particularly at the grassroots level where the sensor network is yet to be identified. In addition, the same algorithms can be further enhanced for use in retrofits, where the objectives could be upgrades (addition of more sensors) and relocation of existing sensors.

  8. Optimizing Energy Consumption in Vehicular Sensor Networks by Clustering Using Fuzzy C-Means and Fuzzy Subtractive Algorithms

    NASA Astrophysics Data System (ADS)

    Ebrahimi, A.; Pahlavani, P.; Masoumi, Z.

    2017-09-01

    Traffic monitoring and management in urban intelligent transportation systems (ITS) can be carried out with vehicular sensor networks. In a vehicular sensor network, vehicles equipped with sensors such as GPS can act as mobile sensors, sensing the urban traffic and sending reports to a traffic monitoring center (TMC) for traffic estimation. Energy consumption by the sensor nodes is a central problem in wireless sensor networks (WSNs) and the most important consideration in designing them. Clustering the sensor nodes is an effective way to reduce the energy consumption of a WSN. Each cluster has a Cluster Head (CH) and a number of nodes located within its supervision area. The cluster heads are responsible for gathering and aggregating the information of their clusters and transmitting it to the data collection center. Clustering thus decreases the volume of transmitted information and, consequently, reduces the energy consumption of the network. In this paper, the Fuzzy C-Means (FCM) and Fuzzy Subtractive clustering algorithms are employed to cluster the sensors, and their effects on the energy consumption of the sensors are investigated. The FCM and Fuzzy Subtractive algorithms reduced the energy consumption of the vehicle sensors by up to 90.68% and 92.18%, respectively, a 1.5-percentage-point advantage for the Fuzzy Subtractive algorithm.
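
    A minimal Fuzzy C-Means sketch (fuzzifier m = 2) illustrates the clustering step; the 2-D node positions and the deterministic seed are synthetic, not the paper's vehicle data:

```python
def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def fcm(points, c, iters=50, m=2.0):
    centers = [points[0], points[-1]]  # deterministic spread-out seed (c = 2)
    for _ in range(iters):
        # membership u[i][j] of point i in cluster j (each row sums to 1)
        u = []
        for p in points:
            d = [max(dist(p, ctr), 1e-12) for ctr in centers]
            u.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                                for k in range(c))
                      for j in range(c)])
        # update each center as the membership-weighted mean of the points
        centers = []
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(points))]
            centers.append(tuple(sum(wi * p[ax] for wi, p in zip(w, points))
                                 / sum(w)
                                 for ax in range(2)))
    return centers, u

# two well-separated groups of vehicle positions
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, u = fcm(pts, c=2)
```

    Each node then attaches to the cluster with its highest membership, and the node nearest each converged center is a natural CH candidate.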

  9. Smart Rocks for Bridge Scour Monitoring: Design and Localization Using Electromagnetic Techniques and Embedded Orientation Sensors

    NASA Astrophysics Data System (ADS)

    Radchenko, Andro

    River bridge scour is an erosion process in which flowing water removes sediment materials (such as sand and rocks) from a bridge foundation, river bed and banks. As a result, the level of the river bed near a bridge pier drops, the stability of the bridge foundation can be compromised, and the bridge can collapse. Scour is a dynamic process that can accelerate rapidly during a flood event, so regular monitoring of scour progress is necessary at most river bridges. Present techniques are usually expensive, require large man-hour efforts, and often lack real-time monitoring capability. In this dissertation a new method, a Smart Rocks Network for bridge scour monitoring, is introduced. The method is based on distributed wireless sensors embedded in the ground underwater near the bridge pillars. The sensor nodes are unconstrained in movement and are equipped with years-lasting batteries and intelligent custom-designed electronics that minimize power consumption during operation and communication. The electronics consist of a microcontroller, communication interfaces, orientation and environment sensors (accelerometer, magnetometer, temperature and pressure sensors), and supporting power supplies and circuitry. Embedded in the soil near a bridge pillar, the Smart Rocks can move and drift together with the sediments, acting as free-agent probes that transmit unique signature signals to the base-station monitors. Individual movement of a Smart Rock can be detected remotely by processing the orientation sensor readings, indicating ongoing scour progress and flagging the site for inspection. A map of the deployed Smart Rocks Network can be obtained using a custom in-network communication protocol with signal intensity (RSSI) analysis. Particle Swarm Optimization (PSO) is applied for map reconstruction.
Analysis of the map can provide detailed insight into the scour progress and topology. Smart Rocks Network wireless communication is based on a magnetoinductive (MI) link at low frequency (125 kHz), allowing the signal to penetrate water, rocks, and the bridge structure. The dissertation describes the Smart Rocks Network implementation, its electronic design, and the electromagnetic and computational intelligence techniques used for network mapping.
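
    The PSO-based map reconstruction can be sketched as a toy localization of a single node from range estimates to fixed receivers; the anchor positions, true position, and PSO parameters below are all assumed for illustration, not the dissertation's protocol:

```python
import random

# Locate one buried node by minimizing range residuals with a basic PSO.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [((true_pos[0] - a[0]) ** 2 + (true_pos[1] - a[1]) ** 2) ** 0.5
          for a in anchors]

def cost(p):
    # squared residual between hypothesized and measured ranges
    return sum((((p[0] - a[0]) ** 2 + (p[1] - a[1]) ** 2) ** 0.5 - r) ** 2
               for a, r in zip(anchors, ranges))

rng = random.Random(1)
n, w, c1, c2 = 30, 0.7, 1.5, 1.5             # swarm size, inertia, pulls
pos = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in range(n)]
vel = [[0.0, 0.0] for _ in range(n)]
pbest = [list(p) for p in pos]                # personal bests
gbest = min(pbest, key=cost)                  # global best

for _ in range(200):
    for i in range(n):
        for d in range(2):
            vel[i][d] = (w * vel[i][d]
                         + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + c2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = list(pos[i])
    gbest = min(pbest, key=cost)
# gbest converges toward the true position
```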

  10. Real-Time GPS Monitoring for Earthquake Rapid Assessment in the San Francisco Bay Area

    NASA Astrophysics Data System (ADS)

    Guillemot, C.; Langbein, J. O.; Murray, J. R.

    2012-12-01

    The U.S. Geological Survey Earthquake Science Center has deployed a network of eight real-time Global Positioning System (GPS) stations in the San Francisco Bay area and is implementing software applications to continuously evaluate the status of the deformation within the network. Real-time monitoring of the station positions is expected to provide valuable information for rapidly estimating source parameters should a large earthquake occur in the San Francisco Bay area. Because earthquake response applications require robust data access, as a first step we have developed a suite of web-based applications which are now routinely used to monitor the network's operational status and data streaming performance. The web tools provide continuously updated displays of important telemetry parameters such as data latency and receive rates, as well as source voltage and temperature information within each instrument enclosure. Automated software on the backend uses the streaming performance data to mitigate the impact of outages, radio interference and bandwidth congestion on deformation monitoring operations. A separate set of software applications manages the recovery of lost data due to faulty communication links. Displacement estimates are computed in real-time for various combinations of USGS, Plate Boundary Observatory (PBO) and Bay Area Regional Deformation (BARD) network stations. We are currently comparing results from two software packages (one commercial and one open-source) used to process 1-Hz data on the fly and produce estimates of differential positions. The continuous monitoring of telemetry makes it possible to tune the network to minimize the impact of transient interruptions of the data flow, from one or more stations, on the estimated positions. 
Ongoing work is focused on using data streaming performance history to optimize the quality of the position, reduce drift and outliers by switching to the best set of stations within the network, and automatically select the "next best" station to use as reference. We are also working towards minimizing the loss of streamed data during concurrent data downloads by improving file management on the GPS receivers.

  11. Quantifying 10 years of Improvements in Earthquake and Tsunami Monitoring in the Caribbean and Adjacent Regions

    NASA Astrophysics Data System (ADS)

    von Hillebrandt-Andrade, C.; Huerfano Moreno, V. A.; McNamara, D. E.; Saurel, J. M.

    2014-12-01

    The magnitude-9.3 Sumatra-Andaman Islands earthquake of December 26, 2004, increased global awareness of the destructive hazard of earthquakes and tsunamis. Post-event assessments of global coastline vulnerability highlighted the Caribbean as a region of high hazard and risk that was poorly monitored. Nearly 100 tsunamis have been reported in the Caribbean and adjacent regions in the past 500 years, and they continue to pose a threat to its nations, coastal areas along the Gulf of Mexico, and the Atlantic seaboard of North and South America. Significant efforts to improve monitoring capabilities have been undertaken since then, including an expansion of the United States Geological Survey (USGS) Global Seismographic Network (GSN) (McNamara et al., 2006) and establishment of the United Nations Educational, Scientific and Cultural Organization (UNESCO) Intergovernmental Coordination Group (ICG) for the Tsunami and other Coastal Hazards Warning System for the Caribbean and Adjacent Regions (CARIBE EWS). The minimum performance standards recommended for initial earthquake locations are: 1) earthquake detection within 1 minute, 2) a minimum magnitude threshold of M4.5, and 3) an initial hypocenter error of <30 km. In this study, we assess current compliance with these performance standards and model improvements in earthquake and tsunami monitoring capabilities in the Caribbean region since the first meeting of the UNESCO ICG-CARIBE EWS in 2006. The three measures of network capability modeled in this study are: 1) minimum Mw detection threshold; 2) P-wave detection time of an automatic processing system; and 3) theoretical earthquake location uncertainty. By modeling these three measures of seismic network capability, we can optimize the distribution of ICG-CARIBE EWS seismic stations and select an international network contributed from existing real-time broadband national networks in the region.
Sea level monitoring improvements both offshore and along the coast will also be addressed. With the support of Member States and other countries and organizations it has been possible to significantly expand the sea level network thus reducing the amount of time it now takes to verify tsunamis.

  12. A Wireless Sensor System for Real-Time Monitoring and Fault Detection of Motor Arrays

    PubMed Central

    Medina-García, Jonathan; Sánchez-Rodríguez, Trinidad; Galán, Juan Antonio Gómez; Delgado, Aránzazu; Gómez-Bravo, Fernando; Jiménez, Raúl

    2017-01-01

    This paper presents a wireless fault detection system for industrial motors that combines vibration, motor current and temperature analysis, thus improving the detection of mechanical faults. The design also considers the time of detection and possible further actions, which are also important for the early detection of possible malfunctions, and thus for avoiding irreversible damage to the motor. The remote motor condition monitoring is implemented through a wireless sensor network (WSN) based on the IEEE 802.15.4 standard. The deployed network uses the beacon-enabled mode to synchronize several sensor nodes with the coordinator node, and the guaranteed time slot mechanism provides data monitoring with a predetermined latency. A graphical user interface offers remote access to motor conditions and real-time monitoring of several parameters. The developed wireless sensor node exhibits very low power consumption, since it has been optimized in both hardware and software. The result is a low-cost, highly reliable and compact design, achieving a high degree of autonomy of more than two years with just one 3.3 V/2600 mAh battery. Laboratory and field tests confirm the feasibility of the wireless system. PMID:28245623
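The two-year autonomy figure can be sanity-checked against the quoted 2600 mAh battery capacity. A minimal sketch, assuming a hypothetical average current draw of about 0.1 mA for a duty-cycled node and an 85% capacity derating (neither value is taken from the paper):

```python
# Rough autonomy estimate for a 3.3 V/2600 mAh battery. The average
# current draw and the derating factor are assumptions for illustration,
# not values reported in the abstract.
def autonomy_years(capacity_mah, avg_current_ma, derating=0.85):
    """Estimated lifetime in years, derating usable capacity for
    self-discharge and temperature effects."""
    usable_mah = capacity_mah * derating
    hours = usable_mah / avg_current_ma
    return hours / (24 * 365)

# A duty-cycled node averaging ~0.1 mA would run for about 2.5 years:
print(round(autonomy_years(2600, 0.1), 1))  # → 2.5
```

At that assumed draw the battery lasts roughly two and a half years, consistent with the autonomy the authors report.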

  13. A Wireless Sensor System for Real-Time Monitoring and Fault Detection of Motor Arrays.

    PubMed

    Medina-García, Jonathan; Sánchez-Rodríguez, Trinidad; Galán, Juan Antonio Gómez; Delgado, Aránzazu; Gómez-Bravo, Fernando; Jiménez, Raúl

    2017-02-25

    This paper presents a wireless fault detection system for industrial motors that combines vibration, motor current and temperature analysis, thus improving the detection of mechanical faults. The design also considers the time of detection and possible further actions, which are also important for the early detection of possible malfunctions, and thus for avoiding irreversible damage to the motor. The remote motor condition monitoring is implemented through a wireless sensor network (WSN) based on the IEEE 802.15.4 standard. The deployed network uses the beacon-enabled mode to synchronize several sensor nodes with the coordinator node, and the guaranteed time slot mechanism provides data monitoring with a predetermined latency. A graphical user interface offers remote access to motor conditions and real-time monitoring of several parameters. The developed wireless sensor node exhibits very low power consumption, since it has been optimized in both hardware and software. The result is a low-cost, highly reliable and compact design, achieving a high degree of autonomy of more than two years with just one 3.3 V/2600 mAh battery. Laboratory and field tests confirm the feasibility of the wireless system.

  14. A Control of a Mono and Multi Scale Measurement of a Grid

    NASA Astrophysics Data System (ADS)

    Elloumi, Imene; Ravelomanana, Sahobimaholy; Jelliti, Manel; Sibilla, Michelle; Desprats, Thierry

    The capacity to ensure seamless mobility with end-to-end Quality of Service (QoS) is a vital criterion for success in grid use. In this paper we therefore propose a method for monitoring the interconnection network of the grid (cluster, local grid and aggregated grids) in order to control its QoS. Such monitoring can guarantee persistent control of the system's state of health, together with diagnostics and optimization pertinent enough for better real-time exploitation. Better exploitation means identifying the networking problems that affect the application domain. This can be carried out by control measurements, at both mono and multi scale, of metrics such as bandwidth, CPU speed and load. The proposed solution, a generic management solution independent of the underlying technologies, aims to automate human expertise and thereby provide greater autonomy.

  15. Condition monitoring of an electro-magnetic brake using an artificial neural network

    NASA Astrophysics Data System (ADS)

    Gofran, T.; Neugebauer, P.; Schramm, D.

    2017-10-01

    This paper presents a data-driven approach to condition monitoring of electromagnetic brakes without the use of additional sensors. For safe and efficient operation of an electric motor, regular evaluation and replacement of the friction surface of the brake is required. One such evaluation method consists of direct or indirect sensing of the air-gap between the pressure plate and the magnet. A larger gap is generally indicative of worn surface(s). Traditionally this has been accomplished by the use of additional sensors, making existing systems complex, cost-sensitive and difficult to maintain. In this work a feed-forward Artificial Neural Network (ANN) is trained on the electrical data of the brake by a supervised learning method to estimate the air-gap. The ANN model is optimized on the training set and validated using the test set. The experimental results, with an estimated air-gap accuracy of over 95%, demonstrate the validity of the proposed approach.
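The air-gap estimation step can be illustrated with a minimal feed-forward network trained by plain gradient descent. This is a sketch on synthetic data, not the authors' model; the three input features and the linear target are invented for illustration:

```python
import numpy as np

# Minimal feed-forward ANN sketch for air-gap regression from electrical
# features. The data is synthetic (the real model is trained on measured
# brake signals); one hidden tanh layer, plain batch gradient descent.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))        # 3 hypothetical features
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2      # synthetic air-gap target

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - y                           # gradient of 0.5 * MSE
    gW2 = h.T @ err[:, None] / len(X)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] * W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((pred - y) ** 2))
print(mse)   # small residual error on the training set
```

The paper's "accuracy over 95%" corresponds to a small residual like the one this toy regression reaches on its synthetic data.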

  16. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    NASA Astrophysics Data System (ADS)

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

    Millimeter wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged-goods quality estimation for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz has been ingeniously designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. A comparison of computer-vision-based state-of-the-art feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray level co-occurrence texture (GLCM), and histogram of oriented gradient (HOG), has been done with respect to their capability to generate efficient and differentiable feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracy: 80, 86.67, 73.33, and 93.33 % for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability for the HOG feature extraction technique towards non-destructive quality inspection, with appreciably low false alarm rates compared to the other techniques. Thereby, a robust and optimal image-feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competitive industrial growth.
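The HOG descriptor that performed best can be sketched in its simplest form: an orientation histogram of gradient magnitudes over one image cell. A full HOG pipeline adds a grid of cells and block normalization, omitted here:

```python
import numpy as np

# Sketch of a single-cell HOG descriptor: gradient orientations binned
# into an unsigned-orientation histogram weighted by gradient magnitude.
# Full HOG adds a grid of cells and block normalization, omitted here.
def hog_cell_histogram(img, n_bins=9):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180     # unsigned, [0, 180)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    return hist / (hist.sum() + 1e-9)              # L1-normalized

img = np.tile(np.arange(8.0), (8, 1))   # horizontal intensity ramp
print(hog_cell_histogram(img).argmax()) # → 0 (all gradients at 0°)
```

A crack in a tile image would concentrate gradient energy in a few orientation bins, which is what makes such histograms separable by an ANN classifier.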

  17. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    PubMed

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
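The harvest-then-transmit trade-off behind the sum-throughput formulation can be sketched numerically: more harvesting time means more energy per node but less time left to transmit. The equal-share uplink allocation and the channel gains below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

# Sketch of a harvest-then-transmit frame: a fraction tau0 is used for
# downlink RF energy transfer, and the remainder is split equally among
# the nodes for uplink. Channel gains (near, mid, far node) are made up.
def sum_throughput(tau0, gammas):
    tx = (1 - tau0) / len(gammas)            # equal uplink time shares
    return sum(tx * np.log2(1 + g * tau0 / tx) for g in gammas)

gammas = [8.0, 2.0, 0.5]
taus = np.linspace(0.01, 0.99, 99)
best = max(taus, key=lambda t: sum_throughput(t, gammas))
print(best)   # harvesting share that maximizes sum throughput
```

The doubly near-far problem is visible in the gains: the far node (0.5) harvests the least energy yet needs the most transmit power, which is what fairness-aware WIPT designs try to correct.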

  18. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    PubMed Central

    Olatinwo, Segun O.

    2018-01-01

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy harvesting based WSN systems may experience an unstable energy supply, resulting in an interruption in communication, and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered by harvesting energy from dedicated radio frequency sources. Due to the doubly near–far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximize the energy harvested and the overall throughput rate. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity. PMID:29652866

  19. Open Source Platform Application to Groundwater Characterization and Monitoring

    NASA Astrophysics Data System (ADS)

    Ntarlagiannis, D.; Day-Lewis, F. D.; Falzone, S.; Lane, J. W., Jr.; Slater, L. D.; Robinson, J.; Hammett, S.

    2017-12-01

    Groundwater characterization and monitoring commonly rely on the use of multiple point sensors and human labor. Due to the number of sensors, labor, and other resources needed, establishing and maintaining an adequate groundwater monitoring network can be both labor intensive and expensive. To improve and optimize monitoring network design, open-source software and hardware components could potentially provide the platform to control robust and efficient sensors, thereby reducing costs and labor. This work presents early attempts to create a groundwater monitoring system incorporating open-source software and hardware that controls the remote operation of multiple sensors along with data management and file transfer functions. The system is built around a Raspberry Pi 3 that controls multiple sensors in order to perform on-demand, continuous or 'smart decision' measurements while providing flexibility to incorporate additional sensors to meet the demands of different projects. The current objective of our technology is to monitor the exchange of ionic tracers between mobile and immobile porosity using a combination of fluid and bulk electrical-conductivity measurements. To meet this objective, our configuration uses four sensors (pH, specific conductance, pressure, temperature) that can monitor the fluid electrical properties of interest and guide the bulk electrical measurement. This system highlights the potential of using open-source software and hardware components for earth science applications. The versatility of the system makes it ideal for use in a large number of applications, and the low cost allows for high-resolution (spatial and temporal) monitoring.
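The 'smart decision' idea of acting only on anomalous readings can be sketched as a simple limit check over the four monitored parameters. Parameter names and thresholds here are hypothetical, not values from the deployed system:

```python
# Threshold-based alarm sketch for the monitoring node: only readings
# outside their expected band are flagged for action or transmission.
# Parameter names and limits are hypothetical, for illustration only.
LIMITS = {"pH": (6.5, 8.5), "conductance_uS": (50, 1500),
          "pressure_kPa": (90, 120), "temp_C": (5, 30)}

def alarms(reading):
    """Return the subset of measurements that breach their limits."""
    return {k: v for k, v in reading.items()
            if not (LIMITS[k][0] <= v <= LIMITS[k][1])}

sample = {"pH": 9.1, "conductance_uS": 400, "pressure_kPa": 101, "temp_C": 18}
print(alarms(sample))  # → {'pH': 9.1}
```

In the described system such a check would also decide when the fluid measurements should trigger a bulk electrical measurement.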

  20. A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks.

    PubMed

    Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang

    2017-08-08

    Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and decodes packets jointly with encoded packets received from multiple potential nodes in the entire network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations in an underwater environment with Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs.
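The packet-level network coding that such protocols build on can be illustrated with the classic XOR example: a relay combines two packets into one broadcast, and each receiver recovers the packet it is missing from the one it already holds:

```python
# Classic XOR network-coding illustration: the relay broadcasts one coded
# packet instead of forwarding p1 and p2 separately; each receiver XORs
# the coded packet with the packet it already holds to recover the other.
def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"hello", b"world"
coded = xor_packets(p1, p2)            # single relay transmission
assert xor_packets(coded, p1) == p2    # node holding p1 recovers p2
assert xor_packets(coded, p2) == p1    # node holding p2 recovers p1
print(coded.hex())
```

Halving the number of relay transmissions matters especially underwater, where the slow acoustic channel makes every transmission expensive; NCRP generalizes this idea across multiple potential forwarders.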

  1. Energy Optimization Using a Case-Based Reasoning Strategy

    PubMed Central

    Herrera-Viedma, Enrique

    2018-01-01

    At present, the domotization of homes and public buildings is becoming increasingly popular. Domotization is most commonly applied to the field of energy management, since it gives the possibility of managing the consumption of the devices connected to the electric network, the way in which the users interact with these devices, as well as other external factors that influence consumption. In buildings, Heating, Ventilation and Air Conditioning (HVAC) systems have the highest consumption rates. The systems proposed so far have not succeeded in optimizing the energy consumption associated with a HVAC system because they do not monitor all the variables involved in electricity consumption. For this reason, this article presents an agent approach that benefits from the advantages provided by a Multi-Agent architecture (MAS) deployed in a Cloud environment with a wireless sensor network (WSN) in order to achieve energy savings. The agents of the MAS learn social behavior thanks to the collection of data and the use of an artificial neural network (ANN). The proposed system has been assessed in an office building achieving an average energy savings of 41% in the experimental group offices. PMID:29543729

  2. Energy Optimization Using a Case-Based Reasoning Strategy.

    PubMed

    González-Briones, Alfonso; Prieto, Javier; De La Prieta, Fernando; Herrera-Viedma, Enrique; Corchado, Juan M

    2018-03-15

    At present, the domotization of homes and public buildings is becoming increasingly popular. Domotization is most commonly applied to the field of energy management, since it gives the possibility of managing the consumption of the devices connected to the electric network, the way in which the users interact with these devices, as well as other external factors that influence consumption. In buildings, Heating, Ventilation and Air Conditioning (HVAC) systems have the highest consumption rates. The systems proposed so far have not succeeded in optimizing the energy consumption associated with a HVAC system because they do not monitor all the variables involved in electricity consumption. For this reason, this article presents an agent approach that benefits from the advantages provided by a Multi-Agent architecture (MAS) deployed in a Cloud environment with a wireless sensor network (WSN) in order to achieve energy savings. The agents of the MAS learn social behavior thanks to the collection of data and the use of an artificial neural network (ANN). The proposed system has been assessed in an office building achieving an average energy savings of 41% in the experimental group offices.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckner, Mark A; Bobrek, Miljko; Farquhar, Ethan

    Wireless Access Points (WAPs) remain one of the top 10 network security threats. This research is part of an effort to develop a physical (PHY) layer aware Radio Frequency (RF) air monitoring system with multi-factor authentication to provide a first line of defense for network security, stopping attackers before they can gain access to critical infrastructure networks through vulnerable WAPs. This paper presents early results on the identification of OFDM-based 802.11a WiFi devices using RF Distinct Native Attribute (RF-DNA) fingerprints produced by the Fractional Fourier Transform (FRFT). These fingerprints are input to a "Learning from Signals" (LFS) classifier which uses hybrid Differential Evolution/Conjugate Gradient (DECG) optimization to determine the optimal features for a low-rank model to be used for future predictions. Results are presented for devices under the most challenging conditions of intra-manufacturer classification, i.e., same manufacturer and same model, differing only in serial number. The results of Fractional Fourier Domain (FRFD) RF-DNA fingerprints demonstrate significant improvement over results based on Time Domain (TD), Spectral Domain (SD) and even Wavelet Domain (WD) fingerprints.

  4. A probabilistic dynamic energy model for ad-hoc wireless sensors network with varying topology

    NASA Astrophysics Data System (ADS)

    Al-Husseini, Amal

    In this dissertation we investigate the behavior of Wireless Sensor Networks (WSNs) from the degree distribution and evolution perspective. Specifically, we focus on the implementation of a scale-free degree distribution topology for energy-efficient WSNs. WSNs are an emerging technology that finds applications in areas such as environment monitoring, agricultural crop monitoring, forest fire monitoring, and hazardous chemical monitoring in war zones. This technology allows us to collect data without human presence or intervention. Energy conservation and efficiency are among the major issues in prolonging the active life of WSNs. Recently, many energy-aware and fault-tolerant topology control algorithms have been presented, but there is a dearth of research focused on the energy conservation and efficiency of WSNs. Therefore, we study energy efficiency and fault tolerance in WSNs from the degree distribution and evolution perspective. Self-organization observed in natural and biological systems has been directly linked to their degree distribution. It is widely known that a scale-free distribution bestows robustness, fault tolerance, and access efficiency on a system. Fascinated by these properties, we propose two complex-network-theoretic self-organizing models for adaptive WSNs. In particular, we focus on adapting the Barabasi and Albert scale-free model to fit the constraints and limitations of WSNs. We developed simulation models to conduct numerical experiments and network analysis. The main objective of studying these models is to find ways to reduce the energy usage of each node and to balance the overall network energy consumption disrupted by faulty communication among nodes. The first model constructs the wireless sensor network relative to the degree (connectivity) and remaining energy of every individual node. We observed that it results in a scale-free network structure which has good fault-tolerance properties in the face of random node failures. The second model considers additional constraints on the maximum degree of each node as well as the energy consumption relative to degree changes. This gives more realistic results from a dynamical network perspective and leads to balanced network-wide energy consumption. The results show that networks constructed using the proposed approach have good properties under different centrality measures. The outcomes of the presented research are beneficial for building WSN control models with greater self-organization properties, which lead to optimal energy consumption.
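The first model's growth rule, attachment biased by both degree and remaining energy, can be sketched as a small variation on the Barabasi-Albert procedure. Network size, weights and the per-link energy cost below are illustrative assumptions:

```python
import random

# Energy-aware preferential attachment sketch: each new node links to m
# existing nodes with probability proportional to degree * remaining
# energy, and every new link costs the target node some energy. All
# constants are illustrative, not taken from the dissertation.
def grow_network(n_nodes, m=2, link_cost=0.05, seed=1):
    random.seed(seed)
    degree = {0: 1, 1: 1}
    energy = {0: 1.0, 1: 1.0}
    edges = [(0, 1)]
    for new in range(2, n_nodes):
        nodes = sorted(degree)
        weights = [degree[v] * energy[v] for v in nodes]
        targets = set()
        while len(targets) < m:              # m distinct attachment points
            targets.add(random.choices(nodes, weights=weights)[0])
        for t in targets:
            edges.append((new, t))
            degree[t] += 1
            energy[t] = max(0.1, energy[t] - link_cost)
        degree[new] = m
        energy[new] = 1.0
    return degree, edges

deg, edges = grow_network(100)
print(len(edges), max(deg.values()))   # 197 edges; hubs emerge
```

The energy factor damps the pure rich-get-richer effect: heavily connected nodes lose energy and become less attractive, which is the mechanism the dissertation uses to balance network-wide consumption.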

  5. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    PubMed

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets creates a key challenge for operators' network planning. In particular, massive and unplanned deployment of base stations can cause high interference, severely degrading network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long-term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that, in terms of system throughput, the proposed solution outperforms a random-grouping based EA as well as an EA that detects interacting variables by monitoring changes in the objective function.
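The correlation-grouping step can be sketched as finding connected components of a thresholded interference graph, so that each group can then be optimized separately by the EA. The interference matrix and threshold below are invented for illustration:

```python
# Union-find sketch of correlation grouping: cells whose mutual
# interference meets a threshold end up in the same group, so each group
# can be optimized independently. The interference matrix is made up.
def group_cells(interference, threshold):
    n = len(interference)
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for i in range(n):
        for j in range(i + 1, n):
            if interference[i][j] >= threshold:
                parent[find(i)] = find(j)   # union the two components
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

I = [[0, .9, .1, 0], [.9, 0, .2, 0], [.1, .2, 0, .8], [0, 0, .8, 0]]
print(group_cells(I, 0.5))   # → [[0, 1], [2, 3]]
```

Decomposing the search this way keeps each EA subproblem low-dimensional, which is the point of the grouping approach.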

  6. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.

  7. The Real Time Mission Monitor: A Situational Awareness Tool For Managing Experiment Assets

    NASA Technical Reports Server (NTRS)

    Blakeslee, Richard; Hall, John; Goodman, Michael; Parker, Philip; Freudinger, Larry; He, Matt

    2007-01-01

    The NASA Real Time Mission Monitor (RTMM) is a situational awareness tool that integrates satellite, airborne and surface data sets; weather information; model and forecast outputs; and vehicle state data (e.g., aircraft navigation, satellite tracks and instrument fields of view) for field experiment management. RTMM optimizes science and logistics decision-making during field experiments by presenting timely data and graphics to the users to improve real-time situational awareness of the experiment's assets. The RTMM is proven in the field, having supported program managers, scientists, and aircraft personnel during the NASA African Monsoon Multidisciplinary Analyses experiment during summer 2006 in Cape Verde, Africa. The integration and delivery of this information is made possible through data acquisition systems, network communication links and network server resources built and managed by collaborators at NASA Dryden Flight Research Center (DFRC) and Marshall Space Flight Center (MSFC). RTMM is evolving towards a more flexible and dynamic combination of sensor ingest, network computing, and decision-making activities through the use of a service-oriented architecture based on community standards and protocols.

  8. Design and Deployment of Low-Cost Sensors for Monitoring the Water Quality and Fish Behavior in Aquaculture Tanks during the Feeding Process

    PubMed Central

    Parra, Lorena; García, Laura

    2018-01-01

    The monitoring of farming processes can optimize the use of resources and improve sustainability and profitability. In fish farms, the water quality, tank environment, and fish behavior must be monitored. Wireless sensor networks (WSNs) are a promising option to perform this monitoring. Nevertheless, their high cost is slowing the expansion of their use. In this paper, we propose a set of sensors for monitoring the water quality and fish behavior in aquaculture tanks during the feeding process. The WSN is based on physical sensors composed of simple electronic components. The proposed system can monitor water quality parameters, tank status, feed falling, and fish swimming depth and velocity. In addition, the system includes a smart algorithm to reduce the energy wasted when sending information from the node to the database. The system is composed of three nodes in each tank that send the information through the local area network to a database on the Internet, and a smart algorithm that detects abnormal values and sends alarms when they occur. All the sensors are designed, calibrated, and deployed to ensure their suitability. The greatest efforts have been devoted to the fish presence sensor. The total cost of the sensors and nodes for the proposed system is less than 90 €. PMID:29494560

  9. Design and Deployment of Low-Cost Sensors for Monitoring the Water Quality and Fish Behavior in Aquaculture Tanks during the Feeding Process.

    PubMed

    Parra, Lorena; Sendra, Sandra; García, Laura; Lloret, Jaime

    2018-03-01

    The monitoring of farming processes can optimize the use of resources and improve sustainability and profitability. In fish farms, the water quality, tank environment, and fish behavior must be monitored. Wireless sensor networks (WSNs) are a promising option to perform this monitoring. Nevertheless, their high cost is slowing the expansion of their use. In this paper, we propose a set of sensors for monitoring the water quality and fish behavior in aquaculture tanks during the feeding process. The WSN is based on physical sensors composed of simple electronic components. The proposed system can monitor water quality parameters, tank status, feed falling, and fish swimming depth and velocity. In addition, the system includes a smart algorithm to reduce the energy wasted when sending information from the node to the database. The system is composed of three nodes in each tank that send the information through the local area network to a database on the Internet, and a smart algorithm that detects abnormal values and sends alarms when they occur. All the sensors are designed, calibrated, and deployed to ensure their suitability. The greatest efforts have been devoted to the fish presence sensor. The total cost of the sensors and nodes for the proposed system is less than 90 €.

  10. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one-by-one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel-synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogenic diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
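A micro-genetic algorithm of the kind described keeps a very small population and restarts it around the elite whenever it converges, relying on elitism rather than mutation. A generic sketch, assuming tournament selection and uniform crossover (the bit-matching fitness is a toy stand-in for the diagnostic objective, not the authors' operator set):

```python
import random

def micro_ga(fitness, n_bits, pop_size=5, generations=60, seed=1):
    """Tiny-population GA with elitism and restart on convergence
    (a generic micro-GA sketch, not the paper's exact algorithm)."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.randint(0, 1) for _ in range(n_bits)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > fitness(best):
            best = pop[0][:]
        # converged? restart around the elite (the micro-GA hallmark)
        if all(ind == pop[0] for ind in pop):
            pop = [pop[0]] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        # uniform crossover of the elite with tournament-selected mates
        mates = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size - 1)]
        pop = [pop[0]] + [
            [a if rng.random() < 0.5 else b for a, b in zip(pop[0], m)]
            for m in mates
        ]
    return best

# Hypothetical target fault signature to recover from anomaly observations
target = [1, 0, 1, 1, 0, 0, 1, 0]
score = lambda ind: sum(a == b for a, b in zip(ind, target))
print(micro_ga(score, len(target)))
```

In the distributed setting the paper describes, each transducer would run such a sub-task on its own slice of the fault hypothesis space.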

  11. Predictability in space launch vehicle anomaly detection using intelligent neuro-fuzzy systems

    NASA Technical Reports Server (NTRS)

    Gulati, Sandeep; Toomarian, Nikzad; Barhen, Jacob; Maccalla, Ayanna; Tawel, Raoul; Thakoor, Anil; Daud, Taher

    1994-01-01

    Included in this viewgraph presentation on intelligent neuroprocessors for launch vehicle health management systems (HMS) are the following: where the flight failures have been in launch vehicles; cumulative delay time; breakdown of operations hours; failure of Mars Probe; vehicle health management (VHM) cost optimizing curve; target HMS-STS auxiliary power unit location; APU monitoring and diagnosis; and integration of neural networks and fuzzy logic.

  12. Demonstration and Validation of GTS Long-Term Monitoring Optimization Software at Military and Government Sites

    DTIC Science & Technology

    2011-02-01

Defense DoE Department of Energy DPT Direct push technology EPA Environmental Protection Agency ERPIMS Environmental Restoration Program...and 3) assessing whether new wells should be added and where (i.e., network adequacy). • Predict allows import and comparison of new sampling...data against previously estimated trends and maps. Two options include trend flagging and plume flagging to identify potentially anomalous new values

  13. Sensing Models and Sensor Network Architectures for Transport Infrastructure Monitoring in Smart Cities

    NASA Astrophysics Data System (ADS)

    Simonis, Ingo

    2015-04-01

Transport infrastructure monitoring and analysis is one of the focus areas in the context of smart cities. With the growing number of people moving into densely populated urban metro areas, precise tracking of moving people and goods is the basis for profound decision-making and future planning. With the goal of defining optimal extensions and modifications to existing transport infrastructures, multi-modal transport has to be monitored and analysed. This process is performed on the basis of sensor networks that combine a variety of sensor models, types, and deployments within the area of interest. Multi-generation networks, consisting of a number of sensor types and versions, cause further challenges for the integration and processing of sensor observations. These challenges are not getting any smaller with the development of the Internet of Things, which brings promising opportunities but is currently stuck in a protocol war between big industry players from both the hardware and network infrastructure domains. In this paper, we highlight how the OGC suite of standards, with the Sensor Web standards developed by the Sensor Web Enablement Initiative together with the latest developments by the Sensor Web for Internet of Things community, can be applied to the monitoring and improvement of transport infrastructures. Sensor Web standards have been applied in the past to purely technical domains, but now need to be broadened to meet new challenges. Only cross-domain approaches will allow the development of satisfactory transport infrastructure solutions that take into account requirements coming from a variety of sectors such as tourism, administration, the transport industry, emergency services, or private citizens. The goal is the development of interoperable components that can be easily integrated within data infrastructures and follow well-defined information models to allow robust processing.

  14. Application of Multiregressive Linear Models, Dynamic Kriging Models and Neural Network Models to Predictive Maintenance of Hydroelectric Power Systems

    NASA Astrophysics Data System (ADS)

    Lucifredi, A.; Mazzieri, C.; Rossi, M.

    2000-05-01

Since the operational conditions of a hydroelectric unit can vary within a wide range, the monitoring system must be able to distinguish between variations of the monitored variable caused by variations of the operating conditions and those due to the onset and progression of failures and malfunctions. The paper aims to identify the best technique to be adopted for the monitoring system. Three different methods have been implemented and compared. Two of them use statistical techniques: the first, linear multiple regression, expresses the monitored variable as a linear function of the process parameters (independent variables), while the second, the dynamic kriging technique, is a modified multiple linear regression representing the monitored variable as a linear combination of the process variables in such a way as to minimize the variance of the estimate error. The third is based on neural networks. Tests have shown that the monitoring system based on the kriging technique is not affected by problems common to the other two models, such as the need for a large amount of data for tuning (both for training the neural network and for defining the optimum plane for the multiple regression), not only in the system start-up phase but also after a trivial maintenance operation involving the substitution of machinery components that have a direct impact on the observed variable, or the need for different models to describe satisfactorily the different operating ranges of the plant. The monitoring system based on the kriging statistical technique overcomes these difficulties: it does not require a large amount of data to be tuned and is immediately operational (given two points, the third can be immediately estimated); in addition, the model follows the system without adapting itself to it.
The results of the experimentation performed seem to indicate that a model based on a neural network or on a linear multiple regression is not optimal, and that a different approach is necessary to reduce the amount of work during the learning phase by using, when available, all the information stored during the initial phase of the plant to build the reference baseline, processing the raw information where appropriate. A mixed approach using the kriging statistical technique and neural network techniques could optimise the result.
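The monitoring idea common to all three models is to predict the monitored variable from the process parameters and alarm on large residuals. A minimal sketch with a one-predictor least-squares baseline (the load and vibration figures are invented for illustration; the paper's kriging model additionally minimizes the variance of the estimate error, which this sketch does not do):

```python
from statistics import mean, stdev

def fit_line(xs, ys):
    """Closed-form least-squares fit y = a + b*x."""
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical baseline: bearing vibration (mm/s) vs. unit load (MW)
loads = [10, 12, 14, 16, 18, 20]
vib   = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]
a, b = fit_line(loads, vib)
resid = [y - (a + b * x) for x, y in zip(loads, vib)]
band = 3 * (stdev(resid) if len(resid) > 1 else 0.0)

def condition_alarm(load, measured, margin=0.1):
    """Alarm when a measurement deviates from the load-compensated baseline."""
    return abs(measured - (a + b * load)) > max(band, margin)

print(condition_alarm(15, 1.5))   # -> False: consistent with baseline
print(condition_alarm(15, 2.4))   # -> True: excess vibration at same load
```

Separating operating-condition effects (the fitted trend) from fault effects (the residual) is exactly the distinction the abstract says the monitoring system must make.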

  15. Optimizing observational networks combining gliders, moored buoys and FerryBox in the Bay of Biscay and English Channel

    NASA Astrophysics Data System (ADS)

    Charria, Guillaume; Lamouroux, Julien; De Mey, Pierre

    2016-10-01

Designing optimal observation networks in coastal oceans remains one of the major challenges towards the implementation of future efficient Integrated Ocean Observing Systems to monitor the coastal environment. In the Bay of Biscay and the English Channel, the diversity of involved processes (e.g. tidally-driven circulation, plume dynamics) requires adapting observing systems to the specific targeted environments. Also important is the requirement for those systems to sustain coastal applications. Two observational network design experiments have been implemented for the spring season in two regions: the Loire River plume (northern part of the Bay of Biscay) and the Western English Channel. The method used to perform these experiments is based on the ArM (Array Modes) formalism using an ensemble-based approach without data assimilation. The first experiment in the Loire River plume aims to explore different possible glider endurance lines combined with a fixed mooring to monitor temperature and salinity. The main results show an expected improvement when combining glider and mooring observations. The experiment also highlights that the chosen transect (along-shore, or North-South cross-shore) does not significantly impact the efficiency of the network. Nevertheless, the classification from the method results in slightly better performances for along-shore and North-South sections. In the Western English Channel, a tidally-driven circulation system, the added value of operating a glider below the surface FerryBox temperature and salinity measurements has been assessed. FerryBox systems are characterised by a high frequency sampling rate, crossing the region 2 to 3 times a day. This efficient sampling, as well as the specific vertical hydrological structure (which is homogeneous in many sub-regions of the domain), explains the fact that the added value of an associated glider transect is not significant.
These experiments combining existing and future observing systems, as well as numerical ensemble simulations, highlight the key issue of monitoring the whole water column in and close to river plumes (using gliders for example) and the efficiency of the surface high frequency sampling from FerryBoxes in macrotidal regions.

  16. Bandwidth variable transceivers with artificial neural network-aided provisioning and capacity improvement capabilities in meshed optical networks with cascaded ROADM filtering

    NASA Astrophysics Data System (ADS)

    Zhou, Xingyu; Zhuge, Qunbi; Qiu, Meng; Xiang, Meng; Zhang, Fangyuan; Wu, Baojian; Qiu, Kun; Plant, David V.

    2018-02-01

    We investigate the capacity improvement achieved by bandwidth variable transceivers (BVT) in meshed optical networks with cascaded ROADM filtering at fixed channel spacing, and then propose an artificial neural network (ANN)-aided provisioning scheme to select optimal symbol rate and modulation format for the BVTs in this scenario. Compared with a fixed symbol rate transceiver with standard QAMs, it is shown by both experiments and simulations that BVTs can increase the average capacity by more than 17%. The ANN-aided BVT provisioning method uses parameters monitored from a coherent receiver and then employs a trained ANN to transform these parameters into the desired configuration. It is verified by simulation that the BVT with the proposed provisioning method can approach the upper limit of the system capacity obtained by brute-force search under various degrees of flexibilities.
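The ANN in this work learns to approximate a brute-force search over BVT configurations. A sketch of that baseline search, assuming illustrative bits-per-symbol and SNR-requirement tables and a toy filtering-penalty model (none of these numbers are the paper's measured values):

```python
# Hypothetical BVT configuration tables (illustrative, not from the paper)
FORMATS = {"QPSK": 2, "16QAM": 4, "64QAM": 6}            # bits per symbol
REQUIRED_SNR = {"QPSK": 10.0, "16QAM": 17.0, "64QAM": 23.0}  # dB, assumed

def provision(symbol_rates_gbaud, predicted_snr_db):
    """Pick the (symbol rate, format) pair with the highest capacity whose
    SNR requirement is met by the predicted link SNR."""
    best, best_cap = None, 0.0
    for rate in symbol_rates_gbaud:
        for fmt, bps in FORMATS.items():
            cap = rate * bps
            if predicted_snr_db(rate) >= REQUIRED_SNR[fmt] and cap > best_cap:
                best, best_cap = (rate, fmt), cap
    return best, best_cap

# Hypothetical link: cascaded ROADM filtering penalizes higher symbol rates
snr = lambda rate: 25.0 - 0.25 * rate
print(provision([32, 48, 64], snr))  # -> ((32, '16QAM'), 128)
```

The paper's provisioning scheme replaces the `predicted_snr_db` proxy with parameters monitored at a coherent receiver, transformed by a trained ANN into the desired configuration.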

  17. Intelligent control and adaptive systems; Proceedings of the Meeting, Philadelphia, PA, Nov. 7, 8, 1989

    NASA Technical Reports Server (NTRS)

    Rodriguez, Guillermo (Editor)

    1990-01-01

    Various papers on intelligent control and adaptive systems are presented. Individual topics addressed include: control architecture for a Mars walking vehicle, representation for error detection and recovery in robot task plans, real-time operating system for robots, execution monitoring of a mobile robot system, statistical mechanics models for motion and force planning, global kinematics for manipulator planning and control, exploration of unknown mechanical assemblies through manipulation, low-level representations for robot vision, harmonic functions for robot path construction, simulation of dual behavior of an autonomous system. Also discussed are: control framework for hand-arm coordination, neural network approach to multivehicle navigation, electronic neural networks for global optimization, neural network for L1 norm linear regression, planning for assembly with robot hands, neural networks in dynamical systems, control design with iterative learning, improved fuzzy process control of spacecraft autonomous rendezvous using a genetic algorithm.

  18. Completing and sustaining IMS network for the CTBT Verification Regime

    NASA Astrophysics Data System (ADS)

    Meral Ozel, N.

    2015-12-01

The CTBT International Monitoring System will comprise 337 facilities located all over the world for the purpose of detecting and locating nuclear test explosions. Major challenges remain, namely the completion of the network, where most of the remaining stations have environmental, logistical, and/or political issues to surmount (89% of the stations have already been built), and the sustainment of a reliable and state-of-the-art network covering four technologies: seismic, infrasound, hydroacoustic, and radionuclide. To have a credible and trustworthy verification system ready for entry into force of the Treaty, the CTBTO is protecting and enhancing the investment in its global network of stations and is providing effective data to the International Data Centre (IDC) and Member States. Regarding the protection of the CTBTO's investment and enhanced sustainment of IMS station operations, the IMS Division is enhancing the capabilities of the monitoring system by applying advances in instrumentation and introducing new software applications that are fit for purpose. Some examples are the development of noble gas laboratory systems to process and analyse subsoil samples, development of a mobile noble gas system for on-site inspection purposes, optimization of beta-gamma detectors for xenon detection, assessing and improving the efficiency of wind noise reduction systems for infrasound stations, development and testing of infrasound stations with a self-calibrating capability, and research into the use of modular designs for the hydroacoustic network.

  19. Early detection and evaluation of waste through sensorized containers for a collection monitoring application.

    PubMed

    Rovetta, Alberto; Xiumin, Fan; Vicentini, Federico; Minghua, Zhu; Giusti, Alessandro; Qichang, He

    2009-12-01

    The present study describes a novel application for use in the monitoring of municipal solid waste, based on distributed sensor technology and geographical information systems. Original field testing and evaluation of the application were carried out in Pudong, Shanghai (PR China). The local waste management system in Pudong features particular requirements related to the rapidly increasing rate of waste production. In view of the fact that collected waste is currently deployed to landfills or to incineration plants within the context investigated, the key aspects to be taken into account in waste collection procedures include monitoring of the overall amount of waste produced, quantitative measurement of the waste present at each collection point and identification of classes of material present in the collected waste. The case study described herein focuses particularly on the above mentioned aspects, proposing the implementation of a network of sensorized waste containers linked to a data management system. Containers used were equipped with a set of sensors mounted onto standard waste bins. The design, implementation and validation procedures applied are subsequently described. The main aim to be achieved by data collection and evaluation was to provide for feasibility analysis of the final device. Data pertaining to the content of waste containers, sampled and processed by means of devices validated on two purpose-designed prototypes, were therefore uploaded to a central monitoring server using GPRS connection. The data monitoring and management modules are integrated into an existing application used by local municipal authorities. A field test campaign was performed in the Pudong area. The system was evaluated in terms of real data flow from the network nodes (containers) as well as in terms of optimization functions, such as collection vehicle routing and scheduling. 
The most important outcomes obtained were related to calculations of waste weight and volume. The latter data were subsequently used as parameters for the routing optimization of collection trucks and material density evaluation.

  20. A Routing Protocol for Multisink Wireless Sensor Networks in Underground Coalmine Tunnels

    PubMed Central

    Xia, Xu; Chen, Zhigang; Liu, Hui; Wang, Huihui; Zeng, Feng

    2016-01-01

    Traditional underground coalmine monitoring systems are mainly based on the use of wired transmission. However, when cables are damaged during an accident, it is difficult to obtain relevant data on environmental parameters and the emergency situation underground. To address this problem, the use of wireless sensor networks (WSNs) has been proposed. However, the shape of coalmine tunnels is not conducive to the deployment of WSNs as they are long and narrow. Therefore, issues with the network arise, such as extremely large energy consumption, very weak connectivity, long time delays, and a short lifetime. To solve these problems, in this study, a new routing protocol algorithm for multisink WSNs based on transmission power control is proposed. First, a transmission power control algorithm is used to negotiate the optimal communication radius and transmission power of each sink. Second, the non-uniform clustering idea is adopted to optimize the cluster head selection. Simulation results are subsequently compared to the Centroid of the Nodes in a Partition (CNP) strategy and show that the new algorithm delivers a good performance: power efficiency is increased by approximately 70%, connectivity is increased by approximately 15%, the cluster interference is diminished by approximately 50%, the network lifetime is increased by approximately 6%, and the delay is reduced with an increase in the number of sinks. PMID:27916917
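Non-uniform clustering of the kind this protocol uses biases cluster-head election toward nodes with more residual energy and shorter distance to a sink. A generic sketch, assuming a simple multiplicative probability rule and invented node data (not the paper's exact formula):

```python
import random

def elect_cluster_heads(nodes, rng, base_p=0.2):
    """Non-uniform cluster-head election: nodes with more residual energy
    and closer to a sink volunteer with higher probability
    (generic sketch, not the paper's exact selection rule)."""
    e_max = max(n["energy"] for n in nodes)
    d_max = max(n["dist_to_sink"] for n in nodes)
    heads = []
    for n in nodes:
        p = base_p * (n["energy"] / e_max) * (1 - 0.5 * n["dist_to_sink"] / d_max)
        if rng.random() < p:
            heads.append(n["id"])
    return heads

rng = random.Random(42)
# Hypothetical tunnel deployment: 40 nodes with random energy and sink distance
nodes = [{"id": i, "energy": rng.uniform(0.2, 1.0),
          "dist_to_sink": rng.uniform(5, 120)} for i in range(40)]
print(elect_cluster_heads(nodes, rng))
```

In the long, narrow tunnel topology the abstract describes, distance to the nearest of several sinks would replace the single `dist_to_sink` value used here.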

  2. SDN-Enabled Dynamic Feedback Control and Sensing in Agile Optical Networks

    NASA Astrophysics Data System (ADS)

    Lin, Likun

Fiber optic networks are no longer just pipelines for transporting data in the long haul backbone. Exponential growth in traffic in metro-regional areas has pushed higher capacity fiber toward the edge of the network, and highly dynamic patterns of heterogeneous traffic have emerged that are often bursty, severely stressing the historical "fat and dumb pipe" static optical network, which would need to be massively over-provisioned to deal with these loads. What is required is a more intelligent network with a span of control over the optical as well as electrical transport mechanisms, which enables handling of service requests in a fast and efficient way that guarantees quality of service (QoS) while optimizing capacity efficiency. An "agile" optical network is a reconfigurable optical network comprising a high-speed intelligent control system fed by real-time in situ network sensing. It provides fast response in the control and switching of optical signals in response to changing traffic demands and network conditions. This agile control of optical signals is enabled by pushing switching decisions downward in the network stack to the physical layer. Implementing such agility is challenging due to the response dynamics and interactions of signals in the physical layer. Control schemes must deal with issues such as dynamic power equalization, EDFA transients and cascaded noise effects, impairments due to self-phase modulation and dispersion, and channel-to-channel crosstalk. If these issues are not properly predicted and mitigated, attempts at dynamic control can drive the optical network into an unstable state. In order to enable high speed actuation of signal modulators and switches, the network controller must be able to make decisions based on predictive models.
In this thesis, we consider how to take advantage of Software Defined Networking (SDN) capabilities for network reconfiguration, combined with embedded models that access updates from deployed network monitoring sensors. In order to maintain signal quality while optimizing network resources, we find that it is essential to model and update estimates of the physical link impairments in real-time. In this thesis, we consider the key elements required to enable an agile optical network, with contributions as follows: • Control Framework: extended the SDN concept to include the optical transport network through extensions to the OpenFlow (OF) protocol. A unified SDN control plane is built to facilitate control and management capability across the electrical/packet-switched and optical/circuit-switched portions of the network seamlessly. The SDN control plane serves as a platform to abstract the resources of multilayer/multivendor networks. Through this platform, applications can dynamically request the network resources to meet their service requirements. • Use of In-situ Monitors: enabled real-time physical impairment sensing in the control plane using in-situ Optical Performance Monitoring (OPM) and bit error rate (BER) analyzers. OPM and BER values are used as quantitative indicators of the link status and are fed to the control plane through a high-speed data collection interface to form a closed-loop feedback system to enable adaptive resource allocation. • Predictive Network Model: used a network model embedded in the control layer to study the link status. The estimated network status is fed into the control decisions to precompute the network resources. The performance of the network model can be enhanced by the sensing results. • Real-Time Control Algorithms: investigated various dynamic resource allocation mechanisms supporting an agile optical network.
Intelligent routing and wavelength switching for recovering from traffic impairments is achieved experimentally in the agile optical network within one second. A distance-adaptive spectrum allocation scheme to address transmission impairments caused by cascaded Wavelength Selective Switches (WSS) is proposed and evaluated for improving network spectral efficiency.

  3. The role of optimality in characterizing CO2 seepage from geological carbon sequestration sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cortis, Andrea; Oldenburg, Curtis M.; Benson, Sally M.

Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this work we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of: (1) the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the main seepage zone; and (4) the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage nor existing evidence for seepage requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems.
Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage without need for detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective, and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas.
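The dynamic-search argument can be caricatured as selection plus perturbation of sampling locations on a flux field with local extrema, where steepest descent would stall on background maxima. A toy sketch, with an invented background field and seepage peak (all parameters are illustrative assumptions):

```python
import math, random

def flux(x, y):
    """Hypothetical CO2 surface flux: oscillating natural background with
    local extrema, plus a narrow seepage peak near (7, 3)."""
    background = 1.0 + 0.3 * math.sin(x) * math.cos(y)
    seep = 5.0 * math.exp(-((x - 7) ** 2 + (y - 3) ** 2) / 0.5)
    return background + seep

def evolve_samplers(n=20, gens=40, seed=0):
    """Move a population of sampling locations toward high measured flux via
    selection and Gaussian perturbation; a toy stand-in for the proposed
    evolutionary monitoring strategy."""
    rng = random.Random(seed)
    pop = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n)]
    for _ in range(gens):
        pop.sort(key=lambda p: flux(*p), reverse=True)
        elite = pop[: n // 4]
        pop = elite + [(x + rng.gauss(0, 0.5), y + rng.gauss(0, 0.5))
                       for x, y in (elite[rng.randrange(len(elite))]
                                    for _ in range(n - len(elite)))]
    return max(pop, key=lambda p: flux(*p))

bx, by = evolve_samplers()
print(round(bx, 1), round(by, 1))
```

Because the population samples many basins at once, it can escape the background local extrema that trap a single steepest-descent search, which is the point the abstract makes.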

  4. Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm.

    PubMed

    Tan, Xia; Ji, Zhong; Zhang, Yadan

    2018-04-25

    Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
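The MIV screening step perturbs one input at a time by ±10% and averages the resulting change in model output; inputs with small impact are dropped. A sketch, assuming a linear stand-in for the trained BP network and invented sample values (pulse-wave features and coefficients are hypothetical):

```python
def mean_impact_values(model, samples, delta=0.10):
    """Mean Impact Value screening: perturb one input at a time by +/-delta
    and average the resulting change in model output (generic sketch)."""
    n_features = len(samples[0])
    mivs = []
    for j in range(n_features):
        diffs = []
        for x in samples:
            up = list(x); up[j] *= (1 + delta)
            dn = list(x); dn[j] *= (1 - delta)
            diffs.append(model(up) - model(dn))
        mivs.append(sum(diffs) / len(diffs))
    return mivs

# Stand-in for a trained BP network: output depends strongly on feature 0
# (say, pulse transit time), weakly on feature 1, not at all on feature 2.
model = lambda x: 120.0 - 0.8 * x[0] + 0.05 * x[1] + 0.0 * x[2]
samples = [[25.0, 60.0, 3.0], [30.0, 70.0, 4.0], [28.0, 65.0, 5.0]]
miv = mean_impact_values(model, samples)
print([round(v, 3) for v in miv])  # -> [-4.427, 0.65, 0.0]
```

Features whose |MIV| falls below a chosen cutoff would be excluded before the GA tunes the remaining network parameters.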

  5. Metro Optical Networks for Homeland Security

    NASA Astrophysics Data System (ADS)

    Bechtel, James H.

    Metro optical networks provide an enticing opportunity for strengthening homeland security. Many existing and emerging fiber-optic networks can be adapted for enhanced security applications. Applications include airports, theme parks, sports venues, and border surveillance systems. Here real-time high-quality video and captured images can be collected, transported, processed, and stored for security applications. Video and data collection are important also at correctional facilities, courts, infrastructure (e.g., dams, bridges, railroads, reservoirs, power stations), and at military and other government locations. The scaling of DWDM-based networks allows vast amounts of data to be collected and transported including biometric features of individuals at security check points. Here applications will be discussed along with potential solutions and challenges. Examples of solutions to these problems are given. This includes a discussion of metropolitan aggregation platforms for voice, video, and data that are SONET compliant for use in SONET networks and the use of DWDM technology for scaling and transporting a variety of protocols. Element management software allows not only network status monitoring, but also provides optimized allocation of network resources through the use of optical switches or electrical cross connects.

  6. Developing a Framework for Effective Network Capacity Planning

    NASA Technical Reports Server (NTRS)

    Yaprak, Ece

    2005-01-01

    As Internet traffic continues to grow exponentially, developing a clearer understanding of, and appropriately measuring, network's performance is becoming ever more critical. An important challenge faced by the Information Resources Directorate (IRD) at the Johnson Space Center in this context remains not only monitoring and maintaining a secure network, but also better understanding the capacity and future growth potential boundaries of its network. This requires capacity planning which involves modeling and simulating different network alternatives, and incorporating changes in design as technologies, components, configurations, and applications change, to determine optimal solutions in light of IRD's goals, objectives and strategies. My primary task this summer was to address this need. I evaluated network-modeling tools from OPNET Technologies Inc. and Compuware Corporation. I generated a baseline model for Building 45 using both tools by importing "real" topology/traffic information using IRD's various network management tools. I compared each tool against the other in terms of the advantages and disadvantages of both tools to accomplish IRD's goals. I also prepared step-by-step "how to design a baseline model" tutorial for both OPNET and Compuware products.

  7. Optimal topologies for maximizing network transmission capacity

    NASA Astrophysics Data System (ADS)

    Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.

    2018-04-01

It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply it to the Barabási-Albert scale-free, static scale-free, and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
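A common proxy for congestion-limited transmission capacity under shortest-path routing is the maximum node betweenness: capacity scales roughly as (N-1)/B_max, so rewiring that lowers B_max raises capacity. A small sketch comparing two 5-node topologies by brute-force shortest-path counting (this criterion is a standard one and may not be the authors' exact objective):

```python
from itertools import permutations
from collections import deque

def shortest_paths(adj, s, t):
    """All shortest paths from s to t by BFS layering (tiny graphs only)."""
    dist = {s: 0}
    q = deque([s]); parents = {s: []}
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1; parents[v] = [u]; q.append(v)
            elif dist[v] == dist[u] + 1:
                parents[v].append(u)
    def build(v):
        if v == s: return [[s]]
        return [p + [v] for u in parents[v] for p in build(u)]
    return build(t) if t in dist else []

def max_betweenness(adj):
    """Max over nodes of the shortest-path load they carry as intermediates."""
    load = {v: 0.0 for v in adj}
    for s, t in permutations(adj, 2):
        paths = shortest_paths(adj, s, t)
        for p in paths:
            for v in p[1:-1]:
                load[v] += 1.0 / len(paths)
    return max(load.values())

# Star vs. ring on 5 nodes: the hub of the star carries every leaf-to-leaf
# path, so its max betweenness (and congestion) is far higher than the ring's.
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(max_betweenness(star), max_betweenness(ring))  # -> 12.0 2.0
```

A rewiring optimizer would repeatedly propose link swaps and keep those that reduce `max_betweenness` (or another congestion measure) while preserving connectivity.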

  8. Development of hybrid genetic-algorithm-based neural networks using regression trees for modeling air quality inside a public transportation bus.

    PubMed

    Kadiyala, Akhil; Kaur, Devinder; Kumar, Ashok

    2013-02-01

    The present study developed a novel approach to modeling the indoor air quality (IAQ) of a public transportation bus through hybrid genetic-algorithm-based neural networks (also known as evolutionary neural networks) whose input variables were optimized using regression trees, referred to as the GART approach. This study validated the applicability of the GART modeling approach to complex nonlinear systems by accurately predicting the monitored contaminants of carbon dioxide (CO2), carbon monoxide (CO), nitric oxide (NO), sulfur dioxide (SO2), 0.3-0.4 μm sized particle numbers, 0.4-0.5 μm sized particle numbers, particulate matter (PM) concentrations less than 1.0 μm (PM1.0), and PM concentrations less than 2.5 μm (PM2.5) inside a public transportation bus operating on 20% grade biodiesel in Toledo, OH. First, the important variables affecting each monitored in-bus contaminant were determined using regression trees. Second, analysis of variance was used as a complementary sensitivity analysis to the regression tree results to determine a subset of statistically significant variables affecting each monitored in-bus contaminant. Finally, the identified subsets of statistically significant variables were used as inputs to develop three artificial neural network (ANN) models: a regression-tree-based back-propagation network (BPN-RT), a regression-tree-based radial basis function network (RBFN-RT), and the GART model. Performance measures were used to validate the predictive capacity of the developed IAQ models. The results from this approach were compared with those obtained from a theoretical approach and from a generalized practicable approach to modeling IAQ that considered additional independent variables when developing the aforementioned ANN models. The hybrid GART models were able to capture the majority of the variance in the monitored in-bus contaminants.
The genetic-algorithm-based neural network IAQ models outperformed the traditional ANN methods of back-propagation and radial basis function networks. The novelty of this research lies in modeling vehicular indoor air quality by integrating the advanced methods of genetic algorithms, regression trees, and analysis of variance for the monitored in-vehicle gaseous and particulate matter contaminants, and in comparing the results of this approach against the conventional artificial intelligence techniques of back-propagation and radial basis function networks. The study validated the newly developed approach using holdout and threefold cross-validation methods. These results are of great interest to scientists, researchers, and the public in understanding the various aspects of modeling an indoor microenvironment. The methodology can easily be extended to other fields of study.
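
    The core of an evolutionary neural network is a genetic algorithm searching the weight space of a fixed architecture instead of back-propagating gradients. A toy sketch under stated assumptions (a hypothetical 2-2-1 tanh network, truncation selection, one-point crossover and Gaussian mutation; not the paper's GART pipeline, architecture or data):

```python
import math, random

def net(ws, x):
    """Hypothetical 2-2-1 network; ws packs hidden weights/biases and the output layer."""
    h1 = math.tanh(ws[0] * x[0] + ws[1] * x[1] + ws[2])
    h2 = math.tanh(ws[3] * x[0] + ws[4] * x[1] + ws[5])
    return ws[6] * h1 + ws[7] * h2 + ws[8]

def mse(ws, data):
    return sum((net(ws, x) - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=40, gens=60, seed=0):
    """GA over the 9 weights: truncation selection, crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ws: mse(ws, data))
        elite = pop[:pop_size // 4]               # keep the fittest quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, 9)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(9)] += rng.gauss(0, 0.3)   # mutate one gene
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda ws: mse(ws, data))

# fit a toy response surface; the two inputs stand in for the
# regression-tree-screened variables of the paper
grid = [-1.0, -0.5, 0.0, 0.5, 1.0]
data = [((a, b), math.tanh(2 * a - b)) for a in grid for b in grid]
best = evolve(data)
```

    The fitness here is simply the mean squared error of the candidate weight vector, so no gradient information is needed.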

  9. Comparison of land use regression models for NO2 based on routine and campaign monitoring data from an urban area of Japan.

    PubMed

    Kashima, Saori; Yorifuji, Takashi; Sawada, Norie; Nakaya, Tomoki; Eboshida, Akira

    2018-08-01

    Typically, land use regression (LUR) models have been developed using campaign monitoring data rather than routine monitoring data. However, the latter have advantages such as low cost and long-term coverage. Based on the idea that LUR models representing regional differences in air pollution and regional road structures are optimal, the objective of this study was to evaluate the validity of LUR models for nitrogen dioxide (NO2) based on routine and campaign monitoring data obtained from an urban area. We selected the city of Suita in Osaka (Japan). We built a model based on routine monitoring data obtained from all sites (routine-LUR-All) and a model based on campaign monitoring data (campaign-LUR) within the city. Models based on routine monitoring data from background sites (routine-LUR-BS) and from roadside sites (routine-LUR-RS) were also built. The routine LUR models were based on monitoring networks across two prefectures (Osaka and Hyogo). We evaluated the predictive ability of each model and then compared the predicted NO2 concentrations from each model with measured annual average NO2 concentrations at evaluation sites. The routine-LUR-All and routine-LUR-BS models both predicted NO2 concentrations well: adjusted R² = 0.68 and 0.76, respectively, and root mean square error = 3.4 and 2.1 ppb, respectively. The predictions from the routine-LUR-All model were highly correlated with the measured NO2 concentrations at evaluation sites. Although the predicted NO2 concentrations from the models were correlated, the LUR models based on routine networks, and particularly the one based on all monitoring sites, provided better visual representations of the local road conditions in the city. The present study demonstrated that LUR models based on routine data can estimate local traffic-related air pollution in an urban area.
The importance and usefulness of data from routine monitoring networks should be acknowledged.
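
    At its core, an LUR model is a multiple linear regression of measured concentrations on GIS-derived predictors such as road length and traffic intensity. A self-contained ordinary-least-squares sketch with hypothetical predictors and synthetic NO2 data (the actual models use many more candidate variables and a supervised selection procedure):

```python
def ols(X, y):
    """Least-squares coefficients via the normal equations (Gaussian elimination)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                      # forward elimination with pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for r in range(p - 1, -1, -1):            # back substitution
        beta[r] = (b[r] - sum(A[r][k] * beta[k] for k in range(r + 1, p))) / A[r][r]
    return beta

# hypothetical sites: [intercept, major-road km within 100 m, traffic load index]
sites = [[1.0, 0.2, 3.0], [1.0, 1.5, 8.0], [1.0, 0.8, 5.0],
         [1.0, 2.3, 12.0], [1.0, 0.1, 1.0], [1.0, 1.1, 9.0]]
no2 = [10.0 + 6.0 * s[1] + 0.8 * s[2] for s in sites]   # synthetic NO2 (ppb)
beta = ols(sites, no2)
```

    On real data, the adjusted R² and RMSE reported in the abstract are computed from the residuals of exactly this kind of fit at held-out evaluation sites.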

  10. Energy-Efficient Control with Harvesting Predictions for Solar-Powered Wireless Sensor Networks.

    PubMed

    Zou, Tengyue; Lin, Shouying; Feng, Qijie; Chen, Yanlian

    2016-01-04

    Wireless sensor networks equipped with rechargeable batteries are useful for outdoor environmental monitoring. However, the severe energy constraints of the sensor nodes present major challenges for long-term applications. To achieve sustainability, solar cells can be used to acquire energy from the environment. Unfortunately, the energy supplied by the harvesting system is generally intermittent and considerably influenced by the weather. To improve the energy efficiency and extend the lifetime of the networks, we propose algorithms for harvested energy prediction using environmental shadow detection. Thus, the sensor nodes can adjust their scheduling plans accordingly to best suit their energy production and residual battery levels. Furthermore, we introduce clustering and routing selection methods to optimize the data transmission, and a Bayesian network is used for warning notifications of bottlenecks along the path. The entire system is implemented on a real-time Texas Instruments CC2530 embedded platform, and the experimental results indicate that these mechanisms sustain the networks' activities in an uninterrupted and efficient manner.
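
    The paper's shadow-detection predictor is not reproduced here; the sketch below shows the generic pattern such harvesting-aware schemes build on, an EWMA forecast of the next slot's harvest feeding a duty-cycle decision (all parameters are hypothetical):

```python
def ewma_forecast(history, alpha=0.5):
    """Forecast the next harvest for a time slot from the same slot on past days."""
    f = history[0]
    for h in history[1:]:
        f = alpha * h + (1 - alpha) * f   # exponentially weighted moving average
    return f

def choose_duty_cycle(predicted_mj, battery_mj, battery_cap_mj,
                      slot_cost_mj=2.0, d_min=0.05, d_max=1.0):
    """Spend the predicted harvest plus a fraction of any stored surplus."""
    surplus = max(0.0, battery_mj - 0.5 * battery_cap_mj)
    budget = predicted_mj + 0.2 * surplus
    return max(d_min, min(d_max, budget / slot_cost_mj))
```

    A node with a full battery and a sunny forecast runs at full duty cycle, while a shaded node with a depleted battery falls back to the minimum sampling rate rather than dying outright.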

  11. Characterizing air quality data from complex network perspective.

    PubMed

    Fan, Xinghua; Wang, Li; Xu, Huihui; Li, Shasha; Tian, Lixin

    2016-02-01

    Air quality depends mainly on changes in the emission of pollutants and their precursors. Understanding its characteristics is the key to predicting and controlling air quality. In this study, complex networks were built to analyze the topological characteristics of air quality data using the correlation coefficient method. First, PM2.5 (particulate matter with aerodynamic diameter less than 2.5 μm) indexes of eight monitoring sites in Beijing were selected as samples from January 2013 to December 2014. Second, the C-C method was applied to determine the structure of the phase space. Points in the reconstructed phase space were taken as the nodes of the mapped network. Edges were then placed between nodes whose correlation exceeded a critical threshold. Three properties of the constructed networks, degree distribution, clustering coefficient, and modularity, were used to determine the optimal value of the critical threshold. Finally, by analyzing and comparing topological properties, we show that the similarities and differences among the constructed networks reveal the influencing factors and their distinct roles in the real air quality system.
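
    The construction can be summarized as: delay-embed the PM2.5 series, treat each delay vector as a node, and connect nodes whose correlation exceeds the critical threshold. A minimal sketch with a synthetic series and fixed embedding parameters (the paper selects them with the C-C method):

```python
import math

def embed(series, dim=3, tau=1):
    """Delay-embed a time series; each delay vector becomes one network node."""
    return [series[i:i + dim * tau:tau] for i in range(len(series) - (dim - 1) * tau)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def build_network(series, dim=3, tau=1, threshold=0.9):
    """Connect nodes whose delay vectors correlate above the critical threshold."""
    nodes = embed(series, dim, tau)
    adj = {i: set() for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if pearson(nodes[i], nodes[j]) >= threshold:
                adj[i].add(j); adj[j].add(i)
    return adj

def avg_clustering(adj):
    """Average local clustering coefficient of the constructed network."""
    cs = []
    for v, nb in adj.items():
        k = len(nb)
        if k < 2:
            cs.append(0.0)
            continue
        links = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        cs.append(2.0 * links / (k * (k - 1)))
    return sum(cs) / len(cs)

# synthetic stand-in for a PM2.5 index series
pm25 = [math.sin(i / 3.0) + 0.3 * math.sin(i / 1.1) for i in range(60)]
adj = build_network(pm25, dim=3, threshold=0.95)
```

    Sweeping the threshold and watching the degree distribution, clustering coefficient and modularity is then how the optimal critical threshold is chosen.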

  12. A versatile and interoperable network of sensors for water resources monitoring

    NASA Astrophysics Data System (ADS)

    Ortolani, Alberto; Brandini, Carlo; Costantini, Roberto; Costanza, Letizia; Innocenti, Lucia; Sabatini, Francesco; Gozzini, Bernardo

    2010-05-01

    Monitoring systems that assess water resource quantity and quality require extensive in-situ measurements, which have significant limitations: data can be difficult to access and share, and sensor networks are hard to customise and reconfigure to meet end-user needs during monitoring or crisis phases. To address these limitations, Sensor Web Enablement (SWE) technologies for sensor management have been developed and applied to different environmental contexts under the EU-funded OSIRIS project (Open architecture for Smart and Interoperable networks in Risk management based on In-situ Sensors, www.osiris-fp6.eu). The main objective of OSIRIS was to create a monitoring system for managing different environmental crisis situations through an efficient data-processing chain in which in-situ sensors are connected via an intelligent and versatile network infrastructure (based on web technologies) that enables end-users to remotely access multi-domain sensor information. One of the project applications focused on underground fresh-water monitoring and management. To this end, a monitoring system that continuously and automatically checks water quality and quantity was designed and built in a pilot test area, a portion of the Amiata aquifer feeding the Santa Fiora springs (Grosseto, Italy). This aquifer has characteristics that make it highly vulnerable under some conditions. It is a volcanic aquifer with a fractured structure. The volcanic nature of Santa Fiora produces arsenic concentrations that are normally very close to the threshold stated by law, but that sometimes exceed it for reasons still not fully understood. The fractures make the infiltration rate very inhomogeneous from place to place and very high near large fractures. In the case of liquid-pollutant spills (typically hydrocarbon spills from tanker accidents or leakage from household fuel-oil tanks), these fractures can act as shortcuts to the heart of the aquifer, contaminating the water much faster than average infiltration rates would suggest. A new system has been set up, upgrading a legacy sensor network with new sensors to support both the monitoring and the emergency-management phases. Where necessary, sensors were modified so that the whole network can be managed through SWE services. The network manages sensors for water parameters (physical and chemical) and for atmospheric parameters (to support the management of accidental crises). A key property of the developed architecture is that it can easily be reconfigured to pass from the monitoring phase to the alert phase, by changing the sampling frequencies of the parameters of interest or by deploying additional sensors at identified optimal positions (as in the case of a hydrocarbon spill). A hydrogeological model, coupled through a hydrological interface to the atmospheric forcing, has been implemented for the area. Model products, accessed through the same web interface as the sensors, add fundamental value to the upgraded sensor network (e.g., for data-merging procedures). Together with the available measurements, the model improves knowledge of the local hydrogeological system and supports reconfiguration of the system (e.g., the placement of transportable sensors). The network, conceived for real-time monitoring, allows an unprecedented amount of information about the aquifer to be accumulated. The availability of such a large data set (continuously measured water levels, fluxes, precipitation, concentrations, etc.) provides a unique opportunity to study the influence of hydrogeological and geopedological parameters on arsenic and on concentrations of other chemicals naturally present in the water.

  13. Point Positioning Service for Natural Hazard Monitoring

    NASA Astrophysics Data System (ADS)

    Bar-Sever, Y. E.

    2014-12-01

    In an effort to improve natural hazard monitoring, JPL has invested in updating and enlarging its global real-time GNSS tracking network, and has launched a unique service for real-time precise positioning for natural hazard monitoring, entitled GREAT Alert (GNSS Real-Time Earthquake and Tsunami Alert). GREAT Alert leverages the full technological and operational capability of JPL's Global Differential GPS System [www.gdgps.net] to offer owners of real-time dual-frequency GNSS receivers: sub-5 cm (3D RMS) real-time absolute positioning in ITRF08, regardless of location; under 5 seconds turnaround time; full covariance information; and, optionally, estimates of ancillary parameters (such as troposphere). This service enables GNSS network operators to have instant access to the most accurate and reliable real-time positioning solutions for their sites, as well as for the hundreds of participating sites globally, assuring consistency and uniformity across all solutions. Local authorities with limited technical and financial resources can now access the best technology and share environmental data to the benefit of the entire Pacific region. We will describe the specialized precise point positioning techniques employed by the GREAT Alert service, optimized for natural hazard monitoring and, in particular, earthquake monitoring. We address three fundamental aspects of these applications: 1) small and infrequent motion, 2) the availability of data at a central location, and 3) the need for refined solutions at several time scales.

  14. Value of information analysis for groundwater quality monitoring network design Case study: Eocene Aquifer, Palestine

    NASA Astrophysics Data System (ADS)

    Khader, A.; McKee, M.

    2010-12-01

    Value of information (VOI) analysis evaluates the benefit of collecting additional information to reduce or eliminate uncertainty in a specific decision-making context. It makes explicit any expected potential losses from errors in decision making due to uncertainty, and identifies the "best" information collection strategy as the one that leads to the greatest expected net benefit to the decision-maker. This study investigates the willingness to pay for groundwater quality monitoring in the Eocene Aquifer, Palestine, an unconfined aquifer located in the northern part of the West Bank. The aquifer is used by 128,000 Palestinians to meet domestic and agricultural demands. The study takes into account the consequences of pollution and the options the decision maker might face. Since nitrate is the major pollutant in the aquifer, the consequences of nitrate pollution were analyzed; these mainly consist of the risk of methemoglobinemia (blue baby syndrome). In this case, the value of monitoring was compared to the costs of treating methemoglobinemia or the costs of other options such as water treatment, using bottled water, or importing water from outside the aquifer. Finally, an optimal monitoring network that accounts for the uncertainties in recharge (climate), aquifer properties (hydraulic conductivity), and pollutant chemical reaction (decay factor), as well as the value of monitoring, is designed utilizing a sparse Bayesian modeling algorithm called a relevance vector machine.
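
    The decision-theoretic core of VOI analysis can be illustrated with a two-state example: compare the expected cost of committing to one action under the prior with the expected cost achievable under perfect information; monitoring is worth at most the difference. A sketch with hypothetical costs and probabilities (not the study's figures):

```python
def expected_costs(actions, p):
    """actions[name] = (cost if aquifer is polluted, cost if clean); p = P(polluted)."""
    return {name: p * cp + (1 - p) * cc for name, (cp, cc) in actions.items()}

def value_of_perfect_information(actions, p):
    ec_no_info = min(expected_costs(actions, p).values())   # commit before observing
    best_if_polluted = min(cp for cp, _ in actions.values())
    best_if_clean = min(cc for _, cc in actions.values())
    ec_perfect = p * best_if_polluted + (1 - p) * best_if_clean
    return ec_no_info - ec_perfect

# hypothetical annual costs (arbitrary units): (if polluted, if clean)
actions = {
    "do_nothing": (100.0, 0.0),      # polluted case carries health costs
    "treat_water": (20.0, 20.0),
    "bottled_water": (35.0, 35.0),
}
voi = value_of_perfect_information(actions, p=0.3)
```

    In this toy setting the expected value of perfect information is 14 cost units, so a monitoring network costing more than that per year would not pay for itself; real VOI analyses, as in the study, also account for imperfect monitoring and uncertain aquifer parameters.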

  15. A Novel Cross-Layer Routing Protocol Based on Network Coding for Underwater Sensor Networks

    PubMed Central

    Wang, Hao; Wang, Shilian; Bu, Renfei; Zhang, Eryang

    2017-01-01

    Underwater wireless sensor networks (UWSNs) have attracted increasing attention in recent years because of their numerous applications in ocean monitoring, resource discovery and tactical surveillance. However, the design of reliable and efficient transmission and routing protocols is a challenge due to the low acoustic propagation speed and complex channel environment in UWSNs. In this paper, we propose a novel cross-layer routing protocol based on network coding (NCRP) for UWSNs, which utilizes network coding and cross-layer design to greedily forward data packets to sink nodes efficiently. The proposed NCRP takes full advantage of multicast transmission and jointly decodes packets using encoded packets received from multiple potential nodes across the network. The transmission power is optimized in our design to extend the life cycle of the network. Moreover, we design a real-time routing maintenance protocol to update the route when inefficient relay nodes are detected. Substantial simulations of the underwater environment in Network Simulator 3 (NS-3) show that NCRP significantly improves network performance in terms of energy consumption, end-to-end delay and packet delivery ratio compared with other routing protocols for UWSNs. PMID:28786915
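
    The simplest network-coding primitive is the XOR of two equal-length packets: a relay broadcasts one coded packet, and any sink that already holds one of the originals recovers the other. NCRP builds on more general multi-relay coding, but the principle can be sketched as:

```python
def xor_encode(pkt_x: bytes, pkt_y: bytes) -> bytes:
    """Relay-side coding: XOR two equal-length packets into one transmission."""
    return bytes(a ^ b for a, b in zip(pkt_x, pkt_y))

# hypothetical payloads from two sensor nodes (same length)
pkt_a = b"sensor-A-depth:42"
pkt_b = b"sensor-B-temp:3.7"

coded = xor_encode(pkt_a, pkt_b)         # the relay broadcasts one packet, not two
recovered_b = xor_encode(coded, pkt_a)   # a sink that overheard A recovers B
```

    Because XOR is its own inverse, the single coded broadcast replaces two unicasts, which is what saves energy and channel time on slow acoustic links.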

  16. 40 CFR 58.13 - Monitoring network completion.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Monitoring network completion. 58.13... (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.13 Monitoring network completion. (a) The network of NCore multipollutant sites must be physically established no later than January 1, 2011...

  17. 40 CFR 58.13 - Monitoring network completion.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Monitoring network completion. 58.13... (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.13 Monitoring network completion. (a) The network of NCore multipollutant sites must be physically established no later than January 1, 2011...

  18. Optimal cost design of water distribution networks using a decomposition approach

    NASA Astrophysics Data System (ADS)

    Lee, Ho Min; Yoo, Do Guen; Sadollah, Ali; Kim, Joong Hoon

    2016-12-01

    Water distribution network decomposition, which is an engineering approach, is adopted to increase the efficiency of obtaining the optimal cost design of a water distribution network using an optimization algorithm. This study applied the source tracing tool in EPANET, which is a hydraulic and water quality analysis model, to the decomposition of a network to improve the efficiency of the optimal design process. The proposed approach was tested by carrying out the optimal cost design of two water distribution networks, and the results were compared with other optimal cost designs derived from previously proposed optimization algorithms. The proposed decomposition approach using the source tracing technique enables the efficient decomposition of an actual large-scale network, and the results can be combined with the optimal cost design process using an optimization algorithm. This proves that the final design in this study is better than those obtained with other previously proposed optimization algorithms.

  19. Moball-Buoy Network: A Near-Real-Time Ground-Truth Distributed Monitoring System to Map Ice, Weather, Chemical Species, and Radiations, in the Arctic

    NASA Astrophysics Data System (ADS)

    Davoodi, F.; Shahabi, C.; Burdick, J.; Rais-Zadeh, M.; Menemenlis, D.

    2014-12-01

    This work was funded by the Cryospheric Sciences Program of NASA Headquarters. Recent observations of the Arctic have shown that sea ice has diminished drastically, consequently impacting the environment in the Arctic and beyond. Factors such as atmospheric anomalies, wind forces, temperature increase, and changes in the distribution of cold and warm waters contribute to the sea ice reduction. However, current measurement capabilities lack the accuracy, temporal sampling, and spatial coverage required to effectively quantify each contributing factor and to identify other missing factors. Addressing the need for new measurement capabilities for the new Arctic regime, we propose a game-changing in-situ Arctic-wide distributed mobile monitoring system called the Moball-buoy Network. The Moball-buoy Network consists of a number of wind-propelled, self-powered inflatable spheres referred to as Moball-buoys. The Moball-buoys use a novel mechanical control and energy-harvesting system that exploits the abundance of wind in the Arctic for controlled mobility and energy harvesting. They are equipped with an array of low-power, low-mass sensors and micro devices able to measure a wide range of environmental factors, such as ice conditions, chemical species, wind vector patterns, cloud coverage, air temperature and pressure, electromagnetic fields, surface and subsurface water conditions, short- and long-wave radiation, bathymetry, and anthropogenic factors such as pollution. The stop-and-go motion capability provided by the novel mechanics, together with the heads-up cooperative control strategy at the core of the proposed distributed system, enables the sensor network to be reconfigured dynamically according to the priority of the parameters to be monitored. The large number of Moball-buoys, with their ground-based, sea-based, satellite and peer-to-peer communication capabilities, would constitute a wireless mesh network providing an interface for a global control system. This control system will ensure Arctic-wide coverage and will optimize the Moball-buoys' monitoring efforts according to their available resources and the priority of local areas of high scientific value within the Arctic region. The Moball-buoy Network is expected to be the first robust and persistent Arctic-wide environment monitoring system capable of providing reliable readings in near real time.

  20. Optimized MPPT-based converter for TEG energy harvester to power wireless sensor and monitoring system in nuclear power plant

    NASA Astrophysics Data System (ADS)

    Xing, Shaoxu; Anakok, Isil; Zuo, Lei

    2017-04-01

    Accidents like the Fukushima disaster are pushing improvements in the monitoring systems of nuclear power plants, and various types of energy harvesters have been designed to power these systems; the thermoelectric generator (TEG) energy harvester is one of them. To increase the harvested power and the system efficiency, the power management stage needs to be carefully designed. In this paper, a power converter with optimized Maximum Power Point Tracking (MPPT) is proposed for a TEG energy harvester that powers the wireless sensor network in a nuclear power plant. The TEG energy harvester is installed on the coolant pipe of the plant and harvests its heat energy, while the power converter with optimized MPPT makes the TEG harvester deliver maximum power, respond quickly to voltage changes, and provide sufficient energy for the wireless sensor system to monitor the operation of the plant. Owing to the special characteristics of the Single-Ended Primary Inductor Converter (SEPIC) when operating in Discontinuous Inductor Current Mode (DICM) and Continuous Conduction Mode (CCM), the MPPT method presented in this paper can control the converter to achieve maximum output power under any working condition of the TEG system with a simple circuit. The optimized MPPT algorithm significantly reduces cost and simplifies the system while achieving good performance. Experimental results show that, compared to a fixed-duty-cycle SEPIC specifically designed for operation on the secondary coolant loop of a nuclear power plant, the optimized MPPT algorithm increased the output power by 55%.
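
    A TEG behaves like a Thevenin source, so maximum power transfer occurs at half the open-circuit voltage, i.e. at I = Voc/(2·R_int). A generic perturb-and-observe MPPT sketch against a hypothetical TEG model (the paper's controller additionally exploits SEPIC behavior in DICM/CCM, which is not modeled here; Voc and R_int values are illustrative):

```python
def teg_power(i, voc=5.0, r_int=2.5):
    """Thevenin TEG model (hypothetical values): P = (Voc - R_int * I) * I."""
    return (voc - r_int * i) * i

def perturb_and_observe(steps=60, i0=0.1, delta=0.05):
    """Classic P&O: keep perturbing while power rises, reverse when it falls."""
    i, p_prev, direction = i0, teg_power(i0), 1
    for _ in range(steps):
        i += direction * delta
        p = teg_power(i)
        if p < p_prev:
            direction = -direction   # power dropped: walk the other way
        p_prev = p
    return i, p_prev

i_op, p_op = perturb_and_observe()
# for this model the analytic optimum is I = Voc / (2 * R_int) = 1.0 A, P = 2.5 W;
# P&O converges to it and then oscillates within one perturbation step
```

    In a converter implementation the perturbed quantity is the duty cycle rather than the current directly, but the track-and-reverse logic is the same.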

  1. Wireless visual sensor network resource allocation using cross-layer optimization

    NASA Astrophysics Data System (ADS)

    Bentley, Elizabeth S.; Matyjas, John D.; Medley, Michael J.; Kondi, Lisimachos P.

    2009-01-01

    In this paper, we propose an approach to manage network resources for a Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network where nodes monitor scenes with varying levels of motion. It uses cross-layer optimization across the physical layer, the link layer and the application layer. Our technique simultaneously assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network based on one of two criteria that maximize the quality of video of the entire network as a whole, subject to a constraint on the total chip rate. One criterion results in the minimal average end-to-end distortion amongst all nodes, while the other criterion minimizes the maximum distortion of the network. Our approach allows one to determine the capacity of the visual sensor network based on the number of nodes and the quality of video that must be transmitted. For bandwidth-limited applications, one can also determine the minimum bandwidth needed to accommodate a number of nodes with a specific target chip rate. Video captured by a sensor node camera is encoded and decoded using the H.264 video codec by a centralized control unit at the network layer. To reduce the computational complexity of the solution, Universal Rate-Distortion Characteristics (URDCs) are obtained experimentally to relate bit error probabilities to the distortion of corrupted video. Bit error rates are found first by using Viterbi's upper bounds on the bit error probability and second, by simulating nodes transmitting data spread by Total Square Correlation (TSC) codes over a Rayleigh-faded DS-CDMA channel and receiving that data using Auxiliary Vector (AV) filtering.
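
    For a handful of nodes, the min-max criterion can be made concrete by brute force: enumerate per-node (source rate, channel code rate, power) options, discard combinations exceeding the chip-rate budget, and keep the one whose worst node distortion is smallest. A sketch using a hypothetical stand-in for the URDC distortion lookup (the option values, budget model and distortion formula are illustrative, not the paper's):

```python
from itertools import product

def distortion(src_rate, code_rate, power, motion):
    """Stand-in for the URDC lookup: more motion hurts, more rate/power helps."""
    return motion / (src_rate * code_rate * power ** 0.5)

def minimax_allocation(motions, options, chip_budget):
    """Assign one option per node, within the chip budget, minimizing the
    maximum end-to-end distortion across nodes."""
    best, best_val = None, float("inf")
    for combo in product(options, repeat=len(motions)):
        # chip cost grows with source rate and with channel-coding redundancy
        if sum(s / c for s, c, _ in combo) > chip_budget:
            continue
        worst = max(distortion(s, c, p, m) for (s, c, p), m in zip(combo, motions))
        if worst < best_val:
            best, best_val = combo, worst
    return best, best_val

# hypothetical node options: (source kbps, channel code rate, power mW)
options = [(64, 0.5, 100), (128, 0.5, 100), (128, 0.75, 200)]
motions = [1.0, 2.0]            # the second node watches a higher-motion scene
alloc, worst = minimax_allocation(motions, options, chip_budget=300)
```

    With this budget the search gives the stronger option to the higher-motion node, which is exactly the behavior the min-max criterion is meant to produce; the paper replaces the toy distortion function with experimentally measured URDCs and a non-exhaustive optimizer.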

  2. A method of network topology optimization design considering application process characteristic

    NASA Astrophysics Data System (ADS)

    Wang, Chunlin; Huang, Ning; Bai, Yanan; Zhang, Shuo

    2018-03-01

    Communication networks are designed to meet the usage requirements of users for various network applications. Previous studies of network topology optimization design mainly considered network traffic, which is the result of network application operation rather than a design element of communication networks. A network application is a procedure by which users consume services with certain demanded performance requirements, and it has an obvious process characteristic. In this paper, we propose a method to optimize the design of communication network topology that considers this application process characteristic. Taking minimum network delay as the objective, and the cost of network design and network connectivity reliability as constraints, an optimization model of network topology design is formulated, and the optimal solution is searched for with a Genetic Algorithm (GA). Furthermore, we investigate the influence of network topology parameters on network delay under multiple process-oriented applications, which can guide the generation of the initial population and thus improve the efficiency of the GA. Numerical simulations show the effectiveness and validity of the proposed method. Network topology optimization design that considers applications can improve the reliability of applications and provide guidance for network builders in the early stage of network design, which is of great significance in engineering practice.

  3. Mathematical model of highways network optimization

    NASA Astrophysics Data System (ADS)

    Sakhapov, R. L.; Nikolaeva, R. V.; Gatiyatullin, M. H.; Makhmutov, M. M.

    2017-12-01

    The article deals with the design of highway networks. Studies show that the main requirement road transport places on the road network is that all the transport links it serves be realized at the least possible cost. The goal of optimizing a network of highways is to increase the efficiency of transport. A large number of factors must be taken into account, which makes it difficult to quantify and qualify their impact on the road network. In this paper, we propose constructing an optimal variant for locating the road network on the basis of a mathematical model. The article defines the optimality criteria and objective functions that reflect the requirements for the road network. The condition most fully satisfying optimality is the minimization of road and transport costs, and we adopted this indicator as the optimality criterion in the economic-mathematical model of a highway network. Studies have shown that each served point in the optimal road network is connected with all other corresponding points along the directions providing the least financial cost necessary to move passengers and cargo from that point to the others. The article presents general principles for constructing an optimal network of roads.

  4. Real-Time Alpine Measurement System Using Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Monitoring the snow pack is crucial for many stakeholders, whether for hydro-power optimization, water management or flood control. Traditional forecasting relies on regression methods, which often results in snow melt runoff predictions of low accuracy in non-average years. Existing ground-based real-time measurement systems do not cover enough physiographic variability and are mostly installed at low elevations. We present the hardware and software design of a state-of-the-art distributed Wireless Sensor Network (WSN)-based autonomous measurement system with real-time remote data transmission that gathers data of snow depth, air temperature, air relative humidity, soil moisture, soil temperature, and solar radiation in physiographically representative locations. Elevation, aspect, slope and vegetation are used to select network locations, and distribute sensors throughout a given network location, since they govern snow pack variability at various scales. Three WSNs were installed in the Sierra Nevada of Northern California throughout the North Fork of the Feather River, upstream of the Oroville dam and multiple powerhouses along the river. The WSNs gathered hydrologic variables and network health statistics throughout the 2017 water year, one of northern Sierra’s wettest years on record. These networks leverage an ultra-low-power wireless technology to interconnect their components and offer recovery features, resilience to data loss due to weather and wildlife disturbances and real-time topological visualizations of the network health. Data show considerable spatial variability of snow depth, even within a 1 km² network location. Combined with existing systems, these WSNs can better detect precipitation timing and phase, monitor sub-daily dynamics of infiltration and surface runoff during precipitation or snow melt, and inform hydro power managers about actual ablation and end-of-season date across the landscape. PMID:29120376

  5. Real-Time Alpine Measurement System Using Wireless Sensor Networks.

    PubMed

    Malek, Sami A; Avanzi, Francesco; Brun-Laguna, Keoma; Maurer, Tessa; Oroza, Carlos A; Hartsough, Peter C; Watteyne, Thomas; Glaser, Steven D

    2017-11-09

    Monitoring the snow pack is crucial for many stakeholders, whether for hydro-power optimization, water management or flood control. Traditional forecasting relies on regression methods, which often results in snow melt runoff predictions of low accuracy in non-average years. Existing ground-based real-time measurement systems do not cover enough physiographic variability and are mostly installed at low elevations. We present the hardware and software design of a state-of-the-art distributed Wireless Sensor Network (WSN)-based autonomous measurement system with real-time remote data transmission that gathers data of snow depth, air temperature, air relative humidity, soil moisture, soil temperature, and solar radiation in physiographically representative locations. Elevation, aspect, slope and vegetation are used to select network locations, and distribute sensors throughout a given network location, since they govern snow pack variability at various scales. Three WSNs were installed in the Sierra Nevada of Northern California throughout the North Fork of the Feather River, upstream of the Oroville dam and multiple powerhouses along the river. The WSNs gathered hydrologic variables and network health statistics throughout the 2017 water year, one of northern Sierra's wettest years on record. These networks leverage an ultra-low-power wireless technology to interconnect their components and offer recovery features, resilience to data loss due to weather and wildlife disturbances and real-time topological visualizations of the network health. Data show considerable spatial variability of snow depth, even within a 1 km² network location. Combined with existing systems, these WSNs can better detect precipitation timing and phase, monitor sub-daily dynamics of infiltration and surface runoff during precipitation or snow melt, and inform hydro power managers about actual ablation and end-of-season date across the landscape.

  6. Network-Based Real-time Integrated Fire Detection and Alarm (FDA) System with Building Automation

    NASA Astrophysics Data System (ADS)

    Anwar, F.; Boby, R. I.; Rashid, M. M.; Alam, M. M.; Shaikh, Z.

    2017-11-01

    Fire alarm systems have become an increasingly important lifesaving technology in many aspects, such as applications to detect, monitor and control any fire hazard. A large sum of money is spent annually to install and maintain fire alarm systems in buildings to protect property and lives from the unexpected spread of fire. Several methods have already been developed, and they are improving on a daily basis to reduce cost as well as increase quality. An integrated Fire Detection and Alarm (FDA) system with building automation was studied to reduce cost and improve reliability by preventing false alarms. This work proposes an improved framework for an FDA system to ensure a robust intelligent network of FDA control panels in real time. A shortest-path algorithm was chosen for a series of buildings connected by a fiber-optic network. The framework shares information and communicates with each fire alarm panel connected in a peer-to-peer configuration and declares the network state using network address declaration from any building connected to the network. The fiber-optic connection was proposed to reduce signal noise, thus increasing area coverage, real-time communication and long-term safety. Based on this proposed method, an experimental setup was designed and a prototype system was developed to validate the performance in practice. Also, the distributed network system was proposed to connect with an optional remote monitoring terminal panel to validate the proposed network performance and ensure fire survivability where the information is sequentially transmitted. The proposed FDA system differs from traditional fire alarm and detection systems in terms of topology, as it manages a group of buildings in an optimal and efficient manner.
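
    The shortest-path routing between panels can be sketched with Dijkstra's algorithm — one common choice, since the abstract does not name the specific algorithm; the panel graph and link weights below are hypothetical:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns dict of node -> shortest distance from source.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical panel network: nodes are building panels, weights are link costs.
panels = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(panels, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

    In a peer-to-peer panel network, each panel would run this locally over the shared topology to pick its cheapest relay route.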

  7. Practical synchronization on complex dynamical networks via optimal pinning control

    NASA Astrophysics Data System (ADS)

    Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu

    2015-07-01

    We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize the global synchronization error and control strength over a given finite time interval, as well as the synchronization error at the terminal time. By utilizing Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing numerical simulations on star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. The obtained results show that optimal pinning control is very effective for synchronization control in real applications.

  8. Analysis of Energy Efficiency in WSN by Considering SHM Application

    NASA Astrophysics Data System (ADS)

    Kumar, Pawan; Naresh Babu, Merugu; Raju, Kota Solomon, Dr; Sharma, Sudhir Kumar, Dr; Jain, Vaibhav

    2017-08-01

    A Wireless Sensor Network is composed of a significant number of autonomous nodes deployed over an extensive or remote area. In a WSN, the sensor nodes have a limited transmission range, processing speed and storage capabilities, and their energy resources are also limited. In a WSN, not all nodes are directly connected. The primary objective for all kinds of WSN is to enhance and optimize the network lifetime, i.e. to minimize the energy consumption in the WSN. Among the many applications of WSNs, this research paper focuses on the Structural Health Monitoring (SHM) application, in which a 50-meter bridge has been taken as a test application for simulation purposes.

  9. Survey of WBSNs for Pre-Hospital Assistance: Trends to Maximize the Network Lifetime and Video Transmission Techniques

    PubMed Central

    Gonzalez, Enrique; Peña, Raul; Vargas-Rosales, Cesar; Avila, Alfonso; Perez-Diaz de Cerio, David

    2015-01-01

    This survey aims to encourage multidisciplinary communities to join forces for innovation in the mobile health monitoring area. Specifically, multidisciplinary innovations in medical emergency scenarios can have a significant impact on the effectiveness and quality of the procedures and practices in the delivery of medical care. Wireless body sensor networks (WBSNs) are a promising technology capable of improving the existing practices in condition assessment and care delivery for a patient in a medical emergency. This technology can also facilitate early interventions by a specialist physician during the pre-hospital period. WBSNs make these early interventions possible by establishing remote communication links with video/audio support and by providing medical information such as vital signs, electrocardiograms, etc. in real time. This survey focuses on the relevant issues needed to understand how to set up a WBSN for medical emergencies: monitoring vital signs and video transmission, energy-efficient protocols, scheduling, optimization and energy consumption in a WBSN. PMID:26007741

  10. A Wireless Sensor Network-Based Ubiquitous Paprika Growth Management System

    PubMed Central

    Hwang, Jeonghwan; Shin, Changsun; Yoe, Hyun

    2010-01-01

    Wireless Sensor Network (WSN) technology can facilitate advances in productivity, safety and human quality of life through its applications in various industries. In particular, applying WSN technology to the agricultural sector, which is labor-intensive compared to other industries and typically lacking in IT applications, adds value and can increase agricultural productivity. This study attempts to establish a ubiquitous agricultural environment and improve the productivity of farms that grow paprika by proposing a ‘Ubiquitous Paprika Greenhouse Management System’ using WSN technology. The proposed system can collect and monitor information related to the growth environment of crops outside and inside paprika greenhouses by installing WSN sensors and by monitoring images captured by CCTV cameras. In addition, the system provides a paprika greenhouse environment control facility for manual and automatic control from a distance, improves the convenience and productivity of users, and facilitates an optimized environment for growing paprika based on the growth environment data acquired by operating the system. PMID:22163543

  11. Utilization of wireless structural health monitoring as decision making tools for a condition and reliability-based assessment of railroad bridges

    NASA Astrophysics Data System (ADS)

    Flanigan, Katherine A.; Johnson, Nephi R.; Hou, Rui; Ettouney, Mohammed; Lynch, Jerome P.

    2017-04-01

    The ability to quantitatively assess the condition of railroad bridges facilitates objective evaluation of their robustness in the face of hazard events. Of particular importance is the need to assess the condition of railroad bridges in networks that are exposed to multiple hazards. Data collected from structural health monitoring (SHM) can be used to better maintain a structure by prompting preventative (rather than reactive) maintenance strategies and supplying quantitative information to aid in recovery. To that end, a wireless monitoring system is validated and installed on the Harahan Bridge, a hundred-year-old long-span railroad truss bridge that crosses the Mississippi River near Memphis, TN. This bridge is exposed to multiple hazards including scour, vehicle/barge impact, seismic activity, and aging. The instrumented sensing system targets non-redundant structural components and areas of the truss and floor system that bridge managers are most concerned about based on previous inspections and structural analysis. This paper details the monitoring system and the analytical method for the assessment of bridge condition based on automated data-driven analyses. Two primary objectives of monitoring system performance are discussed: 1) monitoring fatigue accumulation in critical tensile truss elements; and 2) monitoring the reliability index values associated with sub-system limit states of these members. Moreover, since the reliability index is a scalar indicator of component safety, quantifiable condition assessment can serve as an objective metric, allowing bridge owners to devise informed damage mitigation strategies and optimize resource management at the single-bridge or network level.

  12. Information theory-based decision support system for integrated design of multivariable hydrometric networks

    NASA Astrophysics Data System (ADS)

    Keum, Jongho; Coulibaly, Paulin

    2017-07-01

    Adequate and accurate hydrologic information from optimal hydrometric networks is an essential part of effective water resources management. Although the key hydrologic processes in the water cycle are interconnected, hydrometric networks (e.g., streamflow, precipitation, groundwater level) have been routinely designed individually. A decision support framework is proposed for integrated design of multivariable hydrometric networks. The proposed method is applied to design optimal precipitation and streamflow networks simultaneously. The epsilon-dominance hierarchical Bayesian optimization algorithm was combined with Shannon entropy of information theory to design and evaluate hydrometric networks. Specifically, the joint entropy from the combined networks was maximized to provide the most information, and the total correlation was minimized to reduce redundant information. To further optimize the efficiency between the networks, they were designed by maximizing the conditional entropy of the streamflow network given the information of the precipitation network. Compared to the traditional individual variable design approach, the integrated multivariable design method was able to determine more efficient optimal networks by avoiding the redundant stations. Additionally, four quantization cases were compared to evaluate their effects on the entropy calculations and the determination of the optimal networks. The evaluation results indicate that the quantization methods should be selected after careful consideration for each design problem since the station rankings and the optimal networks can change accordingly.
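
    The two entropy terms driving the design — joint entropy to maximize information, total correlation to penalize redundancy — can be sketched as below. The equal-width discretization and the synthetic station records are illustrative assumptions, not the paper's quantization cases or data:

```python
import numpy as np

def joint_entropy(*series, bins=5):
    """Shannon joint entropy (bits) of one or more discretized records."""
    # Equal-width binning is one simple quantization choice; the paper
    # compares several quantization cases and notes the results differ.
    codes = [np.digitize(s, np.histogram_bin_edges(s, bins)[1:-1]) for s in series]
    _, counts = np.unique(np.column_stack(codes), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def total_correlation(*series, bins=5):
    """Redundancy measure: sum of marginal entropies minus joint entropy."""
    marginals = sum(joint_entropy(s, bins=bins) for s in series)
    return marginals - joint_entropy(*series, bins=bins)

rng = np.random.default_rng(0)
a = rng.normal(size=500)
b = a + 0.1 * rng.normal(size=500)   # nearly redundant second station
c = rng.normal(size=500)             # independent third station
# A redundant station pair carries much higher total correlation
# than an independent pair, so the optimizer would avoid it.
print(total_correlation(a, b) > total_correlation(a, c))
```

    A multivariable design would then search station subsets maximizing joint entropy while minimizing total correlation across both network types.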

  13. Probabilistic Assessment of High-Throughput Wireless Sensor Networks

    PubMed Central

    Kim, Robin E.; Mechitov, Kirill; Sim, Sung-Han; Spencer, Billie F.; Song, Junho

    2016-01-01

    Structural health monitoring (SHM) using wireless smart sensors (WSS) has the potential to provide rich information on the state of a structure. However, because of their distributed nature, maintaining highly robust and reliable networks can be challenging. Assessing WSS network communication quality before and after finalizing a deployment is critical to achieving a successful WSS network for SHM purposes. Early studies on WSS network reliability mostly used temporal signal indicators, composed of a small number of packets, to assess network reliability. However, because WSS networks for SHM purposes often require high data throughput, i.e., a large number of packets delivered within the communication, such an approach is not sufficient. Instead, in this study, a model is proposed that can probabilistically assess the long-term performance of the network. The proposed model is based on readily available measured data sets that represent communication quality during high-throughput data transfer. An empirical limit-state function is then determined, which is further used to estimate the probability of network communication failure. Monte Carlo simulation is adopted in this paper and applied to a small and a full-bridge wireless network. By performing the proposed analysis on complex sensor networks, an optimized sensor topology can be achieved. PMID:27258270
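
    The Monte Carlo step can be sketched as follows. The Bernoulli packet-delivery model and every number here are placeholders standing in for the paper's empirically fitted limit-state function:

```python
import random

def failure_probability(n_trials=2000, n_packets=1000, p_delivery=0.995,
                        min_success_ratio=0.99, seed=1):
    """Monte Carlo estimate of the probability that a high-throughput
    transfer fails, i.e. that the fraction of delivered packets falls
    below a limit-state threshold. Illustrative model, not the paper's."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        # Simulate one transfer: each packet delivered independently.
        delivered = sum(rng.random() < p_delivery for _ in range(n_packets))
        if delivered / n_packets < min_success_ratio:
            failures += 1
    return failures / n_trials

print(failure_probability())
```

    In practice the per-packet model would be replaced by the measured communication-quality distribution for each link, and the estimate computed per sensor to rank candidate topologies.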

  14. Cloud Computing with Context Cameras

    NASA Astrophysics Data System (ADS)

    Pickles, A. J.; Rosing, W. E.

    2016-05-01

    We summarize methods and plans to monitor and calibrate photometric observations with our autonomous, robotic network of 2m, 1m and 40cm telescopes. These are sited globally to optimize our ability to observe time-variable sources. Wide field "context" cameras are aligned with our network telescopes and cycle every ~2 minutes through BVr'i'z' filters, spanning our optical range. We measure instantaneous zero-point offsets and transparency (throughput) against calibrators in the 5-12m range from the all-sky Tycho2 catalog, and periodically against primary standards. Similar measurements are made for all our science images, with typical fields of view of ~0.5 degrees. These are matched against Landolt, Stetson and Sloan standards, and against calibrators in the 10-17m range from the all-sky APASS catalog. Such measurements provide pretty good instantaneous flux calibration, often to better than 5%, even in cloudy conditions. Zero-point and transparency measurements can be used to characterize, monitor and inter-compare sites and equipment. When accurate calibrations of Target against Standard fields are required, monitoring measurements can be used to select truly photometric periods when accurate calibrations can be automatically scheduled and performed.

  15. 40 CFR 58.13 - Monitoring network completion.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Monitoring network completion. 58.13 Section 58.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.13 Monitoring network completion. (a...

  16. 40 CFR 58.13 - Monitoring network completion.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Monitoring network completion. 58.13 Section 58.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.13 Monitoring network completion. (a...

  17. 40 CFR 58.13 - Monitoring network completion.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 6 2013-07-01 2013-07-01 false Monitoring network completion. 58.13 Section 58.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.13 Monitoring network completion. (a...

  18. Fugitive Methane Gas Emission Monitoring in oil and gas industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, Levente

    Identifying fugitive methane leaks allows optimization of the extraction process, can extend gas extraction equipment lifetime, and eliminates hazardous work conditions. We demonstrate a wireless sensor network based on cost-effective and robust chemi-resistive methane sensors combined with real-time analytics to identify leaks from 2 scfh to 10000 scfh. The chemi-resistive sensors were validated for sensitivity better than 1 ppm for methane plume detection. The real-time chemical sensor and wind data are integrated into an inversion model to identify the location and magnitude of the methane leak. This integrated solution can be deployed in an outdoor environment for long-term monitoring of chemical plumes.

  19. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization.

    PubMed

    Liu, Qingshan; Guo, Zhishan; Wang, Jun

    2012-02-01

    In this paper, a one-layer recurrent neural network is proposed for solving pseudoconvex optimization problems subject to linear equality and bound constraints. Compared with the existing neural networks for optimization (e.g., the projection neural networks), the proposed neural network is capable of solving more general pseudoconvex optimization problems with equality and bound constraints. Moreover, it is capable of solving constrained fractional programming problems as a special case. The convergence of the state variables of the proposed neural network to achieve solution optimality is guaranteed as long as the designed parameters in the model are larger than the derived lower bounds. Numerical examples with simulation results illustrate the effectiveness and characteristics of the proposed neural network. In addition, an application for dynamic portfolio optimization is discussed. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Distributed Interplanetary Delay/Disruption Tolerant Network (DTN) Monitor and Control System

    NASA Technical Reports Server (NTRS)

    Wang, Shin-Ywan

    2012-01-01

    The main purpose of the Distributed Interplanetary Delay Tolerant Network (DTN) Monitor and Control System, a DTN network management implementation at JPL, is to provide methods and tools that can monitor DTN operation status and detect and resolve DTN operation failures in an automated fashion when either a space network or a heterogeneous network is infused with DTN capability. In this paper, "DTN Monitor and Control system in the Deep Space Network (DSN)" exemplifies how the DTN Monitor and Control system can be adapted to a space network as it is DTN enabled.

  1. Optimized Autonomous Space - In-situ Sensorweb: A new Tool for Monitoring Restless Volcanoes

    NASA Astrophysics Data System (ADS)

    Lahusen, R. G.; Kedar, S.; Song, W.; Chien, S.; Shirazi, B.; Davies, A.; Tran, D.; Pieri, D.

    2007-12-01

    An interagency team of earth scientists, space scientists and computer scientists is collaborating to develop a real-time monitoring system optimized for rapid deployment at restless volcanoes. The primary goals of this Optimized Autonomous Space In-situ Sensorweb (OASIS) are: 1) integrate complementary space and in-situ (ground-based) elements into an interactive, autonomous sensorweb; 2) advance sensorweb power and communication resource management technology; and 3) enable scalability for seamless infusion of future space and in-situ assets into the sensorweb. A prototype system will be deployed on Mount St. Helens by December 2009. Each node will include GPS, seismic, infrasonic and lightning (for ash plume detection) sensors, plus autonomous decision-making capabilities and interaction with the EO-1 multi-spectral satellite. This three-year project is jointly funded by the NASA AIST program and the USGS Volcano Hazards Program. Work began with a rigorous multi-disciplinary discussion that resulted in a system requirements document intended to guide the design of OASIS and future networks and to achieve the project's stated goals. In this presentation we will highlight the key OASIS system requirements, their rationale, and the physical and technical challenges they pose. Preliminary design decisions will also be presented.

  2. Network placement optimization for large-scale distributed system

    NASA Astrophysics Data System (ADS)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. Network placement optimization therefore represents an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search; an obvious advantage is that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
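
    A binary-encoded genetic algorithm for placement can be sketched as below. The grid of candidate tracker sites, the circular coverage model and the coverage-minus-cost fitness are all simplifying assumptions; the paper's grid-based encoding and objective function are more elaborate:

```python
import random

random.seed(42)

# Hypothetical setup: candidate tracker sites on a grid; each site covers
# cells within RADIUS. Fitness rewards coverage and penalizes tracker count.
GRID, RADIUS, COST = 10, 2.5, 3.0
SITES = [(x, y) for x in range(0, GRID, 2) for y in range(0, GRID, 2)]

def coverage(genome):
    """Number of grid cells within RADIUS of at least one active site."""
    return sum(
        any(g and (cx - x) ** 2 + (cy - y) ** 2 <= RADIUS ** 2
            for g, (x, y) in zip(genome, SITES))
        for cx in range(GRID) for cy in range(GRID)
    )

def fitness(genome):
    return coverage(genome) - COST * sum(genome)

def evolve(pop_size=40, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in SITES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(SITES))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(sum(best), coverage(best))
```

    The same loop structure carries over to a real placement problem once `fitness` incorporates measured coverage and coordinate-uncertainty models.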

  3. A graph decomposition-based approach for water distribution network optimization

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.; Deuerlein, Jochen W.

    2013-04-01

    A novel optimization approach for water distribution network design is proposed in this paper. Using graph theory algorithms, a full water network is first decomposed into different subnetworks based on the connectivity of the network's components. The original whole network is simplified to a directed augmented tree, in which the subnetworks are substituted by augmented nodes and directed links are created to connect them. Differential evolution (DE) is then employed to optimize each subnetwork based on the sequence specified by the assigned directed links in the augmented tree. Rather than optimizing the original network as a whole, the subnetworks are sequentially optimized by the DE algorithm. A solution choice table is established for each subnetwork (except for the subnetwork that includes a supply node) and the optimal solution of the original whole network is finally obtained by use of the solution choice tables. Furthermore, a preconditioning algorithm is applied to the subnetworks to produce an approximately optimal solution for the original whole network. This solution specifies promising regions for the final optimization algorithm to further optimize the subnetworks. Five water network case studies are used to demonstrate the effectiveness of the proposed optimization method. A standard DE algorithm (SDE) and a genetic algorithm (GA) are applied to each case study without network decomposition to enable a comparison with the proposed method. The results show that the proposed method consistently outperforms the SDE and GA (both with tuned parameters) in terms of both the solution quality and efficiency.
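
    The DE optimizer applied to each subnetwork follows the standard DE/rand/1/bin scheme, which can be sketched as follows; the toy quadratic cost function is a stand-in for a subnetwork's pipe-sizing cost, not the paper's model:

```python
import random

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=7):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    binomial crossover, greedy selection. Returns (best vector, cost)."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Three distinct vectors, none of them the target.
            a, b, c = rng.sample([x for j, x in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated component
            trial = [
                min(max(a[d] + F * (b[d] - c[d]), bounds[d][0]), bounds[d][1])
                if (rng.random() < CR or d == j_rand) else pop[i][d]
                for d in range(dim)
            ]
            if (s := f(trial)) <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=lambda i: scores[i])
    return pop[best], scores[best]

# Toy cost surface with minimum at (1, 1, 1, 1).
x, cost = differential_evolution(lambda v: sum((vi - 1.0) ** 2 for vi in v),
                                 bounds=[(-5, 5)] * 4)
print(round(cost, 6))
```

    In the decomposition approach, one such run per subnetwork replaces a single run over the whole network, shrinking each search space.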

  4. Spatial prediction of water quality variables along a main river channel, in presence of pollution hotspots.

    PubMed

    Rizo-Decelis, L D; Pardo-Igúzquiza, E; Andreo, B

    2017-12-15

    In order to treat and evaluate the available water quality data and fully exploit monitoring results (e.g. characterize regional patterns, optimize monitoring networks, infer conditions at unmonitored locations, etc.), it is crucial to develop improved and efficient methodologies. Accordingly, estimation of water quality along fluvial ecosystems is a frequent task in environmental studies. In this work, a particular case of this problem is examined, namely, the estimation of water quality along the main stem of a large basin (where most anthropic activity takes place) from observational data measured along this river channel. We adapted topological kriging to this case, where each watershed contains all the watersheds of the upstream observed data ("nested support effect"). The data analysis was additionally extended by taking into account the upstream distance to the closest contamination hotspot as an external drift. We propose choosing the best estimation method by cross-validation. This methodological approach to spatial variability modeling may be used for optimizing the water quality monitoring of a given watercourse. The methodology presented is applied to 28 water quality variables measured along the Santiago River in Western Mexico. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Optimization of wireless Bluetooth sensor systems.

    PubMed

    Lonnblad, J; Castano, J; Ekstrom, M; Linden, M; Backlund, Y

    2004-01-01

    Within this study, three different Bluetooth sensor systems, replacing cables for the transmission of biomedical sensor data, were designed and evaluated. The three sensor architectures are built on 1-, 2- and 3-chip solutions; depending on the monitoring situation and signal character, different solutions are optimal. Essential parameters for all systems have been low physical weight and small size, resistance to interference, and interoperability with other technologies such as global or local networks, PCs and mobile phones. Two different biomedical input signals, ECG and PPG (photoplethysmography), were used to evaluate the three solutions. The study shows that it is possible to continuously transmit an analogue signal. At low sampling rates and for slowly varying parameters, such as monitoring the heart rate with PPG, the 1-chip solution is the most suitable, offering low power consumption and thus a longer battery lifetime or a smaller battery, minimizing the weight of the sensor system. On the other hand, when a higher sampling rate is required, as for an ECG, the 3-chip architecture, with an FPGA or micro-controller, offers the best solution and performance. Our conclusion is that Bluetooth might be useful in replacing the cables of medical monitoring systems.

  6. Optimizing Dynamical Network Structure for Pinning Control

    NASA Astrophysics Data System (ADS)

    Orouskhani, Yasin; Jalili, Mahdi; Yu, Xinghuo

    2016-04-01

    Controlling dynamics of a network from any initial state to a final desired state has many applications in different disciplines from engineering to biology and social sciences. In this work, we optimize the network structure for pinning control. The problem is formulated as four optimization tasks: i) optimizing the locations of driver nodes, ii) optimizing the feedback gains, iii) optimizing simultaneously the locations of driver nodes and feedback gains, and iv) optimizing the connection weights. A newly developed population-based optimization technique (cat swarm optimization) is used as the optimization method. In order to verify the methods, we use both real-world networks, and model scale-free and small-world networks. Extensive simulation results show that the optimal placement of driver nodes significantly outperforms heuristic methods including placing drivers based on various centrality measures (degree, betweenness, closeness and clustering coefficient). The pinning controllability is further improved by optimizing the feedback gains. We also show that one can significantly improve the controllability by optimizing the connection weights.

  7. Wireless technologies for the monitoring of strategic civil infrastructures: an ambient vibration test of the Fatih Bridge, Istanbul, Turkey

    NASA Astrophysics Data System (ADS)

    Picozzi, M.; Milkereit, C.; Zulfikar, C.; Ditommaso, R.; Erdik, M.; Safak, E.; Fleming, K.; Ozel, O.; Zschau, J.; Apaydin, N.

    2008-12-01

    The monitoring of strategic civil infrastructures to ensure their structural integrity is a task of major importance, especially in earthquake-prone areas. Classical approaches to such monitoring are based on visual inspections and the use of wired systems. While the former has the drawback that the structure is examined only superficially and discontinuously in time, wired systems are relatively expensive and time-consuming to install. Today, however, wireless systems represent an advanced, easily installed and operated tool for monitoring purposes, resulting in a wide and interesting range of possible applications. Within the framework of the earthquake early warning projects SAFER (Seismic eArly warning For EuRope) and EDIM (Earthquake Disaster Information systems for the Marmara Sea region, Turkey), new low-cost wireless sensors with the capability to automatically rearrange their communications scheme are being developed. The reduced sensitivity of these sensors, arising from the use of low-cost components, is compensated by the possibility of deploying high-density self-organizing networks performing real-time data acquisition and analysis. Thanks to the developed system's versatility, it has been possible to perform an experimental ambient vibration test with a network of 24 sensors on the Fatih Sultan Mehmet Bridge, Istanbul (Turkey), a gravity-anchored suspension bridge spanning the Bosphorus Strait with a distance of 1090 m between its towers. Preliminary analysis of the data has demonstrated that the main modal properties of the bridge can be retrieved, and may therefore be regularly re-evaluated as part of a long-term monitoring program. Using a multi-hop communications technique, data could be exchanged among groups of sensors over distances of a few hundred meters.
Thus, the test showed that, although more work is required to optimize the communication parameters, the performance of the network offers encouragement for us to follow this research direction in developing wireless systems for the monitoring of civil infrastructures.

  8. A network monitor for HTTPS protocol based on proxy

    NASA Astrophysics Data System (ADS)

    Liu, Yangxin; Zhang, Lingcui; Zhou, Shuguang; Li, Fenghua

    2016-10-01

    With the explosive growth of harmful Internet information such as pornography, violence, and hate messages, network monitoring is essential. Traditional network monitors are based mainly on bypass monitoring. However, network traffic cannot be filtered using bypass monitoring. Meanwhile, only a few studies focus on network monitoring for the HTTPS protocol, because HTTPS data travels in encrypted traffic, which makes it difficult to monitor. This paper proposes a network monitor for the HTTPS protocol based on a proxy. We adopt OpenSSL to establish TLS secure tunnels between clients and servers. Epoll is used to handle a large number of concurrent client connections. We also adopt the Knuth-Morris-Pratt string searching algorithm (KMP algorithm) to speed up the search process. Besides, we modify request packets to reduce the risk of errors and modify response packets to improve security. Experiments show that our proxy can monitor the content of all tested HTTPS websites efficiently with little loss of network performance.
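
    The KMP algorithm used to scan decrypted payloads can be sketched as follows (shown in Python for brevity; a proxy like the one described would implement it in its own language over byte buffers):

```python
def kmp_search(text, pattern):
    """Return start indices of all occurrences of pattern in text (KMP)."""
    if not pattern:
        return []
    # Failure function: fail[i] = length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text without ever backing up in it: O(len(text) + len(pattern)).
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]  # allow overlapping matches
    return hits

print(kmp_search("abcabcabd", "abcabd"))  # [3]
```

    The linear-time, no-backtracking property is what makes KMP attractive for filtering a high-volume traffic stream against a keyword list.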

  9. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.

  10. Tabu Search enhances network robustness under targeted attacks

    NASA Astrophysics Data System (ADS)

    Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi

    2016-03-01

    We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
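
    The degree-preserving link-rewiring move that Tabu Search explores can be sketched as below; here plain random double-edge swaps illustrate the move set only, without the tabu list or the robustness objective that guide the actual search:

```python
import random

def rewire_preserving_degrees(edges, swaps=100, seed=3):
    """Randomly apply double-edge swaps (u,v),(x,y) -> (u,y),(x,v),
    keeping every node's degree unchanged. This is the neighborhood move
    a Tabu Search would score against a robustness objective."""
    rng = random.Random(seed)
    edges = {frozenset(e) for e in edges}
    for _ in range(swaps):
        pair = rng.sample(sorted(edges, key=sorted), 2)
        (u, v), (x, y) = (tuple(e) for e in pair)
        if len({u, v, x, y}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        new1, new2 = frozenset((u, y)), frozenset((x, v))
        if new1 in edges or new2 in edges:
            continue  # would create a parallel edge
        edges -= {frozenset((u, v)), frozenset((x, y))}
        edges |= {new1, new2}
    return [tuple(sorted(e)) for e in edges]

def degrees(edges):
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

ring = [(i, (i + 1) % 8) for i in range(8)]
rewired = rewire_preserving_degrees(ring)
print(degrees(rewired) == degrees(ring))  # True
```

    A full Tabu Search would accept or reject each swap based on the change in the attack-robustness measure, while a tabu list prevents immediately undoing recent moves.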

  11. Toward the application of ecological concepts in EU coastal water management.

    PubMed

    de Jonge, Victor N

    2007-01-01

    The EU Water Framework Directive demands the protection of the functioning and the structure of our aquatic ecosystems. The defined means to realize this goal are: (1) optimization of the habitat-providing conditions and (2) optimization of the water quality. The effects of these measures on the structure and functioning of the aquatic ecosystems then have to be assessed and judged. The available tool to do this is 'monitoring'. The present monitoring activities in The Netherlands cover target monitoring and trend monitoring, which is insufficient to meet the requirements of the EU. Given the EU demands, the ongoing budget reductions in The Netherlands, and an increasing flow of unused new ecological concepts and theories (e.g. new theoretical insights related to resource competition theory, the intermediate disturbance hypothesis, and tools to judge system quality such as ecological network analysis), it is suggested that the present monitoring tasks be divided between governmental services (final responsibility for the program and logistic support) and academia (data analysis, data interpretation, and development of concepts suitable for ecosystem modelling and of tools to judge the quality of our ecosystems). This will lead to intensified co-operation between both arenas and consequently to an increased exchange of knowledge and ideas. Suggestions are made to extend the Dutch monitoring with surveillance monitoring and to change the focus from 'station oriented' to 'area oriented' without changing the operational aspects and their costs. The extended data sets will allow proper calibration and validation of dynamic ecosystem models, which is not possible now. The described cost-effective change in environmental monitoring will also let biological and ecological theories play the pivotal role they should play in future integrated environmental management.

  12. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks.

    PubMed

    Ma, Junjie; Meng, Fansheng; Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-02-16

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths.
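    The PSO update used in Dual-PSO, with velocities pulled toward each particle's personal best and the swarm's global best, can be sketched generically (this is plain single-swarm PSO with illustrative parameter values, not the paper's Dual-PSO; in the paper one such swarm fits spectra on each node while another treats the mobile nodes themselves as particles):

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box via standard particle swarm optimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    A source-localization surrogate would replace f with a misfit between measured and predicted concentrations at candidate source positions.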

  13. Distributed Water Pollution Source Localization with Mobile UV-Visible Spectrometer Probes in Wireless Sensor Networks

    PubMed Central

    Zhou, Yuexi; Wang, Yeyao; Shi, Ping

    2018-01-01

    Pollution accidents that occur in surface waters, especially in drinking water source areas, greatly threaten the urban water supply system. During water pollution source localization, there are complicated pollutant spreading conditions and pollutant concentrations vary in a wide range. This paper provides a scalable total solution, investigating a distributed localization method in wireless sensor networks equipped with mobile ultraviolet-visible (UV-visible) spectrometer probes. A wireless sensor network is defined for water quality monitoring, where unmanned surface vehicles and buoys serve as mobile and stationary nodes, respectively. Both types of nodes carry UV-visible spectrometer probes to acquire in-situ multiple water quality parameter measurements, in which a self-adaptive optical path mechanism is designed to flexibly adjust the measurement range. A novel distributed algorithm, called Dual-PSO, is proposed to search for the water pollution source, where one particle swarm optimization (PSO) procedure computes the water quality multi-parameter measurements on each node, utilizing UV-visible absorption spectra, and another one finds the global solution of the pollution source position, regarding mobile nodes as particles. Besides, this algorithm uses entropy to dynamically recognize the most sensitive parameter during searching. Experimental results demonstrate that online multi-parameter monitoring of a drinking water source area with a wide dynamic range is achieved by this wireless sensor network and water pollution sources are localized efficiently with low-cost mobile node paths. PMID:29462929

  14. Detection of Intermediate-Period Transiting Planets with a Network of Small Telescopes: transitsearch.org

    NASA Astrophysics Data System (ADS)

    Seagroves, Scott; Harker, Justin; Laughlin, Gregory; Lacy, Justin; Castellano, Tim

    2003-12-01

    We describe a project (transitsearch.org) currently attempting to discover transiting intermediate-period planets orbiting bright parent stars, and we simulate that project's performance. The discovery of such a transit would be an important astronomical advance, bridging the critical gap in understanding between HD 209458b and Jupiter. However, the task is made difficult by intrinsically low transit probabilities and small transit duty cycles. This project's efficient and economical strategy is to photometrically monitor stars that are known (from radial velocity surveys) to bear planets, using a network of widely spaced observers with small telescopes. These observers, each individually capable of precision (1%) differential photometry, monitor candidates during the time windows in which the radial velocity solution predicts a transit if the orbital inclination is close to 90°. We use Monte Carlo techniques to simulate the performance of this network, performing simulations with different configurations of observers in order to optimize coordination of an actual campaign. Our results indicate that transitsearch.org can reliably rule out or detect planetary transits within the current catalog of known planet-bearing stars. A distributed network of skilled amateur astronomers and small college observatories is a cost-effective method for discovering the small number of transiting planets with periods in the range 10 days
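    The observing windows the network targets follow directly from the radial-velocity ephemeris: transits are predicted near t0 + kP, with windows widened by the propagated timing uncertainty. A hedged sketch (the function name and the simple uncertainty handling are illustrative, not the project's actual code):

```python
import math

def transit_windows(t0, period, duration, t_start, t_end,
                    sigma_t0=0.0, sigma_period=0.0):
    """Predicted transit windows (all in the same time units) between
    t_start and t_end, centered on t0 + k*period and widened by the
    propagated ephemeris uncertainty."""
    windows = []
    k = math.ceil((t_start - t0) / period)
    while t0 + k * period <= t_end:
        center = t0 + k * period
        half_width = duration / 2 + sigma_t0 + abs(k) * sigma_period
        windows.append((center - half_width, center + half_width))
        k += 1
    return windows
```

    Note how the window grows linearly with the epoch number k: the longer since the last radial-velocity fit, the longer observers must monitor around each predicted transit.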

  15. Optimizing Nutrient Uptake in Biological Transport Networks

    NASA Astrophysics Data System (ADS)

    Ronellenfitsch, Henrik; Katifori, Eleni

    2013-03-01

    Many biological systems employ complex networks of vascular tubes to facilitate the transport of solute nutrients; examples include the vascular system of plants (phloem), some fungi, and the slime mold Physarum. It is believed that such networks are optimized through evolution for carrying out their designated task. We propose a set of hydrodynamic governing equations for solute transport in a complex network and obtain the optimal network architecture for various classes of optimizing functionals. We finally discuss the topological properties and statistical mechanics of the resulting complex networks and examine the correspondence of the obtained networks to those found in actual biological systems.

  16. Toward controlling perturbations in robotic sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Ashis G.; Majumder, Saikat R.

    2014-06-01

    Robotic sensor networks (RSNs), which consist of networks of sensors placed on mobile robots, are increasingly used for environment monitoring applications. In particular, a lot of work has been done on simultaneous localization and mapping of the robots, and on optimal sensor placement for environment state estimation [1]. The deployment of RSNs, however, remains challenging in harsh environments, where the RSNs have to deal with significant perturbations in the form of wind gusts, turbulent water flows, sand storms, or blizzards that disrupt inter-robot communication and individual robot stability. Hence, there is a need to be able to control such perturbations and bring the networks to desirable states with stable nodes (robots) and minimal loss of operational performance (environment sensing). Recent work has demonstrated the feasibility of controlling the non-linear dynamics of other communication networks, such as emergency management systems and power grids, by introducing compensatory perturbations to restore network stability and operation [2]. In this paper, we develop a computational framework to investigate the usefulness of this approach for RSNs in marine environments. Preliminary analysis shows promising performance and identifies bounds on the original perturbations within which it is possible to control the networks.

  17. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed-forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed in [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated on the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive-critic-based systematic approach holds promise for obtaining optimal control with control and state constraints.

  18. GEO Carbon and GHG Initiative Task 3: Optimizing in-situ measurements of essential carbon cycle variables across observational networks

    NASA Astrophysics Data System (ADS)

    Durden, D.; Muraoka, H.; Scholes, R. J.; Kim, D. G.; Loescher, H. W.; Bombelli, A.

    2017-12-01

    The development of an integrated global carbon cycle observation system to monitor changes in the carbon cycle, and ultimately the climate system, across the globe is of crucial importance in the 21st century. This system should comprise space- and ground-based observations, in concert with modelling and analysis, to produce more robust budgets of carbon and other greenhouse gases (GHGs). A global initiative, the GEO Carbon and GHG Initiative, is working within the framework of the Group on Earth Observations (GEO) to promote interoperability and provide integration across different parts of the system, particularly at domain interfaces, thus optimizing the efforts of existing networks and initiatives to reduce uncertainties in budgets of carbon and other GHGs. This is a very ambitious undertaking; the initiative is therefore separated into tasks to provide actionable objectives. Task 3 focuses on the optimization of in-situ observational networks. Its main objective is to develop and implement a procedure for enhancing and refining the observation system for identified essential carbon cycle variables (ECVs) so that it meets user-defined specifications at minimum total cost. This work outlines the implementation plan, which includes a review of essential carbon cycle variables and observation technologies, mapping of ECV performance, and analysis of gaps and opportunities in order to design an improved observing system. As a subsequent step to landscape mapping of the existing observational networks, a gap analysis of in-situ observations is described; it will begin in the terrestrial domain, to address missing coordination and large spatial gaps, and later extend to ocean and atmospheric observations.

  19. Geodetic Volcano Monitoring Research in Canary Islands: Recent Results

    NASA Astrophysics Data System (ADS)

    Fernandez, J.; Gonzalez, P. J.; Arjona, A.; Camacho, A. G.; Prieto, J. F.; Seco, A.; Tizzani, P.; Manzo, M. R.; Lanari, R.; Blanco, P.; Mallorqui, J. J.

    2009-05-01

    The Canarian Archipelago is an oceanic island volcanic chain with a long-standing history of volcanic activity (> 40 Ma). It is located off the NW coast of the African continent, lying over transitional crust of the Atlantic African passive margin. At least 12 eruptions have occurred on the islands of Lanzarote, Tenerife and La Palma in the last 500 years. Volcanism manifests predominantly as basaltic strombolian monogenetic activity (whole archipelago) and central felsic volcanism (active only on Tenerife Island). We concentrate our studies on the two most active islands, Tenerife and La Palma, where we tested different methodologies for geodetic monitoring, using a combination of ground- and space-based techniques. At Tenerife Island, a differential interferometric (DInSAR) study was performed to detect areas of deformation. DInSAR detected two clear areas of deformation; using these results, a survey-based GPS network was designed and optimized to control those deformations and the rest of the island. Finally, using SBAS DInSAR results, weak spatial long-wavelength subsidence signals have been detected. At La Palma, the first DInSAR analyses did not show any clear deformation, so a first time-series analysis was performed, detecting a clear subsidence signal at Teneguia volcano; as for Tenerife, a GPS network was designed and optimized taking into account stable and deforming areas. After several years of activities, the geodetic results served to study ground deformations caused by a wide variety of sources, such as changes in groundwater levels, volcanic activity, volcano-tectonics, gravitational loading, etc. These results prove that a combination of ground-based and space-based techniques is a suitable tool for geodetic volcano monitoring in the Canary Islands. Finally, we would like to stress that these results could have serious implications for the design and implementation of the continuous geodetic monitoring system for the Canary Islands, which is currently under development.

  20. Energy-Efficient Control with Harvesting Predictions for Solar-Powered Wireless Sensor Networks

    PubMed Central

    Zou, Tengyue; Lin, Shouying; Feng, Qijie; Chen, Yanlian

    2016-01-01

    Wireless sensor networks equipped with rechargeable batteries are useful for outdoor environmental monitoring. However, the severe energy constraints of the sensor nodes present major challenges for long-term applications. To achieve sustainability, solar cells can be used to acquire energy from the environment. Unfortunately, the energy supplied by the harvesting system is generally intermittent and considerably influenced by the weather. To improve the energy efficiency and extend the lifetime of the networks, we propose algorithms for harvested energy prediction using environmental shadow detection. Thus, the sensor nodes can adjust their scheduling plans accordingly to best suit their energy production and residual battery levels. Furthermore, we introduce clustering and routing selection methods to optimize the data transmission, and a Bayesian network is used for warning notifications of bottlenecks along the path. The entire system is implemented on a real-time Texas Instruments CC2530 embedded platform, and the experimental results indicate that these mechanisms sustain the networks’ activities in an uninterrupted and efficient manner. PMID:26742042
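    The scheduling adjustment the nodes perform can be illustrated with a toy energy-budget rule: spend no more than the predicted harvest plus whatever battery margin sits above a safety reserve. All names, costs and the rule itself below are hypothetical illustrations, not taken from the paper:

```python
def choose_duty_cycle(battery_j, predicted_harvest_j, horizon_s,
                      sense_cost_j=0.5, reserve_j=50.0):
    """Pick a sensing interval (seconds) so that expected consumption over
    the horizon stays within the predicted harvest plus the battery margin
    above a safety reserve. Parameter values are illustrative."""
    budget_j = max(battery_j - reserve_j, 0.0) + predicted_harvest_j
    if budget_j <= 0.0:
        return None  # not enough energy: sleep through this horizon
    max_samples = budget_j / sense_cost_j
    # spread the affordable samples evenly, but never faster than once a second
    return max(horizon_s / max_samples, 1.0)
```

    A node with 100 J stored and 25 J of predicted harvest over an hour would sample roughly every 24 s, while a node below the reserve would sleep until conditions improve.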

  1. Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions.

    PubMed

    Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi

    2016-12-24

    It is important to monitor compound events via barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem: unlike traditional coverage problems, its data comes from different types of sensors, and it is subject to multiple constraints under the complex conditions of real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate the sensor resources reasonably in compound event barrier coverage. The proposed algorithm simplifies complex problems, reducing the computational load of the network and improving network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources.

  2. Compound Event Barrier Coverage in Wireless Sensor Networks under Multi-Constraint Conditions

    PubMed Central

    Zhuang, Yaoming; Wu, Chengdong; Zhang, Yunzhou; Jia, Zixi

    2016-01-01

    It is important to monitor compound events via barrier coverage in wireless sensor networks (WSNs). Compound event barrier coverage (CEBC) is a novel coverage problem: unlike traditional coverage problems, its data comes from different types of sensors, and it is subject to multiple constraints under the complex conditions of real-world applications. The main objective of this paper is to design an efficient algorithm for complex conditions that can combine the compound event confidence. Moreover, a multiplier method based on an active-set strategy (ASMP) is proposed to optimize the multiple constraints in compound event barrier coverage. The algorithm can calculate the coverage ratio efficiently and allocate the sensor resources reasonably in compound event barrier coverage. The proposed algorithm simplifies complex problems, reducing the computational load of the network and improving network efficiency. The simulation results demonstrate that the proposed algorithm is more effective and efficient than existing methods, especially in the allocation of sensor resources. PMID:28029118

  3. WaterNet:The NASA Water Cycle Solutions Network

    NASA Astrophysics Data System (ADS)

    Belvedere, D. R.; Houser, P. R.; Pozzi, W.; Imam, B.; Schiffer, R.; Schlosser, C. A.; Gupta, H.; Martinez, G.; Lopez, V.; Vorosmarty, C.; Fekete, B.; Matthews, D.; Lawford, R.; Welty, C.; Seck, A.

    2008-12-01

    Water is essential to life; it directly impacts and constrains society's welfare, progress, and sustainable growth, and it is continuously being transformed by climate change, erosion, pollution, and engineering. Projections of the effects of such factors will remain speculative until more effective global prediction systems and applications are implemented. NASA's unique role is to use its view from space to improve water and energy cycle monitoring and prediction, and it has taken steps to collaborate and improve interoperability with existing networks and nodes of research organizations, operational agencies, science communities, and private industry. WaterNet is a Solutions Network devoted to the identification and recommendation of candidate solutions that propose ways in which water-cycle-related NASA research results can be skillfully applied by partner agencies, international organizations, and state and local governments. It is designed to improve and optimize the sustained ability of water cycle researchers, stakeholders, organizations and networks to interact, identify, harness, and extend NASA research results to augment Decision Support Tools that address national needs.

  4. Performance improvement of optical CDMA networks with stochastic artificial bee colony optimization technique

    NASA Astrophysics Data System (ADS)

    Panda, Satyasen

    2018-05-01

    This paper proposes a modified artificial bee colony (ABC) optimization algorithm based on Lévy-flight swarm intelligence, referred to as artificial bee colony Lévy flight stochastic walk (ABC-LFSW) optimization, for optical code division multiple access (OCDMA) networks. The ABC-LFSW algorithm is used to solve the asset assignment problem based on signal-to-noise ratio (SNR) optimization in OCDMA networks with quality of service constraints. The proposed optimization using the ABC-LFSW algorithm provides methods for minimizing various noises and interferences, regulating the transmitted power, and optimizing the network design to improve the power efficiency of the optical code path (OCP) from source node to destination node. In this regard, an optical system model is proposed for improving the network performance with optimized input parameters. A detailed discussion and simulation results based on transmitted power allocation and the power efficiency of OCPs are included. The experimental results demonstrate the superiority of the proposed network in terms of power efficiency and spectral efficiency in comparison to networks without any power allocation approach.
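    The Lévy-flight ingredient can be sketched with Mantegna's algorithm, a standard way to draw heavy-tailed step lengths for Lévy-flight search (a generic sketch of the step generator only, not the paper's ABC-LFSW implementation):

```python
import math
import random

def levy_step(beta=1.5, rng=None):
    """Draw one Levy-flight step length via Mantegna's algorithm.
    beta in (1, 2] controls how heavy the tail is."""
    rng = rng or random.Random()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

    In a Lévy-flight variant of ABC, employed bees would perturb candidate solutions by such steps, mixing many small local moves with occasional long jumps that escape local optima.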

  5. Research on robust optimization of emergency logistics network considering the time dependence characteristic

    NASA Astrophysics Data System (ADS)

    WANG, Qingrong; ZHU, Changfeng; LI, Ying; ZHANG, Zhengkun

    2017-06-01

    Considering the time dependence of the emergency logistics network and the complexity of the environment in which the network exists, this paper combines time-dependent network optimization theory with robust discrete optimization theory and builds a robust emergency logistics dynamic network optimization model that maximizes the timeliness of emergency logistics. On this basis, considering the complexity of the dynamic network and the time dependence of the edge weights, an improved ant colony algorithm is proposed to couple the optimization algorithm with the network's time dependence and robustness. Finally, a case study is carried out to verify the validity of the robust optimization model and its algorithm, and the effect of different values of the regulation factor is analyzed, given the importance of the control factor in solving for the optimal path. The results show that the model and algorithm have good timeliness and strong robustness.
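    The defining complication of a time-dependent network is that an edge's traversal time depends on the departure time. Under the common FIFO assumption (leaving later never lets you arrive earlier), a label-setting search still finds fastest paths; a minimal sketch follows (plain time-dependent Dijkstra, shown only to illustrate the time dependence, not the paper's improved ant colony algorithm):

```python
import heapq

def td_fastest_path(graph, src, dst, t0):
    """Fastest travel time from src to dst departing at t0.
    graph[u] = [(v, tau), ...] where tau(t) is the traversal time of
    edge (u, v) when departing at time t (assumed FIFO)."""
    earliest = {src: t0}
    pq = [(t0, src)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == dst:
            return t - t0
        if t > earliest.get(u, float("inf")):
            continue  # stale queue entry
        for v, tau in graph.get(u, []):
            arrive = t + tau(t)
            if arrive < earliest.get(v, float("inf")):
                earliest[v] = arrive
                heapq.heappush(pq, (arrive, v))
    return float("inf")
```

    Because edge costs are functions of time, the best route can change with the departure time, which is exactly the effect a robust emergency-logistics model has to account for.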

  6. Optimizing Coverage of Three-Dimensional Wireless Sensor Networks by Means of Photon Mapping

    DTIC Science & Technology

    2013-12-01

    "…information about the monitored space is sensed?" Solving this formulation of the AGP relies upon the creation of a model describing how a set of simulated photons will propagate in a 3D virtual environment. Furthermore, the photon model requires an efficient data structure with small memory

  7. A conceptual ground-water-quality monitoring network for San Fernando Valley, California

    USGS Publications Warehouse

    Setmire, J.G.

    1985-01-01

    A conceptual groundwater-quality monitoring network was developed for San Fernando Valley to provide the California State Water Resources Control Board with an integrated, basinwide control system to monitor the quality of groundwater. The geology, occurrence and movement of groundwater, land use, background water quality, and potential sources of pollution were described and then considered in designing the conceptual monitoring network. The network was designed to monitor major known and potential point and nonpoint sources of groundwater contamination over time. The network is composed of 291 sites where wells are needed to define the groundwater quality. The ideal network includes four specific-purpose networks to monitor (1) ambient water quality, (2) nonpoint sources of pollution, (3) point sources of pollution, and (4) line sources of pollution. (USGS)

  8. Network-optimized congestion pricing : a parable, model and algorithm

    DOT National Transportation Integrated Search

    1995-05-31

    This paper recites a parable, formulates a model and devises an algorithm for optimizing tolls on a road network. Such tolls induce an equilibrium traffic flow that is at once system-optimal and user-optimal. The parable introduces the network-wide c...

  9. Monitoring of patients treated with lithium for bipolar disorder: an international survey.

    PubMed

    Nederlof, M; Heerdink, E R; Egberts, A C G; Wilting, I; Stoker, L J; Hoekstra, R; Kupka, R W

    2018-04-14

    Adequate monitoring of patients using lithium is needed for optimal dosing and for early identification of patients with (potential) adverse drug events (ADEs). The objective was to assess internationally how health care professionals monitor patients treated with lithium for bipolar disorder. Using the networks of various professional organizations, an anonymous online survey was conducted among health care professionals prescribing lithium. Target lithium serum levels and the frequency of monitoring were assessed, together with the monitoring of physical and laboratory parameters. Reasons to monitor and not to monitor, the use of guidelines and institutional protocols, and local monitoring systems were investigated. The survey was completed by 117 health care professionals, incorporating responses from 24 countries. All prescribers reported monitoring lithium serum levels on a regular basis, with varying target ranges. Almost all (> 97%) monitored thyroid and renal function before the start of and during maintenance treatment. Reported monitoring of other laboratory and physical parameters was variable. The majority of respondents (74%) used guidelines or institutional protocols for monitoring. In general, the prescriber was responsible for monitoring and had to request every monitoring parameter separately, and only a minority of patients were automatically invited. Lithium serum levels and renal and thyroid function were monitored by (almost) all physicians. However, there was considerable variation in other monitoring parameters. Our results help to understand why prescribers of lithium monitor patients and what their main reasons are not to monitor patients using lithium.

  10. Saltwater intrusion monitoring in Florida

    USGS Publications Warehouse

    Prinos, Scott T.

    2016-01-01

    Florida's communities are largely dependent on freshwater from groundwater aquifers. Existing saltwater in the aquifers, or seawater that intrudes parts of the aquifers that were fresh, can make the water unusable without additional processing. The quality of Florida's saltwater intrusion monitoring networks varies. In Miami-Dade and Broward Counties, for example, there is a well-designed network with recently constructed short open-interval monitoring wells that bracket the saltwater interface in the Biscayne aquifer. Geochemical analyses of water samples from the network help scientists evaluate pathways of saltwater intrusion and movement of the saltwater interface. Geophysical measurements, collected in these counties, aid the mapping of the saltwater interface and the design of monitoring networks. In comparison, deficiencies in the Collier County monitoring network include the positioning of monitoring wells, reliance on wells with long open intervals that when sampled might provide questionable results, and the inability of existing analyses to differentiate between multiple pathways of saltwater intrusion. A state-wide saltwater intrusion monitoring network is being planned; the planned network could improve saltwater intrusion monitoring by adopting the applicable strategies of the networks of Miami-Dade and Broward Counties, and by addressing deficiencies such as those described for the Collier County network.

  11. Optimal percolation on multiplex networks.

    PubMed

    Osat, Saeed; Faqeeh, Ali; Radicchi, Filippo

    2017-11-16

    Optimal percolation is the problem of finding the minimal set of nodes whose removal from a network fragments the system into non-extensive disconnected clusters. The solution to this problem is important for strategies of immunization in disease spreading, and influence maximization in opinion dynamics. Optimal percolation has received considerable attention in the context of isolated networks. However, its generalization to multiplex networks has not yet been considered. Here we show that approximating the solution of the optimal percolation problem on a multiplex network with solutions valid for single-layer networks extracted from the multiplex may have serious consequences in the characterization of the true robustness of the system. We reach this conclusion by extending many of the methods for finding approximate solutions of the optimal percolation problem from single-layer to multiplex networks, and performing a systematic analysis on synthetic and real-world multiplex networks.
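    A simple baseline against which optimal-percolation heuristics are judged is degree-based removal: delete the highest-degree nodes and track the size of the largest connected cluster. A minimal single-layer sketch (adjacency-dict graph; the paper's multiplex methods are considerably more elaborate):

```python
def giant_component_size(adj, removed):
    """Size of the largest connected component after removing nodes."""
    seen = set()
    best = 0
    for start in adj:
        if start in removed or start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            u = stack.pop()
            comp += 1
            for v in adj[u]:
                if v not in removed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        best = max(best, comp)
    return best

def greedy_degree_attack(adj, k):
    """Remove the k highest-degree nodes (a baseline, not optimal percolation)."""
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    return set(order[:k])
```

    Optimal-percolation methods aim to fragment the giant component with fewer removals than this greedy baseline; on a multiplex network the component computation must additionally respect the layer-coupling of the nodes.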

  12. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving it. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region, which leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  13. Mitigation of the consequence of seismically induced damage on a utility water network by means of next generation SCADA

    NASA Astrophysics Data System (ADS)

    Robertson, Jamie; Shinozuka, Masanobu; Wu, Felix

    2011-04-01

    When a lifeline system such as a water delivery network is damaged by a severe earthquake, it is critical to identify the location and extent of the damage in real time in order to minimize the potentially disastrous consequences such damage could otherwise entail. This paper demonstrates how the degree of such minimization can be estimated qualitatively, using the water delivery system of the Irvine Ranch Water District (IRWD) as a testbed, when it is subjected to the magnitude 6.6 San Joaquin Hills earthquake. In this demonstration, we consider the two cases in which the IRWD system is and is not equipped with a next-generation SCADA, which consists of a network of densely deployed and optimally located MEMS acceleration sensors. These sensors are capable of identifying the location and extent of the damage as well as transmitting the data to the SCADA center for monitoring and control.

  14. On the value of information for Industry 4.0

    NASA Astrophysics Data System (ADS)

    Omenzetter, Piotr

    2018-03-01

    Industry 4.0, or the fourth industrial revolution, which blurs the boundaries between the physical and the digital, is underpinned by vast amounts of data collected by sensors that monitor processes and components of smart factories, which continuously communicate with one another and with network hubs via the internet of things. Yet collection of those vast amounts of data, which are inherently imperfect and burdened with uncertainty and noise, entails costs including hardware and software, data storage, processing, interpretation and integration into the decision-making process, to name just a few of the main expenditures. This paper discusses a framework for rationalizing the adoption of (big) data collection for Industry 4.0. Pre-posterior Bayesian decision analysis is used to that end, and the evolution of an industrial process over time is conceptualized as a stochastic, observable and controllable dynamical system. The chief underlying motivation is to use the collected data in such a way as to derive the most benefit from them, by successfully trading off the management of risks pertinent to failure of the monitored processes and/or their components against the cost of data collection, processing and interpretation. This enables the formulation of optimization problems for data collection, e.g. for selecting the monitoring system type, topology and/or time of deployment. An example of monitoring the operation of an assembly line and optimizing the topology of the monitoring system is provided to illustrate the theoretical concepts.
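    As a minimal, hypothetical illustration of the pre-posterior analysis described above (the prior, costs and sensor error rates below are invented for the sketch, not taken from the paper):

    ```python
    # Hypothetical numbers: a pre-posterior value-of-information
    # calculation for a binary process-failure state, sketching the
    # Bayesian decision analysis described above (not the paper's model).

    P_FAIL = 0.10        # prior probability the monitored process is degrading
    LOSS_FAIL = 100.0    # loss if degradation goes unaddressed
    COST_ACT = 10.0      # cost of a preventive intervention
    SENS, SPEC = 0.90, 0.95  # assumed sensor sensitivity / specificity

    def best_utility(p_fail):
        """Expected utility of the better action given a failure probability."""
        eu_act = -COST_ACT                 # intervene: pay the fixed cost
        eu_wait = -LOSS_FAIL * p_fail      # do nothing: risk the loss
        return max(eu_act, eu_wait)

    # Prior (no monitoring) decision.
    eu_prior = best_utility(P_FAIL)

    # Pre-posterior step: average the optimal posterior decision
    # over the two possible sensor readings, before any data exist.
    p_pos = SENS * P_FAIL + (1 - SPEC) * (1 - P_FAIL)
    p_fail_given_pos = SENS * P_FAIL / p_pos
    p_fail_given_neg = (1 - SENS) * P_FAIL / (1 - p_pos)

    eu_monitored = (p_pos * best_utility(p_fail_given_pos)
                    + (1 - p_pos) * best_utility(p_fail_given_neg))

    # Value of information: what the data are worth before collecting them.
    voi = eu_monitored - eu_prior
    print(f"VoI = {voi:.2f}")  # monitoring pays off if VoI exceeds its cost
    ```

    Deploying the sensor is rational only when this value of information exceeds the cost of collecting, processing and interpreting the data, which is exactly the trade-off the framework formalizes.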

  15. 40 CFR 58.10 - Annual monitoring network plan and periodic network assessment.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Annual monitoring network plan and periodic network assessment. 58.10 Section 58.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.10 Annual...

  16. 40 CFR 58.10 - Annual monitoring network plan and periodic network assessment.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Annual monitoring network plan and periodic network assessment. 58.10 Section 58.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR QUALITY SURVEILLANCE Monitoring Network § 58.10 Annual...

  17. Determine the optimal carrier selection for a logistics network based on multi-commodity reliability criterion

    NASA Astrophysics Data System (ADS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2013-05-01

    From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
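    The paper's algorithm combines a genetic algorithm, minimal paths and the Recursive Sum of Disjoint Products; as a far smaller sketch of the underlying notion of multi-commodity, multi-state reliability, one can brute-force the joint capacity states of a toy two-route network (all numbers hypothetical):

    ```python
    from itertools import product

    # Toy illustration (not the paper's RSDP algorithm): brute-force the
    # probability that a two-route logistics network meets total demand
    # when each selected carrier's capacity is multi-state. Numbers are
    # hypothetical.

    # Per-route capacity distribution of the chosen carrier: {capacity: prob}
    route_caps = [
        {1: 0.3, 2: 0.7},   # carrier selected for route 1
        {1: 0.3, 2: 0.7},   # carrier selected for route 2
    ]
    DEMAND = 3  # total units of commodity the customer requires

    reliability = 0.0
    # Enumerate every joint capacity state of the network.
    for state in product(*(d.items() for d in route_caps)):
        prob = 1.0
        for _, p in state:
            prob *= p
        if sum(cap for cap, _ in state) >= DEMAND:  # demand satisfied
            reliability += prob

    print(f"multi-state reliability = {reliability:.2f}")
    ```

    Exhaustive enumeration is exponential in the number of routes, which is precisely why the paper resorts to minimal paths and disjoint products for realistic networks.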

  18. A parallel adaptive quantum genetic algorithm for the controllability of arbitrary networks.

    PubMed

    Li, Yuhong; Gong, Guanghong; Li, Ni

    2018-01-01

    In this paper, we propose a novel algorithm, the parallel adaptive quantum genetic algorithm, which can rapidly determine the minimum control nodes of arbitrary networks with both control nodes and state nodes. The corresponding network can be fully controlled with the obtained control scheme. We transformed the network controllability issue into a combinatorial optimization problem based on the Popov-Belevitch-Hautus rank condition. Experiments were conducted on a set of canonical networks and a list of real-world networks. Comparison results demonstrated that the algorithm was better suited to optimizing the controllability of networks, especially larger networks. We subsequently demonstrated that there were links between the optimal control nodes and some network statistical characteristics. The proposed algorithm provides an effective approach to improving the controllability optimization of large networks, or even extra-large networks with hundreds of thousands of nodes.
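    A minimal sketch of the controllability test underlying the approach: the paper uses the Popov-Belevitch-Hautus rank condition, and the equivalent Kalman rank test below checks whether a chosen set of control nodes can steer a small directed chain (the example network is illustrative, not from the paper):

    ```python
    import numpy as np

    # Verify full controllability of a network from a chosen set of
    # control nodes via the Kalman rank test, equivalent to the
    # Popov-Belevitch-Hautus condition used in the paper. The 3-node
    # chain below is an illustrative example, not a network from the paper.

    def is_controllable(A, B):
        """Kalman test: rank [B, AB, ..., A^(n-1)B] == n."""
        n = A.shape[0]
        blocks = [B]
        for _ in range(n - 1):
            blocks.append(A @ blocks[-1])
        return np.linalg.matrix_rank(np.hstack(blocks)) == n

    # Directed chain 1 -> 2 -> 3; the adjacency matrix acts as the
    # state matrix of the linear dynamics x' = Ax + Bu.
    A = np.array([[0., 0., 0.],
                  [1., 0., 0.],
                  [0., 1., 0.]])

    B_head = np.array([[1.], [0.], [0.]])  # drive the head of the chain
    B_tail = np.array([[0.], [0.], [1.]])  # drive the tail instead

    print(is_controllable(A, B_head))  # True: one input suffices
    print(is_controllable(A, B_tail))  # False: upstream nodes unreachable
    ```

    Finding the *minimum* such input set over all candidate node subsets is the combinatorial problem the quantum genetic algorithm searches.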

  19. 78 FR 57845 - Notice of Availability (NOA) for Strategic Network Optimization (SNO) Program Environmental...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-20

    ... (NOA) for Strategic Network Optimization (SNO) Program Environmental Assessment AGENCY: Defense Logistics Agency, DoD. ACTION: Notice of Availability (NOA) for Strategic Network Optimization (SNO) Program... implement the SNO initiative for improvements to material distribution network for the Department of Defense...

  20. Coordinated and uncoordinated optimization of networks

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-06-01

    In this paper, we consider spatial networks that realize a balance between an infrastructure cost (the cost of wire needed to connect the network in space) and communication efficiency, measured by average shortest path length. A global optimization procedure yields network topologies in which this balance is optimized. These are compared with network topologies generated by a competitive process in which each node strives to optimize its own cost-communication balance. Three phases are observed in globally optimal configurations for different cost-communication trade-offs: (i) regular small worlds, (ii) starlike networks, and (iii) trees with a center of interconnected hubs. In the latter regime, i.e., for very expensive wire, power laws in the link length distributions P(w)∝w^(-α) are found, which can be explained by a hierarchical organization of the networks. In contrast, in the local optimization process the presence of sharp transitions between different network regimes depends on the dimension of the underlying space. Whereas for d=∞ sharp transitions between fully connected networks, regular small worlds, and highly cliquish periphery-core networks are found, for d=1 sharp transitions are absent and the power law behavior in the link length distribution persists over a much wider range of link cost parameters. The measured power law exponents are in agreement with the hypothesis that the locally optimized networks consist of multiple overlapping suboptimal hierarchical trees.
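    The cost-communication balance can be sketched numerically. Assuming a simplified objective of the form E = w · (total wire length) + (average shortest path length), which stands in for, but is not identical to, the paper's trade-off, comparing a ring with a star on the same node positions already shows how the optimum shifts with the wire cost:

    ```python
    import math
    from itertools import combinations

    # Simplified stand-in for the paper's objective: score a spatial
    # network as E = cost_weight * total_wire + average_shortest_path,
    # and compare two topologies on the same circle of nodes.

    def shortest_paths(n, edges):
        """All-pairs hop distances by BFS (fine for small n)."""
        adj = {i: set() for i in range(n)}
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        dist = {}
        for s in range(n):
            d = {s: 0}
            frontier = [s]
            while frontier:
                nxt = []
                for u in frontier:
                    for v in adj[u]:
                        if v not in d:
                            d[v] = d[u] + 1
                            nxt.append(v)
                frontier = nxt
            dist[s] = d
        return dist

    def energy(points, edges, cost_weight):
        wire = sum(math.dist(points[a], points[b]) for a, b in edges)
        dist = shortest_paths(len(points), edges)
        pairs = list(combinations(range(len(points)), 2))
        avg_path = sum(dist[a][b] for a, b in pairs) / len(pairs)
        return cost_weight * wire + avg_path

    n = 8
    points = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
              for k in range(n)]
    ring = [(k, (k + 1) % n) for k in range(n)]
    star = [(0, k) for k in range(1, n)]  # node 0 acts as the hub

    for w in (0.1, 2.0):  # cheap wire vs expensive wire
        print(f"w={w}: ring={energy(points, ring, w):.2f}, "
              f"star={energy(points, star, w):.2f}")
    ```

    With cheap wire the star's short paths win; with expensive wire the ring's short chords win, a toy version of the phase changes described above.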

  1. Reliable Collection of Real-Time Patient Physiologic Data from less Reliable Networks: a "Monitor of Monitors" System (MoMs).

    PubMed

    Hu, Peter F; Yang, Shiming; Li, Hsiao-Chi; Stansbury, Lynn G; Yang, Fan; Hagegeorge, George; Miller, Catriona; Rock, Peter; Stein, Deborah M; Mackenzie, Colin F

    2017-01-01

    Research and practice based on automated electronic patient monitoring and data collection systems is significantly limited by system downtime. We asked whether a triple-redundant Monitor of Monitors System (MoMs), collecting and summarizing key information from system-wide data sources, could achieve high fault tolerance, early diagnosis of system failure, and improved data collection rates. In our Level I trauma center, patient vital signs (VS) monitors were networked to collect real-time patient physiologic data streams from 94 bed units in our various resuscitation, operating, and critical care units. To minimize the impact of server collection failure, three BedMaster® VS servers were used in parallel to collect data from all bed units. To locate and diagnose system failures, we summarized critical information from high-throughput data streams in real time in a dashboard viewer, and compared the pre- and post-MoMs phases to evaluate data collection performance in terms of availability time, active collection rates, and gap duration, occurrence, and categories. Single-server collection rates in the 3-month period before MoMs deployment ranged from 27.8% to 40.5%, with a combined 79.1% collection rate. Reasons for gaps included collection server failure, software instability, individual bed setting inconsistency, and monitor servicing. In the 6-month post-MoMs deployment period, average collection rates were 99.9%. A triple-redundant patient data collection system with real-time diagnostic information summarization and representation improved the reliability of massive clinical data collection to nearly 100% in a Level I trauma center. Such a data collection framework may also increase the automation level of hospital-wide information aggregation for optimal allocation of health care resources.
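    The redundancy idea can be sketched in a few lines: three collectors each miss a different window of (bed, timestamp) samples, and the union of their streams recovers coverage no single server achieves. The drop-out windows below are fabricated placeholders, not MoMs data:

    ```python
    # Sketch of the triple-redundancy idea: three collectors capture
    # overlapping, incomplete subsets of (bed, timestamp) vital-sign
    # records; merging them recovers near-complete coverage. All data
    # below are fabricated placeholders.

    def coverage(records, expected):
        return len(records) / len(expected)

    # One minute of expected 1 Hz samples for a single bed unit.
    expected = {("bed01", t) for t in range(60)}

    # Each server drops a different window (restarts, network hiccups).
    server_a = {("bed01", t) for t in range(60) if not 10 <= t < 30}
    server_b = {("bed01", t) for t in range(60) if not 25 <= t < 45}
    server_c = {("bed01", t) for t in range(60) if not 40 <= t < 55}

    merged = server_a | server_b | server_c   # union de-duplicates records

    for name, recs in [("A", server_a), ("B", server_b),
                       ("C", server_c), ("merged", merged)]:
        print(f"{name}: {coverage(recs, expected):.0%}")
    ```

    As long as the three servers' outage windows never all overlap, the merged stream is complete, which mirrors the reported jump from 79.1% combined coverage to 99.9%.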

  2. The Role of Optimality in Characterizing CO2 Seepage from Geological Carbon Sequestration Sites

    NASA Astrophysics Data System (ADS)

    Cortis, A.; Oldenburg, C. M.; Benson, S. M.

    2007-12-01

    Storage of large amounts of carbon dioxide (CO2) in deep geological formations for greenhouse-gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this talk we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of the: (1) region that needs to be monitored; (2) footprint of the measurement approach; (3) main seepage zone; and (4) region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage nor existing evidence for seepage requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO2 storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems. 
Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO2 seepage without need for detailed understanding of natural system processes. Because of the local extrema in CO2 fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO2 seepage areas. This work was carried out within the ZERT project, funded by the Assistant Secretary for Fossil Energy, Office of Sequestration, Hydrogen, and Clean Coal Fuels, National Energy Technology Laboratory, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231.

  3. Application of Neural Network Optimized by Mind Evolutionary Computation in Building Energy Prediction

    NASA Astrophysics Data System (ADS)

    Song, Chen; Zhong-Cheng, Wu; Hong, Lv

    2018-03-01

    Building energy forecasting plays an important role in energy management and planning. Using the mind evolutionary algorithm to find optimal network weights and thresholds for a BP neural network can overcome the BP network's tendency to become trapped in local minima. The optimized network is used for time-series prediction and same-month forecasting, yielding two predictive values. These two predictive values are then fed into a neural network to obtain the final forecast value. The effectiveness of the method was verified by experiments with the energy consumption of three buildings in Hefei.
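    As a hedged sketch of the general idea, the snippet below uses a simple evolutionary search over network weights, standing in for both the mind evolutionary algorithm and gradient-based BP training; the toy target, the tiny two-unit network and the (1+4) evolution strategy are all assumptions for illustration:

    ```python
    import math
    import random

    # Evolutionary search over the weights of a tiny one-hidden-layer
    # network, instead of a single random initialization plus gradient
    # descent. A stand-in for the paper's mind evolutionary algorithm.

    random.seed(0)

    XS = [i / 10 for i in range(-10, 11)]
    YS = [x * x for x in XS]               # toy target: y = x^2

    def predict(w, x):
        # Two hidden tanh units followed by a linear output layer.
        return (w[4] * math.tanh(w[0] * x + w[1])
                + w[5] * math.tanh(w[2] * x + w[3]) + w[6])

    def mse(w):
        return sum((predict(w, x) - y) ** 2 for x, y in zip(XS, YS)) / len(XS)

    parent = [random.uniform(-1, 1) for _ in range(7)]
    init_err = best = mse(parent)
    for _ in range(300):                   # a (1+4) evolution strategy
        for _ in range(4):
            child = [wi + random.gauss(0, 0.2) for wi in parent]
            err = mse(child)
            if err < best:                 # keep the best individual
                parent, best = child, err

    print(f"MSE improved from {init_err:.4f} to {best:.4f}")
    ```

    Because the search mutates whole weight vectors rather than following the gradient, it is not pulled into the nearest local minimum, which is the property the paper exploits before refinement.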

  4. Thermodynamic characterization of synchronization-optimized oscillator networks

    NASA Astrophysics Data System (ADS)

    Yanagita, Tatsuo; Ichinomiya, Takashi

    2014-12-01

    We consider a canonical ensemble of synchronization-optimized networks of identical oscillators under external noise. By performing a Markov chain Monte Carlo simulation using the Kirchhoff index, i.e., the sum of the inverse eigenvalues of the Laplacian matrix (as a graph Hamiltonian of the network), we construct more than 1 000 different synchronization-optimized networks. We then show that the transition from star to core-periphery structure depends on the connectivity of the network, and is characterized by the node degree variance of the synchronization-optimized ensemble. We find that thermodynamic properties such as heat capacity show anomalies for sparse networks.
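    The graph Hamiltonian used above is easy to compute directly: up to the factor n in the common definition, the Kirchhoff index is the sum of inverse nonzero Laplacian eigenvalues, and better-synchronizing (denser) graphs score lower, as this small comparison shows:

    ```python
    import numpy as np

    # Kirchhoff index from the nonzero Laplacian eigenvalues, used above
    # as the graph Hamiltonian of synchronization-optimized networks.

    def kirchhoff_index(adj):
        L = np.diag(adj.sum(axis=1)) - adj          # graph Laplacian
        eig = np.sort(np.linalg.eigvalsh(L))
        nonzero = eig[1:]                           # drop the zero mode
        return adj.shape[0] * np.sum(1.0 / nonzero)

    # 3-node path vs 3-node complete graph (triangle).
    path = np.array([[0, 1, 0],
                     [1, 0, 1],
                     [0, 1, 0]], dtype=float)
    triangle = np.ones((3, 3)) - np.eye(3)

    print(kirchhoff_index(path))      # ≈ 4.0
    print(kirchhoff_index(triangle))  # ≈ 2.0
    ```

    A Markov chain Monte Carlo sampler as in the paper would propose edge rewirings and accept or reject them according to the change in this quantity.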

  5. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors.

    PubMed

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-09-21

    In order to utilize the distributed characteristic of sensors, distributed machine learning has become the mainstream approach, but the different computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a reasonable parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose the Dynamic Finite Fault Tolerance (DFFT). Based on the DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named Dynamic Synchronous Parallel Strategy (DSP), which uses the performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids the situation that the model training is disturbed by any tasks unrelated to the sensors.

  6. Design of a ground-water-quality monitoring network for the Salinas River basin, California

    USGS Publications Warehouse

    Showalter, P.K.; Akers, J.P.; Swain, L.A.

    1984-01-01

    A regional ground-water quality monitoring network for the entire Salinas River drainage basin was designed to meet the needs of the California State Water Resources Control Board. The project included phase 1--identifying monitoring networks that exist in the region; phase 2--collecting information about the wells in each network; and phase 3--studying the factors--such as geology, land use, hydrology, and geohydrology--that influence the ground-water quality, and designing a regional network. This report is the major product of phase 3. Based on the authors' understanding of the ground-water-quality monitoring system and input from local offices, an ideal network was designed. The proposed network includes 317 wells and 8 stream-gaging stations. Because limited funds are available to implement the monitoring network, the proposed network is designed to correspond to the ideal network insofar as practicable, and is composed mainly of 214 wells that are already being monitored by a local agency. In areas where network wells are not available, arrangements will be made to add wells to local networks. The data collected by this network will be used to assess the ground-water quality of the entire Salinas River drainage basin. After 2 years of data are collected, the network will be evaluated to test whether it is meeting the network objectives. Subsequent network evaluations will be done every 5 years. (USGS)

  7. Exploiting node mobility for energy optimization in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    El-Moukaddem, Fatme Mohammad

    Wireless Sensor Networks (WSNs) have become increasingly available for data-intensive applications such as micro-climate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit the sheer amount of data generated within an application's lifetime to the base station despite the fact that sensor nodes have limited power supplies such as batteries or small solar panels. The availability of numerous low-cost robotic units (e.g. Robomote and Khepera) has made it possible to construct sensor networks consisting of mobile sensor nodes. It has been shown that the controlled mobility offered by mobile sensors can be exploited to improve the energy efficiency of a network. In this thesis, we propose schemes that use mobile sensor nodes to reduce the energy consumption of data-intensive WSNs. Our approaches differ from previous work in two main aspects. First, our approaches do not require complex motion planning of mobile nodes, and hence can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless communications into a holistic optimization framework. We consider three problems arising from the limited energy in the sensor nodes. In the first problem, the network consists of mostly static nodes and contains only a few mobile nodes. In the second and third problems, we assume that essentially all nodes in the WSN are mobile. We first study a new problem called max-data mobile relay configuration (MMRC) that finds the positions of a set of mobile sensors, referred to as relays, that maximize the total amount of data gathered by the network during its lifetime. We show that the MMRC problem is surprisingly complex even for a trivial network topology due to the joint consideration of the energy consumption of both wireless communication and mechanical locomotion.
We present optimal MMRC algorithms and practical distributed implementations for several important network topologies and applications. Second, we consider the problem of minimizing the total energy consumption of a network. We design an iterative algorithm that improves a given configuration by relocating nodes to new positions. We show that this algorithm converges to the optimal configuration for the given transmission routes. Moreover, we propose an efficient distributed implementation that does not require explicit synchronization. Finally, we consider the problem of maximizing the lifetime of the network. We propose an approach that exploits the mobility of the nodes to balance the energy consumption throughout the network. We develop efficient algorithms for single and multiple round approaches. For all three problems, we evaluate the efficiency of our algorithms through simulations. Our simulation results based on realistic energy models obtained from existing mobile and static sensor platforms show that our approaches significantly improve the network's performance and outperform existing approaches.

  8. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario and the input/output of the problem, the type of optimization problem changes. To address the different natures of optimization problems relating to wireless sensor network design, deployment, operation, planning and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks, which consists of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Keeping in view the multi-faceted coverage of this article, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.
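    The central multi-objective notion, the set of trade-off solutions the decision maker must choose among, can be sketched with a plain Pareto filter over hypothetical WSN deployments scored on coverage (to maximize) and energy use (to minimize):

    ```python
    # Pareto filter over candidate WSN deployments scored on two
    # competing objectives. All candidate values are hypothetical.

    def dominates(a, b):
        """a dominates b: no worse on both objectives, better on one.
        Objectives: (coverage, energy) -- maximize coverage, minimize energy."""
        cov_a, en_a = a
        cov_b, en_b = b
        return (cov_a >= cov_b and en_a <= en_b
                and (cov_a > cov_b or en_a < en_b))

    def pareto_front(candidates):
        return [c for c in candidates
                if not any(dominates(o, c) for o in candidates)]

    deployments = [
        (0.95, 80),   # high coverage, high energy
        (0.90, 55),
        (0.85, 70),   # dominated by (0.90, 55)
        (0.80, 40),
        (0.60, 45),   # dominated by (0.80, 40)
    ]

    print(pareto_front(deployments))
    # the non-dominated trade-off solutions remain
    ```

    Every multi-objective formulation surveyed in the article ultimately asks the decision maker to pick one point from such a non-dominated set, whether it is computed by weighting, by evolutionary search, or by enumeration.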

  9. Toward the Limits of Uniformity of Mixed Metallicity SWCNT TFT Arrays with Spark-Synthesized and Surface-Density-Controlled Nanotube Networks.

    PubMed

    Kaskela, Antti; Mustonen, Kimmo; Laiho, Patrik; Ohno, Yutaka; Kauppinen, Esko I

    2015-12-30

    We report the fabrication of thin film transistors (TFTs) from networks of nonbundled single-walled carbon nanotubes with controlled surface densities. Individual nanotubes were synthesized by using a spark generator-based floating catalyst CVD process. High uniformity and the control of SWCNT surface density were realized by mixing of the SWCNT aerosol in a turbulent flow mixer and monitoring the online number concentration with a condensation particle counter at the reactor outlet in real time. The networks consist of predominantly nonbundled SWCNTs with diameters of 1.0-1.3 nm, mean length of 3.97 μm, and metallic to semiconducting tube ratio of 1:2. The ON/OFF ratio and charge carrier mobility of SWCNT TFTs were simultaneously optimized through fabrication of devices with SWCNT surface densities ranging from 0.36 to 1.8 μm(-2) and channel lengths and widths from 5 to 100 μm and from 100 to 500 μm, respectively. The density optimized TFTs exhibited excellent performance figures with charge carrier mobilities up to 100 cm(2) V(-1) s(-1) and ON/OFF current ratios exceeding 1 × 10(6), combined with high uniformity and more than 99% of devices working as theoretically expected.

  10. Progress and lessons learned from water-quality monitoring networks

    USGS Publications Warehouse

    Myers, Donna N.; Ludtke, Amy S.

    2017-01-01

    Stream-quality monitoring networks in the United States were initiated and expanded after passage of successive federal water-pollution control laws from 1948 to 1972. The first networks addressed information gaps on the extent and severity of stream pollution and served as early warning systems for spills. From 1965 to 1972, monitoring networks expanded to evaluate compliance with stream standards, track emerging issues, and assess water-quality status and trends. After 1972, concerns arose regarding the ability of monitoring networks to determine if water quality was getting better or worse and why. As a result, monitoring networks adopted a hydrologic systems approach targeted to key water-quality issues, accounted for human and natural factors affecting water quality, innovated new statistical methods, and introduced geographic information systems and models that predict water quality at unmeasured locations. Despite improvements, national-scale monitoring networks have declined over time. Only about 1%, or 217, of more than 36,000 US Geological Survey monitoring sites sampled from 1975 to 2014 have been operated throughout the four decades since passage of the 1972 Clean Water Act. Efforts to sustain monitoring networks are important because these networks have collected information crucial to the description of water-quality trends over time and are providing information against which to evaluate future trends.

  11. Efficient large-scale graph data optimization for intelligent video surveillance

    NASA Astrophysics Data System (ADS)

    Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming

    2017-08-01

    Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking lot surveillance, automobiles and smart spaces. These cameras provide data every day that must be analyzed effectively. Recent advances in sensor manufacturing, communications and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Dense camera networks, in which most cameras have large overlapping fields of view, are well researched; we therefore focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so most cameras do not overlap each other's field of view. This setting is challenging due to the lack of knowledge of the network topology, the changes in appearance and motion of targets across different views, and the difficulty of understanding complex events in the network. In this paper, we present a comprehensive survey of recent research results addressing topology learning, object appearance modeling and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.

  12. A Mobile Sensor Network to Map CO2 in Urban Environments

    NASA Astrophysics Data System (ADS)

    Lee, J.; Christen, A.; Nesic, Z.; Ketler, R.

    2014-12-01

    Globally, an estimated 80% of all fuel-based CO2 emissions into the atmosphere are attributable to cities, but there is still a lack of tools to map, visualize and monitor emissions at the scales at which emission reduction strategies can be implemented - the local and urban scale. Mobile CO2 sensors, such as those attached to taxis and other existing mobile platforms, may be a promising way to observe and map CO2 mixing ratios across heterogeneous urban environments with a limited number of sensors. Emerging modular open source technologies and inexpensive compact sensor components not only enable rapid prototyping and replication, but also allow for the miniaturization and mobilization of traditionally fixed sensor networks. We aim to optimize the methods and technologies for monitoring CO2 in cities using a network of CO2 sensors deployable on vehicles and bikes. Our sensor package is contained in a compact weather-proof case (35.8 cm x 27.8 cm x 11.8 cm), powered independently by battery or by car, and includes a Li-Cor LI-820 infrared gas analyzer (LI-COR Inc., Lincoln, NE, USA), an Arduino Mega microcontroller (Arduino CC, Italy), an Adafruit GPS (Adafruit Industries, NY, USA) and a digital air temperature thermometer, which measure CO2 mixing ratios (ppm) and pressure, geolocation and speed, and air temperature, respectively, at 1-second intervals. With the deployment of our sensor technology, we will determine whether such a semi-autonomous mobile approach to monitoring CO2 in cities can detect excess urban CO2 mixing ratios (i.e. the 'urban CO2 dome') when compared to values measured at a fixed, remote background site. We present results from a pilot study in Vancouver, BC, where a network of our new sensors was deployed both in a fixed network and in a mobile campaign, and examine the spatial biases of the two methods.

  13. Do you see what I hear: experiments in multi-channel sound and 3D visualization for network monitoring?

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Hall, David L.

    2010-04-01

    Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server Return Codes. Users can interact with the data, speeding or slowing the speed of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
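    The authors render sound with SuperCollider; as a rough sketch of the preceding parse-and-map step (the field choices and mappings below are hypothetical, not the paper's), each log entry can be reduced to synthesis parameters such as pitch from the return-code class and stereo pan from the first IP octet:

    ```python
    # Hypothetical mapping from a Web log entry to sound parameters,
    # sketching the parse-and-map step performed before synthesis.

    def entry_to_sound(ip, return_code):
        # Map the return-code class (2xx, 3xx, 4xx, 5xx) to a pitch:
        # errors sound higher and more salient than routine responses.
        base_hz = {2: 220.0, 3: 330.0, 4: 550.0, 5: 880.0}
        freq = base_hz.get(return_code // 100, 110.0)
        # Spread sources across the stereo field by first IP octet.
        pan = int(ip.split(".")[0]) / 255 * 2 - 1   # -1 (left) .. +1 (right)
        return {"freq": freq, "pan": round(pan, 3)}

    print(entry_to_sound("10.0.0.7", 200))     # routine traffic, low pitch
    print(entry_to_sound("203.0.113.9", 404))  # probing stands out
    ```

    The goal of such a mapping is exactly what the paper describes: routine traffic forms an unobtrusive background texture, while bursts of error codes become audibly salient.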

  14. Energy Harvesting for Structural Health Monitoring Sensor Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, G.; Farrar, C. R.; Todd, M. D.

    2007-02-26

    This report has been developed based on information exchanges at a 2.5-day workshop on energy harvesting for embedded structural health monitoring (SHM) sensing systems that was held June 28-30, 2005, at Los Alamos National Laboratory. The workshop was hosted by the LANL/UCSD Engineering Institute (EI). This Institute is an education- and research-focused collaboration between Los Alamos National Laboratory (LANL) and the University of California, San Diego (UCSD), Jacobs School of Engineering. A Statistical Pattern Recognition paradigm for SHM is first presented and the concept of energy harvesting for embedded sensing systems is addressed with respect to the data acquisition portion of this paradigm. Next, various existing and emerging sensing modalities used for SHM and their respective power requirements are summarized, followed by a discussion of SHM sensor network paradigms, power requirements for these networks and power optimization strategies. Various approaches to energy harvesting and energy storage are discussed and limitations associated with the current technology are addressed. This discussion also addresses current energy harvesting applications and system integration issues. The report concludes by defining some future research directions and possible technology demonstrations that are aimed at transitioning the concept of energy harvesting for embedded SHM sensing systems from laboratory research to field-deployed engineering prototypes.

  15. Hybrid intelligent monitoring systems for thermal power plant trips

    NASA Astrophysics Data System (ADS)

    Barsoum, Nader; Ismail, Firas Basim

    2012-11-01

    The steam boiler is one of the main pieces of equipment in thermal power plants. If the steam boiler trips, it may lead to an entire shutdown of the plant, which is economically burdensome. Early monitoring of boiler trips is crucial to maintaining normal and safe operational conditions. In the present work, two artificial intelligent monitoring systems (IMSs) specialized in boiler trips have been proposed and coded within the MATLAB environment. The training and validation of the two systems were performed using real operational data captured from the plant control system of a selected power plant. An integrated plant data preparation framework for seven boiler trips with related operational variables has been proposed for IMS data analysis. The first IMS uses a pure artificial neural network for boiler trip detection. All seven boiler trips under consideration were detected by the IMSs before, or at the same time as, the plant control system. The second IMS uses genetic algorithms and artificial neural networks as a hybrid intelligent system. A slightly lower root mean square error was observed for the second system, which reveals that the hybrid intelligent system performed better than the pure neural network system. The optimal selection of the most influential variables was also performed successfully by the hybrid intelligent system.

  16. Optical Network Virtualisation Using Multitechnology Monitoring and SDN-Enabled Optical Transceiver

    NASA Astrophysics Data System (ADS)

    Ou, Yanni; Davis, Matthew; Aguado, Alejandro; Meng, Fanchao; Nejabati, Reza; Simeonidou, Dimitra

    2018-05-01

    We introduce real-time multi-technology transport-layer monitoring to facilitate the coordinated virtualisation of optical and Ethernet networks supported by optical virtualise-able transceivers (V-BVT). A monitoring and network resource configuration scheme is proposed that includes hardware monitoring in both the Ethernet and optical layers. The scheme depicts the data and control interactions among multiple network layers in a software-defined networking (SDN) context, as well as the application that analyses the monitored data obtained from the database. We also present a re-configuration algorithm that adaptively modifies the composition of virtual optical networks based on two criteria. The proposed monitoring scheme is experimentally demonstrated with OpenFlow (OF) extensions for holistic (re-)configuration across both layers, in Ethernet switches and V-BVTs.

  17. Availability Issues in Wireless Visual Sensor Networks

    PubMed Central

    Costa, Daniel G.; Silva, Ivanovitch; Guedes, Luiz Affonso; Vasques, Francisco; Portugal, Paulo

    2014-01-01

    Wireless visual sensor networks have been considered for a large set of monitoring applications related to surveillance, tracking and multipurpose visual monitoring. When sensors are deployed over a monitored field, permanent faults may happen during the network lifetime, reducing the monitoring quality or rendering parts of the network, or even the entire network, unavailable. Unlike scalar sensor networks, camera-enabled sensors collect information following a directional sensing model, which changes the notions of vicinity and redundancy. Moreover, visual source nodes may have different relevance for the applications, according to the monitoring requirements and the cameras' poses. In this paper we discuss the most relevant availability issues related to wireless visual sensor networks, addressing availability evaluation and enhancement. Such discussions are valuable when designing, deploying and managing wireless visual sensor networks. PMID:24526301

  18. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
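
    The least absolute deviation (LAD) application mentioned above can be illustrated with a conventional numerical method. The sketch below is not the authors' recurrent network; it is a plain subgradient descent on the nonsmooth objective f(x) = Σᵢ|x − bᵢ| (whose minimizer is the median), on made-up data, to show the class of problem the network solves.

```python
import numpy as np

def l1_location(b, steps=4000, lr=0.05):
    """Minimize f(x) = sum_i |x - b_i| by subgradient descent.

    Illustrative baseline only -- not the paper's recurrent network,
    which reaches the optimum of such nonsmooth problems in finite time.
    """
    x = 0.0
    for k in range(steps):
        g = np.sum(np.sign(x - b))       # a subgradient of f at x
        x -= lr / np.sqrt(k + 1) * g     # diminishing steps handle nonsmoothness
    return x

b = np.array([1.0, 2.0, 3.0, 4.0, 100.0])  # one gross outlier
x = l1_location(b)                         # approaches the median, 3.0
```

    Note the contrast with the least-squares answer: the mean of this data is 22.0, dragged far off by the outlier, while the L1 minimizer stays at the median; this robustness is what makes the constrained least absolute deviation problem worth solving.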

  19. Electronic neural networks for global optimization

    NASA Technical Reports Server (NTRS)

    Thakoor, A. P.; Moopenn, A. W.; Eberhardt, S.

    1990-01-01

    An electronic neural network with feedback architecture, implemented in custom analog VLSI, is described. Its application to problems of global optimization for dynamic assignment is discussed. The convergence properties of the neural network hardware are compared with computer simulation results. The neural network's ability to provide optimal or near-optimal solutions within only a few neuron time constants, a speed enhancement of several orders of magnitude over conventional search methods, is demonstrated. The effect of noise on the circuit dynamics and the convergence behavior of the neural network hardware is also examined.

  20. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis.

    PubMed

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.

  1. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis

    NASA Astrophysics Data System (ADS)

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Objective. Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. Approach. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. Main results. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Significance. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.
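
    The CCA step at the heart of the method above can be sketched with a tiny centralized example. This is not the authors' distributed, bandwidth-constrained algorithm: it is a minimal two-node toy on synthetic data, with the blink waveform, channel gains and noise levels all invented, showing how the first canonical pair isolates a component shared across two channel sets so it can be regressed out.

```python
import numpy as np

def cca_first_pair(X, Y):
    """First pair of canonical variates of data matrices X, Y (samples x channels)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    # Orthonormal bases for each set, then SVD of their cross-correlation.
    Ux = np.linalg.svd(Xc, full_matrices=False)[0]
    Uy = np.linalg.svd(Yc, full_matrices=False)[0]
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    return Ux @ U[:, 0], Uy @ Vt[0], S[0]   # variates and top canonical correlation

rng = np.random.default_rng(1)
n = 2000
blink = (np.sin(np.linspace(0, 40, n)) > 0.99).astype(float)    # toy blink train
X = np.outer(blink, [1.0, 0.8, 0.5]) + 0.1 * rng.standard_normal((n, 3))  # node 1
Y = np.outer(blink, [0.9, 0.6]) + 0.1 * rng.standard_normal((n, 2))       # node 2

u, v, rho = cca_first_pair(X, Y)              # u, v track the shared blink component
X_clean = X - np.outer(u, (u @ X) / (u @ u))  # regress shared component out of node 1
```

    The distributed algorithm in the paper achieves the same effect while each module transmits only one optimally combined channel instead of all of its raw channels.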

  2. Effective contaminant detection networks in uncertain groundwater flow fields.

    PubMed

    Hudak, P F

    2001-01-01

    A mass transport simulation model tested seven contaminant detection-monitoring networks under a 40-degree range of groundwater flow directions. Each monitoring network contained five wells located 40 m from a rectangular landfill. The 40-m distance (lag) was measured in different directions, depending upon the strategy used to design a particular monitoring network. Lagging the wells parallel to the central flow path was more effective than alternative design strategies. Other strategies allowed higher percentages of leaks to migrate between monitoring wells. Results of this study suggest that centrally lagged groundwater monitoring networks perform most effectively in uncertain groundwater-flow fields.
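
    The design comparison can be caricatured with a Monte Carlo sketch. This is not the paper's mass transport simulation: it reduces each leak to a straight plume centreline leaving the landfill at a random direction within the ±20° uncertainty band, and counts a detection when that direction falls within a hypothetical capture half-angle of some well. All angles and the capture width are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def detection_rate(well_angles_deg, n_trials=20000, capture_deg=5.0):
    """Fraction of random plume directions intercepted by at least one well.

    Toy geometry: wells sit on a 40 m arc downgradient of the landfill and a
    plume is 'detected' if its centreline passes within capture_deg of a well.
    Flow direction is uncertain within +/-20 degrees of the central flow path.
    """
    flow = rng.uniform(-20.0, 20.0, n_trials)
    wells = np.asarray(well_angles_deg, dtype=float)
    nearest = np.min(np.abs(flow[:, None] - wells[None, :]), axis=1)
    return float(np.mean(nearest <= capture_deg))

central = detection_rate([-20, -10, 0, 10, 20])   # lagged about central flow path
spread  = detection_rate([-40, -20, 0, 20, 40])   # wider, sparser arc
```

    The tighter, centrally lagged arc intercepts essentially every plume in the uncertainty band, while the wider arc lets roughly half the leaks slip between wells, mirroring the qualitative conclusion of the study.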

  3. Topology Optimization for Energy Management in Underwater Sensor Networks

    DTIC Science & Technology

    2015-02-01

    To appear in the International Journal of Control as a regular paper: Topology Optimization for Energy Management in Underwater Sensor Networks. Authors: Devesh K. Jha, Thomas A. Wettergren, Asok Ray, Kushal Mukherjee. Keywords: underwater sensor network, energy management, Pareto optimization, adaptation.

  4. [The fundamental role of stage control technology on the detectability for Salmonella networking laboratory].

    PubMed

    Zhou, Yong-ming; Chen, Xiu-hua; Xu, Wen; Jin, Hui-ming; Li, Chao-qun; Liang, Wei-li; Wang, Duo-chun; Yan, Mei-ying; Lou, Jing; Kan, Biao; Ran, Lu; Cui, Zhi-gang; Wang, Shu-kun; Xu, Xue-bin

    2013-11-01

    To evaluate the fundamental role of stage control technology (SCT) in the detection capability of Salmonella networking laboratories. Appropriate Salmonella detection methods, with key control points evaluated, were established and optimized. Our training and evaluation networking laboratories participated in the World Health Organization Global Salmonella Surveillance (WHO-GSS) project and the China-U.S. Collaborative Program on Emerging and Re-emerging Infectious Diseases (GFN) in Shanghai. Staff members from the Yuxi City (Yunnan) Center for Disease Control and Prevention were trained in Salmonella isolation from diarrhea specimens. Data on annual Salmonella positive rates were collected from the provincial-level monitoring sites participating in the GSS and GFN projects from 2006 to 2012. The methodology was designed based on the conventional Salmonella detection procedure, which involves enrichment, isolation, species identification and serotyping. These methods were used simultaneously to satisfy the sensitivity requirements for non-typhoid Salmonella detection in networking laboratories. The number of public health laboratories in Shanghai grew from 5 in 2006 to 9 in 2011, and the number of clinical laboratories from 8 to 22. The number of clinical isolates, including typhoid and non-typhoid Salmonella, increased from 196 in 2006 to 1442 in 2011. The positive rate of Salmonella isolated from clinical diarrhea cases in Yuxi was 2.4% in 2012. At present, three other provincial monitoring sites use SBG as the selective enrichment broth for Salmonella isolation, with Shanghai having the most stable positive baseline. SCT proved to be a premise of network laboratory construction. On this basis, improved phenotypic identification and molecular typing capabilities could reach a level equivalent to that of the national networking laboratory.

  5. Inclusion of tank configurations as a variable in the cost optimization of branched piped-water networks

    NASA Astrophysics Data System (ADS)

    Hooda, Nikhil; Damani, Om

    2017-06-01

    The classic problem of the capital cost optimization of branched piped networks consists of choosing pipe diameters for each pipe in the network from a discrete set of commercially available pipe diameters. Each pipe in the network can consist of multiple segments of differing diameters. Water networks also consist of intermediate tanks that act as buffers between incoming flow from the primary source and the outgoing flow to the demand nodes. The network from the primary source to the tanks is called the primary network, and the network from the tanks to the demand nodes is called the secondary network. During the design stage, the primary and secondary networks are optimized separately, with the tanks acting as demand nodes for the primary network. Typically the choice of tank locations, their elevations, and the set of demand nodes to be served by different tanks is manually made in an ad hoc fashion before any optimization is done. It is desirable therefore to include this tank configuration choice in the cost optimization process itself. In this work, we explain why the choice of tank configuration is important to the design of a network and describe an integer linear program model that integrates the tank configuration to the standard pipe diameter selection problem. In order to aid the designers of piped-water networks, the improved cost optimization formulation is incorporated into our existing network design system called JalTantra.
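
    The underlying discrete choice (one commercial diameter per pipe, minimum capital cost, subject to a hydraulic constraint) can be shown with a brute-force sketch. This is not JalTantra's integer linear program, and it omits tanks, pipe segments and branching entirely: three pipes in series, with made-up diameters, unit costs, design flow and head-loss budget, and a Darcy-Weisbach head-loss formula with an assumed friction factor.

```python
from itertools import product

# Hypothetical data: commercial diameters (m) with unit costs (currency/m).
DIAMETERS = {0.10: 50.0, 0.15: 90.0, 0.20: 140.0, 0.30: 250.0}
PIPES = [500.0, 300.0, 200.0]        # lengths (m) of three pipes in series
Q = 0.05                             # design flow (m^3/s), assumed constant
H_BUDGET = 15.0                      # allowable total head loss (m)

def head_loss(length, d, q=Q, f=0.02):
    # Darcy-Weisbach head loss: h = 0.0826 * f * L * q^2 / d^5 (f assumed).
    return 0.0826 * f * length * q * q / d ** 5

def cheapest_design(pipes, budget):
    # Enumerate every diameter assignment; keep the cheapest feasible one.
    best, best_cost = None, float("inf")
    for combo in product(DIAMETERS, repeat=len(pipes)):
        cost = sum(DIAMETERS[d] * L for d, L in zip(combo, pipes))
        head = sum(head_loss(L, d) for d, L in zip(combo, pipes))
        if head <= budget and cost < best_cost:
            best, best_cost = combo, cost
    return best, best_cost

design, cost = cheapest_design(PIPES, H_BUDGET)
```

    Enumeration is fine for 4³ = 64 combinations but grows exponentially with network size, which is why the paper formulates the problem (including the tank configuration choice) as an integer linear program instead.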

  6. Use of NTRIP for optimizing the decoding algorithm for real-time data streams.

    PubMed

    He, Zhanke; Tang, Wenda; Yang, Xuhai; Wang, Liming; Liu, Jihua

    2014-10-10

    As a network transmission protocol, Networked Transport of RTCM via Internet Protocol (NTRIP) is widely used in GPS and Global Orbiting Navigational Satellite System (GLONASS) augmentation systems, such as Continuously Operating Reference Stations (CORS), the Wide Area Augmentation System (WAAS) and Satellite Based Augmentation Systems (SBAS). With the deployment of the BeiDou Navigation Satellite System (BDS) to serve the Asia-Pacific region, there are increasing needs for ground monitoring of BDS and for the development of high-precision real-time BeiDou products. This paper aims to optimize the decoding algorithm for NTRIP Client data streams and the user authentication strategy of the NTRIP Caster. The proposed method greatly enhances handling efficiency and significantly reduces data transmission delay compared with the Federal Agency for Cartography and Geodesy (BKG) NTRIP implementation. Meanwhile, a transcoding method is proposed to facilitate data transformation from the BINary EXchange (BINEX) format to the RTCM format. The transformation scheme thus solves the problem of handling real-time data streams from Trimble receivers within BDS, the navigation satellite system indigenously developed by China.
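
    The NTRIP handshake the paper builds on is a small plain-text exchange over TCP. The sketch below only assembles the client request; the mountpoint, host and credentials are placeholders, it does not reproduce the paper's optimized decoder or authentication strategy, and the header details should be checked against the NTRIP specification supported by your caster.

```python
import base64

def ntrip_request(mountpoint, user, password, host, ntrip2=True):
    """Build an NTRIP client request (sketch of the plain-text handshake).

    NTRIP 1.0 casters expect an HTTP/1.0-style request; NTRIP 2.0 uses
    HTTP/1.1 plus an Ntrip-Version header. Verify against your caster's docs.
    """
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    lines = [
        f"GET /{mountpoint} HTTP/1.1" if ntrip2 else f"GET /{mountpoint} HTTP/1.0",
        f"Host: {host}",
        "User-Agent: NTRIP pyclient/0.1",
        f"Authorization: Basic {auth}",
    ]
    if ntrip2:
        lines.append("Ntrip-Version: Ntrip/2.0")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# Placeholder mountpoint/host/credentials, for illustration only.
req = ntrip_request("RTCM3_MOUNT", "user", "pass", "caster.example.org")
```

    After sending this request, a caster replies with a status line and then streams RTCM (or BINEX) frames on the same socket, which is where the paper's optimized decoding applies.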

  7. Feasibility analysis of using inverse modeling for estimating field-scale evapotranspiration in maize and soybean fields from soil water content monitoring networks

    NASA Astrophysics Data System (ADS)

    Foolad, Foad; Franz, Trenton E.; Wang, Tiejun; Gibson, Justin; Kilic, Ayse; Allen, Richard G.; Suyker, Andrew

    2017-03-01

    In this study, the feasibility of using inverse vadose zone modeling for estimating field-scale actual evapotranspiration (ETa) was explored at a long-term agricultural monitoring site in eastern Nebraska. Data from both point-scale soil water content (SWC) sensors and the area-average technique of cosmic-ray neutron probes were evaluated against independent ETa estimates from a co-located eddy covariance tower. While this methodology has been used successfully for estimates of groundwater recharge, it was essential to assess its performance for other components of the water balance such as ETa. In light of recent evaluations of land surface models (LSMs), independent estimates of hydrologic state variables and fluxes are critically needed benchmarks. The results here indicate reasonable estimates of daily and annual ETa from the point sensors, but with highly varied soil hydraulic function parameterizations due to local soil texture variability. The finding that multiple soil hydraulic parameterizations lead to equally good ETa estimates is consistent with the hydrological principle of equifinality. While this study focused on one particular site, the framework can easily be applied to other SWC monitoring networks across the globe. The value-added products of groundwater recharge and ETa flux from SWC monitoring networks will provide additional and more robust benchmarks for the validation of LSMs as they continue to improve their forecast skill. In addition, groundwater recharge and ETa often have more direct impacts on societal decision-making than SWC alone: water flux informs decisions ranging from policies on the long-term management of groundwater resources (recharge) to yield forecasts (ETa) and optimal irrigation scheduling (ETa). Illustrating the societal benefits of SWC monitoring is critical to ensure the continued operation and expansion of these public datasets.

  8. Bio-mimic optimization strategies in wireless sensor networks: a survey.

    PubMed

    Adnan, Md Akhtaruzzaman; Abdur Razzaque, Mohammd; Ahmed, Ishtiaque; Isnin, Ismail Fauzi

    2013-12-24

    For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security and energy harvesting, have been extensively explored. The three most important among these are energy efficiency, quality of service and security management. To get the best possible results for one or more of these issues in wireless sensor networks, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues may conflict and require a trade-off among them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers have started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus on optimizing only one of the three issues mentioned above. It is high time that these individual efforts were put into perspective and a more holistic view taken. In this paper we take a step in that direction by presenting a survey of the literature on wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms, namely particle swarm optimization, ant colony optimization and the genetic algorithm. In addition, to stimulate new research and development interest in this field, open research issues, challenges and future research directions are highlighted.
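
    Of the three bio-mimetic algorithms surveyed, particle swarm optimization is the easiest to show compactly. The sketch below minimizes a stand-in quadratic cost (imagine it as an abstract energy cost surface); the inertia and acceleration coefficients are common textbook defaults, not values taken from any surveyed paper.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5), seed=3):
    """Minimize f over a box with a basic global-best particle swarm."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))              # particle positions
    v = np.zeros((n, dim))                         # particle velocities
    pbest = x.copy()                               # each particle's best position
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()           # swarm's global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Inertia + cognitive (pbest) + social (gbest) velocity update.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

# Toy stand-in for an energy cost surface: a quadratic bowl with optimum at 0.
best_x, best_f = pso(lambda z: float(np.sum(z * z)), dim=3)
```

    In the sensor-network setting the decision vector would encode, e.g., node duty cycles or cluster-head choices, and f would score energy use, quality of service or a weighted trade-off among the conflicting objectives discussed above.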

  9. ASSESSING THE COMPARABILITY OF AMMONIUM, NITRATE AND SULFATE CONCENTRATIONS MEASURED BY THREE AIR QUALITY MONITORING NETWORKS

    EPA Science Inventory

    Airborne fine particulate matter across the United States is monitored by different networks, the three prevalent ones presently being the Clean Air Status and Trend Network (CASTNet), the Interagency Monitoring of PROtected Visual Environment Network (IMPROVE) and the Speciati...

  10. Transient stability enhancement of modern power grid using predictive Wide-Area Monitoring and Control

    NASA Astrophysics Data System (ADS)

    Yousefian, Reza

    This dissertation presents a real-time Wide-Area Control (WAC) scheme, designed using artificial intelligence, for transient stability enhancement of large-scale modern power systems. Using measurements available from Phasor Measurement Units (PMUs) at generator buses, the WAC monitors the global oscillations in the system and optimally augments the local excitation systems of the synchronous generators. The complexity of the power system stability problem, along with uncertainties and nonlinearities, makes conventional modeling impractical or inaccurate. In this work, a Reinforcement Learning (RL) algorithm built on Neural Networks (NNs) is used to map the nonlinearities of the system in real time. This method, different from both centralized and decentralized control schemes, employs a number of semi-autonomous agents that collaborate with each other to perform optimal control, well suited to WAC applications. Also, to handle the delays in Wide-Area Monitoring (WAM) and adapt the RL toward robust control design, Temporal Difference (TD) learning is proposed as a solver for the RL problem, or optimal cost function. However, the main drawback of such a WAC design is that it is challenging to determine whether an offline-trained network remains valid for assessing the stability of the power system once the system has evolved to a different operating state or network topology. To address the generality issue of NNs, a value priority scheme is proposed in this work to design a hybrid of linear and nonlinear controllers. The algorithm, so-called supervised RL, is based on a mixture of experts: it is initialized with the linear controller and, as the performance and identification of the RL controller improve in real time, switches to the RL controller. This work also focuses on transient stability and develops Lyapunov energy functions for synchronous generators to monitor the stability stress of the system.
Using such energies as a cost function guarantees convergence toward optimal post-fault solutions. These energy functions are developed on inter-area oscillations of the system identified online with Prony analysis. Finally, this work investigates the impacts of renewable energy resources, specifically Doubly Fed Induction Generator (DFIG)-based wind turbines, on power system transient stability and control. As the penetration of such resources in the transmission system increases, neglecting their impacts would make the WAC design unrealistic. An energy function is proposed for DFIGs based on their dynamic performance under transient disturbances. Further, this energy is added to the synchronous generators' energy to form a global cost function, which is minimized by the WAC signals. We discuss the relative advantages and bottlenecks of each architecture and methodology using dynamic simulations of several test systems, including a 2-area 8-bus system, the IEEE 39-bus system, and the IEEE 68-bus system, in EMTP and real-time simulators. Being a nonlinear, fast, accurate, and non-model-based design, the proposed WAC system shows better transient and damping response when compared to conventional control schemes and local PSSs.

  11. Vehicle monitoring under Vehicular Ad-Hoc Networks (VANET) parameters employing illumination invariant correlation filters for the Pakistan motorway police

    NASA Astrophysics Data System (ADS)

    Gardezi, A.; Umer, T.; Butt, F.; Young, R. C. D.; Chatwin, C. R.

    2016-04-01

    A spatial domain optimal trade-off Maximum Average Correlation Height (SPOT-MACH) filter has previously been developed and shown to have advantages over frequency domain implementations, in that it can be made locally adaptive to spatial variations in the input image background clutter and normalised for local intensity changes. The main concern with the SPOT-MACH filter is its computationally intensive nature; however, enhancement techniques have previously been proposed to make its execution time comparable to that of its frequency domain counterpart. In this paper a novel approach is discussed which uses VANET parameters coupled with the SPOT-MACH filter in order to minimise the extensive processing of the large video dataset acquired from the Pakistan motorways surveillance system. The VANET parameters provide an estimate of the flow of traffic on the Pakistan motorway network and act as a precursor to the training algorithm. The use of VANET in this scenario contributes substantially to minimizing the computational complexity of the proposed monitoring system.

  12. Developments for the Automation and Remote Control of the Radio Telescopes of the Geodetic Observatory Wettzell

    NASA Astrophysics Data System (ADS)

    Neidhardt, Alexander; Schönberger, Matthias; Plötz, Christian; Kronschnabl, Gerhard

    2014-12-01

    VGOS poses challenges for every aspect of a new radio telescope, and it requires new developments and solutions for the future software and hardware control mechanisms. More experiments, more data, high-speed data transfers over the Internet, and real-time monitoring of current system status information must be handled. Additionally, optimization of the observation shifts is required to reduce workload and costs. Within the framework of the development of the new 13.2-m Twin radio Telescopes Wettzell (TTW), and in combination with upgrades of the 20-m Radio Telescope Wettzell (RTW), some new technical realizations are under development and testing. Besides the activities toward remote control, mainly supported by the project ``Novel EXploration Pushing Robust e-VLBI Services (NEXPReS)'' of the European VLBI Network (EVN), autonomous, automated, and unattended observations are also planned. A basic infrastructure should enable these, e.g., independent monitoring and security systems or additional local high-speed transfer networks to ship data directly from a telescope to the main control room.

  13. Optimization of rainfall networks using information entropy and temporal variability analysis

    NASA Astrophysics Data System (ADS)

    Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-04-01

    Rainfall networks are the most direct sources of precipitation data, and their optimization and evaluation are essential. Information entropy can not only represent the uncertainty of rainfall distribution but can also reflect the correlation and information transmission between rainfall stations. Using entropy, this study optimizes rainfall networks of similar size located in two big cities in China, Shanghai (in the Yangtze River basin) and Xi'an (in the Yellow River basin), with respect to temporal variability analysis. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each further divided into two subareas) are ranked over sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, pointing to temporal variability during network evaluation. We propose a dynamic network evaluation framework that accounts for temporal variability by ranking stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). In this way we can identify rainfall stations that are temporarily important or redundant and provide useful suggestions for decision makers. The proposed framework can serve as a supplement to the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, with the wet-season optimal network tending to produce higher entropy values. Differences in the spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions may be advisable.
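
    A stripped-down version of the greedy entropy ranking can be sketched as follows. This is not the full MIMR criterion (which trades joint information against redundancy and transinformation): here each step simply adds the station that raises the joint entropy of the selected set the most, on synthetic gamma-distributed "rainfall" with one deliberately near-duplicate station.

```python
import numpy as np

def entropy(*cols, bins=8):
    """Joint Shannon entropy (bits) of one or more quantized series."""
    q = np.stack([np.digitize(c, np.histogram_bin_edges(c, bins)) for c in cols])
    _, counts = np.unique(q.T, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def greedy_rank(stations):
    """Rank stations greedily by marginal joint-entropy gain
    (a simplified stand-in for the MIMR criterion)."""
    names = list(stations)
    chosen = [max(names, key=lambda s: entropy(stations[s]))]
    names.remove(chosen[0])
    while names:
        nxt = max(names,
                  key=lambda s: entropy(*[stations[c] for c in chosen], stations[s]))
        chosen.append(nxt)
        names.remove(nxt)
    return chosen

rng = np.random.default_rng(4)
a = rng.gamma(2.0, 2.0, 3000)                       # synthetic "rainfall" series
stations = {
    "A": a,
    "A_dup": a + 0.01 * rng.standard_normal(3000),  # near-duplicate of A
    "B": rng.gamma(2.0, 2.0, 3000),
    "C": rng.gamma(2.0, 2.0, 3000),
}
order = greedy_rank(stations)
```

    The near-duplicate station lands at the bottom of the ranking because it adds almost no new information once its twin is selected, which is exactly the redundancy that the MIMR criterion is designed to expose.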

  14. Steam distribution and energy delivery optimization using wireless sensors

    NASA Astrophysics Data System (ADS)

    Olama, Mohammed M.; Allgood, Glenn O.; Kuruganti, Teja P.; Sukumar, Sreenivas R.; Djouadi, Seddik M.; Lake, Joe E.

    2011-05-01

    The Extreme Measurement Communications Center at Oak Ridge National Laboratory (ORNL) explores the deployment of a wireless sensor system with a real-time, measurement-based energy efficiency optimization framework on the ORNL campus. With particular focus on the 12-mile-long steam distribution network on campus, we propose an integrated system-level approach to optimize energy delivery within the steam distribution system. We address the goal of achieving significant energy savings in steam lines by monitoring and acting on leaking steam valves and traps. Our approach leverages integrated wireless sensors and real-time monitoring capabilities. We assess the real-time status of the distribution system by mounting acoustic sensors on the steam pipes, traps and valves, and observing the state measurements of these sensors. We describe Fourier-spectrum-based algorithms that interpret acoustic vibration sensor data to characterize flows and classify the steam system status. We present the sensor readings, steam flow, steam trap status and the assessed alerts as an interactive overlay within a web-based Google Earth geographic platform that enables decision makers to take remedial action. We believe our demonstration serves as an instantiation of a platform whose implementation can be extended to include newer modalities to manage water flow, sewage and energy consumption.
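
    The Fourier-spectrum classification step can be sketched with a simple band-energy rule. The 2-4 kHz "hiss" band, the 0.3 threshold, and the synthetic signals below are all invented for illustration; in the reported system the spectral features and thresholds would come from field calibration against known trap states.

```python
import numpy as np

def band_energy_ratio(signal, fs, band=(2000.0, 4000.0)):
    """Fraction of spectral energy inside `band` (Hz), via an rFFT."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    inband = (freqs >= band[0]) & (freqs <= band[1])
    return spec[inband].sum() / spec.sum()

def classify_trap(signal, fs, threshold=0.3):
    # Hypothetical rule: a leaking trap hisses, concentrating acoustic energy
    # in a high-frequency band; the threshold would be calibrated in the field.
    return "leaking" if band_energy_ratio(signal, fs) > threshold else "normal"

fs = 16000                           # assumed sample rate (Hz)
t = np.arange(fs) / fs               # one second of signal
rng = np.random.default_rng(5)
normal = np.sin(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(fs)   # low hum
leak = 0.2 * np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 3000 * t)  # hiss
```

    A deployed version would run this per sensor on short windows, letting the web overlay flag traps whose in-band ratio drifts above the calibrated threshold.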

  15. Wireless Sensor Networks - Node Localization for Various Industry Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derr, Kurt; Manic, Milos

    Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS), that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.

  16. Wireless Sensor Networks - Node Localization for Various Industry Problems

    DOE PAGES

    Derr, Kurt; Manic, Milos

    2015-06-01

    Fast, effective monitoring following airborne releases of toxic substances is critical to mitigate risks to threatened population areas. Wireless sensor nodes at fixed predetermined locations may monitor such airborne releases and provide early warnings to the public. A challenging algorithmic problem is determining the locations to place these sensor nodes while meeting several criteria: 1) provide complete coverage of the domain, and 2) create a topology with problem dependent node densities, while 3) minimizing the number of sensor nodes. This manuscript presents a novel approach to determining optimal sensor placement, Advancing Front mEsh generation with Constrained dElaunay Triangulation and Smoothing (AFECETS) that addresses these criteria. A unique aspect of AFECETS is the ability to determine wireless sensor node locations for areas of high interest (hospitals, schools, high population density areas) that require higher density of nodes for monitoring environmental conditions, a feature that is difficult to find in other research work. The AFECETS algorithm was tested on several arbitrary shaped domains. AFECETS simulation results show that the algorithm 1) provides significant reduction in the number of nodes, in some cases over 40%, compared to an advancing front mesh generation algorithm, 2) maintains and improves optimal spacing between nodes, and 3) produces simulation run times suitable for real-time applications.

  17. Artificial neural network assisted kinetic spectrophotometric technique for simultaneous determination of paracetamol and p-aminophenol in pharmaceutical samples using localized surface plasmon resonance band of silver nanoparticles

    NASA Astrophysics Data System (ADS)

    Khodaveisi, Javad; Dadfarnia, Shayessteh; Haji Shabani, Ali Mohammad; Rohani Moghadam, Masoud; Hormozi-Nezhad, Mohammad Reza

    2015-03-01

    Spectrophotometric analysis method based on the combination of the principal component analysis (PCA) with the feed-forward neural network (FFNN) and the radial basis function network (RBFN) was proposed for the simultaneous determination of paracetamol (PAC) and p-aminophenol (PAP). This technique relies on the difference between the kinetic rates of the reactions between the analytes and silver nitrate as the oxidizing agent in the presence of polyvinylpyrrolidone (PVP) as the stabilizer. The reactions are monitored at the analytical wavelength of 420 nm of the localized surface plasmon resonance (LSPR) band of the formed silver nanoparticles (Ag-NPs). Under the optimized conditions, linear calibration graphs were obtained in the concentration ranges of 0.122-2.425 μg mL⁻¹ for PAC and 0.021-5.245 μg mL⁻¹ for PAP. The limits of detection by the standard approach (LOD-SA) and the upper limit approach (LOD-ULA) were calculated to be 0.027 and 0.032 μg mL⁻¹ for PAC and 0.006 and 0.009 μg mL⁻¹ for PAP. The important parameters were optimized for the artificial neural network (ANN) models. Statistical parameters indicated that the abilities of both methods are comparable. The proposed method was successfully applied to the simultaneous determination of PAC and PAP in pharmaceutical preparations.

  18. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment and operation often give rise to multi-objective formulations in which multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These objectives may or may not conflict with each other. The type of optimization problem changes with the nature of the application, the sensing scenario and the input/output of the problem. To address the varied optimization problems arising in wireless sensor network design, deployment, operation, planning and placement, a plethora of optimization solution types exists. We review and analyze different desirable objectives to show whether they conflict with each other, support each other or are design dependent. We also present a generic multi-objective optimization problem for wireless sensor networks consisting of input variables, required output, objectives and constraints. A list of constraints is also presented to give an overview of the different constraints considered when formulating optimization problems in wireless sensor networks. Given this article's multi-faceted coverage of multi-objective optimization, it should open up new avenues of research in multi-objective optimization for wireless sensor networks. PMID:26205271
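
    A generic building block behind such multi-objective formulations is the nondominated (Pareto) filter, which keeps only the tradeoff solutions the decision maker chooses among; the (energy, latency) objective pairs below are hypothetical, and both objectives are taken as minimized:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of candidate solutions."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (energy, latency) tradeoffs for four candidate deployments.
candidates = [(3, 9), (5, 5), (9, 2), (6, 6)]
front = pareto_front(candidates)   # (6, 6) is dominated by (5, 5)
```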

  19. Toward Optimal Transport Networks

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Kincaid, Rex K.; Vargo, Erik P.

    2008-01-01

    Strictly evolutionary approaches to improving the air transport system, a highly complex network of interacting systems, no longer suffice in the face of demand that is projected to double or triple in the near future. Thus evolutionary approaches should be augmented with active design methods. The ability to actively design, optimize and control a system presupposes the existence of predictive modeling and reasonably well-defined functional dependences between the controllable variables of the system and the objective and constraint functions for optimization. Following recent advances in the study of the effects of network topology on dynamics, we investigate the performance of dynamic processes on transport networks as a function of the first nontrivial eigenvalue of the network's Laplacian, which, in turn, is a function of the network's connectivity and modularity. The last two characteristics can be controlled and tuned via optimization. We consider design optimization problem formulations. We have developed a flexible simulation of network topology coupled with flows on the network for use as a platform for computational experiments.
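
    The first nontrivial Laplacian eigenvalue mentioned above (the algebraic connectivity) can be computed directly from an adjacency matrix; a minimal sketch:

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A,
    the quantity the abstract ties to dynamic performance."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A      # degree matrix minus adjacency
    return np.sort(np.linalg.eigvalsh(L))[1]

# A path on 4 nodes vs. the complete graph on 4 nodes: adding links
# (raising connectivity) raises the eigenvalue.
path = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
complete = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
```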

  20. Dynamic mobility applications policy analysis : policy and institutional issues for intelligent network flow optimization (INFLO).

    DOT National Transportation Integrated Search

    2014-12-01

    The report documents policy considerations for the Intelligent Network Flow Optimization (INFLO) connected vehicle applications bundle. INFLO aims to optimize network flow on freeways and arterials by informing motorists of existing and impendi...

  1. Statistical approaches used to assess and redesign surface water-quality-monitoring networks.

    PubMed

    Khalil, B; Ouarda, T B M J

    2009-11-01

    An up-to-date review of the statistical approaches utilized for the assessment and redesign of surface water quality monitoring (WQM) networks is presented. The main technical aspects of network design are covered in four sections, addressing monitoring objectives, water quality variables, sampling frequency and spatial distribution of sampling locations. This paper discusses various monitoring objectives and related procedures used for the assessment and redesign of long-term surface WQM networks. The appropriateness of each approach for the design, contraction or expansion of monitoring networks is also discussed. For each statistical approach, its advantages and disadvantages are examined from a network design perspective. Possible methods to overcome disadvantages and deficiencies in the statistical approaches that are currently in use are recommended.
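
    One of the simplest statistical criteria used for network contraction is flagging stations whose records are nearly reproducible from other stations; a toy sketch based on pairwise correlation (the 0.95 threshold is an illustrative assumption, not a value from the review):

```python
import numpy as np

def redundant_stations(records, threshold=0.95):
    """Flag stations whose series correlates above `threshold` with an
    earlier station -- one simple contraction criterion among many."""
    corr = np.corrcoef(records)          # stations x stations
    n = len(records)
    return [j for j in range(n)
            if any(corr[i, j] > threshold for i in range(j))]

# Three synthetic stations: station 2 is nearly a copy of station 0.
rng = np.random.default_rng(1)
base = rng.standard_normal(200)
records = np.array([base,
                    rng.standard_normal(200),
                    base + 0.01 * rng.standard_normal(200)])
```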

  2. Optimal Time-Resource Allocation for Energy-Efficient Physical Activity Detection

    PubMed Central

    Thatte, Gautam; Li, Ming; Lee, Sangwon; Emken, B. Adar; Annavaram, Murali; Narayanan, Shrikanth; Spruijt-Metz, Donna; Mitra, Urbashi

    2011-01-01

    The optimal allocation of samples for physical activity detection in a wireless body area network for health-monitoring is considered. The number of biometric samples collected at the mobile device fusion center, from both device-internal and external Bluetooth heterogeneous sensors, is optimized to minimize the transmission power for a fixed number of samples, and to meet a performance requirement defined using the probability of misclassification between multiple hypotheses. A filter-based feature selection method determines an optimal feature set for classification, and a correlated Gaussian model is considered. Using experimental data from overweight adolescent subjects, it is found that allocating a greater proportion of samples to sensors which better discriminate between certain activity levels can result in either a lower probability of error or energy-savings ranging from 18% to 22%, in comparison to equal allocation of samples. The current activity of the subjects and the performance requirements do not significantly affect the optimal allocation, but employing personalized models results in improved energy-efficiency. As the number of samples is an integer, an exhaustive search to determine the optimal allocation is typical, but computationally expensive. To this end, an alternate, continuous-valued vector optimization is derived which yields approximately optimal allocations and can be implemented on the mobile fusion center due to its significantly lower complexity. PMID:21796237

  3. Serial Network Flow Monitor

    NASA Technical Reports Server (NTRS)

    Robinson, Julie A.; Tate-Brown, Judy M.

    2009-01-01

    Using a commercial software CD and minimal up-mass, SNFM monitors the Payload local area network (LAN) to analyze and troubleshoot LAN data traffic. Validating LAN traffic models may allow for faster and more reliable computer networks to sustain systems and science on future space missions. Research Summary: This experiment studies the function of the computer network onboard the ISS. On-orbit packet statistics are captured and used to validate ground based medium rate data link models and enhance the way that the local area network (LAN) is monitored. This information will allow monitoring and improvement in the data transfer capabilities of on-orbit computer networks. The Serial Network Flow Monitor (SNFM) experiment attempts to characterize the network equivalent of traffic jams on board ISS. The SNFM team is able to specifically target historical problem areas including the SAMS (Space Acceleration Measurement System) communication issues, data transmissions from the ISS to the ground teams, and multiple users on the network at the same time. By looking at how various users interact with each other on the network, conflicts can be identified and work can begin on solutions. SNFM is comprised of a commercial off the shelf software package that monitors packet traffic through the payload Ethernet LANs (local area networks) on board ISS.

  4. Deployment-based lifetime optimization for linear wireless sensor networks considering both retransmission and discrete power control.

    PubMed

    Li, Ruiying; Ma, Wenting; Huang, Ning; Kang, Rui

    2017-01-01

    A sophisticated node deployment method can efficiently reduce the energy consumption of a Wireless Sensor Network (WSN) and prolong the corresponding network lifetime. Although many node-deployment-based lifetime optimization methods for WSNs have been proposed, previous studies often neglect the retransmission mechanism and assume a continuous rather than discrete power control strategy, even though both are widely used in practice and strongly affect network energy consumption. In this paper, retransmission and discrete power control are considered together, and a more realistic energy-consumption-based network lifetime model for linear WSNs is provided. Using this model, we then propose a generic deployment-based optimization model that maximizes network lifetime under coverage, connectivity and transmission rate success constraints. The more accurate lifetime evaluation leads to a longer optimal network lifetime in the realistic situation. To illustrate the effectiveness of our method, both one-tiered and two-tiered, uniformly and non-uniformly distributed linear WSNs are optimized in our case studies, and comparisons between our optimal results and those based on relatively inaccurate lifetime evaluation show the advantage of our method when investigating WSN lifetime optimization problems.
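
    A minimal sketch of how retransmission enters an energy-based lifetime estimate, using a truncated-geometric attempt count; this is a simplified stand-in under assumed parameters, not the paper's actual lifetime model:

```python
def expected_attempts(p_success, max_tries):
    """Expected transmission attempts per packet when each attempt
    succeeds with probability p_success and the node gives up after
    max_tries attempts (truncated geometric distribution)."""
    q = 1.0 - p_success
    # sum over success on attempt k, plus the all-failures case
    return (sum(k * p_success * q ** (k - 1) for k in range(1, max_tries + 1))
            + max_tries * q ** max_tries)

def node_lifetime(energy_j, e_tx_j, p_success, max_tries, pkts_per_day):
    """Days until battery depletion, charging each packet for its
    retransmissions; e_tx_j is the (assumed) energy per attempt."""
    per_packet = expected_attempts(p_success, max_tries) * e_tx_j
    return energy_j / (per_packet * pkts_per_day)
```

A lossier link (lower p_success) raises the expected attempt count and shortens the predicted lifetime, which is exactly the effect the abstract says is lost when retransmission is neglected.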

  5. Application of Frequency of Detection Methods in Design and Optimization of the INL Site Ambient Air Monitoring Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rood, Arthur S.; Sondrup, A. Jeffrey

    This report presents an evaluation of a hypothetical INL Site monitoring network and the existing INL air monitoring network using frequency of detection methods. The hypothetical network was designed to address the requirement in 40 CFR Part 61, Subpart H (2006) that “emissions of radionuclides to ambient air from U.S. DOE facilities shall not exceed those amounts that would cause any member of the public to receive in any year an effective dose equivalent exceeding 10 mrem/year.” To meet the requirement for monitoring only, “radionuclide releases that would result in an effective dose of 10% of the standard shall be readily detectable and distinguishable from background.” Thus, the hypothetical network consists of air samplers placed at residence locations that surround INL and at other locations where onsite livestock grazing takes place. Two exposure scenarios were used in this evaluation: a resident scenario and a shepherd/rancher scenario. The resident was assumed to be continuously present at their residence, while the shepherd/rancher was assumed to be present 24 hours a day at a fixed location on the grazing allotment. Important radionuclides were identified from annual INL radionuclide National Emission Standards for Hazardous Pollutants reports. Important radionuclides were defined as those that potentially contribute 1% or greater to the annual total dose at the radionuclide National Emission Standards for Hazardous Pollutants maximally exposed individual location and include H-3, Am-241, Pu-238, Pu-239, Cs-137, Sr-90, and I-131. For this evaluation, the network performance objective was set at achieving a frequency of detection greater than or equal to 95%. Results indicated that the hypothetical network for the resident scenario met all performance objectives for H-3 and I-131 and most performance objectives for Cs-137 and Sr-90. However, all actinides failed to meet the performance objectives for most sources.
    The shepherd/rancher scenario showed that air samplers placed around the facilities every 22.5 degrees were very effective in detecting releases, but this arrangement is not practical or cost effective. However, it was shown that a few air samplers placed in the prevailing wind direction around each facility could achieve the performance objective of a frequency of detection greater than or equal to 95% for the shepherd/rancher scenario. The results also indicate some of the current sampler locations have little or no impact on the network frequency of detection and could be removed from the network with no appreciable deterioration of performance. Results show that with some slight modifications to the existing network (i.e., additional samplers added north and south of the Materials and Fuels Complex and ineffective samplers removed), the network would achieve performance objectives for all sources for both the resident and shepherd/rancher scenarios.

  6. Description of real-time Ada software implementation of a power system monitor for the Space Station Freedom PMAD DC testbed

    NASA Technical Reports Server (NTRS)

    Ludwig, Kimberly; Mackin, Michael; Wright, Theodore

    1991-01-01

    The Ada language software development to perform the electrical system monitoring functions for the NASA Lewis Research Center's Power Management and Distribution (PMAD) DC testbed is described. The results of the effort to implement this monitor are presented. The PMAD DC testbed is a reduced-scale prototype of the electrical power system to be used in the Space Station Freedom. The power is controlled by smart switches known as power control components (or switchgear). The power control components are currently coordinated by five Compaq 382/20e computers connected through an 802.4 local area network. One of these computers is designated as the control node with the other four acting as subsidiary controllers. The subsidiary controllers are connected to the power control components with a Mil-Std-1553 network. An operator interface is supplied by adding a sixth computer. The power system monitor algorithm is comprised of several functions including: periodic data acquisition, data smoothing, system performance analysis, and status reporting. Data is collected from the switchgear sensors every 100 milliseconds, then passed through a 2 Hz digital filter. System performance analysis includes power interruption and overcurrent detection. The reporting mechanism notifies an operator of any abnormalities in the system. Once per second, the system monitor provides data to the control node for further processing, such as state estimation. The system monitor required a hardware time interrupt to activate the data acquisition function. The execution time of the code was optimized using an assembly language routine. The routine allows direct vectoring of the processor to Ada language procedures that perform periodic control activities. A summary of the advantages and side effects of this technique is discussed.
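
    The 2 Hz smoothing of 100-millisecond samples (a 10 Hz sample rate) can be illustrated with a first-order IIR low-pass filter; the abstract does not specify the filter design, so this single-pole form is only an assumed realization:

```python
import math

def lowpass(samples, fs=10.0, fc=2.0):
    """First-order IIR low-pass: y[n] = y[n-1] + a*(x[n] - y[n-1]),
    with smoothing factor a derived from cutoff fc at sample rate fs."""
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    out, y = [], 0.0
    for x in samples:
        y += a * (x - y)     # move part-way toward the new sample
        out.append(y)
    return out
```

A constant input settles to its true value, while a signal alternating at the Nyquist rate is attenuated, which is the smoothing behavior the monitor relies on before performance analysis.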

  7. Optimizing fixed observational assets in a coastal observatory

    NASA Astrophysics Data System (ADS)

    Frolov, Sergey; Baptista, António; Wilkin, Michael

    2008-11-01

    Proliferation of coastal observatories necessitates an objective approach to managing observational assets. In this article, we used our experience in the coastal observatory for the Columbia River estuary and plume to identify and address common problems in managing fixed observational assets, such as salinity, temperature, and water level sensors attached to pilings and moorings. Specifically, we addressed the following problems: assessing the quality of an existing array, adding stations to an existing array, removing stations from an existing array, validating an array design, and targeting an array toward data assimilation or monitoring. Our analysis was based on a combination of methods from the oceanographic and statistical literature, mainly on the statistical machinery of the best linear unbiased estimator. The key information required for our analysis was the covariance structure for a field of interest, which was computed from the output of assimilated and non-assimilated models of the Columbia River estuary and plume. The network optimization experiments in the Columbia River estuary and plume proved to be successful, largely withstanding the scrutiny of sensitivity and validation studies, and hence providing valuable insight into optimization and operation of the existing observational network. Our success in the Columbia River estuary and plume suggests that algorithms for optimal placement of sensors are reaching maturity and are likely to play a significant role in the design of emerging ocean observatories, such as the United States' Ocean Observatories Initiative (OOI) and Integrated Ocean Observing System (IOOS) observatories, and smaller regional observatories.
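
    The best-linear-unbiased-estimator machinery underlying such array assessment can be sketched for a zero-mean field with a known covariance: the posterior error variance at a target point measures how well the array pins the field down there, and comparing it across candidate layouts ranks them. The exponential covariance, positions, and length scale below are illustrative assumptions:

```python
import numpy as np

def blue_weights(C_ss, c_st):
    """BLUE weights for predicting a zero-mean field value at a target
    point, given sensor-sensor covariance C_ss and sensor-target
    covariance c_st."""
    return np.linalg.solve(C_ss, c_st)

def expected_error_var(C_ss, c_st, var_t):
    """Posterior error variance at the target: smaller means the array
    constrains the field there better."""
    return var_t - c_st @ blue_weights(C_ss, c_st)

# Illustrative exponential covariance over 1-D sensor positions.
def cov(x1, x2, sigma2=1.0, length=2.0):
    return sigma2 * np.exp(-abs(x1 - x2) / length)

sensors = [0.0, 1.0, 5.0]
target = 0.5
C_ss = np.array([[cov(a, b) for b in sensors] for a in sensors])
c_st = np.array([cov(a, target) for a in sensors])
err = expected_error_var(C_ss, c_st, cov(target, target))
```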

  8. Models and algorithm of optimization launch and deployment of virtual network functions in the virtual data center

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2017-10-01

    The goal of our investigation is the optimization of network operation in a virtual data center. The advantage of modern infrastructure virtualization lies in the possibility of using software-defined networks. However, existing algorithmic optimization solutions do not take into account the specific features of working with multiple classes of virtual network functions. The current paper describes models characterizing the basic structures of the objects of a virtual data center, including: a level-distribution model of the software-defined infrastructure of the virtual data center, a generalized model of a virtual network function, and a neural network model for the identification of virtual network functions. We also developed an efficient algorithm for optimizing the containerization of virtual network functions in the virtual data center, and we propose an efficient algorithm for placing virtual network functions. In our investigation we also generalize the well-known heuristic and deterministic Karmarkar-Karp algorithms.
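
    The Karmarkar-Karp largest-differencing heuristic that the authors generalize can be sketched in its basic two-way form, read here as balancing virtual-network-function loads across two hosts; the paper's actual generalization is not reproduced:

```python
import heapq

def karmarkar_karp(numbers):
    """Largest-differencing heuristic: repeatedly replace the two largest
    values by their difference; the final value is the achieved
    difference between the two partition sums."""
    heap = [-n for n in numbers]     # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]
```

On the classic example [8, 7, 6, 5, 4] the heuristic achieves a difference of 2, while the optimum ({8, 7} vs. {6, 5, 4}) is 0, which is the usual heuristic-vs-exact tradeoff.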

  9. Smart-Grid Backbone Network Real-Time Delay Reduction via Integer Programming.

    PubMed

    Pagadrai, Sasikanth; Yilmaz, Muhittin; Valluri, Pratyush

    2016-08-01

    This research investigates an optimal delay-based virtual topology design using integer linear programming (ILP), which is applied to current backbone networks such as smart-grid real-time communication systems. A network traffic matrix is applied and the corresponding virtual topology problem is solved using ILP formulations that include a network delay-dependent objective function and lightpath routing, wavelength assignment, wavelength continuity, flow routing, and traffic loss constraints. The proposed optimization approach provides an efficient deterministic integration of intelligent sensing and decision making, and network learning features for superior smart grid operations by adaptively responding to the time-varying network traffic data as well as operational constraints to maintain optimal virtual topologies. A representative optical backbone network has been utilized to demonstrate the proposed optimization framework, whose simulation results indicate that superior smart-grid network performance can be achieved using commercial networks and integer programming.

  10. Optimization of cascading failure on complex network based on NNIA

    NASA Astrophysics Data System (ADS)

    Zhu, Qian; Zhu, Zhiliang; Qi, Yi; Yu, Hai; Xu, Yanjie

    2018-07-01

    Recently, the robustness of networks under cascading failure has attracted extensive attention. Different from previous studies, we concentrate on how to improve the robustness of networks from the perspective of intelligent optimization. We establish two multi-objective optimization models that comprehensively consider the operational cost of the edges in the networks and the robustness of the networks. The NNIA (Non-dominated Neighbor Immune Algorithm) is applied to solve the optimization models. We performed simulations on the Barabási-Albert (BA) network and the Erdős-Rényi (ER) network. In the solutions, we find the edges that facilitate the propagation of cascading failure and the edges that suppress it. Based on these findings, we can take optimal protection measures to weaken the damage caused by cascading failures. We also consider the practical feasibility of the edges' operational costs, so that people can make a more practical choice based on operational cost. Our work will be helpful in the design of highly robust networks and in improving the robustness of existing networks in the future.

  11. Optimization of neural network architecture using genetic programming improves detection and modeling of gene-gene interactions in studies of human diseases

    PubMed Central

    Ritchie, Marylyn D; White, Bill C; Parker, Joel S; Hahn, Lance W; Moore, Jason H

    2003-01-01

    Background Appropriate definition of neural network architecture prior to data analysis is crucial for successful data mining. This can be challenging when the underlying model of the data is unknown. The goal of this study was to determine whether optimizing neural network architecture using genetic programming as a machine learning strategy would improve the ability of neural networks to model and detect nonlinear interactions among genes in studies of common human diseases. Results Using simulated data, we show that a genetic programming optimized neural network approach is able to model gene-gene interactions as well as a traditional back propagation neural network. Furthermore, the genetic programming optimized neural network is better than the traditional back propagation neural network approach in terms of predictive ability and power to detect gene-gene interactions when non-functional polymorphisms are present. Conclusion This study suggests that a machine learning strategy for optimizing neural network architecture may be preferable to traditional trial-and-error approaches for the identification and characterization of gene-gene interactions in common, complex human diseases. PMID:12846935

  12. Water quality monitoring for high-priority water bodies in the Sonoran Desert network

    Treesearch

    Terry W. Sprouse; Robert M. Emanuel; Sara A. Strorrer

    2005-01-01

    This paper describes a network monitoring program for “high priority” water bodies in the Sonoran Desert Network of the National Park Service. Protocols were developed for monitoring selected waters for ten of the eleven parks in the Network. Park and network staff assisted in identifying potential locations of testing sites, local priorities, and how water quality...

  13. Network Monitoring and Fault Detection on the University of Illinois at Urbana-Champaign Campus Computer Network.

    ERIC Educational Resources Information Center

    Sng, Dennis Cheng-Hong

    The University of Illinois at Urbana-Champaign (UIUC) has a large campus computer network serving a community of about 20,000 users. With such a large network, it is inevitable that there are a wide variety of technologies co-existing in a multi-vendor environment. Effective network monitoring tools can help monitor traffic and link usage, as well…

  14. Analysis backpropagation methods with neural network for prediction of children's ability in psychomotoric

    NASA Astrophysics Data System (ADS)

    Izhari, F.; Dhany, H. W.; Zarlis, M.; Sutarman

    2018-03-01

    Ages 4-6 years are a good window for optimizing aspects of development, in particular psychomotor development. The psychomotor domain is broader and more difficult to monitor, but it is meaningful for a child's life because it directly affects behavior and deeds. This raises the problem of predicting a child's ability level on the psychomotor aspect. This analysis uses the backpropagation method with an artificial neural network to predict children's psychomotor abilities, generating predictions with a mean squared error (MSE) of 0.001 at the end of training. Of the children aged 4-6 years, 30% have a good level of psychomotor ability; the others are rated excellent, less good, or good enough.

  15. Clustering ENTLN sferics to improve TGF temporal analysis

    NASA Astrophysics Data System (ADS)

    Pradhan, E.; Briggs, M. S.; Stanbro, M.; Cramer, E.; Heckman, S.; Roberts, O.

    2017-12-01

    Using TGFs detected with the Fermi Gamma-ray Burst Monitor (GBM) and simultaneous radio sferics detected by the Earth Networks Total Lightning Network (ENTLN), we establish a temporal correlation between them. The first step is to find ENTLN strokes that are closely associated with GBM TGFs. We then identify all the related strokes in the lightning flash to which the TGF-associated stroke belongs. After trying several algorithms, we found that the DBSCAN clustering algorithm was best for clustering related ENTLN strokes into flashes. The operation of DBSCAN was optimized using a single separation measure that combined time and distance separation. Previous analysis found that these strokes show three timescales with respect to the gamma-ray time. We will use the improved identification of flashes to research this.
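
    DBSCAN clustering over a single combined time-distance separation measure can be sketched from scratch; the weighting (1 s of separation counted like 10 km), the eps value, and the stroke coordinates below are illustrative assumptions, not the study's tuned values:

```python
def dbscan(points, sep, eps, min_pts=2):
    """Minimal DBSCAN returning a cluster id per point (-1 for noise).
    `sep` is the single separation measure combining time and distance."""
    n = len(points)
    labels = [None] * n
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(n) if sep(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1                       # provisional noise
            continue
        cluster += 1                             # i is a core point
        labels[i] = cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster              # noise becomes border
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = [k for k in range(n) if sep(points[j], points[k]) <= eps]
            if len(jn) >= min_pts:               # expand from core points
                queue.extend(k for k in jn if labels[k] is None)
    return labels

# Strokes as (time_s, distance_km); one illustrative combined measure
# that weights 1 s of separation like 10 km (squared for convenience).
def sep(p, q):
    return ((p[0] - q[0]) * 10.0) ** 2 + (p[1] - q[1]) ** 2

strokes = [(0.0, 0.0), (0.1, 1.0), (0.2, 2.0), (5.0, 40.0), (5.1, 41.0)]
labels = dbscan(strokes, sep, eps=25.0)
```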

  16. A Wearable Respiratory Biofeedback System Based on Generalized Body Sensor Network

    PubMed Central

    Liu, Guan-Zheng; Huang, Bang-Yu

    2011-01-01

    Wearable medical devices have enabled unobtrusive monitoring of vital signs and emerging biofeedback services in a pervasive manner. This article describes a wearable respiratory biofeedback system based on a generalized body sensor network (BSN) platform. The compact BSN platform was tailored for the strong requirements of overall system optimizations. A waist-worn biofeedback device was designed using the BSN. Extensive bench tests have shown that the generalized BSN worked as intended. In-situ experiments with 22 subjects indicated that the biofeedback device was discreet, easy to wear, and capable of offering wearable respiratory trainings. Pilot studies on wearable training patterns and resultant heart rate variability suggested that paced respirations at abdominal level and with identical inhaling/exhaling ratio were more appropriate for decreasing sympathetic arousal and increasing parasympathetic activities. PMID:21545293

  17. Network anomaly detection system with optimized DS evidence theory.

    PubMed

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and without considering how complicated and varied networks are. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Across four kinds of experiments, we find that our network anomaly detection model has a better detection rate, and that the RBPA and ODS optimization methods can improve system performance significantly.
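
    Dempster's rule of combination, which ODS builds on, can be sketched for two sensors; the hypotheses and mass values are hypothetical, and the per-sensor accuracy weights the paper adds would rescale each BPA before this combination step:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two basic probability
    assignments whose focal elements are frozensets of hypotheses."""
    raw = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + x * y
        else:
            conflict += x * y            # mass assigned to disagreement
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in raw.items()}

# Two sensors over hypotheses {attack, normal}; theta is "don't know".
A, N = frozenset({"attack"}), frozenset({"normal"})
theta = A | N
m1 = {A: 0.6, theta: 0.4}
m2 = {A: 0.5, N: 0.2, theta: 0.3}
m = combine(m1, m2)
```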

  18. Georgia's Stream-Water-Quality Monitoring Network, 2006

    USGS Publications Warehouse

    Nobles, Patricia L.; ,

    2006-01-01

    The USGS stream-water-quality monitoring network for Georgia is an aggregation of smaller networks and individual monitoring stations that have been established in cooperation with Federal, State, and local agencies. These networks collectively provide data from 130 sites, 62 of which are monitored continuously in real time using specialized equipment that transmits these data via satellite to a centralized location for processing and storage. These data are made available on the Web in near real time at http://waterdata.usgs.gov/ga/nwis/. Ninety-eight stations are sampled periodically for a more extensive suite of chemical and biological constituents that require laboratory analysis. Both the continuous and the periodic water-quality data are archived and maintained in the USGS National Water Information System and are available to cooperators, water-resource managers, and the public. The map at right shows the USGS stream-water-quality monitoring network for Georgia and major watersheds.

  19. Bio-Mimic Optimization Strategies in Wireless Sensor Networks: A Survey

    PubMed Central

    Adnan, Md. Akhtaruzzaman; Razzaque, Mohammd Abdur; Ahmed, Ishtiaque; Isnin, Ismail Fauzi

    2014-01-01

    For the past 20 years, many authors have focused their investigations on wireless sensor networks. Various issues related to wireless sensor networks, such as energy minimization (optimization), compression schemes, self-organizing network algorithms, routing protocols, quality of service management, security, and energy harvesting, have been extensively explored. The three most important issues among these are energy efficiency, quality of service and security management. To get the best possible results on one or more of these issues in wireless sensor networks, optimization is necessary. Furthermore, in a number of applications (e.g., body area sensor networks, vehicular ad hoc networks) these issues might conflict and require a trade-off amongst them. Due to the high energy consumption and data processing requirements, the use of classical algorithms has historically been disregarded. In this context, contemporary researchers have started using bio-mimetic strategy-based optimization techniques in the field of wireless sensor networks. These techniques are diverse and involve many different optimization algorithms. As far as we know, most existing works tend to focus on optimizing only one of the three issues mentioned above. It is high time that these individual efforts are put into perspective and a more holistic view is taken. In this paper we take a step in that direction by presenting a survey of the literature in the area of wireless sensor network optimization, concentrating especially on the three most widely used bio-mimetic algorithms, namely particle swarm optimization, ant colony optimization and the genetic algorithm. In addition, to stimulate new research and development interests in this field, open research issues, challenges and future research directions are highlighted. PMID:24368702

  20. COMPARISON OF DATA FROM THE STN AND IMPROVE NETWORKS

    EPA Science Inventory

    Two national chemical speciation-monitoring networks operate currently within the United States. The Interagency Monitoring of Protected Visual Environments (IMPROVE) monitoring network operates primarily in rural areas collecting aerosol and optical data to better understand th...

  1. The new DMT SAFEGUARD low-cost GNSS measuring system and its application in the field of geotechnical deformation and movement monitoring

    NASA Astrophysics Data System (ADS)

    Schröder, Daniel

    2017-04-01

    In recent years, awareness of geodetic measurement systems and their application in monitoring projects has clearly increased. With geodetic sensors it is possible to detect safety-related changes at monitored objects with high temporal density, high accuracy and in a very reliable manner. High-quality acquisition, processing and storage of monitoring data, as well as professional on-site implementation, are the most important requirements and challenges for contemporary systems in civil engineering, mining, and oil and gas production. Monitoring measures provide important input for early warning, alarm, protection and verification in potentially hazardous environments, and therefore have a significant influence on the risk management applied to projects. The implementation has to follow an optimization process incorporating the necessary accuracy, reliability and economic efficiency. From the economic point of view, the cost per observation point is crucial for most monitoring projects: a classical high-end GNSS station with a geodetic dual-frequency receiver costs in the range of several tens of thousands of euros. Large monitoring networks with a high number of simultaneously observed points are therefore very expensive and may have to be cut back, substituted by compromise methods or abandoned entirely. Further development in the area of GNSS receivers could reduce this disadvantage. Within the last few years, single-frequency receivers that record L1 signals of GPS/GLONASS and offer sub-centimeter positioning accuracy have increasingly come onto the market. The accuracy of GNSS measurements depends on many factors, on the hardware itself as well as on external influences related to the measurement principles. The external influences can be strongly reduced or eliminated by appropriate measuring and processing methods.
    For a reliable monitoring system it is necessary that the results are comparable and consistent from epoch to epoch. Based on these requirements, DMT has developed the new DMT SAFEGUARD GNSS. In this article the latest developments in the field of low-cost GNSS are shown through different examples from industry and public authorities. A detailed accuracy study demonstrates the applicability of the DMT SAFEGUARD GNSS system. The study shows that coordinate shifts at the sub-centimeter level can be detected by using suitable data-processing approaches and permanent network solutions. In addition to the DMT SAFEGUARD GNSS system, this article illustrates its combination with further relevant sensors into integrated multi-sensor networks. Such networks include geodetic, geophysical and geotechnical data, video, audio, etc. For the central integration of all sensor types DMT has developed a web-based monitoring system, DMT SAFEGUARD, which offers individual customizing, sophisticated analysis tools and comprehensive reporting options.

  2. Continuous Seismic Threshold Monitoring

    DTIC Science & Technology

    1992-05-31

    Continuous threshold monitoring is a technique for using a seismic network to monitor a geographical area continuously in time. The method provides...area. Two approaches are presented. Site-specific monitoring: By focusing a seismic network on a specific target site, continuous threshold monitoring...recorded events at the site. We define the threshold trace for the network as the continuous time trace of computed upper magnitude limits of seismic

  3. Designing Industrial Networks Using Ecological Food Web Metrics.

    PubMed

    Layton, Astrid; Bras, Bert; Weissburg, Marc

    2016-10-18

    Biologically Inspired Design (biomimicry) and Industrial Ecology both look to natural systems to enhance the sustainability and performance of engineered products, systems and industries. Bioinspired design (BID) has traditionally focused on the unit-operation and single-product level. In contrast, this paper describes how principles of network organization derived from the analysis of ecosystem properties can be applied to industrial system networks. Specifically, this paper examines the applicability of particular food web matrix properties as design rules for economically and biologically sustainable industrial networks, using an optimization model developed for a carpet recycling network. Carpet recycling network designs based on traditional cost- and emissions-based optimization are compared to designs obtained using optimizations based solely on ecological food web metrics. The analysis suggests that networks optimized using food web metrics were also superior from a traditional cost and emissions perspective; correlations between optimization using ecological metrics and traditional optimization generally ranged from 0.70 to 0.96, with flow-based metrics being superior to structural parameters. Four structural food web parameters together provided correlations nearly the same as those obtained using all structural parameters, but individual structural parameters provided much less satisfactory correlations. The analysis indicates that bioinspired design principles from ecosystems can lead to both environmentally and economically sustainable industrial resource networks, and represent guidelines for designing sustainable industry networks.

  4. Neural network for nonsmooth pseudoconvex optimization with general convex constraints.

    PubMed

    Bian, Wei; Ma, Litao; Qin, Sitian; Xue, Xiaoping

    2018-05-01

    In this paper, a one-layer recurrent neural network is proposed for solving a class of nonsmooth, pseudoconvex optimization problems with general convex constraints. Based on the smoothing method, we construct a new regularization function, which does not depend on any information of the feasible region. Thanks to the special structure of the regularization function, we prove the global existence, uniqueness and "slow solution" character of the state of the proposed neural network. Moreover, the state solution of the proposed network is proved to be convergent to the feasible region in finite time and to the optimal solution set of the related optimization problem subsequently. In particular, the convergence of the state to an exact optimal solution is also considered in this paper. Numerical examples with simulation results are given to show the efficiency and good characteristics of the proposed network. In addition, some preliminary theoretical analysis and application of the proposed network for a wider class of dynamic portfolio optimization are included. Copyright © 2018 Elsevier Ltd. All rights reserved.
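
    The network above is a continuous-time dynamical system whose state converges first to the feasible region and then to an optimizer. The paper's smoothing construction for nonsmooth pseudoconvex objectives is not reproduced here; the sketch below uses simple penalty-gradient dynamics on a hypothetical smooth convex toy problem (minimize (x - 2)^2 subject to x <= 1) to illustrate the neurodynamic idea. All parameter values are illustrative.

```python
def grad_f(x):
    """Gradient of the toy objective f(x) = (x - 2)**2."""
    return 2.0 * (x - 2.0)

def grad_penalty(x, mu):
    """Gradient of the quadratic penalty (mu/2) * max(0, x - 1)**2
    enforcing the constraint x <= 1."""
    return mu * max(0.0, x - 1.0)

def integrate(x0, mu=200.0, dt=0.002, steps=5000):
    """Forward-Euler integration of the neurodynamic state equation
    x' = -(grad f + grad penalty)."""
    x = x0
    for _ in range(steps):
        x -= dt * (grad_f(x) + grad_penalty(x, mu))
    return x

x_star = integrate(0.0)   # settles near the constrained minimiser x = 1
```

    With a finite penalty weight mu the equilibrium sits at (4 + mu)/(2 + mu), slightly outside the constraint; the paper's regularization avoids exactly this kind of penalty-parameter tuning.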

  5. Energy-Aware Multipath Routing Scheme Based on Particle Swarm Optimization in Mobile Ad Hoc Networks

    PubMed Central

    Robinson, Y. Harold; Rajaram, M.

    2015-01-01

    Mobile ad hoc network (MANET) is a collection of autonomous mobile nodes forming an ad hoc network without fixed infrastructure. The dynamic topology of a MANET may degrade the performance of the network, and multipath selection to improve the network lifetime is a challenging task. We propose an energy-aware multipath routing scheme based on particle swarm optimization (EMPSO) that uses a continuous time recurrent neural network (CTRNN) to solve optimization problems. The CTRNN finds the optimal loop-free paths that solve the link-disjoint path problem in a MANET, and is used as an optimum path selection technique that produces a set of optimal paths between source and destination. In the CTRNN, the particle swarm optimization (PSO) method is primarily used for training the RNN. The proposed scheme uses reliability measures such as transmission cost, an energy factor, and the optimal traffic ratio between source and destination to increase routing performance. In this scheme, optimal loop-free paths can be found using PSO to seek nodes with better link quality in the route discovery phase. PSO optimizes a problem by iteratively trying to improve a candidate solution with regard to a measure of quality, and the proposed scheme discovers multiple loop-free paths using the PSO technique. PMID:26819966
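
    The PSO velocity-and-position update that the scheme relies on for training can be sketched on a toy sphere function. This is canonical PSO, not the paper's EMPSO, and all parameter values are illustrative.

```python
import random

def pso(f, dim=2, n=30, iters=150, w=0.72, c1=1.49, c2=1.49, lo=-5.0, hi=5.0):
    """Canonical particle swarm optimization minimizing f."""
    random.seed(1)
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in X]                     # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]            # global best so far
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])   # cognitive pull
                           + c2 * r2 * (gbest[d] - X[i][d]))     # social pull
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

best, val = pso(lambda x: sum(t * t for t in x))  # sphere test function
```

    In the paper's setting the "position" would encode CTRNN weights and f would score path quality; here it is simply a point in the plane converging to the origin.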

  6. The Geothermic Fatigue Hydraulic Fracturing Experiment in Äspö Hard Rock Laboratory, Sweden: New Insights Into Fracture Process through In-situ AE Monitoring

    NASA Astrophysics Data System (ADS)

    Kwiatek, G.; Plenkers, K.; Zang, A.; Stephansson, O.; Stenberg, L.

    2016-12-01

    The geothermic Fatigue Hydraulic Fracturing (FHF) in situ experiment (Nova project 54-14-1) took place in the Äspö Hard Rock Laboratory, Sweden, in a 1.8 Ga old granitic to dioritic rock mass. The experiment aims at optimizing geothermal heat exchange in crystalline rock mass by multistage hydraulic fracturing at the 10 m scale. Six fractures were driven by three different water injection schemes (continuous, cyclic and pulse pressurization) inside a 28 m long horizontal borehole at the 410 m depth level. The rock volume subject to hydraulic fracturing, monitored by three different networks with acoustic emission (AE), micro-seismicity and electromagnetic sensors, is about 30 m x 30 m x 30 m in size. The 16-channel in-situ AE monitoring network by GMuG monitored rupture generation and propagation in the frequency range 1000 Hz to 100,000 Hz, corresponding to rupture dimensions from the cm to dm scale. The in-situ AE monitoring system detected and analyzed AE activity in situ (P- and S-wave picking, localization), and the results were used to review the ongoing microfracturing activity in near real time. The network successfully recorded and localized 196 seismic events for most, but not all, hydraulic fractures. All AE events detected in situ occurred during fracturing periods. The source parameters (fracture sizes, moment magnitudes, static stress drop) of AE events framing the injection periods were calculated using combined spectral-fitting/spectral-ratio techniques. The AE activity is clustered in space and clearly outlines the fractures' locations, orientations and expansion, as well as their temporal evolution. An outward migration of AE events away from the borehole is observed, with fractures extending up to 7 m from the injection interval in the horizontal borehole. For most fractures, the fracture orientations and locations correlate roughly with the results gained by the image packer.
    Clear differences in seismic response between hydraulic fractures in different formations and injection schemes are visible, and these require further investigation. For further analysis, all AE data from the fracturing periods were recorded continuously at a 1 MHz sampling frequency per channel.

  7. Optimal resource allocation strategy for two-layer complex networks

    NASA Astrophysics Data System (ADS)

    Ma, Jinlong; Wang, Lixin; Li, Sufeng; Duan, Congwen; Liu, Yu

    2018-02-01

    We study traffic dynamics on two-layer complex networks, focusing on a delivery-capacity allocation strategy to enhance the traffic capacity measured by the critical packet-generation rate Rc. Under a limited total packet-delivering capacity, we propose a delivery-capacity allocation strategy that balances the capacities of non-hub and hub nodes to optimize the data flow. At the optimal value of the parameter αc, the maximal network capacity is reached because most of the nodes are allotted an appropriate share of the delivery capacity by the proposed allocation strategy. Our work can help network service providers design networks with optimal traffic dynamics.
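
    The critical value Rc marks the packet-generation rate at which congestion first appears. A common estimate in the traffic-dynamics literature, for uniform delivery capacity C and shortest-path routing on a single-layer network (a simplification of the paper's two-layer, α-tuned model), is Rc = C(N-1)/B_max, where B_max is the largest node betweenness. A brute-force sketch on a toy star graph:

```python
from itertools import permutations

def all_simple_paths(adj, path, t):
    """All simple paths extending `path` to t (brute force; toy graphs only)."""
    if path[-1] == t:
        yield path
        return
    for n in adj[path[-1]]:
        if n not in path:
            yield from all_simple_paths(adj, path + [n], t)

def betweenness(adj):
    """Unnormalised node betweenness via fractional shortest-path counts."""
    B = dict.fromkeys(adj, 0.0)
    for s, t in permutations(adj, 2):
        paths = list(all_simple_paths(adj, [s], t))
        if not paths:
            continue
        d = min(len(p) for p in paths)
        shortest = [p for p in paths if len(p) == d]
        for p in shortest:
            for v in p[1:-1]:                  # interior nodes only
                B[v] += 1.0 / len(shortest)
    return B

# toy star network: node 0 is the hub, 1..4 are leaves
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
B = betweenness(star)
C, N = 1.0, len(star)
rc = C * (N - 1) / max(B.values())   # congestion-onset estimate
```

    The hub carries every leaf-to-leaf path, so its betweenness dominates and caps Rc; allocating extra capacity to hubs, as the abstract proposes, raises that cap.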

  8. Sensor Network Architectures for Monitoring Underwater Pipelines

    PubMed Central

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (Radio Frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring. PMID:22346669

  9. Sensor network architectures for monitoring underwater pipelines.

    PubMed

    Mohamed, Nader; Jawhar, Imad; Al-Jaroodi, Jameela; Zhang, Liren

    2011-01-01

    This paper develops and compares different sensor network architecture designs that can be used for monitoring underwater pipeline infrastructures. These architectures are underwater wired sensor networks, underwater acoustic wireless sensor networks, RF (radio frequency) wireless sensor networks, integrated wired/acoustic wireless sensor networks, and integrated wired/RF wireless sensor networks. The paper also discusses the reliability challenges and enhancement approaches for these network architectures. The reliability evaluation, characteristics, advantages, and disadvantages among these architectures are discussed and compared. Three reliability factors are used for the discussion and comparison: the network connectivity, the continuity of power supply for the network, and the physical network security. In addition, the paper also develops and evaluates a hierarchical sensor network framework for underwater pipeline monitoring.

  10. A generalized optimization principle for asymmetric branching in fluidic networks

    PubMed Central

    Stephenson, David

    2016-01-01

    When applied to a branching network, Murray’s law states that the optimal branching of vascular networks is achieved when the cube of the parent channel radius is equal to the sum of the cubes of the daughter channel radii. It is considered integral to understanding biological networks and for the biomimetic design of artificial fluidic systems. However, despite its ubiquity, we demonstrate that Murray’s law is only optimal (i.e. maximizes flow conductance per unit volume) for symmetric branching, where the local optimization of each individual channel corresponds to the global optimum of the network as a whole. In this paper, we present a generalized law that is valid for asymmetric branching, for any cross-sectional shape, and for a range of fluidic models. We verify our analytical solutions with the numerical optimization of a bifurcating fluidic network for the examples of laminar, turbulent and non-Newtonian fluid flows. PMID:27493583
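
    The classic symmetric cube law that the paper generalizes can be checked in a few lines; the helper name is ours, and the asymmetric split below merely satisfies the classic law, which, as the abstract argues, is not globally optimal in the asymmetric case.

```python
def murray_daughter_radius(r_parent, n_daughters):
    """Classic symmetric Murray's law: r_parent**3 == n * r_daughter**3."""
    return r_parent / n_daughters ** (1.0 / 3.0)

r0 = 1.0
r_sym = murray_daughter_radius(r0, 2)   # symmetric bifurcation, ~0.794 * r0

# An asymmetric split that still satisfies the classic cube law locally;
# the paper shows this local rule is generally not the global optimum.
r_a = 0.6
r_b = (r0 ** 3 - r_a ** 3) ** (1.0 / 3.0)
```

    The cube exponent comes from minimizing viscous dissipation per unit volume for laminar flow in circular channels; the paper's generalized law replaces it for other cross-sections and fluid models.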

  11. Optimization of wireless sensor networks based on chicken swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qingxi; Zhu, Lihua

    2017-05-01

    In order to reduce the energy consumption of wireless sensor networks and prolong the network lifetime, a clustering routing protocol for wireless sensor networks based on the chicken swarm optimization algorithm is proposed. Building on the LEACH protocol, cluster formation and cluster-head selection are improved using the chicken swarm optimization algorithm, and the positions of chickens that fall into local optima are updated by Lévy flight, which enhances population diversity and ensures the global search capability of the algorithm. The new protocol avoids the premature death of intensively used nodes by balancing the load across the network nodes, improving the survival time of the wireless sensor network. Simulation experiments show that the protocol outperforms the LEACH protocol in energy consumption, and also outperforms a clustering routing protocol based on the particle swarm optimization algorithm.
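
    The Lévy-flight perturbation mentioned above is commonly generated with Mantegna's algorithm; the sketch below is that generic generator, not the protocol's actual update, and β = 1.5 is an illustrative choice.

```python
import math
import random

def levy_step(beta=1.5):
    """One heavy-tailed step via Mantegna's algorithm for Levy flights."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)      # scale of the numerator Gaussian
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

random.seed(7)
steps = [levy_step() for _ in range(1000)]
```

    A stalled node position would then be perturbed as, e.g., x_new = x + scale * levy_step(); the heavy-tailed distribution produces occasional long jumps that restore population diversity.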

  12. Optimal exponential synchronization of general chaotic delayed neural networks: an LMI approach.

    PubMed

    Liu, Meiqin

    2009-09-01

    This paper investigates the optimal exponential synchronization problem of general chaotic neural networks with or without time delays by virtue of Lyapunov-Krasovskii stability theory and the linear matrix inequality (LMI) technique. This general model, which is the interconnection of a linear delayed dynamic system and a bounded static nonlinear operator, covers several well-known neural networks, such as Hopfield neural networks, cellular neural networks (CNNs), bidirectional associative memory (BAM) networks, and recurrent multilayer perceptrons (RMLPs) with or without delays. Using the drive-response concept, time-delay feedback controllers are designed to synchronize two identical chaotic neural networks as quickly as possible. The control design equations are shown to be a generalized eigenvalue problem (GEVP) which can be easily solved by various convex optimization algorithms to determine the optimal control law and the optimal exponential synchronization rate. Detailed comparisons with existing results are made and numerical simulations are carried out to demonstrate the effectiveness of the established synchronization laws.

  13. LinkMind: link optimization in swarming mobile sensor networks.

    PubMed

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization for swarming mobile sensor networks. The new method is based on a combination of the artificial potential force, which guarantees connectivity of the mobile sensor nodes, and the max-flow min-cut theorem of graph theory, which ensures optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation.

  14. LinkMind: Link Optimization in Swarming Mobile Sensor Networks

    PubMed Central

    Ngo, Trung Dung

    2011-01-01

    A swarming mobile sensor network is comprised of a swarm of wirelessly connected mobile robots equipped with various sensors. Such a network can be applied in an uncertain environment for services such as cooperative navigation and exploration, object identification and information gathering. One of the most advantageous properties of the swarming wireless sensor network is that mobile nodes can work cooperatively to organize an ad-hoc network and optimize the network link capacity to maximize the transmission of gathered data from a source to a target. This paper describes a new method of link optimization for swarming mobile sensor networks. The new method is based on a combination of the artificial potential force, which guarantees connectivity of the mobile sensor nodes, and the max-flow min-cut theorem of graph theory, which ensures optimization of the network link capacity. The developed algorithm is demonstrated and evaluated in simulation. PMID:22164070
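
    The max-flow min-cut machinery invoked above can be made concrete with a compact Edmonds-Karp implementation (BFS augmenting paths). This is generic graph code on a hypothetical four-node capacity matrix, not the paper's link model.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths."""
    n = len(cap)
    residual = [row[:] for row in cap]
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:           # BFS for a shortest augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                    # no augmenting path: flow is maximal
            return flow
        b, v = float("inf"), t                 # bottleneck capacity on the path
        while v != s:
            b = min(b, residual[parent[v]][v])
            v = parent[v]
        v = t                                  # push b units along the path
        while v != s:
            residual[parent[v]][v] -= b
            residual[v][parent[v]] += b
            v = parent[v]
        flow += b

# nodes: 0 = source, 1, 2 = relays, 3 = sink; cap[u][v] = link capacity u -> v
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 3],
       [0, 0, 0, 0]]
value = max_flow(cap, 0, 3)
```

    By the max-flow min-cut theorem the value 5 equals the capacity of the smallest source/sink cut (here the two links leaving the source); the paper's robots reposition to enlarge exactly such a cut.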

  15. Optimization of municipal pressure pumping station layout and sewage pipe network design

    NASA Astrophysics Data System (ADS)

    Tian, Jiandong; Cheng, Jilin; Gong, Yi

    2018-03-01

    Accelerated urbanization places extraordinary demands on sewer networks; thus optimization research to improve the design of these systems has practical significance. In this article, a subsystem nonlinear programming model is developed to optimize pumping station layout and sewage pipe network design. The subsystem model is expanded into a large-scale complex nonlinear programming system model to find the minimum total annual cost of the pumping station and network of all pipe segments. A comparative analysis is conducted using the sewage network in Taizhou City, China, as an example. The proposed method demonstrated that significant cost savings could have been realized if the studied system had been optimized using the techniques described in this article. Therefore, the method has practical value for optimizing urban sewage projects and provides a reference for theoretical research on optimization of urban drainage pumping station layouts.

  16. Reconfiguration of Smart Distribution Network in the Presence of Renewable DG’s Using GWO Algorithm

    NASA Astrophysics Data System (ADS)

    Siavash, M.; Pfeifer, C.; Rahiminejad, A.; Vahidi, B.

    2017-08-01

    In this paper, the optimal reconfiguration of a smart distribution system is performed with the aim of active power loss reduction and voltage stability improvement. The distribution network is considered equipped with wind turbines and solar cells as renewable DG's (RDG's). Because of the presence of smart metering devices, the network state is known accurately at any moment. Based on the network conditions (the amount of load and the generation of the RDG's), the optimal configuration of the network is obtained. The optimization problem is solved using a recently introduced method known as the Grey Wolf Optimizer (GWO). The proposed approach is applied to the 69-bus radial test system, and the results of the GWO are compared to those of Particle Swarm Optimization (PSO) and the Genetic Algorithm (GA). The results show the effectiveness of the proposed approach and the selected optimization method.
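
    The GWO update used for the reconfiguration is not detailed in the abstract; the following is a minimal sketch of the canonical Grey Wolf Optimizer on a toy sphere function. The switch-state encoding, loss objective, and all parameter values here are illustrative assumptions, not the paper's.

```python
import random

def gwo(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0):
    """Canonical Grey Wolf Optimizer minimizing f."""
    random.seed(3)
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        ranked = sorted(wolves, key=f)
        alpha, beta, delta = ranked[0], ranked[1], ranked[2]  # three best wolves
        a = 2.0 - 2.0 * t / iters        # exploration factor decays from 2 to 0
        for i, X in enumerate(wolves):
            new = []
            for d in range(dim):
                est = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * random.random() - a
                    C = 2 * random.random()
                    D = abs(C * leader[d] - X[d])     # distance to the leader
                    est += leader[d] - A * D          # leader-guided candidate
                new.append(est / 3.0)                 # average of the three moves
            wolves[i] = new
    return min(wolves, key=f)

best = gwo(lambda x: sum(t * t for t in x))   # sphere test function
```

    For network reconfiguration a continuous wolf position would typically be mapped to a discrete set of open switches before evaluating losses; that encoding step is problem-specific and omitted here.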

  17. Joint Optimization of Receiver Placement and Illuminator Selection for a Multiband Passive Radar Network.

    PubMed

    Xie, Rui; Wan, Xianrong; Hong, Sheng; Yi, Jianxin

    2017-06-14

    The performance of a passive radar network can be greatly improved by an optimal radar network structure. Generally, radar network structure optimization consists of two aspects, namely placing receivers in suitable locations and selecting appropriate illuminators. The present study investigates the joint optimization of receiver placement and illuminator selection for a passive radar network. Firstly, the required radar cross section (RCS) for target detection is chosen as the performance metric, and the joint optimization model boils down to the partition p-center problem (PPCP). The PPCP is then solved by a proposed bisection algorithm. The key to the bisection algorithm lies in solving the partition set covering problem (PSCP), which can be solved by a hybrid algorithm developed by coupling convex optimization with a greedy dropping algorithm. In the end, the performance of the proposed algorithm is validated via numerical simulations.

  18. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks.

    PubMed

    Chang, Yuchao; Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-07-19

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute uneven network traffic load to sensor nodes, which is not optimal for improving network longevity. Differently to conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm is carried out by the following procedures. It employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a special hierarchical subset; it utilizes the combinatorial optimization theory to establish the feasible routing set for each sensor node, and takes advantage of the maximum-minimum criterion to obtain their optimal routes to the base station. Various results of simulation experiments show effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms.

  19. MAJOR MONITORING NETWORKS: A FOUNDATION TO PRESERVE, PROTECT AND RESTORE

    EPA Science Inventory

    Ideally, major human and environmental monitoring networks should provide the scientific information needed for policy and management decision-making processes. It is widely recognized that reliable...

  20. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks

    PubMed Central

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-01-01

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to how much of its sensing area is covered by neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain an approximate minimum set of nodes that retains sufficient coverage of the target region while ensuring the connectivity of the network. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network. PMID:27754405

  1. A Novel Energy Efficient Topology Control Scheme Based on a Coverage-Preserving and Sleep Scheduling Model for Sensor Networks.

    PubMed

    Shi, Binbin; Wei, Wei; Wang, Yihuai; Shu, Wanneng

    2016-10-14

    In high-density sensor networks, scheduling some sensor nodes to be in the sleep mode while other sensor nodes remain active for monitoring or forwarding packets is an effective control scheme to conserve energy. In this paper, a Coverage-Preserving Control Scheduling Scheme (CPCSS) based on a cloud model and redundancy degree in sensor networks is proposed. Firstly, the normal cloud model is adopted for calculating the similarity degree between the sensor nodes in terms of their historical data, and then all nodes in each grid of the target area can be classified into several categories. Secondly, the redundancy degree of a node is calculated according to how much of its sensing area is covered by neighboring sensors. Finally, a centralized approximation algorithm based on the partition of the target area is designed to obtain an approximate minimum set of nodes that retains sufficient coverage of the target region while ensuring the connectivity of the network. The simulation results show that the proposed CPCSS can balance the energy consumption and optimize the coverage performance of the sensor network.
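
    The final step, finding an approximate minimum set of active nodes, is a set-cover-style problem. CPCSS's own partition-based algorithm is not specified in the abstract; the classic greedy set-cover approximation below conveys the idea. Sensor names and coverage sets are made up.

```python
def greedy_cover(targets, coverage):
    """Greedy set cover: repeatedly keep the sensor that covers the most
    still-uncovered targets. Returns (active sensors, uncoverable targets)."""
    uncovered = set(targets)
    active = []
    while uncovered:
        best = max(coverage, key=lambda s: len(coverage[s] & uncovered))
        gained = coverage[best] & uncovered
        if not gained:
            break                      # remaining targets cannot be covered
        active.append(best)
        uncovered -= gained
    return active, uncovered

# hypothetical sensors -> sets of grid points each one senses
coverage = {
    "s1": {(0, 0), (0, 1), (1, 0)},
    "s2": {(1, 0), (1, 1)},
    "s3": {(0, 1), (1, 1), (2, 2)},
    "s4": {(2, 2)},
}
targets = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
active, missed = greedy_cover(targets, coverage)
```

    Sensors left out of `active` are candidates for sleep mode; a full scheme like CPCSS would additionally check network connectivity before putting them to sleep.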

  2. Estimation of the Botanical Composition of Clover-Grass Leys from RGB Images Using Data Simulation and Fully Convolutional Neural Networks

    PubMed Central

    Steen, Kim Arild; Green, Ole; Karstoft, Henrik

    2017-01-01

    Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how this knowledge can be obtained by automatically analyzing images collected in fields. A fully convolutional neural network was trained to create a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The estimated clover fractions of the dry matter from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields, which enables it to distinguish clover, grass, and weed pixels in real images. The use of simulated images for training reduces the manual labor to a few hours, compared to more than 3000 h when all the real images are annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry matter clover fraction with a standard deviation of 7.8%. PMID:29258215

  3. Multidisciplinary Design Optimization for Aeropropulsion Engines and Solid Modeling/Animation via the Integrated Force Methods

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The grant closure report is organized into the following chapters. The first chapter describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in the second chapter. Five highlights are the subject matter of chapter three. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.

  4. Monitoring groundwater: optimising networks to take account of cost effectiveness, legal requirements and enforcement realities

    NASA Astrophysics Data System (ADS)

    Allan, A.; Spray, C.

    2013-12-01

    The quality of monitoring networks and modeling in environmental regulation is increasingly important. This is particularly true with respect to groundwater management, where data may be limited, physical processes poorly understood and timescales very long. The powers of regulators may be fatally undermined by poor or non-existent networks, primarily through mismatches between the legal standards that networks must meet, actual capacity and the evidentiary standards of courts. For example, in the second and third implementation reports on the Water Framework Directive, the European Commission drew attention to gaps in the standards of mandatory monitoring networks, where the standard did not meet the reality. In that context, groundwater monitoring networks should provide a reliable picture of groundwater levels and a 'coherent and comprehensive' overview of chemical status so that anthropogenically influenced long-term upward trends in pollutant levels can be tracked. Confidence in this overview should be such that 'the uncertainty from the monitoring process should not add significantly to the uncertainty of controlling the risk', with densities being sufficient to allow assessment of the impact of abstractions and discharges on levels in groundwater bodies at risk. The fact that the legal requirements for the quality of monitoring networks are set out in very vague terms highlights the many variables that can influence the design of monitoring networks. However, the quality of a monitoring network as part of the armory of environmental regulators is potentially of crucial importance. If, as part of enforcement proceedings, a regulator takes an offender to court and relies on conclusions derived from monitoring networks, a defendant may be entitled to question those conclusions.
If the credibility, reliability or relevance of a monitoring network can be undermined, because it is too sparse, for example, this could have dramatic consequences on the ability of a regulator to ensure compliance with legal standards. On the other hand, it can be ruinously expensive to set up a monitoring network in remote areas and regulators must therefore balance the cost effectiveness of these networks against the chance that a court might question their fitness for purpose. This presentation will examine how regulators can balance legal standards for monitoring against the cost of developing and maintaining the requisite networks, while still producing observable improvements in water and ecosystem quality backed by legally enforceable sanctions for breaches. Reflecting the findings from the EU-funded GENESIS project, it will look at case law from around the world to assess how tribunals balance competing models, and the extent to which decisions may be revisited in the light of new scientific understanding. Finally, it will make recommendations to assist regulators in optimising their network designs for enforcement.

  5. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  6. Influence maximization in complex networks through optimal percolation.

    PubMed

    Morone, Flaviano; Makse, Hernán A

    2015-08-06

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.

  7. Energy latency tradeoffs for medium access and sleep scheduling in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gang, Lu

    Wireless sensor networks are expected to be used in a wide range of applications from environment monitoring to event detection. The key challenge is to provide energy efficient communication; however, latency remains an important concern for many applications that require fast response. The central thesis of this work is that energy efficient medium access and sleep scheduling mechanisms can be designed without necessarily sacrificing application-specific latency performance. We validate this thesis through results from four case studies that cover various aspects of medium access and sleep scheduling design in wireless sensor networks. Our first effort, DMAC, is an adaptive, low-latency, energy-efficient MAC for data gathering that reduces sleep latency. We propose a staggered schedule, duty-cycle adaptation, data prediction, and the use of more-to-send packets to enable seamless packet forwarding under varying traffic load and channel contention. Simulation and experimental results show significant energy savings and latency reduction while ensuring high data reliability. The second research effort, DESS, investigates the problem of designing sleep schedules in arbitrary network communication topologies to minimize the worst-case end-to-end latency (referred to as delay diameter). We develop a novel graph-theoretical formulation, derive and analyze optimal solutions for the tree and ring topologies, and present heuristics for arbitrary topologies. The third study addresses the problem of minimum latency joint scheduling and routing (MLSR). By constructing a novel delay graph, the optimal joint scheduling and routing can be found with an M node-disjoint paths algorithm under a multiple-channel model. We further extend the algorithm to handle dynamic traffic changes and topology changes. A heuristic solution is proposed for MLSR under single-channel interference.
In the fourth study, EEJSPC, we first formulate a fundamental optimization problem that provides tunable energy-latency-throughput tradeoffs with joint scheduling and power control and present both exponential and polynomial complexity solutions. Then we investigate the problem of minimizing total transmission energy while satisfying transmission requests within a latency bound, and present an iterative approach which converges rapidly to the optimal parameter settings.
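
    The staggered-schedule idea behind DMAC can be sketched numerically: nodes along a gathering tree wake in consecutive slots, so a packet ripples toward the sink instead of waiting out a full duty cycle at each hop. This is an illustrative simplification, not DMAC itself; the node names and slot length are hypothetical.

```python
def staggered_offsets(depths, slot):
    """Wake-up offset per node: a node at hop distance d from the sink
    is active in slot d*slot and forwards in the adjacent slot, so
    consecutive hops along the tree are active back to back."""
    return {node: d * slot for node, d in depths.items()}

def end_to_end_latency(depth, slot):
    # Roughly one active slot per hop toward the sink under the
    # staggered schedule, instead of one full cycle per hop.
    return depth * slot

depths = {"sink": 0, "a": 1, "b": 2, "c": 3}   # hop distance from the sink
offsets = staggered_offsets(depths, slot=10)    # slot length in ms (hypothetical)
latency_from_c = end_to_end_latency(depths["c"], slot=10)
```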

  8. Optimization of hierarchical structure and nanoscale-enabled plasmonic refraction for window electrodes in photovoltaics.

    PubMed

    Han, Bing; Peng, Qiang; Li, Ruopeng; Rong, Qikun; Ding, Yang; Akinoglu, Eser Metin; Wu, Xueyuan; Wang, Xin; Lu, Xubing; Wang, Qianming; Zhou, Guofu; Liu, Jun-Ming; Ren, Zhifeng; Giersig, Michael; Herczynski, Andrzej; Kempa, Krzysztof; Gao, Jinwei

    2016-09-26

    An ideal network window electrode for photovoltaic applications should provide optimal surface coverage, a uniform current density into and/or from a substrate, and a minimum overall resistance for a given shading ratio. Here we show that metallic networks with a quasi-fractal structure provide a near-perfect practical realization of such an ideal electrode. We find that a leaf venation network, which possesses key characteristics of the optimal structure, indeed outperforms other networks. We further show that elements of hierarchical topology, rather than details of the branching geometry, are of primary importance in optimizing the networks, and demonstrate this experimentally on five model artificial hierarchical networks of varied levels of complexity. In addition to these structural effects, networks containing nanowires are shown to acquire transparency exceeding the geometric constraint due to plasmonic refraction.

  9. Network Anomaly Detection System with Optimized DS Evidence Theory

    PubMed Central

    Liu, Yuan; Wang, Xiaofeng; Liu, Kaiyu

    2014-01-01

    Network anomaly detection has attracted increasing attention with the rapid development of computer networks. Some researchers have applied fusion methods and DS evidence theory to network anomaly detection, but with low performance, and without accounting for the complicated and varied nature of networks. To achieve a high detection rate, we present a novel network anomaly detection system with optimized Dempster-Shafer evidence theory (ODS) and a regression basic probability assignment (RBPA) function. In this model, we add a weight for each sensor to optimize DS evidence theory according to its previous prediction accuracy, and RBPA employs each sensor's regression ability to address complex networks. Across four kinds of experiments, we find that our network anomaly detection model has a better detection rate, and that the RBPA and ODS optimization methods can improve system performance significantly. PMID:25254258
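
    The weighting step can be sketched with Shafer's classical discounting followed by Dempster's combination rule over a two-hypothesis frame {normal, anomaly}. This is a generic illustration of weighted DS fusion, not the paper's exact ODS/RBPA formulation; the sensor masses and weights are hypothetical.

```python
FRAME = frozenset({"normal", "anomaly"})

def discount(bpa, w):
    """Shafer discounting: scale each focal mass by the sensor weight w
    (e.g. its past accuracy) and move the remainder to total ignorance."""
    out = {A: w * m for A, m in bpa.items() if A != FRAME}
    out[FRAME] = 1 - w + w * bpa.get(FRAME, 0.0)
    return out

def combine(m1, m2):
    """Dempster's rule: multiply masses, keep non-empty intersections,
    and renormalize by the total non-conflicting mass."""
    raw, conflict = {}, 0.0
    for A, ma in m1.items():
        for B, mb in m2.items():
            inter = A & B
            if inter:
                raw[inter] = raw.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {A: m / (1 - conflict) for A, m in raw.items()}

N, A = frozenset({"normal"}), frozenset({"anomaly"})
s1 = discount({A: 0.7, N: 0.2, FRAME: 0.1}, w=0.9)  # historically accurate sensor
s2 = discount({A: 0.5, N: 0.3, FRAME: 0.2}, w=0.5)  # less reliable sensor
fused = combine(s1, s2)
```

    The less reliable sensor's evidence is largely shifted to ignorance before fusion, so it cannot dominate the combined verdict.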

  10. Reduction of streamflow monitoring networks by a reference point approach

    NASA Astrophysics Data System (ADS)

    Cetinkaya, Cem P.; Harmancioglu, Nilgun B.

    2014-05-01

    Adoption of an integrated approach to water management strongly forces policy and decision-makers to focus on hydrometric monitoring systems as well. Existing hydrometric networks need to be assessed and revised against the requirements on water quantity data to support integrated management. One of the questions that a network assessment study should resolve is whether a current monitoring system can be consolidated in view of the increased expenditures in time, money and effort imposed on the monitoring activity. Within the last decade, governmental monitoring agencies in Turkey have foreseen an audit of all their basin networks in view of prevailing economic pressures. In particular, they question how they can decide whether monitoring should be continued or terminated at a particular site in a network. The present study was initiated to address this question by examining the applicability of a method called the “reference point approach” (RPA) for network assessment and reduction purposes. The main objective of the study is to develop an easily applicable and flexible network reduction methodology, focusing mainly on the assessment of the “performance” of existing streamflow monitoring networks in view of variable operational purposes. The methodology is applied to 13 hydrometric stations in the Gediz Basin, along the Aegean coast of Turkey. The results have shown that the simplicity of the method, in contrast to more complicated computational techniques, is an asset that facilitates the involvement of decision makers in applying the methodology, enabling a more interactive assessment procedure between the monitoring agency and the network designer. The method permits ranking of hydrometric stations with regard to multiple objectives of monitoring and the desired attributes of the basin network. Another distinctive feature of the approach is that it also assists decision making in cases with limited data and metadata.
These features of the RPA approach highlight its advantages over the existing network assessment and reduction methods.
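
    A reference-point ranking of stations can be sketched as a distance-to-ideal computation over normalized criteria. This is a generic illustration of the idea, not the paper's exact RPA procedure; the stations and criterion values below are hypothetical.

```python
import math

def normalize(values):
    # Min-max scale a benefit-type criterion to [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def rank_by_reference_point(stations, criteria):
    """Rank stations by Euclidean distance to the ideal reference point
    (the best value, 1.0 after normalization, on every criterion);
    smaller distance ranks higher."""
    cols = [normalize(col) for col in criteria]
    dists = {s: math.dist([col[i] for col in cols], [1.0] * len(cols))
             for i, s in enumerate(stations)}
    return sorted(stations, key=dists.get), dists

stations = ["S1", "S2", "S3"]
# Hypothetical benefit criteria: record length (years), basin representativeness.
criteria = [[30.0, 10.0, 20.0],
            [0.9, 0.2, 0.6]]
ranked, dists = rank_by_reference_point(stations, criteria)
```

    Stations at the bottom of such a ranking would be the candidates for discontinuation.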

  11. Node Deployment Algorithm Based on Connected Tree for Underwater Sensor Networks

    PubMed Central

    Jiang, Peng; Wang, Xingmin; Jiang, Lurong

    2015-01-01

    Designing an efficient deployment method to guarantee optimal monitoring quality is one of the key topics in underwater sensor networks. At present, a realistic approach to deployment involves adjusting the depths of nodes in the water. One of the typical algorithms used in such a process is the self-deployment depth adjustment algorithm (SDDA). This algorithm mainly focuses on maximizing network coverage by constantly adjusting node depths to reduce coverage overlaps between neighboring nodes, and thus achieves good performance. However, the connectivity of SDDA cannot be guaranteed. In this paper, we propose a depth adjustment algorithm based on a connected tree (CTDA). In CTDA, the sink node is used as the first root node to start building a connected tree, and the network is ultimately organized as a forest to maintain connectivity. Coverage overlaps between parent and child nodes are then reduced within each sub-tree to optimize coverage. A hierarchical strategy is used to adjust the distance between parent and child nodes to reduce node movement, and a silent mode is adopted to reduce communication cost. Simulations show that, compared with SDDA, CTDA achieves high connectivity over various communication ranges and numbers of nodes, and realizes coverage as high as that of SDDA over various sensing ranges and numbers of nodes but with less energy consumption. Simulations in sparse environments show that the connectivity and energy consumption of CTDA are considerably better than those of SDDA. Meanwhile, the connectivity and coverage of CTDA are close to those of the depth adjustment algorithm based on a connected dominating set (CDA), an algorithm similar to CTDA, while the energy consumption of CTDA is lower, particularly in sparse underwater environments. PMID:26184209
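
    The connected-tree construction can be sketched as a breadth-first search from the sink, attaching each node to the first in-range node already in the tree. This illustrates only the tree-building step, not the full CTDA (which also adjusts depths to trim coverage overlaps); the positions and range are hypothetical.

```python
import math
from collections import deque

def build_connected_tree(sink, nodes, pos, comm_range):
    """BFS from the sink; the returned parent map is a connected tree
    spanning every node reachable within the communication range."""
    parent, queue = {sink: None}, deque([sink])
    while queue:
        u = queue.popleft()
        for v in nodes:
            if v not in parent and math.dist(pos[u], pos[v]) <= comm_range:
                parent[v] = u
                queue.append(v)
    return parent

pos = {"sink": (0.0, 0.0), "a": (1.0, 0.0), "b": (2.0, 0.0), "c": (5.0, 0.0)}
parent = build_connected_tree("sink", ["a", "b", "c"], pos, comm_range=1.5)
```

    Node `c` is out of range of every tree member and stays unattached, which is precisely the sparse-deployment situation where connectivity-aware depth adjustment matters.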

  12. Pollution source localization in an urban water supply network based on dynamic water demand.

    PubMed

    Yan, Xuesong; Zhu, Zhixin; Li, Tian

    2017-10-27

    Urban water supply networks are susceptible to intentional or accidental chemical and biological pollution, which poses a threat to the health of consumers. In recent years, drinking-water pollution incidents have occurred frequently, seriously endangering social stability and security. Real-time monitoring of water quality can be implemented effectively by placing sensors in the water supply network. However, locating the source of pollution from the data obtained by water quality sensors is a challenging problem. The difficulty lies in the limited number of sensors, the large number of water supply network nodes, and the dynamic user demand for water, which make pollution source localization an uncertain, large-scale, and dynamic optimization problem. In this paper, we mainly study the dynamics of the pollution source localization problem. Previous studies of pollution source localization assume that hydraulic inputs (e.g., water demand of consumers) are known. However, because of the inherent variability of urban water demand, the problem is essentially a dynamic problem driven by fluctuating consumer water demand. In this paper, the water demand is considered to be stochastic in nature and is described using a Gaussian model or an autoregressive model. On this basis, an optimization algorithm based on these two dynamic water demand models is proposed to locate the pollution source. The objective of the proposed algorithm is to find the locations and concentrations of pollution sources that minimize the discrepancy between the simulated and detected values at the sensors. Simulation experiments were conducted using two urban water supply networks of different sizes, and the experimental results were compared with those of a standard genetic algorithm.
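
    The autoregressive demand model mentioned above can be sketched directly; here is an AR(1) fluctuation around a base demand. This is a generic illustration with hypothetical parameters, not the paper's calibrated model.

```python
import random

def ar1_demand(base, phi, sigma, steps, seed=0):
    """AR(1) demand: d_t = base + phi * (d_{t-1} - base) + eps_t,
    with eps_t ~ N(0, sigma^2); phi in (0, 1) gives mean-reverting
    fluctuations around the base demand."""
    rng = random.Random(seed)
    d, series = base, []
    for _ in range(steps):
        d = base + phi * (d - base) + rng.gauss(0.0, sigma)
        series.append(d)
    return series

# Hypothetical node demand: base 100 units, strong autocorrelation.
demand = ar1_demand(base=100.0, phi=0.8, sigma=2.0, steps=500)
mean_demand = sum(demand) / len(demand)
```

    In the localization loop, each candidate source would be simulated against demand series drawn from such a model rather than a single fixed demand.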

  13. Roadmap to Long-Term Monitoring Optimization

    EPA Pesticide Factsheets

    This roadmap focuses on optimization of established long-term monitoring programs for groundwater. Tools and techniques discussed concentrate on methods for optimizing the monitoring frequency and spatial (three-dimensional) distribution of wells ...

  14. Network planning under uncertainties

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2008-11-01

    One of the main focuses of network planning is the optimization of the network resources required to build a network under a certain traffic demand projection. Traditionally, the inputs to this type of network planning problem are treated as deterministic. In reality, varying traffic requirements and fluctuations in network resources can cause uncertainties in the decision models. Failure to include these uncertainties in the network design process can severely affect the feasibility and economics of the network. Therefore, it is essential to find a solution that is insensitive to the uncertain conditions during the network planning process. As early as the 1960s, a network planning problem with traffic requirements varying over time had been studied. This kind of network planning problem is still actively researched, especially for VPN network design. Another kind of network planning problem under uncertainty, studied actively in the past decade, addresses fluctuations in network resources. One such hotly pursued research topic is survivable network planning. It considers the design of a network, under the uncertainties brought by fluctuations in topology, to meet the requirement that the network remains intact up to a certain number of faults occurring anywhere in the network. Recently, the authors proposed a new planning methodology called the Generalized Survivable Network that tackles the network design problem under both varying traffic requirements and fluctuations of topology. Although all the above network planning problems handle various kinds of uncertainties, it is hard to find a generic framework, under more general uncertainty conditions, that allows a more systematic way to solve the problems. With a unified framework, the seemingly diverse models and algorithms can be intimately related, and possibly more insights and improvements can be brought out for solving the problem.
This motivates us to seek a generic framework for solving the network planning problem under uncertainties. In addition to reviewing the various network planning problems involving uncertainties, we also propose that a unified framework based on robust optimization can be used to solve a rather large segment of network planning problems under uncertainties. Robust optimization was first introduced in the operations research literature and is a framework that incorporates information about the uncertainty sets for the parameters in the optimization model. Even though robust optimization originated in tackling uncertainty in the optimization process, it can serve as a comprehensive and suitable framework for tackling generic network planning problems under uncertainties. In this paper, we begin by explaining the main ideas behind the robust optimization approach. Then we demonstrate the capabilities of the proposed framework with examples of how it can be applied to common network planning problems under uncertain environments. Next, we list some practical considerations for solving the network planning problem under uncertainties with the proposed framework. Finally, we conclude with some thoughts on future directions for applying this framework to other network planning problems.
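
    The robust-optimization idea can be made concrete with a budget-of-uncertainty capacity rule in the style of Bertsimas and Sim: provision for the nominal demands plus the Γ largest deviations. This is a toy illustration with hypothetical demand figures, not a full robust counterpart of a network design model.

```python
def robust_capacity(nominal, deviations, gamma):
    """Capacity needed to cover all nominal demands plus the `gamma`
    largest worst-case deviations (the uncertainty 'budget')."""
    worst = sorted(deviations, reverse=True)[:int(gamma)]
    return sum(nominal) + sum(worst)

# Hypothetical link carrying three uncertain traffic demands.
nominal = [10.0, 8.0, 5.0]
deviations = [4.0, 2.0, 3.0]  # maximum extra demand per flow
cap_full = robust_capacity(nominal, deviations, gamma=3)    # fully robust
cap_budget = robust_capacity(nominal, deviations, gamma=2)  # cheaper, less conservative
```

    Lowering Γ trades protection for cost, which is the practical appeal of the framework in network planning.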

  15. Energy-Efficient Next-Generation Passive Optical Networks Based on Sleep Mode and Heuristic Optimization

    NASA Astrophysics Data System (ADS)

    Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik

    2015-05-01

    In this article, an energy-efficiency mechanism for next-generation passive optical networks (PONs) is investigated through heuristic particle swarm optimization. The architecture considered, a 10-gigabit Ethernet WDM/OCDM (wavelength division multiplexing / optical code division multiplexing) PON, builds on a legacy 10-gigabit Ethernet PON with the advantage of using only a single en/decoder pair of optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit (ONU). The proposed joint mechanism combines the sleep-mode power-saving scheme of a 10-gigabit Ethernet PON with a power control procedure that adjusts the transmitted power of the active ONUs to maximize the overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power for each ONU according to the network's pre-defined quality of service requirements. The objective is to control the power consumption of each ONU according to the traffic demand, adjusting its transmitter power to maximize the number of transmitted bits with minimum energy consumption and thus achieve maximal system energy efficiency. Numerical results reveal that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when only a sleep-mode-based mechanism is deployed.
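
    The PSO-based power control loop can be sketched generically: particles explore candidate per-ONU transmit-power vectors and converge on the one minimizing an objective. The objective below is a hypothetical stand-in (squared distance from a target power level), not the paper's QoS-constrained energy-efficiency function; all parameters are illustrative.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm: inertia plus cognitive and social pulls
    toward the personal and global best positions."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                 # personal best positions
    g = min(P, key=objective)[:]          # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if objective(X[i]) < objective(P[i]):
                P[i] = X[i][:]
        g = min(P, key=objective)[:]
    return g

# Hypothetical target transmit powers for three ONUs.
target = [1.0, 2.0, 0.5]
objective = lambda x: sum((xi - t) ** 2 for xi, t in zip(x, target))
best = pso(objective, dim=3, bounds=(0.0, 4.0))
```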

  16. Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach.

    DTIC Science & Technology

    1998-05-01

    Modeling Training Site Vegetation Coverage Probability with a Random Optimizing Procedure: An Artificial Neural Network Approach, by Biing T. Guan, George Z. Gertner, and Alan B... The objective was to model coverage based on past coverage. As a first step, a literature survey was conducted to identify artificial neural network analysis techniques applicable to this task.

  17. On Maximizing the Lifetime of Wireless Sensor Networks by Optimally Assigning Energy Supplies

    PubMed Central

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; Gonzalez-Castaño, Francisco Javier

    2013-01-01

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, making it a good alternative for large-scale WSNs. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively. PMID:23939582

  18. On maximizing the lifetime of Wireless Sensor Networks by optimally assigning energy supplies.

    PubMed

    Asorey-Cacheda, Rafael; García-Sánchez, Antonio Javier; García-Sánchez, Felipe; García-Haro, Joan; González-Castano, Francisco Javier

    2013-08-09

    The extension of the network lifetime of Wireless Sensor Networks (WSN) is an important issue that has not been appropriately solved yet. This paper addresses this concern and proposes some techniques to plan an arbitrary WSN. To this end, we suggest a hierarchical network architecture, similar to realistic scenarios, where nodes with renewable energy sources (denoted as primary nodes) carry out most message delivery tasks, and nodes equipped with conventional chemical batteries (denoted as secondary nodes) are those with less communication demands. The key design issue of this network architecture is the development of a new optimization framework to calculate the optimal assignment of renewable energy supplies (primary node assignment) to maximize network lifetime, obtaining the minimum number of energy supplies and their node assignment. We also conduct a second optimization step to additionally minimize the number of packet hops between the source and the sink. In this work, we present an algorithm that approaches the results of the optimization framework, but with much faster execution speed, making it a good alternative for large-scale WSNs. Finally, the network model, the optimization process and the designed algorithm are further evaluated and validated by means of computer simulation under realistic conditions. The results obtained are discussed comparatively.

  19. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow with minimum cost. This paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
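
    A priority-based chromosome encodes one priority gene per node, and a path is decoded greedily from it. The following is an illustrative sketch of that decoding step (not the paper's full GA); the network and priority values are hypothetical.

```python
def decode_path(priorities, adjacency, source, sink):
    """Priority-based decoding: from the current node, always move to
    the unvisited neighbor whose priority gene is highest, until the
    sink is reached or no move remains."""
    path, node = [source], source
    while node != sink:
        candidates = [v for v in adjacency[node] if v not in path]
        if not candidates:
            return None  # this chromosome decodes to no feasible path
        node = max(candidates, key=lambda v: priorities[v])
        path.append(node)
    return path

# Hypothetical 4-node network and one chromosome (a priority per node).
adjacency = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
priorities = {1: 4, 2: 1, 3: 3, 4: 2}
path = decode_path(priorities, adjacency, source=1, sink=4)
```

    In the GA, crossover and mutation act on the priority vectors, and each decoded path is then scored against the flow and cost objectives.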

  20. AllAboard: Visual Exploration of Cellphone Mobility Data to Optimise Public Transport.

    PubMed

    Di Lorenzo, G; Sbodio, M; Calabrese, F; Berlingerio, M; Pinelli, F; Nair, R

    2016-02-01

    The deep penetration of mobile phones offers cities the ability to opportunistically monitor citizens' mobility and use data-driven insights to better plan and manage services. With large-scale data on mobility patterns, operators can move away from the costly, mostly survey-based, transportation planning processes to a more data-centric view that places the instrumented user at the center of development. In this framework, using mobile phone data to perform transit analysis and optimization represents a new frontier with significant societal impact, especially in developing countries. In this paper we present AllAboard, an intelligent tool that analyses cellphone data to help city authorities in visually exploring urban mobility and optimizing public transport. This is performed within a self-contained tool, as opposed to current solutions, which rely on a combination of several distinct tools for analysis, reporting, optimisation and planning. An interactive user interface allows transit operators to visually explore the travel demand in both space and time, correlate it with the transit network, and evaluate the quality of service that a transit network provides to citizens at a very fine grain. Operators can visually test scenarios for transit network improvements and compare the expected impact on the travellers' experience. The system has been tested using real telecommunication data for the city of Abidjan, Ivory Coast, and evaluated from data mining, optimisation and user perspectives.

  1. Learning Agents for Autonomous Space Asset Management (LAASAM)

    NASA Astrophysics Data System (ADS)

    Scally, L.; Bonato, M.; Crowder, J.

    2011-09-01

    Current and future space systems will continue to grow in complexity and capabilities, creating a formidable challenge to monitor, maintain, and utilize these systems and manage their growing network of space and related ground-based assets. Integrated System Health Management (ISHM), and in particular, Condition-Based System Health Management (CBHM), is the ability to manage and maintain a system using dynamic real-time data to prioritize, optimize, maintain, and allocate resources. CBHM entails the maintenance of systems and equipment based on an assessment of current and projected conditions (situational and health related conditions). A complete, modern CBHM system comprises a number of functional capabilities: sensing and data acquisition; signal processing; conditioning and health assessment; diagnostics and prognostics; and decision reasoning. In addition, an intelligent Human System Interface (HSI) is required to provide the user/analyst with relevant context-sensitive information, the system condition, and its effect on overall situational awareness of space (and related) assets. Colorado Engineering, Inc. (CEI) and Raytheon are investigating and designing an Intelligent Information Agent Architecture that will provide a complete range of CBHM and HSI functionality from data collection through recommendations for specific actions. The research leverages CEI’s expertise with provisioning management network architectures and Raytheon’s extensive experience with learning agents to define a system to autonomously manage a complex network of current and future space-based assets to optimize their utilization.

  2. "Catch the Pendulum": The Problem of Asymmetric Data Delivery in Electromagnetic Nanonetworks.

    PubMed

    Islam, Nabiul; Misra, Sudip

    2016-09-01

    The network of novel nanomaterial-based nanodevices, known as nanoscale communication networks or nanonetworks, has ushered in a new communication paradigm in the terahertz band (0.1-10 THz). In this work, we first envisage an architecture for nanonetworks-based Coronary Heart Disease (CHD) monitoring, consisting of a nano-macro interface (NM) and nanodevice-embedded Drug Eluting Stents (DESs), termed nanoDESs. Next, we study the problem of asymmetric data delivery in such nanonetworks-based systems and propose a simple distance-aware power allocation algorithm, named catch-the-pendulum, which optimizes the energy consumption of nanoDESs when communicating data from the underlying nanonetworks to radio frequency (RF) based macro-scale communication networks. The algorithm exploits the periodic change in the mean distance between a nanoDES, inserted inside the affected coronary artery, and the NM, fitted in the intercostal space of the rib cage of a patient suffering from CHD. Extensive simulations confirm the superior performance of the proposed algorithm with respect to energy consumption, packet delivery, and shutdown phase.

  3. Crystal surface analysis using matrix textural features classified by a probabilistic neural network

    NASA Astrophysics Data System (ADS)

    Sawyer, Curry R.; Quach, Viet; Nason, Donald; van den Berg, Lodewijk

    1991-12-01

    A system is under development in which surface quality of a growing bulk mercuric iodide crystal is monitored by video camera at regular intervals for early detection of growth irregularities. Mercuric iodide single crystals are employed in radiation detectors. A microcomputer system is used for image capture and processing. The digitized image is divided into multiple overlapping sub-images and features are extracted from each sub-image based on statistical measures of the gray tone distribution, according to the method of Haralick. Twenty parameters are derived from each sub-image and presented to a probabilistic neural network (PNN) for classification. This number of parameters was found to be optimal for the system. The PNN is a hierarchical, feed-forward network that can be rapidly reconfigured as additional training data become available. Training data is gathered by reviewing digital images of many crystals during their growth cycle and compiling two sets of images, those with and without irregularities.
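    The gray-tone co-occurrence statistics referenced above (Haralick's method) can be illustrated with a minimal sketch. This is a generic example with an assumed 8-level quantization and a single pixel offset computing just two of Haralick's statistics, not the twenty-parameter feature set used in the described system.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    q = (img.astype(int) * levels) // 256      # quantize 0..255 -> 0..levels-1
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring gray pairs
    return m / m.sum()

def haralick_features(img):
    """Two of Haralick's texture statistics: contrast and homogeneity."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    homogeneity = float(np.sum(p / (1.0 + np.abs(i - j))))
    return contrast, homogeneity
```

    A perfectly uniform sub-image yields zero contrast and unit homogeneity; surface irregularities in a sub-image raise the contrast, which is what makes such statistics usable as classifier inputs.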

  4. Wireless in-situ Sensor Network for Agriculture and Water Monitoring on a River Basin Scale in Southern Finland: Evaluation from a Data User’s Perspective

    PubMed Central

    Kotamäki, Niina; Thessler, Sirpa; Koskiaho, Jari; Hannukkala, Asko O.; Huitu, Hanna; Huttula, Timo; Havento, Jukka; Järvenpää, Markku

    2009-01-01

    Sensor networks are increasingly being implemented for environmental monitoring and agriculture to provide spatially accurate and continuous environmental information and (near) real-time applications. These networks provide a large amount of data, which poses challenges for ensuring data quality and extracting relevant information. In the present paper we describe a river basin scale wireless sensor network for agriculture and water monitoring. The network, called SoilWeather, is unique and the first of this type in Finland. The performance of the network is assessed from the user and maintainer perspectives, concentrating on data quality, network maintenance and applications. The results showed that the SoilWeather network has functioned in a relatively reliable way, but also that maintenance and data quality assurance by automatic algorithms and calibration samples require considerable effort, especially in continuous water monitoring over large areas. We see great benefits in sensor networks enabling continuous, real-time monitoring, while the data quality control and maintenance effort highlights the need for tight collaboration between sensor and sensor network owners to decrease costs and increase the quality of the sensor data in large-scale applications. PMID:22574050

  5. Secure estimation, control and optimization of uncertain cyber-physical systems with applications to power networks

    NASA Astrophysics Data System (ADS)

    Taha, Ahmad Fayez

    Transportation networks, wearable devices, energy systems, and the book you are reading now are all ubiquitous cyber-physical systems (CPS). These inherently uncertain systems combine physical phenomena with communication, data processing, control and optimization. Many CPSs are controlled and monitored by real-time control systems that use communication networks to transmit and receive data from systems modeled by physical processes. Existing studies have addressed a breadth of challenges related to the design of CPSs. However, there is a lack of studies on uncertain CPSs subject to dynamic unknown inputs and cyber-attacks---an artifact of the insertion of communication networks and the growing complexity of CPSs. The objective of this dissertation is to create secure, computational foundations for uncertain CPSs by establishing a framework to control, estimate and optimize the operation of these systems. With major emphasis on power networks, the dissertation deals with the design of secure computational methods for uncertain CPSs, focusing on three crucial issues---(1) cyber-security and risk-mitigation, (2) network-induced time-delays and perturbations and (3) the encompassed extreme time-scales. The dissertation consists of four parts. In the first part, we investigate dynamic state estimation (DSE) methods and rigorously examine the strengths and weaknesses of the proposed routines under dynamic attack-vectors and unknown inputs. In the second part, and utilizing high-frequency measurements in smart grids and the developed DSE methods in the first part, we present a risk mitigation strategy that minimizes the encountered threat levels, while ensuring the continual observability of the system through available, safe measurements. The developed methods in the first two parts rely on the assumption that the uncertain CPS is not experiencing time-delays, an assumption that might fail under certain conditions. 
To overcome this challenge, networked unknown input observers---observers/estimators for uncertain CPSs---are designed such that the effect of time-delays and cyber-induced perturbations are minimized, enabling secure DSE and risk mitigation in the first two parts. The final part deals with the extreme time-scales encompassed in CPSs, generally, and smart grids, specifically. Operational decisions for long time-scales can adversely affect the security of CPSs for faster time-scales. We present a model that jointly describes steady-state operation and transient stability by combining convex optimal power flow with semidefinite programming formulations of an optimal control problem. This approach can be jointly utilized with the aforementioned parts of the dissertation work, considering time-delays and DSE. The research contributions of this dissertation furnish CPS stakeholders with insights on the design and operation of uncertain CPSs, whilst guaranteeing the system's real-time safety. Finally, although many of the results of this dissertation are tailored to power systems, the results are general enough to be applied for a variety of uncertain CPSs.

  6. A high-resolution ambient seismic noise model for Europe

    NASA Astrophysics Data System (ADS)

    Kraft, Toni

    2014-05-01

    In the past several years, geological energy technologies have received growing attention and have been initiated in or close to urban areas. Some of these technologies involve injecting fluids into the subsurface (e.g., oil and gas development, waste disposal, and geothermal energy development) and have been found or suspected to cause small to moderate-sized earthquakes. These earthquakes, which may have gone unnoticed in the past when they occurred in remote, sparsely populated areas, now pose a considerable risk for the public acceptance of these technologies in urban areas. The permanent termination of the EGS project in Basel, Switzerland, after a number of induced ML~3 (minor) earthquakes in 2006 is one prominent example. It is therefore essential to the future development and success of these geological energy technologies to develop strategies for managing induced seismicity and keeping the size of induced earthquakes at a level that is acceptable to all stakeholders. Most guidelines and recommendations on induced seismicity published since the 1970s conclude that an indispensable component of such a strategy is the establishment of seismic monitoring in an early stage of a project. This is because appropriate seismic monitoring is the only way to detect and locate induced microearthquakes with sufficient certainty to develop an understanding of the seismic and geomechanical response of the reservoir to the geotechnical operation. In addition, seismic monitoring lays the foundation for the establishment of advanced traffic light systems and is therefore an important confidence-building measure towards the local population and authorities. Due to this development, an increasing number of seismic monitoring networks are being installed in densely populated areas with strongly heterogeneous and unfavorable ambient noise conditions.
    This poses a major challenge to the network design process, which aims to find the sensor geometry that optimizes measurement precision (i.e., earthquake location) while respecting this extremely complex boundary condition. To solve this problem I have developed a high-resolution ambient seismic noise model for Europe. The model is based on land-use data derived from satellite imagery by the EU project CORINE at a resolution of 100x100 m. The CORINE data comprise several land-use classes, among them industrial areas, mines, urban fabric, agricultural areas, permanent crops, forests and open spaces. Additionally, open GIS data for highways, major and minor roads, and railway lines were included from the OpenStreetMap project (www.openstreetmap.org). These data were divided into three classes representing good, intermediate and bad ambient noise conditions of the corresponding land-use class, based on expert judgment. To account for noise propagation away from its source, a smoothing operator was applied to the individual land-use noise fields. Finally, the noise fields were stacked to obtain a European map of ambient noise conditions. A calibration of this map with data from existing seismic stations in Europe allowed me to estimate the expected noise level in actual ground-motion units for the three ambient noise condition classes of the map. The result is a high-resolution ambient seismic noise map that allows the network designer to make educated predictions of the expected noise level for arbitrary locations in Europe. The ambient noise model was successfully tested in several network optimization projects in Switzerland and surrounding countries and will hopefully be a valuable contribution to improving the data quality of microseismic monitoring networks in Europe.
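    The classify, smooth, and stack procedure described above can be illustrated on a toy grid. The land-use layout, the per-class noise levels, and the 10 dB per-cell falloff below are invented for illustration only; they are not the calibrated values of the model.

```python
import numpy as np

# Illustrative 5x5 grid of land-use classes (values are assumptions):
# 0 = open space (quiet), 1 = road, 2 = industry
landuse = np.zeros((5, 5), dtype=int)
landuse[2, :] = 1                          # a road crossing the grid
landuse[0, 0] = 2                          # one industrial site
level = {0: -160.0, 1: -140.0, 2: -120.0}  # assumed class noise levels, dB

fields = []
for cls, db in level.items():
    # one noise field per class: class level where present, floor elsewhere
    f = np.where(landuse == cls, db, -200.0)
    # crude one-cell "propagation": neighbors get the source level - 10 dB
    pad = np.pad(f, 1, constant_values=-200.0)          # 7x7 padded field
    neigh = np.maximum.reduce([pad[1 + dy:6 + dy, 1 + dx:6 + dx]
                               for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    fields.append(np.maximum(f, neigh - 10.0))

# stack the smoothed per-class fields: the loudest source dominates
noise_map = np.maximum.reduce(fields)
```

    On this toy grid the industrial cell dominates its corner, road cells sit at the road level, and cells far from any source fall back to the open-space level, mirroring how the stacked map is intended to behave.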

  7. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network.

    PubMed

    Wong, Michelle; Bejarano, Esther; Carvlin, Graeme; Fellows, Katie; King, Galatea; Lugo, Humberto; Jerrett, Michael; Meltzer, Dan; Northcross, Amanda; Olmedo, Luis; Seto, Edmund; Wilkie, Alexa; English, Paul

    2018-03-15

    Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach.

  8. Combining Community Engagement and Scientific Approaches in Next-Generation Monitor Siting: The Case of the Imperial County Community Air Network

    PubMed Central

    Wong, Michelle; Bejarano, Esther; Carvlin, Graeme; King, Galatea; Lugo, Humberto; Jerrett, Michael; Northcross, Amanda; Olmedo, Luis; Seto, Edmund; Wilkie, Alexa; English, Paul

    2018-01-01

    Air pollution continues to be a global public health threat, and the expanding availability of small, low-cost air sensors has led to increased interest in both personal and crowd-sourced air monitoring. However, to date, few low-cost air monitoring networks have been developed with the scientific rigor or continuity needed to conduct public health surveillance and inform policy. In Imperial County, California, near the U.S./Mexico border, we used a collaborative, community-engaged process to develop a community air monitoring network that attains the scientific rigor required for research, while also achieving community priorities. By engaging community residents in the project design, monitor siting processes, data dissemination, and other key activities, the resulting air monitoring network data are relevant, trusted, understandable, and used by community residents. Integration of spatial analysis and air monitoring best practices into the network development process ensures that the data are reliable and appropriate for use in research activities. This combined approach results in a community air monitoring network that is better able to inform community residents, support research activities, guide public policy, and improve public health. Here we detail the monitor siting process and outline the advantages and challenges of this approach. PMID:29543726

  9. Operating Systems for Wireless Sensor Networks: A Survey

    PubMed Central

    Farooq, Muhammad Omer; Kunz, Thomas

    2011-01-01

    This paper presents a survey on the current state-of-the-art in Wireless Sensor Network (WSN) Operating Systems (OSs). In recent years, WSNs have received tremendous attention in the research community, with applications in battlefields, industrial process monitoring, home automation, and environmental monitoring, to name but a few. A WSN is a highly dynamic network because nodes die due to severe environmental conditions and battery power depletion. Furthermore, a WSN is composed of miniaturized motes equipped with scarce resources, e.g., limited memory and computational abilities. WSNs invariably operate in an unattended mode, and in many scenarios it is impossible to replace sensor motes after deployment; therefore, a fundamental objective is to optimize the sensor motes’ lifetime. These characteristics of WSNs impose additional challenges on OS design for WSN, and consequently, OS design for WSN deviates from traditional OS design. The purpose of this survey is to highlight major concerns pertaining to OS design in WSNs and to point out strengths and weaknesses of contemporary OSs for WSNs, keeping in mind the requirements of emerging WSN applications. The state-of-the-art in operating systems for WSNs has been examined in terms of the OS Architecture, Programming Model, Scheduling, Memory Management and Protection, Communication Protocols, Resource Sharing, Support for Real-Time Applications, and additional features. These features are surveyed for both real-time and non-real-time WSN operating systems. PMID:22163934

  10. Operating systems for wireless sensor networks: a survey.

    PubMed

    Farooq, Muhammad Omer; Kunz, Thomas

    2011-01-01

    This paper presents a survey on the current state-of-the-art in Wireless Sensor Network (WSN) Operating Systems (OSs). In recent years, WSNs have received tremendous attention in the research community, with applications in battlefields, industrial process monitoring, home automation, and environmental monitoring, to name but a few. A WSN is a highly dynamic network because nodes die due to severe environmental conditions and battery power depletion. Furthermore, a WSN is composed of miniaturized motes equipped with scarce resources, e.g., limited memory and computational abilities. WSNs invariably operate in an unattended mode, and in many scenarios it is impossible to replace sensor motes after deployment; therefore, a fundamental objective is to optimize the sensor motes' lifetime. These characteristics of WSNs impose additional challenges on OS design for WSN, and consequently, OS design for WSN deviates from traditional OS design. The purpose of this survey is to highlight major concerns pertaining to OS design in WSNs and to point out strengths and weaknesses of contemporary OSs for WSNs, keeping in mind the requirements of emerging WSN applications. The state-of-the-art in operating systems for WSNs has been examined in terms of the OS Architecture, Programming Model, Scheduling, Memory Management and Protection, Communication Protocols, Resource Sharing, Support for Real-Time Applications, and additional features. These features are surveyed for both real-time and non-real-time WSN operating systems.

  11. Identifying highly connected counties compensates for resource limitations when evaluating national spread of an invasive pathogen.

    PubMed

    Sutrave, Sweta; Scoglio, Caterina; Isard, Scott A; Hutchinson, J M Shawn; Garrett, Karen A

    2012-01-01

    Surveying invasive species can be highly resource intensive, yet near-real-time evaluations of invasion progress are important resources for management planning. In the case of the soybean rust invasion of the United States, a linked monitoring, prediction, and communication network saved U.S. soybean growers approximately $200 M/yr. Modeling of future movement of the pathogen (Phakopsora pachyrhizi) was based on data about current disease locations from an extensive network of sentinel plots. We developed a dynamic network model for U.S. soybean rust epidemics, with counties as nodes and link weights a function of host hectarage and wind speed and direction. We used the network model to compare four strategies for selecting an optimal subset of sentinel plots, listed here in order of increasing performance: random selection, zonal selection (based on more heavily weighting regions nearer the south, where the pathogen overwinters), frequency-based selection (based on how frequently the county had been infected in the past), and frequency-based selection weighted by the node strength of the sentinel plot in the network model. When dynamic network properties such as node strength are characterized for invasive species, this information can be used to reduce the resources necessary to survey and predict invasion progress.
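    The node-strength criterion that made the fourth selection strategy perform best can be sketched minimally: in a weighted network, a node's strength is the sum of the weights on its links, and counties are ranked by it. The county names and link weights below are invented for illustration; the paper's weights combine host hectarage with wind speed and direction.

```python
# Illustrative weighted links between counties (hypothetical values; real
# weights are a function of host hectarage and wind speed/direction).
links = {("FL", "GA"): 0.9, ("FL", "AL"): 0.7,
         ("GA", "SC"): 0.4, ("AL", "GA"): 0.2}
counties = {"FL", "GA", "AL", "SC"}

def node_strength(node):
    """Sum of weights on all links touching the node."""
    return sum(w for (a, b), w in links.items() if node in (a, b))

# Rank candidate sentinel-plot counties by node strength, strongest first.
ranked = sorted(counties, key=node_strength, reverse=True)
```

    A fixed sentinel-plot budget would then be spent on the top of this ranking (optionally weighted by past infection frequency, as in the strategy the abstract describes).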

  12. Neural Network Prediction of New Aircraft Design Coefficients

    NASA Technical Reports Server (NTRS)

    Norgaard, Magnus; Jorgensen, Charles C.; Ross, James C.

    1997-01-01

    This paper discusses a neural network tool for more effective aircraft design evaluations during wind tunnel tests. Using a hybrid neural network optimization method, we have produced fast and reliable predictions of aerodynamic coefficients, found optimal flap settings, and derived flap schedules. For validation, the tool was tested on a 55% scale model of the USAF/NASA Subsonic High Alpha Research Concept aircraft (SHARC). Four different networks were trained to predict the coefficients of lift, drag, and pitching moment, and the lift-to-drag ratio (C_L, C_D, C_M, and L/D) from angle of attack and flap settings. The latter network was then used to determine an overall optimal flap setting and to find optimal flap schedules.

  13. Real Time Distributed Embedded Oscillator Operating Frequency Monitoring

    NASA Technical Reports Server (NTRS)

    Pollock, Julie (Inventor); Oliver, Brett D. (Inventor); Brickner, Christopher (Inventor)

    2013-01-01

    A method for clock monitoring in a network is provided. The method comprises receiving a first network clock signal at a network device and comparing the first network clock signal to a local clock signal from a primary oscillator coupled to the network device.
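    The comparison step the claim describes can be reduced to estimating the drift between the received network clock and the local oscillator from their edge timestamps. This is a generic sketch, not the patented method; the 10 ppm tolerance is an arbitrary illustrative value.

```python
def clock_drift_ppm(network_edges, local_edges):
    """Relative drift of the local oscillator against the received
    network clock, in parts per million, from lists of edge timestamps."""
    net_span = network_edges[-1] - network_edges[0]
    loc_span = local_edges[-1] - local_edges[0]
    return (loc_span - net_span) / net_span * 1e6

def clock_ok(network_edges, local_edges, limit_ppm=10.0):
    """Flag whether the local clock is within an assumed drift tolerance."""
    return abs(clock_drift_ppm(network_edges, local_edges)) <= limit_ppm
```

    In a monitoring loop, a device would accumulate edge timestamps from both sources over a window and raise an alarm when `clock_ok` returns false.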

  14. Advanced I&C for Fault-Tolerant Supervisory Control of Small Modular Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cole, Daniel G.

    In this research, we have developed a supervisory control approach to enable automated control of SMRs. By design, the supervisory control system has a hierarchical, interconnected, adaptive control architecture. A considerable advantage of this architecture is that it allows subsystems to communicate at different/finer granularity, facilitates monitoring of processes at the modular and plant levels, and enables supervisory control. We have investigated the deployment of automation, monitoring, and data collection technologies to enable operation of multiple SMRs. Each unit's controller collects and transfers information from local loops and optimizes that unit's parameters. Information is passed from each SMR unit controller to the supervisory controller, which supervises the actions of the SMR units and manages plant processes. The information processed at the supervisory level provides operators the information needed for reactor, unit, and plant operation. In conjunction with the supervisory effort, we have investigated techniques for fault-tolerant networks, over which information is transmitted between local loops and the supervisory controller to maintain a safe level of operational normalcy in the presence of anomalies. The fault-tolerance of the supervisory control architecture, the network that supports it, and the impact of fault-tolerance on multi-unit SMR plant control have been a second focus of this research. To this end, we have investigated the deployment of advanced automation, monitoring, and data collection and communications technologies to enable operation of multiple SMRs. We have created a fault-tolerant multi-unit SMR supervisory controller that collects and transfers information from local loops, supervises their actions, and adaptively optimizes the controller parameters. The goal of this research has been to develop the methodologies and procedures for fault-tolerant supervisory control of small modular reactors. 
To achieve this goal, we have identified the following objectives, which form an ordered approach to the research: I) development of a supervisory digital I&C system; II) fault-tolerance of the supervisory control architecture; III) automated decision making and online monitoring.

  15. Performance Analysis of Integrated Wireless Sensor and Multibeam Satellite Networks Under Terrestrial Interference

    PubMed Central

    Li, Hongjun; Yin, Hao; Gong, Xiangwu; Dong, Feihong; Ren, Baoquan; He, Yuanzhi; Wang, Jingchao

    2016-01-01

    This paper investigates the performance of integrated wireless sensor and multibeam satellite networks (IWSMSNs) under terrestrial interference. The IWSMSNs comprise sensor nodes (SNs), satellite sinks (SSs), a multibeam satellite and remote monitoring hosts (RMHs). The multibeam satellite covers multiple beams, with multiple SSs in each beam. The SSs can be directly used as SNs to transmit sensing data to RMHs via the satellite, and they can also be used to collect sensing data from other SNs for transmission to the RMHs. We propose hybrid one-dimensional (1D) and 2D beam models including the equivalent intra-beam interference factor β from terrestrial communication networks (TCNs) and the equivalent inter-beam interference factor α from adjacent beams. The terrestrial interference may arise from the signals of the TCNs or from the signals of sinks transmitting to other satellite networks. Closed-form approximations of the capacity per beam are derived for the return link of IWSMSNs under terrestrial interference by using Haar approximations, where the IWSMSNs experience a Rician fading channel. The optimal joint decoding capacity can be considered the upper bound, where all of the SSs' signals are jointly decoded by a super-receiver on board the multibeam satellite or by a gateway station that knows all of the code books. The linear minimum mean square error (MMSE) capacity, in contrast, corresponds to the signals of the SSs being decoded individually by the multibeam satellite or gateway station. The simulations show that the optimal capacities are clearly higher than the MMSE capacities under the same conditions, while the capacities are lowered by Rician fading and converge as the Rician factor increases. α and β jointly affect the performance of the hybrid 1D and 2D beam models, and the number of SSs also has different effects on the optimal capacity and MMSE capacity of the IWSMSNs. PMID:27754438

  16. Design optimization of the sensor spatial arrangement in a direct magnetic field-based localization system for medical applications.

    PubMed

    Marechal, Luc; Shaohui Foong; Zhenglong Sun; Wood, Kristin L

    2015-08-01

    Motivated by the need to develop a neuronavigation system to improve the efficacy of intracranial surgical procedures, a localization system using passive magnetic fields for real-time monitoring of the insertion process of an external ventricular drain (EVD) catheter is conceived and developed. This system operates on the principle of measuring the static magnetic field of a magnetic marker using an array of magnetic sensors. An artificial neural network (ANN) is directly used for solving the inverse problem of magnetic dipole localization for improved efficiency and precision. As the accuracy of the localization system is highly dependent on the sensors' spatial locations, an optimization framework for the design of such sensing assemblies, based on understanding and classification of experimental sensor characteristics as well as prior knowledge of the general trajectory of the localization pathway, is described and investigated in this paper. Both optimized and non-optimized sensor configurations were experimentally evaluated, and the results show superior performance from the optimized configuration. While the approach presented here uses ventriculostomy as an illustrative platform, it can be extended to other medical applications that require localization inside the body.

  17. Optimization of hierarchical structure and nanoscale-enabled plasmonic refraction for window electrodes in photovoltaics

    PubMed Central

    Han, Bing; Peng, Qiang; Li, Ruopeng; Rong, Qikun; Ding, Yang; Akinoglu, Eser Metin; Wu, Xueyuan; Wang, Xin; Lu, Xubing; Wang, Qianming; Zhou, Guofu; Liu, Jun-Ming; Ren, Zhifeng; Giersig, Michael; Herczynski, Andrzej; Kempa, Krzysztof; Gao, Jinwei

    2016-01-01

    An ideal network window electrode for photovoltaic applications should provide an optimal surface coverage, a uniform current density into and/or from a substrate, and a minimum of the overall resistance for a given shading ratio. Here we show that metallic networks with quasi-fractal structure provide a near-perfect practical realization of such an ideal electrode. We find that a leaf venation network, which possesses key characteristics of the optimal structure, indeed outperforms other networks. We further show that elements of hierarchical topology, rather than details of the branching geometry, are of primary importance in optimizing the networks, and demonstrate this experimentally on five model artificial hierarchical networks of varied levels of complexity. In addition to these structural effects, networks containing nanowires are shown to acquire transparency exceeding the geometric constraint due to plasmonic refraction. PMID:27667099

  18. Region 7 States Air Quality Monitoring Plans - Iowa

    EPA Pesticide Factsheets

    National Ambient Air Quality Standard (NAAQS) - Iowa, Kansas, Missouri, and Nebraska; Annual Monitoring Network Plans, Five-Year Monitoring Network Assessments, and approval documentation. Each year, states are required to submit an annual monitoring network plan.

  19. Region 7 States Air Quality Monitoring Plans - Missouri

    EPA Pesticide Factsheets

    National Ambient Air Quality Standard (NAAQS) - Iowa, Kansas, Missouri, and Nebraska; Annual Monitoring Network Plans, Five-Year Monitoring Network Assessments, and approval documentation. Each year, states are required to submit an annual monitoring network plan.

  20. Region 7 States Air Quality Monitoring Plans - Nebraska

    EPA Pesticide Factsheets

    National Ambient Air Quality Standard (NAAQS) - Iowa, Kansas, Missouri, and Nebraska; Annual Monitoring Network Plans, Five-Year Monitoring Network Assessments, and approval documentation. Each year, states are required to submit an annual monitoring network plan.

  1. Region 7 States Air Quality Monitoring Plans - Kansas

    EPA Pesticide Factsheets

    National Ambient Air Quality Standard (NAAQS) - Iowa, Kansas, Missouri, and Nebraska; Annual Monitoring Network Plans, Five-Year Monitoring Network Assessments, and approval documentation. Each year, states are required to submit an annual monitoring network plan.

  2. Remote Energy Monitoring System via Cellular Network

    NASA Astrophysics Data System (ADS)

    Yunoki, Shoji; Tamaki, Satoshi; Takada, May; Iwaki, Takashi

    Recently, improving power saving and cost efficiency by monitoring the operation status of various facilities over a network has gained attention. Wireless networks, especially cellular networks, have advantages in mobility, coverage, and scalability. On the other hand, they have the disadvantage of low reliability, due to rapid changes in the available bandwidth. We propose a transmission control scheme based on data priority and instantaneous available bandwidth to realize a highly reliable remote monitoring system via a cellular network. We have developed the proposed monitoring system, evaluated the effectiveness of our scheme, and shown that it reduces the maximum transmission delay of sensor status to 1/10 of that of best-effort transmission.
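    The core idea of priority- and bandwidth-aware transmission control can be sketched as a per-cycle scheduler: send the highest-priority readings that fit the instantaneous bandwidth estimate and defer the rest. The message names, priorities, and sizes below are invented; this is not the paper's protocol, only the general scheduling principle.

```python
import heapq

def schedule(queue, bandwidth):
    """One transmission cycle: send highest-priority items first,
    deferring any that do not fit the instantaneous bandwidth budget.

    queue     -- list of (name, priority, size_bytes) sensor messages
    bandwidth -- bytes transmittable this cycle (estimated at runtime)
    """
    heap = [(-prio, size, name) for name, prio, size in queue]
    heapq.heapify(heap)                     # max-priority first
    sent, deferred, budget = [], [], bandwidth
    while heap:
        _, size, name = heapq.heappop(heap)
        if size <= budget:
            sent.append(name)
            budget -= size
        else:
            deferred.append(name)           # retry next cycle
    return sent, deferred
```

    When the available bandwidth drops, low-priority status messages are deferred while alarms still get through, which is the behavior that bounds the worst-case delay of important data.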

  3. A network security monitor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heberlein, L.T.; Dias, G.V.; Levitt, K.N.

    1989-11-01

    The study of security in computer networks is a rapidly growing area of interest because of the proliferation of networks and the paucity of security measures in most current networks. Since most networks consist of a collection of inter-connected local area networks (LANs), this paper concentrates on the security-related issues in a single broadcast LAN such as Ethernet. Specifically, we formalize various possible network attacks and outline methods of detecting them. Our basic strategy is to develop profiles of usage of network resources and then compare current usage patterns with the historical profile to determine possible security violations. Thus, our work is similar to host-based intrusion-detection systems such as SRI's IDES. Different from such systems, however, is our use of a hierarchical model to refine the focus of the intrusion-detection mechanism. We also report on the development of our experimental LAN monitor, currently under implementation. Several network attacks have been simulated, and results on how the monitor has been able to detect these attacks are analyzed. Initial results demonstrate that many network attacks are detectable with our monitor, although it can surely be defeated. Current work is focusing on the integration of network monitoring with host-based techniques. 20 refs., 2 figs.
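    The profile-versus-current-usage strategy described above can be sketched in a few lines: keep a historical baseline of per-service traffic counts and flag large relative deviations. The service names, counts, and threshold are invented for illustration; the actual monitor uses a hierarchical model rather than this flat comparison.

```python
def anomaly_score(current, profile):
    """Largest relative deviation of current per-service traffic
    counts from the historical baseline profile."""
    return max(abs(current.get(svc, 0) - base) / base
               for svc, base in profile.items())

# Hypothetical baseline of packets per hour per service on the LAN.
profile = {"telnet": 100.0, "ftp": 50.0, "smtp": 200.0}
normal  = {"telnet": 95, "ftp": 55, "smtp": 210}
attack  = {"telnet": 95, "ftp": 400, "smtp": 210}   # simulated ftp flood
```

    With an assumed alarm threshold of, say, 1.0 (a 100% deviation), the normal hour passes quietly while the flood hour trips the alarm.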

  4. Dynamic Hierarchical Energy-Efficient Method Based on Combinatorial Optimization for Wireless Sensor Networks

    PubMed Central

    Tang, Hongying; Cheng, Yongbo; Zhao, Qin; Li, Baoqing; Yuan, Xiaobing

    2017-01-01

    Routing protocols based on topology control are significantly important for improving network longevity in wireless sensor networks (WSNs). Traditionally, some WSN routing protocols distribute network traffic load unevenly across sensor nodes, which is not optimal for improving network longevity. Unlike conventional WSN routing protocols, we propose a dynamic hierarchical protocol based on combinatorial optimization (DHCO) to balance the energy consumption of sensor nodes and to improve WSN longevity. For each sensor node, the DHCO algorithm obtains the optimal route by establishing a feasible routing set instead of selecting the cluster head or the next-hop node. The process of obtaining the optimal route can be formulated as a combinatorial optimization problem. Specifically, the DHCO algorithm proceeds as follows: it employs a hierarchy-based connection mechanism to construct a hierarchical network structure in which each sensor node is assigned to a specific hierarchical subset; it utilizes combinatorial optimization theory to establish the feasible routing set for each sensor node; and it takes advantage of the maximum–minimum criterion to obtain each node's optimal route to the base station. Simulation experiments show the effectiveness and superiority of the DHCO algorithm in comparison with state-of-the-art WSN routing algorithms, including low-energy adaptive clustering hierarchy (LEACH), hybrid energy-efficient distributed clustering (HEED), genetic protocol-based self-organizing network clustering (GASONeC), and double cost function-based routing (DCFR) algorithms. PMID:28753962
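
The maximum-minimum criterion mentioned above can be illustrated with a small sketch: among a node's feasible routes, choose the one whose weakest node (lowest residual energy) is strongest, which balances energy drain across the network. The `best_route` helper and the route/energy representation are assumptions for illustration, not the DHCO implementation:

```python
def best_route(routes, energy):
    """Max-min criterion: pick the route that maximizes the minimum
    residual energy among its nodes.

    routes: feasible routing set, a list of node-name lists.
    energy: dict mapping node name to residual energy.
    """
    return max(routes, key=lambda r: min(energy[n] for n in r))
```

Route ['a', 'b'] has bottleneck energy 2 while ['a', 'c', 'd'] has bottleneck 3, so the longer but better-balanced route wins.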

  5. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks.

    PubMed

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance.

  6. Design and Evaluation of a Proxy-Based Monitoring System for OpenFlow Networks

    PubMed Central

    Taniguchi, Yoshiaki; Tsutsumi, Hiroaki; Iguchi, Nobukazu; Watanabe, Kenzi

    2016-01-01

    Software-Defined Networking (SDN) has attracted attention along with the popularization of cloud environments and server virtualization. In SDN, the control plane and the data plane are decoupled so that the logical topology and routing control can be configured dynamically depending on network conditions. To obtain network conditions precisely, a network monitoring mechanism is necessary. In this paper, we focus on OpenFlow, a core technology for realizing SDN. We propose, design, implement, and evaluate a network monitoring system for OpenFlow networks. Our proposed system acts as a proxy between an OpenFlow controller and OpenFlow switches. Through experimental evaluations, we confirm that our proposed system can capture packets and monitor traffic information according to the administrator's configuration. In addition, we show that our proposed system does not cause significant degradation of overall network performance. PMID:27006977

  7. Feasibility study on a strain based deflection monitoring system for wind turbine blades

    NASA Astrophysics Data System (ADS)

    Lee, Kyunghyun; Aihara, Aya; Puntsagdash, Ganbayar; Kawaguchi, Takayuki; Sakamoto, Hiraku; Okuma, Masaaki

    2017-01-01

    The bending stiffness of wind turbine blades has decreased due to the trend of wind turbine upsizing. Consequently, the risk of blades breaking by hitting the tower has increased. In order to prevent such incidents, this study proposes a deflection monitoring system that can be installed on the blades of already operating wind turbines. The monitoring system is composed of an estimation algorithm to detect blade deflection and a wireless sensor network as the hardware platform. For deflection estimation, a strain-based estimation algorithm and an objective function for optimal sensor arrangement are proposed. The strain-based estimation algorithm uses a linear correlation between strain and deflection, which can be expressed in the form of a transformation matrix. The objective function includes terms for the strain sensitivity and the condition number of the transformation matrix between strain and deflection. In order to calculate the objective function, a simplified experimental model of the blade is constructed by interpolating the mode shapes of a blade from modal testing. The interpolation method is practical for the blades of operating wind turbines, since it does not require a finite element model of the blade. In addition, a wireless sensor network based on open-source hardware is developed. It is installed on a 300 W scale wind turbine, and blade vibration during operation is investigated.
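
The linear strain-to-deflection relation can be illustrated by fitting the transformation coefficients from calibration samples. The sketch below assumes two strain gauges and one deflection output and solves the 2x2 normal equations by hand; it is an illustration of the idea, not the authors' algorithm:

```python
def fit_transformation(strains, deflections):
    """Least-squares fit of deflection = c1*s1 + c2*s2 from calibration
    samples (two strain gauges, one deflection output).

    Solves the normal equations A^T A c = A^T d for the 2x2 case.
    """
    a11 = sum(s[0] * s[0] for s in strains)
    a12 = sum(s[0] * s[1] for s in strains)
    a22 = sum(s[1] * s[1] for s in strains)
    b1 = sum(s[0] * d for s, d in zip(strains, deflections))
    b2 = sum(s[1] * d for s, d in zip(strains, deflections))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Once fitted, the coefficients play the role of one row of the transformation matrix: deflection is estimated online as a weighted sum of the measured strains.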

  8. Sampling design optimization for spatial functions

    USGS Publications Warehouse

    Olea, R.A.

    1984-01-01

    A new procedure is presented for minimizing the sampling requirements necessary to estimate a mappable spatial function at a specified level of accuracy. The technique is based on universal kriging, an estimation method within the theory of regionalized variables. Neither actual implementation of the sampling nor universal kriging estimations are necessary to make an optimal design. The average standard error and maximum standard error of estimation over the sampling domain are used as global indices of sampling efficiency. The procedure optimally selects those parameters controlling the magnitude of the indices, including the density and spatial pattern of the sample elements and the number of nearest sample elements used in the estimation. As an illustration, the network of observation wells used to monitor the water table in the Equus Beds of Kansas is analyzed and an improved sampling pattern suggested. This example demonstrates the practical utility of the procedure, which can be applied equally well to other spatial sampling problems, as the procedure is not limited by the nature of the spatial function. © 1984 Plenum Publishing Corporation.

  9. A Parameter Communication Optimization Strategy for Distributed Machine Learning in Sensors

    PubMed Central

    Zhang, Jilin; Tu, Hangdi; Ren, Yongjian; Wan, Jian; Zhou, Li; Li, Mingwei; Wang, Jue; Yu, Lifeng; Zhao, Chang; Zhang, Lei

    2017-01-01

    To exploit the distributed characteristics of sensors, distributed machine learning has become the mainstream approach, but differences in the computing capability of sensors and network delays greatly influence the accuracy and the convergence rate of the machine learning model. Our paper describes a parameter communication optimization strategy to balance the training overhead and the communication overhead. We extend the fault tolerance of iterative-convergent machine learning algorithms and propose Dynamic Finite Fault Tolerance (DFFT). Based on DFFT, we implement a parameter communication optimization strategy for distributed machine learning, named the Dynamic Synchronous Parallel Strategy (DSP), which uses a performance monitoring model to dynamically adjust the parameter synchronization strategy between worker nodes and the Parameter Server (PS). This strategy makes full use of the computing power of each sensor, ensures the accuracy of the machine learning model, and avoids model training being disturbed by tasks unrelated to the sensors. PMID:28934163
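
The abstract does not detail how DSP adjusts synchronization, but a generic bounded-staleness check in the same spirit can be sketched as follows; the function names and the per-worker iteration clocks are assumptions for illustration:

```python
def may_proceed(clocks, worker, staleness):
    """A worker may start its next iteration only if it is no more than
    `staleness` iterations ahead of the slowest worker.

    clocks: dict mapping worker name to its completed-iteration count.
    """
    return clocks[worker] - min(clocks.values()) <= staleness

def blocked_workers(clocks, staleness):
    """Workers that must wait for stragglers under the current bound."""
    slowest = min(clocks.values())
    return [w for w, c in clocks.items() if c - slowest > staleness]
```

A dynamic strategy would tune `staleness` at run time from performance monitoring: loose on slow networks to hide delay, tight when accuracy begins to suffer.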

  10. Optimizing Virtual Network Functions Placement in Virtual Data Center Infrastructure Using Machine Learning

    NASA Astrophysics Data System (ADS)

    Bolodurina, I. P.; Parfenov, D. I.

    2018-01-01

    We have elaborated a neural network model for virtual network flow identification based on the statistical properties of flows circulating in the data center network and on characteristics describing the content of packets transmitted through network objects. This enabled us to establish the optimal set of attributes for identifying virtual network functions. Using the data obtained in our research, we established an algorithm for optimizing the placement of virtual network functions. Our approach uses a hybrid virtualization method combining virtual machines and containers, which reduces the infrastructure load and the response time in the network of the virtual data center. The algorithmic solution is based on neural networks, which allows it to scale to any number of network function copies.

  11. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
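
A minimal genetic algorithm over binary input-selection masks, in the spirit of the approach described, might look like the following. The fitness function, population size, and operators (truncation selection, one-point crossover, point mutation) are illustrative assumptions, not details from the paper:

```python
import random

def evolve(n_inputs, fitness, pop_size=20, generations=40, seed=0):
    """Tiny GA over binary masks; bit i = 1 means input i is selected.
    `fitness` scores a mask (tuple of 0/1); higher is better."""
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in range(n_inputs))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_inputs)   # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(n_inputs)        # point mutation: flip one bit
            child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=fitness)
```

In the SSME setting, `fitness` would be replaced by a measure of how well a candidate input subset supports the neural network approximation, which is where the real cost lies.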

  12. NSI operations center

    NASA Technical Reports Server (NTRS)

    Zanley, Nancy L.

    1991-01-01

    The NASA Science Internet (NSI) Network Operations Staff is responsible for providing reliable communication connectivity for the NASA science community. As the NSI user community expands, so does the demand for greater interoperability with users and resources on other networks (e.g., NSFnet, ESnet), both nationally and internationally. Coupled with the science community's demand for greater access to other resources is the demand for more reliable communication connectivity. Recognizing this, the NASA Science Internet Project Office (NSIPO) expanded its operations activities. By January 1990, Network Operations was equipped with a telephone hotline, and its staff was expanded to six Network Operations Analysts. These six analysts provide 24-hour-a-day, 7-day-a-week coverage to assist site managers with problem determination and resolution. The NSI Operations staff monitors network circuits and their associated routers. In most instances, NSI Operations diagnoses and reports problems before users realize a problem exists. Monitoring of the NSI TCP/IP network is currently being done with Proteon's Overview monitoring system. The Overview monitoring system displays a map of the NSI network, using various colors to indicate the conditions of the components being monitored. Each node or site is polled via the Simple Network Monitoring Protocol (SNMP). If a circuit goes down, Overview alerts the Network Operations staff with an audible alarm and changes the color of the component. When an alert is received, Network Operations personnel immediately verify and diagnose the problem, coordinate repair with other networking service groups, track problems, and document problems and resolutions in a trouble-ticket database. NSI Operations offers the NSI science community reliable connectivity by exercising prompt assessment and resolution of network problems.

  13. Overview of the new National Near-Road Air Quality Monitoring Network

    EPA Science Inventory

    In 2010, EPA promulgated new National Ambient Air Quality Standards (NAAQS) for nitrogen dioxide (NO2). As part of this new NAAQS, EPA required the establishment of a national near-road air quality monitoring network. This network will consist of one NO2 near-road monitoring st...

  14. Compliance Groundwater Monitoring of Nonpoint Sources - Emerging Approaches

    NASA Astrophysics Data System (ADS)

    Harter, T.

    2008-12-01

    Groundwater monitoring networks are typically designed for regulatory compliance of discharges from industrial sites. There, the quality of first encountered (shallow-most) groundwater is of key importance. Network design criteria have been developed for purposes of determining whether an actual or potential, permitted or incidental waste discharge has had or will have a degrading effect on groundwater quality. The fundamental underlying paradigm is that such discharge (if it occurs) will form a distinct contamination plume. Networks that guide (post-contamination) mitigation efforts are designed to capture the shape and dynamics of existing, finite-scale plumes. In general, these networks extend over areas less than one to ten hectare. In recent years, regulatory programs such as the EU Nitrate Directive and the U.S. Clean Water Act have forced regulatory agencies to also control groundwater contamination from non-incidental, recharging, non-point sources, particularly agricultural sources (fertilizer, pesticides, animal waste application, biosolids application). Sources and contamination from these sources can stretch over several tens, hundreds, or even thousands of square kilometers with no distinct plumes. A key question in implementing monitoring programs at the local, regional, and national level is, whether groundwater monitoring can be effectively used as a landowner compliance tool, as is currently done at point-source sites. We compare the efficiency of such traditional site-specific compliance networks in nonpoint source regulation with various designs of regional nonpoint source monitoring networks that could be used for compliance monitoring. We discuss advantages and disadvantages of the site vs. regional monitoring approaches with respect to effectively protecting groundwater resources impacted by nonpoint sources: Site-networks provide a tool to enforce compliance by an individual landowner. 
But the nonpoint source character of the contamination and its typically large spatial extent require extensive networks at an individual site to accurately and fairly monitor individual compliance. In contrast, regional networks seemingly fail to hold individual landowners accountable. But regional networks can effectively monitor large-scale impacts and water quality trends, and thus inform regulatory programs that enforce management practices tied to nonpoint source pollution. Regional monitoring networks for compliance purposes can face significant implementation challenges due to a regulatory and legal landscape that is exclusively structured to address point sources and individual liability, and due to the non-intensive nature of a regional monitoring program (lack of control of hot spots; lack of accountability of individual landowners).

  15. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems

    PubMed Central

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-01

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput, and energy efficiency is a challenging task. The periodic data from these systems generates a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and is widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer, which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data simultaneously requires limited delay, high throughput, and energy-efficient communication. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulations are then conducted using NS2.29, Castalia 3.2, and OMNeT++. The proposed scheme provides optimal performance with respect to the required QoS. PMID:28134853
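
The core of A-MPDU-style frame aggregation, packing consecutive small payloads into aggregates bounded by a maximum frame size, can be sketched greedily. This illustrates the general idea, not the paper's scheme; in this sketch a payload larger than the limit simply gets its own frame:

```python
def aggregate(payloads, max_size):
    """Greedily pack consecutive payload sizes (bytes) into aggregated
    frames whose total size does not exceed max_size."""
    frames, current, used = [], [], 0
    for size in payloads:
        if used + size > max_size and current:
            frames.append(current)             # flush the full aggregate
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        frames.append(current)
    return frames
```

Fewer, larger frames mean fewer channel-access attempts, which is where the delay and energy savings for bursts of small sensor packets come from.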

  16. IEEE 802.15.4 Frame Aggregation Enhancement to Provide High Performance in Life-Critical Patient Monitoring Systems.

    PubMed

    Akbar, Muhammad Sajjad; Yu, Hongnian; Cang, Shuang

    2017-01-28

    In wireless body area sensor networks (WBASNs), Quality of Service (QoS) provision for patient monitoring systems in terms of time-critical deadlines, high throughput, and energy efficiency is a challenging task. The periodic data from these systems generates a large number of small packets in a short time period, which requires an efficient channel access mechanism. The IEEE 802.15.4 standard is recommended for low-power devices and is widely used in many wireless sensor network applications. It provides a hybrid channel access mechanism at the Media Access Control (MAC) layer, which plays a key role in overall successful transmission in WBASNs. Many WBASN MAC protocols use this hybrid channel access mechanism in a variety of sensor applications. However, these protocols are less efficient for patient monitoring systems, where life-critical data simultaneously requires limited delay, high throughput, and energy-efficient communication. To address these issues, this paper proposes a frame aggregation scheme using the aggregated MAC protocol data unit (A-MPDU), which works with the IEEE 802.15.4 MAC layer. To implement the scheme accurately, we develop a traffic pattern analysis mechanism to understand the requirements of the sensor nodes in patient monitoring systems, then model the channel access to find the performance gap on the basis of the obtained requirements, and finally propose a design based on the needs of patient monitoring systems. The mechanism is initially verified using numerical modelling, and simulations are then conducted using NS2.29, Castalia 3.2, and OMNeT++. The proposed scheme provides optimal performance with respect to the required QoS.

  17. The balanced mind: the variability of task-unrelated thoughts predicts error monitoring

    PubMed Central

    Allen, Micah; Smallwood, Jonathan; Christensen, Joanna; Gramm, Daniel; Rasmussen, Beinta; Jensen, Christian Gaden; Roepstorff, Andreas; Lutz, Antoine

    2013-01-01

    Self-generated thoughts unrelated to ongoing activities, also known as “mind-wandering,” make up a substantial portion of our daily lives. Reports of such task-unrelated thoughts (TUTs) predict both poor performance on demanding cognitive tasks and blood-oxygen-level-dependent (BOLD) activity in the default mode network (DMN). However, recent findings suggest that TUTs and the DMN can also facilitate metacognitive abilities and related behaviors. To further understand these relationships, we examined the influence of subjective intensity, ruminative quality, and variability of mind-wandering on response inhibition and monitoring, using the Error Awareness Task (EAT). We expected to replicate links between TUT and reduced inhibition, and explored whether variance in TUT would predict improved error monitoring, reflecting a capacity to balance between internal and external cognition. By analyzing BOLD responses to subjective probes and the EAT, we dissociated contributions of the DMN, executive, and salience networks to task performance. While both response inhibition and online TUT ratings modulated BOLD activity in the medial prefrontal cortex (mPFC) of the DMN, the former recruited a more dorsal area, implying functional segregation. We further found that individual differences in mean TUTs strongly predicted EAT stop accuracy, while TUT variability specifically predicted levels of error awareness. Interestingly, we also observed co-activation of salience and default mode regions during error awareness, supporting a link between monitoring and TUTs. Altogether, our results suggest that although TUT is detrimental to task performance, fluctuations in attention between self-generated and external task-related thought are a characteristic of individuals with greater metacognitive monitoring capacity. Achieving a balance between internally and externally oriented thought may thus aid individuals in optimizing their task performance. PMID:24223545

  18. Influence maximization in complex networks through optimal percolation

    NASA Astrophysics Data System (ADS)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large-scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524, 65-68 (2015).
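
The full non-backtracking optimization is involved, but the greedy Collective Influence heuristic associated with this line of work can be sketched at radius one: CI_1(i) = (k_i - 1) * sum over neighbors j of (k_j - 1), removing the highest-CI node and recomputing adaptively. This radius-1 version is a simplification for illustration, not the complete algorithm of the paper:

```python
def collective_influence(adj, node):
    """CI at radius l = 1: (k_i - 1) * sum over neighbors of (k_j - 1)."""
    k = len(adj[node])
    return (k - 1) * sum(len(adj[j]) - 1 for j in adj[node])

def top_influencers(adj, n):
    """Greedily remove the highest-CI node n times, recomputing CI on
    the reduced graph after each removal (adaptive scheme)."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a mutable copy
    chosen = []
    for _ in range(n):
        best = max(adj, key=lambda u: collective_influence(adj, u))
        chosen.append(best)
        for v in adj.pop(best):
            adj[v].discard(best)
    return chosen
```

On a hub-and-spoke test graph the hub dominates, as expected; on large sparse graphs the interesting result of the paper is that weakly connected bridge nodes also surface.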

  19. A Hierarchical Modeling for Reactive Power Optimization With Joint Transmission and Distribution Networks by Curve Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, Tao; Li, Cheng; Huang, Can

    In order to solve reactive power optimization for joint transmission and distribution networks, a hierarchical modeling method is proposed in this paper. It allows the reactive power optimization of transmission and distribution networks to be performed separately, leading to a master-slave structure, and improves on traditional centralized modeling methods by alleviating the big-data problem in a control center. Specifically, the transmission-distribution coordination issue of the hierarchical modeling method is investigated. First, a curve-fitting approach is developed to provide a cost function of the slave model for the master model, which reflects the impact of each slave model. Second, the transmission and distribution networks are decoupled at feeder buses, and all the distribution networks are coordinated by the master reactive power optimization model to achieve global optimality. Finally, numerical results on two test systems verify the effectiveness of the proposed hierarchical modeling and curve-fitting methods.
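
The curve-fitting step, representing a slave network's cost as a function of a boundary quantity for use in the master model, can be illustrated by fitting a quadratic through three sampled evaluations of the slave model. The quadratic form and the three-point interpolation are assumptions for illustration, not the paper's exact procedure:

```python
def fit_quadratic(samples):
    """Fit cost(q) = a*q^2 + b*q + c exactly through three sampled
    (q, cost) points, e.g. slave-model evaluations at a feeder bus.
    Uses Newton divided differences, then expands to monomial form."""
    (x0, y0), (x1, y1), (x2, y2) = samples
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)
    b = d1 - a * (x0 + x1)
    c = y0 - a * x0 * x0 - b * x0
    return a, b, c
```

The master problem then optimizes over these fitted surrogate costs instead of embedding every distribution network model, which is what keeps the hierarchical scheme tractable.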

  20. Adaptation, Growth, and Resilience in Biological Distribution Networks

    NASA Astrophysics Data System (ADS)

    Ronellenfitsch, Henrik; Katifori, Eleni

    Highly optimized complex transport networks serve crucial functions in many man-made and natural systems such as power grids and plant or animal vasculature. Often, the relevant optimization functional is nonconvex and characterized by many local extrema. In general, finding the global, or nearly global, optimum is difficult. In biological systems, it is believed that such an optimal state is slowly achieved through natural selection. However, general coarse-grained models for flow networks with local positive feedback rules for the vessel conductivity typically get trapped in low-efficiency local minima. We show how the growth of the underlying tissue, coupled to the dynamical equations for network development, can drive the system to a dramatically improved optimal state. This general model provides a surprisingly simple explanation for the appearance of highly optimized transport networks in biology, such as plant and animal vasculature. In addition, we show how the incorporation of spatially collective fluctuating sources yields a minimal model of realistic reticulation in distribution networks and thus resilience against damage.
