Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brancucci Martinez-Anido, C.; Hodge, B. M.
2014-09-01
This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
Research on early warning systems for large-scale network security incidents is of great significance: such systems can improve a network's emergency response capabilities, mitigate the damage of cyber attacks, and strengthen the system's ability to counterattack. This paper presents a comprehensive early warning system that combines active measurement and anomaly detection, and focuses on the system's key visualization algorithms and technology. Planar visualization of the large-scale network is realized with a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph planar visualization algorithm is applied to each small-scale network. Finally, the small-scale topologies are combined into a single topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the large-scale planar visualization problem into a series of small-scale visualization and distribution problems, it offers high parallelism and can handle the display of ultra-large-scale network topologies.
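A minimal sketch of the divide-and-conquer layout idea described in the abstract above, under stated substitutions: the MLkP/CR partitioner is not publicly available, so recursive Kernighan-Lin bisection from networkx stands in for it, and the force-analysis distribution step is approximated by tiling each sub-layout on a coarse grid. Function names and parameters are illustrative only.

```python
import math
import networkx as nx

def partition(graph, max_size=50):
    """Recursively bisect the topology until every part is small-scale."""
    if graph.number_of_nodes() <= max_size:
        return [set(graph.nodes)]
    a, b = nx.algorithms.community.kernighan_lin_bisection(graph)
    return (partition(graph.subgraph(a).copy(), max_size)
            + partition(graph.subgraph(b).copy(), max_size))

def layout_large_topology(graph, max_size=50, spacing=3.0):
    """Lay out each small partition independently, then tile the sub-layouts."""
    parts = partition(graph, max_size)
    cols = math.ceil(math.sqrt(len(parts)))
    positions = {}
    for i, nodes in enumerate(parts):
        sub_pos = nx.spring_layout(graph.subgraph(nodes), seed=0)   # per-partition layout
        dx, dy = (i % cols) * spacing, (i // cols) * spacing        # grid offset stands in for force-based distribution
        positions.update({n: (x + dx, y + dy) for n, (x, y) in sub_pos.items()})
    return positions

if __name__ == "__main__":
    g = nx.random_geometric_graph(500, 0.08, seed=1)
    pos = layout_large_topology(g)
    print(f"placed {len(pos)} nodes")
```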
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
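The fully factorized Boyen-Koller update mentioned in this abstract can be illustrated with a small numerical sketch. This is not the dissertation's implementation: the three-variable DBN, its CPTs, and the evidence model below are invented for illustration; only the idea of propagating a product of per-variable marginals through local transition models is taken from the text.

```python
import numpy as np

# P(X_i(t+1)=1 | parent_a, parent_b) for binary parents, one 2x2 table per variable (illustrative values).
CPT = [
    np.array([[0.1, 0.6], [0.4, 0.9]]),   # X0 depends on (X0, X2) at time t
    np.array([[0.2, 0.7], [0.3, 0.8]]),   # X1 depends on (X1, X0)
    np.array([[0.1, 0.5], [0.5, 0.9]]),   # X2 depends on (X2, X1)
]
PARENTS = [(0, 2), (1, 0), (2, 1)]

def bk_step(belief):
    """One BK-approximate time update: per-variable marginals in, marginals out."""
    new = np.empty_like(belief)
    for i, (a, b) in enumerate(PARENTS):
        pa, pb = belief[a], belief[b]
        # Sum the CPT over the (assumed independent) parent marginals.
        new[i] = (CPT[i][0, 0] * (1 - pa) * (1 - pb) + CPT[i][0, 1] * (1 - pa) * pb
                  + CPT[i][1, 0] * pa * (1 - pb) + CPT[i][1, 1] * pa * pb)
    return new

def observe(belief, i, likelihood_ratio):
    """Fold local evidence on X_i into its marginal via Bayes' rule on the odds."""
    odds = likelihood_ratio * belief[i] / (1 - belief[i])
    out = belief.copy()
    out[i] = odds / (1 + odds)
    return out

belief = np.array([0.5, 0.5, 0.5])
for t in range(5):
    belief = bk_step(belief)
    belief = observe(belief, 0, likelihood_ratio=2.0)   # a local sensor favoring X0=1
    print(t, np.round(belief, 3))
```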
GEOCHEMISTRY OF SULFUR IN IRON CORROSION SCALES FOUND IN DRINKING WATER DISTRIBUTION SYSTEMS
Iron-sulfur geochemistry is important in many natural and engineered environments, including drinking water systems. In the anaerobic environment beneath scales of corroding iron drinking water distribution system pipes, sulfate reducing bacteria (SRB) produce sulfide from natu...
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.
Distributed intrusion detection system based on grid security model
NASA Astrophysics Data System (ADS)
Su, Jie; Liu, Yahui
2008-03-01
Grid computing has developed rapidly alongside network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, a distributed, load-balanced intrusion detection system can be realized. This paper first discusses the security mechanisms of grid computing and the role of PKI/CA in the grid security system, then describes how the characteristics of grid computing are applied in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security system that reduces processing delay while maintaining detection rates.
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the tension between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; then the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. The design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS enterprise JavaBeans (session beans and entity beans) are explained. In addition, experiments on the relation between spatial data and response time under different conditions are conducted, which shows that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
Maximum entropy approach to H-theory: Statistical mechanics of hierarchical systems
NASA Astrophysics Data System (ADS)
Vasconcelos, Giovani L.; Salazar, Domingos S. P.; Macêdo, A. M. S.
2018-02-01
A formalism, called H-theory, is applied to the problem of statistical equilibrium of a hierarchical complex system with multiple time and length scales. In this approach, the system is formally treated as being composed of a small subsystem—representing the region where the measurements are made—in contact with a set of "nested heat reservoirs" corresponding to the hierarchical structure of the system, where the temperatures of the reservoirs are allowed to fluctuate owing to the complex interactions between degrees of freedom at different scales. The probability distribution function (pdf) of the temperature of the reservoir at a given scale, conditioned on the temperature of the reservoir at the next largest scale in the hierarchy, is determined from a maximum entropy principle subject to appropriate constraints that describe the thermal equilibrium properties of the system. The marginal temperature distribution of the innermost reservoir is obtained by integrating over the conditional distributions of all larger scales, and the resulting pdf is written in analytical form in terms of certain special transcendental functions, known as the Fox H functions. The distribution of states of the small subsystem is then computed by averaging the quasiequilibrium Boltzmann distribution over the temperature of the innermost reservoir. This distribution can also be written in terms of H functions. The general family of distributions reported here recovers, as particular cases, the stationary distributions recently obtained by Macêdo et al. [Phys. Rev. E 95, 032315 (2017), 10.1103/PhysRevE.95.032315] from a stochastic dynamical approach to the problem.
The use of discontinuities and functional groups to assess relative resilience in complex systems
Allen, Craig R.; Gunderson, Lance; Johnson, A.R.
2005-01-01
It is evident when the resilience of a system has been exceeded and the system qualitatively changed. However, it is not clear how to measure resilience in a system prior to the demonstration that the capacity for resilient response has been exceeded. We argue that self-organizing human and natural systems are structured by a relatively small set of processes operating across scales in time and space. These structuring processes should generate a discontinuous distribution of structures and frequencies, where discontinuities mark the transition from one scale to another. Resilience is not driven by the identity of elements of a system, but rather by the functions those elements provide, and their distribution within and across scales. A self-organizing system that is resilient should maintain patterns of function within and across scales despite the turnover of specific elements (for example, species, cities). However, the loss of functions, or a decrease in functional representation at certain scales will decrease system resilience. It follows that some distributions of function should be more resilient than others. We propose that the determination of discontinuities, and the quantification of function both within and across scales, produce relative measures of resilience in ecological and other systems. We describe a set of methods to assess the relative resilience of a system based upon the determination of discontinuities and the quantification of the distribution of functions in relation to those discontinuities. © 2005 Springer Science+Business Media, Inc.
SMART-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodge, Bri-Mathias; Palmintier, Bryan
This presentation provides an overview of full-scale, high-quality, synthetic distribution system data set(s) for testing distribution automation algorithms, distributed control approaches, ADMS capabilities, and other emerging distribution technologies.
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions through the use of channelization.
Scaling the Pipe: NASA EOS Terra Data Systems at 10
NASA Technical Reports Server (NTRS)
Wolfe, Robert E.; Ramapriyan, Hampapuram K.
2010-01-01
Standard products from the five sensors on NASA's Earth Observing System's (EOS) Terra satellite are being used world-wide for earth science research and applications. This paper describes the evolution of the Terra data systems over the last decade in which the distributed systems that produce, archive and distribute high quality Terra data products were scaled by two orders of magnitude.
CHLORINE DECAY AND BIOFILM STUDIES IN A PILOT SCALE DRINKING WATER DISTRIBUTION DEAD END PIPE SYSTEM
Chlorine decay experiments using a pilot-scale water distribution dead end pipe system were conducted to define relationships between chlorine decay and environmental factors. These included flow rate, biomass concentration and biofilm density, and initial chlorine concentrations...
Davies, Jim; Michaelian, Kourken
2016-08-01
This article argues for a task-based approach to identifying and individuating cognitive systems. The agent-based extended cognition approach faces a problem of cognitive bloat and has difficulty accommodating both sub-individual cognitive systems ("scaling down") and some supra-individual cognitive systems ("scaling up"). The standard distributed cognition approach can accommodate a wider variety of supra-individual systems but likewise has difficulties with sub-individual systems and faces the problem of cognitive bloat. We develop a task-based variant of distributed cognition designed to scale up and down smoothly while providing a principled means of avoiding cognitive bloat. The advantages of the task-based approach are illustrated by means of two parallel case studies: re-representation in the human visual system and in a biomedical engineering laboratory.
A multidisciplinary approach to the development of low-cost high-performance lightwave networks
NASA Technical Reports Server (NTRS)
Maitan, Jacek; Harwit, Alex
1991-01-01
Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environmental monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, which demand increasingly computation-intensive methods for analysis and control design as the network size and node/interaction complexity grow. It is therefore challenging to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be solved easily with MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on existing LMI-based results for MASs by overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. The proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows a move from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs can be shared among processors located at the networked nodes, increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in computational complexity compared with existing approaches.
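As a hedged illustration of the LMI machinery this abstract refers to (not the paper's actual conditions, which cover heterogeneous nonlinear interconnections), the sketch below solves a standard per-node state-feedback LMI with cvxpy in place of the MATLAB LMI toolbox: find P ≻ 0 and Y such that AP + PAᵀ + BY + YᵀBᵀ ≺ 0, then K = YP⁻¹ stabilizes the node dynamics. The matrices A and B are illustrative.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable node dynamics (illustrative)
B = np.array([[0.0], [1.0]])
n, m = B.shape[0], B.shape[1]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
eps = 1e-3
lmi = A @ P + P @ A.T + B @ Y + Y.T @ B.T
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n)]
cp.Problem(cp.Minimize(0), constraints).solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(P.value)       # stabilizing state-feedback gain
print("gain K =", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A + B @ K))
```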
Schock, Michael R; Hyland, Robert N; Welch, Meghan M
2008-06-15
Previously, contaminants, such as Al, As, and Ra, have been shown to accumulate in drinking-water distribution system solids. Accumulated contaminants could be periodically released back into the water supply causing elevated levels at consumers' taps, going undetected by most current regulatory monitoring practices and consequently constituting a hidden risk. The objective of this study was to determine the occurrence of over 40 major scale constituents, regulated metals, and other potential metallic inorganic contaminants in drinking-water distribution system Pb (lead) or Pb-lined service lines. The primary method of analysis was inductively coupled plasma-atomic emission spectroscopy, following complete decomposition of scale material. Contaminants and scale constituents were categorized by their average concentrations, and many metals of potential health concern were found to occur at levels sufficient to result in elevated levels at the consumer's taps if they were to be mobilized. The data indicate distinctly nonconservative behavior for many inorganic contaminants in drinking-water distribution systems. This finding suggests an imminent need for further research into the transport and fate of contaminants throughout drinking-water distribution system pipes, as well as a re-evaluation of monitoring protocols in order to more accurately determine the scope and levels of potential consumer exposure.
Feng, Huan; Tappero, Ryan; Zhang, Weiguo; ...
2015-07-26
This study is focused on micro-scale measurement of metal (Ca, Cl, Fe, K, Mn, Cu, Pb, and Zn) distributions in the Spartina alterniflora root system. The root samples were collected in the Yangtze River intertidal zone in July 2013. Synchrotron X-ray fluorescence (XRF), computed microtomography (CMT), and X-ray absorption near-edge structure (XANES) techniques, which provide micrometer-scale analytical resolution, were applied in this study. Although the metals of interest were distributed in both the epidermis and the vascular tissue at varying concentrations, the results showed that Fe plaque was mainly distributed in the root epidermis. Other metals (e.g., Cu, Mn, Pb, and Zn) were correlated with Fe in the epidermis, possibly due to scavenging by Fe plaque. Relatively high metal concentrations were observed at the root hair tip. This micro-scale investigation provides insights into the metal uptake and spatial distribution, as well as the role of Fe plaque in governing metal transport in the root system.
The objective of this work is to compare the properties of lead solids formed during bench-scale precipitation experiments to solids found on lead pipe removed from real drinking water distribution systems and metal coupons used in pilot scale corrosion testing. Specifically, so...
Dissolving Pb from lead service lines and Pb-containing brasses and solders has become a major health issue for many water distribution systems. Knowledge of the mineralogy of scales in these pipes is key to modeling this dissolution. The traditional method of determining their ...
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
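A minimal numpy sketch of the flavor of algorithm described above, not the authors' method: a block-Jacobi preconditioned Richardson iteration on the global normal equations, where each sub-system owns one parameter block, its measurement couples only neighboring blocks, and the step size plays the role of the scaling parameter mentioned in the abstract. The network, block sizes, and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, blk = 5, 2
dim = n_nodes * blk

# Block-banded measurement model: node i measures its own block and block i+1.
H = np.zeros((n_nodes * blk, dim))
for i in range(n_nodes):
    cols = slice(i * blk, min((i + 2) * blk, dim))
    H[i * blk:(i + 1) * blk, cols] = rng.normal(size=(blk, cols.stop - cols.start))
W = np.diag(rng.uniform(0.5, 2.0, size=n_nodes * blk))     # measurement weights
x_true = rng.normal(size=dim)
y = H @ x_true + 0.05 * rng.normal(size=n_nodes * blk)

A = H.T @ W @ H                    # normal-equation matrix (block banded)
b = H.T @ W @ y
D = np.zeros_like(A)               # block-diagonal preconditioner (each node's own block)
for i in range(n_nodes):
    s = slice(i * blk, (i + 1) * blk)
    D[s, s] = A[s, s]

# Safe step size: 1 / max eigenvalue of the preconditioned matrix.
alpha = 1.0 / np.max(np.abs(np.linalg.eigvals(np.linalg.solve(D, A))))
x = np.zeros(dim)
for k in range(200):
    # Each block update involves only its own and neighboring blocks of A and x.
    x = x + alpha * np.linalg.solve(D, b - A @ x)

x_opt = np.linalg.solve(A, b)      # centralized optimum for comparison
print("iterative vs centralized error:", np.linalg.norm(x - x_opt))
```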
Rotation And Scale Invariant Object Recognition Using A Distributed Associative Memory
NASA Astrophysics Data System (ADS)
Wechsler, Harry; Zimmerman, George Lee
1988-04-01
This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real, gray scale images, are presented to show the feasibility of our approach.
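The complex-log (log-polar) mapping at the heart of this system turns rotation and scaling about the image center into translations, which is what makes the associative-memory recall invariant to those changes. The sketch below is a generic log-polar resampler in numpy/scipy, not the authors' implementation; image size and grid resolution are arbitrary.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image, n_radii=64, n_angles=64):
    """Resample a 2-D image onto a log-polar grid centered on the image."""
    rows, cols = image.shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    max_r = np.hypot(cy, cx)
    log_r = np.linspace(0.0, np.log(max_r), n_radii)
    theta = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    y = cy + rr * np.sin(tt)          # Cartesian coordinates to sample
    x = cx + rr * np.cos(tt)
    return map_coordinates(image, [y, x], order=1, mode="constant")

if __name__ == "__main__":
    img = np.zeros((128, 128))
    img[40:88, 56:72] = 1.0           # a simple bar "object"
    lp = log_polar(img)
    # Scaling the object shifts the log-polar image along the radial axis;
    # rotating it shifts the image along the angular axis.
    print(lp.shape)
```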
Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings
ERIC Educational Resources Information Center
Morales, Ramses Victor
2009-01-01
Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…
EFFECT OF VARYING FLOW REGIMES ON BIOFILM DENSITIES IN A DISTRIBUTION SYSTEM SIMULATOR
Maintenance of a free chlorine residual within water distribution systems is used to reduce the possibility of microbial contamination. However, it has been demonstrated that biofilms within water distribution systems can harbor coliforms. In laboratory scale studies, others have...
Långmark, Jonas; Storey, Michael V.; Ashbolt, Nicholas J.; Stenström, Thor-Axel
2005-01-01
The accumulation and fate of model microbial “pathogens” within a drinking-water distribution system was investigated in naturally grown biofilms formed in a novel pilot-scale water distribution system provided with chlorinated and UV-treated water. Biofilms were exposed to 1-μm hydrophilic and hydrophobic microspheres, Salmonella bacteriophages 28B, and Legionella pneumophila bacteria, and their fate was monitored over a 38-day period. The accumulation of model pathogens was generally independent of the biofilm cell density and was shown to be dependent on particle surface properties, where hydrophilic spheres accumulated to a larger extent than hydrophobic ones. A higher accumulation of culturable legionellae was measured in the chlorinated system compared to the UV-treated system with increasing residence time. The fate of spheres and fluorescence in situ hybridization-positive legionellae was similar and independent of the primary disinfectant applied and water residence time. The more rapid loss of culturable legionellae compared to the fluorescence in situ hybridization-positive legionellae was attributed to a loss in culturability rather than physical desorption. Loss of bacteriophage 28B plaque-forming ability together with erosion may have affected their fate within biofilms in the pilot-scale distribution system. The current study has demonstrated that desorption was one of the primary mechanisms affecting the loss of microspheres, legionellae, and bacteriophage from biofilms within a pilot-scale distribution system as well as disinfection and biological grazing. In general, two primary disinfection regimens (chlorination and UV treatment) were not shown to have a measurable impact on the accumulation and fate of model microbial pathogens within a water distribution system. PMID:15691920
Nathaniel Anderson; J. Greg Jones; Deborah Page-Dumroese; Daniel McCollum; Stephen Baker; Daniel Loeffler; Woodam Chung
2013-01-01
Thermochemical biomass conversion systems have the potential to produce heat, power, fuels and other products from forest biomass at distributed scales that meet the needs of some forest industry facilities. However, many of these systems have not been deployed in this sector and the products they produce from forest biomass have not been adequately described or...
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
NASA Astrophysics Data System (ADS)
Salerno, K. Michael; Robbins, Mark O.
2013-12-01
Molecular dynamics simulations with varying damping are used to examine the effects of inertia and spatial dimension on sheared disordered solids in the athermal quasistatic limit. In all cases the distribution of avalanche sizes follows a power law over at least three orders of magnitude in dissipated energy or stress drop. Scaling exponents are determined using finite-size scaling for systems with 10³-10⁶ particles. Three distinct universality classes are identified corresponding to overdamped and underdamped limits, as well as a crossover damping that separates the two regimes. For each universality class, the exponent describing the avalanche distributions is the same in two and three dimensions. The spatial extent of plastic deformation is proportional to the energy dissipated in an avalanche. Both rise much more rapidly with system size in the underdamped limit where inertia is important. Inertia also lowers the mean energy of configurations sampled by the system and leads to an excess of large events like that seen in earthquake distributions for individual faults. The distribution of stress values during shear narrows to zero with increasing system size and may provide useful information about the size of elemental events in experimental systems. For overdamped and crossover systems the stress variation scales inversely with the square root of the system size. For underdamped systems the variation is determined by the size of the largest events.
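A hedged sketch of the kind of tail analysis behind such avalanche statistics: a maximum-likelihood (Hill/Clauset-style) estimate of a power-law exponent, applied here to synthetic samples rather than the simulation data of the paper.

```python
import numpy as np

def powerlaw_mle(sizes, s_min):
    """MLE exponent for P(s) ~ s^-tau on the tail s >= s_min (continuous form)."""
    tail = np.asarray(sizes, dtype=float)
    tail = tail[tail >= s_min]
    tau = 1.0 + tail.size / np.sum(np.log(tail / s_min))
    err = (tau - 1.0) / np.sqrt(tail.size)      # standard error of the estimate
    return tau, err

rng = np.random.default_rng(1)
tau_true, s_min = 1.5, 1.0
# Inverse-CDF sampling from a pure power law with exponent tau_true.
u = rng.uniform(size=100_000)
sizes = s_min * (1.0 - u) ** (-1.0 / (tau_true - 1.0))

tau_hat, err = powerlaw_mle(sizes, s_min)
print(f"estimated tau = {tau_hat:.3f} +/- {err:.3f} (true {tau_true})")
```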
Scale build-up, corrosion rate, and metal release associated with drinking water distribution system pipes have been suggested to relate to the oxidant type and concentration. Conversely, different distribution system metals may exert different oxidant demands. The impact of ox...
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy and overall cost. Network placement optimization is therefore an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is thus optimized by a global rough search followed by a local detailed search, with the clear advantage that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a given network and design the optimal placement efficiently.
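A much-simplified, hypothetical sketch of the grid-based genetic encoding described above: each chromosome is a list of grid-cell indices for the stations, and the fitness trades off coverage of target points against a fixed cost term. The grid size, coverage radius, and GA parameters are invented for illustration and do not reflect the paper's objective function.

```python
import numpy as np

rng = np.random.default_rng(2)
GRID, N_STATIONS, RADIUS = 20, 4, 3.5          # 20x20 placement grid
targets = rng.uniform(0, GRID, size=(60, 2))   # points that must be measured

def decode(chrom):
    """Grid-cell indices -> station coordinates at cell centers."""
    return np.column_stack((chrom // GRID + 0.5, chrom % GRID + 0.5))

def fitness(chrom):
    stations = decode(chrom)
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    covered = (d.min(axis=1) <= RADIUS).mean()   # fraction of covered targets
    return covered - 0.01 * N_STATIONS           # cost term (constant here, station count fixed)

def evolve(pop_size=40, generations=120, mut_rate=0.2):
    pop = rng.integers(0, GRID * GRID, size=(pop_size, N_STATIONS))
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                      # truncation selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        mates = parents[rng.integers(0, len(parents), len(children))]
        cuts = rng.integers(1, N_STATIONS, size=len(children))
        for i, c in enumerate(cuts):                                            # one-point crossover
            children[i, c:] = mates[i, c:]
        mutate = rng.random(children.shape) < mut_rate                          # random cell mutation
        children[mutate] = rng.integers(0, GRID * GRID, size=mutate.sum())
        pop = np.vstack((parents, children))
    best = pop[np.argmax([fitness(c) for c in pop])]
    return best, fitness(best)

best, score = evolve()
print("best placement (grid cells):", best, "fitness:", round(score, 3))
```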
Enterprise PACS and image distribution.
Huang, H K
2003-01-01
Around the world, the need to improve operational efficiency and deliver more cost-effective healthcare has led to the formation of many large-scale healthcare enterprises. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of PACS and image distribution as key technologies for cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to give readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution and its characteristics and ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One system currently under development, for enterprise-level chest tuberculosis (TB) screening in a Hong Kong healthcare enterprise, is described in detail. Copyright 2002 Elsevier Science Ltd.
Principles for scaling of distributed direct potable water reuse systems: a modeling study.
Guo, Tianjiao; Englehardt, James D
2015-05-15
Scaling of direct potable water reuse (DPR) systems involves tradeoffs of treatment facility economy-of-scale, versus cost and energy of conveyance including energy for upgradient distribution of treated water, and retention of wastewater thermal energy. In this study, a generalized model of the cost of DPR as a function of treatment plant scale, assuming futuristic, optimized conveyance networks, was constructed for purposes of developing design principles. Fractal landscapes representing flat, hilly, and mountainous topographies were simulated, with urban, suburban, and rural housing distributions placed by modified preferential growth algorithm. Treatment plants were allocated by agglomerative hierarchical clustering, networked to buildings by minimum spanning tree. Simulations assume advanced oxidation-based DPR system design, with 20-year design life and capability to mineralize chemical oxygen demand below normal detection limits, allowing implementation in regions where disposal of concentrate containing hormones and antiscalants is not practical. Results indicate that total DPR capital and O&M costs in rural areas, where systems that return nutrients to the land may be more appropriate, are high. However, costs in urban/suburban areas are competitive with current water/wastewater service costs at scales of ca. one plant per 10,000 residences. This size is relatively small, and costs do not increase significantly until plant service areas fall below 100 to 1000 homes. Based on these results, distributed DPR systems are recommended for consideration for urban/suburban water and wastewater system capacity expansion projects. Copyright © 2015 Elsevier Ltd. All rights reserved.
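A back-of-the-envelope sketch of the network-construction steps named in this abstract (agglomerative clustering of homes into plant service areas, then a minimum spanning tree for conveyance), using scipy on a synthetic housing layout. The cluster-size target, coordinates, and cost proxy (total pipe length) are placeholders, not the study's calibrated model.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(3)
homes = rng.uniform(0, 10_000, size=(1_000, 2))     # synthetic home coordinates (m)
homes_per_plant = 200                               # stand-in for the ~10,000-home scale in the paper

# Agglomerative (Ward) clustering, cut so clusters average ~homes_per_plant.
Z = linkage(homes, method="ward")
labels = fcluster(Z, t=homes.shape[0] // homes_per_plant, criterion="maxclust")

total_pipe = 0.0
for c in np.unique(labels):
    pts = homes[labels == c]
    plant = pts.mean(axis=0)                        # plant sited at the cluster centroid
    nodes = np.vstack((plant, pts))
    dist = squareform(pdist(nodes))                 # complete graph of conveyance options
    mst = minimum_spanning_tree(dist)               # cheapest tree reaching every home
    total_pipe += mst.sum()

print(f"{np.unique(labels).size} plants, total pipe length ~{total_pipe / 1000:.1f} km")
```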
Domain-area distribution anomaly in segregating multicomponent superfluids
NASA Astrophysics Data System (ADS)
Takeuchi, Hiromitsu
2018-01-01
The domain-area distribution in the phase transition dynamics of Z2 symmetry breaking is studied theoretically and numerically for segregating binary Bose-Einstein condensates in quasi-two-dimensional systems. Due to the dynamic-scaling law of the phase ordering kinetics, the domain-area distribution is described by a universal function of the domain area, rescaled by the mean distance between domain walls. The scaling theory for general coarsening dynamics in two dimensions hypothesizes that the distribution during the coarsening dynamics has a hierarchy with two scaling regimes, the microscopic and macroscopic regimes, with distinct power-law exponents. The power law in the macroscopic regime, where the domain size is larger than the mean distance, is universally represented with Fisher's exponent of percolation theory in two dimensions. On the other hand, the power-law exponent in the microscopic regime is sensitive to the microscopic dynamics of the system. This conjecture is confirmed by large-scale numerical simulations of the coupled Gross-Pitaevskii equation for binary condensates. In the numerical experiments of the superfluid system, the exponent in the microscopic regime anomalously reaches its theoretical upper limit of the general scaling theory. The anomaly comes from the quantum-fluid effect in the presence of circular vortex sheets, described by the hydrodynamic approximation neglecting the fluid compressibility. It is also found that the distribution of superfluid circulation along vortex sheets obeys a dynamic-scaling law with different power-law exponents in the two regimes. An analogy to quantum turbulence on the hierarchy of vorticity distribution and the applicability to chiral superfluid 3He in a slab are also discussed.
Transformation of Bisphenol A in Water Distribution Systems, A Pilot-scale Study
Halogenations of bisphenol A (BPA) in a pilot-scale water distribution system (WDS) of cement-lined ductile cast iron pipe were investigated under the following conditions: pH 7.3 ± 0.3, water flow velocity of 1.0 m/s, and water temperature of 25 ± 1 °C. The testing water was chlorinated f...
ERIC Educational Resources Information Center
Warfvinge, Per
2008-01-01
The ECTS grade transfer scale is an interface grade scale to help European universities, students and employers to understand the level of student achievement. Hence, the ECTS scale can be seen as an interface, transforming local scales to a common system where A-E denote passing grades. By definition, ECTS should distribute the passing students…
1996-04-01
Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...
NASA Astrophysics Data System (ADS)
Atsumi, Yu; Nakao, Hiroya
2012-05-01
A system of phase oscillators with repulsive global coupling and periodic external forcing undergoing asynchronous rotation is considered. The synchronization rate of the system can exhibit persistent fluctuations depending on parameters and initial phase distributions, and the amplitude of the fluctuations scales with the system size for uniformly random initial phase distributions. Using the Watanabe-Strogatz transformation that reduces the original system to low-dimensional macroscopic equations, we show that the fluctuations are collective dynamics of the system corresponding to low-dimensional trajectories of the reduced equations. It is argued that the amplitude of the fluctuations is determined by the inhomogeneity of the initial phase distribution, resulting in system-size scaling for the random case.
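A hedged numerical sketch of this model class: identical phase oscillators with repulsive (negative) global sine coupling plus a periodic external force, integrated with a simple Euler scheme while tracking the Kuramoto order parameter r(t), whose persistent fluctuations are the collective dynamics the paper analyzes via the Watanabe-Strogatz reduction. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def simulate(n=1000, K=-0.5, F=0.3, omega=1.0, forcing_freq=1.0,
             dt=0.01, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)          # uniformly random initial phases
    r_history = np.empty(steps)
    for k in range(steps):
        z = np.mean(np.exp(1j * theta))           # mean field r * exp(i*psi)
        r, psi = np.abs(z), np.angle(z)
        t = k * dt
        dtheta = (omega
                  + K * r * np.sin(psi - theta)             # repulsive global coupling (K < 0)
                  + F * np.sin(forcing_freq * t - theta))   # periodic external forcing
        theta = theta + dt * dtheta
        r_history[k] = r
    return r_history

for n in (100, 1000, 4000):
    r = simulate(n=n)
    print(f"N={n:5d}  mean r = {r[1000:].mean():.3f}  fluctuation std = {r[1000:].std():.4f}")
```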
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov function approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems, and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Scaling theory for information networks.
Moses, Melanie E; Forrest, Stephanie; Davis, Alan L; Lodder, Mike A; Brown, James H
2008-12-06
Networks distribute energy, materials and information to the components of a variety of natural and human-engineered systems, including organisms, brains, the Internet and microprocessors. Distribution networks enable the integrated and coordinated functioning of these systems, and they also constrain their design. The similar hierarchical branching networks observed in organisms and microprocessors are striking, given that the structure of organisms has evolved via natural selection, while microprocessors are designed by engineers. Metabolic scaling theory (MST) shows that the rate at which networks deliver energy to an organism is proportional to its mass raised to the 3/4 power. We show that computational systems are also characterized by nonlinear network scaling and use MST principles to characterize how information networks scale, focusing on how MST predicts properties of clock distribution networks in microprocessors. The MST equations are modified to account for variation in the size and density of transistors and terminal wires in microprocessors. Based on the scaling of the clock distribution network, we predict a set of trade-offs and performance properties that scale with chip size and the number of transistors. However, there are systematic deviations between power requirements on microprocessors and predictions derived directly from MST. These deviations are addressed by augmenting the model to account for decentralized flow in some microprocessor networks (e.g. in logic networks). More generally, we hypothesize a set of constraints between the size, power and performance of networked information systems including transistors on chips, hosts on the Internet and neurons in the brain.
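The 3/4-power claim above is the kind of allometric relationship typically estimated by a log-log regression; the minimal sketch below fits the exponent b in rate ≈ c · size^b on synthetic data generated with b = 3/4 plus noise.

```python
import numpy as np

rng = np.random.default_rng(4)
size = np.logspace(0, 8, 60)                                   # "mass" or chip size, arbitrary units
rate = 2.0 * size ** 0.75 * np.exp(0.1 * rng.normal(size=size.size))   # synthetic delivery rates

# Ordinary least squares in log-log space: slope = scaling exponent.
b, log_c = np.polyfit(np.log(size), np.log(rate), 1)
print(f"fitted exponent b = {b:.3f} (MST predicts 3/4), prefactor c = {np.exp(log_c):.2f}")
```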
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity for the contiguous United States using traditional inverters of approximately 170 GW without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.
On the theory of intensity distributions of tornadoes and other low pressure systems
NASA Astrophysics Data System (ADS)
Schielicke, Lisa; Névir, Peter
Approaching from a theoretical point of view, this work presents a theory which unifies intensity distributions of different low pressure systems, based on an energy of displacement. Resulting from a generalized Boltzmann distribution, the expression of this energy of displacement is obtained by radial integration over the forces which are in balance with the pressure gradient force in the horizontal equation of motion. A scale analysis helps to find out which balance of forces prevails. According to the prevailing balances, the expression of the energy of displacement differs for various depressions. Investigating the system at the moment of maximum intensity, the energy of displacement can be interpreted as the work that has to be done to generate and finally eliminate the pressure anomaly, respectively. By choosing the appropriate balance of forces, number-intensity (energy of displacement) distributions show exponential behavior with the same decay rate β for tornadoes and cyclones, if tropical and extra-tropical cyclones are investigated together. The decay rate is related to a characteristic (universal) scale of the energy of displacement which has approximately the value E_u = β⁻¹ ≈ 1000 m² s⁻². In consequence, while the different balances of forces cause the scales of velocity, the energy of displacement scale seems to be universal for all low pressure systems. Additionally, if intensity is expressed as lifetime minimum pressure, the number-intensity (pressure) distributions should be power-law distributed. Moreover, this work points out that the choice of the physical quantity which represents the intensity is important concerning the behavior of intensity distributions. Various expressions of the intensity like velocity, kinetic energy, energy of displacement and pressure are possible, but lead to different behavior of the distributions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2018-01-23
Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.
Self-similarity of waiting times in fracture systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Niccolini, G.; Bosia, F.; Carpinteri, A.
2009-08-15
Experimental and numerical results are presented for a fracture experiment carried out on a fiber-reinforced element under flexural loading, and a statistical analysis is performed for acoustic emission waiting-time distributions. By an optimization procedure, a recently proposed scaling law describing these distributions for different event magnitude scales is confirmed by both experimental and numerical data, thus reinforcing the idea that fracture of heterogeneous materials has scaling properties similar to those found for earthquakes. Analysis of the different scaling parameters obtained for experimental and numerical data leads us to formulate the hypothesis that the type of scaling function obtained depends on the level of correlation among fracture events in the system.
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against abnormal conditions is one of the important tasks. The addition of distributed generation and power-electronics-based solid state transformers adds to the complexity of the protection. The FREEDM loop system has a fault current limiter and, in addition, the solid state transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team deemed that scheme unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. Wireless communication is used to protect the large-scale meshed distributed generation from faults. The trip signal generated by the pilot protection system triggers the fault isolation devices (FIDs), which act as electronic circuit breakers and open to isolate the fault. The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The protection system is validated with a hardware model built from commercial relays at the ASU power laboratory.
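The abstract does not give the protection logic itself; as a rough illustration of the kind of pilot (differential) comparison such a scheme performs before sending a wireless trip, consider the sketch below. The function name, per-unit values and pickup threshold are illustrative assumptions, not the FREEDM design.

```python
def pilot_trip(i_local_pu, i_remote_pu, pickup_pu=0.5):
    """Generic pilot (differential) protection check for one protected segment.

    Currents (per unit) are measured flowing into the segment at both ends; for
    load flow through a healthy segment they nearly cancel, while for an
    internal fault they add up.
    """
    differential = abs(i_local_pu + i_remote_pu)
    return differential > pickup_pu

# Illustrative use: on a trip, the wireless channel would carry the signal to
# open the fault isolation devices (FIDs) and block the SST on that segment.
if pilot_trip(1.1, 0.9):   # internal fault: both ends feed the fault
    print("send trip: open FIDs, block SST")
```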
2014-09-18
…and full-scale experimental verifications towards ground-satellite quantum key distribution… Conceptual Modeling of a Quantum Key Distribution Simulation Framework Using the Discrete Event System Specification. DISSERTATION. Jeffrey D. Morris. Presented to the Faculty, Department of Systems…
Scaling behavior of sleep-wake transitions across species
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan; Chou, Thomas; Ivanov, Plamen Ch.; Penzel, Thomas; Mochizuki, Takatoshi; Scammell, Thomas; Saper, Clifford B.; Stanley, H. Eugene
2003-03-01
Uncovering the mechanisms controlling sleep is a fascinating scientific challenge. It can be viewed as transitions of states of a very complex system, the brain. We study the time dynamics of short awakenings during sleep for three species: humans, rats and mice. We find, for all three species, that wake durations follow a power-law distribution, and sleep durations follow exponential distributions. Surprisingly, all three species have the same power-law exponent for the distribution of wake durations, but the exponential time scale of the distributions of sleep durations varies across species. We suggest that the dynamics of short awakenings are related to species-independent fluctuations of the system, while the dynamics of sleep is related to system-dependent mechanisms which change with species.
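A minimal sketch (not from the study) contrasting the two fits the abstract describes: a power-law exponent for wake durations and an exponential time scale for sleep durations. The synthetic bout data and the cutoff are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic duration data (minutes); real data would come from scored sleep records.
wake = rng.pareto(a=1.3, size=4000) + 1.0       # heavy-tailed wake bouts
sleep = rng.exponential(scale=20.0, size=4000)  # exponentially distributed sleep bouts

# Power-law exponent via the maximum-likelihood (Hill-type) estimator above a cutoff x_min.
x_min = 1.0
tail = wake[wake >= x_min]
alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))

# Exponential time scale of sleep durations (the MLE is simply the sample mean).
tau_hat = sleep.mean()

print(f"wake power-law exponent ≈ {alpha_hat:.2f}, sleep time scale ≈ {tau_hat:.1f} min")
```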
Fractal topography and subsurface water flows from fluvial bedforms to the continental shield
Worman, A.; Packman, A.I.; Marklund, L.; Harvey, J.W.; Stone, S.H.
2007-01-01
Surface-subsurface flow interactions are critical to a wide range of geochemical and ecological processes and to the fate of contaminants in freshwater environments. Fractal scaling relationships have been found in distributions of both land surface topography and solute efflux from watersheds, but the linkage between those observations has not been realized. We show that the fractal nature of the land surface in fluvial and glacial systems produces fractal distributions of recharge, discharge, and associated subsurface flow patterns. Interfacial flux tends to be dominated by small-scale features while the flux through deeper subsurface flow paths tends to be controlled by larger-scale features. This scaling behavior holds at all scales, from small fluvial bedforms (tens of centimeters) to the continental landscape (hundreds of kilometers). The fractal nature of surface-subsurface water fluxes yields a single scale-independent distribution of subsurface water residence times for both near-surface fluvial systems and deeper hydrogeological flows. Copyright 2007 by the American Geophysical Union.
Bringing the CMS distributed computing system into scalable operations
NASA Astrophysics Data System (ADS)
Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.
2010-04-01
Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.
NASA Astrophysics Data System (ADS)
Kelling, S.
2017-12-01
The goal of biodiversity research is to identify, explain, and predict why a species' distribution and abundance vary through time, space, and with features of the environment. Measuring these patterns and predicting their responses to change are not exercises in curiosity. Today, they are essential tasks for understanding the profound effects that humans have on earth's natural systems, and for developing science-based environmental policies. To gain insight about species' distribution patterns requires studying natural systems at appropriate scales, yet studies of ecological processes continue to be compromised by inadequate attention to scale issues. How spatial and temporal patterns in nature change with scale often reflects fundamental laws of physics, chemistry, or biology, and we can identify such basic, governing laws only by comparing patterns over a wide range of scales. This presentation will provide several examples that integrate bird observations made by volunteers with NASA Earth Imagery, using Big Data analysis techniques to analyze the temporal patterns of bird occurrence across scales, from hemisphere-wide views of bird distributions to the impact of powerful city lights on bird migration.
Optical interconnect for large-scale systems
NASA Astrophysics Data System (ADS)
Dress, William
2013-02-01
This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
Full Scale Drinking Water System Decontamination at the Water Security Test Bed.
Szabo, Jeffrey; Hall, John; Reese, Steve; Goodrich, Jim; Panguluri, Sri; Meiners, Greg; Ernst, Hiba
2018-03-20
The EPA's Water Security Test Bed (WSTB) facility is a full-scale representation of a drinking water distribution system. In collaboration with the Idaho National Laboratory (INL), EPA designed the WSTB facility to support full-scale evaluations of water infrastructure decontamination, real-time sensors, mobile water treatment systems, and decontamination of premise plumbing and appliances. The EPA research focused on decontamination of 1) Bacillus globigii (BG) spores, a non-pathogenic surrogate for Bacillus anthracis, and 2) Bakken crude oil. Flushing and chlorination effectively removed most BG spores from the bulk water, but BG spores still remained on the pipe wall coupons. Soluble oil components of Bakken crude oil were removed by flushing, although oil components persisted in the dishwasher and refrigerator water dispenser. Using this full-scale distribution system allows EPA to 1) test contaminants without any human health or ecological risk and 2) inform water systems of effective methodologies for responding to possible contamination incidents.
Distributed intelligent urban environment monitoring system
NASA Astrophysics Data System (ADS)
Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang
2018-02-01
Environmental pollution and ecological destruction have developed into a major worldwide social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance, but, overall, the current environmental monitoring system faces a series of problems. Based on electrochemical sensors, this paper presents a small, low-cost, easy-to-deploy urban environmental quality monitoring terminal; multiple terminals constitute a distributed network. The system has been used in small-scale demonstration applications, which confirmed that it is suitable for large-scale deployment.
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
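A minimal sketch of Gaussian-fit center extraction for a single image column of a laser stripe, with a simple residual score that could serve as a flag for compensation; the paper's structural-similarity evaluation and compensation model are not reproduced, and the function names and synthetic profile are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / sigma) ** 2) + offset

def stripe_center(column_intensity):
    """Estimate the laser stripe center for one image column by Gaussian fitting."""
    x = np.arange(column_intensity.size, dtype=float)
    p0 = [column_intensity.max() - column_intensity.min(),
          float(np.argmax(column_intensity)), 2.0, float(column_intensity.min())]
    params, _ = curve_fit(gaussian, x, column_intensity, p0=p0)
    fitted = gaussian(x, *params)
    # Normalized RMS residual; a large value could flag columns needing compensation.
    residual = np.sqrt(np.mean((fitted - column_intensity) ** 2)) / (params[0] + 1e-9)
    return params[1], residual

# Example with a synthetic, slightly noisy stripe profile.
x = np.arange(64, dtype=float)
profile = gaussian(x, 200.0, 30.4, 3.0, 10.0) + np.random.default_rng(2).normal(0, 2, 64)
center, score = stripe_center(profile)
print(f"estimated center ≈ {center:.2f} px (residual score {score:.3f})")
```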
Critiquing 'pore connectivity' as basis for in situ flow in geothermal systems
NASA Astrophysics Data System (ADS)
Kenedi, C. L.; Leary, P.; Malin, P.
2013-12-01
Geothermal system in situ flow systematics derived from detailed examination of grain-scale structures, fabrics, mineral alteration, and pore connectivity may be extremely misleading if/when extrapolated to reservoir-scale flow structure. In oil/gas field clastic reservoir operations, it is standard to assume that small-scale studies of flow fabric - notably the Kozeny-Carman and Archie's Law treatments at the grain scale and well-log/well-bore sampling of formations/reservoirs at the cm-m scale - are adequate to define the reservoir-scale flow properties. In the case of clastic reservoirs, however, a wide range of reservoir-scale data wholly discredits this extrapolation. Well-log data show that grain-scale fracture density fluctuation power scales inversely with spatial frequency k, S(k) ~ 1/k^β, 1.0 < β < 1.2, for 1 cycle/km < k < 1 cycle/cm; this scaling is a 'universal' feature of well logs (neutron porosity, sonic velocity, chemical abundance, mass density, resistivity) in many forms of clastic rock and instances of shale bodies, for both horizontal and vertical wells. Grain-scale fracture density correlates with in situ porosity; spatial fluctuations of porosity φ in well core correlate with spatial fluctuations in the logarithm of well-core permeability, δφ ~ δlog(κ), with typical correlation coefficient ~85%; a similar relation is observed in consolidating sediments/clays, indicating a generic coupling between fluid pressure and solid deformation at pore sites. In situ macroscopic flow systems are lognormally distributed according to κ ~ κ0 exp(α(φ-φ0)), with α >> 1 an empirical parameter for the degree of in situ fracture connectivity; the lognormal distribution applies to well productivities in US oil fields and NZ geothermal fields, 'frack productivity' in oil/gas shale-body reservoirs, ore grade distributions, and trace element abundances. Although presently available evidence for these properties in geothermal reservoirs is limited, there are indications that geothermal system flow essentially obeys the same 'universal' in situ flow rules as does clastic rock: well-log data from Los Azufres, MX, show power-law scaling S(k) ~ 1/k^β, 1.2 < β < 1.4, for the spatial frequency range 2 cycles/km to 0.5 cycle/m (higher β values are likely due to the relatively fresh nature of geothermal systems); well core at Bulalo (PH) and Ohaaki (NZ) shows statistically significant spatial correlation δφ ~ δlog(κ); well productivity at Ohaaki/Ngawha (NZ) and in geothermal systems elsewhere is lognormally distributed; and K/Th/U abundances are lognormally distributed in Los Azufres well logs. We therefore caution that small-scale evidence for in situ flow fabric in geothermal systems that is interpreted in terms of 'pore connectivity' may in fact not reflect how small-scale chemical processes are integrated into a large-scale geothermal flow structure. Rather, such small-scale studies should (perhaps) be considered in terms of the above flow rules. These flow rules are easily incorporated into standard flow simulation codes, in particular the OPM (Open Porous Media) open-source industry-standard flow code. Geochemical transport data relevant to geothermal systems can thus be expected to be well modeled by OPM or equivalent (e.g., INL/LANL) codes.
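A minimal synthetic sketch of the three empirical rules quoted above: a porosity log with a power-law fluctuation spectrum S(k) ~ 1/k^β, the δφ ~ δlog(κ) correlation, and a lognormal permeability field κ ~ κ0 exp(α(φ-φ0)). All numerical values are illustrative assumptions, not field data.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1) Generate a porosity log with power-law fluctuation spectrum S(k) ~ 1/k^beta.
n, beta = 4096, 1.1
k = np.fft.rfftfreq(n)
k[0] = k[1]                      # avoid division by zero at k = 0
spectrum = k ** (-beta / 2.0)    # amplitude ~ sqrt(power)
phases = rng.uniform(0, 2 * np.pi, k.size)
phi_fluct = np.fft.irfft(spectrum * np.exp(1j * phases), n)
phi = 0.15 + 0.02 * phi_fluct / phi_fluct.std()   # porosity around phi0 = 0.15

# 2)-3) Map porosity fluctuations to lognormally distributed permeability.
alpha, kappa0, phi0 = 30.0, 1e-14, 0.15           # alpha >> 1: strong fracture connectivity
kappa = kappa0 * np.exp(alpha * (phi - phi0))

corr = np.corrcoef(phi, np.log(kappa))[0, 1]
print(f"corr(phi, log kappa) = {corr:.2f}")       # equals 1 by construction here
```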
Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.
2013-01-01
The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
NASA Astrophysics Data System (ADS)
Possemiers, Mathias; Huysmans, Marijke; Batelaan, Okke
2015-08-01
Adequate aquifer characterization and simulation using heat transport models are indispensable for determining the optimal design for aquifer thermal energy storage (ATES) systems and wells. Recent model studies indicate that meter-scale heterogeneities in the hydraulic conductivity field introduce a considerable uncertainty in the distribution of thermal energy around an ATES system and can lead to a reduction in the thermal recoverability. In a study site in Bierbeek, Belgium, the influence of centimeter-scale clay drapes on the efficiency of a doublet ATES system and the distribution of the thermal energy around the ATES wells are quantified. Multiple-point geostatistical simulation of edge properties is used to incorporate the clay drapes in the models. The results show that clay drapes have an influence both on the distribution of thermal energy in the subsurface and on the efficiency of the ATES system. The distribution of the thermal energy is determined by the strike of the clay drapes, with the major axis of anisotropy parallel to the clay drape strike. The clay drapes have a negative impact (3.3-3.6 %) on the energy output in the models without a hydraulic gradient. In the models with a hydraulic gradient, however, the presence of clay drapes has a positive influence (1.6-10.2 %) on the energy output of the ATES system. It is concluded that it is important to incorporate small-scale heterogeneities in heat transport models to get a better estimate of ATES efficiency and distribution of thermal energy.
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large-scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy
NASA Astrophysics Data System (ADS)
Doi, Yusuke; Wakayama, Shirou; Ozaki, Satoshi
To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are some proposals to manage item-level product traceability information with DHTs. In such an application, each of a huge number of item-level ID entries needs to be available on a DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well against the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data as they detect data loss, or even migrate to another DHT system. We show why it scales well against the number of entries on a DHT. We have confirmed our approach with a prototype. Evaluation shows our approach fits well on a DHT with a low rate of failure and a huge number of data entries.
On Predictability of System Anomalies in Real World
2011-08-01
distributed system SETI@home [44]. Different from the above work, this work focuses on quantifying the predictability of real-world system anomalies. … J.-M. Vincent, and D. Anderson, "Mining for statistical models of availability in large-scale distributed systems: An empirical study of SETI@home," in Proc. of MASCOTS, Sept. 2009.
Energy Systems Integration News | Energy Systems Integration Facility
… laboratories to attend the workshop on best practices for distributed energy resource (DER) security. … The U.S. Department of Energy (DOE) H2@Scale initiative is exploring the potential for wide-scale …
EFFECT OF BACTERIAL SULFATE REDUCTION ON IRON-CORROSION SCALES
Iron-sulfur geochemistry is important in many natural and engineered environments including drinking water systems. In the anaerobic environment beneath scales of corroding iron drinking water distribution system pipes, sulfate reducing bacteria (SRB) produce sulfide from natura...
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale is dependent on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
Corrosion and scaling potential in the drinking water distribution system of Tabriz, northwestern Iran.
Taghipour, Hassan; Shakerkhatibi, Mohammad; Pourakbar, Mojtaba; Belvasi, Mehdi
2012-01-01
This paper discusses the corrosion and scaling potential of the Tabriz drinking water distribution system in northwest Iran. Internal corrosion of piping is a serious problem in the drinking water industry. Corrosive water can cause the intrusion of heavy metals, especially lead, into the water, thereby affecting public health. The aim of this study was to determine the corrosion and scaling potential in the potable water distribution system of Tabriz during the spring and summer of 2011. This study was carried out using the Langelier Saturation Index, Ryznar Stability Index, Puckorius Scaling Index, and Aggressiveness Index. Eighty samples were taken from all over the city in two seasons, spring and summer. Related parameters, including temperature, pH, total dissolved solids, calcium hardness, and total alkalinity, were measured in all samples in the laboratory according to standard methods. For the statistical analysis of the results, SPSS software (version 11.5) was used. The mean and standard deviation values of the Langelier, Ryznar, Puckorius and Aggressiveness Indices were -0.68 (±0.43), 8.43 (±0.55), 7.86 (±0.36) and 11.23 (±0.43), respectively. The survey of corrosion indices showed that Tabriz drinking water is corrosive. For corrosion control, it is suggested that a laboratory study, taking into account the condition of the distribution system, be carried out to adjust the relevant parameters such as pH.
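A small sketch of how two of the reported indices could be computed from bulk water parameters. The saturation pH here uses one common empirical approximation (not necessarily the study's exact procedure), the Puckorius index is omitted because it requires an equilibrium pH, and the example values are illustrative, not the Tabriz measurements.

```python
import math

def saturation_ph(tds_mg_l, temp_c, ca_hardness_mg_l, alkalinity_mg_l):
    """Approximate CaCO3 saturation pH (one common empirical formulation)."""
    a = (math.log10(tds_mg_l) - 1.0) / 10.0
    b = -13.12 * math.log10(temp_c + 273.0) + 34.55
    c = math.log10(ca_hardness_mg_l) - 0.4
    d = math.log10(alkalinity_mg_l)
    return (9.3 + a + b) - (c + d)

def corrosion_indices(ph, tds_mg_l, temp_c, ca_hardness_mg_l, alkalinity_mg_l):
    phs = saturation_ph(tds_mg_l, temp_c, ca_hardness_mg_l, alkalinity_mg_l)
    lsi = ph - phs                 # Langelier: negative values suggest a corrosive tendency
    rsi = 2.0 * phs - ph           # Ryznar: values well above ~7 suggest a corrosive tendency
    ai = ph + math.log10(ca_hardness_mg_l * alkalinity_mg_l)  # Aggressiveness Index
    return lsi, rsi, ai

# Illustrative inputs only (hardness and alkalinity as mg/L CaCO3).
print(corrosion_indices(ph=7.6, tds_mg_l=450, temp_c=18,
                        ca_hardness_mg_l=180, alkalinity_mg_l=150))
```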
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With the ever more stringent requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may represent the intrinsic characteristics of such systems more precisely. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step-response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
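The paper's fractional-order PID-type distributed controller is not reproduced here; as a rough illustration of the kind of optimisation each subsystem solves, the sketch below performs one unconstrained single-loop dynamic matrix control step from step-response coefficients. The PID operator in the cost, the Nash coordination and the Oustaloup approximation are omitted, and all parameter values are assumptions.

```python
import numpy as np

def dmc_control_move(step_coeffs, free_response, setpoint, n_p=20, n_c=5, lam=0.1):
    """One unconstrained DMC step: minimise ||r - (free + A*du)||^2 + lam*||du||^2."""
    a = np.asarray(step_coeffs, dtype=float)
    # Dynamic matrix built from step-response coefficients: A[i, j] = a[i - j] for i >= j.
    A = np.zeros((n_p, n_c))
    for j in range(n_c):
        A[j:, j] = a[:n_p - j]
    r = np.full(n_p, setpoint) - np.asarray(free_response, dtype=float)[:n_p]
    du = np.linalg.solve(A.T @ A + lam * np.eye(n_c), A.T @ r)
    return du[0]   # receding horizon: apply only the first move

# Illustrative first-order step response (it could come from an Oustaloup-type
# high-order integer approximation of a fractional-order model).
t = np.arange(1, 40)
step_coeffs = 1.0 - np.exp(-t / 8.0)
free_response = np.zeros(40)       # plant at rest, no prior moves
print(f"first control move: {dmc_control_move(step_coeffs, free_response, setpoint=1.0):.3f}")
```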
Zhang, Zhe; Stout, Janet E; Yu, Victor L; Vidic, Radisav
2008-01-01
Previous studies showed that temperature and total organic carbon in drinking water can cause chlorine dioxide (ClO₂) loss in a water distribution system and affect the efficiency of ClO₂ for Legionella control. However, among the various causes of ClO₂ loss in a drinking water distribution system, the loss of disinfectant due to reaction with corrosion scales has not been studied in detail. In this study, the corrosion scales from a galvanized iron pipe and a copper pipe that had been in service for more than 10 years were characterized by energy dispersive spectroscopy (EDS) and X-ray diffraction (XRD). The impact of these corrosion scale materials on ClO₂ decay was investigated in de-ionized water at 25 and 45 °C in a batch reactor with a floating glass cover. ClO₂ decay was also investigated in a specially designed reactor made from the iron and copper pipes to obtain more realistic reaction rate data. Goethite (α-FeOOH) and magnetite (Fe₃O₄) were identified as the main components of the iron corrosion scale. Cuprite (Cu₂O) was identified as the major component of the copper corrosion scale. The reaction of ClO₂ with both iron and copper oxides followed first-order kinetics. First-order decay rate constants for ClO₂ reactions with iron corrosion scales obtained from the used service pipe, and in the iron pipe reactor itself, ranged from 0.025 to 0.083 min⁻¹. The decay rate constant for ClO₂ with Cu₂O powder and in the copper pipe reactor was much smaller, ranging from 0.0052 to 0.0062 min⁻¹. Based on these results, it can be concluded that corrosion scale will cause much more significant ClO₂ loss in corroded iron pipes of the distribution system than the total organic carbon that may be present in finished water.
Spatio-temporal assessment of food safety risks in Canadian food distribution systems using GIS.
Hashemi Beni, Leila; Villeneuve, Sébastien; LeBlanc, Denyse I; Côté, Kevin; Fazil, Aamir; Otten, Ainsley; McKellar, Robin; Delaquis, Pascal
2012-09-01
While geographic information systems (GIS) are widely applied in public health, there have been comparatively few examples of applications that extend to the assessment of risks in food distribution systems. GIS can provide decision makers with strong computing platforms for spatial data management, integration, analysis, querying and visualization. The present report addresses some spatio-temporal analyses in a complex food distribution system and defines influence areas as travel time zones generated through road network analysis on a national scale rather than on a community scale. In addition, a dynamic risk index is defined to translate a contamination event into a public health risk as time progresses. More specifically, in this research, GIS is used to map the Canadian produce distribution system, analyze accessibility to contaminated product by consumers, and estimate the level of risk associated with a contamination event over time, as illustrated in a scenario. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.
Channel-Island Connectivity Affects Water Exposure Time Distributions in a Coastal River Delta
NASA Astrophysics Data System (ADS)
Hiatt, Matthew; Castañeda-Moya, Edward; Twilley, Robert; Hodges, Ben R.; Passalacqua, Paola
2018-03-01
The exposure time is a water transport time scale defined as the cumulative amount of time a water parcel spends in the domain of interest regardless of the number of excursions from the domain. Transport time scales are often used to characterize the nutrient removal potential of aquatic systems, but exposure time distribution estimates are scarce for deltaic systems. Here we analyze the controls on exposure time distributions using a hydrodynamic model in two domains: the Wax Lake delta in Louisiana, USA, and an idealized channel-island complex. In particular, we study the effects of river discharge, vegetation, network geometry, and tides and use a simple model for the fractional removal of nitrate. In both domains, we find that channel-island hydrological connectivity significantly affects exposure time distributions and nitrate removal. The relative contributions of the island and channel portions of the delta to the overall exposure time distribution are controlled by island vegetation roughness and network geometry. Tides have a limited effect on the system's exposure time distribution but can introduce significant spatial variability in local exposure times. The median exposure time for the WLD model is 10 h under the conditions tested and water transport within the islands contributes to 37-50% of the network-scale exposure time distribution and 52-73% of the modeled nitrate removal, indicating that islands may account for the majority of nitrate removal in river deltas.
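A minimal sketch (not from the study) of how an exposure time distribution from tracked water parcels might be combined with a simple first-order model of fractional nitrate removal; the rate constant and parcel data are illustrative assumptions, not the Wax Lake delta values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative exposure times (hours) for water parcels, e.g. from particle tracking.
exposure_hours = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=10000)

# Simple first-order uptake model: fraction of nitrate removed after exposure time t.
k_per_hour = 0.02                                   # illustrative reaction rate
removed_fraction = 1.0 - np.exp(-k_per_hour * exposure_hours)

print(f"median exposure time: {np.median(exposure_hours):.1f} h")
print(f"mean nitrate removal over all parcels: {removed_fraction.mean():.1%}")
```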
Investigating the Luminous Environment of SDSS Data Release 4 Mg II Absorption Line Systems
NASA Astrophysics Data System (ADS)
Caler, Michelle A.; Ravi, Sheth K.
2018-01-01
We investigate the luminous environment within a few hundred kiloparsecs of 3760 Mg II absorption line systems. These systems lie along 3760 lines of sight to Sloan Digital Sky Survey (SDSS) Data Release 4 QSOs, have redshifts that range between 0.37 ≤ z ≤ 0.82, and have rest equivalent widths greater than 0.18 Å. We use the SDSS Catalog Archive Server to identify galaxies projected near 3 arcminutes of the absorbing QSO’s position, and a background subtraction technique to estimate the absolute magnitude distribution and luminosity function of galaxies physically associated with these Mg II absorption line systems. The Mg II absorption system sample is split into two parts, with the split occurring at rest equivalent width 0.8 Å, and the resulting absolute magnitude distributions and luminosity functions compared on scales ranging from 50 h-1 kpc to 880 h-1 kpc. We find that, on scales of 100 h-1 kpc and smaller, the two distributions differ: the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width ≥ 0.8 Å (2750 lines of sight) seems to be approximated by that of elliptical-Sa type galaxies, whereas the absolute magnitude distribution of galaxies associated with systems of rest frame equivalent width < 0.8 Å (1010 lines of sight) seems to be approximated by that of Sa-Sbc type galaxies. However, on larger scales greater than 200 h-1 kpc, both distributions are broadly consistent with that of elliptical-Sa type galaxies. We note that, in a broader context, these results represent an estimate of the bright end of the galaxy luminosity function at a median redshift of z ˜ 0.65.
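A minimal sketch of the background-subtraction idea described above: galaxy counts in absolute magnitude bins near the absorber sightlines, minus background counts scaled to the same solid angle. The function name, bins and placeholder samples are assumptions, not the SDSS measurements.

```python
import numpy as np

def excess_counts(target_mags, background_mags, target_area, background_area, bins):
    """Background-subtracted magnitude distribution around absorber sightlines."""
    n_target, _ = np.histogram(target_mags, bins=bins)
    n_background, _ = np.histogram(background_mags, bins=bins)
    # Scale background counts to the solid angle of the target apertures.
    return n_target - n_background * (target_area / background_area)

bins = np.arange(-24, -16, 0.5)                      # absolute magnitude bins
rng = np.random.default_rng(5)
target = rng.normal(-20.5, 1.2, 800)                 # placeholder samples
background = rng.normal(-20.0, 1.5, 5000)
print(excess_counts(target, background, target_area=1.0, background_area=8.0, bins=bins))
```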
Architecture and Programming Models for High Performance Intensive Computation
2016-06-29
Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015, Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL… "The Mahali project," Communications Magazine, vol. 52, pp. 111-133, Aug 2014.
Identification And Distribution Of Vanadinite (Pb₅(V⁵⁺O₄)₃Cl) In Lead Pipe Corrosion By-Products
This study presents the first detailed look at vanadium (V) speciation in drinking water pipe corrosion scales. A pool of 34 scale layers from 15 lead or lead-lined pipes representing eight different municipal drinking water distribution systems in the Northeastern and Midwester...
Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus
2018-08-01
Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full-scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km, and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. As in previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months as opposed to the summer months in this study. In addition to temperature effects, the seasonal variations were also likely to be influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system. However, temporal variations were consistently stronger as compared to spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical towards determining the impact of changes in source water quality, environmental conditions, and process operations on the changes in microbial community composition in the drinking water distribution system. Copyright © 2018 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Boemer, Jens C.; Vittal, Eknath
The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale integration of distributed power can relieve current environmental pressures while at the same time increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of typical distributed power integration on distribution network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO), with improved learning factors and inertia weight, was proposed to solve the distributed generation planning problem for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can effectively reduce system network losses and improve the economic performance of system operation with distributed generation.
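The paper's IPSO modifications are not reproduced here; as a rough illustration of the optimisation loop such planning relies on, the sketch below runs a plain particle swarm with a linearly decreasing inertia weight on a toy stand-in for the network-loss objective. The loss function, bounds and all parameter values are assumptions for illustration.

```python
import numpy as np

def toy_loss(x):
    """Placeholder for network loss as a function of DG siting/sizing variables."""
    return np.sum((x - np.array([3.0, 0.8])) ** 2, axis=1)

def pso(loss, bounds, n_particles=30, n_iter=100, c1=2.0, c2=2.0):
    rng = np.random.default_rng(6)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    p_best, p_val = x.copy(), loss(x)
    g_best = p_best[p_val.argmin()].copy()
    for it in range(n_iter):
        w = 0.9 - 0.5 * it / (n_iter - 1)            # linearly decreasing inertia weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = loss(x)
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()
    return g_best, p_val.min()

best, best_val = pso(toy_loss, (np.array([1.0, 0.0]), np.array([10.0, 2.0])))
print(f"best DG plan (location proxy, size proxy): {best}, loss {best_val:.4f}")
```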
1986-12-01
…Force and other branches of the military are placing an increased emphasis on system reliability and maintainability. In studying current systems … used in the research of proposed systems, by predicting MTTF and MTTR of the new parts and thus predicting the reliability of those parts. The statistics … effectiveness of new systems. Aitchison's book on the lognormal distribution, printed and used by Cambridge University, highlighted the distributions
UNSOLVED PROBLEMS WITH CORROSION AND DISTRIBUTION SYSTEM INORGANICS
This presentation provides an overview of new research results and remaining research needs with respect to both corrosion control issues (lead, copper, iron) and to issues of inorganic contaminants that can form or accumulate in distribution system water, pipe scales and distrib...
Yang, Fan; Shi, Baoyou; Gu, Junnong; Wang, Dongsheng; Yang, Min
2012-10-15
The corrosion scales on iron pipes could have a great impact on the water quality in drinking water distribution systems (DWDS). Unstable and less protective corrosion scale is one of the main factors causing "discolored water" issues when the quality of water entering the distribution system changes significantly. The morphological and physicochemical characteristics of corrosion scales formed under different source water histories over a duration of about two decades were systematically investigated in this work. Thick corrosion scales or densely distributed corrosion tubercles were mostly found in pipes transporting surface water, but thin corrosion scales and hollow tubercles were mostly discovered in pipes transporting groundwater. Magnetite and goethite were the main constituents of iron corrosion products, but the mass ratio of magnetite/goethite (M/G) was significantly different depending on the corrosion scale structure and water source conditions. Thick corrosion scales and the hard shells of tubercles had a much higher M/G ratio (>1.0), while the thin corrosion scales had no detectable magnetite or a much lower M/G ratio. The M/G ratio could be used to identify the characteristics and evaluate the performances of corrosion scales formed under different water conditions. Compared with the pipes transporting groundwater, the pipes transporting surface water were more seriously corroded and could be in a relatively more active corrosion status all the time, as indicated by relatively higher siderite, green rust and total iron contents in their corrosion scales. Higher contents of unstable ferric components such as γ-FeOOH, β-FeOOH and amorphous iron oxide existed in the corrosion scales of the less corroded pipes receiving groundwater. Corrosion scales on groundwater pipes with low magnetite content had higher surface area and thus possibly higher sorption capacity. The primary trace inorganic elements in corrosion products were Br and heavy metals. Corrosion products obtained from pipes transporting groundwater had higher levels of Br, Ti, Ba, Cu, Sr, V, Cr, La, Pb and As. Copyright © 2012 Elsevier Ltd. All rights reserved.
Regan, John M; Harrington, Gregory W; Noguera, Daniel R
2002-01-01
Nitrification in drinking water distribution systems is a common operational problem for many utilities that use chloramines for secondary disinfection. The diversity of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB) in the distribution systems of a pilot-scale chloraminated drinking water treatment system was characterized using terminal restriction fragment length polymorphism (T-RFLP) analysis and 16S rRNA gene (ribosomal DNA [rDNA]) cloning and sequencing. For ammonia oxidizers, 16S rDNA-targeted T-RFLP indicated the presence of Nitrosomonas in each of the distribution systems, with a considerably smaller peak attributable to Nitrosospira-like AOB. Sequences of AOB amplification products aligned within the Nitrosomonas oligotropha cluster and were closely related to N. oligotropha and Nitrosomonas ureae. The nitrite-oxidizing communities were comprised primarily of Nitrospira, although Nitrobacter was detected in some samples. These results suggest a possible selection of AOB related to N. oligotropha and N. ureae in chloraminated systems and demonstrate the presence of NOB, indicating a biological mechanism for nitrite loss that contributes to a reduction in nitrite-associated chloramine decay.
Distribution-Connected PV's Response to Voltage Sags at Transmission-Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry; Ding, Fei
The ever-increasing amount of residential- and commercial-scale distribution-connected PV generation being installed and operated on the U.S.'s electric power system necessitates the use of increased fidelity representative distribution system models for transmission stability studies in order to ensure the continued safe and reliable operation of the grid. This paper describes a distribution model-based analysis that determines the amount of distribution-connected PV that trips off-line for a given voltage sag seen at the distribution circuit's substation. Such sags could potentially be experienced over a wide area of an interconnection during a transmission-level line fault. The results of this analysis show that the voltage diversity of the distribution system does cause different amounts of PV generation to be lost for differing severity of voltage sags. The variation of the response is most directly a function of the loading of the distribution system. At low load levels the inversion of the circuit's voltage profile results in considerable differences in the aggregated response of distribution-connected PV. Less variation is seen in the response to specific PV deployment scenarios, unless pushed to extremes, and in the total amount of PV penetration attained. A simplified version of the combined CMPLDW and PVD1 models is compared to the results from the model-based analysis. Furthermore, the parameters of the simplified model are tuned to better match the determined response. The resulting tuning parameters do not match the expected physical model of the distribution system and PV systems and thus may indicate that another modeling approach would be warranted.
A distribution model for the aerial application of granular agricultural particles
NASA Technical Reports Server (NTRS)
Fernandes, S. T.; Ormsbee, A. I.
1978-01-01
A model is developed to predict the shape of the distribution of granular agricultural particles applied by aircraft. The particle is assumed to have a random size and shape and the model includes the effect of air resistance, distributor geometry and aircraft wake. General requirements for the maintenance of similarity of the distribution for scale model tests are derived and are addressed to the problem of a nongeneral drag law. It is shown that if the mean and variance of the particle diameter and density are scaled according to the scaling laws governing the system, the shape of the distribution will be preserved. Distributions are calculated numerically and show the effect of a random initial lateral position, particle size and drag coefficient. A listing of the computer code is included.
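A minimal sketch (not the paper's model) of the kind of trajectory calculation underlying such a distribution prediction: a single spherical particle released from the aircraft is integrated with quadratic air drag, and random particle size and density give a spread of landing positions. The distributor-geometry and aircraft-wake terms are omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np

def particle_landing_offset(diameter_m, density_kg_m3, release_height_m=3.0,
                            airspeed_m_s=50.0, cd=0.45, dt=1e-3):
    """Integrate a dropped spherical particle with quadratic drag; return the
    downstream distance travelled before it reaches the ground."""
    rho_air = 1.225
    area = np.pi * diameter_m ** 2 / 4.0
    mass = density_kg_m3 * np.pi * diameter_m ** 3 / 6.0
    pos = np.array([0.0, release_height_m])
    vel = np.array([airspeed_m_s, 0.0])          # released at aircraft speed
    g = np.array([0.0, -9.81])
    while pos[1] > 0.0:
        speed = np.linalg.norm(vel)
        drag = -0.5 * rho_air * cd * area * speed * vel / mass
        vel = vel + (g + drag) * dt
        pos = pos + vel * dt
    return pos[0]

# Random particle diameters and densities, echoing the model's random size/shape assumption.
rng = np.random.default_rng(7)
offsets = [particle_landing_offset(d, rho)
           for d, rho in zip(rng.normal(3e-3, 5e-4, 100), rng.normal(1300, 100, 100))]
print(f"mean downstream offset ≈ {np.mean(offsets):.1f} m, std ≈ {np.std(offsets):.1f} m")
```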
Review of cost versus scale: water and wastewater treatment and reuse processes.
Guo, Tianjiao; Englehardt, James; Wu, Tingting
2014-01-01
The US National Research Council recently recommended direct potable water reuse (DPR), or potable water reuse without environmental buffer, for consideration to address US water demand. However, conveyance of wastewater and water to and from centralized treatment plants consumes on average four times the energy of treatment in the USA, and centralized DPR would further require upgradient distribution of treated water. Therefore, information on the cost of unit treatment processes potentially useful for DPR versus system capacity was reviewed, converted to constant 2012 US dollars, and synthesized in this work. A logarithmic variant of the Williams Law cost function was found applicable over orders of magnitude of system capacity, for the subject processes: activated sludge, membrane bioreactor, coagulation/flocculation, reverse osmosis, ultrafiltration, peroxone and granular activated carbon. Results are demonstrated versus 10 DPR case studies. Because economies of scale found for capital equipment are counterbalanced by distribution/collection network costs, further study of the optimal scale of distributed DPR systems is suggested.
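A small sketch of the kind of scaling fit the review describes: a Williams-law-style power function cost = a × capacity^b, fitted in log-log space. The capacity and cost values below are placeholders, not data from the review.

```python
import numpy as np

# Illustrative capacities (m^3/day) and capital costs (2012 USD); not from the review.
capacity = np.array([1e3, 5e3, 1e4, 5e4, 1e5, 5e5])
cost = np.array([0.9e6, 3.1e6, 5.2e6, 18.0e6, 30.0e6, 95.0e6])

# Fit cost = a * capacity^b in log-log space; b is the scaling exponent.
b, log_a = np.polyfit(np.log(capacity), np.log(cost), 1)
print(f"scale exponent b ≈ {b:.2f} (b < 1 indicates economies of scale)")
```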
Non-Gaussian Nature of Fracture and the Survival of Fat-Tail Exponents
NASA Astrophysics Data System (ADS)
Tallakstad, Ken Tore; Toussaint, Renaud; Santucci, Stephane; Måløy, Knut Jørgen
2013-04-01
We study the fluctuations of the global velocity Vl(t), computed at various length scales l, during the intermittent mode-I propagation of a crack front. The statistics converge to a non-Gaussian distribution, with an asymmetric shape and a fat tail. This breakdown of the central limit theorem (CLT) is due to the diverging variance of the underlying local crack front velocity distribution, displaying a power law tail. Indeed, by the application of a generalized CLT, the full shape of our experimental velocity distribution at large scale is shown to follow the stable Levy distribution, which preserves the power law tail exponent under upscaling. This study aims to demonstrate in general for crackling noise systems how one can infer the complete scale dependence of the activity—and extreme event distributions—by measuring only at a global scale.
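A minimal synthetic sketch of the stability-under-upscaling property invoked above: block averages of a symmetric alpha-stable variable, once rescaled by n^(1 - 1/alpha), share the distribution (and hence the power-law tail) of the local variable. The stability index and sample sizes are illustrative assumptions, and the quantile comparison is only approximate because of sampling noise.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(8)
alpha = 1.6                         # stability index (< 2 gives a power-law tail)
local_v = levy_stable.rvs(alpha, 0.0, size=1_000_000, random_state=rng)

# "Global" velocity: average blocks of n local values (coarse-graining / upscaling).
n = 100
coarse_v = local_v.reshape(-1, n).mean(axis=1)

# For a symmetric alpha-stable law, the block mean equals n**(1/alpha - 1) times a
# variable with the same stable distribution, so rescaling should collapse the tails.
rescaled = coarse_v * n ** (1.0 - 1.0 / alpha)

for q in (0.95, 0.99):
    print(f"q={q}: local {np.quantile(local_v, q):7.2f}   rescaled coarse {np.quantile(rescaled, q):7.2f}")
```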
THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS
Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...
Small-Scale System for Evaluation of Stretch-Flangeability with Excellent Reliability
NASA Astrophysics Data System (ADS)
Yoon, Jae Ik; Jung, Jaimyun; Lee, Hak Hyeon; Kim, Hyoung Seop
2018-02-01
We propose a system for evaluating the stretch-flangeability of small-scale specimens based on the hole-expansion ratio (HER). The system has no size effect and shows excellent reproducibility, reliability, and economic efficiency. To verify the reliability and reproducibility of the proposed hole-expansion testing (HET) method, the deformation behavior of the conventional standard stretch-flangeability evaluation method was compared with the proposed method using finite-element method simulations. The distribution of shearing defects in the hole-edge region of the specimen, which has a significant influence on the HER, was investigated using scanning electron microscopy. The stretch-flangeability of several kinds of advanced high-strength steel determined using the conventional standard method was compared with that using the proposed small-scale HET method. It was verified that the deformation behavior, morphology and distribution of shearing defects, and stretch-flangeability results for the specimens were the same for the conventional standard method and the proposed small-scale stretch-flangeability evaluation system.
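For reference, the hole-expansion ratio is commonly defined from the initial and final hole diameters; the tiny helper below just encodes that ratio and is not the paper's small-scale test procedure. The example diameters are illustrative.

```python
def hole_expansion_ratio(d_initial_mm, d_final_mm):
    """Hole-expansion ratio (HER) in percent: (d_final - d_initial) / d_initial * 100."""
    return (d_final_mm - d_initial_mm) / d_initial_mm * 100.0

print(f"HER = {hole_expansion_ratio(10.0, 13.4):.1f} %")   # illustrative diameters
```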
NASA Astrophysics Data System (ADS)
Lian, Enyang; Ren, Yingyu; Han, Yunfeng; Liu, Weixin; Jin, Ningde; Zhao, Junying
2016-11-01
Multi-scale analysis is an important method for characterising nonlinear systems. In this study, we carry out experiments and measure fluctuation signals from a rotating electric field conductance sensor with eight electrodes. We first use a recurrence plot to recognise flow patterns in vertical upward gas-liquid two-phase pipe flow from the measured signals. Then we apply a multi-scale morphological analysis based on the first-order difference scatter plot to the signals captured from the vertical upward gas-liquid two-phase flow loop test. We find that the invariant scaling exponent extracted from the multi-scale first-order difference scatter plot, using the bisector of the second-fourth quadrant as the reference line, is sensitive to the inhomogeneous distribution of the flow structure, and that the variation trend of the exponent helps explain the process of breakup and coalescence of the gas phase. In addition, we explore the dynamic mechanism influencing the inhomogeneous distribution of the gas phase in terms of the adaptive optimal kernel time-frequency representation. The research indicates that the system energy is a factor influencing the distribution of the gas phase and that the multi-scale morphological analysis based on the first-order difference scatter plot is an effective method for indicating the inhomogeneous distribution of the gas phase in gas-liquid two-phase flow.
Converting Hangar High Expansion Foam Systems to Prevent Cockpit Damage: Full-Scale Validation Tests
2017-09-01
AFCEC-CO-TY-TR-2018-0001, Converting Hangar High Expansion Foam Systems to Prevent Cockpit Damage: Full-Scale Validation Tests. Gerard G... Final Test Report (09-2017; testing from May 2017). Contract N00173-15-D
Universal distribution of component frequencies in biological and technological systems
Pang, Tin Yau; Maslov, Sergei
2013-01-01
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195
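As a toy illustration of the dependency degree used above (the number of components that directly or indirectly depend on a given component), the sketch below builds a small random layered dependency network with networkx; the graph is synthetic and its sizes are arbitrary assumptions, not the bacterial-genome or Linux data.

import random
import networkx as nx

random.seed(0)

# Toy layered dependency network: each component depends on a few components
# from earlier layers (edge u -> v means "u depends on v"). Sizes are illustrative.
G = nx.DiGraph()
layers = [range(0, 5), range(5, 30), range(30, 150)]   # core, middle, peripheral
for li, layer in enumerate(layers[1:], start=1):
    pool = [n for earlier in layers[:li] for n in earlier]
    for u in layer:
        for v in random.sample(pool, k=min(3, len(pool))):
            G.add_edge(u, v)

# "Dependency degree" of a component: how many other components depend on it,
# directly or indirectly (its ancestors in the dependency DAG).
dep_degree = {n: len(nx.ancestors(G, n)) for n in G.nodes}

top = sorted(dep_degree, key=dep_degree.get, reverse=True)[:5]
for n in top:
    print(f"component {n:>3}: {dep_degree[n]} components depend on it")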
The Brain as a Distributed Intelligent Processing System: An EEG Study
da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo
2011-01-01
Background Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligence Processing System, was used to investigate the correlations between IQ evaluated with WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion The present results support these claims and the neural efficiency hypothesis. PMID:21423657
Spatial analysis and characteristics of pig farming in Thailand.
Thanapongtharm, Weerapong; Linard, Catherine; Chinson, Pornpiroon; Kasemsuwan, Suwicha; Visser, Marjolein; Gaughan, Andrea E; Epprech, Michael; Robinson, Timothy P; Gilbert, Marius
2016-10-06
In Thailand, pig production intensified significantly during the last decade, with many economic, epidemiological and environmental implications. Strategies toward more sustainable future developments are currently being investigated, and these could be informed by a detailed assessment of the main trends in the pig sector and of how different production systems are geographically distributed. This study had two main objectives. First, we aimed to describe the main trends and geographic patterns of pig production systems in Thailand in terms of pig type (native, breeding, and fattening pigs), farm scale (smallholder and large-scale farming systems) and type of farming system (farrow-to-finish, nursery, and finishing systems) based on a very detailed 2010 census. Second, we aimed to study the statistical spatial association between the distributions of these different types of pig farming and a set of spatial variables describing access to feed and markets. Over the last decades, the pig population gradually increased, with a continuously increasing number of pigs per holder, suggesting a continuing intensification of the sector. The different pig-production systems showed strongly contrasting geographical distributions. The spatial distribution of large-scale pig farms corresponds with that of commercial pig breeds, and spatial analysis conducted using Random Forest distribution models indicated that these were concentrated in lowland urban or peri-urban areas, close to means of transportation, facilitating supply to major markets such as provincial capitals and the Bangkok Metropolitan region. Conversely, smallholders were distributed throughout the country, with higher densities located in highland, remote, and rural areas, where they supply local rural markets. A limitation of the study was that pig farming systems were defined from the number of animals per farm, resulting in their possible misclassification, but this should have a limited impact on the main patterns revealed by the analysis. The strongly contrasting distributions of the different pig production systems present opportunities for future regionalization of pig production. More specifically, the detailed geographical analysis of the different production systems will be used to spatially inform planning decisions for pig farming, accounting for the specific health, environmental and economic implications of the different pig production systems.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.
Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J
2016-05-26
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.
Voltage Impacts of Utility-Scale Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, A.
2014-09-01
Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase the overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
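A rough sense of why feeder resistance at the interconnection point matters can be had from the standard textbook approximation for the voltage change caused by a power injection, dV (per unit) ≈ (R·P + X·Q)/V²; the sketch below applies it with placeholder feeder impedances and turbine ratings, and is not the report's derived rule set.

# Rough, textbook-style estimate of the per-unit voltage change at the point of
# turbine interconnection on a radial feeder: dV ~ (R*P + X*Q) / V^2 (per unit).
# The feeder impedances and turbine output below are illustrative placeholders,
# not values from the taxonomy-feeder study.
def voltage_change_pu(r_ohm, x_ohm, p_w, q_var, v_ll_volts):
    """Approximate per-unit voltage change caused by injecting P + jQ at a bus."""
    return (r_ohm * p_w + x_ohm * q_var) / v_ll_volts**2

v_ll = 12_470.0            # 12.47 kV distribution feeder (line-to-line)
p_turbine = 1.5e6          # 1.5 MW utility-scale turbine, unity power factor
for r, x in [(0.5, 1.0), (2.0, 4.0), (5.0, 10.0)]:   # increasing distance from substation
    dv = voltage_change_pu(r, x, p_turbine, 0.0, v_ll)
    print(f"R={r:>4.1f} ohm, X={x:>4.1f} ohm -> dV ~ {dv*100:.2f}% of nominal")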
Dispersion upscaling from a pore scale characterization of Lagrangian velocities
NASA Astrophysics Data System (ADS)
Turuban, Régis; de Anna, Pietro; Jiménez-Martínez, Joaquín; Tabuteau, Hervé; Méheust, Yves; Le Borgne, Tanguy
2013-04-01
Mixing and reactive transport are primarily controlled by the interplay between diffusion, advection and reaction at the pore scale. Yet, how the distribution and spatial correlation of the velocity field at the pore scale impact these processes is still an open question. Here we present an experimental investigation of the distribution and correlation of pore-scale velocities and their relation with upscaled dispersion. We use a quasi-two-dimensional (2D) horizontal setup, consisting of two glass plates filled with cylinders representing the grains of the porous medium: the cell is built by a soft-lithography technique, which allows for full control of the system geometry. The local velocity field is quantified from particle tracking velocimetry using microspheres that are advected with the pore-scale flow. Their displacement is purely advective, as the particle size is chosen large enough to avoid diffusion. We thus obtain particle trajectories as well as Lagrangian velocities in the entire system. The measured velocity field shows the existence of a network of preferential flow paths in channels with high velocities, as well as very low velocities in stagnation zones, with a non-Gaussian distribution. Lagrangian velocities are long-range correlated in time, which implies a non-Fickian scaling of the longitudinal variance of particle positions. To upscale this process we develop an effective transport model, based on a correlated continuous time random walk, which is entirely parametrized by the pore-scale velocity distribution and correlation. The model predictions are compared with conservative tracer test data for different Peclet numbers. Furthermore, we investigate the impact of different pore geometries on the distribution and correlation of Lagrangian velocities and we discuss the link between these properties and the effective dispersion behavior.
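A minimal numerical sketch of a correlated continuous-time random walk of the kind described, with particles taking fixed spatial steps and transit times drawn from a broad, partially persistent speed distribution, is given below; the speed distribution, renewal probability, and particle counts are illustrative assumptions, not the measured Lagrangian statistics.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative pore-scale Lagrangian speed distribution with a broad range of low
# velocities (P(v <= x) = x**beta), giving heavy-tailed pore transit times; the
# exponent and all sizes below are placeholders, not the measured distribution.
beta = 0.7
def sample_speed(n):
    return rng.random(n) ** (1.0 / beta)

n_particles, n_steps, dx = 2000, 300, 1.0
x = np.zeros((n_particles, n_steps + 1))   # particle positions
t = np.zeros((n_particles, n_steps + 1))   # particle clocks

v = sample_speed(n_particles)
for k in range(n_steps):
    # speeds stay correlated over roughly one pore length, then are partially renewed
    renew = rng.random(n_particles) < 0.5
    v = np.where(renew, sample_speed(n_particles), v)
    x[:, k + 1] = x[:, k] + dx          # fixed spatial step (one pore)
    t[:, k + 1] = t[:, k] + dx / v      # random transit time through that pore

# Longitudinal variance of particle positions at fixed observation times
t_obs = np.linspace(0.05, 1.0, 15) * t[:, -1].min()
var = [np.var([np.interp(tt, t[i], x[i]) for i in range(n_particles)]) for tt in t_obs]
slope = np.polyfit(np.log(t_obs), np.log(var), 1)[0]
print(f"variance of positions grows ~ t^{slope:.2f} (Fickian scaling would be t^1)")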
Daniel J. Isaak; Russell F. Thurow
2006-01-01
Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km2) in...
Spatial optimization for decentralized non-potable water reuse
NASA Astrophysics Data System (ADS)
Kavvada, Olga; Nelson, Kara L.; Horvath, Arpad
2018-06-01
Decentralization has the potential to reduce the scale of the piped distribution network needed to enable non-potable water reuse (NPR) in urban areas by producing recycled water closer to its point of use. However, tradeoffs exist between the economies of scale of treatment facilities and the size of the conveyance infrastructure, including energy for upgradient distribution of recycled water. To adequately capture the impacts from distribution pipes and pumping requirements, site-specific conditions must be accounted for. In this study, a generalized framework (a heuristic modeling approach using geospatial algorithms) is developed that estimates the financial cost, the energy use, and the greenhouse gas emissions associated with NPR (for toilet flushing) as a function of scale of treatment and conveyance networks with the goal of determining the optimal degree of decentralization. A decision-support platform is developed to assess and visualize NPR system designs considering topography, economies of scale, and building size. The platform can be used for scenario development to explore the optimal system size based on the layout of current or new buildings. The model also promotes technology innovation by facilitating the systems-level comparison of options to lower costs, improve energy efficiency, and lower greenhouse gas emissions.
A test of the cross-scale resilience model: Functional richness in Mediterranean-climate ecosystems
Wardwell, D.A.; Allen, Craig R.; Peterson, G.D.; Tyre, A.J.
2008-01-01
Ecological resilience has been proposed to be generated, in part, in the discontinuous structure of complex systems. Environmental discontinuities are reflected in discontinuous, aggregated animal body mass distributions. Diversity of functional groups within body mass aggregations (scales) and redundancy of functional groups across body mass aggregations (scales) has been proposed to increase resilience. We evaluate that proposition by analyzing mammalian and avian communities of Mediterranean-climate ecosystems. We first determined that body mass distributions for each animal community were discontinuous. We then calculated the variance in richness of function across aggregations in each community, and compared observed values with distributions created by 1000 simulations using a null of random distribution of function, with the same n, number of discontinuities and number of functional groups as the observed data. Variance in the richness of functional groups across scales was significantly lower in real communities than in simulations in eight of nine sites. The distribution of function across body mass aggregations in the animal communities we analyzed was non-random, and supports the contentions of the cross-scale resilience model. © 2007 Elsevier B.V. All rights reserved.
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed, computational, data storage, networking, instruments, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis and benefits of distributed generation on smaller scale networks have prompted much attention and focus to research in this field. Much of the research that has grown in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, a simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. The test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system and is used to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry. It can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. Here, the test bed is utilized to perform and validate the newly developed method of generator system emulation: the dynamics of a high-speed permanent magnet generator directly coupled with a microturbine are virtually simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which emulates and reproduces these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis introduces the ideas behind generator emulation, explains the process and steps necessary to achieve it, and demonstrates real results and verification of numerical values in real time. The final goal is to show that this new approach is in fact attainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
A new family of distribution functions for spherical galaxies
NASA Astrophysics Data System (ADS)
Gerhard, Ortwin E.
1991-06-01
The present study describes a new family of anisotropic distribution functions for stellar systems designed to keep control of the orbit distribution at fixed energy. These are quasi-separable functions of energy and angular momentum, and they are specified in terms of a circularity function h(x) which fixes the distribution of orbits on the potential's energy surfaces outside some anisotropy radius. Detailed results are presented for a particular set of radially anisotropic circularity functions h_α(x). In the scale-free logarithmic potential, exact analytic solutions are shown to exist for all scale-free circularity functions. Intrinsic and projected velocity dispersions are calculated and the expected properties are presented in extensive tables and graphs. Several applications of the quasi-separable distribution functions are discussed. They include the effects of anisotropy or a dark halo on line-broadening functions, the radial orbit instability in anisotropic spherical systems, and violent relaxation in spherical collapse.
NASA Astrophysics Data System (ADS)
Ma, Xin-Xin; Lin, Zhan; Jin, Hong-Lin; Chen, Hua-Ran; Jiao, Li-Guo
2017-11-01
In this study, the distribution characteristics of scale height at various solar activity levels were statistically analyzed using the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) radio occultation data for 2007-2013. The results show that: (1) in the mid- to high-latitude region, the daytime (06-17LT) scale height exhibits annual variations in the form of a single-peak structure with the crest appearing in summer, and in the high-latitude region an annual variation is also observed in the nighttime (18-05LT) scale height; (2) the spatial distribution of the scale height also changes. The crests are deflected towards the north during daytime (12-14LT) at geomagnetic longitudes of 60°W-180°W, and they are distributed roughly along the geomagnetic equator at 60°W-180°E. In the approximate region of 120°W-150°E and 50°S-80°S, the scale height values are significantly higher than those in other mid-latitude areas. This region enlarges with increased solar activity, and shows an approximately symmetric distribution about 0° geomagnetic longitude. Nighttime (00-02LT) scale height values in the high-latitude region are larger than those in the low- to mid-latitude region. These results could serve as a reference for the study of ionospheric distribution and for the construction of the corresponding profile model.
Low-Energy, Low-Cost Production of Ethylene by Low- Temperature Oxidative Coupling of Methane
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radaelli, Guido; Chachra, Gaurav; Jonnavittula, Divya
In this project, we develop a catalytic process technology for distributed small-scale production of ethylene by oxidative coupling of methane at low temperatures using an advanced catalyst. The Low Temperature Oxidative Coupling of Methane (LT-OCM) catalyst system is enabled by a novel chemical catalyst and process pioneered by Siluria, at private expense, over the last six years. Herein, we develop the LT-OCM catalyst system for distributed small-scale production of ethylene by identifying and addressing the necessary process schemes, unit operations and process parameters that limit the economic viability and mass penetration of this technology for manufacturing ethylene at small scales. The output of this program is process concepts for small-scale LT-OCM-catalyst-based ethylene production, lab-scale verification of the novel unit operations adopted in the proposed concept, and an analysis to validate the feasibility of the proposed concepts.
Pond fractals in a tidal flat.
Cael, B B; Lambert, Bennett; Bisson, Kelsey
2015-11-01
Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces.
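The two quantities reported, a power-law pond-size distribution and a fractal area-perimeter relation P ∝ A^(D/2), can both be estimated from log-log regressions. The sketch below does so on synthetic pond data generated with illustrative exponents (not the Damariscotta measurements).

import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the pond measurements: areas drawn from a power law and
# perimeters following a fractal area-perimeter relation P ~ A**(D/2) with noise.
# The exponents used to generate the data are illustrative, not the field values.
tau_true, D_true = 1.7, 1.5
areas = (1.0 - rng.random(5000)) ** (-1.0 / (tau_true - 1.0))        # Pareto areas, A >= 1
perims = 4.0 * areas ** (D_true / 2.0) * np.exp(0.1 * rng.standard_normal(5000))

# (1) Size-distribution exponent from the empirical survival function on log-log axes.
a_sorted = np.sort(areas)
survival = 1.0 - np.arange(len(a_sorted)) / len(a_sorted)
keep = survival > 1e-3                       # drop the noisy extreme tail
slope_sf = np.polyfit(np.log(a_sorted[keep]), np.log(survival[keep]), 1)[0]
print(f"estimated size exponent tau ~ {1.0 - slope_sf:.2f} (generated with {tau_true})")

# (2) Fractal dimension from the area-perimeter relation, slope of log P vs log A.
slope_ap = np.polyfit(np.log(areas), np.log(perims), 1)[0]
print(f"estimated fractal dimension D ~ {2.0 * slope_ap:.2f} (generated with {D_true})")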
NASA Astrophysics Data System (ADS)
Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng
2018-02-01
A coordinated optimal control method for the active and reactive power of a distribution network with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the original optimization models are non-convex and nonlinear and therefore hard to solve directly, they are transformed into second-order cone programming problems. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.
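For readers unfamiliar with the problem class, the sketch below sets up a minimal single-line branch-flow (DistFlow) relaxation as a second-order cone program in cvxpy, with one PV inverter's reactive power as the decision variable; the line, load, and inverter parameters are placeholder assumptions and this single-period formulation is far simpler than the multi-time-scale MPC model described above.

import cvxpy as cp

# Minimal single-feeder branch-flow (DistFlow) relaxation, cast as an SOCP, to
# illustrate the problem class; all line and load parameters are illustrative and
# this is a single-period snapshot, not the paper's multi-time-scale MPC model.
r, x = 0.05, 0.10            # line resistance / reactance (per unit)
p_load, q_load = 0.80, 0.30  # load at the far bus (per unit)
p_pv = 0.60                  # PV active injection (per unit), assumed fixed here
q_max = 0.30                 # inverter reactive capability (per unit)
v0 = 1.0                     # squared substation voltage (per unit)

P = cp.Variable()            # active power entering the line
Q = cp.Variable()            # reactive power entering the line
l = cp.Variable(nonneg=True) # squared line current
v1 = cp.Variable()           # squared voltage at the far bus
q_pv = cp.Variable()         # PV reactive power set-point (the control)

constraints = [
    p_load - p_pv == P - r * l,                 # active power balance at the far bus
    q_load - q_pv == Q - x * l,                 # reactive power balance
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,
    cp.norm(cp.hstack([2 * P, 2 * Q, l - v0])) <= l + v0,   # SOC relaxation of P^2+Q^2 <= l*v0
    0.95**2 <= v1, v1 <= 1.05**2,               # voltage limits
    cp.abs(q_pv) <= q_max,
]
prob = cp.Problem(cp.Minimize(r * l), constraints)   # minimize line losses
prob.solve()
print("status:", prob.status, "| q_pv =", round(q_pv.value, 3), "| v1 =", round(v1.value**0.5, 4))

The rotated-cone constraint P² + Q² ≤ l·v₀ is written in its equivalent norm form so that a standard conic solver can handle it.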
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance. Thus, a task within a multi-agent system can be completed even in the presence of partial agent failures. Through problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel. Hence, the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the drawbacks of using multicast under bandwidth limitations. Distributed algorithms have been applied to a variety of real-world problems. Our research focuses on the framework and local optimizer design in practical engineering applications. In the first application, we propose a multi-sensor and multi-agent scheme for spatial motion estimation of a rigid body. Estimation performance is improved in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs. The proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in the corridor and exit areas. Finally, in the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.
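As a small illustration of the shortest-path layer in the evacuation application (the congestion-aware dual-subgradient adjustment is omitted), the sketch below runs plain Bellman-Ford on a toy in-building graph with made-up node names and travel costs.

import math

def bellman_ford(nodes, edges, source):
    """Single-source shortest paths; edges is a list of (u, v, travel_cost)."""
    dist = {n: math.inf for n in nodes}
    pred = {n: None for n in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):               # relax all edges |V|-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], pred[v] = dist[u] + w, u
    return dist, pred

# Toy in-building graph: corridor nodes and two exits; edge costs are nominal
# travel times (illustrative only -- the congestion-dependent costs that the
# dual-subgradient layer would adjust are not modelled here).
nodes = ["room_A", "corridor_1", "corridor_2", "exit_east", "exit_west"]
edges = [("room_A", "corridor_1", 2.0), ("corridor_1", "corridor_2", 3.0),
         ("corridor_1", "exit_west", 6.0), ("corridor_2", "exit_east", 1.5)]

dist, pred = bellman_ford(nodes, edges, "room_A")
target = min(("exit_east", "exit_west"), key=dist.get)
path = [target]
while pred[path[-1]] is not None:
    path.append(pred[path[-1]])
print("evacuation path:", " -> ".join(reversed(path)), f"(cost {dist[target]:.1f})")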
Previous work has shown that contaminants, such as Al, As and Ra, can accumulate in drinking water distribution system solids. The release of accumulated contaminants back into the water supply could result in elevated levels at consumers’ taps, and current monitoring practices d...
Previous work has shown that contaminants such as Al, As and Ra, can accumulate in drinking water distribution system solids. The release of accumulated contaminants back into the water supply could conceivably result in elevated levels at consumers’ taps. The current regulatory...
It is well known that model-building of chlorine decay in real water distribution systems is difficult because chlorine decay is influenced by many factors (e.g., bulk water demand, pipe-wall demand, piping material, flow velocity, and residence time). In this paper, experiments ...
BENCH-SCALE STUDIES ON THE SIMULTANEOUS FORMATION OF PCBS AND PCDDS/FS FROM COMBUSTION SYSTEMS
The paper reports on a bench-scale experimental study to characterize a newly built reactor system that was built to: produce levels and distributions of polychlorinated dibenzo-p-dioxin and polychlorinated dibenzofuran (PCDD/F) production similar to those achieved by previous re...
Species, functional groups, and thresholds in ecological resilience
Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris
2012-01-01
The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
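The core calculation, the variance of functional-group richness across body-mass aggregations compared against a null of randomly reassigned functions, can be sketched numerically as below; the community composition, aggregation sizes, and functional labels are invented for illustration and are not the grassland bird data.

import numpy as np

rng = np.random.default_rng(3)

# Toy community: each species has a body-mass aggregation (scale) and a functional
# group. The assignments below are illustrative, not the grassland bird data; the
# functions are spread evenly across aggregations, as the cross-scale model predicts.
aggregation = np.array([0]*6 + [1]*5 + [2]*7 + [3]*4)            # 4 aggregations
function = np.arange(aggregation.size) % 5                        # 5 functional groups

def variance_of_richness(agg, func):
    """Variance, across aggregations, of the number of distinct functions present."""
    richness = [len(set(func[agg == a])) for a in np.unique(agg)]
    return np.var(richness)

observed = variance_of_richness(aggregation, function)

# Null model: keep the aggregation structure, shuffle which species carries which
# function, and rebuild the cross-scale variance 1000 times.
null = np.array([variance_of_richness(aggregation, rng.permutation(function))
                 for _ in range(1000)])
p_low = np.mean(null <= observed)    # fraction of null runs with variance this low
print(f"observed variance = {observed:.3f}; P(null <= observed) = {p_low:.3f}")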
Scott, Daniel B; Van Dyke, Michele I; Anderson, William B; Huck, Peter M
2015-12-01
The potential for regrowth of nitrifying microorganisms was monitored in 2 full-scale chloraminated drinking water distribution systems in Ontario, Canada, over a 9-month period. Quantitative PCR was used to measure amoA genes from ammonia-oxidizing bacteria (AOB) and ammonia-oxidizing archaea (AOA), and these values were compared with water quality parameters that can influence nitrifier survival and growth, including total chlorine, ammonia, temperature, pH, and organic carbon. Although there were no severe nitrification episodes, AOB and AOA were frequently detected at low concentrations in samples collected from both distribution systems. A culture-based presence-absence test confirmed the presence of viable nitrifiers. AOB were usually present in similar or greater numbers than AOA in both systems. As well, AOB showed higher regrowth potential compared with AOA in both systems. Statistically significant correlations were measured between several water quality parameters of relevance to nitrification. Total chlorine was negatively correlated with both nitrifiers and heterotrophic plate count (HPC) bacteria, and ammonia levels were positively correlated with nitrifiers. Of particular importance was the strong correlation between HPC and AOB, which reinforced the usefulness of HPC as an operational parameter to measure general microbiological conditions in distribution systems.
Patterned layers of adsorbed extracellular matrix proteins: influence on mammalian cell adhesion.
Dupont-Gillain, C C; Alaerts, J A; Dewez, J L; Rouxhet, P G
2004-01-01
Three patterned systems aiming at the control of mammalian cell behavior are presented. The determinant feature common to these systems is the spatial distribution of extracellular matrix (ECM) proteins (mainly collagen) on polymer substrates. This distribution differs from one system to another with respect to the scale at which it is affected, from the supracellular to the supramolecular scale, and with respect to the way it is produced. In the first system, the surface of polystyrene was oxidized selectively to form micrometer-scale patterns, using photolithography. Adsorption of ECM proteins in presence of a competitor was enhanced on the oxidized domains, allowing selective cell adhesion to be achieved. In the second system, electron beam lithography was used to engrave grooves (depth and width approximately 1 microm) on a poly(methyl methacrylate) (PMMA) substratum. No modification of the surface chemistry associated to the created topography could be detected. Cell orientation along the grooves was only observed when collagen was preadsorbed on the substratum. In the third system, collagen adsorbed on PMMA was dried in conditions ensuring the formation of a nanometer-scale pattern. Cell adhesion was enhanced on such patterned collagen layers compared to smooth collagen layers.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
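A toy version of the two steps named above, affinity propagation over the correlation structure of the controlled variables followed by CCA-based ranking of candidate inputs for one resulting subsystem, is sketched below with scikit-learn; the synthetic operating data and variable counts are assumptions, not the Tennessee Eastman records.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)

# Toy operating data: 500 samples of 8 candidate inputs and 6 controlled variables.
# Structure and noise levels are illustrative, not the Tennessee Eastman data.
U = rng.standard_normal((500, 8))                       # candidate manipulated inputs
Y = np.column_stack([U[:, 0] + 0.1 * rng.standard_normal(500),
                     U[:, 0] + U[:, 1],
                     U[:, 2] - U[:, 3],
                     U[:, 2] + 0.2 * rng.standard_normal(500),
                     U[:, 5] * 0.5,
                     U[:, 5] + U[:, 6]])                # 6 controlled variables

# Step 1: partition the controlled variables with affinity propagation, using the
# absolute correlation between them as the similarity measure.
similarity = np.abs(np.corrcoef(Y.T))
clusters = AffinityPropagation(affinity="precomputed", random_state=0).fit(similarity)
print("controlled-variable clusters:", clusters.labels_)

# Step 2: for one cluster (one subsystem), rank candidate inputs by their weight in
# a canonical correlation analysis against that cluster's controlled variables.
members = np.where(clusters.labels_ == clusters.labels_[0])[0]
cca = CCA(n_components=1).fit(U, Y[:, members])
ranking = np.argsort(-np.abs(cca.x_weights_[:, 0]))
print("inputs ranked for the subsystem containing variable 0:", ranking)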
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model s moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consist of physically-interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
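The event-triggering idea itself, transmitting a new measurement only when the locally held sample has drifted beyond a threshold, can be shown on a scalar toy loop; the plant, gain, and threshold below are illustrative assumptions and the sketch does not reproduce the paper's adaptive architecture for interconnected modules.

import numpy as np

# Toy illustration of event-triggered feedback over a network: the controller only
# receives a new state sample when the local triggering condition fires. The
# scalar plant, gain, and threshold below are illustrative, not the paper's
# adaptive architecture for interconnected modules.
a, b, k = 0.5, 1.0, 1.5          # unstable open-loop plant x' = a x + b u, gain u = -k x_hat
epsilon = 0.05                   # event-triggering threshold on |x - x_hat|
dt, steps = 0.01, 1000

x, x_hat = 1.0, 1.0              # true state and the last state sent over the network
transmissions = 0
for _ in range(steps):
    if abs(x - x_hat) > epsilon: # event: the held sample is too stale -> transmit
        x_hat = x
        transmissions += 1
    u = -k * x_hat               # control uses the last transmitted sample
    x += dt * (a * x + b * u)    # Euler step of the closed-loop plant

print(f"final |x| = {abs(x):.4f} after {transmissions} transmissions "
      f"(vs {steps} samples under periodic updating)")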
Development of Ada language control software for the NASA power management and distribution test bed
NASA Technical Reports Server (NTRS)
Wright, Ted; Mackin, Michael; Gantose, Dave
1989-01-01
The Ada language software developed to control the NASA Lewis Research Center's Power Management and Distribution testbed is described. The testbed is a reduced-scale prototype of the electric power system to be used on space station Freedom. It is designed to develop and test hardware and software for a 20-kHz power distribution system. The distributed, multiprocessor, testbed control system has an easy-to-use operator interface with an understandable English-text format. A simple interface for algorithm writers that uses the same commands as the operator interface is provided, encouraging interactive exploration of the system.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPU). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable their sites so that visitors can volunteer their computing resources to help run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes as small spatial and computational units. A relational database system is utilized for managing data connections and queues for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
Effect of PVC and iron materials on Mn(II) deposition in drinking water distribution systems.
Cerrato, José M; Reyes, Lourdes P; Alvarado, Carmen N; Dietrich, Andrea M
2006-08-01
Polyvinyl chloride (PVC) and iron pipe materials differentially impacted manganese deposition within a drinking water distribution system that experiences black water problems because it receives soluble manganese from a surface water reservoir that undergoes biogeochemical cycling of manganese. The water quality study was conducted in a section of the distribution system of Tegucigalpa, Honduras and evaluated the influence of iron and PVC pipe materials on the concentrations of soluble and particulate iron and manganese, and determined the composition of scales formed on PVC and iron pipes. As expected, total Fe concentrations were highest in water from iron pipes. Water samples obtained from PVC pipes showed higher total Mn concentrations and more black color than that obtained from iron pipes. Scanning electron microscopy demonstrated that manganese was incorporated into the iron tubercles and thus not readily dislodged from the pipes by water flow. The PVC pipes contained a thin surface scale consisting of white and brown layers of different chemical composition; the brown layer was in contact with the water and contained 6% manganese by weight. Mn composed a greater percentage by weight of the PVC scale than the iron pipe scale; the PVC scale was easily dislodged by flowing water. This research demonstrates that interactions between water and the infrastructure used for its supply affect the quality of the final drinking water.
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.
Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A
2015-11-17
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
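A distribution of the stated form, P(S) ∝ S^(-τ) exp(-S/S*), is linear in log S and S once logarithms of binned densities are taken, so both the exponent and the cutoff can be recovered with an ordinary least-squares fit. The sketch below does this on synthetic avalanche sizes generated with illustrative parameter values, not the experimental slip data.

import numpy as np

rng = np.random.default_rng(5)

# Synthetic avalanche sizes from P(S) ~ S**(-tau) * exp(-S / s_cut), generated by
# rejection sampling; tau and s_cut are illustrative, not the experimental values.
tau_true, s_cut_true = 1.5, 200.0
sizes = []
while len(sizes) < 20000:
    s = (1.0 - rng.random(50000)) ** (-1.0 / (tau_true - 1.0))   # pure power law, S >= 1
    keep = rng.random(50000) < np.exp(-s / s_cut_true)           # apply the exponential cutoff
    sizes.extend(s[keep])
sizes = np.array(sizes[:20000])

# Bin the sizes logarithmically and fit log(density) = c - tau*log(S) - S/s_cut,
# which is linear in the two regressors log(S) and S.
bins = np.logspace(0, np.log10(sizes.max()), 30)
counts, edges = np.histogram(sizes, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])
density = counts / np.diff(edges)
mask = counts > 5                                   # keep well-populated bins
A = np.column_stack([np.ones(mask.sum()), np.log(centers[mask]), centers[mask]])
coef, *_ = np.linalg.lstsq(A, np.log(density[mask]), rcond=None)
print(f"fitted tau ~ {-coef[1]:.2f} (true {tau_true}),  "
      f"cutoff ~ {-1.0 / coef[2]:.0f} (true {s_cut_true})")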
Solar Newsletter | Solar Research | NREL
Highlights include: optimizing voltage control for utility-scale PV with General Electric as utilities increasingly add solar power; components that may be used to integrate distributed solar PV onto distribution systems; and an Innovation Award for a grid-reliability PV demonstration involving First Solar and the California Independent System Operator.
Lead Pipe Scale Analysis Using Broad-Beam Argon Ion Milling to Elucidate Drinking Water Corrosion
Herein, we compared the characterization of lead pipe scale removed from a drinking water distribution system using two different cross section methods (conventional polishing and argon ion beam etching). The pipe scale solids were analyzed using scanning electron microscopy (SEM...
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
Spatial Distribution of Fate and Transport Parameters Using Cxtfit in a Karstified Limestone Model
NASA Astrophysics Data System (ADS)
Toro, J.; Padilla, I. Y.
2017-12-01
Karst environments have a high capacity to transport and store large amounts of water. This makes karst aquifers a productive resource for human consumption and ecological integrity, but also makes them vulnerable to potential contamination by hazardous chemical substances. The high heterogeneity and anisotropy of karst aquifer properties make them very difficult to characterize for accurate prediction of contaminant mobility and persistence in groundwater. Current technologies to characterize and quantify flow and transport processes at the field scale are limited by the low resolution of spatiotemporal data. To enhance this resolution and provide the essential knowledge of karst groundwater systems, studies at the laboratory scale can be conducted. This work uses an intermediate karstified lab-scale physical model (IKLPM) to study fate and transport processes and assess viable tools for characterizing heterogeneities in karst systems. Transport experiments are conducted in the IKLPM using step injections of calcium chloride, uranine, and rhodamine WT tracers. Temporal concentration distributions (TCDs) obtained from the experiments are analyzed using the method of moments and CXTFIT to quantify fate and transport parameters in the system at various flow rates. The spatial distribution of the estimated fate and transport parameters for the tracers revealed high variability related to preferential-flow heterogeneities and scale dependence. Results are integrated to define spatially variable transport regions within the system and to assess their fate and transport characteristics.
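For reference, the method-of-moments step reduces to integrating the temporal concentration distribution: the zeroth moment gives mass recovery, the first the mean residence time, and the second central moment the spreading, from which a Peclet number can be estimated under a small-dispersion approximation. The sketch below applies this to a synthetic pulse with placeholder parameters; it does not reimplement CXTFIT.

import numpy as np

# Method-of-moments analysis of a temporal concentration distribution (TCD).
# The breakthrough curve below is synthetic (an advection-dispersion-like pulse
# with illustrative parameters); in the study the TCDs come from the tracer tests.
t = np.linspace(0.1, 200.0, 2000)                     # minutes
t_mean_true, spread = 60.0, 12.0
c = np.exp(-0.5 * ((t - t_mean_true) / spread) ** 2)  # stand-in pulse response

dt = t[1] - t[0]
m0 = np.sum(c) * dt                                   # zeroth moment (mass)
m1 = np.sum(t * c) * dt / m0                          # mean residence time
m2 = np.sum((t - m1) ** 2 * c) * dt / m0              # variance of residence times

# Small-dispersion approximation for the Peclet number: sigma^2 / mu^2 ~ 2 / Pe.
peclet = 2.0 * m1**2 / m2
print(f"mean residence time ~ {m1:.1f} min, variance ~ {m2:.1f} min^2, Pe ~ {peclet:.0f}")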
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
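A toy subscription-based filter illustrates the general idea of dropping uninteresting events close to their source; the Event, Subscription, and EventFilter names below are hypothetical and not taken from the described architecture.

```python
# Minimal sketch of subscription-based event filtering for distributed monitoring.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Event:
    source: str        # component that generated the event
    kind: str          # e.g. "cpu_load", "packet_loss", "session_join"
    payload: dict

@dataclass
class Subscription:
    name: str
    predicate: Callable[[Event], bool]   # filter installed near the event source
    sink: Callable[[Event], None]        # management-application callback

class EventFilter:
    """Runs close to the instrumented components so that uninteresting events
    are dropped before they consume network bandwidth."""
    def __init__(self) -> None:
        self.subscriptions: List[Subscription] = []

    def subscribe(self, sub: Subscription) -> None:   # dynamic (re)configuration
        self.subscriptions.append(sub)

    def publish(self, event: Event) -> None:
        for sub in self.subscriptions:
            if sub.predicate(event):
                sub.sink(event)                        # forward only matching events

# Usage: forward only high CPU-load events to a debugging tool.
f = EventFilter()
f.subscribe(Subscription(
    name="hot-nodes",
    predicate=lambda e: e.kind == "cpu_load" and e.payload.get("value", 0) > 0.9,
    sink=lambda e: print("ALERT", e.source, e.payload)))
f.publish(Event("node-17", "cpu_load", {"value": 0.95}))
f.publish(Event("node-03", "cpu_load", {"value": 0.20}))   # dropped by the filter
```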
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHenry, Mark P.; Johnson, Jay; Hightower, Mike
2016-01-01
The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation and power factor/reactive power control and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and to whom and how costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures renewable energy systems could greatly improve power quality on distribution systems with nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though this issue can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. As a result, we discuss useful inverter technical knowledge for policy-makers to mitigate ongoing inflation of electricity network tariff components by new DG interconnection requirements or electricity markets which value power quality and control.
Tong, Huiyan; Zhao, Peng; Zhang, Hongwei; Tian, Yimei; Chen, Xi; Zhao, Weigao; Li, Mei
2015-01-01
Deterioration and leakage of drinking water in distribution systems have been a major issue in the water industry for years and are associated with corrosion. This paper shows that occluded water within pipe scales has an acidic environment and high concentrations of iron, manganese, chloride, sulfate and nitrate, which aggravates many pipeline leakage accidents. Six types of water samples were analyzed under flowing and stagnant periods. Both the water at the exterior of the tubercles and stagnant water carry suspended iron particles, which explains the occurrence of "red water" when the system's hydraulic conditions change. Nitrate is more concentrated in occluded water under flowing conditions than in flowing water. However, the concentration of nitrate in occluded water under stagnant conditions is found to be less than that in stagnant water. A high concentration of manganese is found in steady water, occluded water and stagnant water. These findings bear on secondary pollution and on the corrosion of pipes and containers used in drinking water distribution systems. The method of extracting occluded water through tiny holes drilled carefully from the pipes' exteriors, at the positions of the corrosion scales, is an important contribution to research on corrosion in distribution systems, and this paper furthers our understanding and contributes to the growing body of knowledge regarding occluded environments in corrosion scales. Copyright © 2014 Elsevier Ltd. All rights reserved.
Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant
Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.
2016-01-01
There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant’s components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365
NASA Astrophysics Data System (ADS)
Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao
2018-04-01
Aiming at the prediction of carbonate fractured-vuggy reservoirs, we put forward an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fracture, meso-scale fracture, macro-scale fracture and cave. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuth anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant algorithms are used to predict the distribution of macro-scale fractures. We also constrain the results with trial production data for accuracy. Next, the distribution of caves is predicted by the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach can well explain the spatial distribution of carbonate reservoirs. It can solve the problem of non-uniqueness and improve fracture prediction accuracy.
Understanding scaling through history-dependent processes with collapsing sample space.
Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan
2015-04-28
History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks, or aging processes such as in fragmentation processes. SSR processes provide a new alternative to understand the origin of scaling in complex systems without the recourse to multiplicative, preferential, or self-organized critical processes.
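The Zipf result can be checked numerically with a minimal sample-space-reducing cascade; the simulation below is a generic interpretation of an SSR process, not the authors' code, and the comparison against 1/i is only approximate.

```python
# Minimal sample-space-reducing (SSR) process: starting from N possible states,
# each step jumps uniformly to a strictly lower state until state 1 is reached;
# visit frequencies approach Zipf's law p(i) ~ 1/i.
import random
from collections import Counter

def ssr_run(n_states: int) -> list:
    """One SSR cascade: record every state visited on the way down to 1."""
    visits, state = [], n_states
    while state > 1:
        state = random.randint(1, state - 1)   # sample space shrinks each step
        visits.append(state)
    return visits

def rank_frequencies(n_states: int = 1000, n_runs: int = 20000) -> Counter:
    counts = Counter()
    for _ in range(n_runs):
        counts.update(ssr_run(n_states))
    return counts

counts = rank_frequencies()
total = sum(counts.values())
norm = sum(1.0 / k for k in range(1, 1000))    # normalization of 1/i over the states
for i in (1, 2, 4, 8, 16):
    print(i, round(counts[i] / total, 4), "Zipf ~", round((1.0 / i) / norm, 4))
```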
Computer programs for smoothing and scaling airfoil coordinates
NASA Technical Reports Server (NTRS)
Morgan, H. L., Jr.
1983-01-01
Detailed descriptions are given of the theoretical methods and associated computer codes of a program to smooth and a program to scale arbitrary airfoil coordinates. The smoothing program utilizes both least-squares polynomial and least-squares cubic spline techniques to smooth iteratively the second derivatives of the y-axis airfoil coordinates with respect to a transformed x-axis system which unwraps the airfoil and stretches the nose and trailing-edge regions. The corresponding smooth airfoil coordinates are then determined by solving a tridiagonal matrix of simultaneous cubic-spline equations relating the y-axis coordinates and their corresponding second derivatives. A technique for computing the camber and thickness distribution of the smoothed airfoil is also discussed. The scaling program can then be used to scale the thickness distribution generated by the smoothing program to a specific maximum thickness which is then combined with the camber distribution to obtain the final scaled airfoil contour. Computer listings of the smoothing and scaling programs are included.
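As a rough illustration of the final scaling step, the sketch below rescales a thickness distribution to a target maximum thickness and recombines it with a camber line. It applies thickness vertically rather than normal to the camber line, and the coordinates are invented, so it is only a simplified stand-in for the described programs.

```python
# Hedged sketch (not NASA's code): scale an airfoil thickness distribution to a
# target maximum thickness and rebuild the contour from camber +/- half thickness.
import numpy as np

def scale_airfoil(camber, thickness, target_max_t):
    """Scale thickness so its maximum equals target_max_t (fraction of chord)."""
    t_scaled = thickness * (target_max_t / np.max(thickness))
    y_upper = camber + 0.5 * t_scaled
    y_lower = camber - 0.5 * t_scaled
    return y_upper, y_lower

# Example: a crude parabolic camber line and an arbitrary thickness shape.
x = np.linspace(0.0, 1.0, 101)
camber = 0.04 * x * (1.0 - x)                 # illustrative camber line
thickness = 0.24 * np.sqrt(x) * (1.0 - x)     # illustrative thickness distribution
y_up, y_lo = scale_airfoil(camber, thickness, target_max_t=0.12)
print("new max thickness:", round(float(np.max(y_up - y_lo)), 3))  # -> 0.12
```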
Synchronization in scale-free networks: The role of finite-size effects
NASA Astrophysics Data System (ADS)
Torres, D.; Di Muro, M. A.; La Rocca, C. E.; Braunstein, L. A.
2015-06-01
Synchronization problems in complex networks are very often studied by researchers due to their many applications to various fields such as neurobiology, e-commerce and completion of tasks. In particular, scale-free networks with degree distribution P(k)∼ k-λ , are widely used in research since they are ubiquitous in Nature and other real systems. In this paper we focus on the surface relaxation growth model in scale-free networks with 2.5< λ <3 , and study the scaling behavior of the fluctuations, in the steady state, with the system size N. We find a novel behavior of the fluctuations characterized by a crossover between two regimes at a value of N=N* that depends on λ: a logarithmic regime, found in previous research, and a constant regime. We propose a function that describes this crossover, which is in very good agreement with the simulations. We also find that, for a system size above N* , the fluctuations decrease with λ, which means that the synchronization of the system improves as λ increases. We explain this crossover analyzing the role of the network's heterogeneity produced by the system size N and the exponent of the degree distribution.
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...
Multi-time Scale Coordination of Distributed Energy Resources in Isolated Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony; Xie, Le; Butler-Purry, Karen
2016-03-31
In isolated power systems, including microgrids, distributed assets, such as renewable energy resources (e.g. wind, solar) and energy storage, can be actively coordinated to reduce dependency on fossil fuel generation. The key challenge of such coordination arises from significant uncertainty and variability occurring at small time scales associated with increased penetration of renewables. Specifically, the problem is with ensuring economic and efficient utilization of DERs, while also meeting operational objectives such as adequate frequency performance. One possible solution is to reduce the time step at which tertiary controls are implemented and to ensure feedback and look-ahead capability are incorporated to handle variability and uncertainty. However, reducing the time step of tertiary controls necessitates investigating time-scale coupling with primary controls so as not to exacerbate system stability issues. In this paper, an optimal coordination (OC) strategy, which considers multiple time-scales, is proposed for isolated microgrid systems with a mix of DERs. This coordination strategy is based on an online moving horizon optimization approach. The effectiveness of the strategy was evaluated in terms of economics, technical performance, and computation time by varying key parameters that significantly impact performance. The illustrative example with realistic scenarios on a simulated isolated microgrid test system suggests that the proposed approach is generalizable towards designing multi-time scale optimal coordination strategies for isolated power systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nole, Michael; Daigle, Hugh; Cook, Ann E.
2017-02-01
The goal of this study is to computationally determine the potential distribution patterns of diffusion-driven methane hydrate accumulations in coarse-grained marine sediments. Diffusion of dissolved methane in marine gas hydrate systems has been proposed as a potential transport mechanism through which large concentrations of hydrate can preferentially accumulate in coarse-grained sediments over geologic time. Using one-dimensional compositional reservoir simulations, we examine hydrate distribution patterns at the scale of individual sand layers (1 to 20 m thick) that are deposited between microbially active fine-grained material buried through the gas hydrate stability zone (GHSZ). We then extrapolate to two-dimensional and basin-scale three-dimensional simulations, where we model dipping sands and multilayered systems. We find that properties of a sand layer including pore size distribution, layer thickness, dip, and proximity to other layers in multilayered systems all exert control on diffusive methane fluxes toward and within a sand, which in turn impact the distribution of hydrate throughout a sand unit. In all of these simulations, we incorporate data on physical properties and sand layer geometries from the Terrebonne Basin gas hydrate system in the Gulf of Mexico. We demonstrate that diffusion can generate high hydrate saturations (upward of 90%) at the edges of thin sands at shallow depths within the GHSZ, but that it is ineffective at producing high hydrate saturations throughout thick (greater than 10 m) sands buried deep within the GHSZ. As a result, we find that hydrate in fine-grained material can preserve high hydrate saturations in nearby thin sands with burial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Module-level power electronics, such as DC power optimizers, microinverters, and those found in AC modules, are increasing in popularity in smaller-scale photovoltaic (PV) systems as their prices continue to decline. Therefore, it is important to provide PV modelers with guidelines about how to model these distributed power electronics appropriately in PV modeling software. This paper extends the work completed at NREL that provided recommendations to model the performance of distributed power electronics in NREL’s popular PVWatts calculator [1], to provide similar guidelines for modeling these technologies in NREL's more complex System Advisor Model (SAM).
2015-05-01
In the past decade, unmanned systems have significantly impacted warfare...environments at a speed and scale beyond manned capability. However, current unmanned systems operate with minimal autonomy. To meet warfighter needs and
Boehm, Alexandria B
2002-10-15
In this study, we extend the established scaling theory for cluster size distributions generated during unsteady coagulation to number-flux distributions that arise during steady-state coagulation and settling in an unmixed water mass. The scaling theory predicts self-similar number-flux distributions and power-law decay of total number flux with depth. The shape of the number-flux distributions and the power-law exponent describing the decay of the total number flux are shown to depend on the homogeneity and small i/j limit of the coagulation kernel and the exponent kappa, which describes the variation in settling velocity with cluster volume. Particle field measurements from Lake Zurich, collected by U. Weilenmann and co-workers (Limnol. Oceanogr.34, 1 (1989)), are used to illustrate how the scaling predictions can be applied to a natural system. This effort indicates that within the mid-depth region of Lake Zurich, clusters of the same size preferentially interact and large clusters react with one another more quickly than small ones, indicative of clusters coagulating in a reaction-limited regime.
NASA Astrophysics Data System (ADS)
Massiot, Cécile; Nicol, Andrew; McNamara, David D.; Townend, John
2017-08-01
Analysis of fracture orientation, spacing, and thickness from acoustic borehole televiewer (BHTV) logs and cores in the andesite-hosted Rotokawa geothermal reservoir (New Zealand) highlights potential controls on the geometry of the fracture system. Cluster analysis of fracture orientations indicates four fracture sets. Probability distributions of fracture spacing and thickness measured on BHTV logs are estimated for each fracture set, using maximum likelihood estimations applied to truncated size distributions to account for sampling bias. Fracture spacing is dominantly lognormal, though two subordinate fracture sets have a power law spacing. This difference in spacing distributions may reflect the influence of the andesitic sequence stratification (lognormal) and tectonic faults (power law). Fracture thicknesses of 9-30 mm observed in BHTV logs, and 1-3 mm in cores, are interpreted to follow a power law. Fractures in thin sections (˜5 μm thick) do not fit this power law distribution, which, together with their orientation, reflect a change of controls on fracture thickness from uniform (such as thermal) controls at thin section scale to anisotropic (tectonic) at core and BHTV scales of observation. However, the ˜5% volumetric percentage of fractures within the rock at all three scales suggests a self-similar behavior in 3-D. Power law thickness distributions potentially associated with power law fluid flow rates, and increased connectivity where fracture sets intersect, may cause the large permeability variations that occur at hundred meter scales in the reservoir. The described fracture geometries can be incorporated into fracture and flow models to explore the roles of fracture connectivity, stress, and mineral precipitation/dissolution on permeability in such andesite-hosted geothermal systems.
Scaling earthquake ground motions for performance-based assessment of buildings
Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.
2011-01-01
The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely, (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
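Method (1) can be illustrated with a few lines of arithmetic: both components of a pair are multiplied by a single factor chosen so that the geometric mean of their spectral ordinates reaches a target value. The spectral accelerations below are hypothetical, the match is made at a single period for simplicity, and the snippet is not the study's implementation.

```python
# Illustrative geometric-mean scaling of a ground-motion pair.
import numpy as np

def geomean_scale_factor(sa_comp1: float, sa_comp2: float, sa_target: float) -> float:
    """Single factor applied to both horizontal components so that the
    geometric mean of their spectral accelerations equals sa_target."""
    geomean = np.sqrt(sa_comp1 * sa_comp2)
    return sa_target / geomean

# Example: spectral accelerations (g) of the two components at a chosen period.
sa1, sa2 = 0.35, 0.50
factor = geomean_scale_factor(sa1, sa2, sa_target=0.60)
print(f"scale factor = {factor:.3f}")
print(f"scaled geomean = {np.sqrt((sa1 * factor) * (sa2 * factor)):.3f} g")  # -> 0.600
```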
“Colored water” resulting from suspended iron particles is a common drinking water consumer complaint which is largely impacted by water chemistry. A bench scale study, performed on a 90 year-old corroded cast-iron pipe section removed from a drinking water distribution system, w...
Measuring neuronal avalanches in disordered systems with absorbing states
NASA Astrophysics Data System (ADS)
Girardi-Schappo, M.; Tragtenberg, M. H. R.
2018-04-01
Power-law-shaped avalanche-size distributions are widely used to probe for critical behavior in many different systems, particularly in neural networks. The definition of avalanche is ambiguous. Usually, theoretical avalanches are defined as the activity between a stimulus and the relaxation to an inactive absorbing state. On the other hand, experimental neuronal avalanches are defined by the activity between consecutive silent states. We claim that the latter definition may be extended to some theoretical models to characterize their power-law avalanches and critical behavior. We study a system in which the separation of driving and relaxation time scales emerges from its structure. We apply both definitions of avalanche to our model. Both yield power-law-distributed avalanches that scale with system size in the critical point as expected. Nevertheless, we find restricted power-law-distributed avalanches outside of the critical region within the experimental procedure, which is not expected by the standard theoretical definition. We remark that these results are dependent on the model details.
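The experimental avalanche definition referenced above (activity between consecutive silent bins) can be sketched as follows; the spike-count series, bin width, and silence threshold are illustrative choices.

```python
# Extract avalanche sizes from a binned activity series: an avalanche is the
# total activity between consecutive silent (zero-activity) time bins.
import numpy as np

def avalanche_sizes(activity: np.ndarray) -> list:
    """Sum activity over contiguous runs of non-silent bins (activity > 0)."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += int(a)
        elif current > 0:          # a silent bin closes the avalanche
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Example: number of active units per time bin (synthetic data).
activity = np.array([0, 2, 5, 1, 0, 0, 3, 0, 7, 4, 4, 0, 1, 0])
print(avalanche_sizes(activity))   # -> [8, 3, 15, 1]
```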
The Global Distribution of Precipitation and Clouds. Chapter 2.4
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Adler, Robert; Huffman, George; Rossow, William; Ritter, Michael; Curtis, Scott
2004-01-01
The water cycle is the key circuit moving water through the Earth's system. This large system, powered by energy from the sun, is a continuous exchange of moisture between the oceans, the atmosphere, and the land. Precipitation (including rain, snow, sleet, freezing rain, and hail) is the primary mechanism for transporting water from the atmosphere back to the Earth's surface and is the key physical process that links aspects of climate, weather, and the global water cycle. Global precipitation and associated cloud processes are critical for understanding the water cycle balance on a global scale and interactions with the Earth's climate system. However, unlike measurement of less dynamic and more homogeneous meteorological fields such as pressure or even temperature, accurate assessment of global precipitation is particularly challenging due to its highly stochastic and rapidly changing nature. It is not uncommon to observe a broad spectrum of precipitation rates and distributions over very localized time scales. Furthermore, precipitating systems generally exhibit nonhomogeneous spatial distributions of rain rates over local to global domains.
Shared versus distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large-scale multiprocessors.
R&D100: Lightweight Distributed Metric Service
Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike
2018-06-12
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
R&D100: Lightweight Distributed Metric Service
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentile, Ann; Brandt, Jim; Tucker, Tom
2015-11-19
On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
Globular cluster systems - Comparative evolution of Galactic halos
NASA Astrophysics Data System (ADS)
Harris, William E.
Space distributions, metallicity/age distributions, and kinematics are considered for the Milky Way halo system. Comparisons are made with other systems, and time scales for dynamical evolution are considered. It is noted that the globular cluster subsystems of halos resemble each other more closely than their parent galaxies do; this forms a reasonable basis for supposing that they represent a kind of underlying unity in the protogalaxy formation process.
The Statistics of Urban Scaling and Their Connection to Zipf’s Law
Gomez-Lievano, Andres; Youn, HyeJin; Bettencourt, Luís M. A.
2012-01-01
Urban scaling relations characterizing how diverse properties of cities vary on average with their population size have recently been shown to be a general quantitative property of many urban systems around the world. However, in previous studies the statistics of urban indicators were not analyzed in detail, raising important questions about the full characterization of urban properties and how scaling relations may emerge in these larger contexts. Here, we build a self-consistent statistical framework that characterizes the joint probability distributions of urban indicators and city population sizes across an urban system. To develop this framework empirically we use one of the most granular and stochastic urban indicators available, specifically measuring homicides in cities of Brazil, Colombia and Mexico, three nations with high and fast changing rates of violent crime. We use these data to derive the conditional probability of the number of homicides per year given the population size of a city. To do this we use Bayes’ rule together with the estimated conditional probability of city size given their number of homicides and the distribution of total homicides. We then show that scaling laws emerge as expectation values of these conditional statistics. Knowledge of these distributions implies, in turn, a relationship between scaling and population size distribution exponents that can be used to predict Zipf’s exponent from urban indicator statistics. Our results also suggest how a general statistical theory of urban indicators may be constructed from the stochastic dynamics of social interaction processes in cities. PMID:22815745
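The logic of deriving scaling laws as expectation values can be summarized schematically; the notation below is ours and is intended only as a reading aid, not as the paper's exact formulation.

```latex
% Sketch: a scaling law emerging as an expectation value of conditional statistics
% obtained via Bayes' rule (Y = urban indicator, e.g. homicides; N = city population).
\begin{align}
  P(Y \mid N) &= \frac{P(N \mid Y)\, P(Y)}{P(N)}
    && \text{(conditional distribution of the indicator given city size)} \\
  \bar{Y}(N) &\equiv \mathbb{E}[\,Y \mid N\,] = \sum_{Y} Y\, P(Y \mid N)
    && \text{(expectation over the conditional statistics)} \\
  \bar{Y}(N) &\propto N^{\beta}
    && \text{(the observed average urban scaling relation)}
\end{align}
```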
Alipour, Vali; Dindarloo, Kavoos; Mahvi, Amir Hossein; Rezaei, Leila
2015-03-01
Corrosion and scaling are a major problem in water distribution systems, so evaluation of water corrosivity properties is a routine test in water networks. To evaluate water stability in the Bandar Abbas water distribution system, the network was divided into 15 clusters and 45 samples were taken. Langelier, Ryznar, Puckorius, Larson-Skold (LS) and Aggressive indices were determined and compared with the marble test. The mean values of the measured parameters were pH (7.8 ± 0.1), electrical conductivity (1,083.9 ± 108.7 μS/cm), total dissolved solids (595.7 ± 54.7 mg/L), Cl (203.5 ± 18.7 mg/L), SO₄ (174.7 ± 16.0 mg/L), alkalinity (134.5 ± 9.7 mg/L), total hardness (156.5 ± 9.3 mg/L), HCO₃ (137.4 ± 13.0 mg/L) and calcium hardness (71.8 ± 4.3 mg/L). According to the Ryznar, Puckorius and Aggressive indices, all samples were stable; based on the Langelier index, 73% of samples were slightly corrosive and the rest were scale forming; according to the LS index, all samples were corrosive. Marble test results showed that the water in all 15 clusters tended toward scale formation. Water in Bandar Abbas is slightly scale forming. The most appropriate indices for the network conditions are the Aggressive, Puckorius and Ryznar indices, which were consistent with the marble test.
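As context for the indices named above, the sketch below evaluates the Langelier and Ryznar indices for the reported mean water chemistry using one widely quoted empirical approximation for the saturation pH; the water temperature (25 °C) and the exact form of the approximation are assumptions, since neither is given in the abstract, and the study's own calculation may differ.

```python
# Hedged sketch of the Langelier (LSI) and Ryznar (RSI) stability indices.
import math

def saturation_ph(tds_mg_l, temp_c, ca_hardness_caco3, alkalinity_caco3):
    """Approximate pHs = (9.3 + A + B) - (C + D), a common handbook form."""
    A = (math.log10(tds_mg_l) - 1.0) / 10.0
    B = -13.12 * math.log10(temp_c + 273.0) + 34.55
    C = math.log10(ca_hardness_caco3) - 0.4
    D = math.log10(alkalinity_caco3)
    return (9.3 + A + B) - (C + D)

# Mean values reported in the abstract; the temperature is an assumption.
ph, tds, ca, alk = 7.8, 595.7, 71.8, 134.5
phs = saturation_ph(tds, 25.0, ca, alk)
lsi = ph - phs            # Langelier index: pH minus saturation pH
rsi = 2.0 * phs - ph      # Ryznar index
print(f"pHs ~ {phs:.2f}, LSI ~ {lsi:.2f}, RSI ~ {rsi:.2f}")
# Interpretation thresholds vary between references; compare against the
# classifications reported in the study rather than a single fixed cutoff.
```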
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
Competition and Cooperation of Distributed Generation and Power System
NASA Astrophysics Data System (ADS)
Miyake, Masatoshi; Nanahara, Toshiya
Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to a power system, coordination and competition between distributed generators and large-scale power sources would be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best-response correspondences of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Secondly, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.
The revolution in data gathering systems
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Trover, W. F.
1975-01-01
Data acquisition systems used in NASA's wind tunnels from the 1950s through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology, which transformed first the central computer system and finally the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro and minicomputers.
Transdisciplinary application of the cross-scale resilience model
Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.
2014-01-01
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/ anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.
Improving Distributed Diagnosis Through Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino
2011-01-01
Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.
Self-dissimilarity as a High Dimensional Complexity Measure
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Macready, William
2005-01-01
For many systems characterized as "complex" the patterns exhibited on different scales differ markedly from one another. For example the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain-forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.
Cassini, Filippo; Scheutz, Charlotte; Skov, Bent H; Mou, Zishen; Kjeldsen, Peter
2017-05-01
Greenhouse gas mitigation at landfills by methane oxidation in engineered biocover systems is believed to be a cost-effective technology, but so far a full quantitative evaluation of the efficiency of the technology at full scale has only been carried out in a few cases. A third-generation semi-passive biocover system was constructed at the AV Miljø Landfill, Denmark. The biocover system was fed by landfill gas pumped out of three leachate collection wells. An innovative gas distribution system was used to overcome the commonly observed surface emission hot spot areas resulting from an uneven gas distribution to the active methane oxidation layer, leading to areas with methane overloading. Screening of methane and carbon dioxide surface concentrations, as well as flux measurements using a flux chamber at the surface of the biocover, showed homogeneous distributions, indicating an even gas distribution. This was supported by results from a tracer gas test in which the compound HFC-134a was added to the gas inlet over an adequately long time period to obtain tracer gas stationarity in the whole biocover system. Studies of the tracer gas movement within the biocover system showed a very even gas distribution in gas probes installed in the gas distribution layer. The flux of tracer gas out of the biocover surface, as measured by the flux chamber technique, also showed a spatially even distribution. Installed probes logging the temperature and moisture content of the methane oxidation layer at different depths showed elevated temperatures in the layer, with differences from ambient temperature in the range of 25-50°C at the deepest measuring point, due to the microbial processes occurring in the layer. The moisture measurements showed that infiltrating precipitation was efficiently drained away from the methane oxidation layer. Copyright © 2017 Elsevier Ltd. All rights reserved.
Integrating scales of seagrass monitoring to meet conservation needs
Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.
2012-01-01
We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
Empirical evidence for multi-scaled controls on wildfire size distributions in California
NASA Astrophysics Data System (ADS)
Povak, N.; Hessburg, P. F., Sr.; Salter, R. B.
2014-12-01
Ecological theory asserts that regional wildfire size distributions are examples of self-organized critical (SOC) systems. Controls on SOC event-size distributions are, by definition, purely endogenous to the system and include the (1) frequency and pattern of ignitions, (2) distribution and size of prior fires, and (3) lagged successional patterns after fires. However, recent work has shown that the largest wildfires often result from extreme climatic events, and that patterns of vegetation and topography may help constrain local fire spread, calling into question the SOC model's simplicity. Using an atlas of >12,000 California wildfires (1950-2012) and maximum likelihood estimation (MLE), we fit four different power-law models and broken-stick regressions to fire-size distributions across 16 Bailey's ecoregions. Comparisons among empirical fire size distributions across ecoregions indicated that most ecoregions' fire-size distributions were significantly different, suggesting that broad-scale top-down controls differed among ecoregions. One-parameter power-law models consistently fit a middle range of fire sizes (~100 to 10,000 ha) across most ecoregions, but did not fit the larger and smaller fire sizes. We fit the same four power-law models to patch size distributions of aspect, slope, and curvature topographies and found that the power-law models fit a similar middle range of topography patch sizes. These results suggested that empirical evidence may exist for topographic controls on fire sizes. To test this, we used neutral landscape modeling techniques to determine whether observed fire edges corresponded with aspect breaks more often than expected at random. We found significant differences between the empirical and neutral models for some ecoregions, particularly within the middle range of fire sizes. Our results, combined with other recent work, suggest that controls on ecoregional fire size distributions are multi-scaled and likely are not purely SOC. California wildfire ecosystems appear to be adaptive, governed by stationary and non-stationary controls, which may be either exogenous or endogenous to the system.
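The MLE step for a single power-law exponent can be sketched with the standard continuous estimator; the data below are synthetic stand-ins rather than the California fire atlas, and the lower cutoff is an illustrative choice.

```python
# Maximum likelihood fit of a continuous power law p(x) ~ x^(-alpha) for x >= xmin,
# using the standard estimator alpha = 1 + n / sum(ln(x_i / xmin)).
import numpy as np

def fit_power_law_exponent(x: np.ndarray, xmin: float) -> float:
    """MLE of alpha for the tail x >= xmin (continuous case)."""
    tail = x[x >= xmin]
    return 1.0 + tail.size / np.sum(np.log(tail / xmin))

# Generate synthetic sizes with a known exponent via inverse-transform sampling.
rng = np.random.default_rng(0)
alpha_true, xmin = 2.2, 100.0          # e.g., fire sizes in hectares (illustrative)
u = rng.uniform(size=50_000)
sizes = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))
print("recovered alpha:", round(fit_power_law_exponent(sizes, xmin), 3))  # ~2.2
```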
Global deformation on the surface of Venus
NASA Technical Reports Server (NTRS)
Bilotti, Frank; Connors, Chris; Suppe, John
1992-01-01
Large-scale mapping of tectonic structures on Venus shows that there is an organized global distribution to deformation. The structures we emphasize are linear compressive mountain belts, extensional rifted zones, and the small-scale but widely distributed wrinkle ridges. Ninety percent of the area of the planet's compressive mountain belts is concentrated in the northern hemisphere, whereas the southern hemisphere is dominated by extension and small-scale compression. We propose that this striking concentration of fold belts in the northern hemisphere, along with the globe-encircling equatorial rift system, represents a global organization to deformation on Venus.
Li, Manjie; Liu, Zhaowei; Chen, Yongcan; Hai, Yang
2016-12-01
Interaction between old, corroded iron pipe surfaces and bulk water is crucial to water quality protection in drinking water distribution systems (WDS). Iron released from corrosion products will deteriorate water quality and lead to red water. This study attempted to understand the effects of pipe materials on corrosion scale characteristics and water quality variations in WDS. A more than 20-year-old hybrid pipe (HP) section, assembled from unlined cast iron pipe (UCIP) and galvanized iron pipe (GIP), was selected to investigate the physico-chemical characteristics of corrosion scales and their effects on water quality variations. Scanning Electron Microscopy (SEM), Energy Dispersive X-ray Spectroscopy (EDS), Inductively Coupled Plasma (ICP) and X-ray Diffraction (XRD) were used to analyze the micromorphology and chemical composition of the corrosion scales. In bench testing, water quality parameters such as pH, dissolved oxygen (DO), oxidation reduction potential (ORP), alkalinity, conductivity, turbidity, color, Fe2+, Fe3+ and Zn2+ were determined. Scale analysis and bench-scale testing results demonstrated a significant effect of pipe materials on scale characteristics and thereby on water quality variations in WDS. Characteristics of corrosion scales sampled from different pipe segments show obvious differences, both physical and chemical. The corrosion scales were found to be highly amorphous. Thanks to the protection of the zinc coating, the GIP system showed the best water quality stability, in spite of a high zinc release potential. It is deduced that the complex composition of the corrosion scales and the structural break at the weld diminish water quality stability in the HP system. Measurement results showed that iron is released mainly in ferric particulate form. Copyright © 2016 Elsevier Ltd. All rights reserved.
Scale relativity and hierarchical structuring of planetary systems
NASA Astrophysics Data System (ADS)
Galopeau, P. H. M.; Nottale, L.; da Rocha, D.; Tran Minh, N.
2003-04-01
The theory of scale relativity, applied to macroscopic gravitational systems like planetary systems, allows one to predict quantization laws of several key parameters characterizing those systems (distance between planets and central star, obliquity, eccentricity...) which are organized in a hierarchical way. In the framework of the scale relativity approach, one demonstrates that the motion (at relatively large time-scales) of the bodies in planetary systems, described in terms of fractal geodesic trajectories, is governed by a Schrödinger-like equation. Preferential orbits are predicted in terms of probability density peaks with semi-major axis given by: a_n = GMn^2/w^2 (M is the mass of the central star and w is a velocity close to 144 km s-1 in the case of our inner solar system and of the presently observed exoplanets). The velocity of the planet orbiting at this distance satisfies the relation v_n = w/n. Moreover, the mass distribution of the planets in our solar system can be accounted for in this model. These predictions are in good agreement with the observed values of the actual orbital parameters. Furthermore, the exoplanets which have been recently discovered around nearby stars also follow the same law in terms of the same constant in a highly significant statistical way. The theory of scale relativity also predicts structures for the obliquities and inclinations of the planets and satellites: the probability density of their distribution between 0 and π are expected to display peaks at particular angles θ_k = kπ/n. A statistical agreement is obtained for our solar system with n=7. Another prediction concerns the distribution of the planets eccentricities e. The theory foresees a quantization law e = k/n where k is an integer and n is the quantum number that characterizes semi-major axes. The presently known exoplanet eccentricities are compatible with this theoretical prediction. Finally, although all these planetary systems may look very different from our solar system, they actually present universal structures comparable to ours, so that a high probability to discover exoplanets having orbital characteristics very similar to the Earth's ones can be expected.
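A quick numerical check of the quoted relation, using the stated w = 144 km/s and the solar mass, reproduces the inner-planet semi-major axes to within a few percent; the planet-to-rank assignment below is our illustrative reading rather than a value quoted in the abstract.

```python
# Worked check of a_n = G*M*n^2/w^2 for the inner solar system (w = 144 km/s).
G = 6.674e-11            # m^3 kg^-1 s^-2
M_sun = 1.989e30         # kg
w = 144e3                # m/s, value quoted in the abstract
AU = 1.496e11            # m

def a_n(n: int) -> float:
    """Predicted semi-major axis (in AU) for quantum number n."""
    return G * M_sun * n**2 / w**2 / AU

observed = {"Mercury": 0.39, "Venus": 0.72, "Earth": 1.00, "Mars": 1.52}
for n, name in zip((3, 4, 5, 6), observed):
    print(f"n={n}: predicted {a_n(n):.2f} AU vs {name} at {observed[name]:.2f} AU")
```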
Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes
Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; ...
2015-11-17
Slowly compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied by the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tunability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. In conclusion, the results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
Large-area photogrammetry based testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul
2017-03-01
An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three dimensional digital image correlation (3D DIC) and three dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages for the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure entire surface of utility scale blades to extract distributed strain, deflection, and modal parameters. This paper only tries to address some of the difficulties including: (1) assessing the accuracy of the 3D DIC system to measure full-field distributed strain and displacement over the large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing an output-only system identification to estimate modal parameters of a utility scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed on a large area over a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. The blade certification and testing is typically performed using International Electro-Technical Commission standard (IEC 61400-23). For static tests, the blade is pulled in either flap-wise or edge-wise directions to measure deflection or distributed strain at a few limited locations of a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) in measuring 3D displacement and extracting structural dynamic parameters on a mock set up emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
El-Chakhtoura, Joline; Prest, Emmanuelle; Saikaly, Pascal; van Loosdrecht, Mark; Hammes, Frederik; Vrouwenvelder, Hans
2015-05-01
Understanding the biological stability of drinking water distribution systems is imperative in the framework of process control and risk management. The objective of this research was to examine the dynamics of the bacterial community during drinking water distribution at high temporal resolution. Water samples (156 in total) were collected over short time-scales (minutes/hours/days) from the outlet of a treatment plant and a location in its corresponding distribution network. The drinking water is treated by biofiltration and disinfectant residuals are absent during distribution. The community was analyzed by 16S rRNA gene pyrosequencing and flow cytometry as well as conventional, culture-based methods. Despite a random dramatic event (detected with pyrosequencing and flow cytometry but not with plate counts), the bacterial community profile at the two locations did not vary significantly over time. A diverse core microbiome was shared between the two locations (58-65% of the taxa and 86-91% of the sequences) and found to be dependent on the treatment strategy. The bacterial community structure changed during distribution, with greater richness detected in the network and phyla such as Acidobacteria and Gemmatimonadetes becoming abundant. The rare taxa displayed the highest dynamicity, causing the major change during water distribution. This change did not have hygienic implications and is contingent on the sensitivity of the applied methods. The concept of biological stability therefore needs to be revised. Biostability is generally desired in drinking water guidelines but may be difficult to achieve in large-scale complex distribution systems that are inherently dynamic.
2004-01-01
[Fragmentary entry: excerpts from a survey of intrusion detection systems. The recoverable text notes that audit records capture the login identity under which a system call is executed and the call's parameters (e.g., file names with full paths), and compares distributed architectures such as COAST-EIMDT and EMERALD, which place components on target hosts and security servers and combine signature recognition with anomaly detection; the EMERALD project [80] is cited in this context.]
Taming active turbulence with patterned soft interfaces.
Guillamat, P; Ignés-Mullol, J; Sagués, F
2017-09-15
Active matter embraces systems that self-organize at different length and time scales, often exhibiting turbulent flows apparently deprived of spatiotemporal coherence. Here, we use a layer of a tubulin-based active gel to demonstrate that the geometry of active flows is determined by a single length scale, which we reveal in the exponential distribution of vortex sizes of active turbulence. Our experiments demonstrate that the same length scale reemerges as a cutoff for a scale-free power law distribution of swirling laminar flows when the material evolves in contact with a lattice of circular domains. The observed prevalence of this active length scale can be understood by considering the role of the topological defects that form during the spontaneous folding of microtubule bundles. These results demonstrate an unexpected strategy for active systems to adapt to external stimuli, and provide a handle to probe the existence of intrinsic length and time scales. Active nematics consist of self-driven components that develop orientational order and turbulent flow. Here Guillamat et al. investigate an active nematic constrained in a quasi-2D geometrical setup and show that there exists an intrinsic length scale that determines the geometry in all forcing regimes.
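When vortex sizes are exponentially distributed, the active length scale discussed above is simply the decay constant of that distribution. A minimal sketch, assuming a list of segmented vortex areas (synthetic stand-ins here, in arbitrary units), is:

```python
import numpy as np

# Hypothetical vortex areas segmented from flow-field images; synthetic stand-in
# drawn from an exponential distribution, not data from the paper.
rng = np.random.default_rng(1)
vortex_areas = rng.exponential(scale=0.8, size=2000)

# For P(a) = (1/a0) exp(-a/a0), the maximum-likelihood estimate of the characteristic
# (active) area a0 is simply the sample mean.
a0 = vortex_areas.mean()
active_length = np.sqrt(a0)   # a length scale associated with that characteristic area
print(f"characteristic vortex area ~ {a0:.2f}, active length scale ~ {active_length:.2f}")
```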
Recurrence and interoccurrence behavior of self-organized complex phenomena
NASA Astrophysics Data System (ADS)
Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.
2007-08-01
The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and the natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes, for which it is important to distinguish between interoccurrence and recurrence times. Interoccurrence times are the interval times between earthquakes on all faults in a region, whereas recurrence times are the interval times between earthquakes on a single fault or fault segment. In many, but not all, cases interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α = kC/kL, where kC is the spring constant of the connector springs and kL is the spring constant of the loader plate springs. For a soft system (small α) there are no system-wide events and the interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale-invariant) behavior of the hazard function, i.e. the probability that the next event will occur at a time t0 after the last event has a power-law dependence on t0. The Weibull distribution is the only distribution that has a scale-invariant hazard function. We further show that the onset of system-wide events is a well-defined critical point. We find that the number of system-wide events NSWE satisfies the scaling relation NSWE ∝ (α - αC)^δ, where αC is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
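Fitting a Weibull distribution to recurrence intervals, and evaluating its power-law hazard function, can be sketched as follows. The recurrence times below are synthetic stand-ins and the parameter values are illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Hypothetical recurrence times between system-wide slip events (synthetic stand-in).
rng = np.random.default_rng(2)
recurrence = rng.weibull(1.8, 1000) * 120.0   # shape 1.8, scale 120 (arbitrary time units)

# Fit a two-parameter Weibull (location fixed at zero), as in the interval analysis above.
shape, loc, scale = stats.weibull_min.fit(recurrence, floc=0)

# The Weibull hazard rate h(t) = (shape/scale) * (t/scale)**(shape-1) is a power law in t,
# which is the scale-invariance property the abstract invokes.
t = np.array([10.0, 100.0, 1000.0])
hazard = (shape / scale) * (t / scale) ** (shape - 1)
print(f"Weibull shape ~ {shape:.2f}, scale ~ {scale:.1f}; hazard at t={t}: {hazard}")
```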
Thermodynamic Vent System for an On-Orbit Cryogenic Reaction Control Engine
NASA Technical Reports Server (NTRS)
Hurlbert, Eric A.; Romig, Kris A.; Jimenez, Rafael; Flores, Sam
2012-01-01
A report discusses a cryogenic reaction control system (RCS) that integrates a Joule-Thomson (JT) device (expansion valve) and thermodynamic vent system (TVS) with a cryogenic distribution system to allow fine control of the propellant quality (subcooled liquid) during operation of the device. It enables zero-venting when coupled with an RCS engine. The proper attachment locations and sizing of the orifice are required with the propellant distribution line to facilitate line conditioning. During operations, system instrumentation was strategically installed along the distribution/TVS line assembly, and temperature control bands were identified. A sub-scale run tank, full-scale distribution line, open-loop TVS, and a combination of procured and custom-fabricated cryogenic components were used in the cryogenic RCS build-up. Simulated on-orbit activation and thruster firing profiles were performed to quantify system heat gain and evaluate the TVS's capability to maintain the required propellant conditions at the inlet to the engine valves. Test data determined that a small control valve, such as a piezoelectric valve, is optimal for continuously providing the required thermal control. The data obtained from testing have also assisted with the development of fluid and thermal models of an RCS to refine integrated cryogenic propulsion system designs. This system allows a liquid oxygen-based main propulsion and reaction control system for a spacecraft, which improves performance, safety, and cost over conventional hypergolic systems due to higher performance, use of nontoxic propellants, potential for integration with life support and power subsystems, and compatibility with in-situ produced propellants.
Li, Guiwei; Ding, Yuanxun; Xu, Hongfu; Jin, Junwei; Shi, Baoyou
2018-04-01
Inorganic contaminant accumulation in drinking water distribution systems (DWDS) is a great threat to water quality and safety. This work assessed the main risk factors for different water pipes and characterized the release profile of accumulated materials in a full-scale distribution system that frequently suffered from water discoloration problems. Physicochemical characterization of pipe deposits was performed using X-ray fluorescence, scanning electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy. The metal release profile was obtained through continuous monitoring of a full-scale DWDS area. The results showed that aluminum and manganese were the main metals in deposits in nonmetallic pipes, while iron was dominant in iron-based pipe corrosion scales. Manganese primarily existed as poorly crystalline MnO2. The relative abundance of Mn and Fe in deposits changed with distance from the water treatment plant. Compared with iron in corrosion scales, Mn and Al were more readily released back into the bulk water during unidirectional flushing. A main finding of this work is the co-release of Mn and Al in particulate form, with a significant correlation between these two metals. Dual control of manganese and aluminum in treated water is proposed as essential to cope with discoloration and trace metal contamination in DWDS.
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises to increase the energy yield from large-scale PV systems by reducing losses and recovering yield from mismatched arrays, and to reduce overall system costs through cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.
Two-threshold model for scaling laws of noninteracting snow avalanches
Faillettaz, J.; Louchet, F.; Grasso, J.-R.
2004-01-01
A two-threshold model was proposed for scaling laws of noninteracting snow avalanches. It was found that the sizes of the largest avalanches just preceding the collapse of the lattice system were power-law distributed. The proposed model reproduced the range of power-law exponents observed for land, rock, or snow avalanches by tuning the maximum value of the ratio of the two failure thresholds. A two-threshold 2D cellular automaton was introduced to study the scaling of gravity-driven systems.
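A minimal sandpile-style sketch of a two-threshold automaton is given below for illustration: one threshold triggers failure of a slowly loaded cell, and a second (lower) threshold governs propagation to perturbed neighbours. The driving, redistribution, and dissipation rules here are simplified assumptions, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 32
T_FAIL, T_PROP = 1.0, 0.5          # primary failure threshold and lower propagation threshold
grid = rng.uniform(0.0, T_PROP, size=(N, N))

def drive_and_relax(grid):
    """Load one random cell to the primary threshold, then propagate the avalanche."""
    i, j = rng.integers(N, size=2)
    grid[i, j] = T_FAIL
    stack, size = [(i, j)], 0
    while stack:
        i, j = stack.pop()
        if grid[i, j] < T_PROP:           # already relaxed below the propagation threshold
            continue
        load, grid[i, j] = grid[i, j], 0.0
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:          # load crossing the boundary is dissipated
                grid[ni, nj] += load / 4.0
                if grid[ni, nj] >= T_PROP:
                    stack.append((ni, nj))
    return size

sizes = [drive_and_relax(grid) for _ in range(5000)]
print("largest avalanche size (cells failed):", max(sizes))
```

In such a sketch, the exponent of the resulting size distribution shifts as the ratio T_PROP/T_FAIL is varied, which mirrors the tuning mechanism described in the abstract.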
Krintz, Chandra
2013-01-01
AppScale is an open source distributed software system that implements a cloud platform as a service (PaaS). AppScale makes cloud applications easy to deploy and scale over disparate cloud fabrics, implementing a set of APIs and architecture that also makes apps portable across the services they employ. AppScale is API-compatible with Google App Engine (GAE) and thus executes GAE applications on-premise or over other cloud infrastructures, without modification. PMID:23828721
NASA Astrophysics Data System (ADS)
Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Ochoa-Rodriguez, Susana; Willems, Patrick; Ichiba, Abdellah; Wang, Lipen; Pina, Rui; Van Assel, Johan; Bruni, Guendalina; Murla Tuyls, Damian; ten Veldhuis, Marie-Claire
2017-04-01
Land use distribution and sewer system geometry exhibit complex scale-dependent patterns in urban environments. This scale dependency is even more visible in a rasterized representation where a unique class is assigned to each pixel. Such features are well grasped with fractal tools, which are based on scale invariance and intrinsically designed to characterise and quantify the space filled by a geometrical set exhibiting complex and tortuous patterns. Fractal tools have been widely used in hydrology but seldom in the specific context of urban hydrology. In this paper, they are used to analyse surface and sewer data from 10 urban or peri-urban catchments located in 5 European countries in the framework of the NWE Interreg RainGain project (www.raingain.eu). The aim was to characterise urban catchment properties accounting for the complexity and inhomogeneity typical of urban water systems. Sewer system density and imperviousness (roads or buildings), represented in rasterized maps of 2 m x 2 m pixels, were analysed to quantify their fractal dimension, characteristic of scale invariance. It appears that both sewer density and imperviousness exhibit scale-invariant features that can be characterized with the help of fractal dimensions ranging from 1.6 to 2, depending on the catchment. In a given area, consistent results were found for the two geometrical features, yielding a robust and innovative way of quantifying the level of urbanization. The representation of imperviousness in operational semi-distributed hydrological models for these catchments was also investigated by computing fractal dimensions of the geometrical sets made up of the sub-catchments with coefficients of imperviousness greater than a range of thresholds. This makes it possible to quantify how well the spatial structure of imperviousness is represented in the urban hydrological models.
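A box-counting estimate of the fractal dimension of such a rasterized binary map (sewer pixels or impervious pixels) can be sketched as follows; the synthetic mask below is a placeholder for the 2 m x 2 m rasters described above.

```python
import numpy as np

# Placeholder binary raster: True = active pixel (e.g., impervious or sewer), False = otherwise.
rng = np.random.default_rng(4)
mask = rng.random((512, 512)) < 0.3

def box_count(mask, box_sizes):
    counts = []
    for b in box_sizes:
        n = mask.shape[0] // b
        trimmed = mask[:n * b, :n * b]
        # a box is "occupied" if it contains at least one active pixel
        boxes = trimmed.reshape(n, b, n, b).any(axis=(1, 3))
        counts.append(boxes.sum())
    return np.array(counts)

box_sizes = np.array([1, 2, 4, 8, 16, 32, 64])
counts = box_count(mask, box_sizes)
# The fractal dimension is minus the slope of log(count) versus log(box size).
slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
print(f"estimated fractal dimension ~ {-slope:.2f}")
```

For a genuinely fractal pattern the log-log relation is close to linear over the analysed range of box sizes; for the uniform random placeholder above the estimate is near 2, i.e., a space-filling set.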
NASA Astrophysics Data System (ADS)
Starek, Dušan; Fuksi, Tomáš
2017-08-01
A part of the Upper Oligocene sand-rich turbidite systems of the Central Carpathian Basin is represented by the Zuberec Formation. Sand/mud-mixed deposits of this formation are well exposed in the northern part of the basin, allowing us to interpret the turbidite succession as terminal lobe deposits of a submarine fan. This interpretation is based on the discrimination of three facies associations that are comparable to different components of distributive lobe deposits in deep-water fan systems. They correspond to the lobe off-axis, lobe fringe and lobe distal fringe depositional subenvironments, respectively. The inferences about the depositional paleoenvironment based on sedimentological observations are verified by statistical analyses. The bed-thickness frequency distributions and vertical organization of the facies associations show cyclic trends at different hierarchical levels that enable us to reconstruct architectural elements of a turbidite fan. First, small-scale trends correspond to shifts in the lobe-element centroid between successive elements. Differences in the distribution and frequency of sandstone bed thicknesses, as well as differences in the shape of bed-thickness frequency distributions between individual facies associations, reflect a gradual fining and thinning in a down-dip direction. Second, meso-scale trends are identified within lobes, and they generally correspond to the significant periodicity identified by the time series analysis of the bed thicknesses. The meso-scale trends demonstrate shifts in the position of the lobe centroid within the lobe system. Both types of trends have the character of a compensational stacking pattern and could be linked to autogenic processes. Third, a large-scale trend, documented by a generally thickening-upward stacking pattern of beds and accompanied by a general increase of the sandstone/mudstone ratio and a gradual change in the percentages of individual facies, is comparable to the lobe-system scale. This trend probably indicates a gradual basinward progradation of the lobe system controlled by allogenic processes related to tectonic activity of the sources and sea-level fluctuations.
Power law scaling in synchronization of brain signals depends on cognitive load.
Tinker, Jesse; Velazquez, Jose Luis Perez
2014-01-01
As it has several features that optimize information processing, it has been proposed that criticality governs the dynamics of nervous system activity. Indications of such dynamics have been reported for a variety of in vitro and in vivo recordings, ranging from in vitro slice electrophysiology to human functional magnetic resonance imaging. However, there still remains considerable debate as to whether the brain actually operates close to criticality or in another governing state such as stochastic or oscillatory dynamics. A tool used to investigate the criticality of nervous system data is the inspection of power-law distributions. Although the findings are controversial, such power-law scaling has been found in different types of recordings. Here, we studied whether there is power-law scaling in the distribution of phase synchronization derived from magnetoencephalographic recordings during executive function tasks performed by children with and without autism. Characterizing the brain dynamics that differ between autistic and non-autistic individuals is important in order to find differences that could either aid diagnosis or provide insight into possible therapeutic interventions in autism. We report in this study that power-law scaling in the distributions of a phase synchrony index is not very common and its frequency of occurrence is similar in the control and the autism group. In addition, power-law scaling tends to diminish with increased cognitive load (difficulty or engagement in the task). There were indications of changes in the probability distribution functions for the phase synchrony that were associated with a transition from power-law scaling to lack of power law (or vice versa), which suggests the presence of phenomenological bifurcations in brain dynamics associated with cognitive load. Hence, brain dynamics may fluctuate between criticality and other regimes depending upon context and behaviors.
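A generic phase-locking value (PLV) between two band-limited signals, of the kind whose distribution one would then test for power-law scaling, can be computed as below. The sampling rate, frequency band, window length, and the choice of PLV itself are assumptions and may differ from the authors' synchrony index; the signals are synthetic stand-ins for MEG channels.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 600.0                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
x = rng.standard_normal(int(60 * fs))        # stand-ins for two MEG channels
y = 0.5 * x + rng.standard_normal(x.size)

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")   # alpha band (assumed)
phase_x = np.angle(hilbert(filtfilt(b, a, x)))
phase_y = np.angle(hilbert(filtfilt(b, a, y)))

win = int(1.0 * fs)                          # 1-s analysis windows
plv = [np.abs(np.mean(np.exp(1j * (phase_x[s:s + win] - phase_y[s:s + win]))))
       for s in range(0, x.size - win, win)]
# The distribution of these window-wise synchrony values is what one would test
# for power-law scaling, e.g. with a maximum-likelihood power-law fit.
print(f"{len(plv)} synchrony values, mean PLV ~ {np.mean(plv):.2f}")
```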
NASA Astrophysics Data System (ADS)
Gardner, W. P.
2017-12-01
A model which simulates tracer concentration in surface water as a function of the age distribution of groundwater discharge is used to characterize groundwater flow systems at a variety of spatial scales. We develop the theory behind the model and demonstrate its application in several groundwater systems of local to regional scale. A 1-D stream transport model, which includes advection, dispersion, gas exchange, first-order decay and groundwater inflow, is coupled to a lumped parameter model that calculates the concentration of environmental tracers in discharging groundwater as a function of the groundwater residence time distribution. The lumped parameters, which describe the residence time distribution, are allowed to vary spatially, and multiple environmental tracers can be simulated. This model allows us to calculate the longitudinal profile of tracer concentration in streams as a function of the spatially variable groundwater age distribution. By fitting model results to observations of stream chemistry and discharge, we can then estimate the spatial distribution of groundwater age. The volume of groundwater discharge to streams can be estimated using a subset of environmental tracers, applied tracers, synoptic stream gauging or other methods, and the age of groundwater then estimated using the previously calculated groundwater discharge and observed environmental tracer concentrations. Synoptic surveys of SF6, CFCs, 3H and 222Rn, along with measured stream discharge, are used to estimate the groundwater inflow distribution and mean age for regional-scale surveys of the Berland River in west-central Alberta. We find that groundwater entering the Berland has observable age, and that the age estimated using our stream survey is of similar order to that from the limited samples from groundwater wells in the region. Our results show that the stream can be used as an easily accessible location to constrain the regional-scale spatial distribution of groundwater age.
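A minimal steady-state sketch of such a coupled stream-groundwater tracer balance is given below: advection, gas exchange, first-order decay and distributed groundwater inflow, with the groundwater end-member computed from an exponential residence-time distribution. Dispersion is neglected and every parameter value is an illustrative assumption, not taken from the study above.

```python
import numpy as np

dx = 50.0                                    # spatial step (m)
x = np.arange(0.0, 20_000.0, dx)             # 20-km reach

q_gw = 5e-4 * (1 + np.sin(2 * np.pi * x / 10_000.0))   # lateral groundwater inflow (m^2/s)
tau = 20.0 + 0.002 * x                       # assumed mean groundwater age along the reach (yr)
lam = 0.05                                   # hypothetical tracer decay constant (1/yr)
c_in = 10.0                                  # tracer concentration in recharge (arbitrary units)
c_gw = c_in / (1.0 + lam * tau)              # exponential residence-time (lumped-parameter) model

k_gas, width, area = 2e-5, 10.0, 8.0         # gas-exchange velocity (m/s), width (m), x-section (m^2)
lam_s = lam / (365.25 * 86400.0)             # in-stream decay rate (1/s)

Q = np.empty_like(x); C = np.empty_like(x)
Q[0], C[0] = 5.0, 2.0                        # upstream discharge (m^3/s) and concentration
for i in range(len(x) - 1):
    Q[i + 1] = Q[i] + q_gw[i] * dx
    flux = (q_gw[i] * c_gw[i]                # groundwater inflow carries aged water
            - k_gas * width * C[i]           # loss to the atmosphere (equilibrium value taken as 0)
            - lam_s * area * C[i])           # first-order decay within the stream
    C[i + 1] = (Q[i] * C[i] + dx * flux) / Q[i + 1]

print(f"downstream: Q = {Q[-1]:.2f} m^3/s, C = {C[-1]:.3f}")
```

Fitting the simulated longitudinal profile C(x) to synoptic tracer observations is then a matter of adjusting the inflow and residence-time parameters, which is the inversion step the abstract describes.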
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1977-01-01
The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.
NASA Astrophysics Data System (ADS)
Huo, Chengyu; Huang, Xiaolin; Zhuang, Jianjun; Hou, Fengzhen; Ni, Huangjing; Ning, Xinbao
2013-09-01
The Poincaré plot is one of the most important approaches in human cardiac rhythm analysis. However, further investigations are still needed to concentrate on techniques that can characterize the dispersion of the points displayed by a Poincaré plot. Based on a modified Poincaré plot, we provide a novel measurement named distribution entropy (DE) and propose a quadrantal multi-scale distribution entropy analysis (QMDE) for the quantitative descriptions of the scatter distribution patterns in various regions and temporal scales. We apply this method to the heartbeat interval series derived from healthy subjects and congestive heart failure (CHF) sufferers, respectively, and find that the discriminations between them are most significant in the first quadrant, which implies significant impacts on vagal regulation brought about by CHF. We also investigate the day-night differences of young healthy people, and it is shown that the results present a clearly circadian rhythm, especially in the first quadrant. In addition, the multi-scale analysis indicates that the results of healthy subjects and CHF sufferers fluctuate in different trends with variation of the scale factor. The same phenomenon also appears in circadian rhythm investigations of young healthy subjects, which implies that the cardiac dynamic system is affected differently in various temporal scales by physiological or pathological factors.
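One plausible reading of the quadrantal multi-scale analysis can be sketched as follows: coarse-grain the RR-interval series, form the Poincaré plot, split it into quadrants about the mean, and compute a histogram-based Shannon entropy of the scatter in each quadrant. The binning and normalisation choices below are assumptions, not the authors' exact definition of DE, and the RR series is a synthetic stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
rr = 0.8 + 0.05 * rng.standard_normal(5000)          # stand-in RR intervals (s)

def quadrant_entropy(rr, scale=1, bins=20):
    # coarse-graining as in multi-scale entropy analysis
    n = len(rr) // scale
    cg = rr[:n * scale].reshape(n, scale).mean(axis=1)
    xs, ys = cg[:-1], cg[1:]                          # Poincaré plot coordinates (RR_n, RR_n+1)
    mx, my = xs.mean(), ys.mean()
    quadrants = {1: (xs >= mx) & (ys >= my), 2: (xs < mx) & (ys >= my),
                 3: (xs < mx) & (ys < my), 4: (xs >= mx) & (ys < my)}
    out = {}
    for q, mask in quadrants.items():
        hist, _, _ = np.histogram2d(xs[mask], ys[mask], bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        out[q] = -np.sum(p * np.log(p))               # Shannon entropy of the scatter
    return out

for s in (1, 2, 5, 10):
    print(s, quadrant_entropy(rr, scale=s))
```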
de Boer, Marijke N; Simmonds, Mark P; Reijnders, Peter J H; Aarts, Geert
2014-01-01
The influence of topographic and temporal variables on cetacean distribution at a fine-scale is still poorly understood. To study the spatial and temporal distribution of harbour porpoise Phocoena phocoena and the poorly known Risso's dolphin Grampus griseus we carried out land-based observations from Bardsey Island (Wales, UK) in summer (2001-2007). Using Kernel analysis and Generalized Additive Models it was shown that porpoises and Risso's appeared to be linked to topographic and dynamic cyclic variables with both species using different core areas (dolphins to the West and porpoises to the East off Bardsey). Depth, slope and aspect and a low variation in current speed (for Risso's) were important in explaining the patchy distributions for both species. The prime temporal conditions in these shallow coastal systems were related to the tidal cycle (Low Water Slack and the flood phase), lunar cycle (a few days following the neap tidal phase), diel cycle (afternoons) and seasonal cycle (peaking in August) but differed between species on a temporary but predictable basis. The measure of tidal stratification was shown to be important. Coastal waters generally show a stronger stratification particularly during neap tides upon which the phytoplankton biomass at the surface rises reaching its maximum about 2-3 days after neap tide. It appeared that porpoises occurred in those areas where stratification is maximised and Risso's preferred more mixed waters. This fine-scale study provided a temporal insight into spatial distribution of two species that single studies conducted over broader scales (tens or hundreds of kilometers) do not achieve. Understanding which topographic and cyclic variables drive the patchy distribution of porpoises and Risso's in a Headland/Island system may form the initial basis for identifying potentially critical habitats for these species.
A Cellular Automata Model for the Study of Landslides
NASA Astrophysics Data System (ADS)
Liucci, Luisa; Suteanu, Cristian; Melelli, Laura
2016-04-01
Power-law scaling has been observed in the frequency distribution of landslide sizes in many regions of the world, for landslides triggered by different factors, and in both multi-temporal and post-event datasets, thus indicating the universal character of this property of landslides and suggesting that the same mechanisms drive the dynamics of mass wasting processes. The reasons for the scaling behavior of landslide sizes are widely debated, since their understanding would improve our knowledge of the spatial and temporal evolution of this phenomenon. Self-Organized Critical (SOC) dynamics and the key role of topography have been suggested as possible explanations. The scaling exponent of the landslide size-frequency distribution defines the probability of landslide magnitudes and it thus represents an important parameter for hazard assessment. Therefore, another - still unanswered - important question concerns the factors on which its value depends. This paper investigates these issues using a Cellular Automata (CA) model. The CA uses a real topographic surface acquired from a Digital Elevation Model to represent the initial state of the system, where the states of cells are defined in terms of altitude. The stability criterion is based on the slope gradient. The system is driven to instability through a temporal decrease of the stability condition of cells, which may be thought of as representing the temporal weakening of soil caused by factors like rainfall. A transition rule defines the way in which instabilities lead to discharge from unstable cells to the neighboring cells, deciding upon the landslide direction and the quantity of mass involved. Both the direction and the transferred mass depend on the local topographic features. The scaling properties of the area-frequency distributions of the resulting landslide series are investigated for several rates of weakening and for different time windows, in order to explore the response of the system to model parameters, and its temporal behavior. Results show that the model reproduces the scaling behavior of real landslide areas; while the value of the scaling exponent is stable over time, it linearly decreases with increasing rate of weakening. This suggests that it is the intensity of the triggering mechanism rather than its duration that affects the probability of landslide magnitudes. A quantitative relationship between the scaling exponent of the area frequency distribution of the generated landslides, on one hand, and the changes regarding the topographic surface affected by landslides, on the other hand, is established. The fact that a similar behavior could be observed in real systems may have useful implications in the context of landslide hazard assessment. These results support the hypotheses that landslides are driven by SOC dynamics, and that topography plays a key role in the scaling properties of their size distribution.
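For illustration, a compact automaton in the same spirit (slope threshold on a DEM, gradual weakening, transfer toward the steepest-descent neighbour) is sketched below. The synthetic DEM, the weakening rate and the mass-transfer rule are simplified assumptions rather than the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(8)
N, cell = 64, 10.0                                    # grid size and cell width (m)
dem = np.linspace(0.0, 300.0, N)[:, None] + 5.0 * rng.random((N, N))   # tilted rough surface
NEIGH = ((-1, 0), (1, 0), (0, -1), (0, 1))

def relax(dem, threshold):
    """Fail every interior cell whose steepest slope exceeds the threshold; return the failure count."""
    area = 0
    active = [(i, j) for i in range(1, N - 1) for j in range(1, N - 1)]
    while active:
        nxt = set()
        for i, j in active:
            drops = [(dem[i, j] - dem[i + di, j + dj], di, dj) for di, dj in NEIGH]
            drop, di, dj = max(drops)
            if drop / cell > threshold:                  # unstable cell
                moved = 0.5 * (drop - threshold * cell)  # shed just enough to restore stability
                dem[i, j] -= moved
                dem[i + di, j + dj] += moved
                area += 1
                nxt.add((i, j))
                if 0 < i + di < N - 1 and 0 < j + dj < N - 1:
                    nxt.add((i + di, j + dj))
        active = list(nxt)
    return area

sizes, threshold = [], 1.0
for step in range(200):
    threshold *= 0.995                                   # temporal weakening (e.g., by rainfall)
    sizes.append(relax(dem, threshold))
print("largest landslide area (cells):", max(sizes))
```

The frequency distribution of the recorded event areas is the quantity whose scaling exponent the study tracks as the weakening rate is varied.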
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computational support for mission operations includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano
Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating with business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies to regulate real or reactive powers based on local voltage measurements, so that terminal voltages are within acceptable levels. These control strategies have the potential of operating at the same time scale as distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. Particularly, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in this case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics-interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless RES integration at large scale with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. The proposed controllers enable an update of the power outputs at a time scale that is compatible with the underlying dynamics of loads and ambient conditions, and continuously drive the system operation towards OPF-based solutions.
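A stylised sketch of the feedback idea follows: a linearised voltage-sensitivity model stands in for the AC power-flow approximation, and a regularised projected-gradient update stands in for the controller. All matrices, gains, and limits are illustrative assumptions, not the report's actual design.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5                                        # number of inverter nodes
R = 0.02 * (np.eye(n) + 0.3)                 # assumed dV/dq sensitivity matrix (p.u./p.u.)
v_ref, v0 = 1.0, 1.0
q = np.zeros(n)                              # reactive-power setpoints (p.u.)
alpha, rho = 0.5, 1e-2                       # gradient step size and regularisation weight

for k in range(200):
    load = 0.05 * rng.standard_normal(n)     # fast-varying, unmeasured disturbance
    v = v0 + R @ q - load                    # "measured" voltages from the (unknown) system
    # projected-gradient step on 0.5*||v - v_ref||^2 + 0.5*rho*||q||^2
    grad = R.T @ (v - v_ref) + rho * q
    q = np.clip(q - alpha * grad, -0.3, 0.3) # respect inverter reactive-power limits

print("final setpoints:", np.round(q, 3), "voltage error:", np.round(v - v_ref, 3))
```

The point of the pattern is that each update uses fresh measurements rather than a full network data collection, so the setpoints track the disturbance at the time scale of the loads instead of the time scale of a batch OPF solve.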
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real-time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
NASA Astrophysics Data System (ADS)
Kortenkamp, Stephen J.; Brock, Laci
2016-10-01
Scale model solar systems have been used for centuries to help educate young students and the public about the vastness of space and the relative sizes of objects. We have adapted the classic scale model solar system activity into a student-driven project for an undergraduate general education astronomy course at the University of Arizona. Students are challenged to construct and use their three dimensional models to demonstrate an understanding of numerous concepts in planetary science, including: 1) planetary obliquities, eccentricities, inclinations; 2) phases and eclipses; 3) planetary transits; 4) asteroid sizes, numbers, and distributions; 5) giant planet satellite and ring systems; 6) the Pluto system and Kuiper belt; 7) the extent of space travel by humans and robotic spacecraft; 8) the diversity of extrasolar planetary systems. Secondary objectives of the project allow students to develop better spatial reasoning skills and gain familiarity with technology such as Excel formulas, smart-phone photography, and audio/video editing.During our presentation we will distribute a formal description of the project and discuss our expectations of the students as well as present selected highlights from preliminary submissions.
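The scaled sizes and orbital distances that students need for such a model follow from a single scale factor. A short helper, using standard approximate astronomical values and an arbitrary 30-cm model Sun, might look like:

```python
# Standard approximate values; the 30-cm model Sun is an arbitrary classroom choice.
SUN_DIAMETER_KM = 1_391_000
AU_KM = 149_600_000

planets = {                      # (diameter in km, semi-major axis in AU)
    "Mercury": (4_879, 0.39), "Venus": (12_104, 0.72), "Earth": (12_756, 1.00),
    "Mars": (6_792, 1.52), "Jupiter": (142_984, 5.20), "Saturn": (120_536, 9.58),
    "Uranus": (51_118, 19.2), "Neptune": (49_528, 30.1),
}

model_sun_m = 0.30               # model Sun diameter in metres
scale = model_sun_m / (SUN_DIAMETER_KM * 1000)

for name, (d_km, a_au) in planets.items():
    d_mm = d_km * 1000 * scale * 1000          # model planet diameter in millimetres
    dist_m = a_au * AU_KM * 1000 * scale       # model orbital distance in metres
    print(f"{name:8s} diameter {d_mm:6.2f} mm at {dist_m:7.1f} m")
```

With a 30-cm Sun, Earth comes out at roughly 2.8 mm and sits about 32 m away, which is the kind of contrast the activity is designed to make tangible.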
Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. As a practical implementation, we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiment, we train our data model on SentiWordNet, a polarity-annotated sample of Princeton University's WordNet. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
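The distributed scoring step can be sketched with PySpark; the project above used Java, and the lexicon, tweet strings, and application name below are placeholders rather than the actual SentiWordNet data or a live Twitter feed.

```python
from pyspark import SparkContext

sc = SparkContext(appName="tweet-sentiment-sketch")

# A tiny polarity lexicon broadcast to the workers; SentiWordNet would supply these scores.
lexicon = sc.broadcast({"good": 0.6, "great": 0.8, "bad": -0.7, "awful": -0.9})
tweets = sc.parallelize([
    "the service was great",
    "what an awful delay",
    "good coffee, bad wifi",
])

def score(text):
    # sum the polarity of known words; unknown words contribute zero
    words = text.lower().split()
    return text, sum(lexicon.value.get(w, 0.0) for w in words)

for tweet, s in tweets.map(score).collect():
    print(f"{s:+.2f}  {tweet}")

sc.stop()
```

In a streaming deployment the static `parallelize` source would be replaced by a live feed (e.g., Spark's streaming APIs reading from a Twitter connector), with the same map-style scoring applied per micro-batch.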
Double density dynamics: realizing a joint distribution of a physical system and a parameter system
NASA Astrophysics Data System (ADS)
Fukuda, Ikuo; Moritsugu, Kei
2015-11-01
To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed ‘double density dynamics’ (DDD), which realizes an arbitrary distribution for both physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density relating to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that develops under nonequilibrium environment-describing superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution that is created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to address the theoretical possibility of our method, and a numerical example representing a 1D harmonic oscillator is provided to validate the method being applied to the temperature parameters.
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
How Much Higher Can HTCondor Fly?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fajardo, E. M.; Dost, J. M.; Holzman, B.
The HTCondor high throughput computing system is heavily used in the high energy physics (HEP) community as the batch system for several Worldwide LHC Computing Grid (WLCG) resources. Moreover, it is the backbone of GlideinWMS, the pilot system used by the computing organization of the Compact Muon Solenoid (CMS) experiment. To prepare for LHC Run 2, we probed the scalability limits of new versions and configurations of HTCondor with a goal of reaching 200,000 simultaneous running jobs in a single internationally distributed dynamic pool. In this paper, we first describe how we created an opportunistic distributed testbed capable of exercising runs with 200,000 simultaneous jobs without impacting production. This testbed methodology is appropriate not only for scale testing HTCondor, but potentially for many other services. In addition to the test conditions and the testbed topology, we include the suggested configuration options used to obtain the scaling results, and describe some of the changes to HTCondor inspired by our testing that enabled sustained operations at scales well beyond previous limits.
Scaling relations for watersheds
NASA Astrophysics Data System (ADS)
Fehr, E.; Kadau, D.; Araújo, N. A. M.; Andrade, J. S., Jr.; Herrmann, H. J.
2011-09-01
We study the morphology of watersheds in two and three dimensional systems subjected to different degrees of spatial correlations. The response of these objects to small, local perturbations is also investigated with extensive numerical simulations. We find the fractal dimension of the watersheds to generally decrease with the Hurst exponent, which quantifies the degree of spatial correlations. Moreover, in two dimensions, our results match the range of fractal dimensions 1.10≤df≤1.15 observed for natural landscapes. We report that the watershed is strongly affected by local perturbations. For perturbed two and three dimensional systems, we observe a power-law scaling behavior for the distribution of areas (volumes) enclosed by the original and the displaced watershed and for the distribution of distances between outlets. Finite-size effects are analyzed and the resulting scaling exponents are shown to depend significantly on the Hurst exponent. The intrinsic relation between watershed and invasion percolation, as well as relations between exponents conjectured in previous studies with two dimensional systems, are now confirmed by our results in three dimensions.
Validity Issues in Standard-Setting Studies
ERIC Educational Resources Information Center
Pant, Hans A.; Rupp, Andre A.; Tiffin-Richards, Simon P.; Koller, Olaf
2009-01-01
Standard-setting procedures are a key component within many large-scale educational assessment systems. They are consensual approaches in which committees of experts set cut-scores on continuous proficiency scales, which facilitate communication of proficiency distributions of students to a wide variety of stakeholders. This communicative function…
Lehtola, Markku J; Juhna, Tālis; Miettinen, Ilkka T; Vartiainen, Terttu; Martikainen, Pertti J
2004-12-01
The formation of biofilms in drinking water distribution networks is a significant technical, aesthetic and hygienic problem. In this study, the effects of assimilable organic carbon, microbially available phosphorus (MAP), residual chlorine, temperature and corrosion products on the formation of biofilms were studied in two full-scale water supply systems in Finland and Latvia. Biofilm collectors consisting of polyvinyl chloride pipes were installed in several waterworks and distribution networks, which were supplied with chemically precipitated surface waters and groundwater from different sources. During a 1-year study, the biofilm density was measured by heterotrophic plate counts on R2A-agar, acridine orange direct counting and ATP-analyses. A moderate level of residual chlorine decreased biofilm density, whereas an increase of MAP in water and accumulated cast iron corrosion products significantly increased biofilm density. This work confirms, in a full-scale distribution system in Finland and Latvia, our earlier in vitro finding that biofilm formation is affected by the availability of phosphorus in drinking water.
Self-organized criticality in asymmetric exclusion model with noise for freeway traffic
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-02-01
The one-dimensional asymmetric simple-exclusion model with open boundaries and parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. In the self-organized critical state, start-stop waves (or traffic jams) appear with various sizes (or lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^(-ν) f(s/L^ν). Also, the typical lifetime
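A minimal sketch of such an open-boundary exclusion process with parallel update and temporary stopping is given below; the injection and extraction probabilities and the stopping rule are illustrative assumptions, not necessarily the paper's exact prescription.

```python
import numpy as np

rng = np.random.default_rng(12)
L, steps = 200, 5000
p_in, p_out, p_stop = 0.3, 0.9, 0.1           # injection, extraction, and temporary-stopping probabilities
road = np.zeros(L, dtype=bool)                # True = site occupied by a car

for t in range(steps):
    moving = road & ~np.roll(road, -1)        # occupied sites whose right neighbour is empty
    moving[-1] = False                        # the last site is handled by the exit rule
    moving &= rng.random(L) > p_stop          # temporary stopping (random deceleration)
    new = road.copy()
    idx = np.where(moving)[0]
    new[idx] = False                          # parallel update: all movers hop simultaneously
    new[idx + 1] = True
    if new[-1] and rng.random() < p_out:      # open boundary: car leaves on the right
        new[-1] = False
    if not new[0] and rng.random() < p_in:    # open boundary: car injected on the left
        new[0] = True
    road = new

print("final density:", road.mean())
```

Recording the gaps between jammed regions over many such runs, for several system sizes L, is what allows a finite-size scaling collapse of the kind quoted above.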
Landscape-scale processes influence riparian plant composition along a regulated river
Palmquist, Emily C.; Ralston, Barbara; Merritt, David M.; Shafroth, Patrick B.
2018-01-01
Hierarchical frameworks are useful constructs when exploring landscape- and local-scale factors affecting patterns of vegetation in riparian areas. In drylands, which have steep environmental gradients and high habitat heterogeneity, landscape-scale variables, such as climate, can change rapidly along a river's course, affecting the relative influence of environmental variables at different scales. To assess how landscape-scale factors change the structure of riparian vegetation, we measured riparian vegetation composition along the Colorado River through Grand Canyon, determined which factors best explain observed changes, identified how richness and functional diversity vary, and described the implications of our results for river management. Cluster analysis identified three divergent floristic groups that are distributed longitudinally along the river. These groups were distributed along gradients of elevation, temperature and seasonal precipitation, but were not associated with annual precipitation or local-scale factors. Species richness and functional diversity decreased as a function of distance downstream showing that changing landscape-scale factors result in changes to ecosystem characteristics. Species composition and distribution remain closely linked to seasonal precipitation and temperature. These patterns in floristic composition in a semiarid system inform management and provide insights into potential future changes as a result of shifts in climate and changes in flow management.
Toward a global multi-scale heliophysics observatory
NASA Astrophysics Data System (ADS)
Semeter, J. L.
2017-12-01
We live within the only known stellar-planetary system that supports life. What we learn about this system is not only relevant to human society and its expanding reach beyond Earth's surface, but also to our understanding of the origins and evolution of life in the universe. Heliophysics is focused on solar-terrestrial interactions mediated by the magnetic and plasma environment surrounding the planet. A defining feature of energy flow through this environment is interaction across physical scales. A solar disturbance aimed at Earth can excite geospace variability on scales ranging from thousands of kilometers (e.g., global convection, region 1 and 2 currents, electrojet intensifications) to 10's of meters (e.g., equatorial spread-F, dispersive Alfven waves, plasma instabilities). Most "geospace observatory" concepts are focused on a single modality (e.g., HF/UHF radar, magnetometer, optical) providing a limited parameter set over a particular spatiotemporal resolution. Data assimilation methods have been developed to couple heterogeneous and distributed observations, but resolution has typically been prescribed a-priori and according to physical assumptions. This paper develops a conceptual framework for the next generation multi-scale heliophysics observatory, capable of revealing and quantifying the complete spectrum of cross-scale interactions occurring globally within the geospace system. The envisioned concept leverages existing assets, enlists citizen scientists, and exploits low-cost access to the geospace environment. Examples are presented where distributed multi-scale observations have resulted in substantial new insight into the inner workings of our stellar-planetary system.
Benford's law gives better scaling exponents in phase transitions of quantum XY models.
Rane, Ameya Deepak; Mishra, Utkarsh; Biswas, Anindya; Sen De, Aditi; Sen, Ujjwal
2014-08-01
Benford's law is an empirical law predicting the distribution of the first significant digits of numbers obtained from natural phenomena and mathematical tables. It has been found to be applicable for numbers coming from a plethora of sources, varying from seismographic, biological, financial, to astronomical. We apply this law to analyze the data obtained from physical many-body systems described by the one-dimensional anisotropic quantum XY models in a transverse magnetic field. We detect the zero-temperature quantum phase transition and find that our method gives better finite-size scaling exponents for the critical point than many other known scaling exponents using measurable quantities like magnetization, entanglement, and quantum discord. We extend our analysis to the same system but at finite temperature and find that it also detects the finite-temperature phase transition in the model. Moreover, we compare the Benford distribution analysis with the same obtained from the uniform and Poisson distributions. The analysis is furthermore important in that the high-precision detection of the cooperative physical phenomena is possible even from low-precision experimental data.
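As a concrete illustration of the first-digit test, the sketch below compares the empirical leading-digit frequencies of a synthetic, scale-spanning data set with the Benford prediction P(d) = log10(1 + 1/d); it is a generic check, not the quantum XY-model analysis itself.

```python
import numpy as np

# Synthetic positive numbers spanning several orders of magnitude (placeholder data).
rng = np.random.default_rng(13)
data = np.exp(rng.uniform(0, 10, 100_000))

first_digit = np.array([int(f"{v:e}"[0]) for v in data])   # leading significant digit
empirical = np.array([(first_digit == d).mean() for d in range(1, 10)])
benford = np.log10(1 + 1 / np.arange(1, 10))

for d, e, b in zip(range(1, 10), empirical, benford):
    print(f"digit {d}: empirical {e:.3f}  Benford {b:.3f}")
```

In the scaling analysis described above, the quantity of interest is how the deviation between the empirical and Benford frequencies behaves as the system is tuned across the critical point.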
Generic finite size scaling for discontinuous nonequilibrium phase transitions into absorbing states
NASA Astrophysics Data System (ADS)
de Oliveira, M. M.; da Luz, M. G. E.; Fiore, C. E.
2015-12-01
Based on quasistationary distribution ideas, a general finite size scaling theory is proposed for discontinuous nonequilibrium phase transitions into absorbing states. Analogously to the equilibrium case, we show that quantities such as response functions, cumulants, and equal area probability distributions all scale with the volume, thus allowing proper estimates for the thermodynamic limit. To illustrate these results, five very distinct lattice models displaying nonequilibrium transitions—to single and infinitely many absorbing states—are investigated. The innate difficulties in analyzing absorbing phase transitions are circumvented through quasistationary simulation methods. Our findings (allied to numerical studies in the literature) strongly point to a unifying discontinuous phase transition scaling behavior for equilibrium and this important class of nonequilibrium systems.
2012-03-09
[Fragmentary entry: slide excerpts on designing materials and structures across scales for engineered systems, referencing the ODISSEI origami engineering program (Origami Design for Integration of Self-assembling Systems), international materials initiatives, and a materials-and-processing community of interest. Distribution Statement A.]
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
NASA Astrophysics Data System (ADS)
Giaconia, Alberto; Montagnino, Fabio; Paredes, Filippo; Donato, Filippo; Caputo, Giampaolo; Mazzei, Domenico
2017-06-01
CSP technologies can be applied for distributed energy production in small-to-medium plants (on the 1 MW scale) to satisfy the needs of local communities, buildings and districts. In this perspective, reliable, low-cost, and flexible small/medium multi-generative CSP plants should be developed. Four pilot plants have been built in four Mediterranean countries (Cyprus, Egypt, Jordan, and Italy) to demonstrate the approach. In this paper, the plant built in Italy is presented, with specific innovations applied in the linear Fresnel collector design and the Thermal Energy Storage (TES) system, which is based on the use of molten salts but specifically tailored for small-scale plants.
Ecosystem variability in the offshore northeastern Chukchi Sea
NASA Astrophysics Data System (ADS)
Blanchard, Arny L.; Day, Robert H.; Gall, Adrian E.; Aerts, Lisanne A. M.; Delarue, Julien; Dobbins, Elizabeth L.; Hopcroft, Russell R.; Questel, Jennifer M.; Weingartner, Thomas J.; Wisdom, Sheyna S.
2017-12-01
Understanding influences of cumulative effects from multiple stressors in marine ecosystems requires an understanding of the sources for and scales of variability. A multidisciplinary ecosystem study in the offshore northeastern Chukchi Sea during 2008-2013 investigated the variability of the study area's two adjacent sub-ecosystems: a pelagic system influenced by interannual and/or seasonal temporal variation at large, oceanographic (regional) scales, and a benthic-associated system more influenced by small-scale spatial variations. Variability in zooplankton communities reflected interannual oceanographic differences in waters advected northward from the Bering Sea, whereas variation in benthic communities was associated with seafloor and bottom-water characteristics. Variations in the planktivorous seabird community were correlated with prey distributions, whereas interaction effects in ANOVA for walruses were related to declines of sea-ice. Long-term shifts in seabird distributions were also related to changes in sea-ice distributions that led to more open water. Although characteristics of the lower trophic-level animals within sub-ecosystems result from oceanographic variations and interactions with seafloor topography, distributions of apex predators were related to sea-ice as a feeding platform (walruses) or to its absence (i.e., open water) for feeding (seabirds). The stability of prey resources appears to be a key factor in mediating predator interactions with other ocean characteristics. Seabirds reliant on highly-variable zooplankton prey show long-term changes as open water increases, whereas walruses taking benthic prey in biomass hotspots respond to sea-ice changes in the short-term. A better understanding of how variability scales up from prey to predators and how prey resource stability (including how critical prey respond to environmental changes over space and time) might be altered by climate and anthropogenic stressors is essential to predicting the future state of both the Chukchi and other arctic systems.
Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc
2016-01-01
The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities at the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
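A minimal simulation sketch of the critical case discussed above, assuming a geometric offspring distribution on {0, 1, 2, ...} with mean one (p = 0.5); it tabulates total progeny sizes, the quantity whose finite-size behavior the scaling law describes. The cap on tree size is an artificial truncation for illustration only.

```python
import random
from collections import Counter

def geometric(p):
    """Number of failures before the first success: support 0, 1, 2, ..., mean (1-p)/p."""
    k = 0
    while random.random() > p:
        k += 1
    return k

def total_progeny(p=0.5, cap=10_000):
    """Total progeny of a single Galton-Watson tree with Geometric(p) offspring.
    p = 0.5 gives mean offspring 1 (the critical case); `cap` truncates the
    occasional huge tree so the loop always terminates."""
    alive, total = 1, 1
    while alive > 0 and total < cap:
        alive = sum(geometric(p) for _ in range(alive))
        total += alive
    return total

random.seed(0)
sizes = [total_progeny() for _ in range(5000)]
tail = sum(s >= 100 for s in sizes) / len(sizes)
print("P(total progeny >= 100) ~", tail)   # heavy tail expected at criticality
print(Counter(min(s, 10) for s in sizes))  # coarse histogram of the small sizes
```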
Transdisciplinary Application of Cross-Scale Resilience ...
The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems. Comparative analyses of complex systems have, in fact, demonstrated commonalities among distinctly different types of systems (Schneider & Kay 1994; Holling 2001; Lansing 2003; Foster 2005; Bullmore et al. 2009). Both biological and non-biological complex systems appear t
Efficient High Performance Collective Communication for Distributed Memory Environments
ERIC Educational Resources Information Center
Ali, Qasim
2009-01-01
Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
Building hydrologic information systems to promote climate resilience in the Blue Nile/Abay highlands
USDA-ARS?s Scientific Manuscript database
Climate adaptation requires information about climate and land-surface conditions – spatially distributed, and at scales of human influence (the field scale). This article describes a project aimed at combining meteorological data, satellite remote sensing, hydrologic modeling, and downscaled clima...
Dehghani, Nima; Hatsopoulos, Nicholas G.; Haga, Zach D.; Parker, Rebecca A.; Greger, Bradley; Halgren, Eric; Cash, Sydney S.; Destexhe, Alain
2012-01-01
Self-organized critical states are found in many natural systems, from earthquakes to forest fires; they have also been observed in neural systems, particularly in neuronal cultures. However, the presence of critical states in the awake brain remains controversial. Here, we compared avalanche analyses performed on different in vivo preparations during wakefulness, slow-wave sleep, and REM sleep, using high-density electrode arrays in cat motor cortex (96 electrodes), monkey motor cortex and premotor cortex, and human temporal cortex (96 electrodes) in epileptic patients. In neuronal avalanches defined from units (up to 160 single units), the size of avalanches never clearly scaled as a power law, but rather scaled exponentially or displayed intermediate scaling. We also analyzed the dynamics of local field potentials (LFPs) and in particular LFP negative peaks (nLFPs) among the different electrodes (up to 96 sites in temporal cortex or up to 128 sites in adjacent motor and premotor cortices). In this case, the avalanches defined from nLFPs displayed power-law scaling in double-logarithmic representations, as reported previously in monkey. However, avalanches defined from positive LFP (pLFP) peaks, which are less directly related to neuronal firing, also displayed apparent power-law scaling. Closer examination of this scaling using the more reliable cumulative distribution function (CDF) and other rigorous statistical measures did not confirm power-law scaling. The same pattern was seen for cats, monkeys, and humans, as well as for different brain states of wakefulness and sleep. We also tested other alternative distributions. Multiple exponential fitting yielded optimal fits of the avalanche dynamics with bi-exponential distributions. Collectively, these results show no clear evidence for power-law scaling or self-organized critical states in the awake and sleeping brain of mammals, from cat to man. PMID:22934053
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to the power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system consistent with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution and also the insulation thickness distribution associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of diseconomies of scale.
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as Client-Server applications. In this framework the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose Multi-layer System.
Service Discovery Oriented Management System Construction Method
NASA Astrophysics Data System (ADS)
Li, Huawei; Ren, Ying
2017-10-01
To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for a distributed, service-discovery-oriented management system. Three measurement functions are proposed to compute nearest-neighbour user similarity at different levels. In view of the low efficiency of current service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.
Assessment of distributed solar power systems: Issues and impacts
NASA Astrophysics Data System (ADS)
Moyle, R. A.; Chernoff, H.; Schweizer, T. C.; Patton, J. B.
1982-11-01
The installation of distributed solar-power systems presents electric utilities with a host of questions. Some of the technical and economic impacts of these systems are discussed. Among the technical interconnect issues are isolated operation, power quality, line safety, and metering options. Economic issues include user purchase criteria, structures and installation costs, marketing and product distribution costs, and interconnect costs. An interactive computer program that allows easy calculation of allowable system prices and allowable generation-equipment prices was developed as part of this project. It is concluded that the technical problems raised by distributed solar systems are surmountable, but their resolution may be costly. The stringent purchase criteria likely to be imposed by many potential system users and the economies of large-scale systems make small systems (less than 10 to 20 kW) less attractive than larger systems. Utilities that consider life-cycle costs in making investment decisions and third-party investors who have tax and financial advantages are likely to place the highest value on solar-power systems.
Supply-demand balance in outward-directed networks and Kleiber's law
Painter, Page R
2005-01-01
Background: Recent theories have attempted to derive the value of the exponent α in the allometric formula for scaling of basal metabolic rate from the properties of distribution network models for arteries and capillaries. It has recently been stated that a basic theorem relating the sum of nutrient currents to the specific nutrient uptake rate, together with a relationship claimed to be required in order to match nutrient supply to nutrient demand in 3-dimensional outward-directed networks, leads to Kleiber's law (b = 3/4). Methods: The validity of the supply-demand matching principle and the assumptions required to prove the basic theorem are assessed. The supply-demand principle is evaluated by examining the supply term and the demand term in outward-directed lattice models of nutrient and water distribution systems and by applying the principle to fractal-like models of mammalian arterial systems. Results: Application of the supply-demand principle to bifurcating fractal-like networks that are outward-directed does not predict 3/4-power scaling, and evaluation of water distribution system models shows that the matching principle does not match supply to demand in such systems. Furthermore, proof of the basic theorem is shown to require that the covariance of nutrient uptake and current path length is 0, an assumption unlikely to be true in mammalian arterial systems. Conclusion: The supply-demand matching principle does not lead to a satisfactory explanation for the approximately 3/4-power scaling of mammalian basal metabolic rate. PMID:16283939
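For reference, the allometric form under discussion relates basal metabolic rate B to body mass M; Kleiber's law corresponds to the exponent b = 3/4 (the prefactor B_0 is taxon-dependent and is not specified in the abstract):

```latex
B = B_0 \, M^{b}, \qquad b \approx \tfrac{3}{4} \quad \text{(Kleiber's law)}
```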
White Paper on Dish Stirling Technology: Path Toward Commercial Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andraka, Charles E.; Stechel, Ellen; Becker, Peter
2016-07-01
Dish Stirling energy systems have been developed for distributed and large-scale utility deployment. This report summarizes the state of the technology in a joint project between Stirling Energy Systems, Sandia National Laboratories, and the Department of Energy in 2011. It then lays out a feasible path to large scale deployment, including development needs and anticipated cost reduction paths that will make a viable deployment product.
Sucurovic, Snezana; Milutinovic, Veljko
2008-01-01
Internet-based distributed large-scale information systems implement attribute-based access control (ABAC) rather than Role-Based Access Control (RBAC). The reason is that the Internet is identity-less and that ABAC scales better. The eXtensible Access Control Markup Language (XACML) is a standardized language for writing access control policies, access control requests and access control responses in ABAC. XACML can provide decentralized administration and credentials distribution. In the year 2002 version of CEN ENV 13 606, attributes have been attached to EHCR components, and in such a system ABAC and XACML have been easy to implement. This paper presents the writing of XACML policies in the case when attributes are in a hierarchical structure. Two possible solutions for writing an XACML policy in that case are presented; the solution using set functions is more compact and provides 10% better performance.
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large-scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Exploring the effect of power law social popularity on language evolution.
Gong, Tao; Shuai, Lan
2014-01-01
We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.
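As a rough illustration of the kind of social popularity described above, the sketch below assigns agents a rank-based popularity p_i proportional to i^(-λ) with λ = 1.0 and samples interaction partners from it; the agent count, number of interactions and sampling scheme are arbitrary choices for illustration, not the artificial-life models used in the study.

```python
import numpy as np

def powerlaw_popularity(n_agents=1000, lam=1.0, n_interactions=100_000, seed=0):
    """Rank-based popularity p_i ~ i**(-lam); sample which agent is picked as a
    partner in each interaction. lam ~ 1.0 is the regime the study reports as
    optimal for the emergence and diffusion of linguistic conventions."""
    rng = np.random.default_rng(seed)
    ranks = np.arange(1, n_agents + 1)
    p = ranks ** (-lam)
    p /= p.sum()
    partners = rng.choice(n_agents, size=n_interactions, p=p)
    return np.bincount(partners, minlength=n_agents)

counts = powerlaw_popularity()
print(counts[:5], counts[-5:])   # popular head vs long tail of rarely chosen agents
```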
Dense power-law networks and simplicial complexes
NASA Astrophysics Data System (ADS)
Courtney, Owen T.; Bianconi, Ginestra
2018-05-01
There is increasing evidence that dense networks occur in on-line social networks, recommendation networks and in the brain. In addition to being dense, these networks are often also scale-free, i.e., their degree distributions follow P(k) ∝ k^(-γ) with γ ∈ (1, 2]. Models of growing networks have been successfully employed to produce scale-free networks using preferential attachment; however, these models can only produce sparse networks, as the number of links and nodes added at each time step is constant. Here we present a modeling framework which produces networks that are both dense and scale-free. The mechanism by which the networks grow in this model is based on the Pitman-Yor process. Variations on the model are able to produce undirected scale-free networks with exponent γ = 2 or directed networks with power-law out-degree distribution with tunable exponent γ ∈ (1, 2). We also extend the model to that of directed two-dimensional simplicial complexes. Simplicial complexes are a generalization of networks that can encode the many-body interactions between the parts of a complex system and as such are becoming increasingly popular for characterizing different data sets ranging from social interacting systems to the brain. Our model produces dense directed simplicial complexes with power-law distribution of the generalized out-degrees of the nodes.
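The growth mechanism named above can be illustrated with a two-parameter Chinese-restaurant (Pitman-Yor) construction: each new link endpoint either attaches to an existing node with weight (count - discount) or creates a new node with weight (strength + discount × number of nodes). This is a generic sketch with arbitrary parameter values, not the paper's exact network or simplicial-complex model; with 0 < discount < 1 the endpoint counts are heavy-tailed, with exponent roughly 1 + discount.

```python
import random

def pitman_yor_endpoints(n_draws=20_000, discount=0.5, strength=1.0, seed=1):
    """Sequentially draw link endpoints from a Pitman-Yor (two-parameter
    Chinese restaurant) process. Existing node k is chosen with weight
    (counts[k] - discount); a brand-new node with weight
    (strength + discount * number_of_nodes). Returns the endpoint count
    ('degree') of every node created."""
    random.seed(seed)
    counts = [1]          # one node holding one endpoint to start
    n = 1                 # total endpoints drawn so far
    for _ in range(n_draws - 1):
        new_weight = strength + discount * len(counts)
        r = random.uniform(0.0, n + strength)   # total weight = n + strength
        if r < new_weight:
            counts.append(1)                    # a new node enters the network
        else:
            r -= new_weight
            for k, c in enumerate(counts):      # linear scan is fine for a sketch
                r -= c - discount
                if r <= 0.0:
                    counts[k] += 1
                    break
        n += 1
    return counts

degrees = pitman_yor_endpoints()
print(len(degrees), "nodes;", "max degree:", max(degrees))  # a few heavy hubs
```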
Global Night-Time Lights for Observing Human Activity
NASA Technical Reports Server (NTRS)
Hipskind, Stephen R.; Elvidge, Chris; Gurney, K.; Imhoff, Mark; Bounoua, Lahouari; Sheffner, Edwin; Nemani, Ramakrishna R.; Pettit, Donald R.; Fischer, Marc
2011-01-01
We present a concept for a small satellite mission to make systematic, global observations of night-time lights with spatial resolution suitable for discerning the extent, type and density of human settlements. The observations will also allow better understanding of the fine-scale distribution of fossil fuel CO2 emissions. The NASA Earth Science Decadal Survey recommends more focus on direct observations of human influence on the Earth system. The most dramatic and compelling observations of human presence on the Earth are the night light observations taken by the Defense Meteorological Satellite Program (DMSP) Operational Linescan System (OLS). Beyond delineating the footprint of human presence, night light data, when assembled and evaluated with complementary data sets, can determine the fine-scale spatial distribution of global fossil fuel CO2 emissions. Understanding fossil fuel carbon emissions is critical to understanding the entire carbon cycle, and especially the carbon exchange between terrestrial and oceanic systems.
Diagnostic modeling of trace metal partitioning in south San Francisco Bay
Wood, T. W.; Baptista, A. M.; Kuwabara, J.S.; Flegal, A.R.
1995-01-01
The numerical results indicate that aqueous speciation will control basin-scale spatial variations in the apparent distribution coefficient, Kda, if the system is close to equilibrium. However, basin-scale spatial variations in Kda are determined by the location of the sources of metal and the suspended solids concentration of the receiving water if the system is far from equilibrium. The overall spatial variability in Kda also increases as the system moves away from equilibrium.
NASA Astrophysics Data System (ADS)
Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.
2016-04-01
We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.
NASA Astrophysics Data System (ADS)
Tang, Y. B.; Li, M.; Bernabe, Y.
2014-12-01
We modeled the electrical transport behavior of dual-pore carbonate rocks in this paper. Based on experimental data from a carbonate reservoir in China, we simply considered the low-porosity samples equivalent to the matrix (micro-pore system) of the high-porosity samples. For modeling the bimodal porous media, we considered that the matrix is homogeneous and interconnected. The connectivity and the pore size distribution of the macro-pore system are varied randomly. Both pore systems are supposed to act electrically in parallel, connected at the nodes, where the fluid exchange takes place, an approach previously used by Bauer et al. (2012). Then, the effect of the properties of the matrix, and of the pore size distribution and connectivity of the macro-pore system, on the petrophysical properties of carbonates can be investigated. We simulated electrical current through networks in three-dimensional simple cubic (SC) and body-centered cubic (BCC) lattices with different coordination numbers and different pipe radius distributions of the macro-pore system. Based on the simulation results, we found that the formation factor obeys a "universal" scaling relationship (i.e. independent of lattice type), 1/F ∝ e^(γz), where γ is a function of the normalized standard deviation of the pore radius distribution of the macro-pore system and z is the coordination number of the macro-pore system. This relationship is different from the classic "universal power law" in percolation theory. A formation factor model was inferred on the basis of the scaling relationship mentioned above and several scale-invariant quantities (such as the hydraulic radius rH and throat length l of the macro-pores). Several methods were developed to estimate the corresponding parameters of the new model from conventional core analyses. It was satisfactorily tested against experimental data, including some published experimental data. Furthermore, the relationship between water saturation and resistivity in dual-pore carbonates was discussed based on the new model.
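A brief sketch of how the exponential scaling relation quoted above, 1/F ∝ e^(γz), can be recovered by log-linear least squares; the (z, F) values below are synthetic stand-ins with assumed γ and prefactor, not the paper's network-simulation results.

```python
import numpy as np

# Fit 1/F = A * exp(gamma * z) from (z, F) pairs via log-linear least squares.
rng = np.random.default_rng(0)
z = np.array([4.0, 5.0, 6.0, 7.0, 8.0])        # macro-pore coordination numbers (assumed)
gamma_true, A_true = 0.35, 2.0e-3              # illustrative values only
inv_F = A_true * np.exp(gamma_true * z) * rng.lognormal(0.0, 0.05, z.size)

slope, intercept = np.polyfit(z, np.log(inv_F), 1)   # log(1/F) = log(A) + gamma * z
print("gamma ~", round(slope, 3), " A ~", float(np.exp(intercept)))
```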
NASA Astrophysics Data System (ADS)
Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.
2013-09-01
A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
BIOLOGICAL NITRIFICATION IN A FULL-SCALE AND PILOT-SCALE IRON REMOVAL DRINKING WATER TREATMENT PLANT
Ammonia in source waters can cause water treatment and distribution system problems, many of which are associated with biological nitrification. Therefore, in some cases, the removal of ammonia from water is desirable. Biological oxidation of ammonia to nitrite and nitrate (nitr...
NASA Astrophysics Data System (ADS)
Illangasekare, T. H.; Sakaki, T.; Smits, K. M.; Limsuwat, A.; Terrés-Nícoli, J. M.
2008-12-01
Understanding the dynamics of soil moisture distribution near the ground surface is of interest in various applications involving land-atmospheric interaction, evaporation from soils, CO2 leakage from carbon sequestration, vapor intrusion into buildings, and land mine detection. Natural soil heterogeneity in combination with water and energy fluxes at the soil surface creates complex spatial and temporal distributions of soil moisture. Even though considerable knowledge exists on how soil moisture conditions change in response to flux and energy boundary conditions, emerging problems involving land atmospheric interactions require the quantification of soil moisture variability both at high spatial and temporal resolutions. The issue of up-scaling becomes critical in all applications, as in general, field measurements are taken at sparsely distributed spatial locations that require assimilation with measurements taken using remote sensing technologies. It is our contention that the knowledge that will contribute to both improving our understanding of the fundamental processes and practical problem solution cannot be obtained easily in the field due to a number of constraints. One of these basic constraints is the inability to make measurements at very fine spatial scales at high temporal resolutions in naturally heterogeneous field systems. Also, as the natural boundary conditions at the land/atmospheric interface are not controllable in the field, even in pilot scale studies, the developed theories and tools cannot be validated for the diversity of conditions that could be expected in the field. Intermediate scale testing using soil tanks packed to represent different heterogeneous test configurations provides an attractive and cost effective alternative to investigate a class of problems involving the shallow unsaturated zone. In this presentation, we will discuss the advantages and limitations of studies conducted in both two and three dimensional intermediate scale test systems together with instrumentation and measuring techniques. The features and capabilities of a new coupled porous media/climate wind tunnel test system that allows for the study of near surface unsaturated soil moisture conditions under climate boundary conditions will also be presented with the goal of exploring opportunities to use such a facility to study some of the multi-scale problems in the near surface unsaturated zone.
Factors Impacting Spatial Patterns of Snow Distribution in a Small Catchment near Nome, AK
NASA Astrophysics Data System (ADS)
Chen, M.; Wilson, C. J.; Charsley-Groffman, L.; Busey, R.; Bolton, W. R.
2017-12-01
Snow cover plays an important role in the climate, hydrology and ecological systems of the Arctic due to its influence on the water balance, thermal regimes, vegetation and carbon flux. Thus, snow depth and coverage have been key components in all earth system models but are often poorly represented for arctic regions, where fine-scale snow distribution data are sparse. The snow data currently used in the models are at coarse resolution, which in turn leads to high uncertainty in model predictions. Through the DOE Office of Science Next Generation Ecosystem Experiment, NGEE-Arctic, high-resolution snow distribution data are being developed and applied in catchment-scale models to ultimately improve the representation of snow and its interactions with other model components in the earth system models. To improve these models, it is important to identify key factors that control snow distribution and quantify the impacts of those factors on snow distribution. In this study, two intensive snow depth surveys (1 to 10 meter scale) were conducted for a 2.3 km2 catchment on the Teller road, near Nome, AK, in the winters of 2016 and 2017. We used a statistical model to quantify the impacts of vegetation types, macro-topography, micro-topography, and meteorological parameters on measured snow depth. The results show that the snow spatial distribution was similar between 2016 and 2017; snow depth was spatially autocorrelated over small distances (2-5 meters), but not over larger distances (more than 2-5 meters). The coefficients of variation of snow depth were above 0.3 for all the snow survey transects (500-800 meters long). Variation of snow depth is governed by vegetation height, aspect, slope, surface curvature, elevation and wind speed and direction. We expect that this empirical statistical model can be used to estimate end-of-winter snow depth for the whole watershed and will be further developed using data from other arctic regions to estimate seasonally dynamic snow coverage and properties for use in catchment-scale to pan-Arctic models.
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; Li, P. Peggy
2012-01-01
A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme then allows effective constraint of large scales and model bias through assimilating sparse vertical profiles, and of small scales through assimilating high-resolution surface measurements. This MS-3DVAR enhances the capability of traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and particularly maximizes the extraction of information from limited numbers of vertical profile observations.
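The scale-decomposed cost function described above can be written, in generic incremental-3DVAR form, as the sum of a large-scale and a small-scale term; the symbols below are the standard ones (background covariance B, observation error covariance R, observation operator H) and are only a schematic of how such a split is usually expressed, not the system's exact formulation:

```latex
J = J_L + J_S, \qquad
J_s(\mathbf{x}_s) = \tfrac{1}{2}\,(\mathbf{x}_s - \mathbf{x}^b_s)^{\mathsf T}\mathbf{B}_s^{-1}(\mathbf{x}_s - \mathbf{x}^b_s)
 + \tfrac{1}{2}\,(\mathbf{y}_s - H_s\mathbf{x}_s)^{\mathsf T}\mathbf{R}_s^{-1}(\mathbf{y}_s - H_s\mathbf{x}_s),
 \quad s \in \{L, S\}
```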
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System of systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
NASA Astrophysics Data System (ADS)
Peng, Yong; Li, Hongqiang; Shen, Chunlong; Guo, Shun; Zhou, Qi; Wang, Kehong
2017-06-01
The power density distribution of electron beam welding (EBW) is a key indicator of beam quality. A beam quality test system was designed to measure the actual beam power density distribution of high-voltage EBW. After analysing the characteristics of, and the phase relationship between, the deflection control signal and the acquisition signal, a Post-Trigger mode was proposed for signal acquisition, with the control signal and the sampling clock sharing the same external clock source. The power density distribution of the beam cross-section was reconstructed from the one-dimensional signal, which was processed by median filtering, two rounds of signal segmentation, and spatial scale calibration. The diameter of the beam cross-section was defined by the amplitude method and by the integral method. The measured diameter under the integral definition is larger than that under the amplitude definition, but for the ideal distribution the former is smaller than the latter. The measured distribution is asymmetric and less concentrated than a Gaussian distribution.
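The two diameter definitions mentioned above can be illustrated on a one-dimensional profile: an amplitude-threshold width (the fraction of the peak used here, 0.5, is an assumed value, since the paper's threshold is not given) and a second-moment (4σ) integral width. The Gaussian test profile is synthetic.

```python
import numpy as np

def beam_diameters(x, profile, level=0.5):
    """Amplitude definition: width of the region where the profile exceeds
    `level` * peak (level=0.5 would be the FWHM). Integral definition: 4-sigma
    width from the second moment of the profile treated as a density."""
    p = np.clip(profile, 0.0, None)
    above = x[p >= level * p.max()]
    d_amplitude = above.max() - above.min() if above.size else 0.0
    w = p / p.sum()
    mean = np.sum(w * x)
    d_integral = 4.0 * np.sqrt(np.sum(w * (x - mean) ** 2))
    return d_amplitude, d_integral

x = np.linspace(-3.0, 3.0, 601)                 # mm, synthetic axis
ideal = np.exp(-x**2 / (2 * 0.5**2))            # ideal Gaussian beam, sigma = 0.5 mm
print(beam_diameters(x, ideal))                 # compare the two definitions
```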
NASA Astrophysics Data System (ADS)
Browne, Joshua B.
Anthropogenic greenhouse gas emissions (GHG) contribute to global warming, and must be mitigated. With GHG mitigation as an overarching goal, this research aims to study the potential for newfound and abundant sources of natural gas to play a role as part of a GHG mitigation strategy. However, recent work suggests that methane leakage in the current natural gas system may inhibit end-use natural gas as a robust mitigation strategy, but that natural gas as a feedstock for other forms of energy, such as electricity generation or liquid fuels, may support natural-gas based mitigation efforts. Flaring of uneconomic natural gas, or outright loss of natural gas to the atmosphere results in greenhouse gas emissions that could be avoided and which today are very large in aggregate. A central part of this study is to look at a new technology for converting natural gas into methanol at a unit scale that is matched to the size of individual natural gas wells. The goal is to convert stranded or otherwise flared natural gas into a commercially valuable product and thereby avoid any unnecessary emission to the atmosphere. A major part of this study is to contribute to the development of a novel approach for converting natural gas into methanol and to assess the environmental impact (for better or for worse) of this new technology. This Ph. D. research contributes to the development of such a system and provides a comprehensive techno-economic and environmental assessment of this technology. Recognizing the distributed nature of methane leakage associated with the natural gas system, this work is also intended to advance previous research at the Lenfest Center for Sustainable Energy that aims to show that small, modular energy systems can be made economic. This thesis contributes to and analyzes the development of a small-scale gas-to-liquids (GTL) system aimed at addressing flared natural gas from gas and oil wells. This thesis includes system engineering around a design that converts natural gas to synthesis gas (syngas) in a reciprocating internal combustion engine and then converts the syngas into methanol in a small-scale reactor. With methanol as the product, this research aims to show that such a system can not only address current and future natural gas flaring regulation, but eventually can compete economically with historically large-scale, centralized methanol production infrastructure. If successful, such systems could contribute to a shift away from large, multi-billion dollar capital cost chemical plants towards smaller systems with shorter lifetimes that may decrease the time to transition to more sustainable forms of energy and chemical conversion technologies. This research also quantifies the potential for such a system to contribute to mitigating GHG emissions, not only by addressing flared gas in the near-term, but also supporting future natural gas infrastructure ideas that may help to redefine the way the current natural gas pipeline system is used. The introduction of new, small-scale, distributed energy and chemical conversion systems located closer to the point of extraction may contribute to reducing methane leakage throughout the natural gas distribution system by reducing the reliance and risks associated with the aging natural gas pipeline infrastructure. The outcome of this thesis will result in several areas for future work. 
From an economic perspective, factors that contribute to overall system cost, such as operation and maintenance (O&M) and capital cost multiplier (referred to as the Lang Factor for large-scale petro-chemical plants), are not yet known for novel systems such as the technology presented here. From a technical perspective, commercialization of small-scale, distributed chemical conversion systems may create a demand for economical compression and air-separation technologies at this scale that do not currently exist. Further, new business cases may arise aimed at utilizing small, remote sources of methane, such as biogas from agricultural and municipal waste. Finally, while methanol was selected as the end-product for this thesis, future applications of this technology may consider methane conversion to hydrogen, ammonia, and ethylene for example, challenging the orthodoxy in the chemical industry that "bigger is better."
Data Sharing in DHT Based P2P Systems
NASA Astrophysics Data System (ADS)
Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia
The evolution of peer-to-peer (P2P) systems triggered the building of large-scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy on a major class of P2P systems, those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them for the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
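The basic data-placement rule underlying a DHT can be sketched with consistent hashing: keys and node identifiers are hashed onto the same ring, and a key is stored at the first node clockwise from it. The Chord-style successor rule and the peer names below are illustrative assumptions; real P2P DHT systems add routing tables, replication and churn handling on top of this.

```python
import hashlib
from bisect import bisect_right

def ring_hash(value: str) -> int:
    """Map a key or node identifier onto the ring (160-bit SHA-1 space)."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

def responsible_node(key: str, node_ids: list) -> str:
    """Chord-style placement: the node whose hash is the first one at or after
    the key's hash (wrapping around) is responsible for storing the key."""
    ring = sorted((ring_hash(n), n) for n in node_ids)
    positions = [pos for pos, _ in ring]
    idx = bisect_right(positions, ring_hash(key)) % len(ring)
    return ring[idx][1]

nodes = ["peer-%d" % i for i in range(8)]
for k in ["movie.avi", "paper.pdf", "song.mp3"]:
    print(k, "->", responsible_node(k, nodes))
```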
Nonequilibrium Quasiparticle Distribution Induced by Kondo Defects
NASA Astrophysics Data System (ADS)
Kroha, J.; Zawadowski, A.
2002-04-01
It is shown that in resistive nanowires out of equilibrium containing either single- or two-channel Kondo impurities the distribution function f(E,U) obeys scaling behavior in terms of the quasiparticle energy E and the bias voltage U. The numerically calculated f(E,U) curves explain quantitatively recent experiments on Cu and Au nanowires. The systematics of the impurity concentration cimp extracted from the comparison between theory and results on various Cu and Au samples strongly suggests that in these systems the scaling arises from magnetic Kondo impurities.
Biological instability in a chlorinated drinking water distribution network.
Nescerecka, Alina; Rubulis, Janis; Vital, Marius; Juhna, Talis; Hammes, Frederik
2014-01-01
The purpose of a drinking water distribution system is to deliver drinking water to the consumer, preferably with the same quality as when it left the treatment plant. In this context, the maintenance of good microbiological quality is often referred to as biological stability, and the addition of sufficient chlorine residuals is regarded as one way to achieve this. The full-scale drinking water distribution system of Riga (Latvia) was investigated with respect to biological stability in chlorinated drinking water. Flow cytometric (FCM) intact cell concentrations, intracellular adenosine tri-phosphate (ATP), heterotrophic plate counts and residual chlorine measurements were performed to evaluate the drinking water quality and stability at 49 sampling points throughout the distribution network. Cell viability methods were compared and the importance of extracellular ATP measurements was examined as well. FCM intact cell concentrations varied from 5×10^3 cells mL^-1 to 4.66×10^5 cells mL^-1 in the network. While this parameter did not exceed 2.1×10^4 cells mL^-1 in the effluent from any water treatment plant, 50% of all the network samples contained more than 1.06×10^5 cells mL^-1. This indisputably demonstrates biological instability in this particular drinking water distribution system, which was ascribed to a loss of disinfectant residuals and concomitant bacterial growth. The study highlights the potential of using cultivation-independent methods for the assessment of chlorinated water samples. In addition, it underlines the complexity of full-scale drinking water distribution systems, and the resulting challenges to establish the causes of biological instability.
Woods, Gwen C; Trenholm, Rebecca A; Hale, Bruce; Campbell, Zeke; Dickenson, Eric R V
2015-07-01
Nitrosamines are considered to pose greater health risks than currently regulated DBPs and are subsequently listed as a priority pollutant by the EPA, with potential for future regulation. Denver Water, as part of the EPA's Unregulated Contaminant Monitoring Rule 2 (UCMR2) monitoring campaign, found detectable levels of N-nitrosodimethylamine (NDMA) at all sites of maximum residency within the distribution system. To better understand the occurrence of nitrosamines and nitrosamine precursors, Denver Water undertook a comprehensive year-long monitoring campaign. Samples were taken every two weeks to monitor for NDMA in the distribution system, and quarterly sampling events further examined 9 nitrosamines and nitrosamine precursors throughout the treatment and distribution systems. NDMA levels within the distribution system were typically low (>1.3 to 7.2 ng/L) with a remote distribution site (frequently >200 h of residency) experiencing the highest concentrations found. Eight other nitrosamines (N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitroso-di-n-propylamine, N-nitroso-di-n-butylamine, N-nitroso-di-phenylamine, N-nitrosopyrrolidine, N-nitrosopiperidine, N-nitrosomorpholine) were also monitored but none of these 8, or precursors of these 8 [as estimated with formation potential (FP) tests], were detected anywhere in raw, partially-treated or distribution samples. Throughout the year, there was evidence that seasonality may impact NDMA formation, such that lower temperatures (~5-10°C) produced greater NDMA than during warmer months. The year of sampling further provided evidence that water quality and weather events may impact NDMA precursor loads. Precursor loading estimates demonstrated that NDMA precursors increased during treatment (potentially from cationic polymer coagulant aids). The precursor analysis also provided evidence that precursors may have increased further within the distribution system itself. This comprehensive study of a large-scale drinking water system provides insight into the variability of NDMA occurrence in a chloraminated system, which may be impacted by seasonality, water quality changes and/or the varied origins of NDMA precursors within a given system. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Voltolini, Marco; Kwon, Tae-Hyuk; Ajo-Franklin, Jonathan
2017-10-21
Pore-scale distribution of supercritical CO2 (scCO2) exerts significant control on a variety of key hydrologic as well as geochemical processes, including residual trapping and dissolution. Despite such importance, only a small number of experiments have directly characterized the three-dimensional distribution of scCO2 in geologic materials during the invasion (drainage) process. Here, we present a study which couples dynamic high-resolution synchrotron X-ray micro-computed tomography imaging of a scCO2/brine system at in situ pressure/temperature conditions with quantitative pore-scale modeling to allow direct validation of a pore-scale description of scCO2 distribution. The experiment combines high-speed synchrotron radiography with tomography to characterize the brine-saturated sample, the scCO2 breakthrough process, and the partially saturated state of a sandstone sample from the Domengine Formation, a regionally extensive unit within the Sacramento Basin (California, USA). The availability of a 3D dataset allowed us to examine correlations between grain and pore morphometric parameters and the actual distribution of scCO2 in the sample, including the examination of the role of small-scale sedimentary structure on CO2 distribution. The segmented scCO2/brine volume was also used to validate a simple computational model based on the local thickness concept, able to accurately simulate the distribution of scCO2 after drainage. The same method was also used to simulate Hg capillary pressure curves with satisfactory results when compared to the measured ones. Finally, this predictive approach, requiring only a tomographic scan of the dry sample, proved to be an effective route for studying processes related to CO2 invasion structure in geological samples at the pore scale.
Application of an improved wavelet algorithm in a fiber temperature sensor
NASA Astrophysics Data System (ADS)
Qi, Hui; Tang, Wenjuan
2018-03-01
Accurate temperature measurement is crucial in distributed optical fiber temperature sensors. To address the temperature measurement error caused by the weak Raman scattering signal and strong noise in such systems, a new denoising method based on an improved wavelet algorithm is presented. Building on the traditional modulus-maxima wavelet algorithm, signal correlation is considered to improve the ability to separate signal from noise; this is combined with an adaptive wavelet decomposition-scale selection method to avoid signal loss or residual noise caused by a mismatched scale. The filtering performance of the algorithm is compared with that of other methods in Matlab. Finally, a 3 km distributed optical fiber temperature sensing system is used for verification. Experimental results show that the temperature accuracy generally improves by 0.5233.
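As an illustration of wavelet-based denoising for Raman distributed temperature sensing, the sketch below (Python, using NumPy and PyWavelets) applies per-level soft thresholding with a universal threshold to a synthetic 3 km trace. It is a minimal stand-in for the paper's improved modulus-maxima/adaptive-scale algorithm, which is not reproduced here; the wavelet choice, threshold rule, and synthetic hot spot are illustrative assumptions.

```python
# Minimal sketch: wavelet denoising of a noisy distributed-temperature trace.
# Per-level universal soft thresholding stands in for the paper's improved
# modulus-maxima / adaptive-scale algorithm (not reproduced here).
import numpy as np
import pywt

def denoise_dts_trace(signal, wavelet="db4", max_level=6):
    """Soft-threshold wavelet denoising with a per-level universal threshold."""
    level = min(max_level, pywt.dwt_max_level(len(signal), pywt.Wavelet(wavelet).dec_len))
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    denoised = [coeffs[0]]  # keep the approximation coefficients untouched
    for detail in coeffs[1:]:
        sigma = np.median(np.abs(detail)) / 0.6745        # robust noise estimate
        thr = sigma * np.sqrt(2.0 * np.log(len(signal)))  # universal threshold
        denoised.append(pywt.threshold(detail, thr, mode="soft"))
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Synthetic example: a 3 km fibre with a 2-degree hot spot buried in noise.
z = np.linspace(0.0, 3000.0, 4096)                        # position along fibre (m)
truth = 25.0 + 2.0 * np.exp(-((z - 1800.0) / 30.0) ** 2)
noisy = truth + 0.8 * np.random.randn(z.size)
clean = denoise_dts_trace(noisy)
print("RMS error before/after:", np.std(noisy - truth), np.std(clean - truth))
```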
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) by randomly selecting links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides, with a certain probability, which links to its neighbours to select according to the received data. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero; meanwhile, several consensus criteria are obtained. A numerical example shows the reliability of the proposed methods.
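To make the setting concrete, the following sketch (Python, using NumPy and NetworkX) runs plain randomized-gossip averaging on a scale-free Barabasi-Albert graph in which each link is activated with probability p at every step. It is not the paper's adaptive protocol; the step size, activation probability, and network size are illustrative assumptions.

```python
# Minimal sketch: consensus on a scale-free network where each agent activates
# only a random subset of its links at every step. Plain randomized averaging,
# not the paper's adaptive protocol; p and eps are illustrative parameters.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
G = nx.barabasi_albert_graph(n=100, m=2, seed=0)   # node degrees follow a power law
x = rng.normal(size=G.number_of_nodes())           # initial agent states
p, eps = 0.5, 0.02                                 # link-selection probability, step size

for _ in range(3000):
    update = np.zeros_like(x)
    for i, j in G.edges():
        if rng.random() < p:                       # link (i, j) randomly selected this round
            update[i] += eps * (x[j] - x[i])
            update[j] += eps * (x[i] - x[j])
    x += update

print("spread of agent states after consensus:", x.max() - x.min())
```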
Integral criteria for large-scale multiple fingerprint solutions
NASA Astrophysics Data System (ADS)
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then analyse the simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the lowest FRR while the FAR is predefined, have been obtained. The results of real multiple-fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
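The sketch below (Python, NumPy) illustrates the basic workflow of fusing per-finger scores into an integral score and choosing the decision threshold that meets a target FAR, then measuring the resulting FRR. The Gaussian score models and the simple sum rule are illustrative assumptions, not the paper's derived optimal criterion.

```python
# Minimal sketch: fuse per-finger match scores into an integral score and pick
# the threshold that achieves a target FAR, then report the resulting FRR.
import numpy as np

rng = np.random.default_rng(1)
n_fingers, n_trials = 3, 20000

# Synthetic per-finger scores: impostors ~ N(0, 1), genuine ~ N(2, 1) (assumed).
impostor = rng.normal(0.0, 1.0, size=(n_trials, n_fingers))
genuine = rng.normal(2.0, 1.0, size=(n_trials, n_fingers))

# Integral score = sum of per-finger scores (simple sum rule).
s_imp = impostor.sum(axis=1)
s_gen = genuine.sum(axis=1)

target_far = 1e-3
threshold = np.quantile(s_imp, 1.0 - target_far)   # FAR fixed by the impostor tail
frr = np.mean(s_gen < threshold)
print(f"threshold={threshold:.2f}  FAR~{target_far}  FRR={frr:.4f}")
```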
Probabilistic measures of persistence and extinction in measles (meta)populations.
Gunning, Christian E; Wearing, Helen J
2013-08-01
Persistence and extinction are fundamental processes in ecological systems that are difficult to accurately measure due to stochasticity and incomplete observation. Moreover, these processes operate on multiple scales, from individual populations to metapopulations. Here, we examine an extensive new data set of measles case reports and associated demographics in pre-vaccine era US cities, alongside a classic England & Wales data set. We first infer the per-population quasi-continuous distribution of log incidence. We then use stochastic, spatially implicit metapopulation models to explore the frequency of rescue events and apparent extinctions. We show that, unlike critical community size, the inferred distributions account for observational processes, allowing direct comparisons between metapopulations. The inferred distributions scale with population size. We use these scalings to estimate extinction boundary probabilities. We compare these predictions with measurements in individual populations and random aggregates of populations, highlighting the importance of medium-sized populations in metapopulation persistence. © 2013 John Wiley & Sons Ltd/CNRS.
NASA Astrophysics Data System (ADS)
Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua
2015-12-01
Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky-wave ground backscatter radar are both capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly shows that both methods yield accurate ionospheric electron density information and thereby provide reliable approaches to ionospheric sounding. Our study can improve current understanding of the capability and limitations of these two methods for large-scale IED reconstruction.
Reactive solute transport in an asymmetric aquifer-aquitard system with scale-dependent dispersion
NASA Astrophysics Data System (ADS)
Zhou, R.; Zhan, H.
2017-12-01
Understanding reactive solute transport in an aquifer-aquitard system is important for studying transport behavior in more complex porous media. When transport properties are asymmetric in the upper and lower aquitards, reactive solute transport in such an aquifer-aquitard system becomes a coupled three-domain problem that is more complex than the symmetric case in which the upper and lower aquitards have identical transport properties. Meanwhile, the dispersivity of transport in the aquifer is considered a linear or exponential function of travel distance due to the heterogeneity of the aquifer. This study proposed new transport models to describe reactive solute transport in such an asymmetric aquifer-aquitard system with scale-dependent dispersion. Mathematical models were developed for such problems under the first-type and third-type boundary conditions to analyze the spatial-temporal concentration and mass distribution in the aquifer and aquitards with the help of the Laplace transform technique and the de Hoog numerical Laplace inversion method. Breakthrough curves (BTCs) and residence time distribution curves (RTDs) obtained from the models with scale-dependent dispersion, constant dispersion, and constant effective dispersivity were compared to reflect the lumped scale-dispersion effect in the aquifer-aquitard system. The newly acquired solutions were then tested extensively against previous analytical and numerical solutions and were proven to be robust and accurate. Furthermore, to study the back diffusion of contaminant mass in the aquitards, a zero-contaminant-mass concentration boundary condition was imposed on the inlet boundary of the system after a certain time, a process also called water flushing. The diffusion loss along the aquifer/aquitard interfaces and the change in the stored-mass ratio in each of the three domains (upper aquitard, aquifer, and lower aquitard) after water flushing provided an insightful and comprehensive analysis of transport behavior with an asymmetric distribution of transport properties.
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-customizable search services, and a single-machine web crawler cannot handle such demanding tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy with Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and a Bloom Filter algorithm is applied to the dupefilter module to reduce memory consumption. The movie information crawled from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
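Since the memory saving comes from replacing an exact URL set with a Bloom filter in the dupefilter, the sketch below (pure Python) shows a minimal Bloom filter for URL de-duplication. It omits the Redis backing and Scrapy integration described above; the capacity, error rate, and example URLs are illustrative assumptions.

```python
# Minimal sketch: a Bloom filter for URL de-duplication, the idea behind a
# memory-efficient dupefilter. Pure Python, no Redis backing; capacity and
# error-rate figures are illustrative.
import hashlib
import math

class BloomFilter:
    def __init__(self, capacity=1_000_000, error_rate=0.001):
        self.m = int(-capacity * math.log(error_rate) / math.log(2) ** 2)  # bit count
        self.k = max(1, int(self.m / capacity * math.log(2)))              # hash count
        self.bits = bytearray(self.m // 8 + 1)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]  # double hashing

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

seen = BloomFilter()
seen.add("https://movie.example.org/subject/1")                 # hypothetical URL
print("https://movie.example.org/subject/1" in seen)            # True
print("https://movie.example.org/subject/2" in seen)            # almost certainly False
```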
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, is proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
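The sketch below (Python, SciPy) shows the general shape of an accrual failure detector that fits a Weibull distribution to observed heartbeat inter-arrival times and reports a suspicion level phi(t) = -log10(1 - F(t)). The fitting window, simulated heartbeat gaps, and parameter choices are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal sketch: an accrual failure detector that fits a Weibull distribution
# to heartbeat inter-arrival times and outputs a suspicion level
#   phi(t) = -log10(1 - F(t)),
# where F is the fitted Weibull CDF and t is the time since the last heartbeat.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated heartbeat gaps (seconds) under jittery cloud network conditions.
observed_gaps = rng.weibull(a=1.5, size=500) * 0.8 + 0.05

# Fit a Weibull to the window of recent gaps (location fixed at zero).
shape, loc, scale = stats.weibull_min.fit(observed_gaps, floc=0.0)

def suspicion(elapsed_since_last_heartbeat):
    tail = stats.weibull_min.sf(elapsed_since_last_heartbeat, shape, loc=loc, scale=scale)
    return -np.log10(max(tail, 1e-300))   # larger phi -> stronger suspicion of failure

for t in (0.5, 2.0, 5.0):
    print(f"phi after {t:.1f}s of silence: {suspicion(t):.2f}")
```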
Large-scale P2P network based distributed virtual geographic environment (DVGE)
NASA Astrophysics Data System (ADS)
Tan, Xicheng; Yu, Liang; Bian, Fuling
2007-06-01
The Virtual Geographic Environment (VGE) has attracted considerable attention as a kind of software information system that helps us understand and analyze the real geographic environment, and it has expanded into application service systems in distributed environments--the distributed virtual geographic environment (DVGE) system--with some notable achievements. However, limited by the massive data volumes of VGE, network bandwidth, numerous concurrent requests, and economic factors, DVGE still faces challenges and problems that prevent current systems from providing the public with high-quality service under the present network mode. The rapid development of peer-to-peer (P2P) network technology offers new solutions to these challenges, since P2P technology can effectively publish and search network resources and thereby realize efficient information sharing. Accordingly, this paper presents research on the large-scale peer-to-peer network extension of DVGE, with an in-depth study of the network framework, routing mechanism, and DVGE data management on a P2P network.
NASA Astrophysics Data System (ADS)
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - that may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, leaving however the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on saving propensities. The system relaxation for fixed and distributed saving schemes are found to be different.
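The following sketch (Python, NumPy) reproduces only the basic kinetic wealth-exchange Monte Carlo rule with distributed saving propensities; the trap agents and the specific person/wealth observables analysed above are omitted, and the population size, step count, and uniform saving distribution are illustrative assumptions.

```python
# Minimal sketch: kinetic wealth-exchange Monte Carlo with distributed saving
# propensities. Trap agents and the paper's specific observables are omitted.
import numpy as np

rng = np.random.default_rng(3)
n_agents, n_steps = 1000, 200_000
wealth = np.ones(n_agents)                   # everyone starts with unit wealth
saving = rng.uniform(0.0, 1.0, n_agents)     # distributed saving propensities

for _ in range(n_steps):
    i, j = rng.integers(n_agents, size=2)
    if i == j:
        continue
    eps = rng.random()
    # Each agent keeps its saved fraction; the rest is pooled and randomly split.
    pool = (1 - saving[i]) * wealth[i] + (1 - saving[j]) * wealth[j]
    wealth_i_new = saving[i] * wealth[i] + eps * pool
    wealth[j] = saving[j] * wealth[j] + (1 - eps) * pool
    wealth[i] = wealth_i_new                 # total wealth is conserved in each trade

print("mean wealth:", wealth.mean(), " max/mean:", wealth.max() / wealth.mean())
```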
Learning Novel Musical Pitch via Distributional Learning
ERIC Educational Resources Information Center
Ong, Jia Hoong; Burnham, Denis; Stevens, Catherine J.
2017-01-01
Because different musical scales use different sets of intervals and, hence, different musical pitches, how do music listeners learn those that are in their native musical system? One possibility is that musical pitches are acquired in the same way as phonemes, that is, via distributional learning, in which learners infer knowledge from the…
Very Large Scale Distributed Information Processing Systems
1991-09-27
The stability of iron corrosion products and the bacterial composition of biofilm in drinking water distribution systems (DWDS) could have great impact on the water safety at the consumer ends. In this work, pipe loops were setup to investigate the transformation characteristics ...
Toxic arsenic (As) is known to incorporate from source well water onto the scales of distribution system pipes such as iron, copper, galvanized steel and even plastic containing internal buildup of iron coatings (Lytle et al., 2010, 2004; Schock, 2015; Reiber and Dostal, 2000). W...
Barth, Gilbert R.; Illangasekare, T.H.; Rajaram, H.
2003-01-01
This work considers the applicability of conservative tracers for detecting high-saturation nonaqueous-phase liquid (NAPL) entrapment in heterogeneous systems. For this purpose, a series of experiments and simulations was performed using a two-dimensional heterogeneous system (10 × 1.2 m), which represents an intermediate scale between laboratory and field scales. Tracer tests performed prior to injecting the NAPL provide the baseline response of the heterogeneous porous medium. Two NAPL spill experiments were performed and the entrapped-NAPL saturation distribution measured in detail using a gamma-ray attenuation system. Tracer tests following each of the NAPL spills produced breakthrough curves (BTCs) reflecting the impact of entrapped NAPL on conservative transport. To evaluate significance, the impact of NAPL entrapment on the conservative-tracer breakthrough curves was compared to simulated breakthrough curve variability for different realizations of the heterogeneous distribution. Analysis of the results reveals that the NAPL entrapment has a significant impact on the temporal moments of conservative-tracer breakthrough curves. © 2003 Elsevier B.V. All rights reserved.
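Because the detection criterion above rests on temporal moments of breakthrough curves, the sketch below (Python, NumPy) computes the zeroth moment, mean arrival time, and temporal variance of a synthetic BTC. The curve shape is invented for illustration; the comparison against heterogeneous-field realizations from the study is not reproduced.

```python
# Minimal sketch: temporal moments of a conservative-tracer breakthrough curve,
# the quantities used to detect the signature of entrapped NAPL.
import numpy as np

t = np.linspace(0.0, 50.0, 2001)          # time (h)
c = t ** 3 * np.exp(-t / 3.0)             # synthetic breakthrough curve (illustrative)

dt = t[1] - t[0]
m0 = np.sum(c) * dt                                        # zeroth moment (mass)
mean_arrival = np.sum(t * c) * dt / m0                     # first normalized moment
variance = np.sum((t - mean_arrival) ** 2 * c) * dt / m0   # second central moment
print(f"m0={m0:.3f}  mean arrival={mean_arrival:.2f} h  variance={variance:.2f} h^2")
```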
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dreier, J.; Huggenberger, M.; Aubert, C.
1996-08-01
The PANDA test facility at PSI in Switzerland is used to study the long-term Simplified Boiling Water Reactor (SBWR) Passive Containment Cooling System (PCCS) performance. The PANDA tests demonstrate performance on a larger scale than previous tests and examine the effects of any non-uniform spatial distributions of steam and non-condensables in the system. The PANDA facility has a 1:1 vertical scale and a 1:25 "system" scale (volume, power, etc.). Steady-state PCCS condenser performance tests and extensive facility characterization tests have been completed. Transient system behavior tests were conducted late in 1995; results from the first three transient tests (M3 series) are reviewed. The first PANDA tests showed that the overall global behavior of the SBWR containment was globally repeatable and very favorable; the system exhibited great "robustness."
NASA Astrophysics Data System (ADS)
Grasso, J. R.; Bachèlery, P.
Self-organized systems are often used to describe natural phenomena where power laws and scale invariant geometry are observed. The Piton de la Fournaise volcano shows power-law behavior in many aspects. These include the temporal distribution of eruptions, the frequency-size distributions of induced earthquakes, dikes, fissures, lava flows and interflow periods, all evidence of self-similarity over a finite scale range. We show that the bounds to scale-invariance can be used to derive geomechanical constraints on both the volcano structure and the volcano mechanics. We ascertain that the present magma bodies are multi-lens reservoirs in a quasi-eruptive condition, i.e. a marginally critical state. The scaling organization of dynamic fluid-induced observables on the volcano, such as fluid induced earthquakes, dikes and surface fissures, appears to be controlled by underlying static hierarchical structure (geology) similar to that proposed for fluid circulations in human physiology. The emergence of saturation lengths for the scalable volcanic observable argues for the finite scalability of complex naturally self-organized critical systems, including volcano dynamics.
The scaling of geographic ranges: implications for species distribution models
Yackulic, Charles B.; Ginsberg, Joshua R.
2016-01-01
There is a need for timely science to inform policy and management decisions; however, we must also strive to provide predictions that best reflect our understanding of ecological systems. Species distributions evolve through time and reflect responses to environmental conditions that are mediated through individual and population processes. Species distribution models that reflect this understanding, and explicitly model dynamics, are likely to give more accurate predictions.
Sindt, Anthony R.; Fischer, Jesse R.; Quist, Michael C.; Pierce, Clay
2011-01-01
Anthropogenic alterations to Iowa’s landscape have greatly altered lotic systems with consequent effects on the biodiversity of freshwater fauna. Ictalurids are a diverse group of fishes and play an important ecological role in aquatic ecosystems. However, little is known about their distribution and status in lotic systems throughout Iowa. The purpose of this study was to describe the distribution of ictalurids in Iowa and examine their relationship with ecological integrity of streams and rivers. Historical data (i.e., 1884–2002) compiled for the Iowa Aquatic Gap Analysis Project (IAGAP) were used to detect declines in the distribution of ictalurids in Iowa streams and rivers at stream segment and watershed scales. Eight variables characterizing ictalurid assemblages were used to evaluate relationships with index of biotic integrity (IBI) ratings. Comparisons of recent and historic data from the IAGAP database indicated that 9 of Iowa’s 10 ictalurid species experienced distribution declines at one or more spatial scales. Analysis of variance indicated that ictalurid assemblages differed among samples with different IBI ratings. Specifically, total ictalurid, sensitive ictalurid, and Noturus spp. richness increased as IBI ratings increased. Results indicate declining ictalurid species distributions and biotic integrity are related, and management strategies aimed to improve habitat and increase biotic integrity will benefit ictalurid species.
Orobaton, Nosakhare; Abdulazeez, Jumare; Abegunde, Dele; Shoretire, Kamil; Maishanu, Abubakar; Ikoro, Nnenna; Fapohunda, Bolaji; Balami, Wapada; Beal, Katherine; Ganiyu, Akeem; Gwamzhi, Ringpon; Austin, Anne
2017-01-01
Background Postpartum haemorrhage (PPH) is a leading cause of maternal death in Sokoto State, Nigeria, where 95% of women give birth outside of a health facility. Although pilot schemes have demonstrated the value of community-based distribution of misoprostol for the prevention of PPH, none have provided practical insight on taking such programs to scale. Methods A community-based system for the distribution of misoprostol tablets (in 600ug) and chlorhexidine digluconate gel 7.1% to mother-newborn dyads was introduced by state government officials and community leaders throughout Sokoto State in April 2013, with the potential to reach an estimated 190,467 annual births. A simple outcome form that collected distribution and consumption data was used to assess the percentage of mothers that received misoprostol at labor through December 2014. Mothers’ conditions were tracked through 6 weeks postpartum. Verbal autopsies were conducted on associated maternal deaths. Results Misoprostol distribution was successfully introduced and reached mothers in labor in all 244 wards in Sokoto State. Community data collection systems were successfully operational in all 244 wards with reliable capacity to record maternal deaths. 70,982 women or 22% of expected births received misoprostol from April 2013 to December 2014. Between April and December 2013, 33 women (< 1%) reported that heavy bleeding persisted after misoprostol use and were promptly referred. There were a total of 11 deaths in the 2013 cohort which were confirmed as maternal deaths by verbal autopsies. Between January and December of 2014, a total 434 women (1.25%) that ingested misoprostol reported associated side effects. Conclusion It is feasible and safe to utilize government guidelines on results-based primary health care to successfully introduce community distribution of life saving misoprostol at scale to reduce PPH and improve maternal outcomes. Lessons from Sokoto State’s at-scale program implementation, to assure every mother’s right to uterotonics, can inform scale-up elsewhere in Nigeria. PMID:28234894
The Ghost in the Machine: Fracking in the Earth's Complex Brittle Crust
NASA Astrophysics Data System (ADS)
Malin, P. E.
2015-12-01
This paper discusses the impact of complex rock properties on practical applications like fracking and its associated seismic emissions. A variety of borehole measurements show that the complex physical properties of the upper crust cannot be characterized by averages on any scale. Instead they appear to follow three empirical rules: a power-law distribution in physical scales, a lognormal distribution in populations, and a direct relation between changes in porosity and log(permeability). These rules can be directly related to the presence of fluid-rich and seismically active fractures - from mineral grains to fault segments. (These are the "ghosts" referred to in the title.) In other physical systems, such behaviors arise on the boundaries of phase changes, and are studied as "critical state physics". In analogy to the four phases of water, crustal rocks progress upward from an un-fractured, ductile lower crust to nearly cohesionless surface alluvium. The crust in between is in an unstable transition. It is in this layer that methods such as hydrofracking operate - be they in oil and gas, geothermal, or mining. As a result, nothing is predictable in these systems. Crustal models have conventionally been constructed assuming that in situ permeability and related properties are normally distributed. This approach is consistent with the use of short scale-length cores and logs to estimate properties. However, reservoir-scale flow data show that they are better fit by lognormal distributions. Such "long tail" distributions are observed for well productivity, ore vein grades, and induced seismic signals. Outcrop and well-log data show that many rock properties also show a power-law-type variation in scale lengths. In terms of Fourier power spectra, if the number of peaks per km is k, then their power is proportional to 1/k. The source of this variation is related to pore-space connectivity, beginning with grain fractures. We then show that a passive seismic method, Tomographic Fracture Imaging™ (TFI), can observe the distribution of this connectivity. Combined with TFI data, our fracture-connectivity model reveals the most significant crustal features and accounts for their range of passive and stimulated behaviors.
Fractal analysis of urban environment: land use and sewer system
NASA Astrophysics Data System (ADS)
Gires, A.; Ochoa Rodriguez, S.; Van Assel, J.; Bruni, G.; Murla Tulys, D.; Wang, L.; Pina, R.; Richard, J.; Ichiba, A.; Willems, P.; Tchiguirinskaia, I.; ten Veldhuis, M. C.; Schertzer, D. J. M.
2014-12-01
Land use distributions are usually obtained by automatic processing of satellite and airborne imagery. The complexity of the obtained patterns, which are furthermore scale dependent, is enhanced in urban environments. This scale dependency is even more visible in a rasterized representation where a single class is assigned to each pixel. A parameter commonly analysed in urban hydrology is the coefficient of imperviousness, which reflects the proportion of rainfall that will be immediately active in the catchment response. This coefficient is strongly scale dependent in a rasterized representation. This complex behaviour is well grasped with the help of the scale-invariant notion of fractal dimension, which makes it possible to quantify the space occupied by a geometrical set (here the impervious areas) not only at a single scale but across all scales. This fractal dimension is also compared to the ones computed on the representation of the catchments used by operational semi-distributed models. Fractal dimensions of the corresponding sewer systems are also computed and compared with values found in the literature for natural river networks. This methodology is tested on 7 pilot sites of the European NWE Interreg IV RainGain project located in France, Belgium, the Netherlands, the United Kingdom and Portugal. Results are compared across all the case studies, which exhibit different physical features (slope, level of urbanisation, population density, etc.).
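A simple way to see how a fractal dimension quantifies impervious areas across scales is box counting, sketched below in Python with NumPy. The random raster is a stand-in for a real land-use map, and plain box counting replaces the full scaling analysis used above; all parameters are illustrative assumptions.

```python
# Minimal sketch: box-counting estimate of the fractal dimension of a binary
# impervious-area raster.
import numpy as np

rng = np.random.default_rng(4)
size = 256
raster = rng.random((size, size)) < 0.3          # True = impervious pixel (synthetic)

box_sizes = [2, 4, 8, 16, 32, 64]
counts = []
for b in box_sizes:
    # Count boxes of side b containing at least one impervious pixel.
    blocks = raster.reshape(size // b, b, size // b, b).any(axis=(1, 3))
    counts.append(blocks.sum())

# Fractal dimension = minus the slope of log(count) versus log(box size).
slope, _ = np.polyfit(np.log(box_sizes), np.log(counts), 1)
print("estimated fractal dimension:", -slope)
```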
Context-aware distributed cloud computing using CloudScheduler
NASA Astrophysics Data System (ADS)
Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.
2017-10-01
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.
A (Sub)Micro-Scale Investigation of Fe Plaque Distribution in Selected Wetland Plant Root Epidermis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Huan
This study focuses on the distribution of Fe plaque in the root epidermis of selected wetland plant species (Phragmites australis, Typha latifolia and Spartina alterniflora) using synchrotron X-ray microfluorescence, X-ray absorption near edge structure and transmission X-ray microscope techniques with (sub)micro-scale resolution. The wetland plants were collected in Liberty State Park, New Jersey, USA, and the Yangtze River intertidal zone, Shanghai, China, respectively, during different time periods. Although a number of early studies have reported that Fe-oxides can precipitate on the surface of aquatic plants in the rhizosphere to form iron plaque, the role of Fe plaque in regulating the metal biogeochemical cycle has been discussed for decades. The results from this study show that Fe is mainly distributed in the epidermis non-uniformly, and the major Fe species is ferric Fe (Fe3+). This information is needed to make broad inferences about the relevant plant metal uptake mechanisms because Fe accumulation and distribution in the root system is important to understanding the metal transport processes that control the mobility of metals in plants. This study improves our understanding of Fe plaque distribution and speciation in the wetland plant root system, and helps us to understand the function of Fe plaque in metal transport and accumulation through the root system.
Large-Scale Geographic Variation in Distribution and Abundance of Australian Deep-Water Kelp Forests
Marzinelli, Ezequiel M.; Williams, Stefan B.; Babcock, Russell C.; Barrett, Neville S.; Johnson, Craig R.; Jordan, Alan; Kendrick, Gary A.; Pizarro, Oscar R.; Smale, Dan A.; Steinberg, Peter D.
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiate is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia’s Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10–100 m to 100–1,000 km) and depths (15–60 m) across several regions ca 2–6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40–50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves. PMID:25693066
Review of Interconnection Practices and Costs in the Western States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bird, Lori A; Flores-Espino, Francisco; Volpi, Christina M
The objective of this report is to evaluate the nature of barriers to interconnecting distributed PV, assess costs of interconnection, and compare interconnection practices across various states in the Western Interconnection. The report addresses practices for interconnecting both residential and commercial-scale PV systems to the distribution system. This study is part of a larger, joint project between the Western Interstate Energy Board (WIEB) and the National Renewable Energy Laboratory (NREL), funded by the U.S. Department of Energy, to examine barriers to distributed PV in the 11 states wholly within the Western Interconnection.
Security of Distributed-Phase-Reference Quantum Key Distribution
NASA Astrophysics Data System (ADS)
Moroder, Tobias; Curty, Marcos; Lim, Charles Ci Wen; Thinh, Le Phuc; Zbinden, Hugo; Gisin, Nicolas
2012-12-01
Distributed-phase-reference quantum key distribution stands out for its easy implementation with present day technology. For many years, a full security proof of these schemes in a realistic setting has been elusive. We solve this long-standing problem and present a generic method to prove the security of such protocols against general attacks. To illustrate our result, we provide lower bounds on the key generation rate of a variant of the coherent-one-way quantum key distribution protocol. In contrast to standard predictions, it appears to scale quadratically with the system transmittance.
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
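To ground the discussion of distribution reliability indices, the sketch below (Python) computes SAIFI, SAIDI, and CAIDI, which are standard indices of the kind the article reviews. The outage records, customer counts, and the storm-exclusion note are invented for illustration and are not taken from the article.

```python
# Minimal sketch: computing SAIFI, SAIDI and CAIDI from outage records.
total_customers = 50_000

# Each record: (customers interrupted, outage duration in minutes)
outages = [
    (1_200, 90),     # routine equipment failure
    (300, 45),
    (25_000, 840),   # large storm-induced event (often excluded as a "major event day")
]

saifi = sum(n for n, _ in outages) / total_customers        # interruptions per customer
saidi = sum(n * d for n, d in outages) / total_customers    # outage minutes per customer
caidi = saidi / saifi                                       # minutes per interruption

print(f"SAIFI={saifi:.3f}  SAIDI={saidi:.1f} min  CAIDI={caidi:.1f} min")
print("Excluding the storm event:",
      sum(n for n, _ in outages[:2]) / total_customers,
      sum(n * d for n, d in outages[:2]) / total_customers)
```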
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Computer-generated forces in distributed interactive simulation
NASA Astrophysics Data System (ADS)
Petty, Mikel D.
1995-04-01
Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.
2008-06-01
numbers—into inventory, sales, purchasing, marketing, and similar database systems distributed throughout an enterprise. (Sweeney, 2005) It can be seen as...the following: • Data sharing, both inside and outside of an enterprise. • Efficient management of massive data produced by an RFID system...matrix can be read omni-directionally and can be scaled down so that it can be affixed to small items. The DoD brokered an agreement with EAN/UCC, the
Identification of Curie temperature distributions in magnetic particulate systems
NASA Astrophysics Data System (ADS)
Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.
2017-09-01
This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adopted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method is demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
Analysis of Delays in Transmitting Time Code Using an Automated Computer Time Distribution System
1999-12-01
jlevine@clock.bldrdoc.gov Abstract: An automated computer time distribution system broadcasts standard time to users using computers and modems via...contributed to delays - software platform (50% of the delay), transmission speed of time-codes (25%), telephone network (15%), modem and others (10%). The... modems, and telephone lines. Users dial the ACTS server to receive time traceable to the national time scale of Singapore, UTC(PSB). The users can in
Kushniruk, A; Kaipio, J; Nieminen, M; Hyppönen, H; Lääveri, T; Nohr, C; Kanstrup, A M; Berg Christiansen, M; Kuo, M-H; Borycki, E
2014-08-15
The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.
Kaipio, J.; Nieminen, M.; Hyppönen, H.; Lääveri, T.; Nohr, C.; Kanstrup, A. M.; Berg Christiansen, M.; Kuo, M.-H.; Borycki, E.
2014-01-01
Summary Objectives The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Methods Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. Results It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. Conclusion As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems. PMID:25123725
Multi-Scale Stochastic Resonance Spectrogram for fault diagnosis of rolling element bearings
NASA Astrophysics Data System (ADS)
He, Qingbo; Wu, Enhao; Pan, Yuanyuan
2018-04-01
It is not easy to identify an incipient defect of a rolling element bearing by analyzing vibration data because of the disturbance of background noise. The weak and unrecognizable transient fault signal of a mechanical system can be enhanced by the stochastic resonance (SR) technique, which utilizes the noise in the system. However, it is challenging for the SR technique to identify sensitive fault information in non-stationary signals. This paper proposes a new method called the Multi-Scale SR Spectrogram (MSSRS) for bearing defect diagnosis. The new method considers the non-stationary property of defective bearing vibration signals, and treats every scale of the time-frequency distribution (TFD) as a modulation system. The SR technique is then applied to each modulation system according to each frequency in the TFD. The SR results are sensitive to the defect information because the energy of the transient vibration is distributed in a limited frequency band in the TFD. Collecting the spectra of the SR outputs at all frequency scales then generates the MSSRS. The proposed MSSRS is able to deal well with non-stationary transient signals, and can highlight the defect-induced frequency component corresponding to the impulse information. Experimental results with practical defective bearing vibration data have shown that the proposed method outperforms earlier SR methods and exhibits good application prospects in rolling element bearing fault diagnosis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez; ...
2017-11-18
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but the scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
NASA Astrophysics Data System (ADS)
Senyel, Muzeyyen Anil
Investments in the urban energy infrastructure for distributing electricity and natural gas are analyzed using (1) property data measuring distribution plant value at the local/tax district level, and (2) system outputs such as sectoral numbers of customers and energy sales, input prices, and company-specific characteristics such as average wages and load factor. Socio-economic and site-specific urban and geographic variables, however, have often been neglected in past studies. The purpose of this research is to incorporate these site-specific characteristics of electricity and natural gas distribution into investment cost model estimations. These local characteristics include (1) socio-economic variables, such as income and wealth; (2) urban-related variables, such as density, land use, street pattern, and housing pattern; (3) geographic and environmental variables, such as soil, topography, and weather; and (4) company-specific characteristics such as average wages and load factor. The classical output variables include residential and commercial-industrial customers and sales. In contrast to most previous research, only capital investments at the local level are considered. In addition to aggregate cost modeling, the analysis focuses on the investment costs for the system components: overhead conductors, underground conductors, conduits, poles, transformers, services, street lighting, and station equipment for electricity distribution; and mains, services, and regular and industrial measurement and regulation stations for natural gas distribution. The Box-Cox, log-log and additive models are compared to determine the best-fitting cost functions. The Box-Cox form turns out to be superior to the other forms at the aggregate level and for network components. However, a linear additive form provides a better fit for end-user-related components. The results show that, in addition to output variables and company-specific variables, various site-specific variables are statistically significant at the aggregate and disaggregate levels. Local electricity and natural gas distribution networks are characterized by a natural monopoly cost structure and economies of scale and density. The results provide evidence for economies of scale and density for the aggregate electricity and natural gas distribution systems. However, distribution components have varying economic characteristics. The backbones of the networks (overhead conductors for electricity, and mains for natural gas) display economies of scale and density, but services in both systems and street lighting display diseconomies of scale and diseconomies of density. Finally, multi-utility network cost analyses are presented for aggregate and disaggregate electricity and natural gas capital investments. Economies of scope analyses investigate whether providing electricity and natural gas jointly is economically advantageous, as compared to providing these products separately. Significant economies of scope are observed for both the total network and the underground capital investments.
NASA Technical Reports Server (NTRS)
Mullican, R. C.; Hayes, B. C.
1991-01-01
Preliminary results of research conducted in the late 1970's indicate that perceptual qualities of an enclosure can be influenced by the distribution of illumination within the enclosure. Subjective impressions such as spaciousness, perceptual clarity, and relaxation or tenseness, among others, appear to be related to different combinations of surface luminance. A prototype indirect ambient illumination system was developed which will allow crew members to alter surface luminance distributions within an enclosed module, thus modifying perceptual cues to match crew preferences. A traditional lensed direct lighting system was compared to the prototype utilizing the full-scale mockup of Space Station Freedom developed by Marshall Space Flight Center. The direct lensed system was installed in the habitation module with the indirect prototype deployed in the U.S. laboratory module. Analysis centered on the illuminance and luminance distributions resultant from these systems and the implications of various luminaire spacing options. All test configurations were evaluated for compliance with NASA Standard 3000, Man-System Integration Standards.
Scheduling based on a dynamic resource connection
NASA Astrophysics Data System (ADS)
Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.
2017-02-01
The practical use of distributed computing systems is associated with many problems, including the organization of effective interaction between agents located at the nodes of the system, the specific configuration of each node to perform a certain task, the effective distribution of the available information and computational resources of the system, and the control of the multithreading that implements the logic of solving research problems. The article describes a method of computing load balancing in distributed automatic systems, focused on multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices, providing effective dynamic scaling of computing power under peak load, is offered. The results of model experiments on the developed load-scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and scaling of the distributed computing system architecture.
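As a concrete picture of the kind of load balancing with dynamic scaling described above, the sketch below (pure Python) dispatches each request to the least-loaded node and attaches a new node when the average load crosses a threshold. It is not the article's algorithm; the request stream, threshold, and the fact that loads only accumulate are illustrative simplifications.

```python
# Minimal sketch: dispatch terminal-device requests to the least-loaded node
# and add nodes when the average load crosses a threshold (dynamic scaling).
import heapq
import random

random.seed(5)
nodes = [[0.0, f"node-{i}"] for i in range(4)]        # [accumulated load, name]
heapq.heapify(nodes)
SCALE_UP_THRESHOLD = 8.0                              # average load per node

for step in range(200):
    cost = random.uniform(0.5, 2.0)                   # work carried by this request
    load, name = heapq.heappop(nodes)                 # least-loaded node handles it
    heapq.heappush(nodes, [load + cost, name])

    avg_load = sum(l for l, _ in nodes) / len(nodes)
    if avg_load > SCALE_UP_THRESHOLD:                 # peak load: attach a new node
        heapq.heappush(nodes, [0.0, f"node-{len(nodes)}"])

print("final cluster size:", len(nodes))
print("per-node loads:", sorted(round(l, 1) for l, _ in nodes))
```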
IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.
This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D™). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.
Revisiting control establishments for emerging energy hubs
NASA Astrophysics Data System (ADS)
Nasirian, Vahidreza
Emerging small-scale energy systems, i.e., microgrids and smart grids, rely on centralized controllers for voltage regulation, load sharing, and economic dispatch. However, the central controller is a single point of failure in such a design, as failure of either the controller or its attached communication links can render the entire system inoperable. This work seeks alternative distributed control structures to improve system reliability and scalability. A cooperative distributed controller is proposed that uses a noise-resilient voltage estimator and handles global voltage regulation and load sharing across a DC microgrid. Distributed adaptive droop control is also investigated as an alternative solution. A droop-free distributed control is offered to handle voltage/frequency regulation and load sharing in AC systems. This solution does not require frequency measurement and thus features fast frequency regulation. Distributed economic dispatch is also studied, where a distributed protocol is designed that drives generation units to merge their incremental costs into a consensus and thus pushes the entire system to generate at minimum cost. Experimental verification and Hardware-in-the-Loop (HIL) simulations are used to study the efficacy of the proposed control protocols.
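To illustrate the incremental-cost-consensus idea mentioned above, the sketch below (Python, NumPy) has generators with quadratic costs exchange their incremental costs over a ring topology while a mismatch feedback term steers total generation toward the demand. The gains, topology, cost data, and the globally computed mismatch are illustrative simplifications, not the dissertation's exact protocol.

```python
# Minimal sketch: distributed economic dispatch by incremental-cost consensus.
# Generators with quadratic costs C_i(P) = a_i*P^2 + b_i*P exchange incremental
# costs lambda_i with neighbours; a mismatch term steers total output to demand.
import numpy as np

a = np.array([0.10, 0.12, 0.08, 0.15])     # quadratic cost coefficients
b = np.array([2.0, 1.8, 2.2, 1.9])         # linear cost coefficients
demand = 80.0

# Ring communication topology with doubly stochastic weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

lam = b.copy()                              # initial incremental-cost estimates
eps = 0.02                                  # mismatch feedback gain
for _ in range(2000):
    P = (lam - b) / (2 * a)                 # each unit's optimal output at lambda_i
    mismatch = demand - P.sum()             # computed globally here for brevity;
    lam = W @ lam + eps * mismatch / len(lam)  # a real protocol estimates it by consensus

P = (lam - b) / (2 * a)
print("lambda (should agree):", np.round(lam, 3))
print("dispatch:", np.round(P, 2), " total:", round(P.sum(), 2))
```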
What Is a Complex Innovation System?
Katz, J. Sylvan
2016-01-01
Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kxα, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system. In particular scale-invariant emergent properties indicative of their complex nature that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of sizes and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time and between field impact and sizes at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
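Since the argument above rests on identifying scale-invariant properties through the power law f(x) = kx^alpha, the sketch below (Python, NumPy) shows how a scaling exponent can be quantified with the standard continuous maximum-likelihood estimator (Clauset et al. style). The synthetic "impact" data stand in for citation counts and are an illustrative assumption.

```python
# Minimal sketch: estimating the scaling exponent alpha of a power-law tail,
# p(x) ~ x^(-alpha) for x >= xmin, with the continuous MLE.
import numpy as np

rng = np.random.default_rng(6)
alpha_true, xmin = 2.7, 1.0
# Inverse-transform sampling of a power-law (Pareto) tail.
impact = xmin * (1.0 - rng.random(50_000)) ** (-1.0 / (alpha_true - 1.0))

tail = impact[impact >= xmin]
alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
stderr = (alpha_hat - 1.0) / np.sqrt(len(tail))
print(f"estimated alpha = {alpha_hat:.3f} +/- {stderr:.3f} (true {alpha_true})")
```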
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems , Intelligent Systems...the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems . For both systems, step-wise simulation...MODEL CONTINUITY Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We propose to analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and morphological spectra (i.e. probability repartition functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This makes it possible to define two characteristic scales (duration and spatial extension) for the precipitating region embedded in the large meso-scale convective system (MCS). For tropical and equatorial regions (south of 12°N) the morphological spectra are close to a Levy-stable distribution (power-law decrease), which does not permit the definition of a characteristic scale (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCS can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Levy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Levy-stable behaviours of precipitating MCS is compared with the result of ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test the different parameterizations of deep convection in forecast models.
NASA Astrophysics Data System (ADS)
Cao, Xuesong; Jiang, Ling; Hu, Ruimin
2006-10-01
Currently, the applications of surveillance systems have become increasingly widespread. However, there are few surveillance platforms that can meet the requirements of large-scale, cross-regional, and flexible surveillance operations. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is built on an object-oriented middleware called the Internet Communications Engine (ICE). This middleware helps our platform integrate a wide range of surveillance resources and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe the design concepts of the system in detail and introduce the main traits of ICE.
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-01-01
Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications. PMID:26393596
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
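The paper's central caution can be reproduced numerically with a toy example: the Python sketch below thresholds a plain AR(1) signal (no self-organized criticality anywhere in it), measures "event sizes" as the areas of negative excursions, and fits a straight line on log-log axes. The apparent power-law slope this produces is the kind of spurious scaling the abstract warns about; a Kolmogorov-Smirnov comparison against fitted candidate distributions (e.g. with scipy.stats) would typically reject it. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(1)

# A plain AR(1) signal: correlated noise, no self-organized criticality.
n, phi = 500_000, 0.98
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# "Events": negative excursions below a threshold; size = area below the threshold.
thr = -2.0 * x.std()
below = x < thr
edges = np.flatnonzero(np.diff(below.astype(int)))   # alternating entry/exit points
starts, ends = edges[::2] + 1, edges[1::2] + 1
sizes = np.array([np.sum(thr - x[s:e]) for s, e in zip(starts, ends)])

# Log-log regression on a log-binned histogram looks deceptively straight.
bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), 25)
hist, bin_edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(bin_edges[:-1] * bin_edges[1:])
ok = hist > 0
slope, _ = np.polyfit(np.log10(centers[ok]), np.log10(hist[ok]), 1)
print(f"{sizes.size} events, apparent power-law slope ~ {slope:.2f}")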
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
NASDA's Advanced On-Line System (ADOLIS)
NASA Technical Reports Server (NTRS)
Yamamoto, Yoshikatsu; Hara, Hideo; Yamada, Shigeo; Hirata, Nobuyuki; Komatsu, Shigenori; Nishihata, Seiji; Oniyama, Akio
1993-01-01
Spacecraft operations, including ground system operations, are generally realized through large- or small-scale group work carried out by operators, engineers, managers, users and others, whose positions are in many cases geographically distributed. In face-to-face work environments, it is easy for them to understand each other. However, in distributed work environments that depend on communication media, using audio alone causes participants to become estranged from each other and to lose interest in, and continuity of, the work, which is an obstacle to the smooth operation of spacecraft. NASDA has developed an experimental model of a new real-time operation control system called 'ADOLIS' (ADvanced On-Line System), adapted to such a distributed environment, that uses a multi-media system handling character, figure, image, handwriting, video and audio information and can be accommodated to a wide range of operation systems, including spacecraft and ground systems. This paper describes the results of the development of the experimental model.
Power-law weighted networks from local attachments
NASA Astrophysics Data System (ADS)
Moriano, P.; Finke, J.
2012-07-01
This letter introduces a mechanism for constructing, through a process of distributed decision-making, substrates for the study of collective dynamics on extended power-law weighted networks with both a desired scaling exponent and a fixed clustering coefficient. The analytical results show that the connectivity distribution converges to the scaling behavior often found in social and engineering systems. To illustrate the approach of the proposed framework we generate network substrates that resemble steady state properties of the empirical citation distributions of i) publications indexed by the Institute for Scientific Information from 1981 to 1997; ii) patents granted by the U.S. Patent and Trademark Office from 1975 to 1999; and iii) opinions written by the Supreme Court and the cases they cite from 1754 to 2002.
Meng, Yuting; Ding, Shiming; Gong, Mengdan; Chen, Musong; Wang, Yan; Fan, Xianfang; Shi, Lei; Zhang, Chaosheng
2018-03-01
Sediments have a heterogeneous distribution of labile redox-sensitive elements due to a drastic downward transition from oxic to anoxic conditions as a result of organic matter degradation. Characterization of the heterogeneous nature of sediments is vital for understanding small-scale biogeochemical processes. However, there are limited reports on the related specialized methodology. In this study, the monthly distributions of labile phosphorus (P), a redox-sensitive limiting nutrient, were measured in the eutrophic Lake Taihu by Zr-oxide diffusive gradients in thin films (Zr-oxide DGT) on a two-dimensional (2D) submillimeter level. Geographical information system (GIS) techniques were used to visualize the labile P distribution at this micro-scale, showing that the DGT-labile P was low in winter and high in summer. Spatial analysis methods, including the semivariogram and Moran's I, were used to quantify the spatial variation of DGT-labile P. The distribution of DGT-labile P had clear submillimeter-scale spatial patterns with significant spatial autocorrelation throughout the year and displayed seasonal changes. High values of labile P with strong spatial variation were observed in summer, while low values of labile P with relatively uniform spatial patterns were detected in winter, demonstrating the strong influence of temperature on the mobility and spatial distribution of P in sediment profiles.
Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong
Intelligent distributed electrical energy systems (IDEES) are characterized by vast numbers of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power system management approach no longer fits their operation. Thus, it is believed that blockchain technology is one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES inherently has both social and technical characteristics; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can self-organize into a complex, autonomous and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, and it is expected that such intelligence can bring stable, reliable and efficient electrical energy production, transmission and consumption.
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high-performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle dynamic increases in system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experimental results and conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
The mathematical relationship between Zipf’s law and the hierarchical scaling law
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2012-06-01
Empirical studies of city-size distributions show that Zipf's law and the hierarchical scaling law are linked in many ways. The rank-size scaling and hierarchical scaling seem to be two different sides of the same coin, but their relationship has never been revealed by strict mathematical proof. In this paper, Zipf's distribution of cities is abstracted as a q-sequence. Based on this sequence, a self-similar hierarchy consisting of many levels is defined and the numbers of cities in different levels form a geometric sequence. An exponential distribution of the average size of cities is derived from the hierarchy. Thus we have two exponential functions, from which follows a hierarchical scaling equation. The results can be statistically verified by simple mathematical experiments and observational data of cities. A theoretical foundation is then laid for the conversion from Zipf's law to the hierarchical scaling law, and the latter can show more information about city development than the former. Moreover, the self-similar hierarchy provides a new perspective for studying networks of cities as complex systems. A series of mathematical rules applied to cities such as the allometric growth law, the 2n principle and Pareto's law can be associated with one another by the hierarchical organization.
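A sketch of the kind of derivation described, in notation of my own choosing rather than the paper's: if the number of cities and their average size change geometrically across hierarchy levels m,
\[ N_m = N_1\, r_n^{\,m-1}, \qquad S_m = S_1\, r_s^{-(m-1)}, \qquad r_n, r_s > 1, \]
then eliminating m gives the hierarchical scaling law
\[ S_m = S_1 \left(\frac{N_m}{N_1}\right)^{-q}, \qquad q = \frac{\ln r_s}{\ln r_n}, \]
and, since the cumulative rank of the last city in level m is proportional to N_m for a geometric sequence, this is equivalent to a rank-size (Zipf-type) relation with exponent q; the classical 2^n rule corresponds to r_n = r_s = 2 and hence q = 1.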
Chaotic dynamics of large-scale double-diffusive convection in a porous medium
NASA Astrophysics Data System (ADS)
Kondo, Shutaro; Gotoda, Hiroshi; Miyano, Takaya; Tokuda, Isao T.
2018-02-01
We have studied chaotic dynamics of large-scale double-diffusive convection of a viscoelastic fluid in a porous medium from the viewpoint of dynamical systems theory. A fifth-order nonlinear dynamical system modeling the double-diffusive convection is theoretically obtained by incorporating the Darcy-Brinkman equation into transport equations through a physical dimensionless parameter representing porosity. We clearly show that the chaotic convective motion becomes much more complicated with increasing porosity. The degree of dynamic instability during chaotic convective motion is quantified by two important measures: the network entropy of the degree distribution in the horizontal visibility graph and the Kaplan-Yorke dimension in terms of Lyapunov exponents. We also present an interesting on-off intermittent phenomenon in the probability distribution of time intervals exhibiting nearly complete synchronization.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications of one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
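The Pareto modelling of request inter-arrival times mentioned above can be illustrated in a few lines of Python; the timestamps, scale choice and parameter values below are synthetic stand-ins, not the Spider traces.

import numpy as np

rng = np.random.default_rng(7)

# Synthetic request timestamps whose gaps follow a Pareto law (illustrative only).
shape_true, t_min = 1.6, 0.01                      # shape and minimum gap (seconds)
gaps = t_min * (1.0 - rng.random(50_000)) ** (-1.0 / shape_true)
timestamps = np.cumsum(gaps)

# Maximum-likelihood fit of the Pareto shape parameter to the observed gaps,
# taking the smallest observed gap as the scale parameter.
inter = np.diff(timestamps, prepend=0.0)
x_m = inter.min()
shape_hat = inter.size / np.log(inter / x_m).sum()
print(f"fitted Pareto shape ~ {shape_hat:.2f} at scale {x_m:.4f} s")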
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutant spreading, etc.). A state-of-the-art review did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, that are compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance end-user interactions.
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ∼ L^0.5 and is consistent with the scaling of the localization length ξ ∼ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
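In standard finite-size-scaling notation (mine, not the authors'), the reported relations amount to
\[ W(L) \sim L^{\zeta}, \qquad \xi(L) \sim L^{0.5}, \qquad \zeta \simeq 0.5, \]
with post-peak damage profiles from different lattice sizes L collapsing onto a single master curve of the form
\[ \langle \Delta p(z, L) \rangle = \Delta\bar{p}\, f\!\left(\frac{z - L/2}{\xi(L)}\right), \]
where z is the position along the loading direction; this collapse ansatz is a generic one consistent with the description above rather than the paper's exact formula.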
Scaling properties of a rice-pile model: inertia and friction effects.
Khfifi, M; Loulidi, M
2008-11-01
We present a rice-pile cellular automaton model that includes inertial and friction effects. This model is studied in one dimension, where the updating of metastable sites is done according to a stochastic dynamics governed by a probabilistic toppling parameter p that depends on the accumulated energy of moving grains. We investigate the scaling properties of the model using finite-size scaling analysis. The avalanche size, lifetime, and residence time distributions exhibit power-law behavior. Their corresponding critical exponents, τ, y, and y_r, respectively, are not universal; they vary continuously with the parameters of the system. The maximal value of the critical exponent τ that our model gives is very close to the experimental one, τ = 2.02 [Frette, Nature (London) 379, 49 (1996)], and the probability distribution of the residence time is in good agreement with the experimental results. We note that the critical behavior is observed only in a certain range of parameter values of the system, which correspond to low inertia and high friction.
Smart grid integration of small-scale trigeneration systems
NASA Astrophysics Data System (ADS)
Vacheva, Gergana; Kanchev, Hristiyan; Hinov, Nikolay
2017-12-01
This paper presents a study on the possibilities for implementing local heating, air-conditioning and electricity generation (trigeneration) as a distributed energy resource in the Smart Grid. By means of microturbine-based generators and absorption chillers, buildings are able to meet their electrical load curve partially or entirely, or even supply power to the grid, by following their daily heating and air-conditioning schedule. The principles of small-scale cooling, heating and power generation systems are presented first; then the thermal calculations of an example building are performed: the heat losses due to thermal conductivity and the estimated daily heating and air-conditioning load curves. By considering daily power consumption curves and weather data for several winter and summer days, the heating/air-conditioning schedule is estimated, along with the available electrical energy from a microturbine-based cogeneration system. Simulation results confirm the potential of using cogeneration and trigeneration systems for local distributed electricity generation and grid support during the daily peaks of power consumption.
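The transmission heat-loss calculation referred to is, in its standard steady-state form (symbols mine; the ventilation term is an added assumption rather than something stated above),
\[ \dot{Q}_{\mathrm{loss}} = \sum_i U_i A_i \,(T_{\mathrm{in}} - T_{\mathrm{out}}) + \rho\, c_p\, \dot{V}\,(T_{\mathrm{in}} - T_{\mathrm{out}}), \]
where U_i and A_i are the U-value and area of each envelope element and \dot{V} is the outdoor-air volume flow rate; summing this balance over the hours of a design day yields the kind of daily heating load curve used above.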
A Weibull distribution accrual failure detector for cloud computing
Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
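A minimal sketch of how a Weibull-based accrual detector can work is given below, in the spirit of the paper but not its exact design: heartbeats update a sliding window of inter-arrival times, a Weibull distribution is fitted to the window (here with scipy.stats.weibull_min), and the suspicion level is a phi-style negative log of the probability that the next heartbeat is still this late. The class name, window size and threshold choices are assumptions.

from collections import deque
import numpy as np
from scipy.stats import weibull_min

class WeibullAccrualDetector:
    """Sketch of an accrual failure detector with a Weibull model of heartbeat
    inter-arrival times (illustrative, not the paper's exact design)."""

    def __init__(self, window=100):
        self.samples = deque(maxlen=window)   # recent inter-arrival times
        self.last = None                      # time of last heartbeat

    def heartbeat(self, t):
        if self.last is not None:
            self.samples.append(t - self.last)
        self.last = t

    def suspicion(self, now):
        # phi-style suspicion: -log10 of the probability that the next heartbeat
        # is still this late under the fitted Weibull distribution.
        if self.last is None or len(self.samples) < 10:
            return 0.0
        shape, _, scale = weibull_min.fit(np.array(self.samples), floc=0)
        p_late = weibull_min.sf(now - self.last, shape, loc=0, scale=scale)
        return -np.log10(max(p_late, 1e-300))

# Tiny usage example with regular heartbeats every ~1 s.
rng = np.random.default_rng(3)
det = WeibullAccrualDetector()
t = 0.0
for _ in range(60):
    t += rng.normal(1.0, 0.05)
    det.heartbeat(t)
print("suspicion 1 s after last heartbeat:", round(det.suspicion(t + 1.0), 2))
print("suspicion 5 s after last heartbeat:", round(det.suspicion(t + 5.0), 2))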
Dynamical complexity changes during two forms of meditation
NASA Astrophysics Data System (ADS)
Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng
2011-06-01
Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series during specific traditional forms of Chinese Chi and Kundalini Yoga meditation techniques in healthy young adults. The results show that dynamical complexity decreases in meditation states for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of the sine function. The base-scale entropy method may be used on a wider range of physiologic signals.
Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)
Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline
2008-01-01
The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...
Kelly M. Andersen; Bridgett J. Naylor; Bryan A. Endress; Catherine G. Parks
2015-01-01
Questions: Mountain systems have high abiotic heterogeneity over local spatial scales, offering natural experiments for examining plant species invasions. We ask whether functional groupings explain non-native species spread into native vegetation and up elevation gradients. We examine whether non-native species distribution patterns are related to environmental...
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship at the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
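For reference, the two routes compared above can be written in their standard forms (notation mine). The running-integral route uses the radial distribution function g_{ij}(r) from a large closed-ensemble simulation,
\[ G_{ij} = 4\pi \int_0^{R} \left[ g_{ij}(r) - 1 \right] r^2\, dr, \]
truncated at some large R, while the fluctuation route uses particle-number statistics in small open sub-volumes V embedded in a reservoir,
\[ G_{ij} = V\, \frac{\langle N_i N_j\rangle - \langle N_i\rangle\langle N_j\rangle}{\langle N_i\rangle\langle N_j\rangle} - \frac{V\,\delta_{ij}}{\langle N_j\rangle}, \]
with a finite-size-scaling step extrapolating the sub-volume estimates to the thermodynamic limit.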
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burman, K.; Olis, D.; Gevorgian, V.
2011-09-01
This report focuses on the economic and technical feasibility of integrating renewable energy technologies into the U.S. Virgin Islands transmission and distribution systems. The report includes three main areas of analysis: 1) the economics of deploying utility-scale renewable energy technologies on St. Thomas/St. John and St. Croix; 2) potential sites for installing roof- and ground-mount PV systems and wind turbines and the impact renewable generation will have on the electrical subtransmission and distribution infrastructure; and 3) the feasibility of a 100- to 200-megawatt power interconnection of the Puerto Rico Electric Power Authority (PREPA), Virgin Islands Water and Power Authority (WAPA), and British Virgin Islands (BVI) grids via a submarine cable system.
Scaling Theory of Entanglement at the Many-Body Localization Transition.
Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C
2017-09-15
We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.
Ubiquity of Benford's law and emergence of the reciprocal distribution
Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.
2016-04-07
In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
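For concreteness, the reciprocal distribution and the leading-digit (Benford) law it implies can be written as follows (standard results; notation mine). On an interval [a, b] with 0 < a < b,
\[ p(x) = \frac{1}{x\,\ln(b/a)}, \qquad a \le x \le b, \]
which keeps its form under a change of scale x \to \lambda x (only the endpoints move); restricting to a single decade and reading off the leading digit d gives Benford's law,
\[ P(d) = \log_{10}\!\left(1 + \frac{1}{d}\right), \qquad d = 1, \ldots, 9. \]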
Estimating Pore Properties from NMR Relaxation Time Measurements in Heterogeneous Media
NASA Astrophysics Data System (ADS)
Grunewald, E.; Knight, R.
2008-12-01
The link between pore geometry and the nuclear magnetic resonance (NMR) relaxation time T2 is well-established for simple systems but is poorly understood for complex media with heterogeneous pores. Conventional interpretation of NMR relaxation data employs a model of isolated pores in which each hydrogen proton samples only one pore type, and the T2-distribution is directly scaled to estimate a pore-size distribution. During an actual NMR measurement, however, each proton diffuses through a finite volume of the pore network, and so may sample multiple pore types encountered within this diffusion cell. For cases in which heterogeneous pores are strongly coupled by diffusion, the meaning of the T2-distribution is not well understood and further research is required to determine how such measurements should be interpreted. In this study we directly investigate the implications of pore coupling in two groups of laboratory NMR experiments. We conduct two suites of experiments, in which samples are synthesized to exhibit a range of pore coupling strengths using two independent approaches: (a) varying the scale of the diffusion cell and (b) varying the scale over which heterogeneous pores are encountered. In the first set of experiments, we vary the scale of the diffusion cell in silica gels which have a bimodal pore-size distribution comprised of intragranular micropores and much larger intergranular pores. The untreated gel exhibits strong pore coupling with a single broad peak observed in the T2-distribution. By treating the gel with varied amounts of paramagnetic iron surface coatings, we decrease the surface relaxation time, T2S, and effectively decrease both the size of the diffusion cell and the degree of pore coupling. As more iron is coated onto the grain surfaces, we observe a separation of the broad T2-distribution into two peaks that more accurately represent the true bimodal pore-size distribution. In the second set of experiments, we vary the scale over which heterogeneous pores are encountered in bimodal grain packs of pure quartz (long T2S) and hematite (short T2S). The scale of heterogeneity is varied by changing the mean grain size and relative mineral concentrations. When the mean grain size is small and the mineral concentrations are comparable, the T2-distribution is roughly monomodal, indicating strong pore coupling. As the grain size is increased or the mineral concentrations are made increasingly uneven, the T2-distribution develops a bimodal character, more representative of the actual distribution of pore types. Numerical simulations of measurements in both experiment groups allow us to more closely investigate how the relaxing magnetization evolves in both time and space. Collectively, these experiments provide important insights into the effects of pore coupling on NMR measurements in heterogeneous systems and contribute to our ultimate goal of improving the interpretation of these data in complex near-surface sediments.
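The conventional "isolated pore" interpretation discussed above rests on the standard fast-diffusion relation between T2 and pore geometry (notation mine),
\[ \frac{1}{T_2} = \frac{1}{T_{2,\mathrm{bulk}}} + \rho_2\, \frac{S}{V}, \]
where \rho_2 is the surface relaxivity and S/V the pore surface-to-volume ratio; each T2 component maps cleanly to a pore size only when protons remain within a single pore type for the duration of the measurement, i.e. when diffusive coupling between pore types is weak.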
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. The recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so that baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm, with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
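For reference, the topographic index that the TRG algorithm uses to distribute storage is the standard TOPMODEL quantity (notation mine),
\[ \lambda_i = \ln\!\left(\frac{a_i}{\tan\beta_i}\right), \]
where a_i is the upslope contributing area per unit contour length draining through cell i and \tan\beta_i is the local slope; as stated above, the maximum storage capacity of a grid cell is then taken proportional to the range of \lambda within it and scaled by a single calibration parameter.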
Kim, Eun Jung; Herrera, Jose E
2010-08-15
Destabilization of the corrosion scale present in lead pipes used in drinking water distribution systems is currently considered a major problem for municipalities serviced in part by lead pipes. Although several lead corrosion control strategies have been deployed with success, a clear understanding of the chemistry of the corrosion products present in the scale is needed for effective lead control. This contribution focuses on a comprehensive characterization of the layers present in the corrosion scale formed on the inner surfaces of lead pipes used in the drinking water distribution system of the City of London, ON, Canada. Solid corrosion products were characterized using X-ray diffraction (XRD), Raman spectroscopy, Fourier transform infrared (FTIR) spectroscopy, and X-ray photoelectron spectroscopy (XPS). Toxic elements accumulated in the corrosion scale were also identified using inductively coupled plasma (ICP) spectrometry after acid digestion. Based on the XRD results, hydrocerussite was identified as the major crystalline lead corrosion phase in most of the pipes sampled, while cerussite was observed as the main crystalline component in only a few cases. Lead oxides including PbO(2) and Pb(3)O(4) were also observed in the inner layers of the corrosion scale. The presence of these highly oxidized lead species is rationalized in terms of the lead(II) carbonate phase transforming into lead(IV) oxide through an intermediate Pb(3)O(4) (2Pb(II)O·Pb(IV)O(2)) phase. In addition to lead corrosion products, an amorphous aluminosilicate phase was also identified in the corrosion scale; its concentration is particularly high in the outer surface layers. Accumulation of toxic contaminants such as As, V, Sb, Cu, and Cr was observed in the corrosion scales, together with a strong correlation between arsenic accumulation and aluminum concentration.
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Because the formalism is based on the Gaussian assumption for the probability density functions, it is hard to bring extra constraints into it. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is simply distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled in Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
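Written out for a non-negativity constraint x ≥ 0 on a variable with Gaussian statistics N(\mu, \sigma^2), the density described above is (notation mine)
\[ p(x) = \Phi\!\left(-\frac{\mu}{\sigma}\right)\delta(x) + \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]\mathbf{1}_{\{x>0\}}, \]
where \Phi is the standard normal cumulative distribution function: all Gaussian mass from the forbidden region is collected into a delta function at the truncation point, so the probability of, e.g., exactly zero sea-ice concentration is finite rather than zero.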
NASA Astrophysics Data System (ADS)
Newman, J. P.; Dandy, G. C.; Maier, H. R.
2014-10-01
In many regions, conventional water supplies are unable to meet projected consumer demand. Consequently, interest has arisen in integrated urban water systems, which involve the reclamation or harvesting of alternative, localized water sources. However, this makes the planning and design of water infrastructure more difficult, as multiple objectives need to be considered, water sources need to be selected from a number of alternatives, and end uses of these sources need to be specified. In addition, the scale at which each treatment, collection, and distribution network should operate needs to be investigated. In order to deal with this complexity, a framework for planning and designing water infrastructure taking into account integrated urban water management principles is presented in this paper and applied to a rural greenfield development. Various options for water supply, and the scales at which they operate, were investigated in order to determine the life-cycle trade-offs between water savings, cost, and GHG emissions, as calculated from models calibrated using Australian data. The decision space includes the choice of water sources, storage tanks, treatment facilities, and pipes for water conveyance. For each water system analyzed, infrastructure components were sized using multiobjective genetic algorithms. The results indicate that local water sources are competitive in terms of cost and GHG emissions, and can reduce demand on the potable system by as much as 54%. Economies of scale in treatment dominated the diseconomies of scale in the collection and distribution of water. Therefore, water systems that connect large clusters of households tend to be more cost efficient and have lower GHG emissions. In addition, water systems that recycle wastewater tended to perform better than systems that captured roof runoff. Through these results, the framework was shown to be effective at identifying near-optimal trade-offs between competing objectives, thereby enabling informed decisions to be made when planning water systems for greenfield developments.
On-line detection of Escherichia coli intrusion in a pilot-scale drinking water distribution system.
Ikonen, Jenni; Pitkänen, Tarja; Kosse, Pascal; Ciszek, Robert; Kolehmainen, Mikko; Miettinen, Ilkka T
2017-08-01
Improvements in microbial drinking water quality monitoring are needed for better control of drinking water distribution systems and for public health protection. Conventional water quality monitoring programmes are not always able to detect microbial contamination of drinking water. In the drinking water production chain, in addition to the vulnerability of source waters, the distribution networks are prone to contamination. In this study, a pilot-scale drinking-water distribution network with an on-line monitoring system was utilized for detecting bacterial intrusion. During the experimental Escherichia coli intrusions, the contaminant was measured by applying a set of on-line sensors for electric conductivity (EC), pH, temperature (T), turbidity, UV-absorbance at 254 nm (UVAS SC), and a device for particle counting. The monitored parameters were compared with the measured E. coli counts using integral calculations of the detected peaks. The EC measurement gave the strongest signal relative to the measured baseline during the E. coli intrusion. Integral calculations showed that peaks in the EC, pH, T, turbidity and UVAS SC data were detected at the times predicted. However, the detected pH and temperature peaks were barely above the measured baseline and could easily be confused with background noise. The results indicate that on-line monitoring can be utilized for the rapid detection of microbial contaminants in a drinking water distribution system, although the peak interpretation has to be performed carefully so that peaks are not confused with normal variations in the measurement data.
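A minimal sketch of the kind of baseline-corrected peak-integral calculation described above is given below; the detection threshold, units and signal are illustrative assumptions, not the study's actual processing.

import numpy as np

def peak_integral(t, signal, baseline, k_sigma=3.0):
    # Baseline-corrected area of a sensor excursion (sketch; choices are mine):
    # samples more than k_sigma baseline standard deviations above the baseline
    # mean are treated as part of the peak and integrated trapezoidally.
    excess = signal - baseline.mean()
    y = np.where(excess > k_sigma * baseline.std(), excess, 0.0)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t)))

# Illustrative example: an EC time series with a Gaussian-shaped intrusion peak.
t = np.linspace(0.0, 120.0, 1201)                              # minutes
baseline = 0.35 + 0.002 * np.random.default_rng(5).standard_normal(400)
ec = 0.35 + 0.002 * np.random.default_rng(6).standard_normal(t.size)
ec += 0.08 * np.exp(-0.5 * ((t - 60.0) / 5.0) ** 2)            # intrusion at t = 60 min
print(f"EC peak integral ~ {peak_integral(t, ec, baseline):.2f} (signal units x min)")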
NASA Astrophysics Data System (ADS)
Tick, G. R.; Ghosh, J.; Greenberg, R. R.; Akyol, N. H.
2015-12-01
A series of pore-scale experiments were conducted to understand the interfacial processes contributing to the removal of crude oil from various porous media during surfactant-induced remediation. Effects of physical heterogeneity (i.e. media uniformity) and carbonate soil content on oil recovery and distribution were evaluated through pore-scale quantification techniques. Additionally, experiments were conducted to evaluate the impacts of tetrachloroethene (PCE) content on crude oil distribution and recovery under these same conditions. Synchrotron X-ray microtomography (SXM) was used to obtain high-resolution images of the two-fluid-phase oil/water system and to quantify temporal changes in oil blob distribution, blob morphology, and blob surface area before and after sequential surfactant flooding events. The reduction of interfacial tension, in conjunction with a sufficient increase in viscous forces as a result of surfactant flushing, was likely responsible for the mobilization and recovery of lighter fractions of crude oil. The corresponding increases in viscous forces were insufficient to initiate and maintain the displacement of the heavy crude oil in the more homogeneous porous media systems during surfactant flushing. Interestingly, higher relative recoveries of heavy oil fractions were observed within more heterogeneous porous media, indicating that wettability may be responsible for controlling mobilization in these systems. Compared to the "pure" crude oil experiments, preliminary results show that crude oil with PCE produced variability in oil distribution and recovery before and after each surfactant-flooding event. Such effects were likely influenced by viscosity and interfacial tension modifications associated with the crude-oil/solvent mixed systems.
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
Design and Realization of Online Monitoring System of Distributed New Energy and Renewable Energy
NASA Astrophysics Data System (ADS)
Tang, Yanfen; Zhou, Tao; Li, Mengwen; Zheng, Guotai; Li, Hao
2018-01-01
To address the difficulty of centralized monitoring and management of current distributed new energy and renewable energy generation projects, which stems from their great variety, different communication protocols and large differences in scale, this paper designs an online monitoring system for new energy and renewable energy characterized by distributed deployment, tailorable functions, extendible applications and fault self-healing. The system is designed on the basis of the international general standard for the grid information data model, formulates a unified data acquisition and transmission standard for different types of new energy and renewable energy generation projects, and can realize unified data acquisition and real-time monitoring of new energy and renewable energy generation projects, such as solar energy, wind power, biomass energy, etc., within its jurisdiction. The system has been deployed in Beijing. At present, 576 projects are connected to the system, with good results, and its stability and reliability have been validated.
Beyond Word Frequency: Bursts, Lulls, and Scaling in the Temporal Distributions of Words
Altmann, Eduardo G.; Pierrehumbert, Janet B.; Motter, Adilson E.
2009-01-01
Background Zipf's discovery that word frequency distributions obey a power law established parallels between biological and physical processes, and language, laying the groundwork for a complex systems perspective on human communication. More recent research has also identified scaling regularities in the dynamics underlying the successive occurrences of events, suggesting the possibility of similar findings for language as well. Methodology/Principal Findings By considering frequent words in USENET discussion groups and in disparate databases where the language has different levels of formality, here we show that the distributions of distances between successive occurrences of the same word display bursty deviations from a Poisson process and are well characterized by a stretched exponential (Weibull) scaling. The extent of this deviation depends strongly on semantic type – a measure of the logicality of each word – and less strongly on frequency. We develop a generative model of this behavior that fully determines the dynamics of word usage. Conclusions/Significance Recurrence patterns of words are well described by a stretched exponential distribution of recurrence times, an empirical scaling that cannot be anticipated from Zipf's law. Because the use of words provides a uniquely precise and powerful lens on human thought and activity, our findings also have implications for other overt manifestations of collective human dynamics. PMID:19907645
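The stretched-exponential (Weibull) scaling referred to above can be written explicitly (notation mine): the probability that the gap \tau between successive occurrences of a word exceeds t is
\[ P(\tau > t) = \exp\!\left[-\left(\frac{t}{\tau_0}\right)^{\beta}\right], \qquad 0 < \beta \le 1, \]
where \beta = 1 recovers the memoryless Poisson case and \beta < 1 corresponds to the bursty deviations reported, with burstiness growing as \beta decreases.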
Collision Based Blood Cell Distribution of the Blood Flow
NASA Astrophysics Data System (ADS)
Cinar, Yildirim
2003-11-01
Introduction: The goal of this study is to determine the energy-transfer process between colliding masses and to apply the results to the distribution of cells, velocity and kinetic energy in arterial blood flow. Methods: Mathematical methods and models were used to explain the collision between two moving systems and the distribution of linear momentum, rectilinear velocity, and kinetic energy in a collision. Results: As the mass of the second system decreases, the velocity and momentum of the constant-mass first system decrease, and the linearly decreasing mass of the second system captures a larger share of the kinetic energy and rectilinear velocity of the collision system on a logarithmic scale. Discussion: The concentration of blood cells at the center of blood flow in an artery is not explained by the Bernoulli principle alone; the kinetic energy and velocity distribution arising from collisions between the large mass of the arterial wall and the small mass of blood cells must be considered as well.
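For orientation, the standard one-dimensional elastic-collision relations behind this kind of energy-transfer argument are (notation mine; not necessarily the author's exact model): for masses m_1 and m_2 with incoming velocities v_1 and v_2,
\[ v_1' = \frac{(m_1 - m_2)v_1 + 2 m_2 v_2}{m_1 + m_2}, \qquad v_2' = \frac{(m_2 - m_1)v_2 + 2 m_1 v_1}{m_1 + m_2}, \]
and for a target initially at rest the fraction of kinetic energy transferred to m_2 is
\[ \frac{\Delta E_2}{E_1} = \frac{4\, m_1 m_2}{(m_1 + m_2)^2}, \]
which makes explicit how the shares of velocity and kinetic energy received by the lighter partner are controlled by the mass ratio.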
The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud
NASA Astrophysics Data System (ADS)
Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.
2012-08-01
The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.
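For orientation, the sketch below evaluates the thermal Jeans mass that underlies the selection criterion; it uses a common textbook form and typical dense-cloud values, not the authors' exact thermal-plus-turbulent criterion or parameters.

```python
import math

# Physical constants (SI)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.381e-23      # Boltzmann constant, J K^-1
M_H = 1.673e-27      # hydrogen mass, kg
M_SUN = 1.989e30     # solar mass, kg

def thermal_jeans_mass(T, n, mu=2.33):
    """Thermal Jeans mass (kg) for gas at temperature T (K), number density
    n (m^-3) and mean molecular weight mu (textbook form, for illustration)."""
    rho = mu * M_H * n
    return (5.0 * K_B * T / (G * mu * M_H)) ** 1.5 * (3.0 / (4.0 * math.pi * rho)) ** 0.5

# Typical dense-cloud conditions: T = 10 K, n = 1e4 cm^-3 = 1e10 m^-3
print(thermal_jeans_mass(10.0, 1e10) / M_SUN, "Msun")   # a few solar masses
```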
High-Penetration PV Integration Handbook for Distribution Engineers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seguin, Rich; Woyak, Jeremy; Costyk, David
2016-01-01
This handbook has been developed as part of a five-year research project which began in 2010. The National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed together to analyze the impacts of high-penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to leverage the experience that SCE and the project team would gain during the significant installation of 500 MW of commercial-scale PV systems (1-5 MW typically) starting in 2010 and completing in 2015 within SCE's service territory through a program approved by the California Public Utility Commission (CPUC).
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Deangelis; Rich Depuy; Debashis Dey
2004-09-30
This report summarizes the work performed by Hybrid Power Generation Systems, LLC (HPGS) during the April to October 2004 reporting period in Task 2.3 (SOFC Scaleup for Hybrid and Fuel Cell Systems) under Cooperative Agreement DE-FC26-01NT40779 for the U. S. Department of Energy, National Energy Technology Laboratory (DOE/NETL), entitled "Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation". This study analyzes the performance and economics of power generation systems for central power generation application based on Solid Oxide Fuel Cell (SOFC) technology and fueled by natural gas. The main objective of this task is to develop credible scale-up strategies for large solid oxide fuel cell-gas turbine systems. System concepts that integrate a SOFC with a gas turbine were developed and analyzed for plant sizes in excess of 20 MW. A 25 MW plant configuration was selected with projected system efficiency of over 65% and a factory cost of under $400/kW. The plant design is modular and can be scaled to both higher and lower plant power ratings. Technology gaps and required engineering development efforts were identified and evaluated.
Sarin, P; Snoeyink, V L; Bebee, J; Jim, K K; Beckett, M A; Kriven, W M; Clement, J A
2004-03-01
Iron release from corroded iron pipes is the principal cause of "colored water" problems in drinking water distribution systems. The corrosion scales present in corroded iron pipes restrict the flow of water, and can also deteriorate the water quality. This research was focused on understanding the effect of dissolved oxygen (DO), a key water quality parameter, on iron release from old corroded iron pipes. Corrosion scales from 70-year-old galvanized iron pipe were characterized as porous deposits of Fe(III) phases (goethite (alpha-FeOOH), magnetite (Fe(3)O(4)), and maghemite (gamma-Fe(2)O(3))) with a shell-like, dense layer near the top of the scales. High concentrations of readily soluble Fe(II) were present inside the scales. Iron release from these corroded pipes was investigated for both flowing and stagnant water conditions. Our studies confirmed that iron was released to bulk water primarily in the ferrous form. When DO was present in water, higher amounts of iron release were observed during stagnation than under flowing water conditions. Additionally, it was found that increasing the DO concentration in water during stagnation reduced the amount of iron release. Our studies substantiate that increasing the concentration of oxidants in water and maintaining flowing conditions can reduce the amount of iron release from corroded iron pipes. Based on our studies, it is proposed that iron is released from corroded iron pipes by dissolution of corrosion scales, and that the microstructure and composition of corrosion scales are important parameters that can influence the amount of iron released from such systems.
Scaling and intermittency of brain events as a manifestation of consciousness
NASA Astrophysics Data System (ADS)
Paradisi, P.; Allegrini, P.; Gemignani, A.; Laurino, M.; Menicucci, D.; Piarulli, A.
2013-01-01
We discuss the critical brain hypothesis and its relationship with intermittent renewal processes displaying power-law decay in the distribution of waiting times between two consecutive renewal events. In particular, studies on complex systems in a "critical" condition show that macroscopic variables, integrating the activities of many individual functional units, undergo fluctuations with an intermittent serial structure characterized by avalanches with inverse-power-law (scale-free) distribution densities of sizes and inter-event times. This condition, which is denoted as "fractal intermittency", was found in the electroencephalograms of subjects observed during a resting state wake condition. It remained unsolved whether fractal intermittency correlates with the stream of consciousness or with a non-task-driven default mode activity, also present in non-conscious states, like deep sleep. After reviewing a method of scaling analysis of intermittent systems based on event-driven random walks, we show that during deep sleep fractal intermittency breaks down, and is re-established during REM (Rapid Eye Movement) sleep, with essentially the same anomalous scaling as the pre-sleep wake condition. From the comparison of the pre-sleep wake, deep sleep and REM conditions we argue that the scaling features of intermittent brain events are related to the level of consciousness and, consequently, could be exploited as a possible indicator of consciousness in clinical applications.
Heterogeneity and scaling land-atmospheric water and energy fluxes in climate systems
NASA Technical Reports Server (NTRS)
Wood, Eric F.
1993-01-01
The effects of small-scale heterogeneity in land surface characteristics on the large-scale fluxes of water and energy in the land-atmosphere system have become a central focus of many of the climatology research experiments. The acquisition of high resolution land surface data through remote sensing and intensive land-climatology field experiments (like HAPEX and FIFE) has provided data to investigate the interactions between microscale land-atmosphere interactions and macroscale models. One essential research question is how to account for the small-scale heterogeneities and whether 'effective' parameters can be used in the macroscale models. To address this question of scaling, three modeling experiments were performed and are reviewed in the paper. The first is concerned with the aggregation of parameters and inputs for a terrestrial water and energy balance model. The second experiment analyzed the scaling behavior of hydrologic responses during rain events and between rain events. The third experiment compared the hydrologic responses from distributed models with a lumped model that uses spatially constant inputs and parameters. The results show that the patterns of small-scale variations can be represented statistically if the scale is larger than a representative elementary area scale, which appears to be about 2 - 3 times the correlation length of the process. For natural catchments this appears to be about 1 - 2 sq km. The results concerning distributed versus lumped representations are more complicated. For conditions when the processes are nonlinear, lumping results in biases; otherwise a one-dimensional model based on 'equivalent' parameters provides quite good results. Further research is needed to fully understand these conditions.
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by a conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory. The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does it affect the results from the stochastic model? Is the memory provided by the finite lifetime of individual clouds of importance for the ensemble statistics? We also test for the minimal information, given as input to the stochastic model, that is able to reproduce the ensemble mean statistics and the variability in a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
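A minimal sketch of the compound random process described above: a Poisson number of clouds, each with a mass flux drawn from a generalized (here Weibull) per-cloud distribution. The parameter values are illustrative rather than the LES-fitted ones, and the example only demonstrates the scale-adaptivity of the spread.

```python
import math
import numpy as np

rng = np.random.default_rng(42)

def total_mass_flux(mean_n_clouds, mean_flux=1.0, shape=0.7, n_draws=5000):
    """Grid-box total cloud-base mass flux as a compound random process:
    a Poisson number of clouds, each with a Weibull-distributed mass flux."""
    scale = mean_flux / math.gamma(1.0 + 1.0 / shape)  # fixes the per-cloud mean
    totals = np.empty(n_draws)
    for i in range(n_draws):
        n = rng.poisson(mean_n_clouds)
        totals[i] = scale * rng.weibull(shape, size=n).sum()
    return totals

# Fewer clouds per grid box (finer grid) -> broader relative spread of the
# sub-grid state: the compound distribution is scale-adaptive.
for n_clouds in (400, 25):
    m = total_mass_flux(n_clouds)
    print(f"<N> = {n_clouds:3d}   relative std = {m.std() / m.mean():.3f}")
```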
A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
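The block-striping idea behind such a distributed-parallel storage system can be sketched as follows; the server names, block size, and round-robin placement are assumptions for illustration, not the MAGIC testbed implementation.

```python
# Minimal sketch of logical-block striping across parallel disk servers.
BLOCK_SIZE = 64 * 1024
SERVERS = ["server-a", "server-b", "server-c", "server-d"]

def locate_block(logical_block: int) -> tuple[str, int]:
    """Map a logical block number to (server, local block index), round-robin."""
    return SERVERS[logical_block % len(SERVERS)], logical_block // len(SERVERS)

def plan_read(offset: int, length: int):
    """Per-server block requests needed to satisfy a byte-range read;
    requests to different servers can be issued in parallel."""
    first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
    plan = {}
    for blk in range(first, last + 1):
        server, local = locate_block(blk)
        plan.setdefault(server, []).append(local)
    return plan

print(plan_read(offset=0, length=5 * BLOCK_SIZE))
```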
Research on distributed virtual reality system in electronic commerce
NASA Astrophysics Data System (ADS)
Xue, Qiang; Wang, Jiening; Sun, Jizhou
2004-03-01
In this paper, Distributed Virtual Reality (DVR) technology applied in Electronic Commerce (EC) is discussed. DVR provides a new means for human beings to recognize, analyze and resolve large-scale, complex problems, which has driven its rapid development in EC fields. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that provides the necessary cooperation and communication services and avoids repeatedly developing basic modules. Finally, the paper gives a platform structure of the EC-DVR system.
Hughes, Joseph D.; Sifuentes, Dorothy F.; White, Jeremy T.
2016-03-15
Model accuracy and use are limited by uncertainty in the physical properties and boundary conditions of the system, uncertainty in historical and future conditions, and generalizations made in the mathematical relationships used to describe the physical processes of groundwater flow and transport. Because of these limitations, model results should be considered in relative rather than absolute terms. Nonetheless, model results do provide useful information on the relative scale of response of the system to changes in pumping distribution, sea-level rise, and mitigation activities.
Distributed Energy Systems: Security Implications of the Grid of the Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamber, Kevin L.; Kelic, Andjelka; Taylor, Robert A.
2017-01-01
Distributed Energy Resources (DER) are being added to the nation's electric grid, and as penetration of these resources increases, they have the potential to displace or offset large-scale, capital-intensive, centralized generation. Integration of DER into operation of the traditional electric grid requires automated operational control and communication of DER elements, from system measurement to control hardware and software, in conjunction with a utility's existing automated and human-directed control of other portions of the system. Implementation of DER technologies suggests a number of gaps from both a security and a policy perspective.
Applications of physics to economics and finance: Money, income, wealth, and the stock market
NASA Astrophysics Data System (ADS)
Dragulescu, Adrian Antoniu
Several problems arising in Economics and Finance are analyzed using concepts and quantitative methods from Physics. The dissertation is organized as follows: In the first chapter it is argued that in a closed economic system, money is conserved. Thus, by analogy with energy, the equilibrium probability distribution of money must follow the exponential Boltzmann-Gibbs law characterized by an effective temperature equal to the average amount of money per economic agent. The emergence of Boltzmann-Gibbs distribution is demonstrated through computer simulations of economic models. A thermal machine which extracts a monetary profit can be constructed between two economic systems with different temperatures. The role of debt and models with broken time-reversal symmetry for which the Boltzmann-Gibbs law does not hold, are discussed. In the second chapter, using data from several sources, it is found that the distribution of income is described for the great majority of population by an exponential distribution, whereas the high-end tail follows a power law. From the individual income distribution, the probability distribution of income for families with two earners is derived and it is shown that it also agrees well with the data. Data on wealth is presented and it is found that the distribution of wealth has a structure similar to the distribution of income. The Lorenz curve and Gini coefficient were calculated and are shown to be in good agreement with both income and wealth data sets. In the third chapter, the stock-market fluctuations at different time scales are investigated. A model where stock-price dynamics is governed by a geometrical (multiplicative) Brownian motion with stochastic variance is proposed. The corresponding Fokker-Planck equation can be solved exactly. Integrating out the variance, an analytic formula for the time-dependent probability distribution of stock price changes (returns) is found. The formula is in excellent agreement with the Dow-Jones index for the time lags from 1 to 250 trading days. For time lags longer than the relaxation time of variance, the probability distribution can be expressed in a scaling form using a Bessel function. The Dow-Jones data follow the scaling function for seven orders of magnitude.
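The first-chapter result can be reproduced qualitatively with one of the simple conserved-exchange models used in this literature; the sketch below is a generic pairwise-exchange simulation with debt forbidden, not necessarily the exact model variants analyzed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def exchange_model(n_agents=5000, steps=2_000_000, m0=100.0, dm=10.0):
    """Closed economy of pairwise exchanges: each step a randomly chosen agent
    pays a fixed amount dm to another, unless that would make its money negative.
    Total money is conserved, so the average <m> stays equal to m0."""
    money = np.full(n_agents, m0)
    for _ in range(steps):
        i, j = rng.integers(n_agents, size=2)
        if i != j and money[i] >= dm:
            money[i] -= dm
            money[j] += dm
    return money

m = exchange_model()
# For the exponential Boltzmann-Gibbs law with temperature <m>, the fraction of
# agents above 2<m> and 4<m> should be close to exp(-2) and exp(-4).
print((m > 200.0).mean(), np.exp(-2.0))
print((m > 400.0).mean(), np.exp(-4.0))
```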
Avalanches in Strained Amorphous Solids: Does Inertia Destroy Critical Behavior?
NASA Astrophysics Data System (ADS)
Salerno, K. Michael; Maloney, Craig E.; Robbins, Mark O.
2012-09-01
Simulations are used to determine the effect of inertia on athermal shear of amorphous two-dimensional solids. In the quasistatic limit, shear occurs through a series of rapid avalanches. The distribution of avalanches is analyzed using finite-size scaling with thousands to millions of disks. Inertia takes the system to a new underdamped universality class rather than driving the system away from criticality as previously thought. Scaling exponents are determined for the underdamped and overdamped limits and a critical damping that separates the two regimes. Systems are in the overdamped universality class even when most vibrational modes are underdamped.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.
With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is virtually an untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are yielded using a large-scale integrated transmission system model coupled with multiple distribution systems.
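A toy version of the two-layer idea, with an invented proportional control law and made-up capacities (not the FNCS/Arion implementation described above): the coordination layer splits a system-level target across aggregations, and each device-layer controller tracks its share.

```python
def coordination_layer(system_target_kw, capacities_kw):
    """Split a system-level demand-response target across aggregations in
    proportion to their controllable capacity."""
    total = sum(capacities_kw)
    return [system_target_kw * c / total for c in capacities_kw]

def device_layer(profile_kw, current_kw, gain=0.5):
    """Simple proportional tracking of the predetermined power profile by one
    aggregation of devices (gain is an arbitrary illustration value)."""
    return current_kw + gain * (profile_kw - current_kw)

targets = coordination_layer(900.0, capacities_kw=[500.0, 300.0, 200.0])
state = [0.0, 0.0, 0.0]
for _ in range(10):                      # a few control intervals
    state = [device_layer(t, s) for t, s in zip(targets, state)]
print([round(s, 1) for s in state])      # approaches [450.0, 270.0, 180.0]
```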
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
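The effect of the communication interval can be illustrated with a toy co-simulation of two coupled subsystems that exchange interface variables only at fixed intervals; the subsystem dynamics below are invented stand-ins for the MATLAB/Simulink, EASY5, and ACSL component models.

```python
def simulate(comm_interval, t_end=2.0, dt=1e-3):
    """Two coupled first-order subsystems integrated separately; interface
    variables are exchanged only every comm_interval seconds."""
    x, y = 1.0, 0.0            # states of subsystem A and subsystem B
    x_seen, y_seen = x, y      # interface values, frozen between exchanges
    t, next_comm = 0.0, comm_interval
    while t < t_end:
        x += dt * (-2.0 * x + y_seen)   # A integrates with the last y it received
        y += dt * (-1.0 * y + x_seen)   # B integrates with the last x it received
        t += dt
        if t >= next_comm:              # communication point: exchange variables
            x_seen, y_seen = x, y
            next_comm += comm_interval
    return x, y

# Shorter communication intervals track the fully coupled solution more closely,
# at the cost of more frequent data exchange between the subsystem simulations.
for h in (0.2, 0.05, 0.01):
    print(h, simulate(comm_interval=h))
```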
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or communication and introduces deviations into data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
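The retrodiction idea can be sketched with a linear ARX model fitted by least squares standing in for the NARX network; the signal, model order, and noise level below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Re-estimate a measurement that arrived out of sequence from neighbouring
# samples and an exogenous input (here the fundamental waveform).
t = np.arange(2000) / 1000.0
u = np.sin(2 * np.pi * 50 * t)                                   # exogenous input
y = 0.6 * u + 0.1 * np.sin(2 * np.pi * 250 * t + 0.3) + 0.01 * rng.standard_normal(t.size)

p = 4                                                            # autoregressive order
X = np.array([np.r_[y[k - p:k][::-1], u[k]] for k in range(p, y.size)])
theta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)                # fit ARX coefficients

k = 1500                                                         # index of the late sample
estimate = np.r_[y[k - p:k][::-1], u[k]] @ theta
print(f"retrodicted = {estimate:.4f}, measured = {y[k]:.4f}")
```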
Rathfelder, K M; Abriola, L M; Taylor, T P; Pennell, K D
2001-04-01
A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
Goh, Shaun K Y; Tham, Elaine K H; Magiati, Iliana; Sim, Litwee; Sanmugam, Shamini; Qiu, Anqi; Daniel, Mary L; Broekman, Birit F P; Rifkin-Graboi, Anne
2017-09-18
The purpose of this study was to improve standardized language assessments among bilingual toddlers by investigating and removing the effects of bias due to unfamiliarity with cultural norms or a distributed language system. The Expressive and Receptive Bayley-III language scales were adapted for use in a multilingual country (Singapore). Differential item functioning (DIF) was applied to data from 459 two-year-olds without atypical language development. This involved investigating if the probability of success on each item varied according to language exposure while holding latent language ability, gender, and socioeconomic status constant. Associations with language, behavioral, and emotional problems were also examined. Five of 16 items showed DIF, 1 of which may be attributed to cultural bias and another to a distributed language system. The remaining 3 items favored toddlers with higher bilingual exposure. Removal of DIF items reduced associations between language scales and emotional and language problems, but improved the validity of the expressive scale from poor to good. Our findings indicate the importance of considering cultural and distributed language bias in standardized language assessments. We discuss possible mechanisms influencing performance on items favoring bilingual exposure, including the potential role of inhibitory processing.
NASA Astrophysics Data System (ADS)
Börries, S.; Metz, O.; Pranzas, P. K.; Bellosta von Colbe, J. M.; Bücherl, T.; Dornheim, M.; Klassen, T.; Schreyer, A.
2016-10-01
For the storage of hydrogen, complex metal hydrides are considered highly promising with respect to capacity, reversibility and safety. The optimization of corresponding storage tanks demands a precise and time-resolved investigation of the hydrogen distribution in scaled-up metal hydride beds. In this study it is shown that in situ fission Neutron Radiography provides unique insights into the spatial distribution of hydrogen even for scaled-up compacts and thereby enables a direct study of hydrogen storage tanks. A technique is introduced for the precise quantification of both time-resolved data and the a priori material distribution, allowing, among other things, for an optimization of the compact manufacturing process. For the first time, several macroscopic fields are combined, which elucidates the great potential of Neutron Imaging for investigations of metal hydrides by going further than solely 'imaging' the system: a combination of in-situ Neutron Radiography, IR-Thermography and thermodynamic quantities can reveal the interdependency of different driving forces for a scaled-up sodium alanate pellet by means of a multi-correlation analysis. A decisive and time-resolved, complex influence of material packing density is derived. The results of this study enable a variety of new investigation possibilities that provide essential information for the optimization of future hydrogen storage tanks.
NASA Astrophysics Data System (ADS)
Kim, Yup; Cho, Minsoo; Yook, Soon-Hyung
2011-10-01
We study the effects of the underlying topologies on a single feature perturbation imposed to the Axelrod model of consensus formation. From the numerical simulations we show that there are successive updates which are similar to avalanches in many self-organized criticality systems when a perturbation is imposed. We find that the distribution of avalanche size satisfies the finite-size scaling (FSS) ansatz on two-dimensional lattices and random networks. However, on scale-free networks with the degree exponent γ≤3 we show that the avalanche size distribution does not satisfy the FSS ansatz. The results indicate that the disordered configurations on two-dimensional lattices or on random networks are still stable against the perturbation in the limit N (network size) →∞. However, on scale-free networks with γ≤3 the perturbation always drives the disordered phase into an ordered phase. The possible relationship between the properties of phase transition of the Axelrod model and the avalanche distribution is also discussed.
On the existence of a scaling relation in the evolution of cellular systems
NASA Astrophysics Data System (ADS)
Fortes, M. A.
1994-05-01
A mean field approximation is used to analyze the evolution of the distribution of sizes in systems formed by individual 'cells,' each of which grows or shrinks, in such a way that the total number of cells decreases (e.g. polycrystals, soap froths, precipitate particles in a matrix). The rate of change of the size of a cell is defined by a growth function that depends on the size (x) of the cell and on moments of the size distribution, such as the average size (x̄). Evolutionary equations for the distribution of sizes and of reduced sizes (i.e. x/x̄) are established. The stationary (or steady state) solutions of the equations are obtained for various particular forms of the growth function. A steady state of the reduced size distribution is equivalent to a scaling behavior. It is found that there is an infinity of steady state solutions which form a (continuous) one-parameter family of functions, but they are not, in general, reached from an arbitrary initial state. These properties are at variance with those that can be derived from models based on the von Neumann-Mullins equation.
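Written out, the kind of evolution equation implied by this description is a continuity equation in size space; the form below is a standard mean-field statement with the growth function left abstract, not necessarily the author's exact formulation.

```latex
% Mean-field evolution of the size distribution f(x,t), with a growth function
% G that depends on the cell size x and on the mean size \bar{x}(t):
\frac{\partial f(x,t)}{\partial t}
  + \frac{\partial}{\partial x}\Bigl[\, G\bigl(x,\bar{x}(t)\bigr)\, f(x,t) \Bigr] = 0,
\qquad
\bar{x}(t) = \frac{\int x\, f(x,t)\, \mathrm{d}x}{\int f(x,t)\, \mathrm{d}x}.
% A scaling (steady) state is a solution for which the distribution of the
% reduced size x/\bar{x}(t) no longer changes in time.
```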
Utility of 222Rn as a passive tracer of subglacial distributed system drainage
NASA Astrophysics Data System (ADS)
Linhoff, Benjamin S.; Charette, Matthew A.; Nienow, Peter W.; Wadham, Jemma L.; Tedstone, Andrew J.; Cowton, Thomas
2017-03-01
Water flow beneath the Greenland Ice Sheet (GrIS) has been shown to include slow-inefficient (distributed) and fast-efficient (channelized) drainage systems, in response to meltwater delivery to the bed via both moulins and surface lake drainage. This partitioning between channelized and distributed drainage systems is difficult to quantify yet it plays an important role in bulk meltwater chemistry and glacial velocity, and thus subglacial erosion. Radon-222, which is continuously produced via the decay of 226Ra, accumulates in meltwater that has interacted with rock and sediment. Hence, elevated concentrations of 222Rn should be indicative of meltwater that has flowed through a distributed drainage system network. In the spring and summer of 2011 and 2012, we made hourly 222Rn measurements in the proglacial river of a large outlet glacier of the GrIS (Leverett Glacier, SW Greenland). Radon-222 activities were highest in the early melt season (10-15 dpm L-1), decreasing by a factor of 2-5 (3-5 dpm L-1) following the onset of widespread surface melt. Using a 222Rn mass balance model, we estimate that, on average, greater than 90% of the river 222Rn was sourced from distributed system meltwater. The distributed system 222Rn flux varied on diurnal, weekly, and seasonal time scales with highest fluxes generally occurring on the falling limb of the hydrograph and during expansion of the channelized drainage system. Using laboratory based estimates of distributed system 222Rn, the distributed system water flux generally ranged between 1-5% of the total proglacial river discharge for both seasons. This study provides a promising new method for hydrograph separation in glacial watersheds and for estimating the timing and magnitude of distributed system fluxes expelled at ice sheet margins.
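In its simplest form, the hydrograph separation reduces to two-endmember mixing on the 222Rn activity; the sketch below neglects the degassing and decay terms of the full mass balance, and the distributed-system endmember value is assumed for illustration.

```python
def distributed_fraction(rn_river, rn_distributed, rn_channelized=0.0):
    """Two-endmember mixing: fraction of river discharge sourced from the
    distributed drainage system, given characteristic 222Rn activities
    (dpm per litre).  Degassing and decay terms are neglected here."""
    return (rn_river - rn_channelized) / (rn_distributed - rn_channelized)

# Early-season river activity of ~12 dpm/L against an assumed laboratory-derived
# distributed-system endmember of ~300 dpm/L gives a few percent of the discharge.
f = distributed_fraction(rn_river=12.0, rn_distributed=300.0)
print(f"distributed-system water fraction = {f:.1%}")
```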
An optimal beam alignment method for large-scale distributed space surveillance radar system
NASA Astrophysics Data System (ADS)
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally implementing Transmitting/Receiving (T/R) beam alignment over a vast region of space with narrow beams poses a special and considerable technical challenge in the space surveillance area. According to the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for T/R beams using the direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
The US business cycle: power law scaling for interacting units with complex internal structure
NASA Astrophysics Data System (ADS)
Ormerod, Paul
2002-11-01
In the social sciences, there is increasing evidence of the existence of power law distributions. The distribution of recessions in capitalist economies has recently been shown to follow such a distribution. The preferred explanation for this is self-organised criticality. Gene Stanley and colleagues propose an alternative, namely that power law scaling can arise from the interplay between random multiplicative growth and the complex structure of the units composing the system. This paper offers a parsimonious model of the US business cycle based on similar principles. The business cycle, along with long-term growth, is one of the two features which distinguish capitalism from all previously existing societies. Yet, economics lacks a satisfactory theory of the cycle. The source of cycles is posited in economic theory to be a series of random shocks which are external to the system. In this model, the cycle is an internal feature of the system, arising from the level of industrial concentration of the agents and the interactions between them. The model, in contrast to existing economic theories of the cycle, accounts for the key features of output growth in the US business cycle in the 20th century.
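A toy version of the "interacting units with complex internal structure" mechanism, with arbitrary parameters and no claim to match the paper's calibration: aggregate output is a sum of very unequally sized units, each hit by independent multiplicative shocks, so aggregate growth fluctuations do not average out the way they would for equal-sized units.

```python
import numpy as np

rng = np.random.default_rng(7)

def aggregate_growth_std(sizes, periods=2000, sigma=0.05):
    """Std of aggregate output growth when every unit receives an independent
    multiplicative shock each period (sizes and shock scale are illustrative)."""
    g = np.empty(periods)
    for t in range(periods):
        shocks = np.exp(rng.normal(0.0, sigma, sizes.size))
        g[t] = np.log((sizes * shocks).sum() / sizes.sum())
    return g.std()

n = 10000
equal = np.ones(n)
zipf = 1.0 / np.arange(1, n + 1)      # heavy-tailed "industrial concentration"

# With equal-sized units, idiosyncratic shocks average out (~ sigma/sqrt(N));
# with a heavy-tailed size distribution, aggregate fluctuations stay large.
print("equal sizes :", aggregate_growth_std(equal))
print("Zipf sizes  :", aggregate_growth_std(zipf))
```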
Disordered artificial spin ices: Avalanches and criticality (invited)
NASA Astrophysics Data System (ADS)
Reichhardt, Cynthia J. Olson; Chern, Gia-Wei; Libál, Andras; Reichhardt, Charles
2015-05-01
We show that square and kagome artificial spin ices with disconnected islands exhibit disorder-induced nonequilibrium phase transitions. The critical point of the transition is characterized by a diverging length scale and the effective spin reconfiguration avalanche sizes are power-law distributed. For weak disorder, the magnetization reversal is dominated by system-spanning avalanche events characteristic of a supercritical regime, while at strong disorder, the avalanche distributions have subcritical behavior and are cut off above a length scale that decreases with increasing disorder. The different type of geometrical frustration in the two lattices produces distinct forms of critical avalanche behavior. Avalanches in the square ice consist of the propagation of locally stable domain walls separating the two polarized ground states, and we find a scaling collapse consistent with an interface depinning mechanism. In the fully frustrated kagome ice, however, the avalanches branch strongly in a manner reminiscent of directed percolation. We also observe an interesting crossover in the power-law scaling of the kagome ice avalanches at low disorder. Our results show that artificial spin ices are ideal systems in which to study a variety of nonequilibrium critical point phenomena as the microscopic degrees of freedom can be accessed directly in experiments.
Professional diversity and the productivity of cities.
Bettencourt, Luís M A; Samaniego, Horacio; Youn, Hyejin
2014-06-23
Attempts to understand the relationship between diversity, productivity and scale have remained limited due to the scheme-dependent nature of the taxonomies describing complex systems. We analyze the diversity of US metropolitan areas in terms of profession diversity and employment to show how this frequency distribution takes a universal scale-invariant form, common to all cities, in the limit of infinite resolution of occupational taxonomies. We show that this limit is obtained under general conditions that follow from the analysis of the variation of the occupational frequency across taxonomies at different resolutions in a way analogous to finite-size scaling in statistical physical systems. We propose a theoretical framework that derives the form and parameters of the limiting distribution of professions based on the appearance, in urban social networks, of new occupations as the result of specialization and coordination of labor. By deriving classification scheme-independent measures of functional diversity and modeling cities as social networks embedded in infrastructural space, these results show how standard economic arguments of division and coordination of labor can be articulated in detail in cities and provide a microscopic basis for explaining increasing returns to population scale observed at the level of entire metropolitan areas.
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations of riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled through Regional Climate Models to watersheds, river hydrology, and population response. An additional challenge is quantifying sources of uncertainty given the highly nonlinear nature of interactions between climate variables and community level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach will account for sources of uncertainty in forecasts of community or population response. The goal is to evaluate the potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses were dependent on the climate scenario considered.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes and in the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modeled processes were examined and enhanced to suitably represent the extended space and timesca...
Active Control of Flow Separation on a High-Lift System with Slotted Flap at High Reynolds Number
NASA Technical Reports Server (NTRS)
Khodadoust, Abdollah; Washburn, Anthony
2007-01-01
The NASA Energy Efficient Transport (EET) airfoil was tested at NASA Langley's Low- Turbulence Pressure Tunnel (LTPT) to assess the effectiveness of distributed Active Flow Control (AFC) concepts on a high-lift system at flight scale Reynolds numbers for a medium-sized transport. The test results indicate presence of strong Reynolds number effects on the high-lift system with the AFC operational, implying the importance of flight-scale testing for implementation of such systems during design of future flight vehicles with AFC. This paper describes the wind tunnel test results obtained at the LTPT for the EET high-lift system for various AFC concepts examined on this airfoil.
Pattern detection in stream networks: Quantifying spatialvariability in fish distribution
Torgersen, Christian E.; Gresswell, Robert E.; Bateman, Douglas S.
2004-01-01
Biological and physical properties of rivers and streams are inherently difficult to sample and visualize at the resolution and extent necessary to detect fine-scale distributional patterns over large areas. Satellite imagery and broad-scale fish survey methods are effective for quantifying spatial variability in biological and physical variables over a range of scales in marine environments but are often too coarse in resolution to address conservation needs in inland fisheries management. We present methods for sampling and analyzing multiscale, spatially continuous patterns of stream fishes and physical habitat in small- to medium-size watersheds (500–1000 hectares). Geospatial tools, including geographic information system (GIS) software such as ArcInfo dynamic segmentation and ArcScene 3D analyst modules, were used to display complex biological and physical datasets. These tools also provided spatial referencing information (e.g. Cartesian and route-measure coordinates) necessary for conducting geostatistical analyses of spatial patterns (empirical semivariograms and wavelet analysis) in linear stream networks. Graphical depiction of fish distribution along a one-dimensional longitudinal profile and throughout the stream network (superimposed on a 10-metre digital elevation model) provided the spatial context necessary for describing and interpreting the relationship between landscape pattern and the distribution of coastal cutthroat trout (Oncorhynchus clarki clarki) in western Oregon, U.S.A. The distribution of coastal cutthroat trout was highly autocorrelated and exhibited a spherical semivariogram with a defined nugget, sill, and range. Wavelet analysis of the main-stem longitudinal profile revealed periodicity in trout distribution at three nested spatial scales corresponding ostensibly to landscape disturbances and the spacing of tributary junctions.
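The geostatistical summary referred to above (an empirical semivariogram along route distance) can be computed with a few lines of code; the synthetic "trout count" series below is a stand-in for the continuous survey data.

```python
import numpy as np

rng = np.random.default_rng(11)

def empirical_semivariogram(distance, value, lag_width, n_lags):
    """Empirical semivariogram gamma(h) for samples indexed by route (network)
    distance, the kind of geostatistical summary described above."""
    d = np.abs(distance[:, None] - distance[None, :])
    g = 0.5 * (value[:, None] - value[None, :]) ** 2
    gamma = np.empty(n_lags)
    for k in range(n_lags):
        in_lag = (d > k * lag_width) & (d <= (k + 1) * lag_width)
        gamma[k] = g[in_lag].mean() if in_lag.any() else np.nan
    return gamma

# Synthetic, spatially autocorrelated signal along a 5 km main stem; real inputs
# would be the surveyed route distances and fish counts.
x = np.sort(rng.uniform(0.0, 5000.0, 200))                 # route distance (m)
y = np.sin(x / 500.0) + 0.3 * rng.standard_normal(x.size)  # autocorrelated signal
print(np.round(empirical_semivariogram(x, y, lag_width=250.0, n_lags=8), 3))
```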
Large-Scale Ichthyoplankton and Water Mass Distribution along the South Brazil Shelf
de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique
2014-01-01
Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27′ and 34°51′S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients. PMID:24614798
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
A web-based subsetting service for regional scale MODIS land products
DOE Office of Scientific and Technical Information (OSTI.GOV)
SanthanaVannan, Suresh K; Cook, Robert B; Holladay, Susan K
2009-12-01
The Moderate Resolution Imaging Spectroradiometer (MODIS) sensor has provided valuable information on various aspects of the Earth System since March 2000. The spectral, spatial, and temporal characteristics of MODIS products have made them an important data source for analyzing key science questions relating to Earth System processes at regional, continental, and global scales. The size of the MODIS product and native HDF-EOS format are not optimal for use in field investigations at individual sites (100 - 100 km or smaller). In order to make MODIS data readily accessible for field investigations, the NASA-funded Distributed Active Archive Center (DAAC) for Biogeochemical Dynamics at Oak Ridge National Laboratory (ORNL) has developed an online system that provides MODIS land products in an easy-to-use format and in file sizes more appropriate to field research. This system provides MODIS land products data in a nonproprietary comma delimited ASCII format and in GIS compatible formats (GeoTIFF and ASCII grid). Web-based visualization tools are also available as part of this system and these tools provide a quick snapshot of the data. Quality control tools and a multitude of data delivery options are available to meet the demands of various user communities. This paper describes the important features and design goals for the system, particularly in the context of data archive and distribution for regional scale analysis. The paper also discusses the ways in which data from this system can be used for validation, data intercomparison, and modeling efforts.
A Distributed Approach to System-Level Prognostics
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil
2012-01-01
Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.
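A minimal sketch of the distributed idea, under the assumption of invented constant-rate degradation submodels (the actual approach uses structural model decomposition of the rover model): each submodel produces a local remaining-useful-life estimate, and the system-level estimate is their minimum.

```python
def local_rul(health, degradation_rate, failure_threshold=0.2):
    """Time until this submodel's health state crosses its failure threshold,
    assuming a constant degradation rate (placeholder degradation model)."""
    return max(0.0, (health - failure_threshold) / degradation_rate)

# Hypothetical submodels with invented health states and degradation rates.
submodels = {
    "battery":     {"health": 0.9, "degradation_rate": 0.004},
    "left motor":  {"health": 0.7, "degradation_rate": 0.002},
    "right motor": {"health": 0.8, "degradation_rate": 0.006},
}

local_estimates = {name: local_rul(**p) for name, p in submodels.items()}
print(local_estimates)
print("system RUL:", min(local_estimates.values()))   # limited by earliest failure
```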
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell excess energy to, or buy needed energy from, the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
Hierarchical Data Distribution Scheme for Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Bhushan, Shashi; Dave, M.; Patel, R. B.
2010-11-01
In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing. A P2P system may also be used for storing data globally, but the successful implementation of conventional databases on P2P systems is yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchy-based data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.
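The finding that partitions hosted on high-availability peers perform better suggests a simple placement heuristic, sketched below. This is a toy illustration only, not the paper's replication model; peer availabilities, partition names and the replica count are made up, and independent peer failures are assumed.

```python
# Toy sketch of availability-aware partition placement for a database hosted on
# a P2P overlay: each partition and its replicas go to peers with the highest
# availability factors, rotated so load is spread. All values are illustrative.
import random

random.seed(7)
peers = {f"peer{i}": round(random.uniform(0.5, 0.99), 2) for i in range(10)}
partitions = [f"part{i}" for i in range(6)]
REPLICAS = 2  # copies per partition (hypothetical)

def place(partitions, peers, replicas):
    ranked = sorted(peers, key=peers.get, reverse=True)  # high availability first
    placement = {}
    for k, part in enumerate(partitions):
        placement[part] = [ranked[(k + r) % len(ranked)] for r in range(replicas)]
    return placement

def expected_availability(placement, peers):
    """Probability that at least one replica of every partition is online,
    assuming peers fail independently."""
    total = 1.0
    for copies in placement.values():
        p_all_down = 1.0
        for peer in copies:
            p_all_down *= (1.0 - peers[peer])
        total *= (1.0 - p_all_down)
    return total

plan = place(partitions, peers, REPLICAS)
for part, copies in plan.items():
    print(part, "->", copies)
print("expected availability of full database:",
      round(expected_availability(plan, peers), 4))
```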
Bosse, Stefan
2015-01-01
Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code that stores the control and data state of an agent, which is a novel approach. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strongly heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550
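A minimal sketch of the activity-transition-graph idea is given below: activities are functions that update the agent's data state, and guarded transitions select the next activity. This only illustrates the ATG concept in plain Python; the code morphing, migration and stack-machine platform described in the abstract are not modelled, and the sensing scenario is hypothetical.

```python
# Minimal sketch of an activity-transition-graph (ATG) agent: activities update
# the agent's data state, transitions are guarded edges between activities.
import random

class ATGAgent:
    def __init__(self, atg, start, data):
        self.atg = atg          # {activity: (function, [(guard, next_activity), ...])}
        self.activity = start
        self.data = data        # the agent's data state

    def step(self):
        act, transitions = self.atg[self.activity]
        act(self.data)                          # execute current activity
        for guard, nxt in transitions:          # first transition whose guard holds
            if guard(self.data):
                self.activity = nxt
                return True
        return False                            # no enabled transition: terminate

# Hypothetical sensing agent: sample, then either keep sampling or report.
def sample(d):  d["readings"].append(d["sensor"]())
def report(d):  print("event detected after", len(d["readings"]), "samples")

atg = {
    "sample": (sample, [(lambda d: d["readings"][-1] > d["threshold"], "report"),
                        (lambda d: True, "sample")]),
    "report": (report, []),
}

agent = ATGAgent(atg, "sample",
                 {"sensor": lambda: random.random(), "threshold": 0.95,
                  "readings": []})
while agent.step():
    pass
```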
Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations
NASA Technical Reports Server (NTRS)
Hogan, Craig J.; Woods, Jorden
1992-01-01
The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Marvin; Bose, James; Beier, Richard
2004-12-01
The assets that Citizen Potawatomi Nation holds were evaluated to help define the strengths and weaknesses to be used in pursuing economic prosperity. With this baseline assessment, a Planning Team will create a vision for the tribe to integrate into long-term energy and business strategies. Identification of energy efficiency devices, systems and technologies was made, and an estimation of cost benefits of the more promising ideas is submitted for possible inclusion into the final energy plan. Multiple energy resources and sources were identified and their attributes were assessed to determine the appropriateness of each. Methods of saving energy were evaluated and reported on, and potential revenue-generating sources that specifically fit the tribe were identified and reported. A primary goal is to create long-term energy strategies to explore development of tribal utility options and analyze renewable energy and energy efficiency options. Associated goals are to consider exploring energy efficiency and renewable economic development projects involving the following topics: (1) Home-scale projects may include construction of a home with energy efficiency or renewable energy features and retrofitting an existing home to add energy efficiency or renewable energy features. (2) Community-scale projects may include medium- to large-scale energy efficiency building construction, retrofit projects, or installation of community renewable energy systems. (3) Small business development may include the creation of a tribal enterprise that would manufacture and distribute solar and wind powered equipment for ranches and farms or create a contracting business to include energy efficiency and renewable retrofits such as geothermal heat pumps. (4) Commercial-scale energy projects may include, at a larger scale, the formation of a tribal utility to sell power to the commercial grid or to transmit and distribute power throughout the tribal community, as well as hydrogen production and propane and natural-gas distribution systems.
Statistical physics approaches to financial fluctuations
NASA Astrophysics Data System (ADS)
Wang, Fengzhong
2009-12-01
Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity to characterize financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given value threshold. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility of daily opening to closing and of closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze return, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
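The return interval analysis described above is straightforward to reproduce on any volatility series. The sketch below applies it to a synthetic series (not the U.S. equity data of the study): intervals between threshold exceedances are computed, rescaled by their mean, and compared across thresholds to test for a common scaling function.

```python
# Sketch of return interval analysis on a synthetic volatility series. The data
# are simulated for illustration only; the point is to show how return intervals
# between threshold exceedances are computed and rescaled by their mean.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "volatility": absolute values of a heavy-tailed, correlated series.
x = np.abs(np.cumsum(rng.standard_t(df=3, size=200_000)) * 0.001
           + rng.standard_t(df=3, size=200_000))

def return_intervals(vol, threshold):
    """Times between successive values exceeding `threshold`."""
    idx = np.flatnonzero(vol > threshold)
    return np.diff(idx)

for q in (0.90, 0.95, 0.99):                 # thresholds as volatility quantiles
    thr = np.quantile(x, q)
    tau = return_intervals(x, thr)
    scaled = tau / tau.mean()                # rescale by the mean interval
    print(f"q={q:.2f}  mean interval={tau.mean():8.1f}  "
          f"P(scaled > 3) = {np.mean(scaled > 3):.3f}")
# If the distribution scales, P(scaled > 3) should be similar across thresholds.
```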
NASA Astrophysics Data System (ADS)
Mbanjwa, Mesuli B.; Chen, Hao; Fourie, Louis; Ngwenya, Sibusiso; Land, Kevin
2014-06-01
Multiplexed or parallelised droplet microfluidic systems allow for increased throughput in the production of emulsions and microparticles, while maintaining a small footprint and utilising minimal ancillary equipment. The current paper demonstrates the design and fabrication of a multiplexed microfluidic system for producing biocatalytic microspheres. The microfluidic system consists of an array of 10 parallel microfluidic circuits, for simultaneous operation to demonstrate increased production throughput. The flow distribution was achieved using a principle of reservoirs supplying individual microfluidic circuits. The microfluidic devices were fabricated in poly (dimethylsiloxane) (PDMS) using soft lithography techniques. The consistency of the flow distribution was determined by measuring the size variations of the microspheres produced. The coefficient of variation of the particles was determined to be 9%, an indication of consistent particle formation and good flow distribution between the 10 microfluidic circuits.
Network Theory: A Primer and Questions for Air Transportation Systems Applications
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.
2004-01-01
A new understanding (with potential applications to air transportation systems) has emerged in the past five years in the scientific field of networks. This development emerges in large part because we now have a new laboratory for developing theories about complex networks: The Internet. The premise of this new understanding is that most complex networks of interest, both of nature and of human contrivance, exhibit a fundamentally different behavior than thought for over two hundred years under classical graph theory. Classical theory held that networks exhibited random behavior, characterized by normal (e.g., Gaussian or Poisson) degree distributions of the connectivity between nodes by links. The new understanding turns this idea on its head: networks of interest exhibit scale-free (or small world) degree distributions of connectivity, characterized by power law distributions. The implications of scale-free behavior for air transportation systems include the potential that some behaviors of complex system architectures might be analyzed through relatively simple approximations of local elements of the system. For air transportation applications, this presentation proposes a framework for constructing topologies (architectures) that represent the relationships between mobility, flight operations, aircraft requirements, and airspace capacity, and the related externalities in airspace procedures and architectures. The proposed architectures or topologies may serve as a framework for posing comparative and combinative analyses of performance, cost, security, environmental, and related metrics.
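The contrast the primer draws between classical random graphs and scale-free networks can be seen directly in simulation. The sketch below (assuming the networkx package is available; parameters are arbitrary) compares the degree statistics of an Erdős–Rényi graph, whose degrees are Poisson-like, with a Barabási–Albert preferential-attachment graph, whose degree distribution has a power-law tail and hubs.

```python
# Compare degree distributions of a classical random graph and a
# preferential-attachment (scale-free) graph. Requires networkx.
import networkx as nx
from collections import Counter

n, avg_deg = 5000, 6
random_net = nx.gnp_random_graph(n, avg_deg / (n - 1), seed=1)
scalefree_net = nx.barabasi_albert_graph(n, avg_deg // 2, seed=1)

def degree_summary(g, name):
    degs = [d for _, d in g.degree()]
    print(f"{name}: mean degree {sum(degs)/len(degs):.1f}, "
          f"max degree {max(degs)}, "
          f"fraction of nodes with degree > 20: {sum(d > 20 for d in degs)/len(degs):.4f}")

degree_summary(random_net, "random (ER)    ")
degree_summary(scalefree_net, "scale-free (BA)")
# The BA graph develops hubs (very large max degree) that the ER graph essentially never produces.
```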
Interpreting Popov criteria in Lur'e systems with complex scaling stability analysis
NASA Astrophysics Data System (ADS)
Zhou, J.
2018-06-01
The paper presents a novel frequency-domain interpretation of Popov criteria for absolute stability in Lur'e systems by means of what we call complex scaling stability analysis. The complex scaling technique is developed for exponential/asymptotic stability in LTI feedback systems, and it dispenses with open-loop pole distribution, contour/locus orientation, and prior frequency sweeping. Exploiting the technique to alternatively reveal positive realness of transfer functions, the re-interpretation of the Popov criteria is explicated. More specifically, the suggested frequency-domain stability conditions are conformable both in scalar and multivariable cases, and can be implemented either graphically with locus plotting or numerically without; in particular, the latter is suitable as a design tool with auxiliary parameter freedom. The interpretation also reveals further frequency-domain facts about Lur'e systems. Numerical examples are included to illustrate the main results.
Effects of habitat fragmentation on passerine birds breeding in Intermountain shrubsteppe
Knick, S.T.; Rotenberry, J.T.
2002-01-01
Habitat fragmentation and loss strongly influence the distribution and abundance of passerine birds breeding in Intermountain shrubsteppe. Wildfires, human activities, and change in vegetation communities often are synergistic in these systems and can result in radical conversion from shrubland to grasslands dominated by exotic annuals at large temporal and spatial scales from which recovery to native conditions is unlikely. As a result, populations of 5 of the 12 species in our review of Intermountain shrubsteppe birds are undergoing significant declines; 5 species are listed as at-risk or as candidates for protection in at least one state. The process by which fragmentation affects bird distributions in these habitats remains unknown because most research has emphasized the detection of population trends and patterns of habitat associations at relatively large spatial scales. Our research indicates that the distribution of shrubland-obligate species, such as Brewer's Sparrows (Spizella breweri), Sage Sparrows (Amphispiza belli), and Sage Thrashers (Oreoscoptes montanus), was highly sensitive to fragmentation of shrublands at spatial scales larger than individual home ranges. In contrast, the underlying mechanisms for both habitat change and bird population dynamics may operate independently of habitat boundaries. We propose alternative, but not necessarily exclusive, mechanisms to explain the relationship between habitat fragmentation and bird distribution and abundance. Fragmentation might influence productivity through differences in breeding density, nesting success, or predation. However, local and landscape variables were not significant determinants either of success, number fledged, or probability of predation or parasitism (although our tests had relatively low statistical power). Alternatively, relative absence of natal philopatry and redistribution by individuals among habitats following fledging or post-migration could account for the pattern of distribution and abundance. Thus, boundary dynamics may be important in determining the distribution of shrubland-obligate species but insignificant relative to the mechanisms causing the pattern of habitat and bird distribution. Because of the dichotomy in responses, Intermountain shrubsteppe systems present a unique challenge in understanding how landscape composition, configuration, and change influence bird population dynamics.
Transition path time distributions
NASA Astrophysics Data System (ADS)
Laleman, M.; Carlon, E.; Orland, H.
2017-12-01
Biomolecular folding, at least in simple systems, can be described as a two state transition in a free energy landscape with two deep wells separated by a high barrier. Transition paths are the short part of the trajectories that cross the barrier. Average transition path times and, recently, their full probability distribution have been measured for several biomolecular systems, e.g., in the folding of nucleic acids or proteins. Motivated by these experiments, we have calculated the full transition path time distribution for a single stochastic particle crossing a parabolic barrier, including inertial terms which were neglected in previous studies. These terms influence the short time scale dynamics of a stochastic system and can be of experimental relevance in view of the short duration of transition paths. We derive the full transition path time distribution as well as the average transition path times and discuss the similarities and differences with the high friction limit.
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
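The modelling step at the heart of such a system can be sketched compactly: fit a statistical model to presence/absence records with environmental predictors, then predict a suitability surface over a grid. The example below uses logistic regression from scikit-learn as one of many possible model choices (an assumption, not the system's actual method), and all predictor names and data are synthetic.

```python
# Conceptual sketch of a species distribution model: fit presence/absence
# against environmental predictors, then map predicted suitability over a grid.
# Uses numpy and scikit-learn; the predictors and data are purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Synthetic training data: temperature and precipitation at sampled points.
n = 500
temp = rng.uniform(0, 30, n)
precip = rng.uniform(100, 2000, n)
# Hypothetical species that prefers warm, wet sites.
p_true = 1 / (1 + np.exp(-(0.3 * (temp - 18) + 0.004 * (precip - 900))))
presence = rng.random(n) < p_true

model = LogisticRegression(max_iter=1000).fit(np.column_stack([temp, precip]), presence)

# "Raster" of environmental layers to predict over (a coarse grid here).
tg, pg = np.meshgrid(np.linspace(0, 30, 60), np.linspace(100, 2000, 60))
suitability = model.predict_proba(
    np.column_stack([tg.ravel(), pg.ravel()]))[:, 1].reshape(tg.shape)

print("predicted suitability, cold/dry corner:", round(suitability[0, 0], 2))
print("predicted suitability, warm/wet corner:", round(suitability[-1, -1], 2))
```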
NASA Astrophysics Data System (ADS)
Ahluwalia, Arti
2017-02-01
About two decades ago, West and coworkers established a model which predicts that metabolic rate follows a three-quarter power relationship with the mass of an organism, based on the premise that tissues are supplied nutrients through a fractal distribution network. Quarter-power scaling is widely considered a universal law of biology and it is generally accepted that were in-vitro cultures to obey allometric metabolic scaling, they would have more predictive potential and could, for instance, provide a viable substitute for animals in research. This paper outlines a theoretical and computational framework for establishing quarter-power scaling in three-dimensional spherical constructs in-vitro, starting where fractal distribution ends. Allometric scaling in non-vascular spherical tissue constructs was assessed using models of Michaelis-Menten oxygen consumption and diffusion. The models demonstrate that physiological scaling is maintained when about 5 to 60% of the construct is exposed to oxygen concentrations less than the Michaelis-Menten constant, with a significant concentration gradient in the sphere. The results have important implications for the design of downscaled in-vitro systems with physiological relevance.
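The type of model described, steady-state oxygen diffusion with Michaelis-Menten consumption in a sphere, can be solved numerically in a few lines. The sketch below works in dimensionless form and uses placeholder parameter values (not those of the paper); it reports the fraction of construct volume with oxygen below the Michaelis-Menten constant, the quantity the paper relates to physiological scaling.

```python
# Steady-state oxygen diffusion with Michaelis-Menten consumption in a sphere,
# in dimensionless form (u = C/C_surface, x = r/R):
#   u'' + (2/x) u' = phi^2 * u / (u + kappa),   u'(0) = 0,   u(1) = 1,
# where phi^2 = Vmax R^2 / (D C_surface) and kappa = Km / C_surface.
# Parameter values are illustrative placeholders, not those of the paper.
import numpy as np
from scipy.integrate import solve_bvp

phi2 = 20.0     # consumption-to-diffusion ratio (assumed)
kappa = 0.03    # Km relative to the surface concentration (assumed)

def odes(x, y):
    u, du = y
    u_pos = np.clip(u, 0.0, None)   # guard against negative intermediate iterates
    return np.vstack([du, phi2 * u_pos / (u_pos + kappa) - 2.0 * du / x])

def bc(ya, yb):
    return np.array([ya[1], yb[0] - 1.0])     # zero flux at centre, u = 1 at surface

x = np.linspace(1e-4, 1.0, 400)               # start slightly off x = 0 to avoid 2/x
y0 = np.vstack([np.ones_like(x), np.zeros_like(x)])
sol = solve_bvp(odes, bc, x, y0, max_nodes=20000)

u = sol.sol(x)[0]
# Volume-weighted fraction of the sphere with C below Km (uniform-grid estimate).
frac_below_Km = np.sum(x**2 * (u < kappa)) / np.sum(x**2)
print("solver converged:", sol.success)
print(f"centre oxygen (relative to surface): {u[0]:.3f}")
print(f"fraction of construct volume with C < Km: {frac_below_Km:.2f}")
```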
Bio-Nanobattery Development and Characterization
NASA Technical Reports Server (NTRS)
King, Glen C.; Choi, Sang H.; Chu, Sang-Hyon; Kim, Jae-Woo; Watt, Gerald D.; Lillehei, Peter T.; Park, Yeonjoon; Elliott, James R.
2005-01-01
A bio-nanobattery is an electrical energy storage device that utilizes organic materials and processes on an atomic, or nanometer, scale. The bio-nanobattery under development at NASA's Langley Research Center provides new capabilities for electrical power generation, storage, and distribution as compared to conventional power storage systems. Most currently available electronic systems and devices rely on a single, centralized power source to supply electrical power to a specified location in the circuit. As electronic devices and associated components continue to shrink in size towards the nanometer scale, a single centralized power source becomes impractical. Small systems, such as these, will require distributed power elements to reduce Joule heating, to minimize wiring quantities, and to allow autonomous operation of the various functions performed by the circuit. Our research involves the development and characterization of a bio-nanobattery using ferritins reconstituted with both an iron core (Fe-ferritin) and a cobalt core (Co-ferritin). Synthesis and characterization of the Co-ferritin and Fe-ferritin electrodes were performed, including reducing capability and the half-cell electrical potentials. Electrical output of nearly 0.5 V for the battery cell was measured. Ferritins utilizing other metallic cores were also considered to increase the overall electrical output. Two-dimensional ferritin arrays were produced on various substrates to demonstrate the feasibility of a thin-film nano-scaled power storage system for distributed power storage applications. The bio-nanobattery will be ideal for nanometer-scaled electronic applications, due to the small size, high energy density, and flexible thin-film structure. A five-cell demonstration article was produced for concept verification and bio-nanobattery characterization. Challenges to be addressed include the development of a multi-layered thin film, increasing the energy density, dry-cell bio-nanobattery development, and selection of ferritin core materials to allow the broadest range of applications. The potential applications for the distributed power system include autonomously operating intelligent chips, flexible thin-film electronic circuits, nanoelectromechanical systems (NEMS), ultra-high density data storage devices, nanoelectromagnetics, quantum electronic devices, biochips, nanorobots for medical applications and mechanical nano-fabrication, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Ning; Yearsley, John; Voisin, Nathalie
2015-05-15
Stream temperatures in urban watersheds are influenced to a high degree by anthropogenic impacts related to changes in landscape, stream channel morphology, and climate. These impacts can occur at small time and length scales, hence require analytical tools that consider the influence of the hydrologic regime, energy fluxes, topography, channel morphology, and near-stream vegetation distribution. Here we describe a modeling system that integrates the Distributed Hydrologic Soil Vegetation Model, DHSVM, with the semi-Lagrangian stream temperature model RBM, which has the capability to simulate the hydrology and water temperature of urban streams at high time and space resolutions, as well as a representation of the effects of riparian shading on stream energetics. We demonstrate the modeling system through application to the Mercer Creek watershed, a small urban catchment near Bellevue, Washington. The results suggest that the model is able both to produce realistic streamflow predictions at fine temporal and spatial scales, and to provide spatially distributed water temperature predictions that are consistent with observations throughout a complex stream network. We use the modeling construct to characterize impacts of land use change and near-stream vegetation change on stream temperature throughout the Mercer Creek system. We then explore the sensitivity of stream temperature to land use changes and modifications in vegetation along the riparian corridor.
Aquifer Vulnerability Assessment Based on Sequence Stratigraphic and ³⁹Ar Transport Modeling.
Sonnenborg, Torben O; Scharling, Peter B; Hinsby, Klaus; Rasmussen, Erik S; Engesgaard, Peter
2016-03-01
A large-scale groundwater flow and transport model is developed for a deep-seated (100 to 300 m below ground surface) sedimentary aquifer system. The model is based on a three-dimensional (3D) hydrostratigraphic model, building on a sequence stratigraphic approach. The flow model is calibrated against observations of hydraulic head and stream discharge while the credibility of the transport model is evaluated against measurements of ³⁹Ar from deep wells using alternative parameterizations of dispersivity and effective porosity. The directly simulated 3D mean age distributions and vertical fluxes are used to visualize the two-dimensional (2D)/3D age and flux distribution along transects and at the top plane of individual aquifers. The simulation results are used to assess the vulnerability of the aquifer system that generally has been assumed to be protected by thick overlaying clayey units and therefore proposed as future reservoirs for drinking water supply. The results indicate that on a regional scale these deep-seated aquifers are not as protected from modern surface water contamination as expected because significant leakage to the deeper aquifers occurs. The complex distribution of local and intermediate groundwater flow systems controlled by the distribution of the river network as well as the topographical variation (Tóth 1963) provides the possibility for modern water to be found in even the deepest aquifers. © 2015, National Ground Water Association.
NASA Astrophysics Data System (ADS)
Adare, A.; Afanasiev, S.; Aidala, C.; Ajitanand, N. N.; Akiba, Y.; Akimoto, R.; Al-Bataineh, H.; Alexander, J.; Alfred, M.; Al-Jamel, A.; Al-Ta'Ani, H.; Angerami, A.; Aoki, K.; Apadula, N.; Aphecetche, L.; Aramaki, Y.; Armendariz, R.; Aronson, S. H.; Asai, J.; Asano, H.; Aschenauer, E. C.; Atomssa, E. T.; Averbeck, R.; Awes, T. C.; Azmoun, B.; Babintsev, V.; Bai, M.; Bai, X.; Baksay, G.; Baksay, L.; Baldisseri, A.; Bandara, N. S.; Bannier, B.; Barish, K. N.; Barnes, P. D.; Bassalleck, B.; Basye, A. T.; Bathe, S.; Batsouli, S.; Baublis, V.; Bauer, F.; Baumann, C.; Baumgart, S.; Bazilevsky, A.; Beaumier, M.; Beckman, S.; Belikov, S.; Belmont, R.; Bennett, R.; Berdnikov, A.; Berdnikov, Y.; Bhom, J. H.; Bickley, A. A.; Bjorndal, M. T.; Black, D.; Blau, D. S.; Boissevain, J. G.; Bok, J. S.; Borel, H.; Boyle, K.; Brooks, M. L.; Brown, D. S.; Bryslawskyj, J.; Bucher, D.; Buesching, H.; Bumazhnov, V.; Bunce, G.; Burward-Hoy, J. M.; Butsyk, S.; Campbell, S.; Caringi, A.; Castera, P.; Chai, J.-S.; Chang, B. S.; Charvet, J.-L.; Chen, C.-H.; Chernichenko, S.; Chi, C. Y.; Chiba, J.; Chiu, M.; Choi, I. J.; Choi, J. B.; Choi, S.; Choudhury, R. K.; Christiansen, P.; Chujo, T.; Chung, P.; Churyn, A.; Chvala, O.; Cianciolo, V.; Citron, Z.; Cleven, C. R.; Cobigo, Y.; Cole, B. A.; Comets, M. P.; Conesa Del Valle, Z.; Connors, M.; Constantin, P.; Cronin, N.; Crossette, N.; Csanád, M.; Csörgő, T.; Dahms, T.; Dairaku, S.; Danchev, I.; Danley, T. W.; Das, K.; Datta, A.; Daugherity, M. S.; David, G.; Dayananda, M. K.; Deaton, M. B.; Deblasio, K.; Dehmelt, K.; Delagrange, H.; Denisov, A.; D'Enterria, D.; Deshpande, A.; Desmond, E. J.; Dharmawardane, K. V.; Dietzsch, O.; Ding, L.; Dion, A.; Diss, P. B.; Do, J. H.; Donadelli, M.; D'Orazio, L.; Drachenberg, J. L.; Drapier, O.; Drees, A.; Drees, K. A.; Dubey, A. K.; Durham, J. M.; Durum, A.; Dutta, D.; Dzhordzhadze, V.; Edwards, S.; Efremenko, Y. V.; Egdemir, J.; Ellinghaus, F.; Emam, W. S.; Engelmore, T.; Enokizono, A.; En'yo, H.; Espagnon, B.; Esumi, S.; Eyser, K. O.; Fadem, B.; Feege, N.; Fields, D. E.; Finger, M.; Finger, M.; Fleuret, F.; Fokin, S. L.; Forestier, B.; Fraenkel, Z.; Frantz, J. E.; Franz, A.; Frawley, A. D.; Fujiwara, K.; Fukao, Y.; Fung, S.-Y.; Fusayasu, T.; Gadrat, S.; Gainey, K.; Gal, C.; Gallus, P.; Garg, P.; Garishvili, A.; Garishvili, I.; Gastineau, F.; Ge, H.; Germain, M.; Giordano, F.; Glenn, A.; Gong, H.; Gong, X.; Gonin, M.; Gosset, J.; Goto, Y.; Granier de Cassagnac, R.; Grau, N.; Greene, S. V.; Grim, G.; Grosse Perdekamp, M.; Gu, Y.; Gunji, T.; Guo, L.; Guragain, H.; Gustafsson, H.-Å.; Hachiya, T.; Hadj Henni, A.; Haegemann, C.; Haggerty, J. S.; Hagiwara, M. N.; Hahn, K. I.; Hamagaki, H.; Hamblen, J.; Hamilton, H. F.; Han, R.; Han, S. Y.; Hanks, J.; Harada, H.; Hartouni, E. P.; Haruna, K.; Harvey, M.; Hasegawa, S.; Haseler, T. O. S.; Hashimoto, K.; Haslum, E.; Hasuko, K.; Hayano, R.; Hayashi, S.; He, X.; Heffner, M.; Hemmick, T. K.; Hester, T.; Heuser, J. M.; Hiejima, H.; Hill, J. C.; Hobbs, R.; Hohlmann, M.; Hollis, R. S.; Holmes, M.; Holzmann, W.; Homma, K.; Hong, B.; Horaguchi, T.; Hori, Y.; Hornback, D.; Hoshino, T.; Hotvedt, N.; Huang, J.; Huang, S.; Hur, M. G.; Ichihara, T.; Ichimiya, R.; Iinuma, H.; Ikeda, Y.; Imai, K.; Imazu, Y.; Imrek, J.; Inaba, M.; Inoue, Y.; Iordanova, A.; Isenhower, D.; Isenhower, L.; Ishihara, M.; Isinhue, A.; Isobe, T.; Issah, M.; Isupov, A.; Ivanishchev, D.; Iwanaga, Y.; Jacak, B. V.; Javani, M.; Jeon, S. J.; Jezghani, M.; Jia, J.; Jiang, X.; Jin, J.; Jinnouchi, O.; Johnson, B. M.; Jones, T.; Joo, K. 
S.; Jouan, D.; Jumper, D. S.; Kajihara, F.; Kametani, S.; Kamihara, N.; Kamin, J.; Kanda, S.; Kaneta, M.; Kaneti, S.; Kang, B. H.; Kang, J. H.; Kang, J. S.; Kanou, H.; Kapustinsky, J.; Karatsu, K.; Kasai, M.; Kawagishi, T.; Kawall, D.; Kawashima, M.; Kazantsev, A. V.; Kelly, S.; Kempel, T.; Key, J. A.; Khachatryan, V.; Khandai, P. K.; Khanzadeev, A.; Kijima, K. M.; Kikuchi, J.; Kim, A.; Kim, B. I.; Kim, C.; Kim, D. H.; Kim, D. J.; Kim, E.; Kim, E.-J.; Kim, G. W.; Kim, H. J.; Kim, K.-B.; Kim, M.; Kim, Y.-J.; Kim, Y. K.; Kim, Y.-S.; Kimelman, B.; Kinney, E.; Kiss, Á.; Kistenev, E.; Kitamura, R.; Kiyomichi, A.; Klatsky, J.; Klay, J.; Klein-Boesing, C.; Kleinjan, D.; Kline, P.; Koblesky, T.; Kochenda, L.; Kochetkov, V.; Kofarago, M.; Komatsu, Y.; Komkov, B.; Konno, M.; Koster, J.; Kotchetkov, D.; Kotov, D.; Kozlov, A.; Král, A.; Kravitz, A.; Krizek, F.; Kroon, P. J.; Kubart, J.; Kunde, G. J.; Kurihara, N.; Kurita, K.; Kurosawa, M.; Kweon, M. J.; Kwon, Y.; Kyle, G. S.; Lacey, R.; Lai, Y. S.; Lajoie, J. G.; Lebedev, A.; Le Bornec, Y.; Leckey, S.; Lee, B.; Lee, D. M.; Lee, G. H.; Lee, J.; Lee, K. B.; Lee, K. S.; Lee, M. K.; Lee, S.; Lee, S. H.; Lee, S. R.; Lee, T.; Leitch, M. J.; Leite, M. A. L.; Leitgab, M.; Lenzi, B.; Lewis, B.; Li, X.; Li, X. H.; Lichtenwalner, P.; Liebing, P.; Lim, H.; Lim, S. H.; Linden Levy, L. A.; Liška, T.; Litvinenko, A.; Liu, H.; Liu, M. X.; Love, B.; Lynch, D.; Maguire, C. F.; Makdisi, Y. I.; Makek, M.; Malakhov, A.; Malik, M. D.; Manion, A.; Manko, V. I.; Mannel, E.; Mao, Y.; Maruyama, T.; Mašek, L.; Masui, H.; Masumoto, S.; Matathias, F.; McCain, M. C.; McCumber, M.; McGaughey, P. L.; McGlinchey, D.; McKinney, C.; Means, N.; Meles, A.; Mendoza, M.; Meredith, B.; Miake, Y.; Mibe, T.; Midori, J.; Mignerey, A. C.; Mikeš, P.; Miki, K.; Miller, T. E.; Milov, A.; Mioduszewski, S.; Mishra, D. K.; Mishra, G. C.; Mishra, M.; Mitchell, J. T.; Mitrovski, M.; Miyachi, Y.; Miyasaka, S.; Mizuno, S.; Mohanty, A. K.; Mohapatra, S.; Montuenga, P.; Moon, H. J.; Moon, T.; Morino, Y.; Morreale, A.; Morrison, D. P.; Moskowitz, M.; Moss, J. M.; Motschwiller, S.; Moukhanova, T. V.; Mukhopadhyay, D.; Murakami, T.; Murata, J.; Mwai, A.; Nagae, T.; Nagamiya, S.; Nagashima, K.; Nagata, Y.; Nagle, J. L.; Naglis, M.; Nagy, M. I.; Nakagawa, I.; Nakagomi, H.; Nakamiya, Y.; Nakamura, K. R.; Nakamura, T.; Nakano, K.; Nam, S.; Nattrass, C.; Nederlof, A.; Netrakanti, P. K.; Newby, J.; Nguyen, M.; Nihashi, M.; Niida, T.; Nishimura, S.; Norman, B. E.; Nouicer, R.; Novák, T.; Novitzky, N.; Nukariya, A.; Nyanin, A. S.; Nystrand, J.; Oakley, C.; Obayashi, H.; O'Brien, E.; Oda, S. X.; Ogilvie, C. A.; Ohnishi, H.; Oide, H.; Ojha, I. D.; Oka, M.; Okada, K.; Omiwade, O. O.; Onuki, Y.; Orjuela Koop, J. D.; Osborn, J. D.; Oskarsson, A.; Otterlund, I.; Ouchida, M.; Ozawa, K.; Pak, R.; Pal, D.; Palounek, A. P. T.; Pantuev, V.; Papavassiliou, V.; Park, B. H.; Park, I. H.; Park, J.; Park, J. S.; Park, S.; Park, S. K.; Park, W. J.; Pate, S. F.; Patel, L.; Patel, M.; Pei, H.; Peng, J.-C.; Pereira, H.; Perepelitsa, D. V.; Perera, G. D. N.; Peresedov, V.; Peressounko, D. Yu.; Perry, J.; Petti, R.; Pinkenburg, C.; Pinson, R.; Pisani, R. P.; Proissl, M.; Purschke, M. L.; Purwar, A. K.; Qu, H.; Rak, J.; Rakotozafindrabe, A.; Ramson, B. J.; Ravinovich, I.; Read, K. F.; Rembeczki, S.; Reuter, M.; Reygers, K.; Reynolds, D.; Riabov, V.; Riabov, Y.; Richardson, E.; Rinn, T.; Riveli, N.; Roach, D.; Roche, G.; Rolnick, S. D.; Romana, A.; Rosati, M.; Rosen, C. A.; Rosendahl, S. S. E.; Rosnet, P.; Rowan, Z.; Rubin, J. 
G.; Rukoyatkin, P.; Ružička, P.; Rykov, V. L.; Ryu, M. S.; Ryu, S. S.; Sahlmueller, B.; Saito, N.; Sakaguchi, T.; Sakai, S.; Sakashita, K.; Sakata, H.; Sako, H.; Samsonov, V.; Sano, M.; Sano, S.; Sarsour, M.; Sato, H. D.; Sato, S.; Sato, T.; Sawada, S.; Schaefer, B.; Schmoll, B. K.; Sedgwick, K.; Seele, J.; Seidl, R.; Sekiguchi, Y.; Semenov, V.; Sen, A.; Seto, R.; Sett, P.; Sexton, A.; Sharma, D.; Shaver, A.; Shea, T. K.; Shein, I.; Shevel, A.; Shibata, T.-A.; Shigaki, K.; Shimomura, M.; Shohjoh, T.; Shoji, K.; Shukla, P.; Sickles, A.; Silva, C. L.; Silvermyr, D.; Silvestre, C.; Sim, K. S.; Singh, B. K.; Singh, C. P.; Singh, V.; Skolnik, M.; Skutnik, S.; Slunečka, M.; Smith, W. C.; Snowball, M.; Solano, S.; Soldatov, A.; Soltz, R. A.; Sondheim, W. E.; Sorensen, S. P.; Sourikova, I. V.; Staley, F.; Stankus, P. W.; Steinberg, P.; Stenlund, E.; Stepanov, M.; Ster, A.; Stoll, S. P.; Stone, M. R.; Sugitate, T.; Suire, C.; Sukhanov, A.; Sullivan, J. P.; Sumita, T.; Sun, J.; Sziklai, J.; Tabaru, T.; Takagi, S.; Takagui, E. M.; Takahara, A.; Taketani, A.; Tanabe, R.; Tanaka, K. H.; Tanaka, Y.; Taneja, S.; Tanida, K.; Tannenbaum, M. J.; Tarafdar, S.; Taranenko, A.; Tarján, P.; Tennant, E.; Themann, H.; Thomas, D.; Thomas, T. L.; Tieulent, R.; Timilsina, A.; Todoroki, T.; Togawa, M.; Toia, A.; Tojo, J.; Tomášek, L.; Tomášek, M.; Torii, H.; Towell, C. L.; Towell, R.; Towell, R. S.; Tram, V.-N.; Tserruya, I.; Tsuchimoto, Y.; Tsuji, T.; Tuli, S. K.; Tydesjö, H.; Tyurin, N.; Vale, C.; Valle, H.; van Hecke, H. W.; Vargyas, M.; Vazquez-Zambrano, E.; Veicht, A.; Velkovska, J.; Vértesi, R.; Vinogradov, A. A.; Virius, M.; Voas, B.; Vossen, A.; Vrba, V.; Vznuzdaev, E.; Wagner, M.; Walker, D.; Wang, X. R.; Watanabe, D.; Watanabe, K.; Watanabe, Y.; Watanabe, Y. S.; Wei, F.; Wei, R.; Wessels, J.; Whitaker, S.; White, A. S.; White, S. N.; Willis, N.; Winter, D.; Wolin, S.; Woody, C. L.; Wright, R. M.; Wysocki, M.; Xia, B.; Xie, W.; Xue, L.; Yalcin, S.; Yamaguchi, Y. L.; Yamaura, K.; Yang, R.; Yanovich, A.; Yasin, Z.; Ying, J.; Yokkaichi, S.; Yoo, J. H.; Yoon, I.; You, Z.; Young, G. R.; Younus, I.; Yu, H.; Yushmanov, I. E.; Zajc, W. A.; Zaudtke, O.; Zelenski, A.; Zhang, C.; Zhou, S.; Zimamyi, J.; Zolin, L.; Zou, L.; Phenix Collaboration
2016-02-01
Measurements of midrapidity charged-particle multiplicity distributions, dN_ch/dη, and midrapidity transverse-energy distributions, dE_T/dη, are presented for a variety of collision systems and energies. Included are distributions for Au+Au collisions at √(s_NN) = 200, 130, 62.4, 39, 27, 19.6, 14.5, and 7.7 GeV, Cu+Cu collisions at √(s_NN) = 200 and 62.4 GeV, Cu+Au collisions at √(s_NN) = 200 GeV, U+U collisions at √(s_NN) = 193 GeV, d+Au collisions at √(s_NN) = 200 GeV, ³He+Au collisions at √(s_NN) = 200 GeV, and p+p collisions at √(s_NN) = 200 GeV. Centrality-dependent distributions at midrapidity are presented in terms of the number of nucleon participants, N_part, and the number of constituent quark participants, N_qp. For all A+A collisions down to √(s_NN) = 7.7 GeV, it is observed that the midrapidity data are better described by scaling with N_qp than by scaling with N_part. Also presented are estimates of the Bjorken energy density, ε_BJ, and the ratio of dE_T/dη to dN_ch/dη, the latter of which is seen to be constant as a function of centrality for all systems.
Performance Monitoring of Residential Hot Water Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Anna; Lanzisera, Steven; Lutz, Jim
Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end use point and the water heater in 20 homes and collected 1 s flow and temperature data over an 8 month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
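From 1 s flow and temperature samples at a fixture, the waste in a single hot-water event can be estimated with a simple calculation like the sketch below. The event data, usefulness threshold and inlet temperature are hypothetical, not values from the study; 1 L of water is taken as 1 kg.

```python
# Back-of-the-envelope waste calculation for one hot-water draw: count the water
# (and its embodied energy) delivered before the fixture temperature reaches a
# usefulness threshold. All numbers are hypothetical.
RHO_CP = 4186.0          # J/(kg K), specific heat of water; 1 L of water ~ 1 kg
USEFUL_TEMP_C = 40.0     # delivery temperature considered "useful" (assumed)
COLD_INLET_C = 15.0      # cold-water inlet temperature (assumed)

# (flow in litres per second, temperature at the fixture in C), one sample per second
event = [(0.10, 18), (0.10, 22), (0.10, 27), (0.10, 33), (0.10, 38),
         (0.10, 42), (0.10, 45), (0.10, 46), (0.10, 46), (0.10, 46)]

wasted_litres = sum(q for q, t in event if t < USEFUL_TEMP_C)
used_litres = sum(q for q, t in event if t >= USEFUL_TEMP_C)
# Energy content of the wasted draw relative to the cold inlet.
wasted_joules = sum(q * RHO_CP * (t - COLD_INLET_C)
                    for q, t in event if t < USEFUL_TEMP_C)

print(f"water wasted waiting for hot water: {wasted_litres:.1f} L "
      f"({wasted_litres / (wasted_litres + used_litres):.0%} of the draw)")
print(f"energy in the wasted draw: {wasted_joules / 1e3:.1f} kJ")
```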
Fine-scale habitat modeling of a top marine predator: do prey data improve predictive capacity?
Torres, Leigh G; Read, Andrew J; Halpin, Patrick
2008-10-01
Predators and prey assort themselves relative to each other, the availability of resources and refuges, and the temporal and spatial scale of their interaction. Predictive models of predator distributions often rely on these relationships by incorporating data on environmental variability and prey availability to determine predator habitat selection patterns. This approach to predictive modeling holds true in marine systems where observations of predators are logistically difficult, emphasizing the need for accurate models. In this paper, we ask whether including prey distribution data in fine-scale predictive models of bottlenose dolphin (Tursiops truncatus) habitat selection in Florida Bay, Florida, U.S.A., improves predictive capacity. Environmental characteristics are often used as predictor variables in habitat models of top marine predators with the assumption that they act as proxies of prey distribution. We examine the validity of this assumption by comparing the response of dolphin distribution and fish catch rates to the same environmental variables. Next, the predictive capacities of four models, with and without prey distribution data, are tested to determine whether dolphin habitat selection can be predicted without recourse to describing the distribution of their prey. The final analysis determines the accuracy of predictive maps of dolphin distribution produced by modeling areas of high fish catch based on significant environmental characteristics. We use spatial analysis and independent data sets to train and test the models. Our results indicate that, due to high habitat heterogeneity and the spatial variability of prey patches, fine-scale models of dolphin habitat selection in coastal habitats will be more successful if environmental variables are used as predictor variables of predator distributions rather than relying on prey data as explanatory variables. However, predictive modeling of prey distribution as the response variable based on environmental variability did produce high predictive performance of dolphin habitat selection, particularly foraging habitat.
Modeling complexity in engineered infrastructure system: Water distribution network as an example
NASA Astrophysics Data System (ADS)
Zeng, Fang; Li, Xiang; Li, Ke
2017-02-01
The complex topology and adaptive behavior of infrastructure systems are driven by both self-organization of the demand and rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed following the combination of local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs on some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and the network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. In contrast, the improvement in efficiency from engineering optimization is limited and relatively insignificant. Redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
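The flavour of growing a network from local rules can be shown with a minimal sketch: demand nodes appear near existing ones and each new node connects to the nearest existing node (the cheapest pipe by Euclidean length). This is only an illustration of combining demand-driven growth with a local optimisation rule; it is not the authors' model, which also encodes engineering constraints and an urban-growth scaling law.

```python
# Minimal sketch of growing a spatial distribution network under a local rule:
# each new demand node connects to the nearest existing node.
import math
import random

random.seed(3)
nodes = [(0.5, 0.5)]            # source / initial node
edges = []                      # (i, j, pipe length)

def nearest(point, nodes):
    return min(range(len(nodes)), key=lambda i: math.dist(point, nodes[i]))

for _ in range(200):
    # New demand nodes cluster around existing ones (a crude stand-in for
    # demand-driven urban growth).
    anchor = random.choice(nodes)
    p = (anchor[0] + random.gauss(0, 0.05), anchor[1] + random.gauss(0, 0.05))
    j = nearest(p, nodes)
    nodes.append(p)
    edges.append((j, len(nodes) - 1, math.dist(p, nodes[j])))

total_pipe = sum(length for _, _, length in edges)
degree = [0] * len(nodes)
for i, j, _ in edges:
    degree[i] += 1
    degree[j] += 1
print(f"nodes: {len(nodes)}, total pipe length: {total_pipe:.2f}")
print(f"max node degree: {max(degree)} (hubs emerge near early, central nodes)")
```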
2015-05-22
Applications cover a broad spectrum, including cooperative control of unmanned air vehicles, autonomous underwater vehicles, and distributed sensor networks for managing power levels of wireless networks, as well as air and ground transportation systems for air traffic control and payload transport. Keywords: network systems, large-scale systems, adaptive control, discontinuous systems.
Parallel and distributed computation for fault-tolerant object recognition
NASA Technical Reports Server (NTRS)
Wechsler, Harry
1988-01-01
The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault-tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
McInnes, Alistair M.; Khoosal, Arjun; Murrell, Ben; Merkle, Dagmar; Lacerda, Miguel; Nyengera, Reason; Coetzee, Janet C.; Edwards, Loyd C.; Ryan, Peter G.; Rademan, Johan; van der Westhuizen, Jan J; Pichegru, Lorien
2015-01-01
Studies investigating how mobile marine predators respond to their prey are limited due to the challenging nature of the environment. While marine top predators are increasingly easy to study thanks to developments in bio-logging technology, typically there is scant information on the distribution and abundance of their prey, largely due to the specialised nature of acquiring this information. We explore the potential of using single-beam recreational fish-finders (RFF) to quantify relative forage fish abundance and draw inferences of the prey distribution at a fine spatial scale. We compared fish school characteristics as inferred from the RFF with that of a calibrated scientific split-beam echo-sounder (SES) by simultaneously operating both systems from the same vessel in Algoa Bay, South Africa. Customized open-source software was developed to extract fish school information from the echo returns of the RFF. For schools insonified by both systems, there was close correspondence between estimates of mean school depth (R2 = 0.98) and school area (R2 = 0.70). Estimates of relative school density (mean volume backscattering strength; Sv) measured by the RFF were negatively biased through saturation of this system given its smaller dynamic range. A correction factor applied to the RFF-derived density estimates improved the comparability between the two systems. Relative abundance estimates using all schools from both systems were congruent at scales from 0.5 km to 18 km with a strong positive linear trend in model fit estimates with increasing scale. Although absolute estimates of fish abundance cannot be derived from these systems, they are effective at describing prey school characteristics and have good potential for mapping forage fish distribution and relative abundance. Using such relatively inexpensive systems could greatly enhance our understanding of predator-prey interactions. PMID:26600300
Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering
NASA Astrophysics Data System (ADS)
Koehler, Sarah Muraoka
Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. 
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts to previous work on MPC for ramp metering where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and apply the inexact interior point method to this nonlinear non-convex ramp metering problem.
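As a generic baseline for the kind of distributed optimization discussed in this thesis abstract, the sketch below implements consensus ADMM for a small box-constrained quadratic program shared among several subsystems. ADMM is one of the popularly proposed methods mentioned above, not the thesis's primal-dual active-set or inexact interior-point algorithms, and all problem data here are randomly generated for illustration.

```python
# Consensus ADMM for a small distributed QP: N subsystems share a decision
# vector z, each holding a private quadratic cost; box constraints are enforced
# in the consensus step. Compared against a centralised solution as a check.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, n = 4, 6                                   # subsystems, decision dimension
A = [rng.standard_normal((n, n)) for _ in range(N)]
Q = [a @ a.T + np.eye(n) for a in A]          # positive definite local costs
c = [rng.standard_normal(n) for _ in range(N)]
lb, ub = -1.0, 1.0                            # box constraints on z

rho = 2.0
x = [np.zeros(n) for _ in range(N)]
u = [np.zeros(n) for _ in range(N)]
z = np.zeros(n)

for _ in range(300):
    # Local (parallelisable) updates: each subsystem solves a small linear system.
    x = [np.linalg.solve(Q[i] + rho * np.eye(n), rho * (z - u[i]) - c[i])
         for i in range(N)]
    # Consensus update: average the local variables, then project onto the box.
    z = np.clip(np.mean([x[i] + u[i] for i in range(N)], axis=0), lb, ub)
    # Dual updates.
    u = [u[i] + x[i] - z for i in range(N)]

def total_cost(v):
    return sum(0.5 * v @ Q[i] @ v + c[i] @ v for i in range(N))

ref = minimize(total_cost, np.zeros(n), bounds=[(lb, ub)] * n)
print("ADMM cost       :", round(total_cost(z), 4))
print("centralised cost:", round(ref.fun, 4))
```

The local x-updates are the part that would run on separate embedded processors, with only z and the duals exchanged each round, which is the communication pattern whose delay sensitivity the thesis analyses.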
The microscopic basis for strain localisation in porous media
NASA Astrophysics Data System (ADS)
Main, Ian; Kun, Ferenz; Pal, Gergo; Janosi, Zoltan
2017-04-01
The spontaneous emergence of localized cooperative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uni-axial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of preexisting heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
Nucleation, growth and localisation of microcracks: implications for predictability of rock failure
NASA Astrophysics Data System (ADS)
Main, I. G.; Kun, F.; Pál, G.; Jánosi, Z.
2016-12-01
The spontaneous emergence of localized co-operative deformation is an important phenomenon in the development of shear faults in porous media. It can be studied by empirical observation, by laboratory experiment or by numerical simulation. Here we investigate the evolution of damage and fragmentation leading up to and including system-sized failure in a numerical model of a porous rock, using discrete element simulations of the strain-controlled uniaxial compression of cylindrical samples of different finite size. As the system approaches macroscopic failure the number of fractures and the energy release rate both increase as a time-reversed Omori law, with scaling constants for the frequency-size distribution and the inter-event time, including their temporal evolution, that closely resemble those of natural experiments. The damage progressively localizes in a narrow shear band, ultimately a fault 'gouge' containing a large number of poorly-sorted non-cohesive fragments on a broad bandwidth of scales, with properties similar to those of natural and experimental faults. We determine the position and orientation of the central fault plane, the width of the deformation band and the spatial and mass distribution of fragments. The relative width of the deformation band decreases as a power law of the system size and the probability distribution of the angle of the damage plane converges to around 30 degrees, representing an emergent internal coefficient of friction of 0.7 or so. The mass of fragments is power law distributed, with an exponent that does not depend on scale, and is near that inferred for experimental and natural fault gouges. The fragments are in general angular, with a clear self-affine geometry. The consistency of this model with experimental and field results confirms the critical roles of pre-existing heterogeneity, elastic interactions, and finite system size to grain size ratio on the development of faults, and ultimately to assessing the predictive power of forecasts of failure time in such media.
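The accelerating precursory rate described in these abstracts, a time-reversed Omori law rate(t) = k / (t_f - t)^p, can be fitted directly to an event-rate series. The sketch below does this with scipy on synthetic Poisson counts (not the discrete-element simulation outputs of the studies) and reports the recovered exponent and forecast failure time.

```python
# Fit a time-reversed (inverse) Omori law, rate(t) = k / (t_f - t)^p, to a
# synthetic precursory event-rate series.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t_f_true, k_true, p_true = 100.0, 50.0, 0.9
t = np.linspace(0.0, 95.0, 200)                       # observation window before failure
rate = k_true / (t_f_true - t) ** p_true
rate_obs = rng.poisson(rate).astype(float)            # noisy counts per unit time

def inverse_omori(t, k, p, t_f):
    return k / (t_f - t) ** p

popt, _ = curve_fit(inverse_omori, t, rate_obs,
                    p0=[10.0, 1.0, t.max() + 5.0],
                    bounds=([0.0, 0.1, t.max() + 0.1], [np.inf, 5.0, np.inf]))
k_fit, p_fit, tf_fit = popt
print(f"fitted exponent p  = {p_fit:.2f} (true {p_true})")
print(f"forecast failure t = {tf_fit:.1f} (true {t_f_true})")
```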
Todd, M. Jason; Lowrance, R. Richard; Goovaerts, Pierre; Vellidis, George; Pringle, Catherine M.
2010-01-01
Blackwater streams are found throughout the Coastal Plain of the southeastern United States and are characterized by a series of instream floodplain swamps that play a critical role in determining the water quality of these systems. Within the state of Georgia, many of these streams are listed in violation of the state’s dissolved oxygen (DO) standard. Previous work has shown that sediment oxygen demand (SOD) is elevated in instream floodplain swamps and due to these areas of intense oxygen demand, these locations play a major role in determining the oxygen balance of the watershed as a whole. This work also showed SOD rates to be positively correlated with the concentration of total organic carbon. This study builds on previous work by using geostatistics and Sequential Gaussian Simulation to investigate the patchiness and distribution of total organic carbon (TOC) at the reach scale. This was achieved by interpolating TOC observations and simulated SOD rates based on a linear regression. Additionally, this study identifies areas within the stream system prone to high SOD at representative 3rd and 5th order locations. Results show that SOD was spatially correlated with the differences in distribution of TOC at both locations and that these differences in distribution are likely a result of the differing hydrologic regime and watershed position. Mapping of floodplain soils at the watershed scale shows that areas of organic sediment are widespread and become more prevalent in higher order streams. DO dynamics within blackwater systems are a complicated mix of natural and anthropogenic influences, but this paper illustrates the importance of instream swamps in enhancing SOD at the watershed scale. Moreover, our study illustrates the influence of instream swamps on oxygen demand while providing support that many of these systems are naturally low in DO. PMID:20938491
Wang, Wendong; Song, Shan; Zhang, Xiaoni; Mitchell Spear, J; Wang, Xiaochang; Wang, Wen; Ding, Zhenzhen; Qiao, Zixia
2014-07-01
Observations of aluminum-containing sediments/scales formed within distribution pipes have been reported for several decades. In this study, the effects of Ni(2+) on the formation and transformation processes of aluminum hydroxide sediment in a simulated drinking water distribution system were investigated using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR), scanning electron microscopy (SEM), and thermodynamic calculation methods. It was determined that the existence of Ni(2+) had notable effects on the formation of bayerite. In the system without Ni(2+) addition, there was no X-ray diffraction signal observed after 400 d of aging. In the presence of Ni(2+), however, even in small amounts (Ni/Al=1:100), bayerite formed in as little as 3 d at pH 8.5. As the Ni/Al molar ratio increased from 1:100 to 1:10, the amount of bayerite formed on the pipeline increased further; meanwhile, the specific surface area of the pipe scale decreased from 160 to 122 m(2) g(-1). In the system with a Ni/Al molar ratio of 1:3, the bayerite diffraction signal became weaker, and it disappeared when the Ni/Al molar ratio increased above 1:1. At these high Ni/Al molar ratios, Ni5Al4O11⋅18H2O was determined to be the major component of the pipe scale. Further study indicated that the presence of Ni(2+) promoted the formation of bayerite and Ni5Al4O11⋅18H2O under basic conditions. At lower pH (6.5), however, the existence of Ni(2+) had little effect on the formation of bayerite and Ni5Al4O11⋅18H2O; rather, the adsorption of Ni(2+) onto amorphous Al(OH)3 promoted the formation of crystalline Ni(OH)2. Copyright © 2013 Elsevier Ltd. All rights reserved.
Classification and Mapping of Agricultural Land for National Water-Quality Assessment
Gilliom, Robert J.; Thelin, Gail P.
1997-01-01
Agricultural land use is one of the most important influences on water quality at national and regional scales. Although there is great diversity in the character of agricultural land, variations follow regional patterns that are influenced by environmental setting and economics. These regional patterns can be characterized by the distribution of crops. A new approach to classifying and mapping agricultural land use for national water-quality assessment was developed by combining information on general land-use distribution with information on crop patterns from agricultural census data. Separate classification systems were developed for row crops and for orchards, vineyards, and nurseries. These two general categories of agricultural land are distinguished from each other in the land-use classification system used in the U.S. Geological Survey national Land Use and Land Cover database. Classification of cropland was based on the areal extent of crops harvested. The acreage of each crop in each county was divided by total row-crop area or total orchard, vineyard, and nursery area, as appropriate, thus normalizing the crop data and making the classification independent of total cropland area. The classification system was developed using simple percentage criteria to define combinations of 1 to 3 crops that account for 50 percent or more of harvested acreage in a county. The classification system consists of 21 level I categories and 46 level II subcategories for row crops, and 26 level I categories and 19 level II subcategories for orchards, vineyards, and nurseries. All counties in the United States with reported harvested acreage are classified in these categories. The distribution of agricultural land within each county, however, must be evaluated on the basis of general land-use data. This can be done at the national scale using 'Major Land Uses of the United States,' at the regional scale using data from the national Land Use and Land Cover database, or at smaller scales using locally available data.
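The percentage criteria can be sketched as a small search: normalize each crop's harvested acreage by the county total and take the smallest combination of one to three crops whose shares reach 50 percent. This is a simplified illustration of the idea, not the USGS procedure; the example county and the tie-breaking rule (largest combined share first) are assumptions:

```python
# Sketch of the percentage-criteria idea: normalize county crop acreages by the
# total, then find the smallest combination of 1-3 crops whose shares reach
# 50 percent.  Example data and tie-breaking are illustrative assumptions.
from itertools import combinations

def classify_county(crop_acres, threshold=0.50):
    total = sum(crop_acres.values())
    if total <= 0:
        return None
    shares = {crop: acres / total for crop, acres in crop_acres.items()}
    # Try 1-crop, then 2-crop, then 3-crop combinations; within each size,
    # prefer the combination with the largest combined share.
    for size in (1, 2, 3):
        best = max(combinations(shares, size), key=lambda c: sum(shares[k] for k in c))
        if sum(shares[k] for k in best) >= threshold:
            return tuple(sorted(best, key=lambda k: -shares[k]))
    return None  # no 1-3 crop combination reaches the threshold

county = {"corn": 40_000, "soybeans": 35_000, "wheat": 15_000, "cotton": 10_000}
print(classify_county(county))  # ('corn', 'soybeans')
```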
NASA Astrophysics Data System (ADS)
Norrbin, F.; Priou, P. D.; Varela, A. P.
2016-02-01
We studied the influence of dense layers of phytoplankton and aggregates on shaping the vertical distribution of zooplankton in a North Norwegian fjord using a Video Plankton Recorder (VPR). This instrument provided fine-scale vertical distributions (cm-m scale) of planktonic organisms as well as aggregates of marine snow in relation to environmental conditions. At the height to late stage of the spring phytoplankton bloom in May, the outer part of the fjord was dominated by Phaeocystis pouchetii, while diatoms (Chaetoceros spp.) dominated in the innermost basin. Small copepod species like Pseudocalanus spp., Microsetella norvegica, and Oithona spp. prevailed over larger copepod species in the inner part of the fjord, whereas the outer part was dominated by large copepods like Calanus finmarchicus. While the zooplankton were spread out over the water column during the early stage of the bloom, in May they were linked to the phytoplankton vertical distribution, and in the winter situation they were found in deeper waters. Herbivorous zooplankton species were affected by phytoplankton species composition; C. finmarchicus and Pseudocalanus spp. avoided the dense layer of P. pouchetii, while herbivorous zooplankton matched the distribution of the diatom-dominated bloom. Small, omnivorous copepod species like Microsetella sp., Oithona sp. and Pseudocalanus sp. were often associated with dense layers of snow aggregates. This distribution may provide a shelter from predators as well as a food source. Natural or anthropogenic-induced changes in phytoplankton composition and aggregate distribution may thus influence food-web interactions.
NASA Astrophysics Data System (ADS)
Hill, Nicole A.; Lucieer, Vanessa; Barrett, Neville S.; Anderson, Tara J.; Williams, Stefan B.
2014-06-01
Management of the marine environment is often hampered by a lack of comprehensive spatial information on the distribution of diversity and the bio-physical processes structuring regional ecosystems. This is particularly true in temperate reef systems beyond depths easily accessible to divers. Yet these systems harbor a diversity of sessile life that provide essential ecosystem services, sustain fisheries and, as with shallower ecosystems, are also increasingly vulnerable to anthropogenic impacts and environmental change. Here we use cutting-edge tools (Autonomous Underwater Vehicles and ship-borne acoustics) and analytical approaches (predictive modelling) to quantify and map these highly productive ecosystems. We find the occurrence of key temperate-reef biota can be explained and predicted using standard (depth) and novel (texture) surrogates derived from multibeam acoustic data, and geographic surrogates. This suggests that combinations of fine-scale processes, such as light limitation and habitat complexity, and broad-scale processes, such as regional currents and exposure regimes, are important in structuring these diverse deep-reef communities. While some dominant habitat forming biota, including canopy algae, were widely distributed, others, including gorgonians and sea whips, exhibited patchy and restricted distributions across the reef system. In addition to providing the first quantitative and full coverage maps of reef diversity for this area, our modelling revealed that offshore reefs represented a regional diversity hotspot that is of high ecological and conservation value. Regional reef systems should not, therefore, be considered homogenous units in conservation planning and management. Full-coverage maps of the predicted distribution of biota (and associated uncertainty) are likely to be increasingly valuable, not only for conservation planning, but in the ongoing management and monitoring of these less-accessible ecosystems.
NASA Astrophysics Data System (ADS)
Tick, G. R.; Wei, S.; Sun, H.; Zhang, Y.
2016-12-01
Pore-scale heterogeneity, NAPL distribution, and sorption/desorption processes can significantly affect aqueous phase elution and mass flux in porous media systems. The application of a scale-independent fractional derivative model (tFADE) was used to simulate elution curves for a series of columns (5 cm, 7 cm, 15 cm, 25 cm, and 80 cm) homogeneously packed with 20/30-mesh sand and distributed with uniform saturations (7-24%) of NAPL phase trichloroethene (TCE). An additional set of columns (7 cm and 25 cm) was packed with a heterogeneous distribution of quartz sand upon which TCE was emplaced by imbibing the immiscible liquid, under stable displacement conditions, to simulate a spill-type process. The tFADE model was able to better represent experimental elution behavior for systems that exhibited extensive long-term concentration tailing, while requiring far fewer parameters than typical multi-rate mass transfer (MRMT) models. However, the tFADE model was not able to effectively simulate the entire elution curve for systems with short concentration tailing periods, since it assumes a power-law distribution of dissolution rates for TCE. Such limitations may be resolved using the tempered fractional derivative model, which can capture single-rate mass transfer and therefore the short elution concentration tailing behavior. Numerical solution of the tempered fractional-derivative model in bounded domains, however, remains a challenge and requires further study. Nevertheless, the tFADE model shows excellent promise for understanding concentration elution behavior in systems where physical heterogeneity, non-uniform NAPL distribution, and pronounced sorption-desorption effects are present or dominant.
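The contrast drawn above between power-law and tempered power-law behaviour can be illustrated directly: exponential tempering cuts the late-time tail off beyond roughly one over the tempering rate, which is why a tempered model can reproduce short tailing periods that a pure power-law memory cannot. The exponent and tempering rate below are illustrative assumptions, and the snippet is not the tFADE solver itself:

```python
# Illustrative sketch (not the tFADE solver): compare a pure power-law late-time
# tail with an exponentially tempered power-law tail.  Tempering cuts the tail
# off beyond ~1/lambda, mimicking shorter concentration tailing.
import numpy as np

gamma_exp = 0.7    # assumed power-law (stability) exponent
lam = 0.05         # assumed tempering rate [1/time]
t = np.logspace(0, 3, 200)

tail_power = t ** -(1.0 + gamma_exp)
tail_tempered = t ** -(1.0 + gamma_exp) * np.exp(-lam * t)

# Ratio shows where tempering starts to dominate (t >~ 1/lam = 20 time units).
for ti, r in zip(t[::40], (tail_tempered / tail_power)[::40]):
    print(f"t = {ti:8.1f}   tempered/power = {r:.3e}")
```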
Deshommes, Elise; Laroche, Laurent; Deveau, Dominique; Nour, Shokoufeh; Prévost, Michèle
2017-09-05
Thirty-three households were monitored in a full-scale water distribution system, to investigate the impact of recent (<2 yr) or old partial lead service line replacements (PLSLRs). Total and particulate lead concentrations were measured using repeat sampling over a period of 1-20 months. Point-of-entry filters were installed to capture sporadic release of particulate lead from the lead service lines (LSLs). Mean concentrations increased immediately after PLSLRs and erratic particulate lead spikes were observed over the 18 month post-PLSLR monitoring period. The mass of lead released during this time frame indicates the occurrence of galvanic corrosion and scale destabilization. System-wide, lead concentrations were however lower in households with PLSLRs as compared to those with no replacement, especially for old PLSLRs. Nonetheless, 61% of PLSLR samples still exceeded 10 μg/L, reflecting the importance of implementing full LSL replacement and efficient risk communication. Acute concentrations measured immediately after PLSLRs demonstrate the need for appropriate flushing procedures to prevent lead poisoning.
NASA Astrophysics Data System (ADS)
Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.
2011-12-01
Applications of sequential data assimilation methods have been increasing in hydrology as a way to reduce uncertainty in model predictions. In a distributed hydrologic model there are many types of state variables, and they interact with one another on different time scales. However, a framework to handle the delayed response that originates from these differing time scales of hydrologic processes has not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme to account for the lagged response of internal states in a distributed hydrologic model using two filtering schemes: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is a widely used sub-optimal filter that enables efficient computation with a limited number of ensemble members, but it is still based on a Gaussian approximation. PF can be an alternative in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions involved. In the case of PF, an advanced particle regularization scheme is also implemented to preserve the diversity of the particle system. In the case of EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and implemented on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecast results from PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy. Discussions focus on the prospects and limitations of each data assimilation method.
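For orientation, the EnKF analysis step mentioned above can be written compactly in its generic, perturbed-observation (stochastic) form. This is a textbook sketch under assumed dimensions and error statistics, not the lagged filtering scheme or the WEP model implementation from the study:

```python
# Minimal sketch of a stochastic EnKF analysis step (generic textbook form, not
# the lagged WEP implementation described in the study).  State dimension,
# observation operator H, and error levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_state, n_obs, n_ens = 4, 2, 50

H = np.zeros((n_obs, n_state))
H[0, 0] = H[1, 2] = 1.0                   # observe state components 0 and 2
R = 0.1 * np.eye(n_obs)                   # observation-error covariance

ens = rng.normal(1.0, 0.5, (n_state, n_ens))   # forecast ensemble (columns)
y_obs = np.array([1.2, 0.8])                   # assumed observation vector

def enkf_update(ens, y_obs, H, R, rng):
    """Perturbed-observation EnKF analysis update."""
    x_mean = ens.mean(axis=1, keepdims=True)
    A = ens - x_mean                             # anomalies
    P = A @ A.T / (ens.shape[1] - 1)             # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
    y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, ens.shape[1]).T
    return ens + K @ (y_pert - H @ ens)

analysis = enkf_update(ens, y_obs, H, R, rng)
print("forecast mean:", ens.mean(axis=1))
print("analysis mean:", analysis.mean(axis=1))
```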
Landscape patterns and soil organic carbon stocks in agricultural bocage landscapes
NASA Astrophysics Data System (ADS)
Viaud, Valérie; Lacoste, Marine; Michot, Didier; Walter, Christian
2014-05-01
Soil organic carbon (SOC) has a crucial impact on global carbon storage at the world scale. SOC spatial variability is controlled by the landscape patterns resulting from continuous interactions between the physical environment and society. Natural and anthropogenic processes occurring and interplaying at the landscape scale, such as soil redistribution in the lateral and vertical dimensions by tillage and water erosion or the spatial differentiation of land-use and land-management practices, strongly affect SOC dynamics. Inventories of SOC stocks, reflecting their spatial distribution, are thus key elements for developing management strategies that improve carbon sequestration and mitigate climate change and soil degradation. This study aims to quantify SOC stocks and their spatial distribution in a 1,000-ha agricultural bocage landscape with dairy production as the dominant farming system (Zone Atelier Armorique, LTER Europe, NW France). The site is characterized by high heterogeneity over short distances due to a high diversity of soils with varying waterlogging, soil parent material, topography, land-use and hedgerow density. SOC content and stocks were measured to 105-cm depth at 200 sampling locations selected using conditioned Latin hypercube sampling. Additional sampling was designed specifically to explore the SOC distribution near hedges: 112 points were sampled at fixed distances along 14 transects perpendicular to hedges. We illustrate the heterogeneity of the spatial and vertical distribution of SOC stocks at the landscape scale, and quantify SOC stocks in the various landscape components. Using multivariate statistics, we discuss the variability and co-variability of the existing spatial organization of cropping systems, environmental factors, and SOC stocks over the landscape. Ultimately, our results may contribute to improving regional or national digital soil mapping approaches, by considering the distribution of SOC stocks within each modeling unit and by accounting for the impact of sensitive ecosystems.
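A plain (unconditioned) Latin hypercube draw over assumed environmental covariates gives the flavour of the sampling design, although conditioned Latin hypercube sampling additionally constrains the draw to the covariate values that actually occur on the landscape. The covariates, ranges, and the SciPy (>= 1.7) qmc interface below are assumptions for illustration:

```python
# Sketch of Latin hypercube selection of sampling locations over assumed
# environmental covariates (plain LHS, a simplification of the conditioned
# Latin hypercube sampling used in the study).  Requires SciPy >= 1.7.
import numpy as np
from scipy.stats import qmc

n_sites = 200
# Assumed covariates and ranges: elevation (m), slope (deg), distance to hedge (m).
lower = [5.0, 0.0, 0.0]
upper = [120.0, 15.0, 300.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=n_sites)           # points in [0, 1)^3
sites = qmc.scale(unit_sample, lower, upper)      # rescale to covariate ranges

print(sites[:5])   # first five candidate sampling locations in covariate space
```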
Sample distribution in peak mode isotachophoresis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Shimon; Schwartz, Ortal; Bercovici, Moran, E-mail: mberco@technion.ac.il
We present an analytical study of peak mode isotachophoresis (ITP), and provide closed form solutions for sample distribution and electric field, as well as for leading-, trailing-, and counter-ion concentration profiles. Importantly, the solution we present is valid not only for the case of fully ionized species, but also for systems of weak electrolytes which better represent real buffer systems and for multivalent analytes such as proteins and DNA. The model reveals two major scales which govern the electric field and buffer distributions, and an additional length scale governing analyte distribution. Using well-controlled experiments and numerical simulations, we verify and validate the model and highlight its key merits as well as its limitations. We demonstrate the use of the model for determining the peak concentration of focused sample based on known buffer and analyte properties, and show it differs significantly from commonly used approximations based on the interface width alone. We further apply our model for studying reactions between multiple species having different effective mobilities yet co-focused at a single ITP interface. We find a closed form expression for an effective on-rate which depends on the reactant distributions, and derive the conditions for optimizing such reactions. Interestingly, the model reveals that maximum reaction rate is not necessarily obtained when the concentration profiles of the reacting species perfectly overlap. In addition to the exact solutions, we derive throughout several closed form engineering approximations which are based on elementary functions and are simple to implement, yet maintain the interplay between the important scales. Both the exact and approximate solutions provide insight into sample focusing and can be used to design and optimize ITP-based assays.
The origin of polygonal troughs on the northern plains of Mars
NASA Astrophysics Data System (ADS)
Pechmann, J. C.
1980-05-01
The morphology, distribution, geologic environment and relative age of large-scale polygonal trough systems on Mars are examined. The troughs are steep-walled, flat-floored, sinuous depressions typically 200-800 m wide, 20-120 m deep and spaced 5-10 km apart. The mechanics of formation of tension cracks is reviewed to identify the factors controlling the scale of tension crack systems; special emphasis is placed on thermal cracking in permafrost. It is shown that because of the extremely large scale of the Martian fracture systems, they could not have formed by thermal cracking in permafrost, desiccation cracking in sediments or contraction cracking in cooling lava. On the basis of photogeologic evidence and analog studies, it is proposed that polygonal troughs on the northern plains of Mars are grabens.
Self-Organized Bistability Associated with First-Order Phase Transitions
NASA Astrophysics Data System (ADS)
di Santo, Serena; Burioni, Raffaella; Vezzani, Alessandro; Muñoz, Miguel A.
2016-06-01
Self-organized criticality elucidates the conditions under which physical and biological systems tune themselves to the edge of a second-order phase transition, with scale invariance. Motivated by the empirical observation of bimodal distributions of activity in neuroscience and other fields, we propose and analyze a theory for the self-organization to the point of phase coexistence in systems exhibiting a first-order phase transition. It explains the emergence of regular avalanches with attributes of scale invariance that coexist with huge anomalous ones, with realizations in many fields.
Modelling non-equilibrium thermodynamic systems from the speed-gradient principle.
Khantuleva, Tatiana A; Shalymov, Dmitry S
2017-03-06
The application of the speed-gradient (SG) principle to the non-equilibrium distribution systems far away from thermodynamic equilibrium is investigated. The options for applying the SG principle to describe the non-equilibrium transport processes in real-world environments are discussed. Investigation of a non-equilibrium system's evolution at different scale levels via the SG principle allows for a fresh look at the thermodynamics problems associated with the behaviour of the system entropy. Generalized dynamic equations for finite and infinite number of constraints are proposed. It is shown that the stationary solution to the equations, resulting from the SG principle, entirely coincides with the locally equilibrium distribution function obtained by Zubarev. A new approach to describe time evolution of systems far from equilibrium is proposed based on application of the SG principle at the intermediate scale level of the system's internal structure. The problem of the high-rate shear flow of viscous fluid near the rigid plane plate is discussed. It is shown that the SG principle allows closed mathematical models of non-equilibrium processes to be constructed. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
Compounding approach for univariate time series with nonstationary variances
NASA Astrophysics Data System (ADS)
Schäfer, Rudi; Barkhofen, Sonja; Guhr, Thomas; Stöckmann, Hans-Jürgen; Kuhl, Ulrich
2015-12-01
A defining feature of nonstationary systems is the time dependence of their statistical parameters. Measured time series may exhibit Gaussian statistics on short time horizons, due to the central limit theorem. The sample statistics for long time horizons, however, averages over the time-dependent variances. To model the long-term statistical behavior, we compound the local distribution with the distribution of its parameters. Here, we consider two concrete, but diverse, examples of such nonstationary systems: the turbulent air flow of a fan and a time series of foreign exchange rates. Our main focus is to empirically determine the appropriate parameter distribution for the compounding approach. To this end, we extract the relevant time scales by decomposing the time signals into windows and determine the distribution function of the thus obtained local variances.
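The compounding recipe described above can be sketched in a few lines: estimate local variances in windows and represent the long-horizon distribution as a mixture of Gaussians over those variances. The window length and the synthetic variance process below are illustrative assumptions, not the turbulence or exchange-rate data of the study:

```python
# Minimal sketch of the compounding idea: a synthetic series is locally Gaussian
# with a slowly varying variance; estimating local variances in windows and
# mixing Gaussians over them reproduces the heavy-tailed long-horizon statistics.
# Window length and the variance process are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n, win = 100_000, 500

# Synthetic nonstationary signal: Gaussian with a time-dependent variance.
local_sigma = np.repeat(np.exp(rng.normal(0.0, 0.5, n // win)), win)
x = rng.normal(0.0, 1.0, n) * local_sigma

# 1) Extract local variances from non-overlapping windows.
local_var = x.reshape(-1, win).var(axis=1)

# 2) Compound: long-horizon density as an equal-weight mixture of zero-mean
#    Gaussians with the empirically observed variances.
grid = np.linspace(-8, 8, 401)
compounded = np.mean(
    [np.exp(-grid**2 / (2 * v)) / np.sqrt(2 * np.pi * v) for v in local_var],
    axis=0)

# Compare tails: empirical kurtosis of x exceeds the Gaussian value of 3.
kurt = np.mean((x - x.mean())**4) / x.var()**2
print(f"empirical kurtosis = {kurt:.2f} (Gaussian = 3)")
```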
2008-10-01
…; Aeolos, a distributed intrusion detection and event correlation infrastructure; STAND, a training-set sanitization technique applicable to ADs. Summary of findings: (a) automatic patch generation; (b) better patch management; (c) artificial diversity; (d) distributed anomaly detection.
Near-surface wind speed statistical distribution: comparison between ECMWF System 4 and ERA-Interim
NASA Astrophysics Data System (ADS)
Marcos, Raül; Gonzalez-Reviriego, Nube; Torralba, Verónica; Cortesi, Nicola; Young, Doo; Doblas-Reyes, Francisco J.
2017-04-01
In the framework of seasonal forecast verification, knowing whether the characteristics of the climatological wind speed distribution simulated by the forecasting systems are similar to the observed ones is essential to guide the subsequent process of bias adjustment. To shed some light on this topic, this work assesses the properties of the statistical distributions of 10m wind speed from both the ERA-Interim reanalysis and seasonal forecasts of ECMWF System 4. The 10m wind speed distribution has been characterized in terms of the four main moments of the probability distribution (mean, standard deviation, skewness and kurtosis) together with the coefficient of variation and the Shapiro-Wilk goodness-of-fit test, allowing the identification of regions with higher wind variability and non-Gaussian behaviour at monthly time-scales. Also, the comparison of the predicted and observed 10m wind speed distributions has been assessed considering both inter-annual and intra-seasonal variability. Such a comparison is important to both the climate research and climate services communities because it provides useful climate information for decision-making processes and wind industry applications.
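The distributional characterization described above — the first four moments, the coefficient of variation, and a Shapiro-Wilk normality test — can be computed directly with SciPy. The Weibull-distributed synthetic sample below stands in for reanalysis or System 4 data and is an assumption for illustration only:

```python
# Sketch: characterizing a 10 m wind speed sample by the first four moments,
# the coefficient of variation, and the Shapiro-Wilk normality test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
wind = rng.weibull(2.0, 1000) * 8.0     # assumed monthly 10 m wind speeds (m/s)

summary = {
    "mean": wind.mean(),
    "std": wind.std(ddof=1),
    "skewness": stats.skew(wind),
    "kurtosis": stats.kurtosis(wind),          # excess kurtosis
    "coef_variation": wind.std(ddof=1) / wind.mean(),
}
w_stat, p_value = stats.shapiro(wind)
summary["shapiro_p"] = p_value                 # small p -> non-Gaussian behaviour

for name, value in summary.items():
    print(f"{name:>16s}: {value:.3f}")
```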
Emergence of universal scaling in financial markets from mean-field dynamics
NASA Astrophysics Data System (ADS)
Vikram, S. V.; Sinha, Sitabhra
2011-01-01
Collective phenomena with universal properties have been observed in many complex systems with a large number of components. Here we present a microscopic model of the emergence of scaling behavior in such systems, where the interaction dynamics between individual components is mediated by a global variable making the mean-field description exact. Using the example of financial markets, we show that asset price can be such a global variable with the critical role of coordinating the actions of agents who are otherwise independent. The resulting model accurately reproduces empirical properties such as the universal scaling of the price fluctuation and volume distributions, long-range correlations in volatility, and multiscaling.
The scale of the Fourier transform: a point of view of the fractional Fourier transform
NASA Astrophysics Data System (ADS)
Jimenez, C. J.; Vilardy, J. M.; Salinas, S.; Mattos, L.; Torres, C. O.
2017-01-01
In this paper, using the fractional-order Fourier transform, the ABCD ray transfer matrix for symmetrical optical systems, and the Collins diffraction formula, we obtain an explicit expression for the conventional scaled Fourier transform. This result is of great importance in optical signal processing because it offers the possibility of scaling the size of the output Fourier distribution of the system simply by manipulating the distance from the diffracting object to the thin lens. This research also emphasizes the practical limits when a finite spherical converging lens aperture is used. Digital simulation was carried out using the numerical platform of Matlab 7.1.
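To illustrate the idea of a scaled Fourier transform numerically (in Python here, although the paper's simulations used Matlab 7.1), one can evaluate the Fourier sum directly on an arbitrary output frequency grid, so the spacing of the output distribution is chosen freely rather than fixed by the FFT — loosely analogous to rescaling the optical Fourier plane by moving the object relative to the lens. The test signal and grids are assumptions for illustration:

```python
# Numerical illustration: a "scaled" discrete Fourier transform evaluated
# directly on an arbitrary frequency grid, so the output spacing is a free
# choice rather than fixed by the FFT.  All values are illustrative assumptions.
import numpy as np

def scaled_dft(x, dt, freqs):
    """Evaluate X(f_k) = sum_n x[n] exp(-2j*pi*f_k*n*dt) on a chosen grid."""
    n = np.arange(len(x))
    return np.exp(-2j * np.pi * np.outer(freqs, n) * dt) @ x

dt = 1e-3
t = np.arange(1024) * dt
x = np.cos(2 * np.pi * 60 * t) + 0.5 * np.cos(2 * np.pi * 75 * t)

# A coarse full-band grid versus a finely sampled ("rescaled") zoom grid.
for f_lo, f_hi in ((0.0, 500.0), (50.0, 85.0)):
    freqs = np.linspace(f_lo, f_hi, 512)
    X = scaled_dft(x, dt, freqs)
    peak = freqs[np.argmax(np.abs(X))]
    print(f"grid {f_lo:5.1f}-{f_hi:5.1f} Hz: strongest component near {peak:.1f} Hz")
```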
NASA Astrophysics Data System (ADS)
Selvam, A. M.
2017-01-01
Dynamical systems in nature exhibit self-similar fractal space-time fluctuations on all scales indicating long-range correlations and, therefore, the statistical normal distribution, with its implicit assumption of independence and fixed mean and standard deviation, cannot be used for description and quantification of fractal data sets. The author has developed a general systems theory based on classical statistical physics for fractal fluctuations which predicts the following. (1) The fractal fluctuations signify an underlying eddy continuum, the larger eddies being the integrated mean of enclosed smaller-scale fluctuations. (2) The probability distribution of eddy amplitudes and the variance (square of eddy amplitude) spectrum of fractal fluctuations follow the universal Boltzmann inverse power law expressed as a function of the golden mean. (3) Fractal fluctuations are signatures of quantum-like chaos since the additive amplitudes of eddies when squared represent probability densities analogous to the sub-atomic dynamics of quantum systems such as the photon or electron. (4) The model-predicted distribution is very close to the statistical normal distribution for moderate events within two standard deviations from the mean but exhibits a fat long tail that is associated with hazardous extreme events. Continuous periodogram power spectral analyses of available GHCN annual total rainfall time series for the period 1900-2008 for Indian and USA stations show that the power spectra and the corresponding probability distributions follow the model-predicted universal inverse power law form, signifying an eddy continuum structure underlying the observed inter-annual variability of rainfall. On a global scale, man-made greenhouse gas related atmospheric warming would result in intensification of natural climate variability, seen immediately in high-frequency fluctuations such as the QBO and ENSO and on even shorter timescales. Model concepts and results of analyses are discussed with reference to possible prediction of climate change. Model concepts, if correct, unambiguously rule out linear trends in climate. Climate change will only be manifested as an increase or decrease in the natural variability. However, more stringent tests of model concepts and predictions are required before applications to such an important issue as climate change. Observations and simulations with climate models show that precipitation extremes intensify in response to a warming climate (O'Gorman in Curr Clim Change Rep 1:49-59, 2015).
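A continuous periodogram analysis of an annual rainfall series, with a log-log slope as a rough check for an inverse power-law spectrum, can be sketched as below. The synthetic long-range-correlated series stands in for GHCN station data and is an assumption, as is the simple least-squares slope estimate:

```python
# Sketch: periodogram of an annual rainfall series and a log-log slope check for
# inverse power-law scaling.  The synthetic series is an illustrative stand-in.
import numpy as np
from scipy.signal import periodogram
from scipy.stats import linregress

rng = np.random.default_rng(11)

# Synthetic "annual total rainfall" series with an imposed inverse power-law
# spectrum (long-range correlations).
n_years = 109                                   # e.g. 1900-2008
coeffs = np.fft.rfft(rng.normal(size=4096))
freqs_syn = np.fft.rfftfreq(4096)
coeffs[1:] *= freqs_syn[1:] ** -0.4
correlated = np.fft.irfft(coeffs, n=4096)[:n_years]
rain = 800.0 + 150.0 * correlated / correlated.std()

freq, power = periodogram(rain - rain.mean(), fs=1.0)   # cycles per year
mask = freq > 0
fit = linregress(np.log10(freq[mask]), np.log10(power[mask]))
print(f"log-log spectral slope ~ {fit.slope:.2f} (negative slope suggests an inverse power law)")
```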
NASA Astrophysics Data System (ADS)
Thompson, C. J.; Croke, J. C.; Grove, J. R.
2012-04-01
Non-linearity in physical systems provides a conceptual framework to explain complex patterns and forms that derive from complex internal dynamics rather than external forcings, and it can be used to inform modeling and improve landscape management. One process that has been investigated previously to explore the existence of self-organised criticality (SOC) in river systems at the basin scale is bank failure. Spatial trends in bank failure have previously been quantified to determine whether the distribution of bank failures at the basin scale exhibits the necessary power-law magnitude-frequency distribution. More commonly, bank failures are investigated at small scales using several cross-sections, with strong emphasis on local-scale factors such as bank height, cohesion and hydraulic properties. Advancing our understanding of non-linearity in such processes, however, requires many more studies where both the spatial and temporal measurements of the process can be used to investigate the existence or otherwise of non-linearity and self-organised criticality. This study presents measurements of bank failure throughout the Lockyer catchment in southeast Queensland, Australia, which experienced an extreme flood event in January 2011 resulting in the loss of human lives and geomorphic channel change. The most dominant form of fluvial adjustment consisted of changes in channel geometry and notably widespread bank failures, which were readily identifiable as 'scalloped' shaped failure scarps. Their spatial extents were mapped using a high-resolution LiDAR-derived digital elevation model and were verified by field surveys and air photos. Pre-flood LiDAR coverage for the catchment also existed, allowing direct comparison of the magnitude and frequency of bank failures from both pre- and post-flood time periods. Data were collected and analysed within a GIS framework and investigated for power-law relationships. Bank failures appeared random and occurred throughout the basin, but plots of magnitude and frequency did display power-law scaling of failures. In addition, there was a lack of site-specific correlations between bank failure and other factors such as channel width, bank height and stream power. The data are used here to discuss the existence of SOC in fluvial systems and the relative roles of local and basin-wide processes in influencing their distribution in space and time.
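A standard way to test for the power-law magnitude-frequency scaling discussed above is the continuous maximum-likelihood estimator of the exponent. The synthetic failure magnitudes and the choice of lower cutoff x_min below are illustrative assumptions, not the Lockyer LiDAR data:

```python
# Sketch: checking for power-law scaling in a magnitude-frequency distribution
# using the continuous maximum-likelihood estimator of the exponent,
# alpha_hat = 1 + n / sum(ln(x_i / x_min)).  Synthetic data, assumed x_min.
import numpy as np

rng = np.random.default_rng(5)
alpha_true, x_min = 2.2, 10.0

# Draw synthetic power-law distributed magnitudes via inverse-transform sampling.
u = rng.uniform(size=2000)
magnitudes = x_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

tail = magnitudes[magnitudes >= x_min]
alpha_hat = 1.0 + tail.size / np.sum(np.log(tail / x_min))
alpha_err = (alpha_hat - 1.0) / np.sqrt(tail.size)   # standard error of the MLE
print(f"estimated exponent alpha = {alpha_hat:.2f} +/- {alpha_err:.2f}")
```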
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.
Distributed and grid computing projects with research focus in human health.
Diomidous, Marianna; Zikos, Dimitrios
2012-01-01
Distributed systems and grid computing systems are used to connect many computers in order to obtain a higher level of performance for solving a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus on human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not explicitly stated in all cases, common targets include research to find cures for HIV, dengue, Duchenne dystrophy, Parkinson's disease, various types of cancer and influenza. Other diseases include malaria, anthrax and Alzheimer's disease. The need for national initiatives and European collaboration on larger-scale projects is stressed, to raise citizens' awareness and participation and to create a culture of internet volunteering altruism.
High-Throughput Computing on High-Performance Platforms: A Case Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oleynik, D; Panitkin, S; Matteo, Turilli
The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership computing facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important direction in the study of pedestrian behavior because visual perception of space is the most direct way for people to obtain environmental information and guide their actions. Based on agent modeling and a top-to-bottom method, the paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on the agent simulation to direct agents' motion in urban space. We analyze pedestrian behavior at the micro-scale and macro-scale of urban open space. Each individual agent uses visual affordance to determine its direction of motion in a micro-scale urban street or district. We then compare the distribution of pedestrian flow with the spatial configuration of the macro-scale urban environment and mine the relationship between pedestrian flow and the distribution of urban facilities and functions. The paper first computes the visibility conditions at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The agents use these visibility parameters to decide their directions of motion, and the pedestrian flow finally reaches a stable state in the urban environment through the multi-agent simulation. Finally, the paper compares the morphology of the visibility parameters and pedestrian distribution with urban functions and facility layouts to confirm their consistency, which can be used to support decision-making in urban design.
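A minimal sketch of the visibility-guided motion rule — cast rays from the agent's position, measure the free distance along each, and step toward the longest view — is given below. The occupancy grid, number of rays, and step length are assumptions for illustration and do not reproduce the paper's viewshed or multi-agent model:

```python
# Minimal sketch of a visibility-guided agent step on a 2D occupancy grid: cast
# rays in several directions, measure the free (visible) distance along each,
# and move toward the direction with the longest view.  Grid and parameters are
# illustrative assumptions.
import numpy as np

grid = np.zeros((50, 50), dtype=bool)     # False = open space, True = obstacle
grid[20:30, 10:40] = True                 # an assumed building block

def visible_distance(grid, pos, angle, max_dist=60.0, step=0.5):
    """March along a ray until an obstacle or the grid edge is hit."""
    d, (rows, cols) = 0.0, grid.shape
    while d < max_dist:
        d += step
        r = int(pos[0] + d * np.sin(angle))
        c = int(pos[1] + d * np.cos(angle))
        if not (0 <= r < rows and 0 <= c < cols) or grid[r, c]:
            return d
    return max_dist

def next_position(grid, pos, n_rays=16, step_len=1.0):
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    views = [visible_distance(grid, pos, a) for a in angles]
    best = angles[int(np.argmax(views))]   # move toward the longest view
    return (pos[0] + step_len * np.sin(best), pos[1] + step_len * np.cos(best))

pos = (10.0, 5.0)
for _ in range(5):
    pos = next_position(grid, pos)
print("agent position after 5 steps:", tuple(round(p, 1) for p in pos))
```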
NASA Astrophysics Data System (ADS)
Torrealba, V.; Karpyn, Z.; Yoon, H.; Hart, D. B.; Klise, K. A.
2013-12-01
The pore-scale dynamics that govern multiphase flow under variable stress conditions are not well understood. This lack of fundamental understanding limits our ability to quantitatively predict multiphase flow and fluid distributions in natural geologic systems. In this research, we focus on pore-scale, single and multiphase flow properties that impact displacement mechanisms and residual trapping of non-wetting phase under varying stress conditions. X-ray micro-tomography is used to image pore structures and distribution of wetting and non-wetting fluids in water-wet synthetic granular packs, under dynamic load. Micro-tomography images are also used to determine structural features such as medial axis, surface area, and pore body and throat distribution; while the corresponding transport properties are determined from Lattice-Boltzmann simulations performed on lattice replicas of the imaged specimens. Results are used to investigate how inter-granular deformation mechanisms affect fluid displacement and residual trapping at the pore-scale. This will improve our understanding of the dynamic interaction of mechanical deformation and fluid flow during enhanced oil recovery and geologic CO2 sequestration. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
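The linear current element representation amounts to summing Biot-Savart contributions from short straight segments of the space-current system. The circular test loop and observation point below are assumptions used only to check the sketch against the analytic field at a loop centre; they are not the ionospheric or magnetospheric current model:

```python
# Sketch of a linear-current-element (Biot-Savart) calculation: the field at a
# point is the sum of dB = (mu0/4pi) * I * dl x r / |r|^3 over short straight
# segments of a current path.  The loop geometry below is an illustrative test.
import numpy as np

MU0 = 4.0e-7 * np.pi   # vacuum permeability [T m / A]

def biot_savart(segments_start, segments_end, current, obs_point):
    """Field from straight segments, each evaluated at its midpoint."""
    dl = segments_end - segments_start
    mid = 0.5 * (segments_start + segments_end)
    r = obs_point - mid
    r_norm = np.linalg.norm(r, axis=1, keepdims=True)
    dB = MU0 / (4.0 * np.pi) * current * np.cross(dl, r) / r_norm**3
    return dB.sum(axis=0)

# Assumed geometry: a 1 A circular loop of radius 1 m in the x-y plane,
# discretized into 360 linear elements; field evaluated at the loop centre.
theta = np.linspace(0.0, 2.0 * np.pi, 361)
pts = np.column_stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)])
B = biot_savart(pts[:-1], pts[1:], current=1.0, obs_point=np.zeros(3))
print("B at loop centre [T]:", B, " (analytic Bz = mu0*I/2R ~ 6.28e-7 T)")
```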
ERIC Educational Resources Information Center
Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell
2017-01-01
Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…
6th Annual Earth System Grid Federation Face to Face Conference Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, D. N.
The Sixth Annual Face-to-Face (F2F) Conference of the Earth System Grid Federation (ESGF), a global consortium of international government agencies, institutions, and companies dedicated to the creation, management, analysis, and distribution of extreme-scale scientific data, was held December 5–9, 2016, in Washington, D.C.
Financial performance of a mobile pyrolysis system used to produce biochar from sawmill residues
Dongyeob Kim; Nathaniel McLean Anderson; Woodam Chung
2015-01-01
Primary wood products manufacturers generate significant amounts of woody biomass residues that can be used as feedstocks for distributed-scale thermochemical conversion systems that produce valuable bioenergy and bioproducts. However, private investment in these technologies is driven primarily by financial performance, which is often unknown for new technologies with...
NASA Astrophysics Data System (ADS)
Jiang, Zeyun; Couples, Gary D.; Lewis, Helen; Mangione, Alessandro
2018-07-01
Limestones containing abundant disc-shaped fossil Nummulites can form significant hydrocarbon reservoirs, but they have a distinctly heterogeneous distribution of pore shapes, sizes and connectivities, which makes it particularly difficult to calculate petrophysical properties and consequent flow outcomes. The severity of the problem rests on the wide length-scale range from the millimetre scale of the fossil's pore space to the micron scale of rock matrix pores. This work develops a technique to incorporate multi-scale void systems into a pore network, which is used to calculate the petrophysical properties for subsequent flow simulations at different stages in the limestone's petrophysical evolution. While rock pore size, shape and connectivity can be determined, with varying levels of fidelity, using techniques such as X-ray computed tomography (CT) or scanning electron microscopy (SEM), this work represents a more challenging class where the rock of interest is insufficiently sampled or, as here, has been overprinted by extensive chemical diagenesis. The main challenge is integrating multi-scale void structures derived from both SEM and CT images into a single model or pore-scale network while still honouring the nature of the connections across these length scales. Pore network flow simulations are used to illustrate the technique but, of equal importance, to demonstrate how supportable earlier-stage petrophysical property distributions can be used to assess the viability of several potential geological event sequences. The results of our flow simulations on generated models highlight the requirement for correct determination of the dominant pore scales (one or more of the nm, μm, mm and cm scales), the spatial correlation and the cross-scale connections.
Generic solar photovoltaic system dynamic simulation model specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Abraham; Behnke, Michael Robert; Elliott, Ryan Thomas
This document is intended to serve as a specification for generic solar photovoltaic (PV) system positive-sequence dynamic models to be implemented by software developers and approved by the WECC MVWG for use in bulk system dynamic simulations in accordance with NERC MOD standards. Two specific dynamic models are included in the scope of this document. The first, a Central Station PV System model, is intended to capture the most important dynamic characteristics of large scale (> 10 MW) PV systems with a central Point of Interconnection (POI) at the transmission level. The second, a Distributed PV System model, is intended to represent an aggregation of smaller, distribution-connected systems that comprise a portion of a composite load that might be modeled at a transmission load bus.
Finite-size scaling for discontinuous nonequilibrium phase transitions
NASA Astrophysics Data System (ADS)
de Oliveira, Marcelo M.; da Luz, M. G. E.; Fiore, Carlos E.
2018-06-01
A finite-size scaling theory, originally developed only for transitions to absorbing states [Phys. Rev. E 92, 062126 (2015), 10.1103/PhysRevE.92.062126], is extended to distinct sorts of discontinuous nonequilibrium phase transitions. Expressions for quantities such as response functions, reduced cumulants, and equal-area probability distributions are derived from phenomenological arguments. Irrespective of system details, all these quantities scale with the volume, establishing the dependence on size. The generality of the approach is illustrated through the analysis of different models. The present results are a relevant step toward unifying the description of scaling behavior in nonequilibrium transition processes.
Analysis and Application of Microgrids
NASA Astrophysics Data System (ADS)
Yue, Lu
New trends of generating electricity locally and utilizing non-conventional or renewable energy sources have attracted increasing interest due to the gradual depletion of conventional fossil fuel energy sources. This new type of power generation is called Distributed Generation (DG), and the energy sources utilized by Distributed Generation are termed Distributed Energy Sources (DERs). With DGs embedded in them, distribution networks evolve from passive to active networks, enabling bidirectional power flows. By further incorporating flexible and intelligent controllers and employing future technologies, active distribution networks will evolve into Microgrids. A Microgrid is a small-scale, low-voltage Combined Heat and Power (CHP) supply network designed to supply electrical and heat loads for a small community. To further implement Microgrids, a sophisticated Microgrid Management System must be integrated. However, because a Microgrid integrates multiple DERs and is likely to be deregulated, the ability to perform real-time OPF and economic dispatch over a fast, advanced communication network is necessary. In this thesis, problems such as power system modelling, power flow solution and power system optimization are studied first. Then, Distributed Generation and Microgrids are reviewed, including a comprehensive review of current distributed generation technologies and Microgrid Management Systems. Finally, a computer-based AC optimization method that minimizes the total transmission loss and generation cost of a Microgrid is proposed, along with a wireless communication scheme based on synchronized Code Division Multiple Access (sCDMA). The algorithm is tested with a 6-bus power system and a 9-bus power system.
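As a minimal sketch of the economic dispatch problem referred to above — not the thesis's AC optimization or sCDMA scheme — generator outputs can be chosen to minimize a quadratic fuel cost subject to a power-balance constraint and unit limits. The cost coefficients, demand, and limits are illustrative assumptions:

```python
# Sketch of economic dispatch: minimize total quadratic fuel cost subject to a
# power-balance constraint and unit limits.  A full AC OPF would additionally
# model network losses and voltages; all numbers below are assumptions.
import numpy as np
from scipy.optimize import minimize

a = np.array([0.010, 0.012, 0.008])   # quadratic cost coefficients [$ / MW^2 h]
b = np.array([12.0, 10.5, 11.0])      # linear cost coefficients     [$ / MWh]
p_min = np.array([10.0, 10.0, 20.0])  # unit limits [MW]
p_max = np.array([80.0, 60.0, 100.0])
demand = 150.0                        # total load to be served [MW]

cost = lambda p: np.sum(a * p**2 + b * p)
constraints = ({"type": "eq", "fun": lambda p: np.sum(p) - demand},)
bounds = list(zip(p_min, p_max))

res = minimize(cost, x0=np.full(3, demand / 3), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("dispatch [MW]:", np.round(res.x, 1), " total cost [$/h]:", round(res.fun, 1))
```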
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
Linkages and feedbacks in orogenic systems: An introduction
Thigpen, J. Ryan; Law, Richard D.; Merschat, Arthur J.; Stowell, Harold
2017-01-01
Orogenic processes operate at scales ranging from the lithosphere to the grain scale, and are inextricably linked. For example, in many orogens, fault and shear zone architecture controls the distribution of heat advection along faults and also acts as the primary mechanism for redistribution of heat-producing material. This sets up the thermal structure of the orogen, which in turn controls lithospheric rheology, the nature and distribution of deformation and strain localization, and ultimately, through localized mechanical strengthening and weakening, the fundamental shape of the developing orogenic wedge (Fig. 1). Strain localization establishes shear zone and fault geometry, and it is the motion on these structures, in conjunction with climate, that often focuses erosional and exhumational processes. This climatic focusing effect can even drive development of asymmetry at the scale of the entire wedge (Willett et al., 1993).
Lorentzian symmetry predicts universality beyond scaling laws
NASA Astrophysics Data System (ADS)
Watson, Stephen J.
2017-06-01
We present a covariant theory for the ageing characteristics of phase-ordering systems that possess dynamical symmetries beyond mere scalings. A chiral spin dynamics which conserves the spin-up (+) and spin-down (-) fractions, $\mu_+$ and $\mu_-$, serves as the emblematic paradigm of our theory. Beyond a parabolic spatio-temporal scaling, we discover a hidden Lorentzian dynamical symmetry therein, and thereby prove that the characteristic length L of spin domains grows in time t according to $L = \frac{\beta}{\sqrt{1-\sigma^2}}\, t^{1/2}$, where $\sigma := \mu_+ - \mu_-$ (the invariant spin-excess) and $\beta$ is a universal constant. Furthermore, the normalised length distributions of the spin-up and the spin-down domains each provably adopt a coincident universal ($\sigma$-independent) time-invariant form, and this supra-universal probability distribution is empirically verified to assume a form reminiscent of the Wigner surmise.
Cho, Keunhee; Cho, Jeong-Rae; Kim, Sung Tae; Park, Sung Yong; Kim, Young-Jin; Park, Young-Hwan
2016-01-01
The recently developed smart strand can be used to measure the prestress force in a prestressed concrete (PSC) structure from the construction stage to the in-service stage. The higher cost of the smart strand compared to the conventional strand makes it impractical to replace all the strands with smart strands, and results in the application of only a limited number of smart strands in the PSC structure. However, the prestress forces developed in the strands of the multi-strand system frequently adopted in PSC structures differ from each other, which means that the prestress force in the multi-strand system cannot be obtained by simple proportional scaling of the measurement from the smart strand. Therefore, this study examines the prestress force distribution in the multi-strand system to find the correlation between the prestress force measured by the smart strand and the prestress force distribution in the multi-strand system. To that end, the prestress force distribution was measured using electromagnetic sensors for various factors of the multi-strand system adopted on site in the fabrication of actual PSC girders. The results verified that a normal distribution can be assumed for the prestress force distribution per anchor head, and a method for computing the mean and standard deviation defining this normal distribution is proposed. This paper presents a meaningful finding by proposing an estimation method for the prestress force based upon field-measured data of the prestress force distribution in the multi-strand system of actual PSC structures. PMID:27548172
NASA Astrophysics Data System (ADS)
Hoyal, D. C.; Sheets, B. A.
2005-12-01
The degree to which experimental sedimentary systems form channels has an important bearing on their applicability as analogs of large-scale natural systems, where channels and their associated landforms are ubiquitous. The internal geometry and properties (e.g., grain size, vertical succession and stacking) of many depositional landforms can be directly linked to the processes of channel initiation and evolution. Unfortunately, strong self-channelization, a prerequisite for certain natural phenomena (e.g. mouth lobe development, meandering, etc.), has been difficult to reproduce at laboratory scales. In shallow-water experiments (sub-aerial), although weak channelization develops relatively easily, as is commonly observed in gutters after a rain storm, strong channelization with well-developed banks has proved difficult to model. In deep water experiments the challenge is even greater. Despite considerable research effort experimental conditions for deep water channel initiation have only recently been identified. Experiments on the requisite conditions for channelization in shallow and deep water have been ongoing at the ExxonMobil Upstream Research Company (EMURC) for several years. By primarily manipulating the cohesiveness of the sediment supply we have developed models of distributive systems with well-defined channels in shallow water, reminiscent of fine grained river-dominated deltas like the Mississippi. In deep water we have developed models that demonstrate strong channelization and associated lobe behavior in a distributive setting, by scaling up an approach developed by another group using salt-water flows and low-density plastic sediment. The experiments highlight a number of important controls on experimental channel formation, including: (1) bed strength or cohesiveness; (2) bedform development; and (3) Reynolds number. Among these controls bed forms disrupt the channel forming instability, reducing the energy available for channelization. The fundamental channel instability develops in both laminar and turbulent flow but with important differences. The scaling of these effects is the focus of ongoing research. In general it was observed that there are strong similarities between the processes and sedimentary products in shallow and deep water systems. Further, strong channelization in EMURC experiments provides insights into the evolution of distributive systems including: (1) the cyclic process of lobe formation and channel growth at a channel mouth, (2) types of channel fill, (3) architectural differences between channel fill and lobe deposits, (4) channel backfilling and avulsion, (5) Channel initiation vs. entrenched channel phases, (6) knickpoints and channel erosion, (7) structure of overbank, levee-building flows, and (8) the role of levees in altering the distributive channel pattern.
Injection System for Multi-Well Injection Using a Single Pump
Wovkulich, Karen; Stute, Martin; Protus, Thomas J.; Mailloux, Brian J.; Chillrud, Steven N.
2015-01-01
Many hydrological and geochemical studies rely on data resulting from injection of tracers and chemicals into groundwater wells. The even distribution of liquids to multiple injection points can be challenging or expensive, especially when using multiple pumps. An injection system was designed using one chemical metering pump to evenly distribute the desired influent simultaneously to 15 individual injection points through an injection manifold. The system was constructed with only one metal part contacting the fluid due to the low pH of the injection solutions. The injection manifold system was used during a three-month pilot scale injection experiment at the Vineland Chemical Company Superfund site. During the two injection phases of the experiment (Phase I = 0.27 L/min total flow, Phase II = 0.56 L/min total flow), flow measurements were made 20 times over three months; an even distribution of flow to each injection well was maintained (RSD <4%). This durable system is expandable to at least 16 injection points and should be adaptable to other injection experiments that require distribution of air-stable liquids to multiple injection points with a single pump. PMID:26140014
Progress in the Phase 0 Model Development of a STAR Concept for Dynamics and Control Testing
NASA Technical Reports Server (NTRS)
Woods-Vedeler, Jessica A.; Armand, Sasan C.
2003-01-01
The paper describes progress in the development of a lightweight, deployable passive Synthetic Thinned Aperture Radiometer (STAR). The spacecraft concept presented will enable the realization of 10 km resolution global soil moisture and ocean salinity measurements at 1.41 GHz. The focus of this work was on the definition of an approximately 1/3-scaled, 5-meter Phase 0 test article for concept demonstration and dynamics and control testing. Design requirements, parameters and a multi-parameter, hybrid scaling approach for the dynamically scaled test model were established. The EI Scaling Approach that was established allows designers freedom to define the cross section of scaled, lightweight structural components that is most convenient for manufacturing when the mass of the component is small compared to the overall system mass. Static and dynamic response analysis was conducted on analytical models to evaluate system-level performance and to optimize panel geometry for tension load distribution.
Mathur, Rohit; Xing, Jia; Gilliam, Robert; Sarwar, Golam; Hogrefe, Christian; Pleim, Jonathan; Pouliot, George; Roselle, Shawn; Spero, Tanya L.; Wong, David C.; Young, Jeffrey
2018-01-01
The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modelled processes were examined and enhanced to suitably represent the extended space and time scales for such applications. Hemispheric scale simulations with CMAQ and the Weather Research and Forecasting (WRF) model are performed for multiple years. Model capabilities for a range of applications including episodic long-range pollutant transport, long-term trends in air pollution across the Northern Hemisphere, and air pollution-climate interactions are evaluated through detailed comparison with available surface, aloft, and remotely sensed observations. The expansion of CMAQ to simulate the hemispheric scales provides a framework to examine interactions between atmospheric processes occurring at various spatial and temporal scales with physical, chemical, and dynamical consistency. PMID:29681922
Universal scaling laws of diffusion in two-dimensional granular liquids.
Wang, Chen-Hung; Yu, Szu-Hsuan; Chen, Peilong
2015-06-01
We find, in a two-dimensional air-table granular system, that the reduced diffusion constant D* and excess entropy S(2) follow two distinct scaling laws: D* ∼ exp(S(2)*) for dense liquids and D* ∼ exp(3S(2)*) for dilute ones. The scaling for dense liquids is very similar to that for three-dimensional liquids proposed previously [M. Dzugutov, Nature (London) 381, 137 (1996); A. Samanta et al., Phys. Rev. Lett. 92, 145901 (2004)]. In the dilute regime, a power law [Y. Rosenfeld, J. Phys.: Condens. Matter 11, 5415 (1999)] also fits our data reasonably well. In our system, particles experience low air-drag dissipation and interact with each other through embedded magnets. These near-conservative many-body interactions are responsible for the measured Gaussian velocity distribution functions and the scaling laws. The dominance of cage relaxations in dense liquids leads to the different scaling laws for the dense and dilute regimes.
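A Dzugutov-type law D* ∼ exp(a·S(2)*) appears as a straight line in ln D* versus S(2)*, so the two regimes reported above correspond to slopes of roughly 1 and 3. The sketch below fits that slope to hypothetical (D*, S(2)*) pairs; the numbers are illustrative, not the air-table data.

```python
import numpy as np

# Hypothetical (D*, S2*) pairs; a Dzugutov-type law D* ~ exp(a*S2*) appears
# as a straight line of slope a in ln(D*) versus S2*.
s2 = np.array([-3.2, -2.8, -2.4, -2.0, -1.6])        # excess entropy S2* (dense regime)
d_star = 0.05 * np.exp(1.0 * s2) * np.exp(np.random.default_rng(1).normal(0, 0.03, 5))

slope, intercept = np.polyfit(s2, np.log(d_star), 1)
print(f"fitted exponent a ≈ {slope:.2f}  (≈1 expected for dense, ≈3 for dilute liquids)")
```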
Intermittent Granular Dynamics at a Seismogenic Plate Boundary.
Meroz, Yasmine; Meade, Brendan J
2017-09-29
Earthquakes at seismogenic plate boundaries are a response to the differential motions of tectonic blocks embedded within a geometrically complex network of branching and coalescing faults. Elastic strain is accumulated at a slow strain rate on the order of 10^{-15} s^{-1}, and released intermittently at intervals >100 yr, in the form of rapid (seconds to minutes) coseismic ruptures. The development of macroscopic models of quasistatic planar tectonic dynamics at these plate boundaries has remained challenging due to uncertainty with regard to the spatial and kinematic complexity of fault system behaviors. The characteristic length scale of kinematically distinct tectonic structures is particularly poorly constrained. Here, we analyze fluctuations in Global Positioning System observations of interseismic motion from the southern California plate boundary, identifying heavy-tailed scaling behavior. Namely, we show that, consistent with findings for slowly sheared granular media, the distribution of velocity fluctuations deviates from a Gaussian, exhibiting broad tails, and the correlation function decays as a stretched exponential. This suggests that the plate boundary can be understood as a densely packed granular medium, predicting a characteristic tectonic length scale of 91±20 km, here representing the characteristic size of tectonic blocks in the southern California fault network, and relating the characteristic duration and recurrence interval of earthquakes with the observed shear strain rate and the nanosecond value for the crack tip evolution time scale. Within a granular description, fault and block systems may rapidly rearrange the distribution of forces within them, driving a mixture of transient and intermittent fault slip behaviors over tectonic time scales.
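The stretched-exponential decay of the correlation function mentioned above can be quantified by fitting C(t) = exp(-(t/τ)^β) and checking that β < 1. The sketch below does this with synthetic data standing in for the GPS velocity-fluctuation correlations; τ and β are assumed values, not results from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, tau, beta):
    """Stretched-exponential decay, C(t) = exp(-(t/tau)**beta)."""
    return np.exp(-(t / tau) ** beta)

# Synthetic correlation-function samples standing in for GPS velocity-fluctuation
# correlations; tau and beta below are illustrative, not values from the paper.
t = np.linspace(0.1, 50, 60)
c_obs = stretched_exp(t, tau=12.0, beta=0.6) + np.random.default_rng(2).normal(0, 0.01, t.size)

(tau_fit, beta_fit), _ = curve_fit(stretched_exp, t, c_obs, p0=(10.0, 0.8))
print(f"tau ≈ {tau_fit:.1f}, beta ≈ {beta_fit:.2f}  (beta < 1 signals stretched-exponential decay)")
```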
Andra, Syam S; Makris, Konstantinos C; Botsaris, George; Charisiadis, Pantelis; Kalyvas, Harris; Costa, Costas N
2014-02-15
Changes in disinfectant type could trigger a cascade of reactions releasing pipe-anchored metals/metalloids into finished water. However, the effect of pre-formed disinfection by-products on the release of sorbed contaminants (arsenic, As, in particular) from drinking water distribution system pipe scales remains unexplored. A bench-scale study using a factorial experimental design was performed to evaluate the independent and interaction effects of trihalomethanes (TTHM) and haloacetic acids (HAA) on arsenic (As) release from either scales-only or scale-biofilm conglomerates (SBC), both anchored on asbestos/cement pipe coupons. A model biofilm (Pseudomonas aeruginosa) was allowed to grow on select pipe coupons prior to experimentation. Individual dosing of either TTHM or HAA did not promote As release from either scales-only coupons or SBC, with <6 μg As L⁻¹ detected in finished water. In the case of scales-only coupons, the combination of the highest spike levels of TTHM and HAA significantly (p<0.001) increased dissolved and total As concentrations to levels up to 16 and 95 μg L⁻¹, respectively. Similar treatments in the presence of biofilm (SBC) resulted in a significant (p<0.001) increase in dissolved and total recoverable As up to 20 and 47 μg L⁻¹, respectively, exceeding the regulatory As limit. Whether our laboratory-based results truly represent mechanisms operating in disinfected finished water in pipe networks remains to be investigated in the field. Copyright © 2013 Elsevier B.V. All rights reserved.
The Automated Geospatial Watershed Assessment Tool
A toolkit for distributed hydrologic modeling at multiple scales using a geographic information system is presented. This open-source, freely available software was developed through a collaborative endeavor involving two universities and two government agencies. Called the Auto...
NASA Astrophysics Data System (ADS)
Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.
2017-12-01
Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.
Self-Healing Networks: Redundancy and Structure
Quattrociocchi, Walter; Caldarelli, Guido; Scala, Antonio
2014-01-01
We introduce the concept of self-healing in the field of complex networks modelling; in particular, self-healing capabilities are implemented through distributed communication protocols that exploit redundant links to recover the connectivity of the system. We then analyze the effect of the level of redundancy on the resilience to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damage. Finally, we study the effects of redundancy under different connectivity patterns—from planar grids, to small-world, up to scale-free networks—on healing performance. Small-world topologies show that introducing some long-range connections in planar grids greatly enhances the resilience to multiple failures, with performance comparable to the case of the most resilient (and least realistic) scale-free structures. Obvious applications of self-healing are in the important field of infrastructural networks like gas, power, water, and oil distribution systems. PMID:24533065
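A stripped-down way to compare how topology alone affects resilience to multiple failures (leaving out the redundancy and healing protocol studied in the paper) is to remove a random fraction of nodes and measure the fraction of surviving nodes that remain in the largest connected component. The sketch below does this for a planar grid, a small-world graph and a scale-free graph; network sizes and the failure fraction are arbitrary choices.

```python
import random
import networkx as nx

def served_fraction(G, failed_fraction=0.1, seed=0):
    """Remove a random fraction of nodes and return the fraction of the
    remaining nodes that stay in the largest connected component."""
    rng = random.Random(seed)
    G = G.copy()
    kill = rng.sample(list(G.nodes), int(failed_fraction * G.number_of_nodes()))
    G.remove_nodes_from(kill)
    if G.number_of_nodes() == 0:
        return 0.0
    giant = max(nx.connected_components(G), key=len)
    return len(giant) / G.number_of_nodes()

n = 400
topologies = {
    "planar grid": nx.grid_2d_graph(20, 20),
    "small-world": nx.watts_strogatz_graph(n, 4, 0.1, seed=1),
    "scale-free":  nx.barabasi_albert_graph(n, 2, seed=1),
}
for name, G in topologies.items():
    print(f"{name:12s} served fraction after 10% failures: {served_fraction(G):.2f}")
```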
Landscape heterogeneity shapes predation in a newly restored predator-prey system.
Kauffman, Matthew J; Varley, Nathan; Smith, Douglas W; Stahler, Daniel R; MacNulty, Daniel R; Boyce, Mark S
2007-08-01
Because some native ungulates have lived without top predators for generations, it has been uncertain whether runaway predation would occur when predators are newly restored to these systems. We show that landscape features and vegetation, which influence predator detection and capture of prey, shape large-scale patterns of predation in a newly restored predator-prey system. We analysed the spatial distribution of wolf (Canis lupus) predation on elk (Cervus elaphus) on the Northern Range of Yellowstone National Park over 10 consecutive winters. The influence of wolf distribution on kill sites diminished over the course of this study, a result that was likely caused by territorial constraints on wolf distribution. In contrast, landscape factors strongly influenced kill sites, creating distinct hunting grounds and prey refugia. Elk in this newly restored predator-prey system should be able to mediate their risk of predation by movement and habitat selection across a heterogeneous risk landscape.
NASA Astrophysics Data System (ADS)
Girard, L.; Weiss, J.; Molines, J. M.; Barnier, B.; Bouillon, S.
2009-08-01
Sea ice drift and deformation from models are evaluated on the basis of statistical and scaling properties. These properties are derived from two observation data sets: the RADARSAT Geophysical Processor System (RGPS) and buoy trajectories from the International Arctic Buoy Program (IABP). Two simulations obtained with the Louvain-la-Neuve Ice Model (LIM) coupled to a high-resolution ocean model and a simulation obtained with the Los Alamos Sea Ice Model (CICE) were analyzed. Model ice drift compares well with observations in terms of large-scale velocity field and distributions of velocity fluctuations, although a significant bias on the mean ice speed is noted. On the other hand, the statistical properties of ice deformation are not well simulated by the models: (1) The distributions of strain rates are incorrect: RGPS distributions of strain rates are power-law tailed, i.e., exhibit "wild randomness," whereas model distributions remain in the Gaussian attraction basin, i.e., exhibit "mild randomness." (2) The models are unable to reproduce the spatial and temporal correlations of the deformation fields: In the observations, ice deformation follows spatial and temporal scaling laws that express the heterogeneity and the intermittency of deformation. These relations do not appear in simulated ice deformation. Mean deformation in models is almost scale independent. The statistical properties of ice deformation are a signature of the ice mechanical behavior. The present work therefore suggests that the mechanical framework currently used by models is inappropriate. A different modeling framework based on elastic interactions could improve the representation of the statistical and scaling properties of ice deformation.
Modelling Root Systems Using Oriented Density Distributions
NASA Astrophysics Data System (ADS)
Dupuy, Lionel X.
2011-09-01
Root architectural models are essential tools to understand how plants access and utilize soil resources during their development. However, root architectural models use complex geometrical descriptions of the root system, and this limits their ability to model interactions with the soil. This paper presents the development of continuous models based on the concept of oriented density distribution functions. The growth of the root system is built as a hierarchical system of partial differential equations (PDEs) that incorporate single-root growth parameters such as elongation rate, gravitropism and branching rate, which appear explicitly as coefficients of the PDE. Acquisition and transport of nutrients are then modelled by extending Darcy's law to oriented density distribution functions. This framework was applied to build a model of the growth and water uptake of a barley root system. This study shows that simplified and computationally efficient continuous models of root system development can be constructed. Such models will allow application of root growth models at field scale.
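A minimal one-dimensional sketch of the density-based idea is shown below, under strong simplifying assumptions: a root-tip density is advected downward at a fixed elongation rate, multiplied by a branching term, and lays down root-length density in its wake. The elongation and branching values are placeholders, and the scheme is a first-order upwind discretization rather than the paper's oriented-density formulation.

```python
import numpy as np

# Minimal 1-D sketch of a density-based root growth model (illustrative only):
# root-tip density n(z,t) is advected downward at the elongation rate and
# multiplied by branching; root-length density rho(z,t) accumulates in its wake.
nz, dz, dt = 200, 0.5, 0.05           # grid cells, cell size (cm), time step (d)
elongation = 2.0                      # cm/day  (assumed single-root parameter)
branching = 0.05                      # new tips per tip per day (assumed)

n = np.zeros(nz); n[0] = 1.0          # tips start at the soil surface
rho = np.zeros(nz)

for _ in range(int(30 / dt)):         # 30 days of growth
    flux = elongation * n             # downward advective flux of tips
    n[1:] -= dt / dz * (flux[1:] - flux[:-1])   # first-order upwind update
    n += dt * branching * n           # branching acts as a source of new tips
    rho += dt * elongation * n        # tips lay down root length as they move

depth_90 = dz * np.searchsorted(np.cumsum(rho) / rho.sum(), 0.9)
print(f"depth above which 90% of root length sits: {depth_90:.1f} cm")
```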
The ATLAS PanDA Pilot in Operation
NASA Astrophysics Data System (ADS)
Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration
2011-12-01
The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauder, C.
This subcontract report was completed under the auspices of the NREL/SCE High-Penetration Photovoltaic (PV) Integration Project, which is co-funded by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and the California Solar Initiative (CSI) Research, Development, Demonstration, and Deployment (RD&D) program funded by the California Public Utility Commission (CPUC) and managed by Itron. This project is focused on modeling, quantifying, and mitigating the impacts of large utility-scale PV systems (generally 1-5 MW in size) that are interconnected to the distribution system. This report discusses the concerns utilities have when interconnecting large PV systems that interconnect using PV inverters (a specific application of frequency converters). Additionally, a number of capabilities of PV inverters are described that could be implemented to mitigate the distribution system-level impacts of high-penetration PV integration. Finally, the main issues that need to be addressed to ease the interconnection of large PV systems to the distribution system are presented.
NASA Technical Reports Server (NTRS)
Montag, Bruce C.; Bishop, Alfred M.; Redfield, Joe B.
1989-01-01
The findings of a preliminary investigation by Southwest Research Institute (SwRI) in simulation host computer concepts is presented. It is designed to aid NASA in evaluating simulation technologies for use in spaceflight training. The focus of the investigation is on the next generation of space simulation systems that will be utilized in training personnel for Space Station Freedom operations. SwRI concludes that NASA should pursue a distributed simulation host computer system architecture for the Space Station Training Facility (SSTF) rather than a centralized mainframe based arrangement. A distributed system offers many advantages and is seen by SwRI as the only architecture that will allow NASA to achieve established functional goals and operational objectives over the life of the Space Station Freedom program. Several distributed, parallel computing systems are available today that offer real-time capabilities for time critical, man-in-the-loop simulation. These systems are flexible in terms of connectivity and configurability, and are easily scaled to meet increasing demands for more computing power.
Biostability analysis for drinking water distribution systems.
Srinivasan, Soumya; Harrington, Gregory W
2007-05-01
The ability to limit regrowth in drinking water is referred to as biological stability and depends on the concentration of disinfectant residual and on the concentration of substrate required for the growth of microorganisms. The biostability curve, based on this fundamental concept of biological stability, is a graphical approach to study the two competing effects that determine bacterial regrowth in a distribution system: inactivation due to the presence of a disinfectant, and growth due to the presence of a substrate. Biostability curves are a practical, system-specific approach for addressing the problem of bacterial regrowth in distribution systems. This paper presents a standardized algorithm for generating biostability curves, enabling water utilities to apply this approach to their site-specific needs. Using data from pilot-scale studies, it was found that this algorithm was applicable to control regrowth of heterotrophic plate count (HPC) bacteria in chlorinated systems, where assimilable organic carbon (AOC) is the growth-limiting substrate, and growth of ammonia-oxidizing bacteria (AOB) in chloraminated systems, where ammonia is the growth-limiting substrate.
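A conceptual sketch of a biostability boundary (not the paper's standardized algorithm) balances Monod-type growth on the limiting substrate against first-order inactivation by the disinfectant residual; the curve is the residual concentration at which net regrowth is zero for a given substrate level. All kinetic parameters below are assumed.

```python
import numpy as np

# Conceptual biostability boundary (illustrative, not the paper's algorithm):
# net regrowth = Monod growth on the limiting substrate minus first-order
# inactivation by the disinfectant residual.  All parameter values are assumed.
mu_max, Ks = 0.1, 50.0     # 1/h and ug/L: assumed Monod growth kinetics on AOC
k_inact = 2.0              # L/(mg*h): assumed chlorine inactivation coefficient

substrate = np.linspace(1, 300, 50)                                   # AOC, ug/L
boundary_residual = mu_max * substrate / (Ks + substrate) / k_inact   # mg/L at net regrowth = 0

for s, c in zip(substrate[::10], boundary_residual[::10]):
    print(f"AOC = {s:5.0f} ug/L  ->  minimum residual ≈ {c:.3f} mg/L")
```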
The Aerosol-Monsoon Climate System of Asia
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Kyu-Myong, Kim
2012-01-01
In Asian monsoon countries such as China and India, human health and safety problems caused by air pollution are worsening due to the increased loading of atmospheric pollutants stemming from rising energy demand associated with the rapid pace of industrialization and modernization. Meanwhile, uneven distribution of monsoon rain associated with flash floods or prolonged drought has caused major loss of human life and damage to crops and property, with devastating societal impacts on Asian countries. Historically, air pollution and monsoon research have been treated as separate problems. However, a growing number of recent studies have suggested that the two problems may be intrinsically intertwined and need to be studied jointly. Because of the complexity of the dynamics of monsoon systems, aerosol impacts on monsoons and vice versa must be studied and understood in the context of aerosol forcing in relationship to changes in the fundamental driving forces of the monsoon climate system (e.g. sea surface temperature, land-sea contrast, etc.) on time scales from intraseasonal variability (weeks) to climate change (multi-decades). Indeed, because of the large contributions of aerosols to the global and regional energy balance of the atmosphere and earth surface, and possible effects on the microphysics of clouds and precipitation, a better understanding of the response to climate change in Asian monsoon regions requires that aerosols be considered as an integral component of a fully coupled aerosol-monsoon system on all time scales. In this paper, using observations and results from climate modeling, we will discuss the coherent variability of the coupled aerosol-monsoon climate system in South Asia and East Asia, including aerosol distribution and types, with respect to rainfall, moisture, winds, land-sea thermal contrast, and heat source and sink distributions in the atmosphere on seasonal, interannual and climate-change time scales. We will show examples of how elevated absorbing aerosols (dust and black carbon) may interact with monsoon dynamics to produce feedback effects on the atmospheric water cycle, leading to accelerated melting of snowpacks over the Himalayas and Tibetan Plateau, and subsequent changes in the evolution of pre-monsoon and peak-monsoon rainfall, moisture and wind distributions in South Asia and East Asia.
Applications of finite-size scaling for atomic and non-equilibrium systems
NASA Astrophysics Data System (ADS)
Antillon, Edwin A.
We apply the theory of finite-size scaling (FSS) to an atomic and a non-equilibrium system in order to extract critical parameters. In atomic systems, we look at the energy dependence on the binding charge near the threshold between bound and free states, where we seek the critical nuclear charge for stability. We use different ab initio methods, such as Hartree-Fock, Density Functional Theory, and exact formulations implemented numerically with the finite-element method (FEM). Using the finite-size scaling formalism, where in this case the size of the system is related to the number of elements used in the basis expansion of the wavefunction, we predict critical parameters in the large-basis limit. Results prove to be in good agreement with previous Slater-basis set calculations and demonstrate that this combined approach provides a promising first-principles approach to describe quantum phase transitions for materials and extended systems. In the second part we look at a non-equilibrium one-dimensional model known as the raise and peel model, describing a growing surface which grows locally and has non-local desorption. For specific values of adsorption (u_a) and desorption (u_d) the model shows interesting features. At u_a = u_d, the model is described by a conformal field theory (with conformal charge c = 0), its stationary probability can be mapped to the ground state of a quantum chain, and it can also be related to a two-dimensional statistical model. For u_a ≥ u_d, the model shows a scale-invariant phase in the avalanche distribution. In this work we study the surface dynamics by looking at avalanche distributions using the FSS formalism and explore the effect of changing the boundary conditions of the model. The model shows the same universality for the cases with and without the wall for an odd number of tiles removed, but we find a new exponent in the presence of a wall for an even number of avalanches released. We provide a new conjecture for the probability distribution of avalanches with a wall, obtained by using exact diagonalization of small lattices and Monte Carlo simulations.
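The basic FSS extrapolation step, estimating a critical parameter in the large-basis limit from a sequence of finite-basis values, can be sketched with a power-law ansatz λ_c(N) = λ_c(∞) + a·N^(-b). The data and the limiting value below are synthetic placeholders, not results of the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def fss_ansatz(n, lam_inf, a, b):
    """Finite-size scaling ansatz: lambda_c(N) = lambda_c(inf) + a * N**(-b)."""
    return lam_inf + a * n ** (-b)

# Hypothetical pseudo-critical parameters from increasing basis/element counts N.
N = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
lam_N = 0.911 + 0.8 * N ** (-0.7)          # synthetic data; 0.911 is illustrative

(lam_inf, a, b), _ = curve_fit(fss_ansatz, N, lam_N, p0=(0.9, 1.0, 0.5))
print(f"extrapolated critical parameter: {lam_inf:.4f}, correction exponent b ≈ {b:.2f}")
```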
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-01-01
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, the secure and trustworthy energy supply requires real-time supervising and online power quality assessing. Harmonics measurement is necessary in power quality evaluation. However, under the large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which is the result of latencies in sensing or the communication process and brings deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network gets the characteristics of the electrical harmonics from practical data rather than the kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems. PMID:27548171
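As a stand-in for the NARX idea (not the paper's network or data), the sketch below retrodicts a delayed harmonic sample from its most recent in-sequence neighbours using a plain lagged-input linear regression; the sampling rate and harmonic content are assumed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Stand-in for the NARX idea (not the paper's network): predict the value a
# delayed (out-of-sequence) harmonic measurement should have had, from the
# most recent in-sequence samples.
rng = np.random.default_rng(3)
t = np.arange(0, 2, 1 / 3200)                          # 3.2 kHz sampling (assumed)
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 250 * t)  # 50 Hz + 5th harmonic
signal += rng.normal(0, 0.02, t.size)

lags = 8
X = np.column_stack([signal[i:-(lags - i)] for i in range(lags)])  # lagged inputs
y = signal[lags:]                                                  # value to retrodict
model = LinearRegression().fit(X[:-1000], y[:-1000])
rmse = float(np.sqrt(np.mean((model.predict(X[-1000:]) - y[-1000:]) ** 2)))
print(f"retrodiction RMSE on held-out samples: {rmse:.4f}")
```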
Chamberlin, Ralph V; Davis, Bryce F
2013-10-01
Disordered systems show deviations from the standard Debye theory of specific heat at low temperatures. These deviations are often attributed to two-level systems of uncertain origin. We find that a source of excess specific heat comes from correlations between quanta of energy if excitations are localized on an intermediate length scale. We use simulations of a simplified Creutz model for a system of Ising-like spins coupled to a thermal bath of Einstein-like oscillators. One feature of this model is that energy is quantized in both the system and its bath, ensuring conservation of energy at every step. Another feature is that the exact entropies of both the system and its bath are known at every step, so that their temperatures can be determined independently. We find that there is a mismatch in canonical temperature between the system and its bath. In addition to the usual finite-size effects in the Bose-Einstein and Fermi-Dirac distributions, if excitations in the heat bath are localized on an intermediate length scale, this mismatch is independent of system size up to at least 10^6 particles. We use a model for correlations between quanta of energy to adjust the statistical distributions and yield a thermodynamically consistent temperature. The model includes a chemical potential for units of energy, as is often used for other types of particles that are quantized and conserved. Experimental evidence for this model comes from its ability to characterize the excess specific heat of imperfect crystals at low temperatures.
Triangular-shaped landforms reveal subglacial drainage routes in SW Finland
NASA Astrophysics Data System (ADS)
Mäkinen, J.; Kajuutti, K.; Palmu, J.-P.; Ojala, A.; Ahokangas, E.
2017-05-01
The aim of this study is to present the first evidence of triangular-shaped till landforms and related erosional features indicative of subglacial drainage within the ice stream bed of the Scandinavian ice sheet in Finland. Previously unidentified grouped patterns of Quaternary deposits with triangular landforms can be recognized from LiDAR-based DEMs. The triangular landforms occur as segments within geomorphologically distinguishable routes that are associated with eskers. The morphological and sedimentological characteristics as well as the distribution of the triangular landforms are interpreted to involve the creep of saturated deforming till and flow and pressure fluctuations of subglacial meltwater associated with meltwater erosion. There are no existing models for the formation of this kind of large-scale drainage system, but we claim that they represent an efficient drainage system for subglacial meltwater transfer under high-pressure conditions. Our hypothesis is that the routed, large-scale subglacial drainage systems described herein form a continuum between channelized (eskers) and more widely spread small-scale distributed subglacial drainage. Moreover, the transition from conduit-dominated drainage to triangular-shaped subglacial landforms takes place about 50-60 km from the ice margin. We provide an important contribution towards a more realistic representation of ice sheet hydrological drainage systems that could be used to improve paleoglaciological models and to simulate likely responses of ice sheets to increased meltwater production.
Nonequilibrium steady state of a weakly-driven Kardar–Parisi–Zhang equation
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Sasorov, Pavel V.; Vilenkin, Arkady
2018-05-01
We consider an infinite interface in d > 2 dimensions, governed by the Kardar–Parisi–Zhang (KPZ) equation with a weak Gaussian noise which is delta-correlated in time and has short-range spatial correlations. We study the probability distribution of the interface height H at a point of the substrate, when the interface is initially flat. We show that, in stark contrast with the KPZ equation in d < 2, this distribution approaches a non-equilibrium steady state. The time of relaxation toward this state scales as the diffusion time over the correlation length of the noise. We study the steady-state distribution using the optimal-fluctuation method. The typical, small fluctuations of height are Gaussian. For these fluctuations the activation path of the system coincides with the time-reversed relaxation path, and the variance of H can be found from a minimization of the (nonlocal) equilibrium free energy of the interface. In contrast, the tails of the distribution are nonequilibrium, non-Gaussian and strongly asymmetric. To determine them we calculate, analytically and numerically, the activation paths of the system, which are different from the time-reversed relaxation paths. We show that the slower-decaying and the faster-decaying tails of the distribution scale differently with H. The slower-decaying tail has important implications for the statistics of directed polymers in a random potential.
Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns
NASA Astrophysics Data System (ADS)
Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro
2017-05-01
The recent availability of large-scale, time-resolved and high quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted the switch of paradigm from a static representation of networks to a time-varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, especially focusing on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in the evolution of social networks: the individual's propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions amongst social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local-scale behavior and analogous asymptotics, can lead to the same long-time and large-scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of tie allocation mechanisms and waiting time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
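A minimal activity-driven generator (a simplified version of the class of models reviewed above, with assumed parameter values) is sketched below: node activities are drawn from a heavy-tailed distribution and each active node fires m links per time step; the time-integrated degree then inherits the heterogeneity of the activity distribution.

```python
import numpy as np

# Minimal activity-driven temporal network (illustrative): node activities are
# drawn from a heavy-tailed distribution; at each time step an active node fires
# m links to uniformly chosen partners.  We track the time-integrated degree.
rng = np.random.default_rng(7)
N, m, steps = 2000, 2, 500
activity = 0.01 * rng.pareto(2.1, N) + 1e-3        # heavy-tailed activation rates
activity = np.clip(activity, None, 1.0)

partners = [set() for _ in range(N)]
for _ in range(steps):
    active = np.where(rng.random(N) < activity)[0]
    for i in active:
        for j in rng.integers(0, N, m):            # fire m links per activation
            if j != i:
                partners[i].add(int(j)); partners[int(j)].add(int(i))

degrees = np.array([len(p) for p in partners])
print(f"time-integrated degree: mean {degrees.mean():.1f}, max {degrees.max()}")
```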
Grech, Alana; Sheppard, James; Marsh, Helene
2011-01-01
Background Conservation planning and the design of marine protected areas (MPAs) require spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. Methodology and Results We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km²). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. Conclusion/Significance In this paper we demonstrate that spatially explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales. PMID:21464933
Outbreak statistics and scaling laws for externally driven epidemics.
Singh, Sarabjeet; Myers, Christopher R
2014-04-01
Power-law scalings are ubiquitous to physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1]∖{2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
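A toy stochastic simulation of the driven process, not the paper's analysis, is sketched below: a Markov SIR chain with a constant external force of infection per susceptible, run at the epidemic threshold, with the sizes of successive outbreaks (contiguous periods with at least one infectious host) recorded for inspection of the tail. Population size and rates are arbitrary.

```python
import numpy as np

# Toy Gillespie-style simulation of an SIR process with a constant external
# force of infection per susceptible; outbreak sizes are recorded per episode.
rng = np.random.default_rng(4)
N, beta, gamma, eps = 2000, 1.0, 1.0, 1e-4     # at the epidemic threshold beta = gamma

sizes, S, I, current = [], N, 0, 0
for _ in range(100_000):                        # generous cap on the number of events
    rates = np.array([beta * S * I / N + eps * S, gamma * I])   # infection, recovery
    total = rates.sum()
    if total == 0:                              # susceptibles exhausted, no infectious left
        break
    event = rng.choice(2, p=rates / total)
    if event == 0:
        S -= 1; I += 1; current += 1
    else:
        I -= 1
        if I == 0:                              # outbreak over: store its size
            sizes.append(current); current = 0

sizes = np.array(sizes)
print(f"{sizes.size} outbreaks, mean size {sizes.mean():.1f}, max size {sizes.max()}")
```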
Advanced Operating System Technologies
NASA Astrophysics Data System (ADS)
Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro
In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second, with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment, and at this moment neither CERN has the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high energy physics experiments, and the current research work in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, Video on Demand, and Distributed Multimedia Applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel based operating systems field and in the software engineering areas. The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended, to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, like POSIX, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications; we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system, suitable for simulation and large-scale verification.
Large Scale Frequent Pattern Mining using MPI One-Sided Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Agarwal, Khushbu
In this paper, we propose a work-stealing runtime --- Library for Work Stealing (LibWS) --- using the MPI one-sided model for designing scalable FP-Growth --- the de facto frequent pattern mining algorithm --- on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.
The time scale of quasifission process in reactions with heavy ions
NASA Astrophysics Data System (ADS)
Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.
2014-05-01
The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. These data have been measured with the double-arm time-of-flight spectrometer CORSET. The mass resolution of the spectrometer for these measurements was about 3 u, which allows the features of the mass distributions to be investigated with good accuracy. The dependence of the mass and total kinetic energy (TKE) of quasifission (QF) fragments on the interaction energy has been investigated and compared with the characteristics of the fusion-fission process. To describe the quasifission mass distribution, a simple method has been proposed. This method is based on the driving potential of the system and a time-dependent mass drift. The procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.
Locating inefficient links in a large-scale transportation network
NASA Astrophysics Data System (ADS)
Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu
2015-02-01
Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution if ΔT < 0 or ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
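The ΔT bookkeeping can be illustrated in a free-flow approximation: assign each OD pair to its shortest path, close one link at a time, and recompute total travel time. Without congestion-dependent travel times this cannot produce Braess's paradox (ΔT < 0); the sketch below, on a synthetic network with made-up OD pairs, only shows how the ΔT distribution is measured.

```python
import random
import networkx as nx

# Free-flow sketch of the Delta-T bookkeeping (illustrative only): each OD pair
# uses its shortest path; closing a link changes the total travel time.
rng = random.Random(5)
G = nx.connected_watts_strogatz_graph(100, 4, 0.1, seed=5)
for u, v in G.edges:
    G[u][v]["time"] = rng.uniform(1.0, 5.0)            # free-flow travel time

od_pairs = [(rng.randrange(100), rng.randrange(100)) for _ in range(200)]

def total_time(graph):
    t = 0.0
    for s, d in od_pairs:
        try:
            t += nx.shortest_path_length(graph, s, d, weight="time")
        except nx.NetworkXNoPath:
            t += 1e3                                   # heavy penalty for unreachable pairs
    return t

base = total_time(G)
deltas = []
for u, v in list(G.edges)[:50]:                        # probe the first 50 links
    H = G.copy(); H.remove_edge(u, v)
    deltas.append(total_time(H) - base)
print(f"largest Delta T from a single closure: {max(deltas):.1f}")
```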
A metabolite-centric view on flux distributions in genome-scale metabolic models
2013-01-01
Background Genome-scale metabolic models are important tools in systems biology. They permit the in-silico prediction of cellular phenotypes via mathematical optimisation procedures, most importantly flux balance analysis. Current studies on metabolic models mostly consider reaction fluxes in isolation. Based on a recently proposed metabolite-centric approach, we here describe a set of methods that enable the analysis and interpretation of flux distributions in an integrated metabolite-centric view. We demonstrate how this framework can be used for the refinement of genome-scale metabolic models. Results We applied the metabolite-centric view developed here to the most recent metabolic reconstruction of Escherichia coli. By compiling the balance sheets of a small number of currency metabolites, we were able to fully characterise the energy metabolism as predicted by the model and to identify a possibility for model refinement in NADPH metabolism. Selected branch points were examined in detail in order to demonstrate how a metabolite-centric view allows identifying functional roles of metabolites. Fructose 6-phosphate aldolase and the sedoheptulose bisphosphate bypass were identified as enzymatic reactions that can carry high fluxes in the model but are unlikely to exhibit significant activity in vivo. Performing a metabolite essentiality analysis, unconstrained import and export of iron ions could be identified as potentially problematic for the quality of model predictions. Conclusions The system-wide analysis of split ratios and branch points allows a much deeper insight into the metabolic network than reaction-centric analyses. Extending an earlier metabolite-centric approach, the methods introduced here establish an integrated metabolite-centric framework for the interpretation of flux distributions in genome-scale metabolic networks that can complement the classical reaction-centric framework. Analysing fluxes and their metabolic context simultaneously opens the door to systems biological interpretations that are not apparent from isolated reaction fluxes. Particularly powerful demonstrations of this are the analyses of the complete metabolic contexts of energy metabolism and the folate-dependent one-carbon pool presented in this work. Finally, a metabolite-centric view on flux distributions can guide the refinement of metabolic reconstructions for specific growth scenarios. PMID:23587327
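The metabolite-centric balance sheet can be illustrated on a toy stoichiometric network (not the E. coli reconstruction used here): solve a small flux balance analysis problem as a linear program, then split each metabolite's row of S·v into producing and consuming fluxes. The network, bounds and objective below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis plus a metabolite-centric "balance sheet"
# (illustrative; not the E. coli reconstruction used in the paper).
mets = ["A", "B", "ATP"]
rxns = ["uptake_A", "A_to_B", "B_to_biomass", "ATP_drain"]
S = np.array([                     # rows: metabolites, columns: reactions
    [1, -1,  0,  0],               # A:   produced by uptake, consumed by A_to_B
    [0,  1, -1,  0],               # B:   produced by A_to_B, consumed by biomass reaction
    [0,  2, -1, -1],               # ATP: 2 made per A_to_B, used by biomass and the drain
])
bounds = [(0, 10), (0, None), (0, None), (1, None)]   # uptake cap, maintenance ATP drain

# Maximise the biomass flux (linprog minimises, hence the minus sign).
c = np.array([0, 0, -1, 0])
res = linprog(c, A_eq=S, b_eq=np.zeros(len(mets)), bounds=bounds)
v = res.x

for i, m in enumerate(mets):       # balance sheet: which reactions make and use m
    contrib = S[i] * v
    makers = {rxns[j]: round(contrib[j], 2) for j in range(len(rxns)) if contrib[j] > 1e-9}
    users = {rxns[j]: round(-contrib[j], 2) for j in range(len(rxns)) if contrib[j] < -1e-9}
    print(f"{m}: produced by {makers}, consumed by {users}")
```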
Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.
Guo, Tianjiao; Englehardt, James D; Fallon, Howard J
While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
Testing optimal foraging theory in a penguin-krill system.
Watanabe, Yuuki Y; Ito, Motohiro; Takahashi, Akinori
2014-03-22
Food is heterogeneously distributed in nature, and understanding how animals search for and exploit food patches is a fundamental challenge in ecology. The classic marginal value theorem (MVT) formulates optimal patch residence time in response to patch quality. The MVT was generally proved in controlled animal experiments; however, owing to the technical difficulties in recording foraging behaviour in the wild, it has been inadequately examined in natural predator-prey systems, especially those in the three-dimensional marine environment. Using animal-borne accelerometers and video cameras, we collected a rare dataset in which the behaviour of a marine predator (penguin) was recorded simultaneously with the capture timings of mobile, patchily distributed prey (krill). We provide qualitative support for the MVT by showing that (i) krill capture rate diminished with time in each dive, as assumed in the MVT, and (ii) dive duration (or patch residence time, controlled for dive depth) increased with short-term, dive-scale krill capture rate, but decreased with long-term, bout-scale krill capture rate, as predicted from the MVT. Our results demonstrate that a single environmental factor (i.e. patch quality) can have opposite effects on animal behaviour depending on the time scale, emphasizing the importance of multi-scale approaches in understanding complex foraging strategies.
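The MVT prediction used above can be made concrete with a saturating gain function g(t): the optimal residence time t* satisfies g'(t*) = g(t*)/(t* + τ), where τ is the travel time between patches, so poorer long-term conditions (larger τ, lower bout-scale capture rate) lengthen t*. The gain function and parameter values below are illustrative, not fitted to the penguin-krill data.

```python
import numpy as np
from scipy.optimize import brentq

def gain(t, g_max=100.0, h=20.0):
    """Diminishing-returns energy gain in a patch (illustrative saturating form)."""
    return g_max * t / (t + h)

def dgain(t, g_max=100.0, h=20.0):
    return g_max * h / (t + h) ** 2

def optimal_residence(travel_time):
    """Marginal value theorem: g'(t*) equals the long-term rate g(t*)/(t* + travel)."""
    f = lambda t: dgain(t) - gain(t) / (t + travel_time)
    return brentq(f, 1e-6, 1e4)

for tau in (5, 20, 80):            # travel time between patches (a proxy for patch scarcity)
    print(f"travel time {tau:3d} s  ->  optimal residence ≈ {optimal_residence(tau):.1f} s")
```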
Universal scaling relations in scale-free structure formation
NASA Astrophysics Data System (ADS)
Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.
2018-07-01
A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^(-2) which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters, and even dark matter haloes. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^(-2) and a column density distribution with a power-law tail of dA/d ln Σ ∝ Σ^(-1). In the case where structure formation is controlled by gravity the two-point correlation becomes ξ_2D ∝ R^(-1). Furthermore, structures formed by such processes (e.g. young star clusters, DM haloes) tend to a ρ ∝ R^(-3) density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation, and detailed `full physics' hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
Characteristics of Low-Priced Solar Photovoltaic Systems in the United States
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nemet, Gregory F.; O'Shaughnessy, Eric; Wiser, Ryan H.
2016-01-01
Despite impressive recent cost reductions, there is wide dispersion in the prices of installed solar photovoltaic (PV) systems. We identify the most important factors that make a system likely to be low priced (LP). Our sample consists of detailed characteristics for 42,611 small-scale (< 15 kW) PV systems installed in 15 U.S. states during 2013. Using four definitions of LP systems, we compare LP and non-LP systems and find statistically significant differences in nearly all factors explored, including competition, installer scale, markets, demographics, ownership, policy, and system components. Logit and probit model results robustly indicate that LP systems are associated with markets with few active installers; experienced installers; customer ownership; large systems; retrofits; and thin-film, low-efficiency, and Chinese modules. We also find significant differences across states, with LP systems much more likely to occur in some than in others. Our focus on the left tail of the price distribution provides implications for policy that are distinct from recent studies of mean prices. While those studies find that PV subsidies increase mean prices, we find that subsidies also generate LP systems. PV subsidies appear to simultaneously shift and broaden the price distribution. Much of this broadening occurs in a particular location, northern California, which is worthy of further investigation with new data.
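A schematic of the logit setup, with entirely hypothetical features and synthetic labels rather than the study's 42,611-system dataset, is sketched below; it only shows how a low-priced/non-low-priced outcome would be regressed on market, installer and system characteristics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Schematic logit setup with invented features and synthetic labels (not the
# study's data): predict whether a PV system lands in the low-priced tail.
rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.poisson(8, n),             # number of active installers in the market
    rng.exponential(200, n),       # installer experience (prior installs)
    rng.uniform(2, 15, n),         # system size (kW)
    rng.integers(0, 2, n),         # customer-owned (1) vs third-party owned (0)
])
logit = -2 - 0.1 * X[:, 0] + 0.004 * X[:, 1] + 0.08 * X[:, 2] + 0.6 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # synthetic "low-priced" labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficient signs (installers, experience, size, ownership):",
      np.sign(model.coef_).astype(int))
```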
Driven fragmentation of granular gases.
Cruz Hidalgo, Raúl; Pagonabarraga, Ignacio
2008-06-01
The dynamics of homogeneously heated granular gases which fragment due to particle collisions is analyzed. We introduce a kinetic model which accounts for correlations induced at the grain collisions and analyze both the kinetics and the relevant distribution functions these systems develop. The work combines analytical and numerical studies based on direct simulation Monte Carlo calculations. A broad family of fragmentation probabilities is considered, and its implications for the system kinetics are discussed. We show that generically these driven materials evolve asymptotically into a dynamical scaling regime. If the fragmentation probability tends to a constant, the grain number diverges at a finite time, leading to a shattering singularity. If the fragmentation probability vanishes, then the number of grains grows monotonously as a power law. We consider different homogeneous thermostats and show that the kinetics of these systems depends weakly on both the grain inelasticity and the driving. We observe that fragmentation plays a relevant role in the shape of the velocity distribution of the particles. When the fragmentation is driven by local stochastic events, the long velocity tail is essentially exponential, independent of the heating frequency and the breaking rule. However, for a Lowe-Andersen thermostat, numerical evidence strongly supports the conjecture that the scaled velocity distribution follows a generalized exponential behavior f(c) ≈ exp(-c^n), with n ≈ 1.2, regardless of the fragmentation mechanism.
NASA Astrophysics Data System (ADS)
Visser, Philip W.; Kooi, Henk; Stuyfzand, Pieter J.
2015-05-01
Results are presented of a comprehensive thermal impact study on an aquifer thermal energy storage (ATES) system in Bilthoven, the Netherlands. The study involved monitoring of the thermal impact and modeling of the three-dimensional temperature evolution of the storage aquifer and over- and underlying units. Special attention was paid to non-uniformity of the background temperature, which varies laterally and vertically in the aquifer. Two models were applied with different levels of detail regarding initial conditions and heterogeneity of hydraulic and thermal properties: a fine-scale heterogeneity model, which represented the lateral and vertical temperature distribution more realistically, and a simplified model, which represented the aquifer system with only a limited number of homogeneous layers. Fine-scale heterogeneity was shown to be important to accurately model the ATES-impacted vertical temperature distribution, the maximum and minimum temperatures in the storage aquifer, and the spatial extent of the thermal plumes. The fine-scale heterogeneity model resulted in larger thermally impacted areas and larger temperature anomalies than the simplified model. The models showed that scattered and scarce monitoring data of ATES-induced temperatures can be interpreted in a useful way by groundwater and heat transport modeling, resulting in a realistic assessment of the thermal impact.
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad number of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent, complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR for large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services, and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise our experience in supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities, and fine-grained ACL management, while continuing to guarantee dependable and robust services.
Conceptual study of superconducting urban area power systems
NASA Astrophysics Data System (ADS)
Noe, Mathias; Bach, Robert; Prusseit, Werner; Willén, Dag; Goldacker, Wilfried; Poelchau, Juri; Linke, Christian
2010-06-01
Efficient transmission, distribution and usage of electricity are fundamental requirements for providing citizens, societies and economies with essential energy resources. It will be a major future challenge to integrate more sustainable generation resources, to meet growing electricity demand and to renew electricity networks. Research and development on superconducting equipment and components have an important role to play in addressing these challenges. Up to now, most studies on superconducting applications in power systems have concentrated on specific devices such as cables and current limiters. In contrast, the main focus of our study is to show the consequences of a large-scale integration of superconducting power equipment in distribution-level urban power systems. Specific objectives are to summarize the state of the art of superconducting power equipment, including cooling systems, and to compare the superconducting power system with conventional solutions in terms of energy and economic efficiency. Several scenarios were considered, starting from the replacement of an existing distribution-level sub-grid up to a fully superconducting urban-area distribution-level power system. One major result is that a fully superconducting urban-area distribution-level power system could be cost competitive with existing solutions in the future. In addition, superconducting power systems offer higher energy efficiency as well as a number of technical advantages, such as lower voltage drops and improved stability.
NASA Astrophysics Data System (ADS)
Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.
2017-12-01
Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the mizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the state couplings in the overall cost function of the centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed among the local controllers. The communication burden between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
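The sketch below is a minimal conceptual illustration of the communication pattern described above (neighbors exchange current states and previously optimized inputs), not the paper's quasi-infinite horizon algorithm or its stability constraints: two dynamically decoupled scalar subsystems each solve a local finite-horizon problem whose coupling term uses the neighbor's last communicated prediction. The dynamics, horizon, and weights are assumptions.

# Conceptual sketch only: distributed MPC-style iteration for two decoupled scalar
# subsystems coupled through the cost; each local controller optimizes its own input
# sequence using the neighbor's previously communicated trajectory.
import numpy as np
from scipy.optimize import minimize

N = 10                      # prediction horizon (assumed)
a, b = 0.9, 0.5             # identical scalar dynamics x+ = a*x + b*u (assumed)

def predict(x0, u):
    # roll the scalar dynamics forward over the horizon
    x = [x0]
    for uk in u:
        x.append(a * x[-1] + b * uk)
    return np.array(x[1:])

def local_cost(u, x0, x_neighbor_pred):
    x_pred = predict(x0, u)
    # local regulation cost + coupling penalty toward the neighbor's prediction
    return np.sum(x_pred**2 + 0.1 * u**2 + 0.5 * (x_pred - x_neighbor_pred)**2)

x = np.array([2.0, -1.5])                       # current states of the two subsystems
u_traj = [np.zeros(N), np.zeros(N)]             # last optimized inputs (communicated)

for step in range(20):
    # each controller receives its neighbor's predicted trajectory
    neighbor_pred = [predict(x[1], u_traj[1]), predict(x[0], u_traj[0])]
    for i in range(2):
        res = minimize(local_cost, u_traj[i], args=(x[i], neighbor_pred[i]))
        u_traj[i] = res.x
    # apply only the first input of each optimized sequence
    x = a * x + b * np.array([u_traj[0][0], u_traj[1][0]])
print("states after 20 steps:", x)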
Evolving Scale-Free Networks by Poisson Process: Modeling and Degree Distribution.
Feng, Minyu; Qu, Hong; Yi, Zhang; Xie, Xiurui; Kurths, Jurgen
2016-05-01
Since the great mathematician Leonhard Euler initiated the study of graph theory, networks have been one of the most significant research subjects across many disciplines. In recent years, the proposition of the small-world and scale-free properties of complex networks in statistical physics made network science intriguing again for many researchers. One of the challenges of network science is to propose rational models for complex networks. In this paper, in order to reveal the influence of the vertex generating mechanism of complex networks, we propose three novel models based on the homogeneous Poisson, nonhomogeneous Poisson, and birth-death processes, respectively, which can be regarded as typical scale-free networks and utilized to simulate practical networks. The degree distribution and exponent are analyzed and derived mathematically by different approaches. In simulations, we illustrate the modeling process, compute the degree distribution of empirical data by statistical methods, and assess the reliability of the proposed networks; the results show that our models reproduce the features of typical complex networks. Finally, some future challenges for complex systems are discussed.
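As a simplified illustration in the spirit of the homogeneous-Poisson variant (not the authors' exact model), the sketch below grows a network with Poisson vertex arrivals and degree-proportional attachment, then tabulates the empirical degree distribution. The arrival rate, time span, and attachment rule are illustrative assumptions.

# Simplified growth simulation: vertices arrive as a homogeneous Poisson process
# (exponential inter-arrival times) and attach to an existing vertex with probability
# proportional to its degree; the empirical degree distribution is then printed.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
rate = 1.0                      # Poisson arrival rate (assumed)
T = 5000.0                      # total simulated time (assumed)
degrees = [1, 1]                # start from a single edge between two vertices

t = 0.0
while True:
    t += rng.exponential(1.0 / rate)             # next arrival time
    if t > T:
        break
    probs = np.array(degrees) / sum(degrees)
    target = rng.choice(len(degrees), p=probs)   # preferential attachment
    degrees[target] += 1
    degrees.append(1)                            # new vertex enters with degree 1

counts = Counter(degrees)
for k in sorted(counts)[:10]:
    print(f"degree {k}: fraction {counts[k] / len(degrees):.4f}")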
Park, Sang-Won; Kim, Soree; Jung, YounJoon
2015-11-21
We study how dynamic heterogeneity in ionic liquids is affected by the length scale of structural relaxation and the ionic charge distribution, using molecular dynamics simulations performed on two differently charged models of ionic liquid and their uncharged counterpart. In one model of ionic liquid the charge distribution in the cation is asymmetric, in the other it is symmetric, while the neutral counterpart carries no charges on the ions. It is found that all the models display heterogeneous dynamics, exhibiting subdiffusive dynamics and a nonexponential decay of structural relaxation. We investigate the lifetime of dynamic heterogeneity, τ_dh, in these systems by calculating three-time correlation functions and find that τ_dh in general shows a power-law behavior with respect to the structural relaxation time, τ_α, i.e., τ_dh ∝ τ_α^ζ_dh. Although the dynamics of the asymmetric-charge model is seemingly more heterogeneous than that of the symmetric-charge model, the exponent is found to be similar, ζ_dh ≈ 1.2, for all the models studied in this work. The same scaling relation is found regardless of interactions, i.e., with or without the Coulomb interaction, and it holds even when the length scale of structural relaxation is long enough for the dynamics to become Fickian diffusion. This indicates that τ_dh is a time scale distinct from τ_α, and that the dynamic heterogeneity is mainly affected by the short-range interactions and the molecular structure.
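A small sketch of how the exponent ζ_dh in τ_dh ∝ τ_α^ζ_dh can be estimated by a log-log linear fit; the paired lifetimes below are synthetic placeholders standing in for values extracted from the three-time correlation analysis.

# Hedged sketch: estimate zeta_dh from paired (tau_alpha, tau_dh) values by linear
# regression in log-log space. The arrays are placeholders, not simulation results.
import numpy as np

tau_alpha = np.array([1.0, 3.2, 10.0, 31.6, 100.0])          # placeholder data
tau_dh = 0.8 * tau_alpha**1.2 * np.exp(np.random.default_rng(2).normal(0, 0.05, 5))

zeta, log_prefactor = np.polyfit(np.log(tau_alpha), np.log(tau_dh), 1)
print("estimated zeta_dh ≈", round(zeta, 3))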
Coordinate transformation by minimizing correlations between parameters
NASA Technical Reports Server (NTRS)
Kumar, M.
1972-01-01
The purpose of this investigation was to determine the transformation parameters (three rotations, three translations, and a scale factor) between two Cartesian coordinate systems from sets of coordinates given in both systems. The objective was the determination of well-separated transformation parameters with reduced correlations between each other, a problem especially relevant when the sets of coordinates are not well distributed. The above objective is achieved by preliminarily determining the three rotational parameters and the scale factor from the respective direction cosines and chord distances (these being independent of the translation parameters) between the common points, and then computing all seven parameters from a solution in which the rotations and the scale factor are entered as weighted constraints according to their variances and covariances obtained in the preliminary solutions. Numerical tests involving two geodetic reference systems were performed to evaluate the effectiveness of this approach.
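For comparison, a generic least-squares estimate of the seven similarity-transformation parameters from common points is sketched below; it uses a small-angle rotation matrix and does not implement the paper's two-step scheme with weighted constraints on the rotations and scale. All numbers are synthetic.

# Generic illustration (not the paper's method): direct least-squares fit of the
# seven-parameter transformation (3 rotations, 3 translations, scale) between two
# Cartesian frames, given the same points expressed in both frames.
import numpy as np
from scipy.optimize import least_squares

def transform(params, xyz):
    rx, ry, rz, tx, ty, tz, s = params
    # small-angle rotation matrix
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return (1.0 + s) * xyz @ R.T + np.array([tx, ty, tz])

def residuals(params, xyz_a, xyz_b):
    return (transform(params, xyz_a) - xyz_b).ravel()

rng = np.random.default_rng(3)
xyz_a = rng.uniform(-1e6, 1e6, size=(10, 3))                 # synthetic common points
true = np.array([1e-6, -2e-6, 3e-6, 5.0, -3.0, 1.0, 2e-6])   # synthetic parameters
xyz_b = transform(true, xyz_a) + rng.normal(0, 0.01, (10, 3))

fit = least_squares(residuals, np.zeros(7), args=(xyz_a, xyz_b))
print("estimated parameters:", fit.x)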
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived using Meijer's G-function.
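A hedged sketch of the compound (modulation) construction behind the IGG model: large-scale fluctuations drawn from a unit-mean inverse Gaussian multiply small-scale fluctuations drawn from a unit-mean gamma. The shape parameters below are illustrative choices, not values derived from atmospheric conditions.

# Sketch of compound sampling in the spirit of the IGG model: irradiance = X * Y,
# with X ~ inverse Gaussian (large-scale) and Y ~ gamma (small-scale), both unit mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 200_000
mu_ig = 0.5                                  # inverse Gaussian shape (assumed)
alpha = 4.0                                  # gamma shape for small-scale cells (assumed)

large_scale = stats.invgauss.rvs(mu_ig, scale=1.0 / mu_ig, size=n, random_state=rng)
small_scale = stats.gamma.rvs(alpha, scale=1.0 / alpha, size=n, random_state=rng)
irradiance = large_scale * small_scale       # modulated irradiance samples

print("mean irradiance ≈", irradiance.mean())
print("scintillation index ≈", irradiance.var() / irradiance.mean()**2)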
Waiting time distribution in public health care: empirics and theory.
Dimakou, Sofia; Dimakou, Ourania; Basso, Henrique S
2015-12-01
Excessive waiting times for elective surgery have been a long-standing concern in many national healthcare systems in the OECD. How do the hospital admission patterns that generate waiting lists affect different patients? What are the hospital characteristics that determine waiting times? By developing a model of healthcare provision and analysing empirically the entire waiting time distribution, we attempt to shed some light on those issues. We first build a theoretical model that describes the optimal waiting time distribution for capacity-constrained hospitals. Secondly, employing duration analysis, we obtain empirical representations of that distribution across hospitals in the UK from 1997 to 2005. We observe important differences in the 'scale' and the 'shape' of admission rates. Scale refers to how quickly patients are treated, and shape represents trade-offs across duration-treatment profiles. By fitting the theoretical to the empirical distributions we estimate the main structural parameters of the model and are able to closely identify the main drivers of these empirical differences. We find that the level of resources allocated to elective surgery (budget and physical capacity), which determines how constrained the hospital is, explains differences in scale. Changes in the benefit and cost structures of healthcare provision, which relate, respectively, to the desire to prioritise patients by duration and the reduction in costs due to delayed treatment, determine the shape, affecting short- and long-duration patients differently. JEL Classification: I11; I18; H51.
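As a stand-in for the duration analysis described above, the sketch below fits a Weibull model to synthetic waiting times; the shape and scale parameters loosely mirror the paper's 'shape' and 'scale' notions, but the data, parameter values, and choice of a Weibull form are placeholders rather than the authors' specification.

# Illustrative only: fit a Weibull duration model to (synthetic) elective-surgery
# waiting times and report the estimated shape and scale.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
waits_days = stats.weibull_min.rvs(1.4, scale=90, size=5000, random_state=rng)  # placeholder

shape, loc, scale = stats.weibull_min.fit(waits_days, floc=0)
print(f"fitted shape = {shape:.2f}, scale = {scale:.1f} days")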
Adare, A.; Afanasiev, S.; Aidala, C.; ...
2016-02-03
Measurements of midrapidity charged-particle multiplicity distributions, dN_ch/dη, and midrapidity transverse-energy distributions, dE_T/dη, are presented for a variety of collision systems and energies. Included are distributions for Au+Au collisions at √s_NN = 200, 130, 62.4, 39, 27, 19.6, 14.5, and 7.7 GeV, Cu+Cu collisions at √s_NN = 200 and 62.4 GeV, Cu+Au collisions at √s_NN = 200 GeV, U+U collisions at √s_NN = 193 GeV, d+Au collisions at √s_NN = 200 GeV, 3He+Au collisions at √s_NN = 200 GeV, and p+p collisions at √s_NN = 200 GeV. We present centrality-dependent distributions at midrapidity in terms of the number of nucleon participants, N_part, and the number of constituent quark participants, N_qp. For all A+A collisions down to √s_NN = 7.7 GeV, we observed that the midrapidity data are better described by scaling with N_qp than scaling with N_part. Finally, our estimates of the Bjorken energy density, ε_BJ, and the ratio of dE_T/dη to dN_ch/dη are presented, the latter of which is seen to be constant as a function of centrality for all systems.
[Effect on iron release in drinking water distribution systems].
Niu, Zhang-bin; Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Wang, Sheng-hui
2007-10-01
Batch-scale experiments were done to quantitatively study the effect of inorganic chemical parameters on iron release in drinking water distribution systems. The parameters include acid-base condition, oxidation-reduction condition, and neutral ion condition. It was found that the iron release rate decreased with increasing pH, alkalinity, and dissolved-oxygen concentration, and increased with increasing chloride concentration. A theoretical critical formula for the iron release rate was derived. According to the formula, the necessary conditions for controlling iron release are that pH is above 7.6, the alkalinity and dissolved-oxygen concentrations exceed 150 mg/L and 2 mg/L, respectively, and the chloride concentration is below 150 mg/L in the distributed water.
NASA Astrophysics Data System (ADS)
Martynova, A. I.; Orlov, V. V.
2014-10-01
Numerical simulations have been carried out in the general three-body problem with equal masses and zero initial velocities, to investigate the distribution of the decay times T based on a representative sample of initial conditions. The distribution has a power-law character on long time scales, f(T) ∝ T^(-α), with α = 1.74. Over small times T < 30 T_cr (T_cr is the mean crossing time for a component of the triple system), a series of local maxima separated by about 1.0 T_cr is observed in the decay-time distribution. These local peaks correspond to zones of decay after one or a few triple encounters. Figures showing the arrangement of these zones in the domain of the initial conditions are presented.
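A short sketch of one way to estimate the power-law exponent α of a decay-time distribution, using a maximum-likelihood (Hill-type) estimator above a cutoff T_min; this is an illustration with synthetic Pareto-distributed decay times, not the authors' procedure.

# Sketch under stated assumptions: Hill-type maximum-likelihood estimate of alpha
# for a density f(T) ∝ T^(-alpha) above T_min. The decay times are synthetic.
import numpy as np

rng = np.random.default_rng(6)
alpha_true, T_min = 1.74, 30.0
u = rng.uniform(size=20_000)
T = T_min * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))   # Pareto-distributed placeholders

tail = T[T >= T_min]
alpha_hat = 1.0 + len(tail) / np.sum(np.log(tail / T_min))
print("estimated alpha ≈", round(alpha_hat, 3))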