He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe
2013-01-01
Research on early warning systems for large-scale network security incidents is of great significance: such a system can improve a network's emergency response capability, mitigate the damage caused by cyber attacks, and strengthen the system's ability to counterattack. This paper presents a comprehensive early warning system that combines active measurement and anomaly detection, and mainly discusses the system's key visualization algorithms and technology. Plane visualization of a large-scale network is realized with a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale topologies are combined into one topology by an automatic distribution algorithm based on force analysis. Because the algorithm transforms the plane visualization of a large-scale network topology into a series of small-scale visualization and distribution problems, it is highly parallel and can handle the display of ultra-large-scale network topologies.
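The divide-and-conquer pipeline described above (partition the topology, lay out each small subgraph, then distribute the subgraphs) can be sketched as follows. The MLkP/CR partitioner and the force-analysis distribution algorithm are not specified in detail, so a greedy BFS partition and a fixed grid offset stand in for them here; only the three-stage structure follows the abstract.

```python
import math, random

def partition(nodes, adj, k):
    """Greedy BFS partitioning into roughly k parts -- a simple
    stand-in for the MLkP/CR algorithm named in the abstract."""
    parts, seen = [], set()
    target = max(1, len(nodes) // k)
    for start in nodes:
        if start in seen:
            continue
        part, queue = [], [start]
        while queue and len(part) < target:
            n = queue.pop(0)
            if n in seen:
                continue
            seen.add(n); part.append(n)
            queue.extend(m for m in adj[n] if m not in seen)
        if part:
            parts.append(part)
    return parts

def spring_layout(part, adj, iters=50):
    """Tiny force-directed layout for one small partition."""
    random.seed(0)
    pos = {n: [random.random(), random.random()] for n in part}
    for _ in range(iters):
        for n in part:
            fx = fy = 0.0
            for m in part:
                if m == n:
                    continue
                dx = pos[n][0] - pos[m][0]; dy = pos[n][1] - pos[m][1]
                d = math.hypot(dx, dy) + 1e-9
                rep = 0.01 / d**2                  # repulsion, all pairs
                att = d if m in adj[n] else 0.0    # attraction along edges
                fx += dx/d*rep - dx/d*att*0.05
                fy += dy/d*rep - dy/d*att*0.05
            pos[n][0] += fx; pos[n][1] += fy
    return pos

def layout(nodes, adj, k=2):
    """Divide and conquer: lay out each partition independently, then
    offset partitions on a grid (a stand-in for the force-based
    distribution step)."""
    full = {}
    for i, part in enumerate(partition(nodes, adj, k)):
        off = (i % 2 * 2.0, i // 2 * 2.0)
        for n, (x, y) in spring_layout(part, adj).items():
            full[n] = (x + off[0], y + off[1])
    return full
```

Because each partition is laid out independently, the per-partition layouts could run in parallel, which is the source of the parallelism the abstract claims.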
Workflow management in large distributed systems
NASA Astrophysics Data System (ADS)
Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.
2011-12-01
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. The monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decision-making) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to these four requirements are four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been a focus of research since the 1970s in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the best of the author's knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with a focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings.
First, it decomposes a complex system hierarchically; second, it groups the components at the same level into modules and designs common interfaces for all of the components in the same module; third, replications are made for critical agents and organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle the uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and the d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, the approach balances communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, in a real system, sub-Bayesian networks serving as nodes could be lost, and the communication network could be shut down due to partial damage in the system.
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for general large-scale complex systems. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system are demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
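The fully factorized Boyen-Koller approximation used above for local DBN belief updating can be illustrated on a toy DBN with two binary state variables. The conditional probability tables below are invented for illustration; the essential BK move is real: propagate the factored belief exactly through one time slice, then project the result back onto a product of independent marginals.

```python
from itertools import product

# Illustrative transition CPTs (not from the dissertation):
def p_x1(x, y):            # P(X'=1 | X=x, Y=y)
    return 0.8 if x == 1 else (0.4 if y == 1 else 0.1)

def p_y1(y):               # P(Y'=1 | Y=y)
    return 0.9 if y == 1 else 0.2

def bk_step(bx, by):
    """One fully factorized BK update of the belief (bx, by),
    where bx = P(X=1) and by = P(Y=1)."""
    nx = ny = 0.0
    for x, y in product((0, 1), repeat=2):
        # weight of joint state (x, y) under the factored belief
        w = (bx if x else 1 - bx) * (by if y else 1 - by)
        nx += w * p_x1(x, y)   # exact one-step marginal of X'
        ny += w * p_y1(y)      # exact one-step marginal of Y'
    return nx, ny              # projection: keep only the marginals

# iterate to the (approximate) stationary belief
bx, by = 0.5, 0.5
for _ in range(100):
    bx, by = bk_step(bx, by)
```

The projection discards the correlation between X' and Y' that the exact joint would carry; that loss of correlation is exactly the price BK pays for keeping the belief representation linear in the number of variables.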
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET, and EJB are widely used in traditional IT fields. However, the theory and practice of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between short-lived sessions and long transaction processing. The differences and trends among CORBA, .NET, and EJB are discussed in detail; then the concept, architecture, and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server, and a spatial data server. The design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans) are then explained. In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute, and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.
Supporting large scale applications on networks of workstations
NASA Technical Reports Server (NTRS)
Cooper, Robert; Birman, Kenneth P.
1989-01-01
Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.
NASA Astrophysics Data System (ADS)
Manfredi, Sabato
2016-06-01
Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environment monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics and interactions, and they demand increasingly computationally expensive methods for analysis and control design as network size and node/interaction complexity grow. Finding scalable computational methods for the distributed control design of large-scale networks is therefore a challenging problem. In this paper, we investigate the robust distributed stabilisation problem for large-scale nonlinear multi-agent systems (MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with the MATLAB toolbox. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on existing LMI-based results for MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in computational requirement in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties.
One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload of solving the LMIs may be shared among processors located at the network nodes, improving the scalability of the approach with network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
Distributed intrusion detection system based on grid security model
NASA Astrophysics Data System (ADS)
Su, Jie; Liu, Yahui
2008-03-01
Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, a distributed, load-balanced intrusion detection system can be realized. This paper first discusses the security mechanisms of grid computing and the role of PKI/CA in the grid security system, then shows how the characteristics of grid computing apply to a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security model that reduces processing delay while maintaining detection rates.
Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings
ERIC Educational Resources Information Center
Morales, Ramses Victor
2009-01-01
Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…
A multidisciplinary approach to the development of low-cost high-performance lightwave networks
NASA Technical Reports Server (NTRS)
Maitan, Jacek; Harwit, Alex
1991-01-01
Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
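The Gaussian-fitting step at the heart of laser stripe center extraction can be sketched as follows. A standard log-parabola fit (Caruana's method) stands in for the paper's procedure, and the compensation for the multiple source factors is omitted: since the log of a Gaussian is a parabola, a least-squares parabola fit to the log-intensities gives the sub-pixel stripe center as the parabola's vertex.

```python
import math

def gaussian_center(intensity):
    """Estimate the sub-pixel center of a laser-stripe cross-section
    by fitting a parabola y = a + b*x + c*x^2 to the log-intensities;
    the Gaussian center is the vertex -b/(2c)."""
    xs = [i for i, v in enumerate(intensity) if v > 0]
    ys = [math.log(intensity[i]) for i in xs]
    n = len(xs)
    # moments for the 3x3 normal equations of the quadratic fit
    Sx = sum(xs);                 Sx2 = sum(x*x for x in xs)
    Sx3 = sum(x**3 for x in xs);  Sx4 = sum(x**4 for x in xs)
    Sy = sum(ys)
    Sxy = sum(x*y for x, y in zip(xs, ys))
    Sx2y = sum(x*x*y for x, y in zip(xs, ys))
    A = [[n, Sx, Sx2], [Sx, Sx2, Sx3], [Sx2, Sx3, Sx4]]
    r = [Sy, Sxy, Sx2y]

    def det(M):
        return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
              - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
              + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

    D = det(A)

    def solve_col(j):           # Cramer's rule for coefficient j
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = r[i]
        return det(M) / D

    b, c = solve_col(1), solve_col(2)
    return -b / (2 * c)         # vertex of the parabola = Gaussian center
```

On a noise-free Gaussian profile the fit is exact; the paper's contribution is precisely the compensation applied when the real profile deviates from this ideal.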
Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brancucci Martinez-Anido, C.; Hodge, B. M.
2014-09-01
This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).
Network placement optimization for large-scale distributed system
NASA Astrophysics Data System (ADS)
Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng
2018-01-01
The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization is therefore a pressing issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy, and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is optimized by a global rough search followed by a local detailed search, with the clear advantage that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
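The grid-based genetic-algorithm encoding can be sketched as follows: each chromosome is a list of grid-cell indices, one per station, so crossover and mutation operate on discrete candidate sites rather than free coordinates. The coverage model, ranges, target set, and station count below are invented for illustration, and the fitness omits the coordinate-uncertainty and cost terms of the paper's objective function.

```python
import random

random.seed(1)
GRID = [(x, y) for x in range(5) for y in range(5)]    # candidate sites
TARGETS = [(random.uniform(0, 4), random.uniform(0, 4)) for _ in range(30)]
RANGE2 = 2.0 ** 2          # squared measurement range (assumed)
N_STATIONS = 3

def coverage(placement):
    """Fraction of targets seen by at least two stations
    (two lines of sight are needed for triangulation)."""
    hits = 0
    for tx, ty in TARGETS:
        seen = sum((tx - x) ** 2 + (ty - y) ** 2 <= RANGE2
                   for x, y in placement)
        hits += seen >= 2
    return hits / len(TARGETS)

def evolve(pop=30, gens=40):
    """Toy GA over grid-index chromosomes."""
    popu = [random.sample(range(len(GRID)), N_STATIONS) for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda g: -coverage([GRID[i] for i in g]))
        popu = popu[:pop // 2]                     # truncation selection
        while len(popu) < pop:
            a, b = random.sample(popu[:10], 2)
            # crossover on grid indices, de-duplicated then repaired
            child = list(dict.fromkeys(a[:2] + b))[:N_STATIONS]
            while len(child) < N_STATIONS:
                child.append(random.randrange(len(GRID)))
            if random.random() < 0.3:              # mutation: move a station
                child[random.randrange(N_STATIONS)] = random.randrange(len(GRID))
            popu.append(child)
    best = max(popu, key=lambda g: coverage([GRID[i] for i in g]))
    return [GRID[i] for i in best]
```

The discrete grid encoding is what makes the "global rough search" cheap; the paper then refines the winning placement with a local detailed search, which this sketch leaves out.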
Enterprise PACS and image distribution.
Huang, H K
2003-01-01
Around the world, the need to improve operational efficiency and deliver more cost-effective healthcare has led to the formation of many large-scale healthcare enterprises. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of PACS and image distribution as key technologies for cost-effective healthcare delivery at the enterprise level. As a result, many enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to give readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution and its characteristics and ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One system currently under development, an enterprise-level chest tuberculosis (TB) screening system in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.
'Fracking', Induced Seismicity and the Critical Earth
NASA Astrophysics Data System (ADS)
Leary, P.; Malin, P. E.
2012-12-01
Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. 
Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale, the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large-scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.
Analyzing Distributed Functions in an Integrated Hazard Analysis
NASA Technical Reports Server (NTRS)
Morris, A. Terry; Massie, Michael J.
2010-01-01
Large-scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult than for centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large-scale distributed systems, the paper will also describe how to analyze the effects of secondary functions on other secondary functions through the use of channelization.
Architecture and Programming Models for High Performance Intensive Computation
2016-06-29
[Garbled reference fragments recovered from the report: a DDDAS-LS applications paper at ICCS 2015, Reykjavík, Iceland; and "The Mahali project," Communications Magazine, vol. 52, pp. 111-133, Aug 2014. Distribution A: approved for public release.]
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction, with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach, with a speedup that scales quadratically with the number of partitions. D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds that of CuRe-D for low levels of decomposition, and D-SABRE proves more robust to variations in the loop gain.
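The partition-then-merge structure behind D-SABRE can be illustrated with a 1-D toy problem: integrate the measured slopes locally on each partition, then align the partitions by matching values across the partition boundaries. The real method works in 2-D with multivariate splines and a more sophisticated merging stage; this sketch only shows the decomposition idea.

```python
def reconstruct(slopes, n_parts):
    """1-D wavefront reconstruction from first differences
    (slopes[i] = phase[i+1] - phase[i]), done per-partition and then
    stitched together at the partition boundaries."""
    n = len(slopes) + 1                  # number of phase samples
    size = n // n_parts
    phase, offset = [], 0.0
    for p in range(n_parts):
        lo = p * size
        hi = n if p == n_parts - 1 else lo + size
        local = [0.0]                    # local integration from zero
        for i in range(lo, hi - 1):
            local.append(local[-1] + slopes[i])
        if p > 0:                        # align with previous partition
            offset = phase[-1] + slopes[lo - 1]
        phase.extend(v + offset for v in local)
    mean = sum(phase) / len(phase)
    return [v - mean for v in phase]     # remove unobservable piston
```

The local integrations are independent and could run in parallel, one per partition, which is where the quadratic speedup with partition count comes from in the real method.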
Optical interconnect for large-scale systems
NASA Astrophysics Data System (ADS)
Dress, William
2013-02-01
This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
NASA's Information Power Grid: Large Scale Distributed Computing and Data Management
NASA Technical Reports Server (NTRS)
Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)
2001-01-01
Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations, e.g. whole-system aircraft simulation and whole-system living-cell simulation, require integrating applications and data that are developed by different teams of researchers, frequently in different locations. These research teams are the only ones with the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
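A minimal sketch of the kind of fully distributed iterative WLS solver the abstract describes, assuming a chain network where each sub-system owns one unknown and one noisy measurement that also involves its neighbors' unknowns. Plain gradient descent with a step-size (scaling) parameter stands in for the paper's scaled, preconditioned iteration; each node's update uses only quantities available from itself and its immediate neighbors. The network, weights, and parameters below are invented for illustration.

```python
N = 5
nbrs = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}
true_theta = [1.0, -2.0, 0.5, 3.0, -1.0]

# Measurement matrix: node k measures its own unknown plus a 0.4-weighted
# contribution from each neighbor's unknown.
H = [[0.0] * N for _ in range(N)]
for i in range(N):
    H[i][i] = 1.0
    for j in nbrs[i]:
        H[i][j] = 0.4
w = [1.0, 2.0, 1.0, 0.5, 1.0]     # per-node measurement weights
# noise-free measurements, so the WLS optimum equals true_theta
y = [sum(H[i][j] * true_theta[j] for j in range(N)) for i in range(N)]

def distributed_wls(steps=4000, alpha=0.15):
    """Distributed gradient descent on the WLS cost
    0.5 * sum_k w_k (H_k theta - y_k)^2."""
    theta = [0.0] * N
    for _ in range(steps):
        # each node k forms its local residual and shares it with neighbors
        res = [sum(H[k][j] * theta[j] for j in (k, *nbrs[k])) - y[k]
               for k in range(N)]
        # node i descends along its own coordinate of the gradient,
        # using only residuals from itself and its neighbors
        grad = [sum(w[k] * H[k][i] * res[k] for k in (i, *nbrs[i]))
                for i in range(N)]
        theta = [t - alpha * g for t, g in zip(theta, grad)]
    return theta
```

The step size alpha plays the role of the paper's scaling parameter: it must stay below 2 over the largest eigenvalue of the weighted normal matrix for the iteration to converge, and choosing it well (or preconditioning) maximizes the convergence rate.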
Novel Directional Protection Scheme for the FREEDM Smart Grid System
NASA Astrophysics Data System (ADS)
Sharma, Nitish
This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large-scale system simulation (LSSS) is a system-level PSCAD model used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and power-electronics-based solid state transformers adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition the Solid State Transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded that system as unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. Wireless communication is used to protect the large-scale meshed distributed generation from faults. The trip signal generated by the pilot protection system triggers the fault isolation devices (FIDs), which are electronic circuit breakers, to open. The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The protection system is validated with a hardware model built from commercial relays at the ASU power laboratory.
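The pilot-protection decision itself can be sketched as a current-differential comparison: relays at the two ends of a protected segment exchange current measurements over the communication channel, and a large mismatch indicates an internal fault, tripping the FIDs and blocking the SST. The pickup and restraint values below are assumptions for illustration; the 2.0 per-unit SST fault limit follows the abstract.

```python
PICKUP = 0.2        # differential pickup threshold in per unit (assumed)
SLOPE = 0.3         # percentage-restraint slope (assumed)
FAULT_LIMIT = 2.0   # SST clamps fault current at 2.0 p.u. (from abstract)

def pilot_protect(i_local, i_remote):
    """Return (trip_fids, block_sst) from the currents measured at the
    two ends of the segment, both signed and measured into the segment.
    For through-load the currents cancel; for an internal fault both
    ends feed in and the differential current is large."""
    i_diff = abs(i_local + i_remote)
    i_restraint = (abs(i_local) + abs(i_remote)) / 2
    trip = i_diff > PICKUP + SLOPE * i_restraint
    # per the abstract, the trip signal both opens the FIDs and must be
    # accepted by the SST, which blocks its operation immediately
    return trip, trip
```

With these settings, a through-load condition such as `pilot_protect(1.0, -1.0)` does not trip, while an internal fault fed from both ends, e.g. `pilot_protect(2.0, 0.8)` with the SST clamping its side at the 2.0 p.u. limit, does.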
Information Power Grid Posters
NASA Technical Reports Server (NTRS)
Vaziri, Arsi
2003-01-01
This document summarizes the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.
Distributed Coordinated Control of Large-Scale Nonlinear Networks
Kundu, Soumya; Anghel, Marian
2015-11-08
We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems, and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.
Distributed weighted least-squares estimation with fast convergence for large-scale systems
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement that is linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm that asymptotically computes the global optimal estimate; its convergence rate is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm that computes the global optimal estimate in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
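One generic way to realise estimation through neighborhood communication is consensus averaging of local normal-equation terms, after which every node solves the same averaged system. This is a sketch of that generic idea, not the paper's specific scaled and preconditioned algorithm; the ring topology, Metropolis weights, and noiseless measurements are illustrative choices:

```python
import numpy as np

# Each sub-system i holds a local linear measurement y_i = A_i @ x + v_i
# (noiseless here for clarity) and shares only with its ring neighbours.
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
A = [rng.standard_normal((1, 2)) for _ in range(4)]   # one row per node
y = [Ai @ x_true for Ai in A]

P = [Ai.T @ Ai for Ai in A]                # local A_i^T A_i (unit weights)
q = [Ai.T @ yi for Ai, yi in zip(A, y)]    # local A_i^T y_i

# Metropolis weights on a 4-node ring: each node averages with 2 neighbours.
W = np.array([[1/3, 1/3, 0,   1/3],
              [1/3, 1/3, 1/3, 0],
              [0,   1/3, 1/3, 1/3],
              [1/3, 0,   1/3, 1/3]])

for _ in range(200):                       # neighbourhood communication rounds
    P = [sum(W[i, j] * P[j] for j in range(4)) for i in range(4)]
    q = [sum(W[i, j] * q[j] for j in range(4)) for i in range(4)]

# After consensus, every node solves the (averaged) global normal equations.
x_hat = np.linalg.solve(P[0], q[0]).ravel()
print(x_hat)                               # ≈ [ 1. -2.]
```

Because averaging the terms rescales both sides of the normal equations equally, each node recovers the centralized weighted least-squares estimate.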
High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering
NASA Technical Reports Server (NTRS)
Maly, K.
1998-01-01
Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning.
The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes two main contributions: (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems, and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, with techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
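The subscription-based filtering idea above can be conveyed with a minimal content-based filter: consumers register predicates, and only matching events are forwarded, cutting monitoring traffic at the source. This is an illustrative sketch, not the paper's architecture; class and field names are assumptions:

```python
class EventFilter:
    """Minimal content-based event filter: subscribers register predicates,
    and publish() forwards each event only to matching subscribers."""

    def __init__(self):
        self.subscriptions = []            # list of (predicate, callback)

    def subscribe(self, predicate, callback):
        self.subscriptions.append((predicate, callback))

    def publish(self, event):
        delivered = 0
        for predicate, callback in self.subscriptions:
            if predicate(event):           # filter close to the source
                callback(event)
                delivered += 1
        return delivered

# A debugging tool subscribes only to high-severity events from node "n7".
received = []
f = EventFilter()
f.subscribe(lambda e: e["severity"] >= 3 and e["node"] == "n7", received.append)
f.publish({"node": "n7", "severity": 5, "msg": "disk full"})    # forwarded
f.publish({"node": "n2", "severity": 5, "msg": "disk full"})    # filtered out
f.publish({"node": "n7", "severity": 1, "msg": "heartbeat"})    # filtered out
```

Only one of the three events reaches the subscriber, illustrating how filtering reduces the event traffic that must cross the system.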
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet and use them to run large-scale environmental simulations and models, serving the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language, JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable visitors to volunteer their computer resources toward running advanced hydrological models and simulations. A web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform that enables large-scale hydrological simulations and model runs in an open and integrated environment.
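The split/queue/aggregate pattern described above can be sketched compactly: the server partitions a model run into small spatial tiles, volunteers pull tasks from a queue, and partial results are aggregated. Threads here stand in for the paper's JavaScript browser clients, and the toy runoff kernel is an illustrative assumption:

```python
import queue
import threading

def runoff(tile):
    # Stand-in for a hydrologic kernel: accumulate rainfall over the tile.
    return sum(tile)

tasks = queue.Queue()
results = queue.Queue()
rainfall = list(range(100))                 # the full model domain
for i in range(0, 100, 10):                 # split into 10-cell tiles
    tasks.put(rainfall[i:i + 10])

def volunteer():
    """A volunteer node: pull tasks until the queue is drained."""
    while True:
        try:
            tile = tasks.get_nowait()
        except queue.Empty:
            return
        results.put(runoff(tile))           # send the partial result back

workers = [threading.Thread(target=volunteer) for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()

total = sum(results.get() for _ in range(10))
print(total)                                # equals sum(range(100)) = 4950
```

The aggregated total matches a single-machine run of the full domain, which is the invariant any such task-splitting scheme must preserve.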
Efficient On-Demand Operations in Large-Scale Infrastructures
ERIC Educational Resources Information Center
Ko, Steven Y.
2009-01-01
In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…
1996-04-01
Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real-time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Broderick, Robert; Mather, Barry
2016-05-01
This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate the minimum DGPV hosting capacity for the contiguous United States at approximately 170 GW using traditional inverters, without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems.
This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very-high deployment levels envisioned by SunShot.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2018-01-23
Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.
Large-area photogrammetry based testing of wind turbine blades
NASA Astrophysics Data System (ADS)
Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul
2017-03-01
An optically based sensing system that can measure displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) the elimination of the need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of a utility-scale blade and extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system in measuring full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented.
Blade certification and testing are typically performed per the International Electrotechnical Commission standard IEC 61400-23. For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations on a large blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) to measure 3D displacement and extract structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.
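Stitching two cameras' point clouds requires expressing both in one coordinate frame. A common way to do this (an assumption here, not necessarily the authors' exact method) is a rigid least-squares fit over shared target points via the Kabsch/Procrustes algorithm:

```python
import numpy as np

def kabsch(p, q):
    """Rotation r and translation t minimising sum ||r @ p_i + t - q_i||^2."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    h = (p - pc).T @ (q - qc)                 # cross-covariance of centred sets
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, qc - r @ pc

rng = np.random.default_rng(1)
cloud_a = rng.standard_normal((20, 3))        # targets seen by camera A
theta = 0.4                                   # ground-truth relative pose
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
cloud_b = cloud_a @ r_true.T + np.array([5.0, -2.0, 0.3])  # camera B's view

r_est, t_est = kabsch(cloud_a, cloud_b)
aligned = cloud_a @ r_est.T + t_est           # stitch A into B's frame
print(np.abs(aligned - cloud_b).max())        # ≈ 0 (machine precision)
```

With noiseless correspondences the recovered pose is exact; with real DIC data the same fit gives the least-squares alignment over the shared field of view.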
Resource Management for Distributed Parallel Systems
NASA Technical Reports Server (NTRS)
Neuman, B. Clifford; Rao, Santosh
1993-01-01
Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.
Shared versus distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Jordan, Harry F.
1991-01-01
The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed-memory machines, while others argue just as strongly for programming shared-memory multiprocessors. A great deal of research is underway on both types of parallel systems. This discussion places special emphasis on systems with a very large number of processors for computation-intensive tasks and considers research and implementation trends. It appears that the two types of systems will likely converge to a common form for large-scale multiprocessors.
On Predictability of System Anomalies in Real World
2011-08-01
distributed system SETI@home [44]. Different from the above work, this work focuses on quantifying the predictability of real-world system anomalies. [...] ..., J.-M. Vincent, and D. Anderson, "Mining for statistical models of availability in large-scale distributed systems: An empirical study of SETI@home," in Proc. of MASCOTS, Sept. 2009.
Design of distributed PID-type dynamic matrix controller for fractional-order systems
NASA Astrophysics Data System (ADS)
Wang, Dawei; Zhang, Ridong
2018-01-01
With continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations, however, may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is transformed into the optimisation of each small-scale subsystem, each regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
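The Oustaloup step mentioned above approximates a fractional power s**alpha with an integer-order filter whose pole/zero pairs are distributed log-uniformly over a chosen frequency band. A sketch of the standard recursive formula follows; the band limits, order, and test frequency are illustrative choices, not the paper's settings:

```python
import cmath  # noqa: F401  (complex arithmetic via 1j below)
import math

def oustaloup(alpha, wb=0.01, wh=100.0, n=4):
    """Gain k and zero/pole lists approximating s**alpha over [wb, wh].

    Standard Oustaloup recursive approximation with 2n+1 pole/zero pairs:
      zero_k = wb * (wh/wb) ** ((k + n + (1 - alpha)/2) / (2n + 1))
      pole_k = wb * (wh/wb) ** ((k + n + (1 + alpha)/2) / (2n + 1))
      k_gain = wh ** alpha
    """
    zeros, poles = [], []
    for k in range(-n, n + 1):
        e = (k + n + 0.5 * (1 - alpha)) / (2 * n + 1)
        zeros.append(wb * (wh / wb) ** e)
        e = (k + n + 0.5 * (1 + alpha)) / (2 * n + 1)
        poles.append(wb * (wh / wb) ** e)
    return wh ** alpha, zeros, poles

def freq_mag(alpha, w):
    """Magnitude of the approximating filter at frequency w (rad/s)."""
    k, zeros, poles = oustaloup(alpha)
    h = complex(k)
    for z, p in zip(zeros, poles):
        h *= (1j * w + z) / (1j * w + p)
    return abs(h)

# Mid-band the filter should behave like |(jw)**0.5| = sqrt(w).
print(freq_mag(0.5, 1.0))    # close to 1.0
print(freq_mag(0.5, 10.0))   # close to sqrt(10)
```

The resulting rational transfer function is what the step-response model vectors of the dynamic matrix controller are then built from.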
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling is carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
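The first step above, grouping controlled variables into subsystems by affinity propagation, can be sketched with a plain implementation of the message-passing updates. This is a generic textbook version (negative-squared-distance similarity, median preference), not the paper's configuration, and the toy points stand in for variables' feature vectors:

```python
def affinity_propagation(points, damping=0.5, iters=200):
    """Cluster points by affinity propagation; returns an exemplar index
    per point (points sharing an exemplar form one cluster/subsystem)."""
    n = len(points)
    # Similarity: negative squared Euclidean distance.
    s = [[-sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
          for j in range(n)] for i in range(n)]
    # Preference (self-similarity): median of off-diagonal similarities.
    off = sorted(s[i][j] for i in range(n) for j in range(n) if i != j)
    for i in range(n):
        s[i][i] = off[len(off) // 2]
    r = [[0.0] * n for _ in range(n)]      # responsibilities
    a = [[0.0] * n for _ in range(n)]      # availabilities
    for _ in range(iters):
        for i in range(n):                 # responsibility updates
            best = [a[i][k] + s[i][k] for k in range(n)]
            for k in range(n):
                m = max(best[j] for j in range(n) if j != k)
                r[i][k] = damping * r[i][k] + (1 - damping) * (s[i][k] - m)
        for k in range(n):                 # availability updates
            pos = [max(0.0, r[i][k]) for i in range(n)]
            total = sum(pos) - pos[k]
            a[k][k] = damping * a[k][k] + (1 - damping) * total
            for i in range(n):
                if i != k:
                    v = min(0.0, r[k][k] + total - pos[i])
                    a[i][k] = damping * a[i][k] + (1 - damping) * v
    return [max(range(n), key=lambda k: a[i][k] + r[i][k]) for i in range(n)]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
labels = affinity_propagation(pts)
print(labels)   # two exemplar indices: one shared per cluster
```

Unlike k-means, the number of clusters (subsystems) emerges from the preference value rather than being fixed in advance, which suits decomposition problems where the subsystem count is unknown.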
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ruchika Mehra Vijayan, E.
2017-11-01
This paper aims to illustrate real-time analysis of large-scale data. For practical implementation, we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiment, we train our data model on SentiWordNet, a polarity-annotated WordNet sample from Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
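The per-tweet scoring step described above can be sketched as a lexicon lookup: sum the polarity of each word and threshold the total. The tiny polarity table below is a stand-in with made-up values, not real SentiWordNet scores, and plain Python stands in for the Spark/Hadoop pipeline:

```python
# Illustrative mini-lexicon (stand-in for SentiWordNet polarity scores).
POLARITY = {"good": 0.7, "great": 0.9, "love": 0.8,
            "bad": -0.7, "terrible": -0.9, "hate": -0.8}

def tweet_sentiment(text):
    """Sum word polarities; > 0 -> positive, < 0 -> negative, else neutral."""
    score = sum(POLARITY.get(w.strip(".,!?").lower(), 0.0)
                for w in text.split())
    return ("positive" if score > 0
            else "negative" if score < 0
            else "neutral")

print(tweet_sentiment("I love this, great stuff!"))    # positive
print(tweet_sentiment("terrible service, I hate it"))  # negative
```

In the distributed setting this pure per-record function is exactly what a Spark map stage would apply to each tweet in parallel, with the lexicon broadcast to every worker.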
White Paper on Dish Stirling Technology: Path Toward Commercial Deployment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andraka, Charles E.; Stechel, Ellen; Becker, Peter
2016-07-01
Dish Stirling energy systems have been developed for distributed and large-scale utility deployment. This report summarizes the state of the technology in a joint project between Stirling Energy Systems, Sandia National Laboratories, and the Department of Energy in 2011. It then lays out a feasible path to large scale deployment, including development needs and anticipated cost reduction paths that will make a viable deployment product.
Distributed intelligent urban environment monitoring system
NASA Astrophysics Data System (ADS)
Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang
2018-02-01
Current environmental pollution and destruction have developed into a worldwide major social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance but, overall, current environmental monitoring systems face a series of problems. Based on electrochemical sensors, this paper designs a small, low-cost, easy-to-deploy urban environmental quality monitoring terminal; multiple terminals constitute a distributed network. The system has been deployed in small-scale demonstration applications, which have confirmed that it is suitable for large-scale promotion.
Universal distribution of component frequencies in biological and technological systems
Pang, Tin Yau; Maslov, Sergei
2013-01-01
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution, with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power-law distribution of component frequencies is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree, given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple, mathematically tractable model introduced and analyzed in this study.
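Power-law claims like the one above are usually checked by estimating the exponent with the standard maximum-likelihood estimator for a continuous power law with lower cutoff xmin (the Clauset-Shalizi-Newman form): alpha_hat = 1 + n / sum(ln(x_i / xmin)). A sketch on synthetic data (this is generic methodology, not the paper's exact fitting procedure):

```python
import math
import random

def powerlaw_mle(xs, xmin):
    """MLE exponent for a continuous power law p(x) ~ x**(-alpha), x >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: draw from a Pareto law with alpha = 2.5 via inverse CDF,
# x = xmin * (1 - u) ** (-1 / (alpha - 1)) for uniform u.
random.seed(42)
alpha, xmin = 2.5, 1.0
xs = [xmin * (1 - random.random()) ** (-1 / (alpha - 1)) for _ in range(20000)]
print(powerlaw_mle(xs, xmin))   # close to 2.5
```

The estimator's standard error is roughly (alpha - 1) / sqrt(n), so with 20,000 samples the recovered exponent lands well within a few hundredths of the true value.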
Simulation Framework for Intelligent Transportation Systems
DOT National Transportation Integrated Search
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...
A unifying framework for systems modeling, control systems design, and system operation
NASA Technical Reports Server (NTRS)
Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.
2005-01-01
Current engineering practice in the analysis and design of large-scale multi-disciplinary control systems is typified by some form of decomposition, whether functional, physical, or discipline-based, that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System-of-systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.
Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are the main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution of these kelp forests along the continent, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp cover at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° of latitude apart along the east and west coasts of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. The maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. This extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.
NASA Astrophysics Data System (ADS)
Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua
2015-12-01
Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky-wave ground backscatter radar are both capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly demonstrates that both methods output accurate information on ionospheric electron density and thereby provide reliable approaches to ionospheric sounding. Our study can improve current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.
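Tomographic inversions like CIT recover a density grid from ray-integrated measurements (slant TEC along receiver-satellite paths). A common solver for such problems is ART, the cyclic Kaczmarz row-projection iteration, sketched here on a toy 2-D grid; the geometry and values are illustrative, not from the paper:

```python
import numpy as np

def kaczmarz(a, b, iters=500):
    """Solve the consistent system a @ x = b by cyclic row projections (ART).

    Starting from zero, the iterate converges to the minimum-norm solution,
    which is the usual regularised choice for underdetermined tomography.
    """
    x = np.zeros(a.shape[1])
    for _ in range(iters):
        for ai, bi in zip(a, b):
            x += (bi - ai @ x) / (ai @ ai) * ai   # project onto hyperplane i
    return x

# Toy "ionosphere": a 3x3 density grid observed by rays that integrate
# density along rows, columns, and one diagonal.
density = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])
rays = []
for r in range(3):                         # horizontal rays
    m = np.zeros(9); m[3 * r:3 * r + 3] = 1.0; rays.append(m)
for c in range(3):                         # vertical rays
    m = np.zeros(9); m[c::3] = 1.0; rays.append(m)
rays.append(np.eye(3).ravel())             # one diagonal ray
a = np.array(rays)
b = a @ density                            # simulated TEC-like measurements

x = kaczmarz(a, b)
print(np.max(np.abs(a @ x - b)))           # residual is near zero
```

With only 7 rays for 9 unknowns the grid is not uniquely determined, mirroring real CIT, where the reconstruction fits all ray integrals and regularisation picks among the consistent solutions.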
NASA Astrophysics Data System (ADS)
Singh, Surya P. N.; Thayer, Scott M.
2002-02-01
This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from the constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific and mediated responses based on each unit's utility and capability to address the system's perceived needs in a timely manner. This method improves on initial developments in this area by including often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operation and performance improvements for large-scale multi-robot coordination and control.
Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus
2018-08-01
Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full-scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km, and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. As in previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months than in the summer months. In addition to temperature effects, the seasonal variations were also likely influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system.
However, temporal variations were consistently stronger than spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical for determining the impact of changes in source water quality, environmental conditions, and process operations on changes in microbial community composition in the drinking water distribution system.
Daniel J. Isaak; Russell F. Thurow
2006-01-01
Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km2) in...
NASA Astrophysics Data System (ADS)
Dednam, W.; Botha, A. E.
2015-01-01
Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique of finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle-number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell.
In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solute, we conclude that the finite size scaling method is also applicable to the present case, since it produces results equivalent to those of the more costly radial distribution function method at a lower computational cost.
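The two estimators compared above can be sketched in a few lines. The following Python sketch is illustrative only (function names and the ideal-gas toy data are assumptions, not the authors' code): for an ideal gas, where g(r) = 1 everywhere and particle counts are Poisson-distributed, both the running integral over the radial distribution function and the sub-volume fluctuation formula should yield a vanishing Kirkwood-Buff integral.

```python
import numpy as np

# Method 1: running Kirkwood-Buff integral over a radial distribution
# function, G(R) = 4*pi * integral_0^R (g(r) - 1) r^2 dr (trapezoid rule).
def kb_running_integral(r, g):
    integrand = (g - 1.0) * r**2
    return 4.0 * np.pi * 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(r))

# Method 2: particle-number fluctuations in an open sub-volume V,
# G = V * (<N^2> - <N>^2) / <N>^2 - V / <N>  (same-species case).
def kb_from_fluctuations(counts, volume):
    n_mean = counts.mean()
    n_var = counts.var()
    return volume * n_var / n_mean**2 - volume / n_mean

# Toy check: an ideal gas has g(r) = 1 and Poisson number fluctuations
# (<N^2> - <N>^2 = <N>), so both estimates of G should be (near) zero.
r = np.linspace(0.0, 5.0, 500)
g_ideal = np.ones_like(r)
rng = np.random.default_rng(0)
counts = rng.poisson(lam=100.0, size=200_000).astype(float)

G1 = kb_running_integral(r, g_ideal)
G2 = kb_from_fluctuations(counts, volume=1.0)
```

The fluctuation route needs no long-range tail correction to g(r), which is why it can get away with much smaller simulation cells.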
Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.
Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi
2015-09-18
Temperature distribution is a critical indicator of the health condition for Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensors networks, high temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-09-25
The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically-interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections. In addition to the theoretical findings, which include a rigorous stability and boundedness analysis of the closed-loop dynamical system as well as a characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on overall system performance, an illustrative numerical example demonstrates the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
Computer-generated forces in distributed interactive simulation
NASA Astrophysics Data System (ADS)
Petty, Mikel D.
1995-04-01
Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.
Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction
NASA Technical Reports Server (NTRS)
Li, Zhijin; Chao, Yi; Li, P. Peggy
2012-01-01
A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows large scales and model bias to be effectively constrained through the assimilation of sparse vertical profiles, and small scales through the assimilation of high-resolution surface measurements. MS-3DVAR enhances the capability of the traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and particularly for maximizing the extraction of information from limited numbers of vertical profile observations.
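Schematically, the two-unit cost-function decomposition described above can be written in incremental 3DVAR form. The notation below is generic variational-assimilation convention, assumed for illustration rather than taken from the paper:

```latex
% Large-scale increment solved first, with a long-correlation covariance B_L:
J_L(\delta x_L) = \tfrac{1}{2}\,\delta x_L^{\mathrm{T}} B_L^{-1}\,\delta x_L
  + \tfrac{1}{2}\,(H\,\delta x_L - d)^{\mathrm{T}} R_L^{-1}\,(H\,\delta x_L - d)

% Small-scale increment solved second, against the residual innovation
% d_S = d - H\,\delta x_L^{a}, with a short-correlation covariance B_S:
J_S(\delta x_S) = \tfrac{1}{2}\,\delta x_S^{\mathrm{T}} B_S^{-1}\,\delta x_S
  + \tfrac{1}{2}\,(H\,\delta x_S - d_S)^{\mathrm{T}} R_S^{-1}\,(H\,\delta x_S - d_S)
```

Because B_L carries long correlation scales, a few sparse vertical profiles can constrain the large-scale analysis and the model bias; the short scales in B_S then let dense surface measurements correct the fine structure.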
A distributed parallel storage architecture and its potential application within EOSDIS
NASA Technical Reports Server (NTRS)
Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony
1994-01-01
We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
“Colored water” resulting from suspended iron particles is a common drinking water consumer complaint which is largely impacted by water chemistry. A bench scale study, performed on a 90 year-old corroded cast-iron pipe section removed from a drinking water distribution system, w...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias
2016-08-11
This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.
Integral criteria for large-scale multiple fingerprint solutions
NASA Astrophysics Data System (ADS)
Ushmaev, Oleg S.; Novikov, Sergey O.
2004-08-01
We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then analyze simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the lowest FRR at a predefined FAR, have been obtained. The results of real multiple fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
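The fixed-FAR operating point described above is easy to illustrate numerically. In this sketch (synthetic Gaussian score distributions and function names are assumptions, not the paper's data), the threshold is chosen as the impostor-score quantile that yields the target FAR, and the FRR is then read off the genuine-score distribution:

```python
import numpy as np

# Pick the decision threshold t so that the False Acceptance Rate
# (fraction of impostor scores >= t) equals a predefined target, then
# report the resulting False Rejection Rate on the genuine scores.
def frr_at_fixed_far(genuine, impostor, far_target):
    t = np.quantile(impostor, 1.0 - far_target)  # impostors above t are falsely accepted
    frr = float(np.mean(genuine < t))            # genuines below t are falsely rejected
    return t, frr

rng = np.random.default_rng(1)
genuine = rng.normal(3.0, 1.0, 100_000)   # synthetic genuine-match scores
impostor = rng.normal(0.0, 1.0, 100_000)  # synthetic impostor scores

t, frr = frr_at_fixed_far(genuine, impostor, far_target=0.001)
```

For a combined (integral) score, the same procedure applies to the joint score after fusion; the paper's contribution is the analytic form of the fused score that minimizes this FRR.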
Large-Scale Geographic Variation in Distribution and Abundance of Australian Deep-Water Kelp Forests
Marzinelli, Ezequiel M.; Williams, Stefan B.; Babcock, Russell C.; Barrett, Neville S.; Johnson, Craig R.; Jordan, Alan; Kendrick, Gary A.; Pizarro, Oscar R.; Smale, Dan A.; Steinberg, Peter D.
2015-01-01
Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia’s Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10–100 m to 100–1,000 km) and depths (15–60 m) across several regions ca 2–6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40–50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves. PMID:25693066
NASA Technical Reports Server (NTRS)
Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.
1986-01-01
To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.
Distributed rendering for multiview parallax displays
NASA Astrophysics Data System (ADS)
Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.
2006-02-01
3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.
Validity Issues in Standard-Setting Studies
ERIC Educational Resources Information Center
Pant, Hans A.; Rupp, Andre A.; Tiffin-Richards, Simon P.; Koller, Olaf
2009-01-01
Standard-setting procedures are a key component within many large-scale educational assessment systems. They are consensual approaches in which committees of experts set cut-scores on continuous proficiency scales, which facilitate communication of proficiency distributions of students to a wide variety of stakeholders. This communicative function…
The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications
NASA Technical Reports Server (NTRS)
Johnston, William E.
2002-01-01
With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.
Kushniruk, A; Kaipio, J; Nieminen, M; Hyppönen, H; Lääveri, T; Nohr, C; Kanstrup, A M; Berg Christiansen, M; Kuo, M-H; Borycki, E
2014-08-15
The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McHenry, Mark P.; Johnson, Jay; Hightower, Mike
The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation, power factor/reactive power control, and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and by to whom and how the costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures, renewable energy systems could greatly improve power quality on distribution systems at nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though this issue can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. As a result, we discuss useful inverter technical knowledge for policy-makers to mitigate ongoing inflation of electricity network tariff components by new DG interconnection requirements or electricity markets which value power quality and control.
Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...
2014-12-09
Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
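One simple way to expose the kind of memory-content similarity this approach exploits is to hash fixed-size pages of a snapshot and count duplicates. The sketch below is illustrative only (the page size, helper names, and toy snapshot are assumptions, not the authors' runtime):

```python
import hashlib

PAGE = 4096  # assumed page size in bytes

def page_hashes(mem: bytes):
    # Hash each fixed-size page; identical hashes flag identical content
    # that could be replicated cheaply for error recovery.
    return [hashlib.sha256(mem[i:i + PAGE]).hexdigest()
            for i in range(0, len(mem), PAGE)]

def similarity(mem: bytes):
    hashes = page_hashes(mem)
    distinct = len(set(hashes))
    return 1.0 - distinct / len(hashes)  # fraction of pages that duplicate another

# Toy snapshot: four zero-filled pages (a common real pattern) followed by
# four pages with distinct contents.
snapshot = b"\x00" * (4 * PAGE) + b"".join(bytes([p + 1]) * PAGE for p in range(4))
```

A high duplicate fraction means a corrupted page can often be repaired from an identical copy elsewhere in memory, which is the resilience opportunity the paper quantifies.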
Large-Scale Ichthyoplankton and Water Mass Distribution along the South Brazil Shelf
de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique
2014-01-01
Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27′ and 34°51′S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients. PMID:24614798
NASA Astrophysics Data System (ADS)
Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.
Large scientific instruments are controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call these systems generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network, and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments in view, such as those at the LHC, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.
Spatiotemporal stick-slip phenomena in a coupled continuum-granular system
NASA Astrophysics Data System (ADS)
Ecke, Robert
In sheared granular media, stick-slip behavior is ubiquitous, especially at very small shear rates and weak drive coupling. The resulting slips are characteristic of natural phenomena such as earthquakes, as well as being a delicate probe of the collective dynamics of the granular system. In that spirit, we developed a laboratory experiment consisting of sheared elastic plates separated by a narrow gap filled with quasi-two-dimensional granular material (bi-dispersed nylon rods). We directly determine the spatial and temporal distributions of strain displacements of the elastic continuum at 200 spatial points located adjacent to the gap. Slip events can be divided into large system-spanning events and spatially distributed smaller events. The small events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling and a Poisson-distributed recurrence time. Large events have a broad, log-normal moment distribution and a mean repetition time. As the applied normal force increases, there are fractionally more (fewer) large (small) events, and the large-event moment distribution broadens. The magnitude of the slip motion of the plates is well correlated with the root-mean-square displacements of the granular matter. Our results are consistent with mean-field descriptions of statistical models of earthquakes and avalanches. We further explore the high-speed dynamics of system-spanning events and also discuss the effective granular friction of the sheared layer. We find that large events result from stored elastic energy in the plates in this coupled granular-continuum system.
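A power-law exponent like the M^(-3/2) scaling of the small-event moments is commonly estimated by maximum likelihood (a Hill-type fit) rather than by binning. The sketch below runs on synthetic moments only (not the experimental data; the sampling scheme and names are assumptions) and recovers the assumed exponent alpha = 3/2:

```python
import numpy as np

# Maximum-likelihood (Hill) estimate of the exponent alpha in a
# continuous power law P(M) ~ M^(-alpha) for M >= m_min:
# alpha_hat = 1 + n / sum(ln(M_i / m_min)).
def hill_alpha(moments, m_min):
    m = moments[moments >= m_min]
    return 1.0 + m.size / np.sum(np.log(m / m_min))

# Synthetic "small event" moments drawn from P(M) ~ M^(-3/2) by
# inverse-CDF sampling: M = m_min * u^(-1/(alpha-1)) with alpha = 3/2,
# where u is uniform on (0, 1].
rng = np.random.default_rng(2)
u = 1.0 - rng.random(200_000)      # in (0, 1], avoids u == 0
moments = u ** (-2.0)              # m_min = 1, alpha = 3/2

alpha_hat = hill_alpha(moments, m_min=1.0)
```

The same fit applied to the measured small-event moments, with a goodness-of-fit check against the log-normal alternative, distinguishes the two event populations described above.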
Large-scale machine learning and evaluation platform for real-time traffic surveillance
NASA Astrophysics Data System (ADS)
Eichel, Justin A.; Mishra, Akshaya; Miller, Nicholas; Jankovic, Nicholas; Thomas, Mohan A.; Abbott, Tyler; Swanson, Douglas; Keller, Joel
2016-09-01
In traffic engineering, vehicle detectors are trained on limited datasets, resulting in poor accuracy when deployed in real-world surveillance applications. Annotating large-scale, high-quality datasets is challenging: typically, these datasets have limited diversity and do not reflect the real-world operating environment. There is a need for a large-scale, cloud-based positive and negative mining process and a large-scale learning and evaluation system for automatic traffic measurement and classification. The proposed positive and negative mining process addresses the quality of crowdsourced ground-truth data through machine-learning review and human feedback mechanisms. The proposed learning and evaluation system uses a distributed cloud-computing framework to handle the data-scaling issues associated with large numbers of samples and a high-dimensional feature space. The system is trained using AdaBoost on 1,000,000 Haar-like features extracted from 70,000 annotated video frames. The trained real-time vehicle detector achieves an accuracy of at least 95% half of the time, and about 78% 19/20 of the time, when tested on ~7,500,000 video frames. At the end of 2016, the dataset is expected to exceed 1 billion annotated video frames.
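Haar-like features of the kind used above are typically evaluated in constant time via an integral image. A minimal sketch of that machinery, assuming a simple two-rectangle feature for illustration (not the paper's actual feature set):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows and columns; any rectangle sum in O(1)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via the integral image (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Illustrative two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    left = rect_sum(ii, r, c, r + h, c + half)
    right = rect_sum(ii, r, c + half, r + h, c + w)
    return left - right
```

Once the integral image is built, each of the million features costs only a handful of array lookups, which is what makes training on 70,000 frames tractable.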
Wan, Shixiang; Zou, Quan
2017-01-01
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing data has led to a shortage of efficient approaches for aligning ultra-large sets of biological sequences of different types. Distributed and parallel computing is a crucial technique for accelerating ultra-large (e.g., files larger than 1 GB) sequence analyses. Building on HAlign and the Spark distributed computing system, we implement HAlign-II, a highly cost-efficient and time-efficient tool for ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large DNA and protein datasets (files larger than 1 GB) showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences. HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at http://lab.malab.cn/soft/halign.
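The alignment core of tools in this family rests on pairwise dynamic programming. A minimal, score-only Needleman-Wunsch sketch, with illustrative scoring parameters (this is not the actual HAlign-II implementation, which distributes such work across Spark partitions):

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score, O(len(a)*len(b)) time,
    O(len(b)) memory, using a rolling DP row."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        cur = [i * gap]
        for j, cb in enumerate(b, 1):
            cur.append(max(prev[j - 1] + (match if ca == cb else mismatch),
                           prev[j] + gap,    # gap in b
                           cur[j - 1] + gap))  # gap in a
        prev = cur
    return prev[-1]
```

In a center-star style strategy, scores like these pick a center sequence, and the remaining pairwise alignments can be computed independently, which is what makes the problem embarrassingly parallel.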
NASA Astrophysics Data System (ADS)
Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng
2018-02-01
Large-scale access of distributed power can relieve current environmental pressure but, at the same time, increases the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact of typical distributed power access on distribution-network power quality was analyzed, and an improved particle swarm optimization algorithm (IPSO) was proposed that modifies the learning factors and the inertia weight to improve the local and global search performance of the algorithm when solving distributed-generation planning for distribution networks. Results show that the proposed method can substantially reduce system network loss and improve the economic performance of system operation with distributed generation.
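One common recipe for an "improved PSO" with a linearly decaying inertia weight and time-varying learning factors can be sketched as follows; the schedule constants and the sphere test function are illustrative assumptions, not the paper's exact scheme:

```python
import random

def ipso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """PSO with linearly decreasing inertia weight and time-varying
    learning factors (a common 'improved PSO' variant)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters    # inertia weight: 0.9 -> 0.4
        c1 = 2.5 - 2.0 * t / iters   # cognitive factor: 2.5 -> 0.5
        c2 = 0.5 + 2.0 * t / iters   # social factor:    0.5 -> 2.5
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

Early in the run the large cognitive factor favors exploration; late in the run the large social factor pulls the swarm toward the global best, which is the local/global search trade-off the abstract refers to.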
NASA Astrophysics Data System (ADS)
Schoch, Anna; Blöthe, Jan; Hoffmann, Thomas; Schrott, Lothar
2016-04-01
A large number of sediment budgets have been compiled on different temporal and spatial scales in alpine regions. Detailed sediment budgets based on the quantification of a number of sediment storages (e.g. talus cones, moraine deposits) exist only for a few small scale drainage basins (up to 10² km²). In contrast, large scale sediment budgets (> 10³ km²) consider only long term sediment sinks such as valley fills and lakes. Until now, these studies have often neglected small scale sediment storages in the headwaters. However, the significance of these sediment storages has been reported. A quantitative verification of whether headwaters function as sediment source regions is lacking. Despite substantial transport energy in mountain environments due to steep gradients and high relief, sediment flux in large river systems is frequently disconnected from alpine headwaters. This leads to significant storage of coarse-grained sediment along the flow path from rockwall source regions to large sedimentary sinks in major alpine valleys. To improve the knowledge on sediment budgets in large scale alpine catchments and to bridge the gap between small and large scale sediment budgets, we apply a multi-method approach comprising investigations on different spatial scales in the Upper Rhone Basin (URB). The URB is the largest inneralpine basin in the European Alps with a size of > 5400 km². It is a closed system with Lake Geneva acting as an ultimate sediment sink for suspended and clastic sediment. We examine the spatial pattern and volumes of sediment storages as well as the morphometry on the local and catchment-wide scale. We mapped sediment storages and bedrock in five sub-regions of the study area (Goms, Lötschen valley, Val d'Illiez, Vallée de la Liène, Turtmann valley) in the field and from high-resolution remote sensing imagery to investigate the spatial distribution of different sediment storage types (e.g. talus deposits, debris flow cones, alluvial fans). 
These sub-regions cover all three litho-tectonic units of the URB (Helvetic nappes, Penninic nappes, External massifs) and different catchment sizes to capture the inherent variability. Different parameters characterizing topography, surface characteristics, and vegetation cover are analyzed for each storage type. The data is then used in geostatistical models (PCA, stepwise logistic regression) to predict the spatial distribution of sediment storage for the whole URB. We further conduct morphometric analyses of the URB to gain information on the varying degree of glacial imprint and postglacial landscape evolution and their control on the spatial distribution of sediment storage in a large scale drainage basin. Geophysical methods (ground penetrating radar and electrical resistivity tomography) are applied on different sediment storage types on the local scale to estimate mean thicknesses. Additional data from published studies are used to complement our dataset. We integrate the local data in the statistical model on the spatial distribution of sediment storages for the whole URB. Hence, we can extrapolate the stored sediment volumes to the regional scale in order to bridge the gap between small and large scale studies.
Scaling laws of strategic behavior and size heterogeneity in agent dynamics
NASA Astrophysics Data System (ADS)
Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.
2008-03-01
We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preference and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schauder, C.
This subcontract report was completed under the auspices of the NREL/SCE High-Penetration Photovoltaic (PV) Integration Project, which is co-funded by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and the California Solar Initiative (CSI) Research, Development, Demonstration, and Deployment (RD&D) program funded by the California Public Utilities Commission (CPUC) and managed by Itron. This project is focused on modeling, quantifying, and mitigating the impacts of large utility-scale PV systems (generally 1-5 MW in size) that are interconnected to the distribution system. This report discusses the concerns utilities have when interconnecting large PV systems that interconnect using PV inverters (a specific application of frequency converters). Additionally, a number of capabilities of PV inverters are described that could be implemented to mitigate the distribution system-level impacts of high-penetration PV integration. Finally, the main issues that need to be addressed to ease the interconnection of large PV systems to the distribution system are presented.
Remote maintenance monitoring system
NASA Technical Reports Server (NTRS)
Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)
1992-01-01
A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device, without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device, and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.
Chaotic dynamics of large-scale double-diffusive convection in a porous medium
NASA Astrophysics Data System (ADS)
Kondo, Shutaro; Gotoda, Hiroshi; Miyano, Takaya; Tokuda, Isao T.
2018-02-01
We have studied chaotic dynamics of large-scale double-diffusive convection of a viscoelastic fluid in a porous medium from the viewpoint of dynamical systems theory. A fifth-order nonlinear dynamical system modeling the double-diffusive convection is theoretically obtained by incorporating the Darcy-Brinkman equation into transport equations through a physical dimensionless parameter representing porosity. We clearly show that the chaotic convective motion becomes much more complicated with increasing porosity. The degree of dynamic instability during chaotic convective motion is quantified by two important measures: the network entropy of the degree distribution in the horizontal visibility graph and the Kaplan-Yorke dimension in terms of Lyapunov exponents. We also present an interesting on-off intermittent phenomenon in the probability distribution of time intervals exhibiting nearly complete synchronization.
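The Kaplan-Yorke dimension used above is computed directly from an ordered Lyapunov spectrum. A small sketch of the standard formula (the exponent values in the test are the classical Lorenz-system values, used only as a check, not this paper's results):

```python
def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke (Lyapunov) dimension from a Lyapunov spectrum:
    D = j + (l_1 + ... + l_j) / |l_{j+1}|, with exponents sorted in
    descending order and j the largest index with a non-negative
    partial sum."""
    exps = sorted(lyap, reverse=True)
    partial, j = 0.0, 0
    for k, l in enumerate(exps):
        if partial + l < 0:
            break
        partial += l
        j = k + 1
    if j == len(exps):      # sum never goes negative: dimension = phase-space dim
        return float(j)
    return j + partial / abs(exps[j])
```

For the classical Lorenz spectrum (0.906, 0, -14.57) this gives D of roughly 2.06, i.e. a strange attractor slightly "thicker" than a surface.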
NASA Astrophysics Data System (ADS)
Salerno, K. Michael; Robbins, Mark O.
2013-12-01
Molecular dynamics simulations with varying damping are used to examine the effects of inertia and spatial dimension on sheared disordered solids in the athermal quasistatic limit. In all cases the distribution of avalanche sizes follows a power law over at least three orders of magnitude in dissipated energy or stress drop. Scaling exponents are determined using finite-size scaling for systems with 10³-10⁶ particles. Three distinct universality classes are identified corresponding to overdamped and underdamped limits, as well as a crossover damping that separates the two regimes. For each universality class, the exponent describing the avalanche distributions is the same in two and three dimensions. The spatial extent of plastic deformation is proportional to the energy dissipated in an avalanche. Both rise much more rapidly with system size in the underdamped limit where inertia is important. Inertia also lowers the mean energy of configurations sampled by the system and leads to an excess of large events like that seen in earthquake distributions for individual faults. The distribution of stress values during shear narrows to zero with increasing system size and may provide useful information about the size of elemental events in experimental systems. For overdamped and crossover systems the stress variation scales inversely with the square root of the system size. For underdamped systems the variation is determined by the size of the largest events.
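Avalanche-size distributions spanning several decades, as in the power laws above, are usually exposed with logarithmically spaced bins normalized by bin width. A minimal sketch (the binning resolution is an illustrative choice):

```python
import numpy as np

def log_binned_pdf(sizes, bins_per_decade=5):
    """Histogram event sizes in logarithmically spaced bins and normalize
    by bin width, the usual way to expose a power-law tail in avalanche
    data without starving the large-size bins of counts."""
    sizes = np.asarray(sizes, dtype=float)
    lo, hi = sizes.min(), sizes.max()
    nbins = max(1, int(np.ceil(np.log10(hi / lo) * bins_per_decade)))
    edges = np.logspace(np.log10(lo), np.log10(hi), nbins + 1)
    counts, edges = np.histogram(sizes, bins=edges)
    widths = np.diff(edges)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    pdf = counts / (widths * sizes.size)       # normalized probability density
    return centers, pdf, edges
```

On a log-log plot of `pdf` against `centers`, a power-law regime appears as a straight line whose slope estimates the avalanche exponent.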
Data Sharing in DHT Based P2P Systems
NASA Astrophysics Data System (ADS)
Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia
The evolution of peer-to-peer (P2P) systems triggered the building of large scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the "extreme" characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization, and data privacy in a major class of P2P systems: those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems, and a considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
The revolution in data gathering systems
NASA Technical Reports Server (NTRS)
Cambra, J. M.; Trover, W. F.
1975-01-01
Data acquisition systems used in NASA's wind tunnels from the 1950's through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology that transformed the central computer system and, finally, gave rise to the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro and minicomputers.
Response of deep and shallow tropical maritime cumuli to large-scale processes
NASA Technical Reports Server (NTRS)
Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.
1976-01-01
The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.
Automated Decomposition of Model-based Learning Problems
NASA Technical Reports Server (NTRS)
Williams, Brian C.; Millar, Bill
1996-01-01
A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.
A bomb attack in the Netherlands: how would we provide hospital care?
Haverkort, J J M; de Jong, M B; van Spengler, L L; Leenen, L P H
2017-01-01
- There is an ever-present threat of large-scale incidents and disasters, such as terrorism and natural disasters.
- Preparation and planning are the key to successful response to major incidents and disasters, which is why education, training sessions and exercises take place in the Netherlands.
- Analysis of recent large-scale incidents in Europe and the USA emphasises the importance of adequate distribution of the wounded to centres where the correct care can be provided.
- A major step has been taken in the Netherlands to provide for such an eventuality with the introduction of distribution plans for the wounded, and other initiatives such as a regional hospital disaster plan.
- If a large-scale incident should take place, the Netherlands also has a Major Incident Hospital at its disposal; this is a unique facility that can guarantee availability of 200 spare beds to the Netherlands healthcare system within 30 minutes.
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
Voltage Impacts of Utility-Scale Distributed Wind
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, A.
2014-09-01
Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase the overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2011-08-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. 
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3" (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
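The storage-distribution step can be illustrated as follows. This is a hypothetical sketch of the TRG idea only; the class count, the linear growth of capacity across the index range, and all names are assumptions for illustration, not the authors' code:

```python
import numpy as np

def trg_storage_distribution(topo_index, scale=1.0, nclasses=10):
    """Sketch of a TRG-style parameterisation: bin a grid cell's
    topographic-index values (lambda = ln(a / tan(beta))) into classes
    and assign each class a storage capacity proportional to its offset
    from the minimum index, scaled by a single parameter."""
    ti = np.asarray(topo_index, dtype=float)
    edges = np.linspace(ti.min(), ti.max(), nclasses + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Fraction of the grid cell falling in each topographic-index class.
    counts, _ = np.histogram(ti, bins=edges)
    frac = counts / ti.size
    # Storage capacity grows across the index range with one free parameter.
    capacity = scale * (centers - ti.min())
    return centers, frac, capacity
```

The single `scale` parameter mirrors the abstract's claim of "one less parameter to be calibrated" than a VIC-style statistical storage curve.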
Large-scale runoff generation - parsimonious parameterisation using high-resolution topography
NASA Astrophysics Data System (ADS)
Gong, L.; Halldin, S.; Xu, C.-Y.
2010-09-01
World water resources have primarily been analysed by global-scale hydrological models in the last decades. Runoff generation in many of these models is based on process formulations developed at catchment scales. The division between slow runoff (baseflow) and fast runoff is primarily governed by slope and the spatial distribution of effective water storage capacity, both acting at very small scales. Many hydrological models, e.g. VIC, account for the spatial storage variability in terms of statistical distributions; such models are generally proven to perform well. The statistical approaches, however, use the same runoff-generation parameters everywhere in a basin. The TOPMODEL concept, on the other hand, links the effective maximum storage capacity with real-world topography. Recent availability of global high-quality, high-resolution topographic data makes TOPMODEL attractive as a basis for a physically-based runoff-generation algorithm at large scales, even if its assumptions are not valid in flat terrain or for deep groundwater systems. We present a new runoff-generation algorithm for large-scale hydrology based on TOPMODEL concepts intended to overcome these problems. The TRG (topography-derived runoff generation) algorithm relaxes the TOPMODEL equilibrium assumption so baseflow generation is not tied to topography. TRG only uses the topographic index to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of topographic index and is scaled by one parameter. The distribution of storage capacity within large-scale grid cells is obtained numerically through topographic analysis. The new topography-derived distribution function is then inserted into a runoff-generation framework similar to VIC's. Different basin parts are parameterised by different storage capacities, and different shapes of the storage-distribution curves depend on their topographic characteristics. 
The TRG algorithm is driven by the HydroSHEDS dataset with a resolution of 3'' (around 90 m at the equator). The TRG algorithm was validated against the VIC algorithm in a common model framework in 3 river basins in different climates. The TRG algorithm performed equally well or marginally better than the VIC algorithm with one less parameter to be calibrated. The TRG algorithm also lacked equifinality problems and offered a realistic spatial pattern for runoff generation and evaporation.
2015-07-01
... (cloud covered) periods. The demonstration features a large (relative to the overall system power requirements) photovoltaic solar array, whose inverter ... microgrid with less expensive power storage instead of large-scale energy storage and that the renewable energy with small-scale power storage can ...
Non-Gaussian Nature of Fracture and the Survival of Fat-Tail Exponents
NASA Astrophysics Data System (ADS)
Tallakstad, Ken Tore; Toussaint, Renaud; Santucci, Stephane; Måløy, Knut Jørgen
2013-04-01
We study the fluctuations of the global velocity V_l(t), computed at various length scales l, during the intermittent mode-I propagation of a crack front. The statistics converge to a non-Gaussian distribution, with an asymmetric shape and a fat tail. This breakdown of the central limit theorem (CLT) is due to the diverging variance of the underlying local crack front velocity distribution, displaying a power law tail. Indeed, by the application of a generalized CLT, the full shape of our experimental velocity distribution at large scale is shown to follow the stable Lévy distribution, which preserves the power law tail exponent under upscaling. This study aims to demonstrate in general for crackling noise systems how one can infer the complete scale dependence of the activity, and extreme event distributions, by measuring only at a global scale.
Large-scale modeling of rain fields from a rain cell deterministic model
NASA Astrophysics Data System (ADS)
Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia
2006-04-01
A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
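The Gaussian-to-binary step for the frontal area amounts to thresholding the field at the quantile that reproduces a target rain occupation rate. A small sketch, assuming an uncorrelated field for brevity (the paper's field has an anisotropic covariance, and the grid size and rate here are illustrative):

```python
import numpy as np

def rain_mask_from_gaussian(field, occupation_rate):
    """Turn a Gaussian field into a binary raining/dry mask by thresholding
    at the quantile that reproduces the target rain occupation rate."""
    threshold = np.quantile(field, 1.0 - occupation_rate)
    return field >= threshold

rng = np.random.default_rng(0)
field = rng.standard_normal((200, 200))   # stand-in for a correlated field
mask = rain_mask_from_gaussian(field, 0.3)
```

Because the threshold is a quantile of the field itself, the fraction of "raining" pixels matches the prescribed occupation rate regardless of the field's covariance structure.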
He, Xinhua; Hu, Wenfa
2014-01-01
This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
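The queuing side of such a location-allocation model can be illustrated with the M/M/1 mean response time W = 1/(mu - lambda). As a toy stand-in for the paper's genetic algorithm, the sketch below brute-forces the allocation on a tiny instance (all demands, rates, and names are illustrative assumptions):

```python
from itertools import product

def mm1_response_time(lam, mu):
    """Mean time in system (waiting + service) for an M/M/1 queue;
    requires arrival rate lam < service rate mu for stability."""
    return 1.0 / (mu - lam)

def best_assignment(demands, service_rates):
    """Exhaustively assign each demand location to one server and
    minimize total flow-weighted mean response time. A GA would search
    this same space heuristically when enumeration is infeasible."""
    best, best_cost = None, float("inf")
    for assign in product(range(len(service_rates)), repeat=len(demands)):
        load = [0.0] * len(service_rates)
        for d, s in zip(demands, assign):
            load[s] += d
        if any(l >= mu for l, mu in zip(load, service_rates)):
            continue  # this allocation makes some queue unstable
        cost = sum(l * mm1_response_time(l, mu)
                   for l, mu in zip(load, service_rates) if l > 0)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

On the two-demand, two-server instance in the test, splitting the streams across servers halves the total response time relative to piling both onto one server, which is the kind of trade-off the genetic algorithm resolves at realistic problem sizes.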
Spatial analysis and characteristics of pig farming in Thailand.
Thanapongtharm, Weerapong; Linard, Catherine; Chinson, Pornpiroon; Kasemsuwan, Suwicha; Visser, Marjolein; Gaughan, Andrea E; Epprech, Michael; Robinson, Timothy P; Gilbert, Marius
2016-10-06
In Thailand, pig production intensified significantly during the last decade, with many economic, epidemiological and environmental implications. Strategies toward more sustainable future developments are currently investigated, and these could be informed by a detailed assessment of the main trends in the pig sector and of how different production systems are geographically distributed. This study had two main objectives. First, we aimed to describe the main trends and geographic patterns of pig production systems in Thailand in terms of pig type (native, breeding, and fattening pigs), farm scale (smallholder and large-scale farming systems) and type of farming system (farrow-to-finish, nursery, and finishing systems) based on a very detailed 2010 census. Second, we aimed to study the statistical spatial association between the distributions of these different types of pig farming and a set of spatial variables describing access to feed and markets. Over the last decades, the pig population gradually increased, with a continuously increasing number of pigs per holder, suggesting a continuing intensification of the sector. The different pig-production systems showed highly contrasting geographical distributions. The spatial distribution of large-scale pig farms corresponds with that of commercial pig breeds, and spatial analysis conducted using Random Forest distribution models indicated that these were concentrated in lowland urban or peri-urban areas, close to means of transportation, facilitating supply to major markets such as provincial capitals and the Bangkok Metropolitan region. Conversely, smallholders were distributed throughout the country, with higher densities located in highland, remote, and rural areas, where they supply local rural markets.
A limitation of the study is that pig farming systems were defined from the number of animals per farm, with possible misclassification as a result, but this should have a limited impact on the main patterns revealed by the analysis. The highly contrasting distributions of the different pig production systems present opportunities for future regionalization of pig production. More specifically, the detailed geographical analysis of the different production systems will be used to spatially inform planning decisions for pig farming, accounting for the specific health, environmental, and economic implications of the different pig production systems.
GLAD: a system for developing and deploying large-scale bioinformatics grid.
Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong
2005-03-01
Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, developing bioinformatics grid applications has been extremely tedious: one must design and implement the component algorithms and parallelization techniques for different classes of problems, and access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware that exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.
NASA Astrophysics Data System (ADS)
Duroure, Christophe; Sy, Abdoulaye; Baray, Jean luc; Van baelen, Joel; Diop, Bouya
2017-04-01
Precipitation plays a key role in the management of sustainable water resources and in flood risk analyses. Changes in rainfall will be a critical factor determining the overall impact of climate change. We analyse long series (10 years) of daily precipitation in different regions. We present the Fourier energy density spectra and morphological spectra (i.e., probability distribution functions of the duration and the horizontal scale) of large precipitating systems. Satellite data from the Global Precipitation Climatology Project (GPCP) and long time series from local pluviometers in Senegal and France are used and compared in this work. For mid-latitude and Sahelian regions (north of 12°N), the morphological spectra are close to an exponentially decreasing distribution. This allows us to define two characteristic scales (duration and spatial extension) for the precipitating region embedded in a large mesoscale convective system (MCS). For tropical and equatorial regions (south of 12°N), the morphological spectra are close to a Levy-stable distribution (power-law decrease), which does not allow a characteristic scale to be defined (scaling range). When the time and space characteristic scales are defined, a "statistical velocity" of precipitating MCSs can be defined and compared to the observed zonal advection. Maps of the characteristic scales and the Levy-stable exponent over West Africa and southern Europe are presented. The 12° latitude transition between exponential and Levy-stable behavior of precipitating MCSs is compared with results from the ECMWF ERA-Interim reanalysis for the same period. This sharp morphological transition could be used to test different parameterizations of deep convection in forecast models.
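The two regimes can be illustrated numerically: an exponential sample has a well-defined characteristic scale (its mean), while a heavy-tailed sample is better summarized by a power-law tail exponent. A sketch with synthetic event durations (all values illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic precipitation-event durations in hours (purely illustrative).
exp_like = rng.exponential(scale=6.0, size=5000)   # mid-latitude-like sample
levy_like = rng.pareto(1.5, size=5000) + 1.0       # tropical-like heavy tail

def tail_exponent(samples, xmin):
    """Hill (maximum-likelihood) estimate of alpha in P(X > x) ~ x**-alpha,
    computed over the tail above xmin."""
    tail = samples[samples >= xmin]
    return len(tail) / np.sum(np.log(tail / xmin))

char_scale = exp_like.mean()            # exponential case: a scale exists (~6 h)
alpha = tail_exponent(levy_like, 2.0)   # heavy-tailed case: only an exponent
```

For the exponential sample the mean is a meaningful characteristic duration; for the Pareto-like sample the Hill estimate recovers the tail exponent (here near the generating value 1.5), while the sample mean would keep drifting with sample size.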
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.
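As a minimal illustration of the kind of access-control policy layer such an architecture would integrate (all users, resources, and permissions here are hypothetical):

```python
# Minimal access-control-list sketch: each resource maps users to the set
# of actions they are granted (names invented for illustration).
acl = {
    "results/prelim.dat": {"alice": {"read", "write"}, "bob": {"read"}},
    "cluster/jobqueue": {"alice": {"read"}},
}

def authorize(user: str, resource: str, action: str) -> bool:
    """Default-deny check: an action is allowed only if explicitly granted."""
    return action in acl.get(resource, {}).get(user, set())
```

Default-deny semantics protect both sensitive resources (unknown users get nothing) and a researcher's proprietary results (only explicitly listed collaborators can read them).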
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Boemer, Jens C.; Vittal, Eknath
The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning those parameters appears to be important for accurately estimating the partial loss of distributed PV systems in bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
Very Large Scale Distributed Information Processing Systems
1991-09-27
USENIX Conference Proceedings, pp. 31-43. USENIX, February 1988. [KLA90] Michael L. Kazar, Bruce W. Leverett, Owen T. Anderson, Vasilis Apostolides, Beth...
Large-scale P2P network based distributed virtual geographic environment (DVGE)
NASA Astrophysics Data System (ADS)
Tan, Xicheng; Yu, Liang; Bian, Fuling
2007-06-01
The Virtual Geographic Environment (VGE) has attracted broad attention as a class of software information system that helps us understand and analyze the real geographic environment, and it has been extended to an application service system for distributed environments, the distributed virtual geographic environment (DVGE), with some notable achievements. However, constrained by the massive data volumes of VGE, network bandwidth, heavy request loads, and economic factors, DVGE still faces challenges that prevent it from providing the public with high-quality service under the current network mode. The rapid development of peer-to-peer (P2P) network technology offers new approaches to these challenges: P2P networks can effectively publish and search network resources and thereby realize efficient sharing of information. Accordingly, this paper proposes a large-scale P2P extension of DVGE and studies in depth its network framework, routing mechanism, and data management on a P2P network.
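Efficient resource location in such a P2P overlay is commonly built on consistent hashing. A minimal sketch, assuming a single-hop ring rather than a full routing protocol, with an invented tile-naming scheme for the geographic data:

```python
import hashlib

M = 2 ** 16  # identifier space of the overlay ring (illustrative size)

def h(key: str) -> int:
    """Place a name on the ring by hashing it into the identifier space."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % M

# Eight peers placed on the ring by hashing their (invented) names.
peers = sorted(h(f"peer-{i}") for i in range(8))

def responsible_peer(resource_key: str) -> int:
    """Route a resource key (e.g. a terrain tile) to the first peer
    clockwise from the key's ring position."""
    k = h(resource_key)
    for p in peers:
        if p >= k:
            return p
    return peers[0]  # wrap around the ring

owner = responsible_peer("tile/14/8192/5461")  # hypothetical tile key
```

A real DVGE overlay would add finger tables or a structured routing mechanism so that lookups take O(log n) hops instead of consulting a global peer list.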
NASA Astrophysics Data System (ADS)
Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris
This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The graphical middleware and 3D desktop prototypes developed were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance end-user interaction.
A Large Scale Dynamical System Immune Network Model with Finite Connectivity
NASA Astrophysics Data System (ADS)
Uezu, T.; Kadono, C.; Hatchett, J.; Coolen, A. C. C.
We study a model of an idiotypic immune network which was introduced by N. K. Jerne. It is known that in immune systems there generally exist several kinds of immune cells which can recognize any particular antigen. Taking this fact into account and assuming that each cell interacts with only a finite number of other cells, we analyze a large scale immune network via both numerical simulations and statistical mechanical methods, and show that the distribution of the concentrations of antibodies becomes non-trivial for a range of values of the strength of the interaction and the connectivity.
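A schematic numerical experiment in this spirit: antibody-like variables evolve on a random graph of fixed finite connectivity with random stimulatory/suppressive couplings. This is an illustrative caricature with invented dynamics and parameters, not the model analyzed in the paper:

```python
import math
import random

random.seed(0)
N, c, J = 200, 3, 2.0   # clones, finite connectivity per clone, coupling strength

# Each clone interacts with c random others through random stimulatory (+1)
# or suppressive (-1) couplings, mimicking an idiotypic network.
links = {i: [(j, random.choice((-1, 1)))
             for j in random.sample([k for k in range(N) if k != i], c)]
         for i in range(N)}

x = [random.uniform(-1, 1) for _ in range(N)]   # concentration-like variables
for _ in range(500):
    # Relaxation with a saturating (tanh) response to the local field.
    x = [0.9 * x[i] + 0.1 * math.tanh(J * sum(w * x[j] for j, w in links[i]))
         for i in range(N)]
# After relaxation, the values of x spread over a non-trivial distribution
# rather than collapsing onto a single value.
```

Varying `J` and `c` in such a toy shows the qualitative point of the abstract: whether the stationary concentration distribution is trivial or structured depends on the interaction strength and the connectivity.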
Extreme Cost Reductions with Multi-Megawatt Centralized Inverter Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwabe, Ulrich; Fishman, Oleg
2015-03-20
The objective of this project was to fully develop, demonstrate, and commercialize a new type of utility-scale PV system. Based on patented technology, this includes the development of a truly centralized inverter system with capacities up to 100 MW, and a high-voltage, distributed harvesting approach. This system promises both to increase the energy yield of large-scale PV systems, by reducing losses and recovering yield from mismatched arrays, and to reduce overall system costs through very cost-effective conversion and the BOS cost reductions enabled by higher-voltage operation.
Superfluid-like turbulence in cosmology
NASA Technical Reports Server (NTRS)
Gradwohl, Ben-Ami
1991-01-01
A network of vortices in a superfluid system exhibits turbulent behavior. It is argued that the universe may have experienced such a phase of superfluid-like turbulence due to the existence of a coherent state with non-topological charge and a network of global strings. The unique feature of a distribution of turbulent domains is that it can yield non-gravitationally induced large-scale coherent velocities. It may be difficult, however, to relate these velocities to the observed large-scale bulk motion.
Domain-area distribution anomaly in segregating multicomponent superfluids
NASA Astrophysics Data System (ADS)
Takeuchi, Hiromitsu
2018-01-01
The domain-area distribution in the phase transition dynamics of Z2 symmetry breaking is studied theoretically and numerically for segregating binary Bose-Einstein condensates in quasi-two-dimensional systems. Due to the dynamic-scaling law of phase ordering kinetics, the domain-area distribution is described by a universal function of the domain area rescaled by the mean distance between domain walls. The scaling theory for general coarsening dynamics in two dimensions hypothesizes that the distribution during the coarsening dynamics has a hierarchy of two scaling regimes, the microscopic and macroscopic regimes, with distinct power-law exponents. The power law in the macroscopic regime, where the domain size is larger than the mean distance, is universally represented by the Fisher exponent of percolation theory in two dimensions. On the other hand, the power-law exponent in the microscopic regime is sensitive to the microscopic dynamics of the system. This conjecture is confirmed by large-scale numerical simulations of the coupled Gross-Pitaevskii equations for binary condensates. In the numerical experiments on the superfluid system, the exponent in the microscopic regime anomalously reaches its theoretical upper limit from the general scaling theory. The anomaly comes from the quantum-fluid effect in the presence of circular vortex sheets, described by the hydrodynamic approximation neglecting fluid compressibility. It is also found that the distribution of superfluid circulation along vortex sheets obeys a dynamic-scaling law with different power-law exponents in the two regimes. An analogy to quantum turbulence in the hierarchy of the vorticity distribution and the applicability to chiral superfluid 3He in a slab are also discussed.
Global deformation on the surface of Venus
NASA Technical Reports Server (NTRS)
Bilotti, Frank; Connors, Chris; Suppe, John
1992-01-01
Large-scale mapping of tectonic structures on Venus shows that there is an organized global distribution to deformation. The structures we emphasize are linear compressive mountain belts, extensional rifted zones, and the small-scale but widely distributed wrinkle ridges. Ninety percent of the area of the planet's compressive mountain belts is concentrated in the northern hemisphere, whereas the southern hemisphere is dominated by extension and small-scale compression. We propose that this striking concentration of fold belts in the northern hemisphere, along with the globe-encircling equatorial rift system, represents a global organization to deformation on Venus.
The origin of polygonal troughs on the northern plains of Mars
NASA Astrophysics Data System (ADS)
Pechmann, J. C.
1980-05-01
The morphology, distribution, geologic environment, and relative age of large-scale polygonal trough systems on Mars are examined. The troughs are steep-walled, flat-floored, sinuous depressions typically 200-800 m wide, 20-120 m deep, and spaced 5-10 km apart. The mechanics of formation of tension cracks is reviewed to identify the factors controlling the scale of tension crack systems; special emphasis is placed on thermal cracking in permafrost. It is shown that because of the extremely large scale of the Martian fracture systems, they could not have formed by thermal cracking in permafrost, desiccation cracking in sediments, or contraction cracking in cooling lava. On the basis of photogeologic evidence and analog studies, it is proposed that the polygonal troughs on the northern plains of Mars are grabens.
NASA Astrophysics Data System (ADS)
Pevtsov, A.
Solar magnetic fields exhibit a hemispheric preference for negative (positive) helicity in the northern (southern) hemisphere. The hemispheric helicity rule, however, is not very strong: patterns of opposite-sign helicity have been observed on different spatial scales in each hemisphere. For instance, many individual sunspots exhibit patches of opposite helicity inside a single-polarity field. There are also helicity patterns on scales larger than the size of a typical active region. Such patterns have been observed in the distribution of active regions with abnormal (for a given hemisphere) helicity, in large-scale photospheric magnetic fields, and in coronal flux systems. We will review the observations of large-scale patterns of helicity in the solar atmosphere and their possible relationship with (sub-)photospheric processes. The emphasis will be on the large-scale photospheric magnetic field and the solar corona.
Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghattas, Omar
2013-10-15
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.
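The "reduce then sample" idea can be sketched in a toy setting: fit a cheap surrogate to a few runs of an expensive forward model, then run MCMC against the surrogate only. Everything here (the forward model, observation, and parameters) is invented for illustration and is far simpler than the project's subsurface-flow testbed:

```python
import numpy as np

rng = np.random.default_rng(3)

def forward_full(theta):
    """Stand-in for an expensive forward simulation (invented toy model)."""
    return np.sin(theta) + 0.1 * theta ** 2

# Reduce: fit a cheap polynomial surrogate to nine full-model runs.
train_t = np.linspace(-2.0, 2.0, 9)
coeffs = np.polyfit(train_t, forward_full(train_t), deg=4)

def forward_reduced(theta):
    return np.polyval(coeffs, theta)

# Sample: random-walk Metropolis on theta given one noisy observation,
# calling only the cheap reduced model inside the loop.
y_obs, sigma = forward_full(0.3) + 0.05, 0.1

def log_post(theta):
    return -0.5 * ((y_obs - forward_reduced(theta)) / sigma) ** 2

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
```

The sampler never touches `forward_full` after the nine training runs; the project's finding that the posterior changes little under such a substitution is what makes the approach attractive.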
CERN data services for LHC computing
NASA Astrophysics Data System (ADS)
Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.
2017-10-01
Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad range of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR for large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience of supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool for studying them, by simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can make their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings to propose a numerical approach which allows us to extract the infinite-time and infinite-size limit of these estimators.
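A minimal sketch of the population-dynamics idea, for the simplest possible case of i.i.d. Bernoulli increments, where the estimated scaled cumulant generating function can be checked against the exact value ln((1 + e^s)/2). This is a discrete-time toy with illustrative parameters, not the continuous-time cloning algorithm analyzed in the paper:

```python
import math
import random

random.seed(42)

def cloning_scgf(s, n_copies=2000, t_max=200, p=0.5):
    """Estimate lambda(s) = (1/T) ln E[exp(s * sum_t a_t)] for i.i.d.
    Bernoulli(p) increments a_t via population dynamics: each copy is
    weighted by exp(s * a_t) and the population is resampled accordingly."""
    copies = [0.0] * n_copies      # time-integrated observable of each copy
    log_growth = 0.0
    for _ in range(t_max):
        increments = [1 if random.random() < p else 0 for _ in copies]
        weights = [math.exp(s * a) for a in increments]
        mean_w = sum(weights) / n_copies
        log_growth += math.log(mean_w)   # unnormalized population growth rate
        # Selection step: clone copies in proportion to their weights,
        # keeping the population size fixed at n_copies.
        idx = random.choices(range(n_copies), weights=weights, k=n_copies)
        copies = [copies[i] + increments[i] for i in idx]
    return log_growth / t_max

est = cloning_scgf(1.0)
exact = math.log((1 + math.exp(1.0)) / 2)   # known answer for this toy
```

The finite values of `n_copies` and `t_max` bias the estimator in general; the paper's point is precisely to characterize those finite-size and finite-time scalings and extrapolate them away.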
NASA Astrophysics Data System (ADS)
Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.
2017-12-01
Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs.
The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.
NASA Technical Reports Server (NTRS)
Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick;
2001-01-01
A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device: (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
Asymptotic theory of time varying networks with burstiness and heterogeneous activation patterns
NASA Astrophysics Data System (ADS)
Burioni, Raffaella; Ubaldi, Enrico; Vezzani, Alessandro
2017-05-01
The recent availability of large-scale, time-resolved, high-quality digital datasets has allowed for a deeper understanding of the structure and properties of many real-world networks. The empirical evidence of a temporal dimension prompted a switch of paradigm from a static representation of networks to a time-varying one. In this work we briefly review the framework of time-varying networks in real-world social systems, focusing especially on the activity-driven paradigm. We develop a framework that allows for the encoding of three generative mechanisms that seem to play a central role in social networks' evolution: the individual's propensity to engage in social interactions, its strategy in allocating these interactions among its alters, and the burstiness of interactions among social actors. The functional forms and probability distributions encoding these mechanisms are typically data driven. A natural question is whether different classes of strategies and burstiness distributions, with different local-scale behavior but analogous asymptotics, can lead to the same long-time and large-scale structure of the evolving networks. We consider the problem in its full generality, by investigating and solving the system dynamics in the asymptotic limit, for general classes of tie-allocation mechanisms and waiting-time probability distributions. We show that the asymptotic network evolution is driven by a few characteristics of these functional forms, which can be extracted from direct measurements on large datasets.
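A minimal activity-driven sketch, with uniform tie allocation standing in for the general allocation strategies discussed; the heavy-tailed activity distribution and all parameters are illustrative:

```python
import random

random.seed(7)
N, steps, m = 500, 100, 2   # nodes, time steps, links fired per activation

# Heterogeneous activity rates drawn from a heavy-tailed distribution
# (a common empirical choice; values here are purely illustrative).
activity = [min(1.0, 0.01 * random.paretovariate(2.1)) for _ in range(N)]

degree = [0] * N   # time-aggregated degree over the whole observation window
for _ in range(steps):
    # At each step a fresh empty graph is built: active nodes fire m links
    # to targets chosen by an allocation strategy (uniform here; memory or
    # reinforcement strategies would plug in at this point).
    for i in range(N):
        if random.random() < activity[i]:
            for j in random.sample([k for k in range(N) if k != i], m):
                degree[i] += 1
                degree[j] += 1
```

In this toy the time-aggregated degree of a node tracks its activity rate, which is the basic mechanism by which the activity distribution shapes the large-scale structure of the evolving network.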
A cloud-based framework for large-scale traditional Chinese medical record retrieval.
Liu, Lijun; Liu, Li; Fu, Xiaodong; Huang, Qingsong; Zhang, Xianwen; Zhang, Yin
2018-01-01
Electronic medical records are increasingly common in medical practice, and their secondary use has become increasingly important. Secondary use relies on the ability to retrieve complete information about desired patient populations, and effectively and accurately retrieving relevant medical records from large-scale medical big data is becoming a major challenge. Therefore, we propose an efficient and robust cloud-based framework for large-scale Traditional Chinese Medical Records (TCMRs) retrieval. First, we propose a parallel index-building method and build a distributed search cluster; the former improves the performance of index building, and the latter provides highly concurrent online TCMRs retrieval. Second, a real-time multi-indexing model is proposed to ensure the latest relevant TCMRs are indexed and retrieved in real time, and a semantics-based query expansion method and a multi-factor ranking model are proposed to improve retrieval quality. Third, we implement a template-based visualization method for displaying medical reports. The proposed parallel indexing method and distributed search cluster improve the performance of index building and provide highly concurrent online TCMRs retrieval. The multi-indexing model ensures that the latest relevant TCMRs are indexed and retrieved in real time. The semantics-based expansion method and the multi-factor ranking model enhance retrieval quality. The template-based visualization method enhances availability and universality, displaying medical reports via a friendly web interface. In conclusion, compared with current medical record retrieval systems, our system provides advantages useful for improving the secondary use of large-scale traditional Chinese medical records in a cloud environment. The proposed system is more easily integrated with existing clinical systems and can be used in various scenarios. Copyright © 2017. Published by Elsevier Inc.
cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design
Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei
2016-01-01
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs to integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509
A distributed computing approach to mission operations support. [for spacecraft
NASA Technical Reports Server (NTRS)
Larsen, R. L.
1975-01-01
Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.
Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Secchi, Simone; Tumeo, Antonino; Villa, Oreste
Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
Implementation of Fiber Optic Sensing System on Sandwich Composite Cylinder Buckling Test
NASA Technical Reports Server (NTRS)
Pena, Francisco; Richards, W. Lance; Parker, Allen R.; Piazza, Anthony; Schultz, Marc R.; Rudd, Michelle T.; Gardner, Nathaniel W.; Hilburger, Mark W.
2018-01-01
The National Aeronautics and Space Administration (NASA) Engineering and Safety Center Shell Buckling Knockdown Factor Project is a multicenter project tasked with developing new analysis-based shell buckling design guidelines and design factors (i.e., knockdown factors) through high-fidelity buckling simulations and advanced test technologies. To validate these new buckling knockdown factors for future launch vehicles, the Shell Buckling Knockdown Factor Project is carrying out structural testing on a series of large-scale metallic and composite cylindrical shells at the NASA Marshall Space Flight Center (Marshall Space Flight Center, Alabama). A fiber optic sensor system was used to measure strain on a large-scale sandwich composite cylinder that was tested under multiple axial compressive loads up to more than 850,000 lb, and equivalent bending loads over 22 million in-lb. During the structural testing of the composite cylinder, strain data were collected from optical cables containing distributed fiber Bragg gratings using a custom fiber optic sensor system interrogator developed at the NASA Armstrong Flight Research Center. A total of 16 fiber-optic strands, each containing nearly 1,000 fiber Bragg gratings, measuring strain, were installed on the inner and outer cylinder surfaces to monitor the test article global structural response through high-density real-time and post test strain measurements. The distributed sensing system provided evidence of local epoxy failure at the attachment-ring-to-barrel interface that would not have been detected with conventional instrumentation. Results from the fiber optic sensor system were used to further refine and validate structural models for buckling of the large-scale composite structures. 
This paper discusses the techniques employed for real-time structural monitoring of the composite cylinder for structural load introduction and distributed bending-strain measurements over a large section of the cylinder by utilizing unique sensing capabilities of fiber optic sensors.
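The sensing principle behind the fiber Bragg gratings above can be sketched with the standard strain-wavelength relation; the photoelastic coefficient used below is a typical assumed value for silica fiber, not a figure from the test.

```python
P_E = 0.22  # photoelastic coefficient of silica fiber (typical assumed value)

def strain_from_shift(lambda_0_nm, delta_lambda_nm):
    """Axial strain from a Bragg wavelength shift, temperature effects ignored:
    delta_lambda / lambda_0 = (1 - P_E) * strain."""
    return delta_lambda_nm / (lambda_0_nm * (1.0 - P_E))

# A 1.2 nm shift on a 1550 nm grating is roughly 993 microstrain.
print(f"{strain_from_shift(1550.0, 1.2) * 1e6:.0f} microstrain")
```

An interrogator applies this conversion to each of the ~1,000 gratings per fiber to produce the distributed strain profile.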
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poidevin, Frédérick; Ade, Peter A. R.; Hargrave, Peter C.
2014-08-10
Turbulence and magnetic fields are expected to be important for regulating molecular cloud formation and evolution. However, their effects on sub-parsec to 100 parsec scales, leading to the formation of starless cores, are not well understood. We investigate the prestellar core structure morphologies obtained from analysis of the Herschel-SPIRE 350 μm maps of the Lupus I cloud. This distribution is first compared on a statistical basis to the large-scale shape of the main filament. We find the distribution of the elongation position angle of the cores to be consistent with a random distribution, which means no specific orientation of the morphology of the cores is observed with respect to the mean orientation of the large-scale filament in Lupus I, nor relative to a large-scale bent filament model. This distribution is also compared to the mean orientation of the large-scale magnetic fields probed at 350 μm with the Balloon-borne Large Aperture Telescope for Polarimetry during its 2010 campaign. Here again we do not find any correlation between the core morphology distribution and the average orientation of the magnetic fields on parsec scales. Our main conclusion is that the local filament dynamics—including secondary filaments that often run orthogonally to the primary filament—and possibly small-scale variations in the local magnetic field direction, could be the dominant factors for explaining the final orientation of each core.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1981-01-01
Progress is reported in reading MAGSAT tapes and in the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
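A linear current element representation of the kind described can be evaluated with the Biot-Savart law; the sketch below computes the field contribution of a single element, which would be summed over all elements of the ionospheric current system. The values in the example are arbitrary.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def biot_savart_element(current, dl, src, obs):
    """Field (tesla) at obs from a short element carrying `current` (A) along
    vector dl (m) at src: dB = mu0/(4*pi) * I * dl x r_hat / r**2."""
    r = tuple(o - s for o, s in zip(obs, src))
    rmag = math.sqrt(sum(c * c for c in r))
    rhat = tuple(c / rmag for c in r)
    k = MU0 / (4.0 * math.pi) * current / rmag ** 2
    return tuple(k * c for c in cross(dl, rhat))

# 1 A through a 1 km element at the origin, observed 100 km away on the x-axis.
print(biot_savart_element(1.0, (0.0, 0.0, 1000.0), (0.0, 0.0, 0.0), (1e5, 0.0, 0.0)))
```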
A practical large scale/high speed data distribution system using 8 mm libraries
NASA Technical Reports Server (NTRS)
Howard, Kevin
1993-01-01
Eight mm tape libraries are known primarily for their small size, large storage capacity, and low cost. However, many applications require an additional attribute which, heretofore, has been lacking -- high transfer rate. Transfer rate is particularly important in a large-scale data distribution environment -- an environment in which 8 mm tape should play a very important role. Data distribution is a natural application for 8 mm for several reasons: most large laboratories have access to 8 mm tape drives, 8 mm tapes are upwardly compatible, 8 mm media are very inexpensive, 8 mm media are lightweight (important for shipping purposes), and 8 mm media densely pack data (5 gigabytes now and 15 gigabytes on the horizon). If the transfer rate issue were resolved, 8 mm could offer a good solution to the data distribution problem. To that end, Exabyte has analyzed four ways to increase its transfer rate: native drive transfer rate increases, data compression at the drive level, tape striping, and homogeneous drive utilization. Exabyte is actively pursuing native drive transfer rate increases and drive-level data compression. However, for non-transmitted bulk data applications (which include data distribution) the other two methods (tape striping and homogeneous drive utilization) hold promise.
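The transfer-rate arithmetic behind tape striping is simple: the aggregate rate scales with the number of drives, less overhead. The native rate below is an assumed figure for illustration, not an Exabyte specification.

```python
def striped_rate_mb_s(native_rate_mb_s, n_drives, efficiency=1.0):
    """Aggregate rate when a data set is striped across n_drives drives;
    efficiency < 1.0 models striping/synchronization overhead."""
    return native_rate_mb_s * n_drives * efficiency

# Four striped drives at an assumed 0.5 MB/s native rate give 2.0 MB/s ideally.
print(striped_rate_mb_s(0.5, 4))  # -> 2.0
```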
Large-scale anisotropy of the cosmic microwave background radiation
NASA Technical Reports Server (NTRS)
Silk, J.; Wilson, M. L.
1981-01-01
Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.
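The dipole term discussed above arises, to first order, from the observer's motion relative to the radiation field. A minimal sketch, using present-day representative values for the mean CMB temperature and peculiar velocity (the 1981 paper of course predates these figures):

```python
import math

C = 2.998e8   # speed of light, m/s
T0 = 2.725    # mean CMB temperature, K (representative modern value)
V = 370e3     # observer peculiar velocity, m/s (roughly the measured value)

def dipole_dT(theta_rad, v=V, t0=T0):
    """First-order Doppler dipole: dT(theta) = T0 * (v/c) * cos(theta)."""
    return t0 * (v / C) * math.cos(theta_rad)

# The amplitude toward the apex (theta = 0) is a few millikelvin.
print(f"{dipole_dT(0.0) * 1e3:.2f} mK")  # -> 3.36 mK
```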
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.
With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is virtually an untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are yielded using a large-scale integrated transmission system model coupled with multiple distribution systems.
NASA Astrophysics Data System (ADS)
Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.
2003-09-01
In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
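The interface-variable exchange at fixed communication intervals can be sketched as two toy first-order subsystems that see each other's state only at interval boundaries, integrating independently in between. The dynamics are invented for illustration and do not represent the paper's aircraft power system models.

```python
def cosimulate(h_comm=0.01, h_int=0.001, t_end=1.0):
    """Two coupled subsystems exchange interface variables every h_comm
    seconds and integrate internally with a finer step h_int in between."""
    x, y = 1.0, 0.0          # subsystem states
    t = 0.0
    while t < t_end - 1e-12:
        x_if, y_if = x, y    # interface values frozen for this interval
        for _ in range(round(h_comm / h_int)):
            x += h_int * (-2.0 * x + y_if)   # subsystem A sees a stale y
            y += h_int * (-1.0 * y + x_if)   # subsystem B sees a stale x
        t += h_comm
    return x, y

x, y = cosimulate()
print(round(x, 4), round(y, 4))
```

Shrinking `h_comm` trades simulation speed for coupling accuracy, which is the interval-selection question the paper discusses.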
The Large-scale Distribution of Galaxies
NASA Astrophysics Data System (ADS)
Flin, Piotr
A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special concern is given to the spatial distribution of galaxies, voids, and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.
Male group size, female distribution and changes in sexual segregation by Roosevelt elk
Peterson, Leah M.
2017-01-01
Sexual segregation, or the differential use of space by males and females, is hypothesized to be a function of body size dimorphism. Sexual segregation can also manifest at small (social segregation) and large (habitat segregation) spatial scales for a variety of reasons. Furthermore, the connection between small- and large-scale sexual segregation has rarely been addressed. We studied a population of Roosevelt elk (Cervus elaphus roosevelti) across 21 years in north coastal California, USA, to assess small- and large-scale sexual segregation in winter. We hypothesized that male group size would associate with small-scale segregation and that a change in female distribution would associate with large-scale segregation. Variation in forage biomass might also be coupled to small- and large-scale sexual segregation. Our findings were consistent with male group size associating with small-scale segregation and a change in female distribution associating with large-scale segregation. Females appeared to avoid large groups comprised of socially dominant males. Males appeared to occupy a habitat vacated by females because of a wider forage niche, greater tolerance to lethal risks, and, perhaps, to reduce encounters with other elk. Sexual segregation at both spatial scales was a poor predictor of forage biomass. Size dimorphism was coupled to change in sexual segregation at small and large spatial scales. Small-scale segregation can seemingly manifest when all forage habitat is occupied by females, and large-scale segregation might happen when some forage habitat is not occupied by females. PMID:29121076
Architectural Optimization of Digital Libraries
NASA Technical Reports Server (NTRS)
Biser, Aileen O.
1998-01-01
This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues: specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.
Large-Scale Simulation of Multi-Asset Ising Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2017-03-01
We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering and exhibits unstable periods indicated by the volatility index measured as the average of absolute-returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes at high volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change at high volatility periods.
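Assuming the standard definition of the inverse participation ratio, IPR = Σᵢ |vᵢ|⁴ for a normalized eigenvector v of the cross-correlation matrix (and the sixth power for IPR6), the quantity itself is easy to compute once an eigenvector is in hand:

```python
def ipr(vector, power=4):
    """Inverse participation ratio of a (possibly unnormalized) vector.
    power=4 gives the usual IPR; power=6 gives the IPR6 variant."""
    norm = sum(v * v for v in vector) ** 0.5
    return sum(abs(v / norm) ** power for v in vector)

# A fully delocalized eigenvector over N = 4 assets: IPR = 1/N.
print(ipr([1.0, 1.0, 1.0, 1.0]))   # -> 0.25
# Fully localized on a single asset: IPR = 1.
print(ipr([1.0, 0.0, 0.0, 0.0]))   # -> 1.0
```

Higher IPR thus signals that correlation structure is concentrated on few assets, which is why it changes during high-volatility periods.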
Service Discovery Oriented Management System Construction Method
NASA Astrophysics Data System (ADS)
Li, Huawei; Ren, Ying
2017-10-01
To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for distributed, service-discovery-oriented management systems. Three measurement functions are proposed to compute nearest-neighbor user similarity at different levels. In view of the low efficiency of current service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through factor addition and subtraction in quantitative experiments.
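The paper does not spell out its three measurement functions, so the sketch below uses cosine similarity over user rating vectors as one plausible stand-in for a nearest-neighbor user similarity measure.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two user rating vectors; 1.0 means the
    profiles are proportional, 0.0 means they are orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Proportional rating profiles are maximally similar.
print(round(cosine_similarity([2.0, 4.0, 1.0], [4.0, 8.0, 2.0]), 6))  # -> 1.0
```

Nearest neighbors of a user are then the users with the highest similarity scores.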
NASA Astrophysics Data System (ADS)
Guenther, A. B.; Duhl, T.
2011-12-01
Increasing computational resources have enabled a steady improvement in the spatial resolution used for earth system models. Land surface models and landcover distributions have kept ahead by providing higher spatial resolution than typically used in these models. Satellite observations have played a major role in providing high resolution landcover distributions over large regions or the entire earth surface, but ground observations are needed to calibrate these data and provide accurate inputs for models. As our ability to resolve individual landscape components improves, it is important to consider what scale is sufficient for providing inputs to earth system models. The required spatial scale is dependent on the processes being represented and the scientific questions being addressed. This presentation will describe the development of a contiguous U.S. landcover database using high resolution imagery (1 to 1000 meters) and surface observations of species composition and other landcover characteristics. The database includes plant functional types and species composition and is suitable for driving land surface models (CLM and MEGAN) that predict land surface exchange of carbon, water, energy and biogenic reactive gases (e.g., isoprene, sesquiterpenes, and NO). We investigate the sensitivity of model results to landcover distributions with spatial scales ranging over six orders of magnitude (1 meter to 1,000,000 meters). The implications for predictions of regional climate and air quality will be discussed along with recommendations for regional and global earth system modeling.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2016-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert
2016-05-24
The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.
Scaling earthquake ground motions for performance-based assessment of buildings
Huang, Y.-N.; Whittaker, A.S.; Luco, N.; Hamburger, R.O.
2011-01-01
The impact of alternate ground-motion scaling procedures on the distribution of displacement responses in simplified structural systems is investigated. Recommendations are provided for selecting and scaling ground motions for performance-based assessment of buildings. Four scaling methods are studied, namely: (1) geometric-mean scaling of pairs of ground motions, (2) spectrum matching of ground motions, (3) first-mode-period scaling to a target spectral acceleration, and (4) scaling of ground motions per the distribution of spectral demands. Data were developed by nonlinear response-history analysis of a large family of nonlinear single-degree-of-freedom (SDOF) oscillators that could represent fixed-base and base-isolated structures. The advantages and disadvantages of each scaling method are discussed. The relationship between spectral shape and a ground-motion randomness parameter is presented. A scaling procedure that explicitly considers spectral shape is proposed. © 2011 American Society of Civil Engineers.
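Method (1), geometric-mean scaling, reduces to a single scale factor applied to both components of a record pair. A minimal sketch, with illustrative spectral accelerations rather than values computed from real records:

```python
import math

def geomean_scale_factor(sa_x, sa_y, sa_target):
    """Scale factor k such that the geometric mean of the scaled components,
    sqrt((k*sa_x) * (k*sa_y)), equals sa_target at the period of interest."""
    return sa_target / math.sqrt(sa_x * sa_y)

# Components with Sa = 0.4 g and 0.9 g (geometric mean 0.6 g), target 0.75 g.
k = geomean_scale_factor(0.4, 0.9, 0.75)
print(round(k, 4))  # -> 1.25
```

The same factor would then scale both acceleration time histories before response-history analysis.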
An optimal beam alignment method for large-scale distributed space surveillance radar system
NASA Astrophysics Data System (ADS)
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the Transmitting/Receiving (T/R) beams over a large volume of space using narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which can visually describe and assess beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
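One piece of the alignment problem, checking how far apart two beam axes point given their direction angles, can be sketched as below; the full inter-site coordinate transformations from the paper are omitted, and the angles are illustrative.

```python
import math

def unit_vector(az_deg, el_deg):
    """Unit pointing vector from azimuth (east of north) and elevation, in a
    local east-north-up frame."""
    az, el = math.radians(az_deg), math.radians(el_deg)
    return (math.cos(el) * math.sin(az),
            math.cos(el) * math.cos(az),
            math.sin(el))

def angular_separation_deg(a, b):
    """Angle between two unit vectors, clamped against rounding error."""
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b))))
    return math.degrees(math.acos(dot))

# Transmit and receive beam axes; alignment requires the separation to stay
# within the (narrow) beamwidth.
tx = unit_vector(45.0, 30.0)
rx = unit_vector(45.5, 30.2)
print(round(angular_separation_deg(tx, rx), 3))
```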
Competition and Cooperation of Distributed Generation and Power System
NASA Astrophysics Data System (ADS)
Miyake, Masatoshi; Nanahara, Toshiya
Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to a power system, coordination and competition between distributed generators and large-scale power sources will be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best-response correspondence of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Second, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.
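The best-response iteration toward a Nash equilibrium can be illustrated with a textbook Cournot-style quantity game; the linear demand and cost figures are invented stand-ins for the paper's utility/cogenerator model.

```python
A, B = 100.0, 1.0      # inverse demand p = A - B*(q1 + q2)
C1, C2 = 10.0, 20.0    # marginal costs: utility, cogenerator

def best_response(q_other, cost):
    """Profit-maximizing quantity given the rival's quantity (non-negative)."""
    return max(0.0, (A - cost - B * q_other) / (2.0 * B))

# Iterate simultaneous best responses until they settle at the equilibrium.
q1, q2 = 0.0, 0.0
for _ in range(100):
    q1, q2 = best_response(q2, C1), best_response(q1, C2)

# Analytic Cournot equilibrium: q1 = (A - 2*C1 + C2)/(3*B), q2 = (A - 2*C2 + C1)/(3*B).
print(round(q1, 2), round(q2, 2))  # -> 33.33 23.33
```

At the fixed point neither player can improve by deviating unilaterally, which is the defining property of the Nash equilibrium examined above.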
Size and structure of Chlorella zofingiensis /FeCl 3 flocs in a shear flow: Algae Floc Structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyatt, Nicholas B.; O'Hern, Timothy J.; Shelden, Bion
Flocculation is a promising method to overcome the economic hurdle of separating algae from its growth medium in large-scale operations. However, understanding the floc structure and the effects of shear on it is crucial to the large-scale implementation of this technique. The floc structure is important because it determines, in large part, the density and settling behavior of the algae. Freshwater algae floc size distributions and fractal dimensions are presented as a function of applied shear rate in a Couette cell using ferric chloride as a flocculant. Comparisons are made with measurements made for a polystyrene microparticle model system taken here as well as reported literature results. The algae floc size distributions are found to be self-preserving with respect to shear rate, consistent with literature data for polystyrene. Moreover, three fractal dimensions are calculated which quantitatively characterize the complexity of the floc structure. Low shear rates result in large, relatively densely packed flocs which elongate and fracture as the shear rate is increased. The results presented here provide crucial information for economically implementing flocculation as a large-scale algae harvesting strategy.
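A fractal dimension of the kind reported is commonly estimated from the power-law scaling of aggregate mass (or primary-particle count) with floc size, via a least-squares fit in log-log space. The data below are synthetic, generated with D_f = 2.1, not measurements from the paper.

```python
import math

def fractal_dimension(lengths, masses):
    """Slope of log(mass) vs. log(length), i.e. D_f in mass ~ length**D_f."""
    xs = [math.log(l) for l in lengths]
    ys = [math.log(m) for m in masses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

lengths = [10.0, 20.0, 40.0, 80.0, 160.0]   # floc sizes (arbitrary units)
masses = [l ** 2.1 for l in lengths]        # ideal power law, no noise
print(round(fractal_dimension(lengths, masses), 3))  # -> 2.1
```

Denser flocs give slopes approaching 3; open, stringy flocs give lower values, which is the structural change the shear-rate study tracks.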
Hierarchical Data Distribution Scheme for Peer-to-Peer Networks
NASA Astrophysics Data System (ADS)
Bhushan, Shashi; Dave, M.; Patel, R. B.
2010-11-01
In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm, which provides a large-scale and cost-effective mechanism for data sharing, and a P2P system may also be used for storing data globally. Can a conventional database be implemented on a P2P system? A successful implementation of conventional databases on P2P systems is yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchical data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are the parameters evaluated with respect to the availability factor.
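The availability-driven placement the abstract describes can be sketched as follows. This is a hypothetical heuristic and a simple independence-based replica model for illustration, not the paper's actual scheme; all names are invented.

```python
def availability_with_replicas(a, k):
    """Probability that at least one of k replicas, each hosted on a
    peer with availability a, is reachable (independence assumed)."""
    return 1 - (1 - a) ** k

def place_partitions(partitions, peers):
    """Greedy placement sketch: hottest partitions (highest query rate)
    are assigned first to the least-loaded peer, with ties broken in
    favor of higher availability.  Illustrative only."""
    ranked = sorted(peers, key=lambda p: -p["availability"])
    placement = {}
    for part in sorted(partitions, key=lambda q: -q["query_rate"]):
        peer = min(ranked, key=lambda p: p.get("load", 0.0))
        peer["load"] = peer.get("load", 0.0) + part["query_rate"]
        placement[part["name"]] = peer["id"]
    return placement
```

With two replicas on peers of 90% availability, a partition is reachable 99% of the time, which is the kind of effect the availability-factor analysis quantifies.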
NASA Technical Reports Server (NTRS)
Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.
1987-01-01
The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.
NASA Technical Reports Server (NTRS)
Klumpar, D. M. (Principal Investigator)
1982-01-01
The status of the initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large scale space-current system.
ERIC Educational Resources Information Center
Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell
2017-01-01
Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…
Large Scale Frequent Pattern Mining using MPI One-Sided Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vishnu, Abhinav; Agarwal, Khushbu
In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing scalable FP-Growth, the de facto frequent pattern mining algorithm, on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.
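The quoted complexity reduction can be sanity-checked numerically. A minimal sketch, assuming constants can be dropped from the O(f + p/f) cost model (the group count f that minimizes the cost is sqrt(p)):

```python
import math

def exchange_cost(p, f):
    """Communication cost model from the abstract, constants dropped:
    exchanging data among f groups across p processes costs f + p/f,
    versus p for an all-to-all scheme."""
    return f + p / f

p = 4096
best_f = min(range(1, p + 1), key=lambda f: exchange_cost(p, f))
# Setting d/df (f + p/f) = 1 - p/f**2 = 0 gives the optimum f = sqrt(p).
assert best_f == math.isqrt(p)   # 64 for p = 4096
```

At p = 4096 the model's cost drops from 4096 (all-to-all) to 128, consistent in spirit with the reported speedup, though the real algorithm's constants are of course not captured here.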
Locating inefficient links in a large-scale transportation network
NASA Astrophysics Data System (ADS)
Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu
2015-02-01
Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution for both ΔT < 0 and ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
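The sign convention ΔT < 0 (closing a link reduces total travel time) is exactly Braess's paradox. A worked sketch on the textbook Braess network, a stand-in illustration rather than the paper's San Francisco data: two parallel routes each cost x/100 + 45 minutes for x drivers, and a zero-cost shortcut links their congestible halves.

```python
def braess_total_time(n_drivers, with_shortcut):
    """Total travel time at user equilibrium on the textbook Braess
    network (illustrative; not the paper's model or data)."""
    if not with_shortcut:
        x = n_drivers / 2                # symmetric equilibrium split
        per_driver = x / 100 + 45
    else:
        # With the shortcut open, the route through both congestible
        # links dominates, so every driver takes it: n/100 + 0 + n/100.
        per_driver = 2 * n_drivers / 100
    return n_drivers * per_driver

T_open = braess_total_time(4000, with_shortcut=True)     # link in service
T_closed = braess_total_time(4000, with_shortcut=False)  # link closed
delta_T = T_closed - T_open   # negative: closing this link helps
```

With 4000 drivers, each trip takes 80 minutes with the shortcut open but only 65 minutes with it closed, so ΔT is negative for this link, the paradoxical case the genetic algorithm hunts for at scale.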
NASA Astrophysics Data System (ADS)
Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.
2013-09-01
A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
NASA Technical Reports Server (NTRS)
Geller, Margaret J.; Huchra, J. P.
1991-01-01
Present-day understanding of the large-scale galaxy distribution is reviewed. The statistics of the CfA redshift survey are briefly discussed. The need for deeper surveys to clarify the issues raised by recent studies of large-scale galactic distribution is addressed.
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
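One representative algorithm for the rapid solution of large symmetric positive-definite linear systems is the conjugate gradient method. A minimal pure-Python sketch for illustration only; the FORTRAN codes the abstract describes are not reproduced here, and a production solver would use optimized BLAS kernels:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive-definite matrix A,
    given as a list of row lists.  Each iteration needs only one
    matrix-vector product, which is what parallelizes well."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x
```

The dominant cost per iteration, the matrix-vector product, maps naturally onto both shared and distributed memory machines, which is why Krylov methods of this kind suit the architectures discussed above.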
Comparison of the Frontier Distributed Database Caching System to NoSQL Databases
NASA Astrophysics Data System (ADS)
Dykstra, Dave
2012-12-01
One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
Costs and cost-effectiveness of vector control in Eritrea using insecticide-treated bed nets.
Yukich, Joshua O; Zerom, Mehari; Ghebremeskel, Tewolde; Tediosi, Fabrizio; Lengeler, Christian
2009-03-30
While insecticide-treated nets (ITNs) are a recognized effective method for preventing malaria, there has been an extensive debate in recent years about the best large-scale implementation strategy. Implementation costs and cost-effectiveness are important elements to consider when planning ITN programmes, but so far little information on these aspects is available from national programmes. This study uses a standardized methodology, as part of a larger comparative study, to collect cost data and cost-effectiveness estimates from a large programme providing ITNs at the community level and ante-natal care facilities in Eritrea. This is a unique model of ITN implementation fully integrated into the public health system. Base case analysis results indicated that the average annual cost of ITN delivery (2005 USD 3.98) was very attractive when compared with past ITN delivery studies at different scales. Financing was largely from donor sources, though the Eritrean government and net users also contributed funding. The intervention's cost-effectiveness was in a highly attractive range for sub-Saharan Africa: the cost per DALY averted was USD 13-44, and the cost per death averted was USD 438-1449. Distribution of nets coincided with significant increases in coverage and usage of nets nationwide, approaching or exceeding international targets in some areas. ITNs can be cost-effectively delivered at a large scale in sub-Saharan Africa through a distribution system that is highly integrated into the health system. Operating and sustaining such a system still requires strong donor funding and support, as well as a functional and extensive system of health facilities and community health workers already in place.
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1977-01-01
The general problem of bistatic scattering from a two scale surface was evaluated. The treatment was entirely two-dimensional and in a vector formulation independent of any particular coordinate system. The two scale scattering model was then applied to backscattering from the sea surface. In particular, the model was used in conjunction with the JONSWAP 1975 aircraft scatterometer measurements to determine the sea surface's two scale roughness distributions, namely the probability density of the large scale surface slope and the capillary wavenumber spectrum. Best fits yield, on the average, a 0.7 dB rms difference between the model computations and the vertical polarization measurements of the normalized radar cross section. Correlations between the distribution parameters and the wind speed were established from linear, least squares regressions.
Identification of Curie temperature distributions in magnetic particulate systems
NASA Astrophysics Data System (ADS)
Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.
2017-09-01
This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method are demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
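The basic idea of reading a Curie temperature distribution off M(T) data can be illustrated with a deliberately idealised sketch: if each grain is assumed to demagnetise sharply at its own Tc, the Tc density is approximately the negative derivative -dM/dT of the normalised magnetisation curve. This is a toy version for intuition, not the paper's integral, noise-robust method.

```python
def curie_distribution(T, M):
    """Estimate a Curie-temperature density as -dM/dT via finite
    differences on a normalised M(T) curve.  Idealised: assumes each
    grain switches off sharply at its own Tc and ignores noise."""
    dist = [-(M[i] - M[i - 1]) / (T[i] - T[i - 1]) for i in range(1, len(T))]
    midpoints = [(T[i] + T[i - 1]) / 2 for i in range(1, len(T))]
    return midpoints, dist
```

For a normalised curve the estimated density integrates to one; on real data this naive derivative amplifies measurement noise, which is precisely the problem the paper's integral formulation is designed to avoid.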
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-08-18
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in sensing or in the communication process and introduces deviations into data fusion. This paper describes a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement in harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2014-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. [Abstract garbled in extraction; recoverable fragment:] a method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density.
Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering
NASA Astrophysics Data System (ADS)
Koehler, Sarah Muraoka
Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. 
The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. 
This contrasts to previous work on MPC for ramp metering where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and apply the inexact interior point method to this nonlinear non-convex ramp metering problem.
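The core idea behind the distributed-optimization algorithms discussed above (dual decomposition, ADMM, and the thesis's own methods) can be shown on a toy coupled problem. This sketch is a stand-in illustration, not the thesis's building or ramp-metering formulation: minimize x1^2 + x2^2 subject to x1 + x2 = 1, where each "agent" solves its local Lagrangian in closed form and a coordinator updates a shared price by gradient ascent on the constraint residual.

```python
def dual_decomposition(steps=200, rate=0.5):
    """Dual decomposition for: min x1^2 + x2^2  s.t.  x1 + x2 = 1.
    Agent i minimises x_i^2 + lam * x_i independently (x_i = -lam/2);
    the coordinator nudges the price lam toward constraint satisfaction."""
    lam = 0.0
    x1 = x2 = 0.0
    for _ in range(steps):
        x1 = -lam / 2.0                  # agent 1's independent solve
        x2 = -lam / 2.0                  # agent 2's independent solve
        lam += rate * (x1 + x2 - 1.0)    # price update on the residual
    return x1, x2

x1, x2 = dual_decomposition()
```

The iteration converges to x1 = x2 = 0.5 at price lam = -1. The per-iteration message exchange (local solves out, one price back) is exactly what makes communication delay so costly for these methods, motivating the fast-converging active-set variant the thesis proposes.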
Large-scale displacement following the 2016 Kaikōura earthquake
NASA Astrophysics Data System (ADS)
Wang, T.; Peng, D.; Barbot, S.; Wei, S.; Shi, X.
2017-12-01
The 2016 Mw 7.9 Kaikōura earthquake occurred near the southern termination of the Hikurangi subduction system, where a transition from subduction to strike-slip motion dominates the pre-seismic strain accumulation. Dense spatial coverage of GPS measurements and a large number of Interferometric Synthetic Aperture Radar (InSAR) images provide valuable constraints, from the near field to the far field, for studying how slip is distributed among the subduction interface and the overlying fault system before, during and after the earthquake. We extract time-series deformation from the New Zealand continuous GPS network, and SAR images acquired from the Japanese ALOS-2 and European Sentinel-1A/B satellites, to image the surface deformation related to the 2016 Kaikōura earthquake. Both GPS and InSAR data, which cover the entire New Zealand region, show that the co-seismic and post-seismic deformations are distributed over an extraordinarily large area, as far as the northern tip of the North Island. Based on a coseismic slip model derived from seismic and geodetic observations, we calculate the stress perturbation incurred by the earthquake. We explore a range of possible friction laws and rheologies via a linear combination of strain rate in finite volumes and slip velocity on ruptured faults. We obtain the slip distribution that best explains our geodetic measurements using an outlier-insensitive hierarchical Bayesian model, to better understand the different mechanisms behind the localized shallow afterslip and the distributed deformation. Our results indicate that complex interactions between the subduction interface and the overlying fault system play an important role in causing such large-scale deformation during and after the earthquake.
ERIC Educational Resources Information Center
Turner, Henry J.
2014-01-01
This dissertation of practice utilized a multiple case-study approach to examine distributed leadership within five school districts that were attempting to gain acceptance of a large-scale 1:1 technology initiative. Using frame theory and distributed leadership theory as theoretical frameworks, this study interviewed each district's…
NASA Technical Reports Server (NTRS)
Birman, Kenneth; Cooper, Robert; Marzullo, Keith
1990-01-01
The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project is distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.
Ben-Naim, E; Krapivsky, P L
2003-09-01
We study a class of growth processes in which clusters evolve via exchange of particles. We show that depending on the rate of exchange there are three possibilities: (I) growth, in which clusters grow indefinitely; (II) gelation, in which all mass is transformed into an infinite gel in a finite time; and (III) instant gelation. In regimes I and II, the cluster size distribution attains a self-similar form. The large-size tail of the scaling distribution is Φ(x) ~ exp(−x^(2−ν)), where ν is the homogeneity degree of the rate of exchange. In the borderline case ν = 2, the distribution exhibits a generic algebraic tail, Φ(x) ~ x^(−5). In regime III, the gel nucleates immediately and consumes the entire system. For finite systems, the gelation time vanishes logarithmically, T ~ [ln N]^(−(ν−2)), in the large system size limit N → ∞. The theory is applied to coarsening in the infinite-range Ising-Kawasaki model and in electrostatically driven granular layers.
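The exchange process itself is simple to simulate. A Monte-Carlo sketch with size-independent exchange rates (i.e. homogeneity ν = 0, a simplification of the general kernel studied in the paper); clusters that lose their last particle vanish, so the system coarsens while total mass is conserved:

```python
import random

def exchange_growth(n_clusters=100, steps=20000, seed=1):
    """Exchange-driven growth sketch: at each step one randomly chosen
    cluster passes a single particle to another.  Rates are taken as
    size-independent (nu = 0) for simplicity; empty clusters vanish."""
    random.seed(seed)
    sizes = [1] * n_clusters
    for _ in range(steps):
        if len(sizes) < 2:
            break
        i, j = random.sample(range(len(sizes)), 2)
        sizes[i] -= 1
        sizes[j] += 1
        if sizes[i] == 0:
            sizes.pop(i)          # cluster consumed by the exchange
    return sizes

sizes = exchange_growth()
```

Starting from 100 monomers, the number of clusters falls while the surviving clusters grow; with size-dependent kernels (ν > 0) this coarsening accelerates, leading to the gelation regimes described above.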
NASA Technical Reports Server (NTRS)
Wong, Sun; Del Genio, Anthony; Wang, Tao; Kahn, Brian; Fetzer, Eric J.; L'Ecuyer, Tristan S.
2015-01-01
Goals: Water budget-related dynamical phase space; Connect large-scale dynamical conditions to atmospheric water budget (including precipitation); Connect atmospheric water budget to cloud type distributions.
The Global Distribution of Precipitation and Clouds. Chapter 2.4
NASA Technical Reports Server (NTRS)
Shepherd, J. Marshall; Adler, Robert; Huffman, George; Rossow, William; Ritter, Michael; Curtis, Scott
2004-01-01
The water cycle is the key circuit moving water through the Earth's system. This large system, powered by energy from the sun, is a continuous exchange of moisture between the oceans, the atmosphere, and the land. Precipitation (including rain, snow, sleet, freezing rain, and hail) is the primary mechanism for transporting water from the atmosphere back to the Earth's surface and is the key physical process that links aspects of climate, weather, and the global water cycle. Global precipitation and associated cloud processes are critical for understanding the water cycle balance on a global scale and interactions with the Earth's climate system. However, unlike measurement of less dynamic and more homogeneous meteorological fields such as pressure or even temperature, accurate assessment of global precipitation is particularly challenging due to its highly stochastic and rapidly changing nature. It is not uncommon to observe a broad spectrum of precipitation rates and distributions over very localized time scales. Furthermore, precipitating systems generally exhibit nonhomogeneous spatial distributions of rain rates over local to global domains.
Amplification of large scale magnetic fields in a decaying MHD system
NASA Astrophysics Data System (ADS)
Park, Kiwan
2017-10-01
Dynamo theory explains the amplification of magnetic fields in conducting fluids (plasmas) driven by continuous external energy input. It is known that continuous nonhelical kinetic or magnetic energy amplifies the small scale magnetic field, while helical energy, instability, or shear combined with rotation amplifies the large scale magnetic field. However, it was recently reported that decaying magnetic energy, independent of helicity or instability, can generate a large scale magnetic field. This phenomenon may look somewhat contradictory to conventional dynamo theory, but it gives us some clues to the fundamental mechanism of energy transfer in magnetized conducting fluids. It also implies that an ephemeral astrophysical event emitting magnetic and kinetic energy can be a direct cause of the large scale magnetic fields observed in space. As of now the exact physical mechanism is not yet understood in spite of several numerical results. The plasma motion coupled with a nearly conserved vector potential in the magnetohydrodynamic (MHD) system may transfer magnetic energy to the large scale. Also, the intrinsic property of the scaling-invariant MHD equation may decide the direction of energy transfer. In this paper we present simulation results of inversely transferred helical and nonhelical energy in a decaying MHD system. We introduce a field structure model based on the MHD equation to show that the transfer of magnetic energy is essentially bidirectional, depending on the plasma motion and initial energy distribution. We then derive the α coefficient algebraically, in line with the field structure model, to explain how the large scale magnetic field is induced by helical energy in the system regardless of an external forcing source. For the algebraic analysis of nonhelical magnetic energy, we use the eddy-damped quasi-normal Markovian approximation to show the inverse transfer of magnetic energy.
NASA Astrophysics Data System (ADS)
Thorslund, J.; Jarsjo, J.; Destouni, G.
2017-12-01
The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater of good quality, it is critical to understand how pollutants are released into, transported through and transformed within the hydrological system. Some key scientific questions include: What are the net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions within and between components of the landscape, such as wetlands, in mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of the net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions through wetlands as a potential ecosystem service of pollutant mitigation. Results show that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at a larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions to pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales.
We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively small scale at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.
Implications of the IRAS data for galactic gamma-ray astronomy and EGRET
NASA Technical Reports Server (NTRS)
Stecker, F. W.
1990-01-01
Using the results of gamma-ray, millimeter wave and far infrared surveys of the galaxy, one can derive a logically consistent picture of the large scale distribution of galactic gas and cosmic rays, one tied to the overall processes of stellar birth and destruction on a galactic scale. Using the results of the IRAS far-infrared survey of the galaxy, the large scale radial distribution of galactic far-infrared emission was obtained independently for both the Northern and Southern Hemisphere sides of the Galaxy. The dominant feature in these distributions was found to be a broad peak coincident with the 5 kpc molecular gas cloud ring. Evidence of spiral arm features was also found. Strong correlations are evident between the large scale galactic distributions of far infrared emission, gamma-ray emission and total CO emission. There is a particularly tight correlation between the distribution of warm molecular clouds and far-infrared emission on a galactic scale.
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
2015-05-22
sensor networks for managing power levels of wireless networks; air and ground transportation systems for air traffic control and payload transport; ... a broad spectrum of applications including cooperative control of unmanned air vehicles, autonomous underwater vehicles, distributed sensor ... Keywords: network systems, large-scale systems, adaptive control, discontinuous systems.
NASA Astrophysics Data System (ADS)
Browne, Joshua B.
Anthropogenic greenhouse gas (GHG) emissions contribute to global warming, and must be mitigated. With GHG mitigation as an overarching goal, this research aims to study the potential for newfound and abundant sources of natural gas to play a role as part of a GHG mitigation strategy. However, recent work suggests that methane leakage in the current natural gas system may inhibit end-use natural gas as a robust mitigation strategy, but that natural gas as a feedstock for other forms of energy, such as electricity generation or liquid fuels, may support natural-gas-based mitigation efforts. Flaring of uneconomic natural gas, or outright loss of natural gas to the atmosphere, results in greenhouse gas emissions that could be avoided and that today are very large in aggregate. A central part of this study is to look at a new technology for converting natural gas into methanol at a unit scale that is matched to the size of individual natural gas wells. The goal is to convert stranded or otherwise flared natural gas into a commercially valuable product and thereby avoid any unnecessary emission to the atmosphere. This Ph.D. research contributes to the development of such a system and provides a comprehensive techno-economic and environmental assessment (for better or for worse) of this new technology. Recognizing the distributed nature of methane leakage associated with the natural gas system, this work is also intended to advance previous research at the Lenfest Center for Sustainable Energy that aims to show that small, modular energy systems can be made economic. This thesis contributes to and analyzes the development of a small-scale gas-to-liquids (GTL) system aimed at addressing flared natural gas from gas and oil wells.
This thesis includes system engineering around a design that converts natural gas to synthesis gas (syngas) in a reciprocating internal combustion engine and then converts the syngas into methanol in a small-scale reactor. With methanol as the product, this research aims to show that such a system can not only address current and future natural gas flaring regulation, but eventually compete economically with historically large-scale, centralized methanol production infrastructure. If successful, such systems could contribute to a shift away from large, multi-billion-dollar capital cost chemical plants toward smaller systems with shorter lifetimes, which may decrease the time to transition to more sustainable forms of energy and chemical conversion technologies. This research also quantifies the potential for such a system to contribute to mitigating GHG emissions, not only by addressing flared gas in the near term, but also by supporting future natural gas infrastructure ideas that may help to redefine the way the current natural gas pipeline system is used. The introduction of new, small-scale, distributed energy and chemical conversion systems located closer to the point of extraction may reduce methane leakage throughout the natural gas distribution system by reducing the reliance on, and risks associated with, the aging natural gas pipeline infrastructure. This thesis also identifies several areas for future work. From an economic perspective, factors that contribute to overall system cost, such as operation and maintenance (O&M) and the capital cost multiplier (referred to as the Lang factor for large-scale petrochemical plants), are not yet known for novel systems such as the technology presented here. From a technical perspective, commercialization of small-scale, distributed chemical conversion systems may create a demand for economical compression and air-separation technologies at this scale that do not currently exist.
Further, new business cases may arise aimed at utilizing small, remote sources of methane, such as biogas from agricultural and municipal waste. Finally, while methanol was selected as the end-product for this thesis, future applications of this technology may consider methane conversion to hydrogen, ammonia, and ethylene for example, challenging the orthodoxy in the chemical industry that "bigger is better."
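The factored (Lang-factor) costing mentioned above can be made concrete with a short sketch. All numbers below are hypothetical placeholders; the thesis's point is precisely that such factors are not yet known for small modular systems.

```python
def total_capital_cost(purchased_equipment_cost, lang_factor):
    """Classic factored estimate: total installed plant cost =
    Lang factor x total purchased equipment cost."""
    return lang_factor * purchased_equipment_cost

# A textbook Lang factor for a fluid-processing plant is roughly 4.7;
# whether anything like it applies at well-head scale is an open question.
equipment_cost = 2.0e6  # assumed purchased equipment cost, in dollars
plant_cost = total_capital_cost(equipment_cost, lang_factor=4.7)
```

For a small-scale GTL unit, both the equipment cost and the multiplier would have to be established empirically.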
2008-10-01
... AD); Aeolos, a distributed intrusion detection and event correlation infrastructure; STAND, a training-set sanitization technique applicable to ADs ... Summary of findings: (a) Automatic Patch Generation; (b) Better Patch Management; (c) Artificial Diversity; (d) Distributed Anomaly Detection.
Scaling and biomechanics of surface attachment in climbing animals
Labonte, David; Federle, Walter
2015-01-01
Attachment devices are essential adaptations for climbing animals and valuable models for synthetic adhesives. A major unresolved question for both natural and bioinspired attachment systems is how attachment performance depends on size. Here, we discuss how contact geometry and mode of detachment influence the scaling of attachment forces for claws and adhesive pads, and how allometric data on biological systems can yield insights into their mechanism of attachment. Larger animals are expected to attach less well to surfaces, due to their smaller surface-to-volume ratio, and because it becomes increasingly difficult to distribute load uniformly across large contact areas. In order to compensate for this decrease of weight-specific adhesion, large animals could evolve overproportionally large pads, or adaptations that increase attachment efficiency (adhesion or friction per unit contact area). Available data suggest that attachment pad area scales close to isometry within clades, but pad efficiency in some animals increases with size so that attachment performance is approximately size-independent. The mechanisms underlying this biologically important variation in pad efficiency are still unclear. We suggest that switching between stress concentration (easy detachment) and uniform load distribution (strong attachment) via shear forces is one of the key mechanisms enabling the dynamic control of adhesion during locomotion. PMID:25533088
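Allometric claims of the kind made above ("pad area scales close to isometry") are usually tested by fitting a power law area ~ mass^b in log-log space. A minimal sketch with synthetic data (not the authors' measurements); isometry for an area corresponds to b = 2/3:

```python
import math

def allometric_exponent(masses, areas):
    """Least-squares slope of log(area) vs log(mass):
    the exponent b in area ~ mass^b."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(a) for a in areas]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Synthetic, exactly isometric data: area proportional to mass^(2/3)
masses = [0.001, 0.01, 0.1, 1.0, 10.0]
areas = [m ** (2.0 / 3.0) for m in masses]
b = allometric_exponent(masses, areas)  # close to 2/3 for isometry
```

Real data would of course scatter around the fit, and the confidence interval on b is what decides between isometry and positive allometry.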
The distribution of free electrons in the inner galaxy from pulsar dispersion measures
NASA Technical Reports Server (NTRS)
Harding, D. S.; Harding, A. K.
1981-01-01
The dispersion measures of a sample of 149 pulsars in the inner Galaxy (|l| < 50 deg) were statistically analyzed to deduce the large-scale distribution of free thermal electrons in this region. The dispersion measure distribution of these pulsars shows significant evidence for a decrease in the electron scale height from a local value greater than the pulsar scale height to a value less than the pulsar scale height at galactocentric radii inside of approximately 7 kpc. An increase in the electron density (to a value around 0.15 per cu cm at 4 to 5 kpc) must accompany such a decrease in scale height. There is also evidence for a large-scale warp in the electron distribution below the b = 0 deg plane inside the Solar circle. A model is proposed for the electron distribution which incorporates these features, and Monte Carlo-generated dispersion measure distributions are presented for parameters which best reproduce the observed pulsar distributions.
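The quantity underlying this analysis, the dispersion measure, is just the electron column density along the line of sight, DM = ∫ n_e dl. A minimal sketch (illustrative densities, not the paper's model):

```python
def dispersion_measure(n_e_profile, dl):
    """DM = sum of n_e * dl along the line of sight.
    n_e_profile: sampled electron densities (cm^-3) on segments of length dl (pc).
    Result is in the usual pulsar units, pc cm^-3."""
    return sum(n_e_profile) * dl

def distance_from_dm(dm, mean_n_e):
    """Crude distance estimate assuming a uniform mean electron density."""
    return dm / mean_n_e

# A uniform 0.03 cm^-3 slab sampled on 100 segments of 10 pc (1 kpc deep)
profile = [0.03] * 100
dm = dispersion_measure(profile, 10.0)  # 30 pc cm^-3
```

The paper works in the other direction: given many pulsars' DMs, it infers the large-scale n_e(R, z) distribution statistically.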
Formation of large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.; Melott, Adrian L.; Bertschinger, Edmund
1989-01-01
Numerical simulations of large-scale structure formation from cosmic strings and massive neutrinos are described. The linear power spectrum in this model resembles the cold-dark-matter power spectrum. Galaxy formation begins early, and the final distribution consists of isolated density peaks embedded in a smooth background, leading to a natural bias in the distribution of luminous matter. The distribution of clustered matter has a filamentary appearance with large voids.
Boehm, Alexandria B
2002-10-15
In this study, we extend the established scaling theory for cluster size distributions generated during unsteady coagulation to number-flux distributions that arise during steady-state coagulation and settling in an unmixed water mass. The scaling theory predicts self-similar number-flux distributions and power-law decay of total number flux with depth. The shape of the number-flux distributions and the power-law exponent describing the decay of the total number flux are shown to depend on the homogeneity and small i/j limit of the coagulation kernel and the exponent kappa, which describes the variation in settling velocity with cluster volume. Particle field measurements from Lake Zurich, collected by U. Weilenmann and co-workers (Limnol. Oceanogr.34, 1 (1989)), are used to illustrate how the scaling predictions can be applied to a natural system. This effort indicates that within the mid-depth region of Lake Zurich, clusters of the same size preferentially interact and large clusters react with one another more quickly than small ones, indicative of clusters coagulating in a reaction-limited regime.
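The coagulation dynamics behind this scaling theory can be sketched with a discrete Smoluchowski equation. The sketch below uses a constant kernel (the paper considers general kernels with a homogeneity exponent, plus settling); mass is conserved up to the size-grid truncation.

```python
def smoluchowski_step(n, K, dt):
    """One Euler step of the discrete Smoluchowski equation.
    n[k] = number density of clusters of size k+1; K = constant kernel."""
    size = len(n)
    dn = [0.0] * size
    for i in range(size):
        for j in range(size):
            rate = K * n[i] * n[j]
            dn[i] -= rate                # i-mers consumed by collisions
            if i + j + 1 < size:         # product has size (i+1)+(j+1)
                dn[i + j + 1] += 0.5 * rate  # 0.5: each pair counted twice
    return [v + dt * d for v, d in zip(n, dn)]

# Start from monomers only and let the cluster-size distribution broaden
n = [1.0] + [0.0] * 9
for _ in range(100):
    n = smoluchowski_step(n, K=1.0, dt=0.01)
```

Total number density decreases as clusters merge, while total mass (sum of size times density) is nearly conserved on this short grid.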
Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen
2008-07-29
This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
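The analysis step at the heart of such Kalman-filter assimilation can be illustrated in scalar form. This is a sketch only: the real EKF is multivariate with linearized model dynamics, and the numbers below are invented for illustration.

```python
def kalman_update(x_model, p_model, y_obs, r_obs):
    """Scalar Kalman analysis step: blend a model estimate with an observation.
    x_model, p_model: model state (e.g. evapotranspiration) and its error variance
    y_obs, r_obs: observation (e.g. a SEBS retrieval) and its error variance"""
    k = p_model / (p_model + r_obs)            # Kalman gain
    x_analysis = x_model + k * (y_obs - x_model)
    p_analysis = (1.0 - k) * p_model           # analysis variance shrinks
    return x_analysis, p_analysis

# Model says 3.0 mm/day (variance 1.0); the retrieval says 4.0 mm/day (variance 1.0)
x, p = kalman_update(3.0, 1.0, 4.0, 1.0)
```

With equal error variances the analysis lands halfway between model and observation, and its variance is halved; unequal variances tilt the result toward the more trusted source.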
Qin, Changbo; Jia, Yangwen; Su, Z.(Bob); Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen
2008-01-01
This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems. PMID:27879946
Resurrecting hot dark matter - Large-scale structure from cosmic strings and massive neutrinos
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1988-01-01
These are the results of a numerical simulation of the formation of large-scale structure from cosmic-string loops in a universe dominated by massive neutrinos (hot dark matter). This model has several desirable features. The final matter distribution contains isolated density peaks embedded in a smooth background, producing a natural bias in the distribution of luminous matter. Because baryons can accrete onto the cosmic strings before the neutrinos, the galaxies will have baryon cores and dark neutrino halos. Galaxy formation in this model begins much earlier than in random-phase models. On large scales the distribution of clustered matter visually resembles the CfA survey, with large voids and filaments.
Gardner, W.P.; Susong, D.D.; Solomon, D.K.; Heasler, H.P.
2011-01-01
Multiple environmental tracers are used to investigate age distribution, evolution, and mixing in local- to regional-scale groundwater circulation around the Norris Geyser Basin area in Yellowstone National Park. Springs ranging in temperature from 3°C to 90°C in the Norris Geyser Basin area were sampled for stable isotopes of hydrogen and oxygen, major and minor element chemistry, dissolved chlorofluorocarbons, and tritium. Groundwater near Norris Geyser Basin comprises two distinct systems: a shallow, cool water system and a deep, high-temperature hydrothermal system. These two end-member systems mix to create springs with intermediate temperature and composition. Using multiple tracers from a large number of springs, it is possible to constrain the distribution of possible flow paths and refine conceptual models of groundwater circulation in and around a large, complex hydrothermal system. Copyright 2011 by the American Geophysical Union.
A likely universal model of fracture scaling and its consequence for crustal hydromechanics
NASA Astrophysics Data System (ADS)
Davy, P.; Le Goc, R.; Darcel, C.; Bour, O.; de Dreuzy, J. R.; Munier, R.
2010-10-01
We argue that most fracture systems are spatially organized according to two main regimes: a "dilute" regime for the smallest fractures, where they can grow independently of each other, and a "dense" regime for which the density distribution is controlled by the mechanical interactions between fractures. We derive a density distribution for the dense regime by acknowledging that, statistically, fractures do not cross a larger one. This very crude rule, which expresses the inhibiting role of large fractures against smaller ones but not the reverse, actually appears to be a very strong control on the eventual fracture density distribution since it results in a self-similar distribution whose exponents and density term are fully determined by the fractal dimension D and a dimensionless parameter γ that encompasses the details of fracture correlations and orientations. The range of values for D and γ appears to be extremely limited, which makes this model quite universal. This theory is supported by quantitative data on either fault or joint networks. The transition between the dilute and dense regimes occurs at about a few tenths of a kilometer for fault systems and a few meters for joints. This remarkable difference between both processes is likely due to a large-scale control (localization) of the fracture growth for faulting that does not exist for jointing. Finally, we discuss the consequences of this model on the flow properties and show that these networks are in a critical state, with a large number of nodes carrying a large amount of flow.
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
2017-06-01
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to selection rules that favor the rare trajectories of interest. Such algorithms are plagued by finite simulation time and finite population size, effects that can render their use delicate. In this paper, we present a numerical approach which uses the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of rare trajectories. The method we propose allows one to extract the infinite-time and infinite-size limit of these estimators, which, as shown on the contact process, provides a significant improvement of the large deviation function estimators compared to the standard one.
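The extrapolation idea can be illustrated with a linear fit in 1/N, where N is the population size (the paper's actual scaling forms also involve the finite simulation time; the data below are synthetic, constructed to converge to a known limit):

```python
def extrapolate_infinite_size(pairs):
    """Least-squares fit of estimator(N) ~ a + b/N; returns a,
    the N -> infinity limit. pairs: list of (population_size, estimate)."""
    xs = [1.0 / n for n, _ in pairs]
    ys = [e for _, e in pairs]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ys) / m
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - slope * mx  # intercept = infinite-size limit

# Synthetic estimates with a pure 1/N correction toward the limit 2.0
data = [(10, 2.0 + 3.0 / 10), (100, 2.0 + 3.0 / 100), (1000, 2.0 + 3.0 / 1000)]
limit = extrapolate_infinite_size(data)
```

In practice the estimates would carry statistical noise as well, so the fit is done over many runs at each N.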
Coalescence of repelling colloidal droplets: a route to monodisperse populations.
Roger, Kevin; Botet, Robert; Cabane, Bernard
2013-05-14
Populations of droplets or particles dispersed in a liquid may evolve through Brownian collisions, aggregation, and coalescence. We have found a set of conditions under which these populations evolve spontaneously toward a narrow size distribution. The experimental system consists of poly(methyl methacrylate) (PMMA) nanodroplets dispersed in a solvent (acetone) + nonsolvent (water) mixture. These droplets carry electrical charges, located on the ionic end groups of the macromolecules. We used time-resolved small angle X-ray scattering to determine their size distribution. We find that the droplets grow through coalescence events: the average radius ⟨R⟩ increases logarithmically with elapsed time while the relative width σR/⟨R⟩ of the distribution decreases as the inverse square root of ⟨R⟩. We interpret this evolution as resulting from coalescence events that are hindered by ionic repulsions between droplets. We generalize this evolution through a simulation of the Smoluchowski kinetic equation, with a kernel that takes into account the interactions between droplets. In the case of vanishing or attractive interactions, all droplet encounters lead to coalescence. The corresponding kernel leads to the well-known "self-preserving" particle distribution of the coalescence process, where σR/⟨R⟩ increases to a plateau value. However, for droplets that interact through long-range ionic repulsions, "large + small" droplet encounters are more successful at coalescence than "large + large" encounters. We show that the corresponding kernel leads to a particular scaling of the droplet-size distribution, known as the "second-scaling law" in the theory of critical phenomena, where σR/⟨R⟩ decreases as 1/√⟨R⟩ and becomes independent of the initial distribution. We argue that this scaling explains the narrow size distributions of colloidal dispersions that have been synthesized through aggregation processes.
Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models
NASA Astrophysics Data System (ADS)
Rigler, E. J.; Wiltberger, M. J.; Love, J. J.
2017-12-01
Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
THE DEPENDENCE OF PRESTELLAR CORE MASS DISTRIBUTIONS ON THE STRUCTURE OF THE PARENTAL CLOUD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parravano, Antonio; Sanchez, Nestor; Alfaro, Emilio J.
2012-08-01
The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle and Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle and Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.
The Dependence of Prestellar Core Mass Distributions on the Structure of the Parental Cloud
NASA Astrophysics Data System (ADS)
Parravano, Antonio; Sánchez, Néstor; Alfaro, Emilio J.
2012-08-01
The mass distribution of prestellar cores is obtained for clouds with arbitrary internal mass distributions using a selection criterion based on the thermal and turbulent Jeans mass and applied hierarchically from small to large scales. We have checked this methodology by comparing our results for a log-normal density probability distribution function with the theoretical core mass function (CMF) derived by Hennebelle & Chabrier, namely a power law at large scales and a log-normal cutoff at low scales, but our method can be applied to any mass distributions representing a star-forming cloud. This methodology enables us to connect the parental cloud structure with the mass distribution of the cores and their spatial distribution, providing an efficient tool for investigating the physical properties of the molecular clouds that give rise to the prestellar core distributions observed. Simulated fractional Brownian motion (fBm) clouds with the Hurst exponent close to the value H = 1/3 give the best agreement with the theoretical CMF derived by Hennebelle & Chabrier and Chabrier's system initial mass function. Likewise, the spatial distribution of the cores derived from our methodology shows a surface density of companions compatible with those observed in the Trapezium and Ophiuchus star-forming regions. This method also allows us to analyze the properties of the mass distribution of cores for different realizations. We found that the variations in the number of cores formed in different realizations of fBm clouds (with the same Hurst exponent) are much larger than the expected √N statistical fluctuations, increasing with H.
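The Jeans-mass selection criterion above can be sketched in toy form. The thermal Jeans mass scales as M_J ∝ T^(3/2) ρ^(-1/2); the prefactor depends on convention, so the sketch works in arbitrary units with prefactor 1, and the hierarchical (multi-scale) application is reduced to a single-level filter:

```python
def jeans_mass(T, rho, prefactor=1.0):
    """Thermal Jeans mass scaling, M_J ∝ T^(3/2) * rho^(-1/2).
    The dimensional prefactor is convention-dependent; set to 1 here."""
    return prefactor * T ** 1.5 * rho ** -0.5

def select_cores(cells, T=10.0):
    """Toy selection criterion: a cell is flagged as a core site if its
    mass exceeds its local Jeans mass. cells: list of (mass, density)."""
    return [(m, rho) for m, rho in cells if m > jeans_mass(T, rho)]
```

Denser gas has a lower Jeans mass, so over-dense cells are the ones most likely to pass the criterion; the paper applies this test hierarchically from small to large scales across the whole cloud.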
NASA Astrophysics Data System (ADS)
Cao, Xuesong; Jiang, Ling; Hu, Ruimin
2006-10-01
Currently, applications of surveillance systems are increasingly widespread, but few surveillance platforms can meet the requirements of large-scale, cross-regional, and flexible surveillance businesses. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is built on an object-oriented middleware called the Internet Communications Engine (ICE). This middleware helps our platform integrate a large number of surveillance resources and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe the design concepts of the system in detail and introduce the traits of ICE.
Vo, T D; Dwyer, G; Szeto, H H
1986-04-01
A relatively powerful and inexpensive microcomputer-based system for the spectral analysis of the EEG is presented. High resolution and speed are achieved with the use of recently available large-scale integrated circuit technology with enhanced functionality (Intel 8087 math co-processor), which can perform transcendental functions rapidly. The versatility of the system is achieved with a hardware organization that has distributed data acquisition capability, performed by a microprocessor-based analog-to-digital converter with large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly language subroutines perform, on-line or off-line, the fast Fourier transform and spectral analysis of the EEG, which is stored as soft as well as hard copy. Some results obtained from test application of the entire system in animal studies are presented.
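The core computation described, turning an EEG epoch into a power spectrum, can be sketched with a plain discrete Fourier transform (the paper uses a co-processor-assisted FFT; a direct O(N²) DFT is fine for a short illustrative epoch):

```python
import math
import cmath

def power_spectrum(signal, fs):
    """One-sided DFT power spectrum of a real signal sampled at fs Hz.
    Returns (frequencies in Hz, power per bin). Plain DFT, O(N^2)."""
    n = len(signal)
    freqs, power = [], []
    for k in range(n // 2 + 1):
        s = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        freqs.append(k * fs / n)
        power.append(abs(s) ** 2 / n)
    return freqs, power

# A pure 10 Hz "alpha-band" tone sampled at 100 Hz for 1 s
sig = [math.sin(2 * math.pi * 10 * t / 100) for t in range(100)]
freqs, power = power_spectrum(sig, fs=100.0)
```

The spectrum peaks in the bin at 10 Hz, which is how band powers (delta, theta, alpha, beta) are then read off in EEG work.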
NASA Technical Reports Server (NTRS)
Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary
1996-01-01
We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
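One concrete, deliberately simple data layout of the kind a network striped disk array uses is round-robin striping of fixed-size blocks across servers. The DPSS's distinguishing point is that applications may choose their own layout; this sketch just shows the default-style round-robin case:

```python
def stripe(data: bytes, n_servers: int, block_size: int):
    """Round-robin striping: block j of the object goes to
    server j mod n_servers. Returns one block list per server."""
    layout = [[] for _ in range(n_servers)]
    for i in range(0, len(data), block_size):
        layout[(i // block_size) % n_servers].append(data[i:i + block_size])
    return layout

def reassemble(layout):
    """Inverse of stripe: interleave blocks, one round across all servers
    at a time, back into the original object."""
    blocks, i = [], 0
    while True:
        added = False
        for server in layout:
            if i < len(server):
                blocks.append(server[i])
                added = True
        if not added:
            break
        i += 1
    return b"".join(blocks)
```

Striping lets a client read from all servers in parallel, which is what makes the aggregate throughput scale with the number of servers.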
Ensemble Kalman filtering in presence of inequality constraints
NASA Astrophysics Data System (ADS)
van Leeuwen, P. J.
2009-04-01
Kalman filtering in the presence of constraints is an active area of research. Based on the Gaussian assumption for the probability-density functions, it looks hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints related to e.g. the underlying physics or chemistry, which are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration is between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is just equally distributed over the allowed part of the pdf, as proposed by Shimada et al. (1998). However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability-density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman-filter formalism is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
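In an ensemble filter, the delta-at-the-truncation-point idea has a very direct counterpart: out-of-range ensemble members are moved onto the bound, so a finite fraction of the ensemble, and hence finite probability, sits exactly at the constraint. This is a crude per-member sketch, not the full Bayes-theorem treatment described above:

```python
def constrain_ensemble(members, lower=0.0, upper=1.0):
    """Apply an inequality constraint to ensemble members by moving
    out-of-range values onto the nearest bound. Members landing exactly
    on a bound play the role of the delta distribution there."""
    return [min(max(m, lower), upper) for m in members]

# Sea-ice concentrations must lie in [0, 1]
ens = [-0.1, 0.2, 0.5, 1.3]
constrained = constrain_ensemble(ens)
```

Unlike redistributing the truncated mass over the interior, this keeps a nonzero probability of the constrained value itself (e.g. exactly ice-free), which is the abstract's key point.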
Design and Implementation of Distributed Crawler System Based on Scrapy
NASA Astrophysics Data System (ADS)
Fan, Yuhao
2018-01-01
At present, some large-scale search engines at home and abroad only provide users with non-customizable search services, and a single-machine web crawler cannot handle such demanding tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis; a distributed crawler system based on the Scrapy framework is designed and implemented, and a Bloom filter is applied in the dupefilter module to reduce memory consumption. The movie information crawled from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
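The memory saving comes from the Bloom filter: a fixed-size bit array plus k hash functions answers "have we seen this URL?" with no false negatives and a tunable false-positive rate. A minimal illustrative version (not Scrapy-Redis's actual implementation):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter for URL de-duplication (illustrative sketch)."""

    def __init__(self, size=8192, n_hashes=4):
        self.size = size                   # number of bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size // 8)

    def _positions(self, item: str):
        # Derive n_hashes independent positions by salting one hash
        for i in range(self.n_hashes):
            h = hashlib.md5(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item: str):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("https://movie.douban.com/top250")
```

Memory is constant regardless of how many URLs are seen, at the cost of occasional false positives (a URL wrongly skipped), which is usually acceptable for crawl de-duplication.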
Properties and spatial distribution of galaxy superclusters
NASA Astrophysics Data System (ADS)
Liivamägi, Lauri Juhan
2017-01-01
Astronomy is a science that can offer plenty of unforgettable imagery, and the large-scale distribution of galaxies is no exception. Among the first features the viewer's eye is likely to be drawn to are large concentrations of galaxies: galaxy superclusters, contrasting with the seemingly empty regions beside them. Superclusters can extend from tens to over a hundred megaparsecs; they contain hundreds to thousands of galaxies, and many galaxy groups and clusters. Unlike galaxy clusters, superclusters are clearly unrelaxed systems, not gravitationally bound, as crossing times exceed the age of the universe, and they show little to no radial symmetry. Superclusters, as part of the large-scale structure, are sensitive to the initial power spectrum and the subsequent evolution. They are massive enough to leave an imprint on the cosmic microwave background radiation. Superclusters can also provide a unique environment for their constituent galaxies and galaxy clusters. In this study we used two different observational galaxy samples and one simulated sample to create several catalogues of structures that, we think, correspond to what are generally considered galaxy superclusters. Superclusters were delineated as continuous over-dense regions in galaxy luminosity density fields. When calculating density fields, several corrections were applied to remove small-scale redshift distortions and distance-dependent selection effects. The resulting catalogues of objects display robust statistical properties, showing that flux-limited galaxy samples can be used to create nearly volume-limited catalogues of superstructures. Generally, large superclusters can be regarded as massive, often branching filamentary structures that are mainly characterised by their length. Smaller superclusters, on the other hand, can display a variety of shapes. The spatial distribution of superclusters shows large-scale variations, with high-density concentrations often found in semi-regularly spaced groups.
Future studies are needed to quantify the relations between superclusters and finer details of the galaxy distribution. Supercluster catalogues from this thesis have already been used in numerous other studies.
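The delineation step described above (over-dense regions in a smoothed luminosity density field) can be illustrated with a toy sketch. The grid size, smoothing length, threshold and synthetic galaxy sample below are illustrative assumptions, not the values used in the thesis:

```python
import numpy as np
from scipy import ndimage

# Toy galaxy sample on a grid: positions plus luminosity weights.
rng = np.random.default_rng(0)
grid = np.zeros((64, 64, 64))
pos = rng.integers(0, 64, size=(5000, 3))
lum = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
np.add.at(grid, (pos[:, 0], pos[:, 1], pos[:, 2]), lum)

# Smooth to obtain a luminosity density field.
density = ndimage.gaussian_filter(grid, sigma=2.0)

# "Superclusters" as connected regions above an over-density threshold.
threshold = 1.5 * density.mean()
labels, n_superclusters = ndimage.label(density > threshold)
sizes = ndimage.sum(density > threshold, labels,
                    range(1, n_superclusters + 1))
```

Each labelled connected component plays the role of one catalogue entry; real analyses would add the redshift-distortion and selection-effect corrections mentioned above.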
Shallow cumuli ensemble statistics for development of a stochastic parameterization
NASA Astrophysics Data System (ADS)
Sakradzija, Mirjana; Seifert, Axel; Heus, Thijs
2014-05-01
According to a conventional deterministic approach to the parameterization of moist convection in numerical atmospheric models, a given large-scale forcing produces a unique response from the unresolved convective processes. This representation leaves out the small-scale variability of convection: as is known from empirical studies of deep and shallow convective cloud ensembles, there is a whole distribution of sub-grid states corresponding to a given large-scale forcing. Moreover, this distribution gets broader with increasing model resolution. This behavior is also consistent with our theoretical understanding of a coarse-grained nonlinear system. We propose an approach to represent the variability of the unresolved shallow-convective states, including the dependence of the spread and shape of the sub-grid state distribution on the model's horizontal resolution. Starting from the Gibbs canonical ensemble theory, Craig and Cohen (2006) developed a theory for the fluctuations in a deep convective ensemble. The micro-states of a deep convective cloud ensemble are characterized by the cloud-base mass flux, which, according to the theory, is exponentially distributed (Boltzmann distribution). Following their work, we study the shallow cumulus ensemble statistics and the distribution of the cloud-base mass flux. We employ a Large-Eddy Simulation (LES) model and a cloud tracking algorithm, followed by conditional sampling of clouds at the cloud-base level, to retrieve information about the individual cloud life cycles and the cloud ensemble as a whole. In the case of a shallow cumulus cloud ensemble, the distribution of micro-states is a generalized exponential distribution. Based on the empirical and theoretical findings, a stochastic model has been developed to simulate the shallow convective cloud ensemble and to test the convective ensemble theory.
The stochastic model simulates a compound random process, with the number of convective elements drawn from a Poisson distribution and cloud properties sub-sampled from a generalized ensemble distribution. We study the role of the different cloud subtypes in a shallow convective ensemble and how the diverse cloud properties and cloud lifetimes affect the system macro-state. To what extent does the cloud-base mass flux distribution deviate from the simple Boltzmann distribution, and how does this affect the results from the stochastic model? Is the memory, provided by the finite lifetime of individual clouds, of importance for the ensemble statistics? We also test for the minimal information that, given as input to the stochastic model, is able to reproduce the ensemble mean statistics and the variability of a convective ensemble. An important property of the resulting distribution of the sub-grid convective states is its scale-adaptivity: the smaller the grid size, the broader the compound distribution of the sub-grid states.
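The compound random process described above can be sketched in a few lines. This is a minimal illustration with assumed parameter values: the number of clouds per grid box is Poisson-distributed, and each cloud contributes a cloud-base mass flux drawn from an exponential (Boltzmann) distribution, following Craig and Cohen (2006):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_subgrid_mass_flux(mean_n_clouds, mean_flux_per_cloud, draws):
    """Total convective mass flux per grid box, for `draws` realisations."""
    # Poisson number of convective elements in the box...
    n_clouds = rng.poisson(mean_n_clouds, size=draws)
    # ...each contributing an exponentially distributed mass flux.
    return np.array([rng.exponential(mean_flux_per_cloud, k).sum()
                     for k in n_clouds])

# Scale-adaptivity: a smaller grid box holds fewer clouds on average, so the
# relative spread of the compound distribution is larger.
small_box = sample_subgrid_mass_flux(5.0, 1.0, 20000)
large_box = sample_subgrid_mass_flux(50.0, 1.0, 20000)
```

For a compound Poisson process with exponential increments the relative standard deviation scales as sqrt(2/lambda), so halving the mean cloud number broadens the distribution, mirroring the resolution dependence discussed above.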
Exact Extremal Statistics in the Classical 1D Coulomb Gas
NASA Astrophysics Data System (ADS)
Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory
2017-08-01
We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential—also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position xmax of the rightmost charge in the limit of large N. We show that the typical fluctuations of xmax around its mean are described by a nontrivial scaling function, with asymmetric tails. This distribution is different from the Tracy-Widom distribution of xmax for Dyson's log gas. We also compute the large deviation functions of xmax explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
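A numerical check of the kind mentioned above can be sketched with a toy Metropolis sampler: N like charges in a harmonic trap with the linear (1D Coulomb) pairwise repulsion, recording the rightmost position. The energy scaling, temperature and step size here are illustrative choices, not the paper's exact convention:

```python
import numpy as np

rng = np.random.default_rng(1)
N, alpha, beta, steps = 20, 1.0, 2.0, 20000

def energy(x):
    harmonic = 0.5 * np.sum(x ** 2)
    # 1D Coulomb repulsion between like charges: -|x_i - x_j| per pair
    # (divide by 2 to count each pair once).
    coulomb = -(alpha / N) * np.sum(np.abs(x[:, None] - x[None, :])) / 2.0
    return harmonic + coulomb

x = rng.normal(size=N)
e = energy(x)
xmax_samples = []
for step in range(steps):
    i = rng.integers(N)
    trial = x.copy()
    trial[i] += rng.normal(scale=0.3)
    e_trial = energy(trial)
    de = e_trial - e
    if de < 0 or rng.random() < np.exp(-beta * de):  # Metropolis rule
        x, e = trial, e_trial
    if step % 20 == 0:  # thinned samples of the rightmost charge
        xmax_samples.append(x.max())
xmax_samples = np.array(xmax_samples)
```

A histogram of `xmax_samples` at large N would be compared against the analytical scaling function; here the sampler only demonstrates the setup.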
Sucurovic, Snezana; Milutinovic, Veljko
2008-01-01
Internet-based distributed large-scale information systems implement attribute-based access control (ABAC) rather than role-based access control (RBAC). The reason is that the Internet is identity-less and that ABAC scales better. The eXtensible Access Control Markup Language (XACML) is a standardized language for writing access control policies, access control requests and access control responses in ABAC. XACML can provide decentralized administration and credentials distribution. In the 2002 version of CEN ENV 13606, attributes were attached to EHCR components, and in such a system ABAC and XACML are easy to implement. This paper examines writing XACML policies in the case when attributes are in a hierarchical structure. Two possible solutions for writing an XACML policy in that case are presented; the solution using set functions is more compact and provides 10% better performance.
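The contrast between the two policy-writing styles can be sketched in plain code. This is a hypothetical illustration (the attribute names and hierarchy are invented, not from CEN ENV 13606): one variant checks the hierarchy level by level, mirroring one XACML rule per level, while the other expresses the whole check as a single set operation, analogous to XACML's at-least-one-member-of set functions:

```python
def ancestors(path):
    """All prefixes of a hierarchical attribute like 'record/section/entry'."""
    parts = path.split("/")
    return {"/".join(parts[: i + 1]) for i in range(len(parts))}

# Style 1: one rule per hierarchy level (verbose, mirrors nested rules).
def permit_per_level(user_clearances, resource_path):
    for level in ancestors(resource_path):
        if level in user_clearances:
            return True
    return False

# Style 2: a single set-intersection condition (the "set functions" variant).
def permit_set_function(user_clearances, resource_path):
    return bool(set(user_clearances) & ancestors(resource_path))

clearances = {"record/labs"}
```

Both styles grant access to anything under `record/labs`; the set-function form states the whole hierarchy check in one condition, which is why it comes out more compact.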
Spatial distribution of GRBs and large scale structure of the Universe
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Rácz, István I.; Balázs, Lajos G.; Tóth, L. Viktor; Horváth, István
We studied the space distribution of starburst galaxies from the Millennium XXL database at z = 0.82. We also examined the starburst distribution in the classical Millennium I simulation (De Lucia et al. 2006), which uses a semi-analytical model for the genesis of galaxies. We simulated a starburst galaxy sample with a Markov Chain Monte Carlo method. The connection, on a defined scale, between the homogeneous large-scale structure and the distribution of starburst groups (Kofman and Shandarin 1998; Suhhonenko et al. 2011; Liivamägi et al. 2012; Park et al. 2012; Horvath et al. 2014; Horvath et al. 2015) was also checked.
Exploring the effect of power law social popularity on language evolution.
Gong, Tao; Shuai, Lan
2014-01-01
We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, for which the exponent λ of the power-law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a large population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.
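The popularity scheme above can be sketched directly: agent i's chance of being chosen for an interaction scales as rank**(-lam), with lam = 1.0 the near-optimal value reported. The interaction sampling below is a stand-in for the language-game models in the paper, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def popularity_weights(n_agents, lam):
    """Normalized power-law popularity: weight of rank r is r**(-lam)."""
    ranks = np.arange(1, n_agents + 1)
    w = ranks.astype(float) ** (-lam)
    return w / w.sum()

n_agents = 100
p = popularity_weights(n_agents, lam=1.0)

# Sample speaker/listener pairs for 10,000 interactions by popularity,
# so high-rank agents take part in far more language games.
speakers = rng.choice(n_agents, size=10000, p=p)
listeners = rng.choice(n_agents, size=10000, p=p)
```

With lam = 1.0 the top-ranked agent participates roughly 100 times more often than the bottom-ranked one in a population of 100, which is the skew the cross-model analysis found to balance diffusion speed against understandability.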
Research on distributed virtual reality system in electronic commerce
NASA Astrophysics Data System (ADS)
Xue, Qiang; Wang, Jiening; Sun, Jizhou
2004-03-01
In this paper, Distributed Virtual Reality (DVR) technology applied to Electronic Commerce (EC) is discussed. DVR can provide a new means for human beings to recognize, analyze and resolve large-scale, complex problems, which has driven its rapid development in EC. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that provides the necessary cooperation and communication services and avoids repeatedly developing the basic modules. Finally, the paper gives a platform structure for the EC-DVR system.
Distributed Energy Systems: Security Implications of the Grid of the Future
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stamber, Kevin L.; Kelic, Andjelka; Taylor, Robert A.
2017-01-01
Distributed Energy Resources (DER) are being added to the nation's electric grid, and as penetration of these resources increases, they have the potential to displace or offset large-scale, capital-intensive, centralized generation. Integration of DER into operation of the traditional electric grid requires automated operational control and communication of DER elements, from system measurement to control hardware and software, in conjunction with a utility's existing automated and human-directed control of other portions of the system. Implementation of DER technologies suggests a number of gaps from both a security and a policy perspective.
The large-scale distribution of galaxies
NASA Technical Reports Server (NTRS)
Geller, Margaret J.
1989-01-01
The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas
2017-01-01
Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources, due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
An interactive web-based system using cloud for large-scale visual analytics
NASA Astrophysics Data System (ADS)
Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.
2015-03-01
Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
The Parallel System for Integrating Impact Models and Sectors (pSIMS)
NASA Technical Reports Server (NTRS)
Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian
2014-01-01
We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.
Parallel Index and Query for Large Scale Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50 TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
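The core idea behind FastBit-style indexing can be sketched compactly: precompute one bitmap per value bin, then answer a range query by OR-ing bitmaps instead of scanning the raw data. The bin edges and data below are illustrative, and this toy ignores FastBit's compression and multi-core aspects:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.uniform(0.0, 100.0, size=100000)

edges = np.linspace(0.0, 100.0, 11)           # 10 equal-width bins
bin_of = np.digitize(data, edges) - 1         # bin index per record
bitmaps = [(bin_of == b) for b in range(10)]  # one boolean bitmap per bin

def query_at_least(threshold):
    """Records with data >= threshold (threshold assumed on a bin edge)."""
    first_bin = int(np.searchsorted(edges, threshold))
    hits = np.zeros(data.size, dtype=bool)
    for b in range(first_bin, 10):
        hits |= bitmaps[b]   # answer the query with bitwise ORs only
    return hits
```

Selecting "interesting particles" above an energy threshold then touches only a handful of precomputed bitmaps rather than the full 50 TB of raw values, which is where the hours-to-seconds speedup comes from.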
Evolution of wealth in a non-conservative economy driven by local Nash equilibria.
Degond, Pierre; Liu, Jian-Guo; Ringhofer, Christian
2014-11-13
We develop a model for the evolution of wealth in a non-conservative economic environment, extending a theory developed in Degond et al. (2014 J. Stat. Phys. 154, 751-780 (doi:10.1007/s10955-013-0888-4)). The model considers a system of rational agents interacting in a game-theoretical framework. This evolution drives the dynamics of the agents in both wealth and economic configuration variables. The cost function is chosen to represent a risk-averse strategy of each agent. That is, the agent is more likely to interact with the market, the more predictable the market, and therefore the smaller its individual risk. This yields a kinetic equation for an effective single particle agent density with a Nash equilibrium serving as the local thermodynamic equilibrium. We consider a regime of scale separation where the large-scale dynamics is given by a hydrodynamic closure with this local equilibrium. A class of generalized collision invariants is developed to overcome the difficulty of the non-conservative property in the hydrodynamic closure derivation of the large-scale dynamics for the evolution of wealth distribution. The result is a system of gas dynamics-type equations for the density and average wealth of the agents on large scales. We recover the inverse Gamma distribution, which has been previously considered in the literature, as a local equilibrium for particular choices of the cost function.
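The inverse-Gamma equilibrium mentioned above is easy to check numerically: if Y ~ Gamma(shape=a, scale=1/b), then W = 1/Y is inverse-Gamma distributed with mean b/(a-1) for a > 1. The parameter values below are arbitrary and unrelated to any particular cost function in the paper:

```python
import numpy as np

rng = np.random.default_rng(11)
a, b = 3.0, 2.0

# Sample inverse-Gamma wealth as the reciprocal of Gamma draws.
w = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=200000)

empirical_mean = w.mean()
theoretical_mean = b / (a - 1.0)  # = 1.0 for these parameters
```

The heavy right tail of this distribution (power-law decay in wealth) is one reason it is a standard candidate for empirical wealth distributions.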
NASA Astrophysics Data System (ADS)
Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.
2017-12-01
Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from the geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest that assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.
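The essence of such an assimilation step can be illustrated with the textbook scalar analysis update: the analysis is the model background corrected toward the observation, weighted by the respective error variances. The numbers below are illustrative, not values from the KORUS-AQ experiments:

```python
# Schematic scalar data-assimilation update (optimal interpolation /
# scalar Kalman filter) for, e.g., an AOD observation.
def analysis_update(background, obs, var_background, var_obs):
    gain = var_background / (var_background + var_obs)  # Kalman gain
    analysis = background + gain * (obs - background)
    var_analysis = (1.0 - gain) * var_background
    return analysis, var_analysis

# A trusted observation (small error variance) pulls the state strongly:
# gain = 0.04 / (0.04 + 0.01) = 0.8.
xa, va = analysis_update(background=0.30, obs=0.50,
                         var_background=0.04, var_obs=0.01)
```

Combining AOD from a geostationary satellite with surface PM2.5 amounts to applying such updates with observation operators and error statistics appropriate to each platform, which is why multi-platform assimilation narrows the analysis error more than either source alone.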
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater's performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations.
NASA Astrophysics Data System (ADS)
Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha
2018-06-01
Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
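The unsupervised core of such an approach can be sketched as k-means clustering of per-object NDVI time series, so that cropping-system classes emerge without any training data. The synthetic profiles below (single- vs double-cropping seasons), the noise level and k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 23)  # one season of 16-day composites

def make_profiles(n, n_peaks):
    """Toy NDVI curves with `n_peaks` growing seasons plus noise."""
    base = sum(np.exp(-((t - (i + 1.0) / (n_peaks + 1.0)) ** 2) / 0.01)
               for i in range(n_peaks))
    return base + rng.normal(scale=0.05, size=(n, t.size))

# 50 single-cropping and 50 double-cropping objects.
X = np.vstack([make_profiles(50, 1), make_profiles(50, 2)])

def kmeans(X, k, iters=50):
    centers = X[:: len(X) // k][:k].copy()  # simple deterministic init
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X, k=2)
```

In the landscape-clustering variant, the same clustering would simply be run separately within each landscape unit, which is what lets sparsely distributed cropping systems form their own clusters instead of being absorbed by dominant classes.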
Woods, Gwen C; Trenholm, Rebecca A; Hale, Bruce; Campbell, Zeke; Dickenson, Eric R V
2015-07-01
Nitrosamines are considered to pose greater health risks than currently regulated DBPs and are subsequently listed as a priority pollutant by the EPA, with potential for future regulation. Denver Water, as part of the EPA's Unregulated Contaminant Monitoring Rule 2 (UCMR2) monitoring campaign, found detectable levels of N-nitrosodimethylamine (NDMA) at all sites of maximum residency within the distribution system. To better understand the occurrence of nitrosamines and nitrosamine precursors, Denver Water undertook a comprehensive year-long monitoring campaign. Samples were taken every two weeks to monitor for NDMA in the distribution system, and quarterly sampling events further examined 9 nitrosamines and nitrosamine precursors throughout the treatment and distribution systems. NDMA levels within the distribution system were typically low (<1.3 to 7.2 ng/L), with a remote distribution site (frequently >200 h of residency) experiencing the highest concentrations found. Eight other nitrosamines (N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitroso-di-n-propylamine, N-nitroso-di-n-butylamine, N-nitroso-di-phenylamine, N-nitrosopyrrolidine, N-nitrosopiperidine, N-nitrosomorpholine) were also monitored, but none of these 8, or precursors of these 8 [as estimated with formation potential (FP) tests], were detected anywhere in raw, partially-treated or distribution samples. Throughout the year, there was evidence that seasonality may impact NDMA formation, such that lower temperatures (~5-10°C) produced greater NDMA than during warmer months. The year of sampling further provided evidence that water quality and weather events may impact NDMA precursor loads. Precursor loading estimates demonstrated that NDMA precursors increased during treatment (potentially from cationic polymer coagulant aids). The precursor analysis also provided evidence that precursors may have increased further within the distribution system itself.
This comprehensive study of a large-scale drinking water system provides insight into the variability of NDMA occurrence in a chloraminated system, which may be impacted by seasonality, water quality changes and/or the varied origins of NDMA precursors within a given system.
Improving Assimilated Global Data Sets using TMI Rainfall and Columnar Moisture Observations
NASA Technical Reports Server (NTRS)
Hou, Arthur Y.; Zhang, Sara Q.; daSilva, Arlindo M.; Olson, William S.
1999-01-01
A global analysis that optimally combines observations from diverse sources with physical models of atmospheric and land processes can provide a comprehensive description of the climate system. Currently, such data products contain significant errors in primary hydrological fields such as precipitation and evaporation, especially in the tropics. In this study, we show that assimilating precipitation and total precipitable water (TPW) retrievals derived from the TRMM Microwave Imager (TMI) improves not only the hydrological cycle but also key climate parameters such as clouds, radiation, and the large-scale circulation produced by the Goddard Earth Observing System (GEOS) data assimilation system (DAS). In particular, assimilating TMI rain improves clouds and radiation in areas of active convection, as well as the latent heating distribution and the large-scale motion field in the tropics, while assimilating TMI TPW retrievals leads to reduced moisture biases and improved radiative fluxes in clear-sky regions. The improved analysis also improves short-range forecasts in the tropics. Ensemble forecasts initialized with the GEOS analysis incorporating TMI rain rates and TPW yield smaller biases in tropical precipitation forecasts beyond 1 day and better 500 hPa geopotential height forecasts up to 5 days. Results of this study demonstrate the potential of using high-quality space-borne rainfall and moisture observations to improve the quality of assimilated global data for climate analysis and weather forecasting applications.
Knispel, Alexis L; McLachlan, Stéphane M
2010-01-01
Genetically modified herbicide-tolerant (GMHT) oilseed rape (OSR; Brassica napus L.) was approved for commercial cultivation in Canada in 1995 and currently represents over 95% of the OSR grown in western Canada. After a decade of widespread cultivation, GMHT volunteers represent an increasing management problem in cultivated fields and are ubiquitous in adjacent ruderal habitats, where they contribute to the spread of transgenes. However, few studies have considered escaped GMHT OSR populations in North America, and even fewer have been conducted at large spatial scales (i.e. landscape scales). In particular, the contribution of landscape structure and large-scale anthropogenic dispersal processes to the persistence and spread of escaped GMHT OSR remains poorly understood. We conducted a multi-year survey of the landscape-scale distribution of escaped OSR plants adjacent to roads and cultivated fields. Our objective was to examine the long-term dynamics of escaped OSR at large spatial scales and to assess the relative importance of landscape and localised factors to the persistence and spread of these plants outside of cultivation. From 2005 to 2007, we surveyed escaped OSR plants along roadsides and field edges at 12 locations in three agricultural landscapes in southern Manitoba where GMHT OSR is widely grown. Data were analysed to examine temporal changes at large spatial scales and to determine factors affecting the distribution of escaped OSR plants in roadside and field edge habitats within agricultural landscapes. Additionally, we assessed the potential for seed dispersal between escaped populations by comparing the relative spatial distribution of roadside and field edge OSR. Densities of escaped OSR fluctuated over space and time in both roadside and field edge habitats, though the proportion of GMHT plants was high (93-100%). 
Escaped OSR was positively affected by agricultural landscape (indicative of cropping intensity) and by the presence of an adjacent field planted to OSR. Within roadside habitats, escaped OSR was also strongly associated with large-scale variables, including road surface (indicative of traffic intensity) and distance to the nearest grain elevator. Conversely, within field edges, OSR density was affected by localised crop management practices such as mowing, soil disturbance and herbicide application. Despite the proximity of roadsides and field edges, there was little evidence of spatial aggregation among escaped OSR populations in these two habitats, especially at very fine spatial scales (i.e. <100 m), suggesting that natural propagule exchange is infrequent. Escaped OSR populations were persistent at large spatial and temporal scales, and low density in a given landscape or year was not indicative of overall extinction. As a result of ongoing cultivation and transport of OSR crops, escaped GMHT traits will likely remain predominant in agricultural landscapes. While escaped OSR in field edge habitats generally results from local seeding and management activities occurring at the field-scale, distribution patterns within roadside habitats are determined in large part by seed transport occurring at the landscape scale and at even larger regional scales. Our findings suggest that these large-scale anthropogenic dispersal processes are sufficient to enable persistence despite limited natural seed dispersal. This widespread dispersal is likely to undermine field-scale management practices aimed at eliminating escaped and in-field GMHT OSR populations. Agricultural transport and landscape-scale cropping patterns are important determinants of the distribution of escaped GM crops. At the regional level, these factors ensure ongoing establishment and spread of escaped GMHT OSR despite limited local seed dispersal. 
Escaped populations thus play an important role in the spread of transgenes and have substantial implications for the coexistence of GM and non-GM production systems. Given the large-scale factors driving the spread of escaped transgenes, localised co-existence measures may be impracticable where they are not commensurate with regional dispersal mechanisms. To be effective, strategies aimed at reducing contamination from GM crops should be multi-scale in approach and be developed and implemented at both farm and landscape levels of organisation. Multiple stakeholders should thus be consulted, including both GM and non-GM farmers, as well as seed developers, processors, transporters and suppliers. Decisions to adopt GM crops require thoughtful and inclusive consideration of the risks and responsibilities inherent in this new technology.
Spatial Distribution of Fate and Transport Parameters Using Cxtfit in a Karstified Limestone Model
NASA Astrophysics Data System (ADS)
Toro, J.; Padilla, I. Y.
2017-12-01
Karst environments have a high capacity to transport and store large amounts of water. This makes karst aquifers a productive resource for human consumption and ecological integrity, but also makes them vulnerable to potential contamination by hazardous chemical substances. The high heterogeneity and anisotropy of karst aquifer properties make them very difficult to characterize for accurate prediction of contaminant mobility and persistence in groundwater. Current technologies to characterize and quantify flow and transport processes at field scale are limited by the low resolution of spatiotemporal data. To enhance this resolution and provide the essential knowledge of karst groundwater systems, studies at laboratory scale can be conducted. This work uses an intermediate karstified lab-scale physical model (IKLPM) to study fate and transport processes and assess viable tools to characterize heterogeneities in karst systems. Transport experiments are conducted in the IKLPM using step injections of calcium chloride, uranine, and rhodamine WT tracers. Temporal concentration distributions (TCDs) obtained from the experiments are analyzed using the method of moments and CXTFIT to quantify fate and transport parameters in the system at various flow rates. The spatial distribution of the estimated fate and transport parameters for the tracers revealed high variability related to preferential flow heterogeneities and scale dependence. Results are integrated to define spatially-variable transport regions within the system and assess their fate and transport characteristics.
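The method-of-moments step above can be sketched directly: the zeroth temporal moment of a breakthrough curve is a mass-recovery proxy, the first gives the mean travel time, and the second central moment the temporal spread (related to dispersion). The Gaussian pulse below is a synthetic stand-in for a measured TCD; all numbers are illustrative:

```python
import numpy as np

t = np.linspace(0.0, 50.0, 2001)                   # time, hours
dt = t[1] - t[0]
c = np.exp(-((t - 12.0) ** 2) / (2.0 * 3.0 ** 2))  # concentration pulse

m0 = (c * dt).sum()                              # zeroth temporal moment
mean_t = (t * c * dt).sum() / m0                 # mean travel time
var_t = ((t - mean_t) ** 2 * c * dt).sum() / m0  # temporal variance
```

Comparing moments of the conservative tracer (e.g. chloride) with those of the dye tracers at the same flow rate is then what separates advection and dispersion from sorption or decay in each transport region.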
NASA Astrophysics Data System (ADS)
Basu, Aritra; Mao, S. A.; Fletcher, Andrew; Kanekar, Nissim; Shukurov, Anvar; Schnitzeler, Dominic; Vacca, Valentina; Junklewitz, Henrik
2018-06-01
Deriving the Faraday rotation measure (RM) of quasar absorption line systems, which trace high-redshift galaxies lying along the lines of sight to background quasars, is a powerful tool for probing magnetic fields in distant galaxies. Statistically comparing the RM distributions of two quasar samples, with and without absorption line systems, allows one to infer the magnetic field properties of the intervening galaxy population. Here, we have derived the analytical form of the probability distribution function (PDF) of RM produced by a single galaxy with an axisymmetric large-scale magnetic field. We then further determine the PDF of RM for one random sight line traversing each galaxy in a population with a large-scale magnetic field prescription. We find that the resulting PDF of RM is dominated by a Lorentzian with a width that is directly related to the mean axisymmetric large-scale field strength.
Space and energy. [space systems for energy generation, distribution and control
NASA Technical Reports Server (NTRS)
Bekey, I.
1976-01-01
Potential contributions of space to energy-related activities are discussed. Advanced concepts presented include worldwide energy distribution to substation-sized users using low-altitude space reflectors; powering large numbers of large aircraft worldwide using laser beams reflected from space mirror complexes; providing night illumination via sunlight-reflecting space mirrors; fine-scale power programming and monitoring in transmission networks by monitoring millions of network points from space; prevention of undetected hijacking of nuclear reactor fuels by space tracking of signals from tagging transmitters on all such materials; and disposal of nuclear power plant radioactive wastes in space.
A Functional Model for Management of Large Scale Assessments.
ERIC Educational Resources Information Center
Banta, Trudy W.; And Others
This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
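The decoupled, time-synchronized co-simulation idea can be sketched with toy stand-ins for the two simulators. Nothing below comes from the FNCS API; the classes, coefficients, and fixed-step broker loop are illustrative assumptions showing how boundary values (voltage down, aggregate load up) are exchanged each synchronization step.

```python
class TransmissionSim:
    """Toy stand-in for a transmission dynamics simulator."""
    def __init__(self):
        self.voltage = 1.0          # per-unit boundary-bus voltage

    def step(self, dt, boundary_load):
        # voltage sags slightly with the aggregate distribution load
        self.voltage = 1.0 - 0.05 * boundary_load
        return self.voltage

class DistributionSim:
    """Toy stand-in for a distribution dynamics simulator."""
    def __init__(self):
        self.load = 0.8             # per-unit aggregate feeder load

    def step(self, dt, boundary_voltage):
        # load relaxes toward a voltage-dependent equilibrium
        self.load += dt * (0.8 * boundary_voltage - self.load)
        return self.load

def cosimulate(trans, dist, dt=0.01, t_end=1.0):
    """Fixed-step synchronization loop in the spirit of a co-simulation broker:
    each simulator advances one step, then boundary values are exchanged."""
    t, volts, load = 0.0, trans.voltage, dist.load
    while t < t_end:
        volts = trans.step(dt, load)   # transmission sees last published load
        load = dist.step(dt, volts)    # distribution sees fresh boundary voltage
        t += dt
    return volts, load
```

Run long enough, the loop settles at the coupled fixed point where the two boundary exchanges agree, which is the basic correctness check for a decoupled scheme.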
Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System
Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin
2016-01-01
Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonic measurement is necessary for power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which is the result of latencies in sensing or the communication process and introduces deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from measured data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement of the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems. PMID:27548171
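The retrodiction idea can be illustrated with a much simpler linear stand-in for the NARX network: fit an autoregressive model to the in-sequence samples, then use it to estimate a sample whose measurement arrives late, so data fusion can proceed in timestamp order. This is only a sketch of the concept under a linear-model assumption; the paper's NARX network is a nonlinear neural model.

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients to a 1-D sampled signal."""
    rows = [x[i:i + p] for i in range(len(x) - p)]  # lagged sample windows
    a, y = np.array(rows), x[p:]                    # predict the next sample
    coeffs, *_ = np.linalg.lstsq(a, y, rcond=None)
    return coeffs

def retrodict(x_before, coeffs):
    """Estimate the next sample from the p samples preceding the gap,
    standing in for a measurement that has not arrived yet."""
    return float(np.dot(x_before[-len(coeffs):], coeffs))
```

A noise-free sum of a 50 Hz fundamental and a 150 Hz harmonic satisfies a low-order linear recurrence exactly, so the AR fit reproduces a delayed sample almost perfectly.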
Triangular-shaped landforms reveal subglacial drainage routes in SW Finland
NASA Astrophysics Data System (ADS)
Mäkinen, J.; Kajuutti, K.; Palmu, J.-P.; Ojala, A.; Ahokangas, E.
2017-05-01
The aim of this study is to present the first evidence of triangular-shaped till landforms and related erosional features indicative of subglacial drainage within the ice stream bed of the Scandinavian ice sheet in Finland. Previously unidentified grouped patterns of Quaternary deposits with triangular landforms can be recognized from LiDAR-based DEMs. The triangular landforms occur as segments within geomorphologically distinguishable routes that are associated with eskers. The morphological and sedimentological characteristics as well as the distribution of the triangular landforms are interpreted to involve the creep of saturated deforming till and flow and pressure fluctuations of subglacial meltwater associated with meltwater erosion. There are no existing models for the formation of this kind of large-scale drainage system, but we claim that these landforms represent an efficient drainage system for subglacial meltwater transfer under high-pressure conditions. Our hypothesis is that the routed, large-scale subglacial drainage systems described herein form a continuum between channelized (eskers) and more widely spread small-scale distributed subglacial drainage. Moreover, the transition from conduit-dominated drainage to triangular-shaped subglacial landforms takes place about 50-60 km from the ice margin. We provide an important contribution towards a more realistic representation of ice sheet hydrological drainage systems that could be used to improve paleoglaciological models and to simulate likely responses of ice sheets to increased meltwater production.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups for a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types.
These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
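The partitioning step described above can be sketched as follows: given cloud-object diameters and per-object footprint counts, find the diameter at which the cumulative footprint count reaches half the total, so the two groups carry roughly equal footprint numbers. This is an illustrative reconstruction, not the CERES processing code.

```python
import numpy as np

def median_partition_diameter(diams, footprints):
    """Diameter that splits the cloud-object population into two groups
    containing approximately equal total footprint counts."""
    order = np.argsort(diams)                       # sort objects by diameter
    d = np.asarray(diams)[order]
    f = np.asarray(footprints)[order]
    cum = np.cumsum(f)                              # cumulative footprints
    idx = np.searchsorted(cum, cum[-1] / 2.0)       # half-mass position
    return d[idx]
```

Objects at or below the returned diameter form the "small" group for that phase; the rest form the "large" group.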
Spatially distributed potential evapotranspiration modeling and climate projections.
Gharbia, Salem S; Smullen, Trevor; Gill, Laurence; Johnston, Paul; Pilla, Francesco
2018-08-15
Evapotranspiration integrates energy and mass transfer between the Earth's surface and atmosphere and is the most active mechanism linking the atmosphere, hydrosphere, lithosphere and biosphere. This study focuses on fine-resolution modeling and projection of spatially distributed potential evapotranspiration at the large catchment scale in response to climate change. Six potential evapotranspiration algorithms, systematically selected based on structured criteria and data availability, were applied and then validated against long-term mean monthly data for the Shannon River catchment with a 50 m² cell size. The best-validated algorithm was then applied to evaluate the possible effect of future climate change on potential evapotranspiration rates. Spatially distributed potential evapotranspiration projections have been modeled based on climate change projections from multi-GCM ensembles for three future time intervals (2020, 2050 and 2080) using a range of Representative Concentration Pathways, producing four scenarios for each time interval. Finally, seasonal results were compared to baseline results to evaluate the impact of climate change on potential evapotranspiration and therefore on the catchment's dynamical water balance. The results present evidence that the modeled climate change scenarios would have a significant impact on future potential evapotranspiration rates. All the simulated scenarios predicted an increase in potential evapotranspiration for each modeled future time interval, which would significantly affect the dynamical catchment water balance. This study addresses a gap in the literature by using GIS-based algorithms to model fine-scale spatially distributed potential evapotranspiration for large catchment systems based on climatological observations and simulations in different climatological zones.
Providing fine-scale potential evapotranspiration data is crucial for assessing the dynamical catchment water balance and setting up management scenarios for water abstractions. This study illustrates a transferable systematic method for designing GIS-based algorithms to simulate spatially distributed potential evapotranspiration for large catchment systems.
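As a concrete example of a temperature-based PET algorithm of the kind such studies compare (the abstract does not name its six algorithms, so this particular choice is an assumption), the Hargreaves-Samani equation applies cell-by-cell to raster inputs, which is what makes a GIS-based, spatially distributed implementation straightforward:

```python
import numpy as np

def hargreaves_pet(tmean, tmax, tmin, ra):
    """Hargreaves-Samani potential evapotranspiration [mm/day].
    tmean/tmax/tmin are air temperatures [deg C]; ra is extraterrestrial
    radiation expressed as its evaporation equivalent [mm/day].
    Works elementwise, so the inputs may be whole raster grids."""
    trange = np.clip(tmax - tmin, 0.0, None)   # guard against bad cells
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(trange)
```

Passing NumPy arrays for each argument yields a PET grid at the native cell size of the input climatology, which can then be aggregated to seasonal means for baseline-versus-scenario comparison.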
NASA Astrophysics Data System (ADS)
Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.
2016-04-01
We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images contained in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification.
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.
New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations
NASA Technical Reports Server (NTRS)
Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.
2012-01-01
In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.
Detecting Potentially Compromised Credentials in a Large-Scale Production Single-Signon System
2014-06-01
A. Townsend Peterson; Daniel A. Kluza
2005-01-01
Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate, or even more frequently limited in their...
Mechem, David B.; Giangrande, Scott E.; Wittman, Carly S.; ...
2015-03-13
A case of shallow cumulus and precipitating cumulus congestus sampled at the Atmospheric Radiation Measurement (ARM) Program Southern Great Plains (SGP) supersite is analyzed using a multi-sensor observational approach and numerical simulation. Observations from a new radar suite surrounding the facility are used to characterize the evolving statistical behavior of the precipitating cloud system. This is accomplished using distributions of different measures of cloud geometry and precipitation properties. Large-eddy simulation (LES) with size-resolved (bin) microphysics is employed to determine the forcings most important in producing the salient aspects of the cloud system captured in the radar observations. Our emphasis is on assessing the importance of time-varying vs. steady-state large-scale forcing on the model's ability to reproduce the evolutionary behavior of the cloud system. Additional consideration is given to how the characteristic spatial scale and homogeneity of the forcing imposed on the simulation influences the evolution of cloud system properties. Results indicate that several new scanning radar estimates such as distributions of cloud top are useful to differentiate the value of time-varying (or at least temporally well-matched) forcing on LES solution fidelity.
Numerical Simulations of Vortical Mode Stirring: Effects of Large Scale Shear and Strain
2015-09-30
M.-Pascale Lelong, NorthWest Research Associates. The aim of this work is to develop parameterizations of vortical-mode stirring that can be implemented in larger-scale ocean models; these parameterizations will incorporate the effects of local ambient conditions, including latitude. Results were presented in a talk at the Nonlinear Effects in Internal Waves Conference. DISTRIBUTION STATEMENT A: Approved for public release; distribution is unlimited.
NASA Technical Reports Server (NTRS)
Over, Thomas, M.; Gupta, Vijay K.
1994-01-01
Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
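The single-parameter cascade described above can be simulated in a few lines. The sketch below is an illustrative one-dimensional binary cascade with a lognormal generator of unit mean, so total mass is conserved in expectation; the parameter sigma plays the role of the single cascade parameter, and the choice of a lognormal generator is an assumption, not necessarily the generator estimated from the radar data.

```python
import numpy as np

def random_cascade(levels, sigma, rng):
    """1-D binary multiplicative random cascade.
    Each level splits every interval in two and multiplies by i.i.d.
    lognormal weights W with E[W] = 1 (mean of log = -sigma^2/2)."""
    field = np.ones(1)
    for _ in range(levels):
        field = np.repeat(field, 2)                 # halve each interval
        w = rng.lognormal(mean=-0.5 * sigma ** 2, sigma=sigma, size=field.size)
        field = field * w                           # apply cascade weights
    return field / field.size                       # mass per smallest cell
```

Because E[W] = 1, the expected total mass of a realization is 1, and the spread of realized totals around 1 grows with sigma, mimicking how the cascade parameter controls rainfall variability.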
Dispersion upscaling from a pore scale characterization of Lagrangian velocities
NASA Astrophysics Data System (ADS)
Turuban, Régis; de Anna, Pietro; Jiménez-Martínez, Joaquín; Tabuteau, Hervé; Méheust, Yves; Le Borgne, Tanguy
2013-04-01
Mixing and reactive transport are primarily controlled by the interplay between diffusion, advection and reaction at the pore scale. Yet, how the distribution and spatial correlation of the velocity field at pore scale impact these processes is still an open question. Here we present an experimental investigation of the distribution and correlation of pore-scale velocities and their relation with upscaled dispersion. We use a quasi-two-dimensional (2D) horizontal setup, consisting of two glass plates filled with cylinders representing the grains of the porous medium; the cell is built by a soft-lithography technique, which allows for full control of the system geometry. The local velocity field is quantified by particle tracking velocimetry using microspheres that are advected with the pore-scale flow. Their displacement is purely advective, as the particle size is chosen large enough to avoid diffusion. We thus obtain particle trajectories as well as Lagrangian velocities in the entire system. The measured velocity field shows the existence of a network of preferential flow paths in channels with high velocities, as well as very low velocities in stagnation zones, with a non-Gaussian distribution. Lagrangian velocities are long-range correlated in time, which implies a non-Fickian scaling of the longitudinal variance of particle positions. To upscale this process we develop an effective transport model, based on a correlated continuous-time random walk, which is entirely parametrized by the pore-scale velocity distribution and correlation. The model predictions are compared with conservative tracer test data for different Peclet numbers. Furthermore, we investigate the impact of different pore geometries on the distribution and correlation of Lagrangian velocities and discuss the link between these properties and the effective dispersion behavior.
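The correlated CTRW idea can be sketched as a spatial-Markov walk: over each fixed transition length, a particle's velocity persists with some probability and is otherwise resampled from the marginal velocity distribution. The lognormal marginal and the persistence parameter below are illustrative assumptions, not values fitted to the experiment; the point is that velocity correlation broadens the arrival-time distribution relative to an uncorrelated walk, the signature of non-Fickian dispersion.

```python
import numpy as np

def correlated_ctrw(n_particles, n_steps, lam, corr, rng):
    """Spatial-Markov CTRW: over each step of length lam a particle keeps
    its velocity with probability corr, else resamples it (lognormal).
    Returns arrival times after traversing n_steps * lam."""
    v = rng.lognormal(0.0, 1.0, n_particles)          # initial velocities
    t = np.zeros(n_particles)
    for _ in range(n_steps):
        resample = rng.random(n_particles) > corr     # lose correlation?
        v[resample] = rng.lognormal(0.0, 1.0, resample.sum())
        t += lam / v                                  # time to cross one step
    return t
```

With strong persistence, slow particles stay slow across many steps, so the variance of arrival times greatly exceeds the uncorrelated (Fickian-like) case with the same velocity marginal.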
SOLAR SYSTEM MOONS AS ANALOGS FOR COMPACT EXOPLANETARY SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kane, Stephen R.; Hinkel, Natalie R.; Raymond, Sean N., E-mail: skane@ipac.caltech.edu
2013-11-01
The field of exoplanetary science has experienced a recent surge of new systems that is largely due to the precision photometry provided by the Kepler mission. The latest discoveries have included compact planetary systems in which the orbits of the planets all lie relatively close to the host star, which presents interesting challenges in terms of formation and dynamical evolution. The compact exoplanetary systems are analogous to the moons orbiting the giant planets in our solar system, in terms of their relative sizes and semimajor axes. We present a study that quantifies the scaled sizes and separations of the solar system moons with respect to their hosts. We perform a similar study for a large sample of confirmed Kepler planets in multi-planet systems. We show that a comparison between the two samples leads to a similar correlation between their scaled sizes and separation distributions. The different gradients of the correlations may be indicative of differences in the formation and/or long-term dynamics of moon and planetary systems.
NASA Technical Reports Server (NTRS)
Marochnik, Leonid S.; Mukhin, Lev M.; Sagdeev, Roald Z.
1991-01-01
Views of the large-scale structure of the solar system, consisting of the Sun, the nine planets and their satellites, changed when Oort demonstrated that a gigantic cloud of comets (the Oort cloud) is located on the periphery of the solar system. The following subject areas are covered: (1) the Oort cloud's mass; (2) Hill's cloud mass; (3) angular momentum distribution in the solar system; and (4) the cometary cloud around other stars.
A Weibull distribution accrual failure detector for cloud computing.
Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution and also the insulation thickness distribution associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum-cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of diseconomy of scale.
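The trade-off described can be sketched as a brute-force search over pipe diameter and insulation thickness for a single pipe run, balancing annualized capital cost against the annual cost of pump work and heat leak. Every coefficient below (fluid properties, friction factor, unit costs, energy price, lifetime) is an illustrative assumption, not data from the study.

```python
import numpy as np

def transport_cost(d, t_ins, q=0.05, length=1000.0, d_t=200.0, k_ins=0.04,
                   rho=500.0, friction=0.02, eta=0.7,
                   c_pipe=800.0, c_ins=400.0, c_energy=0.03, life=20.0):
    """Annualized cost of one pipe run: capital (pipe + insulation)
    plus operating losses (pump work + heat leak). Illustrative only."""
    v = q / (np.pi * d ** 2 / 4.0)                       # flow velocity [m/s]
    dp = friction * (length / d) * rho * v ** 2 / 2.0    # Darcy pressure drop [Pa]
    pump_kwh = dp * q / eta / 1000.0 * 8760.0            # pump energy [kWh/yr]
    heat_w = 2.0 * np.pi * k_ins * length * d_t \
        / np.log((d / 2.0 + t_ins) / (d / 2.0))          # radial heat leak [W]
    heat_kwh = heat_w / 1000.0 * 8760.0
    capital = (c_pipe * d + c_ins * t_ins) * length
    return capital / life + c_energy * (pump_kwh + heat_kwh)

def optimize():
    """Grid search for the cost-minimizing diameter / insulation pair."""
    ds = np.linspace(0.05, 0.5, 91)      # pipe diameters [m]
    ts = np.linspace(0.02, 0.4, 77)      # insulation thicknesses [m]
    dd, tt = np.meshgrid(ds, ts)
    cost = transport_cost(dd, tt)
    i = np.unravel_index(np.argmin(cost), cost.shape)
    return float(dd[i]), float(tt[i]), float(cost[i])
```

Because pump work falls steeply with diameter while capital rises linearly, and heat leak falls with insulation while its capital rises, the minimum lies in the interior of the grid, mirroring the unique optimum distributions the paper reports.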
NASA Technical Reports Server (NTRS)
Takayabu, Yukari N.; Shige, Shoichi; Tao, Wei-Kuo; Hirota, Nagio
2010-01-01
The global hydrological cycle is central to the Earth's climate system, with rainfall and the physics of its formation acting as the key links in the cycle. Two-thirds of global rainfall occurs in the Tropics. Associated with this rainfall is a vast amount of heat, known as latent heat, which arises mainly from the phase change of water vapor condensing into liquid droplets; three-fourths of the total heat energy available to the Earth's atmosphere comes from tropical rainfall. In addition, the fresh water provided by tropical rainfall and its variability exerts a large impact upon the structure and motions of the upper ocean layer. Three-dimensional distributions of latent heating estimated from the Tropical Rainfall Measuring Mission Precipitation Radar (TRMM PR) utilizing the Spectral Latent Heating (SLH) algorithm are analyzed. Mass-weighted and vertically integrated latent heating averaged over the tropical oceans is estimated as approx. 72.6 J/s (approx. 2.51 mm/day), and that over tropical land is approx. 73.7 J/s (approx. 2.55 mm/day), for 30°N-30°S. It is shown that non-drizzle precipitation over tropical and subtropical oceans consists of two dominant modes of rainfall systems: deep systems and congestus. A rough estimate of the shallow-mode contribution to the total heating is about 46.7% for the average tropical oceans, which is substantially larger than the 23.7% over tropical land. While cumulus congestus heating correlates linearly with SST, the deep mode is dynamically bounded by large-scale subsidence. It is notable that a substantial amount of rain, as large as 2.38 mm/day on average, comes from congestus clouds under the large-scale subsiding circulation. It is also notable that even in regions with SST warmer than 28 °C, large-scale subsidence effectively suppresses deep convection, leaving the heating to congestus clouds.
Our results support the view that the entrainment of mid-to-lower-tropospheric dry air, which accompanies the large-scale subsidence, is the major factor suppressing deep convection. Therefore, realistic representation of entrainment is very important for proper reproduction of the precipitation distribution and the resultant large-scale circulation.
Assessment of distributed solar power systems: Issues and impacts
NASA Astrophysics Data System (ADS)
Moyle, R. A.; Chernoff, H.; Schweizer, T. C.; Patton, J. B.
1982-11-01
The installation of distributed solar-power systems presents electric utilities with a host of questions. Some of the technical and economic impacts of these systems are discussed. Among the technical interconnect issues are isolated operation, power quality, line safety, and metering options. Economic issues include user purchase criteria, structures and installation costs, marketing and product distribution costs, and interconnect costs. An interactive computer program that allows easy calculation of allowable system prices and allowable generation-equipment prices was developed as part of this project. It is concluded that the technical problems raised by distributed solar systems are surmountable, but their resolution may be costly. The stringent purchase criteria likely to be imposed by many potential system users and the economies of large-scale systems make small systems (less than 10 to 20 kW) less attractive than larger systems. Utilities that consider life-cycle costs in making investment decisions and third-party investors who have tax and financial advantages are likely to place the highest value on solar-power systems.
CD-ROM technology at the EROS data center
Madigan, Michael E.; Weinheimer, Mary C.
1993-01-01
The vast amount of digital spatial data often required by a single user has created a demand for media alternatives to 1/2" magnetic tape. One such medium that has been recently adopted at the U.S. Geological Survey's EROS Data Center is the compact disc (CD). CD's are a versatile, dynamic, and low-cost method for providing a variety of data on a single media device and are compatible with various computer platforms. CD drives are available for personal computers, UNIX workstations, and mainframe systems, either directly connected, or through a network. This medium furnishes a quick method of reproducing and distributing large amounts of data on a single CD. Several data sets are already available on CD's, including collections of historical Landsat multispectral scanner data and biweekly composites of Advanced Very High Resolution Radiometer data for the conterminous United States. The EROS Data Center intends to provide even more data sets on CD's. Plans include specific data sets on a customized disc to fulfill individual requests, and mass production of unique data sets for large-scale distribution. Requests for a single compact disc-read only memory (CD-ROM) containing a large volume of data either for archiving or for one-time distribution can be addressed with a CD-write once (CD-WO) unit. Mass production and large-scale distribution will require CD-ROM replication and mastering.
Steps Towards Understanding Large-scale Deformation of Gas Hydrate-bearing Sediments
NASA Astrophysics Data System (ADS)
Gupta, S.; Deusner, C.; Haeckel, M.; Kossel, E.
2016-12-01
Marine sediments bearing gas hydrates are typically characterized by heterogeneity in the gas hydrate distribution and anisotropy in the sediment-gas hydrate fabric properties. Gas hydrates also contribute to the strength and stiffness of the marine sediment, and any disturbance in the thermodynamic stability of the gas hydrates is likely to affect the geomechanical stability of the sediment. Understanding mechanisms and triggers of large-strain deformation and failure of marine gas hydrate-bearing sediments is an area of extensive research, particularly in the context of marine slope-stability and industrial gas production. The ultimate objective is to predict severe deformation events such as regional-scale slope failure or excessive sand production by using numerical simulation tools. The development of such tools essentially requires a careful analysis of thermo-hydro-chemo-mechanical behavior of gas hydrate-bearing sediments at lab-scale, and its stepwise integration into reservoir-scale simulators through definition of effective variables, use of suitable constitutive relations, and application of scaling laws. One of the focus areas of our research is to understand the bulk coupled behavior of marine gas hydrate systems with contributions from micro-scale characteristics, transport-reaction dynamics, and structural heterogeneity through experimental flow-through studies using high-pressure triaxial test systems and advanced tomographic tools (CT, ERT, MRI). We combine these studies to develop mathematical models and numerical simulation tools that can be used to predict the coupled hydro-geomechanical behavior of marine gas hydrate reservoirs in a large-strain framework. Here we will present some of our recent results from closely co-ordinated experimental and numerical simulation studies with the objective of capturing the large-deformation behavior relevant to different gas production scenarios.
We will also report on a variety of mechanically relevant test scenarios focusing on effects of dynamic changes in gas hydrate saturation, highly uneven gas hydrate distributions, focused fluid migration and gas hydrate production through depressurization and CO2 injection.
Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications
NASA Astrophysics Data System (ADS)
Zu, Yue
Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems in which information is exchanged among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on the framework and local optimizer design in practical engineering applications. In the first project, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation performance in terms of accuracy and convergence speed. Second, we develop a cyber-physical system and implement distributed computation devices to optimize the in-building evacuation path when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridor and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for highway network travel time minimization. Compared with the uncontrolled case and a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
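The decomposition-plus-neighbor-exchange idea in this abstract can be illustrated with a minimal distributed gradient sketch. This is an assumption-laden toy, not the dissertation's actual algorithms: five agents on a ring, each holding a private quadratic cost, jointly minimize the sum by alternating Metropolis-weighted neighbor averaging with local gradient steps.

```python
n = 5
a = [1.0, 2.0, 3.0, 4.0, 10.0]   # each agent's private data; global optimum = mean(a) = 4.0
x = [0.0] * n                     # local estimates, one per agent
alpha = 0.02                      # small constant step size

for _ in range(2000):
    # consensus step: average with the two ring neighbours (Metropolis weights = 1/3 each)
    mixed = [(x[i] + x[(i - 1) % n] + x[(i + 1) % n]) / 3.0 for i in range(n)]
    # local gradient step on f_i(x) = 0.5 * (x - a_i)**2, so grad = x - a_i
    x = [mixed[i] - alpha * (mixed[i] - a[i]) for i in range(n)]

print(["%.2f" % v for v in x])    # every agent ends near the global minimizer 4.0
```

No agent ever sees another agent's data directly, only neighbor states, which is what makes the scheme tolerant to partial failures and bandwidth limits.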
NASA Astrophysics Data System (ADS)
Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.
2013-12-01
A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over one order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Chase Qishi; Zhu, Michelle Mengxia
The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments.
SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications. We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration.
Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics including Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects to mature the system for production use.
A study of residence time distribution using radiotracer technique in the large scale plant facility
NASA Astrophysics Data System (ADS)
Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.
2017-06-01
As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One promising application of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using the radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, suitable half-life and on-site availability. NH4Br in the form of aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
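The RTD quantities named in this abstract follow from standard tracer algebra: E(t) = C(t)/∫C dt and MRT = ∫ t·E(t) dt. A minimal sketch with a synthetic response curve (an ideal mixed-tank decay, not the paper's detector data):

```python
import math

def trapz(y, x):
    """Trapezoid-rule integral of samples y over grid x."""
    return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
               for i in range(len(x) - 1))

def rtd_and_mrt(t, c):
    """Normalize a tracer response C(t) into the RTD E(t) = C / integral(C),
    then integrate t * E(t) to obtain the Mean Residence Time."""
    area = trapz(c, t)
    e = [ci / area for ci in c]
    mrt = trapz([ti * ei for ti, ei in zip(t, e)], t)
    return e, mrt

# Synthetic curve for an ideal mixed tank with tau = 2 s: C(t) = exp(-t/tau),
# whose mean residence time is exactly tau.
tau = 2.0
t = [0.01 * i for i in range(4001)]       # 0..40 s sampling grid
c = [math.exp(-ti / tau) for ti in t]
e, mrt = rtd_and_mrt(t, c)
print(f"MRT = {mrt:.2f} s")
```

In practice C(t) would come from the NaI detector counts after background subtraction; the normalization step makes detector efficiency cancel out.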
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and high predictability. Furthermore, a scale-free mobility model with two essential ingredients, preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
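The Lévy-flight-like behavior reported here rests on power-law step or dwell distributions. A generic sketch of how such a distribution is sampled and checked via the inverse CDF (the exponent and threshold below are illustrative, not values from the paper):

```python
import random

def levy_step(mu, l0=1.0, rng=random):
    """Inverse-CDF sample from a power law p(l) ~ l**(-mu) for l >= l0 (mu > 1).
    Since P(L > x) = (x / l0)**(1 - mu), inverting gives
    L = l0 * (1 - u)**(-1 / (mu - 1)) for uniform u in [0, 1)."""
    return l0 * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))

rng = random.Random(42)
mu = 2.5                                    # illustrative Levy exponent
steps = [levy_step(mu, rng=rng) for _ in range(200000)]
# Heavy-tail check: P(L > 4) should equal 4**(1 - mu) = 4**-1.5, about 0.125
tail = sum(s > 4.0 for s in steps) / len(steps)
print(f"empirical P(L > 4) = {tail:.3f}")
```

The same survival-function comparison is how an empirical two-segment power law would be distinguished from a single-exponent one: the log-log tail shows a break rather than a straight line.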
A Rich Metadata Filesystem for Scientific Data
ERIC Educational Resources Information Center
Bui, Hoang
2012-01-01
As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…
Simulation of the Impact of Packet Errors on the Kademlia Peer-to-Peer Routing
2010-09-01
during the routing process. Pastry [15] switches to a proximity based metric when approaching a node closely. This complicates the implementation...and Peter Druschel. Pastry : Scalable, distributed object location and routing for large-scale peer-to-peer systems. IFIP/ACM International Conference
Multi-level structure in the large scale distribution of optically luminous galaxies
NASA Astrophysics Data System (ADS)
Deng, Xin-fa; Deng, Zu-gan; Liu, Yong-zhen
1992-04-01
Fractal dimensions in the large scale distribution of galaxies have been calculated with the method given by Wen et al. [1] Samples are taken from the CfA redshift survey in the northern and southern galactic hemispheres [2], respectively, and the results from the two regions are compared with each other. There are significant differences between the distributions in the two regions; however, our analyses do show some common features. All subsamples show multi-level fractal character distinctly. Combining this with results from samples of IRAS galaxies and from redshift surveys in pencil-beam fields, [3,4] we suggest that multi-level fractal structure is most likely a general and important character of the large scale distribution of galaxies. The possible implications of this character are discussed.
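The paper computes fractal dimensions with the method of Wen et al.; as a generic stand-in, the widely used box-counting variant can be sketched in a few lines (synthetic point set, not survey data):

```python
import math, random

def box_counting_dimension(points, scales):
    """Count occupied boxes N(s) at each box size s, then fit the slope of
    log N(s) versus log(1/s) by least squares; the slope estimates the
    fractal (capacity) dimension of the point set."""
    xs = [math.log(1.0 / s) for s in scales]
    ys = []
    for s in scales:
        occupied = {(int(px // s), int(py // s)) for px, py in points}
        ys.append(math.log(len(occupied)))
    n = len(scales)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

random.seed(0)
# Points scattered densely on a straight segment: expected dimension is 1
segment = [(random.random(), 0.0) for _ in range(20000)]
dim = box_counting_dimension(segment, [1/4, 1/8, 1/16, 1/32, 1/64])
print(f"estimated dimension = {dim:.2f}")
```

A "multi-level" fractal character shows up in this picture as different slopes over different ranges of box size, rather than one global slope.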
NASA Astrophysics Data System (ADS)
Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto
2016-02-01
Flame-assisted spray dryers are widely used for large-scale nanoparticle production. Because experimental observation inside the flame is difficult and relatively costly, a numerical approach is needed to predict combustion and particle production during scale-up and optimization. Computational Fluid Dynamics (CFD) resolves the momentum, energy, and mass transfer, making it more efficient than experiments in both time and cost. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was the combustion of LPG as fuel with air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles formed from a silica sol precursor were used for the particle modelling. Comparisons of flame contour, temperature distribution, and particle size distribution show that the Large Eddy Simulation turbulence model provides the closest agreement with the experimental results.
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. The statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved keeping hierarchical form.
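A short sketch of the negative binomial count model invoked here, using one standard counts-in-cells parameterization (the mapping of mean and clustering strength onto the NB parameters is the textbook one; the numeric values are illustrative, not from the paper):

```python
import math

def neg_binom_pmf(k, r, p):
    """P(N = k) = Gamma(k + r) / (k! Gamma(r)) * p**r * (1 - p)**k,
    evaluated in log space for numerical stability."""
    return math.exp(math.lgamma(k + r) - math.lgamma(k + 1) - math.lgamma(r)
                    + r * math.log(p) + k * math.log(1.0 - p))

# Counts-in-cells mapping: mean nbar and variance nbar * (1 + nbar * xi)
# correspond to r = 1 / xi and p = 1 / (1 + nbar * xi), where xi is the
# volume-averaged two-point correlation of the smoothed field.
nbar, xi = 4.0, 0.5                      # illustrative values
r, p = 1.0 / xi, 1.0 / (1.0 + nbar * xi)
pmf = [neg_binom_pmf(k, r, p) for k in range(200)]
mean = sum(k * pk for k, pk in enumerate(pmf))
var = sum((k - mean) ** 2 * pk for k, pk in enumerate(pmf))
print(f"mean = {mean:.3f}, variance = {var:.3f}")
```

As xi goes to zero (r to infinity) the distribution tends to Poisson, which is the Gaussian-compatible large-scale limit the abstract describes.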
Jacquet, Claire; Mouillot, David; Kulbicki, Michel; Gravel, Dominique
2017-02-01
The Theory of Island Biogeography (TIB) predicts how area and isolation influence species richness equilibrium on insular habitats. However, the TIB remains silent about functional trait composition and provides no information on the scaling of functional diversity with area, an observation that is now documented in many systems. To fill this gap, we develop a probabilistic approach to predict the distribution of a trait as a function of habitat area and isolation, extending the TIB beyond the traditional species-area relationship. We compare model predictions to the body-size distribution of piscivorous and herbivorous fishes found on tropical reefs worldwide. We find that small and isolated reefs have a higher proportion of large-sized species than large and connected reefs. We also find that knowledge of species body-size and trophic position improves the predictions of fish occupancy on tropical reefs, supporting both the allometric and trophic theory of island biogeography. The integration of functional ecology to island biogeography is broadly applicable to any functional traits and provides a general probabilistic approach to study the scaling of trait distribution with habitat area and isolation. © 2016 John Wiley & Sons Ltd/CNRS.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments.
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Towards scalable Byzantine fault-tolerant replication
NASA Astrophysics Data System (ADS)
Zbierski, Maciej
2017-08-01
Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.
Scaling Theory of Entanglement at the Many-Body Localization Transition.
Dumitrescu, Philipp T; Vasseur, Romain; Potter, Andrew C
2017-09-15
We study the universal properties of eigenstate entanglement entropy across the transition between many-body localized (MBL) and thermal phases. We develop an improved real space renormalization group approach that enables numerical simulation of large system sizes and systematic extrapolation to the infinite system size limit. For systems smaller than the correlation length, the average entanglement follows a subthermal volume law, whose coefficient is a universal scaling function. The full distribution of entanglement follows a universal scaling form, and exhibits a bimodal structure that produces universal subleading power-law corrections to the leading volume law. For systems larger than the correlation length, the short interval entanglement exhibits a discontinuous jump at the transition from fully thermal volume law on the thermal side, to pure area law on the MBL side.
A Life-Cycle Model of Human Social Groups Produces a U-Shaped Distribution in Group Size.
Salali, Gul Deniz; Whitehouse, Harvey; Hochberg, Michael E
2015-01-01
One of the central puzzles in the study of sociocultural evolution is how and why transitions from small-scale human groups to large-scale, hierarchically more complex ones occurred. Here we develop a spatially explicit agent-based model as a first step towards understanding the ecological dynamics of small and large-scale human groups. By analogy with the interactions between single-celled and multicellular organisms, we build a theory of group lifecycles as an emergent property of single cell demographic and expansion behaviours. We find that once the transition from small-scale to large-scale groups occurs, a few large-scale groups continue expanding while small-scale groups gradually become scarcer, and large-scale groups become larger in size and fewer in number over time. Demographic and expansion behaviours of groups are largely influenced by the distribution and availability of resources. Our results conform to a pattern of human political change in which religions and nation states come to be represented by a few large units and many smaller ones. Future enhancements of the model should include decision-making rules and probabilities of fragmentation for large-scale societies. We suggest that the synthesis of population ecology and social evolution will generate increasingly plausible models of human group dynamics.
NASA Technical Reports Server (NTRS)
Hill, Gerald M.; Evans, Richard K.
2009-01-01
A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design that utilizes fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Large Scale Density Estimation of Blue and Fin Whales ...Utilizing Sparse Array Data to Develop and Implement a New Method for Estimating Blue and Fin Whale Density Len Thomas & Danielle Harris Centre...to develop and implement a new method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope
The Numerical Propulsion System Simulation: An Overview
NASA Technical Reports Server (NTRS)
Lytle, John K.
2000-01-01
Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.
Predictive Anomaly Management for Resilient Virtualized Computing Infrastructures
2015-05-27
PREC: Practical Root Exploit Containment for Android Devices, ACM Conference on Data and Application Security and Privacy (CODASPY), 03-MAR-14...05-OCT-11. Hiep Nguyen, Yongmin Tan, Xiaohui Gu. Propagation-aware Anomaly Localization for Cloud Hosted Distributed Applications, ACM...Workshop on Managing Large-Scale Systems via the Analysis of System Logs and the Application of Machine Learning Techniques (SLAML) in conjunction with SOSP
ERIC Educational Resources Information Center
Herrera-Ruiz, Octavio
2012-01-01
Peer-to-Peer (P2P) technology has emerged as an important alternative to the traditional client-server communication paradigm to build large-scale distributed systems. P2P enables the creation, dissemination and access to information at low cost and without the need of dedicated coordinating entities. However, existing P2P systems fail to provide…
Soldier Data Tag Study Effort.
1985-06-10
interested in protecting it. The tag itself is difficult--though not impossible--to counterfeit. Also, it potentially improves the data...attacks during the design, manufacture, and distribution processes, counterfeiting, unauthorized access/alteration of tag data, and use of the tag to...3.3.2 Hijacking of SDT System Shipments, or Large-Scale Counterfeit of SDT Systems 3.3.3 Unauthorized Alteration
NASA Technical Reports Server (NTRS)
Zalay, A. D.; Brashears, M. R.; Jordan, A. J.; Shrider, K. R.; Vought, C. D.
1979-01-01
The flow field measured around a hovering 70 percent scale vertical takeoff and landing (V/STOL) aircraft model is described. The velocity measurements were conducted with a ground-based laser Doppler velocimeter. The remote sensing instrumentation and experimental tests of the velocity surveys are discussed. The distributions of vertical velocity in the fan jet and fountain, radial velocity in the wall jet, and horizontal velocity along the aircraft underside are presented for different engine rpms and aircraft heights above ground. Results show that it is feasible to use a mobile laser Doppler velocimeter to measure the flow field generated by a large scale V/STOL aircraft operating in ground effect.
Implications of the IRAS data for galactic gamma ray astronomy and EGRET
NASA Technical Reports Server (NTRS)
Stecker, Floyd W.
1990-01-01
Using the results of gamma-ray, millimeter-wave and far-infrared surveys of the galaxy, a logically consistent picture of the large scale distribution of galactic gas and cosmic rays was derived, tied to the overall processes of stellar birth and destruction on a galactic scale. Using the results of the IRAS far-infrared survey of the galaxy, the large scale radial distributions of galactic far-infrared emission were obtained independently for both the Northern and Southern Hemisphere sides of the Galaxy. The dominant feature in these distributions was found to be a broad peak coincident with the 5 kpc molecular gas cloud ring. Evidence was found for spiral arm features. Strong correlations are evident between the large scale galactic distributions of far-infrared emission, gamma-ray emission and total CO emission. There is a particularly tight correlation between the distribution of warm molecular clouds and far-infrared emission on a galactic scale. The 5 kpc ring is evident in existing galactic gamma-ray data. The extent to which the more detailed spiral arm features appear in the better-resolved EGRET (Energetic Gamma Ray Experiment Telescope) data will help to determine more precisely the propagation characteristics of cosmic rays.
Mapping spatial patterns of denitrifiers at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31,500 km² region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
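The geostatistical modelling step described above rests on an empirical semivariogram of the measured abundances. As a hedged illustration (the site coordinates, values and lag bins below are synthetic stand-ins, not the study's data or fitted model), a minimal semivariogram estimator might look like:

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation falls in each lag bin."""
    n = len(values)
    gammas = np.zeros(len(lag_edges) - 1)
    counts = np.zeros(len(lag_edges) - 1, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            h = np.linalg.norm(coords[i] - coords[j])
            k = np.searchsorted(lag_edges, h) - 1
            if 0 <= k < len(gammas):
                gammas[k] += 0.5 * (values[i] - values[j]) ** 2
                counts[k] += 1
    # Bins with no pairs are reported as NaN
    return np.where(counts > 0, gammas / np.maximum(counts, 1), np.nan)

# Synthetic example: a spatially structured abundance proxy at random sites
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(80, 2))        # site locations, km
values = np.sin(coords[:, 0] / 30) + 0.1 * rng.standard_normal(80)
gamma = empirical_semivariogram(coords, values, np.linspace(0, 100, 11))
```

A fitted variogram model (spherical, exponential, etc.) would then drive kriging to produce the predictive maps mentioned in the abstract.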
Integrating scales of seagrass monitoring to meet conservation needs
Neckles, Hilary A.; Kopp, Blaine S.; Peterson, Bradley J.; Pooler, Penelope S.
2012-01-01
We evaluated a hierarchical framework for seagrass monitoring in two estuaries in the northeastern USA: Little Pleasant Bay, Massachusetts, and Great South Bay/Moriches Bay, New York. This approach includes three tiers of monitoring that are integrated across spatial scales and sampling intensities. We identified monitoring attributes for determining attainment of conservation objectives to protect seagrass ecosystems from estuarine nutrient enrichment. Existing mapping programs provided large-scale information on seagrass distribution and bed sizes (tier 1 monitoring). We supplemented this with bay-wide, quadrat-based assessments of seagrass percent cover and canopy height at permanent sampling stations following a spatially distributed random design (tier 2 monitoring). Resampling simulations showed that four observations per station were sufficient to minimize bias in estimating mean percent cover on a bay-wide scale, and sample sizes of 55 stations in a 624-ha system and 198 stations in a 9,220-ha system were sufficient to detect absolute temporal increases in seagrass abundance from 25% to 49% cover and from 4% to 12% cover, respectively. We made high-resolution measurements of seagrass condition (percent cover, canopy height, total and reproductive shoot density, biomass, and seagrass depth limit) at a representative index site in each system (tier 3 monitoring). Tier 3 data helped explain system-wide changes. Our results suggest tiered monitoring as an efficient and feasible way to detect and predict changes in seagrass systems relative to multi-scale conservation objectives.
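The resampling simulations used above to choose the number of quadrat observations per station can be sketched with a simple bootstrap. The station count, observation count, and percent-cover values below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def bootstrap_mean_bias(station_obs, n_sub, n_boot=500, seed=1):
    """Estimate bias in the bay-wide mean percent cover when only
    n_sub quadrat observations per station are resampled."""
    rng = np.random.default_rng(seed)
    full_mean = station_obs.mean()
    sub_means = np.empty(n_boot)
    for b in range(n_boot):
        # resample n_sub observations per station, then average stations
        sub = np.array([rng.choice(row, size=n_sub, replace=True).mean()
                        for row in station_obs])
        sub_means[b] = sub.mean()
    return sub_means.mean() - full_mean

# Synthetic data: 55 stations x 8 quadrat observations of percent cover
rng = np.random.default_rng(0)
obs = np.clip(rng.normal(30, 15, size=(55, 8)), 0, 100)
bias = bootstrap_mean_bias(obs, n_sub=4)
```

If the bias stays negligible as `n_sub` shrinks, the smaller sampling intensity is adequate for the bay-wide estimate, which is the logic behind the four-observations-per-station result.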
Effects of habitat fragmentation on passerine birds breeding in Intermountain shrubsteppe
Knick, S.T.; Rotenberry, J.T.
2002-01-01
Habitat fragmentation and loss strongly influence the distribution and abundance of passerine birds breeding in Intermountain shrubsteppe. Wildfires, human activities, and change in vegetation communities often are synergistic in these systems and can result in radical conversion from shrubland to grasslands dominated by exotic annuals at large temporal and spatial scales, from which recovery to native conditions is unlikely. As a result, populations of 5 of the 12 species in our review of Intermountain shrubsteppe birds are undergoing significant declines; 5 species are listed as at-risk or as candidates for protection in at least one state. The process by which fragmentation affects bird distributions in these habitats remains unknown because most research has emphasized the detection of population trends and patterns of habitat associations at relatively large spatial scales. Our research indicates that the distribution of shrubland-obligate species, such as Brewer's Sparrows (Spizella breweri), Sage Sparrows (Amphispiza belli), and Sage Thrashers (Oreoscoptes montanus), was highly sensitive to fragmentation of shrublands at spatial scales larger than individual home ranges. In contrast, the underlying mechanisms for both habitat change and bird population dynamics may operate independently of habitat boundaries. We propose alternative, but not necessarily exclusive, mechanisms to explain the relationship between habitat fragmentation and bird distribution and abundance. Fragmentation might influence productivity through differences in breeding density, nesting success, or predation. However, local and landscape variables were not significant determinants of nesting success, number fledged, or the probability of predation or parasitism (although our tests had relatively low statistical power).
Alternatively, relative absence of natal philopatry and redistribution by individuals among habitats following fledging or post-migration could account for the pattern of distribution and abundance. Thus, boundary dynamics may be important in determining the distribution of shrubland-obligate species but insignificant relative to the mechanisms causing the pattern of habitat and bird distribution. Because of the dichotomy in responses, Intermountain shrubsteppe systems present a unique challenge in understanding how landscape composition, configuration, and change influence bird population dynamics.
Mapping the distribution of the denitrifier community at large scales (Invited)
NASA Astrophysics Data System (ADS)
Philippot, L.; Bru, D.; Ramette, A.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.
2010-12-01
Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31,500 km² region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 740 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scales can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.
Performance of distributed multiscale simulations
Borgdorff, J.; Ben Belgacem, M.; Bona-Casas, C.; Fazendeiro, L.; Groen, D.; Hoenen, O.; Mizeranschi, A.; Suter, J. L.; Coster, D.; Coveney, P. V.; Dubitzky, W.; Hoekstra, A. G.; Strand, P.; Chopard, B.
2014-01-01
Multiscale simulations model phenomena across natural scales using monolithic or component-based code, running on local or distributed resources. In this work, we investigate the performance of distributed multiscale computing of component-based models, guided by six multiscale applications with different characteristics and from several disciplines. Three modes of distributed multiscale computing are identified: supplementing local dependencies with large-scale resources, load distribution over multiple resources, and load balancing of small- and large-scale resources. We find that the first mode has the apparent benefit of increasing simulation speed, and the second mode can increase simulation speed if local resources are limited. Depending on resource reservation and model coupling topology, the third mode may result in a reduction of resource consumption. PMID:24982258
Implementing Parquet equations using HPX
NASA Astrophysics Data System (ADS)
Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark
A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations they impose vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.
Shifts in Summertime Precipitation Accumulation Distributions over the US
NASA Astrophysics Data System (ADS)
Martinez-Villalobos, C.; Neelin, J. D.
2016-12-01
Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with both important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp drop at a characteristic large-event cutoff scale. This cutoff scale is important because it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This increase has important consequences, as it implies an approximately exponential increase for the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably the eastern third. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with, for example, increases of 70% in the 99.9th percentile in the Northeast. The relationship to previously noted changes in daily precipitation, and to changes in the moisture budget over this period, is examined.
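The "power law with a sharp cutoff" shape described above can be illustrated numerically. A density p(s) ~ s^(-tau) * exp(-s/sL) is a gamma density with shape 1 - tau and scale sL, so we can draw synthetic event accumulations and recover the cutoff scale from the exponential tail of log p(s). The values of tau and sL here are illustrative assumptions, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
tau, sL = 0.5, 40.0                        # hypothetical exponent and cutoff, mm
s = rng.gamma(shape=1 - tau, scale=sL, size=500_000)

# Histogram the density in the tail (s > 2*sL), where exp(-s/sL) dominates
edges = np.linspace(2 * sL, 6 * sL, 17)
hist, _ = np.histogram(s, bins=edges, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
# Slope of log p(s) vs s is approximately -1/sL in this range
slope = np.polyfit(mids[mask], np.log(hist[mask]), 1)[0]
sL_est = -1.0 / slope
```

The recovered cutoff is slightly below the true `sL` because the slowly varying power-law prefactor also contributes a small negative slope; in practice the cutoff is fitted jointly with the exponent.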
Shifts in Summertime Precipitation Accumulation Distributions over the US
NASA Astrophysics Data System (ADS)
Martinez-Villalobos, C.; Neelin, J. D.
2017-12-01
Precipitation accumulation, i.e., the amount of precipitation integrated over the course of an event, is a variable with both important physical and societal implications. Previous observational studies show that accumulation distributions have a characteristic shape, with an approximately power-law decrease at first, followed by a sharp drop at a characteristic large-event cutoff scale. This cutoff scale is important because it limits the biggest accumulation events. Stochastic prototypes show that the resulting distributions, and importantly the large-event cutoff scale, can be understood as a result of the interplay between moisture loss by precipitation and changes in moisture sinks/sources due to fluctuations in moisture divergence over the course of a precipitation event. The strength of this fluctuating moisture sink/source term is expected to increase under global warming, with both theory and climate model simulations predicting a concomitant increase in the large-event cutoff scale. This increase has important consequences, as it implies an approximately exponential increase for the largest accumulation events. Given its importance, in this study we characterize and track changes in the distribution of precipitation event accumulations over the contiguous US. Accumulation distributions are calculated using hourly precipitation data from 1700 stations, covering the 1974-2013 period over May-October. The resulting distributions largely follow the aforementioned shape, with individual cutoff scales depending on the local climate. An increase in the large-event cutoff scale over this period is observed over several regions of the US, most notably the eastern third. In agreement with the increase in the cutoff, almost exponential increases in the highest accumulation percentiles occur over these regions, with, for example, increases of 70% in the 99.9th percentile in the Northeast. The relationship to previously noted changes in daily precipitation, and to changes in the moisture budget over this period, is examined.
Double inflation - A possible resolution of the large-scale structure problem
NASA Technical Reports Server (NTRS)
Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman
1987-01-01
A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.
NASDA's Advanced On-Line System (ADOLIS)
NASA Technical Reports Server (NTRS)
Yamamoto, Yoshikatsu; Hara, Hideo; Yamada, Shigeo; Hirata, Nobuyuki; Komatsu, Shigenori; Nishihata, Seiji; Oniyama, Akio
1993-01-01
Spacecraft operations, including ground system operations, generally involve group work at various scales, carried out by operators, engineers, managers, users and so on, whose positions are in many cases geographically distributed. In face-to-face work environments, it is easy for them to understand each other. However, in distributed work environments that rely on communication media, if only audio is used they become estranged from each other and lose interest in, and continuity of, the work. This is an obstacle to smooth spacecraft operation. NASDA has developed an experimental model of a new real-time operation control system called ADOLIS (ADvanced On-Line System), adapted to such a distributed environment, using a multimedia system that handles character, figure, image, handwriting, video and audio information and accommodates a wide range of operation systems, including spacecraft and ground systems. This paper describes the results of the development of the experimental model.
High quality uniform YBCO film growth by the metalorganic deposition using trifluoroacetates
NASA Astrophysics Data System (ADS)
Wang, S. S.; Zhang, Z. L.; Wang, L.; Gao, L. K.; Liu, J.
2017-03-01
A need exists for large-area superconducting YBa2Cu3O7-x (YBCO) films with high critical current density for microwave communication and/or electric power applications. The trifluoroacetate metalorganic deposition (TFA-MOD) method is a promising low-cost technique for large-scale production of YBCO films, because it does not need a high-vacuum device and is easily applicable to substrates of various shapes and sizes. In this paper, double-sided YBCO films with a maximum diameter of 2 in were prepared on LaAlO3 substrates by the TFA-MOD method. The inductive critical current density Jc, microwave surface resistance Rs, and microstructure were characterized. A newly homemade furnace system was used to epitaxially grow the YBCO films; it improves the uniformity of the YBCO film significantly through proper design of the gas supply and temperature distribution. Results showed that the large-area YBCO films were very uniform in microstructure and thickness distribution, with an average inductive Jc in excess of 6 MA/cm2 with uniform distribution and low Rs (10 GHz) below 0.3 mΩ at 77 K. A film filter may thus be prepared to work at temperatures below 74 K. These results are very close to the highest values for YBCO films made by conventional vacuum methods, so we demonstrate a very promising route for large-scale production of high-quality, large-area YBCO superconducting films at lower cost.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that, under some conditions, there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum when the fish population is distributed among both small and large clusters of schools.
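A minimal sketch of an aggregated slow-time-scale model in the spirit described above: a logistic stock with open-access effort dynamics, where only the fraction of fish outside the MPA is exposed to harvest. The two equations, all parameter values, and the fraction `f` are illustrative assumptions, not the authors' five-equation system:

```python
# Aggregated model: total fish density n, fishing effort E
r, K = 1.0, 10.0       # growth rate, carrying capacity (hypothetical)
q, f = 0.5, 0.6        # catchability, fraction of stock outside the MPA
p, c = 1.0, 2.0        # price per unit catch, cost per unit effort

def step(n, E, dt=1e-3):
    """One explicit Euler step of the slow dynamics."""
    dn = r * n * (1 - n / K) - q * f * E * n    # growth minus catch
    dE = E * (p * q * f * n - c)                # effort follows profit
    return n + dt * dn, E + dt * dE

n, E = 5.0, 1.0
for _ in range(200_000):                        # integrate to t = 200
    n, E = step(n, E)

# Interior (sustainable-fishery) equilibrium of this sketch:
n_star = c / (p * q * f)                        # stock where profit is zero
E_star = r * (1 - n_star / K) / (q * f)         # effort balancing growth
```

In this toy version, enlarging the protected fraction (lowering `f`) raises the equilibrium stock `n_star`, the qualitative role an MPA plays in the full model.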
A Weibull distribution accrual failure detector for cloud computing
Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin
2017-01-01
Failure detectors are used as a fundamental component in building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
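An accrual failure detector of this general kind outputs a continuously growing suspicion level rather than a binary up/down verdict. The sketch below computes a phi-style suspicion from a Weibull model of heartbeat inter-arrival times; the suspicion formula and the parameter values are illustrative assumptions, not the paper's exact estimator:

```python
import math

def weibull_cdf(t, shape, scale):
    """CDF of the Weibull distribution: F(t) = 1 - exp(-(t/scale)^shape)."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def suspicion(t_since_last, shape, scale):
    """Accrual suspicion level phi = -log10(P(heartbeat gap > t_since_last)),
    i.e. the longer the silence, the less plausible a live process becomes."""
    p_later = 1.0 - weibull_cdf(t_since_last, shape, scale)
    return -math.log10(max(p_later, 1e-300))

# Hypothetical parameters fitted from observed heartbeat gaps (seconds)
shape, scale = 2.0, 1.0
low = suspicion(0.5, shape, scale)   # short silence -> low suspicion
high = suspicion(3.0, shape, scale)  # long silence  -> high suspicion
```

Each application then picks its own threshold on phi, which is what lets one detector serve clients with different timeliness/accuracy trade-offs.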
Emergence of universal scaling in financial markets from mean-field dynamics
NASA Astrophysics Data System (ADS)
Vikram, S. V.; Sinha, Sitabhra
2011-01-01
Collective phenomena with universal properties have been observed in many complex systems with a large number of components. Here we present a microscopic model of the emergence of scaling behavior in such systems, where the interaction dynamics between individual components is mediated by a global variable making the mean-field description exact. Using the example of financial markets, we show that asset price can be such a global variable with the critical role of coordinating the actions of agents who are otherwise independent. The resulting model accurately reproduces empirical properties such as the universal scaling of the price fluctuation and volume distributions, long-range correlations in volatility, and multiscaling.
Large Fluctuations for Spatial Diffusion of Cold Atoms
NASA Astrophysics Data System (ADS)
Aghion, Erez; Kessler, David A.; Barkai, Eli
2017-06-01
We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.
Communication architecture for large geostationary platforms
NASA Technical Reports Server (NTRS)
Bond, F. E.
1979-01-01
Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections for the next two decades and discussing tradeoffs and design approaches for major components, including antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.
NASA Astrophysics Data System (ADS)
Leitão, João; Pereira, José; Rodrigues, Luís
Gossip, or epidemic, protocols have emerged as a powerful strategy to implement highly scalable and resilient reliable broadcast primitives on large scale peer-to-peer networks. Epidemic protocols are scalable because they distribute the load among all nodes in the system and resilient because they have an intrinsic level of redundancy that masks node and network failures. This chapter provides an introduction to gossip-based broadcast on large-scale unstructured peer-to-peer overlay networks: it surveys the main results in the field, discusses techniques to build and maintain the overlays that support efficient dissemination strategies, and provides an in-depth discussion and experimental evaluation of two concrete protocols, named HyParView and Plumtree.
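The epidemic dissemination idea can be illustrated with a toy eager-push gossip simulation: every informed node forwards the message to a few peers chosen at random each round, so coverage grows roughly geometrically. This is a generic sketch of the gossip principle, not the HyParView or Plumtree protocols themselves, and the network size and fanout are arbitrary:

```python
import random

def gossip_rounds(n, fanout, seed=0):
    """Simulate eager push gossip on n nodes: each round, every informed
    node pushes the message to `fanout` peers chosen uniformly at random.
    Returns the number of rounds until every node is informed."""
    rng = random.Random(seed)
    informed = {0}                       # node 0 originates the broadcast
    rounds = 0
    while len(informed) < n:
        new = set()
        for _ in informed:
            for peer in rng.sample(range(n), fanout):
                new.add(peer)
        informed |= new
        rounds += 1
    return rounds

rounds = gossip_rounds(n=1000, fanout=4, seed=1)
```

The redundancy visible here (many nodes receive the message several times) is exactly what makes gossip resilient to node and link failures, at the cost of extra traffic that protocols like Plumtree prune into a spanning tree.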
Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility
Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk
2013-01-01
Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904
Dynamic Control of FACTS Devices to Enable Large Scale Penetration of Renewable Energy Resources
NASA Astrophysics Data System (ADS)
Chavan, Govind Sahadeo
This thesis focuses on some of the problems caused by large scale penetration of Renewable Energy Resources within EHV transmission networks, and investigates some approaches to resolving these problems. In chapter 4, a reduced-order model of the 500 kV WECC transmission system is developed by estimating its key parameters from phasor measurement unit (PMU) data. The model was then implemented in RTDS and was investigated for its accuracy with respect to the PMU data. Finally it was tested for observing the effects of various contingencies such as transmission line loss, generation loss and large scale penetration of wind farms on EHV transmission systems. Chapter 5 introduces Static Series Synchronous Compensators (SSSCs), which are series-connected converters that can control real power flow along a transmission line. A new application of SSSCs in mitigating the Ferranti effect on unloaded transmission lines was demonstrated in PSCAD. A new control scheme for SSSCs based on the Cascaded H-bridge (CHB) converter configuration was proposed and was demonstrated using PSCAD and RTDS. A new centralized controller was developed for the distributed SSSCs based on some of the concepts used in the CHB-based SSSC. The controller's efficacy was demonstrated using RTDS. Finally, chapter 6 introduces the problem of power oscillations induced by renewable sources in a transmission network. A power oscillation damping (POD) controller is designed using distributed SSSCs in NYPA's 345 kV three-bus AC system and its efficacy is demonstrated in PSCAD. A similar POD controller is then designed for the CHB-based SSSC in the IEEE 14-bus system in PSCAD. Both controllers were noted to have significantly damped power oscillations in the transmission networks.
Distributed resource allocation under communication constraints
NASA Astrophysics Data System (ADS)
Dodin, Pierre; Nimier, Vincent
2001-03-01
This paper deals with a study of the multi-sensor management problem for multi-target tracking. The collaboration between many sensors observing the same target means that they are able to fuse their data during the information process. One must then take this possibility into account to compute the optimal sensor-target association at each time step. In order to solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. To unify these problems, one possibility is to use a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is the filtering algorithm of Grime, which relaxes the usual full-connected hypothesis. By full-connected, one means that the information in a full-connected system is totally distributed everywhere at the same moment, which is unacceptable for a real large-scale system. We model the distributed assignment decision with the help of a greedy algorithm. Each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the full-connected hypothesis is that the sensors' information sets are not the same at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and coherence of the local assignment decisions of our system against maneuvering targets, we show that decentralized assignment control remains possible even though the system is not full-connected.
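A greedy local assignment of the general kind mentioned above can be sketched as follows. The information-gain matrix, its values, and the one-target-per-sensor rule are illustrative assumptions standing in for the local criterion the abstract leaves unspecified:

```python
def greedy_assign(gain):
    """gain[i][j] = hypothetical information gain if sensor i observes
    target j. Each sensor greedily picks its best target using only its
    own (possibly asymmetric) local information; targets may be shared."""
    assignment = {}
    for i, row in enumerate(gain):
        best_j = max(range(len(row)), key=lambda j: row[j])
        assignment[i] = best_j
    return assignment

# Three sensors, three targets (illustrative gains)
gain = [[0.9, 0.2, 0.4],
        [0.1, 0.8, 0.3],
        [0.5, 0.6, 0.7]]
assignment = greedy_assign(gain)   # {0: 0, 1: 1, 2: 2}
```

In the decentralized setting each sensor evaluates such a criterion against its own estimate of the other sensors' information sets, which is where the information asymmetry enters.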
Critiquing ';pore connectivity' as basis for in situ flow in geothermal systems
NASA Astrophysics Data System (ADS)
Kenedi, C. L.; Leary, P.; Malin, P.
2013-12-01
Geothermal system in situ flow systematics derived from detailed examination of grain-scale structures, fabrics, mineral alteration, and pore connectivity may be extremely misleading if/when extrapolated to reservoir-scale flow structure. In oil/gas field clastic reservoir operations, it is standard to assume that small-scale studies of flow fabric - notably the Kozeny-Carman and Archie's Law treatments at the grain scale and well-log/well-bore sampling of formations/reservoirs at the cm-m scale - are adequate to define the reservoir-scale flow properties. In the case of clastic reservoirs, however, a wide range of reservoir-scale data wholly discredits this extrapolation: Well-log data show that grain-scale fracture density fluctuation power scales inversely with spatial frequency k, S(k) ~ 1/k^β, 1.0 < β < 1.2, 1 cycle/km < k < 1 cycle/cm; the scaling is a 'universal' feature of well-logs (neutron porosity, sonic velocity, chemical abundance, mass density, resistivity, in many forms of clastic rock and instances of shale bodies, for both horizontal and vertical wells). Grain-scale fracture density correlates with in situ porosity; spatial fluctuations of porosity φ in well-core correlate with spatial fluctuations in the logarithm of well-core permeability, δφ ~ δlog(κ), with typical correlation coefficient ~ 85%; a similar relation is observed in consolidating sediments/clays, indicating a generic coupling between fluid pressure and solid deformation at pore sites. In situ macroscopic flow systems are lognormally distributed according to κ ~ κ0 exp(α(φ-φ0)), with α >> 1 an empirical parameter for the degree of in situ fracture connectivity; the lognormal distribution applies to well productivities in US oil fields and NZ geothermal fields, 'frack productivity' in oil/gas shale body reservoirs, ore grade distributions, and trace element abundances.
Although presently available evidence for these properties in geothermal reservoirs is limited, there are indications that geothermal system flow essentially obeys the same 'universal' in situ flow rules as does clastic rock: well-log data from Los Azufres, MX, show power-law scaling S(k) ~ 1/k^β, 1.2 < β < 1.4, over the spatial frequency range 2 cycles/km to 0.5 cycle/m (the higher β-values are likely due to the relatively fresh nature of geothermal systems); well-core at Bulalo (PH) and Ohaaki (NZ) shows statistically significant spatial correlation δφ ~ δlog(κ); well productivities at Ohaaki/Ngawha (NZ) and in geothermal systems elsewhere are lognormally distributed; and K/Th/U abundances are lognormally distributed in Los Azufres well-logs. We therefore caution that small-scale evidence for in situ flow fabric in geothermal systems that is interpreted in terms of 'pore connectivity' may in fact not reflect how small-scale chemical processes are integrated into a large-scale geothermal flow structure. Rather, such small-scale studies should (perhaps) be considered in terms of the above flow rules. These flow rules are easily incorporated into standard flow simulation codes, in particular OPM (Open Porous Media), the open-source industry-standard flow code. Geochemical transport data relevant to geothermal systems can thus be expected to be well modeled by OPM or equivalent (e.g., INL/LANL) codes.
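The lognormal flow rule quoted above follows directly from normally distributed porosity fluctuations: if φ is Gaussian, then κ ~ κ0 exp(α(φ-φ0)) is lognormal by construction. A minimal numerical sketch (all parameter values are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only (not fitted to any field data):
phi0, sigma_phi = 0.10, 0.02   # mean porosity and its fluctuation scale
alpha, kappa0 = 30.0, 1e-15    # connectivity exponent (alpha >> 1), reference permeability (m^2)

# Normally distributed porosity fluctuations at many in situ sample points
phi = rng.normal(phi0, sigma_phi, 100_000)

# kappa ~ kappa0 * exp(alpha * (phi - phi0))  =>  log(kappa) is normal,
# so kappa itself is lognormally distributed
kappa = kappa0 * np.exp(alpha * (phi - phi0))

# Under this model, log-permeability fluctuations correlate perfectly with
# porosity fluctuations (the abstract reports ~85% correlation in well-core)
corr = np.corrcoef(phi - phi0, np.log(kappa) - np.log(kappa0))[0, 1]
print(round(corr, 3))  # 1.0 by construction
```

The heavy right tail of the resulting κ distribution is why a few wells or fractures carry most of the flow, consistent with the lognormal well-productivity distributions cited in the abstract.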
Quantifying Stock Return Distributions in Financial Markets
Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias
2015-01-01
Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
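Tail-exponent estimation of the kind described above can be illustrated on synthetic heavy-tailed returns. This sketch applies a Hill-type maximum likelihood estimator to Student-t data, whose tail exponent is known in advance; the data and estimator choice are assumptions for illustration, not the paper's Dow Jones dataset or fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "returns" with a power-law tail: a Student-t with nu = 3 degrees
# of freedom has P(|r| > x) ~ x^-3, i.e. a true tail exponent of 3.
returns = rng.standard_t(df=3, size=200_000)

def hill_tail_exponent(x, k=2000):
    """Hill estimator of the tail exponent from the k largest |x| values."""
    s = np.sort(np.abs(x))[::-1]            # descending order statistics
    return k / np.sum(np.log(s[:k] / s[k]))

alpha_hat = hill_tail_exponent(returns)
print(round(alpha_hat, 2))  # near 3 for these synthetic data
```

The same estimator applied to returns aggregated over longer horizons would show the tail fattening toward an exponential regime, mirroring the crossover the paper reports between 3600-second and longer time scales.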
NASA Technical Reports Server (NTRS)
Johnston, William E.; Gannon, Dennis; Nitzberg, Bill
2000-01-01
We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems.
Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., an Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert, who uses this data to index into detailed design databases and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focused primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g.
determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.
NASA Astrophysics Data System (ADS)
Bandyopadhyay, Saptarshi
Multi-agent systems are widely used for constructing a desired formation shape, exploring an area, surveillance, coverage, and other cooperative tasks. This dissertation introduces novel algorithms in the three main areas of shape formation, distributed estimation, and attitude control of large-scale multi-agent systems. In the first part of this dissertation, we address the problem of shape formation for thousands to millions of agents. Here, we present two novel algorithms for guiding a large-scale swarm of robotic systems into a desired formation shape in a distributed and scalable manner. These probabilistic swarm guidance algorithms adopt an Eulerian framework, where the physical space is partitioned into bins and the swarm's density distribution over each bin is controlled using tunable Markov chains. In the first algorithm - Probabilistic Swarm Guidance using Inhomogeneous Markov Chains (PSG-IMC) - each agent determines its bin transition probabilities using a time-inhomogeneous Markov chain that is constructed in real-time using feedback from the current swarm distribution. This PSG-IMC algorithm minimizes the expected cost of the transitions required to achieve and maintain the desired formation shape, even when agents are added to or removed from the swarm. The algorithm scales well with a large number of agents and complex formation shapes, and can also be adapted for area exploration applications. In the second algorithm - Probabilistic Swarm Guidance using Optimal Transport (PSG-OT) - each agent determines its bin transition probabilities by solving an optimal transport problem, which is recast as a linear program. In the presence of perfect feedback of the current swarm distribution, this algorithm minimizes the given cost function, guarantees faster convergence, reduces the number of transitions for achieving the desired formation, and is robust to disturbances or damages to the formation. 
We demonstrate the effectiveness of these two proposed swarm guidance algorithms using results from numerical simulations and closed-loop hardware experiments on multiple quadrotors. In the second part of this dissertation, we present two novel discrete-time algorithms for distributed estimation, which track a single target using a network of heterogeneous sensing agents. In the Distributed Bayesian Filtering (DBF) algorithm, the sensing agents combine their normalized likelihood functions using the logarithmic opinion pool and the discrete-time dynamic average consensus algorithm. Each agent's estimated likelihood function converges to an error ball centered on the joint likelihood function of the centralized multi-sensor Bayesian filtering algorithm. Using a new proof technique, the convergence, stability, and robustness properties of the DBF algorithm are rigorously characterized. The explicit bounds on the time step of the robust DBF algorithm are shown to depend on the time scale of the target dynamics. Furthermore, the DBF algorithm for linear-Gaussian models can be cast into a modified form of the Kalman information filter. In the Bayesian Consensus Filtering (BCF) algorithm, the agents combine their estimated posterior pdfs multiple times within each time step using the logarithmic opinion pool scheme. Thus, each agent's consensual pdf minimizes the sum of Kullback-Leibler divergences with the local posterior pdfs. The performance and robustness properties of these algorithms are validated using numerical simulations. In the third part of this dissertation, we present an attitude control strategy and a new nonlinear tracking controller for a spacecraft carrying a large object, such as an asteroid or a boulder.
If the captured object is larger or comparable in size to the spacecraft and has significant modeling uncertainties, conventional nonlinear control laws that use exact feed-forward cancellation are not suitable because they exhibit a large resultant disturbance torque. The proposed nonlinear tracking control law guarantees global exponential convergence of tracking errors with finite-gain Lp stability in the presence of modeling uncertainties and disturbances, and reduces the resultant disturbance torque. Further, this control law permits the use of any attitude representation and its integral control formulation eliminates any constant disturbance. Under small uncertainties, the best strategy for stabilizing the combined system is to track a fuel-optimal reference trajectory using this nonlinear control law, because it consumes the least amount of fuel. In the presence of large uncertainties, the most effective strategy is to track the derivative plus proportional-derivative based reference trajectory, because it reduces the resultant disturbance torque. The effectiveness of the proposed attitude control law is demonstrated by using results of numerical simulation based on an Asteroid Redirect Mission concept. The new algorithms proposed in this dissertation will facilitate the development of versatile autonomous multi-agent systems that are capable of performing a variety of complex tasks in a robust and scalable manner.
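The Eulerian bin-transition idea behind the probabilistic swarm guidance algorithms above can be sketched with a simple tunable Markov chain. This toy uses a Metropolis-Hastings construction, one standard way to build a chain with a prescribed stationary distribution; it stands in for, and is much simpler than, the dissertation's feedback-driven PSG-IMC law, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Desired swarm density over 4 bins (a toy "formation shape")
target = np.array([0.4, 0.3, 0.2, 0.1])
n_bins = len(target)

# Metropolis-Hastings chain with uniform proposals: its stationary
# distribution is `target`, so agents following it converge to the shape.
P = np.zeros((n_bins, n_bins))
for i in range(n_bins):
    for j in range(n_bins):
        if i != j:
            P[i, j] = (1.0 / (n_bins - 1)) * min(1.0, target[j] / target[i])
    P[i, i] = 1.0 - P[i].sum()

# Each agent independently samples its next bin from row P[current_bin]
agents = rng.integers(0, n_bins, size=20_000)   # arbitrary initial bins
for _ in range(200):
    u = rng.random(agents.size)
    cdf = np.cumsum(P[agents], axis=1)
    agents = (u[:, None] < cdf).argmax(axis=1)  # categorical sample per agent

density = np.bincount(agents, minlength=n_bins) / agents.size
print(np.round(density, 2))  # close to the target distribution
```

Because each agent only needs its own bin and the transition row, the scheme is distributed and scales with swarm size, which is the property the PSG-IMC and PSG-OT algorithms exploit.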
Large-scale velocities and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Fabian
2010-09-15
We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (~0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.
Data Intensive Systems (DIS) Benchmark Performance Summary
2003-08-01
models assumed by today's conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture radar (SAR) codes, large-scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high-speed distributed interactive and data-intensive simulations, and data-oriented problems characterized by pointer-based and other highly irregular data structures.
Debugging and Analysis of Large-Scale Parallel Programs
1989-09-01
USDA-ARS?s Scientific Manuscript database
The combined use of water erosion models and geographic information systems (GIS) has facilitated soil loss estimation at the watershed scale. Tools such as the Geo-spatial interface for the Water Erosion Prediction Project (GeoWEPP) model provide a convenient spatially distributed soil loss estimat...
Solving large scale unit dilemma in electricity system by applying commutative law
NASA Astrophysics Data System (ADS)
Legino, Supriadi; Arianto, Rakhmat
2018-03-01
The conventional system, which pools resources from large centralized power plants interconnected as a network, provides many advantages compared to isolated systems, including optimized efficiency and reliability. However, such large plants require huge capital, and further problems have emerged that hinder the construction of big power plants and their associated transmission lines. By applying the commutative law of mathematics, ab = ba for all a, b ∈ ℝ, the problems associated with the conventional system described above can be reduced. The idea of having many small generating units, named "Listrik Kerakyatan" (abbreviated LK), provides both social and environmental benefits that can be capitalized under proper assumptions. This study compares the costs and benefits of LK to those of the conventional system, using a simulation method to show that LK offers an alternative solution to many problems associated with the large system. The commutative law of algebra can be used as a simple mathematical model to analyze whether the LK system, as an eco-friendly form of distributed generation, can solve various problems associated with a large-scale conventional system. The simulation results show that LK provides more value if its plants operate for fewer than 11 hours as peaker or load-follower plants to improve the load-curve balance of the power system, and indicate that the investment cost of LK plants should be optimized. This study indicates that the benefit of the economies-of-scale principle does not always apply, particularly when the portion of intangible costs and benefits is relatively high.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nole, Michael; Daigle, Hugh; Cook, Ann E.
2017-02-01
The goal of this study is to computationally determine the potential distribution patterns of diffusion-driven methane hydrate accumulations in coarse-grained marine sediments. Diffusion of dissolved methane in marine gas hydrate systems has been proposed as a potential transport mechanism through which large concentrations of hydrate can preferentially accumulate in coarse-grained sediments over geologic time. Using one-dimensional compositional reservoir simulations, we examine hydrate distribution patterns at the scale of individual sand layers (1 to 20 m thick) that are deposited between microbially active fine-grained material buried through the gas hydrate stability zone (GHSZ). We then extrapolate to two-dimensional and basin-scale three-dimensional simulations, where we model dipping sands and multilayered systems. We find that properties of a sand layer including pore size distribution, layer thickness, dip, and proximity to other layers in multilayered systems all exert control on diffusive methane fluxes toward and within a sand, which in turn impact the distribution of hydrate throughout a sand unit. In all of these simulations, we incorporate data on physical properties and sand layer geometries from the Terrebonne Basin gas hydrate system in the Gulf of Mexico. We demonstrate that diffusion can generate high hydrate saturations (upward of 90%) at the edges of thin sands at shallow depths within the GHSZ, but that it is ineffective at producing high hydrate saturations throughout thick (greater than 10 m) sands buried deep within the GHSZ. As a result, we find that hydrate in fine-grained material can preserve high hydrate saturations in nearby thin sands with burial.
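A drastically simplified 1-D sketch of the diffusion mechanism described above reproduces the qualitative result that hydrate accumulates at the edges of a sand layer. This is an explicit finite-difference toy with hypothetical parameters, not the compositional reservoir simulator used in the study: methane is generated in the fine-grained sediment, diffuses toward the sand, and any methane entering the (lower-solubility) sand is treated as precipitating into hydrate:

```python
import numpy as np

# Minimal 1-D sketch of diffusion-driven methane supply to a sand layer
# (illustrative parameters only; not the paper's compositional simulator).
nz, n_steps = 100, 20_000
D, dz, dt = 1e-9, 0.5, 1e6           # diffusivity (m^2/s), grid step (m), time step (s)
assert D * dt / dz**2 < 0.5           # explicit-scheme stability

c = np.zeros(nz)                      # dissolved methane above sand solubility
source = 1e-12                        # microbial methanogenesis in fines (arbitrary units/s)
sand = slice(40, 60)                  # a 10 m sand layer inside fine-grained muds
in_sand = np.zeros(nz, bool)
in_sand[sand] = True

hydrate = np.zeros(nz)
for _ in range(n_steps):
    lap = np.zeros(nz)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dz**2
    c += dt * (D * lap + source * (~in_sand))
    # Inside the sand, solubility is lower: excess methane precipitates as hydrate
    hydrate[in_sand] += c[in_sand]
    c[in_sand] = 0.0

# Hydrate accumulates preferentially at the sand edges, as in the paper
edge = hydrate[40] + hydrate[59]
center = hydrate[49] + hydrate[50]
print(edge > 10 * center)  # True: the edges dominate
```

In this caricature the sand interior receives no diffusive flux at all, which exaggerates but illustrates why thick sands are hard to charge throughout while thin-sand edges reach high saturations.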
Design and Realization of Online Monitoring System of Distributed New Energy and Renewable Energy
NASA Astrophysics Data System (ADS)
Tang, Yanfen; Zhou, Tao; Li, Mengwen; Zheng, Guotai; Li, Hao
2018-01-01
Aimed at the difficulty of centrally monitoring and managing current distributed new energy and renewable energy generation projects, caused by their great variety, differing communication protocols, and wide range of scales, this paper designs an online monitoring system for new energy and renewable energy characterized by distributed deployment, tailorable functions, extendible applications, and fault self-healing. The system is designed on the basis of the international general standard for the grid information data model, formulates a unified data acquisition and transmission standard for different types of new energy and renewable energy generation projects, and can realize unified data acquisition and real-time monitoring of new energy and renewable energy generation projects, such as solar energy, wind power, and biomass energy, within its jurisdiction. The system has been applied in Beijing; at present, 576 projects are connected to it. Good results have been achieved, and the stability and reliability of the system have been validated.
Tropical Oceanic Precipitation Processes Over Warm Pool: 2D and 3D Cloud Resolving Model Simulations
NASA Technical Reports Server (NTRS)
Tao, W.-K.; Johnson, D.; Simpson, J.; Einaudi, Franco (Technical Monitor)
2001-01-01
Rainfall is a key link in the hydrologic cycle as well as the primary heat source for the atmosphere. The vertical distribution of convective latent-heat release modulates the large-scale circulations of the tropics. Furthermore, changes in the moisture distribution at middle and upper levels of the troposphere can affect cloud distributions and cloud liquid water and ice contents. How the incoming solar and outgoing longwave radiation respond to these changes in clouds is a major factor in assessing climate change. Present large-scale weather and climate models simulate these processes only crudely, reducing confidence in their predictions on both global and regional scales. One of the most promising methods to test physical parameterizations used in General Circulation Models (GCMs) and climate models is to use field observations together with Cloud Resolving Models (CRMs). The CRMs use more sophisticated and physically realistic parameterizations of cloud microphysical processes, and allow for their complex interactions with solar and infrared radiative transfer processes. The CRMs can resolve reasonably well the evolution, structure, and life cycles of individual clouds and cloud systems. The major objective of this paper is to investigate the latent heating, moisture, and momentum budgets associated with several convective systems that developed during the TOGA COARE IFA westerly wind burst event (late December 1992). The tool for this study is the Goddard Cumulus Ensemble (GCE) model, which includes a 3-class ice-phase microphysics scheme.
NASA Astrophysics Data System (ADS)
Massiot, Cécile; Nicol, Andrew; McNamara, David D.; Townend, John
2017-08-01
Analysis of fracture orientation, spacing, and thickness from acoustic borehole televiewer (BHTV) logs and cores in the andesite-hosted Rotokawa geothermal reservoir (New Zealand) highlights potential controls on the geometry of the fracture system. Cluster analysis of fracture orientations indicates four fracture sets. Probability distributions of fracture spacing and thickness measured on BHTV logs are estimated for each fracture set, using maximum likelihood estimations applied to truncated size distributions to account for sampling bias. Fracture spacing is dominantly lognormal, though two subordinate fracture sets have a power law spacing. This difference in spacing distributions may reflect the influence of the andesitic sequence stratification (lognormal) and tectonic faults (power law). Fracture thicknesses of 9-30 mm observed in BHTV logs, and 1-3 mm in cores, are interpreted to follow a power law. Fractures in thin sections (˜5 μm thick) do not fit this power law distribution, which, together with their orientation, reflect a change of controls on fracture thickness from uniform (such as thermal) controls at thin section scale to anisotropic (tectonic) at core and BHTV scales of observation. However, the ˜5% volumetric percentage of fractures within the rock at all three scales suggests a self-similar behavior in 3-D. Power law thickness distributions potentially associated with power law fluid flow rates, and increased connectivity where fracture sets intersect, may cause the large permeability variations that occur at hundred meter scales in the reservoir. The described fracture geometries can be incorporated into fracture and flow models to explore the roles of fracture connectivity, stress, and mineral precipitation/dissolution on permeability in such andesite-hosted geothermal systems.
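The truncated-distribution maximum likelihood approach mentioned above can be sketched for a power-law thickness population with a lower detection cutoff. The data, cutoff, and exponent here are synthetic assumptions for illustration, not the Rotokawa BHTV measurements; the estimator is the standard closed-form MLE for a power law truncated below at t_min:

```python
import numpy as np

rng = np.random.default_rng(3)

# Power-law fracture thicknesses above a truncation (detection) limit t_min:
# p(t) ∝ t^(-a) for t >= t_min. Synthetic data with known exponent.
a_true, t_min = 2.5, 1.0                     # exponent and resolution cutoff (mm)
u = rng.random(50_000)
t = t_min * (1 - u) ** (-1 / (a_true - 1))   # inverse-CDF sampling

# Maximum likelihood estimator for the truncated power law:
# a_hat = 1 + n / sum(ln(t_i / t_min))
a_hat = 1 + t.size / np.sum(np.log(t / t_min))
print(round(a_hat, 2))  # recovers a value near 2.5
```

Working only with values above t_min is exactly how sampling bias from limited log resolution is accounted for: the likelihood is conditioned on detection, so the estimate is unbiased by the unseen thin fractures.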
Towards building high performance medical image management system for clinical trials
NASA Astrophysics Data System (ADS)
Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel
2011-03-01
Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large scale image assessment is often performed by a large group of experts by retrieving images from a centralized image repository to workstations to mark up and annotate images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks for such a system, and propose and evaluate a solution that uses hybrid image storage with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise level high performance medical image management systems.
Real-Time Large-Scale Dense Mapping with Surfels
Fu, Xingyin; Zhu, Feng; Wu, Qingxiao; Sun, Yunlei; Lu, Rongrong; Yang, Ruigang
2018-01-01
Real-time dense mapping systems have been developed since the birth of consumer RGB-D cameras. Currently, there are two commonly used models in dense mapping systems: the truncated signed distance function (TSDF) and the surfel. State-of-the-art dense mapping systems usually work fine with small-sized regions, but the generated dense surface may be unsatisfactory around loop closures when the system tracking drift grows large. In addition, the efficiency of a system with the surfel model slows down when the number of model points in the map becomes large. In this paper, we propose to use two maps in the dense mapping system. The RGB-D images are integrated into a local surfel map. Old surfels that were reconstructed earlier and lie far away from the camera frustum are moved from the local map to the global map. The number of surfels updated in the local map as each frame arrives is thus kept bounded. Therefore, in our system, the scene that can be reconstructed is very large, and the frame rate of our system remains high. We detect loop closures and optimize the pose graph to distribute system tracking drift. The positions and normals of the surfels in the map are also corrected using an embedded deformation graph so that they are consistent with the updated poses. In order to deal with large surface deformations, we propose a new method for constructing constraints from system trajectories and loop closure keyframes. The proposed method stabilizes large-scale surface deformation. Experimental results show that our novel system performs better than prior state-of-the-art dense mapping systems. PMID:29747450
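The local/global two-map bookkeeping described above can be sketched as follows. The positions and the distance threshold are hypothetical stand-ins for the paper's camera-frustum test; the point is only that the per-frame working set stays bounded while no surfel is discarded:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy surfel maps: surfels far from the current camera position are moved
# from the bounded local map into the global map (illustrative values only).
local_map = rng.uniform(-10, 10, size=(5_000, 3))   # surfel positions (m)
global_map = np.empty((0, 3))
active_radius = 4.0                                  # hypothetical frustum proxy

def migrate(local_map, global_map, camera_pos, radius):
    """Move surfels farther than `radius` from the camera into the global map."""
    d = np.linalg.norm(local_map - camera_pos, axis=1)
    far = d > radius
    return local_map[~far], np.vstack([global_map, local_map[far]])

camera_pos = np.zeros(3)
local_map, global_map = migrate(local_map, global_map, camera_pos, active_radius)

# The local map stays small and bounded; nothing is lost overall
print(len(local_map) + len(global_map))  # 5000
```

In the actual system the migrated surfels would later be corrected by the embedded deformation graph after loop closure, but the partitioning logic is the same: integrate new frames only into the small local set.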
Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations
NASA Technical Reports Server (NTRS)
Hogan, Craig J.; Woods, Jorden
1992-01-01
The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining these cloud types are overcast footprints and cloud top pressures less than 400 hPa, but DC has higher cloud optical depths (≥10) than CS (<10). The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups for a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%.
In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation speeds/directions.
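The median-diameter partition described above can be sketched in a few lines: the median diameter is the diameter at which the cumulative footprint count reaches half the total, and it splits the cloud objects into "small" and "large" groups. The numbers below are made up for illustration (the study's actual medians are 502 km for DC and 310 km for CS):

```python
def median_diameter(objects):
    """objects: list of (diameter_km, footprint_count) tuples.
    Returns the diameter at which half the total footprints is reached."""
    objects = sorted(objects)                 # sort by diameter
    total = sum(n for _, n in objects)
    cum = 0
    for d, n in objects:
        cum += n
        if cum >= total / 2:                  # half the footprints reached
            return d
    return objects[-1][0]

def partition(objects, d_med):
    """Split cloud objects into small (<= median) and large (> median) groups."""
    small = [(d, n) for d, n in objects if d <= d_med]
    large = [(d, n) for d, n in objects if d > d_med]
    return small, large

# Hypothetical (diameter km, footprint count) cloud objects:
clouds = [(100, 40), (250, 30), (502, 20), (900, 8), (1500, 2)]
d_med = median_diameter(clouds)
small, large = partition(clouds, d_med)
```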
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
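The roughness exponent quoted above (W ~ L^0.5) is the slope of log W against log L across system sizes. A minimal sketch of that fit, on synthetic widths constructed to follow the stated scaling exactly:

```python
import math

def roughness_exponent(sizes, widths):
    """Least-squares slope of log(W) vs log(L): W ~ L**zeta."""
    xs = [math.log(L) for L in sizes]
    ys = [math.log(W) for W in widths]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Widths generated exactly as W = 0.8 * L**0.5 recover zeta = 0.5.
sizes = [16, 32, 64, 128]
widths = [0.8 * L ** 0.5 for L in sizes]
zeta = roughness_exponent(sizes, widths)
```

On real model data the fitted slope would scatter around 0.5 rather than match it exactly.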
Large-scale motions in the universe: Using clusters of galaxies as tracers
NASA Technical Reports Server (NTRS)
Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard
1995-01-01
Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distribution of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace well the large-scale velocity field and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters 689 +/- 178 km/s on approximately 150/h Mpc scale (Lauer & Postman 1994) is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).
Visualizing complex hydrodynamic features
NASA Astrophysics Data System (ADS)
Kempf, Jill L.; Marshall, Robert E.; Yen, Chieh-Cheng
1990-08-01
The Lake Erie Forecasting System is a cooperative project by university, private and governmental institutions to provide continuous forecasting of three-dimensional structure within the lake. The forecasts will include water velocity and temperature distributions throughout the body of water, as well as water level and wind-wave distributions at the lake's surface. Many hydrodynamic features can be extracted from this data, including coastal jets, large-scale thermocline motion and zones of upwelling and downwelling. A visualization system is being developed that will aid in understanding these features and their interactions. Because of the wide variety of features, they cannot all be adequately represented by a single rendering technique. Particle tracing, surface rendering, and volumetric techniques are all necessary. This visualization effort is aimed towards creating a system that will provide meaningful forecasts for those using the lake for recreational and commercial purposes. For example, the fishing industry needs to know about large-scale thermocline motion in order to find the best fishing areas, and power plants need to know water intake temperatures. The visualization system must convey this information in a manner that is easily understood by these users. Scientists must also be able to use this system to verify their hydrodynamic simulation. The focus of the system, therefore, is to provide the information to serve these diverse interests, without overwhelming any single user with unnecessary data.
Dynamic structural disorder in supported nanoscale catalysts
NASA Astrophysics Data System (ADS)
Rehr, J. J.; Vila, F. D.
2014-04-01
We investigate the origin and physical effects of "dynamic structural disorder" (DSD) in supported nano-scale catalysts. DSD refers to the intrinsic fluctuating, inhomogeneous structure of such nano-scale systems. In contrast to bulk materials, nano-scale systems exhibit substantial fluctuations in structure, charge, temperature, and other quantities, as well as large surface effects. The DSD is driven largely by the stochastic librational motion of the center of mass and fluxional bonding at the nanoparticle surface due to thermal coupling with the substrate. Our approach for calculating and understanding DSD is based on a combination of real-time density functional theory/molecular dynamics simulations, transient coupled-oscillator models, and statistical mechanics. This approach treats thermal and dynamic effects over multiple time-scales, and includes bond-stretching and -bending vibrations, and transient tethering to the substrate at longer ps time-scales. Potential effects on the catalytic properties of these clusters are briefly explored. Model calculations of molecule-cluster interactions and molecular dissociation reaction paths are presented in which the reactant molecules are adsorbed on the surface of dynamically sampled clusters. This model suggests that DSD can affect both the prefactors and distribution of energy barriers in reaction rates, and thus can significantly affect catalytic activity at the nano-scale.
Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.
Rangan, Aaditya V; Cai, David
2007-02-01
We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. 
For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
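Idea (i) above, the integrating factor that keeps stiff conductance equations stable, can be illustrated with a single exponential-Euler update for a leaky integrate-and-fire voltage equation. Parameter values here are illustrative, not taken from the paper:

```python
import math

def exp_euler_step(V, gL, EL, gE, EE, dt):
    """Advance dV/dt = -gL*(V - EL) - gE*(V - EE) by dt, holding gE fixed.
    The exact solution for frozen conductances is used, so the step stays
    stable even when gE is large (high-conductance states)."""
    g_tot = gL + gE
    V_inf = (gL * EL + gE * EE) / g_tot      # steady state for frozen conductances
    return V_inf + (V - V_inf) * math.exp(-g_tot * dt)

V = -70.0                                     # initial voltage (mV)
for _ in range(1000):
    V = exp_euler_step(V, gL=0.05, EL=-70.0, gE=0.5, EE=0.0, dt=0.1)
# V relaxes toward V_inf = (0.05*(-70) + 0.5*0) / 0.55, about -6.36 mV
```

A full scheme would also handle threshold crossings, resets, time-varying conductances, and the spike-spike corrections described above; this sketch only shows why the integrating factor tolerates large total conductance.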
Gravitational lenses and large scale structure
NASA Technical Reports Server (NTRS)
Turner, Edwin L.
1987-01-01
Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.
Fiáth, Richárd; Beregszászi, Patrícia; Horváth, Domonkos; Wittner, Lucia; Aarts, Arno A. A.; Ruther, Patrick; Neves, Hercules P.; Bokor, Hajnalka; Acsády, László
2016-01-01
Recording simultaneous activity of a large number of neurons in distributed neuronal networks is crucial to understand higher order brain functions. We demonstrate the in vivo performance of a recently developed electrophysiological recording system comprising a two-dimensional, multi-shank, high-density silicon probe with integrated complementary metal-oxide semiconductor electronics. The system implements the concept of electronic depth control (EDC), which enables the electronic selection of a limited number of recording sites on each of the probe shafts. This innovative feature of the system permits simultaneous recording of local field potentials (LFP) and single- and multiple-unit activity (SUA and MUA, respectively) from multiple brain sites with high quality and without the actual physical movement of the probe. To evaluate the in vivo recording capabilities of the EDC probe, we recorded LFP, MUA, and SUA in acute experiments from cortical and thalamic brain areas of anesthetized rats and mice. The advantages of large-scale recording with the EDC probe are illustrated by investigating the spatiotemporal dynamics of pharmacologically induced thalamocortical slow-wave activity in rats and by the two-dimensional tonotopic mapping of the auditory thalamus. In mice, spatial distribution of thalamic responses to optogenetic stimulation of the neocortex was examined. Utilizing the benefits of the EDC system may result in a higher yield of useful data from a single experiment compared with traditional passive multielectrode arrays, and thus in the reduction of animals needed for a research study. PMID:27535370
NASA Astrophysics Data System (ADS)
Crave, A.; Davy, P.
1997-01-01
We present a statistical analysis on two watersheds in French Brittany whose drainage areas are about 10,000 and 2000 km2. The channel system was analysed from the digitised blue lines of the 1:100,000 map and from a 250-m DEM. Link lengths follow an exponential distribution, consistent with the Markovian model of channel branching proposed by Smart (1968). The departure from the exponential distribution for small lengths, that has been extensively discussed before, results from a statistical effect due to the finite number of channels and junctions. The Strahler topology applied on channels defines a self-similar organisation whose similarity dimension is about 1.7, that is clearly smaller than the value of 2 expected for a random organisation. The similarity dimension is consistent with an independent measurement of the Horton ratios of stream numbers and lengths. The variables defined by an upstream integral (drainage area, mainstream length, upstream length) follow power-law distributions limited at large scales by a finite size effect, due to the finite area of the watersheds. A special emphasis is given to the exponent of the drainage area, a_A, that has been previously discussed in the context of different aggregation models relevant to channel network growth. We show that a_A is consistent with 4/3, a value that was obtained and analytically demonstrated from directed random walk aggregating models, inspired by the model of Scheidegger (1967). The drainage density and mainstream length present no simple scaling with area, except at large areas where they tend to trivial values: constant density and square root of drainage area, respectively. These asymptotic limits necessarily imply that the space dimension of channel networks is 2, equal to the embedding space. The limits are reached for drainage areas larger than 100 km2. 
For smaller areas, the asymptotic limit represents either a lower bound (drainage density) or an upper bound (mainstream length) of the distributions. Because the fluctuations of the drainage density slowly converge to a finite limit, the system could be adequately described as a fat fractal, where the average drainage density is the sum of a constant plus a fluctuation decreasing as a power law with integrating area. A fat fractal hypothesis could explain why the similarity dimension is not equal to the fractal capacity dimension, as it is for thin fractals. The physical consequences are not yet really understood, but we draw an analogy with a directed aggregating system where the growth process involves both stochastic and deterministic growth. These models are known to be fat fractals, and the deterministic growth, which constitutes a fundamental ingredient of these models, could be attributed in river systems to the role of terrestrial gravity.
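The consistency check between the similarity dimension and the Horton ratios mentioned above amounts to D = ln(R_B)/ln(R_L), where R_B is the bifurcation ratio (stream numbers) and R_L the length ratio. A small sketch, with illustrative ratios chosen so D comes out at the reported 1.7:

```python
import math

def similarity_dimension(R_B, R_L):
    """Horton-ratio estimate of the similarity dimension of a
    Strahler-ordered channel network: D = ln(R_B) / ln(R_L)."""
    return math.log(R_B) / math.log(R_L)

# Illustrative ratios: R_B = 4.0 and R_L = 4.0**(1/1.7) give D = 1.7 exactly.
D = similarity_dimension(4.0, 4.0 ** (1 / 1.7))
```

Real watersheds yield R_B and R_L from regressions of stream numbers and lengths against Strahler order; the values above are not the paper's measurements.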
Portable parallel stochastic optimization for the design of aeropropulsion components
NASA Technical Reports Server (NTRS)
Sues, Robert H.; Rhodes, G. S.
1994-01-01
This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for an example problem using the portable parallel programming environment Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology is well suited to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. 
Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications for which MSO can be applied, including NASA's High-Speed-Civil Transport, and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
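The speedup and efficiency figures quoted above follow from the standard definitions S = T1/Tp and E = S/p. A trivial sketch with illustrative timings:

```python
def speedup(t_serial, t_parallel):
    """Speedup S = T1 / Tp."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = S / p (1.0 means perfect linear speedup)."""
    return speedup(t_serial, t_parallel) / n_procs

# "Speedups of almost 19 times for 20 workstations" corresponds to
# an efficiency of about 95% (timings here are made up):
s = speedup(1000.0, 1000.0 / 19.0)
e = efficiency(1000.0, 1000.0 / 19.0, 20)
```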
El-Chakhtoura, Joline; Prest, Emmanuelle; Saikaly, Pascal; van Loosdrecht, Mark; Hammes, Frederik; Vrouwenvelder, Hans
2015-05-01
Understanding the biological stability of drinking water distribution systems is imperative in the framework of process control and risk management. The objective of this research was to examine the dynamics of the bacterial community during drinking water distribution at high temporal resolution. Water samples (156 in total) were collected over short time-scales (minutes/hours/days) from the outlet of a treatment plant and a location in its corresponding distribution network. The drinking water is treated by biofiltration and disinfectant residuals are absent during distribution. The community was analyzed by 16S rRNA gene pyrosequencing and flow cytometry as well as conventional, culture-based methods. Despite a random dramatic event (detected with pyrosequencing and flow cytometry but not with plate counts), the bacterial community profile at the two locations did not vary significantly over time. A diverse core microbiome was shared between the two locations (58-65% of the taxa and 86-91% of the sequences) and found to be dependent on the treatment strategy. The bacterial community structure changed during distribution, with greater richness detected in the network and phyla such as Acidobacteria and Gemmatimonadetes becoming abundant. The rare taxa were the most dynamic, accounting for most of the change during water distribution. This change did not have hygienic implications and is contingent on the sensitivity of the applied methods. The concept of biological stability therefore needs to be revised. Biostability is generally desired in drinking water guidelines but may be difficult to achieve in large-scale complex distribution systems that are inherently dynamic.
Generic solar photovoltaic system dynamic simulation model specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellis, Abraham; Behnke, Michael Robert; Elliott, Ryan Thomas
This document is intended to serve as a specification for generic solar photovoltaic (PV) system positive-sequence dynamic models to be implemented by software developers and approved by the WECC MVWG for use in bulk system dynamic simulations in accordance with NERC MOD standards. Two specific dynamic models are included in the scope of this document. The first, a Central Station PV System model, is intended to capture the most important dynamic characteristics of large scale (> 10 MW) PV systems with a central Point of Interconnection (POI) at the transmission level. The second, a Distributed PV System model, is intended to represent an aggregation of smaller, distribution-connected systems that comprise a portion of a composite load that might be modeled at a transmission load bus.
Parallel Simulation of Unsteady Turbulent Flames
NASA Technical Reports Server (NTRS)
Menon, Suresh
1996-01-01
Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, the high cost and their limited availability, makes practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. 
Recently, a new model for turbulent combustion was developed, in which the combustion is modeled, within the subgrid (small-scales) using a methodology that simulates the mixing and the molecular transport and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computations must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as: Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system independent Message Passing Interface (MPI) compiler. In this paper, timing data on these machines is reported along with some characteristic results.
On the improvement for charging large-scale flexible electrostatic actuators
NASA Astrophysics Data System (ADS)
Liao, Hsu-Ching; Chen, Han-Long; Su, Yu-Hao; Chen, Yu-Chi; Ko, Wen-Ching; Liou, Chang-Ho; Wu, Wen-Jong; Lee, Chih-Kung
2011-04-01
Recently, the development of flexible electret-based electrostatic actuators has been widely discussed. These devices offer high sound quality, low energy consumption, and a flexible structure that can be cut to any shape. However, achieving a uniform charge on the electret diaphragm is one of the most critical processes needed to make the speaker ready for large-scale production. In this paper, corona discharge equipment containing multiple corona probes and a grid bias was set up to inject spatial charges into the electret diaphragm. The multi-corona probe system was tuned to achieve a uniform charge distribution on the electret diaphragm; the processing conditions include the distance between the corona probes and the voltages of the corona probes and grid bias. We assembled the flexible electret loudspeakers first and then measured their sound pressure and beam pattern. The uniform charge distribution within the electret diaphragm allows the flexible electret loudspeaker to be shaped arbitrarily and its sound distribution to be tailored to specification. Potential futuristic applications of this device, such as sound posters, smart clothes, and sound wallpaper, are discussed as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
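Skewness and excess kurtosis are the usual scalar diagnostics for the kind of non-Maxwellian behavior described above: both vanish for a Gaussian (Maxwellian) velocity component, and nonzero values signal asymmetry and overpopulated tails. A sketch on synthetic data, not the experiment's:

```python
import random
import statistics

def skewness(v):
    """Third standardized moment; 0 for a Gaussian sample (in expectation)."""
    m = statistics.fmean(v)
    s = statistics.pstdev(v)
    return sum(((x - m) / s) ** 3 for x in v) / len(v)

def excess_kurtosis(v):
    """Fourth standardized moment minus 3; 0 for a Gaussian, positive
    for the overpopulated tails reported above."""
    m = statistics.fmean(v)
    s = statistics.pstdev(v)
    return sum(((x - m) / s) ** 4 for x in v) / len(v) - 3.0

random.seed(1)
gauss = [random.gauss(0.0, 1.0) for _ in range(100000)]
sk, ek = skewness(gauss), excess_kurtosis(gauss)   # both near zero
```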
Bringing modeling to the masses: A web based system to predict potential species distributions
Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul
2010-01-01
Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.
Testing Collisional Scaling Laws: Comparing with Observables
NASA Astrophysics Data System (ADS)
Davis, D. R.; Marzari, F.; Farinella, P.
1999-09-01
How large bodies break up in response to energetic collisions is a problem that has attracted considerable attention in recent years. Ever more sophisticated computation methods have also been developed; prominent among these are hydrocode simulations of collisional disruption by Benz and Asphaug (1999, Icarus, in press), Love and Ahrens (1996, LPSC XXVII, 777-778), and Melosh and Ryan (1997, Icarus 129, 562-564). Durda et al. (1998, Icarus 135, 431-440) used the observed asteroid size distribution to infer a scaling algorithm. The present situation is that there are several proposed scaling laws that differ by as much as two orders of magnitude at particular sizes. We have expanded upon the work of Davis et al. (1994, Goutelas Proceedings) and tested the suite of proposed scaling algorithms against observations of the main-belt asteroids. The effects of collisions among the asteroids produce the following observables: (a) the size distribution has been significantly shaped by collisions, (b) collisions have produced about 25 well recognized asteroid families, and (c) the basaltic crust of Vesta has been largely preserved in the face of about 4.5 Byr of impacts. We will present results from a numerical simulation of asteroid collisional evolution over the age of the solar system using proposed scaling laws and a range of hypothetical initial populations.
Implementation Strategies for Large-Scale Transport Simulations Using Time Domain Particle Tracking
NASA Astrophysics Data System (ADS)
Painter, S.; Cvetkovic, V.; Mancillas, J.; Selroos, J.
2008-12-01
Time domain particle tracking is an emerging alternative to the conventional random walk particle tracking algorithm. With time domain particle tracking, particles are moved from node to node on one-dimensional pathways defined by streamlines of the groundwater flow field or by discrete subsurface features. The time to complete each deterministic segment is sampled from residence time distributions that include the effects of advection, longitudinal dispersion, a variety of kinetically controlled retention (sorption) processes, linear transformation, and temporal changes in groundwater velocities and sorption parameters. The simulation results in a set of arrival times at a monitoring location that can be post-processed with a kernel method to construct mass discharge (breakthrough) versus time. Implementation strategies differ for discrete flow (fractured media) systems and continuous porous media systems. The implementation strategy also depends on the scale at which hydraulic property heterogeneity is represented in the supporting flow model. For flow models that explicitly represent discrete features (e.g., discrete fracture networks), the sampling of residence times along segments is conceptually straightforward. For continuous porous media, such sampling needs to be related to the Lagrangian velocity field. Analytical or semi-analytical methods may be used to approximate the Lagrangian segment velocity distributions in aquifers with low-to-moderate variability, thereby capturing transport effects of subgrid velocity variability. If variability in hydraulic properties is large, however, Lagrangian velocity distributions are difficult to characterize and numerical simulations are required; in particular, numerical simulations are likely to be required for estimating the velocity integral scale as a basis for advective segment distributions. Aquifers with evolving heterogeneity scales present additional challenges. 
Large-scale simulations of radionuclide transport at two potential repository sites for high-level radioactive waste will be used to demonstrate the potential of the method. The simulations considered approximately 1000 source locations, multiple radionuclides with contrasting sorption properties, and abrupt changes in groundwater velocity associated with future glacial scenarios. Transport pathways linking the source locations to the accessible environment were extracted from discrete feature flow models that include detailed representations of the repository construction (tunnels, shafts, and emplacement boreholes) embedded in stochastically generated fracture networks. Acknowledgment: The authors are grateful to the SwRI Advisory Committee for Research, the Swedish Nuclear Fuel and Waste Management Company, and Posiva Oy for financial support.
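The core of time domain particle tracking, summing independently sampled segment residence times into a particle arrival time, can be sketched as follows. Exponential residence-time distributions are used here purely as a placeholder for the advection/dispersion/retention distributions used in practice, and the pathway means are made up:

```python
import random

def arrival_time(segment_mean_times, rng):
    """One particle: sample a residence time for each deterministic
    pathway segment and sum them into a total travel time."""
    return sum(rng.expovariate(1.0 / t) for t in segment_mean_times)

rng = random.Random(42)
pathway = [5.0, 12.0, 3.0]          # mean residence time per segment (years)
arrivals = sorted(arrival_time(pathway, rng) for _ in range(20000))
mean_arrival = sum(arrivals) / len(arrivals)   # expect about 5 + 12 + 3 = 20
```

The sorted arrival times would then be post-processed with a kernel method, as the abstract describes, to produce a breakthrough curve of mass discharge versus time.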
NASA Astrophysics Data System (ADS)
Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars
2013-08-01
There has been a large body of statements claiming that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. It is therefore necessary to introduce a framework for measuring to what extent power system operation will be changed by various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlation between DER deployment and distribution grid operation. To validate the framework, the authors describe reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand, and electric vehicles. Quantitative analysis was then made on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations were performed on two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment makes it possible to reduce power losses and voltage drops by supplying power from local generation and optimizing local load profiles.
Understanding I/O workload characteristics of a Peta-scale storage system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Youngjae; Gunasekaran, Raghul
2015-01-01
Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, in total over 250 thousand compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for the peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
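The Pareto fit mentioned above can be sketched with a simple maximum-likelihood estimate. The inter-arrival samples below are synthetic draws (the Spider traces are not reproduced here), so the shape value 1.5 is purely illustrative:

```python
import math
import random

def pareto_mle(samples, x_min=1.0):
    """Maximum-likelihood estimate of the Pareto shape parameter,
    assuming a known scale (minimum) x_min."""
    n = len(samples)
    return n / sum(math.log(x / x_min) for x in samples)

random.seed(42)
# Hypothetical inter-arrival times drawn from a Pareto with shape
# alpha = 1.5, mimicking the heavy tail reported for request arrivals.
alpha_true = 1.5
samples = [random.paretovariate(alpha_true) for _ in range(100_000)]

alpha_hat = pareto_mle(samples)
print(f"estimated shape: {alpha_hat:.2f}")  # close to 1.5
```

With 100,000 samples the estimator's standard error is roughly alpha/sqrt(n), so the recovered shape is tight around the true value.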
Silver hake tracks changes in Northwest Atlantic circulation.
Nye, Janet A; Joyce, Terrence M; Kwon, Young-Oh; Link, Jason S
2011-08-02
Recent studies documenting shifts in spatial distribution of many organisms in response to a warming climate highlight the need to understand the mechanisms underlying species distribution at large spatial scales. Here we present one noteworthy example of remote oceanographic processes governing the spatial distribution of adult silver hake, Merluccius bilinearis, a commercially important fish in the Northeast US shelf region. Changes in spatial distribution of silver hake over the last 40 years are highly correlated with the position of the Gulf Stream. These changes in distribution are in direct response to local changes in bottom temperature on the continental shelf that are responding to the same large scale circulation change affecting the Gulf Stream path, namely changes in the Atlantic meridional overturning circulation (AMOC). If the AMOC weakens, as is suggested by global climate models, silver hake distribution will remain in a poleward position, the extent to which could be forecast at both decadal and multidecadal scales.
NASA Astrophysics Data System (ADS)
Comas, X.; Wright, W. J.; Hynek, S. A.; Ntarlagiannis, D.; Terry, N.; Job, M. J.; Fletcher, R. C.; Brantley, S.
2017-12-01
Previous studies in the Rio Icacos watershed in the Luquillo Mountains (Puerto Rico) have shown that regolith materials develop rapidly from the alteration of quartz diorite bedrock and create a blanket on top of the bedrock whose thickness decreases with proximity to the knickpoint. The watershed is also characterized by a system of heterogeneous fractures that likely drive bedrock weathering and the formation of corestones and associated spheroidal fracturing and rindlets. Previous efforts to characterize the spatial distribution of fractures were based on aerial images that did not account for the subsurface architecture of the critical zone. In this study we use an array of near-surface geophysical methods at multiple scales to better understand how the spatial distribution and density of fractures vary with topography and proximity to the knickpoint. Large km-scale surveys using ground penetrating radar (GPR), terrain conductivity, and capacitively coupled resistivity were combined with smaller scale surveys (10-100 m) using electrical resistivity imaging (ERI) and shallow seismics, and were directly constrained with boreholes from previous studies. Geophysical results were compared to theoretical models of compressive stress due to gravity and regional compression, and showed consistency in describing increased dilation of fractures with proximity to the knickpoint. This study shows the potential of multidisciplinary approaches to model critical zone processes at multiple scales of measurement and high spatial resolution. The approach can be particularly efficient at large km-scales when applying geophysical methods that allow for rapid data acquisition (i.e., walking pace) at high spatial resolution (i.e., cm scales).
Nonextensive Entropy Approach to Space Plasma Fluctuations and Turbulence
NASA Astrophysics Data System (ADS)
Leubner, M. P.; Vörös, Z.; Baumjohann, W.
Spatial intermittency in fully developed turbulence is an established feature of astrophysical plasma fluctuations and is particularly apparent in the interplanetary medium through in situ observations. In this situation, the classical Boltzmann-Gibbs extensive thermo-statistics, applicable when microscopic interactions and memory are short-ranged and the environment is a continuous and differentiable manifold, fails. Upon generalization of the entropy function to nonextensivity, accounting for long-range interactions and thus for correlations in the system, it is demonstrated that the corresponding probability distribution functions (PDFs) are members of a family of specific power-law distributions. In particular, the resulting theoretical bi-κ functional accurately reproduces the observed global leptokurtic, non-Gaussian shape of the increment PDFs of characteristic solar wind variables on all scales, where nonlocality in turbulence is controlled via a multiscale coupling parameter. Gradual decoupling is obtained by enhancing the spatial separation scale, corresponding to increasing κ-values in the case of slow solar wind conditions, where a Gaussian is approached in the limit of large scales. In contrast, the scaling properties in the high-speed solar wind are predominantly governed by the mean energy or variance of the distribution, appearing as a second parameter in the theory. The PDFs of solar wind scalar field differences are computed from WIND and ACE data for different time-lags and bulk speeds and analyzed within the nonextensive theory, where a particular nonlinear dependence of the coupling parameter and variance on scale also arises for best-fitting theoretical PDFs.
Consequently, nonlocality in fluctuations, related to both turbulence and its large-scale driving, should be linked to long-range interactions in the context of nonextensive entropy generalization, fundamentally providing the physical background of the observed scale dependence of fluctuations in intermittent space plasmas.
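The entropy generalization the abstract refers to can be written down explicitly. A minimal sketch, assuming the standard Tsallis form and the usual identification between the entropic index q and κ (the authors' exact parametrization may differ):

```latex
% Tsallis' nonextensive generalization of the Boltzmann-Gibbs entropy:
S_q \;=\; k_B \, \frac{1 - \sum_i p_i^{\,q}}{q - 1},
\qquad
\lim_{q \to 1} S_q \;=\; -k_B \sum_i p_i \ln p_i .

% Maximizing S_q under the usual constraints yields power-law PDFs; with
% the identification q = 1 + 1/\kappa, a one-dimensional kappa PDF reads
f(v) \;\propto\; \left( 1 + \frac{v^2}{\kappa\,\theta^2} \right)^{-(\kappa + 1)} .
```

As κ → ∞ (q → 1) the power law relaxes to a Gaussian, matching the large-scale slow-wind limit described above, while small κ gives the leptokurtic, heavy-tailed increment PDFs.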
Pinton, Aurélie; Agnèse, Jean-François; Paugy, Didier; Otero, Olga
2013-03-01
To explain the spatial variability of fish taxa at a large scale, two alternative proposals are usually evoked. In recent years, the debate has centred on the relative roles of present and historical processes in shaping biodiversity patterns. In Africa, attempts to understand the processes that determine the large-scale distribution of fishes and explorations of historical contingencies have been under-investigated, given that most phylogenetic studies focus on the history of the Great Lakes. Here, we explore phylogeographic events in the evolutionary history of Synodontis (Mochokidae, Siluriformes) across Africa during the Cenozoic, focusing on the putative role of historical processes. We discuss how known geological events together with hydrographical changes contributed to shape Synodontis biogeographical history. Synodontis was chosen on the basis of its high diversity and distribution in Africa: it consists of approximately 120 species that are widely distributed in all hydrographic basins except the Maghreb and South Africa. We propose the most comprehensive phylogeny of this catfish genus. Our results provide support for the 'hydrogeological' hypothesis, which proposes that palaeohydrological changes linked with the geological context may have been the cause of diversification of freshwater fish deep in the Tertiary. More precisely, the two main geological structures that helped shape the hydrographical network in Africa, namely the Central African Shear zone and the East African rift system, appear as strong drivers of Synodontis diversification and evolution. Copyright © 2012 Elsevier Inc. All rights reserved.
Biodiversity, ecology, and microelement composition of Kyzylkum Desert shrubs (Uzbekistan)
Lyuba A. Kapustina
2001-01-01
Geobotanic research and large-scale mapping with the help of a Geographical Information System (GIS) permit us to determine the present state of Kyzylkum Desert shrublands, the distribution patterns of plant communities, and the chemical composition of the main dominant shrubs. Zonal vegetation types were formed on the basis of Old Xerophilous and Old Mediterranean floras in...
NASA Astrophysics Data System (ADS)
Frey, Davide; Guerraoui, Rachid; Kermarrec, Anne-Marie; Koldehofe, Boris; Mogensen, Martin; Monod, Maxime; Quéma, Vivien
Gossip-based information dissemination protocols are considered easy to deploy, scalable and resilient to network dynamics. Load-balancing is inherent in these protocols as the dissemination work is evenly spread among all nodes. Yet, large-scale distributed systems are usually heterogeneous with respect to network capabilities such as bandwidth. In practice, a blind load-balancing strategy might significantly hamper the performance of the gossip dissemination.
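The load-balancing claim above is easiest to see against the baseline: in homogeneous push gossip, every informed node forwards the message to the same number of peers per round, so dissemination work spreads evenly. A minimal round-based sketch (the fanout value and network size are illustrative, not the paper's):

```python
import random

def push_gossip(n, fanout, seed=0):
    """Simulate rounds of push gossip: each informed node forwards the
    message to `fanout` uniformly chosen peers per round. Returns the
    number of rounds until every node is informed."""
    rng = random.Random(seed)
    informed = {0}          # node 0 originates the message
    rounds = 0
    while len(informed) < n:
        new = set()
        for _node in informed:
            for _ in range(fanout):
                new.add(rng.randrange(n))  # blind, uniform peer choice
        informed |= new
        rounds += 1
    return rounds

print(push_gossip(10_000, fanout=3))  # typically O(log n) rounds
```

The uniform peer choice is exactly the "blind" strategy the abstract criticizes: a slow node receives as much forwarding work as a fast one, which is what a bandwidth-aware variant must correct.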
AFRL/Cornell Information Assurance Institute
2007-03-01
collaborations involving Cornell and AFRL researchers, with AFRL researchers able to participate in Cornell research projects, facilitating technology ... approach to developing a science base and technology for supporting large-scale reliable distributed systems. First, solutions to core problems were
Distributed multimodal data fusion for large scale wireless sensor networks
NASA Astrophysics Data System (ADS)
Ertin, Emre
2006-05-01
Sensor network technology has enabled new surveillance systems in which sensor nodes equipped with processing and communication capabilities can collaboratively detect, classify and track targets of interest over a large surveillance area. In this paper we study distributed fusion of multimodal sensor data for extracting target information from a large-scale sensor network. Optimal tracking, classification, and reporting of threat events require joint consideration of multiple sensor modalities. Multiple sensor modalities improve tracking by reducing the uncertainty in the track estimates as well as resolving track-sensor data association problems. Our approach to solving the fusion problem with a large number of multimodal sensors is the construction of likelihood maps. The likelihood maps provide summary data for the solution of the detection, tracking and classification problem. The likelihood map presents the sensory information in a format that is easy for decision makers to interpret and is suitable for fusion with spatial prior information such as maps and imaging data from stand-off imaging sensors. We follow a statistical approach to combine sensor data at different levels of uncertainty and resolution. The likelihood map transforms each sensor data stream into a spatio-temporal likelihood map ideally suited for fusion with imaging sensor outputs and prior geographic information about the scene. We also discuss distributed computation of the likelihood map using a gossip-based algorithm and present simulation results.
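A likelihood map of this kind can be sketched as per-sensor log-likelihood grids summed under a conditional-independence assumption. The Gaussian range-noise model, grid size, and sensor positions below are illustrative stand-ins for the paper's sensor models, not its actual formulation:

```python
import math

GRID = 50  # hypothetical 50 x 50 surveillance grid

def sensor_loglik(sensor_xy, reading, noise=5.0):
    """Log-likelihood map for one range reading, assuming Gaussian
    range noise (an illustrative model)."""
    sx, sy = sensor_xy
    grid = [[0.0] * GRID for _ in range(GRID)]
    for x in range(GRID):
        for y in range(GRID):
            r = math.hypot(x - sx, y - sy)
            grid[x][y] = -((r - reading) ** 2) / (2 * noise ** 2)
    return grid

def fuse(maps):
    """Fuse modalities by summing log-likelihoods (conditional independence)."""
    return [[sum(m[x][y] for m in maps) for y in range(GRID)] for x in range(GRID)]

# Target at (30, 20); three corner sensors report noiseless ranges to it.
target = (30, 20)
sensors = [(0, 0), (49, 0), (0, 49)]
maps = [sensor_loglik(s, math.hypot(target[0] - s[0], target[1] - s[1]))
        for s in sensors]
fused = fuse(maps)
peak = max(((x, y) for x in range(GRID) for y in range(GRID)),
           key=lambda p: fused[p[0]][p[1]])
print(peak)  # the fused map peaks at the target cell
```

Summing the per-sensor maps is what makes the scheme attractive for gossip-style distributed computation: summation is associative and commutative, so partial maps can be merged in any order.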
Gyrodampers for large space structures
NASA Technical Reports Server (NTRS)
Aubrun, J. N.; Margulies, G.
1979-01-01
The problem of controlling the vibrations of large space structures by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single-gimbal control moment gyros that are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results from one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.
Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System
Prest, E. I.; Weissbrodt, D. G.; Hammes, F.; van Loosdrecht, M. C. M.; Vrouwenvelder, J. S.
2016-01-01
Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10³ cells mL⁻¹ in winter time up to 455 × 10³ cells mL⁻¹ in summer time) and in bacterial ATP concentrations (<1–3.6 ng L⁻¹), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10³ cells mL⁻¹ on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson's correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49).
Collecting a large dataset at high frequency over a two year period enabled the characterization of previously undocumented seasonal dynamics in the distribution network. Moreover, high-resolution FCM data enabled prediction of bacterial cell concentrations at specific water temperatures and time of year. The study highlights the need to systematically assess temporal fluctuations in parallel to spatial dynamics for individual drinking water distribution systems. PMID:27792739
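The correlation analysis reported above reduces to Pearson's product-moment coefficient over paired biweekly measurements. A minimal sketch with invented temperature/growth pairs (not the study's data, which showed r = 0.49 for temperature):

```python
import math

def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical biweekly water temperatures (deg C) and network cell-count
# increases (10^3 cells/mL); illustrative values only.
temp   = [4, 6, 9, 14, 18, 21, 19, 15, 10, 7]
growth = [12, 15, 20, 30, 42, 55, 48, 33, 22, 16]
print(round(pearson_r(temp, growth), 2))
```

In practice a two-year, biweekly series like the study's also needs the seasonal component handled explicitly, since both variables share the annual cycle.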
Analysis of labor productivity using large-scale data of firm's financial statements
NASA Astrophysics Data System (ADS)
Ikeda, Y.; Souma, W.; Aoyama, H.; Fujiwara, Y.; Iyetomi, H.
2010-08-01
We investigated the labor productivity distribution by analyzing large-scale financial statement data for listed and unlisted Japanese firms to clarify the characteristics of the Japanese labor market. Both the high and low productivity sides of the labor productivity distribution follow power-law distributions. Large inequality on the low productivity side was observed only for the manufacturing sectors in Japan fiscal year (JFY) 1999, and for both the manufacturing and non-manufacturing sectors in JFY 2002. The declines in the Japanese GDP in JFY 1999 and JFY 2002 coincided with the large inequality on the low productivity side of the distribution. A lower peak was found for all non-manufacturing sectors. This might be the origin of the low productivity of the non-manufacturing sectors reported in recent economic studies.
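A power-law tail of the kind reported can be checked with the Hill estimator on the largest order statistics. The data below are synthetic draws with a known tail index of 2.0, standing in for the firms' productivity values (the study's actual exponents are not reproduced here):

```python
import math
import random

def hill_estimator(samples, k):
    """Hill estimator of the power-law tail index, computed from the
    k largest order statistics relative to the (k+1)-th largest."""
    xs = sorted(samples, reverse=True)
    threshold = xs[k]
    return k / sum(math.log(xs[i] / threshold) for i in range(k))

random.seed(7)
# Illustrative 'productivity' values with a Pareto tail of index 2.0.
data = [random.paretovariate(2.0) for _ in range(50_000)]
print(f"tail index ≈ {hill_estimator(data, k=2_000):.2f}")
```

The choice of k trades bias against variance: too small and the estimate is noisy, too large and non-tail observations contaminate it, which is why tail fits to firm-level data are usually reported over a range of thresholds.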
Simulation framework for intelligent transportation systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, T.; Doss, E.; Hanebutte, U.
1996-10-01
A simulation framework has been developed for a large-scale, comprehensive, scaleable simulation of an Intelligent Transportation System (ITS). The simulator is designed for running on parallel computers and distributed (networked) computer systems, but can run on standalone workstations for smaller simulations. The simulator currently models instrumented smart vehicles with in-vehicle navigation units capable of optimal route planning and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. Realistic modeling of variations of the posted driving speed are based on human factors studies that take into consideration weather, road conditions, driver personality and behavior, and vehicle type. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on parallel computers, such as ANL's IBM SP-2, for large-scale problems. A novel feature of the approach is that vehicles are represented by autonomous computer processes which exchange messages with other processes. The vehicles have a behavior model which governs route selection and driving behavior, and can react to external traffic events much like real vehicles. With this approach, the simulation is scaleable to take advantage of emerging massively parallel processor (MPP) systems.
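The vehicles-as-autonomous-processes design can be sketched with mailbox-style message passing. The message format, class names, and two-route choice below are invented for illustration, not the ANL simulator's actual API:

```python
import queue

class Vehicle:
    """Autonomous vehicle 'process' with an inbox; reacts to TMC
    advisories by re-selecting the cheaper of two hypothetical routes."""
    def __init__(self, vid):
        self.vid = vid
        self.inbox = queue.Queue()
        self.route = "A"

    def step(self):
        # Drain the mailbox and update route choice from advisories.
        while not self.inbox.empty():
            msg = self.inbox.get()
            if msg["type"] == "link_times":
                self.route = min(("A", "B"), key=lambda r: msg["times"][r])

class TMC:
    """Traffic Management Center: tracks probe vehicles and broadcasts
    advisory link times to every vehicle's inbox."""
    def __init__(self, vehicles):
        self.vehicles = vehicles

    def broadcast(self, times):
        for v in self.vehicles:
            v.inbox.put({"type": "link_times", "times": times})

fleet = [Vehicle(i) for i in range(5)]
tmc = TMC(fleet)
tmc.broadcast({"A": 12.0, "B": 7.5})   # congestion reported on route A
for v in fleet:
    v.step()
print([v.route for v in fleet])  # prints ['B', 'B', 'B', 'B', 'B']
```

Because each vehicle owns its state and communicates only through messages, the same structure maps naturally onto separate OS processes or MPP nodes, which is the scalability argument the abstract makes.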
Stochastic models for regulatory networks of the genetic toggle switch.
Tian, Tianhai; Burrage, Kevin
2006-05-30
Bistability arises within a wide range of biological systems from the lambda phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
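The paper's general recipe, replacing deterministic increments with Poisson random variables, can be sketched for a two-gene toggle switch. The Hill-type rates and parameter values below are illustrative, not the published model's:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's multiplicative method; fine for the small per-step rates here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def toggle_switch(steps=20_000, tau=0.01, seed=1):
    """Poisson-increment version of the deterministic toggle-switch ODEs:
    each species gains Poisson(production_rate * tau) molecules and loses
    Poisson(decay_rate * tau) per time step (illustrative parameters)."""
    rng = random.Random(seed)
    alpha, beta = 50.0, 2.0   # max production rate, repression cooperativity
    u = v = 10.0              # start at a symmetric, unstable state
    for _ in range(steps):
        u += poisson(rng, tau * alpha / (1 + v ** beta)) - poisson(rng, tau * u)
        v += poisson(rng, tau * alpha / (1 + u ** beta)) - poisson(rng, tau * v)
        u, v = max(u, 0), max(v, 0)
    return u, v

u, v = toggle_switch()
print(u, v)  # one species settles high, the other near zero
```

Repeating the run over many seeds yields the bimodal population distribution the abstract describes: each trajectory commits to one of the two stable states, and noise alone decides which.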
ATLAS Data Management Accounting with Hadoop Pig and HBase
NASA Astrophysics Data System (ADS)
Lassnig, Mario; Garonne, Vincent; Dimitrov, Gancho; Canali, Luca
2012-12-01
The ATLAS Distributed Data Management system requires accounting of its contents at the metadata layer. This presents a hard problem due to the large scale of the system, the high dimensionality of attributes, and the high rate of concurrent modifications of data. The system must efficiently account for more than 90 PB of disk and tape that store upwards of 500 million files across 100 sites globally. In this work a generic accounting system is presented, which is able to scale to the requirements of ATLAS. The design and architecture are presented, and the implementation is discussed. An emphasis is placed on design choices such that the underlying data models are generally applicable to different kinds of accounting, reporting and monitoring.
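The core accounting operation is a multi-dimensional roll-up over replica metadata. A minimal in-memory sketch (the record schema and dimension names are invented; the real system runs such aggregations over HBase with Pig):

```python
from collections import defaultdict

# Hypothetical replica records: (site, project, bytes). The real system
# aggregates hundreds of millions of such rows at the metadata layer.
replicas = [
    ("CERN", "higgs", 120),
    ("BNL",  "higgs",  80),
    ("CERN", "susy",  200),
    ("BNL",  "higgs",  40),
]

def account(records, dims):
    """Roll up bytes and file counts along the requested key dimensions."""
    totals = defaultdict(lambda: [0, 0])  # key -> [bytes, files]
    for site, project, size in records:
        key = tuple({"site": site, "project": project}[d] for d in dims)
        totals[key][0] += size
        totals[key][1] += 1
    return dict(totals)

print(account(replicas, ["site"]))
# prints {('CERN',): [320, 2], ('BNL',): [120, 2]}
```

Because the roll-up is just a commutative sum per key, it parallelizes cleanly across a map-reduce style pipeline, which is what makes Pig/HBase a natural fit for the workload described.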
High-uniformity centimeter-wide Si etching method for MEMS devices with large opening elements
NASA Astrophysics Data System (ADS)
Okamoto, Yuki; Tohyama, Yukiya; Inagaki, Shunsuke; Takiguchi, Mikio; Ono, Tomoki; Lebrasseur, Eric; Mita, Yoshio
2018-04-01
We propose a compensated mesh pattern filling method to achieve highly uniform wafer depth etching (over hundreds of microns) with a large-area opening (over centimeter). The mesh opening diameter is gradually changed between the center and the edge of a large etching area. Using such a design, the etching depth distribution depending on sidewall distance (known as the local loading effect) inversely compensates for the over-centimeter-scale etching depth distribution, known as the global or within-die(chip)-scale loading effect. Only a single DRIE with test structure patterns provides a micro-electromechanical systems (MEMS) designer with the etched depth dependence on the mesh opening size as well as on the distance from the chip edge, and the designer only has to set the opening size so as to obtain a uniform etching depth over the entire chip. This method is useful when process optimization cannot be performed, such as in the cases of using standard conditions for a foundry service and of short turn-around-time prototyping. To demonstrate, a large MEMS mirror that needed over 1 cm² of backside etching was successfully fabricated using as-is-provided DRIE conditions.
Impact of Data Placement on Resilience in Large-Scale Object Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carns, Philip; Harms, Kevin; Jenkins, John
Distributed object storage architectures have become the de facto standard for high-performance storage in big data, cloud, and HPC computing. Object storage deployments using commodity hardware to reduce costs often employ object replication as a method to achieve data resilience. Repairing object replicas after failure is a daunting task for systems with thousands of servers and billions of objects, however, and it is increasingly difficult to evaluate such scenarios at scale on real-world systems. Resilience and availability are both compromised if objects are not repaired in a timely manner. In this work we leverage a high-fidelity discrete-event simulation model to investigate replica reconstruction on large-scale object storage systems with thousands of servers, billions of objects, and petabytes of data. We evaluate the behavior of CRUSH, a well-known object placement algorithm, and identify configuration scenarios in which aggregate rebuild performance is constrained by object placement policies. After determining the root cause of this bottleneck, we then propose enhancements to CRUSH and the usage policies atop it to enable scalable replica reconstruction. We use these methods to demonstrate a simulated aggregate rebuild rate of 410 GiB/s (within 5% of projected ideal linear scaling) on a 1,024-node commodity storage system. We also uncover an unexpected phenomenon in rebuild performance based on the characteristics of the data stored on the system.
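The value of declustered placement for rebuild can be sketched with a toy placement function. Rendezvous (highest-random-weight) hashing below is a stand-in for CRUSH, not the CRUSH algorithm itself, used only to show how pseudo-random replica fan-out spreads rebuild load:

```python
import hashlib

def place(obj_id, servers, copies=3):
    """Pseudo-random replica placement by rendezvous hashing: rank
    servers by a per-(object, server) hash and take the top `copies`."""
    scored = sorted(
        servers,
        key=lambda s: hashlib.sha256(f"{obj_id}:{s}".encode()).digest(),
        reverse=True,
    )
    return scored[:copies]

servers = [f"srv{i}" for i in range(32)]
placements = {o: place(o, servers) for o in range(10_000)}

# If srv0 fails, how many surviving servers hold the other replicas of
# its objects? Wide fan-out lets the whole cluster feed the rebuild.
affected = [set(p) - {"srv0"} for p in placements.values() if "srv0" in p]
sources = set().union(*affected)
print(len(sources))  # close to 31: rebuild load spreads cluster-wide
```

Under pseudo-random placement almost every surviving server is a rebuild source, so aggregate rebuild bandwidth scales with cluster size; placement policies that constrain this fan-out are exactly the bottleneck scenarios the paper identifies.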
Rucio, the next-generation Data Management system in ATLAS
NASA Astrophysics Data System (ADS)
Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration
2016-04-01
Rucio is the next-generation Distributed Data Management (DDM) system benefiting from recent advances in cloud and "Big Data" computing to address HEP experiments' scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quixote 2 (DQ2), which has demonstrated very large scale data management capabilities with more than 160 petabytes spread worldwide across 130 sites, and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, cover new user requirements and employ a new automation framework to reduce operational overheads. This paper shows the key concepts of Rucio, details the Rucio design and the technology it employs, the tests that were conducted to validate it, and finally describes the migration steps that were conducted to move from DQ2 to Rucio.
Effect of weak rotation on large-scale circulation cessations in turbulent convection.
Assaf, Michael; Angheluta, Luiza; Goldenfeld, Nigel
2012-08-17
We investigate the effect of weak rotation on the large-scale circulation (LSC) of turbulent Rayleigh-Bénard convection, using the theory for cessations in a previously studied low-dimensional stochastic model of the flow. We determine the cessation frequency of the LSC as a function of rotation, and calculate the statistics of the amplitude and azimuthal velocity fluctuations of the LSC as a function of the rotation rate for different Rayleigh numbers. Furthermore, we show that the tails of the reorientation PDF remain unchanged for rotating systems, while the distribution of the LSC amplitude, and correspondingly the cessation frequency, are strongly affected by rotation. Our results are in close agreement with experimental observations.
Tuneable diode laser gas analyser for methane measurements on a large scale solid oxide fuel cell
NASA Astrophysics Data System (ADS)
Lengden, Michael; Cunningham, Robert; Johnstone, Walter
2011-10-01
A new in-line, real time gas analyser is described that uses tuneable diode laser spectroscopy (TDLS) for the measurement of methane in solid oxide fuel cells. The sensor has been tested on an operating solid oxide fuel cell (SOFC) in order to prove the fast response and accuracy of the technology as compared to a gas chromatograph. The advantages of using a TDLS system for process control in a large-scale, distributed power SOFC unit are described. In future work, the addition of new laser sources and wavelength modulation will allow the simultaneous measurement of methane, water vapour, carbon-dioxide and carbon-monoxide concentrations.
Entanglement replication in driven dissipative many-body systems.
Zippilli, S; Paternostro, M; Adesso, G; Illuminati, F
2013-01-25
We study the dissipative dynamics of two independent arrays of many-body systems, locally driven by a common entangled field. We show that in the steady state the entanglement of the driving field is reproduced in an arbitrarily large series of inter-array entangled pairs over all distances. Local nonclassical driving thus realizes a scale-free entanglement replication and long-distance entanglement distribution mechanism that has immediate bearing on the implementation of quantum communication networks.
Reengineering Real-Time Software Systems
1993-09-09
reengineering existing large-scale (or real-time) systems; systems designed prior to or during the advent of applied SE (Parnas 1979, Freeman 1980). Is... Advisor: Yutaka Kanayama. Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Chen, X.; Song, X.; Shuai, P.; Hammond, G. E.; Ren, H.; Zachara, J. M.
2017-12-01
Hydrologic exchange flows (HEFs) in rivers play vital roles in watershed ecological and biogeochemical functions due to their strong capacity to attenuate contaminants and process significant quantities of carbon and nutrients. While most existing HEF studies focus on headwater systems under the assumption of steady-state flow, large-scale HEFs in high-order regulated rivers that experience high-frequency stage fluctuations remain poorly understood. The large variability of HEFs is a result of interactions between spatial heterogeneity in hydrogeologic properties and temporal variation in river discharge induced by natural or anthropogenic perturbations. Our 9-year spatially distributed dataset (water elevation, specific conductance, and temperature) combined with mechanistic hydrobiogeochemical simulations has revealed complex spatial and temporal dynamics in km-scale HEFs and their significant impacts on contaminant plume mobility and hyporheic biogeochemical processes along the Hanford Reach. Extended multidirectional flow behaviors of unconfined, river corridor groundwater were observed hundreds of meters inland from the river shore resulting from discharge-dependent HEFs. An appropriately sized modeling domain to capture the impact of regional groundwater flow, as well as knowledge of subsurface structures controlling intra-aquifer hydrologic connectivity, was essential to realistically model transient storage in this large-scale river corridor. This work showed that both river water and mobile groundwater contaminants could serve as effective tracers of HEFs, thus providing valuable information for evaluating and validating the HEF models. Multimodal residence time distributions with long tails resulted from the mixture of long and short exchange pathways, which consequently impact the carbon and nutrient cycling within the river corridor.
Improved understanding of HEFs using integrated observational and modeling approaches sheds light on developing fundamental understanding of the influences of HEFs on water quality, nutrient dynamics, and ecosystem health in dynamic river corridor systems.
Blatchley, E R; Shen, C; Scheible, O K; Robinson, J P; Ragheb, K; Bergstrom, D E; Rokjer, D
2008-02-01
Dyed microspheres have been developed as a new method for validation of ultraviolet (UV) reactor systems. When properly applied, dyed microspheres allow measurement of the UV dose distribution delivered by a photochemical reactor for a given operating condition. Prior to this research, dyed microspheres had only been applied to a bench-scale UV reactor. The goal of this research was to extend the application of dyed microspheres to large-scale reactors. Dyed microsphere tests were conducted on two prototype large-scale UV reactors at the UV Validation and Research Center of New York (UV Center) in Johnstown, NY. All microsphere tests were conducted under conditions that had been used previously in biodosimetry experiments involving two challenge bacteriophage: MS2 and Qbeta. Numerical simulations based on computational fluid dynamics and irradiance field modeling were also performed for the same set of operating conditions used in the microspheres assays. Microsphere tests on the first reactor illustrated difficulties in sample collection and discrimination of microspheres against ambient particles. Changes in sample collection and work-up were implemented in tests conducted on the second reactor that allowed for improvements in microsphere capture and discrimination against the background. Under these conditions, estimates of the UV dose distribution from the microspheres assay were consistent with numerical simulations and the results of biodosimetry, using both challenge organisms. The combined application of dyed microspheres, biodosimetry, and numerical simulation offers the potential to provide a more in-depth description of reactor performance than any of these methods individually, or in combination. This approach also has the potential to substantially reduce uncertainties in reactor validation, thereby leading to better understanding of reactor performance, improvements in reactor design, and decreases in reactor capital and operating costs.
Landscape heterogeneity shapes predation in a newly restored predator-prey system.
Kauffman, Matthew J; Varley, Nathan; Smith, Douglas W; Stahler, Daniel R; MacNulty, Daniel R; Boyce, Mark S
2007-08-01
Because some native ungulates have lived without top predators for generations, it has been uncertain whether runaway predation would occur when predators are newly restored to these systems. We show that landscape features and vegetation, which influence predator detection and capture of prey, shape large-scale patterns of predation in a newly restored predator-prey system. We analysed the spatial distribution of wolf (Canis lupus) predation on elk (Cervus elaphus) on the Northern Range of Yellowstone National Park over 10 consecutive winters. The influence of wolf distribution on kill sites diminished over the course of this study, a result that was likely caused by territorial constraints on wolf distribution. In contrast, landscape factors strongly influenced kill sites, creating distinct hunting grounds and prey refugia. Elk in this newly restored predator-prey system should be able to mediate their risk of predation by movement and habitat selection across a heterogeneous risk landscape.
Parallel Clustering Algorithm for Large-Scale Biological Data Sets
Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang
2014-01-01
Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtime are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is used for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed in our method in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
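For readers unfamiliar with the algorithm being parallelized, a minimal serial sketch of affinity propagation follows. This is an illustrative reimplementation of the standard responsibility/availability message-passing updates, not the authors' parallel code; the damping and iteration counts are arbitrary choices.

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=300):
    """Minimal serial affinity propagation on a similarity matrix S.

    Sketch of the standard algorithm only; the paper's contribution is
    distributing these updates and the construction of S across nodes.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i,k)
    A = np.zeros((n, n))  # availabilities a(i,k)
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k'!=k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first = AS[np.arange(n), idx]
        AS[np.arange(n), idx] = -np.inf
        second = AS.max(axis=1)
        Rn = S - first[:, None]
        Rn[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rn
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        An = Rp.sum(axis=0)[None, :] - Rp
        diag = An.diagonal().copy()
        An = np.minimum(An, 0)
        np.fill_diagonal(An, diag)
        A = damping * A + (1 - damping) * An
    exemplars = np.flatnonzero((A + R).diagonal() > 0)
    if len(exemplars) == 0:
        return exemplars, np.zeros(n, dtype=int)
    labels = np.argmax(S[:, exemplars], axis=1)
    return exemplars, labels

# Toy usage: two well-separated point clouds, similarity = -squared distance.
pts = np.vstack([np.random.RandomState(0).randn(10, 2),
                 np.random.RandomState(1).randn(10, 2) + 8])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S))  # "preference" controls cluster count
exemplars, labels = affinity_propagation(S)
```

Note that constructing S is itself O(n^2), which is why the paper assigns it to a shared-memory stage separate from the distributed clustering stage.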
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Choi, Yun-Young
2015-02-01
We report a detection of the effect of the large-scale velocity shear on the spatial distributions of the galactic satellites around the isolated hosts. Identifying the isolated galactic systems, each of which consists of a single host galaxy and its satellites, from the Seventh Data Release of the Sloan Digital Sky Survey and reconstructing linearly the velocity shear field in the local universe, we measure the alignments between the relative positions of the satellites from their isolated hosts and the principal axes of the local velocity shear tensors projected onto the plane of sky. We find a clear signal that the galactic satellites in isolated systems are located preferentially along the directions of the minor principal axes of the large-scale velocity shear field. Those galactic satellites that are spirals, are brighter, are located at distances larger than the projected virial radii of the hosts, and belong to the spiral hosts yield stronger alignment signals, which implies that the alignment strength depends on the formation and accretion epochs of the galactic satellites. It is also shown that the alignment strength is quite insensitive to the cosmic web environment, as well as the size and luminosity of the isolated hosts. Although this result is consistent with the numerical finding of Libeskind et al. based on an N-body experiment, owing to the very low significance of the observed signals, it remains inconclusive whether or not the velocity shear effect on the satellite distribution is truly universal.
Zhao, Meng; Ding, Baocang
2015-03-01
This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
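To make the receding-horizon idea behind each local controller concrete, here is a minimal sketch of a single unconstrained linear MPC step solved as a ridge least-squares problem. The paper's method handles nonlinear subsystems with compatibility constraints and a quasi-infinite horizon terminal cost; this toy double-integrator example (its matrices and weights are arbitrary assumptions) only illustrates the "optimize over a horizon, apply the first input" loop.

```python
import numpy as np

def mpc_step(A, B, x0, N=15, r=0.1):
    """One receding-horizon step: minimize sum ||x_k||^2 + r||u_k||^2."""
    n, m = B.shape
    # Stacked prediction: X = F x0 + G U, X = [x1; ...; xN]
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    # Unconstrained quadratic program: (G'G + rI) U = -G'F x0
    U = np.linalg.solve(G.T @ G + r * np.eye(N * m), -G.T @ F @ x0)
    return U[:m]  # apply only the first input (receding horizon)

# Toy subsystem: discrete double integrator, dt = 0.1
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x = np.array([1.0, 0.0])
for _ in range(60):
    x = A @ x + B @ mpc_step(A, B, x)  # closed-loop regulation toward 0
```

In the distributed setting of the paper, each subsystem would solve such a local problem while exchanging predicted states with its fixed neighbors.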
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
Program summary
Program title: NWChem
Catalogue identifier: AEGI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Open Source Educational Community License
No. of lines in distributed program, including test data, etc.: 11 709 543
No. of bytes in distributed program, including test data, etc.: 680 696 106
Distribution format: tar.gz
Programming language: Fortran 77, C
Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
Operating system: Linux, OS X, Windows
Has the code been vectorised or parallelized?: Code is parallelized
Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions or a combination thereof with classical descriptions are then used to analyze the potential energy surface and perform dynamical simulations.
Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, complexity of the method, number of cpu's and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulation on hundreds of atoms.
Large-N kinetic theory for highly occupied systems
NASA Astrophysics Data System (ADS)
Walz, R.; Boguslavski, K.; Berges, J.
2018-06-01
We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
Ding, Edwin; Lefrancois, Simon; Kutz, Jose Nathan; Wise, Frank W.
2011-01-01
The mode-locking of dissipative soliton fiber lasers using large mode area fiber supporting multiple transverse modes is studied experimentally and theoretically. The averaged mode-locking dynamics in a multi-mode fiber are studied using a distributed model. The co-propagation of multiple transverse modes is governed by a system of coupled Ginzburg–Landau equations. Simulations show that stable and robust mode-locked pulses can be produced. However, the mode-locking can be destabilized by excessive higher-order mode content. Experiments using large core step-index fiber, photonic crystal fiber, and chirally-coupled core fiber show that mode-locking can be significantly disturbed in the presence of higher-order modes, resulting in lower maximum single-pulse energies. In practice, spatial mode content must be carefully controlled to achieve full pulse energy scaling. This paper demonstrates that mode-locking performance is very sensitive to the presence of multiple waveguide modes when compared to systems such as amplifiers and continuous-wave lasers. PMID:21731106
NASA Astrophysics Data System (ADS)
Forrester, Peter J.; Trinh, Allan K.
2018-05-01
The neighbourhood of the largest eigenvalue λmax in the Gaussian unitary ensemble (GUE) and Laguerre unitary ensemble (LUE) is referred to as the soft edge. It is known that there exists a particular centring and scaling such that the distribution of λmax tends to a universal form, with an error term bounded by 1/N2/3. We take up the problem of computing the exact functional form of the leading error term in a large N asymptotic expansion for both the GUE and LUE—two versions of the LUE are considered, one with the parameter a fixed and the other with a proportional to N. Both settings in the LUE case allow for an interpretation in terms of the distribution of a particular weighted path length in a model involving exponential variables on a rectangular grid, as the grid size gets large. We give operator theoretic forms of the corrections, which are corollaries of knowledge of the first two terms in the large N expansion of the scaled kernel and are readily computed using a method due to Bornemann. We also give expressions in terms of the solutions of particular systems of coupled differential equations, which provide an alternative method of computation. Both characterisations are well suited to a thinned generalisation of the original ensemble, whereby each eigenvalue is deleted independently with probability (1 - ξ). In Sec. V, we investigate using simulation the question of whether upon an appropriate centring and scaling a wider class of complex Hermitian random matrix ensembles have their leading correction to the distribution of λmax proportional to 1/N2/3.
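The soft-edge centring and scaling discussed above is easy to see numerically. The following sketch samples GUE matrices and rescales the largest eigenvalue as (λmax − 2√N)·N^(1/6), whose distribution approaches the β = 2 Tracy-Widom law (mean ≈ −1.77); the matrix size and trial count are arbitrary illustration choices, far too small to resolve the 1/N^(2/3) correction terms the paper computes.

```python
import numpy as np

def scaled_lmax(N, trials, rng):
    """Sample the centred and scaled largest GUE eigenvalue."""
    out = np.empty(trials)
    for t in range(trials):
        # GUE: Hermitian, off-diagonal entries with unit total variance
        X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
        H = (X + X.conj().T) / 2
        lmax = np.linalg.eigvalsh(H)[-1]          # largest eigenvalue
        out[t] = (lmax - 2 * np.sqrt(N)) * N ** (1 / 6)
    return out

rng = np.random.default_rng(0)
vals = scaled_lmax(N=80, trials=150, rng=rng)  # mean should sit near -1.77
```

The thinned ensembles studied in the paper can be simulated the same way by deleting each eigenvalue independently with probability 1 − ξ before taking the maximum.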
NASA Astrophysics Data System (ADS)
Jayne, R., Jr.; Pollyea, R.
2016-12-01
Carbon capture and sequestration (CCS) in geologic reservoirs is one strategy for reducing anthropogenic CO2 emissions from large-scale point-source emitters. Recent developments at the CarbFix CCS pilot in Iceland have shown that basalt reservoirs are highly effective for permanent mineral trapping on the basis of CO2-water-rock interactions, which result in the formation of carbonate minerals. In order to advance our understanding of basalt sequestration in large igneous provinces, this research uses numerical simulation to evaluate the feasibility of industrial-scale CO2 injections in the Columbia River Basalt Group (CRBG). Although bulk reservoir properties are well constrained on the basis of field and laboratory testing from the Wallula Basalt Sequestration Pilot Project, there remains significant uncertainty in the spatial distribution of permeability at the scale of individual basalt flows. Geostatistical analysis of hydrologic data from 540 wells illustrates that CRBG reservoirs are reasonably modeled as layered heterogeneous systems on the basis of basalt flow morphology; however, the regional dataset is insufficient to constrain permeability variability at the scale of an individual basalt flow. As a result, the permeability distribution for this modeling study is established by centering the lognormal permeability distribution of the regional dataset over the bulk permeability measured at the Wallula site, which results in a spatially random permeability distribution within the target reservoir. In order to quantify the effects of this permeability uncertainty, CO2 injections are simulated within 50 equally probable synthetic reservoir domains. Each model domain comprises three-dimensional geometry with 530,000 grid blocks, and fracture-matrix interaction is simulated as interacting continua for the two low permeability layers (flow interiors) bounding the injection zone.
Results from this research illustrate that permeability uncertainty at the scale of individual basalt flows may significantly impact both injection pressure accumulation and CO2 distribution.
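The "equally probable synthetic reservoir domains" idea can be sketched as drawing spatially random lognormal permeability fields centred on a bulk value. The bulk permeability and log-spread below are hypothetical placeholders, not CRBG values, and real geostatistical realizations would add spatial correlation and the layered flow-interior structure described above.

```python
import numpy as np

rng = np.random.default_rng(42)
bulk_log10_k = -12.0   # assumed bulk permeability ~1e-12 m^2 (hypothetical)
sigma_log10 = 1.0      # assumed spread of log10-permeability (hypothetical)

def realization(shape, rng):
    """One equally probable spatially random permeability field [m^2]."""
    return 10.0 ** (bulk_log10_k + sigma_log10 * rng.standard_normal(shape))

# 50 synthetic reservoir domains, each a small 3-D grid for illustration
fields = [realization((20, 20, 10), rng) for _ in range(50)]
# each field's geometric mean stays centred on the bulk permeability
geo_means = [10 ** np.mean(np.log10(f)) for f in fields]
```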
Adam E. Duerr; Tricia A. Miller; Kerri L. Cornell Duerr; Michael J. Lanzone; Amy Fesnock; Todd E. Katzner
2015-01-01
Anthropogenic development has great potential to affect fragile desert environments. Large-scale development of renewable energy infrastructure is planned for many desert ecosystems. Development plans should account for anthropogenic effects on distributions and abundance of rare or sensitive wildlife; however, baseline data on abundance and distribution of such...
Large Scale Density Estimation of Blue and Fin Whales (LSD)
2015-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ...sensors, or both. The goal of this research is to develop and implement a new method for estimating blue and fin whale density that is effective over...develop and implement a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse
An informal paper on large-scale dynamic systems
NASA Technical Reports Server (NTRS)
Ho, Y. C.
1975-01-01
Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Williams, Dean; Aloisio, Giovanni
2016-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated (e.g., the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5)). Most of the tools currently available for scientific data analysis in the climate domain fail at large scale since they: (1) are desktop based and need the data locally; (2) are sequential, so do not benefit from available multicore/parallel machines; (3) do not provide declarative languages to express scientific data analysis tasks; (4) are domain-specific, which ties their adoption to a specific domain; and (5) do not provide workflow support to enable the definition of complex "experiments". The Ophidia project aims at facing most of the challenges highlighted above by providing a big data analytics framework for eScience. Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes ("datacubes"). The project relies on a strong background of high performance database management and OLAP systems to manage large scientific data sets. It also provides native workflow management support to define processing chains and workflows with tens to hundreds of data analytics operators to build real scientific use cases.
With regard to interoperability aspects, the talk will present the contribution provided both to the RDA Working Group on Array Databases, and the Earth System Grid Federation (ESGF) Compute Working Team. Also highlighted will be the results of large scale climate model intercomparison data analysis experiments, for example: (1) defined in the context of the EU H2020 INDIGO-DataCloud project; (2) implemented in a real geographically distributed environment involving CMCC (Italy) and LLNL (US) sites; (3) exploiting Ophidia as server-side, parallel analytics engine; and (4) applied on real CMIP5 data sets available through ESGF.
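The "datacube" abstraction central to such frameworks can be illustrated with a toy class (this is not the Ophidia API; class and method names are invented for illustration): an n-dimensional array with named axes supporting the two primitives most operators build on, subsetting along a dimension and reducing over one.

```python
import numpy as np

class DataCube:
    """Toy datacube: an n-D array with named dimensions."""
    def __init__(self, data, dims):
        self.data, self.dims = np.asarray(data), list(dims)

    def subset(self, dim, sl):
        """Slice along a named dimension, keeping all others."""
        idx = [slice(None)] * self.data.ndim
        idx[self.dims.index(dim)] = sl
        return DataCube(self.data[tuple(idx)], self.dims)

    def reduce(self, dim, fn=np.mean):
        """Collapse a named dimension with an aggregation function."""
        ax = self.dims.index(dim)
        return DataCube(fn(self.data, axis=ax),
                        [d for d in self.dims if d != dim])

# e.g. a tiny (time, lat, lon) temperature cube
cube = DataCube(np.arange(24.0).reshape(2, 3, 4), ["time", "lat", "lon"])
clim = cube.reduce("time")              # time mean -> (lat, lon)
strip = clim.subset("lat", slice(0, 1)) # one latitude band
```

A server-side engine like the one described above would execute chains of such operators in parallel over partitioned storage rather than in a single process.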
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2015-05-01
Many complex networks in natural and social phenomena have often been characterized by heavy-tailed degree distributions. However, due to the rapidly growing size of network data and privacy concerns about using these data, it becomes more difficult to analyze complete data sets. Thus, it is crucial to devise effective and efficient estimation methods for heavy tails of degree distributions in large-scale networks using only local information from a small fraction of sampled nodes. Here we propose a tail-scope method based on the local observational bias of the friendship paradox. We show that the tail-scope method outperforms uniform node sampling for estimating heavy tails of degree distributions, while the opposite tendency is observed in the range of small degrees. In order to take advantage of both sampling methods, we devise a hybrid method that successfully recovers the whole range of degree distributions. Our tail-scope method shows how structural heterogeneities of large-scale complex networks can be used to effectively reveal the network structure with only limited local information.
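The friendship-paradox bias the tail-scope method exploits is easy to demonstrate: a uniformly sampled node reports its own degree, while a random endpoint of a random edge is reached with probability proportional to its degree, over-representing the heavy tail. The sketch below (a configuration-model-style toy network with arbitrary parameters, not the paper's estimator) shows the bias directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy heavy-tailed degree sequence and a stub-matching edge list
# (self-loops/multi-edges are tolerated for illustration).
deg = rng.pareto(2.0, size=5000).astype(int) + 1
stubs = np.repeat(np.arange(deg.size), deg)   # node i appears deg[i] times
rng.shuffle(stubs)
stubs = stubs[: stubs.size - stubs.size % 2]  # even number of stubs
edges = stubs.reshape(-1, 2)

# Uniform node sampling vs degree-biased neighbour sampling
uniform_sample = deg[rng.integers(0, deg.size, 2000)]
neighbour_sample = deg[edges[rng.integers(0, len(edges), 2000)].ravel()]
# neighbour_sample.mean() exceeds uniform_sample.mean(): "your friends
# have more friends than you", which is what amplifies the tail.
```

The hybrid method in the paper would combine the two samples, using the neighbour-biased one for the tail and the uniform one for small degrees.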
Efficient feature extraction from wide-area motion imagery by MapReduce in Hadoop
NASA Astrophysics Data System (ADS)
Cheng, Erkang; Ma, Liya; Blaisse, Adam; Blasch, Erik; Sheaff, Carolyn; Chen, Genshe; Wu, Jie; Ling, Haibin
2014-06-01
Wide-Area Motion Imagery (WAMI) feature extraction is important for applications such as target tracking, traffic management and accident discovery. With the increasing amount of WAMI collections and feature extraction from the data, a scalable framework is needed to handle the large amount of information. Cloud computing is one of the approaches recently applied to large-scale or big data processing. In this paper, MapReduce in Hadoop is investigated for large scale feature extraction tasks for WAMI. Specifically, a large dataset of WAMI images is divided into several splits. Each split has a small subset of WAMI images. The feature extractions of WAMI images in each split are distributed to slave nodes in the Hadoop system. Feature extraction of each image is performed individually in the assigned slave node. Finally, the feature extraction results are sent to the Hadoop File System (HDFS) to aggregate the feature information over the collected imagery. Experiments of feature extraction with and without MapReduce are conducted to illustrate the effectiveness of our proposed Cloud-Enabled WAMI Exploitation (CAWE) approach.
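The split/map/reduce pattern described above can be sketched in a few lines. This is a toy serial simulation, not Hadoop code: the "feature extractor" is a trivial pixel-bucket histogram standing in for the real WAMI features, and Hadoop would distribute the map calls to slave nodes and perform the merge during aggregation.

```python
from collections import Counter
from functools import reduce

def extract_features(image):
    """Map step: per-image features (here, a tiny intensity histogram)."""
    return Counter(pixel // 64 for row in image for pixel in row)

def merge(a, b):
    """Reduce step: aggregate feature counts across images."""
    return a + b

# Two "splits", each holding one tiny 2x2 grayscale image
splits = [
    [[0, 10], [200, 255]],
    [[64, 128], [130, 70]],
]
mapped = [extract_features(img) for img in splits]   # distributed in Hadoop
total = reduce(merge, mapped, Counter())             # aggregation over HDFS
```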
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
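The "universal dependence on threshold density" mentioned above has a closed form for a Gaussian (random phase) field: the genus per unit volume at threshold ν (in units of the field's standard deviation) is g(ν) = A·(1 − ν²)·exp(−ν²/2), with the amplitude A set by the power spectrum. The zero crossings at ν = ±1 separating sponge-like from isolated-region topology are the random-phase signature. A minimal implementation:

```python
import numpy as np

def genus_gaussian(nu, amplitude=1.0):
    """Genus curve g(nu) = A (1 - nu^2) exp(-nu^2 / 2) for a Gaussian field."""
    nu = np.asarray(nu, dtype=float)
    return amplitude * (1.0 - nu ** 2) * np.exp(-nu ** 2 / 2.0)
```

Comparing a measured genus curve from a smoothed galaxy distribution against this symmetric shape is the test the paper applies to simulations and the CfA sample.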
States of mind: Emotions, body feelings, and thoughts share distributed neural networks
Oosterwijk, Suzanne; Lindquist, Kristen A.; Anderson, Eric; Dautoff, Rebecca; Moriguchi, Yoshiya; Barrett, Lisa Feldman
2012-01-01
Scientists have traditionally assumed that different kinds of mental states (e.g., fear, disgust, love, memory, planning, concentration, etc.) correspond to different psychological faculties that have domain-specific correlates in the brain. Yet, growing evidence points to the constructionist hypothesis that mental states emerge from the combination of domain-general psychological processes that map to large-scale distributed brain networks. In this paper, we report a novel study testing a constructionist model of the mind in which participants generated three kinds of mental states (emotions, body feelings, or thoughts) while we measured activity within large-scale distributed brain networks using fMRI. We examined the similarity and differences in the pattern of network activity across these three classes of mental states. Consistent with a constructionist hypothesis, a combination of large-scale distributed networks contributed to emotions, thoughts, and body feelings, although these mental states differed in the relative contribution of those networks. Implications for a constructionist functional architecture of diverse mental states are discussed. PMID:22677148
The topology of a causal network for the Chinese financial system
NASA Astrophysics Data System (ADS)
Gao, Bo; Ren, Ruo-en
2013-07-01
The paper builds a causal network for the Chinese financial system based on the Granger causality of company risks, studies its different topologies in crisis and bull periods, and applies centrality measures to explain individual risk and prevent systemic risk. The results show that this causal network possesses both the small-world phenomenon and the scale-free property, has somewhat different average distance, clustering coefficient, and degree distribution in different periods, and that financial institutions with high centrality not only have large individual risk but are also important for systemic risk immunization.
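The edge-construction step behind such a network is the pairwise Granger test: y is "Granger-caused" by x if lagged values of x improve a lagged-y autoregression. The sketch below implements the generic F-test with plain least squares (an illustration of the idea, not the authors' exact pipeline); edges with significant F statistics would form the directed network analysed in the paper.

```python
import numpy as np

def granger_F(x, y, p=1):
    """F statistic for H0: p lags of x add nothing to predicting y."""
    T = len(y)
    Y = y[p:]
    ylags = np.column_stack([y[p - k - 1:T - k - 1] for k in range(p)])
    xlags = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])
    ones = np.ones((T - p, 1))
    Xr = np.hstack([ones, ylags])           # restricted: y history only
    Xu = np.hstack([ones, ylags, xlags])    # unrestricted: adds x history
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / p) / (rss_u / ((T - p) - Xu.shape[1]))

# Synthetic check: x drives y, not the reverse
rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

Applying this test to every ordered pair of institutions' risk series, and keeping edges above a significance threshold, yields the directed graph whose small-world and scale-free properties the paper measures.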
Gonzalez-Quevedo, Catalina; Davies, Richard G; Richardson, David S
2014-09-01
How the environment influences the transmission and prevalence of disease in a population of hosts is a key aspect of disease ecology. The role that environmental factors play in host-pathogen systems has been well studied at large scales, that is, differences in pathogen pressures among separate populations of hosts or across land masses. However, despite considerable understanding of how environmental conditions vary at fine spatial scales, the effect of these parameters on host-pathogen dynamics at such scales has been largely overlooked. Here, we used a combination of molecular screening and GIS-based analysis to investigate how environmental factors determine the distribution of malaria across the landscape in a population of Berthelot's pipit (Anthus berthelotii, Bolle 1862) on the island of Tenerife (Canary Islands, Spain) using spatially explicit models that account for spatial autocorrelation. Minimum temperature of the coldest month was found to be the most important predictor of malaria infection at the landscape scale across this population. Additionally, anthropogenic factors such as distance to artificial water reservoirs and distance to poultry farms were important predictors of malaria. A model including these factors, and the interaction between distance to artificial water reservoirs and minimum temperature, best explained the distribution of malaria infection in this system. These results suggest that levels of malaria infection in this endemic species may be artificially elevated by the impact of humans. Studies such as the one described here improve our understanding of how environmental factors, and their heterogeneity, affect the distribution of pathogens within wild populations. The results demonstrate the importance of measuring fine-scale variation - and not just regional effects - to understand how environmental variation can influence wildlife diseases. 
Such understanding is important for predicting the future spread and impact of disease and may help inform disease management programmes as well as the conservation of specific host species. © 2014 The Authors. Journal of Animal Ecology © 2014 British Ecological Society.
Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong
Intelligent distributed electrical energy systems (IDEES) feature a vast number of system components, diversified component types, and difficulties in operation and management, so the traditional centralized power system management approach no longer fits their operation. Blockchain technology is therefore believed to be one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES inherently has both social and technical characteristics; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is used to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can self-organize into a complex, autonomous, and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, which is expected to bring stable, reliable, and efficient electrical energy production, transmission, and consumption.
Ecosystem variability in the offshore northeastern Chukchi Sea
NASA Astrophysics Data System (ADS)
Blanchard, Arny L.; Day, Robert H.; Gall, Adrian E.; Aerts, Lisanne A. M.; Delarue, Julien; Dobbins, Elizabeth L.; Hopcroft, Russell R.; Questel, Jennifer M.; Weingartner, Thomas J.; Wisdom, Sheyna S.
2017-12-01
Understanding influences of cumulative effects from multiple stressors in marine ecosystems requires an understanding of the sources for and scales of variability. A multidisciplinary ecosystem study in the offshore northeastern Chukchi Sea during 2008-2013 investigated the variability of the study area's two adjacent sub-ecosystems: a pelagic system influenced by interannual and/or seasonal temporal variation at large, oceanographic (regional) scales, and a benthic-associated system more influenced by small-scale spatial variations. Variability in zooplankton communities reflected interannual oceanographic differences in waters advected northward from the Bering Sea, whereas variation in benthic communities was associated with seafloor and bottom-water characteristics. Variations in the planktivorous seabird community were correlated with prey distributions, whereas interaction effects in ANOVA for walruses were related to declines of sea-ice. Long-term shifts in seabird distributions were also related to changes in sea-ice distributions that led to more open water. Although characteristics of the lower trophic-level animals within sub-ecosystems result from oceanographic variations and interactions with seafloor topography, distributions of apex predators were related to sea-ice as a feeding platform (walruses) or to its absence (i.e., open water) for feeding (seabirds). The stability of prey resources appears to be a key factor in mediating predator interactions with other ocean characteristics. Seabirds reliant on highly-variable zooplankton prey show long-term changes as open water increases, whereas walruses taking benthic prey in biomass hotspots respond to sea-ice changes in the short-term. 
A better understanding of how variability scales up from prey to predators and how prey resource stability (including how critical prey respond to environmental changes over space and time) might be altered by climate and anthropogenic stressors is essential to predicting the future state of both the Chukchi and other arctic systems.
Determinants of pulmonary blood flow distribution.
Glenny, Robb W; Robertson, H Thomas
2011-01-01
The primary function of the pulmonary circulation is to deliver blood to the alveolar capillaries to exchange gases. Distributing blood over a vast surface area facilitates gas exchange, yet the pulmonary vascular tree must be constrained to fit within the thoracic cavity. In addition, pressures must remain low within the circulatory system to protect the thin alveolar capillary membranes that allow efficient gas exchange. The pulmonary circulation is engineered for these unique requirements and in turn these special attributes affect the spatial distribution of blood flow. As the largest organ in the body, the physical characteristics of the lung vary regionally, influencing the spatial distribution on large-, moderate-, and small-scale levels. © 2011 American Physiological Society.
Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
David Deangelis; Rich Depuy; Debashis Dey
2004-09-30
This report summarizes the work performed by Hybrid Power Generation Systems, LLC (HPGS) during the April to October 2004 reporting period in Task 2.3 (SOFC Scaleup for Hybrid and Fuel Cell Systems) under Cooperative Agreement DE-FC26-01NT40779 for the U.S. Department of Energy, National Energy Technology Laboratory (DOE/NETL), entitled ''Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation''. This study analyzes the performance and economics of power generation systems for central power generation applications based on Solid Oxide Fuel Cell (SOFC) technology and fueled by natural gas. The main objective of this task is to develop credible scale-up strategies for large solid oxide fuel cell-gas turbine systems. System concepts that integrate an SOFC with a gas turbine were developed and analyzed for plant sizes in excess of 20 MW. A 25 MW plant configuration was selected with a projected system efficiency of over 65% and a factory cost of under $400/kW. The plant design is modular and can be scaled to both higher and lower plant power ratings. Technology gaps and required engineering development efforts were identified and evaluated.
Intelligent systems engineering methodology
NASA Technical Reports Server (NTRS)
Fouse, Scott
1990-01-01
An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.
Yang, Haishui; Zang, Yanyan; Yuan, Yongge; Tang, Jianjun; Chen, Xin
2012-04-12
Arbuscular mycorrhizal fungi (AMF) can form obligate symbioses with the vast majority of land plants, and AMF distribution patterns have received increasing attention from researchers. At the local scale, the distribution of AMF is well documented. Studies at large scales, however, are limited because intensive sampling is difficult. Here, we used ITS rDNA sequence metadata obtained from public databases to study the distribution of AMF at continental and global scales. We also used these sequence metadata to investigate whether the host plant is the main factor that affects the distribution of AMF at large scales. We defined 305 ITS virtual taxa (ITS-VTs) among all sequences of the Glomeromycota by using a comprehensive maximum likelihood phylogenetic analysis. Each host taxonomic order averaged about 53% specific ITS-VTs, and approximately 60% of the ITS-VTs were host specific. Those ITS-VTs with wide host ranges showed wide geographic distributions. Most ITS-VTs occurred in only one type of host functional group. The distributions of most ITS-VTs were limited across ecosystems, continents, biogeographical realms, and climatic zones. Non-metric multidimensional scaling (NMDS) showed that AMF community composition differed among host functional groups and among ecosystems, continents, biogeographical realms, and climatic zones. The Mantel test showed that AMF community composition was significantly correlated with plant community composition among ecosystems, continents, biogeographical realms, and climatic zones. Structural equation modeling (SEM) showed that the effects of ecosystem, continent, biogeographical realm, and climatic zone on AMF distribution were mainly indirect, whereas host plants had strong direct effects on AMF. The distribution of AMF as indicated by ITS rDNA sequences showed a pattern of high endemism at large scales.
This pattern indicates high specificity of AMF for host at different scales (plant taxonomic order and functional group) and high selectivity from host plants for AMF. The effects of ecosystemic, biogeographical, continental and climatic factors on AMF distribution might be mediated by host plants.
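The Mantel test used in this abstract correlates two distance matrices under permutation of one of them. A minimal permutation implementation on toy data (the real analysis compared AMF and plant community distances):

```python
import numpy as np

def mantel(d1, d2, n_perm=999, seed=0):
    """Permutation Mantel test between two square distance matrices.

    Returns (r, p): Pearson correlation of the upper-triangle entries
    and the one-sided permutation p-value.
    """
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    v1 = d1[iu]
    r_obs = np.corrcoef(v1, d2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        # permute rows and columns of d2 together, then compare
        if np.corrcoef(v1, d2[np.ix_(p, p)][iu])[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# toy example: sites on a line; 'community' distance mirrors geography
rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=12)
geo = np.abs(pos[:, None] - pos[None, :])
comm = geo + 0.1 * rng.uniform(size=geo.shape)
comm = (comm + comm.T) / 2          # keep it symmetric
np.fill_diagonal(comm, 0.0)
r, p = mantel(geo, comm)
```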
Performance Monitoring of Residential Hot Water Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Anna; Lanzisera, Steven; Lutz, Jim
Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end-use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end-use point and the water heater in 20 homes and collected 1 s flow and temperature data over an 8 month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
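The waste figure quoted above (roughly half the energy for typical shower and dishwasher events) comes from summing, over each draw event, the water delivered below a useful temperature. A toy version of that bookkeeping, with illustrative thresholds and values rather than the study's data:

```python
# Toy estimate of water and energy wasted while a hot-water event runs
# up to temperature.  Flow (L/s) and delivered temperature (degC) are
# sampled once per second, mirroring the 1 s logging described above.

C_P = 4186.0      # J/(kg K), specific heat of water
RHO = 1.0         # kg/L

def event_waste(flow_lps, temp_c, t_cold=20.0, t_useful=40.0):
    """Water (L) and heat (J) discarded before delivery turns useful.

    Every 1 s sample below the 'useful' temperature counts as waste;
    the heat in that water is flow * rho * c_p * (T - T_cold).
    """
    waste_l = 0.0
    waste_j = 0.0
    for f, t in zip(flow_lps, temp_c):
        if t < t_useful:
            waste_l += f * RHO
            waste_j += f * RHO * C_P * max(t - t_cold, 0.0)
    return waste_l, waste_j

# 10 s warm-up ramp at 0.1 L/s, then 20 s of 45 degC delivery
flow = [0.1] * 30
temp = [20 + 2.5 * i for i in range(10)] + [45.0] * 20
litres, joules = event_waste(flow, temp)
```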
The merger remnant NGC 3610 and its globular cluster system: a large-scale study
NASA Astrophysics Data System (ADS)
Bassino, Lilia P.; Caso, Juan P.
2017-04-01
We present a photometric study of the prototype merger remnant NGC 3610 and its globular cluster (GC) system, based on new Gemini/GMOS and Advanced Camera for Surveys/Hubble Space Telescope archival images. Thanks to the large field of view of our GMOS data, larger than previous studies, we are able to detect a 'classical' bimodal GC colour distribution, corresponding to metal-poor and metal-rich GCs, at intermediate radii and a small subsample of likely young clusters of intermediate colours, mainly located in the outskirts. The extent of the whole GC system is settled as about 40 kpc. The GC population is quite poor, about 500 ± 110 members that corresponds to a low total specific frequency SN ˜ 0.8. The effective radii of a cluster sample are determined, including those of two spectroscopically confirmed young and metal-rich clusters, that are in the limit between GC and UCD sizes and brightness. The large-scale galaxy surface-brightness profile can be decomposed as an inner embedded disc and an outer spheroid, determining for both larger extents than earlier research (10 and 30 kpc, respectively). We detect boxy isophotes, expected in merger remnants, and show a wealth of fine-structure in the surface-brightness distribution with unprecedented detail, coincident with the outer spheroid. The lack of symmetry in the galaxy colour map adds a new piece of evidence to the recent merger scenario of NGC 3610.
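The quoted specific frequency SN ~ 0.8 for about 500 clusters can be sanity-checked against the standard definition S_N = N_GC x 10^(0.4 (M_V + 15)). The absolute magnitude M_V ~ -22 assumed below for NGC 3610 is illustrative, not taken from the abstract.

```python
# Consistency check of the quoted specific frequency using the standard
# definition S_N = N_GC * 10**(0.4 * (M_V + 15)).  M_V = -22 here is an
# assumed, illustrative value for the host galaxy.
def specific_frequency(n_gc, m_v):
    return n_gc * 10 ** (0.4 * (m_v + 15.0))

s_n = specific_frequency(500, -22.0)
```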
NASA Astrophysics Data System (ADS)
Awada, H.; Ciraolo, G.; Maltese, A.; Moreno Hidalgo, M. A.; Provenzano, G.; Còrcoles, J. I.
2017-10-01
Satellite imagery provides a dependable basis for computational models that aim to determine actual evapotranspiration (ET) from the surface energy balance. Satellite-based models enable quantifying ET over large areas for a wide range of applications, such as monitoring water distribution, managing irrigation, and assessing irrigation systems' performance. With the aim of evaluating the energy and water consumption of a large-scale on-turn pressurized irrigation system in the district of Aguas Nuevas, Albacete, Spain, the satellite-based image-processing model SEBAL was used to calculate actual ET. The model was applied to quantify instantaneous, daily, and seasonal actual ET over high-resolution Landsat images for the peak water demand season (May to September) of the years 2006-2008. The model provided a direct estimate of the distribution of the main energy fluxes at the instant the satellite passed over each field of the district. The evapotranspiration on the image acquisition day (ET24) was obtained from instantaneous values by assuming a constant evaporative fraction (Λ) for the entire day of acquisition; monthly and seasonal ET were then estimated from the daily evapotranspiration (ETdaily) by assuming that ET24 varies in proportion to the reference ET (ETr) at the meteorological station, thus accounting for day-to-day variation in meteorological forcing. The comparison between the hydrants' water consumption and the actual evapotranspiration, considering an irrigation efficiency of 85%, showed that a considerable amount of water and energy can be saved at the district level.
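The two temporal-scaling steps described here, instantaneous to daily via a constant evaporative fraction and daily to monthly via the ETr fraction, reduce to a few lines of arithmetic. A sketch with illustrative numbers, not values from the Aguas Nuevas study:

```python
# Daily ET from the evaporative fraction, then period ET via the ETr
# fraction.  All input numbers below are assumed for illustration.
LAMBDA_V = 2.45e6   # J/kg, latent heat of vaporization

def daily_et_mm(ef, rn24_minus_g24_wm2):
    """ET24 (mm/day) from evaporative fraction and daily available energy."""
    # W/m^2 over 86400 s -> J/m^2; divide by lambda -> kg/m^2 == mm of water
    return ef * rn24_minus_g24_wm2 * 86400.0 / LAMBDA_V

def period_et_mm(et24, etr_day, etr_period_sum):
    """Extrapolate ET24 over a period assuming ET tracks reference ETr."""
    return (et24 / etr_day) * etr_period_sum

et24 = daily_et_mm(ef=0.7, rn24_minus_g24_wm2=150.0)        # ~3.7 mm/day
month = period_et_mm(et24, etr_day=6.0, etr_period_sum=180.0)
```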
Distributed wavefront reconstruction with SABRE for real-time large scale adaptive optics control
NASA Astrophysics Data System (ADS)
Brunner, Elisabeth; de Visser, Cornelis C.; Verhaegen, Michel
2014-08-01
We present advances on Spline based ABerration REconstruction (SABRE) from (Shack-)Hartmann (SH) wavefront measurements for large-scale adaptive optics systems. SABRE locally models the wavefront with simplex B-spline basis functions on triangular partitions which are defined on the SH subaperture array. This approach allows high accuracy through the possible use of nonlinear basis functions and great adaptability to any wavefront sensor and pupil geometry. The main contribution of this paper is a distributed wavefront reconstruction method, D-SABRE, a two-stage procedure based on decomposing the sensor domain into sub-domains, each supporting a local SABRE model. D-SABRE greatly decreases the computational complexity of the method and removes the need for centralized reconstruction while obtaining a reconstruction accuracy for simulated E-ELT turbulence within 1% of the global method's accuracy. Further, a generalization of the methodology is proposed that makes direct use of SH intensity measurements, which leads to improved reconstruction accuracy compared with centroid algorithms using spatial gradients.
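The two-stage idea, reconstruct locally on sub-domains, then align the pieces, can be illustrated on a 1-D toy problem. The actual method fits simplex B-splines on triangulations; the cumulative-integration stand-in below is only an analogue of the local-fit-plus-piston-alignment structure:

```python
import numpy as np

def local_integrate(slopes, dx):
    """Cumulative trapezoidal integration starting from zero."""
    phi = np.zeros(len(slopes))
    phi[1:] = np.cumsum(0.5 * (slopes[1:] + slopes[:-1]) * dx)
    return phi

def distributed_reconstruct(slopes, dx, n_dom):
    """Stage 1: reconstruct each sub-domain independently.
    Stage 2: chain piston offsets so each block starts where its
    left neighbour ended."""
    blocks = np.array_split(np.arange(len(slopes)), n_dom)
    phi = np.empty(len(slopes))
    offset = 0.0
    for idx in blocks:
        local = local_integrate(slopes[idx], dx)
        phi[idx] = local + offset
        # next block continues from this block's last sample plus one step
        offset = phi[idx][-1] + slopes[idx][-1] * dx
    return phi

# recover a known quadratic wavefront from its gradient
x = np.linspace(0.0, 1.0, 201)
dx = x[1] - x[0]
true_phi = x ** 2
rec = distributed_reconstruct(2.0 * x, dx, n_dom=4)
rec -= rec.mean() - true_phi.mean()    # remove the unobservable global piston
err = np.max(np.abs(rec - true_phi))
```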
Joseph A. Tainter; Bonnie Bagley Tainter
1996-01-01
Ecosystem management should be based on the fullest possible knowledge of ecological structures and processes. In prehistoric North America, the involvement of Indian populations in ecosystem processes ranged from inadvertent alteration of the distribution and abundance of species to large-scale management of landscapes. The knowledge needed to manage ecosystems today...
Mark A. Finney; Charles W. McHugh; Isaac Grenfell; Karin L. Riley
2010-01-01
Components of a quantitative risk assessment were produced by simulation of burn probabilities and fire behavior variation for 134 fire planning units (FPUs) across the continental U.S. The system uses fire growth simulation of ignitions modeled from relationships between large fire occurrence and the fire danger index Energy Release Component (ERC). Simulations of 10,...
Bibliography--Unclassified Technical Reports, Special Reports, and Technical Notes: FY 1982.
1982-11-01
in each category are listed in chronological order under seven areas: manpower management, personnel administration, organization management, education...7633). Technical reports listed that have unlimited distribution can also be obtained from the National Technical Information Service, 5285 Port Royal...simulations of manpower systems. This research exploits the technology of computer-managed large-scale data bases. PERSONNEL ADMINISTRATION The personnel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postigo Marcos, Fernando E.; Domingo, Carlos Mateo; San Roman, Tomas Gomez
Under the increasing penetration of distributed energy resources and new smart network technologies, distribution utilities face new challenges and opportunities to ensure reliable operations, manage service quality, and reduce operational and investment costs. Simultaneously, the research community is developing algorithms for advanced controls and distribution automation that can help to address some of these challenges. However, there is a shortage of realistic test systems that are publicly available for development, testing, and evaluation of such new algorithms. Concerns around revealing critical infrastructure details and customer privacy have severely limited the number of actual networks published and available for testing. In recent decades, several distribution test feeders and US-featured representative networks have been published, but their scale, complexity, and control data vary widely. This paper presents a first-of-a-kind structured literature review of published distribution test networks with a special emphasis on classifying their main characteristics and identifying the types of studies for which they have been used. As a result, this both aids researchers in choosing suitable test networks for their needs and highlights the opportunities and directions for further test system development. In particular, we highlight the need for building large-scale synthetic networks to overcome the identified drawbacks of current distribution test feeders.
2017-11-18
MHD Modeling of the Solar Wind with Turbulence Transport and Heating
NASA Technical Reports Server (NTRS)
Goldstein, M. L.; Usmanov, A. V.; Matthaeus, W. H.; Breech, B.
2009-01-01
We have developed a magnetohydrodynamic model that describes the global axisymmetric steady-state structure of the solar wind near solar minimum with account for transport of small-scale turbulence associated heating. The Reynolds-averaged mass, momentum, induction, and energy equations for the large-scale solar wind flow are solved simultaneously with the turbulence transport equations in the region from 0.3 to 100 AU. The large-scale equations include subgrid-scale terms due to turbulence and the turbulence (small-scale) equations describe the effects of transport and (phenomenologically) dissipation of the MHD turbulence based on a few statistical parameters (turbulence energy, normalized cross-helicity, and correlation scale). The coupled set of equations is integrated numerically for a source dipole field on the Sun by a time-relaxation method in the corotating frame of reference. We present results on the plasma, magnetic field, and turbulence distributions throughout the heliosphere and on the role of the turbulence in the large-scale structure and temperature distribution in the solar wind.
2013-11-01
big data with R is relatively new. RHadoop is a mature product from Revolution Analytics that uses R with Hadoop Streaming [15] and provides...agnostic all-data summaries or computations, in which case we use MapReduce directly. 2.3 D&R Software Environment In this work, we use the Hadoop ...job scheduling and tracking, data distribution, system architecture, heterogeneity, and fault-tolerance. Hadoop also provides a distributed key-value
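The divide-and-recombine (D&R) pattern this snippet refers to, per-block summaries computed independently and then recombined into an all-data result, looks like this in miniature (plain Python rather than RHadoop/MapReduce):

```python
# Minimal divide-and-recombine sketch for a mean, whose sufficient
# statistics (sum, count) recombine exactly across blocks.
def map_block(block):
    """Per-block summary: the 'map' step, runs independently per block."""
    return (sum(block), len(block))

def reduce_summaries(summaries):
    """Recombine block summaries into the all-data answer: the 'reduce' step."""
    totals = list(summaries)
    total = sum(s for s, _ in totals)
    count = sum(n for _, n in totals)
    return total / count

data = list(range(1, 101))                      # 1..100
blocks = [data[i:i + 25] for i in range(0, 100, 25)]
mean = reduce_summaries(map_block(b) for b in blocks)
```

Statistics whose sufficient statistics do not recombine exactly (quantiles, for instance) need approximate recombination, which is where the D&R literature concentrates.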
Groups of galaxies in the Center for Astrophysics redshift survey
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1989-01-01
By applying the Huchra and Geller (1982) objective group identification algorithm to the Center for Astrophysics' redshift survey, a catalog of 128 groups with three or more members is extracted, and 92 of these are used as a statistical sample. A comparison of the distribution of group centers with the distribution of all galaxies in the survey indicates qualitatively that groups trace the large-scale structure of the region. The physical properties of groups may be related to the details of large-scale structure, and it is concluded that differences among group catalogs may be due to the properties of large-scale structures and their location relative to the survey limits.
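The Huchra and Geller (1982) group finder links galaxies whose separations fall below a linking length that scales with distance, together with a velocity link. A fixed-link 2-D toy version using union-find conveys the percolation idea; the coordinates below are invented for illustration:

```python
import numpy as np

def friends_of_friends(points, link):
    """Group points whose chain of pairwise separations stays below link."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        # path-halving union-find
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.hypot(*(points[i] - points[j])) < link:
                pi, pj = find(i), find(j)
                if pi != pj:
                    parent[pi] = pj
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# a tight knot of 8 points plus 6 sparse 'field' points
cluster = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.2], [1.4, 0.6],
                    [0.2, 0.5], [0.8, 0.8], [0.3, 1.0], [1.1, 1.1]])
field = np.array([[8.0, 3.0], [15.0, 7.0], [6.0, 12.0],
                  [18.0, 18.0], [11.0, 14.0], [20.0, 4.0]])
pts = np.vstack([cluster, field])
groups = friends_of_friends(pts, link=1.0)
rich = [g for g in groups if len(g) >= 3]        # groups with >= 3 members
```

The three-or-more-member cut mirrors the catalog's group definition; the real algorithm's distance-dependent linking length compensates for the magnitude limit of the survey.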
Dubrou, S; Konjek, J; Macheras, E; Welté, B; Guidicelli, L; Chignon, E; Joyeux, M; Gaillard, J L; Heym, B; Tully, T; Sapriel, G
2013-09-01
Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 water samples from 36 sites, covering all production units, water storage tanks, and distribution units; RGM isolates were identified by using rpoB gene sequencing. We detected 18 RGM species and putative new species, with most isolates being Mycobacterium chelonae and Mycobacterium llatzerense. Using hierarchical clustering and principal-component analysis, we found that RGM were organized into various communities correlating with water origin (groundwater or surface water) and location within the distribution network. Water treatment plants were more specifically associated with species of the Mycobacterium septicum group. On average, M. chelonae dominated network sites fed by surface water, and M. llatzerense dominated those fed by groundwater. Overall, the M. chelonae prevalence index increased along the distribution network and was associated with a correlative decrease in the prevalence index of M. llatzerense, suggesting competitive or niche exclusion between these two dominant species. Our data describe the great diversity and complexity of RGM species living in the interconnected environments that constitute the water production and distribution system of a large city and highlight the prevalence index of the potentially pathogenic species M. chelonae in the distribution network.
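The community analysis described here reduces a sites x species prevalence table with hierarchical clustering and principal-component analysis. A minimal PCA via SVD on an invented prevalence matrix (the species mix is illustrative, not the survey's data):

```python
import numpy as np

def pca(X, k=2):
    """Principal components of a sites x species matrix via SVD.

    Returns the site scores on the first k components and the fraction
    of total variance each of those components explains.
    """
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)
    return Xc @ Vt[:k].T, var[:k]

# toy prevalence matrix: 6 sites x 3 species; the first three sites are
# dominated by species A, the last three by species B
X = np.array([
    [0.8, 0.1, 0.1],
    [0.7, 0.2, 0.1],
    [0.9, 0.0, 0.1],
    [0.1, 0.8, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.9, 0.0],
])
scores, explained = pca(X, k=2)
```

The first component separates the two site communities, the analogue of the surface-water versus groundwater split the survey recovers.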
Griffin, Kingsley J; Hedge, Luke H; González-Rivero, Manuel; Hoegh-Guldberg, Ove I; Johnston, Emma L
2017-07-01
Historically, marine ecologists have lacked efficient tools that are capable of capturing detailed species distribution data over large areas. Emerging technologies such as high-resolution imaging and associated machine-learning image-scoring software are providing new tools to map species over large areas in the ocean. Here, we combine a novel diver propulsion vehicle (DPV) imaging system with free-to-use machine-learning software to semi-automatically generate dense and widespread abundance records of a habitat-forming alga over ~5,000 m² of temperate reef. We employ replicable spatial techniques to test the effectiveness of traditional diver-based sampling, and better understand the distribution and spatial arrangement of one key algal species. We found that the effectiveness of a traditional survey depended on the level of spatial structuring, and generally 10-20 transects (50 × 1 m) were required to obtain reliable results. This represents 2-20 times greater replication than has been collected in previous studies. Furthermore, we demonstrate the usefulness of fine-resolution distribution modeling for understanding patterns in canopy algae cover at multiple spatial scales, and discuss applications to other marine habitats. Our analyses demonstrate that semi-automated methods of data gathering and processing provide more accurate results than traditional methods for describing habitat structure at seascape scales, and therefore represent vastly improved techniques for understanding and managing marine seascapes.
Fiáth, Richárd; Beregszászi, Patrícia; Horváth, Domonkos; Wittner, Lucia; Aarts, Arno A A; Ruther, Patrick; Neves, Hercules P; Bokor, Hajnalka; Acsády, László; Ulbert, István
2016-11-01
Recording simultaneous activity of a large number of neurons in distributed neuronal networks is crucial to understand higher order brain functions. We demonstrate the in vivo performance of a recently developed electrophysiological recording system comprising a two-dimensional, multi-shank, high-density silicon probe with integrated complementary metal-oxide semiconductor electronics. The system implements the concept of electronic depth control (EDC), which enables the electronic selection of a limited number of recording sites on each of the probe shafts. This innovative feature of the system permits simultaneous recording of local field potentials (LFP) and single- and multiple-unit activity (SUA and MUA, respectively) from multiple brain sites with high quality and without the actual physical movement of the probe. To evaluate the in vivo recording capabilities of the EDC probe, we recorded LFP, MUA, and SUA in acute experiments from cortical and thalamic brain areas of anesthetized rats and mice. The advantages of large-scale recording with the EDC probe are illustrated by investigating the spatiotemporal dynamics of pharmacologically induced thalamocortical slow-wave activity in rats and by the two-dimensional tonotopic mapping of the auditory thalamus. In mice, spatial distribution of thalamic responses to optogenetic stimulation of the neocortex was examined. Utilizing the benefits of the EDC system may result in a higher yield of useful data from a single experiment compared with traditional passive multielectrode arrays, and thus in the reduction of animals needed for a research study. Copyright © 2016 the American Physiological Society.
NASA Astrophysics Data System (ADS)
Habarulema, John Bosco; Yizengaw, Endawoke; Katamzi-Joseph, Zama T.; Moldwin, Mark B.; Buchert, Stephan
2018-01-01
This paper discusses the ionosphere's response to the largest storm of solar cycle 24 during 16-18 March 2015. We have used the Global Navigation Satellite Systems (GNSS) total electron content data to study large-scale traveling ionospheric disturbances (TIDs) over the American, African, and Asian regions. Equatorward large-scale TIDs propagated and crossed the equator to the other side of the hemisphere especially over the American and Asian sectors. Poleward TIDs with velocities in the range ≈400-700 m/s have been observed during local daytime over the American and African sectors with origin from around the geomagnetic equator. Our investigation over the American sector shows that poleward TIDs may have been launched by increased Lorentz coupling as a result of penetrating electric field during the southward turning of the interplanetary magnetic field, Bz. We have observed increase in SWARM satellite electron density (Ne) at the same time when equatorward large-scale TIDs are visible over the European-African sector. The altitude Ne profiles from ionosonde observations show a possible link that storm-induced TIDs may have influenced the plasma distribution in the topside ionosphere at SWARM satellite altitude.
Mapping the universe in three dimensions
Haynes, Martha P.
1996-01-01
The determination of the three-dimensional layout of galaxies is critical to our understanding of the evolution of galaxies and the structures in which they lie, to our determination of the fundamental parameters of cosmology, and to our understanding of both the past and future histories of the universe at large. The mapping of the large scale structure in the universe via the determination of galaxy red shifts (Doppler shifts) is a rapidly growing industry thanks to technological developments in detectors and spectrometers at radio and optical wavelengths. First-order application of the red shift-distance relation (Hubble’s law) allows the analysis of the large-scale distribution of galaxies on scales of hundreds of megaparsecs. Locally, the large-scale structure is very complex but the overall topology is not yet clear. Comparison of the observed red shifts with ones expected on the basis of other distance estimates allows mapping of the gravitational field and the underlying total density distribution. The next decade holds great promise for our understanding of the character of large-scale structure and its origin. PMID:11607714
Wildhaber, Mark L.; Wikle, Christopher K.; Anderson, Christopher J.; Franz, Kristie J.; Moran, Edward H.; Dey, Rima; Mader, Helmut; Kraml, Julia
2012-01-01
Climate change operates over a broad range of spatial and temporal scales. Understanding its effects on ecosystems requires multi-scale models. For understanding effects on fish populations in riverine ecosystems, climate predicted by coarse-resolution Global Climate Models must be downscaled through Regional Climate Models to watersheds, to river hydrology, and finally to population response. An additional challenge is quantifying sources of uncertainty, given the highly nonlinear nature of interactions between climate variables and community-level processes. We present a modeling approach for understanding and accommodating uncertainty by applying multi-scale climate models and a hierarchical Bayesian modeling framework to Midwest fish population dynamics and by linking models for system components together by formal rules of probability. The proposed hierarchical modeling approach accounts for sources of uncertainty in forecasts of community or population response. The goal is to evaluate potential distributional changes in an ecological system, given distributional changes implied by a series of linked climate and system models under various emissions/use scenarios. This understanding will aid evaluation of management options for coping with global climate change. In our initial analyses, we found that predicted pallid sturgeon population responses depended on the climate scenario considered.
Baity-Jesi, Marco; Calore, Enrico; Cruz, Andres; Fernandez, Luis Antonio; Gil-Narvión, José Miguel; Gordillo-Guerrero, Antonio; Iñiguez, David; Maiorano, Andrea; Marinari, Enzo; Martin-Mayor, Victor; Monforte-Garcia, Jorge; Muñoz Sudupe, Antonio; Navarro, Denis; Parisi, Giorgio; Perez-Gaviro, Sergio; Ricci-Tersenghi, Federico; Ruiz-Lorenzo, Juan Jesus; Schifano, Sebastiano Fabio; Tarancón, Alfonso; Tripiccione, Raffaele; Yllanes, David
2017-01-01
We have performed a very accurate computation of the nonequilibrium fluctuation–dissipation ratio for the 3D Edwards–Anderson Ising spin glass, by means of large-scale simulations on the special-purpose computers Janus and Janus II. This ratio (computed for finite times on very large, effectively infinite, systems) is compared with the equilibrium probability distribution of the spin overlap for finite sizes. Our main result is a quantitative statics-dynamics dictionary, which could allow the experimental exploration of important features of the spin-glass phase without requiring uncontrollable extrapolations to infinite times or system sizes. PMID:28174274
Experiences with hypercube operating system instrumentation
NASA Technical Reports Server (NTRS)
Reed, Daniel A.; Rudolph, David C.
1989-01-01
The difficulties in conceptualizing the interactions among a large number of processors make it difficult both to identify the sources of inefficiencies and to determine how a parallel program could be made more efficient. This paper describes an instrumentation system that can trace the execution of distributed memory parallel programs by recording the occurrence of parallel program events. The resulting event traces can be used to compile summary statistics that provide a global view of program performance. In addition, visualization tools permit the graphic display of event traces. Visual presentation of performance data is particularly useful, indeed, necessary for large-scale parallel computers; the enormous volume of performance data mandates visual display.
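The event-trace idea described in this abstract is easy to illustrate. The sketch below is a reader's toy, not the paper's instrumentation system: it records (timestamp, processor, event) tuples and compiles simple per-event summary statistics; the class name and API are invented for illustration.

```python
import time
from collections import defaultdict

class EventTracer:
    """Toy event tracer in the spirit of the instrumentation described
    above: record parallel-program events, then summarize them globally.
    Names and API are illustrative, not the paper's."""

    def __init__(self):
        self.trace = []  # list of (timestamp, processor, event) tuples

    def record(self, proc, event):
        self.trace.append((time.time(), proc, event))

    def summary(self):
        """Compile summary statistics: occurrence counts per event type."""
        counts = defaultdict(int)
        for _, _, event in self.trace:
            counts[event] += 1
        return dict(counts)

tracer = EventTracer()
tracer.record(0, "send")
tracer.record(1, "recv")
tracer.record(0, "send")
stats = tracer.summary()  # e.g. how many sends vs receives occurred
```

A real system would also timestamp with a synchronized clock and feed the trace to visualization tools, as the abstract notes.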
Using DoD Maps to Examine the Influence of Large Wood on Channel Morphodynamics
NASA Astrophysics Data System (ADS)
MacKenzie, L. C.; Eaton, B. C.
2012-12-01
Since the advent of logging and slash burning, many streams in British Columbia have experienced changes to the amount of large wood added to or removed from these systems, which has, in turn, influenced the storage and movement of sediment within these channels. This set of flume experiments examines and quantifies the impacts of large wood on the reach-scale morphodynamics. Understanding the relation between the wood load and channel morphodynamics is important when assessing the quality of the aquatic habitat of a stream. The experiments were conducted using a fixed-bank, mobile bed Froude-scaled physical model of Fishtrap Creek, British Columbia, built in a shallow flume that is 1.5 m wide and 11 m long. The stream table was run without wood until it reached equilibrium at which point wood pieces of varying sizes were added to the channel. The bed morphology was surveyed using a laser profiling system at five-hour intervals. The laser profiles were then interpolated to create digital elevation models (DEM) from which DEM of difference (DoD) maps were produced. Analysis of the DoD maps focused on quantifying and locating differences in the distribution of sediment storage, erosion, and deposition between the runs as well as those induced by the addition of large wood into the stream channel. We then assessed the typical influence of individual pieces and of jams on pool frequency, size and distribution along the channels.
Scale relativity and hierarchical structuring of planetary systems
NASA Astrophysics Data System (ADS)
Galopeau, P. H. M.; Nottale, L.; da Rocha, D.; Tran Minh, N.
2003-04-01
The theory of scale relativity, applied to macroscopic gravitational systems like planetary systems, allows one to predict quantization laws for several key parameters characterizing those systems (distance between planets and central star, obliquity, eccentricity, ...), which are organized in a hierarchical way. In the framework of the scale relativity approach, one demonstrates that the motion (at relatively large time scales) of the bodies in planetary systems, described in terms of fractal geodesic trajectories, is governed by a Schrödinger-like equation. Preferential orbits are predicted in terms of probability density peaks with semi-major axis given by a_n = GMn^2/w^2 (M is the mass of the central star and w is a velocity close to 144 km s^-1 in the case of our inner solar system and of the presently observed exoplanets). The velocity of the planet orbiting at this distance satisfies the relation v_n = w/n. Moreover, the mass distribution of the planets in our solar system can be accounted for in this model. These predictions are in good agreement with the observed values of the actual orbital parameters. Furthermore, the exoplanets recently discovered around nearby stars also follow the same law, with the same constant, in a highly significant statistical way. The theory of scale relativity also predicts structures for the obliquities and inclinations of the planets and satellites: the probability densities of their distribution between 0 and π are expected to display peaks at particular angles θ_k = kπ/n. A statistical agreement is obtained for our solar system with n = 7. Another prediction concerns the distribution of the planets' eccentricities e. The theory foresees a quantization law e = k/n, where k is an integer and n is the quantum number that characterizes the semi-major axes. The presently known exoplanet eccentricities are compatible with this theoretical prediction.
Finally, although all these planetary systems may look very different from our solar system, they actually present universal structures comparable to ours, so that a high probability to discover exoplanets having orbital characteristics very similar to the Earth's ones can be expected.
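The quoted quantization laws can be evaluated directly. The sketch below is a reader's illustration, not the authors' code: it plugs the quoted w ≈ 144 km/s and the solar mass into a_n = GMn^2/w^2 and v_n = w/n; identifying Mercury, Venus, and Earth with n = 3, 4, 5 follows the scale-relativity literature and is stated here as an assumption.

```python
# Illustrative check of a_n = G M n^2 / w^2 and v_n = w / n for the
# inner solar system, with w = 144 km/s as quoted in the abstract.
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
W = 144e3         # quantization velocity, m/s
AU = 1.496e11     # astronomical unit, m

def semi_major_axis(n):
    """Predicted probability-density peak for quantum number n, in AU."""
    return G * M_SUN * n**2 / W**2 / AU

def orbital_velocity(n):
    """Predicted orbital velocity v_n = w/n, in km/s."""
    return W / n / 1e3

# Assumed identification: Mercury, Venus, Earth near n = 3, 4, 5.
for n in (3, 4, 5):
    print(f"n={n}: a ≈ {semi_major_axis(n):.2f} AU, "
          f"v ≈ {orbital_velocity(n):.1f} km/s")
```

With these constants, n = 3 yields roughly 0.39 AU and 48 km/s, close to Mercury's observed semi-major axis (0.387 AU) and mean orbital speed.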
Validating Bayesian truth serum in large-scale online human experiments.
Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad
2017-01-01
Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown to be effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, and find that BTS treatment impacts the resulting distributions of answers.
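For readers unfamiliar with the mechanics, the following is a condensed sketch of the BTS score as introduced by Prelec (2004): each respondent earns an information score (is your answer more common than people predicted?) plus a weighted prediction score (how well did you predict the population's answers?). The function name, toy data, and the numerical floor guarding the logarithms are this sketch's own choices, not from the paper.

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Sketch of Bayesian truth serum scoring (after Prelec, 2004).
    answers: chosen option index per respondent.
    predictions: per-respondent predicted population distribution
      over the K options (each sums to 1).
    Returns one score per respondent: information score plus
    alpha-weighted prediction score."""
    n, k = len(answers), len(predictions[0])
    floor = 1e-9  # guard against log(0); an implementation choice
    # Empirical endorsement frequencies x̄_j.
    xbar = [max(sum(a == j for a in answers) / n, floor) for j in range(k)]
    # Geometric mean of predictions ȳ_j.
    ybar = [math.exp(sum(math.log(max(p[j], floor)) for p in predictions) / n)
            for j in range(k)]
    scores = []
    for a, p in zip(answers, predictions):
        info = math.log(xbar[a] / ybar[a])  # "surprisingly common" bonus
        pred = sum(xbar[j] * math.log(max(p[j], floor) / xbar[j])
                   for j in range(k))
        scores.append(info + alpha * pred)
    return scores

# Toy poll: 3 respondents, 2 options.
scores = bts_scores([0, 0, 1],
                    [[0.7, 0.3], [0.6, 0.4], [0.5, 0.5]])
```

The large-sample reliance the abstract mentions is visible here: x̄ and ȳ are population estimates, so the score is only meaningful when n is large.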
Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities
NASA Astrophysics Data System (ADS)
Doster, F.; Celia, M. A.; Nordbotten, J. M.
2012-12-01
Many applications of multiphase flow in porous media, including CO2 storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique, because the parameters and parameter functions of classical formulations of two-phase flow in porous media (the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations) show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities depends on the vertical length scale. By studying the transition between the two extreme cases (the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents), we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents.
Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
Distributed sensor networks: a cellular nonlinear network perspective.
Haenggi, Martin
2003-12-01
Large-scale networks of integrated wireless sensors are becoming increasingly tractable. Advances in hardware technology and engineering design have led to dramatic reductions in size, power consumption, and cost for digital circuitry and wireless communications. Networking, self-organization, and distributed operation are crucial ingredients for harnessing the sensing, computing, and communication capabilities of the nodes into a complete system. This article shows that such networks can be considered cellular nonlinear networks (CNNs), and that their analysis and design may greatly benefit from the rich theoretical results available for CNNs.
A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.
The evolution of the power system into the reliable, efficient, and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables, and storage. The energy management system of the microgrid can also sell (buy) excess (needed) energy to (from) the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework
2012-01-01
Background: For shotgun mass spectrometry based proteomics, the most computationally expensive step is matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results: We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion: The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
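The search pattern behind Hydra maps naturally onto MapReduce: the map phase scores each spectrum against candidate peptides, and the reduce phase keeps the best match per spectrum. The sketch below is emphatically not Hydra's code: the K-score is replaced by a naive shared-peak count, the peptide database and spectra are toy data, and Hadoop's shuffle is simulated with a sort in plain Python.

```python
from itertools import groupby

def mapper(spectrum_id, peaks, peptide_db):
    """Map: emit (spectrum_id, (peptide, score)) for every candidate.
    Scoring here is a toy shared-peak count, not the K-score."""
    for peptide, theoretical_peaks in peptide_db.items():
        score = len(set(peaks) & set(theoretical_peaks))
        if score:
            yield spectrum_id, (peptide, score)

def reducer(spectrum_id, matches):
    """Reduce: keep the best-scoring peptide for each spectrum."""
    return spectrum_id, max(matches, key=lambda m: m[1])

# Toy data (invented for illustration).
peptide_db = {"PEPTIDEA": [100, 200, 300], "PEPTIDEB": [100, 400, 500]}
spectra = {"s1": [100, 200, 300], "s2": [400, 500, 600]}

emitted = [kv for sid, peaks in spectra.items()
           for kv in mapper(sid, peaks, peptide_db)]
emitted.sort(key=lambda kv: kv[0])  # stands in for Hadoop's shuffle/sort
results = {sid: reducer(sid, [m for _, m in group])[1]
           for sid, group in groupby(emitted, key=lambda kv: kv[0])}
```

The scalability claim in the abstract comes from this shape: map tasks over spectra are embarrassingly parallel, so throughput grows with the number of cluster nodes.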
Universality of local dissipation scales in buoyancy-driven turbulence.
Zhou, Quan; Xia, Ke-Qing
2010-03-26
We report an experimental investigation of the local dissipation scale field η in turbulent thermal convection. Our results reveal two types of universality of η. The first is that, for the same flow, the probability density functions (PDFs) of η are insensitive to turbulent intensity and to large-scale inhomogeneity and anisotropy of the system. The second is that the small-scale dissipation dynamics in buoyancy-driven turbulence can be described by the same models developed for homogeneous and isotropic turbulence. However, the exact functional form of the PDF of the local dissipation scale is not universal with respect to different types of flows, but depends on the integral-scale velocity boundary condition, which is found to have an exponential, rather than Gaussian, distribution in turbulent Rayleigh-Bénard convection.
Properties of on-line social systems
NASA Astrophysics Data System (ADS)
Grabowski, A.; Kruszewska, N.; Kosiński, R. A.
2008-11-01
We study the properties of five different social systems: (i) an internet society of friends consisting of over 10^6 people, (ii) a social network of 3 × 10^4 individuals who interact in a large virtual world of Massive Multiplayer Online Role Playing Games (MMORPGs), (iii) over 10^6 users of a music community website, (iv) over 5 × 10^6 users of a gamers' community server, and (v) over 0.25 × 10^6 users of a book admirers' website. Individuals included in a large social network form an Internet community and organize themselves in groups of different sizes. The purposes of these systems, as well as their methods of creating new connections, are different; however, we found that the properties of these networks are very similar. The distribution of network component sizes follows a power-law scaling form. In all five systems we have found interesting scaling laws concerning human dynamics. Our research shows how long people remain interested in a single task, how much time they devote to it, and how quickly they make friends. It is surprising that the time evolution of an individual's connectivity is very similar in each system.
Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.
Guo, Tianjiao; Englehardt, James D; Fallon, Howard J
While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83 / 1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.
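The scale optimization behind the "49 plants" result balances two opposing costs: treatment cost per gallon falls with plant size (economies of scale), while conveyance cost rises as each plant serves a wider area. The toy model below illustrates only that trade-off; every coefficient is invented for this sketch and none are the paper's fitted values, so the optimal plant count it produces is illustrative, not a reproduction of the study.

```python
# Toy plant-scale economics: minimize total unit cost over the number
# of plants serving a fixed population. Coefficients are invented.
POPULATION = 671_823  # service population from the abstract

def total_unit_cost(n_plants, a=3.0, b=0.4, c=4.5e-4):
    """Illustrative $/kgal cost for n_plants equally sized plants:
    treatment cost shrinks with plant size, conveyance cost grows
    with the population (and hence area) each plant serves."""
    served = POPULATION / n_plants        # people per plant
    treatment = a * served ** -b          # economies of scale
    conveyance = c * served ** 0.5        # wider service area costs more
    return treatment + conveyance

best = min(range(1, 200), key=total_unit_cost)
```

The interior minimum arises because one term is decreasing and the other increasing in plant size; the study performs this kind of trade-off with real population-distribution and topography data.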
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks
NASA Technical Reports Server (NTRS)
Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng
2005-01-01
Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks, assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; such large-scale networks must be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
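The flavor of fully distributed consensus described here can be shown with a classic average-consensus iteration, in which each mote repeatedly nudges its value toward its neighbors' values using only local information. This is a generic textbook scheme, not the paper's specific protocol; the 4-mote chain topology, readings, and step size are all illustrative.

```python
# Distributed average consensus sketch: no mote knows the global state,
# yet all converge to the global mean of the initial readings.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-mote chain
x = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}            # local sensor readings
eps = 0.25  # step size; must be below 1/(max degree) for stability

for _ in range(200):
    # Synchronous local update: each mote moves toward its neighbors.
    x = {i: x[i] + eps * sum(x[j] - x[i] for j in neighbors[i])
         for i in x}

# Every mote now sits near the global mean (25.0), with no central control.
```

Convergence is geometric, governed by the spectrum of the network's graph Laplacian, which is why the scheme scales: each step costs each mote only one exchange per neighbor.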
Distributed and grid computing projects with research focus in human health.
Diomidous, Marianna; Zikos, Dimitrios
2012-01-01
Distributed systems and grid computing systems are used to connect several computers to obtain a higher level of performance in order to solve a problem. During the last decade, projects have used the World Wide Web to aggregate individuals' CPU power for research purposes. This paper presents the existing active large-scale distributed and grid computing projects with a research focus on human health. Eleven active projects with more than 2000 Processing Units (PUs) each were found and are presented. The research focus for most of them is molecular biology, specifically understanding or predicting protein structure through simulation, comparing proteins, genomic analysis for disease-provoking genes, and drug design. Though not always explicitly stated, common target diseases include HIV, dengue, Duchenne muscular dystrophy, Parkinson's disease, various types of cancer, and influenza. Other diseases include malaria, anthrax, and Alzheimer's disease. The need for national initiatives and European collaboration for larger-scale projects is stressed, to raise citizens' awareness and encourage participation in order to create a culture of internet volunteering altruism.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stewart, J; Lindsay, P; University of Toronto, Toronto
Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter-scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method, with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial-frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization and will require 178.3 minutes for automated delivery.
Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. Funding Support: This work is supported by funding from the Natural Sciences and Engineering Research Council of Canada and a Mitacs Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
Design for Run-Time Monitor on Cloud Computing
NASA Astrophysics Data System (ADS)
Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon
Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications, as well as infrastructure, as services over the Internet. A cloud is a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as underlying hardware through performance counters, optimizing its computing configuration based on the analyzed data.
Extreme-Scale De Novo Genome Assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georganas, Evangelos; Hofmeyr, Steven; Egan, Rob
De novo whole genome assembly reconstructs genomic sequence from short, overlapping, and potentially erroneous DNA segments and is one of the most important computations in modern genomics. This work presents HipMer, a high-quality end-to-end de novo assembler designed for extreme-scale analysis, via efficient parallelization of the Meraculous code. Genome assembly software has many components, each of which stresses different parts of a computer system. This chapter explains the computational challenges involved in each step of the HipMer pipeline, the key distributed data structures, and communication costs in detail. We present performance results for assembling the human genome and the large hexaploid wheat genome on large supercomputers up to tens of thousands of cores.
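A first stage of de Bruijn-style assemblers in the Meraculous lineage is k-mer analysis: sliding a window of length k over every read and counting occurrences. The toy below shows only that serial kernel; HipMer's contribution is distributing this counting (and the downstream graph traversal) across thousands of nodes, which this sketch does not attempt. Reads and k are invented example data.

```python
from collections import Counter

def count_kmers(reads, k):
    """Toy k-mer counter: the first stage of de Bruijn-graph assembly.
    Real assemblers distribute this table across many nodes and filter
    low-count k-mers as likely sequencing errors."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return counts

reads = ["ACGTACGT", "CGTACGTT"]  # invented example reads
kmers = count_kmers(reads, 4)
```

The distributed data structure the chapter describes is essentially a partitioned version of this hash table, where communication cost comes from routing each k-mer to its owning node.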
GenASiS Basics: Object-oriented utilitarian functionality for large-scale physics simulations
Cardall, Christian Y.; Budiardja, Reuben D.
2015-06-11
Aside from numerical algorithms and problem setup, large-scale physics simulations on distributed-memory supercomputers require more basic utilitarian functionality, such as physical units and constants; display to the screen or standard output device; message passing; I/O to disk; and runtime parameter management and usage statistics. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of this sort of rudimentary functionality, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes compose the Basics division of our developing astrophysics simulation code GenASiS (General Astrophysical Simulation System), but their fundamental nature makes them useful for physics simulations in many fields.
Anomalies in the GRBs' distribution
NASA Astrophysics Data System (ADS)
Bagoly, Zsolt; Horvath, Istvan; Hakkila, Jon; Toth, Viktor
2015-08-01
Gamma-ray bursts (GRBs) are the most luminous objects known: they outshine their host galaxies, making them ideal candidates for probing large-scale structure. Earlier, the angular distribution of the different GRB groups (long, intermediate and short) was studied in detail with different methods, and it was found that the short and intermediate groups showed deviations from full randomness at different levels (e.g. Vavrek, R., et al. 2008). However, these results were based only on angular measurements from the BATSE experiment, without any spatial distance indicator. Currently we have more than 361 GRBs with precisely measured positions, optical afterglows and redshifts, mainly due to the observations of the Swift mission. This sample is now large enough that its homogeneity and isotropy on large scales can be checked. We have recently (Horvath, I. et al., 2014) identified a large clustering of gamma-ray bursts at redshift z ~ 2 in the general direction of the constellations of Hercules and Corona Borealis. This angular excess cannot be entirely attributed to known selection biases, making its existence due to chance unlikely. The scale on which the clustering occurs is disturbingly large, about 2-3 Gpc: the underlying distribution of matter suggested by this cluster is big enough to question standard assumptions about Universal homogeneity and isotropy.
Davis, Genevieve E; Baumgartner, Mark F; Bonnell, Julianne M; Bell, Joel; Berchok, Catherine; Bort Thornton, Jacqueline; Brault, Solange; Buchanan, Gary; Charif, Russell A; Cholewiak, Danielle; Clark, Christopher W; Corkeron, Peter; Delarue, Julien; Dudzinski, Kathleen; Hatch, Leila; Hildebrand, John; Hodge, Lynne; Klinck, Holger; Kraus, Scott; Martin, Bruce; Mellinger, David K; Moors-Murphy, Hilary; Nieukirk, Sharon; Nowacek, Douglas P; Parks, Susan; Read, Andrew J; Rice, Aaron N; Risch, Denise; Širović, Ana; Soldevilla, Melissa; Stafford, Kate; Stanistreet, Joy E; Summers, Erin; Todd, Sean; Warde, Ann; Van Parijs, Sofie M
2017-10-18
Given new distribution patterns of the endangered North Atlantic right whale (NARW; Eubalaena glacialis) population in recent years, an improved understanding of spatio-temporal movements is imperative for the conservation of this species. While visual data have so far provided most information on NARW movements, this study used passive acoustic monitoring (PAM) to better capture year-round NARW presence. This project used PAM data from 2004 to 2014 collected by 19 organizations throughout the western North Atlantic Ocean. Overall, data from 324 recorders (35,600 days) were processed and analyzed using a classification and detection system. Results highlight almost year-round habitat use of the western North Atlantic Ocean, with a decrease in detections in waters off Cape Hatteras, North Carolina in summer and fall. Data collected post 2010 showed an increased NARW presence in the mid-Atlantic region and a simultaneous decrease in the northern Gulf of Maine. In addition, NARWs were widely distributed across most regions throughout winter months. This study demonstrates that a large-scale analysis of PAM data provides significant value to understanding and tracking shifts in large whale movements over long time scales.
Statistics of velocity fluctuations of Geldart A particles in a circulating fluidized bed riser
Vaidheeswaran, Avinash; Shaffer, Franklin; Gopalan, Balaji
2017-11-21
Here, the statistics of fluctuating velocity components are studied in the riser of a closed-loop circulating fluidized bed with fluid catalytic cracking catalyst particles. Our analysis shows distinct similarities as well as deviations compared to existing theories and bench-scale experiments. The study confirms anisotropic and non-Maxwellian distribution of fluctuating velocity components. The velocity distribution functions (VDFs) corresponding to transverse fluctuations exhibit symmetry, and follow a stretched-exponential behavior up to three standard deviations. The form of the transverse VDF is largely determined by interparticle interactions. The tails become more overpopulated with an increase in particle loading. The observed deviations from the Gaussian distribution are represented using the leading order term in the Sonine expansion, which is commonly used to approximate the VDFs in kinetic theory for granular flows. The vertical fluctuating VDFs are asymmetric and the skewness shifts as the wall is approached. In comparison to transverse fluctuations, the vertical VDF is determined by the local hydrodynamics. This is an observation of particle velocity fluctuations in a large-scale system and their quantitative comparison with the Maxwell-Boltzmann statistics.
Suboptimal distributed control and estimation: application to a four coupled tanks system
NASA Astrophysics Data System (ADS)
Orihuela, Luis; Millán, Pablo; Vivas, Carlos; Rubio, Francisco R.
2016-06-01
The paper proposes an innovative estimation and control scheme that enables the distributed monitoring and control of large-scale processes. The proposed approach considers a discrete linear time-invariant process controlled by a network of agents that may both collect information about the evolution of the plant and apply control actions to drive its behaviour. The problem makes full sense when local observability/controllability is not assumed and the communication between agents can be exploited to reach system-wide goals. Additionally, to reduce the agents' bandwidth requirements and power consumption, an event-based communication policy is studied. The design procedure guarantees system stability, allowing the designer to trade off performance, control effort and communication requirements. The obtained controllers and observers are implemented in a fully distributed fashion. To illustrate the performance of the proposed technique, experimental results on a quadruple-tank process are provided.
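In the single-agent linear case, observers of this kind reduce to a Luenberger-style correction from measurements. A minimal sketch with invented plant numbers (not the paper's four-tank model or its distributed design):

```python
def matvec(A, x):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1.0, 0.1], [0.0, 0.9]]   # discrete LTI plant dynamics (invented numbers)
C = [1.0, 0.0]                 # only the first state is measured
L = [0.9, 0.5]                 # observer gain, chosen so A - L*C is stable

x = [1.0, -1.0]   # true state
xh = [0.0, 0.0]   # observer estimate, deliberately wrong at start

for _ in range(60):
    y = sum(c * xi for c, xi in zip(C, x))     # measurement from the plant
    yh = sum(c * xi for c, xi in zip(C, xh))   # observer's predicted output
    x = matvec(A, x)
    innov = y - yh                             # innovation drives the correction
    xh = [a + L[i] * innov for i, a in enumerate(matvec(A, xh))]

err = max(abs(a - b) for a, b in zip(x, xh))
print(err < 1e-3)  # → True
```

The estimation error obeys e⁺ = (A − LC)e, so any gain placing the eigenvalues of A − LC inside the unit circle drives the estimate to the true state; the paper's contribution is doing this jointly across communicating agents without local observability.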
NASA Astrophysics Data System (ADS)
Hsia, H.-M.; Chou, Y.-L.; Longman, R. W.
1983-07-01
The topics considered are related to measurements and controls in physical systems, the control of large scale and distributed parameter systems, chemical engineering systems, aerospace science and technology, thermodynamics and fluid mechanics, and computer applications. Subjects in structural dynamics are discussed, taking into account finite element approximations in transient analysis, buckling finite element analysis of flat plates, dynamic analysis of viscoelastic structures, the transient analysis of large frame structures by simple models, large amplitude vibration of an initially stressed thick plate, nonlinear aeroelasticity, a sensitivity analysis of a combined beam-spring-mass structure, and the optimal design and aeroelastic investigation of segmented windmill rotor blades. Attention is also given to dynamics and control of mechanical and civil engineering systems, composites, and topics in materials. For individual items see A83-44002 to A83-44061
Mass dependence of Higgs boson production at large transverse momentum through a bottom-quark loop
NASA Astrophysics Data System (ADS)
Braaten, Eric; Zhang, Hong; Zhang, Jia-Wei
2018-05-01
In the production of the Higgs through a bottom-quark loop, the transverse momentum distribution of the Higgs at large P_T is complicated by its dependence on two other important scales: the bottom quark mass m_b and the Higgs mass m_H. A strategy for simplifying the calculation of the cross section at large P_T is to calculate only the leading terms in its expansion in m_b²/P_T². In this paper, we consider the bottom-quark-loop contribution to the parton process q q̄ → H + g at leading order in α_s. We show that the leading power of 1/P_T² can be expressed in the form of a factorization formula that separates the large scale P_T from the scale of the masses. All the dependence on m_b and m_H can be factorized into a distribution amplitude for b b̄ in the Higgs, a distribution amplitude for b b̄ in a real gluon, and an end point contribution. The factorization formula can be used to organize the calculation of the leading terms in the expansion in m_b²/P_T² so that every calculation involves at most two scales.
The Convergence of High Performance Computing and Large Scale Data Analytics
NASA Astrophysics Data System (ADS)
Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.
2015-12-01
As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS), enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.
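The spatiotemporal-index idea can be illustrated with a toy relational table: a query's time and bounding box are mapped directly to the storage block holding that slice. Schema, field names, and block paths below are invented for the sketch, not ADAPT's actual layout.

```python
import sqlite3

# Toy index table: each row records which HDFS block covers a given
# variable, time window, and lat/lon box. All names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE chunk_index (
    var TEXT, t0 INT, t1 INT,
    lat0 REAL, lat1 REAL, lon0 REAL, lon1 REAL,
    hdfs_block TEXT)""")
db.executemany(
    "INSERT INTO chunk_index VALUES (?,?,?,?,?,?,?,?)",
    [("T2M", 0, 23, -90, 0, -180, 0, "hdfs://merra/block_001"),
     ("T2M", 0, 23, 0, 90, 0, 180, "hdfs://merra/block_002")])

def locate(var, t, lat, lon):
    """Return the HDFS blocks covering one (time, lat, lon) point."""
    rows = db.execute(
        """SELECT hdfs_block FROM chunk_index
           WHERE var=? AND t0<=? AND ?<=t1
             AND lat0<=? AND ?<=lat1 AND lon0<=? AND ?<=lon1""",
        (var, t, t, lat, lat, lon, lon)).fetchall()
    return [r[0] for r in rows]

print(locate("T2M", 12, 45.0, 10.0))  # → ['hdfs://merra/block_002']
```

The point of materializing the index in a relational database is that this lookup is a single indexed range query, avoiding a scan of the HDFS namespace for every analytic request.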
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
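The "statistical superposition" described above has the generic superstatistics form (notation mine, not the authors'):

```latex
% Signal distribution as a background-weighted mixture: p(x \mid \varepsilon)
% is the fast, short-time distribution conditioned on a slowly varying
% background variable \varepsilon, and f(\varepsilon) is the distribution
% of that background.
P(x) \;=\; \int_0^\infty f(\varepsilon)\, p(x \mid \varepsilon)\, \mathrm{d}\varepsilon
```

Which of the two universality classes appears in the signal, power-law or stretched-exponential tails, is then a statement about the tail behavior of the background distribution f(ε).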
Iron and copper release in drinking-water distribution systems.
Shi, Baoyou; Taylor, James S
2007-09-01
A large-scale pilot study was carried out to evaluate the impacts of changes in water source and treatment process on iron and copper release in water distribution systems. Finished surface waters, groundwaters, and desalinated waters were produced with seven different treatment systems and supplied to 18 pipe distribution systems (PDSs). The major water treatment processes included lime softening, ferric sulfate coagulation, reverse osmosis, nanofiltration, and integrated membrane systems. PDSs were constructed from PVC, lined cast iron, unlined cast iron, and galvanized pipes. Copper pipe loops were set up for corrosion monitoring. Results showed that surface water after ferric sulfate coagulation had low alkalinity and high sulfates, and consequently caused the highest iron release. Finished groundwater treated by conventional methods produced the lowest iron release but the highest copper release. The iron release of desalinated water was relatively high because of the water's high chloride level and low alkalinity. Both iron and copper release behaviors were influenced by temperature.
Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant
NASA Astrophysics Data System (ADS)
Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan
2017-05-01
Centralized photovoltaic (PV) systems have different fault characteristics from distributed PV systems due to the different system structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV not suitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was calculated considering the fault response of the PV array. Then, supported by the fault current analysis and the on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. It reveals that the OCRs at the downstream side of overhead lines may malfunction. In this case, a new relay scheme was proposed using directional distance elements. In PSCAD/EMTDC, a detailed PV system model was built and verified using the on-site testing data. Simulation results indicate that the proposed relay scheme could effectively solve the problems under various fault scenarios and PV plant output levels.
NASA Astrophysics Data System (ADS)
Ajo Franklin, J. B.; Wagner, A. M.; Lindsey, N.; Dou, S.; Bjella, K.; Daley, T. M.; Freifeld, B. M.; Ulrich, C.; Gelvin, A.; Morales, A.; James, S. R.; Saari, S.; Ekblaw, I.; Wood, T.; Robertson, M.; Martin, E. R.
2016-12-01
In a warming world, permafrost landscapes are being rapidly transformed by thaw, yielding surface subsidence and groundwater flow alteration. The same transformations pose a threat to arctic infrastructure and can induce catastrophic failure of the roads, runways, and pipelines on which human habitation depends. Scalable solutions to monitoring permafrost thaw dynamics are required to both quantitatively understand biogeochemical feedbacks as well as to protect built infrastructure from damage. Unfortunately, permafrost alteration happens over the time scale of climate change, years to decades, a decided challenge for testing new sensing technologies in a limited context. One solution is to engineer systems capable of rapidly thawing large permafrost units to allow short duration experiments targeting next-generation sensing approaches. We present preliminary results from a large-scale controlled permafrost thaw experiment designed to evaluate the utility of different geophysical approaches for tracking the cause, precursors, and early phases of thaw subsidence. We focus on the use of distributed fiber optic sensing for this challenge and deployed distributed temperature (DTS), strain (DSS), and acoustic (DAS) sensing systems in a 2D array to detect thaw signatures. A 10 x 15 x 1 m section of subsurface permafrost was heated using an array of 120 downhole heaters (60 W) at an experimental site near Fairbanks, AK. Ambient noise analysis of DAS datasets collected at the plot, coupled to shear wave inversion, was utilized to evaluate changes in shear wave velocity associated with heating and thaw. These measurements were confirmed by seismic surveys collected using a semi-permanent orbital seismic source activated on a daily basis.
Fiber optic measurements were complemented by subsurface thermistor and thermocouple arrays, timelapse total station surveys, LIDAR, secondary seismic measurements (geophone and broadband recordings), timelapse ERT, borehole NMR, soil moisture measurements, hydrologic measurements, and multi-angle photogrammetry. This unusually dense combination of measurement techniques provides an excellent opportunity to characterize the geophysical signatures of permafrost thaw in a controlled environment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano
Past works that focused on addressing power-quality and reliability concerns related to renewable energy resources (RESs) operating with business-as-usual practices have looked at the design of Volt/VAr and Volt/Watt strategies to regulate real or reactive powers based on local voltage measurements, so that terminal voltages are within acceptable levels. These control strategies have the potential of operating at the same time scale as distribution-system dynamics, and can therefore mitigate disturbances precipitated by fast time-varying loads and ambient conditions; however, they do not necessarily guarantee system-level optimality, and stability claims are mainly based on empirical evidence. On a different time scale, centralized and distributed optimal power flow (OPF) algorithms have been proposed to compute optimal steady-state inverter setpoints, so that power losses and voltage deviations are minimized and economic benefits to end-users providing ancillary services are maximized. However, traditional OPF schemes may offer decision-making capabilities that do not match the dynamics of distribution systems. Particularly, during the time required to collect data from all the nodes of the network (e.g., loads), solve the OPF, and subsequently dispatch setpoints, the underlying load, ambient, and network conditions may have already changed; in this case, the DER output powers would be consistently regulated around outdated setpoints, leading to suboptimal system operation and violation of relevant electrical limits. The present work focuses on the synthesis of distributed RES-inverter controllers that leverage the opportunities for fast feedback offered by power-electronics interfaced RESs. The overarching objective is to bridge the temporal gap between long-term system optimization and real-time control, to enable seamless RES integration at large scale with stability and efficiency guarantees, while congruently pursuing system-level optimization objectives.
The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. The proposed controllers enable an update of the power outputs at a time scale that is compatible with the underlying dynamics of loads and ambient conditions, and continuously drive the system operation towards OPF-based solutions.
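The feedback-optimization idea, where fresh measurements re-enter a gradient step each iteration instead of solving a full OPF offline, can be sketched on a made-up two-node linearized voltage model. All numbers (sensitivities, voltages, limits, step size) are invented for the toy; the step size is tuned to these toy sensitivities.

```python
# Toy feedback-optimization loop: a linearized voltage model v = v0 + R p,
# and a projected-gradient update that steers inverter setpoints p toward a
# voltage target while "measurements" of v re-enter every iteration.
R = [[0.05, 0.02], [0.02, 0.06]]   # voltage sensitivity to injections (symmetric)
v0 = [0.94, 0.93]                  # voltages at zero injection (p.u.)
v_ref, p_max, step = 1.0, 2.0, 250.0

p = [0.0, 0.0]
for _ in range(200):
    v = [v0[i] + sum(R[i][j] * p[j] for j in range(2)) for i in range(2)]  # "measure"
    grad = [sum(R[j][i] * (v[j] - v_ref) for j in range(2)) for i in range(2)]
    p = [min(p_max, max(0.0, p[i] - step * grad[i])) for i in range(2)]    # project onto [0, p_max]

v = [v0[i] + sum(R[i][j] * p[j] for j in range(2)) for i in range(2)]
print(all(abs(vi - v_ref) < 0.01 for vi in v))  # → True
```

Because the measured voltages, rather than a model prediction alone, drive each update, the setpoints track the optimum even as v0 drifts with load and ambient conditions, which is the temporal-gap argument made in the abstract.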
FROM FINANCE TO COSMOLOGY: THE COPULA OF LARGE-SCALE STRUCTURE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherrer, Robert J.; Berlind, Andreas A.; Mao, Qingqing
2010-01-01
Any multivariate distribution can be uniquely decomposed into marginal (one-point) distributions, and a function called the copula, which contains all of the information on correlations between the distributions. The copula provides an important new methodology for analyzing the density field in large-scale structure. We derive the empirical two-point copula for the evolved dark matter density field. We find that this empirical copula is well approximated by a Gaussian copula. We consider the possibility that the full n-point copula is also Gaussian and describe some of the consequences of this hypothesis. Future directions for investigation are discussed.
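Sklar's theorem underlies the decomposition: any joint distribution factors into its marginals plus a copula. A toy numerical check that a Gaussian-copula parameter survives rank-transforming the marginals away (synthetic data, not the paper's dark-matter density field):

```python
import math
import random
from statistics import NormalDist

random.seed(7)

# Draw a bivariate sample with Gaussian dependence parameter rho.
rho, n = 0.6, 4000
x, y = [], []
for _ in range(n):
    a, b = random.gauss(0, 1), random.gauss(0, 1)
    x.append(a)
    y.append(rho * a + math.sqrt(1 - rho ** 2) * b)

def to_uniform(v):
    """Empirical CDF transform: the marginal is discarded, only ranks remain."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    u = [0.0] * len(v)
    for r, i in enumerate(order):
        u[i] = (r + 0.5) / len(v)
    return u

# Under a Gaussian copula, pushing the rank-uniforms through the standard
# normal quantile and correlating recovers the copula parameter.
nd = NormalDist()
zx = [nd.inv_cdf(u) for u in to_uniform(x)]
zy = [nd.inv_cdf(u) for u in to_uniform(y)]
mx, my = sum(zx) / n, sum(zy) / n
cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy)) / n
var = lambda v, m: sum((a - m) ** 2 for a in v) / n
rho_hat = cov / math.sqrt(var(zx, mx) * var(zy, my))
print(f"copula correlation ≈ {rho_hat:.2f}")
```

The rank transform is exactly the step that isolates the copula: any monotone distortion of the marginals (e.g. the strongly skewed one-point density distribution) leaves rho_hat unchanged.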
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
Huang, H K
2011-07-01
The concept of PACS (picture archiving and communication system) was initiated in 1982 during the SPIE medical imaging conference in Newport Beach, CA. Since then PACS has matured to become an everyday clinical tool for image archiving, communication, display, and review. This paper follows the continuous development of PACS technology including Web-based PACS, PACS and ePR (electronic patient record), and enterprise PACS to ePR with image distribution (ID). The concept of large-scale Web-based enterprise PACS and ePR with image distribution is presented along with its implementation, clinical deployment, and operation. The Hong Kong Hospital Authority's (HKHA) integration of its home-grown clinical management system (CMS) with PACS and ePR with image distribution is used as a case study. The current concept and design criteria of the HKHA enterprise integration of the CMS, PACS, and ePR-ID for filmless healthcare delivery are discussed, followed by its work-in-progress and current status.
NASA Technical Reports Server (NTRS)
Hussain, A. K. M. F.
1980-01-01
Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated the near field of a 7.62 cm circular air jet at a Re of 32,000, with coherent structures induced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase-averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that use of the local time-average velocity or streamwise velocity produces large distortions.
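For reference, the Taylor ("frozen turbulence") hypothesis invoked above converts time derivatives at a fixed probe into streamwise spatial derivatives through a convection velocity U_c:

```latex
% Taylor's frozen-turbulence approximation: structures are assumed to advect
% past the stationary probe essentially unchanged at convection velocity U_c,
% so that
\frac{\partial}{\partial x} \;\approx\; -\frac{1}{U_c}\,\frac{\partial}{\partial t}
```

The distortions reported in the abstract arise when U_c is approximated by the local time-average or streamwise velocity, which varies substantially across the coherent structures themselves.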
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using Gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of Gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
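The logarithmic scaling of Gossip cycles is easy to reproduce in a stripped-down push-gossip model. This is plain rumor spreading, not any of the paper's three algorithms, but it shows why cycle counts grow like log of the system size:

```python
import random

random.seed(1)

def gossip_cycles(n):
    """Count push-gossip cycles until every one of n processes is informed.
    Each cycle, every informed process tells one uniformly random peer."""
    informed = {0}
    cycles = 0
    while len(informed) < n:
        for _ in list(informed):
            informed.add(random.randrange(n))
        cycles += 1
    return cycles

for n in (64, 1024, 16384):
    print(n, gossip_cycles(n))
```

The informed set can at most double per cycle, giving a log2(n) lower bound, while the classic rumor-spreading analysis puts the expected total near log2(n) + ln(n) cycles, hence the logarithmic scaling the abstract reports.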
NASA Astrophysics Data System (ADS)
Liu, Haitao; Huang, Zhaohui; Zhang, Xiaoguang; Fang, Minghao; Liu, Yan-gai; Wu, Xiaowen; Min, Xin
2018-01-01
Understanding the kinetic barrier and driving force for crystal nucleation and growth is decisive for the synthesis of nanowires with controllable yield and morphology. In this research, we developed an effective reaction system to synthesize very large scale α-Si3N4 nanowires (hundreds of milligrams) and carried out a comparative study to characterize the kinetic influence of gas precursor supersaturation and liquid metal catalyst. The phase composition, morphology, microstructure and photoluminescence properties of the as-synthesized products were characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, field emission scanning electron microscopy, transmission electron microscopy and room temperature photoluminescence measurement. The yield of the products relates not only to the reaction temperature (thermodynamic condition) but also to the distribution of gas precursors (kinetic condition). As revealed in this research, by controlling the gas diffusion process, the yield of the nanowire products could be greatly improved. The experimental results indicate that supersaturation is the dominant factor in the as-designed system rather than the catalyst. With excellent non-flammability and high thermal stability, the large scale α-Si3N4 products would have potential applications in improving the strength of high temperature ceramic composites. The photoluminescence spectrum of the α-Si3N4 shows a blue shift which could be valuable for future applications in blue-green emitting devices. Such large scale products are the basis of these applications.
Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks
2012-01-01
…of agents, and each agent attempts to form a coalition with its most profitable partner. The second algorithm builds upon the Shapley formula [37]… …ters at the second layer. These Category Layer clusters each represent a single resource, and agents join one or more clusters based on their…
Recurrence and interoccurrence behavior of self-organized complex phenomena
NASA Astrophysics Data System (ADS)
Abaimov, S. G.; Turcotte, D. L.; Shcherbakov, R.; Rundle, J. B.
2007-08-01
The sandpile, forest-fire and slider-block models are said to exhibit self-organized criticality. Associated natural phenomena include landslides, wildfires, and earthquakes. In all cases the frequency-size distributions are well approximated by power laws (fractals). Another important aspect of both the models and natural phenomena is the statistics of interval times. These statistics are particularly important for earthquakes. For earthquakes it is important to make a distinction between interoccurrence and recurrence times. Interoccurrence times are the interval times between earthquakes on all faults in a region, whereas recurrence times are interval times between earthquakes on a single fault or fault segment. In many, but not all cases, interoccurrence time statistics are exponential (Poissonian) and the events occur randomly. However, the distribution of recurrence times is often Weibull to a good approximation. In this paper we study the interval statistics of slip events using a slider-block model. The behavior of this model is sensitive to the stiffness α of the system, α = k_C/k_L, where k_C is the spring constant of the connector springs and k_L is the spring constant of the loader plate springs. For a soft system (small α) there are no system-wide events and the interoccurrence time statistics of the larger events are Poissonian. For a stiff system (large α), system-wide events dominate the energy dissipation and the statistics of the recurrence times between these system-wide events satisfy the Weibull distribution to a good approximation. We argue that this applicability of the Weibull distribution is due to the power-law (scale invariant) behavior of the hazard function, i.e. the probability that the next event will occur at a time t_0 after the last event has a power-law dependence on t_0. The Weibull distribution is the only distribution that has a scale invariant hazard function.
We further show that the onset of system-wide events is a well defined critical point. We find that the number of system-wide events N_SWE satisfies the scaling relation N_SWE ∝ (α - α_C)^δ, where α_C is the critical value of the stiffness. The system-wide events represent a new phase for the slider-block system.
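The hazard-function argument can be stated compactly. For a Weibull distribution of recurrence times with shape β and scale η:

```latex
% Weibull survival function and the resulting hazard rate (probability per
% unit time of the next event, given elapsed time t_0 since the last one):
S(t_0) = \exp\!\left[-\left(\frac{t_0}{\eta}\right)^{\beta}\right],
\qquad
h(t_0) = \frac{f(t_0)}{S(t_0)} = \frac{\beta}{\eta}\left(\frac{t_0}{\eta}\right)^{\beta - 1}
```

The hazard h(t_0) is a pure power law in t_0, so rescaling time changes only the prefactor, not the functional form; this scale invariance of the hazard is exactly the property singled out in the abstract as unique to the Weibull distribution.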
Numerical simulation of cloud and precipitation structure during GALE IOP-2
NASA Technical Reports Server (NTRS)
Robertson, F. R.; Perkey, D. J.; Seablom, M. S.
1988-01-01
A regional scale model, LAMPS (Limited Area Mesoscale Prediction System), is used to investigate cloud and precipitation structure that accompanied a short wave system during a portion of GALE IOP-2. A comparison of satellite imagery and model fields indicates that much of the large mesoscale organization of condensation has been captured by the simulation. In addition to reproducing a realistic phasing of two baroclinic zones associated with a split cold front, a reasonable simulation of the gross mesoscale cloud distribution has been achieved.
Workshop on Advances in Scientific Computation and Differential Equations (SCADE)
1994-07-18
Approved for public release; distribution unlimited. …called differential algebraic ODEs (DAEs). (Some important early research on this topic was by L. Petzold.) Both theoretically and in terms of… …completely specify the solution. In many physical systems, especially those in biology, or other large scale slowly responding systems, the inclusion of some…
Universal scaling relations in scale-free structure formation
NASA Astrophysics Data System (ADS)
Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.
2018-07-01
A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2 which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters, and even dark matter haloes. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power-law tail of dA/d ln Σ ∝ Σ^-1. In the case where structure formation is controlled by gravity the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM haloes) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation, and detailed `full physics' hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
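The dN/dM ∝ M^-2 mass function is easy to probe numerically: inverse-transform sampling from the pure power law, followed by the standard maximum-likelihood slope estimator, recovers the exponent. Synthetic data only; the estimator assumes a sharp lower cutoff M_min.

```python
import math
import random

random.seed(0)

# For p(M) = m_min / M^2 (M >= m_min) the CDF is F(M) = 1 - m_min/M,
# so inverting a uniform draw u gives M = m_min / (1 - u).
m_min, n = 1.0, 200_000
sample = [m_min / (1.0 - random.random()) for _ in range(n)]

# Maximum-likelihood estimator for the exponent of p(M) ∝ M^-alpha:
# alpha_hat = 1 + n / sum(ln(M_i / m_min)); should come out near 2.
alpha_hat = 1.0 + n / sum(math.log(m / m_min) for m in sample)
print(f"fitted slope ≈ {alpha_hat:.2f}")
```

The same estimator applied to observed masses (with an honest choice of M_min) is the usual way such "M^-2 to first order" claims are quantified.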
Universal Scaling Relations in Scale-Free Structure Formation
NASA Astrophysics Data System (ADS)
Guszejnov, Dávid; Hopkins, Philip F.; Grudić, Michael Y.
2018-04-01
A large number of astronomical phenomena exhibit remarkably similar scaling relations. The most well-known of these is the mass distribution dN/dM ∝ M^-2 which (to first order) describes stars, protostellar cores, clumps, giant molecular clouds, star clusters and even dark matter halos. In this paper we propose that this ubiquity is not a coincidence and that it is the generic result of scale-free structure formation where the different scales are uncorrelated. We show that all such systems produce a mass function proportional to M^-2 and a column density distribution with a power-law tail of dA/d ln Σ ∝ Σ^-1. In the case where structure formation is controlled by gravity the two-point correlation becomes ξ_2D ∝ R^-1. Furthermore, structures formed by such processes (e.g. young star clusters, DM halos) tend to a ρ ∝ R^-3 density profile. We compare these predictions with observations, analytical fragmentation cascade models, semi-analytical models of gravito-turbulent fragmentation and detailed "full physics" hydrodynamical simulations. We find that these power laws are good first-order descriptions in all cases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herman Atkins, Shoshone-Paiute; Mark Hannifan, New West Technologies
A range of sustainable energy options were assessed for feasibility in addressing chronic electric grid reliability problems at Duck Valley IR. Wind power and building energy efficiency were determined to have the most merit, with the Duck Valley Tribes now well positioned to pursue large scale wind power development for on- and off-reservation sales.
Challenges in Managing Trustworthy Large-scale Digital Science
NASA Astrophysics Data System (ADS)
Evans, B. J. K.
2017-12-01
The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones that are usable by broad communities, and they far exceed the raw instrument outputs in volume. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support reliable management of the information across distributed resources. Users necessarily rely on these underlying "black boxes" in order to be productive and produce new scientific outcomes. These systems depend on computational infrastructure, interconnected software systems, and information capture systems, ranging from the fundamental reliability of the compute hardware, through the system software stacks and libraries, to the model software itself. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store the historical versions of all model and derived data. Instead, the emphasis is on the ability to access the updated products, and on the reliability with which previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway to address these issues.
Advanced Operating System Technologies
NASA Astrophysics Data System (ADS)
Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro
In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling data flows of several gigabytes per second with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high-energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different, and surely much more interconnected, than the one we see now. Very ambitious projects exist, planning to link towns, nations and the world in a single "Data Highway". Teleconferencing, video on demand, and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering areas.
The purpose of our group is to collect the outcome of these different research efforts, and to establish a working environment where the different ideas and techniques can be tested, evaluated and possibly extended to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994, with a research agreement between CERN and Chorus Systemes (France), a world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, such as POSIX, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system suitable for simulation and large-scale verification.
Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin
2018-04-20
We introduce an alternative distribution to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
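The modulation process described above can be sketched numerically: if the small-scale fluctuations follow a unit-mean gamma distribution and the large-scale fluctuations a unit-mean inverse Gaussian distribution, their product is IGG-distributed irradiance. The shape parameters below are arbitrary illustrations, not values fitted to atmospheric data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# Hypothetical shape parameters for illustration (not fitted to measurements):
alpha = 4.0          # gamma shape for small-scale fluctuations (mean fixed to 1)
mu, lam = 1.0, 6.0   # inverse Gaussian mean and shape for large-scale fluctuations

small = rng.gamma(alpha, 1.0 / alpha, n)   # E[small] = 1
large = rng.wald(mu, lam, n)               # E[large] = mu = 1
irradiance = small * large                 # the modulation process: I = X * Y

# Scintillation index sigma_I^2 = Var(I)/E[I]^2. Since mu = 1 here, E[I] = 1 and
# by independence E[I^2] = E[X^2] E[Y^2] = (1 + mu/lam)(1 + 1/alpha), so:
si2_theory = (1 + mu / lam) * (1 + 1 / alpha) - 1
si2_sample = irradiance.var() / irradiance.mean() ** 2
print(round(si2_theory, 3), round(si2_sample, 3))
```

The sampled scintillation index matches the closed-form moment calculation, which is the basic consistency check for the product construction.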
GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith
2014-08-25
Large-scale graph processing is a major research area for Big Data exploration. Vertex-centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex-centric algorithms to under-perform due to a poor compute-to-communication ratio and slow convergence across iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph-centric framework co-designed with a distributed persistent graph storage for large-scale graph analytics on commodity clusters. We introduce a sub-graph-centric programming abstraction that combines the scalability of a vertex-centric approach with the flexibility of shared-memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real-world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open-source vertex-centric implementation.
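The sub-graph-centric idea, that each partition does as much work as possible locally before exchanging labels across partition boundaries, can be sketched for Connected Components. This is a simplified single-machine illustration of the pattern, not GoFFish's actual API; all names are hypothetical.

```python
def local_components(nodes, edges):
    """Union-find connected components inside one partition (one 'sub-graph')."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return {n: find(n) for n in nodes}

def subgraph_centric_cc(partitions, cut_edges):
    """partitions: list of (nodes, internal_edges); cut_edges span partitions."""
    label = {}
    for nodes, edges in partitions:          # step 1: purely local work
        label.update(local_components(nodes, edges))
    # step 2: reconcile component labels across partition boundaries --
    # the only step that would require communication in a distributed run.
    merged = {l: l for l in set(label.values())}
    def find(x):
        while merged[x] != x:
            merged[x] = merged[merged[x]]
            x = merged[x]
        return x
    for a, b in cut_edges:
        merged[find(label[a])] = find(label[b])
    return {n: find(l) for n, l in label.items()}

parts = [({0, 1, 2}, [(0, 1)]), ({3, 4}, [(3, 4)])]
cc = subgraph_centric_cc(parts, cut_edges=[(2, 3)])
print(cc[2] == cc[3], cc[0] == cc[1], cc[0] != cc[2])
```

The contrast with a vertex-centric formulation is that an entire partition's components are resolved in one local pass rather than over many label-propagation supersteps.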
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation outputs that pose significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data from multiple climate model simulations, together with scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets in the context of a large-scale international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC), and one more site hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall, it provides a new "tool" for climate scientists to run multi-model experiments.
At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Time-sliced perturbation theory for large scale structure I: general formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blas, Diego; Garny, Mathias; Sibiryakov, Sergey
2016-07-01
We present a new analytic approach to describe large scale structure formation in the mildly non-linear regime. The central object of the method is the time-dependent probability distribution function generating correlators of the cosmological observables at a given moment of time. Expanding the distribution function around the Gaussian weight we formulate a perturbative technique to calculate non-linear corrections to cosmological correlators, similar to the diagrammatic expansion in a three-dimensional Euclidean quantum field theory, with time playing the role of an external parameter. For the physically relevant case of cold dark matter in an Einstein-de Sitter universe, the time evolution of the distribution function can be found exactly and is encapsulated by a time-dependent coupling constant controlling the perturbative expansion. We show that all building blocks of the expansion are free from spurious infrared enhanced contributions that plague the standard cosmological perturbation theory. This paves the way towards the systematic resummation of infrared effects in large scale structure formation. We also argue that the approach proposed here provides a natural framework to account for the influence of short-scale dynamics on larger scales along the lines of effective field theory.
NASA Astrophysics Data System (ADS)
Dinzi, R.; Hamonangan, TS; Fahmi, F.
2018-02-01
In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations. The use of a 220/380 V network is nowadays less common compared to a 20 kV network. The conventional arrangement incurs losses due to a non-optimally placed distribution transformer that neglects load location, a poor consumer voltage profile, and large power losses along the carrier. This paper discusses how a high voltage distribution system (HVDS) can be a better choice for distribution networks than the currently used low voltage distribution system (LVDS). The proposed change to the new configuration is made by replacing a large-capacity distribution transformer with several smaller-capacity distribution transformers installed as close as possible to the loads. The use of a high voltage distribution system results in better voltage profiles and lower power losses. On the non-technical side, the annual savings and payback period of a high voltage distribution system are also an advantage.
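The loss argument for HVDS comes down to the fact that, for a fixed delivered power and conductor, line current scales as 1/V and the I²R loss as 1/V². A rough sketch with assumed load and conductor values (illustrative numbers, not field data from this paper):

```python
import math

# Deliver the same three-phase power over the same conductor at the
# LVDS (380 V) versus HVDS (20 kV) feeder voltage. All values assumed.
P = 100e3        # delivered power, W
pf = 0.9         # power factor
R = 0.5          # conductor resistance per phase, ohms

def line_loss(v_line):
    i = P / (math.sqrt(3) * v_line * pf)   # line current, A
    return 3 * i**2 * R                    # total I^2 R loss over 3 phases, W

loss_lv = line_loss(380.0)
loss_hv = line_loss(20e3)
print(f"LVDS loss: {loss_lv/1e3:.1f} kW, HVDS loss: {loss_hv:.1f} W")
print(round(loss_lv / loss_hv))   # the ratio equals (20000/380)^2, about 2770
```

The ratio depends only on the voltage squared, which is why moving the 20 kV network closer to the load dominates any realistic change in conductor sizing.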
A Performance Comparison of Tree and Ring Topologies in Distributed System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Min
A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high-performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system, so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle dynamic increases in system size, detect and recover from unexpected failures of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experimental results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
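The worst-case message path length differs sharply between the two topologies: roughly n/2 hops on a bidirectional ring versus about 2·log_k(n) in a balanced k-ary tree (up to the root and back down). A small sketch of that comparison, not code from the thesis:

```python
import math

def ring_max_hops(n, bidirectional=True):
    """Worst-case message hops between two nodes on a ring of n nodes."""
    return n // 2 if bidirectional else n - 1

def tree_max_hops(n, k=2):
    """Worst-case hops in a balanced k-ary tree of n nodes:
    leaf -> root -> leaf, i.e. twice the tree depth."""
    depth = math.ceil(math.log(n * (k - 1) + 1, k)) - 1
    return 2 * depth

for n in (16, 1024, 65536):
    print(n, ring_max_hops(n), tree_max_hops(n))
```

The logarithmic growth of the tree explains why tree-structured management scales to large node counts, while the ring's linear worst case is offset by its simpler membership and failure-detection behavior.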
NASA Technical Reports Server (NTRS)
Ramella, Massimo; Geller, Margaret J.; Huchra, John P.
1990-01-01
The large-scale distribution of groups of galaxies selected from complete slices of the CfA redshift survey extension is examined. The survey is used to reexamine the contribution of group members to the galaxy correlation function. The relationship between the correlation function for groups and those calculated for rich clusters is discussed, and the results for groups are examined as an extension of the relation between correlation function amplitude and richness. The group correlation function indicates that groups and individual galaxies are equivalent tracers of the large-scale matter distribution. The distribution of group centers is equivalent to random sampling of the galaxy distribution. The amplitude of the correlation function for groups is consistent with an extrapolation of the amplitude-richness relation for clusters. The amplitude scaled by the mean intersystem separation is also consistent with results for richer clusters.
States of mind: emotions, body feelings, and thoughts share distributed neural networks.
Oosterwijk, Suzanne; Lindquist, Kristen A; Anderson, Eric; Dautoff, Rebecca; Moriguchi, Yoshiya; Barrett, Lisa Feldman
2012-09-01
Scientists have traditionally assumed that different kinds of mental states (e.g., fear, disgust, love, memory, planning, concentration, etc.) correspond to different psychological faculties that have domain-specific correlates in the brain. Yet, growing evidence points to the constructionist hypothesis that mental states emerge from the combination of domain-general psychological processes that map to large-scale distributed brain networks. In this paper, we report a novel study testing a constructionist model of the mind in which participants generated three kinds of mental states (emotions, body feelings, or thoughts) while we measured activity within large-scale distributed brain networks using fMRI. We examined the similarity and differences in the pattern of network activity across these three classes of mental states. Consistent with a constructionist hypothesis, a combination of large-scale distributed networks contributed to emotions, thoughts, and body feelings, although these mental states differed in the relative contribution of those networks. Implications for a constructionist functional architecture of diverse mental states are discussed. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III; Weinberg, David H.; Melott, Adrian L.
1987-01-01
A quantitative measure of the topology of large-scale structure: the genus of density contours in a smoothed density distribution, is described and applied. For random phase (Gaussian) density fields, the mean genus per unit volume exhibits a universal dependence on threshold density, with a normalizing factor that can be calculated from the power spectrum. If large-scale structure formed from the gravitational instability of small-amplitude density fluctuations, the topology observed today on suitable scales should follow the topology in the initial conditions. The technique is illustrated by applying it to simulations of galaxy clustering in a flat universe dominated by cold dark matter. The technique is also applied to a volume-limited sample of the CfA redshift survey and to a model in which galaxies reside on the surfaces of polyhedral 'bubbles'. The topology of the evolved mass distribution and 'biased' galaxy distribution in the cold dark matter models closely matches the topology of the density fluctuations in the initial conditions. The topology of the observational sample is consistent with the random phase, cold dark matter model.
NASA Astrophysics Data System (ADS)
Yang, Liping; Zhang, Lei; He, Jiansen; Tu, Chuanyi; Li, Shengtai; Wang, Xin; Wang, Linghua
2018-03-01
Multi-order structure functions in the solar wind are reported to display a monofractal scaling when sampled parallel to the local magnetic field and a multifractal scaling when measured perpendicularly. Whether and to what extent will the scaling anisotropy be weakened by the enhancement of turbulence amplitude relative to the background magnetic strength? In this study, based on two runs of the magnetohydrodynamic (MHD) turbulence simulation with different relative levels of turbulence amplitude, we investigate and compare the scaling of multi-order magnetic structure functions and magnetic probability distribution functions (PDFs) as well as their dependence on the direction of the local field. The numerical results show that for the case of large-amplitude MHD turbulence, the multi-order structure functions display a multifractal scaling at all angles to the local magnetic field, with PDFs deviating significantly from the Gaussian distribution and a flatness larger than 3 at all angles. In contrast, for the case of small-amplitude MHD turbulence, the multi-order structure functions and PDFs have different features in the quasi-parallel and quasi-perpendicular directions: a monofractal scaling and Gaussian-like distribution in the former, and a conversion of a monofractal scaling and Gaussian-like distribution into a multifractal scaling and non-Gaussian tail distribution in the latter. These results hint that when intermittencies are abundant and intense, the multifractal scaling in the structure functions can appear even if it is in the quasi-parallel direction; otherwise, the monofractal scaling in the structure functions remains even if it is in the quasi-perpendicular direction.
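The multi-order structure functions S_p(ℓ) = ⟨|B(x+ℓ) − B(x)|^p⟩ used above can be computed directly from a sampled field. The sketch below applies them to a synthetic Brownian-like 1-D signal, whose exponents ζ(p) = p/2 are linear in p (monofractal); a multifractal field would instead show ζ(p) bending away from a straight line. The signal is illustrative, not simulation data from this study.

```python
import numpy as np

rng = np.random.default_rng(2)

def structure_function(b, lag, p):
    """S_p(lag) = < |b(x + lag) - b(x)|^p > for a sampled 1-D field b."""
    d = np.abs(b[lag:] - b[:-lag])
    return np.mean(d ** p)

# A Brownian-like signal (white-noise increments) is monofractal:
# S_p(l) ∝ l^(p/2), so the scaling exponent zeta(p) = p/2 is linear in p.
b = np.cumsum(rng.standard_normal(2**20))
lags = np.array([2, 4, 8, 16, 32, 64, 128])

zetas = []
for p in (1, 2, 3, 4):
    sp = [structure_function(b, lag, p) for lag in lags]
    zeta, _ = np.polyfit(np.log(lags), np.log(sp), 1)   # log-log slope
    zetas.append(zeta)
print([round(z, 2) for z in zetas])
```

In the study's terms, plotting ζ(p) against p for field-parallel and field-perpendicular lags is what distinguishes the monofractal from the multifractal regimes.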
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
The cosmological principle is not in the sky
NASA Astrophysics Data System (ADS)
Park, Chan-Gyung; Hyun, Hwasu; Noh, Hyerim; Hwang, Jai-chan
2017-08-01
The homogeneity of matter distribution at large scales, known as the cosmological principle, is a central assumption in the standard cosmological model. This assumption is testable, however, and thus no longer needs to remain a principle. Here we perform a test for spatial homogeneity using the Sloan Digital Sky Survey Luminous Red Galaxies (LRG) sample by counting galaxies within a specified volume with the radius scale varying up to 300 h-1 Mpc. We directly confront the large-scale structure data with the definition of spatial homogeneity by comparing the averages and dispersions of galaxy number counts with the ranges allowed for a homogeneous random distribution. The LRG sample shows significantly larger dispersions of number counts than the random catalogues up to the 300 h-1 Mpc scale, and even the average lies far outside the range allowed in the random distribution; the deviations are statistically impossible to realize in the random distribution. This implies that the cosmological principle does not hold even at such large scales. The same analysis of mock galaxies derived from the N-body simulation, however, suggests that the LRG sample is consistent with the current paradigm of cosmology, and thus the simulation is also not homogeneous on that scale. We conclude that the cosmological principle is neither in the observed sky nor demanded to be there by the standard cosmological world model. This reveals the nature of the cosmological principle adopted in the modern cosmology paradigm, and opens a new field of research in theoretical cosmology.
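The counts-in-cells test used here compares the dispersion of galaxy counts in randomly placed volumes against the value expected for a homogeneous random distribution (variance ≈ mean). A toy 3-D version with synthetic uniform and clustered point sets, all parameters illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def count_dispersion(points, box, radius, n_spheres=2000):
    """Mean and variance of point counts in randomly placed spheres
    (sphere centres are kept away from the walls to avoid edge effects)."""
    centers = rng.uniform(radius, box - radius, size=(n_spheres, 3))
    counts = np.empty(n_spheres)
    for i, c in enumerate(centers):
        counts[i] = np.sum(np.sum((points - c) ** 2, axis=1) < radius**2)
    return counts.mean(), counts.var()

box, n = 100.0, 20_000
uniform = rng.uniform(0, box, size=(n, 3))

# A toy clustered catalogue: points scattered around a few hundred "halo" centres.
halos = rng.uniform(0, box, size=(400, 3))
clustered = (halos[rng.integers(0, 400, n)] + rng.normal(0, 1.5, (n, 3))) % box

ratios = {}
for name, pts in (("uniform", uniform), ("clustered", clustered)):
    mean, var = count_dispersion(pts, box, radius=8.0)
    ratios[name] = var / mean   # ≈ 1 for a homogeneous (Poisson) distribution
    print(name, round(ratios[name], 2))
```

The uniform catalogue sits near the Poisson value while the clustered one is strongly inflated, which is the qualitative signature the LRG analysis looks for at much larger scales.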
Large-scale-system effectiveness analysis. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patton, A.D.; Ayoub, A.K.; Foster, J.W.
1979-11-01
The objective of the research project has been the investigation and development of methods for calculating system reliability indices which have absolute, measurable significance to consumers. Such indices are a necessary prerequisite to any scheme for system optimization which includes the economic consequences of consumer service interruptions. A further area of investigation has been the joint consideration of generation and transmission in reliability studies. Methods for finding or estimating the probability distributions of some measures of reliability performance have been developed. The application of modern Monte Carlo simulation methods to compute reliability indices in generating systems has been studied.
Application of superconducting technology to earth-to-orbit electromagnetic launch systems
NASA Technical Reports Server (NTRS)
Hull, J. R.; Carney, L. M.
1988-01-01
Benefits may occur by incorporating superconductors, both existing and those currently under development, in one or more parts of a large-scale electromagnetic launch (EML) system that is capable of delivering payloads from the surface of the Earth to space. The use of superconductors for many of the EML components results in lower system losses; consequently, reductions in the size and number of energy storage devices are possible. Applied high-temperature superconductivity may eventually enable novel design concepts for energy distribution and switching. All of these technical improvements have the potential to reduce system complexity and lower payload launch costs.
Network Theory: A Primer and Questions for Air Transportation Systems Applications
NASA Technical Reports Server (NTRS)
Holmes, Bruce J.
2004-01-01
A new understanding (with potential applications to air transportation systems) has emerged in the past five years in the scientific field of networks. This development emerges in large part because we now have a new laboratory for developing theories about complex networks: the Internet. The premise of this new understanding is that most complex networks of interest, both of nature and of human contrivance, exhibit a fundamentally different behavior than thought for over two hundred years under classical graph theory. Classical theory held that networks exhibited random behavior, characterized by normal (e.g., Gaussian or Poisson) degree distributions of the connectivity between nodes by links. The new understanding turns this idea on its head: networks of interest exhibit scale-free (or small world) degree distributions of connectivity, characterized by power law distributions. The implications of scale-free behavior for air transportation systems include the potential that some behaviors of complex system architectures might be analyzed through relatively simple approximations of local elements of the system. For air transportation applications, this presentation proposes a framework for constructing topologies (architectures) that represent the relationships between mobility, flight operations, aircraft requirements, and airspace capacity, and the related externalities in airspace procedures and architectures. The proposed architectures or topologies may serve as a framework for posing comparative and combinative analyses of performance, cost, security, environmental, and related metrics.
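The contrast between the two regimes is easy to demonstrate: degree-proportional (preferential attachment) growth produces a heavy, power-law degree tail, while a classical random graph of the same average degree has a tightly concentrated, near-Poisson degree distribution. A small sketch with illustrative sizes, comparing the maximum degree each model produces:

```python
import random

random.seed(4)

def preferential_attachment_degrees(n, m=2):
    """Barabasi-Albert-style growth: each new node links to m existing nodes
    chosen with probability proportional to their current degree."""
    targets = [0, 1]                 # multiset of edge endpoints
    degrees = {0: 1, 1: 1}
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, len(degrees)):
            chosen.add(random.choice(targets))   # degree-proportional pick
        degrees[new] = 0
        for t in chosen:
            degrees[t] += 1
            degrees[new] += 1
            targets += [t, new]
    return degrees

def random_graph_degrees(n, p):
    """Erdos-Renyi G(n, p): binomial (near-Poisson) degree distribution."""
    degrees = {i: 0 for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                degrees[i] += 1
                degrees[j] += 1
    return degrees

n = 2000
ba = preferential_attachment_degrees(n)
er = random_graph_degrees(n, p=4 / n)   # matched average degree, ~4
print("max degree, scale-free graph:", max(ba.values()))
print("max degree, random graph:   ", max(er.values()))
```

The scale-free graph's hubs (its maximum degree grows roughly as the square root of n) are exactly the feature a normal degree distribution cannot produce, and they are what make hub airports analytically interesting.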
Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A
2014-01-28
A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.
Climate mitigation is not the only benefit of a national energy system
NASA Astrophysics Data System (ADS)
Clack, C.
2016-12-01
Many speculate that the main driving force for a continental-scale energy system is climate mitigation. While this is a strong driver, multiple co-benefits emerge from such a transition even when it is driven purely by costs. These components could be managed within a planned system to provide a close-to-optimal solution that enhances the probability of realization. It is shown that these co-benefits of a continental-scale electric system occur at costs lower than existing ones, meaning there are multiple additional savings without extra cost or effort. The disadvantage is the coordination required between large geographic regions, which could add complexity to planning. The main finding from different versions of the NEWS simulator is that carbon mitigation is enhanced by larger systems. In addition, there are increased jobs, reduced water consumption, reduced sulphur dioxide and nitrogen oxide emissions, a more distributed electric system, and a lower cost of electricity.
Distributed XQuery-Based Integration and Visualization of Multimodality Brain Mapping Data
Detwiler, Landon T.; Suciu, Dan; Franklin, Joshua D.; Moore, Eider B.; Poliakov, Andrew V.; Lee, Eunjung S.; Corina, David P.; Ojemann, George A.; Brinkley, James F.
2008-01-01
This paper addresses the need for relatively small groups of collaborating investigators to integrate distributed and heterogeneous data about the brain. Although various national efforts facilitate large-scale data sharing, these approaches are generally too “heavyweight” for individual or small groups of investigators, with the result that most data sharing among collaborators continues to be ad hoc. Our approach to this problem is to create a “lightweight” distributed query architecture, in which data sources are accessible via web services that accept arbitrary query languages but return XML results. A Distributed XQuery Processor (DXQP) accepts distributed XQueries in which subqueries are shipped to the remote data sources to be executed, with the resulting XML integrated by DXQP. A web-based application called DXBrain accesses DXQP, allowing a user to create, save and execute distributed XQueries, and to view the results in various formats including a 3-D brain visualization. Example results are presented using distributed brain mapping data sources obtained in studies of language organization in the brain, but any other XML source could be included. The advantage of this approach is that it is very easy to add and query a new source, the tradeoff being that the user needs to understand XQuery and the schemata of the underlying sources. For small numbers of known sources this burden is not onerous for a knowledgeable user, leading to the conclusion that the system helps to fill the gap between ad hoc local methods and large scale but complex national data sharing efforts. PMID:19198662
Distributed design approach in persistent identifiers systems
NASA Astrophysics Data System (ADS)
Golodoniuc, Pavel; Car, Nicholas; Klump, Jens
2017-04-01
The need to identify both digital and physical objects is ubiquitous in our society. Past and present persistent identifier (PID) systems, of which there is a great variety in terms of technical and social implementations, have evolved with the advent of the Internet, which has allowed for globally unique and globally resolvable identifiers. PID systems have catered for identifier uniqueness, integrity, persistence, and trustworthiness, regardless of the identifier's application domain, the scope of which has expanded significantly in the past two decades. Since many PID systems have been largely conceived and developed by small communities, or even a single organisation, they have faced challenges in gaining widespread adoption and, most importantly, the ability to survive change of technology. This has left a legacy of identifiers that still exist and are being used but which have lost their resolution service. We believe that one of the causes of once successful PID systems fading is their reliance on a centralised technical infrastructure or a governing authority. Golodoniuc et al. (2016) proposed an approach to the development of PID systems that combines the use of (a) the Handle system, as a distributed system for the registration and first-degree resolution of persistent identifiers, and (b) the PID Service (Golodoniuc et al., 2015), to enable fine-grained resolution to different information object representations. The proposed approach solved the problem of guaranteed first-degree resolution of identifiers, but left fine-grained resolution and information delivery under the control of a single authoritative source, posing risk to the long-term availability of information resources. Herein, we develop these approaches further and explore the potential of large-scale decentralisation at all levels: (i) persistent identifiers and information resources registration; (ii) identifier resolution; and (iii) data delivery. 
To achieve large-scale decentralisation, we propose using Distributed Hash Tables (DHT), Peer Exchange networks (PEX), Magnet Links, and peer-to-peer (P2P) file sharing networks - the technologies that enable applications such as BitTorrent (Wu et al., 2010). The proposed approach introduces reliable information replication and caching mechanisms, eliminating the need for a central PID data store, and increases overall system fault tolerance due to the lack of a single point of failure. The proposed PID system's design aims to ensure trustworthiness of the system and incorporates important aspects of governance, such as the notion of the authoritative source, data integrity, caching, and data replication control.
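The register-and-resolve cycle over a DHT that the abstract describes can be illustrated with a toy replicated key-value store. This is a minimal single-process sketch only, not the proposed system: it mimics Kademlia-style XOR-distance replica placement (the scheme behind BitTorrent's DHT), and all node names, identifiers, and URLs are hypothetical.

```python
import hashlib

def _h(key: str) -> int:
    # 160-bit SHA-1 keyspace, as in BitTorrent's Kademlia-based DHT
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class ToyDHT:
    """Toy distributed hash table: each PID record is replicated on the
    k nodes whose IDs are closest (by XOR distance) to the key, so
    resolution survives the loss of any single node."""

    def __init__(self, node_ids, k=3):
        self.k = k
        self.nodes = {nid: {} for nid in node_ids}  # node id -> local store

    def _closest(self, key):
        kid = _h(key)
        return sorted(self.nodes, key=lambda nid: _h(nid) ^ kid)[:self.k]

    def put(self, pid, locations):
        # register the identifier on every replica node
        for nid in self._closest(pid):
            self.nodes[nid][pid] = list(locations)

    def resolve(self, pid):
        # first-degree resolution: ask the replica nodes in turn
        for nid in self._closest(pid):
            if pid in self.nodes[nid]:
                return self.nodes[nid][pid]
        return None  # identifier never registered

dht = ToyDHT([f"node-{i}" for i in range(8)])
dht.put("pid/ABC123", ["https://mirror-a.example/ABC123",
                       "https://mirror-b.example/ABC123"])
print(dht.resolve("pid/ABC123"))  # both mirrors, from any surviving replica
```

Because the record lives on several nodes, deleting any one replica leaves the identifier resolvable, which is the fault-tolerance property the abstract attributes to decentralisation.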
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Launch processing system transition from development to operation
NASA Technical Reports Server (NTRS)
Paul, H. C.
1977-01-01
The Launch Processing System has been under development at Kennedy Space Center since 1973. A prototype system was developed and delivered to Marshall Space Flight Center for Solid Rocket Booster checkout in July 1976. The first production hardware arrived in late 1976. The system uses a distributed computer network for command and monitoring and is supported by a dual large-scale computer system for 'off-line' processing. A high level of automation is anticipated for Shuttle and payload testing and launch operations to gain the advantages of short turnaround capability, repeatability of operations, and minimization of operations and maintenance (O&M) manpower. Learning how to apply the system efficiently is our current problem. We are searching for more effective ways to convey LPS performance characteristics from the designer to a large number of users. Once we have done this, we can realize the advantages of the LPS design.
Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.
2017-01-01
Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we used 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
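The modeling approach here (competing predictor sets compared by model selection) can be sketched on synthetic data. This is a simplified Gaussian least-squares stand-in for the paper's generalized linear models: the predictor names, coefficients, and noise level below are illustrative assumptions, not the study's data; only the sample size (129 density estimates) echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 129  # the study used 129 density estimates; values here are synthetic

# hypothetical predictors: one abiotic, two biotic
precip = rng.uniform(200, 2000, n)            # annual precipitation (mm)
agric = rng.uniform(0.0, 1.0, n)              # fraction agricultural land
carniv = rng.integers(0, 6, n).astype(float)  # large-carnivore richness
log_dens = (0.001 * precip + 1.5 * agric - 0.2 * carniv
            + rng.normal(0.0, 0.3, n))        # synthetic log density

def aic(X, y):
    """AIC for an ordinary least-squares fit (Gaussian likelihood)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * X.shape[1]

ones = np.ones(n)
abiotic_only = np.column_stack([ones, precip])
full = np.column_stack([ones, precip, agric, carniv])

# adding the biotic terms should lower (improve) the AIC
print(aic(full, log_dens) < aic(abiotic_only, log_dens))  # → True
```

The lower AIC of the full model mirrors the paper's finding that incorporating biotic factors substantially improves model fit over abiotic predictors alone.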
Bro-Jørgensen, Jakob; Brown, Molly E; Pettorelli, Nathalie
2008-11-01
Lek-breeding species are characterized by a negative association between territorial resource availability and male mating success; however, the impact of resources on the overall distribution patterns of the two sexes in lek systems is not clear. The normalized difference vegetation index (NDVI) has recently emerged as a powerful proxy measure for primary productivity, allowing the links between the distributions of animals and resources to be explored. Using NDVI at four spatial resolutions, we here investigate how the distribution of the two sexes in a lek-breeding population of topi antelopes relates to resource abundance before and during the rut. We found that in the dry season preceding the rut, topi density correlated positively with NDVI at the large, but not the fine, scale. This suggests that before the rut, when resources were relatively scant, topi preferred pastures where green grass was widely abundant. The pattern was less pronounced in males, suggesting that the need for territorial attendance prevents males from tracking resources as freely as females do. During the rut, which occurs in the wet season, both male and female densities correlated negatively with NDVI at the fine scale. At this time, resources were generally plentiful and the results suggest that, rather than by resource maximization, distribution during the rut was determined by benefits of aggregating on relatively resource-poor leks for mating, and possibly antipredator, purposes. At the large scale, no correlation between density and NDVI was found during the rut in either sex, which can be explained by leks covering areas too small to be reflected at this resolution. The study illustrates that when investigating spatial organization, it is important: (1) to choose the appropriate analytic scale, and (2) to consider behavioural as well as strictly ecological factors.
NASA Astrophysics Data System (ADS)
Eltahir, E. A. B.; IM, E. S.
2014-12-01
This study investigates the impact of potential large-scale (about 400,000 km2) and medium-scale (about 60,000 km2) irrigation on the climate of West Africa using the MIT Regional Climate Model. A new irrigation module is implemented to assess the impact of location and scheduling of irrigation on rainfall distribution over West Africa. A control simulation (without irrigation) and various sensitivity experiments (with irrigation) are performed and compared to discern the effects of irrigation location, size and scheduling. In general, the irrigation-induced surface cooling due to anomalously wet soil tends to suppress moist convection and rainfall, which in turn induces local subsidence and low level anti-cyclonic circulation. These local effects are dominated by a consistent reduction of local rainfall over the irrigated land, irrespective of its location. However, the remote response of rainfall distribution to irrigation exhibits a significant sensitivity to the latitudinal position of irrigation. The low-level northeasterly flow associated with anti-cyclonic circulation centered over the irrigation area can enhance the extent of low level convergence through interaction with the prevailing monsoon flow, leading to significant increase in rainfall. Despite much reduced forcing of irrigation water, the medium-scale irrigation seems to draw the same response as large-scale irrigation, which supports the robustness of the response to irrigation in our modeling system. Both large-scale and medium-scale irrigation experiments show that an optimal irrigation location and scheduling exists that would lead to a more efficient use of irrigation water. The approach of using a regional climate model to investigate the impact of location and size of irrigation schemes may be the first step in incorporating land-atmosphere interactions in the design of location and size of irrigation projects. 
However, this theoretical approach is still in early stages of development and further research is needed before any practical application in water resources planning. Acknowledgements: This research was supported by the National Research Foundation Singapore through the Singapore MIT Alliance for Research and Technology's Center for Environmental Sensing and Modeling interdisciplinary research program.
Aquifer Vulnerability Assessment Based on Sequence Stratigraphic and ³⁹Ar Transport Modeling.
Sonnenborg, Torben O; Scharling, Peter B; Hinsby, Klaus; Rasmussen, Erik S; Engesgaard, Peter
2016-03-01
A large-scale groundwater flow and transport model is developed for a deep-seated (100 to 300 m below ground surface) sedimentary aquifer system. The model is based on a three-dimensional (3D) hydrostratigraphic model, building on a sequence stratigraphic approach. The flow model is calibrated against observations of hydraulic head and stream discharge, while the credibility of the transport model is evaluated against measurements of ³⁹Ar from deep wells using alternative parameterizations of dispersivity and effective porosity. The directly simulated 3D mean age distributions and vertical fluxes are used to visualize the two-dimensional (2D)/3D age and flux distribution along transects and at the top plane of individual aquifers. The simulation results are used to assess the vulnerability of the aquifer system, which has generally been assumed to be protected by thick overlying clayey units and has therefore been proposed as a future reservoir for drinking water supply. The results indicate that on a regional scale these deep-seated aquifers are not as protected from modern surface water contamination as expected, because significant leakage to the deeper aquifers occurs. The complex distribution of local and intermediate groundwater flow systems, controlled by the distribution of the river network as well as the topographical variation (Tóth 1963), provides the possibility for modern water to be found in even the deepest aquifers. © 2015, National Ground Water Association.
Schreier, Amy L; Grove, Matt
2014-05-01
The benefits of spatial memory for foraging animals can be assessed on two distinct spatial scales: small-scale space (travel within patches) and large-scale space (travel between patches). While the patches themselves may be distributed at low density, within patches resources are likely densely distributed. We propose, therefore, that spatial memory for recalling the particular locations of previously visited feeding sites will be more advantageous during between-patch movement, where it may reduce the distances traveled by animals that possess this ability compared to those that must rely on random search. We address this hypothesis by employing descriptive statistics and spectral analyses to characterize the daily foraging routes of a band of wild hamadryas baboons in Filoha, Ethiopia. The baboons slept on two main cliffs--the Filoha cliff and the Wasaro cliff--and daily travel began and ended on a cliff; thus four daily travel routes exist: Filoha-Filoha, Filoha-Wasaro, Wasaro-Wasaro, Wasaro-Filoha. We use newly developed partial sum methods and distribution-fitting analyses to distinguish periods of area-restricted search from more extensive movements. The results indicate a single peak in travel activity in the Filoha-Filoha and Wasaro-Filoha routes, three peaks of travel activity in the Filoha-Wasaro routes, and two peaks in the Wasaro-Wasaro routes; and are consistent with on-the-ground observations of foraging and ranging behavior of the baboons. In each of the four daily travel routes the "tipping points" identified by the partial sum analyses indicate transitions between travel in small- versus large-scale space. The correspondence between the quantitative analyses and the field observations suggests great utility for using these types of analyses to examine primate travel patterns, especially in distinguishing between movement in small- versus large-scale space.
Only the distribution-fitting analyses are inconsistent with the field observations, which may be due to the scale at which these analyses were conducted. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun
2004-04-01
This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classifications. In addition, distributed training architectures are 50 times faster than standard iterative training methods.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate joint movements such as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
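The torque-distribution constraint at the heart of this problem can be illustrated with the simplest static case: minimize a weighted quadratic effort subject to the actuator torques summing to the desired joint torque. This is a hedged sketch of the allocation idea only, not the authors' singular-perturbation controller; the effort weights below are hypothetical.

```python
import numpy as np

def distribute_torque(tau, weights):
    """Split a desired joint torque across actuators by minimizing the
    weighted effort sum_i w_i * u_i**2 subject to sum_i u_i = tau.
    The Lagrange conditions give u_i proportional to 1 / w_i."""
    inv = 1.0 / np.asarray(weights, dtype=float)
    return tau * inv / inv.sum()

# hypothetical per-actuator effort costs: loading the electric motor is
# assumed 4x as costly as loading the pneumatic actuator
u_electric, u_pneumatic = distribute_torque(10.0, [4.0, 1.0])
print(u_electric, u_pneumatic)   # → 2.0 8.0  (cheap actuator carries more)
print(u_electric + u_pneumatic)  # → 10.0  (sum reconstructs desired torque)
```

The closed form shows why the cheaper actuator absorbs most of the load; the paper's contribution is deriving such a strategy online for actuators with genuinely different (fast/slow) dynamics, which this static sketch deliberately ignores.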
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel; ...
2017-07-24
Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs), including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.
The mosaic structure of plasma bulk flows in the Earth's magnetotail
NASA Technical Reports Server (NTRS)
Ashour-Abdalla, M.; Richard, R. L.; Zelenyi, L. M.; Peroomian, V.; Bosqued, J. M.
1995-01-01
Moments of plasma distributions observed in the magnetotail vary with different time scales. In this paper we attempt to explain the observed variability on intermediate timescales of approximately 10-20 min that results from the simultaneous energization and spatial structuring of solar wind plasma in the distant magnetotail. These processes stimulate the formation of a system of spatially disjointed, highly accelerated filaments (beamlets) in the tail. We use the results from large-scale kinetic modeling of magnetotail formation from a plasma mantle source to calculate moments of ion distribution functions throughout the tail. Statistical restrictions related to the limited number of particles in our system naturally reduce the spatial resolution of our results, but we show that our model is valid on intermediate spatial scales Δx × Δz ≈ 1 R_E × 1000 km. For these spatial scales the resulting pattern, which resembles a mosaic, appears to be quite variable. The complexity of the pattern is related to the spatial interference between beamlets accelerated at various locations within the distant tail, which mirror in the strong near-Earth magnetic field. Global motion of the magnetotail results in the displacement of spacecraft with respect to this mosaic pattern and can produce variations in all of the moments (especially the x-component of the bulk velocity) on intermediate timescales. The results obtained enable us to view the magnetotail plasma as consisting of two different populations: a tailward-Earthward system of highly accelerated beamlets interfering with each other, and an energized quasithermal population which gradually builds as the Earth is approached. In the near-Earth tail, these populations merge into a hot quasi-isotropic ion population typical of the near-Earth plasma sheet.
The transformation of plasma sheet boundary layer (PSBL) beam energy into central plasma sheet (CPS) quasi-thermal energy occurs in the absence of collisions or noise. This paper also clarifies the relationship between the global scale where an MHD description might be appropriate and the lower intermediate scales where MHD fails and large-scale kinetic theory should be used.
Study of Solid State Drives performance in PROOF distributed analysis system
NASA Astrophysics Data System (ADS)
Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.
2010-04-01
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system that exploits the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem's I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.
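The access pattern that motivates SSDs here, many jobs issuing concurrent random reads against one drive, can be reproduced with a small benchmark sketch. This is an illustrative micro-benchmark under assumed parameters, not the PROOF measurement setup; absolute numbers depend entirely on the drive, filesystem, and OS page cache.

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

def random_read_throughput(path, n_jobs=8, reads_per_job=200, block=64 * 1024):
    """Aggregate MB/s when n_jobs workers issue random reads against one
    file -- the concurrent access pattern of a PROOF analysis farm."""
    span = os.path.getsize(path) - block
    offsets = [int.from_bytes(os.urandom(4), "little") % span
               for _ in range(n_jobs * reads_per_job)]

    def worker(offs):
        with open(path, "rb") as f:  # each simulated "job" gets its own handle
            for off in offs:
                f.seek(off)
                f.read(block)

    start = time.perf_counter()
    with ThreadPoolExecutor(n_jobs) as pool:
        list(pool.map(worker, [offsets[i::n_jobs] for i in range(n_jobs)]))
    elapsed = time.perf_counter() - start
    return n_jobs * reads_per_job * block / elapsed / 2**20

# tiny 16 MiB file for illustration; a real comparison needs data far
# larger than RAM so the page cache cannot hide the drive's seek latency
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(16 * 2**20))
print(f"{random_read_throughput(f.name):.1f} MB/s")
os.unlink(f.name)
```

Run against an HDD-backed and an SSD-backed path with a suitably large file, the gap in reported throughput grows with `n_jobs`, which is the scaling behavior the abstract sets out to measure.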
Enabling Diverse Software Stacks on Supercomputers using High Performance Virtual Clusters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Younge, Andrew J.; Pedretti, Kevin; Grant, Ryan
While large-scale simulations have been the hallmark of the High Performance Computing (HPC) community for decades, Large Scale Data Analytics (LSDA) workloads are gaining attention within the scientific community not only as a processing component of large HPC simulations, but also as standalone scientific tools for knowledge discovery. With the path towards Exascale, new HPC runtime systems are also emerging in ways that differ from classical distributed computing models. However, system software for such capabilities on the latest extreme-scale DOE supercomputers needs to be enhanced to more appropriately support these types of emerging software ecosystems. In this paper, we propose the use of Virtual Clusters on advanced supercomputing resources to enable systems to support not only HPC workloads, but also emerging big data stacks. Specifically, we have deployed the KVM hypervisor within Cray's Compute Node Linux on an XC-series supercomputer testbed. We also use libvirt and QEMU to manage and provision VMs directly on compute nodes, leveraging Ethernet-over-Aries network emulation. To our knowledge, this is the first known use of KVM on a true MPP supercomputer. We investigate the overhead of our solution using HPC benchmarks, evaluating both single-node performance and the weak scaling of a 32-node virtual cluster. Overall, we find that single-node performance of our solution using KVM on a Cray is very efficient, with near-native performance. However, overhead increases by up to 20% as virtual cluster size increases, due to limitations of the Ethernet-over-Aries bridged network. Furthermore, we deploy Apache Spark with large data analysis workloads in a Virtual Cluster, effectively demonstrating how diverse software ecosystems can be supported by High Performance Virtual Clusters.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
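The first reduction technique, wavelet compression of smooth systemwide load-balance traces, can be sketched with a Haar transform on a synthetic trace: because the trace is smooth, a small fraction of the coefficients carries almost all of the energy, so the rest can be dropped. This is an illustrative sketch under assumed data, not Libra's implementation.

```python
import numpy as np

def haar(x):
    """Full Haar wavelet decomposition of a length-2**k signal: the
    detail coefficients of every scale, then the final overall average."""
    x = np.asarray(x, dtype=float).copy()
    out, n = [], len(x)
    while n > 1:
        out.append((x[0:n:2] - x[1:n:2]) / 2.0)   # details at this scale
        x[:n // 2] = (x[0:n:2] + x[1:n:2]) / 2.0  # averages feed next scale
        n //= 2
    out.append(x[:1])
    return out

# synthetic systemwide load-balance trace: smooth trend plus small jitter
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 1024)
load = np.sin(2 * np.pi * t) + 0.01 * rng.normal(size=1024)

coeffs = np.concatenate(haar(load))
mags = np.sort(np.abs(coeffs))[::-1]
top = len(mags) // 10
ratio = (mags[:top] ** 2).sum() / (mags ** 2).sum()
# a smooth trace compresses well: most energy in a few coefficients
print(f"top 10% of coefficients carry {ratio:.1%} of the energy")
```

Keeping only the largest coefficients (plus their positions) is what turns a linear-in-task-count data volume into something transportable off the machine.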
The future of PanDA in ATLAS distributed computing
NASA Astrophysics Data System (ADS)
De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.
2015-12-01
Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.
The Large Scale Distribution of Water Ice in the Polar Regions of the Moon
NASA Astrophysics Data System (ADS)
Jordan, A.; Wilson, J. K.; Schwadron, N.; Spence, H. E.
2017-12-01
For in situ resource utilization, one must know where water ice is on the Moon. Many datasets have revealed both surface deposits of water ice and subsurface deposits of hydrogen near the lunar poles, but it has proved difficult to resolve the differences among the locations of these deposits. Despite these datasets disagreeing on how deposits are distributed on small scales, we show that most of these datasets do agree on the large scale distribution of water ice. We present data from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER) on the Lunar Reconnaissance Orbiter (LRO), LRO's Lunar Exploration Neutron Detector (LEND), the Neutron Spectrometer on Lunar Prospector (LPNS), LRO's Lyman Alpha Mapping Project (LAMP), LRO's Lunar Orbiter Laser Altimeter (LOLA), and Chandrayaan-1's Moon Mineralogy Mapper (M3). All, including those that show clear evidence for water ice, reveal surprisingly similar trends with latitude, suggesting that both surface and subsurface datasets are measuring ice. All show that water ice increases towards the poles, and most demonstrate that its signature appears at about ±70° latitude and increases poleward. This is consistent with simulations of how surface and subsurface cold traps are distributed with latitude. This large scale agreement constrains the origin of the ice, suggesting that an ancient cometary impact (or impacts) created a large scale deposit that has been rendered locally heterogeneous by subsequent impacts. Furthermore, it also shows that water ice may be available down to ±70°—latitudes that are more accessible than the poles for landing.
NASA Astrophysics Data System (ADS)
Fiore, Sandro; Płóciennik, Marcin; Doutriaux, Charles; Blanquer, Ignacio; Barbera, Roberto; Donvito, Giacinto; Williams, Dean N.; Anantharaj, Valentine; Salomoni, Davide D.; Aloisio, Giovanni
2017-04-01
In many scientific domains such as climate, data is often n-dimensional and requires tools that support specialized data types and primitives to be properly stored, accessed, analysed and visualized. Moreover, new challenges arise in large-scale scenarios and ecosystems where petabytes (PB) of data can be available and data can be distributed and/or replicated, such as the Earth System Grid Federation (ESGF) serving the Coupled Model Intercomparison Project, Phase 5 (CMIP5) experiment, providing access to 2.5PB of data for the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report (AR5). A case study on climate models intercomparison data analysis addressing several classes of multi-model experiments is being implemented in the context of the EU H2020 INDIGO-DataCloud project. Such experiments require the availability of large amounts of data (of multi-terabyte order) related to the output of several climate model simulations as well as the exploitation of scientific data management tools for large-scale data analytics. More specifically, the talk discusses in detail a use case on precipitation trend analysis in terms of requirements, architectural design solution, and infrastructural implementation. The experiment has been tested and validated on CMIP5 datasets, in the context of a large scale distributed testbed across EU and US involving three ESGF sites (LLNL, ORNL, and CMCC) and one central orchestrator site (PSNC). The general "environment" of the case study relates to: (i) multi-model data analysis inter-comparison challenges; (ii) addressed on CMIP5 data; and (iii) which are made available through the IS-ENES/ESGF infrastructure.
The added value of the solution proposed in the INDIGO-DataCloud project is summarized in the following: (i) it implements a different paradigm (from client- to server-side); (ii) it intrinsically reduces data movement; (iii) it makes lightweight the end-user setup; (iv) it fosters re-usability (of data, final/intermediate products, workflows, sessions, etc.) since everything is managed on the server-side; (v) it complements, extends and interoperates with the ESGF stack; (vi) it provides a "tool" for scientists to run multi-model experiments; and finally (vii) it can drastically reduce the time-to-solution for these experiments from weeks to hours. At the time of writing, the proposed testbed represents the first concrete implementation of a distributed multi-model experiment in the ESGF/CMIP context joining server-side and parallel processing, end-to-end workflow management and cloud computing. As opposed to the current scenario based on search & discovery, data download, and client-based data analysis, the INDIGO-DataCloud architectural solution described in this contribution addresses the scientific computing & analytics requirements by providing a paradigm shift based on server-side and high performance big data frameworks jointly with two-level workflow management systems realized at the PaaS level via a cloud infrastructure.
NASA Astrophysics Data System (ADS)
Burov, V. A.; Zotov, D. I.; Rumyantseva, O. D.
2014-07-01
A two-step algorithm is used to reconstruct the spatial distributions of the acoustic characteristics of soft biological tissues: the sound velocity and the absorption coefficient. Knowing these distributions is critical for early detection of benign and malignant neoplasms in biological tissues, primarily in the breast. At the first step, large-scale distributions are estimated; at the second step, they are refined with high resolution. Results of reconstruction on the basis of model initial data are presented. The principal necessity of first reconstructing the large-scale distributions, and of taking them into account at the second step, is illustrated. The use of CUDA technology for processing makes it possible to obtain final images of 1024 × 1024 samples in only a few minutes.
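The coarse-then-fine idea can be caricatured on a 1-D profile: first estimate the block-averaged, large-scale component, then refine the small-scale residual at full resolution. This is only a schematic analogue of the decomposition, assuming nothing about the actual tomographic algorithm:

```python
# Toy illustration of the two-step idea on a 1-D "sound velocity"
# profile (values in m/s, invented). Step 1 keeps only block means
# (large scale); step 2 adds back the high-resolution residual.

def coarse_estimate(profile, block):
    """Replace each block of samples by its mean (large-scale step)."""
    out = []
    for i in range(0, len(profile), block):
        chunk = profile[i:i + block]
        mean = sum(chunk) / len(chunk)
        out.extend([mean] * len(chunk))
    return out

def refine(profile, coarse):
    """Second step: add the high-resolution residual to the coarse map."""
    return [c + (p - c) for p, c in zip(profile, coarse)]

profile = [1500, 1502, 1498, 1500, 1540, 1542, 1538, 1540]
coarse = coarse_estimate(profile, block=4)
fine = refine(profile, coarse)
```

The point of the decomposition is that the large-scale component must be estimated first; the refinement step only corrects residuals around it.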
NASA Astrophysics Data System (ADS)
Matthes, J. H.; Dietze, M.; Fox, A. M.; Goring, S. J.; McLachlan, J. S.; Moore, D. J.; Poulter, B.; Quaife, T. L.; Schaefer, K. M.; Steinkamp, J.; Williams, J. W.
2014-12-01
Interactions between ecological systems and the atmosphere are the result of dynamic processes with system memories that persist from seconds to centuries. Adequately capturing long-term biosphere-atmosphere exchange within earth system models (ESMs) requires an accurate representation of changes in plant functional types (PFTs) through time and space, particularly at timescales associated with ecological succession. However, most model parameterization and development has occurred using datasets that span less than a decade. We tested the ability of ESMs to capture the ecological dynamics observed in paleoecological and historical data spanning the last millennium. Focusing on an area from the Upper Midwest to New England, we examined differences in the magnitude and spatial pattern of PFT distributions and ecotones between historic datasets and the CMIP5 inter-comparison project's large-scale ESMs. We then conducted a 1000-year model inter-comparison using six state-of-the-art biosphere models at sites that bridged regional temperature and precipitation gradients. The distribution of ecosystem characteristics in modeled climate space reveals widely disparate relationships between modeled climate and vegetation that led to large differences in long-term biosphere-atmosphere fluxes for this region. Model simulations revealed that both the interaction between climate and vegetation and the representation of ecosystem dynamics within models were important controls on biosphere-atmosphere exchange.
Linear velocity fields in non-Gaussian models for large-scale structure
NASA Technical Reports Server (NTRS)
Scherrer, Robert J.
1992-01-01
Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
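The role of positive kurtosis as a non-Gaussianity signature can be illustrated numerically: a local chi-squared field (a squared Gaussian) has strongly positive excess kurtosis, whereas a Gaussian field has none. This sketch checks only that property of the underlying field, not the paper's velocity-distribution derivation; the sample size and estimator are ours:

```python
# Numerical check that a squared-Gaussian (chi-squared, 1 d.o.f.) field
# is non-Gaussian with positive excess kurtosis (analytic value: 12).
# Seeded pseudo-random draws keep the check reproducible.
import random

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]
chi2_field = [g * g for g in gauss]  # local nonlinear transformation
```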
Microbial Activity and Depositional System Dynamics: Linking Scales With The Aid of New Technology
NASA Astrophysics Data System (ADS)
Defew, E. C.; Hagerthey, S. E.; Honeywill, C.; Perkins, R. G.; Black, K. S.; Paterson, D. M.
The dynamics of estuarine depositional systems are influenced by sediment-dwelling microphytobenthic assemblages. These assemblages produce extracellular polymeric substances (EPS), which are known to be important in the process of sediment biostabilisation. However, these communities are generally studied on very small spatial scales making the prediction of primary productivity and their importance in terms of sediment stability over large areas uncertain. Recent advances in our knowledge of the biostabilisation process have allowed the establishment of links between EPS production, spatial distribution of algal biomass and their primary productivity over much larger spatial scales. For example, during the multidisciplinary BIOPTIS project, remote sensing (RS) was combined with ground-truthing measurements of physical and biological parameters to produce synoptic maps leading to a better understanding of system dynamics and the potential effects of environmental perturbations such as climate change. Recent work using low-temperature scanning electron microscopy (LT-SEM) and in-line laser holography has measured the influence of EPS on the erosional behaviour of sediment flocs and particles and has shown that an increase in the concentration of EPS determines the nature of the eroded floc material and the critical threshold for sediment erosion. This provides the mechanistic link required between EPS concentration and sediment stability. Whilst it is not yet possible to discern EPS concentration directly by RS studies, we know that EPS concentrations in sediments co-vary with chlorophyll a content, and are closely related to algal productivity. Therefore, RS studies which provide large-scale spatial information of chlorophyll a distribution may be used to model the stability and productivity of intertidal depositional systems.
This paper introduces the basis of these linkages from the cellular level (in situ chlorophyll fluorescence), the ground-truthing approach (sediment stability, structure, pigment distribution, in situ chlorophyll fluorescence) and investigates the potential of an RS approach in a case study of a Scottish Estuary.
Sequential Service Restoration for Unbalanced Distribution Systems and Microgrids
Chen, Bo; Chen, Chen; Wang, Jianhui; ...
2017-07-07
The resilience and reliability of modern power systems are threatened by increasingly severe weather events and cyber-physical security events. An effective restoration methodology is desired to optimally integrate emerging smart grid technologies and pave the way for developing self-healing smart grids. In this paper, a sequential service restoration (SSR) framework is proposed to generate restoration solutions for distribution systems and microgrids in the event of large-scale power outages. The restoration solution contains a sequence of control actions that properly coordinate switches, distributed generators, and switchable loads to form multiple isolated microgrids. The SSR can be applied to three-phase unbalanced distribution systems and microgrids and can adapt to various operation conditions. Mathematical models are introduced for three-phase unbalanced power flow, voltage regulators, transformers, and loads. Furthermore, the SSR problem is formulated as a mixed-integer linear programming model, and its effectiveness is evaluated via the modified IEEE 123 node test feeder.
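The paper formulates SSR as a mixed-integer linear program over three-phase unbalanced power flow; a full MILP is beyond a sketch, but the core notion of an ordered sequence of load pick-up actions under limited generation capacity can be conveyed with a greedy toy. Load names, priorities, demands and the capacity figure below are all invented:

```python
# Minimal sketch of the sequential-restoration idea: given switchable
# loads with priorities and demands, and limited distributed-generation
# capacity, pick an energization sequence greedily (highest priority
# first, skipping loads that would exceed capacity). The real SSR
# solves a MILP; this toy only conveys "ordered switching actions".

def restoration_sequence(loads, capacity):
    """loads: {name: (priority, demand_kw)}; returns (sequence, served_kw)."""
    order = sorted(loads, key=lambda n: -loads[n][0])  # high priority first
    sequence, used = [], 0.0
    for name in order:
        demand = loads[name][1]
        if used + demand <= capacity:
            sequence.append(name)
            used += demand
    return sequence, used

loads = {"hospital": (3, 400.0), "water_pump": (2, 250.0),
         "mall": (1, 500.0), "houses": (2, 300.0)}
seq, served = restoration_sequence(loads, capacity=1000.0)
```

A real formulation would additionally enforce voltage, power-flow and switching constraints at every step, which is what makes the integer program necessary.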
NASA Technical Reports Server (NTRS)
Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.
2007-01-01
The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during symbolic state-space generation is the ability to perform garbage collection to free the memory occupied by dead nodes. However, garbage collection over a NOW requires nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
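An operation cache of the kind discussed memoizes (operation, operands) → result so repeated symbolic operations on the same decision-diagram nodes are not recomputed; a policy decides what to evict when memory is tight. A minimal sketch with FIFO eviction (the class and the policy are illustrative, not the report's actual implementation):

```python
# Hedged sketch of an "operation cache" for decision-diagram operations:
# results keyed by (operation, operand ids), with a size bound that
# forces an eviction policy (simple FIFO here).
from collections import OrderedDict

class OperationCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.table = OrderedDict()
        self.hits = self.misses = 0

    def lookup(self, op, a, b):
        key = (op, a, b)
        if key in self.table:
            self.hits += 1
            return self.table[key]
        self.misses += 1
        return None

    def insert(self, op, a, b, result):
        if len(self.table) >= self.capacity:
            self.table.popitem(last=False)  # evict oldest entry (FIFO)
        self.table[(op, a, b)] = result

cache = OperationCache(capacity=2)
cache.insert("union", 1, 2, "n12")
assert cache.lookup("union", 1, 2) == "n12"   # hit
cache.insert("union", 3, 4, "n34")
cache.insert("union", 5, 6, "n56")            # evicts ("union", 1, 2)
assert cache.lookup("union", 1, 2) is None    # miss after eviction
```

In a distributed setting the added wrinkle, as the report notes, is that eviction and garbage collection interact with remotely held nodes, so policy choices carry communication costs.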
Survey on large scale system control methods
NASA Technical Reports Server (NTRS)
Mercadal, Mathieu
1987-01-01
The problems inherent in large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee greater stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.
A bibliographical survey of large-scale systems
NASA Technical Reports Server (NTRS)
Corliss, W. R.
1970-01-01
A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.
Decoding the Nature of Emotion in the Brain.
Kragel, Philip A; LaBar, Kevin S
2016-06-01
A central, unresolved problem in affective neuroscience is understanding how emotions are represented in nervous system activity. After prior localization approaches largely failed, researchers began applying multivariate statistical tools to reconceptualize how emotion constructs might be embedded in large-scale brain networks. Findings from pattern analyses of neuroimaging data show that affective dimensions and emotion categories are uniquely represented in the activity of distributed neural systems that span cortical and subcortical regions. Results from multiple-category decoding studies are incompatible with theories postulating that specific emotions emerge from the neural coding of valence and arousal. This 'new look' into emotion representation promises to improve and reformulate neurobiological models of affect.
Iavindrasana, Jimison; Depeursinge, Adrien; Ruch, Patrick; Spahni, Stéphane; Geissbuhler, Antoine; Müller, Henning
2007-01-01
The diagnostic and therapeutic processes, as well as the development of new treatments, are hindered by the fragmentation of the information which underlies them. In a multi-institutional research study database, the clinical information system (CIS) contains the primary data input. A large part of the budget of large-scale clinical studies is often spent on data creation and maintenance. The objective of this work is to design a decentralized, scalable, reusable database architecture with lower maintenance costs for managing and integrating the distributed heterogeneous data required as the basis for a large-scale research project. Technical and legal aspects are taken into account based on various use case scenarios. The architecture contains four layers: data storage and access, decentralized at their production source; a connector, acting as a proxy between the CIS and the external world; an information mediator, serving as a data access point; and the client side. The proposed design will be implemented at six clinical centers participating in the @neurIST project as part of a larger system for data integration and reuse for aneurysm treatment.
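The four-layer design can be sketched as a mediator fanning queries out to per-site connectors while the data stay at their production source. All class, site and field names below are invented for illustration; the real architecture also handles security and the legal constraints mentioned above:

```python
# Schematic of the connector/mediator layers: each site keeps its
# records locally behind a connector; the mediator is the single data
# access point that federates all connectors and merges the answers.

class SiteConnector:
    """Proxy between a site's clinical information system and the outside."""
    def __init__(self, site, records):
        self.site = site
        self._records = records  # data stay at their production source

    def query(self, predicate):
        return [dict(r, site=self.site) for r in self._records if predicate(r)]

class Mediator:
    """Data access point that fans a query out to every connector."""
    def __init__(self, connectors):
        self.connectors = connectors

    def query(self, predicate):
        results = []
        for c in self.connectors:
            results.extend(c.query(predicate))
        return results

site_a = SiteConnector("site_a", [{"age": 54}, {"age": 61}])
site_b = SiteConnector("site_b", [{"age": 47}])
mediator = Mediator([site_a, site_b])
over_50 = mediator.query(lambda r: r["age"] > 50)
```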
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability that a cell is occupied is higher for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
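For the standard negative binomial counts-in-cells model (the paper presents an extension, whose details differ), the void probability has a closed form, P(0) = (1 + g·n̄)^(-1/g), where n̄ is the mean count and g a clustering parameter. A small numerical check of that baseline model:

```python
# Negative binomial counts-in-cells (standard form, not the paper's
# extension): mean nbar, clustering parameter g, variance nbar + g*nbar^2.
import math

def nb_prob(N, nbar, g):
    """P(N) for the negative binomial counts-in-cells distribution."""
    r = 1.0 / g
    log_p = (math.lgamma(N + r) - math.lgamma(r) - math.lgamma(N + 1)
             + N * math.log(g * nbar) - (N + r) * math.log1p(g * nbar))
    return math.exp(log_p)

def void_probability(nbar, g):
    """Closed-form P(0): the probability that a cell is empty."""
    return (1.0 + g * nbar) ** (-1.0 / g)
```

In the limit g → 0 the void probability tends to the Poisson value exp(-n̄); clustering (g > 0) makes voids more probable at fixed mean.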
Chris W. Woodall; Patrick D. Miles; John S. Vissage
2005-01-01
Stand density index (SDI), although developed for use in even-aged monocultures, has been used for assessing stand density in large-scale forest inventories containing diverse tree species and size distributions. To improve application of SDI in unevenaged, mixed species stands present in large-scale forest inventories, trends in maximum SDI across diameter classes...
Newland, Jamee; Newman, Christy; Treloar, Carla
2016-08-01
In Australia, sterile needles and syringes are distributed to people who inject drugs (PWID) through formal services for the purposes of preventing blood borne viruses (BBV). Peer distribution involves people acquiring needles from formal services and redistributing them to others. This paper investigates the dynamics of the distribution of sterile injecting equipment among networks of people who inject drugs in four sites in New South Wales (NSW), Australia. Qualitative data exploring the practice of peer distribution were collected through in-depth, semi-structured interviews and participatory social network mapping. These interviews explored injecting equipment demand, access to services, relationship pathways through which peer distribution occurred, an estimate of the size of the different peer distribution roles, and participants' understanding of the illegality of peer distribution in NSW. Data were collected from 32 participants, and 31 (98%) reported participating in peer distribution in the months prior to interview. Of those 31 participants, five reported large-scale formal distribution, with an estimated volume of 34,970 needles and syringes annually. Twenty-two participated in reciprocal exchange, where equipment was distributed and received on an informal basis that appeared dependent on context and circumstance, and four participants reported recipient peer distribution as their only access to sterile injecting equipment. Most (n=27) were unaware that it was illegal to distribute injecting equipment to their peers. Peer distribution was almost ubiquitous amongst the PWID participating in the study, and although five participants reported taking part in the highly organised, large-scale distribution of injecting equipment for altruistic reasons, peer distribution was more commonly reported to take place in small networks of friends and/or partners for reasons of convenience.
The law regarding the illegality of peer distribution needs to change so that NSPs can capitalise on peer distribution to increase the options available to PWID and to acknowledge PWID as essential harm reduction agents in the prevention of BBVs.
Long-distance continuous-variable quantum key distribution by controlling excess noise
NASA Astrophysics Data System (ADS)
Huang, Duan; Huang, Peng; Lin, Dakai; Zeng, Guihua
2016-01-01
Quantum cryptography founded on the laws of physics could revolutionize the way in which communication information is protected. Significant progress in long-distance quantum key distribution based on discrete variables has made secure quantum communication in real-world conditions available. However, the alternative approach implemented with continuous variables has not yet reached a secure distance beyond 100 km. Here, we overcome the previous range limitation by controlling system excess noise and report such a long-distance continuous-variable quantum key distribution experiment. Our result paves the way to large-scale secure quantum communication with continuous variables and serves as a stepping stone in the quest for a quantum network.
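A back-of-the-envelope view of why range is hard: channel transmittance decays exponentially with fibre length (roughly 0.2 dB/km in standard telecom fibre), so the relative weight of excess noise grows with distance. The functions below only reproduce that loss arithmetic; they are not the paper's security analysis:

```python
# Fibre-loss arithmetic relevant to long-distance CV-QKD: transmittance
# falls as 10^(-loss*L/10), so every +50 km at 0.2 dB/km costs a
# factor of 10 in surviving signal.
import math

def transmittance(length_km, loss_db_per_km=0.2):
    """Fraction of light surviving a fibre of the given length."""
    return 10.0 ** (-loss_db_per_km * length_km / 10.0)

def max_range(threshold, loss_db_per_km=0.2):
    """Largest fibre length whose transmittance still exceeds threshold."""
    return 10.0 * (-math.log10(threshold)) / loss_db_per_km
```

With these numbers, crossing 100 km means working with one percent of the launched signal, which is why controlling excess noise becomes the decisive factor.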
NASA Technical Reports Server (NTRS)
Guhathakurta, M.; Fisher, R. R.
1994-01-01
In this paper we utilize the latitude distribution of the coronal temperature during the period 1984-1992, derived by Guhathakurta et al. (1993) from ground-based intensity observations of the green (5303 Å Fe XIV) and red (6374 Å Fe X) coronal forbidden lines at the National Solar Observatory at Sacramento Peak, and establish its association with the global magnetic field and the density distributions in the corona. The plasma temperature, T, was estimated from the intensity ratio Fe X/Fe XIV (where T is inversely proportional to the ratio), since both emission lines come from ionized states of Fe and the ratio is only weakly dependent on density. We observe that there is a large-scale organization of the inferred coronal temperature distribution that is associated with the large-scale, weak magnetic field structures and bright coronal features; this organization tends to persist through most of the magnetic activity cycle. These high-temperature structures exhibit time-space characteristics similar to those of the polar crown filaments. This distribution differs in spatial and temporal characterization from the traditional picture of sunspot and active region evolution over the range of the sunspot cycle, which are manifestations of the small-scale, strong magnetic field regions.
GISpark: A Geospatial Distributed Computing Platform for Spatiotemporal Big Data
NASA Astrophysics Data System (ADS)
Wang, S.; Zhong, E.; Wang, E.; Zhong, Y.; Cai, W.; Li, S.; Gao, S.
2016-12-01
Geospatial data are growing exponentially because of the proliferation of cost-effective and ubiquitous positioning technologies such as global remote-sensing satellites and location-based devices. Analyzing large amounts of geospatial data can provide great value for both industrial and scientific applications. The data- and compute-intensive characteristics inherent in geospatial big data increasingly pose great challenges to technologies for data storage, computing, and analysis. Such challenges require a scalable and efficient architecture that can store, query, analyze, and visualize large-scale spatiotemporal data. Therefore, we developed GISpark, a geospatial distributed computing platform for processing large-scale vector, raster, and stream data. GISpark is constructed on the latest virtualized computing infrastructures and distributed computing architecture. OpenStack and Docker are used to build a multi-user cloud computing infrastructure for GISpark. Virtual storage systems such as HDFS, Ceph, and MongoDB are combined and adopted for spatiotemporal data storage management. A Spark-based algorithm framework is developed for efficient parallel computing. Within this framework, SuperMap GIScript and various open-source GIS libraries can be integrated into GISpark. GISpark can also be integrated with scientific computing environments (e.g., Anaconda), interactive computing web applications (e.g., Jupyter notebook), and machine learning tools (e.g., TensorFlow/Orange). The associated geospatial facilities of GISpark, in conjunction with the scientific computing environment, exploratory spatial data analysis tools, and temporal data management and analysis systems, make up a powerful geospatial computing tool. GISpark not only provides spatiotemporal big data processing capacity in the geospatial field, but also provides a spatiotemporal computational model and advanced geospatial visualization tools for other domains with spatial properties.
We tested the performance of the platform based on taxi trajectory analysis. Results suggested that GISpark achieves excellent run-time performance in spatiotemporal big data applications.
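The taxi-trajectory benchmark boils down to large-scale spatial aggregation. A single-machine sketch of the per-partition work, binning GPS points into a regular lon/lat grid and counting visits per cell (coordinates invented; the real test distributed this kind of step over Spark):

```python
# Toy spatial aggregation in the spirit of a taxi-trajectory analysis:
# map each GPS point to a grid cell and count points per cell. In a
# distributed setting this is the classic map (point -> cell) plus
# reduce (count per cell) pattern.
from collections import Counter

def grid_cell(lon, lat, cell_deg=0.01):
    """Map a coordinate to the integer index of its grid cell."""
    return (int(lon // cell_deg), int(lat // cell_deg))

def density(points, cell_deg=0.01):
    """Count points per grid cell."""
    return Counter(grid_cell(lon, lat, cell_deg) for lon, lat in points)

points = [(116.397, 39.908), (116.398, 39.909), (116.405, 39.915)]
counts = density(points)  # first two points share a cell
```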
Structure and Variability of Water Vapor in the Upper Troposphere and Lower Stratosphere
NASA Technical Reports Server (NTRS)
Salby, Murry L.
2001-01-01
Upper-tropospheric humidity (UTH) has been synoptically mapped via an algorithm that rejects small-scale undersampled variance, which is intrinsic to asynoptic measurements of water vapor, cloud, and other convective properties. Mapped distributions of UTH have been used, jointly with high-resolution Global Cloud Imagery (GCI), to study how the upper troposphere is humidified. The time-mean distribution of UTH is spatially correlated with the time-mean distribution of cold cloud fraction η_c (T < 230 K). Regions of large UTH coincide with regions of large η_c, which mark deep convection. They also coincide with regions of reduced vertical stability, in which the vertical gradient of θ is weakened by convective mixing. Coldest cloud cover is attended by convective overshoots above the local tropopause, which is simultaneously coldest and highest. Together, these features reflect the upper troposphere being ventilated by convection, which mixes in moist air from lower levels. Histograms of UTH and η_c have been applied to construct the joint probability density function, which quantifies the relationship between these properties. The expected value of UTH in convective regions is strongly correlated with the expected value of η_c. In ensembles of asynoptic samples, the correlation between E[UTH] and E[η_c] exceeds 0.80. As these expectations reflect the most likely values, the strong correlation between E[UTH] and E[η_c] indicates that the large-scale organization of UTH is strongly shaped by convective pumping of moisture from lower levels. The same relationship holds for unsteady fields, even though, instantaneously, those fields are composed almost entirely of small-scale convective structure. The spatial autocorrelation of UTH, constructed at high resolution from overpass data along ascending and descending tracks of the orbit, is limited to only a couple of degrees in the horizontal.
This mirrors the spatial autocorrelation of η_c, which likewise operates coherently on short scales. The short correlation scale of UTH, which reflects the scale of individual convective systems, is comparable to the spacing of retrievals from MLS. These scales are undersampled in the asynoptic measurements. Despite this undersampling, the mapping algorithm described above successfully recovers synoptic behavior operating coherently on large scales. It reveals eastward migration of anomalous UTH from the Indian Ocean to the central Pacific, in association with the modulation of convection by the Madden-Julian oscillation. Additional information is contained in the original extended abstract.
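The headline statistic, a correlation above 0.80 between E[UTH] and E[η_c], is an ordinary Pearson correlation over convective regions. A sketch with synthetic series standing in for the two expectations (the numbers are invented and chosen to co-vary):

```python
# Pearson correlation of two series, as used to quantify the link
# between expected UTH and expected cold cloud fraction. The data here
# are synthetic stand-ins, not the retrievals analyzed in the abstract.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

e_uth = [0.30, 0.45, 0.52, 0.61, 0.70]   # synthetic E[UTH]
e_etac = [0.05, 0.12, 0.15, 0.22, 0.27]  # synthetic E[eta_c]
r = pearson(e_uth, e_etac)
```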