Sample records for large-scale complex systems

  1. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to these four system-level requirements, four major challenges arise. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been a focus of research since the 1970s in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with a focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components at the same level into a module and designs common interfaces for all components of that module; third, replications are made for critical agents and organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle the uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. Using different frequencies for local DBN agent belief updating and global system belief updating balances the communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system.
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms with a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems in dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
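
    The fully factorized Boyen-Koller (BK) step mentioned above can be illustrated with a toy example. The sketch below uses a hypothetical two-variable binary DBN with made-up transition and sensor probabilities: the factored belief is propagated through one exact transition step, conditioned on local evidence, and then projected back onto independent marginals. It illustrates the general BK scheme only, not the dissertation's MSDBN implementation.

```python
# Minimal sketch of a fully factorized Boyen-Koller (BK) update for a
# hypothetical two-variable binary DBN; all probabilities are made up.
import numpy as np

# P(X_i'=1 | X1, X2), indexed [x1, x2] (hypothetical transition CPTs).
trans = [
    np.array([[0.1, 0.6], [0.4, 0.9]]),   # variable X1'
    np.array([[0.2, 0.3], [0.7, 0.8]]),   # variable X2'
]
# P(z=1 | X=x), the same hypothetical sensor model for both variables.
obs_like = np.array([0.2, 0.9])

def bk_step(marginals, evidence):
    """One BK step: factored prior -> exact one-step joint -> condition -> project."""
    p1, p2 = marginals
    prior = np.array([[(1 - p1) * (1 - p2), (1 - p1) * p2],
                      [p1 * (1 - p2),       p1 * p2]])
    joint = np.zeros((2, 2))                        # over (X1', X2')
    for x1p in (0, 1):
        for x2p in (0, 1):
            f1 = trans[0] if x1p else 1 - trans[0]  # P(X1'=x1p | X1, X2)
            f2 = trans[1] if x2p else 1 - trans[1]
            joint[x1p, x2p] = float((prior * f1 * f2).sum())
    for axis, z in enumerate(evidence):             # condition on local observations
        lik = obs_like if z else 1 - obs_like       # P(z | X_i'=0), P(z | X_i'=1)
        joint *= lik.reshape(2, 1) if axis == 0 else lik.reshape(1, 2)
    joint /= joint.sum()
    # BK projection: keep only the single-variable marginals.
    return [float(joint[1, :].sum()), float(joint[:, 1].sum())]

belief = [0.5, 0.5]
for z in [(1, 0), (1, 1), (0, 1)]:
    belief = bk_step(belief, z)
    print(belief)
```

    Each call to bk_step is one local belief update; in an MSDBN-style setting, this local update would run at a higher frequency than the global, message-passing consistency update described in the abstract.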

  2. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  3. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  4. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  5. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  6. Preferential pathways in complex fracture systems and their influence on large scale transport

    NASA Astrophysics Data System (ADS)

    Willmann, M.; Mañé, R.; Tyukhova, A.

    2017-12-01

    Many subsurface applications in complex fracture systems require large-scale predictions. Precise predictions are difficult because of the existence of preferential pathways at different scales. The intrinsic complexity of fracture systems increases within fractured sedimentary formations, because the coupling of fractures and matrix also has to be taken into account. This interplay of the fracture system and the sedimentary matrix is strongly controlled by the actual fracture aperture of an individual fracture, and an effective aperture cannot easily be determined because of the preferential pathways along the fracture plane. We investigate the influence of these preferential pathways on large-scale solute transport and upscale the aperture. By explicitly modeling flow and particle tracking in individual fractures, we develop a new effective transport aperture weighted by the aperture along the preferential paths, a Lagrangian aperture. We show that this new aperture is consistently larger than existing definitions of effective flow and transport apertures. Finally, we apply our results to a fractured sedimentary formation in Northern Switzerland.

  7. Prediction of monthly rainfall on homogeneous monsoon regions of India based on large scale circulation patterns using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Kashid, Satishkumar S.; Maity, Rajib

    2012-08-01

    Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climatic system. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and the tropical Indian Ocean (EQUINOO) are established to influence the Indian Summer Monsoon Rainfall. The information in these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been made for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India' defined by the Indian Institute of Tropical Meteorology. The 'Artificial Intelligence' tool 'Genetic Programming' (GP) has been employed for modeling this problem, and the GP approach is found to capture the complex relationship between monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices ENSO and EQUINOO. Research findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall with a correlation coefficient as high as 0.866, which appears attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall, for India as one unit and for the five homogeneous monsoon regions, based on ENSO and EQUINOO indices of March, April, and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower correlation coefficient (C.C.) values for the different 'homogeneous monsoon regions'.
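
    As a rough illustration of the modeling approach described above, the sketch below runs symbolic regression by genetic programming on synthetic stand-ins for the ENSO and EQUINOO indices and monthly rainfall. It assumes the third-party gplearn package is installed; the data, the target relationship, and the GP settings are invented and are not the study's dataset or configuration.

```python
# Sketch: symbolic regression with genetic programming (gplearn assumed
# installed) on synthetic stand-ins for ENSO/EQUINOO indices and rainfall.
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
n = 300
enso = rng.standard_normal(n)              # synthetic ENSO index
equinoo = rng.standard_normal(n)           # synthetic EQUINOO index
X = np.column_stack([enso, equinoo])
# Invented "true" relationship plus noise, standing in for observed rainfall.
y = 2.0 * enso - 1.5 * enso * equinoo + 0.3 * rng.standard_normal(n)

gp = SymbolicRegressor(population_size=500, generations=15,
                       function_set=('add', 'sub', 'mul'),
                       parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)

pred = gp.predict(X)
cc = np.corrcoef(y, pred)[0, 1]
print("evolved expression:", gp._program)
print(f"correlation coefficient: {cc:.3f}")
```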

  8. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, which require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. Therefore, it is a challenging problem to find scalable computational methods for distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
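
    As a concrete, minimal illustration of the LMI machinery referred to above, the sketch below checks a plain closed-loop Lyapunov LMI for a single node with made-up system matrices, using the open-source cvxpy package in place of the MATLAB toolboxes mentioned in the abstract. It is not the paper's distributed MAS condition, only the basic feasibility test such conditions build on.

```python
# Sketch: feasibility of a Lyapunov LMI  (A+BK)^T P + P (A+BK) < 0,  P > 0,
# for one node with hypothetical matrices, solved with cvxpy.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [2.0, -1.0]])          # hypothetical open-loop (unstable) dynamics
B = np.array([[0.0],
              [1.0]])
K = np.array([[-6.0, -3.0]])         # hypothetical stabilising state-feedback gain
Acl = A + B @ K                      # closed-loop matrix

n = Acl.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               Acl.T @ P + P @ Acl << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```

    Feasibility of this LMI certifies stability of the single closed-loop node; the paper's scalable conditions assemble node-level certificates of this kind into a distributed design.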

  9. Variations of trends of indicators describing complex systems: Change of scaling precursory to extreme events

    NASA Astrophysics Data System (ADS)

    Keilis-Borok, V. I.; Soloviev, A. A.

    2010-09-01

    Socioeconomic and natural complex systems persistently generate extreme events, also known as disasters, crises, or critical transitions. Here we analyze patterns of background activity preceding extreme events in four complex systems: economic recessions, surges in homicides in a megacity, magnetic storms, and strong earthquakes. We use as a starting point the indicators describing the system's behavior and identify changes in an indicator's trend. Those changes constitute our background events (BEs). We demonstrate a premonitory pattern common to all four systems considered: relatively large-magnitude BEs become more frequent before an extreme event. A premonitory change of scaling has been found in various models and observations. Here we demonstrate this change in the scaling of uniformly defined BEs in four real complex systems, their enormous differences notwithstanding.

  10. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
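
    The polynomial-expansion idea can be sketched numerically as follows. The inverse in the LMMSE estimator is replaced by a degree-L matrix polynomial, here a plain truncated Neumann series applied with matrix-vector products only; the covariance model, noise power, and dimensions are made up, and the coefficients are not MSE-optimized as in the actual PEACH estimators.

```python
# Sketch: LMMSE channel estimation with the inverse replaced by a degree-L
# matrix polynomial (plain truncated Neumann series, not the optimised PEACH
# coefficients). All statistics and dimensions are made up.
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # number of receive antennas (hypothetical)
rho = 0.7                                # hypothetical exponential correlation factor
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
sigma2 = 0.1
M = R + sigma2 * np.eye(N)               # covariance of the received pilot signal

# Draw one channel realisation and a noisy pilot observation y = h + n.
h = np.linalg.cholesky(R) @ rng.standard_normal(N)
y = h + np.sqrt(sigma2) * rng.standard_normal(N)

# Exact LMMSE estimate: h_hat = R M^{-1} y  (cubic cost in N).
h_mmse = R @ np.linalg.solve(M, y)

# Degree-L polynomial approximation of M^{-1} y via a Neumann series:
# M^{-1} ~= alpha * sum_{k=0}^{L} (I - alpha M)^k, applied with only
# matrix-vector products (square cost in N per term).
evals = np.linalg.eigvalsh(M)
alpha = 2.0 / (evals.max() + evals.min())
L = 8
term, acc = y.copy(), y.copy()
for _ in range(L):
    term = term - alpha * (M @ term)     # (I - alpha*M) @ term
    acc += term
h_poly = R @ (alpha * acc)

err = np.linalg.norm(h_mmse - h_poly) / np.linalg.norm(h_mmse)
print(f"relative difference from exact LMMSE: {err:.3e}")
```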

  11. Self-assembly of polyelectrolyte surfactant complexes using large scale MD simulation

    NASA Astrophysics Data System (ADS)

    Goswami, Monojoy; Sumpter, Bobby

    2014-03-01

    Polyelectrolytes (PE) and surfactants are known to form interesting structures with varied properties in aqueous solutions. The morphological details of the PE-surfactant complexes depend on a combination of the polymer backbone, electrostatic interactions, and hydrophobic interactions. We study the self-assembly of cationic PE and anionic surfactant complexes in dilute conditions. The importance of such complexes of PE with oppositely charged surfactants can be found in biological systems, such as the immobilization of enzymes in polyelectrolyte complexes or the nonspecific association of DNA with protein. Many useful properties of PE-surfactant complexes come from the highly ordered structures of surfactant self-assembly inside the PE aggregate, which have applications in industry. We perform large-scale molecular dynamics simulations using LAMMPS to understand the structure and dynamics of PE-surfactant systems. Our investigation shows highly ordered pearl-necklace structures that have been observed experimentally in biological systems. We investigate many different properties of PE-surfactant complexation over different parameter ranges that are useful for pharmaceutical, engineering, and biological applications.

  12. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be significantly increased, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  13. A Study of Students' Reasoning about Probabilistic Causality: Implications for Understanding Complex Systems and for Instructional Design

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell

    2017-01-01

    Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…

  14. Research directions in large scale systems and decentralized control

    NASA Technical Reports Server (NTRS)

    Tenney, R. R.

    1980-01-01

    Control theory provides a well-established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control-theoretic and knowledge-based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large-scale systems.

  15. Scaling the Pyramid Model across Complex Systems Providing Early Care for Preschoolers: Exploring How Models for Decision Making May Enhance Implementation Science

    ERIC Educational Resources Information Center

    Johnson, LeAnne D.

    2017-01-01

    Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…

  16. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    USDA-ARS?s Scientific Manuscript database

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  17. A SPATIALLY EXPLICIT HIERARCHICAL APPROACH TO MODELING COMPLEX ECOLOGICAL SYSTEMS: THEORY AND APPLICATIONS. (R827676)

    EPA Science Inventory

    Ecological systems are generally considered among the most complex because they are characterized by a large number of diverse components, nonlinear interactions, scale multiplicity, and spatial heterogeneity. Hierarchy theory, as well as empirical evidence, suggests that comp...

  18. Complexity, Robustness, and Network Thermodynamics in Large-Scale and Multiagent Systems: A Hybrid Control Approach

    DTIC Science & Technology

    2012-01-11

    dynamic behavior, wherein a dissipative dynamical system can deliver only a fraction of its energy to its surroundings and can store only a fraction of the...collection of interacting subsystems. The behavior and properties of the aggregate large-scale system can then be deduced from the behaviors of the...uniqueness is established. This state space formalism of thermodynamics shows that the behavior of heat, as described by the conservation equations of

  19. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improved system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  20. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move from the randomly set positions to the original positions, and correspondingly the node positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
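
    A toy version of the spring-model idea is sketched below: a single blind node relaxes under virtual spring forces whose natural lengths are the measured distances to anchor neighbors. The geometry, step size, and noise-free ranges are made up, and the three patches mentioned in the abstract are omitted.

```python
# Toy sketch of spring-model localisation (LASM-style): a blind node relaxes
# under virtual spring forces whose natural lengths are the measured distances
# to anchor neighbours. Geometry, ranges, and step size are made up.
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([6.0, 3.0])                       # unknown blind-node position
meas = np.linalg.norm(anchors - true_pos, axis=1)     # measured ranges (noise-free here)

pos = rng.uniform(0, 10, size=2)                      # random initial guess
step = 0.05                                           # relaxation step size
for _ in range(500):
    diff = pos - anchors                              # vectors anchor -> node
    dist = np.linalg.norm(diff, axis=1)
    # Spring force: proportional to (current length - natural length),
    # directed along each anchor-node link.
    force = -((dist - meas) / dist)[:, None] * diff
    pos = pos + step * force.sum(axis=0)

print("estimated:", pos, " true:", true_pos)
```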

  1. Can Models Capture the Complexity of the Systems Engineering Process?

    NASA Astrophysics Data System (ADS)

    Boppana, Krishna; Chow, Sam; de Weck, Olivier L.; Lafon, Christian; Lekkakos, Spyridon D.; Lyneis, James; Rinaldi, Matthew; Wang, Zhiyong; Wheeler, Paul; Zborovskiy, Marat; Wojcik, Leonard A.

    Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003 and Cullen, 2004), and many others have been late, well over budget, or have failed: the Hilton/Marriott/American Airlines system for hotel reservations and flights (1988-1992; 125 million; "scrapped")

  2. Organizational Agility and Complex Enterprise System Innovations: A Mixed Methods Study of the Effects of Enterprise Systems on Organizational Agility

    ERIC Educational Resources Information Center

    Kharabe, Amol T.

    2012-01-01

    Over the last two decades, firms have operated in "increasingly" accelerated "high-velocity" dynamic markets, which require them to become "agile." During the same time frame, firms have increasingly deployed complex enterprise systems--large-scale packaged software "innovations" that integrate and automate…

  3. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimuli from its environment. CAS modeling is an effective method of managing the complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build an SCN restoration model, look at the inherent problems associated with these data, and understand the complexity that arises due to the integration of these data.

  4. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams at numerous levels of the organization and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they should be treated as socio-technical in nature. The elicitation of requirements for most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, thus increasing the cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making to minimize the value gap due to requirements creep by eliminating ambiguity that arises during design and development. A sample hierarchical organization is used to demonstrate the state of the system at the occurrence of requirements creep, in terms of cost and time, using the Cynefin framework. These trials are continued for different requirements and at different sub-system levels. The results obtained show that the Cynefin framework can be used to improve the value of the system and can be used for predictive analysis. Decision makers can use these findings with rigorous approaches to improve the design of Large-Scale Complex Engineered Systems.

  5. Assessing Understanding of Complex Causal Networks Using an Interactive Game

    ERIC Educational Resources Information Center

    Ross, Joel

    2013-01-01

    Assessing people's understanding of the causal relationships found in large-scale complex systems may be necessary for addressing many critical social concerns, such as environmental sustainability. Existing methods for assessing systems thinking and causal understanding frequently use the technique of cognitive causal mapping. However, the…

  6. Generalized Master Equation with Non-Markovian Multichromophoric Förster Resonance Energy Transfer for Modular Exciton Densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Seogjoo; Hoyer, Stephan; Fleming, Graham

    2014-10-31

    A generalized master equation (GME) governing quantum evolution of modular exciton density (MED) is derived for large scale light harvesting systems composed of weakly interacting modules of multiple chromophores. The GME-MED offers a practical framework to incorporate real time coherent quantum dynamics calculations of small length scales into dynamics over large length scales, and also provides a non-Markovian generalization and rigorous derivation of the Pauli master equation employing multichromophoric Förster resonance energy transfer rates. A test of the GME-MED for four sites of the Fenna-Matthews-Olson complex demonstrates how coherent dynamics of excitonic populations over coupled chromophores can be accurately described by transitions between subgroups (modules) of delocalized excitons. Application of the GME-MED to the exciton dynamics between a pair of light harvesting complexes in purple bacteria demonstrates its promise as a computationally efficient tool to investigate large scale exciton dynamics in complex environments.

  7. Large-scale systems: Complexity, stability, reliability

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1975-01-01

    After showing that a complex dynamic system with a competitive structure has highly reliable stability, a class of noncompetitive dynamic systems for which competitive models can be constructed is defined. It is shown that such a construction is possible in the context of the hierarchic stability analysis. The scheme is based on the comparison principle and vector Liapunov functions.

  8. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
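
    One family of methods covered by the survey (SVD-based reduction) can be sketched compactly: collect trajectory snapshots of a larger system, take an SVD, and project the dynamics onto the leading modes. The linear test system and dimensions below are invented for illustration; real biochemical reaction networks are nonlinear, and the survey discusses more sophisticated variants.

```python
# Sketch of SVD/POD-style model reduction on an invented stable linear system:
# snapshots -> SVD -> project dynamics onto the leading r modes.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n, r = 30, 3                                          # full and reduced dimensions (made up)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))    # hypothetical stable dynamics
x0 = rng.standard_normal(n)

t_eval = np.linspace(0.0, 5.0, 200)
full = solve_ivp(lambda t, x: A @ x, (0.0, 5.0), x0, t_eval=t_eval)

# POD basis from the snapshot matrix (columns are states over time).
U, s, _ = np.linalg.svd(full.y, full_matrices=False)
V = U[:, :r]                                          # leading r modes

# Reduced dynamics: z' = (V^T A V) z,  x ~= V z.
Ar = V.T @ A @ V
red = solve_ivp(lambda t, z: Ar @ z, (0.0, 5.0), V.T @ x0, t_eval=t_eval)
x_approx = V @ red.y

err = np.linalg.norm(full.y - x_approx) / np.linalg.norm(full.y)
print(f"relative trajectory error with r={r}: {err:.3e}")
```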

  9. Towards a comprehensive understanding of emerging dynamics and function of pancreatic islets: A complex network approach. Comment on "Network science of biological systems at different scales: A review" by Gosak et al.

    NASA Astrophysics Data System (ADS)

    Loppini, Alessandro

    2018-03-01

    Complex network theory represents a comprehensive mathematical framework to investigate biological systems, ranging from sub-cellular and cellular scales up to large-scale networks describing species interactions and ecological systems. In their exhaustive and comprehensive work [1], Gosak et al. discuss several scenarios in which the network approach was able to uncover general properties and underlying mechanisms of cells organization and regulation, tissue functions and cell/tissue failure in pathology, by the study of chemical reaction networks, structural networks and functional connectivities.

  10. The large-scale organization of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.

    2000-10-01

    In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
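
    The kind of degree-distribution scaling discussed above can be probed with a few lines of code. The sketch below generates a synthetic scale-free network as a stand-in for a metabolic graph and crudely estimates the degree exponent with a log-log least-squares fit; analyses of real metabolic networks use the actual reaction graphs and maximum-likelihood estimators.

```python
# Sketch: synthetic scale-free network plus a crude log-log fit of the
# degree-distribution exponent (stand-in for a real metabolic network).
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(n=5000, m=2, seed=0)    # synthetic stand-in network
degrees = np.array([d for _, d in G.degree()])

# Empirical degree distribution P(k).
ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()

# Least-squares slope on log-log axes (crude estimate of the exponent gamma).
mask = ks >= 2
gamma = -np.polyfit(np.log(ks[mask]), np.log(pk[mask]), 1)[0]
print(f"estimated degree exponent gamma ~ {gamma:.2f}")
```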

  11. About the bears and the bees: Adaptive responses to asymmetric warfare

    NASA Astrophysics Data System (ADS)

    Ryan, Alex

    Conventional military forces are organised to generate large scale effects against similarly structured adversaries. Asymmetric warfare is a 'game' between a conventional military force and a weaker adversary that is unable to match the scale of effects of the conventional force. In asymmetric warfare, an insurgents' strategy can be understood using a multi-scale perspective: by generating and exploiting fine scale complexity, insurgents prevent the conventional force from acting at the scale they are designed for. This paper presents a complex systems approach to the problem of asymmetric warfare, which shows how future force structures can be designed to adapt to environmental complexity at multiple scales and achieve full spectrum dominance.

  12. Regional modeling of groundwater flow and arsenic transport in the Bengal Basin: challenges of scale and complexity (Invited)

    NASA Astrophysics Data System (ADS)

    Michael, H. A.; Voss, C. I.

    2009-12-01

    Widespread arsenic poisoning is occurring in large areas of Bangladesh and West Bengal, India due to high arsenic levels in shallow groundwater, which is the primary source of irrigation and drinking water in the region. The high-arsenic groundwater exists in aquifers of the Bengal Basin, a huge sedimentary system approximately 500km x 500km wide and greater than 15km deep in places. Deeper groundwater (>150m) is nearly universally low in arsenic and a potential source of safe drinking water, but evaluation of its sustainability requires understanding of the entire, interconnected regional aquifer system. Numerical modeling of flow and arsenic transport in the basin introduces problems of scale: challenges in representing the system in enough detail to produce meaningful simulations and answer relevant questions while maintaining enough simplicity to understand controls on processes and operating within computational constraints. A regional groundwater flow and transport model of the Bengal Basin was constructed to assess the large-scale functioning of the deep groundwater flow system, the vulnerability of deep groundwater to pumping-induced migration from above, and the effect of chemical properties of sediments (sorption) on sustainability. The primary challenges include the very large spatial scale of the system, dynamic monsoonal hydrology (small temporal scale fluctuations), complex sedimentary architecture (small spatial scale heterogeneity), and a lack of reliable hydrologic and geologic data. The approach was simple. Detailed inputs were reduced to only those that affect the functioning of the deep flow system. Available data were used to estimate upscaled parameter values. Nested small-scale simulations were performed to determine the effects of the simplifications, which include treatment of the top boundary condition and transience, effects of small-scale heterogeneity, and effects of individual pumping wells. Simulation of arsenic transport at the large scale adds another element of complexity. Minimization of numerical oscillation and mass balance errors required experimentation with solvers and discretization. In the face of relatively few data in a very large-scale model, sensitivity analyses were essential. The scale of the system limits evaluation of localized behavior, but results clearly identified the primary controls on the system and effects of various pumping scenarios and sorptive properties. It was shown that limiting deep pumping to domestic supply may result in sustainable arsenic-safe water for 90% of the arsenic-affected region over a 1000 year timescale, and that sorption of arsenic onto deep, oxidized Pleistocene sediments may increase the breakthrough time in unsustainable zones by more than an order of magnitude. Thus, both hydraulic and chemical defenses indicate the potential for sustainable, managed use of deep, safe groundwater resources in the Bengal Basin.

  13. Molecular dynamics simulations of large macromolecular complexes.

    PubMed

    Perilla, Juan R; Goh, Boon Chong; Cassidy, C Keith; Liu, Bo; Bernardi, Rafael C; Rudack, Till; Yu, Hang; Wu, Zhe; Schulten, Klaus

    2015-04-01

    Connecting dynamics to structural data from diverse experimental sources, molecular dynamics simulations permit the exploration of biological phenomena in unparalleled detail. Advances in simulations are moving the atomic resolution descriptions of biological systems into the million-to-billion atom regime, in which numerous cell functions reside. In this opinion, we review the progress, driven by large-scale molecular dynamics simulations, in the study of viruses, ribosomes, bioenergetic systems, and other diverse applications. These examples highlight the utility of molecular dynamics simulations in the critical task of relating atomic detail to the function of supramolecular complexes, a task that cannot be achieved by smaller-scale simulations or existing experimental approaches alone. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. How Robust Is Your Project? From Local Failures to Global Catastrophes: A Complex Networks Approach to Project Systemic Risk.

    PubMed

    Ellinas, Christos; Allan, Neil; Durugbo, Christopher; Johansson, Anders

    2015-01-01

    Current societal requirements necessitate the effective delivery of complex projects that can do more while using less. Yet, recent large-scale project failures suggest that our ability to successfully deliver them is still in its infancy. Such failures can be seen to arise through various failure mechanisms; this work focuses on one such mechanism. Specifically, it examines the likelihood of a project sustaining a large-scale catastrophe, triggered by a single task failure and delivered via a cascading process. To do so, an analytical model was developed and tested on an empirical dataset by means of numerical simulation. This paper makes three main contributions. First, it provides a methodology to identify the tasks most capable of impacting a project. In doing so, it is noted that a significant number of tasks induce no cascades, while a handful are capable of triggering surprisingly large ones. Secondly, it illustrates that crude task characteristics cannot aid in identifying them, highlighting the complexity of the underlying process and the utility of this approach. Thirdly, it draws parallels with systems encountered within the natural sciences by noting the emergence of self-organised criticality, commonly found within natural systems. These findings strengthen the need to account for the structural intricacies of a project's underlying task precedence structure, as they can provide the conditions upon which large-scale catastrophes materialise.
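
    The cascade mechanism described above can be mimicked in a few lines: seed a single task failure on a task precedence network and let it propagate downstream with some probability. The random DAG, failure probability, and Monte Carlo set-up below are hypothetical, not the paper's analytical model or empirical project data.

```python
# Toy Monte Carlo sketch of single-task-triggered cascades on a made-up task
# precedence DAG; each failed task independently fails its successors with
# probability p.
import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(200, 0.02, seed=0, directed=True)
G = nx.DiGraph((u, v) for u, v in G.edges() if u < v)   # keep only forward edges -> acyclic

def cascade_size(G, seed_task, p=0.4):
    """Number of tasks failed by a cascade started at seed_task."""
    failed, frontier = {seed_task}, [seed_task]
    while frontier:
        task = frontier.pop()
        for succ in G.successors(task):
            if succ not in failed and random.random() < p:
                failed.add(succ)
                frontier.append(succ)
    return len(failed)

sizes = [cascade_size(G, t) for t in G.nodes() for _ in range(20)]
print("max cascade size:", max(sizes))
print("fraction of runs with no spread:", sum(s == 1 for s in sizes) / len(sizes))
```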

  15. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  16. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  17. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows for multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.

  18. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then gives the application of grid computing characteristics in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it gives a distributed intrusion detection system based on the grid security system that can reduce processing delay and assure detection rates.

  19. Intelligent systems engineering methodology

    NASA Technical Reports Server (NTRS)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  1. The problem of ecological scaling in spatially complex, nonequilibrium ecological systems [chapter 3

    Treesearch

    Samuel A. Cushman; Jeremy Littell; Kevin McGarigal

    2010-01-01

    In the previous chapter we reviewed the challenges posed by spatial complexity and temporal disequilibrium to efforts to understand and predict the structure and dynamics of ecological systems. The central theme was that spatial variability in the environment and population processes fundamentally alters the interactions between species and their environments, largely...

  2. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
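
    As an illustrative sketch of the global-versus-local distinction (a toy surrogate model with hypothetical parameter names, not the ENISI code or the authors' experimental design), one can sample the full parameter space at once and rank parameters by standardized regression coefficients instead of perturbing one parameter at a time:

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_abm_outcome(theta):
            """Stand-in for an expensive ABM run: a scalar outcome (e.g., peak
            bacterial load) as a nonlinear function of four parameters."""
            k_infect, k_clear, k_recruit, k_damage = theta
            return k_infect * np.exp(-k_clear) + 0.1 * k_recruit * k_damage

        n_samples, n_params = 500, 4
        X = rng.uniform(0.0, 1.0, size=(n_samples, n_params))   # global design over the unit hypercube
        y = np.array([toy_abm_outcome(row) for row in X])

        # Standardized regression coefficients as a cheap global sensitivity measure.
        Xs = (X - X.mean(axis=0)) / X.std(axis=0)
        ys = (y - y.mean()) / y.std()
        beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
        for name, b in zip(["k_infect", "k_clear", "k_recruit", "k_damage"], beta):
            print(f"{name}: SRC = {b:+.3f}")

    Variance-based (Sobol-type) indices follow the same sampling idea with a more careful experimental design, which is closer in spirit to the method proposed in the article.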

  3. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  4. Toward a theoretical framework for trustworthy cyber sensing

    NASA Astrophysics Data System (ADS)

    Xu, Shouhuai

    2010-04-01

    Cyberspace is an indispensable part of the economy and society, but has been "polluted" with many compromised computers that can be abused to launch further attacks against others. Since it is likely that there will always be compromised computers, it is important to be aware of the (dynamic) cyber security-related situation, which is however challenging because cyberspace is an extremely large-scale complex system. Our project aims to investigate a theoretical framework for trustworthy cyber sensing. With the perspective of treating cyberspace as a large-scale complex system, the core question we aim to address is: What would be a competent theoretical (mathematical and algorithmic) framework for designing, analyzing, deploying, managing, and adapting cyber sensor systems so as to provide trustworthy information or input to the higher layer of cyber situation-awareness management, even in the presence of sophisticated malicious attacks against the cyber sensor systems?

  5. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.

  6. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: Overview and Air-side System Description

    NASA Technical Reports Server (NTRS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina

    2016-01-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.

  7. Large space telescope engineering scale model optical design

    NASA Technical Reports Server (NTRS)

    Facey, T. A.

    1973-01-01

    The objective is to develop the detailed design and tolerance data for the LST engineering scale model optical system. This will enable MSFC to move forward to the optical element procurement phase and also to evaluate tolerances, manufacturing requirements, assembly/checkout procedures, reliability, operational complexity, stability requirements of the structure and thermal system, and the flexibility to change and grow.

  8. Advanced Kalman Filter for Real-Time Responsiveness in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Welch, Gregory Francis; Zhang, Jinghe

    2014-06-10

    Complex engineering systems pose fundamental challenges in real-time operations and control because they are highly dynamic systems consisting of a large number of elements with severe nonlinearities and discontinuities. Today’s tools for real-time complex system operations are mostly based on steady state models, unable to capture the dynamic nature and too slow to prevent system failures. We developed advanced Kalman filtering techniques and the formulation of dynamic state estimation using Kalman filtering techniques to capture complex system dynamics in aiding real-time operations and control. In this work, we looked at complex system issues including severe nonlinearity of system equations, discontinuities caused by system controls and network switches, sparse measurements in space and time, and real-time requirements of power grid operations. We sought to bridge the disciplinary boundaries between Computer Science and Power Systems Engineering, by introducing methods that leverage both existing and new techniques. While our methods were developed in the context of electrical power systems, they should generalize to other large-scale scientific and engineering applications.
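
    As a hedged illustration of the dynamic state estimation idea only (a textbook linear Kalman filter on a two-state toy system, not the advanced nonlinear techniques or power-grid models developed in this work):

        import numpy as np

        # Minimal linear Kalman filter: track position and velocity from noisy position measurements.
        dt = 0.1
        F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
        H = np.array([[1.0, 0.0]])                 # measure position only
        Q = 1e-4 * np.eye(2)                       # process noise covariance
        R = np.array([[0.05]])                     # measurement noise covariance

        x, P = np.zeros(2), np.eye(2)              # state estimate and its covariance
        truth = np.zeros(2)
        rng = np.random.default_rng(1)
        for _ in range(100):
            truth = F @ truth + np.array([0.0, 0.01])            # true system slowly accelerates
            z = H @ truth + rng.normal(0.0, np.sqrt(R[0, 0]))    # noisy measurement
            x, P = F @ x, F @ P @ F.T + Q                        # predict
            S = H @ P @ H.T + R                                  # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)                       # Kalman gain
            x = x + K @ (z - H @ x)                              # update state
            P = (np.eye(2) - K @ H) @ P                          # update covariance
        print("final estimate:", x, "truth:", truth)

    The techniques described in the report extend this basic recursion to severe nonlinearities, discontinuities, and sparse measurements.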

  9. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
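
    The solver comparison can be sketched on a model problem (a 2D five-point Laplacian standing in for a steady-state ground water flow matrix; this sketch uses SciPy's CG and ILU-preconditioned GMRES rather than MODFLOW's PCG2 or the AMG code used in the paper, so it only mirrors the experimental setup):

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        # 2D five-point Laplacian on an n-by-n grid (symmetric positive definite).
        n = 100
        I = sp.identity(n)
        T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
        A = (sp.kron(I, T) + sp.kron(T, I)).tocsr()
        b = np.ones(A.shape[0])

        # Plain conjugate gradients (the SPD case handled by solvers such as PCG2).
        x_cg, info_cg = spla.cg(A, b)

        # ILU-preconditioned GMRES (analogous to the transport comparison in the paper).
        ilu = spla.spilu(A.tocsc(), drop_tol=1e-4)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)
        x_gm, info_gm = spla.gmres(A, b, M=M)

        print("CG converged:", info_cg == 0, " ILU-GMRES converged:", info_gm == 0)

    An algebraic multigrid solver (for example from the pyamg package) could be substituted for either call to reproduce the kind of timing comparison reported here.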

  10. Development and Initial Testing of the Tiltrotor Test Rig

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Sheikman, A. L.

    2018-01-01

    The NASA Tiltrotor Test Rig (TTR) is a new, large-scale proprotor test system, developed jointly with the U.S. Army and Air Force for the National Full-Scale Aerodynamics Complex (NFAC). The TTR is designed to test advanced proprotors up to 26 feet in diameter at speeds up to 300 knots, and even larger rotors at lower airspeeds. This combination of size and speed is unprecedented and is necessary for research into 21st-century tiltrotors and other advanced rotorcraft concepts. The TTR will provide critical data for validation of state-of-the-art design and analysis tools.

  11. Systems metabolic engineering of microorganisms to achieve large-scale production of flavonoid scaffolds.

    PubMed

    Wu, Junjun; Du, Guocheng; Zhou, Jingwen; Chen, Jian

    2014-10-20

    Flavonoids possess pharmaceutical potential due to their health-promoting activities. The complex structures of these products make extraction from plants difficult, and chemical synthesis is limited because of the use of many toxic solvents. Microbial production offers an alternate way to produce these compounds on an industrial scale in a more economical and environment-friendly manner. However, at present microbial production has been achieved only on a laboratory scale and improvements and scale-up of these processes remain challenging. Naringenin and pinocembrin, which are flavonoid scaffolds and precursors for most of the flavonoids, are the model molecules that are key to solving the current issues restricting industrial production of these chemicals. The emergence of systems metabolic engineering, which combines systems biology with synthetic biology and evolutionary engineering at the systems level, offers new perspectives on strain and process optimization. In this review, current challenges in large-scale fermentation processes involving flavonoid scaffolds and the strategies and tools of systems metabolic engineering used to overcome these challenges are summarized. This will offer insights into overcoming the limitations and challenges of large-scale microbial production of these important pharmaceutical compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Biologically-Inspired Concepts for Self-Management of Complexity

    NASA Technical Reports Server (NTRS)

    Sterritt, Roy; Hinchey, G.

    2006-01-01

    Inherent complexity in large-scale applications may be impossible to eliminate or even ameliorate despite a number of promising advances. In such cases, the complexity must be tolerated and managed. Such management may be beyond the abilities of humans, or require such overhead as to make management by humans unrealistic. A number of initiatives inspired by concepts in biology have arisen for self-management of complex systems. We present some ideas and techniques we have been experimenting with, inspired by lesser-known concepts in biology that show promise in protecting complex systems and represent a step towards self-management of complexity.

  13. Big Data Analytics with Datalog Queries on Spark.

    PubMed

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2016-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics.

  14. Big Data Analytics with Datalog Queries on Spark

    PubMed Central

    Shkapsky, Alexander; Yang, Mohan; Interlandi, Matteo; Chiu, Hsuan; Condie, Tyson; Zaniolo, Carlo

    2017-01-01

    There is great interest in exploiting the opportunity provided by cloud computing platforms for large-scale analytics. Among these platforms, Apache Spark is growing in popularity for machine learning and graph analytics. Developing efficient complex analytics in Spark requires deep understanding of both the algorithm at hand and the Spark API or subsystem APIs (e.g., Spark SQL, GraphX). Our BigDatalog system addresses the problem by providing concise declarative specification of complex queries amenable to efficient evaluation. Towards this goal, we propose compilation and optimization techniques that tackle the important problem of efficiently supporting recursion in Spark. We perform an experimental comparison with other state-of-the-art large-scale Datalog systems and verify the efficacy of our techniques and effectiveness of Spark in supporting Datalog-based analytics. PMID:28626296

  15. Scaling laws of strategic behavior and size heterogeneity in agent dynamics

    NASA Astrophysics Data System (ADS)

    Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.

    2008-03-01

    We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preference and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.

  16. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent, multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming through the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
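
    A minimal sketch of the view-dependent level-of-detail selection such a viewer relies on (hypothetical thresholds and field of view, not the authors' renderer): pick the coarsest level whose projected screen-space error stays below a pixel tolerance.

        import math

        def select_lod(distance_m, geometric_errors_m, fov_deg=60.0,
                       viewport_px=1080, max_error_px=1.0):
            """Return the index of the coarsest acceptable LOD.

            geometric_errors_m: per-level object-space error, ordered coarse -> fine.
            """
            px_per_m_at_unit_dist = viewport_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))
            for level, err in enumerate(geometric_errors_m):
                projected_px = err * px_per_m_at_unit_dist / max(distance_m, 1e-6)
                if projected_px <= max_error_px:
                    return level                       # coarsest level that looks good enough
            return len(geometric_errors_m) - 1         # otherwise fall back to the finest level

        # Four LODs with decreasing geometric error, viewer 50 m from the model.
        print(select_lod(50.0, [2.0, 0.5, 0.1, 0.02]))

    In an out-of-core renderer, only the chunks required by the selected levels are paged from disk into RAM, which is what keeps the memory footprint of a multi-million-point model within a few gigabytes.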

  17. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon to be exascale systems.

  18. The statistical power to detect cross-scale interactions at macroscales

    USGS Publications Warehouse

    Wagner, Tyler; Fergus, C. Emi; Stow, Craig A.; Cheruvelil, Kendra S.; Soranno, Patricia A.

    2016-01-01

    Macroscale studies of ecological phenomena are increasingly common because stressors such as climate and land-use change operate at large spatial and temporal scales. Cross-scale interactions (CSIs), where ecological processes operating at one spatial or temporal scale interact with processes operating at another scale, have been documented in a variety of ecosystems and contribute to complex system dynamics. However, studies investigating CSIs are often dependent on compiling multiple data sets from different sources to create multithematic, multiscaled data sets, which results in structurally complex, and sometimes incomplete, data sets. The statistical power to detect CSIs needs to be evaluated because of their importance and the challenge of quantifying CSIs using data sets with complex structures and missing observations. We studied this problem using a spatially hierarchical model that measures CSIs between regional agriculture and its effects on the relationship between lake nutrients and lake productivity. We used an existing large multithematic, multiscaled database, the LAke multi-scaled GeOSpatial and temporal database (LAGOS), to parameterize the power analysis simulations. We found that the power to detect CSIs was more strongly related to the number of regions in the study than to the number of lakes nested within each region. CSI power analyses will not only help ecologists design large-scale studies aimed at detecting CSIs, but will also focus attention on CSI effect sizes and the degree to which they are ecologically relevant and detectable with large data sets.
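
    A simulation-based power calculation of this kind can be sketched as follows (a deliberately simplified ordinary-least-squares version with made-up effect sizes, not the spatially hierarchical model or the LAGOS data used in the study):

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(42)

        def one_replicate(n_regions=30, lakes_per_region=20, csi_effect=0.15):
            """Simulate lakes nested in regions and test the cross-scale interaction term."""
            region_ag = rng.uniform(0, 1, n_regions)            # regional agriculture (macroscale driver)
            rows = []
            for r in range(n_regions):
                nutrients = rng.normal(0, 1, lakes_per_region)  # lake-scale predictor
                slope = 0.5 + csi_effect * region_ag[r]         # CSI: regional driver modifies the slope
                productivity = slope * nutrients + rng.normal(0, 1, lakes_per_region)
                for nut, prod in zip(nutrients, productivity):
                    rows.append((nut, region_ag[r], nut * region_ag[r], prod))
            data = np.array(rows)
            X = sm.add_constant(data[:, :3])                    # nutrients, agriculture, interaction
            fit = sm.OLS(data[:, 3], X).fit()
            return fit.pvalues[3] < 0.05                        # was the interaction detected?

        power = np.mean([one_replicate() for _ in range(200)])
        print(f"estimated power to detect the CSI: {power:.2f}")

    Sweeping n_regions and lakes_per_region in such a simulation is the basic mechanism behind the power analysis reported in the study, which additionally accounts for the hierarchical error structure and missing observations.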

  19. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  20. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale area affected by a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered over several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
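
    The genetic-algorithm step can be illustrated with a stripped-down location-allocation sketch (random coordinates, a demand-weighted distance surrogate for queuing response time, and hypothetical sizes; not the Shanghai case study or the authors' objective function):

        import numpy as np

        rng = np.random.default_rng(7)

        n_sites, n_demands, n_open = 12, 30, 4              # candidate depots, demand points, depots to open
        site_xy = rng.uniform(0, 100, (n_sites, 2))
        demand_xy = rng.uniform(0, 100, (n_demands, 2))
        weight = rng.uniform(1, 5, n_demands)               # relative rescue demand at each location
        dist = np.linalg.norm(demand_xy[:, None, :] - site_xy[None, :, :], axis=2)

        def objective(open_mask):
            """Surrogate for response time: demand-weighted distance to the nearest open depot."""
            return float(np.sum(weight * dist[:, open_mask].min(axis=1)))

        def random_individual():
            mask = np.zeros(n_sites, dtype=bool)
            mask[rng.choice(n_sites, n_open, replace=False)] = True
            return mask

        def crossover(a, b):
            pool = np.union1d(np.where(a)[0], np.where(b)[0])   # inherit open sites from both parents
            mask = np.zeros(n_sites, dtype=bool)
            mask[rng.choice(pool, n_open, replace=False)] = True
            return mask

        population = [random_individual() for _ in range(40)]
        for _ in range(100):                                    # generations
            population.sort(key=objective)
            parents = population[:20]
            children = [crossover(parents[rng.integers(20)], parents[rng.integers(20)]) for _ in range(20)]
            population = parents + children
        print("best surrogate response time:", objective(min(population, key=objective)))

    In the full model the objective is evaluated on the queuing network (service rates, vehicle routes, multiple echelons) rather than on raw distances.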

  1. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale area affected by a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered over several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  2. Best geoscience approach to complex systems in environment

    NASA Astrophysics Data System (ADS)

    Mezemate, Yacine; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2017-04-01

    The environment is a social issue that continues to grow in importance. Its complexity, both cross-disciplinary and multi-scale, has given rise to a large number of scientific and technological bottlenecks that complex systems approaches can address. Significant challenges must be met to achieve an understanding of complex environmental systems. Their study should proceed in several steps in which the use of data and models is crucial: exploration, observation and basic data acquisition; identification of correlations, patterns, and mechanisms; modelling; model validation, implementation and prediction; and construction of a theory. Since e-learning has become a powerful tool for sharing knowledge and best practice, we use it to teach environmental complexity and systems. In this presentation we promote an e-learning course intended for a broad audience (undergraduates, graduates, PhD students and young scientists) that gathers and organizes different pedagogical materials on complex systems and environmental studies. This course describes complex processes using numerous illustrations, examples and tests that make the learning process easy and enjoyable. For the sake of simplicity, the course is divided into modules, and at the end of each module a set of exercises and program code is provided for practice. The graphical user interface (GUI), constructed using the open-source Opale Scenari software, offers simple navigation through the different modules. The course treats the complex systems that can be found in the environment and their observables; we particularly highlight the extreme variability of these observables over a wide range of scales. Using the multifractal formalism in different applications (turbulence, precipitation, hydrology), we demonstrate how such extreme variability of geophysical/biological fields can be handled when solving everyday (geo-)environmental challenges.

  3. A study of the spreading scheme for viral marketing based on a complex network model

    NASA Astrophysics Data System (ADS)

    Yang, Jianmei; Yao, Canzhong; Ma, Weicheng; Chen, Guanrong

    2010-02-01

    Buzzword-based viral marketing, also known as digital word-of-mouth marketing, is a marketing mode attached to carriers on the Internet, which can rapidly copy marketing information at a low cost. Viral marketing uses a pre-existing social network; however, such networks are generally so large and so seemingly random that their theoretical analysis is intractable and unmanageable. There are very few reports in the literature on how to design a spreading scheme for viral marketing on real social networks according to traditional marketing theory or the relatively new network marketing theory. Complex network theory provides a new model for the study of large-scale complex systems, using the latest developments in graph theory and computing techniques. From this perspective, the present paper extends complex network theory and modeling to the study of general viral marketing and develops a specific spreading scheme for viral marketing, together with an approach to designing the scheme, based on a real complex network from the QQ instant messaging system. This approach is shown to be rather universal and can be further extended to the design of various spreading schemes for viral marketing based on different instant messaging systems.
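
    The spreading mechanism being designed can be sketched with an independent-cascade simulation on a scale-free graph (a Barabási-Albert stand-in for the messaging network, with made-up parameters; not the paper's QQ data or its specific scheme):

        import random
        import networkx as nx

        random.seed(3)
        G = nx.barabasi_albert_graph(n=5000, m=3, seed=3)     # scale-free stand-in for the contact network

        def independent_cascade(graph, seeds, p=0.05):
            """Each newly activated user gets one chance to activate each neighbor with probability p."""
            active, frontier = set(seeds), list(seeds)
            while frontier:
                nxt = []
                for node in frontier:
                    for nbr in graph.neighbors(node):
                        if nbr not in active and random.random() < p:
                            active.add(nbr)
                            nxt.append(nbr)
                frontier = nxt
            return active

        # Seeding the highest-degree users (hubs) versus seeding at random.
        hubs = [n for n, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:10]]
        print("hub seeding reaches:   ", len(independent_cascade(G, hubs)))
        print("random seeding reaches:", len(independent_cascade(G, random.sample(list(G.nodes), 10))))

    Designing a spreading scheme then amounts to choosing seeds, incentives and message carriers so that cascades of this kind cover the target audience at minimal cost.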

  4. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    PubMed Central

    Jensen, Tue V.; Pinson, Pierre

    2017-01-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation. PMID:29182600

  5. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system.

    PubMed

    Jensen, Tue V; Pinson, Pierre

    2017-11-28

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  6. RE-Europe, a large-scale dataset for modeling a highly renewable European electricity system

    NASA Astrophysics Data System (ADS)

    Jensen, Tue V.; Pinson, Pierre

    2017-11-01

    Future highly renewable energy systems will couple to complex weather and climate dynamics. This coupling is generally not captured in detail by the open models developed in the power and energy system communities, where such open models exist. To enable modeling such a future energy system, we describe a dedicated large-scale dataset for a renewable electric power system. The dataset combines a transmission network model, as well as information for generation and demand. Generation includes conventional generators with their technical and economic characteristics, as well as weather-driven forecasts and corresponding realizations for renewable energy generation for a period of 3 years. These may be scaled according to the envisioned degrees of renewable penetration in a future European energy system. The spatial coverage, completeness and resolution of this dataset open the door to the evaluation, scaling analysis and replicability check of a wealth of proposals in, e.g., market design, network actor coordination and forecasting of renewable power generation.

  7. Criticality as a Set-Point for Adaptive Behavior in Neuromorphic Hardware

    PubMed Central

    Srinivasa, Narayan; Stepp, Nigel D.; Cruz-Albrecht, Jose

    2015-01-01

    Neuromorphic hardware is designed by drawing inspiration from biology to overcome limitations of current computer architectures while forging the development of a new class of autonomous systems that can exhibit adaptive behaviors. Several designs in the recent past are capable of emulating large scale networks but avoid complexity in network dynamics by minimizing the number of dynamic variables that are supported and tunable in hardware. We believe that this is due to the lack of a clear understanding of how to design self-tuning complex systems. It has been widely demonstrated that criticality appears to be the default state of the brain and manifests in the form of spontaneous scale-invariant cascades of neural activity. Experiment, theory and recent models have shown that neuronal networks at criticality demonstrate optimal information transfer, learning and information processing capabilities that affect behavior. In this perspective article, we argue that understanding how large scale neuromorphic electronics can be designed to enable emergent adaptive behavior will require an understanding of how networks emulated by such hardware can self-tune local parameters to maintain criticality as a set-point. We believe that such capability will enable the design of truly scalable intelligent systems using neuromorphic hardware that embrace complexity in network dynamics rather than avoiding it. PMID:26648839
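
    The set-point idea can be caricatured with a toy branching process whose gain is homeostatically adjusted toward a branching ratio of one (all parameters are invented for illustration; this is not a model of the hardware described in the article):

        import numpy as np

        rng = np.random.default_rng(0)

        n_units, p_base = 1000, 0.001              # units and baseline connection probability
        gain, target_sigma, eta = 0.5, 1.0, 0.01   # tunable gain, critical set-point, adaptation rate

        activity = np.zeros(n_units, dtype=bool)
        sigma_trace = []
        for t in range(5000):
            drive = rng.random(n_units) < 0.002                        # sparse external drive
            p_spike = 1.0 - (1.0 - gain * p_base) ** activity.sum()    # prob. of >= 1 active input
            new_activity = drive | (rng.random(n_units) < p_spike)
            if activity.sum() > 0:
                sigma_hat = new_activity.sum() / activity.sum()        # crude branching-ratio estimate
                gain = max(gain + eta * (target_sigma - sigma_hat), 0.0)  # local homeostatic rule
                sigma_trace.append(sigma_hat)
            activity = new_activity

        print("mean branching ratio over the last 1000 steps:", np.mean(sigma_trace[-1000:]))

    The argument of the article is that a local, self-tuning rule of this kind, implemented in neuromorphic circuits, would let large networks hold themselves at criticality without global supervision.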

  8. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and Noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation for fitting models in a large number of factors. Noise modeling techniques are compared and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. This method as well as the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial, regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified.
The method and solutions identified are verified by comparison with the AE3007 engine. The solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
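
    The response-surface step can be sketched with a two-factor quadratic surrogate (a made-up subsystem function and factor ranges; not the AE3007 cycle or compressor models used in the case study):

        import itertools
        import numpy as np

        rng = np.random.default_rng(5)

        def subsystem_model(x):
            """Stand-in for an expensive subsystem analysis (e.g., a compressor efficiency code)."""
            bypass, pressure_ratio = x
            return (0.9 - 0.02 * (bypass - 5.0) ** 2 - 0.01 * (pressure_ratio - 30.0) ** 2
                    + 0.001 * bypass * pressure_ratio + rng.normal(0, 1e-3))

        # Three-level full factorial experiment over the two factors.
        design = np.array(list(itertools.product([4.0, 5.0, 6.0], [25.0, 30.0, 35.0])))
        responses = np.array([subsystem_model(x) for x in design])

        def basis(X):
            """Quadratic response-surface basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])

        coef, *_ = np.linalg.lstsq(basis(design), responses, rcond=None)

        # The fitted surface is then queried cheaply inside the system-level trade study.
        grid = np.array(list(itertools.product(np.linspace(4, 6, 21), np.linspace(25, 35, 21))))
        best = grid[np.argmax(basis(grid) @ coef)]
        print("surrogate optimum near bypass = %.2f, pressure ratio = %.1f" % (best[0], best[1]))

    In the dissertation the same idea is applied per subproblem, with modified composite designs and partitioned response surfaces to keep the number of expensive runs manageable when many factors are involved.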

  9. Methods of information geometry in computational system biology (consistency between chemical and biological evolution).

    PubMed

    Astakhov, Vadim

    2009-01-01

    Interest in simulation of large-scale metabolic networks, species development, and genesis of various diseases requires new simulation techniques to accommodate the high complexity of realistic biological networks. Information geometry and topological formalisms are proposed to analyze information processes. We analyze the complexity of large-scale biological networks as well as transitions of system functionality due to modifications in the system architecture, system environment, and system components. The dynamic core model is developed. The term dynamic core is used to define a set of causally related network functions. Delocalization of the dynamic core model provides a mathematical formalism to analyze migration of specific functions in biosystems which undergo structure transition induced by the environment. The term delocalization is used to describe these processes of migration. We constructed a holographic model with self-poetic dynamic cores which preserves functional properties under those transitions. Topological constraints such as Ricci flow and Pfaff dimension were found for statistical manifolds which represent biological networks. These constraints can provide insight into processes of degeneration and recovery which take place in large-scale networks. We would like to suggest that therapies that are able to effectively implement the estimated constraints will successfully adjust biological systems and recover altered functionality. Also, we mathematically formulate the hypothesis that there is a direct consistency between biological and chemical evolution. Any set of causal relations within a biological network has its dual reimplementation in the chemistry of the system environment.

  10. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provides seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of large-scale computing nodes into a distributed environment, remote access to high data rate instruments, and an exploratory grid environment.

  11. Synchronization and Causality Across Time-scales: Complex Dynamics and Extremes in El Niño/Southern Oscillation

    NASA Astrophysics Data System (ADS)

    Jajcay, N.; Kravtsov, S.; Tsonis, A.; Palus, M.

    2017-12-01

    A better understanding of dynamics in complex systems, such as the Earth's climate is one of the key challenges for contemporary science and society. A large amount of experimental data requires new mathematical and computational approaches. Natural complex systems vary on many temporal and spatial scales, often exhibiting recurring patterns and quasi-oscillatory phenomena. The statistical inference of causal interactions and synchronization between dynamical phenomena evolving on different temporal scales is of vital importance for better understanding of underlying mechanisms and a key for modeling and prediction of such systems. This study introduces and applies information theory diagnostics to phase and amplitude time series of different wavelet components of the observed data that characterizes El Niño. A suite of significant interactions between processes operating on different time scales was detected, and intermittent synchronization among different time scales has been associated with the extreme El Niño events. The mechanisms of these nonlinear interactions were further studied in conceptual low-order and state-of-the-art dynamical, as well as statistical climate models. Observed and simulated interactions exhibit substantial discrepancies, whose understanding may be the key to an improved prediction. Moreover, the statistical framework which we apply here is suitable for direct usage of inferring cross-scale interactions in nonlinear time series from complex systems such as the terrestrial magnetosphere, solar-terrestrial interactions, seismic activity or even human brain dynamics.

  12. Modeling Costal Zone Responses to Sea-Level Rise Using MoCCS: A Model of Complex Coastal System

    NASA Astrophysics Data System (ADS)

    Dai, H.; Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Kish, S.

    2011-12-01

    Large-scale coastal systems consisting of several morphological components (e.g. beach, surf zone, dune, inlet, shoreface, and estuary) can be expected to exhibit complex and interacting responses to changes in the rate of sea level rise and storm climate. We have developed a numerical model of complex coastal systems (MoCCS), derived from earlier morphodynamic models, to represent the large-scale time-averaged physical processes that shape each component and govern the component interactions. These control the ongoing evolution of the barrier islands, beach and dune erosion, shoal formation and sand withdrawal at tidal inlets, depth changes in the bay, and changes in storm flooding. The model has been used to study the response of an idealized coastal system with physical characteristics and storm climatology similar to Santa Rosa Island on the Florida Panhandle coast. Five SLR scenarios have been used, covering the range of recently published projections for the next century. Each scenario has been run first with a constant and then with a time-varying storm climate. The results indicate that substantial increases in the rate of beach erosion are largely due to increased sand transfer to inlet shoals with increased rates of sea level rise. The barrier island undergoes cycles of dune destruction and regrowth, leading to sand deposition. This largely maintains island freeboard but is progressively less effective in offsetting bayside inundation and marsh habitat loss at accelerated sea level rise rates.

  13. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth observing missions which employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods but each approach had its shortcomings as will be detailed in this paper.

  14. Etoile Project : Social Intelligent ICT-System for very large scale education in complex systems

    NASA Astrophysics Data System (ADS)

    Bourgine, P.; Johnson, J.

    2009-04-01

    The project will devise new theory and implement new ICT-based methods of delivering high-quality low-cost postgraduate education to many thousands of people in a scalable way, with the cost of each extra student being negligible (< a few Euros). The research will create an in vivo laboratory of one to ten thousand postgraduate students studying courses in complex systems. This community is chosen because it is large and interdisciplinary and there is a known requirement for courses for thousands of students across Europe. The project involves every aspect of course production and delivery. Within this, the research focuses on the creation of a Socially Intelligent Resource Mining system to gather large volumes of high quality educational resources from the internet; new methods to deconstruct these to produce a semantically tagged Learning Object Database; a Living Course Ecology to support the creation and maintenance of evolving course materials; systems to deliver courses; and a 'socially intelligent assessment system'. The system will be tested on one to ten thousand postgraduate students in Europe working towards the Complex System Society's title of European PhD in Complex Systems. Étoile will have a very high impact both scientifically and socially by (i) the provision of new scalable ICT-based methods for providing very low cost scientific education, (ii) the creation of new mathematical and statistical theory for the multiscale dynamics of complex systems, (iii) the provision of a working example of adaptation and emergence in complex socio-technical systems, and (iv) making a major educational contribution to European complex systems science and its applications.

  15. Project BALLOTS: Bibliographic Automation of Large Library Operations Using a Time-Sharing System. Progress Report (3/27/69 - 6/26/69).

    ERIC Educational Resources Information Center

    Veaner, Allen B.

    Project BALLOTS is a large-scale library automation development project of the Stanford University Libraries which has demonstrated the feasibility of conducting on-line interactive searches of complex bibliographic files, with a large number of users working simultaneously in the same or different files. This report documents the continuing…

  16. Systematic methods for defining coarse-grained maps in large biomolecules.

    PubMed

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
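
    The essential-dynamics ingredient can be sketched with a plain PCA of a coordinate ensemble (a synthetic trajectory with two planted collective motions, not the ED-CG code or a real biomolecule):

        import numpy as np

        rng = np.random.default_rng(11)

        # Toy "trajectory": 200 frames of a 50-atom structure (150 Cartesian coordinates)
        # with two dominant collective motions buried in thermal noise.
        n_frames, n_atoms = 200, 50
        modes = rng.normal(size=(2, 3 * n_atoms))
        amplitudes = rng.normal(size=(n_frames, 2)) * np.array([5.0, 2.0])
        traj = amplitudes @ modes + 0.1 * rng.normal(size=(n_frames, 3 * n_atoms))

        # Principal component analysis of the centered ensemble.
        centered = traj - traj.mean(axis=0)
        U, S, Vt = np.linalg.svd(centered, full_matrices=False)
        variance = S ** 2 / (n_frames - 1)
        fraction = variance / variance.sum()
        print("variance captured by the first two essential modes: %.1f%%" % (100 * fraction[:2].sum()))

        # Projection onto the essential subspace: the large-scale motion a CG map should preserve.
        essential_coords = centered @ Vt[:2].T        # shape (n_frames, 2)

    The ED-CG idea is then to choose the CG sites so that this essential subspace, obtained from PCA of an ensemble or from an elastic network model, is distorted as little as possible.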

  17. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
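
    The low-wave-vector diagnostic can be sketched for a point configuration as follows (a Poisson configuration is used purely as a stand-in; a jammed packing would be analyzed the same way to test for hyperuniformity, i.e. S(k) -> 0 as k -> 0):

        import numpy as np

        rng = np.random.default_rng(2)

        # Random (Poisson) points in a periodic cube.
        N, L = 4000, 10.0
        pos = rng.uniform(0.0, L, size=(N, 3))

        # Wave vectors compatible with the periodic box: k = 2*pi*n/L, n integer.
        ns = np.array([[i, j, k] for i in range(4) for j in range(4) for k in range(4)
                       if (i, j, k) != (0, 0, 0)])
        kvecs = 2.0 * np.pi * ns / L

        # Structure factor S(k) = |sum_j exp(-i k . r_j)|^2 / N.
        phases = np.exp(-1j * pos @ kvecs.T)          # shape (N, number of wave vectors)
        S = np.abs(phases.sum(axis=0)) ** 2 / N
        kmag = np.linalg.norm(kvecs, axis=1)
        for k in sorted(set(np.round(kmag, 6)))[:3]:
            print("|k| = %.3f : S(k) ~ %.2f" % (k, S[np.isclose(kmag, k)].mean()))

    For Poisson points S(k) stays near 1 at all k; the paper's finding is that jammed packings likewise do not show the vanishing low-k limit expected of hyperuniform systems.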

  18. EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.

    PubMed

    Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D

    2012-01-01

    Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (e.g., sequence conservation, orthology, synteny, …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.

  19. Numerical Technology for Large-Scale Computational Electromagnetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharpe, R; Champagne, N; White, D

    The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.

  20. Autonomous Energy Grids | Grid Modernization | NREL

    Science.gov Websites

    … control themselves using advanced machine learning and simulation to create resilient, reliable, and affordable optimized energy systems. … Current frameworks to monitor, control, and optimize large-scale energy … of optimization theory, control theory, big data analytics, and complex system theory and modeling to …

  1. Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding

    PubMed Central

    Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro

    2015-01-01

    Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
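
    For readers unfamiliar with the embedding idea, the following minimal Python sketch shows delay-embedding-based cross prediction on toy coupled logistic maps standing in for two recorded brain areas; it illustrates the spirit of the approach, not the paper's actual cross-embedding measure, and all signals and parameters are invented.

        import numpy as np

        # Toy unidirectionally coupled logistic maps: x drives y.
        T, dim, tau = 1500, 3, 1
        x = np.empty(T); y = np.empty(T); x[0], y[0] = 0.4, 0.2
        for t in range(T - 1):
            x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
            y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

        def embed(s, dim, tau):
            n = len(s) - (dim - 1) * tau
            return np.column_stack([s[i * tau:i * tau + n] for i in range(dim)])

        # Reconstruct the driver x from the delay embedding of the driven signal y.
        Ey = embed(y, dim, tau)
        x_target = x[(dim - 1) * tau:]
        dist = np.linalg.norm(Ey[:, None, :] - Ey[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)                 # leave-one-out nearest neighbour
        x_pred = x_target[np.argmin(dist, axis=1)]
        print("cross-embedding skill (correlation):",
              np.corrcoef(x_pred, x_target)[0, 1].round(3))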

  2. A numerical projection technique for large-scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Gamillscheg, Ralf; Haase, Gundolf; von der Linden, Wolfgang

    2011-10-01

    We present a new numerical technique to solve large-scale eigenvalue problems. It is based on the projection technique used in strongly correlated quantum many-body systems, where first an effective approximate model of smaller complexity is constructed by projecting out high-energy degrees of freedom, and the resulting model is then solved by a standard eigenvalue solver. Here we introduce a generalization of this idea, in which both steps are performed numerically and which, in contrast to the standard projection technique, converges in principle to the exact eigenvalues. This approach is applicable not only to eigenvalue problems encountered in many-body systems but also to other areas of research that give rise to large-scale eigenvalue problems for matrices which have, roughly speaking, a pronounced dominant diagonal part. We present detailed studies of the approach guided by two many-body models.
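
    The basic projection step can be illustrated with a toy numerical example (a sketch of the standard, approximate projection only; the generalization described in the abstract iterates numerically toward the exact eigenvalues). The matrix, coupling strength, and subspace size below are arbitrary.

        import numpy as np

        # Toy Hermitian matrix with a pronounced dominant diagonal; the low-energy
        # subspace is approximately spanned by the basis states with the smallest
        # diagonal entries.
        rng = np.random.default_rng(1)
        n, m = 1500, 100
        diag = np.sort(rng.uniform(0.0, 100.0, n))
        offd = 0.05 * rng.standard_normal((n, n))
        H = np.diag(diag) + (offd + offd.T) / 2

        P = np.eye(n)[:, np.argsort(diag)[:m]]         # selection of the m lowest states
        H_eff = P.T @ H @ P                            # projected effective model

        print("projected:", np.round(np.linalg.eigvalsh(H_eff)[:5], 3))
        print("exact    :", np.round(np.linalg.eigvalsh(H)[:5], 3))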

  3. 'Fracking', Induced Seismicity and the Critical Earth

    NASA Astrophysics Data System (ADS)

    Leary, P.; Malin, P. E.

    2012-12-01

    Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is, the larger the fracture system the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.

  4. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials excited states properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  5. A Principled Approach to the Specification of System Architectures for Space Missions

    NASA Technical Reports Server (NTRS)

    McKelvin, Mark L. Jr.; Castillo, Robert; Bonanne, Kevin; Bonnici, Michael; Cox, Brian; Gibson, Corrina; Leon, Juan P.; Gomez-Mustafa, Jose; Jimenez, Alejandro; Madni, Azad

    2015-01-01

    Modern space systems are increasing in complexity and scale at an unprecedented pace. Consequently, innovative methods, processes, and tools are needed to cope with the increasing complexity of architecting these systems. A key systems challenge in practice is the ability to scale processes, methods, and tools used to architect complex space systems. Traditionally, the process for specifying space system architectures has largely relied on capturing the system architecture in informal descriptions that are often embedded within loosely coupled design documents and domain expertise. Such informal descriptions often lead to misunderstandings between design teams, ambiguous specifications, difficulty in maintaining consistency as the architecture evolves throughout the system development life cycle, and costly design iterations. Therefore, traditional methods are becoming increasingly inefficient for coping with ever-increasing system complexity. We apply the principles of component-based design and platform-based design to the development of the system architecture for a practical space system to demonstrate the feasibility of our approach using SysML. Our results show that we are able to apply a systematic design method to manage system complexity, thus enabling effective data management, semantic coherence, and traceability across different levels of abstraction in the design chain. Just as important, our approach enables interoperability among heterogeneous tools in a concurrent engineering, model-based design environment.

  6. Conducting Automated Test Assembly Using the Premium Solver Platform Version 7.0 with Microsoft Excel and the Large-Scale LP/QP Solver Engine Add-In

    ERIC Educational Resources Information Center

    Cor, Ken; Alves, Cecilia; Gierl, Mark J.

    2008-01-01

    This review describes and evaluates a software add-in created by Frontline Systems, Inc., that can be used with Microsoft Excel 2007 to solve large, complex test assembly problems. The combination of Microsoft Excel 2007 with the Frontline Systems Premium Solver Platform is significant because Microsoft Excel is the most commonly used spreadsheet…

  7. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. Measuring geometric parameters such as size, attitude, and position therefore requires a measurement system that is highly precise, multi-functional, and portable. However, existing measuring instruments such as laser trackers, total stations, and photogrammetry systems mostly suffer from single functionality, the need to relocate stations, and other shortcomings. A laser tracker must work with a cooperative target and can hardly meet measurement requirements in extreme environments. A total station is mainly intended for outdoor surveying and mapping and has difficulty meeting the accuracy demands of industrial measurement. A photogrammetry system can perform wide-ranging multi-point measurement, but its range is limited and its station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can both scan a measurement path and track a cooperative target. The system is based on several key technologies: absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems, and multi-functional 3D visualization software. Among these, the absolute distance measurement module ensures high-accuracy ranging, and the two-dimensional angle measurement module provides precise angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of such equipment throughout the manufacturing process and improve the manufacturing capability for large-scale, high-end equipment.

  8. Overview of Accelerator Applications in Energy

    NASA Astrophysics Data System (ADS)

    Garnett, Robert W.; Sheffield, Richard L.

    An overview of the application of accelerators and accelerator technology in energy is presented. Applications span a broad range of cost, size, and complexity and include large-scale systems requiring high-power or high-energy accelerators to drive subcritical reactors for energy production or waste transmutation, as well as small-scale industrial systems used to improve oil and gas exploration and production. The enabling accelerator technologies will also be reviewed and future directions discussed.

  9. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., Fisher's exact test) have been used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after univariate tests. Model-based methods are also impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of disease. The more recent random forest method provides a more robust tool for screening SNPs at the scale of thousands. For still larger datasets, such as Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome, large-scale association analysis in the presence of genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analyses of complex traits with genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for subsequent, more complex computational modeling.
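
    A minimal sketch of the general idea, using scikit-learn's gradient boosting rather than the authors' implementation: shallow boosted trees are fit to simulated genotype data containing two heterogeneous subgroups, and feature importances serve as a fast screen. All data, SNP indices, and parameters below are synthetic.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier

        # Synthetic genotype matrix: rows = individuals, columns = SNPs coded 0/1/2.
        rng = np.random.default_rng(2)
        n_samples, n_snps = 500, 2000
        X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)

        # Heterogeneous trait: two disjoint subgroups driven by different (known) SNPs.
        y = np.zeros(n_samples, dtype=int)
        y[:150] = (X[:150, 10] + X[:150, 42] > 2).astype(int)
        y[150:300] = (X[150:300, 777] > 1).astype(int)

        # Shallow boosted trees; feature importances are the screening statistic.
        clf = GradientBoostingClassifier(n_estimators=100, max_depth=2,
                                         max_features="sqrt", random_state=0)
        clf.fit(X, y)
        top = np.argsort(clf.feature_importances_)[::-1][:20]
        print("top candidate SNPs:", top)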

  10. Simulations of Sea Level Rise Effects on Complex Coastal Systems

    NASA Astrophysics Data System (ADS)

    Niedoroda, A. W.; Ye, M.; Saha, B.; Donoghue, J. F.; Reed, C. W.

    2009-12-01

    It is now established that complex coastal systems with elements such as beaches, inlets, bays, and rivers adjust their morphologies according to time-varying balances between the processes that control the exchange of sediment. Accelerated sea level rise introduces a major perturbation into these sediment-sharing systems. A modeling framework based on a new SL-PR model, an advanced version of the aggregate-scale CST Model, together with the event-scale CMS-2D and CMS-Wave combination, has been used to simulate the recent evolution of a portion of the Florida panhandle coast. This combination of models provides a method to evaluate coefficients in the aggregate-scale model that were previously treated as fitted parameters. That is, by carrying out simulations of a complex coastal system with runs of the event-scale model spanning more than a year, it is now possible to directly relate the coefficients in the large-scale SL-PR model to measurable physical parameters of the current and wave fields. This cross-scale modeling procedure has been used to simulate shoreline evolution at Santa Rosa Island, a long barrier island on the northern Gulf Coast that houses significant military infrastructure. The model has been used to simulate 137 years of measured shoreline change and to extend these results to predictions of future rates of shoreline migration.

  11. An intermediate level of abstraction for computational systems chemistry.

    PubMed

    Andersen, Jakob L; Flamm, Christoph; Merkle, Daniel; Stadler, Peter F

    2017-12-28

    Computational techniques are required for narrowing down the vast space of possibilities to plausible prebiotic scenarios, because precise information on the molecular composition, the dominant reaction chemistry and the conditions for that era is scarce. The exploration of large chemical reaction networks is a central aspect in this endeavour. While quantum chemical methods can accurately predict the structures and reactivities of small molecules, they are not efficient enough to cope with large-scale reaction systems. The formalization of chemical reactions as graph grammars provides a generative system, well grounded in category theory, at the right level of abstraction for the analysis of large and complex reaction networks. An extension of the basic formalism into the realm of integer hyperflows allows for the identification of complex reaction patterns, such as autocatalysis, in large reaction networks using optimization techniques. This article is part of the themed issue 'Reconceptualizing the origins of life'. © 2017 The Author(s).

  12. The Internet As a Large-Scale Complex System

    NASA Astrophysics Data System (ADS)

    Park, Kihong; Willinger, Walter

    2005-06-01

    The Internet may be viewed as a "complex system" with diverse features and many components that can give rise to unexpected emergent phenomena, revealing much about its own engineering. This book brings together chapter contributions from a workshop held at the Santa Fe Institute in March 2001. This volume captures a snapshot of some features of the Internet that may be fruitfully approached using a complex systems perspective, meaning using interdisciplinary tools and methods to tackle the subject area. The Internet penetrates the socioeconomic fabric of everyday life; a broader and deeper grasp of the Internet may be needed to meet the challenges facing the future. The resulting empirical data have already proven to be invaluable for gaining novel insights into the network's spatio-temporal dynamics, and can be expected to become even more important when trying to explain the Internet's complex and emergent behavior in terms of elementary networking-based mechanisms. The discoveries of fractal or self-similar network traffic traces, power-law behavior in network topology and World Wide Web connectivity are instances of unsuspected, emergent system traits. Another important factor at the heart of fair, efficient, and stable sharing of network resources is user behavior. Network systems, when inhabited by selfish or greedy users, take on the traits of a noncooperative multi-party game, and their stability and efficiency are integral to understanding the overall system and its dynamics. Lastly, fault-tolerance and robustness of large-scale network systems can exhibit spatial and temporal correlations whose effective analysis and management may benefit from rescaling techniques applied in certain physical and biological systems. The present book will bring together several of the leading workers involved in the analysis of complex systems with the future development of the Internet.

  13. "Structure and dynamics in complex chemical systems: Gaining new insights through recent advances in time-resolved spectroscopies.” ACS Division of Physical Chemistry Symposium presented at the Fall National ACS Meeting in Boston, MA, August 2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crawford, Daniel

    8-Session Symposium on STRUCTURE AND DYNAMICS IN COMPLEX CHEMICAL SYSTEMS: GAINING NEW INSIGHTS THROUGH RECENT ADVANCES IN TIME-RESOLVED SPECTROSCOPIES. The intricacy of most chemical, biochemical, and material processes and their applications is underscored by the complex nature of the environments in which they occur. Substantial challenges for building a global understanding of a heterogeneous system include (1) identifying unique signatures associated with specific structural motifs within the heterogeneous distribution, and (2) resolving the significance of each of multiple time scales involved in both small- and large-scale nuclear reorganization. This symposium focuses on the progress in our understanding of dynamics in complex systems driven by recent innovations in time-resolved spectroscopies and theoretical developments. Such advancement is critical for driving discovery at the molecular level facilitating new applications. Broad areas of interest include: Structural relaxation and the impact of structure on dynamics in liquids, interfaces, biochemical systems, materials, and other heterogeneous environments.

  14. Particle physics and polyedra proximity calculation for hazard simulations in large-scale industrial plants

    NASA Astrophysics Data System (ADS)

    Plebe, Alice; Grasso, Giorgio

    2016-12-01

    This paper describes a system developed for the simulation of flames inside an open-source 3D computer graphics package, Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are its ability to render the very complex structure of large industrial plants at high resolution and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results for a real oil and gas refinery are presented.

  15. A Framework of Working Across Disciplines in Early Design and R&D of Large Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria Rivas; Papalambros, Panos Y.; Baker, Wayne E.

    2015-01-01

    This paper examines four primary methods of working across disciplines during R&D and early design of large-scale complex engineered systems such as aerospace systems. A conceptualized framework, called the Combining System Elements framework, is presented to delineate several aspects of cross-discipline and system integration practice. The framework is derived from a theoretical and empirical analysis of current work practices in actual operational settings and is informed by theories from organization science and engineering. The explanatory framework may be used by teams to clarify assumptions and associated work practices, which may reduce ambiguity in understanding diverse approaches to early systems research, development and design. The framework also highlights that very different engineering results may be obtained depending on work practices, even when the goals for the engineered system are the same.

  16. Finding equilibrium in the spatiotemporal chaos of the complex Ginzburg-Landau equation

    NASA Astrophysics Data System (ADS)

    Ballard, Christopher C.; Esty, C. Clark; Egolf, David A.

    2016-11-01

    Equilibrium statistical mechanics allows the prediction of collective behaviors of large numbers of interacting objects from just a few system-wide properties; however, a similar theory does not exist for far-from-equilibrium systems exhibiting complex spatial and temporal behavior. We propose a method for predicting behaviors in a broad class of such systems and apply these ideas to an archetypal example, the spatiotemporal chaotic 1D complex Ginzburg-Landau equation in the defect chaos regime. Building on the ideas of Ruelle and of Cross and Hohenberg that a spatiotemporal chaotic system can be considered a collection of weakly interacting dynamical units of a characteristic size, the chaotic length scale, we identify underlying, mesoscale, chaotic units and effective interaction potentials between them. We find that the resulting equilibrium Takahashi model accurately predicts distributions of particle numbers. These results suggest the intriguing possibility that a class of far-from-equilibrium systems may be well described at coarse-grained scales by the well-established theory of equilibrium statistical mechanics.
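
    A minimal pseudo-spectral sketch of the 1D complex Ginzburg-Landau equation is shown below; the parameters are merely an illustrative Benjamin-Feir-unstable choice (not necessarily the defect-chaos values used by the authors), and the scheme is a simple first-order split step rather than the integrator used in the study.

        import numpy as np

        # dA/dt = A + (1 + i*c1) d2A/dx2 - (1 + i*c3) |A|^2 A  on a periodic domain.
        L, N, dt, steps = 200.0, 512, 0.05, 20000
        c1, c3 = 2.0, -2.0                      # illustrative Benjamin-Feir unstable choice
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
        E = np.exp(dt * (1.0 - (1 + 1j * c1) * k**2))   # exact step for the linear part

        rng = np.random.default_rng(3)
        A = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
        for _ in range(steps):
            nonlin = -(1 + 1j * c3) * np.abs(A)**2 * A
            A = np.fft.ifft(E * np.fft.fft(A + dt * nonlin))   # first-order split step

        amp = np.abs(A)
        print("mean |A|:", amp.mean().round(3),
              "| min |A| (amplitude dips near defects):", amp.min().round(3))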

  17. Finding equilibrium in the spatiotemporal chaos of the complex Ginzburg-Landau equation.

    PubMed

    Ballard, Christopher C; Esty, C Clark; Egolf, David A

    2016-11-01

    Equilibrium statistical mechanics allows the prediction of collective behaviors of large numbers of interacting objects from just a few system-wide properties; however, a similar theory does not exist for far-from-equilibrium systems exhibiting complex spatial and temporal behavior. We propose a method for predicting behaviors in a broad class of such systems and apply these ideas to an archetypal example, the spatiotemporal chaotic 1D complex Ginzburg-Landau equation in the defect chaos regime. Building on the ideas of Ruelle and of Cross and Hohenberg that a spatiotemporal chaotic system can be considered a collection of weakly interacting dynamical units of a characteristic size, the chaotic length scale, we identify underlying, mesoscale, chaotic units and effective interaction potentials between them. We find that the resulting equilibrium Takahashi model accurately predicts distributions of particle numbers. These results suggest the intriguing possibility that a class of far-from-equilibrium systems may be well described at coarse-grained scales by the well-established theory of equilibrium statistical mechanics.

  18. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
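
    As a crude, hedged sketch of one way to exploit graph locality (schematic only, not the authors' production algorithm), the snippet below builds local density matrices from small core-plus-halo submatrices of a tight-binding chain and keeps only each core's rows, then compares against full diagonalization. The model and partition sizes are arbitrary.

        import numpy as np

        # Tight-binding chain; local density matrices from core+halo submatrices.
        n, t_hop, mu = 200, -1.0, 0.0
        H = np.zeros((n, n))
        for i in range(n - 1):
            H[i, i + 1] = H[i + 1, i] = t_hop

        def density(Hsub, mu):
            e, v = np.linalg.eigh(Hsub)
            occ = (e < mu).astype(float)        # zero-temperature occupations
            return (v * occ) @ v.T

        core, halo = 20, 10
        D = np.zeros((n, n))
        for start in range(0, n, core):
            lo, hi = max(0, start - halo), min(n, start + core + halo)
            Dsub = density(H[lo:hi, lo:hi], mu)
            ncore = min(core, n - start)
            D[start:start + ncore, lo:hi] = Dsub[start - lo:start - lo + ncore, :]

        print("max elementwise error vs. full diagonalization:",
              np.abs(D - density(H, mu)).max().round(3))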

  19. Graph-based linear scaling electronic structure theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  20. Asymmetric noise-induced large fluctuations in coupled systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Ira B.; Szwaykowska, Klimka; Carr, Thomas W.

    2017-10-01

    Networks of interacting, communicating subsystems are common in many fields, from ecology, biology, and epidemiology to engineering and robotics. In the presence of noise and uncertainty, interactions between the individual components can lead to unexpected complex system-wide behaviors. In this paper, we consider a generic model of two weakly coupled dynamical systems, and we show how noise in one part of the system is transmitted through the coupling interface. Working synergistically with the coupling, the noise on one system drives a large fluctuation in the other, even when there is no noise in the second system. Moreover, the large fluctuation happens while the first system exhibits only small random oscillations. Uncertainty effects are quantified by showing how characteristic time scales of noise-induced switching scale as a function of the coupling between the two coupled parts of the system. In addition, our results show that the probability of switching in the noise-free system scales inversely as the square of the reduced noise intensity amplitude, rendering switching an extremely rare event. Our results showing the interplay between transmitted noise and coupling are also confirmed through simulations, which agree quite well with analytic theory.
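
    The qualitative effect can be reproduced with a very simple Euler-Maruyama simulation of two bistable units, with noise applied only to the first and a one-way coupling chosen strong enough that transitions occur on simulation time scales; this is a generic illustration, not the paper's model or parameter regime.

        import numpy as np

        # Two double-well units; noise acts only on x, y is noise-free and feels x
        # through a one-way coupling (a deliberate simplification of the setting).
        rng = np.random.default_rng(6)
        dt, steps = 2.5e-3, 400_000          # total simulated time T = 1000
        c, sigma = 0.3, 0.4                  # coupling strength, noise on unit 1 only
        x, y = -1.0, -1.0                    # both start in the left well
        x_sw = y_sw = 0
        x_sign = y_sign = -1
        for _ in range(steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            x += (x - x**3) * dt + sigma * dW            # noisy unit
            y += (y - y**3 + c * (x - y)) * dt           # noise-free unit, driven via coupling
            if (x > 0) != (x_sign > 0):
                x_sw, x_sign = x_sw + 1, -x_sign
            if (y > 0) != (y_sign > 0):
                y_sw, y_sign = y_sw + 1, -y_sign
        print("well transitions: noisy unit", x_sw, "| noise-free unit", y_sw)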

  1. The Multi-Scale Network Landscape of Collaboration.

    PubMed

    Bae, Arram; Park, Doheum; Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling many novel and significant scientific insights into a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena--which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists.

  2. The Multi-Scale Network Landscape of Collaboration

    PubMed Central

    Ahn, Yong-Yeol; Park, Juyong

    2016-01-01

    Propelled by the increasing availability of large-scale, high-quality data, advanced data modeling and analysis techniques are enabling many novel and significant scientific insights into a wide range of complex social, natural, and technological systems. These developments also provide opportunities for studying cultural systems and phenomena—which can be said to refer to all products of human creativity and way of life. An important characteristic of a cultural product is that it does not exist in isolation from others, but forms an intricate web of connections on many levels. In the creation and dissemination of cultural products and artworks in particular, collaboration and communication of ideas play an essential role, which can be captured in the heterogeneous network of the creators and practitioners of art. In this paper we propose novel methods to analyze and uncover meaningful patterns from such a network, using the network of western classical musicians constructed from a large-scale, comprehensive data set of Compact Disc recordings. We characterize the complex patterns in the network landscape of collaboration between musicians across multiple scales, ranging from the macroscopic to the mesoscopic and microscopic, that represent the diversity of cultural styles and the individuality of the artists. PMID:26990088

  3. Systems Proteomics for Translational Network Medicine

    PubMed Central

    Arrell, D. Kent; Terzic, Andre

    2012-01-01

    Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016

  4. Complex networks as an emerging property of hierarchical preferential attachment.

    PubMed

    Hébert-Dufresne, Laurent; Laurence, Edward; Allard, Antoine; Young, Jean-Gabriel; Dubé, Louis J

    2015-12-01

    Real complex systems are not rigidly structured; no clear rules or blueprints exist for their construction. Yet, amidst their apparent randomness, complex structural properties universally emerge. We propose that an important class of complex systems can be modeled as an organization of many embedded levels (potentially infinite in number), all of them following the same universal growth principle known as preferential attachment. We give examples of such hierarchy in real systems, for instance, in the pyramid of production entities of the film industry. More importantly, we show how real complex networks can be interpreted as a projection of our model, from which their scale independence, their clustering, their hierarchy, their fractality, and their navigability naturally emerge. Our results suggest that complex networks, viewed as growing systems, can be quite simple, and that the apparent complexity of their structure is largely a reflection of their unobserved hierarchical nature.
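
    The growth rule applied at every embedded level is ordinary preferential attachment; the short sketch below generates a single level only (the hierarchical, multi-level construction of the paper is not reproduced here) and prints the resulting heavy-tailed degree counts.

        import random
        from collections import Counter

        # Plain preferential attachment: each new node links to an existing node chosen
        # with probability proportional to its degree.
        random.seed(7)
        targets = [0, 1]            # each node id appears once per incident edge
        edges = [(0, 1)]
        for new in range(2, 5000):
            old = random.choice(targets)
            edges.append((new, old))
            targets += [new, old]

        deg = Counter()
        for a, b in edges:
            deg[a] += 1
            deg[b] += 1
        tail = Counter(deg.values())
        print("degree -> number of nodes (note the heavy tail):")
        for d in sorted(tail):
            if d <= 4 or d == max(tail):
                print(d, tail[d])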

  5. Complex networks as an emerging property of hierarchical preferential attachment

    NASA Astrophysics Data System (ADS)

    Hébert-Dufresne, Laurent; Laurence, Edward; Allard, Antoine; Young, Jean-Gabriel; Dubé, Louis J.

    2015-12-01

    Real complex systems are not rigidly structured; no clear rules or blueprints exist for their construction. Yet, amidst their apparent randomness, complex structural properties universally emerge. We propose that an important class of complex systems can be modeled as an organization of many embedded levels (potentially infinite in number), all of them following the same universal growth principle known as preferential attachment. We give examples of such hierarchy in real systems, for instance, in the pyramid of production entities of the film industry. More importantly, we show how real complex networks can be interpreted as a projection of our model, from which their scale independence, their clustering, their hierarchy, their fractality, and their navigability naturally emerge. Our results suggest that complex networks, viewed as growing systems, can be quite simple, and that the apparent complexity of their structure is largely a reflection of their unobserved hierarchical nature.

  6. Reaction factoring and bipartite update graphs accelerate the Gillespie Algorithm for large-scale biochemical systems.

    PubMed

    Indurkhya, Sagar; Beal, Jacob

    2010-01-06

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the numbers of reactions and species, rather than the larger storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
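
    The dependency-graph idea can be illustrated with a minimal Gillespie simulation in which, after a reaction fires, only the propensities of reactions that read one of the changed species are recomputed. This is a toy sketch of that bookkeeping, not the LOLCAT implementation; the network and rate constants are invented.

        import numpy as np

        # Toy network: S0 -> S1, S1 -> S2, S1 + S2 -> S0 (rate constants invented).
        rng = np.random.default_rng(4)
        reactants = [{0: 1}, {1: 1}, {1: 1, 2: 1}]                 # species read by each reaction
        change = [{0: -1, 1: 1}, {1: -1, 2: 1}, {0: 1, 1: -1, 2: -1}]
        k = np.array([1.0, 0.5, 0.02])
        x = np.array([1000.0, 10.0, 10.0])

        # Bipartite bookkeeping: which reactions read each species.
        reads = {s: [j for j, r in enumerate(reactants) if s in r] for s in range(3)}

        def propensity(j):
            a = k[j]
            for s, order in reactants[j].items():
                a *= x[s] ** order
            return a

        a = np.array([propensity(j) for j in range(3)])
        t, t_end = 0.0, 10.0
        while t < t_end and a.sum() > 0:
            a0 = a.sum()
            t += rng.exponential(1.0 / a0)
            j = rng.choice(3, p=a / a0)
            affected = set()
            for s, d in change[j].items():
                x[s] += d
                affected.update(reads[s])       # only reactions sharing a changed species
            for i in affected:
                a[i] = propensity(i)            # selective propensity update
        print("t =", round(t, 2), "state =", x.astype(int))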

  7. Reaction Factoring and Bipartite Update Graphs Accelerate the Gillespie Algorithm for Large-Scale Biochemical Systems

    PubMed Central

    Indurkhya, Sagar; Beal, Jacob

    2010-01-01

    ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that grows only with the numbers of reactions and species, rather than the larger storage required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models. PMID:20066048

  8. Automation of Hessian-Based Tubularity Measure Response Function in 3D Biomedical Images.

    PubMed

    Dzyubak, Oleksandr P; Ritman, Erik L

    2011-01-01

    The blood vessels and nerve trees consist of tubular objects interconnected into a complex tree- or web-like structure that spans a wide range of structural scales, from 5 μm diameter capillaries to the 3 cm aorta. This large range of scales presents two major problems: one is simply making the measurements, and the other is the exponential increase in the number of components with decreasing scale. With the remarkable increase in the volume imaged by, and resolution of, modern-day 3D imagers, manual tracking of the complex multiscale parameters from those large image data sets is almost impossible. In addition, manual tracking is quite subjective and unreliable. We propose an automated, adaptive, unsupervised system for tracking tubular objects based on a multiscale framework and a Hessian-based object shape detector, incorporating the National Library of Medicine Insight Segmentation and Registration Toolkit (ITK) image processing libraries.
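
    A simplified, single-scale version of a Hessian-based tubularity response can be written in a few lines with SciPy; a practical multiscale filter would repeat this over a range of Gaussian widths and keep the maximum response. The synthetic volume and the score below are for illustration only and do not reproduce the authors' ITK-based pipeline.

        import numpy as np
        from scipy import ndimage

        # Single-scale Hessian tubularity sketch on a synthetic volume with one bright
        # "vessel" along z.
        rng = np.random.default_rng(5)
        vol = rng.normal(0.0, 0.05, (64, 64, 64))
        vol[30:34, 30:34, :] += 1.0
        sigma = 2.0

        pairs = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
        Hmat = np.zeros(vol.shape + (3, 3))
        for i, j in pairs:
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            d2 = ndimage.gaussian_filter(vol, sigma, order=order)   # Gaussian 2nd derivative
            Hmat[..., i, j] = Hmat[..., j, i] = d2

        lam = np.sort(np.linalg.eigvalsh(Hmat), axis=-1)            # ascending eigenvalues
        prod = lam[..., 0] * lam[..., 1]                            # two most negative
        tube = np.where((lam[..., 0] < 0) & (lam[..., 1] < 0),
                        np.sqrt(np.maximum(prod, 0.0)), 0.0)
        print("response on the vessel axis:", tube[32, 32, 32].round(3),
              "| background:", tube[5, 5, 5].round(3))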

  9. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  10. Considering Complex Objectives and Scarce Resources in Information Systems' Analysis.

    ERIC Educational Resources Information Center

    Crowther, Warren

    The low efficacy of many of the library and large-scale information systems that have been implemented in the developing countries has been disappointing, and their appropriateness is often questioned in the governmental and educational institutions of more industrialized countries beset by budget-crunching and a very dynamic transformation of…

  11. Potential utilization of the NASA/George C. Marshall Space Flight Center in earthquake engineering research

    NASA Technical Reports Server (NTRS)

    Scholl, R. E. (Editor)

    1979-01-01

    Earthquake engineering research capabilities of the National Aeronautics and Space Administration (NASA) facilities at George C. Marshall Space Flight Center (MSFC), Alabama, were evaluated. The results indicate that the NASA/MSFC facilities and supporting capabilities offer unique opportunities for conducting earthquake engineering research. Specific features that are particularly attractive for large scale static and dynamic testing of natural and man-made structures include the following: large physical dimensions of buildings and test bays; high loading capacity; wide range and large number of test equipment and instrumentation devices; multichannel data acquisition and processing systems; technical expertise for conducting large-scale static and dynamic testing; sophisticated techniques for systems dynamics analysis, simulation, and control; and capability for managing large-size and technologically complex programs. Potential uses of the facilities for near and long term test programs to supplement current earthquake research activities are suggested.

  12. Phase Transitions and Scaling in Systems Far from Equilibrium

    NASA Astrophysics Data System (ADS)

    Täuber, Uwe C.

    2017-03-01

    Scaling ideas and renormalization group approaches proved crucial for a deep understanding and classification of critical phenomena in thermal equilibrium. Over the past decades, these powerful conceptual and mathematical tools were extended to continuous phase transitions separating distinct nonequilibrium stationary states in driven classical and quantum systems. In concordance with detailed numerical simulations and laboratory experiments, several prominent dynamical universality classes have emerged that govern large-scale, long-time scaling properties both near and far from thermal equilibrium. These pertain to genuine specific critical points as well as entire parameter space regions for steady states that display generic scale invariance. The exploration of nonstationary relaxation properties and associated physical aging scaling constitutes a complementary potent means to characterize cooperative dynamics in complex out-of-equilibrium systems. This review describes dynamic scaling features through paradigmatic examples that include near-equilibrium critical dynamics, driven lattice gases and growing interfaces, correlation-dominated reaction-diffusion systems, and basic epidemic models.

  13. On the context-dependent scaling of consumer feeding rates.

    PubMed

    Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C

    2016-06-01

    The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
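
    For concreteness, a Holling type-II functional response with a capture rate that peaks at an intermediate predator:prey mass ratio and is damped by habitat complexity can be written as below; the functional forms and parameter values are hypothetical illustrations, not the fitted models from this study.

        import numpy as np

        # Holling type-II feeding rate: f(N) = a*N / (1 + a*h*N), with an attack rate a
        # that is unimodal in the log mass ratio and reduced by habitat complexity.
        def feeding_rate(N, mass_ratio, habitat="3D", h=0.1):
            a_max = {"3D": 1.0, "simple_2D": 0.6, "complex_2D": 0.3}[habitat]
            a = a_max * np.exp(-0.5 * (np.log10(mass_ratio) - 2.0) ** 2)
            return a * N / (1.0 + a * h * N)

        for habitat in ("3D", "simple_2D", "complex_2D"):
            print(habitat, round(feeding_rate(N=50, mass_ratio=100, habitat=habitat), 2))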

  14. 3-D imaging of large scale buried structure by 1-D inversion of very early time electromagnetic (VETEM) data

    USGS Publications Warehouse

    Aydmer, A.A.; Chew, W.C.; Cui, T.J.; Wright, D.L.; Smith, D.V.; Abraham, J.D.

    2001-01-01

    A simple and efficient method for large scale three-dimensional (3-D) subsurface imaging of inhomogeneous background is presented. One-dimensional (1-D) multifrequency distorted Born iterative method (DBIM) is employed in the inversion. Simulation results utilizing synthetic scattering data are given. Calibration of the very early time electromagnetic (VETEM) experimental waveforms is detailed along with major problems encountered in practice and their solutions. This discussion is followed by the results of a large scale application of the method to the experimental data provided by the VETEM system of the U.S. Geological Survey. The method is shown to have a computational complexity that is promising for on-site inversion.

  15. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  16. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  17. Evolving from bioinformatics in-the-small to bioinformatics in-the-large.

    PubMed

    Parker, D Stott; Gorlick, Michael M; Lee, Christopher J

    2003-01-01

    We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
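
    The "dependencies as first-class metadata" idea can be illustrated with a toy make-style resolver (Python 3.9+, using graphlib); the pipeline targets below are hypothetical and this is not the authors' system.

        from graphlib import TopologicalSorter   # standard library, Python 3.9+

        # Toy make-style rules: target -> prerequisites (all names hypothetical).
        rules = {
            "alignment.bam": ["reads.fastq", "reference.fa"],
            "variants.vcf": ["alignment.bam", "reference.fa"],
            "report.html": ["variants.vcf"],
        }
        # Prerequisites are emitted before the targets that depend on them.
        for target in TopologicalSorter(rules).static_order():
            if target in rules:
                print("build", target, "from", ", ".join(rules[target]))
            else:
                print("input", target)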

  18. Engineering research, development and technology FY99

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langland, R T

    The growth of computer power and connectivity, together with advances in wireless sensing and communication technologies, is transforming the field of complex distributed systems. The ability to deploy large numbers of sensors with a rapid, broadband communication system will enable high-fidelity, near real-time monitoring of complex systems. These technological developments will provide unprecedented insight into the actual performance of engineered and natural environment systems, enable the evolution of many new types of engineered systems for monitoring and detection, and enhance our ability to perform improved and validated large-scale simulations of complex systems. One of the challenges facing engineering is to develop methodologies to exploit the emerging information technologies. Particularly important will be the ability to assimilate measured data into the simulation process in a way which is much more sophisticated than current, primarily ad hoc procedures. The reports contained in this section on the Center for Complex Distributed Systems describe activities related to the integrated engineering of large complex systems. The first three papers describe recent developments for each link of the integrated engineering process for large structural systems. These include (1) the development of model-based signal processing algorithms which will formalize the process of coupling measurements and simulation and provide a rigorous methodology for validation and update of computational models; (2) collaborative efforts with faculty at the University of California at Berkeley on the development of massive simulation models for the earth and large bridge structures; and (3) the development of wireless data acquisition systems which provide a practical means of monitoring large systems like the National Ignition Facility (NIF) optical support structures. These successful developments are coming to a confluence in the next year with applications to NIF structural characterizations and analysis of large bridge structures for the State of California. Initial feasibility investigations into the development of monitoring and detection systems are described in the papers on imaging of underground structures with ground-penetrating radar, and the use of live insects as sensor platforms. These efforts are establishing the basic performance characteristics essential to the decision process for future development of sensor arrays for information gathering related to national security.

  19. A complexity science-based framework for global joint operations analysis to support force projection: LDRD Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.

    2015-01-01

    The military is undergoing a significant transformation as it modernizes for the information age and adapts to address an emerging asymmetric threat beyond traditional cold war era adversaries. Techniques such as traditional large-scale, joint services war gaming analysis are no longer adequate to support program evaluation activities and mission planning analysis at the enterprise level because the operating environment is evolving too quickly. New analytical capabilities are necessary to address modernization of the Department of Defense (DoD) enterprise. This presents significant opportunity to Sandia in supporting the nation at this transformational enterprise scale. Although Sandia has significant experience with engineering system of systems (SoS) and Complex Adaptive System of Systems (CASoS), significant fundamental research is required to develop modeling, simulation and analysis capabilities at the enterprise scale. This report documents an enterprise modeling framework which will enable senior level decision makers to better understand their enterprise and required future investments.

  20. CORALINA: a universal method for the generation of gRNA libraries for CRISPR-based screening.

    PubMed

    Köferle, Anna; Worf, Karolina; Breunig, Christopher; Baumann, Valentin; Herrero, Javier; Wiesbeck, Maximilian; Hutter, Lukas H; Götz, Magdalena; Fuchs, Christiane; Beck, Stephan; Stricker, Stefan H

    2016-11-14

    The bacterial CRISPR system is fast becoming the most popular genetic and epigenetic engineering tool due to its universal applicability and adaptability. The desire to deploy CRISPR-based methods in a large variety of species and contexts has created an urgent need for the development of easy, time- and cost-effective methods enabling large-scale screening approaches. Here we describe CORALINA (comprehensive gRNA library generation through controlled nuclease activity), a method for the generation of comprehensive gRNA libraries for CRISPR-based screens. CORALINA gRNA libraries can be derived from any source of DNA without the need for complex oligonucleotide synthesis. We show the utility of CORALINA for human and mouse genomic DNA, its reproducibility in covering the most relevant genomic features including regulatory, coding and non-coding sequences, and confirm the functionality of CORALINA-generated gRNAs. The simplicity and cost-effectiveness make CORALINA suitable for any experimental system. The unprecedented sequence complexities obtainable with CORALINA libraries are a necessary pre-requisite for less biased large-scale genomic and epigenomic screens.

  1. Web-based Visual Analytics for Extreme Scale Climate Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Evans, Katherine J; Harney, John F

    In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.

  2. Complexity, Robustness, and Multistability in Network Systems with Switching Topologies: A Hierarchical Hybrid Control Approach

    DTIC Science & Technology

    2015-05-22

    sensor networks for managing power levels of wireless networks; air and ground transportation systems for air traffic control and payload transport and... network systems, large-scale systems, adaptive control, discontinuous systems ...cover a broad spectrum of applications including cooperative control of unmanned air vehicles, autonomous underwater vehicles, distributed sensor

  3. Towards the understanding of network information processing in biology

    NASA Astrophysics Data System (ADS)

    Singh, Vijay

    Living organisms perform incredibly well in detecting signals present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can instead be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by the underlying biological networks. We find that long-range connections in the brain allow for global-scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with different temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single-unit signal processing.

  4. Some aspects of wind tunnel magnetic suspension systems with special application at large physical scales

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1983-01-01

    Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied, and it is concluded that system cost and complexity need not be seriously increased.

  5. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
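
    To make the workflow idea concrete, the sketch below shows a small cross-validation-style pipeline written against the plain Luigi task API; the task names, file paths, and parameter values are invented for illustration, and the SciLuigi extension described in the abstract adds its flow-based conveniences on top of constructs like these rather than being shown here.

      # A minimal, hypothetical Luigi pipeline: preprocess a dataset, then train a
      # model with a tunable parameter. Dependencies are declared explicitly, which
      # is what lets a workflow engine schedule, resume, and fan out the tasks.
      import luigi

      class Preprocess(luigi.Task):
          dataset = luigi.Parameter(default="interactions.csv")   # hypothetical input file

          def output(self):
              return luigi.LocalTarget(f"{self.dataset}.clean")

          def run(self):
              with self.output().open("w") as out:
                  out.write("preprocessed placeholder data\n")

      class TrainModel(luigi.Task):
          dataset = luigi.Parameter(default="interactions.csv")
          c = luigi.FloatParameter(default=1.0)                   # e.g., a regularization strength

          def requires(self):
              return Preprocess(dataset=self.dataset)

          def output(self):
              return luigi.LocalTarget(f"model_c{self.c}.txt")

          def run(self):
              with self.input().open() as data, self.output().open("w") as out:
                  out.write(f"model trained on {len(data.read())} bytes with c={self.c}\n")

      if __name__ == "__main__":
          # Run two parameter settings locally; a cluster scheduler would fan these out.
          luigi.build([TrainModel(c=0.1), TrainModel(c=1.0)], local_scheduler=True)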

  6. MacroBac: New Technologies for Robust and Efficient Large-Scale Production of Recombinant Multiprotein Complexes.

    PubMed

    Gradia, Scott D; Ishida, Justin P; Tsai, Miaw-Sheue; Jeans, Chris; Tainer, John A; Fuss, Jill O

    2017-01-01

    Recombinant expression of large, multiprotein complexes is essential and often rate limiting for determining structural, biophysical, and biochemical properties of DNA repair, replication, transcription, and other key cellular processes. Baculovirus-infected insect cell expression systems are especially well suited for producing large, human proteins recombinantly, and multigene baculovirus systems have facilitated studies of multiprotein complexes. In this chapter, we describe a multigene baculovirus system called MacroBac that uses a Biobricks-type assembly method based on restriction and ligation (Series 11) or ligation-independent cloning (Series 438). MacroBac cloning and assembly is efficient and equally well suited for either single subcloning reactions or high-throughput cloning using 96-well plates and liquid handling robotics. MacroBac vectors are polypromoter with each gene flanked by a strong polyhedrin promoter and an SV40 poly(A) termination signal that minimize gene order expression level effects seen in many polycistronic assemblies. Large assemblies are robustly achievable, and we have successfully assembled as many as 10 genes into a single MacroBac vector. Importantly, we have observed significant increases in expression levels and quality of large, multiprotein complexes using a single, multigene, polypromoter virus rather than coinfection with multiple, single-gene viruses. Given the importance of characterizing functional complexes, we believe that MacroBac provides a critical enabling technology that may change the way that structural, biophysical, and biochemical research is done. © 2017 Elsevier Inc. All rights reserved.

  7. Design and Implementation of a Metadata-rich File System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ames, S; Gokhale, M B; Maltzahn, C

    2010-01-19

    Despite continual improvements in the performance and reliability of large-scale file systems, the management of user-defined file system metadata has changed little in the past decade. The mismatch between the size and complexity of large-scale data stores and their ability to organize and query their metadata has led to a de facto standard in which raw data is stored in traditional file systems, while related, application-specific metadata is stored in relational databases. This separation of data and semantic metadata requires considerable effort to maintain consistency and can result in complex, slow, and inflexible system operation. To address these problems, we have developed the Quasar File System (QFS), a metadata-rich file system in which files, user-defined attributes, and file relationships are all first class objects. In contrast to hierarchical file systems and relational databases, QFS defines a graph data model composed of files and their relationships. QFS incorporates Quasar, an XPATH-extended query language for searching the file system. Results from our QFS prototype show the effectiveness of this approach. Compared to the de facto standard, the QFS prototype shows superior ingest performance and comparable query performance on user metadata-intensive operations and superior performance on normal file metadata operations.
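
    As a rough, hypothetical sketch of the graph data model idea (not the actual QFS implementation or Quasar query syntax), files, user-defined attributes, and inter-file relationships can be treated as first-class graph objects and queried directly, as below; all node names, attributes, and edge labels are invented for illustration.

      # Toy graph model of a metadata-rich file system: files are nodes carrying
      # user-defined attributes, and relationships between files are labeled edges.
      import networkx as nx

      fs = nx.MultiDiGraph()

      # Files as first-class nodes with arbitrary user metadata.
      fs.add_node("raw/run42.dat", kind="file", experiment="run42", instrument="detector-A")
      fs.add_node("derived/run42.h5", kind="file", experiment="run42", stage="calibrated")
      fs.add_node("plots/run42.png", kind="file", experiment="run42", stage="figure")

      # Relationships between files as first-class labeled edges.
      fs.add_edge("derived/run42.h5", "raw/run42.dat", rel="derivedFrom")
      fs.add_edge("plots/run42.png", "derived/run42.h5", rel="derivedFrom")

      # A query expressed directly over the graph rather than a separate database:
      # "all files from experiment run42 derived directly from raw data".
      hits = [
          n for n, attrs in fs.nodes(data=True)
          if attrs.get("experiment") == "run42"
          and any(d.get("rel") == "derivedFrom" and t.startswith("raw/")
                  for _, t, d in fs.out_edges(n, data=True))
      ]
      print(hits)  # ['derived/run42.h5']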

  8. Bacterial community changes in an industrial algae production system.

    PubMed

    Fulbright, Scott P; Robbins-Pianka, Adam; Berg-Lyons, Donna; Knight, Rob; Reardon, Kenneth F; Chisholm, Stephen T

    2018-04-01

    While microalgae are a promising feedstock for production of fuels and other chemicals, a challenge for the algal bioproducts industry is obtaining consistent, robust algae growth. Algal cultures include complex bacterial communities and can be difficult to manage because specific bacteria can promote or reduce algae growth. To overcome bacterial contamination, algae growers may use closed photobioreactors designed to reduce the number of contaminant organisms. Even with closed systems, bacteria are known to enter and cohabitate, but little is known about these communities. Therefore, the richness, structure, and composition of bacterial communities were characterized in closed photobioreactor cultivations of Nannochloropsis salina in F/2 medium at different scales, across nine months spanning late summer-early spring, and during a sequence of serially inoculated cultivations. Using 16S rRNA sequence data from 275 samples, bacterial communities in small, medium, and large cultures were shown to be significantly different. Larger systems contained richer bacterial communities compared to smaller systems. Relationships between bacterial communities and algae growth were complex. On one hand, blooms of a specific bacterial type were observed in three abnormal, poorly performing replicate cultivations, while on the other, notable changes in the bacterial community structures were observed in a series of serial large-scale batch cultivations that had similar growth rates. Bacteria common to the majority of samples were identified, including a single OTU within the class Saprospirae that was found in all samples. This study contributes important information for crop protection in algae systems, and demonstrates the complex ecosystems that need to be understood for consistent, successful industrial algae cultivation. This is the first study to profile bacterial communities during the scale-up process of industrial algae systems.

  9. Space-time dependence between energy sources and climate related energy production

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjorn; Borga, Marco; Creutin, Jean-Dominique; Ramos, Maria-Helena; Tøfte, Lena; Warland, Geir

    2014-05-01

    The European Renewable Energy Directive adopted in 2009 focuses on achieving a 20% share of renewable energy in the EU overall energy mix by 2020. A major part of renewable energy production is related to climate, called "climate related energy" (CRE) production. CRE production systems (wind, solar, and hydropower) are characterized by a large degree of intermittency and variability on both short and long time scales due to the natural variability of climate variables. The main strategies to handle the variability of CRE production include energy-storage, -transport, -diversity and -information (smart grids). The first three strategies aim to smooth out the intermittency and variability of CRE production in time and space, whereas the last strategy aims to provide a more optimal interaction between energy production and demand, i.e. to smooth out the residual load (the difference between demand and production). In order to increase the CRE share in the electricity system, it is essential to understand the space-time co-variability between the weather variables and CRE production under both current and future climates. This study presents a review of the literature that seeks to tackle these problems. It reveals that the majority of studies deal with either a single CRE source or with the combination of two CREs, mostly wind and solar. This may be due to the fact that the countries most advanced in terms of wind equipment also have very little hydropower potential (Denmark, Ireland or the UK, for instance). Hydropower is characterized by both a large storage capacity and flexibility in electricity production, and therefore has a large potential for both balancing and storing energy from wind- and solar-power. Several studies look at how to better connect regions with a large share of hydropower (e.g., Scandinavia and the Alps) to regions with high shares of wind- and solar-power (e.g., the green battery North-Sea net). Considering time scales, various studies consider wind and solar power production and their co-fluctuation at small time scales. The multi-scale nature of the variability is less studied, i.e., the potential adverse or favorable co-fluctuation at intermediate time scales involving water scarcity or abundance is less present in the literature. Our review points out that it could be especially interesting to promote research on how the pronounced large-scale fluctuations in inflow to hydropower (intra-annual run-off) and smaller-scale fluctuations in wind- and solar-power interact in an energy system. There is a need to better represent the profound difference between wind-, solar- and hydro-energy sources. On the one hand, they are all directly linked to the 2-D horizontal dynamics of meteorology. On the other hand, the branching structure of hydrological systems transforms this variability and governs the complex combination of natural inflows and reservoir storage. Finally, we note that the CRE production is, in addition to weather, also influenced by the energy system and market, i.e., the energy transport and demand across scales as well as changes of market regulation. The CRE production system thus lies in this nexus between climate, energy systems and market regulations. The work presented is part of the FP7 project COMPLEX (Knowledge based climate mitigation systems for a low carbon economy; http://www.complex.ac.uk).

  10. Dynamic effective connectivity in cortically embedded systems of recurrently coupled synfire chains.

    PubMed

    Trengove, Chris; Diesmann, Markus; van Leeuwen, Cees

    2016-02-01

    As a candidate mechanism of neural representation, large numbers of synfire chains can efficiently be embedded in a balanced recurrent cortical network model. Here we study a model in which multiple synfire chains of variable strength are randomly coupled together to form a recurrent system. The system can be implemented both as a large-scale network of integrate-and-fire neurons and as a reduced model. The latter has binary-state pools as basic units but is otherwise isomorphic to the large-scale model, and provides an efficient tool for studying its behavior. Both the large-scale system and its reduced counterpart are able to sustain ongoing endogenous activity in the form of synfire waves, the proliferation of which is regulated by negative feedback caused by collateral noise. Within this equilibrium, diverse repertoires of ongoing activity are observed, including meta-stability and multiple steady states. These states arise in concert with an effective connectivity structure (ECS). The ECS admits a family of effective connectivity graphs (ECGs), parametrized by the mean global activity level. Of these graphs, the strongly connected components and their associated out-components account to a large extent for the observed steady states of the system. These results imply a notion of dynamic effective connectivity as governing neural computation with synfire chains, and related forms of cortical circuitry with complex topologies.

  11. Resource Management for Distributed Parallel Systems

    NASA Technical Reports Server (NTRS)

    Neuman, B. Clifford; Rao, Santosh

    1993-01-01

    Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.

  12. An interactive web application for the dissemination of human systems immunology data.

    PubMed

    Speake, Cate; Presnell, Scott; Domico, Kelly; Zeitner, Brad; Bjork, Anna; Anderson, David; Mason, Michael J; Whalen, Elizabeth; Vargas, Olivia; Popov, Dimitry; Rinchai, Darawan; Jourde-Chiche, Noemie; Chiche, Laurent; Quinn, Charlie; Chaussabel, Damien

    2015-06-19

    Systems immunology approaches have proven invaluable in translational research settings. The current rate at which large-scale datasets are generated presents unique challenges and opportunities. Mining aggregates of these datasets could accelerate the pace of discovery, but new solutions are needed to integrate the heterogeneous data types with the contextual information that is necessary for interpretation. In addition, enabling tools and technologies facilitating investigators' interaction with large-scale datasets must be developed in order to promote insight and foster knowledge discovery. State of the art application programming was employed to develop an interactive web application for browsing and visualizing large and complex datasets. A collection of human immune transcriptome datasets were loaded alongside contextual information about the samples. We provide a resource enabling interactive query and navigation of transcriptome datasets relevant to human immunology research. Detailed information about studies and samples are displayed dynamically; if desired the associated data can be downloaded. Custom interactive visualizations of the data can be shared via email or social media. This application can be used to browse context-rich systems-scale data within and across systems immunology studies. This resource is publicly available online at [Gene Expression Browser Landing Page ( https://gxb.benaroyaresearch.org/dm3/landing.gsp )]. The source code is also available openly [Gene Expression Browser Source Code ( https://github.com/BenaroyaResearch/gxbrowser )]. We have developed a data browsing and visualization application capable of navigating increasingly large and complex datasets generated in the context of immunological studies. This intuitive tool ensures that, whether taken individually or as a whole, such datasets generated at great effort and expense remain interpretable and a ready source of insight for years to come.

  13. A large-scale perspective on ecosystems

    NASA Technical Reports Server (NTRS)

    Mizutani, Hiroshi

    1987-01-01

    Interactions between ecological elements must be better understood in order to construct an ecological life support system in space. An index was devised to describe the complexity of material cyclings within a given ecosystem. It was then applied to the cyclings of bioelements in various systems of material cyclings including the whole Earth and national economies. The results show interesting characteristics of natural and man-made systems.

  14. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented, enabling dramatic increases in the fidelity and speed with which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene/P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes, exceeding several hundred million individuals in the largest cases, are successfully exercised to verify model scalability.
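
    The discrete event formulation can be illustrated, very roughly, by a serial event-queue simulation of stochastic SIR spread on a contact graph; the rates, ring-shaped contact graph, and variable names below are invented for the sketch, and the actual model in the report is a reaction-diffusion formulation executed optimistically in parallel with rollback support.

      # Minimal serial discrete-event SIR simulation: infections and recoveries are
      # scheduled as timestamped events in a priority queue instead of being advanced
      # in fixed time steps. Parameters are illustrative only; for simplicity a
      # scheduled infection is not cancelled if the infector recovers first.
      import heapq
      import random

      random.seed(1)

      N, BETA, GAMMA = 200, 0.4, 0.1          # population, infection rate, recovery rate
      neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # toy ring contact graph

      state = {i: "S" for i in range(N)}
      events = []                             # entries are (time, kind, node)
      heapq.heappush(events, (0.0, "infect", 0))

      while events:
          t, kind, node = heapq.heappop(events)
          if kind == "infect" and state[node] == "S":
              state[node] = "I"
              heapq.heappush(events, (t + random.expovariate(GAMMA), "recover", node))
              for nb in neighbors[node]:
                  if state[nb] == "S":
                      heapq.heappush(events, (t + random.expovariate(BETA), "infect", nb))
          elif kind == "recover" and state[node] == "I":
              state[node] = "R"

      print(sum(1 for s in state.values() if s == "R"), "individuals were ever infected")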

  15. Complex Behavior of Contaminant Flux and the Ecology of the Lower Mississippi River

    NASA Astrophysics Data System (ADS)

    Barton, C. C.; Manheim, F. T.; De Cola, L.; Bollinger, J. E.; Jenkins, J. A.

    2001-12-01

    This presentation is an overview of a collaborative NSF/USGS/Tulane funded multi-scale study of the Lower Mississippi River system. The study examines the system in three major dimensional realms: space, time, and complexity (systems and their hierarchies). Researchers at Tulane University and the U.S. Geological Survey have initiated a collaborative effort to study the interacting elements which directly or indirectly affect the water quality, ecology and physical condition of the Mississippi River. These researchers include experts in the fields of water quality chemistry, geochemistry, hydrologic modeling, bioengineering, biology, fish ecology, statistics, complexity analysis, epidemiology, and computer science. Underlying this research are large databases that permit quantitative analysis of the system over the past 40 years. Results to date show that discharge variation and contaminant flux scale independently, and that both exhibit fractal scaling, the signature geometry of nonlinear dynamical and complex systems. Public perception is that the Lower Mississippi River is a health hazard, but for the past decade, traditional water quality measurements show that contaminants are within current regulatory guidelines for human consumption. This difference between public perception and scientific reality represents a complex scientific and social issue. The connections and feedback within the ecological system and the Mississippi River are few because engineering structures isolate the lower Mississippi River from its surroundings. Investigation of the connections and feedback between human health and the ecological health of the River and the surrounding region, as well as perceptions of these states of health, holds promise for explaining epidemiological patterns of human disease.

  16. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  17. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first-principle models and solving large systems of equations on highly resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in the dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  18. Framework for Smart Electronic Health Record-Linked Predictive Models to Optimize Care for Complex Digestive Diseases

    DTIC Science & Technology

    2014-07-01

    mucos"x1; N Acquired Abnormality 4.7350 93696 76 0.85771...4. Roden DM, Pulley JM, Basford MA, et al. Development of a large- scale de-identified DNA biobank to enable personalized medicine. Clin Pharmacol...large healthcare system which incorporated clinical information from a 20-hospital setting (both aca- demic and community hospitals) of University of

  19. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    Summary: The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  20. High performance computing applications in neurobiological research

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; Cheng, Rei; Doshay, David G.; Linton, Samuel W.; Montgomery, Kevin; Parnas, Bruce R.

    1994-01-01

    The human nervous system is a massively parallel processor of information. The vast numbers of neurons, synapses and circuits are daunting to those seeking to understand the neural basis of consciousness and intellect. Pervading obstacles are a lack of knowledge of the detailed, three-dimensional (3-D) organization of even a simple neural system and the paucity of large-scale, biologically relevant computer simulations. We use high performance graphics workstations and supercomputers to study the 3-D organization of gravity sensors as a prototype architecture foreshadowing more complex systems. Scaled-down simulations run on a Silicon Graphics workstation and scaled-up, three-dimensional versions run on the Cray Y-MP and CM5 supercomputers.

  1. Problems in merging Earth sensing satellite data sets

    NASA Technical Reports Server (NTRS)

    Smith, Paul H.; Goldberg, Michael J.

    1987-01-01

    Satellite remote sensing systems provide a tremendous source of data flow to the Earth science community. These systems provide scientists with data of types and on a scale previously unattainable. Looking forward to the capabilities of Space Station and the Earth Observing System (EOS), the full realization of the potential of satellite remote sensing will be handicapped by inadequate information systems. There is a growing emphasis in Earth science research to ask questions which are multidisciplinary in nature and global in scale. Many of these research projects emphasize the interactions of the land surface, the atmosphere, and the oceans through various physical mechanisms. Conducting this research requires large and complex data sets and teams of multidisciplinary scientists, often working at remote locations. A review of the problems of merging these large volumes of data into spatially referenced and manageable data sets is presented.

  2. Final Report. Analysis and Reduction of Complex Networks Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marzouk, Youssef M.; Coles, T.; Spantini, A.

    2013-09-30

    The project was a collaborative effort among MIT, Sandia National Laboratories (local PI Dr. Habib Najm), the University of Southern California (local PI Prof. Roger Ghanem), and The Johns Hopkins University (local PI Prof. Omar Knio, now at Duke University). Our focus was the analysis and reduction of large-scale dynamical systems emerging from networks of interacting components. Such networks underlie myriad natural and engineered systems. Examples important to DOE include chemical models of energy conversion processes, and elements of national infrastructure, e.g., electric power grids. Time scales in chemical systems span orders of magnitude, while infrastructure networks feature both local and long-distance connectivity, with associated clusters of time scales. These systems also blend continuous and discrete behavior; examples include saturation phenomena in surface chemistry and catalysis, and switching in electrical networks. Reducing size and stiffness is essential to tractable and predictive simulation of these systems. Computational singular perturbation (CSP) has been effectively used to identify and decouple dynamics at disparate time scales in chemical systems, allowing reduction of model complexity and stiffness. In realistic settings, however, model reduction must contend with uncertainties, which are often greatest in large-scale systems most in need of reduction. Uncertainty is not limited to parameters; one must also address structural uncertainties, e.g., whether a link is present in a network, and the impact of random perturbations, e.g., fluctuating loads or sources. Research under this project developed new methods for the analysis and reduction of complex multiscale networks under uncertainty, by combining computational singular perturbation (CSP) with probabilistic uncertainty quantification. CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty in this context raised fundamentally new issues, e.g., how is the topology of slow manifolds transformed by parametric uncertainty? How should dynamical models be constructed on these uncertain manifolds? To address these questions, we used stochastic spectral polynomial chaos (PC) methods to reformulate uncertain network models and analyzed them using CSP in probabilistic terms. Finding uncertain manifolds involved the solution of stochastic eigenvalue problems, facilitated by projection onto PC bases. These problems motivated us to explore the spectral properties of stochastic Galerkin systems. We also introduced novel methods for rank-reduction in stochastic eigensystems: transformations of an uncertain dynamical system that lead to lower storage and solution complexity. These technical accomplishments are detailed below. This report focuses on the MIT portion of the joint project.

  3. Simulations of Instabilities in Complex Valve and Feed Systems

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Shipman, Jeremy; Cavallo, Peter A.

    2006-01-01

    CFD analyses are playing an increasingly important role in identifying and characterizing flow-induced instabilities in rocket engine test facilities and flight systems. In this paper, we analyze instability mechanisms that range from turbulent pressure fluctuations due to vortex shedding in structurally complex valve systems, to flow resonance in plug cavities, to large-scale pressure fluctuations due to the collapse of cavitation-induced vapor clouds. Furthermore, we discuss simulations of transient behavior related to valve motion that can serve as guidelines for valve scheduling. Such predictions of valve response to varying flow conditions are of crucial importance to engine operation and testing.

  4. Hydrogeologic processes of large-scale tectonomagmatic complexes in Mongolia-southern Siberia and on Mars

    USGS Publications Warehouse

    Komatsu, G.; Dohm, J.M.; Hare, T.M.

    2004-01-01

    Large-scale tectonomagmatic complexes are common on Earth and Mars. Many of these complexes are created or at least influenced by mantle processes, including a wide array of plume types ranging from superplumes to mantle plumes. Among the most prominent complexes, the Mongolian plateau on Earth and the Tharsis bulge on Mars share remarkable similarities in terms of large domal uplifted areas, great rift canyon systems, and widespread volcanism on their surfaces. Water has also played an important role in the development of the two complexes. In general, atmospheric and surface water play a bigger role in the development of the present-day Mongolian plateau than for the Tharsis bulge, as evidenced by highly developed drainages and thick accumulation of sediments in the basins of the Baikal rift system. On the Tharsis bulge, however, water appears to have remained as ground ice except during periods of elevated magmatic activity. Glacial and periglacial processes are well documented for the Mongolian plateau and are also reported for parts of the Tharsis bulge. Ice-magma interactions, which are represented by the formation of subice volcanoes in parts of the Mongolian plateau region, have been reported for the Valles Marineris region of Mars. The complexes are also characterized by cataclysmic floods, but their triggering mechanism may differ: mainly ice-dam failures for the Mongolian plateau and outburst of groundwater for the Tharsis bulge, probably by magma-ice interactions, although ice-dam failures within the Valles Marineris region cannot be ruled out as a possible contributor. Comparative studies of the Mongolian plateau and Tharsis bulge provide excellent opportunities for understanding surface manifestations of plume-driven processes on terrestrial planets and how they interact with hydro-cryospheres. © 2004 Geological Society of America.

  5. Large scale affinity calculations of cyclodextrin host-guest complexes: Understanding the role of reorganization in the molecular recognition process

    PubMed Central

    Wickstrom, Lauren; He, Peng; Gallicchio, Emilio; Levy, Ronald M.

    2013-01-01

    Host-guest inclusion complexes are useful models for understanding the structural and energetic aspects of molecular recognition. Due to their small size relative to much larger protein-ligand complexes, converged results can be obtained rapidly for these systems, thus offering the opportunity to more reliably study fundamental aspects of the thermodynamics of binding. In this work, we have performed a large-scale binding affinity survey of 57 β-cyclodextrin (CD) host-guest systems using the binding energy distribution analysis method (BEDAM) with implicit solvation (OPLS-AA/AGBNP2). Converged estimates of the standard binding free energies are obtained for these systems by employing techniques such as parallel Hamiltonian replica exchange molecular dynamics, conformational reservoirs and multistate free energy estimators. Good agreement with experimental measurements is obtained in terms of both numerical accuracy and affinity rankings. Overall, average effective binding energies reproduce affinity rank ordering better than the calculated binding affinities, even though the calculated binding free energies, which account for effects such as conformational strain and entropy loss upon binding, provide lower root mean square errors when compared to measurements. Interestingly, we find that binding free energies are superior rank-order predictors for a large subset containing the most flexible guests. The results indicate that, while challenging, accurate modeling of reorganization effects can lead to ligand design models of superior predictive power for rank ordering relative to models based only on ligand-receptor interaction energies. PMID:25147485

  6. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large-scale system simulation (LSSS) is a system-level PSCAD model used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate cases of power system protection, renewable energy integration and storage, and load profiles. Protecting the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and power-electronic-based solid state transformers adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition, the Solid State Transformer (SST) limits the fault current at 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable. However, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as incompatible with the long distances involved. Hence, a new protection scheme based on wireless communication is presented in this thesis. Wireless communication is used to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system triggers the FID (fault isolation device), which operates as an electronic circuit breaker (switching off/opening the FIDs). The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.

  7. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
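
    As a rough numerical companion to the nearest-neighbour discussion above, the sketch below builds k-nearest-neighbour ring graphs and computes the Laplacian eigenvalues in which synchronization conditions of this kind are typically expressed; the ring topology and the printed quantities are illustrative assumptions, and the paper's actual margin formula is not reproduced here.

      # Build k-nearest-neighbour ring graphs and compute the Laplacian eigenvalues
      # that enter size-independent synchronization conditions. The use of the
      # algebraic connectivity (lambda_2) and lambda_max as summary quantities is
      # illustrative, not the paper's actual synchronization margin.
      import numpy as np

      def knn_ring_laplacian(n: int, k: int) -> np.ndarray:
          """Laplacian of a ring of n nodes, each linked to its k nearest neighbours per side."""
          A = np.zeros((n, n))
          for i in range(n):
              for offset in range(1, k + 1):
                  A[i, (i + offset) % n] = A[i, (i - offset) % n] = 1.0
          return np.diag(A.sum(axis=1)) - A

      n = 100
      for k in (1, 2, 5, 10, 25):
          eig = np.sort(np.linalg.eigvalsh(knn_ring_laplacian(n, k)))
          # lambda_2 and lambda_max characterize how strongly the nominal coupling acts;
          # a synchronization margin generally depends on both ends of the spectrum.
          print(f"k={k:3d}  lambda_2={eig[1]:8.4f}  lambda_max={eig[-1]:8.4f}")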

  8. Modeling the Effects of Light and Sucrose on In Vitro Propagated Plants: A Multiscale System Analysis Using Artificial Intelligence Technology

    PubMed Central

    Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P.

    2014-01-01

    Background: Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model such large multiscale data sets in plant biology. Methodology and Principal Findings: In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensities and sucrose content/concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and a photon flux of 122–130 µmol m−2 s−1. Conclusions: Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work. PMID:24465829

  9. Modeling the effects of light and sucrose on in vitro propagated plants: a multiscale system analysis using artificial intelligence technology.

    PubMed

    Gago, Jorge; Martínez-Núñez, Lourdes; Landín, Mariana; Flexas, Jaume; Gallego, Pedro P

    2014-01-01

    Plant acclimation is a highly complex process, which cannot be fully understood by analysis at any one specific level (i.e. subcellular, cellular or whole plant scale). Various soft-computing techniques, such as neural networks or fuzzy logic, were designed to analyze complex multivariate data sets and might be used to model such large multiscale data sets in plant biology. In this study we assessed the effectiveness of applying neuro-fuzzy logic to modeling the effects of light intensities and sucrose content/concentration in the in vitro culture of kiwifruit on plant acclimation, by modeling multivariate data from 14 parameters at different biological scales of organization. The model provides insights through application of 14 sets of straightforward rules and indicates that plants with lower stomatal aperture areas and higher photoinhibition and photoprotective status score best for acclimation. The model suggests the best condition for obtaining higher quality acclimatized plantlets is the combination of 2.3% sucrose and a photon flux of 122-130 µmol m(-2) s(-1). Our results demonstrate that artificial intelligence models are not only successful in identifying complex non-linear interactions among variables, by integrating large-scale data sets from different levels of biological organization in a holistic plant systems-biology approach, but can also be used successfully for inferring new results without further experimental work.

  10. New Roles for Occupational Instructors.

    ERIC Educational Resources Information Center

    Campbell, Dale F.

    Changes in the future role of occupational instructors which will be brought about by advances in educational technology are illustrated by the description of the Advanced Instructional System (AIS), a complex approach to occupational training which permits large-scale application of individualized instruction through the use of computer-assisted…

  11. Enabling Large-Scale Biomedical Analysis in the Cloud

    PubMed Central

    Lin, Ying-Chih; Yu, Chin-Sheng; Lin, Yen-Jen

    2013-01-01

    Recent progress in high-throughput instrumentation has led to an astonishing growth in both the volume and complexity of biomedical data collected from various sources. Data at this scale bring serious challenges to storage and computing technologies. Cloud computing is an alternative to crack this nut because it addresses storage and high-performance computing on large-scale data together. This work briefly introduces data-intensive computing systems and summarizes existing cloud-based resources in bioinformatics. These developments and applications should facilitate biomedical research by making this vast amount of diverse data meaningful and usable. PMID:24288665

  12. Empirical Laws in Economics Uncovered Using Methods in Statistical Mechanics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2001-06-01

    In recent years, statistical physicists and computational physicists have determined that physical systems which consist of a large number of interacting particles obey universal "scaling laws" that serve to demonstrate an intrinsic self-similarity operating in such systems. Further, the parameters appearing in these scaling laws appear to be largely independent of the microscopic details. Since economic systems also consist of a large number of interacting units, it is plausible that scaling theory can be usefully applied to economics. To test this possibility using realistic data sets, a number of scientists have begun analyzing economic data using methods of statistical physics [1]. We have found evidence for scaling (and data collapse), as well as universality, in various quantities, and these recent results will be reviewed in this talk--starting with the most recent study [2]. We also propose models that may lead to some insight into these phenomena. These results will be discussed, as well as the overall rationale for why one might expect scaling principles to hold for complex economic systems. The work on which this talk is based was supported by BP, and was carried out in collaboration with L. A. N. Amaral S. V. Buldyrev, D. Canning, P. Cizeau, X. Gabaix, P. Gopikrishnan, S. Havlin, Y. Lee, Y. Liu, R. N. Mantegna, K. Matia, M. Meyer, C.-K. Peng, V. Plerou, M. A. Salinger, and M. H. R. Stanley. [1.] See, e.g., R. N. Mantegna and H. E. Stanley, Introduction to Econophysics: Correlations & Complexity in Finance (Cambridge University Press, Cambridge, 1999). [2.] P. Gopikrishnan, B. Rosenow, V. Plerou, and H. E. Stanley, "Identifying Business Sectors from Stock Price Fluctuations," e-print cond-mat/0011145; V. Plerou, P. Gopikrishnan, L. A. N. Amaral, X. Gabaix, and H. E. Stanley, "Diffusion and Economic Fluctuations," Phys. Rev. E (Rapid Communications) 62, 3023-3026 (2000); P. Gopikrishnan, V. Plerou, X. Gabaix, and H. E. Stanley, "Statistical Properties of Share Volume Traded in Financial Markets," Phys. Rev. E (Rapid Communications) 62, 4493-4496 (2000).
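
    As a generic illustration of the kind of scaling analysis described above, the sketch below estimates a tail exponent from a heavy-tailed sample; the synthetic "firm size" data and the simple log-log fit are assumptions made for this sketch, not the estimators used in the cited studies.

      # Toy illustration: read a scaling exponent off the empirical tail of a
      # heavy-tailed distribution via a log-log fit of the survival function.
      import numpy as np

      rng = np.random.default_rng(0)
      sizes = 1 + rng.pareto(a=1.5, size=50_000)          # synthetic heavy-tailed sample

      sizes_sorted = np.sort(sizes)[::-1]
      rank = np.arange(1, sizes_sorted.size + 1)
      ccdf = rank / sizes_sorted.size                     # empirical P(X >= x)

      # Fit log P(X >= x) ~ -alpha * log x over the upper tail only.
      tail = sizes_sorted > np.quantile(sizes, 0.95)
      slope, intercept = np.polyfit(np.log(sizes_sorted[tail]), np.log(ccdf[tail]), 1)
      print(f"estimated tail exponent alpha ~ {-slope:.2f} (sample generated with alpha = 1.5)")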

  13. A methodology towards virtualisation-based high performance simulation platform supporting multidisciplinary design of complex products

    NASA Astrophysics Data System (ADS)

    Ren, Lei; Zhang, Lin; Tao, Fei; (Luke) Zhang, Xiaolong; Luo, Yongliang; Zhang, Yabin

    2012-08-01

    Multidisciplinary design of complex products leads to an increasing demand for high performance simulation (HPS) platforms. One great challenge is how to achieve highly efficient utilisation of large-scale simulation resources in distributed and heterogeneous environments. This article reports a virtualisation-based methodology to realise a HPS platform. This research is driven by the issues concerning large-scale simulation resources deployment and complex simulation environment construction, efficient and transparent utilisation of fine-grained simulation resources, and highly reliable simulation with fault tolerance. A framework of a virtualisation-based simulation platform (VSIM) is first proposed. Then the article investigates and discusses key approaches in VSIM, including simulation resources modelling, a method for automatically deploying simulation resources for dynamic construction of the system environment, and a live migration mechanism in case of faults in run-time simulation. Furthermore, the proposed methodology is applied to a multidisciplinary design system for aircraft virtual prototyping and some experiments are conducted. The experimental results show that the proposed methodology can (1) significantly improve the utilisation of fine-grained simulation resources, (2) result in a great reduction in deployment time and an increased flexibility for simulation environment construction and (3) achieve fault-tolerant simulation.

  14. Interdisciplinary Team Science in Cell Biology.

    PubMed

    Horwitz, Rick

    2016-11-01

    The cell is complex. With its multitude of components, spatial-temporal character, and gene expression diversity, it is challenging to comprehend the cell as an integrated system and to develop models that predict its behaviors. I suggest an approach to address this issue, involving system level data analysis, large scale team science, and philanthropy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Local dynamics and spatiotemporal chaos. The Kuramoto- Sivashinsky equation: A case study

    NASA Astrophysics Data System (ADS)

    Wittenberg, Ralf Werner

    The nature of spatiotemporal chaos in extended continuous systems is not yet well understood. In this thesis, a model partial differential equation, the Kuramoto-Sivashinsky (KS) equation u_t + u_{xxxx} + u_{xx} + u u_x = 0 on a large one-dimensional periodic domain, is studied analytically, numerically, and through modeling to obtain a more detailed understanding of the observed spatiotemporally complex dynamics. In particular, with the aid of a wavelet decomposition, the relevant dynamical interactions are shown to be localized in space and scale. Motivated by these results, and by the idea that the attractor on a large domain may be understood via attractors on smaller domains, a spatially localized low-dimensional model for a minimal chaotic box is proposed. A (de)stabilized extension of the KS equation has recently attracted increased interest; for this situation, dissipativity and analyticity are proven, and an explicit shock-like solution is constructed which sheds light on the difficulties in obtaining optimal bounds for the KS equation. For the usual KS equation, the spatiotemporally chaotic state is carefully characterized in real, Fourier and wavelet space. The wavelet decomposition provides good scale separation which isolates the three characteristic regions of the dynamics: large scales of slow Gaussian fluctuations, active scales containing localized interactions of coherent structures, and small scales. Space localization is shown through a comparison of various correlation lengths and a numerical experiment in which different modes are uncoupled to estimate a dynamic interaction length. A detailed picture of the contributions of different scales to the spatiotemporally complex dynamics is obtained via a Galerkin projection of the KS equation onto the wavelet basis, and an extensive series of numerical experiments in which different combinations of wavelet levels are eliminated or forced. These results, and a formalism to derive an effective equation for periodized subsystems externally forced from a larger system, motivate various models for spatially localized forced systems. There is convincing evidence that short periodized systems, internally forced at the largest scales, form a minimal model for the observed extensively chaotic dynamics in larger domains.
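
    For readers who want to reproduce the qualitative behavior, a standard pseudo-spectral integration of the KS equation on a periodic domain is sketched below; the domain size, resolution, time step, and first-order exponential Euler scheme are illustrative choices, not the numerical methods used in the thesis, and production studies typically use higher-order schemes such as ETDRK4.

      # Pseudo-spectral integration of the Kuramoto-Sivashinsky equation
      #   u_t + u_{xxxx} + u_{xx} + u*u_x = 0
      # on a periodic domain, with a first-order exponential time-differencing step.
      import numpy as np

      L_dom, n, dt, n_steps = 32 * np.pi, 256, 0.01, 20000     # illustrative parameters

      x = L_dom * np.arange(n) / n
      k = 2 * np.pi * np.fft.fftfreq(n, d=L_dom / n)           # wavenumbers
      lin = k**2 - k**4                                         # linear operator in Fourier space
      dealias = np.abs(k) < (2.0 / 3.0) * np.abs(k).max()       # 2/3-rule mask for the nonlinearity

      u = np.cos(x / 16) * (1 + np.sin(x / 16))                 # smooth initial condition
      u_hat = np.fft.fft(u)

      e_lin = np.exp(dt * lin)
      # (exp(lin*dt) - 1) / lin, with the lin -> 0 limit (= dt) handled explicitly.
      phi = np.where(np.abs(lin) > 1e-12, (e_lin - 1.0) / np.where(lin == 0, 1.0, lin), dt)

      for _ in range(n_steps):
          u = np.real(np.fft.ifft(u_hat))
          nonlin_hat = -0.5j * k * np.fft.fft(u * u) * dealias  # transform of -u*u_x = -(u^2/2)_x
          u_hat = e_lin * u_hat + phi * nonlin_hat              # first-order exponential Euler step

      u_final = np.real(np.fft.ifft(u_hat))
      print(f"after t = {n_steps * dt:.0f}: min u = {u_final.min():.2f}, max u = {u_final.max():.2f}")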

  16. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomenon is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomenon is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small scale problems. These examples give encouraging results. Directions for further research are indicated.

  17. Reaction dynamics analysis of a reconstituted Escherichia coli protein translation system by computational modeling

    PubMed Central

    Matsuura, Tomoaki; Tanimura, Naoki; Hosoda, Kazufumi; Yomo, Tetsuya; Shimizu, Yoshihiro

    2017-01-01

    To elucidate the dynamic features of a biologically relevant large-scale reaction network, we constructed a computational model of minimal protein synthesis consisting of 241 components and 968 reactions that synthesize the Met-Gly-Gly (MGG) peptide, based on an Escherichia coli-based reconstituted in vitro protein synthesis system. We performed a simulation using parameters collected primarily from the literature and found that the rate of MGG peptide synthesis becomes nearly constant within minutes, thus achieving a steady state similar to experimental observations. In addition, the concentration changes of 70% of the components, including intermediates, reached a plateau within a few minutes. However, the concentration change of each component exhibits several temporal plateaus, or a quasi-stationary state (QSS), before reaching the final plateau. To understand these complex dynamics, we focused on whether the components reached a QSS, mapped the arrangement of components in a QSS onto the entire reaction network structure, and investigated time-dependent changes. We found that components in a QSS form clusters that grow over time, but not in a linear fashion, and that this process involves the collapse and regrowth of clusters before the formation of a final large single cluster. These observations might commonly occur in other large-scale biological reaction networks. The developed analysis might be useful for understanding large-scale biological reactions by visualizing complex dynamics, thereby extracting the characteristics of the reaction network, including phase transitions. PMID:28167777
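    One simple way to flag the quasi-stationary states mentioned above is to mark a component as being in a QSS whenever its relative rate of change stays below a tolerance for several consecutive time points. The threshold, window length, and toy trajectory below are assumptions, not the criteria used in the study.

```python
import numpy as np

def qss_mask(t, c, rel_tol=1e-3, window=5):
    """Boolean mask of time points where the concentration c(t) is quasi-stationary."""
    rel_rate = np.abs(np.gradient(c, t)) / np.maximum(np.abs(c), 1e-12)
    below = rel_rate < rel_tol
    # require the slow-change condition to hold over `window` consecutive samples
    return np.convolve(below, np.ones(window), mode="same") >= window

t = np.linspace(0, 10, 500)
c = 1.0 - np.exp(-t) + 0.05 * np.exp(-((t - 5.0) ** 2))    # plateau, transient bump, plateau
print(qss_mask(t, c).sum(), "of", len(t), "time points flagged as QSS")
```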

  18. A laser-sheet flow visualization technique for the large wind tunnels of the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Reinath, M. S.; Ross, J. C.

    1990-01-01

    A flow visualization technique for the large wind tunnels of the National Full Scale Aerodynamics Complex (NFAC) is described. The technique uses a laser sheet generated by the NFAC Long Range Laser Velocimeter (LRLV) to illuminate a smoke-like tracer in the flow. The LRLV optical system is modified slightly, and a scanned mirror is added to generate the sheet. These modifications are described, in addition to the results of an initial performance test conducted in the 80- by 120-Foot Wind Tunnel. During this test, flow visualization was performed in the wake region behind a truck as part of a vehicle drag reduction study. The problems encountered during the test are discussed, in addition to the recommended improvements needed to enhance the performance of the technique for future applications.

  19. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  20. ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.

    PubMed

    Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng

    2017-08-30

    While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets that they used, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture, and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance- and subset-level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.

  1. Photogrammetry of a Hypersonic Inflatable Aerodynamic Decelerator

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Littell, Justin D.; Cassell, Alan M.

    2013-01-01

    In 2012, two large-scale models of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD) were tested in the National Full-Scale Aerodynamics Complex (NFAC) at NASA Ames Research Center. One of the objectives of this test was to measure model deflections under aerodynamic loading that approximated expected flight conditions. The measurements were acquired using stereo photogrammetry. Four pairs of stereo cameras were mounted inside the NFAC test section, each imaging a particular section of the HIAD. The views were then stitched together post-test to create a surface deformation profile. The data from the photogrammetry system will largely be used for comparisons to and refinement of fluid-structure interaction models. This paper describes how a commercial photogrammetry system was adapted to make the measurements and presents some preliminary results.

  2. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and the low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrival and long tailing. Adequate direct representation of FPM requires enormous numerical resolution. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times, to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches reasonably well with the corresponding fine-scale reference solution. For higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases; this is compensated to some extent by the MRMT model, although if the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to the local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
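    The block-wise temporal moments used for the scale transition are straightforward to compute once particle arrival times per block are available. The sketch below uses synthetic arrival times as a placeholder and simply evaluates the low-order moments that would be matched against the MRMT model.

```python
import numpy as np

# Temporal moments (TM) of particle arrival times for one block.
rng = np.random.default_rng(2)
arrivals = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)   # synthetic per-block arrival times

m0 = arrivals.size            # zeroth moment: recovered particle mass (count)
m1 = arrivals.mean()          # first moment: mean arrival time
m2c = arrivals.var()          # second central moment: spreading (effective dispersion)
print(f"mean arrival time {m1:.2f}, second central moment {m2c:.2f}")
```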

  3. Group Decision Support System to Aid the Process of Design and Maintenance of Large Scale Systems

    DTIC Science & Technology

    1992-03-23

    from a fuzzy set of user requirements. The overall objective of the project is to develop a system combining the characteristics of a compact computer... (AHP) for hierarchical prioritization. 4) Individual Evaluation and Selection of Alternatives - allows the decision maker to individually evaluate... its concept of outranking relations. The AHP method supports complex decision problems by successively decomposing and synthesizing various elements
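    For readers unfamiliar with the AHP step mentioned in the fragment above, the usual prioritization computes weights from a reciprocal pairwise-comparison matrix via its principal eigenvector (Saaty's method). The comparison values below are illustrative only.

```python
import numpy as np

# AHP priority weights from a pairwise-comparison matrix.
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])               # how strongly criterion i beats criterion j

eigvals, eigvecs = np.linalg.eig(A)
p = np.argmax(eigvals.real)                      # principal eigenvalue/eigenvector
w = np.abs(eigvecs[:, p].real)
w /= w.sum()                                     # normalized priority weights

ci = (eigvals.real[p] - len(A)) / (len(A) - 1)   # consistency index of the judgments
print("weights:", np.round(w, 3), " CI:", round(ci, 3))
```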

  4. Model Uncertainty Quantification Methods For Data Assimilation In Partially Observed Multi-Scale Systems

    NASA Astrophysics Data System (ADS)

    Pathiraja, S. D.; van Leeuwen, P. J.

    2017-12-01

    Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed, non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
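    The state-conditioned binning idea can be pictured with a toy example: model-error samples are grouped by the value of the previous (resolved) model state, and the per-bin statistics then define a state-dependent stochastic parameterization. The synthetic error model and bin edges below are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
x_prev = rng.uniform(-5, 5, size=5000)                    # previous resolved model state
error = 0.3 * np.sin(x_prev) + 0.1 * rng.normal(size=x_prev.size)   # toy sub-grid error realisations

edges = np.linspace(-5, 5, 21)                            # 20 state bins
idx = np.digitize(x_prev, edges) - 1
mean_err = np.array([error[idx == b].mean() for b in range(20)])
std_err = np.array([error[idx == b].std() for b in range(20)])
# (mean_err, std_err) give a state-conditioned error model usable in the forecast step
```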

  5. Seasonal movement and habitat use by sub-adult bull trout in the upper Flathead River system, Montana

    USGS Publications Warehouse

    Muhlfeld, Clint C.; Marotz, Brian

    2005-01-01

    Despite the importance of large-scale habitat connectivity to the threatened bull trout Salvelinus confluentus, little is known about the life history characteristics and processes influencing natural dispersal of migratory populations. We used radiotelemetry to investigate the seasonal movements and habitat use by subadult bull trout (i.e., fish that emigrated from natal streams to the river system) tracked for varying durations from 1999 to 2002 in the upper Flathead River system in northwestern Montana. Telemetry data revealed migratory (N = 32 fish) and nonmigratory (N = 35 fish) behavior, indicating variable movement patterns in the subadult phase of bull trout life history. Most migrating subadults (84%) made rapid or incremental downriver movements (mean distance, 33 km; range, 6–129 km) to lower portions of the river system and to Flathead Lake during high spring flows and as temperatures declined in the fall and winter. Bull trout subadults used complex daytime habitat throughout the upper river system, including deep runs that contained unembedded boulder and cobble substrates, pools with large woody debris, and deep lake-influenced areas of the lower river system. Our results elucidate the importance of maintaining natural connections and a diversity of complex habitats over a large spatial scale to conserve the full expression of life history traits and processes influencing the natural dispersal of bull trout populations. Managers should seek to restore and enhance critical river corridor habitat and remove migration barriers, where possible, for recovery and management programs.

  6. Investigating the Complex Chemistry of Functional Energy Storage Systems: The Need for an Integrative, Multiscale (Molecular to Mesoscale) Perspective.

    PubMed

    Abraham, Alyson; Housel, Lisa M; Lininger, Christianna N; Bock, David C; Jou, Jeffrey; Wang, Feng; West, Alan C; Marschilok, Amy C; Takeuchi, Kenneth J; Takeuchi, Esther S

    2016-06-22

    Electric energy storage systems such as batteries can significantly impact society in a variety of ways, including facilitating the widespread deployment of portable electronic devices, enabling the use of renewable energy generation for local off grid situations and providing the basis of highly efficient power grids integrated with energy production, large stationary batteries, and the excess capacity from electric vehicles. A critical challenge for electric energy storage is understanding the basic science associated with the gap between the usable output of energy storage systems and their theoretical energy contents. The goal of overcoming this inefficiency is to achieve more useful work (w) and minimize the generation of waste heat (q). Minimization of inefficiency can be approached at the macro level, where bulk parameters are identified and manipulated, with optimization as an ultimate goal. However, such a strategy may not provide insight toward the complexities of electric energy storage, especially the inherent heterogeneity of ion and electron flux contributing to the local resistances at numerous interfaces found at several scale lengths within a battery. Thus, the ability to predict and ultimately tune these complex systems to specific applications, both current and future, demands not just parametrization at the bulk scale but rather specific experimentation and understanding over multiple length scales within the same battery system, from the molecular scale to the mesoscale. Herein, we provide a case study examining the insights and implications from multiscale investigations of a prospective battery material, Fe3O4.

  7. Deployment of ERP Systems at Automotive Industries, Security Inspection (Case Study: IRAN KHODRO Automotive Company)

    NASA Astrophysics Data System (ADS)

    Ali, Hatamirad; Hasan, Mehrjerdi

    The automotive industry and its car production process constitute one of the most complex and large-scale production processes. Today, information technology (IT) and ERP systems support a large portion of these production processes; without integrated systems such as ERP, the production and supply chain processes become tangled. ERP systems, the latest generation of MRP systems, simplify the production and sales processes of these industries and have been a major factor in their development. Many large-scale companies are now developing and deploying ERP systems, which facilitate many organizational processes and help organizations increase efficiency. Security is a very important part of an organization's ERP strategy; because ERP systems are integrated and extensive, their security matters even more than that of local and legacy systems, and disregarding this point can determine the success or failure of such systems. IRANKHODRO is the biggest automotive factory in the Middle East, with an annual production of over 600,000 cars. This paper presents the ERP security deployment experience at the IRANKHODRO Company, which recently took a major step toward further development by launching ERP systems.

  8. Mass-transport deposits and reservoir quality of Upper Cretaceous Chalk within the German Central Graben, North Sea

    NASA Astrophysics Data System (ADS)

    Arfai, Jashar; Lutz, Rüdiger; Franke, Dieter; Gaedicke, Christoph; Kley, Jonas

    2016-04-01

    The architecture of intra-chalk deposits in the 'Entenschnabel' area of the German North Sea is studied based on 3D seismic data. From the seismic reflection characteristics, four types of mass-transport deposits (MTDs) are distinguished, i.e. slumps, slides, channels, and frontal splay deposits. The development of these systems can be linked to inversion tectonics and halotectonic movements of Zechstein salt. Tectonic uplift is interpreted to have caused repeated tilting of the sea floor. This triggered large-scale slump deposition during Turonian-Santonian times. Slump deposits are characterised by chaotic reflection patterns interpreted to result from significant stratal distortion. The south-eastern study area is characterised by a large-scale frontal splay complex. This comprises a network of shallow channel systems arranged in a distributive pattern. Several slide complexes are observed near the Top Chalk in Maastrichtian and Danian sediments. These slides are commonly associated with large incisions into the sediments below. The best reservoir properties with high producible porosities are found in reworked chalk strata (e.g., in the Danish North Sea); therefore, the MTDs detected in the study area are regarded as potential hydrocarbon reservoirs and considered exploration targets.

  9. Managing Network Partitions in Structured P2P Networks

    NASA Astrophysics Data System (ADS)

    Shafaat, Tallat M.; Ghodsi, Ali; Haridi, Seif

    Structured overlay networks form a major class of peer-to-peer systems, which are touted for their abilities to scale, tolerate failures, and self-manage. Any long-lived Internet-scale distributed system is destined to face network partitions. Consequently, the problem of network partitions and mergers is highly related to fault tolerance and self-management in large-scale systems, making resilience to network partitions a crucial requirement for building any structured peer-to-peer system. Nevertheless, the problem has hardly been studied in the context of structured peer-to-peer systems. Structured overlays have mainly been studied under churn (frequent joins/failures), which as a side effect solves the problem of network partitions, as it is similar to massive node failures. Yet, the crucial aspect of network mergers has been ignored. In fact, it has been claimed that ring-based structured overlay networks, which constitute the majority of the structured overlays, are intrinsically ill-suited for merging rings. In this chapter, we motivate the problem of network partitions and mergers in structured overlays. We discuss how a structured overlay can automatically detect a network partition and merger. We present an algorithm for merging multiple similar ring-based overlays when the underlying network merges. We examine the solution in dynamic conditions, showing how our solution is resilient to churn during the merger, something widely believed to be difficult or impossible. We evaluate the algorithm for various scenarios and show that even when falsely detecting a merger, the algorithm quickly terminates and does not clutter the network with many messages. The algorithm is flexible in that the tradeoff between message complexity and time complexity can be adjusted by a parameter.
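    The target invariant of a ring merger is easy to state even though achieving it in a decentralized way is the hard part: after the merge, successor pointers must again follow identifier order over the union of both rings. The sketch below shows only that target invariant with global knowledge; the chapter's algorithm reaches the same state incrementally, under churn, and without any node seeing the whole ring.

```python
# Merge two ring-based overlays by interleaving node identifiers.
def merge_rings(ring_a, ring_b):
    ids = sorted(set(ring_a) | set(ring_b))
    # successor of each node is the next identifier on the merged ring (wrapping around)
    return {n: ids[(i + 1) % len(ids)] for i, n in enumerate(ids)}

successors = merge_rings([4, 17, 33, 52], [9, 21, 40])
print(successors)   # 4 -> 9 -> 17 -> 21 -> 33 -> 40 -> 52 -> 4
```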

  10. Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)

    2002-01-01

    The large test sections of the National Full-Scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of ground-based tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.

  11. Geospatial Optimization of Siting Large-Scale Solar Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  12. Users matter: multi-agent systems model of high performance computing cluster users.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    North, M. J.; Hood, C. S.; Decision and Information Sciences

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  13. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Turbulent and Laminar Flow in Karst Conduits Under Unsteady Flow Conditions: Interpretation of Pumping Tests by Discrete Conduit-Continuum Modeling

    NASA Astrophysics Data System (ADS)

    Giese, M.; Reimann, T.; Bailly-Comte, V.; Maréchal, J.-C.; Sauter, M.; Geyer, T.

    2018-03-01

    Due to the duality in terms of (1) the groundwater flow field and (2) the discharge conditions, the flow patterns of karst aquifer systems are complex. Estimated aquifer parameters may differ by several orders of magnitude from the local (borehole) to the regional (catchment) scale because of the large contrast in hydraulic parameters between matrix and conduit, and because of their heterogeneity and anisotropy. One approach to dealing with the scale-effect problem in the estimation of hydraulic parameters of karst aquifers is the application of large-scale experiments such as long-term, high-abstraction conduit pumping tests, which induce measurable groundwater drawdown in both the karst conduit system and the fractured matrix. The numerical discrete conduit-continuum modeling approach MODFLOW-2005 Conduit Flow Process Mode 1 (CFPM1) is employed to simulate laminar and nonlaminar conduit flow, induced by large-scale experiments, in combination with Darcian matrix flow. Effects of large-scale experiments were simulated for idealized settings. Subsequently, diagnostic plots and analyses of different fluxes are applied to interpret differences in the simulated conduit drawdown and general flow patterns. The main focus is on the extent to which different conduit flow regimes affect the drawdown in conduit and matrix, depending on the hydraulic properties of the conduit system, i.e., conduit diameter and relative roughness. In this context, CFPM1 is applied to investigate the importance of considering turbulent conditions for the simulation of karst conduit flow. This work quantifies the relative error that results from assuming laminar conduit flow for the interpretation of a synthetic large-scale pumping test in karst.

  15. Multi-thread parallel algorithm for reconstructing 3D large-scale porous structures

    NASA Astrophysics Data System (ADS)

    Ju, Yang; Huang, Yaohui; Zheng, Jiangtao; Qian, Xu; Xie, Heping; Zhao, Xi

    2017-04-01

    Geomaterials inherently contain many discontinuous, multi-scale, geometrically irregular pores, forming a complex porous structure that governs their mechanical and transport properties. The development of an efficient reconstruction method for representing porous structures can significantly contribute toward providing a better understanding of the governing effects of porous structures on the properties of porous materials. In order to improve the efficiency of reconstructing large-scale porous structures, a multi-thread parallel scheme was incorporated into the simulated annealing reconstruction method. In the method, four correlation functions, which include the two-point probability function, the linear-path functions for the pore phase and the solid phase, and the fractal system function for the solid phase, were employed for better reproduction of the complex well-connected porous structures. In addition, a random sphere packing method and a self-developed pre-conditioning method were incorporated to cast the initial reconstructed model and select independent interchanging pairs for parallel multi-thread calculation, respectively. The accuracy of the proposed algorithm was evaluated by examining the similarity between the reconstructed structure and a prototype in terms of their geometrical, topological, and mechanical properties. Comparisons of the reconstruction efficiency of porous models with various scales indicated that the parallel multi-thread scheme significantly shortened the execution time for reconstruction of a large-scale well-connected porous model compared to a sequential single-thread procedure.
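    A single-thread caricature of the annealing step may clarify the reconstruction loop that the paper parallelizes: swap a pore/solid voxel pair and accept the swap with the Metropolis rule whenever it brings the two-point probability function closer to the target. The 1-D setting, cooling schedule, and porosity below are simplifications, and only one of the paper's four correlation functions is used.

```python
import numpy as np

rng = np.random.default_rng(4)

def two_point(img, max_lag=20):
    """S2(r): probability that two sites a distance r apart are both pore (True)."""
    return np.array([np.mean(img * np.roll(img, r)) for r in range(max_lag)])

target = rng.random(400) < 0.3                  # prototype structure (porosity ~0.3)
current = rng.permutation(target)               # same porosity, randomized arrangement
S2_ref = two_point(target)

T = 1e-3
for _ in range(10_000):
    i, j = rng.integers(0, current.size, size=2)
    if current[i] == current[j]:
        continue
    trial = current.copy()
    trial[i], trial[j] = trial[j], trial[i]
    dE = np.sum((two_point(trial) - S2_ref) ** 2) - np.sum((two_point(current) - S2_ref) ** 2)
    if dE < 0 or rng.random() < np.exp(-dE / T):     # Metropolis acceptance
        current = trial
    T *= 0.9995                                      # slow cooling
print("final misfit:", float(np.sum((two_point(current) - S2_ref) ** 2)))
```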

  16. Large system change challenges: addressing complex critical issues in linked physical and social domains

    NASA Astrophysics Data System (ADS)

    Waddell, Steve; Cornell, Sarah; Hsueh, Joe; Ozer, Ceren; McLachlan, Milla; Birney, Anna

    2015-04-01

    Most action to address contemporary complex challenges, including the urgent issues of global sustainability, occurs piecemeal and without meaningful guidance from leading complex change knowledge and methods. The potential benefit of using such knowledge is greater efficacy of effort and investment. However, this knowledge and its associated tools and methods are under-utilized because understanding about them is low, fragmented between diverse knowledge traditions, and often requires shifts in mindsets and skills from expert-led to participant-based action. We have been engaged in diverse action-oriented research efforts in Large System Change for sustainability. For us, "large" systems can be characterized as large-scale systems - up to global - with many components, of many kinds (physical, biological, institutional, cultural/conceptual), operating at multiple levels, driven by multiple forces, and presenting major challenges for people involved. We see change of such systems as complex challenges, in contrast with simple or complicated problems, or chaotic situations. In other words, issues and sub-systems have unclear boundaries, interact with each other, and are often contradictory; dynamics are non-linear; issues are not "controllable", and "solutions" are "emergent" and often paradoxical. Since choices are opportunity-, power- and value-driven, these social, institutional and cultural factors need to be made explicit in any actionable theory of change. Our emerging network is sharing and building a knowledge base of experience, heuristics, and theories of change from multiple disciplines and practice domains. We will present our views on focal issues for the development of the field of large system change, which include processes of goal-setting and alignment; leverage of systemic transitions and transformation; and the role of choice in influencing critical change processes, when only some sub-systems or levels of the system behave in purposeful ways, while others are undeniably and unavoidably deterministic.

  17. Patterns and Variation in Benthic Biodiversity in a Large Marine Ecosystem.

    PubMed

    Piacenza, Susan E; Barner, Allison K; Benkwitt, Cassandra E; Boersma, Kate S; Cerny-Chipman, Elizabeth B; Ingeman, Kurt E; Kindinger, Tye L; Lee, Jonathan D; Lindsley, Amy J; Reimer, Jessica N; Rowe, Jennifer C; Shen, Chenchen; Thompson, Kevin A; Thurman, Lindsey L; Heppell, Selina S

    2015-01-01

    While there is a persistent inverse relationship between latitude and species diversity across many taxa and ecosystems, deviations from this norm offer an opportunity to understand the conditions that contribute to large-scale diversity patterns. Marine systems, in particular, provide such an opportunity, as marine diversity does not always follow a strict latitudinal gradient, perhaps because several hypothesized drivers of the latitudinal diversity gradient are uncorrelated in marine systems. We used a large scale public monitoring dataset collected over an eight year period to examine benthic marine faunal biodiversity patterns for the continental shelf (55-183 m depth) and slope habitats (184-1280 m depth) off the US West Coast (47°20'N-32°40'N). We specifically asked whether marine biodiversity followed a strict latitudinal gradient, and if these latitudinal patterns varied across depth, in different benthic substrates, and over ecological time scales. Further, we subdivided our study area into three smaller regions to test whether coast-wide patterns of biodiversity held at regional scales, where local oceanographic processes tend to influence community structure and function. Overall, we found complex patterns of biodiversity on both the coast-wide and regional scales that differed by taxonomic group. Importantly, marine biodiversity was not always highest at low latitudes. We found that latitude, depth, substrate, and year were all important descriptors of fish and invertebrate diversity. Invertebrate richness and taxonomic diversity were highest at high latitudes and in deeper waters. Fish richness also increased with latitude, but exhibited a hump-shaped relationship with depth, increasing with depth up to the continental shelf break, ~200 m depth, and then decreasing in deeper waters. We found relationships between fish taxonomic and functional diversity and latitude, depth, substrate, and time at the regional scale, but not at the coast-wide scale, suggesting that coast-wide patterns can obscure important correlates at smaller scales. Our study provides insight into complex diversity patterns of the deep water soft substrate benthic ecosystems off the US West Coast.

  18. Geomorphic analysis of large alluvial rivers

    NASA Astrophysics Data System (ADS)

    Thorne, Colin R.

    2002-05-01

    Geomorphic analysis of a large river presents particular challenges and requires a systematic and organised approach because of the spatial scale and system complexity involved. This paper presents a framework and blueprint for geomorphic studies of large rivers developed in the course of basic, strategic and project-related investigations of a number of large rivers. The framework demonstrates the need to begin geomorphic studies early in the pre-feasibility stage of a river project and carry them through to implementation and post-project appraisal. The blueprint breaks down the multi-layered and multi-scaled complexity of a comprehensive geomorphic study into a number of well-defined and semi-independent topics, each of which can be performed separately to produce a clearly defined, deliverable product. Geomorphology increasingly plays a central role in multi-disciplinary river research and the importance of effective quality assurance makes it essential that audit trails and quality checks are hard-wired into study design. The structured approach presented here provides output products and production trails that can be rigorously audited, ensuring that the results of a geomorphic study can stand up to the closest scrutiny.

  19. Development of the US3D Code for Advanced Compressible and Reacting Flow Simulations

    NASA Technical Reports Server (NTRS)

    Candler, Graham V.; Johnson, Heath B.; Nompelis, Ioannis; Subbareddy, Pramod K.; Drayna, Travis W.; Gidzak, Vladimyr; Barnhardt, Michael D.

    2015-01-01

    Aerothermodynamics and hypersonic flows involve complex multi-disciplinary physics, including finite-rate gas-phase kinetics, finite-rate internal energy relaxation, gas-surface interactions with finite-rate oxidation and sublimation, transition to turbulence, large-scale unsteadiness, shock-boundary layer interactions, fluid-structure interactions, and thermal protection system ablation and thermal response. Many of the flows have a large range of length and time scales, requiring large computational grids, implicit time integration, and large solution run times. The University of Minnesota NASA US3D code was designed for the simulation of these complex, highly-coupled flows. It has many of the features of the well-established DPLR code, but uses unstructured grids and has many advanced numerical capabilities and physical models for multi-physics problems. The main capabilities of the code are described, the physical modeling approaches are discussed, the different types of numerical flux functions and time integration approaches are outlined, and the parallelization strategy is overviewed. Comparisons between US3D and the NASA DPLR code are presented, and several advanced simulations are presented to illustrate some of the novel features of the code.

  20. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full-scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells should be small and the calculation should be transient with a small time step size. For full-scale systems, these requirements lead to very large meshes and very long calculation times, so that simulation is difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by a coarser mesh and a longer time step. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.
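    One common (though not the only) source of the "small cells need small time steps" coupling noted above is the Courant condition for explicit transient schemes, which ties the maximum stable time step to the cell size and the local velocity. The numbers below are purely illustrative.

```python
# Courant-number estimate of the maximum stable explicit time step.
u, dx, C_max = 5.0, 0.01, 1.0          # gas velocity [m/s], cell size [m], target Courant number
dt_max = C_max * dx / u
print(f"max stable time step ~ {dt_max * 1e3:.1f} ms")   # halving dx halves dt_max
```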

  1. To manage inland fisheries is to manage at the social-ecological watershed scale.

    PubMed

    Nguyen, Vivian M; Lynch, Abigail J; Young, Nathan; Cowx, Ian G; Beard, T Douglas; Taylor, William W; Cooke, Steven J

    2016-10-01

    Approaches to managing inland fisheries vary between systems and regions but are often based on large-scale marine fisheries principles and thus limited and outdated. Rarely do they adopt holistic approaches that consider the complex interplay among humans, fish, and the environment. We argue that there is an urgent need for a shift in inland fisheries management towards holistic and transdisciplinary approaches that embrace the principles of social-ecological systems at the watershed scale. The interconnectedness of inland fisheries with their associated watershed (biotic, abiotic, and humans) make them extremely complex and challenging to manage and protect. For this reason, the watershed is a logical management unit. To assist management at this scale, we propose a framework that integrates disparate concepts and management paradigms to facilitate inland fisheries management and sustainability. We contend that inland fisheries need to be managed as social-ecological watershed system (SEWS). The framework supports watershed-scale and transboundary governance to manage inland fisheries, and transdisciplinary projects and teams to ensure relevant and applicable monitoring and research. We discuss concepts of social-ecological feedback and interactions of multiple stressors and factors within/between the social-ecological systems. Moreover, we emphasize that management, monitoring, and research on inland fisheries at the watershed scale are needed to ensure long-term sustainable and resilient fisheries. Copyright © 2016. Published by Elsevier Ltd.

  2. High performance computing in biology: multimillion atom simulations of nanoscale systems

    PubMed Central

    Sanbonmatsu, K. Y.; Tung, C.-S.

    2007-01-01

    Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988

  3. Supporting Knowledge Transfer in IS Deployment Projects

    NASA Astrophysics Data System (ADS)

    Schönström, Mikael

    Deploying new information systems is an expensive and complex task, and seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems, and technology is needed in a large-scale deployment of a corporate IS. Such deployments can therefore to a large extent be regarded as a knowledge management (KM) problem. An effective deployment requires that knowledge about the system is effectively transferred to the target organization (Ko et al. 2005).

  4. Controllability of multiplex, multi-time-scale networks

    NASA Astrophysics Data System (ADS)

    Pósfai, Márton; Gao, Jianxi; Cornelius, Sean P.; Barabási, Albert-László; D'Souza, Raissa M.

    2016-09-01

    The paradigm of layered networks is used to describe many real-world systems, from biological networks to social organizations and transportation systems. While recently there has been much progress in understanding the general properties of multilayer networks, our understanding of how to control such systems remains limited. One fundamental aspect that makes this endeavor challenging is that each layer can operate at a different time scale; thus, we cannot directly apply standard ideas from structural control theory of individual networks. Here we address the problem of controlling multilayer and multi-time-scale networks focusing on two-layer multiplex networks with one-to-one interlayer coupling. We investigate the practically relevant case when the control signal is applied to the nodes of one layer. We develop a theory based on disjoint path covers to determine the minimum number of inputs (Ni) necessary for full control. We show that if both layers operate on the same time scale, then the network structure of both layers equally affect controllability. In the presence of time-scale separation, controllability is enhanced if the controller interacts with the faster layer: Ni decreases as the time-scale difference increases up to a critical time-scale difference, above which Ni remains constant and is completely determined by the faster layer. We show that the critical time-scale difference is large if layer I is easy and layer II is hard to control in isolation. In contrast, control becomes increasingly difficult if the controller interacts with the layer operating on the slower time scale and increasing time-scale separation leads to increased Ni, again up to a critical value, above which Ni still depends on the structure of both layers. This critical value is largely determined by the longest path in the faster layer that does not involve cycles. By identifying the underlying mechanisms that connect time-scale difference and controllability for a simplified model, we provide crucial insight into disentangling how our ability to control real interacting complex systems is affected by a variety of sources of complexity.
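    For a single directed network, the minimum number of independent inputs can be computed from a maximum matching (N_i = max(1, N - |matching|), in the sense of structural controllability); the paper's contribution is the extension of such path-cover arguments to multiplex, multi-time-scale networks. The sketch below shows only the single-layer matching computation on an illustrative graph, using networkx's bipartite matching.

```python
import networkx as nx
from networkx.algorithms import bipartite

def minimum_inputs(digraph):
    """Minimum number of control inputs for one directed layer via maximum matching."""
    # bipartite representation: out-copies of nodes on one side, in-copies on the other
    B = nx.Graph()
    out_nodes = [f"out_{u}" for u in digraph]
    B.add_nodes_from(out_nodes, bipartite=0)
    B.add_nodes_from((f"in_{v}" for v in digraph), bipartite=1)
    B.add_edges_from((f"out_{u}", f"in_{v}") for u, v in digraph.edges())
    matching = bipartite.maximum_matching(B, top_nodes=out_nodes)
    matched = sum(1 for node in matching if node.startswith("out_"))   # matched edges
    return max(1, digraph.number_of_nodes() - matched)

G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (2, 5)])
print(minimum_inputs(G))   # -> 2 independent control signals for this toy layer
```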

  5. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  6. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  7. COMPUTATIONAL METHODOLOGIES for REAL-SPACE STRUCTURAL REFINEMENT of LARGE MACROMOLECULAR COMPLEXES

    PubMed Central

    Goh, Boon Chong; Hadden, Jodi A.; Bernardi, Rafael C.; Singharoy, Abhishek; McGreevy, Ryan; Rudack, Till; Cassidy, C. Keith; Schulten, Klaus

    2017-01-01

    The rise of the computer as a powerful tool for model building and refinement has revolutionized the field of structure determination for large biomolecular systems. Despite the wide availability of robust experimental methods capable of resolving structural details across a range of spatiotemporal resolutions, computational hybrid methods have the unique ability to integrate the diverse data from multimodal techniques such as X-ray crystallography and electron microscopy into consistent, fully atomistic structures. Here, commonly employed strategies for computational real-space structural refinement are reviewed, and their specific applications are illustrated for several large macromolecular complexes: ribosome, virus capsids, chemosensory array, and photosynthetic chromatophore. The increasingly important role of computational methods in large-scale structural refinement, along with current and future challenges, is discussed. PMID:27145875

  8. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models

    PubMed Central

    Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.

    2007-01-01

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921

  9. Research Notes - An Introduction to Openness and Evolvability Assessment

    DTIC Science & Technology

    2016-08-01

    importance of different business and technical characteristics that combine to achieve an open solution. The complexity of most large-scale systems of... process characteristic); Granularity of the architecture (size of functional blocks); Modularity (cohesion and coupling); Support for multiple... Description); OV-3 (Operational Information Exchange Matrix); SV-1 (Systems Interface Description); TV-1 (Technical Standards Profile). Note that there

  10. What Is a Complex Innovation System?

    PubMed Central

    Katz, J. Sylvan

    2016-01-01

    Innovation systems are sometimes referred to as complex systems, something that is intuitively understood but poorly defined. A complex system dynamically evolves in non-linear ways, giving it unique properties that distinguish it from other systems. In particular, a common signature of complex systems is scale-invariant emergent properties. A scale-invariant property can be identified because it is solely described by a power law function, f(x) = kx^α, where the exponent, α, is a measure of scale-invariance. The focus of this paper is to describe and illustrate that innovation systems have properties of a complex adaptive system, in particular scale-invariant emergent properties indicative of their complex nature that can be quantified and used to inform public policy. The global research system is an example of an innovation system. Peer-reviewed publications containing knowledge are a characteristic output. Citations or references to these articles are an indirect measure of the impact the knowledge has on the research community. Peer-reviewed papers indexed in Scopus and in the Web of Science were used as data sources to produce measures of sizes and impact. These measures are used to illustrate how scale-invariant properties can be identified and quantified. It is demonstrated that the distribution of impact has a reasonable likelihood of being scale-invariant, with scaling exponents that tended toward a value of less than 3.0 with the passage of time and decreasing group sizes. Scale-invariant correlations are shown between the evolution of impact and size with time, and between field impact and sizes at points in time. The recursive or self-similar nature of scale-invariance suggests that any smaller innovation system within the global research system is likely to be complex with scale-invariant properties too. PMID:27258040
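    The scaling exponent α in f(x) = kx^α can be estimated, at its simplest, by a straight-line fit in log-log space; careful analyses also test goodness of fit and alternative distributions. The synthetic data below stand in for, e.g., a citation-impact distribution.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.arange(1, 200, dtype=float)
f = 3.0 * x ** -2.1 * np.exp(0.05 * rng.normal(size=x.size))   # noisy power law, alpha = -2.1

slope, intercept = np.polyfit(np.log(x), np.log(f), 1)          # linear fit in log-log space
alpha, k = slope, np.exp(intercept)
print(f"estimated alpha = {alpha:.2f}, k = {k:.2f}")
```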

  11. Managing Complexity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Posse, Christian; Malard, Joel M.

    2004-08-01

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically-based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This paper explores the state of the art in the use of physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies for them. In particular, we review and discuss applications of some analytic methods based on the thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood.

  12. Heuristic decomposition for non-hierarchic systems

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.; Hajela, P.

    1991-01-01

    Design and optimization is substantially more complex in multidisciplinary and large-scale engineering applications due to the existing inherently coupled interactions. The paper introduces a quasi-procedural methodology for multidisciplinary optimization that is applicable for nonhierarchic systems. The necessary decision-making support for the design process is provided by means of an embedded expert systems capability. The method employs a decomposition approach whose modularity allows for implementation of specialized methods for analysis and optimization within disciplines.

  13. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897, with additional support from the Louisiana Board of Regents.

  14. Socio-Technical Perspective on Interdisciplinary Interactions During the Development of Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Daly, Shanna; Baker, Wayne; Papalambros, Panos; Seifert, Colleen

    2013-01-01

    This study investigates interdisciplinary interactions that take place during the research, development, and early conceptual design phases in the design of large-scale complex engineered systems (LaCES) such as aerospace vehicles. These interactions, which take place throughout a large engineering development organization, become the initial conditions of the systems engineering process that ultimately leads to the development of a viable system. This paper summarizes some of the challenges and opportunities regarding social and organizational issues that emerged from a qualitative study using ethnographic and survey data. The analysis reveals several socio-technical couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their benefits to the engineered system, as well as substantial challenges in interdisciplinary interactions. Noted benefits included enhanced knowledge and problem mitigation, while noted obstacles centered on organizational and human dynamics. Findings suggest that addressing the social challenges may be a critical need in enabling interdisciplinary interactions.

  15. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284
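
    The paper's closed-form network-wide condition is not reproduced here, but the classic two-bus example below (a sketch with arbitrary per-unit values) exhibits the same phenomenon: as the load grows, the high- and low-voltage power-flow solutions approach each other and disappear at the nose point, the point of voltage collapse.

      # Textbook two-bus illustration of voltage collapse (not the paper's
      # network-wide condition): a source E behind reactance X feeding a load P+jQ.
      # The load voltage V satisfies  V^4 + (2QX - E^2)V^2 + X^2(P^2+Q^2) = 0,
      # and real solutions disappear at the "nose" of the PV curve.
      import numpy as np

      E, X, tan_phi = 1.0, 0.5, 0.2     # per-unit source voltage, line reactance, Q/P ratio

      def load_voltages(P):
          Q = tan_phi * P
          b = 2 * Q * X - E**2
          disc = b**2 - 4 * X**2 * (P**2 + Q**2)
          if disc < 0:
              return None               # past the point of collapse
          hi = np.sqrt((-b + np.sqrt(disc)) / 2)
          lo = np.sqrt((-b - np.sqrt(disc)) / 2)
          return hi, lo

      for P in np.linspace(0.0, 1.2, 13):
          sol = load_voltages(P)
          print(f"P = {P:4.2f}  ->",
                "collapse" if sol is None else f"V_high = {sol[0]:.3f}, V_low = {sol[1]:.3f}")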

  16. Studies on combined model based on functional objectives of large scale complex engineering

    NASA Astrophysics Data System (ADS)

    Yuting, Wang; Jingchun, Feng; Jiabao, Sun

    2018-03-01

    Large-scale complex engineering includes various functions, and each function is realized through the completion of one or more projects, so the combinations of projects that affect each function must be identified. Based on the types of project portfolios, the relationship between projects and their functional objectives was analyzed. On that premise, project-portfolio techniques based on the functional objectives of projects were introduced, and the principles governing these techniques were studied and formulated. In addition, the processes for combining projects were also constructed. With the help of project-portfolio techniques based on the functional objectives of projects, our findings lay a good foundation for the portfolio management of large-scale complex engineering.

  17. Epidemic outbreaks in complex heterogeneous networks

    NASA Astrophysics Data System (ADS)

    Moreno, Y.; Pastor-Satorras, R.; Vespignani, A.

    2002-04-01

    We present a detailed analytical and numerical study for the spreading of infections with acquired immunity in complex population networks. We show that the large connectivity fluctuations usually found in these networks strengthen considerably the incidence of epidemic outbreaks. Scale-free networks, which are characterized by diverging connectivity fluctuations in the limit of a very large number of nodes, exhibit the lack of an epidemic threshold and always show a finite fraction of infected individuals. This particular weakness, observed also in models without immunity, defines a new epidemiological framework characterized by a highly heterogeneous response of the system to the introduction of infected individuals with different connectivity. The understanding of epidemics in complex networks might deliver new insights into the spread of information and diseases in biological and technological networks that often appear to be characterized by complex heterogeneous architectures.
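
    As a concrete check of the vanishing-threshold argument, the sketch below (assuming a Barabasi-Albert graph as a stand-in for a scale-free network; not the authors' code) evaluates the mean-field epidemic threshold <k>/<k^2> and shows it shrinking as the network grows.

      # Minimal sketch: the mean-field epidemic threshold lambda_c = <k>/<k^2>
      # shrinks with network size for a heavy-tailed degree distribution.
      import networkx as nx
      import numpy as np

      for n in (10_000, 100_000):
          G = nx.barabasi_albert_graph(n, m=3, seed=1)
          k = np.array([d for _, d in G.degree()], dtype=float)
          lam_c = k.mean() / (k**2).mean()
          print(f"N = {n:>7}: <k> = {k.mean():.2f}, <k^2> = {(k**2).mean():.1f}, lambda_c = {lam_c:.4f}")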

  18. 3-D visualisation and interpretation of seismic attributes extracted from large 3-D seismic datasets: Subregional and prospect evaluation, deepwater Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sola, M.; Haakon Nordby, L.; Dailey, D.V.

    High resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect specific scale areas. Integrated workstation generated structure, isopach and extracted horizon consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphical prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system with attribute tracking capability that allows for real time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances the data interpretation through clear representation of relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrating high resolution 3-D visualization and seismic workstation interpretation.

  19. 3-D visualisation and interpretation of seismic attributes extracted from large 3-D seismic datasets: Subregional and prospect evaluation, deepwater Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sola, M.; Haakon Nordby, L.; Dailey, D.V.

    High resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect specific scale areas. Integrated workstation generated structure, isopach and extracted horizon consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphical prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system with attribute tracking capability that allows for real time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances the data interpretation through clear representation of relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrating high resolution 3-D visualization and seismic workstation interpretation.

  20. Emergence of universal scaling in financial markets from mean-field dynamics

    NASA Astrophysics Data System (ADS)

    Vikram, S. V.; Sinha, Sitabhra

    2011-01-01

    Collective phenomena with universal properties have been observed in many complex systems with a large number of components. Here we present a microscopic model of the emergence of scaling behavior in such systems, where the interaction dynamics between individual components is mediated by a global variable making the mean-field description exact. Using the example of financial markets, we show that asset price can be such a global variable with the critical role of coordinating the actions of agents who are otherwise independent. The resulting model accurately reproduces empirical properties such as the universal scaling of the price fluctuation and volume distributions, long-range correlations in volatility, and multiscaling.

  1. Managing large-scale workflow execution from resource provisioning to provenance tracking: The CyberShake example

    USGS Publications Warehouse

    Deelman, E.; Callaghan, S.; Field, E.; Francoeur, H.; Graves, R.; Gupta, N.; Gupta, V.; Jordan, T.H.; Kesselman, C.; Maechling, P.; Mehringer, J.; Mehta, G.; Okaya, D.; Vahi, K.; Zhao, L.

    2006-01-01

    This paper discusses the process of building an environment where large-scale, complex, scientific analysis can be scheduled onto a heterogeneous collection of computational and storage resources. The example application is the Southern California Earthquake Center (SCEC) CyberShake project, an analysis designed to compute probabilistic seismic hazard curves for sites in the Los Angeles area. We explain which software tools were used to build the system and describe their functionality and interactions. We show the results of running the CyberShake analysis that included over 250,000 jobs using resources available through SCEC and the TeraGrid. © 2006 IEEE.

  2. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.

  3. Classification of time series patterns from complex dynamic systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schryver, J.C.; Rao, N.

    1998-07-01

    An increasing availability of high-performance computing and data storage media at decreasing cost is making possible the proliferation of large-scale numerical databases and data warehouses. Numeric warehousing enterprises on the order of hundreds of gigabytes to terabytes are a reality in many fields such as finance, retail sales, process systems monitoring, biomedical monitoring, surveillance and transportation. Large-scale databases are becoming more accessible to larger user communities through the internet, web-based applications and database connectivity. Consequently, most researchers now have access to a variety of massive datasets. This trend will probably only continue to grow over the next several years. Unfortunately, the availability of integrated tools to explore, analyze and understand the data warehoused in these archives is lagging far behind the ability to gain access to the same data. In particular, locating and identifying patterns of interest in numerical time series data is an increasingly important problem for which there are few available techniques. Temporal pattern recognition poses many interesting problems in classification, segmentation, prediction, diagnosis and anomaly detection. This research focuses on the problem of classification or characterization of numerical time series data. Highway vehicles and their drivers are examples of complex dynamic systems (CDS) which are being used by transportation agencies for field testing to generate large-scale time series datasets. Tools for effective analysis of numerical time series in databases generated by highway vehicle systems are not yet available, or have not been adapted to the target problem domain. However, analysis tools from similar domains may be adapted to the problem of classification of numerical time series data.
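
    A minimal sketch of the characterization task described above, assuming synthetic traces, generic summary features, and an off-the-shelf classifier; it is illustrative only and not the report's tooling.

      # Hypothetical sketch: summary features extracted from raw numerical time
      # series (synthetic "vehicle" traces) feed a standard classifier.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def make_trace(label, n=500):
          t = np.arange(n)
          base = np.sin(0.02 * t) if label == 0 else np.sin(0.05 * t)   # two driving regimes
          return base + 0.3 * rng.standard_normal(n)

      def features(x):
          return [x.mean(), x.std(), np.abs(np.diff(x)).mean(), x.max() - x.min()]

      y = np.repeat([0, 1], 200)
      X = np.array([features(make_trace(lbl)) for lbl in y])
      scores = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0), X, y, cv=5)
      print("mean CV accuracy:", scores.mean().round(3))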

  4. Nanoparticles from renewable polymers

    PubMed Central

    Wurm, Frederik R.; Weiss, Clemens K.

    2014-01-01

    The use of polymers from natural resources can bring many benefits for novel polymeric nanoparticle systems. Such polymers have a variety of beneficial properties, such as biodegradability and biocompatibility, and they are readily available on a large scale and at low cost. As the supply of fossil fuels decreases, their application becomes more attractive, even though characterization is in many cases more challenging due to structural complexity, arising either from a broad distribution of molecular weights (polysaccharides, polyesters, lignin) or from complex structure (proteins, lignin). This review summarizes different sources and methods for the preparation of biopolymer-based nanoparticle systems for various applications. PMID:25101259

  5. Eyjafjallajökull and 9/11: The Impact of Large-Scale Disasters on Worldwide Mobility

    PubMed Central

    Woolley-Meza, Olivia; Grady, Daniel; Thiemann, Christian; Bagrow, James P.; Brockmann, Dirk

    2013-01-01

    Large-scale disasters that interfere with globalized socio-technical infrastructure, such as mobility and transportation networks, trigger high socio-economic costs. Although the origin of such events is often geographically confined, their impact reverberates through entire networks in ways that are poorly understood, difficult to assess, and even more difficult to predict. We investigate how the eruption of volcano Eyjafjallajökull, the September 11th terrorist attacks, and geographical disruptions in general interfere with worldwide mobility. To do this we track changes in effective distance in the worldwide air transportation network from the perspective of individual airports. We find that universal features exist across these events: airport susceptibilities to regional disruptions follow similar, strongly heterogeneous distributions that lack a scale. On the other hand, airports are more uniformly susceptible to attacks that target the most important hubs in the network, exhibiting a well-defined scale. The statistical behavior of susceptibility can be characterized by a single scaling exponent. Using scaling arguments that capture the interplay between individual airport characteristics and the structural properties of routes we can recover the exponent for all types of disruption. We find that the same mechanisms responsible for efficient passenger flow may also keep the system in a vulnerable state. Our approach can be applied to understand the impact of large, correlated disruptions in financial systems, ecosystems and other systems with a complex interaction structure between heterogeneous components. PMID:23950904
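
    The effective-distance idea can be sketched on a toy flux network as below, using the commonly cited definition d = 1 - ln P for the per-link distance and a shortest path for node-to-node values; the routes and passenger counts are invented for illustration and are not the paper's data.

      # Illustration of effective distance on a toy flux network.
      import math
      import networkx as nx

      flux = {("HUB", "A"): 500, ("HUB", "B"): 300, ("HUB", "C"): 200,
              ("A", "D"): 100, ("B", "D"): 50, ("C", "D"): 10}

      out_total = {}
      for (src, dst), f in flux.items():
          out_total[src] = out_total.get(src, 0) + f

      G = nx.DiGraph()
      for (src, dst), f in flux.items():
          P = f / out_total[src]                       # fraction of traffic leaving src toward dst
          G.add_edge(src, dst, weight=1.0 - math.log(P))

      d_eff = nx.shortest_path_length(G, source="HUB", weight="weight")
      print(d_eff)   # strongly served routes are "effectively closer" to the hub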

  6. Networking at Conferences: Developing Your Professional Support System

    ERIC Educational Resources Information Center

    Kowalsky, Michelle

    2012-01-01

    The complexity and scale of any large library, education, or technology conference can sometimes be overwhelming. Therefore, spending time reviewing the conference program and perusing the workshop offerings in advance can help you stay organized and make the most of your time at the event. Planning in advance will help you manage potential time…

  7. Design and Evaluation of Simulations for the Development of Complex Decision-Making Skills.

    ERIC Educational Resources Information Center

    Hartley, Roger; Varley, Glen

    2002-01-01

    Command and Control Training Using Simulation (CACTUS) is a computer digital mapping system used by police to manage large-scale public events. Audio and video records of adaptive training scenarios using CACTUS show how the simulation develops decision-making skills for strategic and tactical event management. (SK)

  8. Enabling Controlling Complex Networks with Local Topological Information.

    PubMed

    Li, Guoqi; Deng, Lei; Xiao, Gaoxi; Tang, Pei; Wen, Changyun; Hu, Wuhua; Pei, Jing; Shi, Luping; Stanley, H Eugene

    2018-03-15

    Complex networks characterize the nature of internal/external interactions in real-world systems including social, economic, biological, ecological, and technological networks. Two issues remain obstacles to achieving full control of large-scale networks: structural controllability, which describes the ability to guide a dynamical system from any initial state to any desired final state in finite time with a suitable choice of inputs; and optimal control, which is a typical control approach to minimize the cost for driving the network to a predefined state with a given number of control inputs. For large complex networks without global information of network topology, both problems remain essentially open. Here we combine graph theory and control theory for tackling the two problems in one go, using only local network topology information. For the structural controllability problem, a distributed local-game matching method is proposed, where every node plays a simple Bayesian game with local information and local interactions with adjacent nodes, ensuring a suboptimal solution at linear complexity. Starting from any structural controllability solution, a minimizing-longest-control-path method can efficiently reach a good solution for the optimal control in large networks. Our results provide solutions for distributed complex network control and demonstrate a way to link structural controllability and optimal control together.
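
    For comparison with the distributed method summarized above, the sketch below computes the standard centralized answer to the structural controllability question, i.e. the minimum number of driver nodes obtained from a maximum matching (the formulation of Liu et al.); it is not the authors' local-game algorithm, and the random network is a placeholder.

      # Minimum driver nodes of a directed network via maximum matching on its
      # bipartite (out-copy / in-copy) representation. Centralized reference
      # calculation, not the paper's distributed method.
      import networkx as nx
      from networkx.algorithms import bipartite

      def n_driver_nodes(D):
          B = nx.Graph()
          outs = {f"out_{u}" for u in D}
          ins = {f"in_{v}" for v in D}
          B.add_nodes_from(outs, bipartite=0)
          B.add_nodes_from(ins, bipartite=1)
          B.add_edges_from((f"out_{u}", f"in_{v}") for u, v in D.edges())
          matching = bipartite.hopcroft_karp_matching(B, top_nodes=outs)
          matched_edges = len(matching) // 2          # the dict stores both directions
          return max(D.number_of_nodes() - matched_edges, 1)

      D = nx.gnp_random_graph(200, 0.02, directed=True, seed=2)
      print("driver nodes needed:", n_driver_nodes(D))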

  9. A hybrid parallel framework for the cellular Potts model simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Yi; He, Kejing; Dong, Shoubin

    2009-01-01

    The Cellular Potts Model (CPM) has been widely used for biological simulations. However, most current implementations are either sequential or approximated, which cannot be used for large-scale complex 3D simulations. In this paper we present a hybrid parallel framework for CPM simulations. The time-consuming PDE solving, cell division, and cell reaction operations are distributed to clusters using the Message Passing Interface (MPI). The Monte Carlo lattice update is parallelized on shared-memory SMP systems using OpenMP. Because the Monte Carlo lattice update is much faster than the PDE solving, and SMP systems are increasingly common, this hybrid approach achieves good performance and high accuracy at the same time. Based on the parallel Cellular Potts Model, we studied avascular tumor growth using a multiscale model. The application and performance analysis show that the hybrid parallel framework is quite efficient. The hybrid parallel CPM can be used for the large-scale simulation (~10^8 sites) of complex collective behavior of numerous cells (~10^6).
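
    A serial toy version of the Monte Carlo lattice update at the heart of the CPM is sketched below (boundary energy plus an area constraint, Metropolis spin-copy attempts); parameters are arbitrary and nothing here reflects the MPI/OpenMP decomposition the paper describes.

      # Serial toy CPM: two "cells" in a medium on a small periodic lattice.
      import numpy as np

      rng = np.random.default_rng(0)
      L, J, lam, T = 50, 1.0, 0.5, 2.0          # lattice size, coupling, area stiffness, temperature
      target_area = (L * L) // 4

      lattice = np.zeros((L, L), dtype=int)
      lattice[5:25, 5:25] = 1
      lattice[28:45, 28:45] = 2
      areas = {s: int((lattice == s).sum()) for s in (1, 2)}

      def boundary_energy(lat, i, j, s):
          """Boundary energy of site (i, j) if it held spin s (periodic boundaries)."""
          return sum(J * (s != lat[(i + di) % L, (j + dj) % L])
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

      def area_energy(s, delta):
          if s == 0:                            # the medium has no area constraint
              return 0.0
          return lam * (areas[s] + delta - target_area) ** 2

      for _ in range(200_000):                  # Metropolis spin-copy attempts
          i, j = rng.integers(0, L, size=2)
          di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
          s_old, s_new = lattice[i, j], lattice[(i + di) % L, (j + dj) % L]
          if s_old == s_new:
              continue
          dE = (boundary_energy(lattice, i, j, s_new) - boundary_energy(lattice, i, j, s_old)
                + area_energy(s_new, +1) - area_energy(s_new, 0)
                + area_energy(s_old, -1) - area_energy(s_old, 0))
          if dE <= 0 or rng.random() < np.exp(-dE / T):
              lattice[i, j] = s_new
              if s_old:
                  areas[s_old] -= 1
              if s_new:
                  areas[s_new] += 1

      print({s: int((lattice == s).sum()) for s in (0, 1, 2)})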

  10. Experiences in running a complex electronic data capture system using mobile phones in a large-scale population trial in southern Nepal.

    PubMed

    Style, Sarah; Beard, B James; Harris-Fry, Helen; Sengupta, Aman; Jha, Sonali; Shrestha, Bhim P; Rai, Anjana; Paudel, Vikas; Thondoo, Meelan; Pulkki-Brannstrom, Anni-Maria; Skordis-Worrall, Jolene; Manandhar, Dharma S; Costello, Anthony; Saville, Naomi M

    2017-01-01

    The increasing availability and capabilities of mobile phones make them a feasible means of data collection. Electronic Data Capture (EDC) systems have been used widely for public health monitoring and surveillance activities, but documentation of their use in complicated research studies requiring multiple systems is limited. This paper shares our experiences of designing and implementing a complex multi-component EDC system for a community-based four-armed cluster-Randomised Controlled Trial in the rural plains of Nepal, to help other researchers planning to use EDC for complex studies in low-income settings. We designed and implemented three interrelated mobile phone data collection systems to enrol and follow-up pregnant women (trial participants), and to support the implementation of trial interventions (women's groups, food and cash transfers). 720 field staff used basic phones to send simple coded text messages, 539 women's group facilitators used Android smartphones with Open Data Kit Collect, and 112 Interviewers, Coordinators and Supervisors used smartphones with CommCare. Barcoded photo ID cards encoded with participant information were generated for each enrolled woman. Automated systems were developed to download, recode and merge data for nearly real-time access by researchers. The systems were successfully rolled out and used by 1371 staff. A total of 25,089 pregnant women were enrolled, and 17,839 follow-up forms completed. Women's group facilitators recorded 5717 women's groups and the distribution of 14,647 food and 13,482 cash transfers. Using EDC sped up data collection and processing, although time needed for programming and set-up delayed the study inception. EDC using three interlinked mobile data management systems (FrontlineSMS, ODK and CommCare) was a feasible and effective method of data capture in a complex large-scale trial in the plains of Nepal. Despite challenges including prolonged set-up times, the systems met multiple data collection needs for users with varying levels of literacy and experience.

  11. Experiences in running a complex electronic data capture system using mobile phones in a large-scale population trial in southern Nepal

    PubMed Central

    Style, Sarah; Beard, B. James; Harris-Fry, Helen; Sengupta, Aman; Jha, Sonali; Shrestha, Bhim P.; Rai, Anjana; Paudel, Vikas; Thondoo, Meelan; Pulkki-Brannstrom, Anni-Maria; Skordis-Worrall, Jolene; Manandhar, Dharma S.; Costello, Anthony; Saville, Naomi M.

    2017-01-01

    ABSTRACT The increasing availability and capabilities of mobile phones make them a feasible means of data collection. Electronic Data Capture (EDC) systems have been used widely for public health monitoring and surveillance activities, but documentation of their use in complicated research studies requiring multiple systems is limited. This paper shares our experiences of designing and implementing a complex multi-component EDC system for a community-based four-armed cluster-Randomised Controlled Trial in the rural plains of Nepal, to help other researchers planning to use EDC for complex studies in low-income settings. We designed and implemented three interrelated mobile phone data collection systems to enrol and follow-up pregnant women (trial participants), and to support the implementation of trial interventions (women’s groups, food and cash transfers). 720 field staff used basic phones to send simple coded text messages, 539 women’s group facilitators used Android smartphones with Open Data Kit Collect, and 112 Interviewers, Coordinators and Supervisors used smartphones with CommCare. Barcoded photo ID cards encoded with participant information were generated for each enrolled woman. Automated systems were developed to download, recode and merge data for nearly real-time access by researchers. The systems were successfully rolled out and used by 1371 staff. A total of 25,089 pregnant women were enrolled, and 17,839 follow-up forms completed. Women’s group facilitators recorded 5717 women’s groups and the distribution of 14,647 food and 13,482 cash transfers. Using EDC sped up data collection and processing, although time needed for programming and set-up delayed the study inception. EDC using three interlinked mobile data management systems (FrontlineSMS, ODK and CommCare) was a feasible and effective method of data capture in a complex large-scale trial in the plains of Nepal. Despite challenges including prolonged set-up times, the systems met multiple data collection needs for users with varying levels of literacy and experience. PMID:28613121

  12. Using systems thinking to support clinical system transformation.

    PubMed

    Best, Allan; Berland, Alex; Herbert, Carol; Bitz, Jennifer; van Dijk, Marlies W; Krause, Christina; Cochrane, Douglas; Noel, Kevin; Marsden, Julian; McKeown, Shari; Millar, John

    2016-05-16

    Purpose - The British Columbia Ministry of Health's Clinical Care Management initiative was used as a case study to better understand large-scale change (LSC) within BC's health system. Using a complex system framework, the purpose of this paper is to examine mechanisms that enable and constrain the implementation of clinical guidelines across various clinical settings. Design/methodology/approach - Researchers applied a general model of complex adaptive systems plus two specific conceptual frameworks (realist evaluation and system dynamics mapping) to define and study enablers and constraints. Focus group sessions and interviews with clinicians, executives, managers and board members were validated through an online survey. Findings - The functional themes for managing large-scale clinical change included: creating a context to prepare clinicians for health system transformation initiatives; promoting shared clinical leadership; strengthening knowledge management, strategic communications and opportunities for networking; and clearing pathways through the complexity of a multilevel, dynamic system. Research limitations/implications - The action research methodology was designed to guide continuing improvement of implementation. A sample of initiatives was selected; it was not intended to compare and contrast facilitators and barriers across all initiatives and regions. Similarly, evaluating the results or process of guideline implementation was outside the scope; the methods were designed to enable conversations at multiple levels - policy, management and practice - about how to improve implementation. The study is best seen as a case study of LSC, offering a possible model for replication by others and a tool to shape further dialogue. Practical implications - Recommended action-oriented strategies included engaging local champions; supporting local adaptation for implementation of clinical guidelines; strengthening local teams to guide implementation; reducing change fatigue; ensuring adequate resources; providing consistent communication especially for front-line care providers; and supporting local teams to demonstrate the clinical value of the guidelines to their colleagues. Originality/value - Bringing a complex systems perspective to clinical guideline implementation resulted in a clear understanding of the challenges involved in LSC.

  13. The Ramifications of Meddling with Systems Governed by Self-organized Critical Dynamics

    NASA Astrophysics Data System (ADS)

    Carreras, B. A.; Newman, D. E.; Dobson, I.

    2002-12-01

    Complex natural, as well as man-made, systems often exhibit characteristics similar to those seen in self-organized critical (SOC) systems. The concept of self-organized criticality brings together ideas of self-organization of nonlinear dynamical systems with the often-observed near-critical behavior of many natural phenomena. These phenomena exhibit self-similarities over extended ranges of spatial and temporal scales. In such systems, length scales may be described by fractal geometry and time scales lead to 1/f-like power spectra. Natural applications include modeling the motion of tectonic plates, forest fires, magnetospheric dynamics, spin glass systems, and turbulent transport. In man-made systems, applications have included traffic dynamics, power and communications networks, and financial markets, among many others. Simple cellular automata models such as the running sandpile model have been very useful in reproducing the complexity and characteristics of these systems. One characteristic property of SOC systems is that they relax through what we call events. These events can happen over all scales of the system. Examples of these events are earthquakes in the case of plate tectonics, fires in the case of forest evolution, extinctions in the coevolution of biological species, and blackouts in power transmission systems. In a time-averaged sense, these systems are subcritical (that is, they lie in an average state that should not trigger any events) and the relaxation events happen intermittently. The time spent in a subcritical state relative to the time of the events varies from one system to another. For instance, the chance of finding a forest on fire is very low, with the frequency of fires being on the order of one fire every few years, and with many of these fires small and inconsequential. Very large fires happen over time periods of decades or even centuries. However, because of their consequences, these large but infrequent events are the important ones to understand, control and minimize. The main thrust of this research is to understand how and when global events occur in such systems when we apply mitigation techniques and how this impacts risk assessment. As sample systems we investigate both forest fire models and electrical power transmission network models, though the results are probably applicable to a wide variety of systems. It is found, perhaps counterintuitively, that apparently sensible attempts to mitigate failures in such complex systems can have adverse effects and therefore must be approached with care. The success of mitigation efforts in SOC systems is strongly influenced by the dynamics of the system. Unless the mitigation efforts alter the self-organization forces driving the system, the system will in general be pushed toward criticality. To alter those forces with mitigation efforts may be quite difficult because the forces are an intrinsic part of the system. Moreover, in many cases, efforts to mitigate small disruptions will increase the frequency of large disruptions. This occurs because the large and small disruptions are not independent but are strongly coupled by the dynamics. Before discussing this in the more complicated case of power systems, we will illustrate this phenomenon with a forest fire model.
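
    The sandpile-type dynamics invoked above can be sketched in a few lines: slow random driving, threshold toppling with open boundaries, and a record of avalanche sizes whose histogram develops the heavy tail characteristic of SOC. The lattice size and threshold below are arbitrary.

      # Minimal sandpile-type SOC model: drive slowly, relax through avalanches.
      import numpy as np

      rng = np.random.default_rng(1)
      L, z_c = 50, 4                       # lattice size and toppling threshold
      grid = np.zeros((L, L), dtype=int)
      avalanche_sizes = []

      for _ in range(100_000):
          i, j = rng.integers(0, L, size=2)
          grid[i, j] += 1                  # slow external driving
          size = 0
          unstable = [(i, j)] if grid[i, j] >= z_c else []
          while unstable:
              x, y = unstable.pop()
              if grid[x, y] < z_c:
                  continue
              grid[x, y] -= 4
              size += 1
              if grid[x, y] >= z_c:
                  unstable.append((x, y))
              for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  px, py = x + dx, y + dy
                  if 0 <= px < L and 0 <= py < L:   # open boundaries: grains fall off the edge
                      grid[px, py] += 1
                      if grid[px, py] >= z_c:
                          unstable.append((px, py))
          if size:
              avalanche_sizes.append(size)

      sizes = np.array(avalanche_sizes)
      hist, edges = np.histogram(sizes, bins=np.logspace(0, np.log10(sizes.max()), 20))
      print(hist)                          # heavy-tailed avalanche-size statistics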

  14. High-throughput analysis of peptide binding modules

    PubMed Central

    Liu, Bernard A.; Engelmann, Brett; Nash, Piers D.

    2014-01-01

    Modular protein interaction domains that recognize linear peptide motifs are found in hundreds of proteins within the human genome. Some protein interaction domains such as SH2, 14-3-3, Chromo and Bromo domains serve to recognize post-translational modification of amino acids (such as phosphorylation, acetylation, methylation etc.) and translate these into discrete cellular responses. Other modules such as SH3 and PDZ domains recognize linear peptide epitopes and serve to organize protein complexes based on localization and regions of elevated concentration. In both cases, the ability to nucleate specific signaling complexes is in large part dependent on the selectivity of a given protein module for its cognate peptide ligand. High throughput analysis of peptide-binding domains by peptide or protein arrays, phage display, mass spectrometry or other HTP techniques provides new insight into the potential protein-protein interactions prescribed by individual or even whole families of modules. Systems level analyses have also promoted a deeper understanding of the underlying principles that govern selective protein-protein interactions and how selectivity evolves. Lastly, there is a growing appreciation for the limitations and potential pitfalls of high-throughput analysis of protein-peptide interactomes. This review will examine some of the common approaches utilized for large-scale studies of protein interaction domains and suggest a set of standards for the analysis and validation of datasets from large-scale studies of peptide-binding modules. We will also highlight how data from large-scale studies of modular interaction domain families can provide insight into systems level properties such as the linguistics of selective interactions. PMID:22610655

  15. Biogas utilization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, M.A.

    1996-01-01

    Options for successfully using biogas depend on project scale. Almost all biogas from anaerobic digesters must first go through a gas handling system that pressurizes, meters, and filters the biogas. Additional treatment, including hydrogen sulfide-mercaptan scrubbing, gas drying, and carbon dioxide removal may be necessary for specialized uses, but these are complex and expensive processes. Thus, they can be justified only for large-scale projects that require high-quality biogas. Small-scale projects (less than 65 cfm) generally use biogas (as produced) as a boiler fuel or for fueling internal combustion engine-generators to produce electricity. If engines or boilers are selected properly, there should be no need to remove hydrogen sulfide. Small-scale combustion turbines, steam turbines, and fuel cells are not used because of their technical complexity and high capital cost. Biogas cleanup to pipeline or transportation fuel specifications is very costly, and energy economics preclude this level of treatment.

  16. Managing Risk and Uncertainty in Large-Scale University Research Projects

    ERIC Educational Resources Information Center

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  17. Active Brownian rods

    NASA Astrophysics Data System (ADS)

    Peruani, Fernando

    2016-11-01

    Bacteria, chemically-driven rods, and motility assays are examples of active (i.e. self-propelled) Brownian rods (ABR). The physics of ABR, despite their ubiquity in experimental systems, remains poorly understood. Here, we review the large-scale properties of collections of ABR moving in a dissipative medium. We address the problem by presenting three different models of decreasing complexity, which we refer to as model I, II, and III, respectively. Comparing models I, II, and III, we disentangle the roles of activity and interactions. In particular, we learn that in two dimensions, when steric or volume exclusion effects are ignored, large-scale nematic order seems to be possible, while steric interactions prevent the formation of orientational order at large scales. The macroscopic behavior of ABR results from the interplay between active stresses and local alignment. ABR exhibit, depending on where we locate ourselves in parameter space, a zoology of macroscopic patterns that ranges from polar and nematic bands to dynamic aggregates.
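
    As a baseline for the interacting rod models discussed above, the sketch below integrates non-interacting active Brownian particles (self-propulsion plus translational and rotational noise); it omits alignment and volume exclusion entirely, and the parameters are arbitrary.

      # Minimal active Brownian particle integrator (Euler-Maruyama, no interactions).
      import numpy as np

      rng = np.random.default_rng(3)
      N, steps, dt = 500, 5_000, 1e-3
      v0, Dt, Dr = 1.0, 0.01, 0.5            # self-propulsion speed, translational/rotational diffusion

      pos = np.zeros((N, 2))
      theta = rng.uniform(0, 2 * np.pi, N)

      for _ in range(steps):
          heading = np.column_stack((np.cos(theta), np.sin(theta)))
          pos += v0 * heading * dt + np.sqrt(2 * Dt * dt) * rng.standard_normal((N, 2))
          theta += np.sqrt(2 * Dr * dt) * rng.standard_normal(N)

      msd = (pos**2).sum(axis=1).mean()
      print("mean-square displacement after t = %.1f:" % (steps * dt), round(msd, 3))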

  18. Numerical Modeling of Propellant Boil-Off in a Cryogenic Storage Tank

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.; Sass, J. P.; Fesmire, J. E.

    2007-01-01

    A numerical model to predict boil-off of stored propellant in large spherical cryogenic tanks has been developed. Accurate prediction of tank boil-off rates for different thermal insulation systems was the goal of this collaboration effort. The Generalized Fluid System Simulation Program, integrating flow analysis and conjugate heat transfer for solving complex fluid system problems, was used to create the model. Calculation of the tank boil-off rate requires simultaneous simulation of heat transfer processes among the liquid propellant, the vapor ullage space, and the tank structure. The reference tank for the boil-off model was the 850,000 gallon liquid hydrogen tank at Launch Complex 39B (LC-39B) at Kennedy Space Center, which is under study for future infrastructure improvements to support the Constellation program. The methodology employed in the numerical model was validated using a sub-scale model and tank. Experimental test data from a 1/15th scale version of the LC-39B tank using both liquid hydrogen and liquid nitrogen were used to anchor the analytical predictions of the sub-scale model. Favorable correlations between the sub-scale model and experimental test data have provided confidence in full-scale tank boil-off predictions. These methods are now being used in the preliminary design for other cases, including future launch vehicles.
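
    The steady-state balance underlying any such boil-off prediction is simply heat leak divided by latent heat of vaporization; the short calculation below works that arithmetic for a tank of the quoted size. The heat-leak value is a labeled placeholder, not a property of the LC-39B tank, and the fluid properties are standard handbook values for liquid hydrogen.

      # Back-of-the-envelope steady-state boil-off estimate.
      GALLON_M3 = 3.78541e-3
      RHO_LH2 = 70.8          # kg/m^3, liquid hydrogen near its normal boiling point
      H_FG_LH2 = 446e3        # J/kg, latent heat of vaporization

      tank_volume_m3 = 850_000 * GALLON_M3
      stored_mass_kg = RHO_LH2 * tank_volume_m3

      q_leak_w = 10e3                                   # assumed total heat leak (placeholder)
      boiloff_kg_per_day = q_leak_w / H_FG_LH2 * 86_400
      percent_per_day = 100 * boiloff_kg_per_day / stored_mass_kg

      print(f"stored mass ~ {stored_mass_kg/1e3:.0f} t")
      print(f"boil-off ~ {boiloff_kg_per_day:.0f} kg/day ({percent_per_day:.2f} %/day)")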

  19. Large-scale regions of antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  20. Complex Plasmas under free fall conditions aboard the International Space Station

    NASA Astrophysics Data System (ADS)

    Konopka, Uwe; Thomas, Edward, Jr.; Funk, Dylan; Doyle, Brandon; Williams, Jeremiah; Knapek, Christina; Thomas, Hubertus

    2017-10-01

    Complex Plasmas are dynamically dominated by massive, highly negatively charged, micron-sized particles. They are usually strongly coupled and as a result can show fluid-like behavior or undergo phase transitions to form crystalline structures. The dynamical time scale of these systems is easily accessible in experiments because of the relatively high mass/inertia of the particles. However, the high mass also leads to sedimentation effects and as a result prevents the conduction of large scale, fully three dimensional experiments that are necessary to utilize complex plasmas as model systems in the transition to continuous media. To reduce sedimentation influences it becomes necessary to perform experiments in a free-fall ("microgravity") environment, such as the ISS based experiment facility "Plasma-Kristall-4" ("PK-4"). In our paper we will present our recently started research activities to investigate the basic properties of complex plasmas by utilizing the PK-4 experiment facility aboard the ISS. We further give an overview of developments towards the next generation experiment facility "Ekoplasma" (formerly named "PlasmaLab") and discuss potential additional small-scale space-based experiment scenarios. This work was supported by the JPL/NASA (JPL-RSA 1571699), the US Dept. of Energy (DE-SC0016330) and the NSF (PHY-1613087).

  1. Large-Scale First-Principles Molecular Dynamics Simulations with Electrostatic Embedding: Application to Acetylcholinesterase Catalysis

    DOE PAGES

    Fattebert, Jean-Luc; Lau, Edmond Y.; Bennion, Brian J.; ...

    2015-10-22

    Enzymes are complicated solvated systems that typically require many atoms to simulate their function with any degree of accuracy. We have recently developed numerical techniques for large-scale First-Principles molecular dynamics simulations and applied them to study the enzymatic reaction catalyzed by acetylcholinesterase. We carried out density functional theory calculations for a quantum mechanical (QM) subsystem consisting of 612 atoms with an O(N) complexity finite-difference approach. The QM subsystem is embedded inside an external potential field representing the electrostatic effect due to the environment. We obtained finite temperature sampling by First-Principles molecular dynamics for the acylation reaction of acetylcholine catalyzed by acetylcholinesterase. Our calculations show two energy barriers along the reaction coordinate for the enzyme-catalyzed acylation of acetylcholine. In conclusion, the second barrier (8.5 kcal/mole) is rate-limiting for the acylation reaction and in good agreement with experiment.
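
    For orientation, the quoted 8.5 kcal/mol barrier can be converted to a transition-state-theory rate with the Eyring equation, assuming a transmission coefficient of one; this back-calculation is illustrative and not part of the cited work.

      # Eyring rate from the reported rate-limiting barrier (illustrative conversion).
      import math

      KB, H, R = 1.380649e-23, 6.62607015e-34, 1.987204e-3   # J/K, J*s, kcal/(mol*K)
      T = 300.0                                               # K
      dG = 8.5                                                # kcal/mol, rate-limiting barrier

      k = (KB * T / H) * math.exp(-dG / (R * T))
      print(f"Eyring rate at {T:.0f} K: {k:.2e} 1/s")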

  2. Investigating large-scale brain dynamics using field potential recordings: analysis and interpretation.

    PubMed

    Pesaran, Bijan; Vinck, Martin; Einevoll, Gaute T; Sirota, Anton; Fries, Pascal; Siegel, Markus; Truccolo, Wilson; Schroeder, Charles E; Srinivasan, Ramesh

    2018-06-25

    New technologies to record electrical activity from the brain on a massive scale offer tremendous opportunities for discovery. Electrical measurements of large-scale brain dynamics, termed field potentials, are especially important to understanding and treating the human brain. Here, our goal is to provide best practices on how field potential recordings (electroencephalograms, magnetoencephalograms, electrocorticograms and local field potentials) can be analyzed to identify large-scale brain dynamics, and to highlight critical issues and limitations of interpretation in current work. We focus our discussion of analyses around the broad themes of activation, correlation, communication and coding. We provide recommendations for interpreting the data using forward and inverse models. The forward model describes how field potentials are generated by the activity of populations of neurons. The inverse model describes how to infer the activity of populations of neurons from field potential recordings. A recurring theme is the challenge of understanding how field potentials reflect neuronal population activity given the complexity of the underlying brain systems.
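
    The forward/inverse distinction can be made concrete with a schematic linear model: a lead-field matrix maps source (population) activity to sensor-level potentials, and a regularized pseudo-inverse maps recordings back to source estimates. The dimensions, sparsity, and noise level below are arbitrary placeholders, not a recommendation from the paper.

      # Schematic linear forward model and Tikhonov-regularized inverse estimate.
      import numpy as np

      rng = np.random.default_rng(4)
      n_sensors, n_sources, n_times = 32, 100, 500

      L = rng.standard_normal((n_sensors, n_sources))        # forward (lead-field) model
      s = np.zeros((n_sources, n_times))
      s[rng.choice(n_sources, 5, replace=False)] = rng.standard_normal((5, n_times))  # sparse sources

      v = L @ s + 0.1 * rng.standard_normal((n_sensors, n_times))   # simulated recordings

      lam = 1.0                                              # regularization strength
      s_hat = np.linalg.solve(L.T @ L + lam * np.eye(n_sources), L.T @ v)   # inverse estimate

      corr = np.corrcoef(s.ravel(), s_hat.ravel())[0, 1]
      print("correlation between true and estimated source activity:", round(corr, 3))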

  3. Large-scale image-based profiling of single-cell phenotypes in arrayed CRISPR-Cas9 gene perturbation screens.

    PubMed

    de Groot, Reinoud; Lüthi, Joel; Lindsay, Helen; Holtackers, René; Pelkmans, Lucas

    2018-01-23

    High-content imaging using automated microscopy and computer vision allows multivariate profiling of single-cell phenotypes. Here, we present methods for the application of the CRISPR-Cas9 system in large-scale, image-based, gene perturbation experiments. We show that CRISPR-Cas9-mediated gene perturbation can be achieved in human tissue culture cells in a timeframe that is compatible with image-based phenotyping. We developed a pipeline to construct a large-scale arrayed library of 2,281 sequence-verified CRISPR-Cas9 targeting plasmids and profiled this library for genes affecting cellular morphology and the subcellular localization of components of the nuclear pore complex (NPC). We conceived a machine-learning method that harnesses genetic heterogeneity to score gene perturbations and identify phenotypically perturbed cells for in-depth characterization of gene perturbation effects. This approach enables genome-scale image-based multivariate gene perturbation profiling using CRISPR-Cas9. © 2018 The Authors. Published under the terms of the CC BY 4.0 license.

  4. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.

  5. Coarsening mechanism of phase separation caused by a double temperature quench in an off-symmetric binary mixture.

    PubMed

    Sigehuzi, Tomoo; Tanaka, Hajime

    2004-11-01

    We study phase-separation behavior of an off-symmetric fluid mixture induced by a "double temperature quench." We first quench a system into the unstable region. After a large phase-separated structure is formed, we again quench the system more deeply and follow the pattern-evolution process. The second quench makes the domains formed by the first quench unstable and leads to double phase separation; that is, small droplets are formed inside the large domains created by the first quench. The complex coarsening behavior of this hierarchic structure having two characteristic length scales is studied in detail by using the digital image analysis. We find three distinct time regimes in the time evolution of the structure factor of the system. In the first regime, small droplets coarsen with time inside large domains. There a large domain containing small droplets in it can be regarded as an isolated system. Later, however, the coarsening of small droplets stops when they start to interact via diffusion with the large domain containing them. Finally, small droplets disappear due to the Lifshitz-Slyozov mechanism. Thus the observed behavior can be explained by the crossover of the nature of a large domain from the isolated to the open system; this is a direct consequence of the existence of the two characteristic length scales.

  6. Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat

    1993-01-01

    The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD-based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch-independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.

  7. Collaborative Observation and Research (CORE) Watersheds: new strategies for tracking the regional effects of climate change on complex systems

    NASA Astrophysics Data System (ADS)

    Murdoch, P. S.

    2007-12-01

    The past 30 years of environmental research have shown that our world is not made up of discrete components acting independently, but rather of a mosaic of complex relations among air, land, water, living resources, and human activities. Recent warming of the climate is having a significant effect on the functioning of those systems. A national imperative is developing to quickly establish local, regional, and national systems for anticipating environmental degradation from a changing climate and developing cost-effective adaptation or mitigation strategies. In these circumstances, the debate over research versus monitoring becomes moot--there is a clear need for the integrated application of both across a range of temporal and spatial scales. A national framework that effectively addresses the multiple scales and complex multi-disciplinary processes of climate change is being assembled largely from existing programs through collaboration among Federal, State, local, and NGO organizations. The result will be an observation and research network capable of interpreting complex environmental changes at a range of spatial and temporal scales, but at less cost than if the network were funded as an independent initiative. A pilot implementation of the collaborative framework in the Delaware River Basin yielded multi-scale assessments of carbon storage and flux, and the effects of forest fragmentation and soil calcium depletion on ecosystem function. A prototype of a national climate-effects observation and research network linking research watersheds, regional surveys, remote sensing, and ecosystem modeling is being initiated in the Yukon River Basin where carbon flux associated with permafrost thaw could accelerate global warming.

  8. Causal influence in neural systems: Reconciling mechanistic-reductionist and statistical perspectives. Comment on "Foundational perspectives on causality in large-scale brain networks" by M. Mannino & S.L. Bressler

    NASA Astrophysics Data System (ADS)

    Griffiths, John D.

    2015-12-01

    The modern understanding of the brain as a large, complex network of interacting elements is a natural consequence of the Neuron Doctrine [1,2] that has been bolstered in recent years by the tools and concepts of connectomics. In this abstracted, network-centric view, the essence of neural and cognitive function derives from the flows between network elements of activity and information - or, more generally, causal influence. The appropriate characterization of causality in neural systems, therefore, is a question at the very heart of systems neuroscience.

  9. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection for wireless communication systems has attracted increasing attention due to the challenge of keeping a balance between communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, the labels of the attenuation-coefficient channel matrices are generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues of the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of test antennas and select the optimal antenna subset. Simulation results demonstrate that our method can achieve better performance than the state-of-the-art baselines for data-driven wireless antenna selection.
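
    A minimal PyTorch sketch of the kind of classifier described above is given below, treating the real and imaginary parts of a channel matrix as two image channels and the antenna subset as a class label; the architecture, the synthetic data, and the labels are placeholders, not the paper's design.

      # Minimal CNN mapping a channel matrix to an antenna-subset label (illustrative).
      import torch
      import torch.nn as nn

      n_tx, n_rx, n_subsets = 16, 8, 10          # transmit/receive antennas, candidate subsets

      model = nn.Sequential(
          nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
          nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
          nn.Flatten(),
          nn.Linear(32 * n_rx * n_tx, n_subsets),
      )

      # synthetic training batch: random "channels" and random subset labels
      H = torch.randn(64, 2, n_rx, n_tx)         # [batch, re/im, rx, tx]
      labels = torch.randint(0, n_subsets, (64,))

      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()
      for _ in range(20):                         # a few illustrative training steps
          opt.zero_grad()
          loss = loss_fn(model(H), labels)
          loss.backward()
          opt.step()
      print("final training loss:", float(loss))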

  10. Monitoring of Large Scale Volcanic Deformations in the Andes Using Geodetic Time Series (DInSAR, GPS and Microgravity)

    NASA Astrophysics Data System (ADS)

    Remy, D.; Froger, J. L.; Bonvalot, S.; Gabalda, G.; Albino, F.; Byrdina, S.

    2010-03-01

    Lastarria (25°10'S, 68°31'W, 5706 m) and Cordon del Azufre (25°18'S, 68°33'W, 5480 m) are part of a broad polygenic quaternary volcanic complex lying on the Altiplano, on the border of Chile and Argentina. This large-scale volcanic area has attracted particular attention in recent years because of its relatively intense unrest, characterised by a peculiar style of on-going ground deformation that apparently began in 1998. Interferometric Synthetic Aperture Radar (InSAR) data collected between 1998 and 2009 revealed two scales of inflation: a large elliptical area (50 km NNE-SSW major axis and a 40 km minor axis), which has been inflating at a rate of around 3 cm/yr; and a short-wavelength inflation (6 km wide), which is located at Lastarria volcano on the northern margin of the large elliptical area. The temporal evolution of these two distinct inflating signals suggests that they could be linked. The origin of this inflation period is still debated and various source mechanisms have been proposed to explain the observed deformation. Here, we present new observations of surface deformation in the Lastarria-Cordon del Azufre complex based on InSAR, levelling, global positioning system (GPS), and microgravity data. Ascending and descending ASAR interferograms are combined to determine the vertical and EW horizontal components of displacement. Geodetic data are discussed from the standpoint of providing better constraints to understand the mechanics of the observed process at the volcanic complex.

  11. Social-ecological resilience and geomorphic systems

    NASA Astrophysics Data System (ADS)

    Chaffin, Brian C.; Scown, Murray

    2018-03-01

    Governance of coupled social-ecological systems (SESs) and the underlying geomorphic processes that structure and alter Earth's surface is a key challenge for global sustainability amid the increasing uncertainty and change that defines the Anthropocene. Social-ecological resilience as a concept of scientific inquiry has contributed to new understandings of the dynamics of change in SESs, increasing our ability to contextualize and implement governance in these systems. Often, however, the importance of geomorphic change and geomorphological knowledge is somewhat missing from processes employed to inform SES governance. In this contribution, we argue that geomorphology and social-ecological resilience research should be integrated to improve governance toward sustainability. We first provide definitions of engineering, ecological, community, and social-ecological resilience and then explore the use of these concepts within and alongside geomorphology in the literature. While ecological studies often consider geomorphology as an important factor influencing the resilience of ecosystems, and geomorphological studies often consider the engineering resilience of geomorphic systems of interest, very few studies define and employ a social-ecological resilience framing and explicitly link the concept to geomorphic systems. We present five key concepts (scale, feedbacks, state or regime, thresholds and regime shifts, and humans as part of the system) which we believe can help explicitly link important aspects of social-ecological resilience inquiry and geomorphological inquiry in order to strengthen the impact of both lines of research. Finally, we discuss how these five concepts might be used to integrate social-ecological resilience and geomorphology to better understand change in, and inform governance of, SESs. Compounding these dynamics of resilience, complex systems are nested, and cross-scale interactions from smaller and larger scales relative to the system of interest can play formative roles during periods of collapse and reorganization. Large- and small-scale disturbances as well as large-scale system memory/capacity and small-scale innovation can have significant impacts on the trajectory of a reorganizing system (Gunderson and Holling, 2002; Chaffin and Gunderson, 2016). Attempts to measure the property of ecological resilience across complex systems amount to attempts to measure the persistence of system-controlling variables, including processes, parameters, and important feedbacks, when the system is exposed to varying degrees of disturbance (Folke, 2016).

  12. Ground-water flow in low permeability environments

    USGS Publications Warehouse

    Neuzil, Christopher E.

    1986-01-01

    Certain geologic media are known to have small permeability; subsurface environments composed of these media and lacking well-developed secondary permeability have groundwater flow systems with many distinctive characteristics. Moreover, groundwater flow in these environments appears to influence the evolution of certain hydrologic, geologic, and geochemical systems, may affect the accumulation of petroleum and ores, and probably has a role in the structural evolution of parts of the crust. Such environments are also important in the context of waste disposal. This review attempts to synthesize the diverse contributions of various disciplines to the problem of flow in low-permeability environments. Problems hindering analysis are enumerated together with suggested approaches to overcoming them. A common thread running through the discussion is the significance of size- and time-scale limitations of the ability to directly observe flow behavior and make measurements of parameters. These limitations have resulted in rather distinct small- and large-scale approaches to the problem. The first part of the review considers experimental investigations of low-permeability flow, including in situ testing; these are generally conducted on temporal and spatial scales which are relatively small compared with those of interest. Results from this work have provided increasingly detailed information about many aspects of the flow but leave certain questions unanswered. Recent advances in laboratory and in situ testing techniques have permitted measurements of permeability and storage properties in progressively “tighter” media and investigation of transient flow under these conditions. However, very large hydraulic gradients are still required for the tests; an observational gap exists for typical in situ gradients. The applicability of Darcy's law in this range is therefore untested, although claims of observed non-Darcian behavior appear flawed. Two important nonhydraulic flow phenomena, osmosis and ultrafiltration, are experimentally well established in prepared clays but have been incompletely investigated, particularly in undisturbed geologic media. Small-scale experimental results form much of the basis for analyses of flow in low-permeability environments which occurs on scales of time and size too large to permit direct observation. Such large-scale flow behavior is the focus of the second part of the review. Extrapolation of small-scale experimental experience becomes an important and sometimes controversial problem in this context. In large flow systems under steady-state conditions the regional permeability can sometimes be determined, but systems with transient flow are more difficult to analyze. The complexity of the problem is enhanced by the sensitivity of large-scale flow to the effects of slow geologic processes. One-dimensional studies have begun to elucidate how simple burial or exhumation can generate transient flow conditions by changing the state of stress and temperature and by burial metamorphism. Investigation of the more complex problem of the interaction of geologic processes and flow in two and three dimensions is just beginning.
Because these transient flow analyses have largely been based on flow in experimental-scale systems or in relatively permeable systems, deformation in response to effective stress changes is generally treated as linearly elastic; however, this treatment creates difficulties for the long periods of interest because viscoelastic deformation is probably significant. Also, large-scale flow simulations in argillaceous environments have generally neglected osmosis and ultrafiltration, in part because extrapolation of laboratory experience with coupled flow to large scales under in situ conditions is controversial. Nevertheless, the effects are potentially quite important because the coupled flow might cause ultra-long-lived transient conditions. The difficulties associated with analysis are matched by those of characterizing hydrologic conditions in tight environments; measurements of hydraulic head and sampling of pore fluids have been done only rarely because of the practical difficulties involved. These problems are also discussed in the second part of this paper.
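
    The review's discussion of Darcy's law and very low permeabilities can be made concrete with a back-of-the-envelope calculation. The sketch below (all parameter values are illustrative assumptions, not taken from the paper) computes the Darcy flux and average pore velocity for a tight argillaceous medium under a modest in situ gradient, showing why direct observation of such flow is impractical on experimental time scales.

        # Minimal Darcy's-law sketch for a low-permeability ("tight") medium.
        # All parameter values are illustrative, not taken from the review.

        def darcy_flux(k, mu, grad_p):
            """Specific discharge q = -(k/mu) * dP/dx  [m/s]."""
            return -(k / mu) * grad_p

        k = 1e-19          # intrinsic permeability, m^2 (argillaceous rock, assumed)
        mu = 1.0e-3        # water viscosity, Pa*s
        grad_p = -1.0e4    # pressure gradient, Pa/m (~1 m of head per metre)
        porosity = 0.1

        q = darcy_flux(k, mu, grad_p)       # specific discharge, m/s
        v = q / porosity                    # average linear (pore) velocity, m/s
        seconds_per_year = 365.25 * 24 * 3600
        print(f"Darcy flux: {q:.3e} m/s (~{q * seconds_per_year * 1e3:.3f} mm/yr)")
        print(f"Pore velocity: {v:.3e} m/s (~{v * seconds_per_year * 1e3:.3f} mm/yr)")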

  13. DCL System Using Deep Learning Approaches for Land-based or Ship-based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2014-09-30

    repeating pulse-like signals were investigated. Software prototypes were developed and integrated into distinct streams of research; projects...to study complex sound archives spanning large spatial and temporal scales. A new post-processing method for detection and classification was also...false positive rates. HK-ANN was successfully tested for a large minke whale dataset, but could easily be used on other signal types. Various

  14. Time to "go large" on biofilm research: advantages of an omics approach.

    PubMed

    Azevedo, Nuno F; Lopes, Susana P; Keevil, Charles W; Pereira, Maria O; Vieira, Maria J

    2009-04-01

    In nature, the biofilm mode of life is of great importance in the cell cycle for many microorganisms. Perhaps because of biofilm complexity and variability, the characterization of a given microbial system, in terms of biofilm formation potential, structure and associated physiological activity, in a large-scale, standardized and systematic manner has been hindered by the absence of high-throughput methods. This outlook is now starting to change as new methods involving the utilization of microtiter plates and automated spectrophotometry and microscopy systems are being developed to perform large-scale testing of microbial biofilms. Here, we evaluate whether the time is ripe to start an integrated omics approach, i.e., the generation and interrogation of large datasets, to biofilms: "biofomics". This omics approach would bring much-needed insight into how biofilm formation ability is affected by a number of environmental, physiological and mutational factors and how these factors interplay with one another in a standardized manner. This could then lead to the creation of a database where biofilm signatures are identified and interrogated. Nevertheless, before embarking on such an enterprise, a versatile, robust, high-throughput biofilm-growing device and appropriate methods for biofilm analysis will have to be selected. Whether such a device and analytical methods are already available, particularly for complex heterotrophic biofilms, is, however, very debatable.

  15. Erosion and deposition by supercritical density flows during channel avulsion and backfilling: Field examples from coarse-grained deepwater channel-levée complexes (Sandino Forearc Basin, southern Central America)

    NASA Astrophysics Data System (ADS)

    Lang, Jörg; Brandes, Christian; Winsemann, Jutta

    2017-03-01

    Erosion and deposition by supercritical density flows can strongly impact the facies distribution and architecture of submarine fans. Field examples from coarse-grained channel-levée complexes from the Sandino Forearc Basin (southern Central America) show that cyclic-step and antidune deposits represent common sedimentary facies of these depositional systems and relate to the different stages of avulsion, bypass, levée construction and channel backfilling. During channel avulsion, large-scale scour-fill complexes (18 to 29 m deep, 18 to 25 m wide, 60 to > 120 m long) were incised by supercritical density flows. The multi-storey infill of the large-scale scour-fill complexes comprises amalgamated massive, normally coarse-tail graded or widely spaced subhorizontally stratified conglomerates and pebbly sandstones, interpreted as deposits of the hydraulic-jump zone of cyclic steps. The large-scale scour-fill complexes can be distinguished from small-scale channel fills based on the preservation of a steep upper margin and a coarse-grained infill comprising mainly amalgamated hydraulic-jump zone deposits. Channel fills include repeated successions deposited by cyclic steps with superimposed antidunes. The deposits of the hydraulic-jump zone of cyclic steps comprise regularly spaced scours (0.2 to 2.6 m deep, 0.8 to 23 m long) infilled by intraclast-rich conglomerates or pebbly sandstones, displaying normal coarse-tail grading or backsets. These deposits are laterally and vertically associated with subhorizontally stratified, low-angle cross-stratified or sinusoidally stratified sandstones and pebbly sandstones, which were deposited by antidunes on the stoss side of the cyclic steps during flow re-acceleration. The field examples indicate that so-called spaced stratified deposits may commonly represent antidune deposits with varying stratification styles controlled by the aggradation rate, grain-size distribution and amalgamation. The deposits of small-scale cyclic steps with superimposed antidunes form fining-upwards successions with decreasing antidune wavelengths, indicating waning flows. Such cyclic step-antidune successions form the characteristic basal infill of mid-fan channels, and are inferred to be related to successive supercritical high-density turbidity flows triggered by retrogressive slope failures.

  16. Large area sub-micron chemical imaging of magnesium in sea urchin teeth.

    PubMed

    Masic, Admir; Weaver, James C

    2015-03-01

    The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Simulation of electron energy loss spectra of nanomaterials with linear-scaling density functional theory

    DOE PAGES

    Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; ...

    2016-04-20

    Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the onetep linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane-wave pseudopotential methods to a high degree of precision. The ability of onetep to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the new functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.

  18. Geometric quantification of features in large flow fields.

    PubMed

    Kendall, Wesley; Huang, Jian; Peterka, Tom

    2012-01-01

    Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
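
    A hedged sketch of the kind of neighborhood-based geometric attribute described above: assuming a traced field line is available as an ordered array of 3D points, the code computes cumulative arc length and a discrete curvature, quantities that could then be examined alongside other scalar fields. The function names, the synthetic helix, and the particular turning-angle curvature estimate are illustrative choices, not the authors' implementation.

        import numpy as np

        def fieldline_attributes(points):
            """Per-vertex arc length and discrete curvature of a traced field line.

            points: (N, 3) array of ordered positions along the field line.
            Returns (arc_length, curvature), each of shape (N,).
            """
            pts = np.asarray(points, dtype=float)
            seg = np.diff(pts, axis=0)                       # segment vectors
            seg_len = np.linalg.norm(seg, axis=1)
            arc = np.concatenate([[0.0], np.cumsum(seg_len)])

            # Discrete curvature: turning angle between adjacent segments divided
            # by the local mean segment length (endpoints are left at zero).
            curv = np.zeros(len(pts))
            for i in range(1, len(pts) - 1):
                a, b = seg[i - 1], seg[i]
                cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                angle = np.arccos(np.clip(cosang, -1.0, 1.0))
                curv[i] = angle / (0.5 * (seg_len[i - 1] + seg_len[i]) + 1e-12)
            return arc, curv

        # Example on an illustrative helical "field line"
        t = np.linspace(0, 4 * np.pi, 200)
        helix = np.c_[np.cos(t), np.sin(t), 0.1 * t]
        arc, curv = fieldline_attributes(helix)
        print(f"total length ~ {arc[-1]:.2f}, median curvature ~ {np.median(curv):.2f}")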

  19. Serious Gaming for Test & Evaluation of Clean-Slate (Ab Initio) National Airspace System (NAS) Designs

    NASA Technical Reports Server (NTRS)

    Allen, B. Danette; Alexandrov, Natalia

    2016-01-01

    Incremental approaches to air transportation system development inherit current architectural constraints, which, in turn, place hard bounds on system capacity, efficiency of performance, and complexity. To enable airspace operations of the future, a clean-slate (ab initio) airspace design(s) must be considered. This ab initio National Airspace System (NAS) must be capable of accommodating increased traffic density, a broader diversity of aircraft, and on-demand mobility. System and subsystem designs should scale to accommodate the inevitable demand for airspace services that include large numbers of autonomous Unmanned Aerial Vehicles and a paradigm shift in general aviation (e.g., personal air vehicles) in addition to more traditional aerial vehicles such as commercial jetliners and weather balloons. The complex and adaptive nature of ab initio designs for the future NAS requires new approaches to validation, adding a significant physical experimentation component to analytical and simulation tools. In addition to software modeling and simulation, the ability to exercise system solutions in a flight environment will be an essential aspect of validation. The NASA Langley Research Center (LaRC) Autonomy Incubator seeks to develop a flight simulation infrastructure for ab initio modeling and simulation that assumes no specific NAS architecture and models vehicle-to-vehicle behavior to examine interactions and emergent behaviors among hundreds of intelligent aerial agents exhibiting collaborative, cooperative, coordinative, selfish, and malicious behaviors. The air transportation system of the future will be a complex adaptive system (CAS) characterized by complex and sometimes unpredictable (or unpredicted) behaviors that result from temporal and spatial interactions among large numbers of participants. A CAS not only evolves with a changing environment and adapts to it, but is also closely coupled to all systems that constitute the environment. Thus, the ecosystem that contains the system and other systems evolves with the CAS as well. The effects of the emerging adaptation and co-evolution are difficult to capture with combined mathematical and computational experimentation alone. Therefore, an ab initio flight simulation environment must accommodate individual vehicles, groups of self-organizing vehicles, and large-scale infrastructure behavior. Inspired by Massively Multiplayer Online Role Playing Games (MMORPG) and Serious Gaming, the proposed ab initio simulation environment is similar to online gaming environments in which player participants interact with each other, affect their environment, and expect the simulation to persist and change regardless of any individual player's active participation.

  20. Concurrent heterogeneous neural model simulation on real-time neuromimetic hardware.

    PubMed

    Rast, Alexander; Galluppi, Francesco; Davies, Sergio; Plana, Luis; Patterson, Cameron; Sharp, Thomas; Lester, David; Furber, Steve

    2011-11-01

    Dedicated hardware is becoming increasingly essential to simulate emerging very-large-scale neural models. Equally, however, it needs to be able to support multiple models of the neural dynamics, possibly operating simultaneously within the same system. This may be necessary either to simulate large models with heterogeneous neural types, or to simplify simulation and analysis of detailed, complex models in a large simulation by isolating the new model to a small subpopulation of a larger overall network. The SpiNNaker neuromimetic chip is a dedicated neural processor able to support such heterogeneous simulations. Implementing these models on-chip uses an integrated library-based tool chain incorporating the emerging PyNN interface that allows a modeller to input a high-level description and use an automated process to generate an on-chip simulation. Simulations using both LIF and Izhikevich models demonstrate the ability of the SpiNNaker system to generate and simulate heterogeneous networks on-chip, while illustrating, through the network-scale effects of wavefront synchronisation and burst gating, methods that can provide effective behavioural abstractions for large-scale hardware modelling. SpiNNaker's asynchronous virtual architecture permits greater scope for model exploration, with scalable levels of functional and temporal abstraction, than conventional (or neuromorphic) computing platforms. The complete system illustrates a potential path to understanding the neural model of computation, by building (and breaking) neural models at various scales, connecting the blocks, then comparing them against the biology: computational cognitive neuroscience. Copyright © 2011 Elsevier Ltd. All rights reserved.
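
    A minimal sketch of the kind of high-level PyNN description mentioned above, written against the generic PyNN API. The backend import (a simulator such as pyNN.nest, or pyNN.spiNNaker when targeting SpiNNaker hardware), the population sizes, and all cell and synapse parameters are assumptions for illustration, not the networks used in the paper.

        import pyNN.nest as sim   # assumed backend; swap for pyNN.spiNNaker on hardware

        sim.setup(timestep=1.0)

        # Two heterogeneous populations: leaky integrate-and-fire and Izhikevich neurons.
        lif = sim.Population(100, sim.IF_curr_exp(tau_m=20.0, v_thresh=-50.0), label="LIF")
        izh = sim.Population(100, sim.Izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0),
                             label="Izhikevich")

        # Background Poisson drive and a sparse excitatory projection between the groups.
        noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
        sim.Projection(noise, lif, sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.5, delay=1.0))
        sim.Projection(lif, izh, sim.FixedProbabilityConnector(p_connect=0.1),
                       sim.StaticSynapse(weight=0.3, delay=1.0))

        lif.record("spikes")
        izh.record("spikes")

        sim.run(1000.0)                     # one second of biological time
        spikes = lif.get_data("spikes")     # Neo Block containing the spike trains
        sim.end()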

  1. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    NASA Astrophysics Data System (ADS)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable, highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied to hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and to predict the accuracy and stability of the simulations.
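
    To make the iterative integrator family concrete, the sketch below implements an implicit Newmark (average-acceleration) step with a fixed number of Newton iterations for an SDOF oscillator with a softening restoring force; the model, parameters, and excitation are illustrative assumptions, not the paper's test structures. An operator-splitting (non-iterative) alternative would replace the iteration loop with a single predictor-corrector pass.

        import numpy as np

        # Illustrative SDOF system: m*a + c*v + r(u) = p(t), with a softening spring.
        m, c = 1.0, 0.05
        def restoring(u):  return 10.0 * np.tanh(u)        # nonlinear restoring force
        def stiffness(u):  return 10.0 / np.cosh(u) ** 2   # tangent stiffness dr/du

        def newmark_fixed_iter(p, dt, n_iter=3, beta=0.25, gamma=0.5):
            """Implicit Newmark with a fixed number of Newton iterations per step."""
            n = len(p)
            u = np.zeros(n); v = np.zeros(n); a = np.zeros(n)
            a[0] = (p[0] - c * v[0] - restoring(u[0])) / m
            for i in range(n - 1):
                # Newmark predictors (terms not involving the new acceleration)
                u_pred = u[i] + dt * v[i] + dt**2 * (0.5 - beta) * a[i]
                v_pred = v[i] + dt * (1.0 - gamma) * a[i]
                a_new = a[i]
                for _ in range(n_iter):                     # fixed-iteration Newton loop
                    u_trial = u_pred + beta * dt**2 * a_new
                    v_trial = v_pred + gamma * dt * a_new
                    residual = p[i + 1] - m * a_new - c * v_trial - restoring(u_trial)
                    k_eff = m + gamma * dt * c + beta * dt**2 * stiffness(u_trial)
                    a_new += residual / k_eff
                u[i + 1] = u_pred + beta * dt**2 * a_new
                v[i + 1] = v_pred + gamma * dt * a_new
                a[i + 1] = a_new
            return u

        dt = 0.01
        t = np.arange(0.0, 10.0, dt)
        p = 5.0 * np.sin(2.0 * np.pi * 1.0 * t)             # harmonic excitation
        u = newmark_fixed_iter(p, dt)
        print(f"peak displacement ~ {np.max(np.abs(u)):.3f}")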

  2. Linking climate projections to performance: A yield-based decision scaling assessment of a large urban water resources system

    NASA Astrophysics Data System (ADS)

    Turner, Sean W. D.; Marlow, David; Ekström, Marie; Rhodes, Bruce G.; Kularathna, Udaya; Jeffrey, Paul J.

    2014-04-01

    Despite a decade of research into climate change impacts on water resources, the scientific community has delivered relatively few practical methodological developments for integrating uncertainty into water resources system design. This paper presents an application of the "decision scaling" methodology for assessing climate change impacts on water resources system performance and asks how such an approach might inform planning decisions. The decision scaling method reverses the conventional ethos of climate impact assessment by first establishing the climate conditions that would compel planners to intervene. Climate model projections are introduced at the end of the process to characterize climate risk in such a way that avoids the process of propagating those projections through hydrological models. Here we simulated 1000 multisite synthetic monthly streamflow traces in a model of the Melbourne bulk supply system to test the sensitivity of system performance to variations in streamflow statistics. An empirical relation was derived to convert decision-critical flow statistics to climatic units, against which 138 alternative climate projections were plotted and compared. We defined the decision threshold in terms of a system yield metric constrained by multiple performance criteria. Our approach allows for fast and simple incorporation of demand forecast uncertainty and demonstrates the reach of the decision scaling method through successful execution in a large and complex water resources system. Scope for wider application in urban water resources planning is discussed.
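
    A toy illustration of the decision-scaling logic described above: stress a simple reservoir mass-balance model with scaled versions of a synthetic inflow series, compute a reliability metric against a performance threshold, and identify the inflow reduction at which intervention would be needed. The model and every number here are illustrative assumptions, far simpler than the Melbourne bulk supply system and yield metric analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        months = 600
        inflow = rng.lognormal(mean=3.0, sigma=0.6, size=months)   # GL/month, synthetic
        demand = 18.0                                               # GL/month, assumed
        capacity = 400.0                                            # GL of storage, assumed

        def reliability(scale):
            """Fraction of months in which demand is fully met for scaled inflows."""
            storage, met = capacity * 0.5, 0
            for q in inflow * scale:
                storage = min(storage + q, capacity)
                supplied = min(storage, demand)
                storage -= supplied
                met += supplied >= demand - 1e-9
            return met / months

        threshold = 0.95            # decision-critical reliability level (assumed)
        for scale in np.arange(1.0, 0.49, -0.05):
            r = reliability(scale)
            flag = "  <-- intervention needed" if r < threshold else ""
            print(f"inflow scaling {scale:.2f}: reliability {r:.3f}{flag}")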

  3. The challenges associated with developing science-based landscape scale management plans

    USGS Publications Warehouse

    Szaro, Robert C.; Boyce, D.A.; Puchlerz, T.

    2005-01-01

    Planning over large landscapes poses a complex set of challenges when trying to balance the implementation of a conservation strategy while still allowing for a variety of consumptive and nonconsumptive uses. We examine a case in southeast Alaska to illustrate the breadth of these challenges and an approach to developing a science-based resource plan. Not only was the planning area, the Tongass National Forest, USA, exceptionally large (approximately 17 million acres or 6.9 million ha), but it is also primarily an island archipelago environment. The water system surrounding and running through much of the forest provides access to facilitate the movement of people, animals, and plants but at the same time functions as a barrier to others. This largest temperate rainforest in the world is an exceptional example of the complexity of managing at such a scale but also illustrates the role of science in the planning process. As we enter the 21st century, the list of questions needing scientific investigation has not only changed dramatically, but the character of the questions also has changed. Questions are contentious, cover broad scales in space and time, and are highly complex and interdependent. The provision of unbiased and objective information to all stakeholders is an important step in informed decision-making.

  4. Scale-invariance underlying the logistic equation and its social applications

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Plastino, A.

    2013-01-01

    On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
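
    For reference, the single-species logistic equation the authors derive, together with a common multi-component generalization, can be written as follows. The notation is generic and the multi-component form shown is a standard shared-capacity variant, not necessarily the authors' exact generalization.

        % Single-component logistic equation and its closed-form solution
        \frac{dx}{dt} = r\,x\left(1 - \frac{x}{K}\right),
        \qquad
        x(t) = \frac{K}{1 + \left(\frac{K}{x_0} - 1\right)e^{-rt}}

        % A multi-component form: each component grows logistically subject to a
        % shared capacity (mean-value) constraint
        \frac{dx_i}{dt} = r_i\,x_i\left(1 - \frac{\sum_j x_j}{K}\right),
        \qquad i = 1,\dots,N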

  5. Hydropower and sustainability: resilience and vulnerability in China's powersheds.

    PubMed

    McNally, Amy; Magee, Darrin; Wolf, Aaron T

    2009-07-01

    Large dams represent a whole complex of social, economic and ecological processes, perhaps more than any other large infrastructure project. Today, countries with rapidly developing economies are constructing new dams to provide energy and flood control to growing populations in riparian and distant urban communities. If the system lacks the institutional capacity to absorb these physical and institutional changes, there is potential for conflict, thereby threatening human security. In this paper, we propose analyzing sustainability (political, socioeconomic, and ecological) in terms of resilience versus vulnerability, framed within the spatial abstraction of a powershed. The powershed framework facilitates multi-scalar and transboundary analysis while remaining focused on the questions of resilience and vulnerability relating to hydropower dams. Focusing on examples from China, this paper describes the complex nature of dams using the sustainability and powershed frameworks. We then analyze the roles of institutions in China to understand the relationships between power, human security and the socio-ecological system. To inform the study of conflicts over dams, China is a particularly useful case study because we can examine what happens at the international, national and local scales. The powershed perspective allows us to examine resilience and vulnerability across political boundaries from a dynamic, process-defined analytical scale while remaining focused on a host of questions relating to hydro-development that invoke drivers and impacts on national and sub-national scales. The ability to disaggregate the effects of hydropower dam construction from political boundaries allows for a deeper analysis of resilience and vulnerability. From our analysis we find that reforms in China's hydropower sector since 1996 have been motivated by the need to create stability at the national scale rather than by resilient solutions to China's growing demand for energy and water resource control at the local and international scales. Some measures that improved economic development through the market economy, together with a combination of dam construction and institutional reform, may indeed improve hydro-political resilience at a single scale. However, if China does not address large-scale hydropower construction's potential to create multi-scale geopolitical tensions, it may be vulnerable to conflict, though not necessarily violent conflict, in domestic and international political arenas. We conclude with a look toward a resilient basin institution for the Nu/Salween River, the site of a proposed large-scale hydropower development effort in China and Myanmar.

  6. Modeling Power Systems as Complex Adaptive Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chassin, David P.; Malard, Joel M.; Posse, Christian

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  7. Cellular Decomposition Based Hybrid-Hierarchical Control Systems with Applications to Flight Management Systems

    NASA Technical Reports Server (NTRS)

    Caines, P. E.

    1999-01-01

    The work in this research project has been focused on the construction of a hierarchical hybrid control theory which is applicable to flight management systems. The motivation and underlying philosophical position for this work has been that the scale, inherent complexity and the large number of agents (aircraft) involved in an air traffic system imply that a hierarchical modelling and control methodology is required for its management and real time control. In the current work the complex discrete or continuous state space of a system with a small number of agents is aggregated in such a way that discrete (finite state machine or supervisory automaton) controlled dynamics are abstracted from the system's behaviour. High level control may then be either directly applied at this abstracted level, or, if this is in itself of significant complexity, further layers of abstractions may be created to produce a system with an acceptable degree of complexity at each level. By the nature of this construction, high level commands are necessarily realizable at lower levels in the system.

  8. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
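
    The distinction between Fickian and ecological diffusion, and the general shape of the homogenized large-scale equation, can be summarized as follows. These are standard forms written in generic notation; the averaged coefficient is shown here as a harmonic-mean-type average, which reflects the usual structure of such results and may differ in detail from the paper's derivation.

        % Fickian diffusion: flux follows gradients of density and habitat
        \frac{\partial u}{\partial t} = \nabla \cdot \big( \mu(x)\, \nabla u \big)

        % Ecological diffusion: motility \mu(x) sits inside both derivatives,
        % so individuals accumulate where motility is low
        \frac{\partial u}{\partial t} = \nabla^{2} \big( \mu(x)\, u \big)

        % Homogenized large-scale behavior: u \approx c(x,t)/\mu(x), where c solves a
        % diffusion equation with an averaged (harmonic-mean-type) coefficient
        \frac{\partial c}{\partial t} = \bar{\mu}\, \nabla^{2} c,
        \qquad
        \bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega} \frac{dx}{\mu(x)} \right)^{-1}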

  9. Computer-based tools for decision support in agroforestry: Current state and future needs

    Treesearch

    E.A. Ellis; G. Bentrup; Michelle M. Schoeneberger

    2004-01-01

    Successful design of agroforestry practices hinges on the ability to pull together very diverse and sometimes large sets of information (i.e., biophysical, economic, and social factors) and then to implement the synthesis of this information across several spatial scales, from site to landscape. Agroforestry, by its very nature, creates complex systems with impacts...

  10. Cascade-based attacks on complex networks

    NASA Astrophysics Data System (ADS)

    Motter, Adilson E.; Lai, Ying-Cheng

    2002-12-01

    We live in a modern world supported by large, complex networks. Examples range from financial markets to communication and transportation systems. In many realistic situations the flow of physical quantities in the network, as characterized by the loads on nodes, is important. We show that for such networks, where loads can redistribute among the nodes, intentional attacks can lead to a cascade of overload failures, which can in turn cause the entire network, or a substantial part of it, to collapse. This is relevant for real-world networks that possess a highly heterogeneous distribution of loads, such as the Internet and power grids. We demonstrate that the heterogeneity of these networks makes them particularly vulnerable to attacks, in that a large-scale cascade may be triggered by disabling a single key node. This raises obvious concerns about the security of such systems.
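
    A minimal sketch of the load-redistribution cascade described above, in the spirit of the Motter-Lai model: node load is taken as betweenness centrality, each node is given capacity (1 + alpha) times its initial load, the most-loaded node is removed, and overloaded nodes fail iteratively. The tolerance parameter, graph sizes, and the comparison graphs are illustrative assumptions, not the paper's exact experiments.

        import networkx as nx

        def surviving_fraction(G, alpha=0.2):
            """Attack the highest-load node, then iterate overload failures (Motter-Lai style)."""
            load0 = nx.betweenness_centrality(G)
            capacity = {n: (1.0 + alpha) * load0[n] for n in G}

            H = G.copy()
            H.remove_node(max(load0, key=load0.get))        # disable the key node

            failed = True
            while failed and H.number_of_nodes() > 0:
                load = nx.betweenness_centrality(H)
                overloaded = [n for n in H if load[n] > capacity[n]]
                failed = bool(overloaded)
                H.remove_nodes_from(overloaded)

            if H.number_of_nodes() == 0:
                return 0.0
            giant = max(nx.connected_components(H), key=len)
            return len(giant) / G.number_of_nodes()

        # Heterogeneous (scale-free-like) vs. homogeneous random graph with equal size
        G_het = nx.barabasi_albert_graph(300, 2, seed=0)
        G_hom = nx.gnm_random_graph(300, G_het.number_of_edges(), seed=0)
        for name, G in [("heterogeneous", G_het), ("homogeneous", G_hom)]:
            print(f"{name}: surviving giant component ~ {surviving_fraction(G):.2f}")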

  11. Observations of Seafloor Roughness in a Tidally Modulated Inlet

    NASA Astrophysics Data System (ADS)

    Lippmann, T. C.; Hunt, J.

    2014-12-01

    The vertical structure of shallow-water flows is influenced by the presence of a bottom boundary layer, which spans the water column for long-period waves or mean flows. The nature of the boundary is determined in part by the roughness elements that make up the seafloor, and includes sometimes complex undulations associated with regular and irregularly shaped bedforms whose scales span several orders of magnitude, from orbital wave ripples (10^-1 m) to mega-ripples (10^0 m) and even larger features (10^1-10^3 m) such as sand waves, bars, and dunes. Modeling efforts often parameterize the effects of roughness elements on flow fields, depending on the complexity of the boundary layer formulations. The problem is exacerbated by the transient nature of bedforms and their large spatial extent and variability. This is particularly important in high-flow areas with large sediment transport, such as tidally dominated sandy inlets like New River Inlet, NC. Quantification of small-scale seafloor variability over large spatial areas requires the use of mobile platforms that can measure with fine-scale (order cm) accuracy in wide swaths. The problem is difficult in shallow water where waves and currents are large, and water clarity is often limited. In this work, we present results from bathymetric surveys obtained with the Coastal Bathymetry Survey System, a personal watercraft equipped with an Imagenex multibeam acoustic echosounder and an Applanix POS-MV 320 GPS-aided inertial measurement unit. This system is able to measure shallow-water seafloor bathymetry and backscatter intensity with very fine-scale (10^-1 m) resolution over relatively large scales (10^3 m) in the presence of high waves and currents. Wavenumber spectra show that the noise floor of the resolved multibeam bathymetry is on the order of 2.5-5 cm in amplitude, depending on water depths ranging 2-6 m, and about 30 cm in wavelength. Seafloor roughness elements are estimated from wavenumber spectra across the inlet from bathymetric maps of the seafloor obtained with 10-25 cm horizontal resolution. Implications of the effects of the bottom variability on the vertical structure of the currents will be discussed. This work was supported by ONR and NOAA.
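
    A hedged sketch of the wavenumber-spectrum idea used above to characterize roughness and the survey noise floor: given a bathymetry transect sampled at uniform spacing, estimate a one-dimensional wavenumber spectrum with a windowed FFT periodogram. The synthetic ripple profile and all parameters are illustrative only, not the New River Inlet data.

        import numpy as np

        def wavenumber_spectrum(z, dx):
            """One-sided periodogram of a detrended, uniformly sampled bathymetry profile.

            z: elevations (m), dx: sample spacing (m).
            Returns wavenumbers k (cycles/m) and spectral density S(k).
            """
            z = np.asarray(z, dtype=float)
            idx = np.arange(len(z))
            z = z - np.polyval(np.polyfit(idx, z, 1), idx)   # remove linear trend
            window = np.hanning(len(z))
            Z = np.fft.rfft(z * window)
            k = np.fft.rfftfreq(len(z), d=dx)
            # Welch-style normalization so that integrating S over k recovers variance.
            S = 2.0 * np.abs(Z) ** 2 * dx / np.sum(window ** 2)
            return k[1:], S[1:]                              # drop the zero wavenumber

        # Synthetic transect: 30 cm-wavelength ripples plus broadband noise (illustrative)
        dx = 0.1                                             # 10 cm sample spacing
        x = np.arange(0.0, 200.0, dx)
        rng = np.random.default_rng(0)
        z = 0.05 * np.sin(2 * np.pi * x / 0.3) + 0.01 * rng.normal(size=len(x))
        k, S = wavenumber_spectrum(z, dx)
        print(f"spectral peak near {1.0 / k[np.argmax(S)]:.2f} m wavelength")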

  12. Molecular Precision at Micrometer Length Scales: Hierarchical Assembly of DNA-Protein Nanostructures.

    PubMed

    Schiffels, Daniel; Szalai, Veronika A; Liddle, J Alexander

    2017-07-25

    Robust self-assembly across length scales is a ubiquitous feature of biological systems but remains challenging for synthetic structures. Taking a cue from biology, where disparate molecules work together to produce large, functional assemblies, we demonstrate how to engineer microscale structures with nanoscale features: Our self-assembly approach begins by using DNA polymerase to controllably create double-stranded DNA (dsDNA) sections on a single-stranded template. The single-stranded DNA (ssDNA) sections are then folded into a mechanically flexible skeleton by the origami method. This process simultaneously shapes the structure at the nanoscale and directs the large-scale geometry. The DNA skeleton guides the assembly of RecA protein filaments, which provides rigidity at the micrometer scale. We use our modular design strategy to assemble tetrahedral, rectangular, and linear shapes of defined dimensions. This method enables the robust construction of complex assemblies, greatly extending the range of DNA-based self-assembly methods.

  13. Discrete elements for 3D microfluidics.

    PubMed

    Bhargava, Krisna C; Thompson, Bryant; Malmstadt, Noah

    2014-10-21

    Microfluidic systems are rapidly becoming commonplace tools for high-precision materials synthesis, biochemical sample preparation, and biophysical analysis. Typically, microfluidic systems are constructed in monolithic form by means of microfabrication and, increasingly, by additive techniques. These methods restrict the design and assembly of truly complex systems by placing unnecessary emphasis on complete functional integration of operational elements in a planar environment. Here, we present a solution based on discrete elements that liberates designers to build large-scale microfluidic systems in three dimensions that are modular, diverse, and predictable by simple network analysis techniques. We develop a sample library of standardized components and connectors manufactured using stereolithography. We predict and validate the flow characteristics of these individual components to design and construct a tunable concentration gradient generator with a scalable number of parallel outputs. We show that these systems are rapidly reconfigurable by constructing three variations of a device for generating monodisperse microdroplets in two distinct size regimes and in a high-throughput mode by simple replacement of emulsifier subcircuits. Finally, we demonstrate the capability for active process monitoring by constructing an optical sensing element for detecting water droplets in a fluorocarbon stream and quantifying their size and frequency. By moving away from large-scale integration toward standardized discrete elements, we demonstrate the potential to reduce the practice of designing and assembling complex 3D microfluidic circuits to a methodology comparable to that found in the electronics industry.
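
    The "simple network analysis" that makes discrete microfluidic elements predictable parallels circuit analysis: each channel contributes a hydraulic resistance (for laminar flow in a circular channel, the Hagen-Poiseuille value), and nodal pressures follow from flow conservation exactly as nodal voltages do. The sketch below solves a small illustrative network; the geometry, fluid properties, and topology are assumptions, not the paper's component library.

        import numpy as np

        def hagen_poiseuille_resistance(length, radius, mu=1.0e-3):
            """Hydraulic resistance of a circular channel, R = 8*mu*L/(pi*r^4) [Pa*s/m^3]."""
            return 8.0 * mu * length / (np.pi * radius ** 4)

        # Small illustrative network: node 0 = inlet (fixed pressure), node 3 = outlet (0 Pa).
        # Edges are (node_i, node_j, resistance).
        R = hagen_poiseuille_resistance(length=10e-3, radius=100e-6)   # ~10 mm, 100 um channel
        edges = [(0, 1, R), (1, 2, R), (1, 3, 2 * R), (2, 3, R)]
        p_inlet, n_nodes = 10e3, 4                                     # 10 kPa drive (assumed)

        # Assemble the nodal conductance matrix G and solve for the unknown pressures.
        G = np.zeros((n_nodes, n_nodes))
        for i, j, r in edges:
            g = 1.0 / r
            G[i, i] += g; G[j, j] += g
            G[i, j] -= g; G[j, i] -= g

        known = {0: p_inlet, 3: 0.0}
        unknown = [n for n in range(n_nodes) if n not in known]
        A = G[np.ix_(unknown, unknown)]
        b = -G[np.ix_(unknown, list(known))] @ np.array(list(known.values()))
        p = np.zeros(n_nodes)
        p[unknown] = np.linalg.solve(A, b)
        p[list(known)] = list(known.values())

        for i, j, r in edges:
            print(f"flow {i}->{j}: {(p[i] - p[j]) / r * 1e9:.2f} uL/s")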

  14. Scale-Free Networks and Commercial Air Carrier Transportation in the United States

    NASA Technical Reports Server (NTRS)

    Conway, Sheila R.

    2004-01-01

    Network science, or the art of describing system structure, may be useful for the analysis and control of large, complex systems. For example, networks exhibiting scale-free structure have been found to be particularly well suited to deal with environmental uncertainty and large demand growth. The National Airspace System may be, at least in part, a scalable network. In fact, the hub-and-spoke structure of the commercial segment of the NAS is an often-cited example of an existing scale-free network. After reviewing the nature and attributes of scale-free networks, this assertion is put to the test: is commercial air carrier transportation in the United States well explained by this model? If so, are the positive attributes of these networks, e.g., efficiency, flexibility and robustness, fully realized, or could we effect substantial improvement? This paper first outlines attributes of various network types, then looks more closely at the common carrier air transportation network from the perspectives of the traveler, the airlines, and Air Traffic Control (ATC). Network models are applied within each paradigm, including discussion of the implied strengths and weaknesses of each model. Finally, known limitations of scalable networks are discussed. With an eye toward NAS operations, ways of utilizing the strengths and avoiding the weaknesses of scale-free networks are addressed.
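
    One quick way to probe the scale-free assertion on any route network is to compare its degree distribution and failure behavior against a preferential-attachment (Barabasi-Albert) model. The sketch below uses a synthetic graph as a stand-in for a hub-and-spoke route map and contrasts hub removal with random removal; it uses no actual air-carrier data and all sizes are illustrative.

        import random
        import collections
        import networkx as nx

        # Preferential-attachment network as a stand-in for a hub-and-spoke route map.
        G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

        degrees = [d for _, d in G.degree()]
        counts = collections.Counter(degrees)
        print("degree  count   (heavy tail: a few very-high-degree 'hub' nodes)")
        for k in sorted(counts)[-10:]:
            print(f"{k:6d}  {counts[k]:5d}")

        # Robust-yet-fragile signature: compare removing the top hub vs. a random node.
        random.seed(1)
        hub = max(G.degree(), key=lambda kv: kv[1])[0]
        rand_node = random.choice([n for n in G.nodes if n != hub])
        for label, node in [("hub removal", hub), ("random removal", rand_node)]:
            H = G.copy()
            H.remove_node(node)
            giant = max(nx.connected_components(H), key=len)
            print(f"{label}: giant component fraction {len(giant) / G.number_of_nodes():.3f}")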

  15. Enhanced job control language procedures for the SIMSYS2D two-dimensional water-quality simulation system

    USGS Publications Warehouse

    Karavitis, G.A.

    1984-01-01

    The SIMSYS2D two-dimensional water-quality simulation system is a large-scale digital modeling software system used to simulate flow and transport of solutes in freshwater and estuarine environments. Due to the size, processing requirements, and complexity of the system, there is a need to easily move the system and its associated files between computer sites when required. A series of job control language (JCL) procedures was written to allow transferability between IBM and IBM-compatible computers. (USGS)

  16. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled with the near-field FSI solver with a one-way coupling approach by feeding into the latter waves via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  17. Structures and Techniques For Implementing and Packaging Complex, Large Scale Microelectromechanical Systems Using Foundry Fabrication Processes.

    DTIC Science & Technology

    1996-06-01

    [Indexed snippet drawn from the report's front matter rather than an abstract: list-of-figures entries (Figure 5-27, mechanical interference between 'Pull Spring' devices; Figure 5-28, array of LIGA mechanical relay switches) and glossary entries (DM, direct metal interconnect technique; DMD, Digital Micromirror Device; EDP, ethylene diamine pyrocatechol and water, a silicon anisotropic etchant; MOSIS, MOS Implementation Service; PGA, pin grid array; PZT, lead zirconate titanate; LIGA, Lithographie...).]

  18. Battlespace Awareness: Heterogeneous Sensor Maps of Large Scale, Complex Environments

    DTIC Science & Technology

    2017-06-13

    reference frames enable a system designer to describe the position of any sensor or platform at any point in time. This section introduces the...analysis to evaluate the quality of reconstructions created by our algorithms. CloudCompare is an open-source tool designed for this purpose [65]. In...structure of the data. The data term seeks to keep the proposed solution (u) similar to the originally observed values (f). A systems designer must

  19. A unique large-scale undergraduate research experience in molecular systems biology for non-mathematics majors.

    PubMed

    Kappler, Ulrike; Rowland, Susan L; Pedwell, Rhianna K

    2017-05-01

    Systems biology is frequently taught with an emphasis on mathematical modeling approaches. This focus effectively excludes most biology, biochemistry, and molecular biology students, who are not mathematics majors. The mathematical focus can also present a misleading picture of systems biology, which is a multi-disciplinary pursuit requiring collaboration between biochemists, bioinformaticians, and mathematicians. This article describes an authentic large-scale undergraduate research experience (ALURE) in systems biology that incorporates proteomics, bacterial genomics, and bioinformatics in the one exercise. This project is designed to engage students who have a basic grounding in protein chemistry and metabolism and no mathematical modeling skills. The pedagogy around the research experience is designed to help students attack complex datasets and use their emergent metabolic knowledge to make meaning from large amounts of raw data. On completing the ALURE, participants reported a significant increase in their confidence around analyzing large datasets, while the majority of the cohort reported good or great gains in a variety of skills including "analysing data for patterns" and "conducting database or internet searches." An environmental scan shows that this ALURE is the only undergraduate-level systems-biology research project offered on a large scale in Australia; this speaks to the perceived difficulty of implementing such an opportunity for students. We argue, however, that based on the student feedback, allowing undergraduate students to complete a systems-biology project is both feasible and desirable, even if the students are not maths and computing majors. © 2016 The International Union of Biochemistry and Molecular Biology, 45(3):235-248, 2017.

  20. Hybrid multiphoton volumetric functional imaging of large-scale bioengineered neuronal networks

    NASA Astrophysics Data System (ADS)

    Dana, Hod; Marom, Anat; Paluch, Shir; Dvorkin, Roman; Brosh, Inbar; Shoham, Shy

    2014-06-01

    Planar neural networks and interfaces serve as versatile in vitro models of central nervous system physiology, but adaptations of related methods to three dimensions (3D) have met with limited success. Here, we demonstrate for the first time volumetric functional imaging in a bioengineered neural tissue growing in a transparent hydrogel with cortical cellular and synaptic densities, by introducing complementary new developments in nonlinear microscopy and neural tissue engineering. Our system uses a novel hybrid multiphoton microscope design combining a 3D scanning-line temporal-focusing subsystem and a conventional laser-scanning multiphoton microscope to provide functional and structural volumetric imaging capabilities: dense microscopic 3D sampling at tens of volumes per second of structures with mm-scale dimensions containing a network of over 1,000 developing cells with complex spontaneous activity patterns. These developments open new opportunities for large-scale neuronal interfacing and for applications of 3D engineered networks ranging from basic neuroscience to the screening of neuroactive substances.

  1. Settlement-Size Scaling among Prehistoric Hunter-Gatherer Settlement Systems in the New World

    PubMed Central

    Haas, W. Randall; Klink, Cynthia J.; Maggard, Greg J.; Aldenderfer, Mark S.

    2015-01-01

    Settlement size predicts extreme variation in the rates and magnitudes of many social and ecological processes in human societies. Yet, the factors that drive human settlement-size variation remain poorly understood. Size variation among economically integrated settlements tends to be heavy tailed such that the smallest settlements are extremely common and the largest settlements extremely large and rare. The upper tail of this size distribution is often formalized mathematically as a power-law function. Explanations for this scaling structure in human settlement systems tend to emphasize complex socioeconomic processes including agriculture, manufacturing, and warfare—behaviors that tend to differentially nucleate and disperse populations hierarchically among settlements. But, the degree to which heavy-tailed settlement-size variation requires such complex behaviors remains unclear. By examining the settlement patterns of eight prehistoric New World hunter-gatherer settlement systems spanning three distinct environmental contexts, this analysis explores the degree to which heavy-tailed settlement-size scaling depends on the aforementioned socioeconomic complexities. Surprisingly, the analysis finds that power-law models offer plausible and parsimonious statistical descriptions of prehistoric hunter-gatherer settlement-size variation. This finding reveals that incipient forms of hierarchical settlement structure may have preceded socioeconomic complexity in human societies and points to a need for additional research to explicate how mobile foragers came to exhibit settlement patterns that are more commonly associated with hierarchical organization. We propose that hunter-gatherer mobility with preferential attachment to previously occupied locations may account for the observed structure in site-size variation. PMID:26536241
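
    The power-law tail model invoked above can be checked with the standard maximum-likelihood estimator for the exponent (Clauset-style, here with a fixed, assumed lower cutoff x_min). The sketch below fits synthetic "site size" data and is purely illustrative; it is not the paper's dataset or its full model-comparison procedure.

        import numpy as np

        def powerlaw_alpha_mle(sizes, xmin):
            """MLE for the exponent of p(x) ~ x^(-alpha), x >= xmin (continuous form)."""
            x = np.asarray([s for s in sizes if s >= xmin], dtype=float)
            alpha = 1.0 + len(x) / np.sum(np.log(x / xmin))
            stderr = (alpha - 1.0) / np.sqrt(len(x))
            return alpha, stderr, len(x)

        # Synthetic heavy-tailed "settlement sizes": Pareto with alpha = 2.5 (illustrative)
        rng = np.random.default_rng(7)
        xmin_true, alpha_true = 0.5, 2.5
        sizes = xmin_true * (1.0 - rng.random(2000)) ** (-1.0 / (alpha_true - 1.0))

        alpha_hat, se, n_tail = powerlaw_alpha_mle(sizes, xmin=0.5)
        print(f"n >= xmin: {n_tail}, alpha_hat = {alpha_hat:.2f} +/- {se:.2f} (true {alpha_true})")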

  2. Editorial [Special issue on software defined networks and infrastructures, network function virtualisation, autonomous systems and network management

    DOE PAGES

    Biswas, Amitava; Liu, Chen; Monga, Inder; ...

    2016-01-01

    For the last few years, there has been tremendous growth in data traffic due to the high adoption rate of mobile devices and cloud computing. The Internet of things (IoT) will stimulate even further growth. This is increasing the scale and complexity of telecom/internet service provider (SP) and enterprise data centre (DC) compute and network infrastructures. As a result, managing these large network-compute converged infrastructures is becoming complex and cumbersome. To cope, network and DC operators are trying to automate network and system operations, administration and management (OAM) functions. OAM includes all non-functional mechanisms which keep the network running.

  3. Practical modeling approaches for geological storage of carbon dioxide.

    PubMed

    Celia, Michael A; Nordbotten, Jan M

    2009-01-01

    The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to controlling carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to modeling geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.

  4. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, and it is therefore hard to make efficient, effective, and informed decisions. The management involves a multi-faceted operation that requires the most robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) Better understanding and enforcement of a complex inspection process that can bridge the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation, representation and fusion of complex multi-layered heterogeneous data (i.e. infrared imaging, aerial photos and ground-mounted LIDAR etc.) with domain application knowledge to support a machine-understandable recommendation system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) Integration of these needs through the flexible Service-oriented Architecture (SOA) framework to compose and provide services on-demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring both periodically (annually, monthly, even daily if needed) as well as after extreme events.

  5. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information-theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters; (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics; (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  6. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948
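
    At the core of the Monte Carlo workflows that MOLNs distributes is stochastic simulation of reaction kinetics. As background only, the sketch below implements a plain well-mixed Gillespie SSA for a birth-death process; it is a generic illustration and does not use the PyURDME API or a spatial reaction-diffusion discretization.

        import numpy as np

        def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=100.0, seed=0):
            """Exact SSA trajectory for 0 -> X (rate k_birth) and X -> 0 (rate k_death * X)."""
            rng = np.random.default_rng(seed)
            t, x = 0.0, x0
            times, states = [t], [x]
            while t < t_end:
                a1, a2 = k_birth, k_death * x       # reaction propensities
                a0 = a1 + a2
                t += rng.exponential(1.0 / a0)      # waiting time to the next reaction
                x += 1 if rng.random() < a1 / a0 else -1
                times.append(t); states.append(x)
            return np.array(times), np.array(states)

        t, x = gillespie_birth_death()
        print(f"final copy number {x[-1]} (stationary mean k_birth/k_death = 100)")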

  7. A Novel Interdisciplinary Approach to Socio-Technical Complexity

    NASA Astrophysics Data System (ADS)

    Bassetti, Chiara

    The chapter presents a novel interdisciplinary approach that integrates micro-sociological analysis into computer-vision and pattern-recognition modeling and algorithms, the purpose being to tackle socio-technical complexity at a systemic yet micro-grounded level. The approach is empirically grounded and both theoretically and analytically driven, yet systemic and multidimensional, semi-supervised and computable, and oriented towards large-scale applications. The chapter describes the proposed approach, especially with respect to its sociological foundations, as applied to the analysis of a particular setting: sport-spectator crowds. Crowds, better defined as large gatherings, are almost ever-present in our societies, and capturing their dynamics is crucial. From the social sciences to public safety management and emergency response, it is fundamental to model and predict the presence and dynamics of large gatherings, and thus possibly to prevent critical situations and react to them properly. This is where semi-automated and automated technologies can make the difference. The work presented in this chapter is intended as a scientific step towards such an objective.

  8. Sedimentology and reservoir heterogeneity of a valley-fill deposit-A field guide to the Dakota Sandstone of the San Rafael Swell, Utah

    USGS Publications Warehouse

    Kirschbaum, Mark A.; Schenk, Christopher J.

    2010-01-01

    Valley-fill deposits form a significant class of hydrocarbon reservoirs in many basins of the world. Maximizing recovery of fluids from these reservoirs requires an understanding of the scales of fluid-flow heterogeneity present within the valley-fill system. The Upper Cretaceous Dakota Sandstone in the San Rafael Swell, Utah contains well exposed, relatively accessible outcrops that allow a unique view of the external geometry and internal complexity of a set of rocks interpreted to be deposits of an incised valley fill. These units can be traced on outcrop for tens of miles, and individual sandstone bodies are exposed in three dimensions because of modern erosion in side canyons in a semiarid setting and by exhumation of the overlying, easily erodible Mancos Shale. The Dakota consists of two major units: (1) a lower amalgamated sandstone facies dominated by large-scale cross stratification with several individual sandstone bodies ranging in thickness from 8 to 28 feet, ranging in width from 115 to 150 feet, and having lengths as much as 5,000 feet, and (2) an upper facies composed of numerous mud-encased lenticular sandstones, dominated by ripple-scale lamination, in bedsets ranging in thickness from 5 to 12 feet. The lower facies is interpreted to be fluvial, probably of mainly braided stream origin that exhibits multiple incisions amalgamated into a complex sandstone body. The upper facies has lower energy, probably anastomosed channels encased within alluvial and coastal-plain floodplain sediments. The Dakota valley-fill complex has multiple scales of heterogeneity that could affect fluid flow in similar oil and gas subsurface reservoirs. The largest scale heterogeneity is at the formation level, where the valley-fill complex is sealed within overlying and underlying units. Within the valley-fill complex, there are heterogeneities between individual sandstone bodies, and at the smallest scale, internal heterogeneities within the bodies themselves. These different scales of fluid-flow compartmentalization present a challenge to hydrocarbon exploration targeting paleovalley deposits, and producing fields containing these types of reservoirs may have significant bypassed pay, especially where well spacing is large.

  9. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
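
    For orientation, the adjoint sensitivity idea the study evaluates can be summarized in a generic continuous-time form (a textbook statement, not the authors' exact discrete-measurement formulation): for a model $\dot{x} = f(x,\theta,t)$ with objective $J(\theta)=\int_0^T g(x,\theta,t)\,dt$, one solves the backward adjoint problem

        \[
        \dot{\lambda}(t) = -\left(\frac{\partial f}{\partial x}\right)^{\!\top}\lambda(t) - \left(\frac{\partial g}{\partial x}\right)^{\!\top}, \qquad \lambda(T)=0,
        \]

    and then obtains the full gradient from

        \[
        \frac{dJ}{d\theta} = \int_0^T \left(\frac{\partial g}{\partial \theta} + \lambda(t)^{\top}\,\frac{\partial f}{\partial \theta}\right) dt .
        \]

    A single forward solve plus a single backward solve yields the gradient with respect to all parameters at once, which is why the cost is effectively independent of the number of parameters, as the abstract emphasizes.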

  10. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  11. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Background: The recent explosion of biological data brings a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a great bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate scheme for data partition and reduction is designed in our method in order to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies. PMID:24705246
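
    A small illustration of the two-stage idea (parallel similarity-matrix construction, then clustering on the precomputed similarities) is sketched below using Python's multiprocessing and scikit-learn's AffinityPropagation; the data, the choice of negative squared Euclidean distance as the similarity, and the process-pool layout are assumptions for illustration, not the paper's MPI/shared-memory implementation.

        # Illustrative sketch: build the similarity matrix in parallel, then run
        # affinity propagation on the precomputed similarities.
        import numpy as np
        from multiprocessing import Pool
        from sklearn.cluster import AffinityPropagation

        data = np.random.default_rng(1).normal(size=(500, 20))   # toy data set

        def similarity_row(i):
            # Negative squared Euclidean distance from sample i to every sample.
            diff = data - data[i]
            return -(diff * diff).sum(axis=1)

        if __name__ == "__main__":
            with Pool() as pool:                                  # one row per task
                S = np.vstack(pool.map(similarity_row, range(len(data))))
            labels = AffinityPropagation(affinity="precomputed",
                                         random_state=0).fit_predict(S)
            print("clusters found:", len(set(labels)))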

  12. Methodological Problems of Nanotechnoscience

    NASA Astrophysics Data System (ADS)

    Gorokhov, V. G.

    Recently, we have reported on the definitions of nanotechnology as a new type of NanoTechnoScience and on nanotheory as a cluster of different natural and engineering theories. Nanotechnology is not only a new type of scientific-engineering discipline; it also evolves in a “nonclassical” way. Nanoontology, or the nano scientific world view, serves as a methodological orientation for choosing the theoretical means and methods for solving scientific and engineering problems. This allows one to change from one explanation and scientific world view to another without any problems. Thus, nanotechnology is both a field of scientific knowledge and a sphere of engineering activity; in other words, NanoTechnoScience is similar to Systems Engineering as the analysis and design of large-scale, complex, man/machine systems, but for micro- and nanosystems. Nano systems engineering, like macro systems engineering, includes not only systems design but also complex research. The design orientation influences the change of priorities in complex research and in the relation to knowledge, not only “knowledge about something” but also knowledge as a means of activity: from the beginning, control and restructuring of matter at the nano-scale is a necessary element of nanoscience.

  13. Synchronization in node of complex networks consist of complex chaotic system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Qiang, E-mail: qiangweibeihua@163.com; Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin; Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the networks achieve synchronization, different components of the complex state variables synchronize up to different complex scaling functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.
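
    To make the notion of a complex scaling function concrete, the synchronization target in this setting is usually stated as follows (a generic formulation given here as an assumed illustration, not necessarily the authors' exact controller): for a drive node with complex state $x(t)$ and a response node with state $y(t)$, the controlled error

        \[
        e(t) = y(t) - \varphi(t)\,x(t), \qquad \varphi(t) \in \mathbb{C},
        \]

    is required to satisfy $\lim_{t\to\infty}\|e(t)\| = 0$. A real-valued $\varphi$ recovers ordinary projective synchronization, while a genuinely complex $\varphi(t)$ simultaneously scales the modulus and rotates the phase of the drive state, which is the generalization the paper exploits.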

  14. Systems Biology-Based Investigation of Host-Plasmodium Interactions.

    PubMed

    Smith, Maren L; Styczynski, Mark P

    2018-05-18

    Malaria is a serious, complex disease caused by parasites of the genus Plasmodium. Plasmodium parasites affect multiple tissues as they evade immune responses, replicate, sexually reproduce, and transmit between vertebrate and invertebrate hosts. The explosion of omics technologies has enabled large-scale collection of Plasmodium infection data, revealing systems-scale patterns, mechanisms of pathogenesis, and the ways that host and pathogen affect each other. Here, we provide an overview of recent efforts using systems biology approaches to study host-Plasmodium interactions and the biological themes that have emerged from these efforts. We discuss some of the challenges in using systems biology for this goal, key research efforts needed to address those issues, and promising future malaria applications of systems biology. Copyright © 2018 Elsevier Ltd. All rights reserved.

  15. Oscillatory mechanisms of process binding in memory.

    PubMed

    Klimesch, Wolfgang; Freunberger, Roman; Sauseng, Paul

    2010-06-01

    A central topic in cognitive neuroscience is the question of which processes underlie large-scale communication within and between different neural networks. The basic assumption is that oscillatory phase synchronization plays an important role in process binding, the transient linking of different cognitive processes, which may be considered a special type of large-scale communication. We investigate this question for memory processes on the basis of different types of oscillatory synchronization mechanisms. The reviewed findings suggest that theta and alpha phase coupling (and phase reorganization) reflect control processes in two large memory systems, a working memory system and a complex knowledge system that comprises semantic long-term memory. It is suggested that alpha phase synchronization may be interpreted in terms of processes that coordinate top-down control (a process guided by expectancy to focus on relevant search areas) and access to memory traces (a process leading to the activation of a memory trace). An analogous interpretation is suggested for theta oscillations and the controlled access to episodic memories. Copyright (c) 2009 Elsevier Ltd. All rights reserved.
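
    The oscillatory phase synchronization discussed in this review is commonly quantified with a phase-locking value (PLV) between two band-limited signals; the following is a minimal sketch with synthetic sinusoids standing in for recorded EEG/MEG data (signal parameters are hypothetical, and real data would first be band-pass filtered into the theta or alpha band).

        # Illustrative sketch: phase-locking value between two narrow-band signals
        # using instantaneous phases from the Hilbert transform.
        import numpy as np
        from scipy.signal import hilbert

        fs, duration = 250.0, 10.0                  # sampling rate (Hz), length (s)
        t = np.arange(0, duration, 1.0 / fs)
        carrier = 2 * np.pi * 6.0 * t               # a 6 Hz "theta" oscillation
        rng = np.random.default_rng(2)
        x = np.sin(carrier) + 0.3 * rng.normal(size=t.size)
        y = np.sin(carrier + 0.8) + 0.3 * rng.normal(size=t.size)

        phase_x = np.angle(hilbert(x))              # instantaneous phase of x
        phase_y = np.angle(hilbert(y))
        plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
        print(f"phase-locking value: {plv:.2f}")    # close to 1 = tight phase coupling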

  16. Patterns and Variation in Benthic Biodiversity in a Large Marine Ecosystem

    PubMed Central

    Lee, Jonathan D.

    2015-01-01

    While there is a persistent inverse relationship between latitude and species diversity across many taxa and ecosystems, deviations from this norm offer an opportunity to understand the conditions that contribute to large-scale diversity patterns. Marine systems, in particular, provide such an opportunity, as marine diversity does not always follow a strict latitudinal gradient, perhaps because several hypothesized drivers of the latitudinal diversity gradient are uncorrelated in marine systems. We used a large scale public monitoring dataset collected over an eight year period to examine benthic marine faunal biodiversity patterns for the continental shelf (55–183 m depth) and slope habitats (184–1280 m depth) off the US West Coast (47°20′N—32°40′N). We specifically asked whether marine biodiversity followed a strict latitudinal gradient, and if these latitudinal patterns varied across depth, in different benthic substrates, and over ecological time scales. Further, we subdivided our study area into three smaller regions to test whether coast-wide patterns of biodiversity held at regional scales, where local oceanographic processes tend to influence community structure and function. Overall, we found complex patterns of biodiversity on both the coast-wide and regional scales that differed by taxonomic group. Importantly, marine biodiversity was not always highest at low latitudes. We found that latitude, depth, substrate, and year were all important descriptors of fish and invertebrate diversity. Invertebrate richness and taxonomic diversity were highest at high latitudes and in deeper waters. Fish richness also increased with latitude, but exhibited a hump-shaped relationship with depth, increasing with depth up to the continental shelf break, ~200 m depth, and then decreasing in deeper waters. We found relationships between fish taxonomic and functional diversity and latitude, depth, substrate, and time at the regional scale, but not at the coast-wide scale, suggesting that coast-wide patterns can obscure important correlates at smaller scales. Our study provides insight into complex diversity patterns of the deep water soft substrate benthic ecosystems off the US West Coast. PMID:26308521

  17. Interpretation of Recent Temperature Trends in California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duffy, P B; Bonfils, C; Lobell, D

    2007-09-21

    Regional-scale climate change and associated societal impacts result from large-scale (e.g. well-mixed greenhouse gases) and more local (e.g. land-use change) 'forcing' (perturbing) agents. It is essential to understand these forcings and climate responses to them, in order to predict future climate and societal impacts. California is a fine example of the complex effects of multiple climate forcings. The State's natural climate is diverse, highly variable, and strongly influenced by ENSO. Humans are perturbing this complex system through urbanization, irrigation, and emission of multiple types of aerosols and greenhouse gases. Despite better-than-average observational coverage, we are only beginning to understand the manifestations of these forcings in California's temperature record.

  18. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking about complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years into the past. Not only do geologists think in the abstract by virtue of the vast time spans; most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and the life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  19. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE PAGES

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-03-27

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data lead to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.
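
    The fitting step described above (generate property data from an off-the-shelf package, choose simple functional forms, fit the data to them) can be pictured with the toy sketch below; the equilibrium-pressure surrogate, the chosen polynomial form, and the synthetic data are all assumptions for illustration and do not reproduce the paper's actual correlations or the TOUGH+ interface.

        # Illustrative sketch: fit a simple parametric form ln P_eq(T, x) to a
        # synthetic data set standing in for output of a hydrate properties package.
        import numpy as np

        rng = np.random.default_rng(4)
        T = rng.uniform(273.0, 290.0, 400)          # temperature samples (K)
        x = rng.uniform(0.0, 1.0, 400)              # CO2 fraction in the hydrate former
        lnP = 40.0 - 8000.0 / T - 0.8 * x + 0.3 * x**2 + rng.normal(0.0, 0.02, 400)

        # Candidate form: ln P = a + b/T + c*x + d*x^2 + e*x/T
        A = np.column_stack([np.ones_like(T), 1.0 / T, x, x**2, x / T])
        coef, *_ = np.linalg.lstsq(A, lnP, rcond=None)
        rms = np.sqrt(np.mean((lnP - A @ coef) ** 2))
        print("fitted coefficients:", np.round(coef, 3))
        print("RMS residual in ln P:", round(float(rms), 4))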

  20. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    NASA Astrophysics Data System (ADS)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    2017-06-01

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data lead to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this work, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. The mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  1. Fast parametric relationships for the large-scale reservoir simulation of mixed CH4-CO2 gas hydrate systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reagan, Matthew T.; Moridis, George J.; Seim, Katie S.

    A recent Department of Energy field test on the Alaska North Slope has increased interest in the ability to simulate systems of mixed CO2-CH4 hydrates. However, the physically realistic simulation of mixed-hydrate systems is not yet a fully solved problem. Limited quantitative laboratory data lead to the use of various ab initio, statistical mechanical, or other mathematical representations of mixed-hydrate phase behavior. Few of these methods are suitable for inclusion in reservoir simulations, particularly for systems with a large number of grid elements, 3D systems, or systems with complex geometric configurations. In this paper, we present a set of fast parametric relationships describing the thermodynamic properties and phase behavior of a mixed methane-carbon dioxide hydrate system. We use well-known, off-the-shelf hydrate physical properties packages to generate a sufficiently large dataset, select the most convenient and efficient mathematical forms, and fit the data to those forms to create a physical properties package suitable for inclusion in the TOUGH+ family of codes. Finally, the mapping of the phase and thermodynamic space reveals the complexity of the mixed-hydrate system and allows understanding of the thermodynamics at a level beyond what much of the existing laboratory data and literature currently offer.

  2. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled within the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
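
    For reference, the Oustaloup method mentioned above approximates the fractional operator by a band-limited rational filter; a commonly quoted form (given here under the usual conventions, which may differ in detail from the variant used in the paper) is

        \[
        s^{\alpha} \approx K \prod_{k=-N}^{N} \frac{s + \omega_k'}{s + \omega_k}, \qquad 0 < \alpha < 1,
        \]

    with, over the approximation band $[\omega_b, \omega_h]$,

        \[
        \omega_k' = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k + N + \frac{1}{2}(1-\alpha)}{2N+1}}, \qquad
        \omega_k = \omega_b \left(\frac{\omega_h}{\omega_b}\right)^{\frac{k + N + \frac{1}{2}(1+\alpha)}{2N+1}}, \qquad
        K = \omega_h^{\alpha}.
        \]

    The resulting integer-order transfer function is what the step-response model vectors, and hence the distributed PID-type dynamic matrix controllers, are built on.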

  3. A sense of life: computational and experimental investigations with models of biochemical and evolutionary processes.

    PubMed

    Mishra, Bud; Daruwala, Raoul-Sam; Zhou, Yi; Ugel, Nadia; Policriti, Alberto; Antoniotti, Marco; Paxia, Salvatore; Rejali, Marc; Rudra, Archisman; Cherepinsky, Vera; Silver, Naomi; Casey, William; Piazza, Carla; Simeoni, Marta; Barbano, Paolo; Spivak, Marina; Feng, Jiawu; Gill, Ofer; Venkatesh, Mysore; Cheng, Fang; Sun, Bing; Ioniata, Iuliana; Anantharaman, Thomas; Hubbard, E Jane Albert; Pnueli, Amir; Harel, David; Chandru, Vijay; Hariharan, Ramesh; Wigler, Michael; Park, Frank; Lin, Shih-Chieh; Lazebnik, Yuri; Winkler, Franz; Cantor, Charles R; Carbone, Alessandra; Gromov, Mikhael

    2003-01-01

    We collaborate in a research program aimed at creating a rigorous framework, experimental infrastructure, and computational environment for understanding, experimenting with, manipulating, and modifying a diverse set of fundamental biological processes at multiple scales and spatio-temporal modes. The novelty of our research is based on an approach that (i) requires coevolution of experimental science and theoretical techniques and (ii) exploits a certain universality in biology guided by a parsimonious model of evolutionary mechanisms operating at the genomic level and manifesting at the proteomic, transcriptomic, phylogenic, and other higher levels. Our current program in "systems biology" endeavors to marry large-scale biological experiments with the tools to ponder and reason about large, complex, and subtle natural systems. To achieve this ambitious goal, ideas and concepts are combined from many different fields: biological experimentation, applied mathematical modeling, computational reasoning schemes, and large-scale numerical and symbolic simulations. From a biological viewpoint, the basic issues are many: (i) understanding common and shared structural motifs among biological processes; (ii) modeling biological noise due to interactions among a small number of key molecules or loss of synchrony; (iii) explaining the robustness of these systems in spite of such noise; and (iv) cataloging multistatic behavior and adaptation exhibited by many biological processes.

  4. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability, and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad range of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise the experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  5. Integrated Data Modeling and Simulation on the Joint Polar Satellite System Program

    NASA Technical Reports Server (NTRS)

    Roberts, Christopher J.; Boyce, Leslye; Smith, Gary; Li, Angela; Barrett, Larry

    2012-01-01

    The Joint Polar Satellite System is a modern, large-scale, complex, multi-mission aerospace program, and presents a variety of design, testing and operational challenges due to: (1) System Scope: multi-mission coordination and role, responsibility and accountability challenges stemming from porous or ill-defined system and organizational boundaries (including foreign policy interactions); (2) Degree of Concurrency: design, implementation, integration, verification and operation occurring simultaneously, at multiple scales in the system hierarchy; (3) Multi-Decadal Lifecycle: technical obsolescence, reliability and sustainment concerns, including those related to the organizational and industrial base. Additionally, these systems tend to become embedded in the broader societal infrastructure, resulting in new system stakeholders with perhaps different preferences; (4) Barriers to Effective Communications: process and cultural issues that emerge due to geographic dispersion and as one spans boundaries including gov./contractor, NASA/other USG, and international relationships.

  6. Components for Atomistic-to-Continuum Multiscale Modeling of Flow in Micro- and Nanofluidic Systems

    DOE PAGES

    Adalsteinsson, Helgi; Debusschere, Bert J.; Long, Kevin R.; ...

    2008-01-01

    Micro- and nanofluidics pose a series of significant challenges for science-based modeling. Key among those are the wide separation of length- and timescales between interface phenomena and bulk flow and the spatially heterogeneous solution properties near solid-liquid interfaces. It is not uncommon for characteristic scales in these systems to span nine orders of magnitude from the atomic motions in particle dynamics up to evolution of mass transport at the macroscale level, making explicit particle models intractable for all but the simplest systems. Recently, atomistic-to-continuum (A2C) multiscale simulations have gained a lot of interest as an approach to rigorously handle particle-level dynamics while also tracking evolution of large-scale macroscale behavior. While these methods are clearly not applicable to all classes of simulations, they are finding traction in systems in which tight-binding, and physically important, dynamics at system interfaces have complex effects on the slower-evolving large-scale evolution of the surrounding medium. These conditions allow decomposition of the simulation into discrete domains, either spatially or temporally. In this paper, we describe how features of domain decomposed simulation systems can be harnessed to yield flexible and efficient software for multiscale simulations of electric field-driven micro- and nanofluidics.

  7. III. FROM SMALL TO BIG: METHODS FOR INCORPORATING LARGE SCALE DATA INTO DEVELOPMENTAL SCIENCE.

    PubMed

    Davis-Kean, Pamela E; Jager, Justin

    2017-06-01

    For decades, developmental science has been based primarily on relatively small-scale data collections with children and families. Part of the reason for the dominance of this type of data collection is the complexity of collecting cognitive and social data on infants and small children. These small data sets are limited in both the power to detect differences and the demographic diversity to generalize clearly and broadly. Thus, in this chapter we discuss the value of using existing large-scale data sets to test the complex questions of child development and how to develop future large-scale data sets that are both representative and can answer the important questions of developmental scientists. © 2017 The Society for Research in Child Development, Inc.

  8. System Alignment and Consensus Discourses in Reforms: "School Effectiveness Frameworks and Instructional Rounds." Philosophical Responses with Oakeshott, Mouffe and Rancière

    ERIC Educational Resources Information Center

    Stickney, Jeff

    2015-01-01

    In this commentary, 26-year Ontario district teacher Jeff Stickney begins by surveying balanced approaches to large- and small-scale education reforms: contrasting linear alignment with more complex ecological models, drawing on recent work by Andy Hargreaves (2008; Hargreaves & Shirley, 2012). Next, through a brief case study of School…

  9. epiDMS: Data Management and Analytics for Decision-Making From Epidemic Spread Simulation Ensembles.

    PubMed

    Liu, Sicong; Poccia, Silvestro; Candan, K Selçuk; Chowell, Gerardo; Sapino, Maria Luisa

    2016-12-01

    Carefully calibrated large-scale computational models of epidemic spread represent a powerful tool to support the decision-making process during epidemic emergencies. Epidemic models are being increasingly used for generating forecasts of the spatial-temporal progression of epidemics at different spatial scales and for assessing the likely impact of different intervention strategies. However, the management and analysis of simulation ensembles stemming from large-scale computational models pose challenges, particularly when dealing with multiple interdependent parameters, spanning multiple layers and geospatial frames, affected by complex dynamic processes operating at different resolutions. We describe and illustrate with examples a novel epidemic simulation data management system, epiDMS, that was developed to address the challenges that arise from the need to generate, search, visualize, and analyze, in a scalable manner, large volumes of epidemic simulation ensembles and observations during the progression of an epidemic. epiDMS is a publicly available system that facilitates management and analysis of large epidemic simulation ensembles. epiDMS aims to fill an important hole in decision-making during healthcare emergencies by enabling critical services with significant economic and health impact. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  10. Enhancing metaproteomics-The value of models and defined environmental microbial systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbst, Florian-Alexander; Lünsmann, Vanessa; Kjeldal, Henrik

    Metaproteomics, the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time, has provided new means to study complex microbial communities and to unravel these black boxes. Some new technical challenges arose that were not an issue for classical proteome analytics; these can be tackled by the application of different model systems. Here, we review different current and future model systems for metaproteome analysis. We introduce model systems for clinical and biotechnological research questions, including acid mine drainage, anaerobic digesters, and activated sludge, following a short introduction to microbial communities and metaproteomics. Model systems are useful for evaluating the challenges encountered within (but not limited to) metaproteomics, including species complexity and coverage, biomass availability, and reliable protein extraction. Moreover, the implementation of model systems can be considered a step forward in better understanding microbial community responses and the ecological functions of single member organisms. In the future, improvements are necessary to fully explore complex environmental systems by metaproteomics.

  11. Enhancing metaproteomics-The value of models and defined environmental microbial systems

    DOE PAGES

    Herbst, Florian-Alexander; Lünsmann, Vanessa; Kjeldal, Henrik; ...

    2016-01-21

    Metaproteomics, the large-scale characterization of the entire protein complement of environmental microbiota at a given point in time, has provided new means to study complex microbial communities and to unravel these black boxes. Some new technical challenges arose that were not an issue for classical proteome analytics; these can be tackled by the application of different model systems. Here, we review different current and future model systems for metaproteome analysis. We introduce model systems for clinical and biotechnological research questions, including acid mine drainage, anaerobic digesters, and activated sludge, following a short introduction to microbial communities and metaproteomics. Model systems are useful for evaluating the challenges encountered within (but not limited to) metaproteomics, including species complexity and coverage, biomass availability, and reliable protein extraction. Moreover, the implementation of model systems can be considered a step forward in better understanding microbial community responses and the ecological functions of single member organisms. In the future, improvements are necessary to fully explore complex environmental systems by metaproteomics.

  12. Geocomputation over Hybrid Computer Architecture and Systems: Prior Works and On-going Initiatives at UARK

    NASA Astrophysics Data System (ADS)

    Shi, X.

    2015-12-01

    As NSF indicated, "Theory and experimentation have for centuries been regarded as two fundamental pillars of science. It is now widely recognized that computational and data-enabled science forms a critical third pillar." Geocomputation is the third pillar of GIScience and the geosciences. With the exponential growth of geodata, the challenge of scalable, high-performance computing for big data analytics has become urgent, because many research activities are constrained by software or tools that cannot even complete the computation process. Heterogeneous geodata integration and analytics obviously magnify the complexity and the operational time frame. Many large-scale geospatial problems may not be processable at all if the computer system does not have sufficient memory or computational power. Emerging computer architectures, such as Intel's Many Integrated Core (MIC) architecture and the Graphics Processing Unit (GPU), and advanced computing technologies provide promising solutions that employ massive parallelism and hardware resources to achieve scalability and high performance for data-intensive computing over large spatiotemporal and social media data. Exploring novel algorithms and deploying the solutions in massively parallel computing environments to achieve scalable data processing and analytics over large-scale, complex, and heterogeneous geodata with consistent quality and high performance has been the central theme of our research team in the Department of Geosciences at the University of Arkansas (UARK). New multi-core architectures combined with application accelerators hold the promise of achieving scalability and high performance by exploiting task- and data-level parallelism that are not supported by conventional computing systems. Such a parallel or distributed computing environment is particularly suitable for large-scale geocomputation over big data, as shown by our prior works, while the potential of such advanced infrastructure remains unexplored in this domain. Within this presentation, our prior and on-going initiatives will be summarized to exemplify how we exploit multicore CPUs, GPUs, and MICs, and clusters of CPUs, GPUs and MICs, to accelerate geocomputation in different applications.

  13. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  14. Professional development for science teachers.

    PubMed

    Wilson, Suzanne M

    2013-04-19

    The Next Generation Science Standards will require large-scale professional development (PD) for all science teachers. Existing research on effective teacher PD suggests factors that are associated with substantial changes in teacher knowledge and practice, as well as students' science achievement. But the complexity of the U.S. educational system continues to thwart the search for a straightforward answer to the question of how to support teachers. Interventions that take a systemic approach to reform hold promise for improving PD effectiveness.

  15. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE PAGES

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.; ...

    2017-08-18

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs, such as deep saline formations, functioning as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS, aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool is the incorporation of stacked geological reservoir systems, with explicit consideration of the processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  16. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach.

    PubMed

    Plehn, Thomas; May, Volkhard

    2017-01-21

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large-scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in the literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation, which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with data from the literature. Subsequently, a strategy for the proper computational handling of the linear stochastic Schrödinger equation is presented, particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  17. Facilitating CCS Business Planning by Extending the Functionality of the SimCCS Integrated System Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellett, Kevin M.; Middleton, Richard S.; Stauffer, Philip H.

    The application of integrated system models for evaluating carbon capture and storage technology has expanded steadily over the past few years. To date, such models have focused largely on hypothetical scenarios of complex source-sink matching involving numerous large-scale CO2 emitters and high-volume, continuous reservoirs, such as deep saline formations, functioning as geologic sinks for carbon storage. Though these models have provided unique insight on the potential costs and feasibility of deploying complex networks of integrated infrastructure, there remains a pressing need to translate such insight to the business community if this technology is ever to achieve a truly meaningful impact in greenhouse gas mitigation. Here, we present a new integrated system modelling tool termed SimCCUS, aimed at providing crucial decision support for businesses by extending the functionality of a previously developed model called SimCCS. The primary innovation of the SimCCUS tool is the incorporation of stacked geological reservoir systems, with explicit consideration of the processes and costs associated with the operation of multiple CO2 utilization and storage targets from a single geographic location. Such locations provide significant efficiencies through economies of scale, effectively minimizing CO2 storage costs while simultaneously maximizing revenue streams via the utilization of CO2 as a commodity for enhanced hydrocarbon recovery.

  18. Charge and energy migration in molecular clusters: A stochastic Schrödinger equation approach

    NASA Astrophysics Data System (ADS)

    Plehn, Thomas; May, Volkhard

    2017-01-01

    The performance of stochastic Schrödinger equations for simulating dynamic phenomena in large-scale open quantum systems is studied. Going beyond small system sizes, commonly used master equation approaches become inadequate. In this regime, wave function based methods profit from their inherent scaling benefit and present a promising tool to study, for example, exciton and charge carrier dynamics in huge and complex molecular structures. In the first part of this work, a strict analytic derivation is presented. It starts with the finite temperature reduced density operator expanded in coherent reservoir states and ends up with two linear stochastic Schrödinger equations. Both equations are valid in the weak and intermediate coupling limit and can be properly related to two existing approaches in the literature. In the second part, we focus on the numerical solution of these equations. The main issue is the missing norm conservation of the wave function propagation, which may lead to numerical discrepancies. To illustrate this, we simulate the exciton dynamics in the Fenna-Matthews-Olson complex in direct comparison with data from the literature. Subsequently, a strategy for the proper computational handling of the linear stochastic Schrödinger equation is presented, particularly with regard to large systems. Here, we study charge carrier transfer kinetics in realistic hybrid organic/inorganic para-sexiphenyl/ZnO systems of different extension.

  19. Parameter estimation procedure for complex non-linear systems: calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch.

    PubMed

    Abusam, A; Keesman, K J; van Straten, G; Spanjers, H; Meinema, K

    2001-01-01

    When applied to large simulation models, the process of parameter estimation is also called calibration. Calibration of complex non-linear systems, such as activated sludge plants, is often not an easy task. On the one hand, manual calibration of such complex systems is usually time-consuming, and its results are often not reproducible. On the other hand, conventional automatic calibration methods are not always straightforward and often hampered by local minima problems. In this paper a new straightforward and automatic procedure, which is based on the response surface method (RSM) for selecting the best identifiable parameters, is proposed. In RSM, the process response (output) is related to the levels of the input variables in terms of a first- or second-order regression model. Usually, RSM is used to relate measured process output quantities to process conditions. However, in this paper RSM is used for selecting the dominant parameters, by evaluating parameters sensitivity in a predefined region. Good results obtained in calibration of ASM No. 1 for N-removal in a full-scale oxidation ditch proved that the proposed procedure is successful and reliable.
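
    The parameter-screening idea can be pictured with a small sketch: sample the predefined parameter region, fit a first-order response surface to the simulated output, and rank parameters by the magnitude of their coefficients in coded units. The toy plant model, parameter ranges, and ranking rule below are illustrative stand-ins, not the ASM No. 1 calibration itself.

        # Illustrative sketch: response-surface screening of dominant parameters.
        import numpy as np

        rng = np.random.default_rng(5)

        def plant_output(p):
            # Toy nonlinear stand-in for the activated-sludge simulator:
            # strongly sensitive to p[0], weakly sensitive to p[2].
            return 3.0 * p[0] + 0.5 * p[1] ** 2 + 0.05 * p[2] + 0.1 * p[0] * p[1]

        lo = np.array([0.5, 0.2, 1.0])               # lower bounds of the region
        hi = np.array([1.5, 0.8, 3.0])               # upper bounds of the region

        X = rng.uniform(-1.0, 1.0, size=(200, 3))    # design in coded units [-1, 1]
        P = lo + (X + 1.0) / 2.0 * (hi - lo)         # map to physical units
        y = np.array([plant_output(p) for p in P])

        # First-order response surface y ~ b0 + sum_i b_i x_i (coded units).
        A = np.column_stack([np.ones(len(y)), X])
        b, *_ = np.linalg.lstsq(A, y, rcond=None)
        ranking = np.argsort(-np.abs(b[1:]))
        print("parameters ranked by influence (0-indexed):", ranking)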

  20. Tailoring Enterprise Systems Engineering Policy for Project Scale and Complexity

    NASA Technical Reports Server (NTRS)

    Cox, Renee I.; Thomas, L. Dale

    2014-01-01

    Space systems are characterized by varying degrees of scale and complexity. Accordingly, cost-effective implementation of systems engineering also varies depending on scale and complexity. Recognizing that systems engineering and integration happen everywhere and at all levels of a given system, and that the life cycle is an integrated process necessary to mature a design, the National Aeronautics and Space Administration's (NASA's) Marshall Space Flight Center (MSFC) has developed a suite of customized implementation approaches based on project scale and complexity. While it may be argued that a top-level systems engineering process is common to, and indeed desirable across, an enterprise for all space systems, implementation of that top-level process and the associated products developed as a result differ from system to system. The implementation approaches used for developing a scientific instrument necessarily differ from those used for a space station.

  1. Management applications of discontinuity theory

    EPA Pesticide Factsheets

    1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict and entail large degrees of uncertainty. Such uncertainty requires robust tools for management to sustain ecosystem goods and services and maintain resilient ecosystems. 2. We propose an approach based on discontinuity theory that accounts for patterns and processes at distinct spatial and temporal scales, an inherent property of ecological systems. Discontinuity theory has not been applied in natural resource management and could therefore improve ecosystem management because it explicitly accounts for ecological complexity. 3. Synthesis and applications. We highlight the application of discontinuity approaches for meeting management goals. Specifically, discontinuity approaches have significant potential to measure and thus understand the resilience of ecosystems, to objectively identify critical scales of space and time in ecological systems at which human impact might be most severe, to provide warning indicators of regime change, to help predict and understand biological invasions and extinctions, and to focus monitoring efforts. Discontinuity theory can complement current approaches, providing a broader paradigm for ecological management and conservation. This manuscript provides insight on using discontinuity approaches to aid in managing complex ecological systems. In part

  2. Service Discovery Oriented Management System Construction Method

    NASA Astrophysics Data System (ADS)

    Li, Huawei; Ren, Ying

    2017-10-01

    In order to address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for distributed, service-discovery-oriented management systems. Three measurement functions are proposed to compute nearest-neighbour user similarity at different levels. In view of the currently low efficiency of service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.

  3. A brief historical introduction to Euler's formula for polyhedra, topology, graph theory and networks

    NASA Astrophysics Data System (ADS)

    Debnath, Lokenath

    2010-09-01

    This article is essentially devoted to a brief historical introduction to Euler's formula for polyhedra, topology, the theory of graphs, and networks, with many examples from the real world. The celebrated Königsberg seven-bridge problem and some of the basic properties of graphs and networks, useful for understanding the macroscopic behaviour of real physical systems, are included. We also mention some important and modern applications of graph theory or network problems, from transportation to telecommunications. Graphs or networks are effectively used as powerful tools in industrial, electrical and civil engineering, and in communication networks and the planning of business and industry. Graph theory and combinatorics can be used to understand the changes that occur in many large and complex scientific, technical and medical systems. With the advent of fast large computers and the ubiquitous Internet consisting of a very large network of computers, large-scale complex optimization problems can be modelled in terms of graphs or networks and then solved by algorithms available in graph theory. Many large and more complex combinatorial problems dealing with the possible arrangements of situations of various kinds, and computing the number and properties of such arrangements, can be formulated in terms of networks. The Knight's tour problem, Hamilton's tour problem, the problem of magic squares, the Euler Graeco-Latin squares problem and their modern developments in the twentieth century are also included.

  4. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank

    2016-03-01

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  5. A new initiating system based on [(SiMes)Ru(PPh3)(Ind)Cl2] combined with azo-bis-isobutyronitrile in the polymerization and copolymerization of styrene and methyl methacrylate.

    PubMed

    Al-Majid, Abdullah M; Shamsan, Waseem Sharaf; Al-Odayn, Abdel-Basit Mohammed; Nahra, Fady; Aouak, Taieb; Nolan, Steven P

    2017-01-01

    The homopolymerization and copolymerization of styrene and methyl methacrylate were initiated, for the first time, by the combination of azo-bis-isobutyronitrile (AIBN) with the [(SiMes)Ru(PPh3)(Ind)Cl2] complex. The reactions were successfully carried out on a large scale in the presence of this complex at 80 °C. The data obtained indicate that the association of AIBN with the ruthenium complex considerably reduces transfer reactions and leads to controlled radical polymerization and well-defined polymers.

  6. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, together with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior.
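
    As a minimal, hedged illustration of parameter sensitivity analysis, the sketch below estimates normalized sensitivity coefficients for a simple logistic population model using central finite differences; the model, parameter values, and step size are illustrative assumptions, not the models surveyed in the report.

      # Sketch: normalized parameter sensitivity d(ln x)/d(ln p) of a simple
      # logistic population model, estimated by central finite differences.
      import numpy as np

      def simulate(r, K, x0=10.0, dt=0.1, steps=200):
          """Integrate dx/dt = r*x*(1 - x/K) with forward Euler."""
          x = x0
          for _ in range(steps):
              x += dt * r * x * (1.0 - x / K)
          return x

      def sensitivity(param, base, rel_step=1e-3):
          """Normalized sensitivity of the final state to one parameter."""
          p = base[param]
          hi, lo = dict(base), dict(base)
          hi[param] = p * (1 + rel_step)
          lo[param] = p * (1 - rel_step)
          dxdp = (simulate(**hi) - simulate(**lo)) / (2 * p * rel_step)
          return dxdp * p / simulate(**base)

      base = {"r": 0.5, "K": 100.0}
      for name in base:
          print(name, round(sensitivity(name, base), 4))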

  7. Self-assembly kinetics of DNA functionalised liposomes

    NASA Astrophysics Data System (ADS)

    Mognetti, B. M.; Bachmann, S. J.; Kotar, J.; Parolini, L.; Petitzon, M.; Cicuta, P.; di Michele, L.

    DNA has been widely used to program state-dependent interactions between functionalised Brownian units, resulting in responsive systems featuring complex phase behaviours. In this talk I will show how DNA can also be used to control aggregation kinetics in systems of liposomes functionalised by three types of linkers that can bind simultaneously. In doing so, I will present a general coarse-graining strategy that allows calculating the adhesion free energy between pairs of compliant units functionalised by mobile binders. I will highlight the important role played by bilayer deformability and will calculate the free energy contribution due to the presence of complexes made of more than two binders. Finally, we will demonstrate the importance of explicitly accounting for the kinetics underlying ligand-receptor reactions when studying large-scale self-assembly. We acknowledge support from ULB, the Oppenheimer Fund, and the EPSRC Programme Grant CAPITALS No. EP/J017566/1.

  8. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user-level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
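
    The dependency-checked node idea described above can be sketched as follows; the class, method names, and caching scheme are assumptions made for illustration and are not the Minerva Framework's actual API.

      # Sketch of a dependency-versioned "data source" node: each node caches its
      # last result together with the versions of the nodes it depends on, and
      # recomputes only when a dependency has changed (illustrative, not Minerva).
      class Node:
          def __init__(self, name, compute, dependencies=()):
              self.name = name
              self.compute = compute            # callable taking dependency values
              self.dependencies = list(dependencies)
              self.version = 0
              self._cache = None
              self._seen_versions = None

          def set_value(self, value):
              """For leaf nodes holding raw engineering/physics parameters."""
              self._cache = value
              self.version += 1

          def get(self):
              dep_versions = tuple(d.version for d in self.dependencies)
              if self.dependencies and dep_versions != self._seen_versions:
                  self._cache = self.compute(*(d.get() for d in self.dependencies))
                  self._seen_versions = dep_versions
                  self.version += 1
              return self._cache

      # Usage: a toy "filterscope" signal derived from two parameter nodes.
      gain = Node("gain", None); gain.set_value(2.0)
      offset = Node("offset", None); offset.set_value(0.5)
      signal = Node("signal", lambda g, o: [g * x + o for x in (1, 2, 3)],
                    dependencies=[gain, offset])

      print(signal.get())        # computed once
      gain.set_value(3.0)        # dependency changed -> recomputed on next access
      print(signal.get())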

  9. Large-Scale Dynamics of the Magnetospheric Boundary: Comparisons between Global MHD Simulation Results and ISTP Observations

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.

    1998-01-01

    Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.

  10. Replica Exchange with Solute Tempering: Efficiency in Large Scale Systems

    PubMed Central

    Huang, Xuhui; Hagen, Morten; Kim, Byungchan; Friesner, Richard A.; Zhou, Ruhong; Berne, B. J.

    2009-01-01

    We apply the recently developed replica exchange with solute tempering (REST) to three large solvated peptide systems: an α-helix, a β-hairpin, and a TrpCage, with these peptides defined as the “central group”. We find that our original implementation of REST is not always more efficient than the replica exchange method (REM). Specifically, we find that exchanges between folded (F) and unfolded (U) conformations with vastly different structural energies are greatly reduced by the nonappearance of the water self-interaction energy in the replica exchange acceptance probabilities. REST, however, is expected to remain useful for a large class of systems for which the energy gap between the two states is not large, such as weakly bound protein–ligand complexes. Alternatively, a shell of water molecules can be incorporated into the central group, as discussed in the original paper. PMID:17439169
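
    For reference, the textbook acceptance criterion for swapping configurations between two temperature replicas is reproduced below; it illustrates why an energy contribution that does not appear in the exponent (as the water self-interaction energy does not in REST) cannot help bridge large energy gaps. This is the generic parallel-tempering expression, not the paper's REST-specific formula.

      % Textbook acceptance probability for swapping configurations x_i, x_j
      % between replicas at inverse temperatures beta_i, beta_j:
      \[
        P_{\mathrm{acc}}(i \leftrightarrow j)
          = \min\Bigl\{ 1,\; \exp\bigl[ (\beta_i - \beta_j)\,
              \bigl( E(x_i) - E(x_j) \bigr) \bigr] \Bigr\}
      \]
      % In REST the full potential E is replaced by an effective energy from which
      % the water-water (self-interaction) terms cancel in the exchange criterion.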

  11. Interrogation of Mammalian Protein Complex Structure, Function, and Membership Using Genome-Scale Fitness Screens.

    PubMed

    Pan, Joshua; Meyers, Robin M; Michel, Brittany C; Mashtalir, Nazar; Sizemore, Ann E; Wells, Jonathan N; Cassel, Seth H; Vazquez, Francisca; Weir, Barbara A; Hahn, William C; Marsh, Joseph A; Tsherniak, Aviad; Kadoch, Cigall

    2018-05-23

    Protein complexes are assemblies of subunits that have co-evolved to execute one or many coordinated functions in the cellular environment. Functional annotation of mammalian protein complexes is critical to understanding biological processes, as well as disease mechanisms. Here, we used genetic co-essentiality derived from genome-scale RNAi- and CRISPR-Cas9-based fitness screens performed across hundreds of human cancer cell lines to assign measures of functional similarity. From these measures, we systematically built and characterized functional similarity networks that recapitulate known structural and functional features of well-studied protein complexes and resolve novel functional modules within complexes lacking structural resolution, such as the mammalian SWI/SNF complex. Finally, by integrating functional networks with large protein-protein interaction networks, we discovered novel protein complexes involving recently evolved genes of unknown function. Taken together, these findings demonstrate the utility of genetic perturbation screens alone, and in combination with large-scale biophysical data, to enhance our understanding of mammalian protein complexes in normal and disease states. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
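
    A minimal sketch of deriving a functional-similarity network from fitness profiles is given below; the use of Pearson correlation, the synthetic data, and the threshold are illustrative assumptions rather than the scoring actually used in the study.

      # Sketch: functional similarity from a gene-by-cell-line fitness matrix,
      # scored here (illustratively) as Pearson correlation of fitness profiles.
      import numpy as np

      rng = np.random.default_rng(0)
      genes = ["SMARCA4", "SMARCB1", "ARID1A", "TP53", "GENE_X"]
      fitness = rng.normal(size=(len(genes), 200))          # rows: genes, cols: cell lines
      fitness[1] = fitness[0] + 0.3 * rng.normal(size=200)  # make two profiles similar

      corr = np.corrcoef(fitness)                            # pairwise Pearson correlation

      # Keep strongly co-essential pairs as network edges.
      threshold = 0.5
      edges = [(genes[i], genes[j], round(corr[i, j], 2))
               for i in range(len(genes)) for j in range(i + 1, len(genes))
               if corr[i, j] > threshold]
      print(edges)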

  12. Challenges and opportunities of power systems from smart homes to super-grids.

    PubMed

    Kuhn, Philipp; Huber, Matthias; Dorfner, Johannes; Hamacher, Thomas

    2016-01-01

    The world's power systems are facing a structural change including liberalization of markets and integration of renewable energy sources. This paper describes the challenges that lie ahead in this process and points out avenues for overcoming different problems at different scopes, ranging from individual homes to international super-grids. We apply energy system models at those different scopes and find a trade-off between technical and social complexity. Small-scale systems would require technological breakthroughs, especially for storage, but individual agents can and do already start to build and operate such systems. In contrast, large-scale systems could potentially be more efficient from a techno-economic point of view. However, new political frameworks are required that enable long-term cooperation among sovereign entities through mutual trust. Which scope first achieves its breakthrough is not clear yet.

  13. Hierarchical optimal control of large-scale nonlinear chemical processes.

    PubMed

    Ramezani, Mohammad Hossein; Sadati, Nasser

    2009-01-01

    In this paper, a new approach is presented for optimal control of large-scale chemical processes. In this approach, the chemical process is decomposed into smaller sub-systems at the first level, and a coordinator at the second level, for which a two-level hierarchical control strategy is designed. For this purpose, each sub-system in the first level can be solved separately, by using any conventional optimization algorithm. In the second level, the solutions obtained from the first level are coordinated using a new gradient-type strategy, which is updated by the error of the coordination vector. The proposed algorithm is used to solve the optimal control problem of a complex nonlinear chemical stirred tank reactor (CSTR), where its solution is also compared with the ones obtained using the centralized approach. The simulation results show the efficiency and the capability of the proposed hierarchical approach, in finding the optimal solution, over the centralized method.
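
    The two-level idea can be sketched, under illustrative assumptions, as independent quadratic sub-problems coordinated by a price-like variable that the second level updates with a gradient-type step on the coordination error; this toy scheme is not the paper's specific algorithm or its CSTR application.

      # Sketch of two-level hierarchical (price) coordination, illustrative only:
      # each sub-system solves its own small optimization for a given coordination
      # variable, and the coordinator updates that variable from the balance error.
      import numpy as np

      a = np.array([1.0, 2.0, 0.5])     # local quadratic cost weights
      c = np.array([4.0, 3.0, 2.0])     # local linear benefit coefficients
      demand = 5.0                      # shared resource the decisions must meet
      lam = 0.0                         # coordination variable (price)
      step = 0.2

      def solve_subsystem(ai, ci, lam):
          """Minimize 0.5*ai*u**2 - ci*u + lam*u  ->  closed-form local optimum."""
          return (ci - lam) / ai

      for it in range(200):
          u = solve_subsystem(a, c, lam)          # first level: independent solves
          error = u.sum() - demand                # coordination (balance) error
          lam += step * error                     # second level: gradient-type update

      print("price:", round(lam, 3), "allocations:", np.round(u, 3))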

  14. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled through concurrent processes. The numerical complexity can be further reduced via circuit decomposition and the parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may yield considerable gains, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.
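
    A minimal sketch of the underlying Newton-Raphson DC iteration, for a single-node circuit consisting of a current source, a resistor, and a diode, is given below; the device values are illustrative, and none of the sparse-matrix, BBD, or parallel machinery discussed in the paper is shown.

      # Sketch: Newton-Raphson DC solution of a single-node nonlinear circuit
      # (current source I_s feeding a resistor R and a diode to ground).
      # KCL residual: f(v) = v/R + Id(v) - I_s, iterated as v <- v - f(v)/f'(v).
      import math

      I_s, R = 1e-3, 10e3            # 1 mA source, 10 kOhm resistor (illustrative)
      I_sat, Vt = 1e-12, 0.02585     # diode saturation current and thermal voltage

      def f(v):
          return v / R + I_sat * (math.exp(v / Vt) - 1.0) - I_s

      def dfdv(v):
          return 1.0 / R + (I_sat / Vt) * math.exp(v / Vt)

      v = 0.6                        # initial guess
      for it in range(50):
          dv = f(v) / dfdv(v)
          v -= dv
          if abs(dv) < 1e-12:
              break
      print(f"node voltage ~ {v:.4f} V after {it + 1} iterations")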

  15. Transdisciplinary Application of Cross-Scale Resilience ...

    EPA Pesticide Factsheets

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems. Comparative analyses of complex systems have, in fact, demonstrated commonalities among distinctly different types of systems (Schneider & Kay 1994; Holling 2001; Lansing 2003; Foster 2005; Bullmore et al. 2009). Both biological and non-biological complex systems appear t

  16. Parallel Multivariate Spatio-Temporal Clustering of Large Ecological Datasets on Hybrid Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sreepathi, Sarat; Kumar, Jitendra; Mills, Richard T.

    A proliferation of data from vast networks of remote sensing platforms (satellites, unmanned aircraft systems (UAS), airborne etc.), observational facilities (meteorological, eddy covariance etc.), state-of-the-art sensors, and simulation models offers unprecedented opportunities for scientific discovery. Unsupervised classification is a widely applied data mining approach to derive insights from such data. However, classification of very large data sets is a complex computational problem that requires efficient numerical algorithms and implementations on high performance computing (HPC) platforms. Additionally, increasing power, space, cooling and efficiency requirements have led to the deployment of hybrid supercomputing platforms with complex architectures and memory hierarchies like the Titan system at Oak Ridge National Laboratory. The advent of such accelerated computing architectures offers new challenges and opportunities for big data analytics in general and, specifically, for large-scale cluster analysis in our case. Although there is an existing body of work on parallel cluster analysis, those approaches do not fully meet the needs imposed by the nature and size of our large data sets. Moreover, they had scaling limitations and were mostly limited to traditional distributed memory computing platforms. We present a parallel Multivariate Spatio-Temporal Clustering (MSTC) technique based on k-means cluster analysis that can target hybrid supercomputers like Titan. We developed a hybrid MPI, CUDA and OpenACC implementation that can utilize both CPU and GPU resources on computational nodes. We describe performance results on Titan that demonstrate the scalability and efficacy of our approach in processing large ecological data sets.
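
    As a hedged illustration of the clustering step only, the sketch below runs a plain NumPy k-means on a synthetic multivariate table; the data are invented, and none of the MPI, CUDA, or OpenACC parallelism of MSTC is reproduced.

      # Sketch: serial k-means on a synthetic multivariate spatio-temporal table
      # (rows: grid cell x time, columns: variables). Illustrative only.
      import numpy as np

      rng = np.random.default_rng(42)
      n_obs, n_vars, k = 10_000, 6, 4
      centers = rng.normal(scale=5.0, size=(k, n_vars))
      data = np.vstack([c + rng.normal(size=(n_obs // k, n_vars)) for c in centers])

      centroids = data[rng.choice(len(data), k, replace=False)]
      for _ in range(50):
          # Assign each observation to the nearest centroid (Euclidean distance).
          d2 = ((data[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
          labels = d2.argmin(axis=1)
          new_centroids = np.array([data[labels == j].mean(axis=0) for j in range(k)])
          if np.allclose(new_centroids, centroids):
              break
          centroids = new_centroids

      print("cluster sizes:", np.bincount(labels, minlength=k))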

  17. Geological hazard monitoring system in Georgia

    NASA Astrophysics Data System (ADS)

    Gaprindashvili, George

    2017-04-01

    Georgia belongs to one of the world's most complex mountainous regions in terms of the scale and frequency of geological processes and the damage caused to population, farmland, and infrastructure. Geological hazards (landslides, debris flows/mudflows, rockfalls, erosion, etc.) affect many populated areas, agricultural fields, roads, oil and gas pipelines, high-voltage electric power transmission towers, hydraulic structures, and tourist complexes. Landslides occur in almost all geomorphological zones, resulting in wide variation in failure types and mechanisms and in the size-frequency distribution. In Georgia, geological hazards are triggered by: (1) activation of highly intense earthquakes; (2) meteorological events that provoke disaster processes against the background of global climatic change; and (3) large-scale human impact on the environment. The prediction and monitoring of geological hazards is a very broad topic that involves researchers from different fields. Geological hazard monitoring is essential to prevent and mitigate these hazards. In recent years, several monitoring systems, such as ground-based geodetic techniques and a debris-flow Early Warning System (EWS), were installed in Georgia on highly sensitive landslide and debris-flow areas. This work presents a description of the geological hazard monitoring system in Georgia.

  18. Big Data Analysis of Manufacturing Processes

    NASA Astrophysics Data System (ADS)

    Windmann, Stefan; Maier, Alexander; Niggemann, Oliver; Frey, Christian; Bernardi, Ansgar; Gu, Ying; Pfrommer, Holger; Steckel, Thilo; Krüger, Michael; Kraus, Robert

    2015-11-01

    The high complexity of manufacturing processes and the continuously growing amount of data lead to excessive demands on the users with respect to process monitoring, data analysis and fault detection. For these reasons, problems and faults are often detected too late, maintenance intervals are chosen too short and optimization potential for higher output and increased energy efficiency is not sufficiently used. A possibility to cope with these challenges is the development of self-learning assistance systems, which identify relevant relationships by observation of complex manufacturing processes so that failures, anomalies and need for optimization are automatically detected. The assistance system developed in the present work accomplishes data acquisition, process monitoring and anomaly detection in industrial and agricultural processes. The assistance system is evaluated in three application cases: Large distillation columns, agricultural harvesting processes and large-scale sorting plants. In this paper, the developed infrastructures for data acquisition in these application cases are described as well as the developed algorithms and initial evaluation results.

  19. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    NASA Astrophysics Data System (ADS)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
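
    A minimal sketch of one update of a vision-cone, position-based attraction rule is shown below; the parameter values, the omission of noise, and the boundary handling are illustrative assumptions, and the sketch is not the authors' implementation.

      # Sketch: one update step of a vision-cone, position-based attraction rule.
      import numpy as np

      rng = np.random.default_rng(1)
      N, R, half_angle, v0, dt = 200, 1.0, np.pi / 4, 0.05, 1.0
      pos = rng.uniform(0, 10, size=(N, 2))
      theta = rng.uniform(0, 2 * np.pi, size=N)        # headings

      def step(pos, theta):
          new_theta = theta.copy()
          for i in range(N):
              rij = pos - pos[i]                       # vectors to all others
              dist = np.linalg.norm(rij, axis=1)
              heading = np.array([np.cos(theta[i]), np.sin(theta[i])])
              # Neighbors: within range R and inside the vision cone around heading.
              with np.errstate(invalid="ignore"):
                  cos_angle = (rij @ heading) / dist
              seen = (dist > 0) & (dist < R) & (cos_angle > np.cos(half_angle))
              if seen.any():
                  target = rij[seen].mean(axis=0)      # attraction toward seen neighbors
                  new_theta[i] = np.arctan2(target[1], target[0])
          new_pos = pos + v0 * dt * np.column_stack([np.cos(new_theta), np.sin(new_theta)])
          return new_pos, new_theta

      pos, theta = step(pos, theta)
      print(pos[:3])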

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.

    With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is virtually an untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are obtained using a large-scale integrated transmission system model coupled with multiple distribution systems.

  1. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates.

    PubMed

    Barberis, Lucas; Peruani, Fernando

    2016-12-09

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit, due to the VC that breaks Newton's third law, various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving, locally polar files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.

  2. 3-Dimensional Root Cause Diagnosis via Co-analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Ziming; Lan, Zhiling; Yu, Li

    2012-01-01

    With the growth of system size and complexity, reliability has become a major concern for large-scale systems. Upon the occurrence of failure, system administrators typically trace the events in Reliability, Availability, and Serviceability (RAS) logs for root cause diagnosis. However, RAS logs contain only limited diagnostic information. Moreover, the manual processing is time-consuming, error-prone, and not scalable. To address the problem, in this paper we present an automated root cause diagnosis mechanism for large-scale HPC systems. Our mechanism examines multiple logs to provide a 3-D fine-grained root cause analysis. Here, 3-D means that our analysis will pinpoint the failure layer, the time, and the location of the event that causes the problem. We evaluate our mechanism by means of real logs collected from a production IBM Blue Gene/P system at Oak Ridge National Laboratory. It successfully identifies failure layer information for 219 failures during a 23-month period. Furthermore, it effectively identifies the triggering events with time and location information, even when the triggering events occur hundreds of hours before the resulting failures.

  3. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    PubMed

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  4. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  5. Self-affinity in the dengue fever time series

    NASA Astrophysics Data System (ADS)

    Azevedo, S. M.; Saba, H.; Miranda, J. G. V.; Filho, A. S. Nascimento; Moret, M. A.

    2016-06-01

    Dengue is a complex public health problem that is common in tropical and subtropical regions. This disease has risen substantially in the last three decades, and the time series of reported dengue cases in Bahia, Brazil, exhibits self-affine behavior. This study uses detrended fluctuation analysis (DFA) to verify the scale behavior in a time series of dengue cases and to evaluate the long-range correlations that are characterized by the power law α exponent for different cities in Bahia, Brazil. The scaling exponent (α) presents different long-range correlations, i.e., uncorrelated, anti-persistent, persistent and diffusive behaviors. The long-range correlations highlight the complex behavior of the time series of this disease. The findings show that there are two distinct types of scale behavior. In the first behavior, the time series presents a persistent α exponent for a one-month period. For large periods, the time series signal approaches subdiffusive behavior. The hypothesis of long-range correlations in the time series of reported dengue cases was validated. The observed self-affinity is useful as a forecasting tool for future periods through extrapolation of the α exponent behavior. This complex system has higher predictability over a relatively short time (approximately one month), and it suggests a new tool in epidemiological control strategies. However, predictions for large periods using DFA are hidden by the subdiffusive behavior.
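
    A minimal DFA sketch for a synthetic series is given below, where the slope of log F(n) versus log n estimates the α exponent; the data, scales, and first-order detrending are illustrative assumptions and not the paper's analysis.

      # Sketch: detrended fluctuation analysis (DFA) of a synthetic time series;
      # the slope of log F(n) vs log n estimates the scaling exponent alpha.
      import numpy as np

      def dfa(x, scales):
          y = np.cumsum(x - x.mean())               # integrated (profile) series
          F = []
          for n in scales:
              n_seg = len(y) // n
              segs = y[: n_seg * n].reshape(n_seg, n)
              t = np.arange(n)
              rms = []
              for seg in segs:                      # linear detrend per segment
                  coef = np.polyfit(t, seg, 1)
                  rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
              F.append(np.mean(rms))
          return np.array(F)

      rng = np.random.default_rng(7)
      x = rng.normal(size=4096)                      # white noise -> alpha ~ 0.5
      scales = np.unique(np.logspace(2, 8, 12, base=2).astype(int))
      F = dfa(x, scales)
      alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
      print("estimated alpha:", round(alpha, 2))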

  6. Interactive exploration of coastal restoration modeling in virtual environments

    NASA Astrophysics Data System (ADS)

    Gerndt, Andreas; Miller, Robert; Su, Simon; Meselhe, Ehab; Cruz-Neira, Carolina

    2009-02-01

    Over the last decades, Louisiana has lost a substantial part of its coastal region to the Gulf of Mexico. The goal of the project depicted in this paper is to investigate the complex ecological and geophysical system, not only to find solutions to reverse this development but also to protect the southern landscape of Louisiana from the disastrous impacts of natural hazards like hurricanes. This paper focuses on the interactive data handling of the Chenier Plain, which is only one scenario of the overall project. The challenge addressed is the interactive exploration of large-scale, time-dependent 2D simulation results and of terrain data with a high resolution that is available for this region. Besides data preparation, efficient visualization approaches optimized for use in virtual environments are presented. These are embedded in a complex framework for scientific visualization of time-dependent large-scale datasets. To provide a straightforward interface for rapid application development, a software layer called VRFlowVis has been developed. Several architectural aspects to encapsulate complex virtual reality aspects like multi-pipe vs. cluster-based rendering are discussed. Moreover, the distributed post-processing architecture is investigated to prove its efficiency for the geophysical domain. Runtime measurements conclude this paper.

  7. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed, CAD assembly; therefore, adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes, benefiting from this approach by reducing development and design cycle time, include: Creation of analysis models for the Aerodynamic discipline; Vehicle to ground interface development; Documentation development for the vehicle assembly.

  8. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  9. Is Self-organization a Rational Expectation?

    NASA Astrophysics Data System (ADS)

    Luediger, Heinz

    Over decades and under varying names the study of biology-inspired algorithms applied to non-living systems has been the subject of a small and somewhat exotic research community. Only the recent coincidence of a growing inability to master the design, development and operation of increasingly intertwined systems and processes, and an accelerated trend towards a naïve if not romanticizing view of nature in the sciences, has led to the adoption of biology-inspired algorithmic research by a wider range of sciences. Adaptive systems, as we apparently observe in nature, are meanwhile viewed as a promising way out of the complexity trap and, propelled by a long list of ‘self’ catchwords, complexity research has become an influential stream in the science community. This paper presents four provocative theses that cast doubt on the strategic potential of complexity research and the viability of large scale deployment of biology-inspired algorithms in an expectation driven world.

  10. Quaternary Morphodynamics of Fluvial Dispersal Systems Revealed: The Fly River, PNG, and the Sunda Shelf, SE Asia, simulated with the Massively Parallel GPU-based Model 'GULLEM'

    NASA Astrophysics Data System (ADS)

    Aalto, R. E.; Lauer, J. W.; Darby, S. E.; Best, J.; Dietrich, W. E.

    2015-12-01

    During glacial-marine transgressions vast volumes of sediment are deposited due to the infilling of lowland fluvial systems and shallow shelves, material that is removed during ensuing regressions. Modelling these processes would illuminate system morphodynamics, fluxes, and 'complexity' in response to base level change, yet such problems are computationally formidable. Environmental systems are characterized by strong interconnectivity, yet traditional supercomputers have slow inter-node communication -- whereas rapidly advancing Graphics Processing Unit (GPU) technology offers vastly higher (>100x) bandwidths. GULLEM (GpU-accelerated Lowland Landscape Evolution Model) employs massively parallel code to simulate coupled fluvial-landscape evolution for complex lowland river systems over large temporal and spatial scales. GULLEM models the accommodation space carved/infilled by representing a range of geomorphic processes, including: river & tributary incision within a multi-directional flow regime, non-linear diffusion, glacial-isostatic flexure, hydraulic geometry, tectonic deformation, sediment production, transport & deposition, and full 3D tracking of all resulting stratigraphy. Model results concur with the Holocene dynamics of the Fly River, PNG -- as documented with dated cores, sonar imaging of floodbasin stratigraphy, and the observations of topographic remnants from LGM conditions. Other supporting research was conducted along the Mekong River, the largest fluvial system of the Sunda Shelf. These and other field data provide tantalizing empirical glimpses into the lowland landscapes of large rivers during glacial-interglacial transitions, observations that can be explored with this powerful numerical model. GULLEM affords estimates for the timing and flux budgets within the Fly and Sunda Systems, illustrating complex internal system responses to the external forcing of sea level and climate. Furthermore, GULLEM can be applied to most ANY fluvial system to explore processes across a wide range of temporal and spatial scales. The presentation will provide insights (& many animations) illustrating river morphodynamics & resulting landscapes formed as a result of sea level oscillations. [Image: The incised 3.2e6 km^2 Sundaland domain @ 431 ka]

  11. Ga(+) Basicity and Affinity Scales Based on High-Level Ab Initio Calculations.

    PubMed

    Brea, Oriana; Mó, Otilia; Yáñez, Manuel

    2015-10-26

    The structure, relative stability and bonding of complexes formed by the interaction between Ga(+) and a large set of compounds, including hydrocarbons, aromatic systems, and oxygen-, nitrogen-, fluorine- and sulfur-containing Lewis bases have been investigated through the use of the high-level composite ab initio Gaussian-4 theory. This allowed us to establish rather accurate Ga(+) cation affinity (GaCA) and Ga(+) cation basicity (GaCB) scales. The bonding analysis of the complexes under scrutiny shows that, even though one of the main ingredients of the Ga(+)-base interaction is electrostatic, it exhibits a non-negligible covalent character triggered by the presence of the low-lying empty 4p orbital of Ga(+), which favors a charge donation from occupied orbitals of the base to the metal ion. This partial covalent character, also observed in AlCA scales, is behind the dissimilarities observed when GaCA are compared with Li(+) cation affinities, where these covalent contributions are practically nonexistent. Quite unexpectedly, there are some dissimilarities between several Ga(+)-complexes and the corresponding Al(+)-analogues, mainly affecting the relative stability of π-complexes involving aromatic compounds. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments.

    PubMed

    Ionescu, Catalin; Papava, Dragos; Olaru, Vlad; Sminchisescu, Cristian

    2014-07-01

    We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m.

  13. Computational issues in complex water-energy optimization problems: Time scales, parameterizations, objectives and algorithms

    NASA Astrophysics Data System (ADS)

    Efstratiadis, Andreas; Tsoukalas, Ioannis; Kossieris, Panayiotis; Karavokiros, George; Christofides, Antonis; Siskos, Alexandros; Mamassis, Nikos; Koutsoyiannis, Demetris

    2015-04-01

    Modelling of large-scale hybrid renewable energy systems (HRES) is a challenging task, for which several open computational issues exist. HRES comprise typical components of hydrosystems (reservoirs, boreholes, conveyance networks, hydropower stations, pumps, water demand nodes, etc.), which are dynamically linked with renewables (e.g., wind turbines, solar parks) and energy demand nodes. In such systems, apart from the well-known shortcomings of water resources modelling (nonlinear dynamics, unknown future inflows, large number of variables and constraints, conflicting criteria, etc.), additional complexities and uncertainties arise due to the introduction of energy components and associated fluxes. A major difficulty is the need for coupling two different temporal scales, given that in hydrosystem modeling, monthly simulation steps are typically adopted, yet for a faithful representation of the energy balance (i.e. energy production vs. demand) a much finer resolution (e.g. hourly) is required. Another drawback is the increase of control variables, constraints and objectives, due to the simultaneous modelling of the two parallel fluxes (i.e. water and energy) and their interactions. Finally, since the driving hydrometeorological processes of the integrated system are inherently uncertain, it is often essential to use synthetically generated input time series of large length, in order to assess the system performance in terms of reliability and risk, with satisfactory accuracy. To address these issues, we propose an effective and efficient modeling framework, key objectives of which are: (a) the substantial reduction of control variables, through parsimonious yet consistent parameterizations; (b) the substantial decrease of computational burden of simulation, by linearizing the combined water and energy allocation problem of each individual time step, and solving each local sub-problem through very fast linear network programming algorithms; and (c) the substantial decrease of the required number of function evaluations for detecting the optimal management policy, using an innovative, surrogate-assisted global optimization approach.
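
    As a hedged toy illustration of the per-time-step linearization idea, the sketch below poses a single-step allocation as a small linear program; the variables, costs, and constraints are invented, and scipy's generic LP solver stands in for the fast linear network programming algorithms mentioned in the abstract.

      # Sketch: a toy one-time-step allocation posed as a linear program.
      from scipy.optimize import linprog

      # Decision variables: [hydro_release, pumped_volume, spill]
      cost = [1.0, 4.0, 10.0]            # penalize pumping and spill more than release

      # Illustrative constraints:
      #   release - pumped + spill = inflow - storage_change     (mass balance)
      #   release * energy_factor >= energy_demand                (energy target)
      inflow_minus_dstorage = 80.0
      energy_factor, energy_demand = 0.9, 45.0

      A_eq = [[1.0, -1.0, 1.0]]
      b_eq = [inflow_minus_dstorage]
      A_ub = [[-energy_factor, 0.0, 0.0]]     # -release*factor <= -demand
      b_ub = [-energy_demand]
      bounds = [(0, 100), (0, 30), (0, None)]

      res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print(res.x, res.fun)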

  14. Forest-fire model as a supercritical dynamic model in financial systems

    NASA Astrophysics Data System (ADS)

    Lee, Deokjae; Kim, Jae-Young; Lee, Jeho; Kahng, B.

    2015-02-01

    Recently large-scale cascading failures in complex systems have garnered substantial attention. Such extreme events have been treated as an integral part of self-organized criticality (SOC). Recent empirical work has suggested that some extreme events systematically deviate from the SOC paradigm, requiring a different theoretical framework. We shed additional theoretical light on this possibility by studying financial crisis. We build our model of financial crisis on the well-known forest fire model in scale-free networks. Our analysis shows a nontrivial scaling feature indicating supercritical behavior, which is independent of system size. Extreme events in the supercritical state result from bursting of a fat bubble, seeds of which are sown by a protracted period of a benign financial environment with few shocks. Our findings suggest that policymakers can control the magnitude of financial meltdowns by keeping the economy operating within reasonable duration of a benign environment.
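
    A toy forest-fire-style cascade on a scale-free (Barabási-Albert) network is sketched below, only to illustrate the kind of dynamics the authors build on; the growth and spark rates are invented, and this is not the paper's calibrated financial model.

      # Sketch: a toy forest-fire-style cascade on a scale-free network.
      import networkx as nx
      import random

      random.seed(0)
      G = nx.barabasi_albert_graph(2000, 2, seed=0)
      state = {n: "empty" for n in G}          # empty -> tree -> burning -> empty
      p_grow, p_spark = 0.05, 1e-4
      burn_sizes = []

      for step in range(2000):
          burning = [n for n, s in state.items() if s == "burning"]
          # Fire spreads to neighboring trees, then burnt sites become empty.
          for n in burning:
              for m in G.neighbors(n):
                  if state[m] == "tree":
                      state[m] = "burning"
              state[n] = "empty"
          if burning:
              burn_sizes.append(len(burning))
          # Growth and random sparks.
          for n in G:
              if state[n] == "empty" and random.random() < p_grow:
                  state[n] = "tree"
              elif state[n] == "tree" and random.random() < p_spark:
                  state[n] = "burning"

      print("steps with fire:", len(burn_sizes),
            "largest single-step fire:", max(burn_sizes, default=0))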

  15. Kullback-Leibler divergence measure of intermittency: Application to turbulence

    NASA Astrophysics Data System (ADS)

    Granero-Belinchón, Carlos; Roux, Stéphane G.; Garnier, Nicolas B.

    2018-01-01

    For generic systems exhibiting power law behaviors, and hence multiscale dependencies, we propose a simple tool to analyze multifractality and intermittency, after noticing that these concepts are directly related to the deformation of a probability density function from Gaussian at large scales to non-Gaussian at smaller scales. Our framework is based on information theory and uses Shannon entropy and Kullback-Leibler divergence. We provide an extensive application to three-dimensional fully developed turbulence, seen here as a paradigmatic complex system where intermittency was historically defined and the concepts of scale invariance and multifractality were extensively studied and benchmarked. We compute our quantity on experimental Eulerian velocity measurements, as well as on synthetic processes and phenomenological models of fluid turbulence. Our approach is very general and does not require any underlying model of the system, although it can probe the relevance of such a model.
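
    A simple sketch of the scale-dependent deformation idea is given below: the Kullback-Leibler divergence between the empirical distribution of increments at a given scale and a Gaussian of the same mean and variance; the synthetic signal, binning, and scales are illustrative assumptions, not the paper's data or estimators.

      # Sketch: KL divergence between the histogram of increments at scale "lag"
      # and a Gaussian with the same mean and variance. On a Gaussian random walk
      # the divergence stays close to zero at all scales.
      import numpy as np
      from scipy.stats import norm, entropy

      def kl_from_gaussian(x, lag, bins=64):
          incr = x[lag:] - x[:-lag]
          hist, edges = np.histogram(incr, bins=bins, density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          width = edges[1] - edges[0]
          p = hist * width                                 # empirical probabilities
          q = norm.pdf(centers, incr.mean(), incr.std()) * width
          mask = (p > 0) & (q > 0)
          return entropy(p[mask], q[mask])                 # KL(p || q)

      rng = np.random.default_rng(3)
      signal = np.cumsum(rng.normal(size=100_000))         # Gaussian random walk
      for lag in (1, 10, 100, 1000):
          print(lag, round(kl_from_gaussian(signal, lag), 4))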

  16. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Research on early warning systems for large-scale network security incidents is of great significance. Such a system can improve the network's emergency response capabilities, alleviate the damage caused by cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system that combines active measurement and anomaly detection is presented in this paper. The key visualization algorithm and technology of the system are the main focus. The plane visualization of the large-scale network system is realized following a divide-and-conquer approach. First, the topology of the large-scale network is divided into small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into one topology based on the automatic distribution algorithm of force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
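
    The divide-and-conquer layout idea can be sketched with networkx as below; generic community detection and spring layout stand in for the paper's MLkP/CR partitioner and force-analysis distribution algorithm, and the graph itself is synthetic.

      # Sketch: partition a graph, lay out each part separately, then place the
      # parts on a coarse grid (illustrative stand-ins for the paper's algorithms).
      import math
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.barabasi_albert_graph(300, 2, seed=0)          # stand-in "large" topology
      parts = greedy_modularity_communities(G)

      positions = {}
      cols = math.ceil(math.sqrt(len(parts)))
      for idx, nodes in enumerate(parts):
          sub = G.subgraph(nodes)
          local = nx.spring_layout(sub, seed=idx)           # small-scale layout
          ox, oy = 3.0 * (idx % cols), 3.0 * (idx // cols)  # coarse grid placement
          for node, (x, y) in local.items():
              positions[node] = (x + ox, y + oy)

      print(len(parts), "partitions,", len(positions), "nodes placed")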

  17. Development and Application of a Process-based River System Model at a Continental Scale

    NASA Astrophysics Data System (ADS)

    Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.

    2014-12-01

    Existing global and continental-scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at much finer resolution. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution and water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large-basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in the BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.

  18. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets are stored in this system in a write once/read many file system, such as Landsat, MODIS, MERRA, and NGA. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the resulting and necessary exascale architectures required for future systems.

  19. Implantable neurotechnologies: bidirectional neural interfaces--applications and VLSI circuit implementations.

    PubMed

    Greenwald, Elliot; Masters, Matthew R; Thakor, Nitish V

    2016-01-01

    A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very large-scale integration has advanced the design of complex integrated circuits. System-on-chip devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems.

  20. Bidirectional Neural Interfaces

    PubMed Central

    Masters, Matthew R.; Thakor, Nitish V.

    2016-01-01

    A bidirectional neural interface is a device that transfers information into and out of the nervous system. This class of devices has potential to improve treatment and therapy in several patient populations. Progress in very-large-scale integration (VLSI) has advanced the design of complex integrated circuits. System-on-chip (SoC) devices are capable of recording neural electrical activity and altering natural activity with electrical stimulation. Often, these devices include wireless powering and telemetry functions. This review presents the state of the art of bidirectional circuits as applied to neuroprosthetic, neurorepair, and neurotherapeutic systems. PMID:26753776

  1. Approaches for scalable modeling and emulation of cyber systems : LDRD final report.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Minnich, Ronald G.; Armstrong, Robert C.

    2009-09-01

    The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster - a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.

  2. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Large-scale structural analysis: The structural analyst, the CSM Testbed and the NAS System

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Mccleary, Susan L.; Macy, Steven C.; Aminpour, Mohammad A.

    1989-01-01

    The Computational Structural Mechanics (CSM) activity is developing advanced structural analysis and computational methods that exploit high-performance computers. Methods are developed in the framework of the CSM testbed software system and applied to representative complex structural analysis problems from the aerospace industry. An overview of the CSM testbed methods development environment is presented and some numerical methods developed on a CRAY-2 are described. Selected application studies performed on the NAS CRAY-2 are also summarized.

  4. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: overview and air-side system description

    NASA Astrophysics Data System (ADS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter; Ballard, Marlin; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina; Shiri, Ron

    2016-07-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify end-to-end high-contrast starlight suppression performance. This pathfinder testbed will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented aperture telescopes.

  5. Complex large-scale convection of a viscous incompressible fluid with heat exchange according to Newton's law

    NASA Astrophysics Data System (ADS)

    Gorshkov, A. V.; Prosviryakov, E. Yu.

    2017-12-01

    The paper considers the construction of analytical solutions to the Oberbeck-Boussinesq system. This system describes layered Bénard-Marangoni convective flows of an incompressible viscous fluid. The third-kind boundary condition, i.e., Newton's heat transfer law, is used on the boundaries of a fluid layer. The obtained solution is analyzed. It is demonstrated that there is a fluid layer thickness at which the tangential stresses vanish simultaneously, which is equivalent to the existence of tensile and compressive stresses.

  6. Segmenting healthcare terminology users: a strategic approach to large scale evolutionary development.

    PubMed

    Price, C; Briggs, K; Brown, P J

    1999-01-01

    Healthcare terminologies have become larger and more complex, aiming to support a diverse range of functions across the whole spectrum of healthcare activity. Prioritization of development, implementation and evaluation can be achieved by regarding the "terminology" as an integrated system of content-based and functional components. Matching these components to target segments within the healthcare community supports a strategic approach to evolutionary development and provides essential product differentiation to enable terminology providers and systems suppliers to focus on end-user requirements.

  7. Policy and Research in a Post-Conflict Context: Issues and Challenges in the Implementation of the Rwandan Teacher Development and Management Policy

    ERIC Educational Resources Information Center

    Rutaisire, John; Gahima, Charles

    2009-01-01

    Purpose: The purpose of this paper is to assess the relationship between policy development and research evidence with specific reference to the Rwandan Teacher Development and Management Policy introduced in 2005. It aims to highlight the complexity of implementing large-scale system wide change in the specific context of a small African nation…

  8. Large-scale protein-protein interaction analysis in Arabidopsis mesophyll protoplasts by split firefly luciferase complementation.

    PubMed

    Li, Jian-Feng; Bush, Jenifer; Xiong, Yan; Li, Lei; McCormack, Matthew

    2011-01-01

    Protein-protein interactions (PPIs) constitute the regulatory network that coordinates diverse cellular functions. There is a growing need in plant research for protein interaction maps behind complex cellular processes and at the systems biology level. However, only a few approaches have been successfully used for large-scale surveys of PPIs in plants, each having advantages and disadvantages. Here we present split firefly luciferase complementation (SFLC) as a highly sensitive and noninvasive technique for in planta PPI investigation. In this assay, the separate halves of a firefly luciferase can come into close proximity and transiently restore its catalytic activity only when their fusion partners, namely the two proteins of interest, interact with each other. The assay becomes quantitative and amenable to high throughput when the Arabidopsis mesophyll protoplast system and a microplate luminometer are employed for protein expression and luciferase measurement, respectively. Using the SFLC assay, we could monitor the dynamics of rapamycin-induced and ascomycin-disrupted interaction between Arabidopsis FRB and human FKBP proteins in a near real-time manner. As a proof of concept for a large-scale PPI survey, we further applied the SFLC assay to testing 132 binary PPIs among 8 auxin response factors (ARFs) and 12 Aux/IAA proteins from Arabidopsis. Our results demonstrated that the SFLC assay is ideal for in vivo quantitative PPI analysis in plant cells and is particularly powerful for large-scale binary PPI screens.

  9. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.

  10. The Social Life of a Data Base

    NASA Technical Reports Server (NTRS)

    Linde, Charlotte; Wales, Roxana; Clancy, Dan (Technical Monitor)

    2002-01-01

    This paper presents the complex social life of a large data base. The topics include: 1) Social Construction of Mechanisms of Memory; 2) Data Bases: The Invisible Memory Mechanism; 3) The Human in the Machine; 4) Data of the Study: A Large-Scale Problem Reporting Data Base; 5) The PRACA Study; 6) Description of PRACA; 7) PRACA and Paper; 8) Multiple Uses of PRACA; 9) The Work of PRACA; 10) Multiple Forms of Invisibility; 11) Such Systems are Everywhere; and 12) Two Morals to the Story. This paper is in viewgraph form.

  11. The 'cube' meta-model for the information system of large health sector organizations--a (platform neutral) mapping tool to integrate information system development with changing business functions and organizational development.

    PubMed

    Balkányi, László

    2002-01-01

    To develop information systems (IS) in the changing environment of the health sector, a simple but thorough model, avoiding the techno-jargon of informatics, might be useful for top management. A platform-neutral, extensible, transparent conceptual model should be established. Limitations of current methods lead to a simple but comprehensive mapping, in the form of a three-dimensional cube. The three 'orthogonal' views are (a) organization functionality, (b) organizational structures and (c) information technology. Each of the cube sides is described according to its nature. This approach makes it possible to define any kind of IS component as a certain point/layer/domain of the cube and also enables management to label all IS components independently from any supplier(s) and/or any specific platform. The model handles changes in organization structure, business functionality and the serving information system independently from each other. Practical application extends to (a) planning complex new ISs, (b) guiding development of multi-vendor, multi-site ISs, (c) supporting large-scale public procurement procedures and the contracting and implementation phase by establishing a platform-neutral reference, and (d) keeping an exhaustive inventory of an existing large-scale system that handles non-tangible aspects of the IS.

  12. Self-Reacting Friction Stir Welding for Aluminum Complex Curvature Applications

    NASA Technical Reports Server (NTRS)

    Brown, Randy J.; Martin, W.; Schneider, J.; Hartley, P. J.; Russell, Carolyn; Lawless, Kirby; Jones, Chip

    2003-01-01

    This viewgraph presentation provides an overview of successful research conducted by Lockheed Martin and NASA to develop an advanced self-reacting friction stir technology for complex-curvature aluminum alloys. The research included weld process development for 0.320 inch Al 2219, successful transfer from the 'lab' scale to the production-scale tool, and weld quality exceeding strength goals. This process will enable development and implementation of large-scale, complex-geometry hardware fabrication. Topics covered include: weld process development, weld process transfer, and intermediate hardware fabrication.

  13. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    NASA Astrophysics Data System (ADS)

    Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
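
    The record above describes the framework and its portability rather than the numerical kernel itself. Purely as an illustrative sketch of the kind of computation cosmological N-body codes are built around, and not HACC's actual implementation (which uses far more sophisticated force solvers), the Python/NumPy fragment below performs one direct-summation gravity step with kick-drift-kick leapfrog integration; the particle count, softening length and time step are arbitrary toy values.

        import numpy as np

        def gravitational_acceleration(pos, mass, softening=0.05):
            """Direct O(N^2) gravitational acceleration in units with G = 1 (toy kernel)."""
            # Pairwise separations: diff[i, j] = pos[j] - pos[i]
            diff = pos[None, :, :] - pos[:, None, :]
            dist2 = np.sum(diff**2, axis=-1) + softening**2
            inv_dist3 = dist2**-1.5
            np.fill_diagonal(inv_dist3, 0.0)  # no self-interaction
            return np.sum(diff * (mass[None, :, None] * inv_dist3[:, :, None]), axis=1)

        def leapfrog_step(pos, vel, mass, dt):
            """One kick-drift-kick leapfrog update of positions and velocities."""
            acc = gravitational_acceleration(pos, mass)
            vel_half = vel + 0.5 * dt * acc          # kick
            pos_new = pos + dt * vel_half            # drift
            acc_new = gravitational_acceleration(pos_new, mass)
            vel_new = vel_half + 0.5 * dt * acc_new  # kick
            return pos_new, vel_new

        rng = np.random.default_rng(0)
        pos = rng.uniform(-1.0, 1.0, size=(256, 3))  # 256 toy particles
        vel = np.zeros((256, 3))
        mass = np.full(256, 1.0 / 256)
        pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)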

  14. HACC: Simulating sky surveys on state-of-the-art supercomputing architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Habib, Salman; Pope, Adrian; Finkel, Hal

    2016-01-01

    Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the ‘Dark Universe’, dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC’s design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

  15. Challenges in Managing Trustworthy Large-scale Digital Science

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.

    2017-12-01

    The increased use of large-scale international digital science has opened a number of challenges for managing, handling, using and preserving scientific information. The large volumes of information are driven by three main categories: model outputs, including coupled models and ensembles; data products that have been processed to a level of usability; and increasingly heuristically driven data analysis. These data products are increasingly the ones usable by broad communities, and far in excess of the raw instrument outputs. The data, software and workflows are then shared and replicated to allow broad use at an international scale, which places further demands on infrastructure to support reliable management of the information across distributed resources. Users necessarily rely on these underlying "black boxes" so that they can remain productive and produce new scientific outcomes. The software for these systems depends on computational infrastructure, interconnected software systems, and information capture systems. This ranges from the fundamental reliability of the compute hardware, through the system software stacks and libraries, to the model software itself. Due to these complexities and the capacity of the infrastructure, there is an increased emphasis on transparency of approach and robustness of methods over full reproducibility. Furthermore, with large-volume data management it is increasingly difficult to store historical versions of all model and derived data. Instead, the emphasis is on the ability to access updated products and on the reliability with which previous outcomes remain relevant and can be updated with new information. We will discuss these challenges and some of the approaches underway to address these issues.

  16. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  17. Size and structure of Chlorella zofingiensis/FeCl3 flocs in a shear flow: Algae Floc Structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wyatt, Nicholas B.; O'Hern, Timothy J.; Shelden, Bion

    Flocculation is a promising method to overcome the economic hurdle to separation of algae from its growth medium in large scale operations. But, understanding of the floc structure and the effects of shear on the floc structure are crucial to the large scale implementation of this technique. The floc structure is important because it determines, in large part, the density and settling behavior of the algae. Freshwater algae floc size distributions and fractal dimensions are presented as a function of applied shear rate in a Couette cell using ferric chloride as a flocculant. Comparisons are made with measurements made for a polystyrene microparticle model system taken here as well as reported literature results. The algae floc size distributions are found to be self-preserving with respect to shear rate, consistent with literature data for polystyrene. Moreover, three fractal dimensions are calculated which quantitatively characterize the complexity of the floc structure. Low shear rates result in large, relatively dense packed flocs which elongate and fracture as the shear rate is increased. Our results presented here provide crucial information for economically implementing flocculation as a large scale algae harvesting strategy.
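
    The record does not state how the fractal dimensions were obtained; a common generic approach is to fit the power-law scaling of projected floc area against a characteristic floc length on logarithmic axes, the slope giving a two-dimensional fractal dimension. The sketch below illustrates that fit with hypothetical measurements and is not the authors' analysis pipeline.

        import numpy as np

        # Hypothetical per-floc image measurements (characteristic length in µm, projected area in µm^2).
        lengths = np.array([12.0, 18.5, 25.0, 40.0, 63.0, 95.0, 140.0])
        areas = np.array([95.0, 210.0, 350.0, 800.0, 1700.0, 3300.0, 6100.0])

        # Fit log(A) = D2 * log(L) + c; the slope D2 is the two-dimensional fractal dimension.
        D2, intercept = np.polyfit(np.log(lengths), np.log(areas), 1)
        print(f"Estimated 2D fractal dimension D2 ~ {D2:.2f}")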

  18. White paper: A plan for cooperation between NASA and DARPA to establish a center for advanced architectures

    NASA Technical Reports Server (NTRS)

    Denning, P. J.; Adams, G. B., III; Brown, R. L.; Kanerva, P.; Leiner, B. M.; Raugh, M. R.

    1986-01-01

    Large, complex computer systems require many years of development. It is recognized that large scale systems are unlikely to be delivered in useful condition unless users are intimately involved throughout the design process. A mechanism is described that will involve users in the design of advanced computing systems and will accelerate the insertion of new systems into scientific research. This mechanism is embodied in a facility called the Center for Advanced Architectures (CAA). CAA would be a division of RIACS (Research Institute for Advanced Computer Science) and would receive its technical direction from a Scientific Advisory Board established by RIACS. The CAA described here is a possible implementation of a center envisaged in a proposed cooperation between NASA and DARPA.

  19. TomoMiner and TomoMinerCloud: A software platform for large-scale subtomogram structural analysis

    PubMed Central

    Frazier, Zachary; Xu, Min; Alber, Frank

    2017-01-01

    Cryo-electron tomography (cryoET) captures the 3D electron density distribution of macromolecular complexes in a close-to-native state. With the rapid advance of cryoET acquisition technologies, it is possible to generate large numbers (>100,000) of subtomograms, each containing a macromolecular complex. Often, these subtomograms represent a heterogeneous sample due to variations in the structure and composition of a complex in its in situ form, or because particles are a mixture of different complexes. In this case subtomograms must be classified. However, classification of large numbers of subtomograms is a time-intensive task and often a limiting bottleneck. This paper introduces an open source software platform, TomoMiner, for large-scale subtomogram classification, template matching, subtomogram averaging, and alignment. Its scalable and robust parallel processing allows efficient classification of tens to hundreds of thousands of subtomograms. Additionally, TomoMiner provides a pre-configured TomoMinerCloud computing service permitting users without sufficient computing resources instant access to TomoMiner's high-performance features. PMID:28552576

  20. High Performance Computing for Modeling Wind Farms and Their Impact

    NASA Astrophysics Data System (ADS)

    Mavriplis, D.; Naughton, J. W.; Stoellinger, M. K.

    2016-12-01

    As energy generated by wind penetrates further into our electrical system, modeling of power production, power distribution, and the economic impact of wind-generated electricity is growing in importance. The models used for this work can range in fidelity from simple codes that run on a single computer to those that require high performance computing capabilities. Over the past several years, high fidelity models have been developed and deployed on the NCAR-Wyoming Supercomputing Center's Yellowstone machine. One of the primary modeling efforts focuses on developing the capability to compute the behavior of a wind farm in complex terrain under realistic atmospheric conditions. Fully modeling this system requires simulating everything from continental-scale flows down to the flow over a wind turbine blade, including the blade boundary layer, spanning fully 10 orders of magnitude in scale. To accomplish this, the simulations are broken up by scale, with information from the larger scales being passed to the smaller-scale models. In the code being developed, four scale levels are included: the continental weather scale, the local atmospheric flow in complex terrain, the wind plant scale, and the turbine scale. The current state of the models in the latter three scales will be discussed. These simulations are based on a high-order accurate dynamic overset and adaptive mesh approach, which runs at large scale on the NWSC Yellowstone machine. A second effort on modeling the economic impact of new wind development, as well as improvements in wind plant performance and enhancements to the transmission infrastructure, will also be discussed.

  1. Network Theory: A Primer and Questions for Air Transportation Systems Applications

    NASA Technical Reports Server (NTRS)

    Holmes, Bruce J.

    2004-01-01

    A new understanding (with potential applications to air transportation systems) has emerged in the past five years in the scientific field of networks. This development has occurred in large part because we now have a new laboratory for developing theories about complex networks: the Internet. The premise of this new understanding is that most complex networks of interest, both of nature and of human contrivance, exhibit a fundamentally different behavior than was thought for over two hundred years under classical graph theory. Classical theory held that networks exhibited random behavior, characterized by normal (e.g., Gaussian or Poisson) degree distributions of the connectivity between nodes by links. The new understanding turns this idea on its head: networks of interest exhibit scale-free (or small world) degree distributions of connectivity, characterized by power law distributions. The implications of scale-free behavior for air transportation systems include the potential that some behaviors of complex system architectures might be analyzed through relatively simple approximations of local elements of the system. For air transportation applications, this presentation proposes a framework for constructing topologies (architectures) that represent the relationships between mobility, flight operations, aircraft requirements, and airspace capacity, and the related externalities in airspace procedures and architectures. The proposed architectures or topologies may serve as a framework for posing comparative and combinative analyses of performance, cost, security, environmental, and related metrics.
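
    The contrast the record draws between random (Poisson) and scale-free (power-law) degree distributions can be illustrated in a few lines of Python with networkx, taking an Erdős-Rényi graph as a stand-in for the classical random network and a Barabási-Albert graph for the scale-free one; the network sizes and parameters below are arbitrary.

        import networkx as nx
        import numpy as np

        n = 5000
        random_net = nx.gnp_random_graph(n, p=6 / n, seed=1)        # classical: roughly Poisson degrees
        scale_free_net = nx.barabasi_albert_graph(n, m=3, seed=1)   # preferential attachment: power-law tail

        for name, g in [("random", random_net), ("scale-free", scale_free_net)]:
            degrees = np.array([d for _, d in g.degree()])
            print(f"{name:>10}: mean degree {degrees.mean():.1f}, max degree {degrees.max()}")
        # For a similar mean degree, the scale-free graph shows a far larger maximum degree (hubs).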

  2. Operational development of small plant growth systems

    NASA Technical Reports Server (NTRS)

    Scheld, H. W.; Magnuson, J. W.; Sauer, R. L.

    1986-01-01

    The results of a study undertaken in the first phase of an empirical effort in the development of small plant growth chambers for production of salad-type vegetables on the space shuttle or space station are discussed. The overall effort is visualized as providing the underpinning of practical experience in the handling of plant systems in space, which will provide major support for future efforts in planning, design, and construction of plant-based (phytomechanical) systems for support of human habitation in space. The assumptions underlying the effort hold that large-scale phytomechanical habitability support systems for future space stations must evolve from the simple to the complex. The highly complex final systems will be developed from the accumulated experience and data gathered from repetitive tests and trials of fragments or subsystems of the whole in an operational mode. These developing system components will, meanwhile, serve a useful operational function in providing psychological support and diversion for the crews.

  3. Ordering Unstructured Meshes for Sparse Matrix Computations on Leading Parallel Systems

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Heber, Gerd; Biswas, Rupak

    2000-01-01

    The ability of computers to solve hitherto intractable problems and simulate complex processes using mathematical models makes them an indispensable part of modern science and engineering. Computer simulations of large-scale realistic applications usually require solving a set of non-linear partial differential equations (PDEs) over a finite region. For example, one thrust area in the DOE Grand Challenge projects is to design future accelerators such as the Spallation Neutron Source (SNS). Our colleagues at SLAC need to model complex RFQ cavities with large aspect ratios. Unstructured grids are currently used to resolve the small features in a large computational domain; dynamic mesh adaptation will be added in the future for additional efficiency. The PDEs for electromagnetics are discretized by the finite element method (FEM), which leads to a generalized eigenvalue problem Kx = λMx, where K and M are the stiffness and mass matrices, and are very sparse. In a typical cavity model, the number of degrees of freedom is about one million. For such large eigenproblems, direct solution techniques quickly reach the memory limits. Instead, the most widely-used methods are Krylov subspace methods, such as Lanczos or Jacobi-Davidson. In all the Krylov-based algorithms, sparse matrix-vector multiplication (SPMV) must be performed repeatedly. Therefore, the efficiency of SPMV usually determines the eigensolver speed. SPMV is also one of the most heavily used kernels in large-scale numerical simulations.
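
    Since the record singles out sparse matrix-vector multiplication (SPMV) as the kernel that dominates Krylov eigensolvers, a minimal compressed sparse row (CSR) SPMV may make the operation concrete; this is a generic textbook sketch in Python, not the kernel used in the cited work.

        import numpy as np

        def csr_spmv(data, indices, indptr, x):
            """Compute y = A @ x for a matrix A stored in CSR form (data, indices, indptr)."""
            n_rows = len(indptr) - 1
            y = np.zeros(n_rows)
            for i in range(n_rows):
                start, end = indptr[i], indptr[i + 1]
                y[i] = np.dot(data[start:end], x[indices[start:end]])
            return y

        # A = [[4, 0, 1],
        #      [0, 3, 0],
        #      [1, 0, 2]] stored in CSR form:
        data = np.array([4.0, 1.0, 3.0, 1.0, 2.0])
        indices = np.array([0, 2, 1, 0, 2])
        indptr = np.array([0, 2, 3, 5])
        print(csr_spmv(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # -> [5. 3. 3.]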

  4. Application of superconducting technology to earth-to-orbit electromagnetic launch systems

    NASA Technical Reports Server (NTRS)

    Hull, J. R.; Carney, L. M.

    1988-01-01

    Benefits may occur by incorporating superconductors, both existing and those currently under development, in one or more parts of a large-scale electromagnetic launch (EML) system that is capable of delivering payloads from the surface of the Earth to space. The use of superconductors for many of the EML components results in lower system losses; consequently, reductions in the size and number of energy storage devices are possible. Applied high-temperature superconductivity may eventually enable novel design concepts for energy distribution and switching. All of these technical improvements have the potential to reduce system complexity and lower payload launch costs.

  5. A computational framework for modeling targets as complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  6. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    NASA Astrophysics Data System (ADS)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated with wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large-scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to simulate in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate benefits from this coupling over a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channel hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into the soil column; evapotranspiration from soil and vegetation and evaporation of open water) is necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. Finally, such coupled hydrologic and hydrodynamic modelling proves to be an important tool for integrated evaluation of hydrological processes in such poorly gauged, large-scale basins. We hope that this model application provides new ways forward for large-scale model development in such systems, involving semi-arid regions and complex floodplains.
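
    The Nash-Sutcliffe Efficiency (NSE) quoted in the record is a standard goodness-of-fit measure for hydrological models; a minimal sketch of its computation, using hypothetical discharge values rather than data from the study, follows.

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1.0 indicates a perfect fit."""
            observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Hypothetical daily discharge values (m^3/s) at a gauge downstream of a wetland.
        obs = [1200, 1500, 1800, 2100, 1900, 1600, 1300]
        sim = [1100, 1450, 1900, 2050, 1850, 1700, 1250]
        print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")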

  7. Prepreg and Melt Infiltration Technology Developed for Affordable, Robust Manufacturing of Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Singh, Mrityunjay; Petko, Jeannie F.

    2004-01-01

    Affordable fiber-reinforced ceramic matrix composites with multifunctional properties are critically needed for high-temperature aerospace and space transportation applications. These materials have various applications in advanced high-efficiency and high-performance engines, airframe and propulsion components for next-generation launch vehicles, and components for land-based systems. A number of these applications require materials with specific functional characteristics: for example, thick component, hybrid layups for environmental durability and stress management, and self-healing and smart composite matrices. At present, with limited success and very high cost, traditional composite fabrication technologies have been utilized to manufacture some large, complex-shape components of these materials. However, many challenges still remain in developing affordable, robust, and flexible manufacturing technologies for large, complex-shape components with multifunctional properties. The prepreg and melt infiltration (PREMI) technology provides an affordable and robust manufacturing route for low-cost, large-scale production of multifunctional ceramic composite components.

  8. Integration and validation testing for PhEDEx, DBS and DAS with the PhEDEx LifeCycle agent

    NASA Astrophysics Data System (ADS)

    Boeser, C.; Chwalek, T.; Giffels, M.; Kuznetsov, V.; Wildish, T.

    2014-06-01

    The ever-increasing amount of data handled by the CMS dataflow and workflow management tools poses new challenges for cross-validation among different systems within the CMS experiment at the LHC. To approach this problem we developed an integration test suite based on the LifeCycle agent, a tool originally conceived for stress-testing new releases of PhEDEx, the CMS data-placement tool. The LifeCycle agent provides a framework for customising the test workflow in arbitrary ways, and can scale to levels of activity well beyond those seen in normal running. This means we can run realistic performance tests at scales not likely to be seen by the experiment for some years, or with custom topologies to examine particular situations that may cause concern some time in the future. The LifeCycle agent has recently been enhanced to become a general purpose integration and validation testing tool for major CMS services. It allows cross-system integration tests of all three components to be performed in controlled environments, without interfering with production services. In this paper we discuss the design and implementation of the LifeCycle agent. We describe how it is used for small-scale debugging and validation tests, and how we extend that to large-scale tests of whole groups of sub-systems. We show how the LifeCycle agent can emulate the action of operators, physicists, or software agents external to the system under test, and how it can be scaled to large and complex systems.

  9. Climate mitigation is not the only benefit of a national energy system

    NASA Astrophysics Data System (ADS)

    Clack, C.

    2016-12-01

    Many speculate that the main driving force for a continental-scale energy system is climate mitigation. While this is a strong driver, there are multiple co-benefits that emerge from such a transition when it is driven purely by costs. These components could be managed within a planned system to provide a close-to-optimal solution that enhances the probability of realization. It is shown that these co-benefits of a continental-scale electric system occur at costs lower than existing ones. That means there are multiple additional savings without extra costs or effort. The disadvantage is the coordination required between large geographic regions, which could add complexity to planning. The main finding from different versions of the NEWS simulator is that carbon mitigation is enhanced by larger systems. In addition, there are increased jobs, reduced water consumption, reduced sulphur dioxide and nitrogen oxide emissions, a more distributed electric system, and a lower cost of electricity.

  10. Evolution of ice sheets in the early Quaternary of the central North Sea: 2.58 Ma to 0.78 Ma

    NASA Astrophysics Data System (ADS)

    Lamb, R.; Huuse, M.; Stewart, M.; Brocklehurst, S. H.

    2016-12-01

    Integration of chronostratigraphic proxies with 3D seismic and well-log data has allowed for a basin-wide re-interpretation of the onset of glaciation in the central North Sea during the Quaternary. Mapping of seismic geomorphology, calculations of water depth and sediment accumulation rates, and other basin analysis techniques unravel the evolution of the North Sea basin during the early Pleistocene, a period of dramatic global cooling and rapid 41 kyr glacial-interglacial cycles, identifying a system which is increasingly dominated by large, continental-scale ice sheets. Prior to this study continental-scale ice sheets were generally not considered able to grow in a 41 kyr cycle and the earliest date for such an ice sheet in the North Sea was identified at the onset of tunnel valley formation in the Elsterian (0.48 Ma; MIS 12), which forms a large regional-scale glacial unconformity. At the onset of the Pleistocene at 2.58 Ma the North Sea basin was an elongate 'mega-fjord' with water depths of up to 350 m, infilling rapidly as the European river systems deposited a large clinoform complex in the southern end of the basin. This period corresponds to the preservation of large-scale iceberg scouring on clinoform topsets, suggesting the presence of marine-terminating ice sheets with repeated calving events. As the Pleistocene progressed and global climate became increasingly colder the North Sea became smaller and shallower due to the continual infill of the basin. At 1.72 Ma the basin reached a critical point between the cold climate and the shallowing of the basin, and the first evidence for grounded glaciation in the form of mega-scale glacial lineations is seen at this level. Between 1.72 and 0.48 Ma there is evidence for ice-streaming in the form of multiple MSGL flow sets re-occupying the central North Sea, as well as a large glaciotectonic complex. The glacial geomorphological evidence presented pushes back the date of grounded glaciation in the central North Sea by over one million years relative to the existing models. This raises important questions about the completeness of glaciation histories of Europe and other high and mid latitude land areas that can only be addressed by in-depth scrutiny of their adjacent offshore sedimentary records.

  11. Energy Spectral Behaviors of Communication Networks of Open-Source Communities

    PubMed Central

    Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun

    2015-01-01

    Large-scale online collaborative production activities in open-source communities must be accompanied by large-scale communication activities. Nowadays, the production activities of open-source communities, and especially their communication activities, are receiving more and more attention. Taking the CodePlex C# community as an example, this paper constructs complex network models of 12 periods of the community's communication structure based on real data; it then discusses the basic concepts of quantum mappings of complex networks, pointing out that the purpose of the mapping is to study the structures of complex networks following the idea used in quantum mechanics to study the structures of large molecules; finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, and concludes that the communication structures of the community exhibit multiple self-similarity and criticality. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions under which they apply, in revealing the characteristics of the structures. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331
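
    The specific quantum mappings are not defined in the record; as a generic stand-in for this kind of spectral analysis of a communication network, the sketch below builds a small graph with networkx and computes the eigenvalue spectrum of its adjacency matrix. The example graph is a placeholder dataset shipped with networkx, not the CodePlex data.

        import networkx as nx
        import numpy as np

        # Toy communication network: nodes are community members, edges are interaction relations.
        g = nx.les_miserables_graph()                # placeholder dataset bundled with networkx
        adjacency = nx.to_numpy_array(g)
        eigenvalues = np.linalg.eigvalsh(adjacency)  # real spectrum of the symmetric adjacency matrix
        print(f"{g.number_of_nodes()} nodes, spectral range "
              f"[{eigenvalues.min():.2f}, {eigenvalues.max():.2f}]")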

  12. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.

    PubMed

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-08-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. Specifically, we propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
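
    The adaptive Smolyak construction itself is involved; as a much simpler illustration of the underlying idea, replacing repeated evaluations of an expensive model with a polynomial surrogate defined over the whole parameter range, the sketch below fits a low-order polynomial to a toy one-parameter kinetic response. The model, sample points and polynomial degree are hypothetical, and the scheme is deliberately neither sparse nor adaptive.

        import numpy as np

        def steady_state_output(k):
            """Toy one-parameter kinetic model: steady state of a production/degradation balance."""
            return 1.0 / (1.0 + k)

        # Evaluate the "expensive" model at a handful of parameter values ...
        k_nodes = np.linspace(0.1, 10.0, 9)
        y_nodes = steady_state_output(k_nodes)

        # ... and build a degree-6 polynomial surrogate valid over the whole parameter range.
        coeffs = np.polynomial.polynomial.polyfit(k_nodes, y_nodes, deg=6)
        surrogate = lambda k: np.polynomial.polynomial.polyval(k, coeffs)

        k_test = np.array([0.5, 2.0, 7.5])
        print("model    :", steady_state_output(k_test))
        print("surrogate:", surrogate(k_test))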

  13. Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks

    PubMed Central

    Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph

    2015-01-01

    Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations. Specifically, we propose a deterministic computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance in kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design. PMID:26317784

  14. A rapid mechanism to remobilize and homogenize highly crystalline magma bodies.

    PubMed

    Burgisser, Alain; Bergantz, George W

    2011-03-10

    The largest products of magmatic activity on Earth, the great bodies of granite and their corresponding large eruptions, have a dual nature: homogeneity at the large scale and spatial and temporal heterogeneity at the small scale. This duality calls for a mechanism that selectively removes the large-scale heterogeneities associated with the incremental assembly of these magmatic systems and yet occurs rapidly despite crystal-rich, viscous conditions seemingly resistant to mixing. Here we show that a simple dynamic template can unify a wide range of apparently contradictory observations from both large plutonic bodies and volcanic systems by a mechanism of rapid remobilization (unzipping) of highly viscous crystal-rich mushes. We demonstrate that this remobilization can lead to rapid overturn and produce the observed juxtaposition of magmatic materials with very disparate ages and complex chemical zoning. What distinguishes our model is the recognition that the process has two stages. Initially, a stiff mushy magma is reheated from below, producing a reduction in crystallinity that leads to the growth of a subjacent buoyant mobile layer. When the thickening mobile layer becomes sufficiently buoyant, it penetrates the overlying viscous mushy magma. This second stage rapidly exports homogenized material from the lower mobile layer to the top of the system, and leads to partial overturn within the viscous mush itself as an additional mechanism of mixing. Model outputs illustrate that unzipping can rapidly produce large amounts of mobile magma available for eruption. The agreement between calculated and observed unzipping rates for historical eruptions at Pinatubo and at Montserrat demonstrates the general applicability of the model. This mechanism furthers our understanding of both the formation of periodically homogenized plutons (crust building) and of ignimbrites by large eruptions.

  15. Big Data in Medicine is Driving Big Changes

    PubMed Central

    Verspoor, K.

    2014-01-01

    Objectives: To summarise current research that takes advantage of “Big Data” in health and biomedical informatics applications. Methods: Survey of trends in this work, and exploration of literature describing how large-scale structured and unstructured data sources are being used to support applications from clinical decision making and health policy, to drug design and pharmacovigilance, and further to systems biology and genetics. Results: The survey highlights ongoing development of powerful new methods for turning that large-scale, and often complex, data into information that provides new insights into human health, in a range of different areas. Consideration of this body of work identifies several important paradigm shifts that are facilitated by Big Data resources and methods: in clinical and translational research, from hypothesis-driven research to data-driven research, and in medicine, from evidence-based practice to practice-based evidence. Conclusions: The increasing scale and availability of large quantities of health data require strategies for data management, data linkage, and data integration beyond the limits of many existing information systems, and substantial effort is underway to meet those needs. As our ability to make sense of that data improves, the value of the data will continue to increase. Health systems, genetics and genomics, population and public health; all areas of biomedicine stand to benefit from Big Data and the associated technologies. PMID:25123716

  16. Advanced Hypervelocity Aerophysics Facility Workshop

    NASA Technical Reports Server (NTRS)

    Witcofski, Robert D. (Compiler); Scallion, William I. (Compiler)

    1989-01-01

    The primary objective of the workshop was to obtain a critical assessment of a concept for a large, advanced hypervelocity ballistic range test facility powered by an electromagnetic launcher, which was proposed by the Langley Research Center. It was concluded that the subject large-scale facility was feasible and would provide the required ground-based capability for performing tests at entry flight conditions (velocity and density) on large, complex, instrumented models. It was also concluded that advances in remote measurement techniques and particularly onboard model instrumentation, light-weight model construction techniques, and model electromagnetic launcher (EML) systems must be made before any commitment for the construction of such a facility can be made.

  17. Information Flows? A Critique of Transfer Entropies

    NASA Astrophysics Data System (ADS)

    James, Ryan G.; Barnett, Nix; Crutchfield, James P.

    2016-06-01

    A central task in analyzing complex dynamics is to determine the loci of information storage and the communication topology of information flows within a system. Over the last decade and a half, diagnostics for the latter have come to be dominated by the transfer entropy. Via straightforward examples, we show that it and a derivative quantity, the causation entropy, do not, in fact, quantify the flow of information. At one and the same time they can overestimate flow or underestimate influence. We isolate why this is the case and propose several avenues to alternate measures for information flow. We also address an auxiliary consequence: The proliferation of networks as a now-common theoretical model for large-scale systems, in concert with the use of transfer-like entropies, has shoehorned dyadic relationships into our structural interpretation of the organization and behavior of complex systems. This interpretation thus fails to include the effects of polyadic dependencies. The net result is that much of the sophisticated organization of complex systems may go undetected.
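
    For readers unfamiliar with the quantity being critiqued, the sketch below gives a plug-in estimate of transfer entropy for discrete time series with history length 1; the synthetic example, in which one series simply copies the other with a one-step delay, is hypothetical and is not one of the authors' counterexamples.

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y):
            """Plug-in estimate of T_{X->Y} in bits for discrete series, using history length 1."""
            triples = list(zip(y[1:], y[:-1], x[:-1]))                 # (y_next, y_now, x_now)
            n = len(triples)
            p_xyz = Counter(triples)                                   # counts of (y_next, y_now, x_now)
            p_yx = Counter((yn, xn) for _, yn, xn in triples)          # counts of (y_now, x_now)
            p_yy = Counter((ynext, yn) for ynext, yn, _ in triples)    # counts of (y_next, y_now)
            p_y = Counter(yn for _, yn, _ in triples)                  # counts of y_now
            te = 0.0
            for (ynext, yn, xn), c in p_xyz.items():
                cond_full = c / p_yx[(yn, xn)]                         # p(y_next | y_now, x_now)
                cond_reduced = p_yy[(ynext, yn)] / p_y[yn]             # p(y_next | y_now)
                te += (c / n) * np.log2(cond_full / cond_reduced)
            return te

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, 10_000)
        y = np.roll(x, 1)   # y copies x with a one-step delay
        y[0] = 0
        print(f"T(x -> y) ~ {transfer_entropy(x, y):.2f} bits")  # close to 1 bit
        print(f"T(y -> x) ~ {transfer_entropy(y, x):.2f} bits")  # close to 0 bits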

  18. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-13

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.
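
    For orientation, the frequency-domain acoustic wave (Helmholtz) equation targeted by the method can be written, in one standard form for a heterogeneous velocity model (the notation is a common convention, not copied from the paper), as

        \[
          \nabla^{2} u(\mathbf{x},\omega) \;+\; \frac{\omega^{2}}{c^{2}(\mathbf{x})}\, u(\mathbf{x},\omega) \;=\; -\, s(\mathbf{x},\omega),
        \]

    where u is the pressure wavefield, ω the angular frequency, c(x) the spatially varying acoustic velocity, and s the source term.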

  19. An Efficient Multiscale Finite-Element Method for Frequency-Domain Seismic Wave Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    The frequency-domain seismic-wave equation, that is, the Helmholtz equation, has many important applications in seismological studies, yet is very challenging to solve, particularly for large geological models. Iterative solvers, domain decomposition, or parallel strategies can partially alleviate the computational burden, but these approaches may still encounter nontrivial difficulties in complex geological models where a sufficiently fine mesh is required to represent the fine-scale heterogeneities. We develop a novel numerical method to solve the frequency-domain acoustic wave equation on the basis of the multiscale finite-element theory. We discretize a heterogeneous model with a coarse mesh and employ carefully constructed high-order multiscale basis functions to form the basis space for the coarse mesh. Solved from medium- and frequency-dependent local problems, these multiscale basis functions can effectively capture the medium’s fine-scale heterogeneity and the source’s frequency information, leading to a discrete system matrix with a much smaller dimension compared with those from conventional methods. We then obtain an accurate solution to the acoustic Helmholtz equation by solving only a small linear system instead of a large linear system constructed on the fine mesh in conventional methods. We verify our new method using several models of complicated heterogeneities, and the results show that our new multiscale method can solve the Helmholtz equation in complex models with high accuracy and extremely low computational costs.

  20. Field Operations For The "Intelligent River" Observation System: A Basin-wide Water Quality Observation System In The Savannah River Basin And Platform Supporting Related Diverse Initiatives.

    NASA Astrophysics Data System (ADS)

    Sutton, A.; Koons, M.; O'Brien-Gayes, P.; Moorer, R.; Hallstrom, J.; Post, C.; Gayes, P. T.

    2017-12-01

    The Intelligent River (IR) initiative is an NSF-sponsored study developing new data management technology for a range of basin-scale applications. The technology developed by Florida Atlantic and Clemson University established a network of real-time reporting water quality sondes, from the mountains to the estuary of the Savannah River basin. Coastal Carolina University led the field operations campaign. Ancillary studies, student projects and initiatives benefitted from the associated instrumentation, infrastructure and operational support of the IR program. This provided a vehicle for students to participate in fieldwork across the watershed and pursue individual interests. Student projects included: 1) a multibeam sonar survey investigating channel morphology in the area of an IR sensor station and 2) field tests of developing techniques for acquiring and assimilating flood velocity data into model systems associated with a separate NSF Rapid award. The multibeam survey within the lower Savannah basin exhibited a range of complexity in bathymetry, bedforms and bottom habitat in the vicinity of one of the water quality stations. The complex morphology and bottom habitat reflect complex flow patterns and localized areas of depositional and erosive tendencies, providing a valuable context for considering point-source water quality time series. Micro-Lagrangian drifters developed by ISENSE at Florida Atlantic University, a sled-mounted ADCP, and particle tracking from imagery collected by a photogrammetric drone were tested and used to develop methodology for establishing velocity, direction and discharge levels to validate, initialize and assimilate data into advanced model systems during future flood events. The prospect of expanding wide-scale observing systems can serve as a platform to integrate small- and large-scale cooperative studies across disciplines as well as basic and applied research interests. Such initiatives provide opportunities for embedded education and experience for students that add to the understanding of broad integrated complex systems.

  1. Thermodynamic holography.

    PubMed

    Wei, Bo-Bo; Jiang, Zhan-Feng; Liu, Ren-Bao

    2015-10-19

    The holographic principle states that the information about a volume of a system is encoded on the boundary surface of the volume. Holography appears in many branches of physics, such as optics, electromagnetism, many-body physics, quantum gravity, and string theory. Here we show that holography is also an underlying principle in thermodynamics, a most important foundation of physics. The thermodynamics of a system is fully determined by its partition function. We prove that the partition function of a finite but arbitrarily large system is an analytic function on the complex plane of physical parameters, and therefore the partition function in a region on the complex plane is uniquely determined by its values along the boundary. The thermodynamic holography has applications in studying thermodynamics of nano-scale systems (such as molecular engines, nano-generators and macromolecules) and provides a new approach to many-body physics.
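
    The analyticity argument can be made concrete with the textbook definition of the canonical partition function together with Cauchy's integral formula (standard forms, included for orientation rather than quoted from the paper):

        \[
          Z(\beta) = \sum_{i} e^{-\beta E_{i}},
          \qquad
          Z(\beta_{0}) = \frac{1}{2\pi i} \oint_{\partial D} \frac{Z(\beta)}{\beta - \beta_{0}}\, d\beta .
        \]

    For a finite system the sum is finite, so Z is analytic on the whole complex β plane, and Cauchy's formula then fixes its values inside any region D from its values on the boundary ∂D.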

  2. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
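
    A minimal sketch of the train-on-a-subset, apply-to-the-rest strategy described above, assuming generic image descriptors and a single architectural trait; the synthetic data and the 25% training fraction are placeholders, not the authors' pipeline or dataset.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_images, n_descriptors = 2000, 30
    X = rng.normal(size=(n_images, n_descriptors))              # automatically extracted descriptors
    trait = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=n_images)  # e.g. total root length

    # Semi-automated measurements exist only for a subset; train on that subset.
    X_train, X_rest, y_train, y_rest = train_test_split(X, trait, train_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_train, y_train)

    # Apply the trained model to the remainder of the dataset.
    pred = model.predict(X_rest)
    print("R^2 on images without semi-automated labels:", model.score(X_rest, y_rest))
    ```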

  3. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Abstract Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  4. Small vulnerable sets determine large network cascades in power grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  5. Small vulnerable sets determine large network cascades in power grids

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-11-17

    The understanding of cascading failures in complex systems has been hindered by the lack of realistic large-scale modeling and analysis that can account for variable system conditions. By using the North American power grid, we identified, quantified, and analyzed the set of network components that are vulnerable to cascading failures under any out of multiple conditions. We show that the vulnerable set consists of a small but topologically central portion of the network and that large cascades are disproportionately more likely to be triggered by initial failures close to this set. These results elucidate aspects of the origins and causes of cascading failures relevant for grid design and operation and demonstrate vulnerability analysis methods that are applicable to a wider class of cascade-prone networks.

  6. Analysis and elimination of a bias in targeted molecular dynamics simulations of conformational transitions: application to calmodulin.

    PubMed

    Ovchinnikov, Victor; Karplus, Martin

    2012-07-26

    The popular targeted molecular dynamics (TMD) method for generating transition paths in complex biomolecular systems is revisited. In a typical TMD transition path, the large-scale changes occur early and the small-scale changes tend to occur later. As a result, the order of events in the computed paths depends on the direction in which the simulations are performed. To identify the origin of this bias, and to propose a method in which the bias is absent, variants of TMD in the restraint formulation are introduced and applied to the complex open ↔ closed transition in the protein calmodulin. Due to the global best-fit rotation that is typically part of the TMD method, the simulated system is guided implicitly along the lowest-frequency normal modes, until the large spatial scales associated with these modes are near the target conformation. The remaining portion of the transition is described progressively by higher-frequency modes, which correspond to smaller-scale rearrangements. A straightforward modification of TMD that avoids the global best-fit rotation is the locally restrained TMD (LRTMD) method, in which the biasing potential is constructed from a number of TMD potentials, each acting on a small connected portion of the protein sequence. With a uniform distribution of these elements, transition paths that lack the length-scale bias are obtained. Trajectories generated by steered MD in dihedral angle space (DSMD), a method that avoids best-fit rotations altogether, also lack the length-scale bias. To examine the importance of the paths generated by TMD, LRTMD, and DSMD in the actual transition, we use the finite-temperature string method to compute the free energy profile associated with a transition tube around a path generated by each algorithm. The free energy barriers associated with the paths are comparable, suggesting that transitions can occur along each route with similar probabilities. This result indicates that a broad ensemble of paths needs to be calculated to obtain a full description of conformational changes in biomolecules. The breadth of the contributing ensemble suggests that energetic barriers for conformational transitions in proteins are offset by entropic contributions that arise from a large number of possible paths.
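
    A schematic sketch of the global best-fit-rotation step that the standard TMD restraint relies on (and that LRTMD and DSMD avoid), assuming plain Cartesian coordinates and a harmonic restraint on the best-fit RMSD; this is an illustration of the Kabsch superposition, not the simulation-package implementation used in the paper.

    ```python
    import numpy as np

    def kabsch_rotation(P, Q):
        """Rotation matrix that best superposes centred coordinates P onto Q."""
        H = P.T @ Q
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against improper rotations
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    def tmd_energy(coords, target, k_force):
        """Harmonic TMD-style restraint on the best-fit RMSD to the target structure."""
        X = coords - coords.mean(axis=0)
        Y = target - target.mean(axis=0)
        R = kabsch_rotation(X, Y)
        rmsd = np.sqrt(np.mean(np.sum((X @ R.T - Y) ** 2, axis=1)))
        return 0.5 * k_force * rmsd ** 2, rmsd

    # toy example: a randomly perturbed copy of a 20-atom structure
    rng = np.random.default_rng(1)
    target = rng.normal(size=(20, 3))
    current = target + 0.3 * rng.normal(size=(20, 3))
    energy, rmsd = tmd_energy(current, target, k_force=100.0)
    print(f"best-fit RMSD = {rmsd:.3f}, restraint energy = {energy:.2f}")
    ```

    Because the restraint acts on the globally superposed RMSD, reducing it first favours the large-amplitude, low-frequency collective motions; LRTMD instead applies such restraints to small sequence segments without a global fit.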

  7. Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.

    PubMed

    Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán

    2014-03-11

    While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.

  8. Porous microwells for geometry-selective, large-scale microparticle arrays

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.

    2017-01-01

    Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.

  9. Theory of rumour spreading in complex social networks

    NASA Astrophysics Data System (ADS)

    Nekovee, M.; Moreno, Y.; Bianconi, G.; Marsili, M.

    2007-01-01

    We introduce a general stochastic model for the spread of rumours, and derive mean-field equations that describe the dynamics of the model on complex social networks (in particular, those mediated by the Internet). We use analytical and numerical solutions of these equations to examine the threshold behaviour and dynamics of the model on several models of such networks: random graphs, uncorrelated scale-free networks and scale-free networks with assortative degree correlations. We show that in both homogeneous networks and random graphs the model exhibits a critical threshold in the rumour spreading rate below which a rumour cannot propagate in the system. In the case of scale-free networks, on the other hand, this threshold becomes vanishingly small in the limit of infinite system size. We find that the initial rate at which a rumour spreads is much higher in scale-free networks than in random graphs, and that the rate at which the spreading proceeds on scale-free networks is further increased when assortative degree correlations are introduced. The impact of degree correlations on the final fraction of nodes that ever hears a rumour, however, depends on the interplay between network topology and the rumour spreading rate. Our results show that scale-free social networks are prone to the spreading of rumours, just as they are to the spreading of infections. They are relevant to the spreading dynamics of chain emails, viral advertising and large-scale information dissemination algorithms on the Internet.
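
    A minimal Monte Carlo sketch of an ignorant/spreader/stifler rumour process on a scale-free (Barabási-Albert) graph, assuming unit contact rates; the parameter names lambda_ and alpha follow the usual convention for spreading and stifling probabilities and are not necessarily the paper's exact formulation.

    ```python
    import random
    import networkx as nx

    def spread_rumour(G, lambda_=0.3, alpha=0.4, seed_node=0, max_steps=100_000):
        state = {n: "ignorant" for n in G}       # ignorant / spreader / stifler
        state[seed_node] = "spreader"
        spreaders = {seed_node}
        steps = 0
        while spreaders and steps < max_steps:
            steps += 1
            s = random.choice(tuple(spreaders))
            neigh = random.choice(list(G.neighbors(s)))
            if state[neigh] == "ignorant" and random.random() < lambda_:
                state[neigh] = "spreader"; spreaders.add(neigh)
            elif state[neigh] != "ignorant" and random.random() < alpha:
                state[s] = "stifler"; spreaders.discard(s)
        return sum(1 for v in state.values() if v != "ignorant") / G.number_of_nodes()

    G = nx.barabasi_albert_graph(n=5000, m=3, seed=42)
    print(f"final fraction of nodes that heard the rumour: {spread_rumour(G):.2f}")
    ```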

  10. Task Effects on Linguistic Complexity and Accuracy: A Large-Scale Learner Corpus Analysis Employing Natural Language Processing Techniques

    ERIC Educational Resources Information Center

    Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar

    2017-01-01

    Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…

  11. Mesoscale spatiotemporal variability in a complex host-parasite system influenced by intermediate host body size

    PubMed Central

    2017-01-01

    Background Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Methods Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts’ exposure to the parasite’s dispersive stages. Results Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length <15 mm) than large molecrabs (>15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. Conclusions These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation—a characteristic of indirect host-parasite interactions—and subsequent increasing mortality rates over ontogeny underpin size-dependent host-parasite dynamics. PMID:28828270

  12. Mesoscale spatiotemporal variability in a complex host-parasite system influenced by intermediate host body size.

    PubMed

    Rodríguez, Sara M; Valdivia, Nelson

    2017-01-01

    Parasites are essential components of natural communities, but the factors that generate skewed distributions of parasite occurrences and abundances across host populations are not well understood. Here, we analyse at a seascape scale the spatiotemporal relationships of parasite exposure and host body-size with the proportion of infected hosts (i.e., prevalence) and aggregation of parasite burden across ca. 150 km of the coast and over 22 months. We predicted that the effects of parasite exposure on prevalence and aggregation are dependent on host body-sizes. We used an indirect host-parasite interaction in which migratory seagulls, sandy-shore molecrabs, and an acanthocephalan worm constitute the definitive hosts, intermediate hosts, and endoparasite, respectively. In such complex systems, increments in the abundance of definitive hosts imply increments in intermediate hosts' exposure to the parasite's dispersive stages. Linear mixed-effects models showed a significant, albeit highly variable, positive relationship between seagull density and prevalence. This relationship was stronger for small (cephalothorax length <15 mm) than large molecrabs (>15 mm). Independently of seagull density, large molecrabs carried significantly more parasites than small molecrabs. The analysis of the variance-to-mean ratio of per capita parasite burden showed no relationship between seagull density and mean parasite aggregation across host populations. However, the amount of unexplained variability in aggregation was strikingly higher in larger than smaller intermediate hosts. This unexplained variability was driven by a decrease in the mean-variance scaling in heavily infected large molecrabs. These results show complex interdependencies between extrinsic and intrinsic population attributes on the structure of host-parasite interactions. We suggest that parasite accumulation, a characteristic of indirect host-parasite interactions, and subsequent increasing mortality rates over ontogeny underpin size-dependent host-parasite dynamics.
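
    A small sketch of the aggregation index used above, the variance-to-mean ratio of per capita parasite burden, computed separately for small and large hosts; the negative-binomial burdens are synthetic stand-ins for the molecrab data, chosen only because they mimic overdispersed counts.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    small_hosts = rng.negative_binomial(n=1, p=0.5, size=300)   # simulated parasite counts
    large_hosts = rng.negative_binomial(n=1, p=0.2, size=300)

    def variance_to_mean(burdens):
        burdens = np.asarray(burdens, dtype=float)
        return burdens.var(ddof=1) / burdens.mean()             # > 1 indicates aggregation

    print("VMR, small hosts:", round(variance_to_mean(small_hosts), 2))
    print("VMR, large hosts:", round(variance_to_mean(large_hosts), 2))
    ```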

  13. Miocene silicic volcanism in southwestern Idaho: Geochronology, geochemistry, and evolution of the central Snake River Plain

    USGS Publications Warehouse

    Bonnichsen, B.; Leeman, W.P.; Honjo, N.; McIntosh, W.C.; Godchaux, M.M.

    2008-01-01

    New 40Ar-39Ar geochronology, bulk rock geochemical data, and physical characteristics for representative stratigraphic sections of rhyolite ignimbrites and lavas from the west-central Snake River Plain (SRP) are combined to develop a coherent stratigraphic framework for Miocene silicic magmatism in this part of the Yellowstone 'hotspot track'. The magmatic record differs from that in areas to the west and east with regard to its unusually large extrusive volume, broad lateral scale, and extended duration. We infer that the magmatic systems developed in response to large-scale and repeated injections of basaltic magma into the crust, resulting in significant reconstitution of large volumes of the crust, wide distribution of crustal melt zones, and complex feeder systems for individual eruptive events. Some eruptive episodes or 'events' appear to be contemporaneous with major normal faulting, and perhaps catastrophic crustal foundering, that may have triggered concurrent evacuations of separate silicic magma reservoirs. This behavior and cumulative time-composition relations are difficult to relate to simple caldera-style single-source feeder systems and imply complex temporal-spatial development of the silicic magma systems. Inferred volumes and timing of mafic magma inputs, as the driving energy source, require a significant component of lithospheric extension on NNW-trending Basin and Range style faults (i.e., roughly parallel to the SW-NE orientation of the eastern SRP). This is needed to accommodate basaltic inputs at crustal levels, and is likely to play a role in generation of those magmas. Anomalously high magma production in the SRP compared to that in adjacent areas (e.g., northern Basin and Range Province) may require additional sub-lithospheric processes. © Springer-Verlag 2007.

  14. Crustal-Scale Fault Interaction at Rifted Margins and the Formation of Domain-Bounding Breakaway Complexes: Insights From Offshore Norway

    NASA Astrophysics Data System (ADS)

    Osmundsen, P. T.; Péron-Pinvidic, G.

    2018-03-01

    The large-magnitude faults that control crustal thinning and excision at rifted margins combine into laterally persistent structural boundaries that separate margin domains of contrasting morphology and structure. We term them breakaway complexes. At the Mid-Norwegian margin, we identify five principal breakaway complexes that separate the proximal, necking, distal, and outer margin domains. Downdip and lateral interactions between the faults that constitute breakaway complexes became fundamental to the evolution of the 3-D margin architecture. Different types of fault interaction are observed along and between these faults, but simple models for fault growth will not fully describe their evolution. These structures operate on the crustal scale, cut large thicknesses of heterogeneously layered lithosphere, and facilitate fundamental margin processes such as deformation coupling and exhumation. Variations in large-magnitude fault geometry, erosional footwall incision, and subsequent differential subsidence along the main breakaway complexes likely record the variable efficiency of these processes.

  15. An instrument design and sample strategy for measuring soil respiration in the coastal temperate rain forest

    NASA Astrophysics Data System (ADS)

    Nay, S. M.; D'Amore, D. V.

    2009-12-01

    The coastal temperate rainforest (CTR) along the northwest coast of North America is a large and complex mosaic of forests and wetlands located on an undulating terrain ranging from sea level to thousands of meters in elevation. This biome stores a dynamic portion of the total carbon stock of North America. The fate of the terrestrial carbon stock is of concern due to the potential for mobilization and export of this store to both the atmosphere, as carbon respiration flux, and the ocean, as dissolved organic and inorganic carbon flux. Soil respiration is the largest export vector in the system and must be accurately measured to gain any comprehensive understanding of how carbon moves through this system. Suitable monitoring tools capable of measuring carbon fluxes at small spatial scales are essential for our understanding of carbon dynamics at larger spatial scales within this complex assemblage of ecosystems. We have adapted instrumentation and developed a sampling strategy for optimizing replication of soil respiration measurements to quantify differences among spatially complex landscape units of the CTR. We start with the design of the instrument to ease the technological, ergonomic and financial barriers that technicians encounter in monitoring the efflux of CO2 from the soil. Our sampling strategy optimizes the physical efforts of the field work and manages for the high variation of flux measurements encountered in this difficult environment of rough terrain, dense vegetation and wet climate. Our soil respirometer incorporates an infra-red gas analyzer (LiCor Inc. LI-820) and an 8300 cm3 soil respiration chamber; the device is durable, lightweight, easy to operate and can be built for under $5000 per unit. The modest unit price allows for a multiple-unit fleet to be deployed and operated in an intensive field monitoring campaign. We use a large 346 cm2 collar to accommodate as much micro-scale spatial variation as feasible and to facilitate repeated measures for tracking temporal trends. Our collar design minimizes root interference yet provides a highly stable platform for coupling with the respirometer. Meso-scale variability, characterized by large downed woody debris, windthrow pits and mounds, and surface roots, is addressed by a hexagonal array of seven collars at two-meter spacing (a sample pod). Landscape-scale variability is managed through stratification and replication amongst ecosystem types arrayed across a hydrologic gradient from bogs to forested wetlands to upland forests. Our strategy has allowed us to gather data sets consisting of approximately 1800 total observations with approximately 600 measurements per replication per year. Mean coefficients of variation (CV) at the collar scale (micro-scale) were approximately 0.67, and the mean CV at the pod level (meso-scale) was reduced to approximately 0.29. The CVs at the vegetation-stratum scale were 0.43, 0.18 and 0.21 for bog, forested wetland and upland forest, respectively. With temperature and hydrological data we are able to measure and model carbon dynamics in this large and complex environment. The analysis of variability at the three spatial scales has confirmed that our approach is capturing and constraining the variability.
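
    A small sketch of the nested summary described above: coefficients of variation computed at the collar (micro), pod (meso), and vegetation-stratum scales. The simulated flux values and the exact definitions of each scale's CV are illustrative assumptions, not the study's data or its precise statistical protocol.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    rows = []
    for stratum in ["bog", "forested_wetland", "upland_forest"]:
        for pod in range(6):                           # hexagonal arrays of seven collars
            pod_level = rng.lognormal(mean=1.0, sigma=0.3)
            for collar in range(7):
                for visit in range(10):                # repeated measures per collar
                    rows.append({"stratum": stratum, "pod": pod, "collar": collar,
                                 "flux": pod_level * rng.lognormal(mean=0.0, sigma=0.4)})
    df = pd.DataFrame(rows)

    cv = lambda s: s.std(ddof=1) / s.mean()
    collar_means = df.groupby(["stratum", "pod", "collar"])["flux"].mean()
    pod_means = collar_means.groupby(["stratum", "pod"]).mean()

    micro_cv = df.groupby(["stratum", "pod", "collar"])["flux"].agg(cv).mean()
    meso_cv = collar_means.groupby(["stratum", "pod"]).agg(cv).mean()
    strata_cv = pod_means.groupby("stratum").agg(cv)
    print("mean collar-scale CV:", round(micro_cv, 2))
    print("mean pod-scale CV   :", round(meso_cv, 2))
    print(strata_cv.round(2))
    ```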

  16. A simplified method for power-law modelling of metabolic pathways from time-course data and steady-state flux profiles.

    PubMed

    Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru

    2006-07-17

    In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
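
    A minimal sketch of the S-system (power-law) form that the method estimates, using a two-metabolite straight pathway with made-up rate constants and kinetic orders; it is meant only to show the model structure, not the proposed linearization procedure itself.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    alpha = np.array([2.0, 1.5])          # production rate constants (hypothetical)
    beta  = np.array([1.5, 1.0])          # consumption rate constants (hypothetical)
    g = np.array([[0.0, -0.2],            # kinetic orders of the production terms
                  [0.8,  0.0]])
    h = np.array([[0.8, 0.0],             # kinetic orders of the consumption terms
                  [0.0, 0.5]])

    def s_system(t, X):
        """dX_i/dt = alpha_i * prod_j X_j^g_ij - beta_i * prod_j X_j^h_ij."""
        prod = alpha * np.prod(X ** g, axis=1)
        cons = beta  * np.prod(X ** h, axis=1)
        return prod - cons

    sol = solve_ivp(s_system, (0.0, 20.0), y0=[0.5, 0.5])
    print("approximate steady state:", sol.y[:, -1])
    ```

    The kinetic orders g and h are exactly the quantities that, in the method above, are related to the Jacobian and the steady-state fluxes instead of being fitted by iterative nonlinear regression.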

  17. Active Storage with Analytics Capabilities and I/O Runtime System for Petascale Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhary, Alok

    Computational scientists must understand results from experimental, observational and computational simulation-generated data to gain insights and perform knowledge discovery. As systems approach the petascale range, problems that were unimaginable a few years ago are within reach. With the increasing volume and complexity of data produced by ultra-scale simulations and high-throughput experiments, understanding the science is largely hampered by the lack of comprehensive I/O, storage, acceleration of data manipulation, analysis, and mining tools. Scientists require techniques, tools and infrastructure to facilitate better understanding of their data, in particular the ability to effectively perform complex data analysis, statistical analysis and knowledge discovery. The goal of this work is to enable more effective analysis of scientific datasets through the integration of enhancements in the I/O stack, from active storage support at the file system layer to MPI-IO and high-level I/O library layers. We propose to provide software components to accelerate data analytics, mining, I/O, and knowledge discovery for large-scale scientific applications, thereby increasing productivity of both scientists and the systems. Our approaches include 1) designing the interfaces in high-level I/O libraries, such as parallel netCDF, for applications to activate data mining operations at the lower I/O layers; 2) enhancing MPI-IO runtime systems to incorporate the functionality developed as a part of the runtime system design; 3) developing parallel data mining programs as part of the runtime library for the server-side file system in PVFS; and 4) prototyping an active storage cluster, which will utilize multicore CPUs, GPUs, and FPGAs to carry out the data mining workload.

  18. 3D printing via ambient reactive extrusion

    DOE PAGES

    Rios, Orlando; Carter, William G.; Post, Brian K.; ...

    2018-03-14

    Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.

  19. 3D printing via ambient reactive extrusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Carter, William G.; Post, Brian K.

    Here, Additive Manufacturing (AM) has the potential to offer many benefits over traditional manufacturing methods in the fabrication of complex parts with advantages such as low weight, complex geometry, and embedded functionality. In practice, today’s AM technologies are limited by their slow speed and highly directional properties. To address both issues, we have developed a reactive mixture deposition approach that can enable 3D printing of polymer materials at over 100X the volumetric deposition rate, enabled by a greater than 10X reduction in print head mass compared to existing large-scale thermoplastic deposition methods, with material chemistries that can be tuned for specific properties. Additionally, the reaction kinetics and transient rheological properties are specifically designed for the target deposition rates, enabling the synchronized development of increasing shear modulus and extensive cross linking across the printed layers. This ambient cure eliminates the internal stresses and bulk distortions that typically hamper AM of large parts, and yields a printed part with inter-layer covalent bonds that significantly improve the strength of the part along the build direction. The fast cure kinetics combined with the fine-tuned viscoelastic properties of the mixture enable rapid vertical builds that are not possible using other approaches. Through rheological characterization of mixtures that were capable of printing in this process as well as materials that have sufficient structural integrity for layer-on-layer printing, a “printability” rheological phase diagram has been developed, and is presented here. We envision this approach implemented as a deployable manufacturing system, where manufacturing is done on-site using the efficiently-shipped polymer, locally-sourced fillers, and a small, deployable print system. Unlike existing additive manufacturing approaches which require larger and slower print systems and complex thermal management strategies as scale increases, liquid reactive polymers decouple performance and print speed from the scale of the part, enabling a new class of cost-effective, fuel-efficient additive manufacturing.

  20. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
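
    A tiny linear-algebra analogue of the adjoint argument above, assuming a discrete state equation A(p) u = b and a scalar output J = gᵀu: a single extra (adjoint) solve yields the sensitivity of J with respect to every parameter at once, which a finite-difference check confirms. The matrices and parameter count are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, n_params = 50, 20
    A0 = 4.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))
    dA = [0.05 * rng.normal(size=(n, n)) for _ in range(n_params)]   # dA/dp_i
    b = rng.normal(size=n)
    g = rng.normal(size=n)
    p = np.zeros(n_params)

    def assemble(p):
        return A0 + sum(pi * dAi for pi, dAi in zip(p, dA))

    # one forward solve and one adjoint solve ...
    A = assemble(p)
    u = np.linalg.solve(A, b)              # state
    lam = np.linalg.solve(A.T, g)          # adjoint
    grad_adjoint = np.array([-lam @ (dAi @ u) for dAi in dA])

    # ... versus one extra forward solve per parameter (finite differences)
    eps, J = 1e-6, g @ u
    grad_fd = np.empty(n_params)
    for i in range(n_params):
        p_pert = p.copy(); p_pert[i] += eps
        grad_fd[i] = (g @ np.linalg.solve(assemble(p_pert), b) - J) / eps

    print("max |adjoint - finite difference|:", np.abs(grad_adjoint - grad_fd).max())
    ```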

  1. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  2. Fast Measurement and Reconstruction of Large Workpieces with Freeform Surfaces by Combining Local Scanning and Global Position Data

    PubMed Central

    Chen, Zhe; Zhang, Fumin; Qu, Xinghua; Liang, Baoqiu

    2015-01-01

    In this paper, we propose a new approach for the measurement and reconstruction of large workpieces with freeform surfaces. The system consists of a handheld laser scanning sensor and a position sensor. The laser scanning sensor is used to acquire the surface and geometry information, and the position sensor is utilized to unify the scanning sensors into a global coordinate system. The measurement process includes data collection, multi-sensor data fusion and surface reconstruction. With the multi-sensor data fusion, errors accumulated during the image alignment and registration process are minimized, and the measuring precision is significantly improved. After the dense, accurate acquisition of the three-dimensional (3-D) coordinates, the surface is reconstructed using a commercial software package based on the Non-Uniform Rational B-Splines (NURBS) surface representation. The system has been evaluated, both qualitatively and quantitatively, using reference measurements provided by a commercial laser scanning sensor. The method has been applied to the reconstruction of a large gear rim, and the accuracy is up to 0.0963 mm. The results show that this new combined method is promising for measuring and reconstructing large-scale objects with complex surface geometry. Compared with reported methods of large-scale shape measurement, it offers high freedom of motion, high precision and high measurement speed over a wide measurement range. PMID:26091396

  3. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biros, George

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model as well as the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a central challenge in UQ, especially for large-scale models. We propose to develop the mathematical tools to address these challenges in the context of extreme-scale problems. 4. Parallel scalable algorithms for Bayesian optimal experimental design (OED). Bayesian inversion yields quantified uncertainties in the model parameters, which can be propagated forward through the model to yield uncertainty in outputs of interest. This opens the way for designing new experiments to reduce the uncertainties in the model parameters and model predictions. Such experimental design problems have been intractable for large-scale problems using conventional methods; we will create OED algorithms that exploit the structure of the PDE model and the parameter-to-output map to overcome these challenges. Parallel algorithms for these four problems were created, analyzed, prototyped, implemented, tuned, and scaled up for leading-edge supercomputers, including UT-Austin’s own 10 petaflops Stampede system, ANL’s Mira system, and ORNL’s Titan system. While our focus is on fundamental mathematical/computational methods and algorithms, we will assess our methods on model problems derived from several DOE mission applications, including multiscale mechanics and ice sheet dynamics.
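
    A small linear-Gaussian sketch of the randomized maximum likelihood (RML) idea mentioned in area 1: each posterior sample is obtained from one optimization with randomly perturbed data and prior mean. The forward map, covariances, and dimensions below are made up for illustration; in the linear-Gaussian case the resulting sample covariance matches the exact posterior, which the script checks.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_param, n_obs = 20, 15
    G = rng.normal(size=(n_obs, n_param))        # linear parameter-to-observable map
    sigma_pr, sigma_obs = 1.0, 0.1
    m_true = rng.normal(size=n_param)
    d = G @ m_true + sigma_obs * rng.normal(size=n_obs)

    def rml_sample():
        """Minimize ||G m - d_pert||^2/sigma_obs^2 + ||m - m_pert||^2/sigma_pr^2."""
        d_pert = d + sigma_obs * rng.normal(size=n_obs)
        m_pert = sigma_pr * rng.normal(size=n_param)
        H = G.T @ G / sigma_obs**2 + np.eye(n_param) / sigma_pr**2
        rhs = G.T @ d_pert / sigma_obs**2 + m_pert / sigma_pr**2
        return np.linalg.solve(H, rhs)

    samples = np.array([rml_sample() for _ in range(2000)])
    post_cov_exact = np.linalg.inv(G.T @ G / sigma_obs**2 + np.eye(n_param) / sigma_pr**2)
    print("max covariance error:", np.abs(np.cov(samples.T) - post_cov_exact).max())
    ```

    In the extreme-scale setting described above, each perturbed optimization is itself a PDE-constrained solve, which is why structure-exploiting optimization machinery matters.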

  4. Investigating large-scale secondary circulations within impact crater topographies in a refractive index-matched facility

    NASA Astrophysics Data System (ADS)

    Blois, Gianluca; Kim, Taehoon; Bristow, Nathan; Day, Mackenzie; Kocurek, Gary; Anderson, William; Christensen, Kenneth

    2017-11-01

    Impact craters, common large-scale topographic features on the surface of Mars, are circular depressions delimited by a sharp ridge. A variety of crater fill morphologies exist, suggesting that complex intracrater circulations affect their evolution. Some large craters (diameter >10 km), particularly at mid latitudes on Mars, exhibit a central mound surrounded by a circular moat. Foremost among these examples is Gale crater, landing site of NASA's Curiosity rover, since large-scale climatic processes early in the history of Mars are preserved in the stratigraphic record of the inner mound. Investigating the intracrater flow produced by large-scale winds aloft over Mars craters is key to a number of important scientific issues, including ongoing research on Mars paleo-environmental reconstruction and the planning of future missions (these results must be viewed in conjunction with the effects of radial katabatic flows, the importance of which is already established in preceding studies). In this work we consider a number of crater shapes inspired by Gale morphology, including idealized craters. Access to the flow field within such geometrically complex topography is achieved herein using a refractive index matched approach. Instantaneous velocity maps, using both planar and volumetric PIV techniques, are presented to elucidate complex three-dimensional flow within the crater. In addition, first- and second-order statistics will be discussed in the context of wind-driven (aeolian) excavation of crater fill.

  5. A Vision-Based Dynamic Rotational Angle Measurement System for Large Civil Structures

    PubMed Central

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Therefore, alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement, the results of the system showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on the five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system. PMID:22969348

  6. A vision-based dynamic rotational angle measurement system for large civil structures.

    PubMed

    Lee, Jong-Jae; Ho, Hoai-Nam; Lee, Jong-Han

    2012-01-01

    In this paper, we propose a vision-based rotational angle measurement system for large-scale civil structures. Although several rotation angle measurement systems were introduced during the last decade, they often required complex and expensive equipment. Therefore, alternative effective solutions with high resolution are in great demand. The proposed system consists of commercial PCs, commercial camcorders, low-cost frame grabbers, and a wireless LAN router. The rotation angle is calculated using image processing techniques with pre-measured calibration parameters. Several laboratory tests were conducted to verify the performance of the proposed system. Compared with a commercial rotation angle measurement, the results of the system showed very good agreement, with an error of less than 1.0% in all test cases. Furthermore, several tests were conducted on the five-story modal testing tower with a hybrid mass damper to experimentally verify the feasibility of the proposed system.

  7. Psychometric analysis of the leadership environment scale (LENS): Outcome from the Oregon research initiative on the organisation of nursing (ORION).

    PubMed

    Ross, Amy M; Ilic, Kelley; Kiyoshi-Teo, Hiroko; Lee, Christopher S

    2017-12-26

    The purpose of this study was to establish the psychometric properties of the new 16-item leadership environment scale. The leadership environment scale was based on complexity science concepts relevant to complex adaptive health care systems. A workforce survey of direct-care nurses was conducted (n = 1,443) in Oregon. Confirmatory factor analysis, exploratory factor analysis, concordant validity test and reliability tests were conducted to establish the structure and internal consistency of the leadership environment scale. Confirmatory factor analysis indices approached acceptable thresholds of fit with a single factor solution. Exploratory factor analysis showed improved fit with a two-factor model solution; the factors were labelled 'influencing relationships' and 'interdependent system supports'. Moderate to strong convergent validity was observed between the leadership environment scale/subscales and both the nursing workforce index and the safety organising scale. Reliability of the leadership environment scale and subscales was strong, with all alphas ≥.85. The leadership environment scale is structurally sound and reliable. Nursing management can employ adaptive complexity leadership attributes, measure their influence on the leadership environment, subsequently modify system supports and relationships and improve the quality of health care systems. The leadership environment scale is an innovative fit to complex adaptive systems and how nurses act as leaders within these systems. © 2017 John Wiley & Sons Ltd.
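
    A short sketch of the internal-consistency check reported above (all alphas ≥ .85), assuming the reliability coefficient is Cronbach's alpha and that item responses are stored one column per scale item; the simulated responses below stand in for the ORION survey data.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """items: (n_respondents, n_items) array of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(3)
    latent = rng.normal(size=(1443, 1))                        # one underlying factor
    responses = latent + 0.7 * rng.normal(size=(1443, 16))     # 16 correlated items
    print("Cronbach's alpha:", round(cronbach_alpha(responses), 2))
    ```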

  8. State estimation and prediction using clustered particle filters.

    PubMed

    Lee, Yoonsang; Majda, Andrew J

    2016-12-20

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors.
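
    A compact bootstrap particle filter on the 40-variable Lorenz 96 model, shown as the standard baseline that the clustered/localized filter above improves upon (not the clustered algorithm itself); the forcing, noise levels, observation pattern, and particle count are illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, F, dt = 40, 8.0, 0.05

    def l96_rhs(x):
        """dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F, cyclic in k."""
        return (np.roll(x, -1, axis=-1) - np.roll(x, 2, axis=-1)) * np.roll(x, 1, axis=-1) - x + F

    def rk4_step(x):
        k1 = l96_rhs(x); k2 = l96_rhs(x + 0.5 * dt * k1)
        k3 = l96_rhs(x + 0.5 * dt * k2); k4 = l96_rhs(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    obs_idx = np.arange(0, K, 2)                 # observe every other variable
    sigma_obs = 0.5
    truth = F + rng.normal(size=K)
    n_steps, n_particles = 200, 500
    particles = truth + rng.normal(scale=2.0, size=(n_particles, K))
    errors = []
    for t in range(n_steps):
        truth = rk4_step(truth)
        particles = rk4_step(particles) + rng.normal(scale=0.1, size=particles.shape)
        y = truth[obs_idx] + sigma_obs * rng.normal(size=obs_idx.size)
        resid = particles[:, obs_idx] - y        # Gaussian observation likelihood
        logw = -0.5 * np.sum(resid**2, axis=1) / sigma_obs**2
        w = np.exp(logw - logw.max()); w /= w.sum()
        # multinomial resampling; weight collapse here is what clustering mitigates
        particles = particles[rng.choice(n_particles, size=n_particles, p=w)]
        errors.append(np.sqrt(np.mean((particles.mean(axis=0) - truth) ** 2)))
    print("mean analysis RMSE after spin-up:", np.mean(errors[50:]))
    ```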

  9. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems such as parallel reservoirs and hybrid reservoirs often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently due to directed searching by weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA on flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with the inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
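
    A short sketch of the standard crowding-distance computation that NSGA-II uses to keep a non-dominated front well spread; the objective values are placeholders, and the specific weighting scheme of WNSGA II is not shown here.

    ```python
    import numpy as np

    def crowding_distance(objectives):
        """objectives: (n_solutions, n_objectives) values of one non-dominated front."""
        n, m = objectives.shape
        dist = np.zeros(n)
        for j in range(m):
            order = np.argsort(objectives[:, j])
            f = objectives[order, j]
            dist[order[0]] = dist[order[-1]] = np.inf     # always keep boundary solutions
            span = f[-1] - f[0]
            if span > 0:
                dist[order[1:-1]] += (f[2:] - f[:-2]) / span
        return dist

    # toy two-objective front (e.g. flood peak at two control sections)
    front = np.array([[1.0, 9.0], [2.0, 7.0], [3.0, 5.5], [5.0, 3.0], [8.0, 1.0]])
    print(crowding_distance(front))
    ```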

  10. State estimation and prediction using clustered particle filters

    PubMed Central

    Lee, Yoonsang; Majda, Andrew J.

    2016-01-01

    Particle filtering is an essential tool to improve uncertain model predictions by incorporating noisy observational data from complex systems including non-Gaussian features. A class of particle filters, clustered particle filters, is introduced for high-dimensional nonlinear systems, which uses relatively few particles compared with the standard particle filter. The clustered particle filter captures non-Gaussian features of the true signal, which are typical in complex nonlinear dynamical systems such as geophysical systems. The method is also robust in the difficult regime of high-quality sparse and infrequent observations. The key features of the clustered particle filtering are coarse-grained localization through the clustering of the state variables and particle adjustment to stabilize the method; each observation affects only neighbor state variables through clustering and particles are adjusted to prevent particle collapse due to high-quality observations. The clustered particle filter is tested for the 40-dimensional Lorenz 96 model with several dynamical regimes including strongly non-Gaussian statistics. The clustered particle filter shows robust skill in both achieving accurate filter results and capturing non-Gaussian statistics of the true signal. It is further extended to multiscale data assimilation, which provides the large-scale estimation by combining a cheap reduced-order forecast model and mixed observations of the large- and small-scale variables. This approach enables the use of a larger number of particles due to the computational savings in the forecast model. The multiscale clustered particle filter is tested for one-dimensional dispersive wave turbulence using a forecast model with model errors. PMID:27930332

  11. A characterization of workflow management systems for extreme-scale applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. In recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  12. A characterization of workflow management systems for extreme-scale applications

    DOE PAGES

    Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...

    2017-02-16

    The automation of the execution of computational tasks is at the heart of improving scientific productivity. In recent years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today’s computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.

  13. Impacts of large dams on the complexity of suspended sediment dynamics in the Yangtze River

    NASA Astrophysics Data System (ADS)

    Wang, Yuankun; Rhoads, Bruce L.; Wang, Dong; Wu, Jichun; Zhang, Xiao

    2018-03-01

    The Yangtze River is one of the largest and most important rivers in the world. Over the past several decades, the natural sediment regime of the Yangtze River has been altered by the construction of dams. This paper uses multi-scale entropy analysis to ascertain the impacts of large dams on the complexity of high-frequency suspended sediment dynamics in the Yangtze River system, especially after impoundment of the Three Gorges Dam (TGD). In this study, the complexity of sediment dynamics is quantified by framing it within the context of entropy analysis of time series. Data on daily sediment loads for four stations located in the mainstem are analyzed for the past 60 years. The results indicate that dam construction has reduced the complexity of short-term (1-30 days) variation in sediment dynamics near the structures, but that complexity has actually increased farther downstream. This spatial pattern seems to reflect a filtering effect of the dams on the temporal pattern of sediment loads as well as decreased longitudinal connectivity of sediment transfer through the river system, resulting in downstream enhancement of the influence of local sediment inputs by tributaries on sediment dynamics. The TGD has had a substantial impact on the complexity of sediment series in the mainstem of the Yangtze River, especially after it became fully operational. This enhanced impact is attributed to the high trapping efficiency of this dam and its associated large reservoir. The sediment dynamics "signal" becomes more spatially variable after dam construction. This study demonstrates the spatial influence of dams on the high-frequency temporal complexity of sediment regimes and provides valuable information that can be used to guide environmental conservation of the Yangtze River.
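
    A compact sketch of the multi-scale entropy procedure referred to above: coarse-grain a daily series at increasing scales and compute sample entropy at each scale. The tolerance r and embedding dimension m follow common defaults, and the input series here is synthetic rather than Yangtze sediment-load data.

    ```python
    import numpy as np

    def sample_entropy(x, m=2, r=0.15):
        x = np.asarray(x, dtype=float)
        tol = r * x.std()
        def match_count(length):
            templ = np.array([x[i:i + length] for i in range(len(x) - length)])
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return np.sum(d <= tol) - len(templ)      # pair count, self-matches excluded
        B, A = match_count(m), match_count(m + 1)
        return -np.log(A / B) if A > 0 and B > 0 else np.nan

    def multiscale_entropy(x, scales=range(1, 11)):
        out = []
        for s in scales:
            n = len(x) // s
            coarse = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)   # non-overlapping means
            out.append(sample_entropy(coarse))
        return out

    rng = np.random.default_rng(0)
    t = np.arange(1500)
    series = 10.0 + 5.0 * np.sin(2.0 * np.pi * t / 365.0) + rng.normal(size=t.size)  # toy daily series
    print([round(v, 2) for v in multiscale_entropy(series)])
    ```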

  14. Complexity analysis of the Next Gen Air Traffic Management System: trajectory based operations.

    PubMed

    Lyons, Rhonda

    2012-01-01

    According to Federal Aviation Administration traffic predictions, our Air Traffic Management (ATM) system is currently operating at 150 percent capacity, with forecasts that within the next two decades traffic will increase to a staggering 250 percent [17]. This will require a major redesign of our system. Today's ATM system is complex. It is designed to safely, economically, and efficiently provide air traffic services through the cost-effective provision of facilities and seamless services in collaboration with multiple agents; however, contrary to the vision, the system is loosely integrated and is suffering tremendously from antiquated equipment and saturated airways. The new Next Generation (Next Gen) ATM system is designed to transform the current system into an agile, robust and responsive set of operations that can safely manage the growing needs of the projected increasingly complex, diverse set of air transportation system users and massive projected worldwide traffic rates. This new revolutionary technology-centric system is dynamically complex and is much more sophisticated than its soon-to-be predecessor. ATM system failures could yield large-scale catastrophic consequences, as it is a safety-critical system. This work will attempt to describe complexity and the complex nature of the NextGen ATM system and Trajectory Based Operations. Complex human factors interactions within Next Gen will be analyzed using a proposed dual experimental approach designed to identify hazards and gaps and to elicit emergent hazards that would not be visible if the analyses were conducted in isolation. Suggestions will be made along with a proposal for future human factors research in the TBO safety-critical Next Gen environment.

  15. Dynamic ruptures on faults of complex geometry: insights from numerical simulations, from large-scale curvature to small-scale fractal roughness

    NASA Astrophysics Data System (ADS)

    Ulrich, T.; Gabriel, A. A.

    2016-12-01

    The geometry of faults is subject to a large degree of uncertainty. Being buried structures that are not directly observable, their complex shapes may only be inferred from surface traces, if available, or through geophysical methods such as reflection seismology. As a consequence, most studies aiming at assessing the potential hazard of faults rely on idealized fault models based on observable large-scale features. Yet real faults are known to be wavy at all scales, their geometric features presenting similar statistical properties from the micro to the regional scale. The influence of roughness on the earthquake rupture process is currently a driving topic in the computational seismology community. From the numerical point of view, rough-fault problems are challenging: they require optimized codes able to run efficiently on high-performance computing infrastructure and simultaneously handle complex geometries. Physically, simulated ruptures hosted by rough faults appear to be much closer, in terms of complexity, to source models inverted from observations. Incorporating fault geometry on all scales may thus be crucial to model realistic earthquake source processes and to estimate seismic hazard more accurately. In this study, we use the software package SeisSol, based on an ADER-Discontinuous Galerkin scheme, to run our numerical simulations. SeisSol allows solving the spontaneous dynamic earthquake rupture problem and the wave propagation problem with high-order accuracy in space and time, efficiently, on large-scale machines. The influence of fault roughness on dynamic rupture style (e.g. onset of supershear transition, rupture front coherence, propagation of self-healing pulses, etc.) at different length scales is investigated by analyzing ruptures on faults of varying roughness spectral content. In particular, we investigate the existence of a minimum roughness length scale, in terms of rupture-inherent length scales, below which the rupture ceases to be sensitive to roughness. Finally, the effect of fault geometry on near-field ground motions is considered. Our simulations feature classical linear slip-weakening friction on the fault and a viscoplastic constitutive model off the fault. The benefits of using a more elaborate fast velocity-weakening friction law will also be considered.

  16. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation on decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large-scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  17. Observations of a nearby filament of galaxy clusters with the Sardinia Radio Telescope

    NASA Astrophysics Data System (ADS)

    Vacca, Valentina; Murgia, M.; Loi, F.; Govoni, F.; Vazza, F.; Finoguenov, A.; Carretti, E.; Feretti, L.; Giovannini, G.; Concu, R.; Melis, A.; Gheller, C.; Paladino, R.; Poppi, S.; Valente, G.; Bernardi, G.; Boschin, W.; Brienza, M.; Clarke, T. E.; Colafrancesco, S.; Enßlin, T.; Ferrari, C.; de Gasperin, F.; Gastaldello, F.; Girardi, M.; Gregorini, L.; Johnston-Hollitt, M.; Junklewitz, H.; Orrù, E.; Parma, P.; Perley, R.; Taylor, G. B.

    2018-05-01

    We report the detection of diffuse radio emission which might be connected to a large-scale filament of the cosmic web covering an 8° × 8° area in the sky, likely associated with a z≈0.1 over-density traced by nine massive galaxy clusters. In this work, we present radio observations of this region taken with the Sardinia Radio Telescope. Two of the clusters in the field host a powerful radio halo sustained by violent ongoing mergers and provide direct proof of intra-cluster magnetic fields. In order to investigate the presence of large-scale diffuse radio synchrotron emission in and beyond the galaxy clusters in this complex system, we combined the data taken at 1.4 GHz with the Sardinia Radio Telescope with higher resolution data from the NRAO VLA Sky Survey. We found 28 candidate new sources with a size larger and X-ray emission fainter than known diffuse large-scale synchrotron cluster sources for a given radio power. This new population is potentially the tip of the iceberg of a class of diffuse large-scale synchrotron sources associated with the filaments of the cosmic web. In addition, we found in the field a candidate new giant radio galaxy.

  18. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  19. Progressive Mid-latitude Afforestation: Local and Remote Climate Impacts in the Framework of Two Coupled Earth System Models

    NASA Astrophysics Data System (ADS)

    Lague, Marysa

    Vegetation influences the atmosphere in complex and non-linear ways, such that large-scale changes in vegetation cover can drive changes in climate on both local and global scales. Large-scale land surface changes have been shown to introduce excess energy to one hemisphere, causing a shift in atmospheric circulation on a global scale. However, past work has not quantified how the climate response scales with the area of vegetation. Here, we systematically evaluate the response of climate to linearly increasing the area of forest cover over the northern mid-latitudes. We show that the magnitude of afforestation of the northern mid-latitudes determines the climate response in a non-linear fashion, and identify a threshold in vegetation-induced cloud feedbacks - a concept not previously addressed by large-scale vegetation manipulation experiments. Small increases in tree cover drive compensating cloud feedbacks, while latent heat fluxes reach a threshold after sufficiently large increases in tree cover, causing the troposphere to warm and dry, subsequently reducing cloud cover. Increased absorption of solar radiation at the surface is driven by both surface albedo changes and cloud feedbacks. We identify how vegetation-induced changes in cloud cover further feed back on changes in the global energy balance. We also show how atmospheric cross-equatorial energy transport changes as the area of afforestation is incrementally increased (a relationship which has not previously been demonstrated). This work demonstrates that while some climate effects (such as energy transport) of large scale mid-latitude afforestation scale roughly linearly across a wide range of afforestation areas, others (such as the local partitioning of the surface energy budget) are non-linear, and sensitive to the particular magnitude of mid-latitude forcing. Our results highlight the importance of considering both local and remote climate responses to large-scale vegetation change, and explore the scaling relationship between changes in vegetation cover and the resulting climate impacts.

  20. Multiscale Granger causality

    NASA Astrophysics Data System (ADS)

    Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele

    2017-10-01

    In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
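
    As a rough illustration of the multiscale idea described above, the sketch below estimates Granger causality on a toy bivariate process after naive coarse-graining (non-overlapping averaging) at several time scales, using ordinary least-squares AR fits. It deliberately uses the simple filtered/downsampled estimation that the paper improves upon, not the exact state space formulation; the process, lag order, and scales are assumptions chosen only for the demonstration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_var(n=20000):
        """Toy bivariate VAR(1) in which y Granger-causes x (assumed for the demo)."""
        x, y = np.zeros(n), np.zeros(n)
        for t in range(1, n):
            y[t] = 0.7 * y[t - 1] + rng.normal()
            x[t] = 0.5 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()
        return x, y

    def coarse_grain(s, tau):
        """Naive rescaling: non-overlapping averages of length tau (not the paper's SS filtering)."""
        m = len(s) // tau
        return s[: m * tau].reshape(m, tau).mean(axis=1)

    def gc_y_to_x(x, y, p=2):
        """GC = log(residual variance of restricted AR fit / residual variance of full fit)."""
        n = len(x)
        target = x[p:]
        lag_x = np.column_stack([x[p - k: n - k] for k in range(1, p + 1)])
        lag_y = np.column_stack([y[p - k: n - k] for k in range(1, p + 1)])
        def resid_var(predictors):
            design = np.column_stack([np.ones(len(target)), predictors])
            beta, *_ = np.linalg.lstsq(design, target, rcond=None)
            return np.var(target - design @ beta)
        return np.log(resid_var(lag_x) / resid_var(np.column_stack([lag_x, lag_y])))

    x, y = simulate_var()
    for tau in (1, 2, 4, 8):
        print(f"scale {tau}: GC(y -> x) ~ {gc_y_to_x(coarse_grain(x, tau), coarse_grain(y, tau)):.3f}")
    ```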

  1. A modeling process to understand complex system architectures

    NASA Astrophysics Data System (ADS)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign level analysis tools, oftentimes characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that the analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first of the main hypotheses, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) that the capability can be defined as a cycle of functions, and (2) that it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named the Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two architectures is better more than 98% of the time. The second objective led to Hypothesis B, the second of the main hypotheses. This hypothesis stated that by studying the functional relations, the most critical entities composing the architecture could be identified. The critical entities are those for which slight variations in behavior cause the behavior of the overall architecture to vary greatly. These are the entities that must be modeled more carefully and where modeling effort should be expended. This hypothesis was tested by simplifying agent-based models to the non-trivial minimum, and executing a large number of different simulations in order to obtain statistically significant results. The tests were conducted by evolving the complex model without any error induced, and then evolving the model once again for each ranking and assigning error to any of the nodes with a probability inversely proportional to the ranking. The results from this hypothesis test indicate that depending on the structural characteristics of the functional relations, it is useful to use one of two of the intelligent rankings tested, or it is best to expend effort equally amongst all the entities. Random ranking always performed worse than uniform ranking, indicating that if modeling effort is to be prioritized amongst the entities composing the large-scale system architecture, it should be prioritized intelligently. The benefit threshold between intelligent prioritization and no prioritization lies on the large-scale system's chaotic boundary.
If the large-scale system behaves chaotically, small variations in any of the entities tend to have a great impact on the behavior of the entire system. Therefore, even low-ranking entities can still greatly affect the behavior of the model, and error should not be concentrated in any one entity. It was discovered that the threshold can be identified by studying the structure of the networks, in particular the cyclicity, the Off-diagonal Complexity, and the Digraph Algebraic Connectivity. (Abstract shortened by UMI.)

  2. Flagellum synchronization inhibits large-scale hydrodynamic instabilities in sperm suspensions

    NASA Astrophysics Data System (ADS)

    Schöller, Simon F.; Keaveny, Eric E.

    2016-11-01

    Sperm in suspension can exhibit large-scale collective motion and form coherent structures. Our picture of such coherent motion is largely based on reduced models that treat the swimmers as self-locomoting rigid bodies that interact via steady dipolar flow fields. Swimming sperm, however, have many more degrees of freedom due to elasticity, have a more exotic shape, and generate spatially-complex, time-dependent flow fields. While these complexities are known to lead to phenomena such as flagellum synchronization and attraction, how these effects impact the overall suspension behaviour and coherent structure formation is largely unknown. Using a computational model that captures both flagellum beating and elasticity, we simulate suspensions on the order of 103 individual swimming sperm cells whose motion is coupled through the surrounding Stokesian fluid. We find that the tendency for flagella to synchronize and sperm to aggregate inhibits the emergence of the large-scale hydrodynamic instabilities often associated with active suspensions. However, when synchronization is repressed by adding noise in the flagellum actuation mechanism, the picture changes and the structures that resemble large-scale vortices appear to re-emerge. Supported by an Imperial College PhD scholarship.

  3. Large Scale Comparative Visualisation of Regulatory Networks with TRNDiff

    DOE PAGES

    Chua, Xin-Yi; Buckingham, Lawrence; Hogan, James M.; ...

    2015-06-01

    The advent of Next Generation Sequencing (NGS) technologies has seen explosive growth in genomic datasets, and dense coverage of related organisms, supporting study of subtle, strain-specific variations as a determinant of function. Such data collections present fresh and complex challenges for bioinformatics, those of comparing models of complex relationships across hundreds and even thousands of sequences. Transcriptional Regulatory Network (TRN) structures document the influence of regulatory proteins called Transcription Factors (TFs) on associated Target Genes (TGs). TRNs are routinely inferred from model systems or iterative search, and analysis at these scales requires simultaneous displays of multiple networks well beyond those of existing network visualisation tools [1]. In this paper we describe TRNDiff, an open source system supporting the comparative analysis and visualization of TRNs (and similarly structured data) from many genomes, allowing rapid identification of functional variations within species. The approach is demonstrated through a small scale multiple TRN analysis of the Fur iron-uptake system of Yersinia, suggesting a number of candidate virulence factors; and through a larger study exploiting integration with the RegPrecise database (http://regprecise.lbl.gov; [2]) - a collection of hundreds of manually curated and predicted transcription factor regulons drawn from across the entire spectrum of prokaryotic organisms.

  4. Systems biology: An emerging strategy for discovering novel pathogenetic mechanisms that promote cardiovascular disease.

    PubMed

    Maron, Bradley A; Leopold, Jane A

    2016-09-30

    Reductionist theory proposes that analyzing complex systems according to their most fundamental components is required for problem resolution, and has served as the cornerstone of scientific methodology for more than four centuries. However, technological gains in the current scientific era now allow for the generation of large datasets that profile the proteomic, genomic, and metabolomic signatures of biological systems across a range of conditions. The accessibility of data on such a vast scale has, in turn, highlighted the limitations of reductionism, which is not conducive to analyses that consider multiple and contemporaneous interactions between intermediates within a pathway or across constructs. Systems biology has emerged as an alternative approach to analyze complex biological systems. This methodology is based on the generation of scale-free networks and, thus, provides a quantitative assessment of relationships between multiple intermediates, such as protein-protein interactions, within and between pathways of interest. In this way, systems biology is well positioned to identify novel targets implicated in the pathogenesis or treatment of diseases. In this review, the historical root and fundamental basis of systems biology, as well as the potential applications of this methodology are discussed with particular emphasis on integration of these concepts to further understanding of cardiovascular disorders such as coronary artery disease and pulmonary hypertension.
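
    As a minimal illustration of the scale-free networks mentioned above, the sketch below grows a toy interaction network by preferential attachment and lists its most connected nodes ("hubs"). The construction and its parameters are assumptions made purely for illustration and are not drawn from any biological dataset.

    ```python
    import random
    from collections import Counter

    random.seed(1)

    def preferential_attachment(n_nodes=500, m=2):
        """Grow a toy scale-free network: each new node links to m existing nodes,
        chosen with probability proportional to their current degree."""
        edges = [(0, 1)]                      # seed edge
        targets = [0, 1]                      # node pool weighted by degree
        for new in range(2, n_nodes):
            chosen = set()
            while len(chosen) < m:
                chosen.add(random.choice(targets))
            for old in chosen:
                edges.append((new, old))
                targets.extend([new, old])    # keep the degree-weighted sampling pool up to date
        return edges

    degree = Counter()
    for a, b in preferential_attachment():
        degree[a] += 1
        degree[b] += 1

    # A few highly connected "hub" nodes dominate, the signature of a scale-free topology.
    print("top hubs:", degree.most_common(5))
    ```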

  5. Science Competencies That Go Unassessed

    ERIC Educational Resources Information Center

    Gilmer, Penny J.; Sherdan, Danielle M.; Oosterhof, Albert; Rohani, Faranak; Rouby, Aaron

    2011-01-01

    Present large-scale assessments require the use of item formats, such as multiple choice, that can be administered and scored efficiently. This limits competencies that can be measured by these assessments. An alternative approach to large-scale assessments is being investigated that would include the use of complex performance assessments. As…

  6. Clustering biomolecular complexes by residue contacts similarity.

    PubMed

    Rodrigues, João P G L M; Trellet, Mikaël; Schmitz, Christophe; Kastritis, Panagiotis; Karaca, Ezgi; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2012-07-01

    Inaccuracies in computational molecular modeling methods are often counterweighed by brute-force generation of a plethora of putative solutions. These are then typically sieved via structural clustering based on similarity measures such as the root mean square deviation (RMSD) of atomic positions. Albeit widely used, these measures suffer from several theoretical and technical limitations (e.g., choice of regions for fitting) that impair their application in multicomponent systems (N > 2), large-scale studies (e.g., interactomes), and other time-critical scenarios. We present here a simple similarity measure for structural clustering based on atomic contacts--the fraction of common contacts--and compare it with the most used similarity measure of the protein docking community--interface backbone RMSD. We show that this method produces very compact clusters in remarkably short time when applied to a collection of binary and multicomponent protein-protein and protein-DNA complexes. Furthermore, it allows easy clustering of similar conformations of multicomponent symmetrical assemblies in which chain permutations can occur. Simple contact-based metrics should be applicable to other structural biology clustering problems, in particular for time-critical or large-scale endeavors. Copyright © 2012 Wiley Periodicals, Inc.
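
    The sketch below illustrates a contact-based similarity of the kind described above: each model is reduced to a set of inter-chain residue contact pairs and the score is the fraction of one model's contacts found in the other. The toy contact definition, cutoff, and coordinates are assumptions for the example, and the authors' exact definition (e.g. its handling of asymmetry and chain permutations) may differ.

    ```python
    def contacts(coords_a, coords_b, cutoff=5.0):
        """Return the set of (i, j) residue index pairs whose representative atoms
        from the two chains lie within `cutoff` angstroms (toy contact definition)."""
        result = set()
        for i, (xa, ya, za) in enumerate(coords_a):
            for j, (xb, yb, zb) in enumerate(coords_b):
                if (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2 <= cutoff ** 2:
                    result.add((i, j))
        return result

    def fraction_common_contacts(c1, c2):
        """Fraction of the contacts of model 1 that are also present in model 2."""
        return len(c1 & c2) / len(c1) if c1 else 0.0

    # Toy example: three-residue "chains" given as (x, y, z) coordinates.
    model1 = contacts([(0, 0, 0), (4, 0, 0), (8, 0, 0)], [(0, 3, 0), (4, 3, 0), (20, 0, 0)])
    model2 = contacts([(0, 0, 0), (4, 0, 0), (8, 0, 0)], [(0, 3, 0), (12, 0, 0), (20, 0, 0)])
    print(fraction_common_contacts(model1, model2))  # similarity score in [0, 1]
    ```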

  7. Living in a network of scaling cities and finite resources.

    PubMed

    Qubbaj, Murad R; Shutters, Shade T; Muneepeerakul, Rachata

    2015-02-01

    Many urban phenomena exhibit remarkable regularity in the form of nonlinear scaling behaviors, but their implications on a system of networked cities has never been investigated. Such knowledge is crucial for our ability to harness the complexity of urban processes to further sustainability science. In this paper, we develop a dynamical modeling framework that embeds population-resource dynamics-a generalized Lotka-Volterra system with modifications to incorporate the urban scaling behaviors-in complex networks in which cities may be linked to the resources of other cities and people may migrate in pursuit of higher welfare. We find that isolated cities (i.e., no migration) are susceptible to collapse if they do not have access to adequate resources. Links to other cities may help cities that would otherwise collapse due to insufficient resources. The effects of inter-city links, however, can vary due to the interplay between the nonlinear scaling behaviors and network structure. The long-term population level of a city is, in many settings, largely a function of the city's access to resources over which the city has little or no competition. Nonetheless, careful investigation of dynamics is required to gain mechanistic understanding of a particular city-resource network because cities and resources may collapse and the scaling behaviors may influence the effects of inter-city links, thereby distorting what topological metrics really measure.
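
    A minimal sketch of the kind of model described above is given below: two cities draw on their own resource stocks, extraction scales nonlinearly with population, and people migrate toward higher per-capita resource availability. All functional forms and parameter values are invented for illustration and are not the authors' calibrated framework.

    ```python
    import numpy as np

    # Toy parameters, assumed only for illustration.
    r, K = 0.5, 100.0        # resource regrowth rate and carrying capacity
    h, beta = 0.02, 1.15     # extraction coefficient and urban scaling exponent on population
    b, d = 0.01, 0.3         # birth response to resources and baseline death rate
    m = 0.05                 # migration responsiveness to welfare differences
    dt, steps = 0.01, 50000

    N = np.array([10.0, 10.0])   # city populations
    R = np.array([80.0, 40.0])   # resource stocks each city draws on

    for _ in range(steps):
        welfare = R / N                                       # per-capita resource as a crude welfare proxy
        dR = r * R * (1.0 - R / K) - h * N ** beta            # logistic regrowth minus scaled extraction
        dN = N * (b * R - d) + m * (welfare[::-1] - welfare)  # growth plus migration toward higher welfare
        R = np.maximum(R + dt * dR, 0.0)                      # forward-Euler step, clipped at zero
        N = np.maximum(N + dt * dN, 1e-6)

    print("final populations:", N.round(1), "final resources:", R.round(1))
    ```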

  8. A Model of Biological Attacks on a Realistic Population

    NASA Astrophysics Data System (ADS)

    Carley, Kathleen M.; Fridsma, Douglas; Casman, Elizabeth; Altman, Neal; Chen, Li-Chiou; Kaminsky, Boris; Nave, Demian; Yahja, Alex

    The capability to assess the impacts of large-scale biological attacks and the efficacy of containment policies is critical and requires knowledge-intensive reasoning about social response and disease transmission within a complex social system. There is a close linkage among social networks, transportation networks, disease spread, and early detection. Spatial dimensions related to public gathering places such as hospitals, nursing homes, and restaurants can play a major role in epidemics [Klovdahl et al. 2001]. Like natural epidemics, bioterrorist attacks unfold within spatially defined, complex social systems, and the societal and networked response can have profound effects on their outcome. This paper focuses on bioterrorist attacks, but the model has been applied to emergent and familiar diseases as well.

  9. Ocean Hydrodynamics Numerical Model in Curvilinear Coordinates for Simulating Circulation of the Global Ocean and its Separate Basins.

    NASA Astrophysics Data System (ADS)

    Gusev, Anatoly; Diansky, Nikolay; Zalesny, Vladimir

    2010-05-01

    An original program complex (software suite) is presented for the ocean circulation sigma-model developed at the Institute of Numerical Mathematics (INM), Russian Academy of Sciences (RAS). The complex can be used in various curvilinear orthogonal coordinate systems. In addition to the ocean circulation model, the complex contains a sea-ice dynamics and thermodynamics model, as well as an original system for implementing atmospheric forcing on the basis of both prescribed meteorological data and atmospheric model results. The complex can be used as the oceanic block of an Earth climate model as well as for solving scientific and practical problems concerning the World Ocean and its separate oceans and seas. It can be used effectively on parallel shared-memory computational systems and on contemporary personal computers. On the basis of the proposed complex, an ocean general circulation model (OGCM) was developed. The model is realized in a curvilinear orthogonal coordinate system obtained by a conformal transformation of the standard geographical grid, which allows the system singularities to be located outside the integration domain. The horizontal resolution of the OGCM is 1 degree in longitude and 0.5 degree in latitude, with 40 non-uniform sigma-levels in depth. The model was integrated for 100 years starting from the Levitus January climatology using a realistic atmospheric annual cycle calculated from the CORE datasets. The results show that the model adequately reproduces the basic characteristics of large-scale World Ocean dynamics, in good agreement with both observational data and results of the best climate OGCMs. This OGCM is used as the oceanic component of the new version of the climate system model (CSM) developed at INM RAS, which is now ready for new numerical experiments on climate and climate-change modelling according to IPCC (Intergovernmental Panel on Climate Change) scenarios within the scope of CMIP-5 (Coupled Model Intercomparison Project). On the basis of the proposed complex, an eddy-resolving Pacific Ocean circulation model was also realized. The integration domain covers the Pacific from the Equator to the Bering Strait. The horizontal resolution is 0.125 degree, with 20 non-uniform sigma-levels in depth. The model adequately reproduces the large-scale structure of the circulation and its variability: Kuroshio meandering, ocean synoptic eddies, frontal zones, etc. The high variability of the Kuroshio is shown. The distribution of a contaminant presumed to be discharged near Petropavlovsk-Kamchatsky was also simulated. The results show the structure of the contaminant distribution and provide an understanding of the processes that form hydrological fields in the North-West Pacific.

  10. The Contribution of Mesoscale Convective Weather Systems to the Warm-Season Precipitation in the United States.

    NASA Astrophysics Data System (ADS)

    Fritsch, J. M.; Kane, R. J.; Chelius, C. R.

    1986-10-01

    The contribution of precipitation from mesoscale convective weather systems to the warm-season (April-September) rainfall in the United States is evaluated. Both Mesoscale Convective Complexes (MCC's) and other large, long-lived mesoscale convective systems that do not quite meet Maddox's criteria for being termed an MCC are included in the evaluation. The distribution and geographical limits of the precipitation from the convective weather systems are constructed for the warm seasons of 1982, a `normal' year, and 1983, a drought year. Precipitation characteristics of the systems are compared for the 2 years to determine how large-scale drought patterns affect their precipitation production. The frequency, precipitation characteristics and hydrologic ramifications of multiple occurrences, or series, of convective weather systems are presented and discussed. The temporal and spatial characteristics of the accumulated precipitation from a series of convective complexes are investigated and compared to that of Hurricane Alicia. It is found that mesoscale convective weather systems account for approximately 30% to 70% of the warm-season (April-September) precipitation over much of the region between the Rocky Mountains and the Mississippi River. During the June through August period, their contribution is even larger. Moreover, series of convective weather systems are very likely the most prolific precipitation producer in the United States, rivaling and even exceeding that of hurricanes. Changes in the large-scale circulation patterns affected the seasonal precipitation from mesoscale convective weather systems by altering the precipitation characteristics of individual systems. In particular, for the drought period of 1983, the frequency of the convective systems remained nearly the same as in the `normal' year (1982); however, the average precipitation area and the average volumetric production significantly decreased. Nevertheless, the rainfall that was produced by mesoscale convective weather systems in the drought year accounted for most of the precipitation received during the critical crop growth period. It is concluded that mesoscale convective weather systems may be a crucial precipitation-producing deterrent to drought and an important mechanism for enhancing midsummer crop growth throughout the midwestern United States. Furthermore, because mesoscale convective weather systems account for such a large fraction of the warm-season precipitation, significant improvements in prediction of such systems would likely translate into significant improvements in quantitative precipitation forecast skill and corresponding improvements in hydrologic forecasts of runoff.

  11. Pragmatic clinical trials embedded in healthcare systems: generalizable lessons from the NIH Collaboratory.

    PubMed

    Weinfurt, Kevin P; Hernandez, Adrian F; Coronado, Gloria D; DeBar, Lynn L; Dember, Laura M; Green, Beverly B; Heagerty, Patrick J; Huang, Susan S; James, Kathryn T; Jarvik, Jeffrey G; Larson, Eric B; Mor, Vincent; Platt, Richard; Rosenthal, Gary E; Septimus, Edward J; Simon, Gregory E; Staman, Karen L; Sugarman, Jeremy; Vazquez, Miguel; Zatzick, Douglas; Curtis, Lesley H

    2017-09-18

    The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost-effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care. However, experience with these trials is nascent and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped. To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design. In this article, we will describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs. A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.

  12. Complexities, Catastrophes and Cities: Emergency Dynamics in Varying Scenarios and Urban Topologies

    NASA Astrophysics Data System (ADS)

    Narzisi, Giuseppe; Mysore, Venkatesh; Byeon, Jeewoong; Mishra, Bud

    Complex Systems are often characterized by agents capable of interacting with each other dynamically, often in non-linear and non-intuitive ways. Trying to characterize their dynamics often results in partial differential equations that are difficult, if not impossible, to solve. A large city or a city-state is an example of such an evolving and self-organizing complex environment that efficiently adapts to different and numerous incremental changes to its social, cultural and technological infrastructure [1]. One powerful technique for analyzing such complex systems is Agent-Based Modeling (ABM) [9], which has seen an increasing number of applications in social science, economics and also biology. The agent-based paradigm facilitates easier transfer of domain specific knowledge into a model. ABM provides a natural way to describe systems in which the overall dynamics can be described as the result of the behavior of populations of autonomous components: agents, with a fixed set of rules based on local information and possible central control. As part of the NYU Center for Catastrophe Preparedness and Response (CCPR), we have been exploring how ABM can serve as a powerful simulation technique for analyzing large-scale urban disasters. The central problem in Disaster Management is that it is not immediately apparent whether the current emergency plans are robust against such sudden, rare and punctuated catastrophic events.
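
    As a minimal, hypothetical illustration of the agent-based paradigm sketched above (not a reproduction of the CCPR models), the snippet below runs a toy contagion process in which each agent applies a fixed local rule to the states of randomly encountered contacts, and aggregate epidemic dynamics emerge from those interactions.

    ```python
    import random

    random.seed(42)

    SUSCEPTIBLE, INFECTED, RECOVERED = 0, 1, 2

    class Agent:
        """An autonomous component with a fixed local rule and no global knowledge."""
        def __init__(self):
            self.state = SUSCEPTIBLE
        def step(self, contacts, p_transmit=0.08, p_recover=0.05):
            if self.state == SUSCEPTIBLE:
                if any(c.state == INFECTED for c in contacts) and random.random() < p_transmit:
                    self.state = INFECTED
            elif self.state == INFECTED and random.random() < p_recover:
                self.state = RECOVERED

    agents = [Agent() for _ in range(2000)]
    for a in random.sample(agents, 5):           # seed a small initial outbreak
        a.state = INFECTED

    for day in range(120):
        for a in agents:
            a.step(random.sample(agents, 10))    # random daily contacts (toy mixing model)
        if day % 30 == 0:
            counts = [sum(a.state == s for a in agents) for s in (0, 1, 2)]
            print(f"day {day:3d}  S/I/R = {counts}")
    ```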

  13. Collective evolution of submicron hillocks during the early stages of anisotropic alkaline wet chemical etching of Si(1 0 0) surfaces

    NASA Astrophysics Data System (ADS)

    Sana, P.; Vázquez, Luis; Cuerno, Rodolfo; Sarkar, Subhendu

    2017-11-01

    We address experimentally the large-scale dynamics of Si(1 0 0) surfaces during the initial stages of anisotropic wet (KOH) chemical etching, which are characterized through atomic force microscopy. These systems are known to lead to the formation of characteristic pyramids, or hillocks, of typical sizes in the nanometric/micrometer scales, thus with the potential for a large number of applications that can benefit from the nanotexturing of Si surfaces. The present pattern formation process is very strongly disordered in space. We assess the space correlations in such a type of rough surface and elucidate the existence of a complex and rich morphological evolution, featuring at least three different regimes in just 10 min of etching. Such a complex time behavior cannot be consistently explained within a single formalism for dynamic scaling. The pyramidal structure reveals itself as the basic morphological motif of the surface throughout the dynamics. A detailed analysis of the surface slope distribution with etching time reveals that the texturing process induced by the KOH etching is rather gradual and progressive, which accounts for the dynamic complexity. The various stages of the morphological evolution can be accurately reproduced by computer-generated surfaces composed of uncorrelated pyramidal structures. To reach such an agreement, the key parameters are the average pyramid size, which increases with etching time, its distribution and the surface coverage by the pyramidal structures.

  14. NCI HPC Scaling and Optimisation in Climate, Weather, Earth system science and the Geosciences

    NASA Astrophysics Data System (ADS)

    Evans, B. J. K.; Bermous, I.; Freeman, J.; Roberts, D. S.; Ward, M. L.; Yang, R.

    2016-12-01

    The Australian National Computational Infrastructure (NCI) has a national focus in the Earth system sciences including climate, weather, ocean, water management, environment and geophysics. NCI leads a Program across its partners from the Australian science agencies and research communities to identify priority computational models to scale up. Typically, these cases place a large overall demand on the available computer time, need to scale to higher resolutions, use excessive amounts of scarce resources such as large memory or bandwidth, or in some cases need to meet requirements for transition to a separate operational forecasting system with set time windows. The model codes include the UK Met Office Unified Model atmospheric model (UM), GFDL's Modular Ocean Model (MOM), both the UK Met Office's GC3 and Australian ACCESS coupled-climate systems (including sea ice), 4D-Var data assimilation and satellite processing, the Regional Ocean Model (ROMS), and WaveWatch3, as well as geophysics codes for hazards, magnetotellurics, seismic inversions, and geodesy. Many of these codes use significant compute resources both for research applications and within the operational systems. Some of these models are particularly complex, and their behaviour had not been critically analysed for effective use of the NCI supercomputer or for how they could be improved. As part of the Program, we have established a common profiling methodology that uses a suite of open source tools for performing scaling analyses. The most challenging cases involve profiling multi-model coupled systems where the component models have their own complex algorithms and performance issues. We have also found issues within the current suite of profiling tools, and no single tool fully exposes the nature of the code performance. As a result of this work, international collaborations are now in place to ensure that improvements are incorporated within the community models, and our effort can be targeted in a coordinated way. This coordination has involved user stakeholders, the model developer community, and dependent software libraries. For example, we have spent significant time characterising I/O scalability, and improving the use of libraries such as NetCDF and HDF5.
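
    The scaling analyses referred to above rest on standard metrics. The sketch below computes strong-scaling speed-up and parallel efficiency from a table of (core count, wall time) pairs; the timing numbers are invented placeholders used only to show the arithmetic, not measurements from any NCI code.

    ```python
    # Hypothetical wall-clock times (seconds) for a fixed problem size ("strong scaling").
    timings = {64: 1800.0, 128: 940.0, 256: 510.0, 512: 300.0, 1024: 205.0}

    base_cores = min(timings)
    base_time = timings[base_cores]

    print(f"{'cores':>6} {'time(s)':>9} {'speed-up':>9} {'efficiency':>11}")
    for cores in sorted(timings):
        speedup = base_time / timings[cores]            # relative to the smallest core count
        efficiency = speedup / (cores / base_cores)     # 1.0 means ideal linear scaling
        print(f"{cores:>6} {timings[cores]:>9.0f} {speedup:>9.2f} {efficiency:>11.0%}")
    ```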

  15. A multitracer approach for characterizing interactions between shallow groundwater and the hydrothermal system in the Norris Geyser Basin area, Yellowstone National Park

    USGS Publications Warehouse

    Gardner, W.P.; Susong, D.D.; Solomon, D.K.; Heasler, H.P.

    2011-01-01

    Multiple environmental tracers are used to investigate age distribution, evolution, and mixing in local- to regional-scale groundwater circulation around the Norris Geyser Basin area in Yellowstone National Park. Springs ranging in temperature from 3 °C to 90 °C in the Norris Geyser Basin area were sampled for stable isotopes of hydrogen and oxygen, major and minor element chemistry, dissolved chlorofluorocarbons, and tritium. Groundwater near Norris Geyser Basin comprises two distinct systems: a shallow, cool water system and a deep, high-temperature hydrothermal system. These two end-member systems mix to create springs with intermediate temperature and composition. Using multiple tracers from a large number of springs, it is possible to constrain the distribution of possible flow paths and refine conceptual models of groundwater circulation in and around a large, complex hydrothermal system. Copyright 2011 by the American Geophysical Union.
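
    For the two end-member systems described above, a conservative tracer obeys the simple mass balance C_sample = f*C_thermal + (1 - f)*C_cold, which can be solved for the deep hydrothermal fraction f. The sketch below does exactly that; the tracer choice and concentrations are invented placeholders, not values from the Norris Geyser Basin dataset.

    ```python
    def thermal_fraction(c_sample, c_cold, c_thermal):
        """Fraction of deep hydrothermal water in a mixed spring, from a conservative tracer
        (e.g. chloride), assuming simple two-end-member mixing."""
        if c_thermal == c_cold:
            raise ValueError("end members must differ for the tracer to resolve mixing")
        return (c_sample - c_cold) / (c_thermal - c_cold)

    # Hypothetical chloride concentrations in mg/L (placeholders only).
    cold_endmember, thermal_endmember = 1.0, 700.0
    for spring, chloride in [("spring A", 15.0), ("spring B", 250.0), ("spring C", 640.0)]:
        f = thermal_fraction(chloride, cold_endmember, thermal_endmember)
        print(f"{spring}: ~{100 * f:.0f}% deep hydrothermal water")
    ```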

  16. Paradigms of Complexity: Fractals and Structures in the Sciences

    NASA Astrophysics Data System (ADS)

    Novak, Miroslav M.

    The Table of Contents for the book is as follows: * Preface * The Origin of Complexity (invited talk) * On the Existence of Spatially Uniform Scaling Laws in the Climate System * Multispectral Backscattering: A Fractal-Structure Probe * Small-Angle Multiple Scattering on a Fractal System of Point Scatterers * Symmetric Fractals Generated by Cellular Automata * Bispectra and Phase Correlations for Chaotic Dynamical Systems * Self-Organized Criticality Models of Neural Development * Altered Fractal and Irregular Heart Rate Behavior in Sick Fetuses * Extract Multiple Scaling in Long-Term Heart Rate Variability * A Semi-Continuous Box Counting Method for Fractal Dimension Measurement of Short Single Dimension Temporal Signals - Preliminary Study * A Fractional Brownian Motion Model of Cracking * Self-Affine Scaling Studies on Fractography * Coarsening of Fractal Interfaces * A Fractal Model of Ocean Surface Superdiffusion * Stochastic Subsurface Flow and Transport in Fractal Conductivity Fields * Rendering Through Iterated Function Systems * The σ-Hull - The Hull Where Fractals Live - Calculating a Hull Bounded by Log Spirals to Solve the Inverse IFS-Problem by the Detected Orbits * On the Multifractal Properties of Passively Convected Scalar Fields * New Statistical Textural Transforms for Non-Stationary Signals: Application to Generalized Multifractal Analysis * Laplacian Growth of Parallel Needles: Their Mullins-Sekerka Instability * Entropy Dynamics Associated with Self-Organization * Fractal Properties in Economics (invited talk) * Fractal Approach to the Regional Seismic Event Discrimination Problem * Fractal and Topological Complexity of Radioactive Contamination * Pattern Selection: Nonsingular Saffman-Taylor Finger and Its Dynamic Evolution with Zero Surface Tension * A Family of Complex Wavelets for the Characterization of Singularities * Stabilization of Chaotic Amplitude Fluctuations in Multimode, Intracavity-Doubled Solid-State Lasers * Chaotic Dynamics of Elastic-Plastic Beams * The Riemann Non-Differentiable Function and Identities for the Gaussian Sums * Revealing the Multifractal Nature of Failure Sequence * The Fractal Nature of Wood Revealed by Drying * Squaring the Circle: Diffusion Volume and Acoustic Behaviour of a Fractal Structure * Relationship Between Acupuncture Holographic Units and Fetus Development; Fractal Features of Two Acupuncture Holographic Unit Systems * The Fractal Properties of the Large-Scale Magnetic Fields on the Sun * Fractal Analysis of Tide Gauge Data * Author Index

  17. SparseMaps—A systematic infrastructure for reduced-scaling electronic structure methods. III. Linear-scaling multireference domain-based pair natural orbital N-electron valence perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yang; Sivalingam, Kantharuban; Neese, Frank, E-mail: Frank.Neese@cec.mpg.de

    2016-03-07

    Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling “partially contracted” NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient “electron pair prescreening” that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison between the partial and strong contraction schemes is made, with conclusions that discourage the strong contraction scheme as a basis for local correlation methods due to its non-invariance with respect to rotations in the inactive and external subspaces. A minimal set of conservatively chosen truncation thresholds controls the accuracy of the method. With the default thresholds, about 99.9% of the canonical partially contracted NEVPT2 correlation energy is recovered while the crossover of the computational cost with the already very efficient canonical method occurs reasonably early; in linear chain type compounds at a chain length of around 80 atoms. Calculations are reported for systems with more than 300 atoms and 5400 basis functions.

  18. Emergence, institutionalization and renewal: Rhythms of adaptive governance in complex social-ecological systems.

    PubMed

    Chaffin, Brian C; Gunderson, Lance H

    2016-01-01

    Adaptive governance provides the capacity for environmental managers and decision makers to confront variable degrees of uncertainty inherent to complex social-ecological systems. Current theoretical conceptualizations of adaptive governance represent a series of structures and processes best suited for either adapting or transforming existing environmental governance regimes towards forms flexible enough to confront rapid ecological change. As the number of empirical examples of adaptive governance described in the literature grows, the conceptual basis of adaptive governance remains largely under theorized. We argue that reconnecting adaptive governance with foundational concepts of ecological resilience-specifically Panarchy and the adaptive cycle of complex systems-highlights the importance of episodic disturbances and cross-scale interactions in triggering reorganizations in governance. By envisioning the processes of adaptive governance through the lens of Panarchy, scholars and practitioners alike will be better able to identify the emergence of adaptive governance, as well as take advantage of opportunities to institutionalize this type of governance in pursuit of sustainability outcomes. The synergistic analysis of adaptive governance and Panarchy can provide critical insight for analyzing the role of social dynamics during oscillating periods of stability and instability in social-ecological systems. A deeper understanding of the potential for cross-scale interactions to shape adaptive governance regimes may be useful as society faces the challenge of mitigating the impacts of global environmental change. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Next Generation Analytic Tools for Large Scale Genetic Epidemiology Studies of Complex Diseases

    PubMed Central

    Mechanic, Leah E.; Chen, Huann-Sheng; Amos, Christopher I.; Chatterjee, Nilanjan; Cox, Nancy J.; Divi, Rao L.; Fan, Ruzong; Harris, Emily L.; Jacobs, Kevin; Kraft, Peter; Leal, Suzanne M.; McAllister, Kimberly; Moore, Jason H.; Paltoo, Dina N.; Province, Michael A.; Ramos, Erin M.; Ritchie, Marylyn D.; Roeder, Kathryn; Schaid, Daniel J.; Stephens, Matthew; Thomas, Duncan C.; Weinberg, Clarice R.; Witte, John S.; Zhang, Shunpu; Zöllner, Sebastian; Feuer, Eric J.; Gillanders, Elizabeth M.

    2012-01-01

    Over the past several years, genome-wide association studies (GWAS) have succeeded in identifying hundreds of genetic markers associated with common diseases. However, most of these markers confer relatively small increments of risk and explain only a small proportion of familial clustering. To identify obstacles to future progress in genetic epidemiology research and provide recommendations to NIH for overcoming these barriers, the National Cancer Institute sponsored a workshop entitled “Next Generation Analytic Tools for Large-Scale Genetic Epidemiology Studies of Complex Diseases” on September 15–16, 2010. The goal of the workshop was to facilitate discussions on (1) statistical strategies and methods to efficiently identify genetic and environmental factors contributing to the risk of complex disease; and (2) how to develop, apply, and evaluate these strategies for the design, analysis, and interpretation of large-scale complex disease association studies in order to guide NIH in setting the future agenda in this area of research. The workshop was organized as a series of short presentations covering scientific (gene-gene and gene-environment interaction, complex phenotypes, and rare variants and next generation sequencing) and methodological (simulation modeling and computational resources and data management) topic areas. Specific needs to advance the field were identified during each session and are summarized. PMID:22147673

  20. Large-scale patterns formed by solar active regions during the ascending phase of cycle 21

    NASA Astrophysics Data System (ADS)

    Gaizauskas, V.; Harvey, K. L.; Harvey, J. W.; Zwaan, C.

    1983-02-01

    Synoptic maps of photospheric magnetic fields prepared at the Kitt Peak National Observatory are used in investigating large-scale patterns in the spatial and temporal distribution of solar active regions for 27 solar rotations between 1977 and 1979. The active regions are found to be distributed in 'complexes of activity' (Bumba and Howard, 1965). With the working definition of a complex of activity based on continuity and proximity of the constituent active regions, the phenomenology of complexes is explored. It is found that complexes of activity form within one month and that they are typically maintained for 3 to 6 solar rotations by fresh injections of magnetic flux. During the active lifetime of a complex of activity, the total magnetic flux in the complex remains steady to within a factor of 2. The magnetic polarities are closely balanced, and each complex rotates about the sun at its own special, constant rate. In certain cases, the complexes form two diverging branches.

  1. Reproducing the scaling laws for Slow and Fast ruptures

    NASA Astrophysics Data System (ADS)

    Romanet, Pierre; Bhat, Harsha; Madariaga, Raúl

    2017-04-01

    Modelling the long-term behaviour of large natural fault systems that are geometrically complex is a challenging problem. This is why most of the research so far has concentrated on modelling the long-term response of a single planar fault system. To overcome this limitation, we appeal to a novel algorithm called the Fast Multipole Method, which was developed in the context of modelling gravitational N-body problems. This method allows us to decrease the computational complexity of the calculation from O(N²) to O(N log N), N being the number of discretised elements on the fault. We then adapted this method to model the long-term quasi-dynamic response of two faults, with step-over-like geometry, that are governed by rate-and-state friction laws. We assume the faults have spatially uniform rate-weakening friction. The results show that when the stress interaction between faults is accounted for, a complex spectrum of slip (including slow-slip events, dynamic ruptures and partial ruptures) emerges naturally. The simulated slow-slip and dynamic events follow the scaling laws inferred by Ide et al. (2007), i.e. M ∝ T for slow-slip events and M ∝ T² (in 2D) for dynamic events.
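
    A quick way to check which scaling regime a catalogue of events falls in is to fit the slope of log moment against log duration. The sketch below does this for two synthetic catalogues generated with the target exponents (1 for slow slip, 2 for ordinary 2D dynamic ruptures); the synthetic data are assumptions for illustration and are not output of the simulations described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def synthetic_catalogue(exponent, n=200, scatter=0.2):
        """Event durations T and moments M ~ T**exponent with lognormal scatter (toy data)."""
        T = 10 ** rng.uniform(0, 3, n)                      # durations spanning three decades
        M = T ** exponent * 10 ** rng.normal(0, scatter, n)
        return T, M

    for label, exponent in [("slow-slip-like", 1.0), ("dynamic-rupture-like (2D)", 2.0)]:
        T, M = synthetic_catalogue(exponent)
        slope, _ = np.polyfit(np.log10(T), np.log10(M), 1)  # log-log regression slope
        print(f"{label}: fitted exponent ~ {slope:.2f}")
    ```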

  2. Development of an informatics infrastructure for data exchange of biomolecular simulations: architecture, data models and ontology

    PubMed Central

    Thibault, J. C.; Roe, D. R.; Eilbeck, K.; Cheatham, T. E.; Facelli, J. C.

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data – both within the same organization and among different ones – remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES), and within smaller groups of researchers at laboratory scale (iBIOMES Lite), that take advantage of the standardization of the meta data used to describe biomolecular simulations. PMID:26387907

  3. Development of an informatics infrastructure for data exchange of biomolecular simulations: Architecture, data models and ontology.

    PubMed

    Thibault, J C; Roe, D R; Eilbeck, K; Cheatham, T E; Facelli, J C

    2015-01-01

    Biomolecular simulations aim to simulate structure, dynamics, interactions, and energetics of complex biomolecular systems. With the recent advances in hardware, it is now possible to use more complex and accurate models, but also reach time scales that are biologically significant. Molecular simulations have become a standard tool for toxicology and pharmacology research, but organizing and sharing data - both within the same organization and among different ones - remains a substantial challenge. In this paper we review our recent work leading to the development of a comprehensive informatics infrastructure to facilitate the organization and exchange of biomolecular simulations data. Our efforts include the design of data models and dictionary tools that allow the standardization of the metadata used to describe the biomedical simulations, the development of a thesaurus and ontology for computational reasoning when searching for biomolecular simulations in distributed environments, and the development of systems based on these models to manage and share the data at a large scale (iBIOMES), and within smaller groups of researchers at laboratory scale (iBIOMES Lite), that take advantage of the standardization of the meta data used to describe biomolecular simulations.

  4. Advances in Multi-Sensor Scanning and Visualization of Complex Plants: the Utmost Case of a Reactor Building

    NASA Astrophysics Data System (ADS)

    Hullo, J.-F.; Thibault, G.; Boucheny, C.

    2015-02-01

    In a context of increased maintenance operations and workers generational renewal, a nuclear owner and operator like Electricité de France (EDF) is interested in the scaling up of tools and methods of "as-built virtual reality" for larger buildings and wider audiences. However, acquisition and sharing of as-built data on a large scale (large and complex multi-floored buildings) challenge current scientific and technical capacities. In this paper, we first present a state of the art of scanning tools and methods for industrial plants with very complex architecture. Then, we introduce the inner characteristics of the multi-sensor scanning and visualization of the interior of the most complex building of a power plant: a nuclear reactor building. We introduce several developments that made possible a first complete survey of such a large building, from acquisition, processing and fusion of multiple data sources (3D laser scans, total-station survey, RGB panoramic, 2D floor plans, 3D CAD as-built models). In addition, we present the concepts of a smart application developed for the painless exploration of the whole dataset. The goal of this application is to help professionals, unfamiliar with the manipulation of such datasets, to take into account spatial constraints induced by the building complexity while preparing maintenance operations. Finally, we discuss the main feedbacks of this large experiment, the remaining issues for the generalization of such large scale surveys and the future technical and scientific challenges in the field of industrial "virtual reality".

  5. Calderas and magma reservoirs

    NASA Astrophysics Data System (ADS)

    Cashman, Katharine V.; Giordano, Guido

    2014-11-01

    Large caldera-forming eruptions have long been a focus of both petrological and volcanological studies; petrologists have used the eruptive products to probe conditions of magma storage (and thus processes that drive magma evolution), while volcanologists have used them to study the conditions under which large volumes of magma are transported to, and emplaced on, the Earth's surface. Traditionally, both groups have worked on the assumption that eruptible magma is stored within a single long-lived melt body. Over the past decade, however, advances in analytical techniques have provided new views of magma storage regions, many of which provide evidence of multiple melt lenses feeding a single eruption, and/or rapid pre-eruptive assembly of large volumes of melt. These new petrological views of magmatic systems have not yet been fully integrated into volcanological perspectives of caldera-forming eruptions. Here we explore the implications of complex magma reservoir configurations for eruption dynamics and caldera formation. We first examine mafic systems, where stacked-sill models have long been invoked but which rarely produce explosive eruptions. An exception is the 2010 eruption of Eyjafjallajökull volcano, Iceland, where seismic and petrologic data show that multiple sills at different depths fed a multi-phase (explosive and effusive) eruption. Extension of this concept to larger mafic caldera-forming systems suggests a mechanism to explain many of their unusual features, including their protracted explosivity, spatially variable compositions and pronounced intra-eruptive pauses. We then review studies of more common intermediate and silicic caldera-forming systems to examine inferred conditions of magma storage, time scales of melt accumulation, eruption triggers, eruption dynamics and caldera collapse. By compiling data from large and small, and crystal-rich and crystal-poor, events, we compare eruptions that are well explained by simple evacuation of a zoned magma chamber (termed the Standard Model by Gualda and Ghiorso, 2013) to eruptions that are better explained by tapping multiple, rather than single, melt lenses stored within a largely crystalline mush (which we term complex magma reservoirs). We then discuss the implications of magma storage within complex, rather than simple, reservoirs for identifying magmatic systems with the potential to produce large eruptions, and for monitoring eruption progress under conditions where successive melt lenses may be tapped. We conclude that emerging views of complex magma reservoir configurations provide exciting opportunities for re-examining volcanological concepts of caldera-forming systems.

  6. Research on distributed virtual reality system in electronic commerce

    NASA Astrophysics Data System (ADS)

    Xue, Qiang; Wang, Jiening; Sun, Jizhou

    2004-03-01

    In this paper, Distributed Virtual Reality (DVR) technology applied to Electronic Commerce (EC) is discussed. DVR provides a new means for people to recognize, analyze and resolve large-scale, complex problems, which has driven its rapid development in EC. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that provides the necessary cooperation and communication services and avoids repeatedly re-implementing basic modules. Finally, the paper gives a platform structure for the EC-DVR system.

  7. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access and without incurring memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well-designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
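
    The abstract describes the approach only at a high level, so the following minimal Python sketch illustrates one way algorithm-level, in-memory checkpointing for distributed pattern counting could look. It is a simplified stand-in (pair counting rather than a full FP-tree, plain Python lists rather than MPI ranks), and all names are hypothetical, not taken from the paper.

        from collections import Counter
        from itertools import combinations

        # Toy transaction database, partitioned as if distributed across ranks.
        partitions = [
            [{"a", "b", "c"}, {"a", "c"}, {"a", "d"}],
            [{"b", "c"}, {"a", "b", "c", "e"}, {"b", "e"}],
            [{"a", "b"}, {"a", "b", "c"}, {"c", "e"}],
        ]

        def count_pairs(transactions):
            """Local frequent-pattern counting step (here: itemsets of size 2)."""
            counts = Counter()
            for t in transactions:
                for pair in combinations(sorted(t), 2):
                    counts[pair] += 1
            return counts

        # Algorithm-level "checkpoint": each rank keeps a reference to its
        # neighbour's partition, which is already resident in memory, so no
        # extra data copy (and no disk traffic) is needed for recovery.
        n_ranks = len(partitions)
        checkpoints = {i: partitions[(i + 1) % n_ranks] for i in range(n_ranks)}

        failed_rank = 1
        local_counts = []
        for rank in range(n_ranks):
            if rank == failed_rank:
                # Recovery: the predecessor recomputes the failed rank's counts
                # from the in-memory checkpoint of that rank's partition.
                local_counts.append(count_pairs(checkpoints[(rank - 1) % n_ranks]))
            else:
                local_counts.append(count_pairs(partitions[rank]))

        min_support = 3
        global_counts = sum(local_counts, Counter())
        print({p: c for p, c in global_counts.items() if c >= min_support})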

  8. Plant Phenotyping through the Eyes of Complex Systems: Theoretical Considerations

    NASA Astrophysics Data System (ADS)

    Kim, J.

    2017-12-01

    Plant phenotyping is an emerging transdisciplinary research field which necessitates not only communication and collaboration among scientists from different disciplines but also a paradigm shift toward a holistic approach. A complex system is defined as a system having a large number of interacting parts (or particles, agents), whose interactions give rise to non-trivial properties like self-organization and emergence. Plant ecosystems are complex systems which are continually morphing dynamical systems, i.e. self-organizing hierarchical open systems. Such systems are composed of many subunits/subsystems with nonlinear interactions and feedback. The throughput, such as the flow of energy, matter and information, is the key control parameter in complex systems. Information-theoretic approaches can be used to understand and identify such interactions, structures and dynamics through reductions in uncertainty (i.e. entropy). Theoretical considerations based on network and thermodynamic thinking and exemplary analyses (e.g. dynamic process networks, spectral entropy) of throughput time series will be presented. These can serve as a framework to develop more discipline-specific fundamental approaches and to provide tools for the transferability of traits between measurement scales in plant phenotyping. Acknowledgment: This work was funded by the Weather Information Service Engine Program of the Korea Meteorological Administration under Grant KMIPA-2012-0001.
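
    Since the abstract points to spectral entropy of throughput time series as one of the exemplary analyses, a small, self-contained Python sketch of that quantity may help; it is a generic implementation of normalized spectral entropy, not code from the study.

        import numpy as np

        def spectral_entropy(x):
            """Shannon entropy of the normalized power spectrum, scaled to [0, 1]."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            power = np.abs(np.fft.rfft(x)) ** 2
            p = power / power.sum()
            p = p[p > 0]
            return float(-np.sum(p * np.log2(p)) / np.log2(len(power)))

        rng = np.random.default_rng(0)
        t = np.arange(2048)
        ordered = np.sin(2 * np.pi * t / 64)   # highly structured "throughput" signal
        noise = rng.normal(size=t.size)        # unstructured signal
        print(spectral_entropy(ordered), spectral_entropy(noise))  # low vs. near 1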

  9. It's the Physics: Organized Complexity in the Arctic/Midlatitude Weather Controversy

    NASA Astrophysics Data System (ADS)

    Overland, J. E.; Francis, J. A.; Wang, M.

    2017-12-01

    There is intense scientific and public interest in whether major Arctic changes can and will impact mid-latitude weather. Despite numerous workshops and a growing literature, convergence of understanding is lacking, with major objections within the scientific community to claims of possible large impacts. Yet research on the Arctic as a new potential driver for improving subseasonal forecasting at midlatitudes remains a priority. A recent review attributed part of the controversy to shortcomings in experimental design and ill-suited metrics, such as examining the influence of only sea-ice loss rather than overall Arctic temperature amplification, and/or calculating averages over large regions, long time periods, or many ensemble members that would tend to obscure event-like Arctic connections. The present analysis locates the difficulty at a deeper level, owing to the inherently complex physics. Jet-stream dynamics and weather linkages on the scale of a week to months have characteristics of an organized complex system, with large-scale processes that operate in patterned, quasi-geostrophic ways but whose component feedbacks are continually changing. Arctic linkages may be state dependent, i.e., relationships may be more robust in one atmospheric wave pattern than another, generating intermittency. The observational network is insufficient to fully initialize such a system, and the inherent noise obscures linkage signals, leading to an underdetermined problem; often more than one explanation can fit the data. Further, the problem may be computationally irreducible; the only way to know the result of these interactions is to trace out their path over time. Modeling is a suggested approach, but at present it is unclear whether previous model studies fully resolve the anticipated complexity. The jet stream from autumn to early winter is characterized by non-linear interactions among enhanced atmospheric planetary waves, irregular transitions between zonal and meridional flows, and the maintenance of atmospheric blocks (near-stationary large-amplitude atmospheric waves). For weather forecast improvement, though not necessarily to elucidate mechanisms of linkage, a Numerical Weather Prediction (NWP) approach is appropriate; such is the plan for the upcoming Year of Polar Prediction (YOPP).

  10. Decadal opportunities for space architects

    NASA Astrophysics Data System (ADS)

    Sherwood, Brent

    2012-12-01

    A significant challenge for the new field of space architecture is the dearth of project opportunities. Yet every year more young professionals express interest to enter the field. This paper derives projections that bound the number, type, and range of global development opportunities that may be reasonably expected over the next few decades for human space flight (HSF) systems so those interested in the field can benchmark their goals. Four categories of HSF activity are described: human Exploration of solar system bodies; human Servicing of space-based assets; large-scale development of space Resources; and Breakout of self-sustaining human societies into the solar system. A progressive sequence of capabilities for each category starts with its earliest feasible missions and leads toward its full expression. The four sequences are compared in scale, distance from Earth, and readiness. Scenarios hybridize the most synergistic features from the four sequences for comparison to status quo, government-funded HSF program plans. Finally qualitative, decadal, order-of-magnitude estimates are derived for system development needs, and hence opportunities for space architects. Government investment towards human planetary exploration is the weakest generator of space architecture work. Conversely, the strongest generator is a combination of three market drivers: (1) commercial passenger travel in low Earth orbit; (2) in parallel, government extension of HSF capability to GEO; both followed by (3) scale-up demonstration of end-to-end solar power satellites in GEO. The rich end of this scale affords space architecture opportunities which are more diverse, complex, large-scale, and sociologically challenging than traditional exploration vehicle cabins and habitats.

  11. Structural Plasticity and Conformational Transitions of HIV Envelope Glycoprotein gp120

    PubMed Central

    Korkut, Anil; Hendrickson, Wayne A.

    2012-01-01

    HIV envelope glycoproteins undergo large-scale conformational changes as they interact with cellular receptors to cause the fusion of viral and cellular membranes that permits viral entry to infect targeted cells. Conformational dynamics in HIV gp120 are also important in masking conserved receptor epitopes from being detected for effective neutralization by the human immune system. Crystal structures of HIV gp120 and its complexes with receptors and antibody fragments provide high-resolution pictures of selected conformational states accessible to gp120. Here we describe systematic computational analyses of HIV gp120 plasticity in such complexes with CD4 binding fragments, CD4 mimetic proteins, and various antibody fragments. We used three computational approaches: an isotropic elastic network analysis of conformational plasticity, a full atomic normal mode analysis, and simulation of conformational transitions with our coarse-grained virtual atom molecular mechanics (VAMM) potential function. We observe collective sub-domain motions about hinge points that coordinate those motions, correlated local fluctuations at the interfacial cavity formed when gp120 binds to CD4, and concerted changes in structural elements that form at the CD4 interface during large-scale conformational transitions to the CD4-bound state from the deformed states of gp120 in certain antibody complexes. PMID:23300605
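
    As a point of reference for the elastic network analysis mentioned above, here is a generic Gaussian network model sketch in Python (synthetic, idealized C-alpha coordinates; not the authors' code or parameters): build the Kirchhoff contact matrix, diagonalize it, and read residue fluctuations off the pseudo-inverse.

        import numpy as np

        def gnm_fluctuations(coords, cutoff=7.0):
            """Isotropic elastic network (GNM): relative residue fluctuations
            from the pseudo-inverse of the Kirchhoff (contact) matrix."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            kirchhoff = -(d < cutoff).astype(float)
            np.fill_diagonal(kirchhoff, 0.0)
            np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))
            vals, vecs = np.linalg.eigh(kirchhoff)
            nonzero = vals > 1e-8                      # drop the rigid-body (zero) mode
            return (vecs[:, nonzero] ** 2 / vals[nonzero]).sum(axis=1)

        # Synthetic C-alpha trace (an idealized alpha helix), purely for illustration.
        i = np.arange(120)
        theta = np.deg2rad(100.0 * i)                  # ~3.6 residues per turn
        coords = np.column_stack([2.3 * np.cos(theta), 2.3 * np.sin(theta), 1.5 * i])
        print(gnm_fluctuations(coords)[:5])            # chain ends fluctuate most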

  12. Measuring Paleolandscape Relief in Alluvial River Systems from the Stratigraphic Record

    NASA Astrophysics Data System (ADS)

    Hajek, E. A.; Trampush, S. M.; Chamberlin, E.; Greenberg, E.

    2017-12-01

    Aggradational alluvial river systems sometimes generate relief in the vicinity of their channel belts (i.e. alluvial ridges) and it has been proposed that this process may define important thresholds in river avulsion. The compensation scale can be used to estimate the maximum relief across a landscape and can be connected to the maximum scale of autogenic organization in experimental and numerical systems. Here we use the compensation scale - measured from outcrops of Upper Cretaceous and Paleogene fluvial deposits - to estimate the maximum relief that characterized ancient fluvial landscapes. In some cases, the compensation scale significantly exceeds the maximum channel depth observed in a deposit, suggesting that aggradational alluvial systems organize to sustain more relief than might be expected by looking only in the immediate vicinity of the active channel belt. Instead, these results indicate that in some systems, positive topographic relief generated by multiple alluvial ridge complexes and/or large-scale fan features may be associated with landscape-scale autogenic organization of channel networks that spans multiple cycles of channel avulsion. We compare channel and floodplain sedimentation patterns among the studied ancient fluvial systems in an effort to determine whether avulsion style, channel migration, or floodplain conditions influenced the maximum autogenic relief of ancient landscapes. Our results emphasize that alluvial channel networks may be organized at much larger spatial and temporal scales than previously realized and provide an avenue for understanding which types of river systems are likely to exhibit the largest range of autogenic dynamics.

  13. Flood events across the North Atlantic region - past development and future perspectives

    NASA Astrophysics Data System (ADS)

    Matti, Bettina; Dieppois, Bastien; Lawler, Damian; Dahlke, Helen E.; Lyon, Steve W.

    2016-04-01

    Flood events have a large impact on humans, both socially and economically. An increase in winter and spring flooding across much of northern Europe in recent years has raised the question of changes in the underlying hydro-climatic drivers of flood events. Predicting the manifestation of such changes is difficult due to the natural variability and fluctuations in northern hydrological systems caused by large-scale atmospheric circulations, especially under altered climate conditions. Improving knowledge of the complexity of these hydrological systems and their interactions with climate is essential to determine the drivers of flood events and to predict changes in these drivers under altered climate conditions. This is particularly true for the North Atlantic region, where both physical catchment properties and large-scale atmospheric circulations have a profound influence on floods. This study explores changes in streamflow across North Atlantic region catchments. An emphasis is placed on high-flow events, namely the timing and magnitude of past flood events, and selected flood percentiles were tested for stationarity by applying a flood frequency analysis. The issue of non-stationarity of flood return periods is important when linking streamflow to large-scale atmospheric circulations. Natural fluctuations in these circulations are found to have a strong influence on the outcome, causing natural variability in streamflow records. Long time series and a multi-temporal approach allow for determining drivers of floods and linking streamflow to large-scale atmospheric circulations. In exploring changes in selected hydrological signatures, consistency was found across much of the North Atlantic region, suggesting a shift in flow regime. The lack of an overall regional pattern suggests that how catchments respond to changes in climatic drivers is strongly influenced by their physical characteristics. A better understanding of the hydrological response to climate drivers is essential, for example, for forecasting purposes.

  14. A Virtual Reality Visualization Tool for Neuron Tracing

    PubMed Central

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Angelucci, Alessandra; Pascucci, Valerio

    2017-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists. PMID:28866520

  15. A general path for large-scale solubilization of cellular proteins: From membrane receptors to multiprotein complexes

    PubMed Central

    Pullara, Filippo; Guerrero-Santoro, Jennifer; Calero, Monica; Zhang, Qiangmin; Peng, Ye; Spåhr, Henrik; Kornberg, Guy L.; Cusimano, Antonella; Stevenson, Hilary P.; Santamaria-Suarez, Hugo; Reynolds, Shelley L.; Brown, Ian S.; Monga, Satdarshan P.S.; Van Houten, Bennett; Rapić-Otrin, Vesna; Calero, Guillermo; Levine, Arthur S.

    2014-01-01

    Expression of recombinant proteins in bacterial or eukaryotic systems often results in aggregation rendering them unavailable for biochemical or structural studies. Protein aggregation is a costly problem for biomedical research. It forces research laboratories and the biomedical industry to search for alternative, more soluble, non-human proteins and limits the number of potential “druggable” targets. In this study we present a highly reproducible protocol that introduces the systematic use of an extensive number of detergents to solubilize aggregated proteins expressed in bacterial and eukaryotic systems. We validate the usefulness of this protocol by solubilizing traditionally difficult human protein targets to milligram quantities and confirm their biological activity. We use this method to solubilize monomeric or multimeric components of multi-protein complexes and demonstrate its efficacy to reconstitute large cellular machines. This protocol works equally well on cytosolic, nuclear and membrane proteins and can be easily adapted to a high throughput format. PMID:23137940

  16. A Virtual Reality Visualization Tool for Neuron Tracing.

    PubMed

    Usher, Will; Klacansky, Pavol; Federer, Frederick; Bremer, Peer-Timo; Knoll, Aaron; Yarch, Jeff; Angelucci, Alessandra; Pascucci, Valerio

    2018-01-01

    Tracing neurons in large-scale microscopy data is crucial to establishing a wiring diagram of the brain, which is needed to understand how neural circuits in the brain process information and generate behavior. Automatic techniques often fail for large and complex datasets, and connectomics researchers may spend weeks or months manually tracing neurons using 2D image stacks. We present a design study of a new virtual reality (VR) system, developed in collaboration with trained neuroanatomists, to trace neurons in microscope scans of the visual cortex of primates. We hypothesize that using consumer-grade VR technology to interact with neurons directly in 3D will help neuroscientists better resolve complex cases and enable them to trace neurons faster and with less physical and mental strain. We discuss both the design process and technical challenges in developing an interactive system to navigate and manipulate terabyte-sized image volumes in VR. Using a number of different datasets, we demonstrate that, compared to widely used commercial software, consumer-grade VR presents a promising alternative for scientists.

  17. Wake profile measurements of fixed and oscillating flaps

    NASA Technical Reports Server (NTRS)

    Owen, F. K.

    1984-01-01

    Although the potential of laser velocimetry for the non-intrusive measurement of complex shear flows has long been recognized, there have been few applications beyond small, closely controlled laboratory situations. Measurements in large-scale, high-speed wind tunnels remain a complex task. This study was undertaken to support an investigation of periodic flows produced by an oscillating edge flap in the Ames eleven-foot wind tunnel. The potential for laser velocimeter measurements in large-scale production facilities is evaluated, and the results are compared with hot-wire flow-field measurements.

  18. Sampling and Visualizing Creases with Scale-Space Particles

    PubMed Central

    Kindlmann, Gordon L.; Estépar, Raúl San José; Smith, Stephen M.; Westin, Carl-Fredrik

    2010-01-01

    Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. PMID:19834216
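
    The scale interpolation idea in this abstract (pre-compute a few blur levels, then evaluate the image at an arbitrary scale) can be illustrated with a short Python sketch; it uses plain linear interpolation between Gaussian blurs as a stand-in for the paper's spline-based scheme, and the volume, sigma values, and function names are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(1)
        volume = rng.normal(size=(64, 64, 64))            # toy 3D image
        sigmas = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # pre-computed blur levels
        stack = np.stack([gaussian_filter(volume, s) for s in sigmas])

        def sample_scale_space(x, y, z, sigma):
            """Value of the scale-space image I(x, y, z, sigma), interpolated across scale."""
            k = int(np.clip(np.searchsorted(sigmas, sigma) - 1, 0, len(sigmas) - 2))
            w = (sigma - sigmas[k]) / (sigmas[k + 1] - sigmas[k])
            return (1.0 - w) * stack[k, x, y, z] + w * stack[k + 1, x, y, z]

        print(sample_scale_space(32, 32, 32, sigma=3.0))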

  19. Modeling and Performance Considerations for Automated Fault Isolation in Complex Systems

    NASA Technical Reports Server (NTRS)

    Ferrell, Bob; Oostdyk, Rebecca

    2010-01-01

    The purpose of this paper is to document the modeling considerations and performance metrics that were examined in the development of a large-scale Fault Detection, Isolation and Recovery (FDIR) system. The FDIR system is envisioned to perform health management functions for both a launch vehicle and the ground systems that support the vehicle during checkout and launch countdown by using a suite of complementary software tools that alert operators to anomalies and failures in real-time. The FDIR team members developed a set of operational requirements for the models that would be used for fault isolation and worked closely with the vendor of the software tools selected for fault isolation to ensure that the software was able to meet the requirements. Once the requirements were established, example models of sufficient complexity were used to test the performance of the software. The results of the performance testing demonstrated the need for enhancements to the software in order to meet the demands of the full-scale ground and vehicle FDIR system. The paper highlights the importance of the development of operational requirements and preliminary performance testing as a strategy for identifying deficiencies in highly scalable systems and rectifying those deficiencies before they imperil the success of the project.

  20. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Due to the large number of design parameters, the identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting certain types of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Then acceptable intervals and limitations for these parameters and scaling laws are discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, as well as the cylindrical bending of beam-plates.
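
    To make the notion of a response scaling law concrete, a classical textbook example (our illustration, not one of the paper's specific results) can be written in LaTeX as follows: for a thin plate of thickness h, in-plane dimension a, modulus E, density rho and Poisson ratio nu, the bending natural frequencies obey

        \[
          \omega \;\propto\; \frac{h}{a^{2}}\sqrt{\frac{E}{\rho\,(1-\nu^{2})}},
        \]
        so for a completely similar model (subscript $m$) and prototype (subscript $p$) made of the same material,
        \[
          \frac{\omega_{p}}{\omega_{m}} \;=\; \frac{h_{p}}{h_{m}}\left(\frac{a_{m}}{a_{p}}\right)^{2}.
        \]

    Measuring the frequency of a small model therefore predicts the frequency of the full-scale plate without testing the prototype, which is exactly the kind of extrapolation the abstract describes.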

  1. Space and energy. [space systems for energy generation, distribution and control

    NASA Technical Reports Server (NTRS)

    Bekey, I.

    1976-01-01

    Potential contributions of space to energy-related activities are discussed. Advanced concepts presented include worldwide energy distribution to substation-sized users using low-altitude space reflectors; powering large numbers of large aircraft worldwide using laser beams reflected from space mirror complexes; providing night illumination via sunlight-reflecting space mirrors; fine-scale power programming and monitoring in transmission networks by monitoring millions of network points from space; prevention of undetected hijacking of nuclear reactor fuels by space tracking of signals from tagging transmitters on all such materials; and disposal of nuclear power plant radioactive wastes in space.

  2. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  3. DL_MG: A Parallel Multigrid Poisson and Poisson-Boltzmann Solver for Electronic Structure Calculations in Vacuum and Solution.

    PubMed

    Womack, James C; Anton, Lucian; Dziedzic, Jacek; Hasnip, Phil J; Probert, Matt I J; Skylaris, Chris-Kriton

    2018-03-13

    The solution of the Poisson equation is a crucial step in electronic structure calculations, yielding the electrostatic potential, a key component of the quantum mechanical Hamiltonian. In recent decades, theoretical advances and increases in computer performance have made it possible to simulate the electronic structure of extended systems in complex environments. This requires the solution of more complicated variants of the Poisson equation, featuring nonhomogeneous dielectric permittivities, ionic concentrations with nonlinear dependencies, and diverse boundary conditions. The analytic solutions generally used to solve the Poisson equation in vacuum (or with homogeneous permittivity) are not applicable in these circumstances, and numerical methods must be used. In this work, we present DL_MG, a flexible, scalable, and accurate solver library, developed specifically to tackle the challenges of solving the Poisson equation in modern large-scale electronic structure calculations on parallel computers. Our solver is based on the multigrid approach and uses an iterative high-order defect correction method to improve the accuracy of solutions. Using two chemically relevant model systems, we tested the accuracy and computational performance of DL_MG when solving the generalized Poisson and Poisson-Boltzmann equations, demonstrating excellent agreement with analytic solutions and efficient scaling to ~10^9 unknowns and hundreds of CPU cores. We also applied DL_MG in actual large-scale electronic structure calculations, using the ONETEP linear-scaling electronic structure package to study a 2615 atom protein-ligand complex with routinely available computational resources. In these calculations, the overall execution time with DL_MG was not significantly greater than the time required for calculations using a conventional FFT-based solver.
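
    The iterative high-order defect correction mentioned in the abstract can be sketched in a few lines of Python for a one-dimensional Poisson problem. This is a schematic illustration only: a dense second-order operator stands in for DL_MG's multigrid solve, the fourth-order stencil falls back to second order next to the boundaries, and none of the names come from the library.

        import numpy as np

        n = 63                                   # interior grid points on (0, 1)
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1.0 - h, n)
        f = -np.pi**2 * np.sin(np.pi * x)        # rhs for u'' = f with u(0) = u(1) = 0
        u_exact = np.sin(np.pi * x)

        # Low-order operator: standard 3-point Laplacian (stand-in for the multigrid solve).
        A2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
              + np.diag(np.ones(n - 1), -1)) / h**2

        # Higher-order operator: 4th-order 5-point stencil in the interior.
        A4 = A2.copy()
        for i in range(2, n - 2):
            A4[i, i - 2:i + 3] = np.array([-1.0, 16.0, -30.0, 16.0, -1.0]) / (12.0 * h**2)

        # Defect correction: repeatedly solve with the cheap low-order operator,
        # correcting the solution toward the high-order discretization.
        u = np.linalg.solve(A2, f)
        for k in range(4):
            defect = f - A4 @ u
            u += np.linalg.solve(A2, defect)
            print(k, float(np.max(np.abs(u - u_exact))))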

  4. Study of the techniques feasible for food synthesis aboard a spacecraft

    NASA Technical Reports Server (NTRS)

    Weiss, A. H.

    1972-01-01

    Synthesis of sugars by Ca(OH)2 catalyzed formaldehyde condensation (the formose reaction) has produced branched carbohydrates that do not occur in nature. The kinetics and mechanisms of the homogeneously catalyzed autocatalytic condensation were studied, and analogies between homogeneous and heterogeneous rate laws have been found. Aldol condensations proceed simultaneously with Cannizzaro and crossed-Cannizzaro reactions and Lobry de Bruyn-Van Eckenstein rearrangements. The separate steps as well as the interactions of this highly complex reaction system were elucidated. The system exhibits instabilities; competitive catalytic, mass action, and equilibrium phenomena; complexing; and parallel and consecutive reactions. Specific findings that have been made on the problem will be of interest for synthesizing sugars, both for sustained space flight and for large-scale food manufacture. A contribution to the methodology for studying complex catalyzed reactions and to understanding control of reaction selectivity was a broad goal of the project.

  5. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    PubMed

    Huang, Yi-Shao; Liu, Wel-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The favorable characteristics of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.

  6. Self-dissimilarity as a High Dimensional Complexity Measure

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Macready, William

    2005-01-01

    For many systems characterized as "complex," the patterns exhibited on different scales differ markedly from one another. For example, the biomass distribution in a human body "looks very different" depending on the scale at which one examines it. Conversely, the patterns at different scales in "simple" systems (e.g., gases, mountains, crystals) vary little from one scale to another. Accordingly, the degrees of self-dissimilarity between the patterns of a system at various scales constitute a complexity "signature" of that system. Here we present a novel quantification of self-dissimilarity. This signature can, if desired, incorporate a novel information-theoretic measure of the distance between probability distributions that we derive here. Whatever distance measure is chosen, our quantification of self-dissimilarity can be measured for many kinds of real-world data. This allows comparisons of the complexity signatures of wholly different kinds of systems (e.g., systems involving information density in a digital computer vs. species densities in a rain forest vs. capital density in an economy, etc.). Moreover, in contrast to many other suggested complexity measures, evaluating the self-dissimilarity of a system does not require one to already have a model of the system. These facts may allow self-dissimilarity signatures to be used as the underlying observational variables of an eventual overarching theory relating all complex systems. To illustrate self-dissimilarity we present several numerical experiments. In particular, we show that the underlying structure of the logistic map is picked out by the self-dissimilarity signature of time series produced by that map.
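
    A crude illustration of the idea (not the information-theoretic measure derived in the paper) is to coarse-grain the logistic-map time series mentioned above at several scales and compare the value distributions at adjacent scales, e.g. with a Jensen-Shannon divergence; all choices below are illustrative assumptions.

        import numpy as np

        def logistic_series(r=4.0, x0=0.2, n=4096):
            """Time series from the logistic map x_{t+1} = r * x_t * (1 - x_t)."""
            x = np.empty(n)
            x[0] = x0
            for t in range(n - 1):
                x[t + 1] = r * x[t] * (1.0 - x[t])
            return x

        def value_histogram(series, bins=32):
            p, _ = np.histogram(series, bins=bins, range=(0.0, 1.0))
            p = p.astype(float) + 1e-12            # avoid log(0)
            return p / p.sum()

        def js_divergence(p, q):
            m = 0.5 * (p + q)
            kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        x = logistic_series()
        # Coarse-grain by block averaging and compare adjacent scales: the resulting
        # profile is a (very simplified) self-dissimilarity signature of the series.
        for scale in (1, 2, 4, 8, 16):
            coarse = x[: x.size // scale * scale].reshape(-1, scale).mean(axis=1)
            coarser = x[: x.size // (2 * scale) * (2 * scale)].reshape(-1, 2 * scale).mean(axis=1)
            print(scale, round(js_divergence(value_histogram(coarse), value_histogram(coarser)), 4))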

  7. Adaptive neural network decentralized backstepping output-feedback control for nonlinear large-scale systems with time delays.

    PubMed

    Tong, Shao Cheng; Li, Yong Ming; Zhang, Hua-Guang

    2011-07-01

    In this paper, two adaptive neural network (NN) decentralized output feedback control approaches are proposed for a class of uncertain nonlinear large-scale systems with immeasurable states and unknown time delays. Using NNs to approximate the unknown nonlinear functions, an NN state observer is designed to estimate the immeasurable states. By combining the adaptive backstepping technique with decentralized control design principle, an adaptive NN decentralized output feedback control approach is developed. In order to overcome the problem of "explosion of complexity" inherent in the proposed control approach, the dynamic surface control (DSC) technique is introduced into the first adaptive NN decentralized control scheme, and a simplified adaptive NN decentralized output feedback DSC approach is developed. It is proved that the two proposed control approaches can guarantee that all the signals of the closed-loop system are semi-globally uniformly ultimately bounded, and the observer errors and the tracking errors converge to a small neighborhood of the origin. Simulation results are provided to show the effectiveness of the proposed approaches.

  8. Numerical propulsion system simulation

    NASA Technical Reports Server (NTRS)

    Lytle, John K.; Remaklus, David A.; Nichols, Lester D.

    1990-01-01

    The cost of implementing new technology in aerospace propulsion systems is becoming prohibitively expensive. One of the major contributors to the high cost is the need to perform many large-scale system tests. Extensive testing is used to capture the complex interactions among the multiple disciplines and the multiple components inherent in complex systems. The objective of the Numerical Propulsion System Simulation (NPSS) is to provide insight into these complex interactions through computational simulations. This will allow for comprehensive evaluation of new concepts early in the design phase before a commitment to hardware is made. It will also allow for rapid assessment of field-related problems, particularly in cases where operational problems were encountered during conditions that would be difficult to simulate experimentally. The tremendous progress taking place in computational engineering and the rapid increase in computing power expected through parallel processing make this concept feasible within the near future. However, it is critical that the framework for such simulations be put in place now to serve as a focal point for the continued developments in computational engineering and computing hardware and software. The NPSS concept described here will provide that framework.

  9. Network cosmology.

    PubMed

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.

  10. Network Cosmology

    PubMed Central

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S.; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology. PMID:23162688

  11. Modular microfluidic systems using reversibly attached PDMS fluid control modules

    NASA Astrophysics Data System (ADS)

    Skafte-Pedersen, Peder; Sip, Christopher G.; Folch, Albert; Dufva, Martin

    2013-05-01

    The use of soft lithography-based poly(dimethylsiloxane) (PDMS) valve systems is the dominating approach for high-density microscale fluidic control. Integrated systems enable complex flow control and large-scale integration, but lack modularity. In contrast, modular systems are attractive alternatives to integration because they can be tailored for different applications piecewise and without redesigning every element of the system. We present a method for reversibly coupling hard materials to soft lithography defined systems through self-aligning O-ring features thereby enabling easy interfacing of complex-valve-based systems with simpler detachable units. Using this scheme, we demonstrate the seamless interfacing of a PDMS-based fluid control module with hard polymer chips. In our system, 32 self-aligning O-ring features protruding from the PDMS fluid control module form chip-to-control module interconnections which are sealed by tightening four screws. The interconnection method is robust and supports complex fluidic operations in the reversibly attached passive chip. In addition, we developed a double-sided molding method for fabricating PDMS devices with integrated through-holes. The versatile system facilitates a wide range of applications due to the modular approach, where application specific passive chips can be readily attached to the flow control module.

  12. Stress Testing Water Resource Systems at Regional and National Scales with Synthetic Drought Event Sets

    NASA Astrophysics Data System (ADS)

    Hall, J. W.; Mortazavi-Naeini, M.; Coxon, G.; Guillod, B. P.; Allen, M. R.

    2017-12-01

    Water resources systems can fail to deliver the services required by water users (and deprive the environment of flow requirements) in many different ways. In an attempt to make systems more resilient, they have also been made more complex, for example through a growing number of large-scale transfers, optimized storages and reuse plants. These systems may be vulnerable to complex variants of hydrological variability in space and time, and behavioural adaptations by water users. In previous research we have used non-parametric stochastic streamflow generators to test the vulnerability of water resource systems. Here we use a very large ensemble of regional climate model outputs from the weather@home crowd-sourced citizen science project, which has generated more than 30,000 years of synthetic weather for present and future climates in the UK and western Europe, using the HadAM3P regional climate model. These simulations have been constructed in order to preserve prolonged drought characteristics, through treatment of long-memory processes in ocean circulations and soil moisture. The weather simulations have been propagated through the newly developed DynaTOP national hydrological model for Britain, in order to provide low-flow simulations at points of water withdrawal for public water supply, energy and agricultural abstractors. We have used the WATHNET water resource simulation model, set up for the Thames Basin and for all of the large water resource zones in England, to simulate the frequency, severity and duration of water shortages in all of these synthetic weather conditions. In particular, we have sought to explore systemic vulnerabilities associated with inter-basin transfers and the trade-offs between different water users. This analytical capability is providing the basis for (i) implementation of the Duty of Resilience, which has been placed upon the water industry in the 2014 Water Act, and (ii) testing reformed abstraction arrangements which the UK government is committed to implementing.

  13. The Computational Complexity, Parallel Scalability, and Performance of Atmospheric Data Assimilation Algorithms

    NASA Technical Reports Server (NTRS)

    Lyster, Peter M.; Guo, J.; Clune, T.; Larson, J. W.; Atlas, Robert (Technical Monitor)

    2001-01-01

    The computational complexity of algorithms for Four Dimensional Data Assimilation (4DDA) at NASA's Data Assimilation Office (DAO) is discussed. In 4DDA, observations are assimilated with the output of a dynamical model to generate best-estimates of the states of the system. It is thus a mapping problem, whereby scattered observations are converted into regular, accurate maps of wind, temperature, moisture and other variables. The DAO is developing and using 4DDA algorithms that provide these datasets, or analyses, in support of Earth System Science research. Two large-scale algorithms are discussed. The first approach, the Goddard Earth Observing System Data Assimilation System (GEOS DAS), uses an atmospheric general circulation model (GCM) and an observation-space based analysis system, the Physical-space Statistical Analysis System (PSAS). GEOS DAS is very similar to global meteorological weather forecasting data assimilation systems, but is used at NASA for climate research. Systems of this size typically run at between 1 and 20 gigaflop/s. The second approach, the Kalman filter, uses a more consistent algorithm to determine the forecast error covariance matrix than does GEOS DAS. For atmospheric assimilation, the gridded dynamical fields typically have more than 10^6 variables, therefore the full error covariance matrix may be in excess of a teraword. For the Kalman filter this problem can easily scale to petaflop/s proportions. We discuss the computational complexity of GEOS DAS and our implementation of the Kalman filter. We also discuss and quantify some of the technical issues and limitations in developing efficient, in terms of wall clock time, and scalable parallel implementations of the algorithms.
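
    As a back-of-envelope check of the storage figure quoted above (our own arithmetic, stated only as an illustration), the covariance size follows directly from the number of gridded variables; in LaTeX:

        \[
          n \approx 10^{6} \quad\Rightarrow\quad P \in \mathbb{R}^{n \times n}
          \;\text{has}\; n^{2} \approx 10^{12} \;\text{entries} \;\approx\; 1\ \text{teraword},
        \]
        and propagating it as $P^{f} = \mathbf{M} P^{a} \mathbf{M}^{\mathsf{T}} + \mathbf{Q}$ requires on the order of $n$ applications of the (tangent-linear) model and its adjoint per assimilation cycle, which is why the full Kalman filter scales toward petaflop/s proportions.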

  14. RT-Syn: A real-time software system generator

    NASA Technical Reports Server (NTRS)

    Setliff, Dorothy E.

    1992-01-01

    This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs not only by reducing development time, but more importantly facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which are currently dominating life cycle costs.

  15. Reverse Ecology: from systems to environments and back.

    PubMed

    Levy, Roie; Borenstein, Elhanan

    2012-01-01

    The structure of complex biological systems reflects not only their function but also the environments in which they evolved and are adapted to. Reverse Ecology-an emerging new frontier in Evolutionary Systems Biology-aims to extract this information and to obtain novel insights into an organism's ecology. The Reverse Ecology framework facilitates the translation of high-throughput genomic data into large-scale ecological data, and has the potential to transform ecology into a high-throughput field. In this chapter, we describe some of the pioneering work in Reverse Ecology, demonstrating how system-level analysis of complex biological networks can be used to predict the natural habitats of poorly characterized microbial species, their interactions with other species, and universal patterns governing the adaptation of organisms to their environments. We further present several studies that applied Reverse Ecology to elucidate various aspects of microbial ecology, and lay out exciting future directions and potential future applications in biotechnology, biomedicine, and ecological engineering.

  16. Using nocturnal cold air drainage flow to monitor ecosystem processes in complex terrain

    Treesearch

    Thomas G. Pypker; Michael H. Unsworth; Alan C. Mix; William Rugh; Troy Ocheltree; Karrin Alstad; Barbara J. Bond

    2007-01-01

    This paper presents initial investigations of a new approach to monitor ecosystem processes in complex terrain on large scales. Metabolic processes in mountainous ecosystems are poorly represented in current ecosystem monitoring campaigns because the methods used for monitoring metabolism at the ecosystem scale (e.g., eddy covariance) require flat study sites. Our goal...

  17. Java Performance for Scientific Applications on LLNL Computer Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kapfer, C; Wissink, A

    2002-05-10

    Languages in use for high performance computing at the laboratory--Fortran (f77 and f90), C, and C++--have many years of development behind them and are generally considered the fastest available. However, Fortran and C do not readily extend to object-oriented programming models, limiting their capability for very complex simulation software. C++ facilitates object-oriented programming but is a very complex and error-prone language. Java offers a number of capabilities that these other languages do not. For instance it implements cleaner (i.e., easier to use and less prone to errors) object-oriented models than C++. It also offers networking and security as part of the language standard, and cross-platform executables that make it architecture neutral, to name a few. These features have made Java very popular for industrial computing applications. The aim of this paper is to explain the trade-offs in using Java for large-scale scientific applications at LLNL. Despite its advantages, the computational science community has been reluctant to write large-scale computationally intensive applications in Java due to concerns over its poor performance. However, considerable progress has been made over the last several years. The Java Grande Forum [1] has been promoting the use of Java for large-scale computing. Members have introduced efficient array libraries, developed fast just-in-time (JIT) compilers, and built links to existing packages used in high performance parallel computing.

  18. A case report of evaluating a large-scale health systems improvement project in an uncontrolled setting: a quality improvement initiative in KwaZulu-Natal, South Africa.

    PubMed

    Mate, Kedar S; Ngidi, Wilbroda Hlolisile; Reddy, Jennifer; Mphatswe, Wendy; Rollins, Nigel; Barker, Pierre

    2013-11-01

    New approaches are needed to evaluate quality improvement (QI) within large-scale public health efforts. This case report details challenges to large-scale QI evaluation, and proposes solutions relying on adaptive study design. We used two sequential evaluative methods to study a QI effort to improve delivery of HIV preventive care in public health facilities in three districts in KwaZulu-Natal, South Africa, over a 3-year period. We initially used a cluster randomised controlled trial (RCT) design. During the RCT study period, tensions arose between intervention implementation and evaluation design due to loss of integrity of the randomisation unit over time, pressure to implement changes across the randomisation unit boundaries, and use of administrative rather than functional structures for the randomisation. In response to this loss of design integrity, we switched to a more flexible intervention design and a mixed-methods quasiexperimental evaluation relying on both a qualitative analysis and an interrupted time series quantitative analysis. Cluster RCT designs may not be optimal for evaluating complex interventions to improve implementation in uncontrolled 'real world' settings. More flexible, context-sensitive evaluation designs offer a better balance of the need to adjust the intervention during the evaluation to meet implementation challenges while providing the data required to evaluate effectiveness. Our case study involved HIV care in a resource-limited setting, but these issues likely apply to complex improvement interventions in other settings.

  19. Modeling microbial community structure and functional diversity across time and space.

    PubMed

    Larsen, Peter E; Gibbons, Sean M; Gilbert, Jack A

    2012-07-01

    Microbial communities exhibit exquisitely complex structure. Many aspects of this complexity, from the number of species to the total number of interactions, are currently very difficult to examine directly. However, extraordinary efforts are being made to make these systems accessible to scientific investigation. While recent advances in high-throughput sequencing technologies have improved accessibility to the taxonomic and functional diversity of complex communities, monitoring the dynamics of these systems over time and space - using appropriate experimental design - is still expensive. Fortunately, modeling can be used as a lens to focus low-resolution observations of community dynamics to enable mathematical abstractions of functional and taxonomic dynamics across space and time. Here, we review the approaches for modeling bacterial diversity at both the very large and the very small scales at which microbial systems interact with their environments. We show that modeling can help to connect biogeochemical processes to specific microbial metabolic pathways. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  20. High cell density media for Escherichia coli are generally designed for aerobic cultivations – consequences for large-scale bioprocesses and shake flask cultures

    PubMed Central

    Soini, Jaakko; Ukkonen, Kaisa; Neubauer, Peter

    2008-01-01

    Background: For the cultivation of Escherichia coli in bioreactors, trace element solutions are generally designed for optimal growth under aerobic conditions. They normally do not contain selenium and nickel, and molybdenum is contained in only a few of them. These elements are part of the formate hydrogen lyase (FHL) complex which is induced under anaerobic conditions. As it is generally known that oxygen limitation appears in shake flask cultures and locally in large-scale bioreactors, the function of the FHL complex may influence the process behaviour. Formate has been described to accumulate in large-scale cultures and may have toxic effects on E. coli. Although the anaerobic metabolism of E. coli is well studied, reference data which estimate the impact of the FHL complex on bioprocesses of E. coli with oxygen limitation have so far not been published, but are important for a better process understanding. Results: Two sets of fed-batch cultures with conditions triggering oxygen limitation and formate accumulation were performed. Permanent oxygen limitation, which is typical for shake flask cultures, was caused in a bioreactor by reduction of the agitation rate. Transient oxygen limitation, which has been described to eventually occur in the feed-zone of large-scale bioreactors, was mimicked in a two-compartment scale-down bioreactor consisting of a stirred tank reactor and a plug flow reactor (PFR) with continuous glucose feeding into the PFR. In both models formate accumulated up to about 20 mM in the culture medium without addition of selenium, molybdenum and nickel. By addition of these trace elements the formate accumulation decreased below the level observed in well-mixed laboratory-scale cultures. Interestingly, addition of the extra trace elements caused accumulation of large amounts of lactate and reduced biomass yield in the simulator with permanent oxygen limitation, but not in the scale-down two-compartment bioreactor. Conclusion: The accumulation of formate in oxygen-limited cultivations of E. coli can be fully prevented by addition of the trace elements selenium, nickel and molybdenum, necessary for the function of the FHL complex. For large-scale cultivations, if glucose gradients are likely, the results from the two-compartment scale-down bioreactor indicate that the addition of the extra trace elements is beneficial. No negative effects on the biomass yield or on any other bioprocess parameters could be observed in cultures with the extra trace elements if the cells were repeatedly exposed to transient oxygen limitation. PMID:18687130

  1. Using landscape limnology to classify freshwater ecosystems for multi-ecosystem management and conservation

    USGS Publications Warehouse

    Soranno, Patricia A.; Cheruvelil, Kendra Spence; Webster, Katherine E.; Bremigan, Mary T.; Wagner, Tyler; Stow, Craig A.

    2010-01-01

    Governmental entities are responsible for managing and conserving large numbers of lake, river, and wetland ecosystems that can be addressed only rarely on a case-by-case basis. We present a system for predictive classification modeling, grounded in the theoretical foundation of landscape limnology, that creates a tractable number of ecosystem classes to which management actions may be tailored. We demonstrate our system by applying two types of predictive classification modeling approaches to develop nutrient criteria for eutrophication management in 1,998 north temperate lakes. Our predictive classification system promotes the effective management of multiple ecosystems across broad geographic scales by explicitly connecting management and conservation goals to the classification modeling approach, considering multiple spatial scales as drivers of ecosystem dynamics, and acknowledging the hierarchical structure of freshwater ecosystems. Such a system is critical for adaptive management of complex mosaics of freshwater ecosystems and for balancing competing needs for ecosystem services in a changing world.

  2. Using d15N of Chironomidae to help assess lake condition and possible stressors in EPA's National Lakes Assessment.

    EPA Science Inventory

    Background/Questions/Methods As interest in continental-scale ecology increases to address large-scale ecological problems, ecologists need indicators of complex processes that can be collected quickly at many sites across large areas. We are exploring the utility of stable isot...

  3. Development of Structural Geology and Tectonics Data System with Field and Lab Interface

    NASA Astrophysics Data System (ADS)

    Newman, J.; Tikoff, B.; Walker, J. D.; Good, J.; Michels, Z. D.; Ash, J.; Andrew, J.; Williams, R. T.; Richard, S. M.

    2015-12-01

    We have developed a prototype Data System for Structural Geology and Tectonics (SG&T). The goal of this effort is to enable recording and sharing of data within the geoscience community, to encourage interdisciplinary research, and to facilitate the investigation of scientific questions that cannot currently be addressed. The development of the Data System emphasizes community input in order to build a system that encompasses the needs of researchers in terms of data and usability. SG&T data are complex for a variety of reasons, including the wide range of temporal and spatial scales (many orders of magnitude each), the complex three-dimensional geometry of some geological structures, the inherent spatial nature of the data, and the difficulty of making temporal inferences from spatial observations. To successfully develop an SG&T data system, we must simultaneously solve three problems: 1) how to digitize SG&T data; 2) how to design a software system that is applicable; and 3) how to construct a very flexible user interface. To address the first problem, we introduce the "Spot" concept, which allows tracking of hierarchical and spatial relations between structures at all scales, and will link map-scale, mesoscale, and laboratory-scale data. A Spot, in this sense, is analogous to the beam size of analytical equipment used for in situ analysis of rocks; it is the size over which a measurement or quantity is applicable. A Spot can be a single measurement, an aggregation of individual measurements, or even establish relationships between numerous other Spots. We address the second problem through the use of a graph database to better preserve the myriad of potentially complex relationships. To construct a flexible user interface that follows a natural workflow and serves the needs of the community, we have begun engaging the SG&T community in order to utilize the expertise of a large group of scientists to ensure the quality and usability of this data system. These activities have included Town Halls, subdiscipline-specific workshops to develop community standards, and pilot projects to test the data system in the field during the study of a variety of geologic structures.
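
    The record above describes storing Spot observations and their hierarchical and spatial relations in a graph database. The following is a minimal, hypothetical sketch (in Python with networkx rather than a dedicated graph database) of what such a Spot-centred data model could look like; all node names, attributes, and relation labels are invented for illustration and are not the project's actual schema.

```python
import networkx as nx

g = nx.DiGraph()

# Spots at two different scales: an outcrop-scale Spot and a thin-section Spot.
g.add_node("spot_outcrop_01", kind="spot", scale_m=5.0,
           lat=43.07, lon=-89.40, notes="folded quartzite outcrop")
g.add_node("spot_thinsection_01", kind="spot", scale_m=0.003,
           notes="foliation measured in thin section")

# A single strike/dip measurement attached to the outcrop Spot.
g.add_node("meas_001", kind="measurement", strike_deg=120, dip_deg=35)
g.add_edge("spot_outcrop_01", "meas_001", relation="has_measurement")

# Hierarchical relation: the thin-section Spot was sampled within the outcrop Spot.
g.add_edge("spot_outcrop_01", "spot_thinsection_01", relation="contains")

# Query: measurements attached directly to a given Spot.
measurements = [n for n in g.successors("spot_outcrop_01")
                if g.nodes[n]["kind"] == "measurement"]
print(measurements)  # ['meas_001']
```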

  4. NASA's Pleiades Supercomputer Crunches Data For Groundbreaking Analysis and Visualizations

    NASA Image and Video Library

    2016-11-23

    The Pleiades supercomputer at NASA's Ames Research Center, recently named the 13th fastest computer in the world, provides scientists and researchers high-fidelity numerical modeling of complex systems and processes. By using detailed analyses and visualizations of large-scale data, Pleiades is helping to advance human knowledge and technology, from designing the next generation of aircraft and spacecraft to understanding the Earth's climate and the mysteries of our galaxy.

  5. Demonstration of reduced-order urban scale building energy models

    DOE PAGES

    Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...

    2017-09-08

    The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their models systematically in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.

  6. Demonstration of reduced-order urban scale building energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew

    The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. The OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess the complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify the complexity of building energy models. Consequently, this study empowers building energy modelers to quantify their models systematically in order to report model complexity alongside model accuracy. An exhaustive analysis of four university campuses suggests that urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this developed framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.
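
    The two records above describe reduced-order building energy models without specifying their internal form. As a point of reference only, the sketch below shows the simplest kind of reduced-order zone model, a lumped-capacitance (1R1C) heat balance stepped forward with explicit Euler; it is not the authors' OpenStudio-based framework, and every parameter value is an assumption chosen for illustration.

```python
import numpy as np

C = 5.0e7        # effective zone heat capacity [J/K] (assumed)
R = 1.0e-3       # envelope thermal resistance [K/W], i.e. UA = 1000 W/K (assumed)
dt = 600.0       # time step [s]
hours = np.arange(0.0, 24.0, dt / 3600.0)

T_out = 5.0 + 5.0 * np.sin(2 * np.pi * (hours - 15) / 24)     # outdoor temperature [C]
Q_int = np.where((hours > 8) & (hours < 18), 1.0e4, 1.0e3)    # internal gains [W]

T_zone = np.empty_like(hours)
T_zone[0] = 20.0
for k in range(1, len(hours)):
    # Explicit Euler update of the zone heat balance: C dT/dt = (T_out - T)/R + Q_int
    dTdt = ((T_out[k - 1] - T_zone[k - 1]) / R + Q_int[k - 1]) / C
    T_zone[k] = T_zone[k - 1] + dt * dTdt

print(f"zone temperature range over the day: {T_zone.min():.1f} .. {T_zone.max():.1f} C")
```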

  7. [New materia medica project: synthetic biology based bioactive metabolites research in medicinal plant].

    PubMed

    Wang, Yong

    2017-03-25

    In the last decade, synthetic biology research has gradually transitioned from parts and devices in unicellular systems toward more complex multicellular systems. Emerging plant synthetic biology is regarded as the "next chapter" of synthetic biology. With the complex and diverse plant metabolism as its entry point, plant synthetic biology research not only helps us understand how real life works, but also teaches us how to design and construct more complex artificial life. Breakthroughs in bioactive compound innovation and large-scale production are also expected from redesigned plant metabolism. In this review, we discuss the research progress in plant synthetic biology and propose the new materia medica project to raise the level of traditional Chinese herbal medicine research.

  8. Will Systems Biology Deliver Its Promise and Contribute to the Development of New or Improved Vaccines? What Really Constitutes the Study of "Systems Biology" and How Might Such an Approach Facilitate Vaccine Design.

    PubMed

    Germain, Ronald N

    2017-10-16

    A dichotomy exists in the field of vaccinology about the promise versus the hype associated with application of "systems biology" approaches to rational vaccine design. Some feel it is the only way to efficiently uncover currently unknown parameters controlling desired immune responses or discover what elements actually mediate these responses. Others feel that traditional experimental, often reductionist, methods for incrementally unraveling complex biology provide a more solid way forward, and that "systems" approaches are costly ways to collect data without gaining true insight. Here I argue that both views are inaccurate. This is largely because of confusion about what can be gained from classical experimentation versus statistical analysis of large data sets (bioinformatics) versus methods that quantitatively explain emergent properties of complex assemblies of biological components, with the latter reflecting what was previously called "physiology." Reductionist studies will remain essential for generating detailed insight into the functional attributes of specific elements of biological systems, but such analyses lack the power to provide a quantitative and predictive understanding of global system behavior. But by employing (1) large-scale screening methods for discovery of unknown components and connections in the immune system (omics), (2) statistical analysis of large data sets (bioinformatics), and (3) the capacity of quantitative computational methods to translate these individual components and connections into models of emergent behavior (systems biology), we will be able to better understand how the overall immune system functions and to determine with greater precision how to manipulate it to produce desired protective responses. Copyright © 2017 Cold Spring Harbor Laboratory Press; all rights reserved.

  9. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems

    NASA Astrophysics Data System (ADS)

    Broccard, Frédéric D.; Joshi, Siddharth; Wang, Jun; Cauwenberghs, Gert

    2017-08-01

    Objective. Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation and is thus subjected to different constraints from its digital counterpart regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. Approach. This article highlights the current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. Main results. Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. Significance. Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.

  10. Designing Cognitive Complexity in Mathematical Problem-Solving Items

    ERIC Educational Resources Information Center

    Daniel, Robert C.; Embretson, Susan E.

    2010-01-01

    Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…

  11. Applying a framework for assessing the health system challenges to scaling up mHealth in South Africa

    PubMed Central

    2012-01-01

    Background Mobile phone technology has demonstrated the potential to improve health service delivery, but there is little guidance to inform decisions about acquiring and implementing mHealth technology at scale in health systems. Using the case of community-based health services (CBS) in South Africa, we apply a framework to appraise the opportunities and challenges to effective implementation of mHealth at scale in health systems. Methods A qualitative study reviewed the benefits and challenges of mHealth in community-based services in South Africa, through a combination of key informant interviews, site visits to local projects and document reviews. Using a framework adapted from three approaches to reviewing sustainable information and communication technology (ICT), the lessons from local experience and elsewhere formed the basis of a wider consideration of scale up challenges in South Africa. Results Four key system dimensions were identified and assessed: government stewardship and the organisational, technological and financial systems. In South Africa, the opportunities for successful implementation of mHealth include the high prevalence of mobile phones, a supportive policy environment for eHealth, successful use of mHealth for CBS in a number of projects and a well-developed ICT industry. However there are weaknesses in other key health systems areas such as organisational culture and capacity for using health information for management, and the poor availability and use of ICT in primary health care. The technological challenges include the complexity of ensuring interoperability and integration of information systems and securing privacy of information. Finally, there are the challenges of sustainable financing required for large scale use of mobile phone technology in resource limited settings. Conclusion Against a background of a health system with a weak ICT environment and limited implementation capacity, it remains uncertain that the potential benefits of mHealth for CBS would be retained with immediate large-scale implementation. Applying a health systems framework facilitated a systematic appraisal of potential challenges to scaling up mHealth for CBS in South Africa and may be useful for policy and practice decision-making in other low- and middle-income settings. PMID:23126370

  12. Factorized Runge-Kutta-Chebyshev Methods

    NASA Astrophysics Data System (ADS)

    O'Sullivan, Stephen

    2017-05-01

    The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) explicit schemes for the integration of large systems of PDEs with diffusive terms are presented. The schemes are simple to implement through ordered sequences of forward Euler steps with complex stepsizes, and are easily parallelised for large-scale problems on distributed architectures. Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability for acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The extent of the stability domain is approximately the same as that of RKC schemes, and a third longer than in the case of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is also discussed. A publicly available implementation of FRKC2 schemes may be obtained from maths.dit.ie/frkc
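
    The FRKC schemes above are described as ordered sequences of forward Euler steps with complex stepsizes. The sketch below is not the published FRKC2 scheme (its Chebyshev-based stepsize sequences are omitted); it only demonstrates the underlying idea in its simplest form: two Euler substeps with conjugate complex stepsizes h(1±i)/2 compose into a second-order step, with the imaginary parts cancelling to leading order.

```python
import numpy as np

def euler_substep(y, h, f):
    return y + h * f(y)

def complex_two_stage_step(y, h, f):
    a = 0.5 * (1 + 1j)                       # a + conj(a) = 1 and a*conj(a) = 1/2,
    y = euler_substep(y, a * h, f)           # which are exactly the second-order
    y = euler_substep(y, np.conj(a) * h, f)  # conditions for a two-stage composition
    return y

# Test problem y' = -y^2, y(0) = 1, exact solution y(t) = 1/(1 + t).
f = lambda y: -y * y
for h in (0.1, 0.05):
    n = int(round(1.0 / h))
    y = 1.0 + 0.0j
    for _ in range(n):
        y = complex_two_stage_step(y, h, f)
    err = abs(y.real - 1.0 / (1.0 + n * h))
    print(f"h={h:<4}  error={err:.2e}")      # error drops roughly 4x when h is halved
```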

  13. Can Regional Climate Models be used in the assessment of vulnerability and risk caused by extreme events?

    NASA Astrophysics Data System (ADS)

    Nunes, Ana

    2015-04-01

    Extreme meteorological events have played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. Such a system requires a multi-scale approach centered on a regional modeling system, consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually produce improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined by any method to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and estimation of terrain stability over complex topography. The reconstruction of past extreme events also helps the development of a system for decision-making regarding natural and social disasters and for reducing impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with NCEP Climate Forecast System Reanalysis outputs were made at regional climate model resolutions of about 40 and 25 km.
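
    The scale-selective bias correction above is described as similar to spectral nudging: only the large-scale (low-wavenumber) part of the regional field is relaxed toward the driving data, leaving the internal, small-scale modes free to develop. A one-dimensional illustrative sketch follows; the cutoff wavenumber and relaxation coefficient are arbitrary assumptions, not values from the reconstruction project.

```python
import numpy as np

def spectral_nudge(field, driver, k_cut=4, alpha=0.1):
    """Relax wavenumbers k <= k_cut of `field` toward `driver` by a factor alpha."""
    fk = np.fft.rfft(field)
    dk = np.fft.rfft(driver)
    low = np.arange(fk.size) <= k_cut
    fk[low] = (1 - alpha) * fk[low] + alpha * dk[low]   # nudge the large scales only
    return np.fft.irfft(fk, n=field.size)

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driver = np.sin(x)                                  # smooth "large-scale" driving field
regional = np.sin(x + 0.3) + 0.2 * np.sin(12 * x)   # drifted large scale plus small scales
nudged = spectral_nudge(regional, driver)
print(f"max change introduced by nudging: {np.abs(nudged - regional).max():.3f}")
```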

  14. Multi-scale correlations in different futures markets

    NASA Astrophysics Data System (ADS)

    Bartolozzi, M.; Mellen, C.; di Matteo, T.; Aste, T.

    2007-07-01

    In the present work we investigate the multiscale nature of the correlations for high frequency data (1 min) in different futures markets over a period of two years, starting on the 1st of January 2003 and ending on the 31st of December 2004. In particular, by using the concept of local Hurst exponent, we point out how the behaviour of this parameter, usually considered as a benchmark for persistency/antipersistency recognition in time series, is largely time-scale dependent in the market context. These findings are a direct consequence of the intrinsic complexity of a system where trading strategies are scale-adaptive. Moreover, our analysis points out different regimes in the dynamical behaviour of the market indices under consideration.
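
    The analysis above relies on a local Hurst exponent estimated over time windows at different scales. The following is a minimal sketch of one standard estimator, rescaled-range (R/S) analysis applied over a sliding window; it is not the authors' exact procedure, and the window lengths and synthetic data are chosen only for illustration.

```python
import numpy as np

def rs_hurst(x, scales=(8, 16, 32, 64)):
    """Estimate a Hurst exponent from the slope of log(R/S) against log(n)."""
    rs = []
    for n in scales:
        chunks = x[: (len(x) // n) * n].reshape(-1, n)
        vals = []
        for c in chunks:
            z = np.cumsum(c - c.mean())       # mean-adjusted cumulative series
            s = c.std()
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(scales), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
returns = rng.normal(size=4000)               # uncorrelated returns: H should sit near 0.5
window, step = 512, 256
local_H = [rs_hurst(returns[i:i + window]) for i in range(0, len(returns) - window, step)]
print(np.round(local_H, 2))
```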

  15. A new way to protect privacy in large-scale genome-wide association studies.

    PubMed

    Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak

    2013-04-01

    Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex heritage patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.

  16. A low complexity visualization tool that helps to perform complex systems analysis

    NASA Astrophysics Data System (ADS)

    Beiró, M. G.; Alvarez-Hamelin, J. I.; Busch, J. R.

    2008-12-01

    In this paper, we present an extension of large network visualization (LaNet-vi), a tool to visualize large-scale networks using the k-core decomposition. One of the new features is how vertices compute their angular position: while in the previous version this was done using shell clusters, in this version we use the angular coordinate of vertices in higher k-shells and arrange the highest shell according to a clique decomposition. The time complexity goes from O(n√n) to O(n) under bounds on a heavy-tailed degree distribution. The tool also performs a k-core connectivity analysis, highlighting vertices that are not k-connected; this property is useful, e.g., for measuring robustness or quality of service (QoS) capabilities in communication networks. Finally, the current version of LaNet-vi can draw labels and all the edges using transparencies, yielding an accurate visualization. Based on the obtained figure, it is possible to distinguish different sources and types of complex networks at a glance, in a sort of 'network iris-print'.
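
    LaNet-vi's layout is driven by the k-core decomposition of the network. A minimal sketch of computing shell indices and grouping vertices by shell is shown below (using networkx; the angular placement, clique decomposition of the top shell, and rendering are beyond this illustration).

```python
import networkx as nx
from collections import defaultdict

G = nx.barabasi_albert_graph(1000, 3, seed=42)   # a heavy-tailed test graph
core = nx.core_number(G)                         # vertex -> k-shell index

shells = defaultdict(list)
for v, k in core.items():
    shells[k].append(v)

for k in sorted(shells):
    print(f"shell {k}: {len(shells[k])} vertices")
# LaNet-vi would place the highest shell near the centre of the figure and
# lower shells on concentric rings further out.
```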

  17. Efficient data management in a large-scale epidemiology research project.

    PubMed

    Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang

    2012-09-01

    This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data amounting to approximately 5 terabytes without experiencing any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
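
    Among the mechanisms above, de-identification of identifying data is the one most easily illustrated in isolation. The sketch below shows a common building block, pseudonymisation of identifiers with a keyed hash (HMAC), so that the same person maps to the same pseudonym but the mapping cannot be reproduced without the secret key; this is a generic illustration, not the project's actual procedure, and key management is deliberately simplified.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-securely-stored-key"   # assumption: held outside the research database

def pseudonym(identifier: str) -> str:
    """Derive a stable, non-reversible pseudonym from an identifying value."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "insurance_no": "1234567890", "systolic_bp": 128}
deidentified = {
    "subject_id": pseudonym(record["insurance_no"]),   # stable key allows record linkage
    "systolic_bp": record["systolic_bp"],              # analysis variables are kept
}
print(deidentified)
```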

  18. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
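
    PENDISC targets S-system models, in which each metabolite's rate of change is a difference of power-law production and degradation terms. The sketch below fits the rate constants of a toy two-metabolite S-system to noisy time-series data by ordinary least squares; it is a generic illustration only, omitting the non-dimensionalisation and constraints that define the PENDISC method itself, and all parameter values are invented.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def s_system(x, t, a1, b1, a2, b2):
    x1, x2 = np.maximum(x, 1e-9)        # keep the power laws well defined
    dx1 = a1 - b1 * x1                  # production and degradation of X1
    dx2 = a2 * x1**0.5 - b2 * x2        # X2 produced from X1 with a fixed kinetic order
    return [dx1, dx2]

t = np.linspace(0, 10, 21)
true_params = (2.0, 0.5, 1.0, 0.8)
data = odeint(s_system, [0.1, 0.1], t, args=true_params)
data = data + 0.02 * np.random.default_rng(1).normal(size=data.shape)   # "analytical error"

def residuals(p):
    sim = odeint(s_system, [0.1, 0.1], t, args=tuple(p))
    return (sim - data).ravel()

fit = least_squares(residuals, x0=[1.0, 1.0, 1.0, 1.0], bounds=(1e-6, 10.0))
print(np.round(fit.x, 2))   # should land close to the true rate constants (2.0, 0.5, 1.0, 0.8)
```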

  19. Large Scale Multi-area Static/Dynamic Economic Dispatch using Nature Inspired Optimization

    NASA Astrophysics Data System (ADS)

    Pandit, Manjaree; Jain, Kalpana; Dubey, Hari Mohan; Singh, Rameshwar

    2017-04-01

    Economic dispatch (ED) ensures that the generation allocation to the power units is carried out such that the total fuel cost is minimized and all the operating equality/inequality constraints are satisfied. Classical ED does not take transmission constraints into consideration, but in present restructured power systems the tie-line limits play a very important role in deciding operational policies. ED is a dynamic problem which is performed on-line in the central load dispatch centre under changing load scenarios. The dynamic multi-area ED (MAED) problem is more complex due to the additional tie-line, ramp-rate and area-wise power balance constraints. Nature-inspired (NI) heuristic optimization methods are gaining popularity over traditional methods for complex problems. This work presents modified particle swarm optimization (PSO) based techniques in which parameter automation is used to improve search efficiency by avoiding stagnation at a sub-optimal result. The work validates the performance of the PSO variants against the traditional solver GAMS for single-area as well as multi-area economic dispatch (MAED) on three test cases of a large 140-unit standard test system with complex constraints.
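
    The record above applies PSO variants with parameter automation to economic dispatch. As a point of reference, the sketch below runs a plain PSO with a linearly decreasing inertia weight on a single-area, three-unit dispatch with quadratic fuel costs and a power-balance penalty; it is not the authors' modified algorithms or the 140-unit test system, and the unit data are invented.

```python
import numpy as np

a = np.array([0.004, 0.006, 0.009])    # cost coefficients: cost_i = a*P^2 + b*P + c
b = np.array([5.3, 5.5, 5.8])
c = np.array([500.0, 400.0, 200.0])
p_min = np.array([100.0, 50.0, 50.0])  # unit limits [MW]
p_max = np.array([450.0, 350.0, 225.0])
demand = 800.0                         # total demand [MW]

def cost(P):
    fuel = np.sum(a * P**2 + b * P + c, axis=-1)
    penalty = 1e4 * (np.sum(P, axis=-1) - demand) ** 2   # power-balance penalty
    return fuel + penalty

rng = np.random.default_rng(0)
n_particles, n_iter = 40, 300
X = rng.uniform(p_min, p_max, size=(n_particles, 3))
V = np.zeros_like(X)
pbest, pbest_f = X.copy(), cost(X)
gbest = pbest[np.argmin(pbest_f)]

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter                      # inertia weight decreases over time
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = w * V + 2.0 * r1 * (pbest - X) + 2.0 * r2 * (gbest - X)
    X = np.clip(X + V, p_min, p_max)                 # enforce generation limits
    f = cost(X)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = X[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(np.round(gbest, 1), round(float(cost(gbest)), 1))   # dispatch [MW] and total cost
```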

  20. Large-Scale Astrophysical Visualization on Smartphones

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Massimino, P.; Costa, A.; Gheller, C.; Grillo, A.; Krokos, M.; Petta, C.

    2011-07-01

    Nowadays, digital sky surveys and long-duration, high-resolution numerical simulations using high performance computing and grid systems produce multidimensional astrophysical datasets on the order of several petabytes. Sharing visualizations of such datasets within communities and collaborating research groups is of paramount importance for disseminating results and advancing astrophysical research. Moreover, educational and public outreach programs can benefit greatly from novel ways of presenting these datasets by promoting understanding of complex astrophysical processes, e.g., the formation of stars and galaxies. We have previously developed VisIVO Server, a grid-enabled platform for high-performance large-scale astrophysical visualization. This article reviews the latest developments on VisIVO Web, a custom-designed web portal wrapped around VisIVO Server, then introduces VisIVO Smartphone, a gateway connecting VisIVO Web and data repositories for mobile astrophysical visualization. We discuss current work and summarize future developments.
