Sample records for large distributed system

  1. High Voltage Distribution System (HVDS) as a better system compared to Low Voltage Distribution System (LVDS) applied at Medan city power network

    NASA Astrophysics Data System (ADS)

    Dinzi, R.; Hamonangan, TS; Fahmi, F.

    2018-02-01

    In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations. The 220/380 V network is nowadays less common than the 20 kV network. The present arrangement causes losses due to a non-optimally placed distribution transformer that neglects load location, a poor consumer voltage profile, and large power losses along the carrier. This paper discusses how a high voltage distribution system (HVDS) can serve distribution networks better than the currently used low voltage distribution system (LVDS). The proposed reconfiguration replaces a large-capacity distribution transformer with several smaller-capacity distribution transformers installed as close as possible to the loads. The use of a high voltage distribution system results in better voltage profiles and lower power losses. On the non-technical side, annual savings and shorter payback periods are further advantages of the high voltage distribution system.
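    The loss advantage described above follows from simple I^2*R physics: for the same delivered power, line current scales inversely with voltage, so carrier losses fall with the square of the voltage ratio. A minimal sketch, using invented feeder figures rather than values from the paper:

```python
# Illustrative I^2*R feeder-loss comparison between LVDS and HVDS.
# All figures (load, distance, conductor resistance) are assumed
# example values, not data from the paper.

def line_loss_kw(power_kw, voltage_v, r_ohm_per_km, length_km, pf=0.9):
    """Three-phase line loss: P_loss = 3 * I^2 * R."""
    current = power_kw * 1e3 / (3 ** 0.5 * voltage_v * pf)      # line current (A)
    return 3 * current ** 2 * (r_ohm_per_km * length_km) / 1e3  # kW

load_kw, length_km, r = 100.0, 1.0, 0.5            # hypothetical feeder
lvds = line_loss_kw(load_kw, 400.0, r, length_km)  # 380/400 V carrier
hvds = line_loss_kw(load_kw, 20e3, r, length_km)   # 20 kV carrier

print(f"LVDS loss: {lvds:.2f} kW, HVDS loss: {hvds:.5f} kW")
# The loss ratio scales as (V_hv / V_lv)^2 for the same delivered power.
```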

  2. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  3. Proceedings of the Workshop on Applications of Distributed System Theory to the Control of Large Space Structures

    NASA Technical Reports Server (NTRS)

    Rodriguez, G. (Editor)

    1983-01-01

    Two general themes in the control of large space structures are addressed: control theory for distributed parameter systems and distributed control for systems requiring spatially-distributed multipoint sensing and actuation. Topics include modeling and control, stabilization, and estimation and identification.

  4. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system-level requirements: robustness, flexibility, reusability, and scalability. Corresponding to these four system-level requirements, four major challenges arise. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method, as an implementation of distributed intelligent control, has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent-to-agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, the approach balances the communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing the survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system designs of a simplified ship chilled water system and of a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environments, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  5. A Semantics-Based Information Distribution Framework for Large Web-Based Course Forum System

    ERIC Educational Resources Information Center

    Chim, Hung; Deng, Xiaotie

    2008-01-01

    We propose a novel data distribution framework for developing a large Web-based course forum system. In the distributed architectural design, each forum server is fully equipped with the ability to support some course forums independently. The forum servers collaborating with each other constitute the whole forum system. Therefore, the workload of…

  6. Proceedings of the Workshop on Large, Distributed, Parallel Architecture, Real-Time Systems Held in Alexandria, Virginia on 15-19 March 1993

    DTIC Science & Technology

    1993-07-01

    distributed system. Second, to support the development of scalable end-use applications that implement the mission critical control policies of the...implementation. These and other cogent reasons suggest two important rules for designing large, distributed, real-time systems: i) separate policies required...system design rules. The separation of system coordination and management policies and mechanisms allows for the "objectification" of the underlying

  7. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve large-scale complex computing problems by sharing large-scale computing resources. In a grid environment, we can realize a distributed and load-balanced intrusion detection system. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then describes how the characteristics of grid computing apply to a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it presents a distributed intrusion detection system based on the grid security model that reduces processing delay while maintaining detection rates.

  8. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large-scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult than for centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper describes systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations are addressed for specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large-scale distributed systems, the paper also describes how to analyze interactions between secondary functions through the use of channelization.

  9. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  10. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  11. Spatiotemporal stick-slip phenomena in a coupled continuum-granular system

    NASA Astrophysics Data System (ADS)

    Ecke, Robert

    In sheared granular media, stick-slip behavior is ubiquitous, especially at very small shear rates and weak drive coupling. The resulting slips are characteristic of natural phenomena such as earthquakes, as well as being a delicate probe of the collective dynamics of the granular system. In that spirit, we developed a laboratory experiment consisting of sheared elastic plates separated by a narrow gap filled with quasi-two-dimensional granular material (bi-dispersed nylon rods). We directly determine the spatial and temporal distributions of strain displacements of the elastic continuum over 200 spatial points located adjacent to the gap. Slip events can be divided into large system-spanning events and spatially distributed smaller events. The small events have a probability distribution of event moment consistent with M^{-3/2} power-law scaling and a Poisson-distributed recurrence time. Large events have a broad, log-normal moment distribution and a mean repetition time. As the applied normal force increases, there are fractionally more (fewer) large (small) events, and the large-event moment distribution broadens. The magnitude of the slip motion of the plates is well correlated with the root-mean-square displacements of the granular matter. Our results are consistent with mean-field descriptions of statistical models of earthquakes and avalanches. We further explore the high-speed dynamics of system events and also discuss the effective granular friction of the sheared layer. We find that large events result from stored elastic energy in the plates in this coupled granular-continuum system.
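    As a quick illustration of the small-event statistics quoted above, the sketch below draws synthetic moments from a pure M^{-3/2} power law and recovers the exponent with a log-log fit; the sample size and binning are arbitrary choices, not experimental parameters:

```python
# Synthetic check of M^(-3/2) small-event scaling: sample moments from a
# power law by inverse-CDF sampling, then fit the density slope on
# logarithmic bins. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
alpha, m_min = 1.5, 1.0
# For p(M) ~ M^(-alpha) with M >= m_min and alpha > 1:
u = rng.random(100_000)
moments = m_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

bins = np.logspace(0, 3, 40)
counts, edges = np.histogram(moments, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centers
mask = counts > 0
slope, _ = np.polyfit(np.log(centers[mask]), np.log(counts[mask]), 1)
print(f"fitted exponent: {slope:.2f} (expected -{alpha})")
```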

  12. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and the transfer of information, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to reduce the difficulty of finding and using data, reduce processing costs, and minimize incompatibility between data sources. Artificial intelligence (AI) techniques have been suggested to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem solving systems, AI problem solving paradigms, query processing, and distributed databases.

  13. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate damage from cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithms and technology of the system are mainly discussed. Plane visualization of a large-scale network system is realized using a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
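    The divide-and-conquer layout idea can be sketched with stock graph tooling. Below, networkx's modularity-based communities and spring_layout stand in for the paper's MLkP/CR partitioner and force-analysis distribution step, which are not available off the shelf; the graph and spacing are invented:

```python
# Divide-and-conquer layout sketch: partition the topology, lay out each
# part independently (parallelizable), then place the parts on a coarse
# grid. Not the paper's algorithms; generic substitutes throughout.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def divide_and_conquer_layout(G, spacing=3.0):
    parts = list(greedy_modularity_communities(G))
    pos = {}
    cols = max(1, int(len(parts) ** 0.5))
    for i, part in enumerate(parts):
        sub = G.subgraph(part)
        local = nx.spring_layout(sub, seed=42)     # per-part force layout
        ox, oy = (i % cols) * spacing, (i // cols) * spacing
        for node, (x, y) in local.items():         # offset into a grid cell
            pos[node] = (x + ox, y + oy)
    return pos

G = nx.barabasi_albert_graph(200, 2)   # synthetic network topology
print(len(divide_and_conquer_layout(G)), "nodes placed")
```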

  14. Energy Management of Smart Distribution Systems

    NASA Astrophysics Data System (ADS)

    Ansari, Bananeh

    Electric power distribution systems interface the end-users of electricity with the power grid. Traditional distribution systems are operated in a centralized fashion, with the distribution system owner or operator being the only decision maker. The management and control architecture of distribution systems needs to gradually transform to accommodate emerging smart grid technologies, distributed energy resources, and active electricity end-users or prosumers. This document concerns developing multi-task, multi-objective energy management schemes for: 1) commercial/large residential prosumers, and 2) the distribution system operator of a smart distribution system. The first part of this document describes a method of distributed energy management for multiple commercial/large residential prosumers. These prosumers not only consume electricity but also generate electricity using their rooftop solar photovoltaics systems. When photovoltaic generation is larger than local consumption, excess electricity is fed into the distribution system, creating a voltage rise along the feeder. The distribution system operator cannot tolerate a significant voltage rise. Energy storage (ES) can help the prosumers manage their electricity exchanges with the distribution system such that minimal voltage fluctuation occurs. The proposed distributed energy management scheme sizes and schedules each prosumer's ES to reduce the electricity bill and mitigate voltage rise along the feeder. The second part of this document focuses on emergency energy management and resilience assessment of a distribution system. The developed emergency energy management system uses available resources and redundancy to restore the distribution system's functionality fully or partially. The success of the restoration maneuver depends on how resilient the distribution system is. Engineering resilience terminology is used to evaluate the resilience of the distribution system. The proposed emergency energy management scheme, together with resilience assessment, increases the distribution system operator's preparedness for emergency events.
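    The voltage-rise mitigation described in the first part can be caricatured as a one-line scheduling rule: charge the battery whenever PV surplus would push feeder export past a cap. A minimal sketch with invented profiles and ratings (the dissertation's actual scheme also co-optimizes the electricity bill and ES sizing):

```python
# Greedy sketch: an energy-storage (ES) unit absorbs PV surplus above an
# export cap so the feeder sees less reverse flow. Profiles, cap, and
# battery ratings are invented examples.

def schedule_es(pv_kw, load_kw, export_cap_kw, es_kwh, es_kw, dt_h=1.0):
    soc, plan = 0.0, []
    for pv, load in zip(pv_kw, load_kw):
        surplus = pv - load                       # positive means export
        charge = 0.0
        if surplus > export_cap_kw:               # would violate the cap
            room = (es_kwh - soc) / dt_h          # headroom as power
            charge = min(surplus - export_cap_kw, es_kw, room)
            soc += charge * dt_h
        plan.append({"export": surplus - charge, "charge": charge, "soc": soc})
    return plan

pv   = [0, 2, 6, 9, 10, 8, 4, 1]                  # hypothetical day (kW)
load = [3, 3, 4, 4, 4, 4, 5, 6]
for step in schedule_es(pv, load, export_cap_kw=3.0, es_kwh=8.0, es_kw=4.0):
    print(step)
```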

  15. Resource Management for Distributed Parallel Systems

    NASA Technical Reports Server (NTRS)

    Neuman, B. Clifford; Rao, Santosh

    1993-01-01

    Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.

  16. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed, low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  17. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology, environment monitoring, and sensor networks to power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require ever more computationally demanding methods for their analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved by the MATLAB toolbox. The stabilisability of each node dynamic is a sufficient assumption to design a globally stabilising distributed control. The proposed approach improves some of the existing LMI-based results on MASs by both overcoming their computational limits and extending the applicative scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, which is a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent solving LMIs may be shared among processors located at the network nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
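    At the heart of such LMI conditions is a feasibility problem of the form A'P + PA < 0 with P > 0. The paper solves its coupled, network-level versions with MATLAB's LMI tooling; a minimal single-node analogue in Python with cvxpy (an assumed substitute, not the paper's code) looks like this:

```python
# Minimal single-subsystem analogue of the LMI step: find P > 0 with
# A'P + PA < 0, certifying stabilisability of one node dynamic.
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])         # example stable node dynamic
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                  # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov inequality
cp.Problem(cp.Minimize(cp.trace(P)), constraints).solve()

print("Lyapunov matrix P:\n", P.value)
```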

  18. WATERBORNE OUTBREAKS CAUSED BY DISTRIBUTION SYSTEM DEFICIENCIES IN THE UNITED STATES

    EPA Science Inventory


    Distribution system contamination has caused a significant number of waterborne outbreaks in the United States. The number of illnesses in a distribution-system outbreak can be quite large, and illness can be severe resulting in hospitalization and sometimes death. During t...

  19. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems that detects and classifies interesting local and global events and disseminates the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
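    The subscription-plus-filtering idea reduces to a small publish/subscribe core: consumers register predicates, and only matching events leave the producer side. A toy sketch, with event fields and handlers invented for illustration:

```python
# Minimal subscription-based event filter: management clients register
# predicates, and only matching events are forwarded, cutting the
# monitoring traffic that reaches each consumer.

class EventFilter:
    def __init__(self):
        self._subs = []          # list of (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        """Register a filter predicate with its delivery endpoint."""
        self._subs.append((predicate, handler))

    def publish(self, event):
        for predicate, handler in self._subs:
            if predicate(event):                 # drop non-matching traffic early
                handler(event)

f = EventFilter()
f.subscribe(lambda e: e["severity"] >= 3, lambda e: print("alert:", e))
f.publish({"node": "n7", "severity": 1})   # filtered out
f.publish({"node": "n2", "severity": 4})   # delivered
```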

  20. Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brancucci Martinez-Anido, C.; Hodge, B. M.

    2014-09-01

    This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).

  1. Shared versus distributed memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.

    1991-01-01

    The question of whether multiprocessors should have shared or distributed memory has attracted a great deal of attention. Some researchers argue strongly for building distributed memory machines, while others argue just as strongly for programming shared memory multiprocessors. A great deal of research is underway on both types of parallel systems. Special emphasis is placed on systems with a very large number of processors for computation-intensive tasks, and research and implementation trends are considered. It appears that the two types of systems will likely converge to a common form for large-scale multiprocessors.

  2. Design of Availability-Dependent Distributed Services in Large-Scale Uncooperative Settings

    ERIC Educational Resources Information Center

    Morales, Ramses Victor

    2009-01-01

    Thesis Statement: "Availability-dependent global predicates can be efficiently and scalably realized for a class of distributed services, in spite of specific selfish and colluding behaviors, using local and decentralized protocols". Several types of large-scale distributed systems spanning the Internet have to deal with availability variations…

  3. Applications of the Theory of Distributed and Real Time Systems to the Development of Large-Scale Timing Based Systems.

    DTIC Science & Technology

    1996-04-01

    time systems. The focus is on the study of 'building-blocks' for the construction of reliable and efficient systems. Our work falls into three...Members of MIT's Theory of Distributed Systems group have continued their work on modelling, designing, verifying and analyzing distributed and real

  4. Network placement optimization for large-scale distributed system

    NASA Astrophysics Data System (ADS)

    Ren, Yu; Liu, Fangfang; Fu, Yunxia; Zhou, Zheng

    2018-01-01

    The network geometry strongly influences the performance of a distributed system, i.e., its coverage capability, measurement accuracy, and overall cost. Network placement optimization therefore represents an urgent issue in distributed measurement, even in large-scale metrology. This paper presents an effective computer-assisted network placement optimization procedure for large-scale distributed systems and illustrates it with the example of a multi-tracker system. To obtain an optimal placement, the coverage capability and the coordinate uncertainty of the network are quantified. A placement optimization objective function is then developed in terms of coverage capability, measurement accuracy, and overall cost, and a novel grid-based encoding approach for the genetic algorithm is proposed. The network placement is thus optimized by a global rough search followed by a local detailed search, with the obvious advantage that no specific initial placement is needed. Finally, a specific application illustrates that this placement optimization procedure can simulate the measurement results of a specific network and design the optimal placement efficiently.
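    The grid-based encoding is easy to picture: each gene corresponds to a candidate grid cell, and a genome marks which cells hold an instrument. A toy genetic search over such bit-strings, scoring placements by how many targets at least two trackers can see (all geometry, ranges, and GA settings are invented):

```python
# Toy grid-encoded genetic search for instrument placement.
import random
random.seed(1)

GRID = [(x, y) for x in range(5) for y in range(5)]       # candidate sites
TARGETS = [(1.5, 2.0), (3.2, 3.7), (0.5, 4.1), (4.0, 1.0)]
RANGE2, N_UNITS = 2.5 ** 2, 3

def fitness(genome):
    sites = [g for g, bit in zip(GRID, genome) if bit]
    if len(sites) != N_UNITS:
        return -1.0                                       # infeasible genome
    # Count targets covered by at least two trackers (triangulation needs 2+).
    return sum(
        sum((sx - tx) ** 2 + (sy - ty) ** 2 <= RANGE2 for sx, sy in sites) >= 2
        for tx, ty in TARGETS)

def random_genome():
    g = [0] * len(GRID)
    for i in random.sample(range(len(GRID)), N_UNITS):
        g[i] = 1
    return g

pop = [random_genome() for _ in range(60)]
for _ in range(100):                                      # evolve
    pop.sort(key=fitness, reverse=True)
    parents, children = pop[:20], []
    for _ in range(40):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(GRID))
        child = a[:cut] + b[cut:]                         # one-point crossover
        child[random.randrange(len(GRID))] ^= 1           # point mutation
        children.append(child)
    pop = parents + children
print("targets doubly covered:", fitness(max(pop, key=fitness)))
```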

  5. Enterprise PACS and image distribution.

    PubMed

    Huang, H K

    2003-01-01

    Around the world, driven by the need to improve operational efficiency and deliver more cost-effective healthcare, many large-scale healthcare enterprises have been formed. Each of these enterprises groups hospitals, medical centers, and clinics together as one enterprise healthcare network. The management of these enterprises recognizes the importance of using PACS and image distribution as a key technology for cost-effective healthcare delivery at the enterprise level. As a result, many large-scale enterprise-level PACS/image distribution pilot studies, as well as full designs and implementations, are underway. The purpose of this paper is to give readers an overall view of the current status of enterprise PACS and image distribution. It reviews three large-scale enterprise PACS/image distribution systems in the USA, Germany, and South Korea. The concept of enterprise-level PACS/image distribution and its characteristics and ingredients are then discussed. Business models for enterprise-level implementation offered by the private medical imaging and system integration industry are highlighted. One current system under development, an enterprise-level chest tuberculosis (TB) screening system for healthcare in Hong Kong, is described in detail. Copyright 2002 Elsevier Science Ltd.

  6. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network, and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments in view, such as those at the LHC, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose, multi-layer system.

  7. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed, coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems using a vector Lyapunov functions approach. In this formulation, the large-scale system is decomposed into a network of interacting subsystems and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  8. Parallel/distributed direct method for solving linear systems

    NASA Technical Reports Server (NTRS)

    Lin, Avi

    1990-01-01

    A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) For large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) It is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) It can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) This set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
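    A classic way to split a direct solve across processors, in the spirit of features (2) and (3) above though not necessarily the paper's exact scheme, is block elimination via a Schur complement: each block factors locally, and only a small interface system couples the blocks. A two-block sketch with random data:

```python
# Two-block Schur-complement solve: each block could live on its own
# processor, with only the small coupled system solved jointly.
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 4, 3
A11 = rng.random((n1, n1)) + n1 * np.eye(n1)    # diagonally dominant blocks
A22 = rng.random((n2, n2)) + n2 * np.eye(n2)
A12, A21 = rng.random((n1, n2)), rng.random((n2, n1))
b1, b2 = rng.random(n1), rng.random(n2)

# Processor 1 factors A11 locally; only S and a reduced rhs couple blocks.
S = A22 - A21 @ np.linalg.solve(A11, A12)        # Schur complement
x2 = np.linalg.solve(S, b2 - A21 @ np.linalg.solve(A11, b1))
x1 = np.linalg.solve(A11, b1 - A12 @ x2)

A = np.block([[A11, A12], [A21, A22]])
print(np.allclose(A @ np.concatenate([x1, x2]), np.concatenate([b1, b2])))
```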

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schauder, C.

    This subcontract report was completed under the auspices of the NREL/SCE High-Penetration Photovoltaic (PV) Integration Project, which is co-funded by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) and the California Solar Initiative (CSI) Research, Development, Demonstration, and Deployment (RD&D) program funded by the California Public Utility Commission (CPUC) and managed by Itron. This project is focused on modeling, quantifying, and mitigating the impacts of large utility-scale PV systems (generally 1-5 MW in size) that are interconnected to the distribution system. This report discusses the concerns utilities have when interconnecting large PV systems that interconnect using PV inverters (a specific application of frequency converters). Additionally, a number of capabilities of PV inverters are described that could be implemented to mitigate the distribution system-level impacts of high-penetration PV integration. Finally, the main issues that need to be addressed to ease the interconnection of large PV systems to the distribution system are presented.

  10. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.
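    A natural way to check the log-normality hypothesis tested above is to fit a log-normal distribution and apply a goodness-of-fit test. The sketch below uses invented field areas, not LACIE data, and ignores the bias introduced by fitting parameters before testing:

```python
# Fit a log-normal to (synthetic) field areas and run a KS test against
# the fitted distribution. Data here are invented, not LACIE ground truth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
areas = rng.gamma(shape=2.0, scale=8.0, size=500)   # pretend field areas (acres)

shape, loc, scale = stats.lognorm.fit(areas, floc=0)
ks = stats.kstest(areas, "lognorm", args=(shape, loc, scale))
print(f"KS statistic {ks.statistic:.3f}, p-value {ks.pvalue:.3g}")
# A small p-value would reject log-normality, as the study found for most crops.
```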

  11. Data Sharing in DHT Based P2P Systems

    NASA Astrophysics Data System (ADS)

    Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia

    The evolution of peer-to-peer (P2P) systems triggered the building of large-scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization, and data privacy in a major class of P2P systems: those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
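    The core data-placement mechanism behind a DHT can be sketched with a consistent-hashing ring: node identifiers and keys share one hash space, and each key is stored on the first node clockwise from its hash, so churn only remaps keys near the joining or leaving node. A minimal sketch, with node names and key strings invented:

```python
# Minimal consistent-hashing ring, the placement idea behind DHT-based
# P2P storage (real DHTs add routing tables, replication, etc.).
import bisect
import hashlib

def h(s: str) -> int:
    return int(hashlib.sha1(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self._ring = sorted((h(n), n) for n in nodes)
        self._hashes = [hv for hv, _ in self._ring]

    def lookup(self, key: str) -> str:
        # First node clockwise from the key's hash, wrapping around.
        i = bisect.bisect(self._hashes, h(key)) % len(self._ring)
        return self._ring[i][1]

ring = Ring([f"node{i}" for i in range(8)])
for key in ("alice.txt", "bob.txt", "song.mp3"):
    print(key, "->", ring.lookup(key))
```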

  12. Middleware for big data processing: test results

    NASA Astrophysics Data System (ADS)

    Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.

    2017-12-01

    Dealing with large volumes of data is resource-consuming work, which is more and more often delegated not to a single computer but to a whole distributed computing system. As the number of computers in a distributed system increases, so does the effort put into effective management of the system. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when a particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware that works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt the distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.

  13. Advancing Underwater Acoustic Communication for Autonomous Distributed Networks via Sparse Channel Sensing, Coding, and Navigation Support

    DTIC Science & Technology

    2012-09-30

    Estimation Methods for Underwater OFDM 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. 6) Asynchronous Multiuser...multi-input multi-output (MIMO) OFDM is also pursued, where it is shown that the proposed hybrid initialization enables drastically improved receiver...are investigated. 5) Two Iterative Receivers for Distributed MIMO-OFDM with Large Doppler Deviations. This work studies a distributed system with

  14. Information Power Grid Posters

    NASA Technical Reports Server (NTRS)

    Vaziri, Arsi

    2003-01-01

    This document is a summary of the accomplishments of the Information Power Grid (IPG). Grids are an emerging technology that provide seamless and uniform access to the geographically dispersed computational, data storage, networking, instrument, and software resources needed for solving large-scale scientific and engineering problems. The goal of the NASA IPG is to use NASA's remotely located computing and data system resources to build distributed systems that can address problems that are too large or complex for a single site. The accomplishments outlined in this poster presentation are: access to distributed data, IPG heterogeneous computing, integration of a large-scale computing node into a distributed environment, remote access to high-data-rate instruments, and an exploratory grid environment.

  15. Earthquake hazards to domestic water distribution systems in Salt Lake County, Utah

    USGS Publications Warehouse

    Highland, Lynn M.

    1985-01-01

    A magnitude-7.5 earthquake occurring along the central portion of the Wasatch Fault, Utah, may cause significant damage to Salt Lake County's domestic water system. This system is composed of water treatment plants, aqueducts, distribution mains, and other facilities that are vulnerable to ground shaking, liquefaction, fault movement, and slope failures. Recent investigations into surface faulting, landslide potential, and earthquake intensity provide basic data for evaluating the potential earthquake hazards to water-distribution systems in the event of a large earthquake. Water supply system components may be vulnerable to one or more earthquake-related effects, depending on site geology and topography. Case studies of water-system damage by recent large earthquakes in Utah and in other regions of the United States offer valuable insights for evaluating water system vulnerability to earthquakes.

  16. An implementation of the distributed programming structural synthesis system (PROSSS)

    NASA Technical Reports Server (NTRS)

    Rogers, J. L., Jr.

    1981-01-01

    A method is described for implementing a flexible software system that combines large, complex programs with small, user-supplied, problem-dependent programs and that distributes their execution between a mainframe and a minicomputer. The Programming Structural Synthesis System (PROSSS) was the specific software system considered. The results of such distributed implementation are flexibility of the optimization procedure organization and versatility of the formulation of constraints and design variables.

  17. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetrations of PV systems to transmission network faults will, in the future, determine overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning those parameters appears to be important for accurately estimating the partial loss of distributed PV systems in bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.

  18. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
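    The baseline operation that such compensation builds on is fitting a Gaussian to each image column and taking its mean as the stripe center. A self-contained sketch on a synthetic stripe (image size, noise level, and initial guesses are invented; the paper's multi-factor correction is not reproduced):

```python
# Per-column Gaussian fit for laser-stripe center extraction on a
# synthetic image.
import numpy as np
from scipy.optimize import curve_fit

def gauss(y, a, mu, sigma):
    return a * np.exp(-(y - mu) ** 2 / (2 * sigma ** 2))

rows, cols = 64, 5
rng = np.random.default_rng(0)
y = np.arange(rows, dtype=float)
true_centers = 30 + 2 * np.sin(np.linspace(0, 3, cols))
img = np.stack([gauss(y, 200, c, 3) for c in true_centers], axis=1)
img += rng.normal(0, 5, img.shape)                  # sensor noise

centers = []
for j in range(cols):
    col = img[:, j]
    p0 = (col.max(), float(np.argmax(col)), 3.0)    # initial guess
    (_, mu, _), _ = curve_fit(gauss, y, col, p0=p0)
    centers.append(mu)

print(np.round(centers, 2), "vs true", np.round(true_centers, 2))
```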

  19. Storage and distribution of pathology digital images using integrated web-based viewing systems.

    PubMed

    Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G

    2002-05-01

    Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs in routine pathology reports. To describe the multiple technical and logistical challenges involved in the integration of the various components needed for the development of a system for integrated Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet. The database allows for retrieval of images by patient demographics or by SNOMED code information. The Intranet of a large health system accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any of the workstations of the health system that have authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allows us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.

  20. On Predictability of System Anomalies in Real World

    DTIC Science & Technology

    2011-08-01

    distributed system SETI@home [44]. Different from the above work, this work focuses on quantifying the predictability of real-world system anomalies. V...J.-M. Vincent, and D. Anderson, “Mining for statistical models of availability in large-scale distributed systems: An empirical study of SETI@home,” in Proc. of MASCOTS, Sept. 2009.

  1. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  2. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale access of distributed power can relieve current environmental pressures but, at the same time, increases the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power is analyzed, and an improved particle swarm optimization algorithm (IPSO), which improves the learning factors and the inertia weight, is proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can substantially reduce system network losses and improve the economic performance of system operation with distributed generation.
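    The two named modifications are commonly realized as a linearly decreasing inertia weight and time-varying acceleration coefficients. A generic sketch of that IPSO variant, minimizing a stand-in test function rather than the paper's network-loss objective (all schedules and constants are assumptions):

```python
# PSO with linearly decreasing inertia and time-varying learning factors.
import numpy as np

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w  = 0.9 - 0.5 * t / iters          # inertia: 0.9 -> 0.4
        c1 = 2.5 - 2.0 * t / iters          # cognitive factor shrinks
        c2 = 0.5 + 2.0 * t / iters          # social factor grows
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda z: float(np.sum(z ** 2))    # stand-in for the planning cost
print(pso(sphere, dim=4))
```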

  3. Time-dependent breakdown of fiber networks: Uncertainty of lifetime

    NASA Astrophysics Data System (ADS)

    Mattsson, Amanda; Uesaka, Tetsu

    2017-05-01

    Materials often fail when subjected to stresses over a prolonged period. The time to failure, also called the lifetime, is known to exhibit large variability for many materials, particularly brittle and quasibrittle ones. For example, the coefficient of variation can reach 100% or more. The lifetime distribution is highly skewed toward zero, implying a large number of premature failures. This behavior contrasts with that of normal strength, which shows a variation of only 4%-10% and a nearly bell-shaped distribution. The fundamental cause of this large and unique variability of lifetime is not well understood because of the complex interplay between stochastic processes taking place on the molecular level and the hierarchical, disordered structure of the material. We have constructed fiber network models, both regular and random, as a paradigm for general material structures. With such networks, we have performed Monte Carlo simulations of creep failure to establish explicit relationships among fiber characteristics, network structures, system size, and lifetime distribution. We found that fiber characteristics have large, sometimes dominating, influences on the lifetime variability of a network. Among the factors investigated, geometrical disorders of the network were found to be essential to explain the large variability and highly skewed shape of the lifetime distribution. With increasing network size, the distribution asymptotically approaches a double-exponential form. The implication of this result is that so-called "infant mortality," which is often predicted by the Weibull approximation of the lifetime distribution, may not exist for a large system.
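    The Monte Carlo machinery can be conveyed with a much simpler creep model than the paper's fiber networks: an equal-load-sharing fiber bundle in which each fiber fails at a stress-dependent exponential rate. The sketch below (all parameters invented) shows how lifetime samples, and hence the mean and coefficient of variation, are generated:

```python
# Monte Carlo creep failure of an equal-load-sharing fiber bundle:
# surviving fibers share the load, each fails at a rate growing
# exponentially with its stress, and the bundle lifetime is the time
# the last fiber breaks. A toy model, not the paper's network model.
import numpy as np

def bundle_lifetime(n_fibers, rng, beta=5.0):
    alive, load, t = n_fibers, 1.0, 0.0
    while alive > 0:
        stress = load * n_fibers / alive          # load per surviving fiber
        rate = alive * np.exp(beta * (stress - 1.0))
        t += rng.exponential(1.0 / rate)          # time to next fiber failure
        alive -= 1
    return t

rng = np.random.default_rng(0)
lifetimes = np.array([bundle_lifetime(50, rng) for _ in range(2000)])
cv = lifetimes.std() / lifetimes.mean()
print(f"mean lifetime {lifetimes.mean():.3g}, CV {cv:.0%}")
```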

  4. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require the aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g., whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  5. Centralized versus distributed propulsion

    NASA Technical Reports Server (NTRS)

    Clark, J. P.

    1982-01-01

    The functions and requirements of auxiliary propulsion systems are reviewed. None of the three major tasks (attitude control, stationkeeping, and shape control) can be performed by a collection of thrusters at a single central location. If a centralized system is defined as a collection of separated clusters, made up of the minimum number of propulsion units, then such a system can provide attitude control and stationkeeping for most vehicles. A distributed propulsion system is characterized by more numerous propulsion units in a regularly distributed arrangement. Various proposed large space systems are reviewed and it is concluded that centralized auxiliary propulsion is best suited to vehicles with a relatively rigid core. These vehicles may carry a number of flexible or movable appendages. A second group, consisting of one or more large flexible flat plates, may need distributed propulsion for shape control. There is a third group, consisting of vehicles built up from multiple shuttle launches, which may be forced into a distributed system because of the need to add additional propulsion units as the vehicles grow. The effects of distributed propulsion on a beam-like structure were examined. The deflection of the structure under both translational and rotational thrusts is shown as a function of the number of equally spaced thrusters. When two thrusters only are used it is shown that location is an important parameter. The possibility of using distributed propulsion to achieve minimum overall system weight is also examined. Finally, an examination of the active damping by distributed propulsion is described.

  6. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the globally optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the globally optimal estimate, which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
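
    One standard route to this kind of neighborhood-only computation (a sketch under assumptions, not necessarily the authors' algorithm) is for each sub-system to run average consensus on its local normal-equation terms; since the consensus limits are the global averages, every node ends up solving the same global problem:

```python
import numpy as np

# Distributed WLS via average consensus: node i holds y_i = A_i x + v_i and
# its local terms H_i = A_i'A_i and b_i = A_i'y_i (W_i = I for simplicity).
# After consensus mixing on a ring, each node solves the global problem
# locally. All sizes and the topology are toy choices.

rng = np.random.default_rng(0)
n, nodes = 3, 5
x_true = rng.normal(size=n)
A = [rng.normal(size=(2, n)) for _ in range(nodes)]
y = [Ai @ x_true + 0.1 * rng.normal(size=2) for Ai in A]

H = [Ai.T @ Ai for Ai in A]
b = [Ai.T @ yi for Ai, yi in zip(A, y)]

# Ring topology: each node mixes only with its two neighbors.
nbrs = {i: [(i - 1) % nodes, (i + 1) % nodes] for i in range(nodes)}
for _ in range(200):
    H = [Hi + 0.3 * sum(H[j] - Hi for j in nbrs[i]) for i, Hi in enumerate(H)]
    b = [bi + 0.3 * sum(b[j] - bi for j in nbrs[i]) for i, bi in enumerate(b)]

x_hat = np.linalg.solve(H[0], b[0])        # identical estimate at every node
print(np.round(x_hat - x_true, 3))
```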

  7. A gallium-arsenide digital phase shifter for clock and control signal distribution in high-speed digital systems

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.

    1992-01-01

    The design, implementation, testing, and applications of a gallium-arsenide digital phase shifter and fan-out buffer are described. The integrated circuit provides a method for adjusting the phase of high-speed clock and control signals in digital systems, without the need for pruning cables, multiplexing between cables of different lengths, delay lines, or similar techniques. The phase of signals distributed with the described chip can be dynamically adjusted in eight different steps of approximately 60 ps per step. The IC also serves as a fan-out buffer and provides 12 in-phase outputs. The chip is useful for distributing high-speed clock and control signals in synchronous digital systems, especially if components are distributed over a large physical area or if there is a large number of components.

  8. Computer-generated forces in distributed interactive simulation

    NASA Astrophysics Data System (ADS)

    Petty, Mikel D.

    1995-04-01

    Distributed Interactive Simulation (DIS) is an architecture for building large-scale simulation models from a set of independent simulator nodes communicating via a common network protocol. DIS is most often used to create a simulated battlefield for military training. Computer Generated Forces (CGF) systems control large numbers of autonomous battlefield entities in a DIS simulation using computer equipment and software rather than humans in simulators. CGF entities serve as both enemy forces and supplemental friendly forces in a DIS exercise. Research into various aspects of CGF systems is ongoing. Several CGF systems have been implemented.

  9. Main Difference with Formed Process of the Moon and Earth Minerals and Fluids

    NASA Astrophysics Data System (ADS)

    Kato, T.; Miura, Y.

    2018-04-01

    Minerals show a large, global distribution in the Earth system but small, localized formation on the Moon. Fluid water is formed with the same size and distribution on Earth and the Moon, based on their respective body systems.

  10. Constraints and System Primitives in Achieving Multilevel Security in Real Time Distributed System Environment

    DTIC Science & Technology

    1994-04-18

    because they represent a microkernel and monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating... the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution since monolithic operating... are provided by user-level processes, in contrast to standard UNIX, which has a large monolithic kernel that pro... Distributed Operating

  11. Distributed Issues for Ada Real-Time Systems

    DTIC Science & Technology

    1990-07-23

    Distributed Issues for Ada Real-Time Systems. Contract MDA 903-87-C-0056. Thomas E. Griest. ...considerations. Adding to the problem of distributed real-time systems is the issue of maintaining a common sense of time among all of the processors... because someone is waiting for the final output of a very large set of computations. However, in real-time systems, consistent meeting of short-term

  12. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t)·w_i(t) + a·w̄(t) − b·w_i(t)·w̄(t) is studied by computer simulations. The variables w_i, i = 1,…,N, are the individual system components and w̄(t) = (1/N)∑_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(−1−α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
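
    The quoted map is simple enough to simulate directly. The sketch below draws λ(t) independently for each component at each step (one common reading of the model) and probes the Pareto tail with a Hill estimator; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Direct simulation of w_i(t+1) = lam(t) w_i(t) + a*wbar(t) - b w_i(t) wbar(t).
# Illustrative parameters; the mean field settles near wbar ~ a/b.

rng = np.random.default_rng(1)
N, a, b, steps = 1000, 5e-4, 1e-4, 20000
w = np.ones(N)
for _ in range(steps):
    lam = rng.uniform(0.9, 1.1, size=N)    # lambda drawn anew each step
    wbar = w.mean()
    w = lam * w + a * wbar - b * w * wbar

# A Pareto tail P(w) ~ w^(-1-alpha) can be probed with a Hill estimator
# over the k largest components.
ws, k = np.sort(w)[::-1], 100
alpha_hat = 1.0 / np.mean(np.log(ws[:k] / ws[k]))
print(f"Hill estimate of the tail index alpha: {alpha_hat:.2f}")
```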

  13. Operating tool for a distributed data and information management system

    NASA Astrophysics Data System (ADS)

    Reck, C.; Mikusch, E.; Kiemle, S.; Wolfmüller, M.; Böttcher, M.

    2002-07-01

    The German Remote Sensing Data Center has developed the Data Information and Management System (DIMS), which provides multi-mission ground system services for earth observation product processing, archiving, ordering and delivery. DIMS successfully uses the newest technologies within its services. This paper presents the solution taken to simplify operation tasks for this large and distributed system.

  14. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that harness the computing power of millions of computers on the Internet and use them to run large-scale environmental simulations and models that serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to those of native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors may volunteer their computer resources to run advanced hydrological models and simulations. A web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational chunks. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
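
    The relational queue the abstract alludes to can be sketched in a few lines: model-run chunks are leased to volunteer nodes and results written back. Table and column names below are invented for illustration, not the platform's actual schema.

```python
import sqlite3, json

# Minimal server-side task queue for volunteer nodes (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tasks (
    id INTEGER PRIMARY KEY, params TEXT,
    status TEXT DEFAULT 'queued', result TEXT)""")
db.executemany("INSERT INTO tasks (params) VALUES (?)",
               [(json.dumps({"subbasin": i}),) for i in range(4)])

def lease_task():
    """Hand one queued task to a volunteer node and mark it running."""
    row = db.execute(
        "SELECT id, params FROM tasks WHERE status='queued' LIMIT 1").fetchone()
    if row:
        db.execute("UPDATE tasks SET status='running' WHERE id=?", (row[0],))
    return row

def submit_result(task_id, result):
    db.execute("UPDATE tasks SET status='done', result=? WHERE id=?",
               (json.dumps(result), task_id))

task_id, params = lease_task()
submit_result(task_id, {"runoff_mm": 12.7})
print(db.execute("SELECT * FROM tasks WHERE id=?", (task_id,)).fetchone())
```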

  15. Lexical Problems in Large Distributed Information Systems.

    ERIC Educational Resources Information Center

    Berkovich, Simon Ya; Shneiderman, Ben

    1980-01-01

    Suggests a unified concept of a lexical subsystem as part of an information system to deal with lexical problems in local and network environments. The linguistic and control functions of the lexical subsystems in solving problems for large computer systems are described, and references are included. (Author/BK)

  16. Novel Directional Protection Scheme for the FREEDM Smart Grid System

    NASA Astrophysics Data System (ADS)

    Sharma, Nitish

    This research primarily deals with the design and validation of the protection system for a large-scale meshed distribution system. The large scale system simulation (LSSS) is a system-level PSCAD model which is used to validate component models for different time-scale platforms and to provide a virtual testing platform for the Future Renewable Electric Energy Delivery and Management (FREEDM) system. It is also used to validate the cases of power system protection, renewable energy integration and storage, and load profiles. The protection of the FREEDM system against any abnormal condition is one of the important tasks. The addition of distributed generation and the power-electronic-based solid state transformer adds to the complexity of the protection. The FREEDM loop system has a fault current limiter, and in addition the Solid State Transformer (SST) limits the fault current to 2.0 per unit. Former students at ASU developed a protection scheme using fiber-optic cable; however, during the NSF-FREEDM site visit, the National Science Foundation (NSF) team regarded the system as unsuitable for long distances. Hence, a new protection scheme based on wireless communication is presented in this thesis. The use of wireless communication is extended to protect the large-scale meshed distributed generation from any fault. The trip signal generated by the pilot protection system is used to trigger the FID (fault isolation device), an electronic circuit breaker, to open. The trip signal must also be received and accepted by the SST, which must block its operation immediately. A comprehensive protection system for the large-scale meshed distribution system has been developed in PSCAD with the ability to quickly detect faults. The validation of the protection system is performed by building a hardware model using commercial relays at the ASU power laboratory.

  17. Exergy Analysis of the Cryogenic Helium Distribution System for the Large Hadron Collider (LHC)

    NASA Astrophysics Data System (ADS)

    Claudet, S.; Lebrun, Ph.; Tavian, L.; Wagner, U.

    2010-04-01

    The Large Hadron Collider (LHC) at CERN features the world's largest helium cryogenic system, spreading over the 26.7 km circumference of the superconducting accelerator. With a total equivalent capacity of 145 kW at 4.5 K, including 18 kW at 1.8 K, the LHC refrigerators produce an unprecedented exergetic load, which must be distributed efficiently to the magnets in the tunnel over the 3.3 km length of each of the eight independent sectors of the machine. We recall the main features of the LHC cryogenic helium distribution system at different temperature levels and present its exergy analysis, enabling us to quantify second-principle efficiency and to identify the main remaining sources of irreversibility.

  18. Modeling Multiple Human-Automation Distributed Systems using Network-form Games

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume

    2012-01-01

    The paper describes, at a high level, the network-form game framework (based on Bayes nets and game theory), which can be used to model and analyze safety issues in large, distributed, mixed human-automation systems such as NextGen.

  19. On distributed wavefront reconstruction for large-scale adaptive optics systems.

    PubMed

    de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel

    2016-05-01

    The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.

  20. On the Path to SunShot. Emerging Issues and Challenges in Integrating Solar with the Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Broderick, Robert; Mather, Barry

    2016-05-01

    This report analyzes distribution-integration challenges, solutions, and research needs in the context of distributed generation from PV (DGPV) deployment to date and the much higher levels of deployment expected with achievement of the U.S. Department of Energy's SunShot targets. Recent analyses have improved estimates of the DGPV hosting capacities of distribution systems. This report uses these results to statistically estimate a minimum DGPV hosting capacity for the contiguous United States of approximately 170 GW with traditional inverters and without distribution system modifications. This hosting capacity roughly doubles if advanced inverters are used to manage local voltage, and additional minor, low-cost changes could further increase these levels substantially. Key to achieving these deployment levels at minimum cost is siting DGPV based on local hosting capacities, suggesting opportunities for regulatory, incentive, and interconnection innovation. Already, pre-computed hosting capacity is beginning to expedite DGPV interconnection requests and installations in select regions; however, realizing SunShot-scale deployment will require further improvements to DGPV interconnection processes, standards and codes, and compensation mechanisms so they embrace the contributions of DGPV to system-wide operations. SunShot-scale DGPV deployment will also require unprecedented coordination of the distribution and transmission systems. This includes harnessing DGPV's ability to relieve congestion and reduce system losses by generating closer to loads; minimizing system operating costs and reserve deployments through improved DGPV visibility; developing communication and control architectures that incorporate DGPV into system operations; providing frequency response, transient stability, and synthesized inertia with DGPV in the event of large-scale system disturbances; and potentially managing reactive power requirements due to large-scale deployment of advanced inverter functions. Finally, additional local and system-level value could be provided by integrating DGPV with energy storage and 'virtual storage,' which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Together, continued innovation across this rich distribution landscape can enable the very high deployment levels envisioned by SunShot.

  1. Evaluation of a stream channel-type system for southeast Alaska.

    Treesearch

    M.D. Bryant; P.E. Porter; S.J. Paustian

    1991-01-01

    Nine channel types within a hierarchical channel-type classification system (CTCS) were surveyed to determine relations between salmonid densities and species distribution, and channel type. Two other habitat classification systems and the amount of large woody debris also were compared to species distribution and salmonid densities, and to stream channel types....

  2. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015, Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL... "The Mahali project," Communications Magazine, vol. 52, pp. 111-133, Aug 2014.

  3. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed to support distributions. Since the performance of data-parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data-parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy be flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
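
    The core mapping such distribution operators implement can be sketched compactly (in Python here rather than Chapel): a block distribution translates a global index into an owning processor and a local offset, so callers keep a single logical point of access while storage is fragmented. Names are illustrative.

```python
# Block distribution of a 1-D index space across processors: each global
# index maps to (owning processor, index within that processor's block).

class BlockDistribution:
    def __init__(self, n_elements, n_procs):
        self.n = n_elements
        self.block = -(-n_elements // n_procs)   # ceiling division

    def locate(self, i):
        """Global index -> (owning processor, index within its block)."""
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i // self.block, i % self.block

dist = BlockDistribution(n_elements=10, n_procs=3)   # blocks of size 4
print([dist.locate(i) for i in (0, 3, 4, 9)])
# [(0, 0), (0, 3), (1, 0), (2, 1)]
```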

  4. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.

  5. Distributed weighted least-squares estimation with fast convergence for large-scale systems☆

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the globally optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the globally optimal estimate, which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976

  6. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, in projecting these data with reasonable speed and accuracy. Current single-threaded methods can suffer from one of two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods with distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at low cost, providing access to supercomputer-class technology. Combining these techniques may make it possible to use large amounts of geographic data in time-critical situations.
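
    The divide-and-distribute idea can be sketched with a parallel reprojection of a chunked point set. A spherical-Mercator forward projection stands in below for the real (more involved) datum transformations; all sizes are illustrative.

```python
import numpy as np
from multiprocessing import Pool

# Split a large coordinate set into chunks and reproject chunks in parallel.

R = 6378137.0  # sphere radius in meters (illustrative)

def project_chunk(lonlat):
    lon = np.radians(lonlat[:, 0])
    lat = np.radians(lonlat[:, 1])
    # Spherical Mercator forward projection.
    return np.column_stack([R * lon,
                            R * np.log(np.tan(np.pi / 4 + lat / 2))])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.uniform(-180, 180, 100_000),
                           rng.uniform(-85, 85, 100_000)])
    with Pool(4) as pool:
        xy = np.vstack(pool.map(project_chunk, np.array_split(pts, 16)))
    print(xy.shape)
```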

  7. Universal distribution of component frequencies in biological and technological systems

    PubMed Central

    Pang, Tin Yau; Maslov, Sergei

    2013-01-01

    Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195

  8. Drude weight fluctuations in many-body localized systems

    NASA Astrophysics Data System (ADS)

    Filippone, Michele; Brouwer, Piet W.; Eisert, Jens; von Oppen, Felix

    2016-11-01

    We numerically investigate the distribution of Drude weights D of many-body states in disordered one-dimensional interacting electron systems across the transition to a many-body localized phase. Drude weights are proportional to the spectral curvatures induced by magnetic fluxes in mesoscopic rings. They offer a method to relate the transition to the many-body localized phase to transport properties. In the delocalized regime, we find that the Drude weight distribution at a fixed disorder configuration agrees well with the random-matrix-theory prediction P(D) ∝ (γ² + D²)^(−3/2), although the distribution width γ strongly fluctuates between disorder realizations. A crossover is observed towards a distribution with different large-D asymptotics deep in the many-body localized phase, which however differs from the commonly expected Cauchy distribution. We show that the average distribution width ⟨γ⟩, rescaled by LΔ, with Δ the average level spacing in the middle of the spectrum and L the system size, is an efficient probe of the many-body localization transition, as it increases (vanishes) exponentially in the delocalized (localized) phase.
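
    The quoted random-matrix form is easy to work with numerically: with the normalization P(D) = (γ²/2)(γ² + D²)^(−3/2), the CDF is F(D) = (1 + D/√(γ² + D²))/2, which inverts in closed form, and E|D| = γ while the variance diverges (a heavy-tail signature). The check below is purely illustrative.

```python
import numpy as np

# Inverse-CDF sampling of P(D) = (gamma**2/2)(gamma**2 + D**2)**(-3/2):
# with u uniform, s = 2u - 1, the sample is D = gamma * s / sqrt(1 - s**2).

gamma = 1.0
rng = np.random.default_rng(2)
u = np.clip(rng.uniform(size=200_000), 1e-12, 1 - 1e-12)
s = 2 * u - 1
D = gamma * s / np.sqrt(1 - s * s)
print(np.mean(np.abs(D)))   # ~ 1.0, since E|D| = gamma
```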

  9. Grid-connected distributed solar power systems

    NASA Astrophysics Data System (ADS)

    Moyle, R.; Chernoff, H.; Schweizer, T.

    This paper discusses some important, though often ignored, technical and economic issues of distributed solar power systems: protection of the utility system and nonsolar customers requires suitable interface equipment; purchase criteria must mirror reality - most analyses use life-cycle costing with low discount rates, while most buyers use short payback periods; and distributing, installing, and marketing small, distributed solar systems is more costly than most analyses estimate. Results show that certain local conditions and uncommon purchase considerations can combine to make small, distributed solar power attractive, but lower interconnect costs (per kW), lower marketing and product distribution costs, and more favorable purchase criteria make large, centralized solar energy more attractive. Specifically, the value of dispersed solar systems to investors and utilities can be higher than $2000/kW. However, typical residential owners place a value of well under $1000 on the installed system.

  10. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high-speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is unusual in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high-speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large-scale, high-speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high-speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  11. Revolutionary Aeropropulsion Concept for Sustainable Aviation: Turboelectric Distributed Propulsion

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Felder, James L.; Tong, Michael. T.; Armstrong, Michael

    2013-01-01

    In response to growing aviation demands and concerns about the environment and energy usage, a team at NASA proposed and examined a revolutionary aeropropulsion concept, a turboelectric distributed propulsion system, which employs multiple electric motor-driven propulsors distributed on a large transport vehicle. The power to drive these electric propulsors is generated by separately located gas-turbine-driven electric generators on the airframe. This arrangement enables the use of many small distributed propulsors, allowing a very high effective bypass ratio, while retaining the superior efficiency of large core engines, which are physically separated but connected to the propulsors through electric power lines. Because of the physical separation of propulsors from power-generating devices, a new class of vehicles with unprecedented performance employing such a revolutionary propulsion system becomes possible in vehicle design. One such vehicle currently being investigated by NASA is called the "N3-X", which uses a hybrid wing body for an airframe and superconducting generators, motors, and transmission lines for its propulsion system. On the N3-X these new degrees of design freedom are used (1) to place two large turboshaft engines driving generators in freestream conditions to minimize total pressure losses and (2) to embed a broad continuous array of 14 motor-driven fans on the upper surface of the aircraft near the trailing edge of the hybrid-wing-body airframe to maximize propulsive efficiency by ingesting thick airframe boundary layer flow. Through a system analysis of engine cycle and weight estimation, it was determined that the N3-X would be able to achieve a reduction of 70% or 72% (depending on the cooling system) in energy usage relative to the reference aircraft, a Boeing 777-200LR. Since a high-power electric system is used in its propulsion system, a study of the electric power distribution system was performed to identify critical dynamic and safety issues. This paper presents some of the features and issues associated with the turboelectric distributed propulsion system and summarizes the recent study results, including the high-power electric distribution, in the analysis of the N3-X vehicle.

  12. Modeling a hierarchical structure of factors influencing exploitation policy for water distribution systems using ISM approach

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta

    2017-12-01

    Water distribution systems are one of the basic elements of the contemporary technical infrastructure of urban and rural areas. A water distribution system is a complex engineering system composed of transmission networks and auxiliary equipment (e.g., controllers, checkouts), scattered territorially over a large area. From the operational point of view, its basic features are functional variability, resulting from the need to adjust the system to temporary fluctuations in water demand, and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Do the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of the research and the method of its implementation.

  13. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high-performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operate in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage for data handling, simulation, and computation.

  14. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used to design both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that outlines a logical, effective way to develop and integrate the advancing technologies.

  15. Production and Distribution of NASA MODIS Remote Sensing Products

    NASA Technical Reports Server (NTRS)

    Wolfe, Robert

    2007-01-01

    The two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on board NASA's Earth Observing System (EOS) Terra and Aqua satellites make key measurements for understanding the Earth's terrestrial ecosystems. Global time series of terrestrial geophysical parameters have been produced from MODIS/Terra for over 7 years and from MODIS/Aqua for more than 4 1/2 years. These well-calibrated instruments, a team of scientists, and a large data production, archive, and distribution system have allowed for the development of a new suite of high-quality product variables at spatial resolutions as fine as 250 m in support of global change research and natural resource applications. This talk describes the MODIS science team's products, with a focus on the terrestrial (land) products, the data processing approach, and the process for monitoring and improving product quality. The original MODIS science team was formed in 1989. The team's primary role is the development and implementation of the geophysical algorithms. In addition, the team provided feedback on the design and pre-launch testing of the instrument and helped guide the development of the data processing system. The key challenges the science team dealt with before launch were developing algorithms for a new instrument and guiding the large and complex multi-discipline processing system. The Land, Ocean, and Atmosphere discipline teams drove the processing system requirements, particularly the processing loads and volumes needed to produce daily geophysical maps of the Earth at resolutions as fine as 250 m. The processing system had to handle a large number of data products, large data volumes and processing loads, and complex processing requirements. Prior to MODIS, daily global maps from heritage instruments, such as the Advanced Very High Resolution Radiometer (AVHRR), were not produced at resolutions finer than 5 km. The processing solution evolved into a combination of processing the lower-level (Level 1) products and the higher-level discipline-specific Land and Atmosphere products in the MODIS Science Investigator-led Processing System (SIPS), the MODIS Adaptive Processing System (MODAPS), with archive and distribution of the Land products to the user community by two of NASA's EOS Distributed Active Archive Centers (DAACs). Recently, a part of MODAPS, the Level 1 and Atmosphere Archive and Distribution System (LAADS), took over the role of archiving and distributing the Level 1 and Atmosphere products to the user community.

  16. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  17. Large-area photogrammetry based testing of wind turbine blades

    NASA Astrophysics Data System (ADS)

    Poozesh, Peyman; Baqersad, Javad; Niezrecki, Christopher; Avitabile, Peter; Harvey, Eric; Yarala, Rahul

    2017-03-01

    An optically based sensing system that can measure the displacement and strain over essentially the entire area of a utility-scale blade leads to a measurement system that can significantly reduce the time and cost associated with traditional instrumentation. This paper evaluates the performance of conventional three-dimensional digital image correlation (3D DIC) and three-dimensional point tracking (3DPT) approaches over the surface of wind turbine blades and proposes a multi-camera measurement system using dynamic spatial data stitching. The potential advantages of the proposed approach include: (1) full-field measurement distributed over a very large area, (2) the elimination of time-consuming wiring and expensive sensors, and (3) no need for large-channel data acquisition systems. There are several challenges associated with extending the capability of a standard 3D DIC system to measure the entire surface of utility-scale blades to extract distributed strain, deflection, and modal parameters. This paper addresses some of these difficulties, including: (1) assessing the accuracy of the 3D DIC system to measure full-field distributed strain and displacement over a large area, (2) understanding the geometrical constraints associated with a wind turbine testing facility (e.g. lighting, working distance, and speckle pattern size), (3) evaluating the performance of the dynamic stitching method to combine two different fields of view by extracting modal parameters from aligned point clouds, and (4) determining the feasibility of employing output-only system identification to estimate modal parameters of a utility-scale wind turbine blade from optically measured data. Within the current work, the results of an optical measurement (one stereo-vision system) performed over a large area of a 50-m utility-scale blade subjected to quasi-static and cyclic loading are presented. Blade certification and testing is typically performed using the International Electrotechnical Commission standard (IEC 61400-23). For static tests, the blade is pulled in either the flap-wise or edge-wise direction to measure deflection or distributed strain at a few limited locations of a large-sized blade. Additionally, the paper explores the error associated with using a multi-camera system (two stereo-vision systems) in measuring 3D displacement and extracting structural dynamic parameters on a mock setup emulating a utility-scale wind turbine blade. The results obtained in this paper reveal that the multi-camera measurement system has the potential to identify the dynamic characteristics of a very large structure.

  18. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to well-known contact laws, involves dispersive attraction forces and the possibility of interparticle solid bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (diameter 10 nm) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of different widths. A non-monotonic dependence of compact density on powder content is revealed in bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 percent.

  19. Statistical Maps of Ground Magnetic Disturbance Derived from Global Geospace Models

    NASA Astrophysics Data System (ADS)

    Rigler, E. J.; Wiltberger, M. J.; Love, J. J.

    2017-12-01

    Electric currents in space are the principal driver of magnetic variations measured at Earth's surface. These in turn induce geoelectric fields that present a natural hazard for technological systems like high-voltage power distribution networks. Modern global geospace models can reasonably simulate large-scale geomagnetic response to solar wind variations, but they are less successful at deterministic predictions of intense localized geomagnetic activity that most impacts technological systems on the ground. Still, recent studies have shown that these models can accurately reproduce the spatial statistical distributions of geomagnetic activity, suggesting that their physics are largely correct. Since the magnetosphere is a largely externally driven system, most model-measurement discrepancies probably arise from uncertain boundary conditions. So, with realistic distributions of solar wind parameters to establish its boundary conditions, we use the Lyon-Fedder-Mobarry (LFM) geospace model to build a synthetic multivariate statistical model of gridded ground magnetic disturbance. From this, we analyze the spatial modes of geomagnetic response, regress on available measurements to fill in unsampled locations on the grid, and estimate the global probability distribution of extreme magnetic disturbance. The latter offers a prototype geomagnetic "hazard map", similar to those used to characterize better-known geophysical hazards like earthquakes and floods.
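
    The mode-based reconstruction step described above can be sketched briefly: spatial modes (EOFs) are extracted from an ensemble of simulated disturbance maps, and mode amplitudes are regressed onto a few observed grid points to fill in unsampled locations. The synthetic low-rank ensemble below stands in for geospace-model output; all names and sizes are illustrative.

```python
import numpy as np

# EOF-based spatial reconstruction from sparse observations (a sketch).

rng = np.random.default_rng(3)
n_grid, n_runs, n_modes, n_obs = 200, 500, 5, 20

# Synthetic "model ensemble": maps living mostly in a few spatial patterns.
basis = rng.normal(size=(n_modes, n_grid))
ensemble = rng.normal(size=(n_runs, n_modes)) @ basis \
    + 0.05 * rng.normal(size=(n_runs, n_grid))
ensemble -= ensemble.mean(axis=0)

_, _, Vt = np.linalg.svd(ensemble, full_matrices=False)
modes = Vt[:n_modes]                       # leading spatial modes (EOFs)

truth = ensemble[17]                       # pretend one member is "today"
obs_idx = rng.choice(n_grid, n_obs, replace=False)

# Regress mode amplitudes on the sampled locations only, then reconstruct.
amps, *_ = np.linalg.lstsq(modes[:, obs_idx].T, truth[obs_idx], rcond=None)
reconstruction = amps @ modes              # filled-in map on the full grid

err = np.linalg.norm(reconstruction - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.2f}")
```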

  20. Land transportation model for supply chain manufacturing industries

    NASA Astrophysics Data System (ADS)

    Kurniawan, Fajar

    2017-12-01

    A supply chain is a system that integrates production, inventory, distribution, and information processes to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the distribution of materials, work-in-process products, and final products. Jakarta serves as the distribution center for manufacturing industries in the surrounding industrial area. The transportation system has a large influence on the efficiency of the supply chain process. The main problem faced in Jakarta is traffic congestion, which affects distribution time. Based on the system dynamics model, several scenarios can provide solutions that minimize distribution time, and thus cost, such as the construction of ports near industrial areas other than Tanjung Priok, the widening of road facilities, the development of the railway system, and the development of distribution centers.

  1. The UCLA Design Diversity Experiment (DEDIX) system: A distributed testbed for multiple-version software

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.

  2. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.

  3. Quantum quenches and work distributions in ultralow-density systems.

    PubMed

    Shchadilova, Yulia E; Ribeiro, Pedro; Haque, Masudul

    2014-02-21

    We present results on quantum quenches in lattice systems with a fixed number of particles in a much larger number of sites. Both local and global quenches in this limit generically have power-law work distributions ("edge singularities"). We show that this regime allows for large edge singularity exponents beyond that allowed by the constraints of the usual thermodynamic limit. This large-exponent singularity has observable consequences in the time evolution, leading to a distinct intermediate power-law regime in time. We demonstrate these results first using local quantum quenches in a low-density Kondo-like system, and additionally through global and local quenches in Bose-Hubbard, Aubry-Andre, and hard-core boson systems at low densities.

  4. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
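
    The task-flow idea can be illustrated with a minimal sketch: pipeline stages run as workers that pull items from one queue and push results to the next, so documents stream through the stages concurrently. Stage names and the sentinel-based shutdown are illustrative choices, not the system's actual modules.

```python
import queue, threading

# Two-stage pipeline with queue-connected workers (pipeline parallelism).

acquired, transcribed, indexed = queue.Queue(), queue.Queue(), []

def transcriber():
    while (item := acquired.get()) is not None:
        transcribed.put(f"{item}:transcript")
    transcribed.put(None)                  # propagate shutdown downstream

def indexer():
    while (item := transcribed.get()) is not None:
        indexed.append(item)

threads = [threading.Thread(target=f) for f in (transcriber, indexer)]
for t in threads:
    t.start()
for doc in ("clip-001", "clip-002", "clip-003"):
    acquired.put(doc)
acquired.put(None)                         # end-of-stream sentinel
for t in threads:
    t.join()
print(indexed)
```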

  5. Automated distribution system management for multichannel space power systems

    NASA Technical Reports Server (NTRS)

    Fleck, G. W.; Decker, D. K.; Graves, J.

    1983-01-01

    A NASA sponsored study of space power distribution system technology is in progress to develop an autonomously managed power system (AMPS) for large space power platforms. The multichannel, multikilowatt, utility-type power subsystem proposed presents new survivability requirements and increased subsystem complexity. The computer controls under development for the power management system must optimize the power subsystem performance and minimize the life cycle cost of the platform. A distribution system management philosophy has been formulated which incorporates these constraints. Its implementation using a TI9900 microprocessor and FORTH as the programming language is presented. The approach offers a novel solution to the perplexing problem of determining the optimal combination of loads which should be connected to each power channel for a versatile electrical distribution concept.

  6. Archive Inventory Management System (AIMS) — A Fast, Metrics Gathering Framework for Validating and Gaining Insight from Large File-Based Data Archives

    NASA Astrophysics Data System (ADS)

    Verma, R. V.

    2018-04-01

    The Archive Inventory Management System (AIMS) is a software package for understanding the distribution, characteristics, integrity, and nuances of files and directories in large file-based data archives on a continuous basis.

  7. Preliminary analysis of hub and spoke air freight distribution system

    NASA Technical Reports Server (NTRS)

    Whitehead, A. H., Jr.

    1978-01-01

    A brief analysis is made of the hub and spoke air freight distribution system which would employ less than 15 hub centers world wide with very large advanced distributed-load freighters providing the line-haul delivery between hubs. This system is compared to a more conventional network using conventionally-designed long-haul freighters which travel between numerous major airports. The analysis calculates all of the transportation costs, including handling charges and pickup and delivery costs. The results show that the economics of the hub/spoke system are severely compromised by the extensive use of feeder aircraft to deliver cargo into and from the large freighter terminals. Not only are the higher costs for the smaller feeder airplanes disadvantageous, but their use implies an additional exchange of cargo between modes compared to truck delivery. The conventional system uses far fewer feeder airplanes, and in many cases, none at all. When feeder aircraft are eliminated from the hub/spoke system, however, that system is universally more economical than any conventional system employing smaller line-haul aircraft.

  8. Optical interconnect for large-scale systems

    NASA Astrophysics Data System (ADS)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  9. Integrated Micro-Power System (IMPS) Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Wilt, David; Hepp, Aloysius; Moran, Matt; Jenkins, Phillip; Scheiman, David; Raffaelle, Ryne

    2003-01-01

    Glenn Research Center (GRC) has a long history of energy-related technology developments for large space power systems, including photovoltaics, thermo-mechanical energy conversion, electrochemical energy storage, mechanical energy storage, power management and distribution, and power system design. Recently, many of these technologies have begun to be adapted for small, distributed power system applications, or Integrated Micro-Power Systems (IMPS). This paper describes the IMPS component and system demonstration efforts to date.

  10. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems

    PubMed Central

    Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel

    2016-01-01

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, with the feedback loops closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894

  11. On Event-Triggered Adaptive Architectures for Decentralized and Distributed Control of Large-Scale Modular Systems.

    PubMed

    Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel

    2016-08-16

    The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, with the feedback loops closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consisting of physically interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections with each other. In addition to the theoretical findings, including rigorous system stability and boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on overall system performance, an illustrative numerical example is provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.
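
    The event-triggering mechanism at the heart of such architectures can be sketched in a few lines: the sensor transmits the state over the network only when it deviates from the last transmitted value by more than a threshold, trading network utilization against a bounded performance loss. The scalar plant, gain, and threshold below are toy assumptions, not the paper's adaptive architecture.

```python
# Event-triggered state feedback on an unstable scalar plant (a sketch).

A, B, K = 1.05, 1.0, 0.6                   # plant dynamics and feedback gain
x, x_sent, eps, events = 5.0, 5.0, 0.05, 0
for k in range(100):
    if abs(x - x_sent) > eps:              # event condition: transmit state
        x_sent, events = x, events + 1
    u = -K * x_sent                        # controller uses last-sent state
    x = A * x + B * u
print(f"final |x| = {abs(x):.3f}, transmissions = {events}/100")
```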

  12. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.

  13. Why do electricity policy and competitive markets fail to use advanced PV systems to improve distribution power quality?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHenry, Mark P.; Johnson, Jay; Hightower, Mike

The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation and power factor/reactive power control and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and to whom and how costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures, renewable energy systems could greatly improve power quality on distribution systems at nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though these can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. Finally, we discuss inverter technical knowledge useful for policy-makers seeking to mitigate the ongoing inflation of electricity network tariff components through new DG interconnection requirements or electricity markets that value power quality and control.

  14. Why do electricity policy and competitive markets fail to use advanced PV systems to improve distribution power quality?

    DOE PAGES

    McHenry, Mark P.; Johnson, Jay; Hightower, Mike

    2016-01-01

The increasing pressure for network operators to meet distribution network power quality standards with increasing peak loads, renewable energy targets, and advances in automated distributed power electronics and communications is forcing policy-makers to understand new means to distribute costs and benefits within electricity markets. Discussions surrounding how distributed generation (DG) exhibits active voltage regulation and power factor/reactive power control and other power quality capabilities are complicated by uncertainties of baseline local distribution network power quality and to whom and how costs and benefits of improved electricity infrastructure will be allocated. DG providing ancillary services that dynamically respond to the network characteristics could lead to major network improvements. With proper market structures, renewable energy systems could greatly improve power quality on distribution systems at nearly no additional cost to the grid operators. Renewable DG does have variability challenges, though these can be overcome with energy storage, forecasting, and advanced inverter functionality. This paper presents real data from a large-scale grid-connected PV array with large-scale storage and explores effective mitigation measures for PV system variability. Finally, we discuss inverter technical knowledge useful for policy-makers seeking to mitigate the ongoing inflation of electricity network tariff components through new DG interconnection requirements or electricity markets that value power quality and control.

  15. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm, and each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
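
    A minimal sketch of this decomposition pipeline, using scikit-learn's affinity propagation and canonical correlation analysis on synthetic data; the latent modes, variable counts, and the one-input-per-subsystem report are assumptions for illustration, not the paper's plant.

      import numpy as np
      from sklearn.cluster import AffinityPropagation
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      # Hypothetical data: two latent process modes drive two groups of
      # controlled variables (rows: time samples).
      t = rng.standard_normal((200, 2))
      Y = np.hstack([t[:, [0]] + 0.1 * rng.standard_normal((200, 3)),
                     t[:, [1]] + 0.1 * rng.standard_normal((200, 3))])
      X = rng.standard_normal((200, 12))      # candidate input variables
      X[:, 0] += t[:, 0]                      # input 0 drives the first group
      X[:, 5] += t[:, 1]                      # input 5 drives the second group

      # Step 1: cluster the controlled variables' correlation profiles.
      labels = AffinityPropagation(random_state=0).fit(np.corrcoef(Y.T)).labels_

      # Step 2: per subsystem, rank candidate inputs by canonical correlation.
      for k in np.unique(labels):
          Yk = Y[:, labels == k]
          scores = []
          for j in range(X.shape[1]):
              cca = CCA(n_components=1).fit(X[:, [j]], Yk)
              u, v = cca.transform(X[:, [j]], Yk)
              scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
          print(f"subsystem {k}: outputs {list(np.where(labels == k)[0])}, "
                f"best input {int(np.argmax(scores))}")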

  16. Critical Assessment of the Foundations of Power Transmission and Distribution Reliability Metrics and Standards.

    PubMed

    Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan

    2016-01-01

    The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
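
    Two of the standard distribution reliability indices discussed in this literature, SAIFI and SAIDI (in the IEEE 1366 style), reduce to simple ratios; a minimal computation over hypothetical outage records:

      # Hypothetical annual outage records: (customers interrupted, minutes).
      outages = [
          (1200, 90),
          (300, 45),
          (5000, 310),    # e.g., a storm-related, large-scale event
      ]
      customers_served = 50_000

      # IEEE 1366-style indices; note that regulators often let utilities
      # exclude "major event days" from these sums, which is part of the
      # differential treatment of large-scale events the article questions.
      saifi = sum(n for n, _ in outages) / customers_served
      saidi = sum(n * m for n, m in outages) / customers_served

      print(f"SAIFI = {saifi:.3f} interruptions per customer per year")
      print(f"SAIDI = {saidi:.1f} minutes per customer per year")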

  17. Immunology-directed methods for distributed robotics: a novel immunity-based architecture for robust control and coordination

    NASA Astrophysics Data System (ADS)

    Singh, Surya P. N.; Thayer, Scott M.

    2002-02-01

This paper presents a novel algorithmic architecture for the coordination and control of large-scale distributed robot teams, derived from constructs found within the human immune system. Using this as a guide, the Immunology-derived Distributed Autonomous Robotics Architecture (IDARA) distributes tasks so that broad, all-purpose actions are refined and followed by specific, mediated responses based on each unit's utility and capability to address the system's perceived need(s) in a timely manner. This method improves on initial developments in this area by including the often-overlooked interactions of the innate immune system, resulting in a stronger first-order, general response mechanism. This allows for rapid reactions in dynamic environments, especially those lacking significant a priori information. As characterized via computer simulation of a self-healing mobile minefield having up to 7,500 mines and 2,750 robots, IDARA provides an efficient, communications-light, and scalable architecture that yields significant operational and performance improvements for large-scale multi-robot coordination and control.

  18. Distributed-current-feed and distributed-energy-store railguns

    NASA Astrophysics Data System (ADS)

    Holland, L. D.

    1984-03-01

As railgun technology evolves toward systems for specific applications, a wide variety of power supply and railgun systems are under investigation. The present study is concerned with the development of distributed railguns and the introduction of a new type of railgun system specifically designed for applications requiring long accelerators. It is found that distributed railguns offer a solution to the performance limits of breech-fed railguns as the rails become long. Attention is given to the pulse-forming network and breech-fed railgun, the breech-fed railgun with a parallel pulse-forming network, a distributed-energy-store railgun, a distributed-current-feed (DCF) railgun, and a DCF railgun launcher.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, J; Lindsay, P; University of Toronto, Toronto

Purpose: Recent progress in small animal radiotherapy systems has provided the foundation for delivering the heterogeneous, millimeter scale dose distributions demanded by preclinical radiobiology investigations. Despite advances in preclinical dose planning, delivery of highly heterogeneous dose distributions is constrained by the fixed collimation systems and large x-ray focal spot common in small animal radiotherapy systems. This work proposes a dual focal spot dose optimization and delivery method with a large x-ray focal spot used to deliver homogeneous dose regions and a small focal spot to paint spatially heterogeneous dose regions. Methods: Two-dimensional dose kernels were measured for a 1 mm circular collimator with radiochromic film at 10 mm depth in a solid water phantom for the small and large x-ray focal spots on a recently developed small animal microirradiator. These kernels were used in an optimization framework which segmented a desired dose distribution into low- and high-spatial frequency regions for delivery by the large and small focal spot, respectively. For each region, the method determined an optimal set of stage positions and beam-on times. The method was demonstrated by optimizing a bullseye pattern consisting of a 0.75 mm radius circular target and 0.5 and 1.0 mm wide rings alternating between 0 and 2 Gy. Results: Compared to a large focal spot technique, the dual focal spot technique improved the optimized dose distribution: 69.2% of the optimized dose was within 0.5 Gy of the intended dose for the large focal spot, compared to 80.6% for the dual focal spot method. The dual focal spot design required 14.0 minutes of optimization, and will require 178.3 minutes for automated delivery. Conclusion: The dual focal spot optimization and delivery framework is a novel option for delivering conformal and heterogeneous dose distributions at the preclinical level and provides a new experimental option for unique radiobiological investigations. Funding Support: This work is supported by funding from the Natural Sciences and Engineering Research Council of Canada, and a Mitacs Accelerate fellowship. Conflict of Interest: Dr. Lindsay and Dr. Jaffray are listed as inventors of the small animal microirradiator described herein. This system has been licensed for commercial development.
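
    The optimization step can be pictured as a non-negative least-squares fit of beam-on times so that superposed dose kernels match a prescribed profile. The 1-D Gaussian kernels, stage positions, and target below are illustrative assumptions, not the measured film kernels of the abstract.

      import numpy as np
      from scipy.optimize import nnls

      x = np.linspace(-5, 5, 201)                    # position, mm
      positions = np.linspace(-4, 4, 33)             # assumed stage positions, mm

      def kernels(width):
          # One Gaussian dose kernel column per stage position.
          return np.array([np.exp(-(x - p) ** 2 / (2 * width ** 2))
                           for p in positions]).T

      A = np.hstack([kernels(1.5), kernels(0.4)])    # large-spot | small-spot columns
      target = np.where(np.abs(x) < 0.75, 2.0, 0.0)  # 2 Gy central target, 0 elsewhere

      t, _ = nnls(A, target)                         # beam-on times are >= 0
      dose = A @ t
      print(f"max dose error: {np.abs(dose - target).max():.2f} Gy")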

  20. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.

  1. 'Fracking', Induced Seismicity and the Critical Earth

    NASA Astrophysics Data System (ADS)

    Leary, P.; Malin, P. E.

    2012-12-01

Issues of 'fracking' and induced seismicity are reverse-analogous to the equally complex issues of well productivity in hydrocarbon, geothermal and ore reservoirs. In low hazard reservoir economics, poorly producing wells and low grade ore bodies are many while highly producing wells and high grade ores are rare but high pay. With induced seismicity factored in, however, the same distribution physics reverses the high/low pay economics: large fracture-connectivity systems are hazardous hence low pay, while high probability small fracture-connectivity systems are non-hazardous hence high pay. Put differently, an economic risk abatement tactic for well productivity and ore body pay is to encounter large-scale fracture systems, while an economic risk abatement tactic for 'fracking'-induced seismicity is to avoid large-scale fracture systems. Well productivity and ore body grade distributions arise from three empirical rules for fluid flow in crustal rock: (i) power-law scaling of grain-scale fracture density fluctuations; (ii) spatial correlation between spatial fluctuations in well-core porosity and the logarithm of well-core permeability; (iii) frequency distributions of permeability governed by a lognormality skewness parameter. The physical origin of rules (i)-(iii) is the universal existence of a critical-state-percolation grain-scale fracture-density threshold for crustal rock. Crustal fractures are effectively long-range spatially-correlated distributions of grain-scale defects permitting fluid percolation on mm to km scales. The rule is: the larger the fracture system, the more intense the percolation throughput. As percolation pathways are spatially erratic and unpredictable on all scales, they are difficult to model with sparsely sampled well data. Phenomena such as well productivity, induced seismicity, and ore body fossil fracture distributions are collectively extremely difficult to predict. Risk associated with unpredictable reservoir well productivity and ore body distributions can be managed by operating in a context which affords many small failures for a few large successes. In reverse view, 'fracking' and induced seismicity could be rationally managed in a context in which many small successes can afford a few large failures. However, just as there is every incentive to acquire information leading to higher rates of productive well drilling and ore body exploration, there are equal incentives for acquiring information leading to lower rates of 'fracking'-induced seismicity. Current industry practice of using an effective medium approach to reservoir rock creates an uncritical sense that property distributions in rock are essentially uniform. Well-log data show that the reverse is true: the larger the length scale the greater the deviation from uniformity. Applying the effective medium approach to large-scale rock formations thus appears to be unnecessarily hazardous. It promotes the notion that large scale fluid pressurization acts against weakly cohesive but essentially uniform rock to produce large-scale quasi-uniform tensile discontinuities. Indiscriminate hydrofracturing appears to be vastly more problematic in reality than as pictured by the effective medium hypothesis. The spatial complexity of rock, especially at large scales, provides ample reason to find more controlled pressurization strategies for enhancing in situ flow.

  2. High-speed data duplication/data distribution: An adjunct to the mass storage equation

    NASA Technical Reports Server (NTRS)

    Howard, Kevin

    1993-01-01

The term 'mass storage' evokes the image of large on-site disk and tape farms which contain huge quantities of low- to medium-access data. Although the cost of such bulk storage is recognized, the cost of the bulk distribution of this data rarely is given much attention. Mass data distribution becomes an even more acute problem if the bulk data is part of a national or international system. If the bulk data distribution is to travel from one large data center to another, then fiber-optic cables or the use of satellite channels is feasible. However, if the distribution must be disseminated from a central site to a number of much smaller, and perhaps varying, sites, then cost prohibits the use of fiber-optic cable or satellite communication. Given these cost constraints, much of the bulk distribution of data will continue to be disseminated via inexpensive magnetic tape using the various next-day postal service options. For non-transmitted bulk data, our working hypotheses are that the desired duplication efficiency of the total bulk data should be established before selecting any particular data duplication system, and that the data duplication algorithm should be determined before any bulk data duplication method is selected.

  3. The case for distributed irrigation as a development priority in sub-Saharan Africa.

    PubMed

    Burney, Jennifer A; Naylor, Rosamond L; Postel, Sandra L

    2013-07-30

    Distributed irrigation systems are those in which the water access (via pump or human power), distribution (via furrow, watering can, sprinkler, drip lines, etc.), and use all occur at or near the same location. Distributed systems are typically privately owned and managed by individuals or groups, in contrast to centralized irrigation systems, which tend to be publicly operated and involve large water extractions and distribution over significant distances for use by scores of farmers. Here we draw on a growing body of evidence on smallholder farmers, distributed irrigation systems, and land and water resource availability across sub-Saharan Africa (SSA) to show how investments in distributed smallholder irrigation technologies might be used to (i) use the water sources of SSA more productively, (ii) improve nutritional outcomes and rural development throughout SSA, and (iii) narrow the income disparities that permit widespread hunger to persist despite aggregate economic advancement.

  4. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644

  5. Proceedings from the Workshop on Large-Grained Parallelism (2nd) Held in Hidden Valley, Pennsylvania on October 11-14, 1987.

    DTIC Science & Technology

    1987-11-01

The purpose of the workshop was to bring together people whose interests lie in the areas of operating systems, programming languages, and formal... operating system support, and applications. There were parallel discussions on scheduling and distributed languages, and on real-time and operating... number of key challenges: * Distributed systems, languages, environments - Make transactions efficient. Integrate them into the operating system

  6. Legal and Illegal Patterns of Drug Distribution in the United States

    ERIC Educational Resources Information Center

    Caliguri, Joseph P.

    1976-01-01

    Along with large supply sources of legal and illegal drug substances, diversion and distribution systems have developed to feed and maintain the demand. This presentation provides information on the diverting of drugs from legal and illegal sources as well as the characteristics of the distribution patterns. (Author)

  7. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

Nowadays, the development of methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system makes it possible to quickly harness the processing power of the computers on any existing network to solve large tasks by creating a distributed computation. Agents on a computer network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by attaching new computers to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic changes in the number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed computation, which could otherwise lead to wrong decisions; in addition, the system checks and corrects erroneous results.
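
    The falsification check described above amounts to comparing results of the same task computed on several nodes; a minimal majority-vote sketch in Python (the node names and replication factor are hypothetical, not the authors' protocol):

      from collections import Counter

      def verified_result(replica_results):
          # Majority vote over results returned by replicated agents;
          # flags nodes whose answers disagree with the consensus.
          counts = Counter(replica_results.values())
          consensus, votes = counts.most_common(1)[0]
          if votes <= len(replica_results) // 2:
              raise RuntimeError("no majority: recompute the task")
          suspects = [n for n, r in replica_results.items() if r != consensus]
          return consensus, suspects

      # Hypothetical replicated execution of one subtask on three nodes:
      print(verified_result({"pc1": 42, "pc2": 42, "pc3": 41}))   # (42, ['pc3'])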

  8. Competition and Cooperation of Distributed Generation and Power System

    NASA Astrophysics Data System (ADS)

    Miyake, Masatoshi; Nanahara, Toshiya

Advances in distributed generation technologies, together with the deregulation of the electric power industry, can lead to a massive introduction of distributed generation. Since most distributed generation will be interconnected to a power system, coordination and competition between distributed generators and large-scale power sources will be a vital issue in realizing a more desirable energy system in the future. This paper analyzes competition between electric utilities and cogenerators from the viewpoints of economic and energy efficiency, based on simulation results for an energy system including a cogeneration system. First, we examine the best-response correspondence of an electric utility and a cogenerator with a noncooperative game approach and obtain a Nash equilibrium point. Second, we examine the optimum strategy that attains the highest social surplus and the highest energy efficiency through global optimization.

  9. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space and at the same time to explore new regions in the parameter space. This self-organizing scheduling system may offer a possible solution to provide an effective use of resources for the off-line data processing jobs of future HEP experiments.
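
    A minimal sketch of the self-organizing idea: a small Kohonen map clusters incoming job-descriptor vectors, and the best-matching unit could be mapped by a scheduler to a resource class. The two-dimensional descriptors and neuron count are assumptions for illustration, not the paper's design.

      import numpy as np

      rng = np.random.default_rng(0)
      weights = rng.random((8, 2))          # 8 neurons, one per resource class

      def som_update(job, weights, lr=0.2, sigma=1.0):
          # Move the best-matching neuron (and its 1-D neighbors) toward the job.
          bmu = np.argmin(((weights - job) ** 2).sum(axis=1))
          dist = np.abs(np.arange(len(weights)) - bmu)
          h = np.exp(-dist ** 2 / (2 * sigma ** 2))
          weights += lr * h[:, None] * (job - weights)
          return bmu

      for _ in range(500):                  # "past experience": a stream of jobs
          som_update(rng.random(2), weights)
      job = np.array([0.9, 0.1])            # e.g., CPU-heavy job with small input
      print("schedule on resource cluster", som_update(job, weights, lr=0.0))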

  10. Decentralized diagnostics based on a distributed micro-genetic algorithm for transducer networks monitoring large experimental systems.

    PubMed

    Arpaia, P; Cimmino, P; Girone, M; La Commara, G; Maisto, D; Manna, C; Pezzetti, M

    2014-09-01

An evolutionary approach to centralized multiple-fault diagnostics is extended to distributed transducer networks monitoring large experimental systems. Given a set of anomalies detected by the transducers, each instance of the multiple-fault problem is formulated as several parallel communicating sub-tasks running on different transducers, and thus solved one-by-one on spatially separated parallel processes. A micro-genetic algorithm merges the evaluation-time efficiency arising from a small population distributed on parallel, synchronized processors with the effectiveness of centralized evolutionary techniques due to an optimal mix of exploitation and exploration. In this way, the holistic view and effectiveness advantages of evolutionary global diagnostics are combined with the reliability and efficiency benefits of distributed parallel architectures. The proposed approach was validated both (i) by simulation at CERN, on a case study of a cold box for enhancing the cryogenic diagnostics of the Large Hadron Collider, and (ii) by experiments, under the framework of the industrial research project MONDIEVOB (Building Remote Monitoring and Evolutionary Diagnostics), co-funded by the EU and the company Del Bo srl, Napoli, Italy.
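
    A micro-genetic algorithm differs from a standard GA mainly in its tiny population and its restart-on-convergence rule. The self-contained sketch below runs on a toy multiple-fault encoding (each bit flags one candidate faulty transducer); the encoding and fitness are assumptions for illustration, not the paper's diagnostic model.

      import numpy as np

      rng = np.random.default_rng(3)

      def micro_ga(fitness, n_bits=16, pop_size=5, generations=200):
          # Micro-GA: tiny population, tournament selection, crossover,
          # no mutation; restart around the elite whenever converged.
          pop = rng.integers(0, 2, (pop_size, n_bits))
          for _ in range(generations):
              f = np.array([fitness(ind) for ind in pop])
              elite = pop[f.argmax()].copy()
              if np.all(pop == pop[0]):            # converged: restart
                  pop = rng.integers(0, 2, (pop_size, n_bits))
                  pop[0] = elite
                  continue
              new = [elite]                        # elitism
              while len(new) < pop_size:
                  i, j = rng.choice(pop_size, 2, replace=False)
                  a = pop[i] if f[i] >= f[j] else pop[j]
                  i, j = rng.choice(pop_size, 2, replace=False)
                  b = pop[i] if f[i] >= f[j] else pop[j]
                  cut = rng.integers(1, n_bits)    # one-point crossover
                  new.append(np.concatenate([a[:cut], b[cut:]]))
              pop = np.array(new)
          return elite

      # Toy diagnosis: fitness rewards matching an assumed anomaly signature.
      truth = rng.integers(0, 2, 16)
      print(micro_ga(lambda ind: -np.abs(ind - truth).sum()))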

  11. The study on servo-control system in the large aperture telescope

    NASA Astrophysics Data System (ADS)

    Hu, Wei; Zhenchao, Zhang; Daxing, Wang

    2008-08-01

Servo tracking for large and extremely large astronomical telescopes is one of the crucial technologies that must be solved in their research and manufacture. Addressing the control characteristics of such telescopes, this paper designs a servo tracking control system for a large astronomical telescope. The system is organized as a master-slave distributed control system: the host computer sends steering instructions and receives the slave computer's functional mode, while the slave computer carries out the control algorithm and executes real-time control. The servo control uses a direct-drive machine and adopts DSP technology to implement the direct torque control algorithm. Such a design not only increases control system performance but also greatly reduces the volume and cost of the control system, which is significant. The design scheme is shown to be reasonable by calculation and simulation, and the system can be applied to large astronomical telescopes.

  12. Long-term spatial and temporal microbial community dynamics in a large-scale drinking water distribution system with multiple disinfectant regimes.

    PubMed

    Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus

    2018-08-01

Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full-scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km, and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. As in previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months as opposed to the summer months in this study. In addition to temperature effects, the seasonal variations were also likely to be influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system. However, temporal variations were consistently stronger as compared to spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical towards determining the impact of changes in source water quality, environmental conditions, and process operations on the changes in microbial community composition in the drinking water distribution system. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Simulation Framework for Intelligent Transportation Systems

    DOT National Transportation Integrated Search

    1996-10-01

A simulation framework has been developed for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System. The simulator is designed for running on parallel computers and distributed (networked) computer systems, but ca...

  14. Validation and performance of the LHC cryogenic system through commissioning of the first sector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serio, L.; Bouillot, A.; Casas-Cubillos, J.

    2007-12-01

The cryogenic system [1] for the Large Hadron Collider accelerator is presently in its final phase of commissioning at nominal operating conditions. The refrigeration capacity for the LHC is produced using eight large cryogenic plants and eight 1.8 K refrigeration units installed on five cryogenic islands. Machine cryogenic equipment is installed in a 26.7-km-circumference deep underground ring tunnel and is maintained at nominal operating conditions via a distribution system consisting of transfer lines, cold interconnection boxes at each cryogenic island and a cryogenic distribution line. The functional analysis of the whole system during all operating conditions was established and validated during the commissioning of the first sector in order to maximize system availability. Analysis, operating modes, main failure scenarios, results, and performance of the cryogenic system are presented.

  15. Mineralogical and Molecular Microbial Characterization of a Lead Pipe Removed from a Drinking Water Distribution System

    EPA Science Inventory

The U.S. Environmental Protection Agency's (US EPA) Lead and Copper Rule established an action level for lead of 0.015 mg/L in a 1 liter first draw sample at the consumer's tap. Lead corrosion and solubility in drinking water distribution systems are largely controlled by the fo...

  16. THE EFFECT OF CHLORIDE AND ORTHOPHOSPHATE ON THE RELEASE OF IRON FROM DRINKING WATER DISTRIBUTION SYSTEM CAST IRON MAIN

    EPA Science Inventory

    “Colored water” resulting from suspended iron particles is a common drinking water consumer complaint which is largely impacted by water chemistry. A bench scale study, performed on a 90 year-old corroded cast-iron pipe section removed from a drinking water distribution system, w...

  17. Exchange-driven growth.

    PubMed

    Ben-Naim, E; Krapivsky, P L

    2003-09-01

We study a class of growth processes in which clusters evolve via exchange of particles. We show that depending on the rate of exchange there are three possibilities: (I) growth, in which clusters grow indefinitely; (II) gelation, in which all mass is transformed into an infinite gel in a finite time; and (III) instant gelation. In regimes I and II, the cluster size distribution attains a self-similar form. The large-size tail of the scaling distribution is Φ(x) ~ exp(-x^(2-ν)), where ν is the homogeneity degree of the rate of exchange. At the borderline case ν=2, the distribution exhibits a generic algebraic tail, Φ(x) ~ x^(-5). In regime III, the gel nucleates immediately and consumes the entire system. For finite systems, the gelation time vanishes logarithmically, T ~ (ln N)^(-(ν-2)), in the large-system-size limit N → ∞. The theory is applied to coarsening in the infinite-range Ising-Kawasaki model and in electrostatically driven granular layers.
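
    Written out in LaTeX, the tails quoted above read (with ν the homogeneity degree of the exchange rate, and the regime labels as given in the abstract):

      \Phi(x) \sim \exp\!\left(-x^{\,2-\nu}\right) \ \text{(regimes I, II)}, \qquad
      \Phi(x) \sim x^{-5} \ (\nu = 2), \qquad
      T \sim (\ln N)^{-(\nu-2)} \ \text{as } N \to \infty \ \text{(regime III)}.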

  18. Finite-key analysis for measurement-device-independent quantum key distribution.

    PubMed

    Curty, Marcos; Xu, Feihu; Cui, Wei; Lim, Charles Ci Wen; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2014-04-29

    Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security of some practical systems is no longer valid. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach--measurement-device-independent quantum key distribution--has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time frame of signal transmission.
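
    The abstract does not reproduce the bound itself; for reference, a standard multiplicative Chernoff bound of the kind used for finite-key parameter estimation is, for X a sum of independent Bernoulli trials with mean μ and 0 < δ < 1:

      \Pr[X \ge (1+\delta)\mu] \le \exp\!\left(-\frac{\delta^2 \mu}{2+\delta}\right), \qquad
      \Pr[X \le (1-\delta)\mu] \le \exp\!\left(-\frac{\delta^2 \mu}{2}\right).

    Bounds of this type decay exponentially in the number of signals, which is why security can be certified within a reasonable transmission time.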

  19. An Investigation of Energy Consumption and Cost in Large Air-Conditioned Buildings. An Interim Report.

    ERIC Educational Resources Information Center

    Milbank, N. O.

    Two similarly large buildings and air conditioning systems are comparatively analyzed as to energy consumption, costs, and inefficiency during certain measured periods of time. Building design and velocity systems are compared to heating, cooling, lighting and distribution capabilities. Energy requirements for pumps, fans and lighting are found to…

  20. The decay process of rotating unstable systems through the passage time distribution

    NASA Astrophysics Data System (ADS)

    Jiménez-Aquino, J. I.; Cortés, Emilio; Aquino, N.

    2001-05-01

In this work we propose a general scheme to characterize, through the passage time distribution, the decay process of rotationally unstable systems in the presence of external forces of large amplitude. The formalism starts with a matrix Langevin-type equation formulated in the context of two dynamical representations given, respectively, by the vectors x and y, both related by a time-dependent rotation matrix. The transformation preserves the norm of the vector and decouples the set of dynamical equations in the transformed space y. We study the dynamical characterization of systems of two variables and show that the statistical properties of the passage time distribution are essentially equivalent in both dynamics. The theory is applied to the laser system studied by Dellunde et al. (Opt. Commun. 102 (1993) 277), where the effect of large injected signals on the transient dynamics of the laser has been studied in terms of the complex electric field. The analytical results are compared with numerical simulation.

  1. A novel dispersion compensating fiber grating with a large chirp parameter and period sampled distribution

    NASA Astrophysics Data System (ADS)

    Xia, Li; Li, Xuhui; Chen, Xiangfei; Xie, Shizhong

    2003-11-01

A novel fiber grating structure is proposed for the purpose of dispersion compensation. This kind of grating can be produced with a large chirp parameter and a period-sampled distribution along the grating length. There are multiple channels in the wide bandwidth, and each channel has a totally different dispersion and bandwidth. The dispersion compensation effect of this specially designed grating is verified through system simulation.

  2. Large space systems technology electronics: Data and power distribution

    NASA Technical Reports Server (NTRS)

    Dunbar, W. G.

    1980-01-01

    The development of hardware technology and manufacturing techniques required to meet space platform and antenna system needs in the 1980s is discussed. Preliminary designs for manned and automatically assembled space power system cables, connectors, and grounding and bonding materials and techniques are reviewed. Connector concepts, grounding design requirements, and bonding requirements are discussed. The problem of particulate debris contamination for large structure spacecraft is addressed.

  3. Monitoring and control requirement definition study for Dispersed Storage and Generation (DSG), volume 1

    NASA Technical Reports Server (NTRS)

    1980-01-01

Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The large size span of generators from 10 kW to 30 MW means that a variety of remote monitoring and control may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.

  4. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    NASA Astrophysics Data System (ADS)

    Dykstra, Dave

    2012-12-01

    One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  5. Comparison of the Frontier Distributed Database Caching System to NoSQL Databases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, Dave

One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.

  6. Final Report for File System Support for Burst Buffers on HPC Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, W.; Mohror, K.

Distributed burst buffers are a promising storage architecture for handling I/O workloads for exascale computing. As they are being deployed on more supercomputers, a file system that efficiently manages these burst buffers for fast I/O operations carries great consequence. Over the past year, the FSU team has undertaken several efforts to design, prototype and evaluate distributed file systems for burst buffers on HPC systems. These include MetaKV, a key-value store for metadata management of distributed burst buffers; a user-level file system with multiple backends; and a specialized file system for large datasets of deep neural networks. Our progress on these respective efforts is elaborated further in this report.

  7. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  8. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

With the continuous requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
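
    A common form of the Oustaloup recursive approximation replaces the fractional operator s^α by a stable rational filter with geometrically spaced poles and zeros inside a fitting band. The Python sketch below uses an assumed band and order, and normalizes the gain numerically at the mid-band frequency; it is illustrative, not the paper's exact construction.

      import numpy as np

      def oustaloup(alpha, wb=1e-2, wh=1e2, N=4):
          # Zeros and poles of the recursive filter, geometrically spaced
          # over [wb, wh] rad/s (2N+1 of each); returned as negative reals.
          k = np.arange(-N, N + 1)
          zeros = -wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
          poles = -wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
          # Fix the gain so |G(jw)| = w**alpha at the geometric mid-frequency.
          wu = np.sqrt(wb * wh)
          g = np.prod(np.abs(1j * wu - zeros)) / np.prod(np.abs(1j * wu - poles))
          K = wu ** alpha / g
          return zeros, poles, K

      z, p, K = oustaloup(0.5)          # approximate a half-order derivative
      w = 10.0
      resp = K * np.prod(1j * w - z) / np.prod(1j * w - p)
      print(abs(resp), 10 ** 0.5)       # close agreement inside the band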

  9. Modeling of subglacial hydrological development following rapid supraglacial lake drainage.

    PubMed

    Dow, C F; Kulessa, B; Rutt, I C; Tsai, V C; Pimentel, S; Doyle, S H; van As, D; Lindbäck, K; Pettersson, R; Jones, G A; Hubbard, A

    2015-06-01

The rapid drainage of supraglacial lakes injects substantial volumes of water to the bed of the Greenland ice sheet over short timescales. The effect of these water pulses on the development of basal hydrological systems is largely unknown. To address this, we develop a lake drainage model incorporating both (1) a subglacial radial flux element driven by elastic hydraulic jacking and (2) downstream drainage through a linked channelized and distributed system. Here we present the model and examine whether substantial, efficient subglacial channels can form during or following lake drainage events and their effect on the water pressure in the surrounding distributed system. We force the model with field data from a lake drainage site, 70 km from the terminus of Russell Glacier in West Greenland. The model outputs suggest that efficient subglacial channels do not readily form in the vicinity of the lake during rapid drainage and instead water is evacuated primarily by a transient turbulent sheet and the distributed system. Following lake drainage, channels grow but are not large enough to reduce the water pressure in the surrounding distributed system, unless preexisting channels are present throughout the domain. Our results have implications for the analysis of subglacial hydrological systems in regions where rapid lake drainage provides the primary mechanism for surface-to-bed connections. Key Points: a model for subglacial hydrological analysis of rapid lake drainage events; limited subglacial channel growth during and following rapid lake drainage; persistence of distributed drainage in inland areas where channel growth is limited.

  10. Modeling of subglacial hydrological development following rapid supraglacial lake drainage

    PubMed Central

    Dow, C F; Kulessa, B; Rutt, I C; Tsai, V C; Pimentel, S; Doyle, S H; van As, D; Lindbäck, K; Pettersson, R; Jones, G A; Hubbard, A

    2015-01-01

The rapid drainage of supraglacial lakes injects substantial volumes of water to the bed of the Greenland ice sheet over short timescales. The effect of these water pulses on the development of basal hydrological systems is largely unknown. To address this, we develop a lake drainage model incorporating both (1) a subglacial radial flux element driven by elastic hydraulic jacking and (2) downstream drainage through a linked channelized and distributed system. Here we present the model and examine whether substantial, efficient subglacial channels can form during or following lake drainage events and their effect on the water pressure in the surrounding distributed system. We force the model with field data from a lake drainage site, 70 km from the terminus of Russell Glacier in West Greenland. The model outputs suggest that efficient subglacial channels do not readily form in the vicinity of the lake during rapid drainage and instead water is evacuated primarily by a transient turbulent sheet and the distributed system. Following lake drainage, channels grow but are not large enough to reduce the water pressure in the surrounding distributed system, unless preexisting channels are present throughout the domain. Our results have implications for the analysis of subglacial hydrological systems in regions where rapid lake drainage provides the primary mechanism for surface-to-bed connections. Key Points: a model for subglacial hydrological analysis of rapid lake drainage events; limited subglacial channel growth during and following rapid lake drainage; persistence of distributed drainage in inland areas where channel growth is limited. PMID:26640746

  11. A multitracer approach for characterizing interactions between shallow groundwater and the hydrothermal system in the Norris Geyser Basin area, Yellowstone National Park

    USGS Publications Warehouse

    Gardner, W.P.; Susong, D.D.; Solomon, D.K.; Heasler, H.P.

    2011-01-01

Multiple environmental tracers are used to investigate the age distribution, evolution, and mixing in local- to regional-scale groundwater circulation around the Norris Geyser Basin area in Yellowstone National Park. Springs ranging in temperature from 3°C to 90°C in the Norris Geyser Basin area were sampled for stable isotopes of hydrogen and oxygen, major and minor element chemistry, dissolved chlorofluorocarbons, and tritium. Groundwater near Norris Geyser Basin comprises two distinct systems: a shallow, cool water system and a deep, high-temperature hydrothermal system. These two end-member systems mix to create springs with intermediate temperature and composition. Using multiple tracers from a large number of springs, it is possible to constrain the distribution of possible flow paths and refine conceptual models of groundwater circulation in and around a large, complex hydrothermal system. Copyright 2011 by the American Geophysical Union.

  12. Diversity, community composition, and dynamics of nonpigmented and late-pigmenting rapidly growing mycobacteria in an urban tap water production and distribution system.

    PubMed

    Dubrou, S; Konjek, J; Macheras, E; Welté, B; Guidicelli, L; Chignon, E; Joyeux, M; Gaillard, J L; Heym, B; Tully, T; Sapriel, G

    2013-09-01

    Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 water samples from 36 sites, covering all production units, water storage tanks, and distribution units; RGM isolates were identified by using rpoB gene sequencing. We detected 18 RGM species and putative new species, with most isolates being Mycobacterium chelonae and Mycobacterium llatzerense. Using hierarchical clustering and principal-component analysis, we found that RGM were organized into various communities correlating with water origin (groundwater or surface water) and location within the distribution network. Water treatment plants were more specifically associated with species of the Mycobacterium septicum group. On average, M. chelonae dominated network sites fed by surface water, and M. llatzerense dominated those fed by groundwater. Overall, the M. chelonae prevalence index increased along the distribution network and was associated with a correlative decrease in the prevalence index of M. llatzerense, suggesting competitive or niche exclusion between these two dominant species. Our data describe the great diversity and complexity of RGM species living in the interconnected environments that constitute the water production and distribution system of a large city and highlight the prevalence index of the potentially pathogenic species M. chelonae in the distribution network.
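
    The clustering workflow described here can be pictured with a site-by-species prevalence matrix; the Python sketch below uses synthetic data standing in for the 36-site survey and applies hierarchical clustering and principal-component analysis of the kind mentioned in the abstract (matrix dimensions and the cluster count are assumptions).

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(7)
      # Hypothetical matrix: rows are sampling sites, columns are RGM species,
      # values are fractions of samples positive for each species.
      prevalence = rng.random((36, 18))

      # Hierarchical clustering groups sites with similar communities...
      Z = linkage(prevalence, method="average", metric="euclidean")
      communities = fcluster(Z, t=3, criterion="maxclust")

      # ...and PCA exposes the main gradients (e.g., surface vs. groundwater).
      scores = PCA(n_components=2).fit_transform(prevalence)
      print(communities[:10], scores[:3].round(2))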

  13. Diversity, Community Composition, and Dynamics of Nonpigmented and Late-Pigmenting Rapidly Growing Mycobacteria in an Urban Tap Water Production and Distribution System

    PubMed Central

    Dubrou, S.; Konjek, J.; Macheras, E.; Welté, B.; Guidicelli, L.; Chignon, E.; Joyeux, M.; Gaillard, J. L.; Heym, B.; Tully, T.

    2013-01-01

    Nonpigmented and late-pigmenting rapidly growing mycobacteria (RGM) have been reported to commonly colonize water production and distribution systems. However, there is little information about the nature and distribution of RGM species within the different parts of such complex networks or about their clustering into specific RGM species communities. We conducted a large-scale survey between 2007 and 2009 in the Parisian urban tap water production and distribution system. We analyzed 1,418 water samples from 36 sites, covering all production units, water storage tanks, and distribution units; RGM isolates were identified by using rpoB gene sequencing. We detected 18 RGM species and putative new species, with most isolates being Mycobacterium chelonae and Mycobacterium llatzerense. Using hierarchical clustering and principal-component analysis, we found that RGM were organized into various communities correlating with water origin (groundwater or surface water) and location within the distribution network. Water treatment plants were more specifically associated with species of the Mycobacterium septicum group. On average, M. chelonae dominated network sites fed by surface water, and M. llatzerense dominated those fed by groundwater. Overall, the M. chelonae prevalence index increased along the distribution network and was associated with a correlative decrease in the prevalence index of M. llatzerense, suggesting competitive or niche exclusion between these two dominant species. Our data describe the great diversity and complexity of RGM species living in the interconnected environments that constitute the water production and distribution system of a large city and highlight the prevalence index of the potentially pathogenic species M. chelonae in the distribution network. PMID:23835173

  14. Forecasting distribution of numbers of large fires

    Treesearch

    Haiganoush K. Preisler; Jeff Eidenshink; Stephen Howard; Robert E. Burgan

    2015-01-01

Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the...

  15. Modeling the Economic Impacts of Large Deployments on Local Communities

    DTIC Science & Technology

    2008-12-01

MODELING THE ECONOMIC IMPACTS OF LARGE DEPLOYMENTS ON LOCAL COMMUNITIES. Thesis by Aaron L..., presented to the Faculty, Department of Systems Engineering and... Approved for public release; distribution unlimited. AFIT/GCA/ENV/08-D01.

  16. Determining bathymetric distributions of the eelgrass Zostera ...

    EPA Pesticide Factsheets

Improved methods for determining bathymetric distributions of dominant intertidal plants throughout their estuarine range are needed. Zostera marina is a seagrass native to estuaries of the northeastern Pacific and many other sectors of the world ocean. The technique described here combined large-format aerial photography using false-color near-infrared film with digital image classification and the production of digital bathymetric models of shallow estuaries, such as those occurring in the turbid waters of the Pacific Northwest USA. Application of geographic information system procedures to the eelgrass classifications and bathymetry distributions yielded digital bathymetric distributions based upon a very large number of observations. Similar bathymetric patterns were obtained for the three estuaries surveyed, and approximately 90% of the classified eelgrass occurred within the depth range -1.0 m to +1.0 m (MLLW). Comparison of these distributions with ground surveys of eelgrass lower depth limits indicated that the area of undetected subtidal eelgrass was small in each estuary (86% overall accuracy). The pattern of eelgrass in one estuary was distinctly different from those in the other two systems, illustrating the potential usefulness of this technique in exploring causative factors for such differences in estuarine intertidal vegetation distributions.

  17. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the “Framework for Network Co-Simulation” (FNCS), together with the decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
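
    The decoupled approach can be pictured as a broker alternating the two simulators and exchanging boundary quantities at each step. The sketch below is illustrative only; the class and method names are invented and do not reflect the actual FNCS API.

        # Minimal decoupled transmission/distribution co-simulation step (hypothetical interfaces).
        class TransmissionSim:
            def solve(self, boundary_load_mva: complex) -> complex:
                # Would run a transient-stability step; returns boundary bus voltage (p.u.).
                return 1.0 - 0.02j * abs(boundary_load_mva) / 100.0

        class DistributionSim:
            def solve(self, boundary_voltage_pu: complex) -> complex:
                # Would run a distribution dynamics step; returns aggregate load (MVA).
                return (5.0 + 1.5j) * abs(boundary_voltage_pu)

        def cosimulate(steps: int, dt: float = 0.01):
            tran, dist = TransmissionSim(), DistributionSim()
            load = 5.0 + 1.5j                  # initial guess at the interface
            for k in range(steps):             # the broker advances both simulators in lock step
                voltage = tran.solve(load)     # T side sees D as a load
                load = dist.solve(voltage)     # D side sees T as a voltage source
                print(f"t={k*dt:.2f}s  V={voltage:.4f} p.u.  S={load:.3f} MVA")

        cosimulate(steps=3)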

  18. Distributed shared memory for roaming large volumes.

    PubMed

    Castanié, Laurent; Mion, Christophe; Cavin, Xavier; Lévy, Bruno

    2006-01-01

We present a cluster-based volume rendering system for roaming very large volumes. This system allows a gigabyte-sized probe to be moved inside a total volume of several tens or hundreds of gigabytes in real time. While the size of the probe is limited by the total amount of texture memory on the cluster, the size of the total data set has no theoretical limit. The cluster is used as a distributed graphics processing unit that aggregates both graphics power and graphics memory. A hardware-accelerated volume renderer runs in parallel on the cluster nodes, and the final image compositing is implemented using a pipelined sort-last rendering algorithm. Meanwhile, volume bricking and volume paging allow efficient data caching. On each rendering node, a distributed hierarchical cache system implements a global software-based distributed shared memory on the cluster. In case of a cache miss, this system first checks page residency on the other cluster nodes instead of directly accessing local disks. Using two Gigabit Ethernet network interfaces per node, we accelerate data fetching by a factor of 4 compared to directly accessing local disks. The system also implements asynchronous disk access and texture loading, which makes it possible to overlap data loading, volume slicing, and rendering for optimal volume roaming.
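
    The cache-miss path described above (local cache, then peer nodes, then disk) can be sketched as follows; the names and the in-process stand-in for the network transport are illustrative, not the paper's implementation.

        # Look up a volume brick locally, then on peers, and only then on disk.
        class BrickCache:
            def __init__(self, node_id, peers):
                self.node_id = node_id
                self.local = {}            # brick_id -> voxel data held in this node's memory
                self.peers = peers         # other nodes' caches (stand-in for the network)

            def fetch(self, brick_id):
                if brick_id in self.local:               # 1. local hit
                    return self.local[brick_id]
                for peer in self.peers:                  # 2. page residency on other nodes
                    data = peer.local.get(brick_id)
                    if data is not None:
                        self.local[brick_id] = data      # cache the remote copy
                        return data
                data = self._read_from_disk(brick_id)    # 3. last resort: local disk
                self.local[brick_id] = data
                return data

            def _read_from_disk(self, brick_id):
                return f"voxels<{brick_id}>"             # placeholder for a real disk read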

  19. Statistical effects related to low numbers of reacting molecules analyzed for a reversible association reaction A + B = C in ideally dispersed systems: An apparent violation of the law of mass action

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.

    2016-03-28

Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher the rate. This results in the correspondingly higher apparent equilibrium constant. Quite the opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, and are lower the lower the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium, and no changes in the proportions of droplets differing in reactant composition can be observed upon prolongation of the reaction time.
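
    A toy stochastic (Gillespie-type) simulation of A + B = C in droplets conveys the setup; the rate constants and molecule counts below are arbitrary, not the paper's.

        # Compare uniform vs. random partitioning of substrates over droplets.
        import random

        def gillespie_droplet(nA, nB, nC, kf=1.0, kr=0.1, t_end=50.0):
            t = 0.0
            while True:
                a_f, a_r = kf * nA * nB, kr * nC        # propensities of the two reactions
                a_tot = a_f + a_r
                if a_tot == 0.0:
                    return nC
                t += random.expovariate(a_tot)
                if t > t_end:
                    return nC
                if random.random() < a_f / a_tot:       # forward: A + B -> C
                    nA, nB, nC = nA - 1, nB - 1, nC + 1
                else:                                   # reverse: C -> A + B
                    nA, nB, nC = nA + 1, nB + 1, nC - 1

        n_drop = 1000

        # Uniform partitioning: every droplet starts with exactly 5 A and 5 B.
        uniform = [gillespie_droplet(5, 5, 0) for _ in range(n_drop)]
        print("mean C per droplet, uniform :", sum(uniform) / n_drop)

        # Random partitioning: the same totals scattered independently over droplets.
        A_counts, B_counts = [0] * n_drop, [0] * n_drop
        for _ in range(5 * n_drop):
            A_counts[random.randrange(n_drop)] += 1
            B_counts[random.randrange(n_drop)] += 1
        randomized = [gillespie_droplet(a, b, 0) for a, b in zip(A_counts, B_counts)]
        print("mean C per droplet, random  :", sum(randomized) / n_drop)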

  20. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  1. Dynamic Task Assignment of Autonomous Distributed AGV in an Intelligent FMS Environment

    NASA Astrophysics Data System (ADS)

    Fauadi, Muhammad Hafidz Fazli Bin Md; Lin, Hao Wen; Murata, Tomohiro

The need for implementing distributed systems is growing significantly, as they have proven effective in keeping organizations flexible against a highly demanding market. Nevertheless, there are still large technical gaps that need to be addressed to gain significant achievements. We propose a distributed architecture to control Automated Guided Vehicle (AGV) operation based on a multi-agent architecture. System architectures and agents' functions have been designed to support distributed control of AGVs. Furthermore, an enhanced agent communication protocol has been configured to accommodate the dynamic attributes of the AGV task assignment procedure. Results proved that the technique successfully provides a better solution.
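
    The task-assignment step can be sketched as a simple auction in which each AGV bids its estimated cost and the cheapest bid wins; the cost model below is invented and far simpler than the paper's protocol.

        # Contract-net-style AGV task assignment sketch.
        from dataclasses import dataclass

        @dataclass
        class AGV:
            name: str
            position: tuple          # (x, y) on the shop floor
            busy_until: float        # time the current job finishes

            def bid(self, task_pos, now):
                wait = max(0.0, self.busy_until - now)
                dist = abs(self.position[0] - task_pos[0]) + abs(self.position[1] - task_pos[1])
                return wait + dist   # simple cost: remaining work plus travel distance

        def assign(task_pos, fleet, now=0.0):
            winner = min(fleet, key=lambda agv: agv.bid(task_pos, now))
            return winner.name

        fleet = [AGV("agv-1", (0, 0), 4.0), AGV("agv-2", (6, 2), 0.0)]
        print(assign((5, 3), fleet))   # -> agv-2: idle and closer to the task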

  2. Programming secure mobile agents in healthcare environments using role-based permissions.

    PubMed

    Georgiadis, C K; Baltatzis, J; Pangalos, G I

    2003-01-01

The healthcare environment consists of vast amounts of dynamic and unstructured information, distributed over a large number of information systems. Mobile agent technology is having an ever-growing impact on the delivery of medical information: it supports acquiring and manipulating information distributed over a large number of information systems, and it is suitable for medical staff untrained in computing. But the introduction of mobile agents generates advanced threats to sensitive healthcare information, unless the proper countermeasures are taken. By applying the role-based approach to the authorization problem, we ease the sharing of information between hospital information systems and reduce the administrative burden. Different initiatives of the agent's migration method result in different methods of assigning roles to the agent.
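
    A minimal sketch of the role-based check a host might apply to a visiting agent; the roles and permissions below are invented for illustration.

        # Role-based permission check for a migrating mobile agent.
        ROLE_PERMISSIONS = {
            "physician": {"read_record", "write_record", "read_lab"},
            "nurse":     {"read_record", "read_lab"},
            "clerk":     {"read_demographics"},
        }

        def agent_may(agent_roles, permission):
            """True if any role carried by the migrating agent grants the permission."""
            return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in agent_roles)

        assert agent_may({"nurse"}, "read_lab")
        assert not agent_may({"clerk"}, "write_record")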

  3. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers, and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
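
    As one concrete example of an iterative solver that maps well onto such machines, a conjugate-gradient loop spends nearly all its time in the matrix-vector product, which parallelizes by partitioning matrix rows across processors. The sketch below uses numpy; the original algorithms were coded in FORTRAN.

        # Conjugate gradient for a symmetric positive definite system A x = b.
        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p                     # the step a parallel code would distribute
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        n = 200
        M = np.random.rand(n, n)
        A = M @ M.T + n * np.eye(n)            # symmetric positive definite test matrix
        b = np.random.rand(n)
        print(np.allclose(A @ conjugate_gradient(A, b), b))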

  4. Space industrialization - Education. [via communication satellites

    NASA Technical Reports Server (NTRS)

    Joels, K. M.

    1978-01-01

    The components of an educational system based on, and perhaps enhanced by, space industrialization communications technology are considered. Satellite technology has introduced a synoptic distribution system for various transmittable educational media. The cost of communications satellite distribution for educational programming has been high. It has, therefore, been proposed to utilize Space Shuttle related technology and Large Space Structures (LSS) to construct a system with a quantum advancement in communication capability and a quantum reduction in user cost. LSS for communications purposes have three basic advantages for both developed and emerging nations, including the ability to distribute signals over wide geographic areas, the reduced cost of satellite communications systems versus installation of land based systems, and the ability of a communication satellite system to create instant educational networks.

  5. Accelerator infrastructure in Europe: EuCARD 2011

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2011-10-01

    The paper presents a digest of the research results in the domain of accelerator science and technology in Europe, shown during the annual meeting of the EuCARD - European Coordination of Accelerator Research and Development. The conference concerns building of the research infrastructure, including in this advanced photonic and electronic systems for servicing large high energy physics experiments. There are debated a few basic groups of such systems like: measurement - control networks of large geometrical extent, multichannel systems for large amounts of metrological data acquisition, precision photonic networks of reference time, frequency and phase distribution.

  6. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g., files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets (files larger than 1 GB) showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure, and was established with open-source code and datasets at http://lab.malab.cn/soft/halign.

  7. Efficient measurement of large light source near-field color and luminance distributions for optical design and simulation

    NASA Astrophysics Data System (ADS)

    Kostal, Hubert; Kreysar, Douglas; Rykowski, Ronald

    2009-08-01

    The color and luminance distributions of large light sources are difficult to measure because of the size of the source and the physical space required for the measurement. We describe a method for the measurement of large light sources in a limited space that efficiently overcomes the physical limitations of traditional far-field measurement techniques. This method uses a calibrated, high dynamic range imaging colorimeter and a goniometric system to move the light source through an automated measurement sequence in the imaging colorimeter's field-of-view. The measurement is performed from within the near-field of the light source, enabling a compact measurement set-up. This method generates a detailed near-field color and luminance distribution model that can be directly converted to ray sets for optical design and that can be extrapolated to far-field distributions for illumination design. The measurements obtained show excellent correlation to traditional imaging colorimeter and photogoniometer measurement methods. The near-field goniometer approach that we describe is broadly applicable to general lighting systems, can be deployed in a compact laboratory space, and provides full near-field data for optical design and simulation.

  8. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  9. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  10. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. It is important to the successful use of distributed systems, in situations in which humans cannot provide the sort of predictable real-time responsiveness of a computer, that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  11. Autonomous Decentralized Voltage Profile Control of Super Distributed Energy System using Multi-agent Technology

    NASA Astrophysics Data System (ADS)

    Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro

A super distributed energy system is a future energy system in which a large part of the demand is fed by a huge number of distributed generators. Some nodes in the super distributed energy system behave as loads at one time and as generators at another; the characteristic of each node depends on the customers' decisions. In such a situation, it is very difficult to regulate the voltage profile over the system because of the complexity of power flows. This paper proposes a novel control method for distributed generators that achieves autonomous, decentralized voltage profile regulation by using multi-agent technology. The proposed multi-agent system employs two types of agent: a control agent and a mobile agent. Control agents generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents transmit the information necessary for VQ control among the control agents. The proposed control method is tested through numerical simulations.

  12. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system consistent with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter, pipe thickness, and insulation thickness distributions associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of diseconomy of scale.
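
    The trade-off can be illustrated with a toy cost model: capital cost grows with pipe diameter and insulation thickness, pump work falls steeply with diameter, and heat leak falls with insulation. All coefficients below are invented, not the paper's data.

        # Grid search for the diameter/insulation pair minimizing annual cost.
        def annual_cost(diameter_m, insulation_m,
                        c_pipe=800.0, c_ins=300.0, c_pump=2.0, c_heat=0.15, length_m=1000.0):
            capital = length_m * (c_pipe * diameter_m + c_ins * insulation_m)
            pump_work = c_pump * length_m / diameter_m ** 5      # ~Darcy-Weisbach scaling
            heat_leak = c_heat * length_m / (0.01 + insulation_m)
            return capital + pump_work + heat_leak

        best = min(
            ((d / 100.0, t / 100.0) for d in range(5, 61) for t in range(1, 31)),
            key=lambda dt: annual_cost(*dt),
        )
        print("optimal diameter %.2f m, insulation %.2f m" % best)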

  13. Assessment of distributed solar power systems: Issues and impacts

    NASA Astrophysics Data System (ADS)

    Moyle, R. A.; Chernoff, H.; Schweizer, T. C.; Patton, J. B.

    1982-11-01

    The installation of distributed solar-power systems presents electric utilities with a host of questions. Some of the technical and economic impacts of these systems are discussed. Among the technical interconnect issues are isolated operation, power quality, line safety, and metering options. Economic issues include user purchase criteria, structures and installation costs, marketing and product distribution costs, and interconnect costs. An interactive computer program that allows easy calculation of allowable system prices and allowable generation-equipment prices was developed as part of this project. It is concluded that the technical problems raised by distributed solar systems are surmountable, but their resolution may be costly. The stringent purchase criteria likely to be imposed by many potential system users and the economies of large-scale systems make small systems (less than 10 to 20 kW) less attractive than larger systems. Utilities that consider life-cycle costs in making investment decisions and third-party investors who have tax and financial advantages are likely to place the highest value on solar-power systems.

  14. ADMS Evaluation Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2018-01-23

Deploying an ADMS or looking to optimize its value? NREL offers a low-cost, low-risk evaluation platform for assessing ADMS performance. The National Renewable Energy Laboratory (NREL) has developed a vendor-neutral advanced distribution management system (ADMS) evaluation platform and is expanding its capabilities. The platform uses actual grid-scale hardware, large-scale distribution system models, and advanced visualization to simulate real-world conditions for the most accurate ADMS evaluation and experimentation.

  15. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic database which, in addition to usual keyword searches, is used for producing citations, indexes, and cross-references. The database contains UNIX(R) "refer"-formatted ASCII data and is implemented on any computer running under the UNIX(R) operating system; it is easily convertible to other operating systems. It requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.

  16. Remote maintenance monitoring system

    NASA Technical Reports Server (NTRS)

    Simpkins, Lorenz G. (Inventor); Owens, Richard C. (Inventor); Rochette, Donn A. (Inventor)

    1992-01-01

A remote maintenance monitoring system retrofits to a given hardware device with a sensor implant which gathers and captures failure data from the hardware device without interfering with its operation. Failure data is continuously obtained from predetermined critical points within the hardware device and is analyzed with a diagnostic expert system, which isolates failure origin to a particular component within the hardware device. For example, monitoring of a computer-based device may include monitoring of parity error data therefrom, as well as monitoring power supply fluctuations therein, so that parity error and power supply anomaly data may be used to trace the failure origin to a particular plane or power supply within the computer-based device. A plurality of sensor implants may be retrofit to corresponding plural devices comprising a distributed large-scale system. Transparent interface of the sensors to the devices precludes operative interference with the distributed network. Retrofit capability of the sensors permits monitoring of even older devices having no built-in testing technology. Continuous real-time monitoring of a distributed network of such devices, coupled with diagnostic expert system analysis thereof, permits capture and analysis of even intermittent failures, thereby facilitating maintenance of the monitored large-scale system.

  17. Design considerations for large space electric power systems

    NASA Technical Reports Server (NTRS)

    Renz, D. D.; Finke, R. C.; Stevens, N. J.; Triner, J. E.; Hansen, I. G.

    1983-01-01

As power levels of spacecraft rise to the 50 to 100 kW range, it becomes apparent that low-voltage (28 V) dc power distribution and management systems will not operate efficiently at these higher power levels. The concept of transforming a solar array voltage of 150 V dc into a 1000 V ac distribution system operating at 20 kHz is examined. The transformation is accomplished with a series-resonant inverter, using a rotary transformer to isolate the solar array from the spacecraft. The power can then be distributed in any desired method, such as three-phase delta to delta. The distribution voltage can be easily transformed to any desired load voltage and operating frequency. The reasons for the voltage limitations on the solar array due to plasma interactions and the many advantages of a high-voltage, high-frequency ac distribution system are discussed.

  18. Determination of optimum allocation and pricing of distributed generation using genetic algorithm methodology

    NASA Astrophysics Data System (ADS)

    Mwakabuta, Ndaga Stanslaus

Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses and to economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. At the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. At the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. At the advanced level, an artificial intelligence technique is applied to minimize cost under distributed generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on the genetic algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities decide how to operate their system economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some advantages.
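
    A bare-bones genetic algorithm of the kind applied in the dissertation, with an invented stand-in fitness (loss savings minus purchase cost) rather than the actual distribution-system model:

        # Chromosome = (DG size in MW, DG price in $/MWh); higher fitness is better.
        import random

        def fitness(chromosome):
            size, price = chromosome
            loss_saving = 40.0 * size - 3.0 * size ** 2      # diminishing loss reduction
            purchase_cost = price * size
            return loss_saving - purchase_cost

        def evolve(pop_size=40, generations=60):
            pop = [(random.uniform(0, 10), random.uniform(0, 50)) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]               # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)       # crossover
                    child = (max(0.0, child[0] + random.gauss(0, 0.3)),  # mutation
                             max(0.0, child[1] + random.gauss(0, 1.0)))
                    children.append(child)
                pop = parents + children
            return max(pop, key=fitness)

        print("best size/price:", evolve())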

  19. Combatting Inherent Vulnerabilities of CFAR Algorithms and a New Robust CFAR Design

    DTIC Science & Technology

    1993-09-01

elements of any automatic radar system. Unfortunately, CFAR systems are inherently vulnerable to degradation caused by large clutter edges, multiple targets, and electronic countermeasures (ECM) environments. This thesis presents eight popular and studied

  20. A distributed microcomputer-controlled system for data acquisition and power spectral analysis of EEG.

    PubMed

    Vo, T D; Dwyer, G; Szeto, H H

    1986-04-01

A relatively powerful and inexpensive microcomputer-based system for the spectral analysis of the EEG is presented. High resolution and speed are achieved with the use of recently available large-scale integrated circuit technology with enhanced functionality (INTEL 8087 math co-processor), which can perform transcendental functions rapidly. The versatility of the system is achieved with a hardware organization that has distributed data acquisition capability, performed by a microprocessor-based analog-to-digital converter with large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly language subroutines perform, on-line or off-line, the fast Fourier transform and spectral analysis of the EEG, which is stored as soft as well as hard copy. Some results obtained from test application of the entire system in animal studies are presented.
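
    A modern one-file equivalent of the pipeline (numpy standing in for the 8087-assisted FFT routines); the signal and band definitions are illustrative.

        # Digitize a signal, apply the FFT, and reduce it to EEG band power.
        import numpy as np

        fs = 256.0                                   # sampling rate, Hz (illustrative)
        t = np.arange(0, 4.0, 1.0 / fs)              # 4 s epoch
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # 10 Hz alpha + noise

        spectrum = np.fft.rfft(eeg * np.hanning(eeg.size))
        freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
        power = np.abs(spectrum) ** 2

        bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
        for name, (lo, hi) in bands.items():
            mask = (freqs >= lo) & (freqs < hi)
            print(f"{name:5s} power: {power[mask].sum():10.1f}")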

  1. Asynchronous transfer mode distribution network by use of an optoelectronic VLSI switching chip.

    PubMed

    Lentine, A L; Reiley, D J; Novotny, R A; Morrison, R L; Sasian, J M; Beckman, M G; Buchholz, D B; Hinterlong, S J; Cloonan, T J; Richards, G W; McCormick, F B

    1997-03-10

    We describe a new optoelectronic switching system demonstration that implements part of the distribution fabric for a large asynchronous transfer mode (ATM) switch. The system uses a single optoelectronic VLSI modulator-based switching chip with more than 4000 optical input-outputs. The optical system images the input fibers from a two-dimensional fiber bundle onto this chip. A new optomechanical design allows the system to be mounted in a standard electronic equipment frame. A large section of the switch was operated as a 208-Mbits/s time-multiplexed space switch, which can serve as part of an ATM switch by use of an appropriate out-of-band controller. A larger section with 896 input light beams and 256 output beams was operated at 160 Mbits/s as a slowly reconfigurable space switch.

  2. Distributed magnetic field positioning system using code division multiple access

    NASA Technical Reports Server (NTRS)

    Prigge, Eric A. (Inventor)

    2003-01-01

An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large "building-sized" coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
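
    The CDMA idea can be sketched in a few lines: each beacon modulates its field with a distinct code, and the sensor recovers each component by correlating the summed measurement with that code. The random +/-1 chips below are a stand-in for the actual spreading sequences.

        # Despread a summed, coded field into per-beacon amplitudes.
        import numpy as np

        rng = np.random.default_rng(1)
        n_chips, n_beacons = 1024, 4
        codes = rng.choice([-1.0, 1.0], size=(n_beacons, n_chips))   # near-orthogonal codes
        true_amplitudes = np.array([0.9, 0.1, 0.5, 0.3])             # per-beacon field at sensor

        measured = true_amplitudes @ codes + 0.05 * rng.standard_normal(n_chips)  # sum field

        recovered = codes @ measured / n_chips    # correlate with each code and normalize
        print(recovered.round(2))                 # close to the true amplitudes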

  3. Large fluctuations of the macroscopic current in diffusive systems: a numerical test of the additivity principle.

    PubMed

    Hurtado, Pablo I; Garrido, Pedro L

    2010-04-01

Most systems, when pushed out of equilibrium, respond by building up currents of locally conserved observables. Understanding how microscopic dynamics determines the averages and fluctuations of these currents is one of the main open problems in nonequilibrium statistical physics. The additivity principle is a theoretical proposal that allows one to compute the current distribution in many one-dimensional nonequilibrium systems. Using simulations, we validate this conjecture in a simple and general model of energy transport, both in the presence of a temperature gradient and in canonical equilibrium. In particular, we show that the current distribution displays a Gaussian regime for small current fluctuations, as prescribed by the central limit theorem, and non-Gaussian (exponential) tails for large current deviations, obeying in all cases the Gallavotti-Cohen fluctuation theorem. In order to facilitate a given current fluctuation, the system adopts a well-defined temperature profile different from that of the steady state and in accordance with the additivity hypothesis predictions. System statistics during a large current fluctuation are independent of the sign of the current, which implies that the optimal profile (as well as higher-order profiles and spatial correlations) is invariant upon current inversion. We also demonstrate that finite-time joint fluctuations of the current and the profile are well described by the additivity functional. These results suggest the additivity hypothesis as a general and powerful tool to compute current distributions in many nonequilibrium systems.

  4. Design considerations, architecture, and use of the Mini-Sentinel distributed data system.

    PubMed

    Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S

    2012-01-01

    We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.

  5. The impact of a large penetration of intermittent sources on the power system operation and planning

    NASA Astrophysics Data System (ADS)

    Ausin, Juan Carlos

This research investigated the impact on the power system of a large penetration of intermittent renewable sources, mainly wind and photovoltaic generation. Currently, electrical utilities deal with wind and PV plants as if they were sources of negative demand; that is to say, they have no control over the power output produced. In this way, the grid absorbs all the power fluctuation as if it were coming from a common load. With the level of wind penetration growing so quickly, there is growing concern amongst the utilities and the grid operators, as they will have to deal with a much higher level of fluctuation. In the same way, the potential cost reduction of PV technologies suggests that a similar development may be expected for solar production in the mid term. The first part of the research focused on the issues that affect utility planning and reinforcement decision making. Although DG is located mainly on the distribution network, a large penetration may alter the flows not only on the distribution lines but also on the transmission system and through the transmission-distribution interfaces. The optimal capacity and production costs for the UK transmission network have been calculated for several combinations of load profiles and typical wind/PV output scenarios. A full economic analysis is developed, showing the benefits and disadvantages that a large penetration of these distributed generators may have on transmission system operator reinforcement strategies. Closely related to planning factors are institutional, regulatory, and economic considerations, such as transmission pricing, which may hamper the integration of renewable energy technologies into the electric utility industry. The second part of the research addressed the impact of intermittent renewable energy technologies on the second-by-second, minute-by-minute, and half-hour-by-half-hour operation of power systems. If a large integration of these new generators partially replaces the conventional rotating machines, the aggregate fluctuation starts to become an important factor and should be taken into account in the calculation of the balancing requirements. Additional balancing requirements would increase the total balancing cost, and this could hinder the future development of intermittent sources.

  6. Definition, analysis and development of an optical data distribution network for integrated avionics and control systems. Part 2: Component development and system integration

    NASA Technical Reports Server (NTRS)

    Yen, H. W.; Morrison, R. J.

    1984-01-01

    Fiber optic transmission is emerging as an attractive concept in data distribution onboard civil aircraft. Development of an Optical Data Distribution Network for Integrated Avionics and Control Systems for commercial aircraft will provide a data distribution network that gives freedom from EMI-RFI and ground loop problems, eliminates crosstalk and short circuits, provides protection and immunity from lightning induced transients and give a large bandwidth data transmission capability. In addition there is a potential for significantly reducing the weight and increasing the reliability over conventional data distribution networks. Wavelength Division Multiplexing (WDM) is a candidate method for data communication between the various avionic subsystems. With WDM all systems could conceptually communicate with each other without time sharing and requiring complicated coding schemes for each computer and subsystem to recognize a message. However, the state of the art of optical technology limits the application of fiber optics in advanced integrated avionics and control systems. Therefore, it is necessary to address the architecture for a fiber optics data distribution system for integrated avionics and control systems as well as develop prototype components and systems.

  7. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  8. Spatial Variation in the Invertebrate Macrobenthos of Three Large Missouri River Reservoirs

    EPA Science Inventory

    Benthic macroinvertebrates assemblages are useful indicators of ecological condition for aquatic systems. This study was conducted to characterize benthic communities of three large reservoirs on the Missouri River. The information collected on abundance, distribution and varia...

  9. Multiagent model and mean field theory of complex auction dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, lowest unique bid auction (LUBA) systems, a recently emerged class of online auction games. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
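
    A minimal agent-based LUBA round in the spirit of the model; the exponential bid law is a simple stand-in, not the paper's fitted distribution.

        # Simulate rounds in which the lowest unique bid wins.
        import random
        from collections import Counter

        def luba_round(n_agents=200, max_price=100):
            bids = [min(max_price, 1 + int(random.expovariate(0.08))) for _ in range(n_agents)]
            counts = Counter(bids)
            unique = sorted(p for p, c in counts.items() if c == 1)
            return unique[0] if unique else None     # lowest unique bid, if any

        winners = Counter(luba_round() for _ in range(5000))
        for price in sorted(p for p in winners if p is not None)[:10]:
            print(f"winning bid {price:3d}: {winners[price]} rounds")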

  10. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  11. Solar simulator for concentrator photovoltaic systems.

    PubMed

    Domínguez, César; Antón, Ignacio; Sala, Gabriel

    2008-09-15

A solar simulator for measuring the performance of large-area concentrator photovoltaic (CPV) modules is presented. Its illumination system is based on a Xenon flash light and a large-area collimator mirror, which simulates natural sunlight. The quality requirements imposed by CPV systems have been characterized: irradiance level and uniformity at the receiver, light collimation, and spectral distribution. The simulator allows fast and cost-effective indoor performance characterization and classification of CPV systems at the production line, as well as module rating carried out by laboratories.

  12. Concept for a power system controller for large space electrical power systems

    NASA Technical Reports Server (NTRS)

    Lollar, L. F.; Lanier, J. R., Jr.; Graves, J. R.

    1981-01-01

The development of technology for a fail-operational power system controller (PSC) utilizing microprocessor technology for managing the distribution and power processor subsystems of a large multi-kW space electrical power system is discussed. The specific functions which must be performed by the PSC, the best microprocessor available to do the job, and the feasibility, cost savings, and applications of a PSC were determined. A limited-function breadboard version of a PSC was developed to demonstrate the concept and potential cost savings.

  13. VST project: distributed control system overview

    NASA Astrophysics Data System (ADS)

    Mancini, Dario; Mazzola, Germana; Molfese, C.; Schipani, Pietro; Brescia, Massimo; Marty, Laurent; Rossi, Emilio

    2003-02-01

The VLT Survey Telescope (VST) is a co-operative program between the European Southern Observatory (ESO) and the INAF Capodimonte Astronomical Observatory (OAC), Naples, for the study, design, and realization of a 2.6-m wide-field optical imaging telescope to be operated at the Paranal Observatory, Chile. The telescope design, manufacturing, and integration are the responsibility of OAC. The VST has been specifically designed to carry out stand-alone observations in the UV to I spectral range and to supply target databases for the ESO Very Large Telescope (VLT). The control hardware is based on extensive use of distributed, embedded, specialized controllers specifically designed, prototyped, and manufactured by the Technology Working Group for the VST project. The use of a field bus improves whole-system reliability in terms of high-level flexibility and control speed, and allows a drastic reduction of the plant distribution in the instrument. The paper describes the philosophy and the architecture of the VST control HW, with particular reference to the advantages of this distributed solution for the VST project.

  14. Service Discovery Oriented Management System Construction Method

    NASA Astrophysics Data System (ADS)

    Li, Huawei; Ren, Ying

    2017-10-01

In order to solve the problem that there is no uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a distributed, service-discovery-oriented management system construction method. Three measurement functions are proposed to compute nearest-neighbor user similarity at different levels. In view of the low efficiency of present service quality management systems, three solutions are proposed to improve system efficiency. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.

  15. Multi-agent systems and their applications

    DOE PAGES

    Xie, Jing; Liu, Chen-Ching

    2017-07-14

The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on the system architecture, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  16. Multi-agent systems and their applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Jing; Liu, Chen-Ching

The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amount of data. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on the system architecture, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.

  17. Symmetry of interactions rules in incompletely connected random replicator ecosystems.

    PubMed

    Kärenlampi, Petri P

    2014-06-01

The evolution of an incompletely connected system of species with speciation and extinction is investigated in terms of random replicators. It is found that evolving random replicator systems with speciation do become large and complex, depending on speciation parameters. Antisymmetric interactions result in large systems, whereas systems with symmetric interactions remain small. A co-dominating feature is within-species interaction pressure: large within-species interaction increases species diversity. Average fitness evolves in all systems; however, symmetry and connectivity evolve in small systems only. Newcomers become extinct almost immediately in symmetric systems. The distribution of species lifetimes is determined for antisymmetric systems. The replicator systems investigated do not show any sign of self-organized criticality. The generalized Lotka-Volterra system is shown to be a tedious way of implementing the replicator system.

  18. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
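
    The model is simple enough to simulate directly, as sketched below; the empirical wealth histogram decays roughly geometrically, the discrete analogue of the exponential Boltzmann-Gibbs law.

        # Dollar-exchange model: pick a giver and a receiver at random each step.
        import random
        from collections import Counter

        n_agents, avg_money, steps = 1000, 10, 2_000_000
        money = [avg_money] * n_agents

        for _ in range(steps):
            giver, receiver = random.randrange(n_agents), random.randrange(n_agents)
            if money[giver] > 0 and giver != receiver:   # givers with no money are skipped
                money[giver] -= 1
                money[receiver] += 1

        hist = Counter(money)
        for m in range(0, 30, 5):
            print(f"agents with ${m:2d}: {hist.get(m, 0)}")   # roughly geometric decay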

  19. Optimal Operation and Management for Smart Grid Subsumed High Penetration of Renewable Energy, Electric Vehicle, and Battery Energy Storage System

    NASA Astrophysics Data System (ADS)

    Shigenobu, Ryuto; Noorzad, Ahmad Samim; Muarapaz, Cirio; Yona, Atsushi; Senjyu, Tomonobu

    2016-04-01

Distributed generators (DGs) and renewable energy sources have been attracting special attention in distribution systems all over the world. Renewable energies such as photovoltaic (PV) and wind turbine generators are considered green energy. However, a large amount of DG penetration causes voltage deviation beyond the statutory range and reverse power flow at interconnection points in the distribution system. If excessive voltage deviation occurs, consumers' electric devices might break, and reverse power flow also has a negative impact on the transmission system. Thus, mass interconnection of DGs has an adverse effect on both the utility and the customer. Therefore, a reactive power control method using the inverters attached to DGs was proposed in previous research to prevent voltage deviations. Moreover, battery energy storage systems (BESSs) have been proposed to resolve reverse power flow. In addition, it is possible to supply high-quality power by managing DGs and BESSs. Therefore, this paper proposes a method to maintain voltage, active power, and reactive power flow at interconnection points by cooperative control of PVs, house BESSs, EVs, large BESSs, and existing voltage control devices. This approach not only protects the distribution system but also attains distribution loss reduction and effective management of control devices. The control objectives are formulated as an optimization problem that is solved by using the Particle Swarm Optimization (PSO) algorithm. A modified scheduling method is proposed in order to improve the convergence probability of the scheduling scheme. The effectiveness of the proposed method is verified by case studies and numerical simulations in MATLAB®.
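
    A compact PSO loop of the kind used here, minimizing an invented proxy objective (squared voltage deviation at two nodes) rather than the paper's full distribution-system model:

        # Particle swarm optimization over two control variables.
        import random

        def objective(x):
            v1, v2 = x
            return (v1 - 1.0) ** 2 + (v2 - 1.0) ** 2     # proxy for voltage-profile error

        def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            pos = [[random.uniform(0.9, 1.1) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            gbest = min(pbest, key=objective)
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * random.random() * (pbest[i][d] - pos[i][d])
                                     + c2 * random.random() * (gbest[d] - pos[i][d]))
                        pos[i][d] += vel[i][d]
                    if objective(pos[i]) < objective(pbest[i]):
                        pbest[i] = pos[i][:]
                        if objective(pbest[i]) < objective(gbest):
                            gbest = pbest[i][:]
            return gbest

        print("optimum near [1, 1]:", [round(v, 4) for v in pso()])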

  20. Spatial analysis and characteristics of pig farming in Thailand.

    PubMed

    Thanapongtharm, Weerapong; Linard, Catherine; Chinson, Pornpiroon; Kasemsuwan, Suwicha; Visser, Marjolein; Gaughan, Andrea E; Epprech, Michael; Robinson, Timothy P; Gilbert, Marius

    2016-10-06

    In Thailand, pig production intensified significantly during the last decade, with many economic, epidemiological and environmental implications. Strategies toward more sustainable future developments are currently investigated, and these could be informed by a detailed assessment of the main trends in the pig sector, and on how different production systems are geographically distributed. This study had two main objectives. First, we aimed to describe the main trends and geographic patterns of pig production systems in Thailand in terms of pig type (native, breeding, and fattening pigs), farm scales (smallholder and large-scale farming systems) and type of farming systems (farrow-to-finish, nursery, and finishing systems) based on a very detailed 2010 census. Second, we aimed to study the statistical spatial association between these different types of pig farming distribution and a set of spatial variables describing access to feed and markets. Over the last decades, pig population gradually increased, with a continuously increasing number of pigs per holder, suggesting a continuing intensification of the sector. The different pig-production systems showed very contrasted geographical distributions. The spatial distribution of large-scale pig farms corresponds with that of commercial pig breeds, and spatial analysis conducted using Random Forest distribution models indicated that these were concentrated in lowland urban or peri-urban areas, close to means of transportation, facilitating supply to major markets such as provincial capitals and the Bangkok Metropolitan region. Conversely the smallholders were distributed throughout the country, with higher densities located in highland, remote, and rural areas, where they supply local rural markets. A limitation of the study was that pig farming systems were defined from the number of animals per farm, resulting in their possible misclassification, but this should have a limited impact on the main patterns revealed by the analysis. The very contrasted distribution of different pig production systems present opportunities for future regionalization of pig production. More specifically, the detailed geographical analysis of the different production systems will be used to spatially-inform planning decisions for pig farming accounting for the specific health, environment and economical implications of the different pig production systems.

  1. Fission meter and neutron detection using Poisson distribution comparison

    DOEpatents

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution is performed to distinguish fissile material from non-fissile material.
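
    As a loose illustration of the comparison step (not the patented method itself), one common statistic is the variance-to-mean ratio of neutron counts in fixed time gates: a Poisson (non-fissile) source gives a ratio near 1, while correlated fission chains push it above 1. The gate rates and burst model below are arbitrary assumptions.

      import numpy as np

      def excess_over_poisson(counts):
          # Variance-to-mean ratio of per-gate neutron counts:
          # ~1.0 for a Poisson (random) source; >1 indicates grouped
          # neutrons characteristic of fission chains.
          counts = np.asarray(counts, dtype=float)
          return counts.var() / counts.mean()

      rng = np.random.default_rng(1)
      poisson_src = rng.poisson(4.0, 10_000)            # background-like source
      # Crude stand-in for fission: occasional bursts of correlated neutrons.
      fission_src = rng.poisson(4.0, 10_000) + 5 * rng.binomial(1, 0.1, 10_000)

      print(excess_over_poisson(poisson_src))   # ~1.0
      print(excess_over_poisson(fission_src))   # noticeably > 1.0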

  2. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

    The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  3. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

    The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full advantage of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
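
    Both records describe a finite state machine (FSM) framework driving hierarchical farms and services. The BaBar code itself is not shown in these abstracts, so the following is a minimal, generic Python sketch of a table-driven FSM of the kind such control systems use; the state and event names are invented for illustration.

      class FSM:
          """Tiny table-driven finite state machine."""
          def __init__(self, start, transitions):
              # transitions: {(state, event): (next_state, action)}
              self.state, self.table = start, transitions

          def fire(self, event):
              next_state, action = self.table[(self.state, event)]
              if action:
                  action()
              self.state = next_state

      # Hypothetical processing model for one farm (names are illustrative).
      farm = FSM("idle", {
          ("idle",    "start"): ("running", lambda: print("spawning jobs")),
          ("running", "done"):  ("idle",    lambda: print("merging output")),
          ("running", "error"): ("halted",  lambda: print("paging operator")),
      })
      farm.fire("start")   # idle -> running
      farm.fire("done")    # running -> idle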

  4. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design that uses fully remotely managed components, enabling the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes, such as measurement verification and measurement system analysis, is also discussed.

  5. A search for applications of Fiber Optics in early warning systems for natural hazards.

    NASA Astrophysics Data System (ADS)

    Wenker, Koen; Bogaard, Thom

    2013-04-01

    In order to reduce the societal risk associated with natural hazards, novel technologies could help advance early warning systems. In our study we evaluate the use of multi-sensor technologies as possible early-warning systems for landslides and man-made structures, and the integration of the information in a simple Decision Support System (DSS). In this project, particular attention is paid to new possibilities in the field of distributed monitoring of parameters relevant to landslides and man-made structures (such as large dams and bridges), among them the distributed monitoring of temperature, strain, and acoustic signals by fiber optic (FO) cables. Fiber optic measurements are becoming more and more popular. Fiber optic cables were developed in the telecommunication business to send large amounts of information over large distances at the speed of light; because of this commercial application, production costs are relatively low. Using fiber optics for measurement has several advantages: the technology is immune to electromagnetic interference, appears stable and very accurate, and has the potential to measure several independent physical properties in a distributed manner. The high-resolution spatial and temporal distributed information on, e.g., temperature or strain (or both) makes fiber optics an interesting measurement technique. Several applications have been developed in both engineering and science, and the possibilities seem numerous. We will present a thorough literature review that was done to assess the applicability and limitations of FO cable technology. This review was focused on, but not limited to, applications in landslide research. Several examples of current practice will be shown, also from outside natural hazard practice, and possible applications will be discussed.

  6. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.

  7. Distributed intelligent urban environment monitoring system

    NASA Astrophysics Data System (ADS)

    Du, Jinsong; Wang, Wei; Gao, Jie; Cong, Rigang

    2018-02-01

    The current environmental pollution and destruction have developed into a worldwide major social problem that threatens human survival and development. Environmental monitoring is the prerequisite and basis of environmental governance but, overall, the current environmental monitoring system faces a series of problems. Based on electrochemical sensors, this paper designs a small, low-cost, easy-to-deploy urban environmental quality monitoring terminal, and multiple terminals constitute a distributed network. The system has seen small-scale demonstration applications, which have confirmed that it is suitable for large-scale promotion.

  8. Space platform utilities distribution study

    NASA Technical Reports Server (NTRS)

    Lefever, A. E.

    1980-01-01

    Generic concepts for the installation of power, data, and thermal fluid distribution lines on large space platforms were discussed. Connections with central utility subsystem modules and pallet interfaces were also considered. Three system concept study platforms were used as basepoints for the detailed development. The tradeoff between high-voltage and low-voltage power distribution and the impact of fiber optics as a data distribution mechanism were analyzed. Thermal expansion and temperature control of utility lines and ducts were considered. Technology developments required for implementation of the generic distribution concepts were identified.

  9. Sensor Needs for Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Graf, John C.

    2000-01-01

    Sensors and feedback systems are critical to life support flight systems and life support systems research. New sensor capabilities can allow for new system architectures to be considered, and can facilitate dramatic improvements in system performance. This paper will describe three opportunities for biosensor researchers to develop sensors that will enable life support system improvements. The first opportunity relates to measuring physical, chemical, and biological parameters in the Space Station Water Processing System. Measuring pH, iodine, total organic carbon, microbiological activity, total dissolved solids, or conductivity with a safe, effective, stable, reliable microsensor could benefit the water processing system considerably. Of special interest is a sensor that can monitor biological contamination rapidly. The second opportunity relates to sensing microbiological contamination and water condensation on the surface of large inflatable structures. Large inflatable structures used for habitation are intended to take advantage of their large surface area and reject waste heat passively through their walls. Too much heat rejection leads to a cold spot with water condensation, and eventually microbiological contamination. A distributed sensor system that can measure temperature, humidity, and microbiological contamination across a large surface would benefit designers of large inflatable habitable structures. The third opportunity relates to sensing in microbial bioreactors used for waste water processing and reuse. Microbiological bioreactors offer considerable advantages in weight and power compared to adsorption-bed-based systems when used for long periods of time. Managing and controlling bioreactors is greatly helped if distributed microsensors measure the biological populations continuously in many locations within the bioreactor. Nitrifying bacteria are of special interest to bioreactor designers, and any sensors that could measure the populations of these types of bacteria would help the control and operation of bioreactors.

  10. On the Path to SunShot - Emerging Issues and Challenges with Integrating High Levels of Solar into the Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Broderick, Robert; Mather, Barry

    2016-05-01

    Wide use of advanced inverters could double the electricity-distribution system's hosting capacity for distributed PV at low cost, from about 170 GW to 350 GW (see Palmintier et al. 2016). At the distribution system level, increased variable generation due to high penetrations of distributed PV (typically rooftop and smaller ground-mounted systems) could challenge the management of distribution voltage, potentially increase wear and tear on electromechanical utility equipment, and complicate the configuration of circuit-breakers and other protection systems, all of which could increase costs, limit further PV deployment, or both. However, improved analysis of distribution system hosting capacity (the amount of distributed PV that can be interconnected without changing the existing infrastructure or prematurely wearing out equipment) has overturned previous rule-of-thumb assumptions such as the idea that distributed PV penetrations higher than 15% require detailed impact studies. For example, new analysis suggests that the hosting capacity for distributed PV could rise from approximately 170 GW using traditional inverters to about 350 GW with the use of advanced inverters for voltage management, and it could be even higher using accessible and low-cost strategies such as careful siting of PV systems within a distribution feeder and additional minor changes in distribution operations. Also critical to facilitating distributed PV deployment is the improvement of interconnection processes, associated standards and codes, and compensation mechanisms so they embrace PV's contributions to system-wide operations. Ultimately SunShot-level PV deployment will require unprecedented coordination of the historically separate distribution and transmission systems along with incorporation of energy storage and "virtual storage," which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Additional analysis and innovation are needed.

  11. MODELING THE POTENTIAL SPATIAL DISTRIBUTION OF BEEF CATTLE GRAZING USING A GEOGRAPHIC INFORMATION SYSTEM

    EPA Science Inventory

    Data regarding grazing utilization in the western United States are typically compiled within administrative boundaries(e.g. allotment,pasture). For large areas, an assumption of uniform distribution is seldom valid. Previous studies show that vegetation type, degree of slope, an...

  12. Real Time Text Analysis

    NASA Astrophysics Data System (ADS)

    Senthilkumar, K.; Ruchika Mehra Vijayan, E.

    2017-11-01

    This paper aims to illustrate real-time analysis of large-scale data. For practical implementation we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiments we train our data model on SentiWordNet, a polarity-annotated sample of Princeton University's WordNet. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
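
    As a rough illustration of the approach (the paper uses Java on Spark/Hadoop; this sketch uses PySpark for brevity), the polarity lexicon below is a tiny invented stand-in for SentiWordNet, and the tweet list stands in for the live Twitter stream.

      from pyspark.sql import SparkSession

      # Tiny invented stand-in for SentiWordNet polarity scores.
      LEXICON = {"good": 0.6, "great": 0.8, "bad": -0.7, "awful": -0.9}

      def tweet_score(text):
          # Sum word polarities; unknown words score 0.
          return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

      spark = SparkSession.builder.appName("sentiment-sketch").getOrCreate()
      tweets = spark.sparkContext.parallelize([   # stand-in for the live feed
          "great match today", "awful traffic", "good food bad service",
      ])
      for text, score in tweets.map(lambda t: (t, tweet_score(t))).collect():
          print(f"{score:+.1f}  {text}")
      spark.stop()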

  13. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) A comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services. The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to meteorological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation system); (3) Coupling large-scale computing and data systems to scientific and engineering instruments (e.g., real-time interaction with experiments through real-time data analysis and interpretation presented to the experimentalist in ways that allow direct interaction with the experiment, instead of just with instrument control); (4) Highly interactive, augmented reality and virtual reality remote collaborations (e.g., Ames / Boeing Remote Help Desk providing field maintenance use of coupled video and NDI to a remote, on-line airframe structures expert who uses this data to index into detailed design databases, and returns 3D internal aircraft geometry to the field); (5) Single computational problems too large for any single system (e.g., the rotorcraft reference calculation). Grids also have the potential to provide pools of resources that could be called on in extraordinary / rapid response situations (such as disaster response) because they can provide common interfaces and access mechanisms, standardized management, and uniform user authentication and authorization, for large collections of distributed resources (whether or not they normally function in concert). IPG development and deployment is addressing requirements obtained by analyzing a number of different application areas, in particular from the NASA Aero-Space Technology Enterprise. This analysis has focussed primarily on two types of users: the scientist / design engineer whose primary interest is problem solving (e.g., determining wing aerodynamic characteristics in many different operating environments), and whose primary interface to IPG will be through various sorts of problem solving frameworks. The second type of user is the tool designer: the computational scientist who converts physics and mathematics into code that can simulate the physical world. These are the two primary users of IPG, and they have rather different requirements. The results of the analysis of the needs of these two types of users provide a broad set of requirements that gives rise to a general set of required capabilities. The IPG project is intended to address all of these requirements. In some cases the required computing technology exists, and in some cases it must be researched and developed. The project is using available technology to provide a prototype set of capabilities in a persistent distributed computing testbed. Beyond this, there are required capabilities that are not immediately available, and whose development spans the range from near-term engineering development (one to two years) to much longer term R&D (three to six years). Additional information is contained in the original.

  14. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  15. Seasonal and spatial variability of nitrosamines and their precursor sources at a large-scale urban drinking water system.

    PubMed

    Woods, Gwen C; Trenholm, Rebecca A; Hale, Bruce; Campbell, Zeke; Dickenson, Eric R V

    2015-07-01

    Nitrosamines are considered to pose greater health risks than currently regulated disinfection by-products (DBPs) and are consequently listed as priority pollutants by the EPA, with potential for future regulation. Denver Water, as part of the EPA's Unregulated Contaminant Monitoring Rule 2 (UCMR2) monitoring campaign, found detectable levels of N-nitrosodimethylamine (NDMA) at all sites of maximum residency within the distribution system. To better understand the occurrence of nitrosamines and nitrosamine precursors, Denver Water undertook a comprehensive year-long monitoring campaign. Samples were taken every two weeks to monitor for NDMA in the distribution system, and quarterly sampling events further examined 9 nitrosamines and nitrosamine precursors throughout the treatment and distribution systems. NDMA levels within the distribution system were typically low (<1.3 to 7.2 ng/L), with a remote distribution site (frequently >200 h of residency) experiencing the highest concentrations. Eight other nitrosamines (N-nitrosomethylethylamine, N-nitrosodiethylamine, N-nitroso-di-n-propylamine, N-nitroso-di-n-butylamine, N-nitrosodiphenylamine, N-nitrosopyrrolidine, N-nitrosopiperidine, N-nitrosomorpholine) were also monitored, but none of these 8, or precursors of these 8 [as estimated with formation potential (FP) tests], were detected anywhere in raw, partially-treated or distribution samples. Throughout the year, there was evidence that seasonality may impact NDMA formation, such that lower temperatures (~5-10°C) produced greater NDMA than warmer months. The year of sampling further provided evidence that water quality and weather events may impact NDMA precursor loads. Precursor loading estimates demonstrated that NDMA precursors increased during treatment (potentially from cationic polymer coagulant aids). The precursor analysis also provided evidence that precursors may have increased further within the distribution system itself. This comprehensive study of a large-scale drinking water system provides insight into the variability of NDMA occurrence in a chloraminated system, which may be impacted by seasonality, water quality changes and/or the varied origins of NDMA precursors within a given system. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Adaptive optical system for writing large holographic optical elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tyutchev, M.V.; Kalyashov, E.V.; Pavlov, A.P.

    1994-11-01

    This paper formulates the requirements imposed on systems for correcting the phase-difference distribution of recording waves over the field of a large-diameter photographic plate (≤1.5 m) when writing holographic optical elements (HOEs). A technique is proposed for writing large HOEs, based on the use of an adaptive phase-correction optical system of the first type, controlled by the self-diffraction signal from a latent image. The technique is implemented by writing HOEs on photographic plates with an effective diameter of 0.7 m on As₂S₃ layers. 13 refs., 4 figs.

  17. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  18. Technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6-μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, Dan; Zaarur, O.; Zaarur, S.; Kopeika, Norman S.

    1994-10-01

    An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects, including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. In this implementation, attenuation deriving from molecular absorption and from small- and large-particulate scatter and absorption, together with their weather dependences, is separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 micrometers wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.
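
    For orientation, the Beer-Lambert form behind such a path-integrated transmittance measurement (a textbook relation, not a formula quoted from the paper; the α terms are extinction coefficients along the path of length L):

      \tau(\lambda) \;=\; \exp\!\left( -\int_{0}^{L} \left[ \alpha_{\mathrm{mol}}(\lambda, s) + \alpha_{\mathrm{small}}(\lambda, s) + \alpha_{\mathrm{large}}(\lambda, s) \right] ds \right) \;=\; \tau_{\mathrm{mol}}\, \tau_{\mathrm{small}}\, \tau_{\mathrm{large}}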

  19. A technique for active measurement of atmospheric transmittance using an imaging system: implementation at 10.6 μm wavelength

    NASA Astrophysics Data System (ADS)

    Sadot, D.; Zaarur, O.; Zaarur, S.

    1995-12-01

    An active method is presented for measuring atmospheric transmittance with an imaging system. In comparison to other measurement methods, this method has the advantage of immunity to background noise, independence of atmospheric conditions such as solar radiation, and an improved capability to evaluate effects of turbulence on the measurements. Other significant advantages are integration over all particulate size distribution effects, including very small and very large particulates whose concentration is hard to measure, and the fact that this method is a path-integrated measurement. Attenuation deriving from molecular absorption and from small- and large-particulate scatter and absorption, together with their weather dependences, is separated out. Preliminary results indicate high correlation with direct transmittance calculations via particle size distribution measurement, and that even at 10.6 μm wavelength atmospheric transmission depends noticeably on aerosol size distribution and concentration.

  20. Large Area Stress Distribution in Crystalline Materials Calculated from Lattice Deformation Identified by Electron Backscatter Diffraction

    NASA Astrophysics Data System (ADS)

    Shao, Yongliang; Zhang, Lei; Hao, Xiaopeng; Wu, Yongzhong; Dai, Yuanbin; Tian, Yuan; Huo, Qin

    2014-08-01

    We report a method to obtain the stress of crystalline materials directly from lattice deformation by Hooke's law. The lattice deformation was calculated using the crystallographic orientations obtained from electron backscatter diffraction (EBSD) technology. The stress distribution over a large area was obtained efficiently and accurately using this method. Wurtzite structure gallium nitride (GaN) crystal was used as the example of a hexagonal crystal system. With this method, the stress distribution of a GaN crystal was obtained. Raman spectroscopy was used to verify the stress distribution. The cause of the stress distribution found in the GaN crystal was discussed from theoretical analysis and EBSD data. Other properties related to lattice deformation, such as piezoelectricity, can also be analyzed by this novel approach based on EBSD data.
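
    To make the Hooke's-law step concrete (a generic illustration, not the authors' code), the sketch below converts a lattice strain tensor to stress in Voigt notation; the wurtzite GaN stiffness constants are approximate literature values, and the strain components are invented.

      import numpy as np

      # Approximate room-temperature stiffness constants for wurtzite GaN (GPa);
      # reported literature values vary by a few percent.
      C11, C12, C13, C33, C44 = 390.0, 145.0, 106.0, 398.0, 105.0
      C66 = 0.5 * (C11 - C12)          # hexagonal symmetry relation
      C = np.array([
          [C11, C12, C13, 0,   0,   0],
          [C12, C11, C13, 0,   0,   0],
          [C13, C13, C33, 0,   0,   0],
          [0,   0,   0,   C44, 0,   0],
          [0,   0,   0,   0,   C44, 0],
          [0,   0,   0,   0,   0,   C66],
      ])

      # Invented lattice strain (e.g., derived from EBSD orientations), Voigt
      # order: (e_xx, e_yy, e_zz, 2e_yz, 2e_xz, 2e_xy).
      eps = np.array([1.2e-4, 1.2e-4, -0.8e-4, 0.0, 0.0, 0.0])
      sigma = C @ eps                   # Hooke's law: stress in GPa
      print(sigma)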

  1. Large area stress distribution in crystalline materials calculated from lattice deformation identified by electron backscatter diffraction.

    PubMed

    Shao, Yongliang; Zhang, Lei; Hao, Xiaopeng; Wu, Yongzhong; Dai, Yuanbin; Tian, Yuan; Huo, Qin

    2014-08-05

    We report a method to obtain the stress of crystalline materials directly from lattice deformation by Hooke's law. The lattice deformation was calculated using the crystallographic orientations obtained from electron backscatter diffraction (EBSD) technology. The stress distribution over a large area was obtained efficiently and accurately using this method. Wurtzite structure gallium nitride (GaN) crystal was used as the example of a hexagonal crystal system. With this method, the stress distribution of a GaN crystal was obtained. Raman spectroscopy was used to verify the stress distribution. The cause of the stress distribution found in the GaN crystal was discussed from theoretical analysis and EBSD data. Other properties related to lattice deformation, such as piezoelectricity, can also be analyzed by this novel approach based on EBSD data.

  2. Large Area Stress Distribution in Crystalline Materials Calculated from Lattice Deformation Identified by Electron Backscatter Diffraction

    PubMed Central

    Shao, Yongliang; Zhang, Lei; Hao, Xiaopeng; Wu, Yongzhong; Dai, Yuanbin; Tian, Yuan; Huo, Qin

    2014-01-01

    We report a method to obtain the stress of crystalline materials directly from lattice deformation by Hooke's law. The lattice deformation was calculated using the crystallographic orientations obtained from electron backscatter diffraction (EBSD) technology. The stress distribution over a large area was obtained efficiently and accurately using this method. Wurtzite structure gallium nitride (GaN) crystal was used as the example of a hexagonal crystal system. With this method, the stress distribution of a GaN crystal was obtained. Raman spectroscopy was used to verify the stress distribution. The cause of the stress distribution found in the GaN crystal was discussed from theoretical analysis and EBSD data. Other properties related to lattice deformation, such as piezoelectricity, can also be analyzed by this novel approach based on EBSD data. PMID:25091314

  3. Large-scale displacement following the 2016 Kaikōura earthquake

    NASA Astrophysics Data System (ADS)

    Wang, T.; Peng, D.; Barbot, S.; Wei, S.; Shi, X.

    2017-12-01

    The 2016 Mw 7.9 Kaikōura earthquake occurred near the southern termination of the Hikurangi subduction system, where a transition from subduction to strike-slip motion dominates the pre-seismic strain accumulation. Dense spatial coverage of GPS measurements and a large number of Interferometric Synthetic Aperture Radar (InSAR) images provide valuable constraints, from the near field to the far field, for studying how slip is distributed among the subduction interface and the overlying fault system before, during, and after the earthquake. We extract time-series deformation from the New Zealand continuous GPS network and from SAR images acquired by the Japanese ALOS-2 and European Sentinel-1A/B satellites to image the surface deformation related to the 2016 Kaikōura earthquake. Both the GPS and InSAR data, which cover the entire New Zealand region, show that the co-seismic and post-seismic deformation is distributed over an extraordinarily large area, extending as far as the northern tip of the North Island. Based on a coseismic slip model derived from seismic and geodetic observations, we calculate the stress perturbation incurred by the earthquake. We explore a range of possible friction laws and rheologies via a linear combination of strain rate in finite volumes and slip velocity on ruptured faults. We obtain the slip distribution that best explains our geodetic measurements using an outlier-insensitive hierarchical Bayesian model, to better understand the different mechanisms behind the localized shallow afterslip and the distributed deformation. Our results indicate that complex interactions between the subduction interface and the overlying fault system play an important role in causing such large-scale deformation during and after the earthquake.

  4. Small vs. Large Convective Cloud Objects from CERES Aqua Observations: Where are the Intraseasonal Variation Signals?

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2016-01-01

    During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, the increased occurrence of large and deep cloud clusters reflects the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters, due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups for a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.

  5. A Geo-Distributed System Architecture for Different Domains

    NASA Astrophysics Data System (ADS)

    Moßgraber, Jürgen; Middleton, Stuart; Tao, Ran

    2013-04-01

    The presentation will describe work on the system-of-systems (SoS) architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". In this project we deal with two use-cases: Natural Crisis Management (e.g. Tsunami Early Warning) and Industrial Subsurface Development (e.g. drilling for oil). These use-cases seem quite different at first sight but share many similarities, like managing and looking up available sensors, extracting data from them and annotating it semantically, intelligently managing the data (a big-data problem), running mathematical analysis algorithms on the data, and finally providing decision support on this basis. The main challenge was to create a generic architecture that fits both use-cases. The requirements on the architecture are manifold, and the whole spectrum of a modern, geo-distributed and collaborative system comes into play. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. The most important architectural challenges we needed to address are:

    1. Building a scalable communication layer for a system-of-systems
    2. Building a resilient communication layer for a system-of-systems
    3. Efficiently publishing large volumes of semantically rich sensor data
    4. Scalable and high-performance storage of large distributed datasets
    5. Handling federated multi-domain heterogeneous data
    6. Discovery of resources in a geo-distributed SoS
    7. Coordination of work between geo-distributed systems

    The design decisions made for each of them will be presented. These concepts are also applicable to the requirements of the Future Internet (FI) and Internet of Things (IoT), which will provide services like smart grids, smart metering, logistics and environmental monitoring.

  6. Particles size distribution in diluted magnetic fluids

    NASA Astrophysics Data System (ADS)

    Yerin, Constantine V.

    2017-06-01

    Changes in the particle and aggregate size distribution of diluted kerosene-based magnetic fluids are studied by the dynamic light scattering method. It has been found that immediately after dilution a system of aggregates with sizes ranging from 100 to 250-1000 nm is formed in the magnetic fluids. Within 50-100 h after dilution, the large aggregates are peptized and a stationary particle and aggregate size distribution is established in the sample.

  7. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, is proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
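
    The paper's exact estimator is not reproduced here; the sketch below shows the general accrual idea under the assumption that inter-heartbeat intervals follow a fitted Weibull distribution, so the suspicion level grows with the time elapsed since the last heartbeat. The heartbeat history is invented.

      import numpy as np
      from scipy.stats import weibull_min

      def fit_weibull(intervals):
          # Fit a Weibull to observed inter-heartbeat intervals (location fixed at 0).
          shape, _, scale = weibull_min.fit(intervals, floc=0.0)
          return shape, scale

      def suspicion(t_since_last, shape, scale):
          # Accrual-style suspicion level: -log10 P(interval > t_since_last).
          sf = weibull_min.sf(t_since_last, shape, scale=scale)
          return -np.log10(max(sf, 1e-300))

      history = np.array([1.0, 1.1, 0.9, 1.2, 1.0, 1.05, 0.95])  # seconds
      k, lam = fit_weibull(history)
      print(suspicion(1.0, k, lam))   # low: within the normal range
      print(suspicion(5.0, k, lam))   # high: heartbeat long overdue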

  8. Soil temperature variability in complex terrain measured using fiber-optic distributed temperature sensing

    USDA-ARS?s Scientific Manuscript database

    Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...

  9. SSP Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Lynch, Thomas H.; Roth, A. (Technical Monitor)

    2000-01-01

    Space Solar Power is a NASA program sponsored by Marshall Space Flight Center. The paper presented here describes the architectural study of a large power management and distribution (PMAD) system. The PMAD supplies power to a microwave array for power beaming to an earth rectenna (rectifying antenna). The power is at the GW level.

  10. High-levels of microplastic pollution in a large, remote, mountain lake.

    PubMed

    Free, Christopher M; Jensen, Olaf P; Mason, Sherri A; Eriksen, Marcus; Williamson, Nicholas J; Boldgiv, Bazartseren

    2014-08-15

    Despite the large and growing literature on microplastics in the ocean, little information exists on microplastics in freshwater systems. This study is the first to evaluate the abundance, distribution, and composition of pelagic microplastic pollution in a large, remote, mountain lake. We quantified pelagic microplastics and shoreline anthropogenic debris in Lake Hovsgol, Mongolia. With an average microplastic density of 20,264 particles km⁻², Lake Hovsgol is more heavily polluted with microplastics than the more developed Lakes Huron and Superior in the Laurentian Great Lakes. Fragments and films were the most abundant microplastic types; no plastic microbeads and few pellets were observed. Household plastics dominated the shoreline debris and consisted largely of plastic bottles, fishing gear, and bags. Microplastic density decreased with distance from the southwestern shore, the most populated and accessible section of the park, and was distributed by the prevailing winds. These results demonstrate that without proper waste management, low-density populations can heavily pollute freshwater systems with consumer plastics. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. A Study of Economical Incentives for Voltage Profile Control Method in Future Distribution Network

    NASA Astrophysics Data System (ADS)

    Tsuji, Takao; Sato, Noriyuki; Hashiguchi, Takuhei; Goda, Tadahiro; Tange, Seiji; Nomura, Toshio

    In a future distribution network, it will be difficult to maintain system voltage because a large number of distributed generators will be introduced to the system. The authors have proposed a "voltage profile control method" using power factor control of distributed generators in previous work. However, controlling the power factor to increase reactive power output reduces active power output, causing an economic disbenefit to the customer. Therefore, proper incentives must be given to the customers who cooperate with the voltage profile control method. Thus, in this paper, we develop new rules that determine the economic incentives for these customers. The method is tested on a one-feeder distribution network model, and its effectiveness is shown.
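
    To illustrate the trade-off the incentive rules compensate for (generic power-factor arithmetic, not the paper's model): at a fixed apparent-power rating S, lowering the power factor raises the reactive power a DG inverter can inject but cuts the active power the customer sells. The 4 kVA rating below is a hypothetical example.

      import math

      def pq_at_power_factor(s_kva, pf):
          # Active/reactive output of an inverter rated s_kva at power factor pf.
          p = s_kva * pf
          q = s_kva * math.sin(math.acos(pf))
          return p, q

      for pf in (1.00, 0.95, 0.90):
          p, q = pq_at_power_factor(4.0, pf)   # hypothetical 4 kVA PV inverter
          print(f"pf={pf:.2f}: P={p:.2f} kW, Q={q:.2f} kvar")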

  12. Design and Implementation of Distributed Crawler System Based on Scrapy

    NASA Astrophysics Data System (ADS)

    Fan, Yuhao

    2018-01-01

    At present, some large-scale search engines at home and abroad only provide users with non-custom search services, and a single-machine web crawler cannot solve such large crawling tasks. In this paper, through study of the original Scrapy framework, the framework is improved by combining Scrapy and Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and a Bloom filter is applied in the dupefilter module to reduce memory consumption. The movie information crawled from Douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
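
    A minimal sketch of the Bloom-filter deduplication idea (not the paper's actual dupefilter code); the bit-array size, hash count, and example URL are arbitrary assumptions.

      import hashlib

      class BloomFilter:
          """Space-efficient set for URL deduplication: false positives are
          possible, false negatives are not."""
          def __init__(self, n_bits=1 << 20, n_hashes=5):
              self.n_bits, self.n_hashes = n_bits, n_hashes
              self.bits = bytearray(n_bits // 8)

          def _positions(self, url):
              # Derive n_hashes bit positions from salted SHA-256 digests.
              for i in range(self.n_hashes):
                  h = hashlib.sha256(f"{i}:{url}".encode()).digest()
                  yield int.from_bytes(h[:8], "big") % self.n_bits

          def add(self, url):
              for p in self._positions(url):
                  self.bits[p // 8] |= 1 << (p % 8)

          def __contains__(self, url):
              return all(self.bits[p // 8] & (1 << (p % 8))
                         for p in self._positions(url))

      seen = BloomFilter()
      url = "https://movie.douban.com/subject/example/"   # hypothetical URL
      print(url in seen)   # False -> schedule the request
      seen.add(url)
      print(url in seen)   # True  -> skip the duplicate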

  13. Arcade: A Web-Java Based Framework for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.

  14. A Performance Comparison of Tree and Ring Topologies in Distributed System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Min

    A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together and minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover the unexpected failure of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.

  15. Biases in the OSSOS Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Gladman, Brett; Shankman, Cory; OSSOS Collaboration

    2017-10-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada-France-Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  16. OSSOS. VI. Striking Biases in the Detection of Large Semimajor Axis Trans-Neptunian Objects

    NASA Astrophysics Data System (ADS)

    Shankman, Cory; Kavelaars, J. J.; Bannister, Michele T.; Gladman, Brett J.; Lawler, Samantha M.; Chen, Ying-Tung; Jakubik, Marian; Kaib, Nathan; Alexandersen, Mike; Gwyn, Stephen D. J.; Petit, Jean-Marc; Volk, Kathryn

    2017-08-01

    The accumulating but small set of large semimajor axis trans-Neptunian objects (TNOs) shows an apparent clustering in the orientations of their orbits. This clustering must either be representative of the intrinsic distribution of these TNOs, or else have arisen as a result of observation biases and/or statistically expected variations for such a small set of detected objects. The clustered TNOs were detected across different and independent surveys, which has led to claims that the detections are therefore free of observational bias. This apparent clustering has led to the so-called “Planet 9” hypothesis that a super-Earth currently resides in the distant solar system and causes this clustering. The Outer Solar System Origins Survey (OSSOS) is a large program that ran on the Canada–France–Hawaii Telescope from 2013 to 2017, discovering more than 800 new TNOs. One of the primary design goals of OSSOS was the careful determination of observational biases that would manifest within the detected sample. We demonstrate the striking and non-intuitive biases that exist for the detection of TNOs with large semimajor axes. The eight large semimajor axis OSSOS detections are an independent data set, of comparable size to the conglomerate samples used in previous studies. We conclude that the orbital distribution of the OSSOS sample is consistent with being detected from a uniform underlying angular distribution.

  17. Plant Minders

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Indoor plants are automatically watered by the Aqua Trends watering system. The system draws water from building outlets or from a pump/reservoir module and distributes it to the plants via a network of tubes and adjustable nozzles. A key element of the system is an electronic controller programmed to dispense water according to the needs of the various plants in an installation. An adjustable nozzle meters out exactly the right amount of water at the proper time to the plant it is serving. More than 100 Aqua Trends systems are in service in the USA, from a simple residential system to a large Mirage III system integrated to water all the greenery in a large office or apartment building.

  18. Large field distributed aperture laser semiactive angle measurement system design with imaging fiber bundles.

    PubMed

    Xu, Chunyun; Cheng, Haobo; Feng, Yunpeng; Jing, Xiaoli

    2016-09-01

    A type of laser semiactive angle measurement system is designed for target detecting and tracking. Only one detector is used to detect target location from four distributed aperture optical systems through a 4×1 imaging fiber bundle. A telecentric optical system in image space is designed to increase the efficiency of imaging fiber bundles. According to the working principle of a four-quadrant (4Q) detector, fiber diamond alignment is adopted between an optical system and a 4Q detector. The structure of the laser semiactive angle measurement system is, we believe, novel. Tolerance analysis is carried out to determine tolerance limits of manufacture and installation errors of the optical system. The performance of the proposed method is identified by computer simulations and experiments. It is demonstrated that the linear region of the system is ±12°, with measurement error of better than 0.2°. In general, this new system can be used with large field of view and high accuracy, providing an efficient, stable, and fast method for angle measurement in practical situations.
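
    For background on the four-quadrant principle the system relies on (a standard textbook estimate, not the paper's calibrated algorithm; the quadrant labeling convention and scale factor k are assumptions):

      def quadrant_offsets(a, b, c, d, k=1.0):
          # Normalized spot-position estimate from four quadrant signals.
          # Quadrants assumed counter-clockwise from top-right: a=Q1, b=Q2,
          # c=Q3, d=Q4; k is a calibration scale factor.
          total = a + b + c + d
          x = k * ((a + d) - (b + c)) / total
          y = k * ((a + b) - (c + d)) / total
          return x, y

      print(quadrant_offsets(1.0, 1.0, 1.0, 1.0))   # centered spot -> (0.0, 0.0)
      print(quadrant_offsets(1.4, 0.6, 0.6, 1.4))   # spot shifted toward +x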

  19. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

    The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
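
    A compact numpy sketch of a Kanerva-style sparse distributed memory as described above; the dimensions, Hamming radius, and thresholding below are illustrative choices, not the project's parameters.

      import numpy as np

      rng = np.random.default_rng(0)
      N, M, RADIUS = 256, 2000, 112    # address bits, hard locations, Hamming radius

      addresses = rng.integers(0, 2, (M, N))   # fixed random hard-location addresses
      counters = np.zeros((M, N), dtype=int)   # storage counters

      def _active(addr):
          # Locations whose address lies within RADIUS Hamming bits of addr.
          return np.count_nonzero(addresses != addr, axis=1) <= RADIUS

      def write(addr, data):
          # Add the bipolar form of `data` to every active location.
          counters[_active(addr)] += 2 * data - 1

      def read(addr):
          # Sum the counters of active locations and threshold back to bits.
          return (counters[_active(addr)].sum(axis=0) > 0).astype(int)

      pattern = rng.integers(0, 2, N)
      write(pattern, pattern)                   # autoassociative store
      noisy = pattern.copy()
      flips = rng.choice(N, 20, replace=False)  # corrupt 20 of 256 bits
      noisy[flips] ^= 1
      print(np.count_nonzero(read(noisy) != pattern))  # ideally 0: recovered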

  20. Engineering research, development and technology FY99

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langland, R T

    The growth of computer power and connectivity, together with advances in wireless sensing and communication technologies, is transforming the field of complex distributed systems. The ability to deploy large numbers of sensors with a rapid, broadband communication system will enable high-fidelity, near real-time monitoring of complex systems. These technological developments will provide unprecedented insight into the actual performance of engineered and natural environment systems, enable the evolution of many new types of engineered systems for monitoring and detection, and enhance our ability to perform improved and validated large-scale simulations of complex systems. One of the challenges facing engineering is to develop methodologies to exploit the emerging information technologies. Particularly important will be the ability to assimilate measured data into the simulation process in a way which is much more sophisticated than current, primarily ad hoc procedures. The reports contained in this section on the Center for Complex Distributed Systems describe activities related to the integrated engineering of large complex systems. The first three papers describe recent developments for each link of the integrated engineering process for large structural systems. These include (1) the development of model-based signal processing algorithms which will formalize the process of coupling measurements and simulation and provide a rigorous methodology for validation and update of computational models; (2) collaborative efforts with faculty at the University of California at Berkeley on the development of massive simulation models for the earth and large bridge structures; and (3) the development of wireless data acquisition systems which provide a practical means of monitoring large systems like the National Ignition Facility (NIF) optical support structures. These successful developments are coming to a confluence in the next year with applications to NIF structural characterizations and analysis of large bridge structures for the State of California. Initial feasibility investigations into the development of monitoring and detection systems are described in the papers on imaging of underground structures with ground-penetrating radar, and the use of live insects as sensor platforms. These efforts are establishing the basic performance characteristics essential to the decision process for future development of sensor arrays for information gathering related to national security.

  1. Understanding quantum work in a quantum many-body system.

    PubMed

    Wang, Qian; Quan, H T

    2017-03-01

    Based on previous studies of a single-particle system in both integrable [Jarzynski, Quan, and Rahav, Phys. Rev. X 5, 031038 (2015)] and chaotic systems [Zhu, Gong, Wu, and Quan, Phys. Rev. E 93, 062108 (2016)], we study the correspondence principle between quantum and classical work distributions in a quantum many-body system. Even though the interaction and the indistinguishability of identical particles increase the complexity of the system, we find that for a quantum many-body system the quantum work distribution still converges to its classical counterpart in the semiclassical limit. Our results imply that there exists a correspondence principle between quantum and classical work distributions in an interacting quantum many-body system, especially in the large-particle-number limit, and further justify the definition of quantum work via two-point energy measurements in quantum many-body systems.
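
    For reference, the two-point energy-measurement definition of quantum work that the abstract refers to is standard in this literature; written here in generic notation (not the paper's own symbols):

      \[
        P(W) \;=\; \sum_{n,m} p^{0}_{n}\, p^{\tau}_{m|n}\,
        \delta\!\big(W - (E^{\tau}_{m} - E^{0}_{n})\big),
        \qquad
        p^{\tau}_{m|n} \;=\; \big|\langle m_{\tau}|\hat{U}(\tau)|n_{0}\rangle\big|^{2},
      \]

    where $p^{0}_{n}$ is the probability of obtaining energy $E^{0}_{n}$ in the first measurement and $\hat{U}(\tau)$ is the driven time evolution; the correspondence principle discussed above states that $P(W)$ approaches the classical work distribution over Hamiltonian trajectories in the semiclassical limit.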

  2. The feasibility and stability of large complex biological networks: a random matrix approach.

    PubMed

    Stone, Lewi

    2018-05-29

    In the 1970s, Robert May demonstrated that complexity creates instability in generic models of ecological networks having random interaction matrices A. Similar random matrix models have since been applied in many disciplines. Central to assessing stability is the "circular law", since it describes the eigenvalue distribution for an important class of random matrices A. However, despite widespread adoption, the "circular law" does not apply for ecological systems in which density dependence operates (i.e., where a species' growth is determined by its density). Instead one needs to study the far more complicated eigenvalue distribution of the community matrix S = DA, where D is a diagonal matrix of population equilibrium values. Here we obtain this eigenvalue distribution. We show that if the random matrix A is locally stable, the community matrix S = DA will also be locally stable, provided the system is feasible (i.e., all species have positive equilibria, D > 0). This helps explain why, unusually, nearly all feasible systems studied here are locally stable. Large complex systems may thus be even more fragile than May predicted, given the difficulty of assembling a feasible system. It was also found that the degree of stability, or resilience, of a system depended on the minimum equilibrium population.
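
    The stability comparison described here is easy to reproduce numerically. The sketch below, with arbitrary illustrative parameters, samples a Gaussian random interaction matrix A with self-regulation on the diagonal, builds S = DA with positive equilibria, and checks the rightmost eigenvalue of each:

      import numpy as np

      rng = np.random.default_rng(1)
      n, sigma, d = 250, 0.05, -1.0           # size, interaction strength, self-regulation
      A = sigma * rng.standard_normal((n, n))
      np.fill_diagonal(A, d)                  # circular-law disk is centered at d
      D = np.diag(rng.uniform(0.1, 2.0, n))   # feasibility: all equilibria positive

      S = D @ A                               # community matrix with density dependence
      print("max Re eig(A):", np.linalg.eigvals(A).real.max())
      print("max Re eig(S):", np.linalg.eigvals(S).real.max())
      # Per the abstract's result, if A is locally stable and D > 0, then S = DA
      # is locally stable as well: both maxima should come out negative here.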

  3. Global stability results for a generalized Lotka-Volterra system with distributed delays. Applications to predator-prey and to epidemic systems.

    PubMed

    Beretta, E; Capasso, V; Rinaldi, F

    1988-01-01

    The paper contains an extension of the general ODE system proposed in previous papers by the same authors to include distributed time delays in the interaction terms. The new system describes a large class of Lotka-Volterra-like population models and epidemic models with continuous time delays. Sufficient conditions for the boundedness of solutions and for the global asymptotic stability of nontrivial equilibrium solutions are given. A detailed analysis of the epidemic system is given with respect to the conditions for global stability. For a relevant subclass of these systems, an existence criterion for steady states is also given.
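
    A generic form of such a system, written here only for orientation (the paper's exact kernels, terms, and stability conditions differ), is

      \[
        \dot{x}_i(t) \;=\; x_i(t)\left( b_i + \sum_{j=1}^{n} a_{ij}
        \int_{0}^{\infty} k_{ij}(s)\, x_j(t-s)\, \mathrm{d}s \right),
        \qquad
        \int_{0}^{\infty} k_{ij}(s)\, \mathrm{d}s = 1,
      \]

    where the normalized kernels $k_{ij}$ spread the interaction delays continuously over the past; setting $k_{ij}(s) = \delta(s)$ recovers the undelayed ODE system.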

  4. Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong

    Intelligent distributed electrical energy systems (IDEES) are characterized by vast numbers of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power system management approach no longer fits their operation. Thus, it is believed that blockchain technology is one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES inherently has both social and technical characteristics; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can be self-organized into a complex, autonomous and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, and it is expected that such intelligence can bring stable, reliable and efficient electrical energy production, transmission and consumption.

  5. Utilization of available skills and materials in fire prevention

    NASA Technical Reports Server (NTRS)

    Martin, H. W.

    1971-01-01

    Procedures for installing fire protection systems in large buildings are discussed. Factors considered in the safety management are: (1) distribution of water supply, (2) design and location of exits, (3) emergency power system, and (4) maintenance procedures.

  6. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  7. Background Noises Versus Intraseasonal Variation Signals: Small vs. Large Convective Cloud Objects From CERES Aqua Observations

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2015-01-01

    During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining both cloud types are overcast footprints and cloud-top pressures less than 400 hPa, but DC has higher cloud optical depths (≥10) than CS (<10). The size distributions, defined as the footprint numbers as a function of cloud-object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter of the combined distribution is determined and used to partition all cloud objects of a particular phase into "small" and "large" groups. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation speeds/directions.
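
    The partition step described above reduces to finding the footprint-weighted median diameter of the combined distribution and splitting objects at that value. A small illustrative sketch (array names are hypothetical, not the CERES data format):

      import numpy as np

      def median_partition(diam_km, footprints):
          """Split cloud objects into 'small' and 'large' groups at the diameter
          below which half of all footprints lie (equal-footprint split)."""
          order = np.argsort(diam_km)
          cum = np.cumsum(footprints[order])
          cut = diam_km[order][np.searchsorted(cum, cum[-1] / 2.0)]
          return diam_km <= cut, diam_km > cut, cut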

  8. White Paper on Dish Stirling Technology: Path Toward Commercial Deployment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andraka, Charles E.; Stechel, Ellen; Becker, Peter

    2016-07-01

    Dish Stirling energy systems have been developed for distributed and large-scale utility deployment. This report summarizes the state of the technology in a joint project between Stirling Energy Systems, Sandia National Laboratories, and the Department of Energy in 2011. It then lays out a feasible path to large scale deployment, including development needs and anticipated cost reduction paths that will make a viable deployment product.

  9. Production of juvenile and sub-adult cobia in recirculating aquaculture systems

    USDA-ARS?s Scientific Manuscript database

    Cobia Rachycentron canadum is a large migratory pelagic finfish species that is distributed worldwide in tropical, subtropical, and warm temperate seas except the Mediterranean and the central and eastern Pacific. Despite its large size, commonly exceeding 23 kg at maturity, and excellent food qual...

  10. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  11. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 7 2012-10-01 2012-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  12. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 7 2013-10-01 2013-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  13. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 7 2011-10-01 2011-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  14. 46 CFR 183.354 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 7 2014-10-01 2014-10-01 false Battery installations. 183.354 Section 183.354 Shipping...) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely...

  15. Exact results in the large system size limit for the dynamics of the chemical master equation, a one dimensional chain of equations.

    PubMed

    Martirosyan, A; Saakian, David B

    2011-08-01

    We apply the Hamilton-Jacobi equation (HJE) formalism to solve the dynamics of the chemical master equation (CME). We found exact analytical expressions (in the large system-size limit) for the probability distribution, including an explicit expression for the dynamics of the variance of the distribution. We also give the solution for some simple cases of the model with time-dependent rates. We derived the results of the Van Kampen method from the HJE approach using a special ansatz. Using the Van Kampen method, we give a system of ordinary differential equations (ODEs) to define the variance in a two-dimensional case. We performed numerics for the CME with stationary noise. We give analytical criteria for the disappearance of bistability in the case of stationary noise in one-dimensional CMEs.
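
    In outline, the HJE formalism applies a WKB-type ansatz to the master equation; in generic one-dimensional notation (sign conventions vary between papers, and this is not the authors' exact formulation),

      \[
        P(n,t) \;\sim\; e^{N S(x,t)}, \quad x = n/N,
        \qquad
        \partial_t S \;=\; \sum_{r} a_r(x)\left(e^{-\nu_r\, \partial_x S} - 1\right),
      \]

    where $a_r(x)$ are the scaled reaction rates and $\nu_r$ the stoichiometric jumps; expanding $S$ around its maximum reproduces the mean and variance dynamics of the Van Kampen expansion.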

  16. Multi-camera digital image correlation method with distributed fields of view

    NASA Astrophysics Data System (ADS)

    Malowany, Krzysztof; Malesa, Marcin; Kowaluk, Tomasz; Kujawinska, Malgorzata

    2017-11-01

    A multi-camera digital image correlation (DIC) method and system for measurements of large engineering objects with distributed, non-overlapping areas of interest are described. The data obtained with individual 3D DIC systems are stitched by an algorithm which utilizes the positions of fiducial markers determined simultaneously by the Stereo-DIC units and a laser tracker. The proposed calibration method enables reliable determination of the transformations between local (3D DIC) and global coordinate systems. The applicability of the method was proven during in-situ measurements of a hall made of arch-shaped (18 m span) self-supporting metal plates. The proposed method is highly recommended for 3D measurements of the shape and displacements of large and complex engineering objects observed from multiple directions, and it provides suitable accuracy of data for further advanced structural integrity analysis of such objects.
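
    The stitching step implied here (mapping marker coordinates from a local Stereo-DIC frame into the global laser-tracker frame) can be sketched with the standard Kabsch/Procrustes rigid-body fit; this is a generic illustration, not the authors' algorithm:

      import numpy as np

      def rigid_transform(local_pts, global_pts):
          """Least-squares R, t such that global ~ R @ local + t.
          Inputs: (N, 3) arrays of matched fiducial-marker positions, N >= 3."""
          cl, cg = local_pts.mean(axis=0), global_pts.mean(axis=0)
          H = (local_pts - cl).T @ (global_pts - cg)     # 3x3 cross-covariance
          s = np.sign(np.linalg.det(np.linalg.svd(H)[2].T @ np.linalg.svd(H)[0].T))
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ np.diag([1.0, 1.0, s]) @ U.T        # s guards against reflection
          t = cg - R @ cl
          return R, t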

  17. Intelligent, Self-Diagnostic Thermal Protection System for Future Spacecraft

    NASA Technical Reports Server (NTRS)

    Hyers, Robert W.; SanSoucie, Michael P.; Pepyne, David; Hanlon, Alaina B.; Deshmukh, Abhijit

    2005-01-01

    The goal of this project is to provide self-diagnostic capabilities to the thermal protection systems (TPS) of future spacecraft. Self-diagnosis is especially important in TPS, where large numbers of parts must survive extreme conditions after weeks or years in space. In-service inspections of these systems are difficult or impossible, yet their reliability must be ensured before atmospheric entry. In fact, TPS represents the greatest risk factor after propulsion for any transatmospheric mission. The concepts and much of the technology would be applicable not only to the Crew Exploration Vehicle (CEV), but also to ablative thermal protection for aerocapture and planetary exploration. Monitoring a thermal protection system on a Shuttle-sized vehicle is a daunting task: there are more than 26,000 components whose integrity must be verified with very low rates of both missed faults and false positives. The large number of monitored components precludes conventional approaches based on centralized data collection over separate wires; a distributed approach is necessary to limit the power, mass, and volume of the health monitoring system. Distributed intelligence with self-diagnosis further improves capability, scalability, robustness, and reliability of the monitoring subsystem. A distributed system of intelligent sensors can provide an assurance of the integrity of the system, diagnosis of faults, and condition-based maintenance, all with provable bounds on errors.

  18. Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, Jianming; Wu, Di; Kalsi, Karanjit

    This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by the features of being open, flexible and interoperable with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. The real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges during real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.

  19. Distributed File System Utilities to Manage Large Datasets, Version 0.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-05-21

    FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.

  20. Visualization and Analysis for Near-Real-Time Decision Making in Distributed Workflows

    DOE PAGES

    Pugmire, David; Kress, James; Choi, Jong; ...

    2016-08-04

    Data-driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. Here, this paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  1. Earthquake mechanism and predictability shown by a laboratory fault

    USGS Publications Warehouse

    King, C.-Y.

    1994-01-01

    Slip events generated in a laboratory fault model consisting of a circular chain of eight spring-connected blocks of approximately equal weight elastically driven to slide on a frictional surface are studied. It is found that most of the input strain energy is released by a relatively few large events, which are approximately time-predictable. A large event tends to roughen the stress distribution along the fault, whereas the subsequent smaller events tend to smooth the stress distribution and prepare a condition of simultaneous criticality for the occurrence of the next large event. The frequency-size distribution resembles the Gutenberg-Richter relation for earthquakes, except for a falloff for the largest events due to the finite energy-storage capacity of the fault system. Slip distributions in different events are commonly dissimilar. Stress drop, slip velocity, and rupture velocity all tend to increase with event size. Rupture-initiation locations are usually not close to the maximum-slip locations. © 1994 Birkhäuser Verlag.

  2. Understanding Microplastic Distribution: A Global Citizen Monitoring Effort

    NASA Astrophysics Data System (ADS)

    Barrows, A.

    2016-02-01

    Understanding the distribution and abundance of microplastics in the world's oceans will continue to help inform global law-making. Through recruiting and training over 500 volunteers, our study has collected over 1000 samples from remote and populated areas world-wide. Samples include water collected at the sea surface and throughout the water column. Surface-to-depth sampling has provided insight into vertical plastic distribution. The development of unique field and laboratory methodology has enabled plastics to be quantified down to 50 µm. In 2015, the study expanded to include global freshwater systems. By understanding plastic patterns, distribution, and concentration in large and small watersheds, we will better understand how freshwater systems are contributing to marine microplastic pollution.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  4. Statistical Models of Fracture Relevant to Nuclear-Grade Graphite: Review and Recommendations

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Bratton, Robert L.

    2011-01-01

    The nuclear-grade (low-impurity) graphite needed for the fuel element and moderator material for next-generation (Gen IV) reactors displays large scatter in strength and a nonlinear stress-strain response from damage accumulation. This response can be characterized as quasi-brittle. In this expanded review, relevant statistical failure models for various brittle and quasi-brittle material systems are discussed with regard to strength distribution, size effect, multiaxial strength, and damage accumulation. This includes descriptions of the Weibull, Batdorf, and Burchell models as well as models that describe the strength response of composite materials, which involves distributed damage. Results from lattice simulations are included for a physics-based description of material breakdown. Consideration is given to the predicted transition between brittle and quasi-brittle damage behavior versus the density of damage (level of disorder) within the material system. The literature indicates that weakest-link-based failure modeling approaches appear to be reasonably robust in that they can be applied to materials that display distributed damage, provided that the level of disorder in the material is not too large. The Weibull distribution is argued to be the most appropriate statistical distribution to model the stochastic-strength response of graphite.
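
    For orientation, the two-parameter Weibull form with the weakest-link size effect at the center of this review is

      \[
        P_f(\sigma) \;=\; 1 - \exp\!\left[ -\frac{V}{V_0}
        \left( \frac{\sigma}{\sigma_0} \right)^{m} \right],
      \]

    where $m$ is the Weibull modulus, $\sigma_0$ the characteristic strength of a reference volume $V_0$, and increasing the stressed volume $V$ lowers the strength at any fixed failure probability (the size effect discussed above).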

  5. Discharge transient coupling in large space power systems

    NASA Technical Reports Server (NTRS)

    Stevens, N. John; Stillwell, R. P.

    1990-01-01

    Experiments have shown that plasma environments can induce discharges in solar arrays. These plasmas simulate the environments found in low earth orbits where current plans call for operation of very large power systems. The discharges could be large enough to couple into the power system and possibly disrupt operations. Here, the general concepts of the discharge mechanism and the techniques of coupling are discussed. Data from both ground and flight experiments are reviewed to obtain an expected basis for the interactions. These concepts were applied to the Space Station solar array and distribution system as an example of the large space power system. The effect of discharges was found to be a function of the discharge site. For most sites in the array discharges would not seriously impact performance. One location at the negative end of the array was identified as a position where discharges could couple to charge stored in system capacitors. This latter case could impact performance.

  6. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for solutions to larger simulations of the parquet approximation for a system with impurities. The relevancy of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and these limitations vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  7. NASDA's Advanced On-Line System (ADOLIS)

    NASA Technical Reports Server (NTRS)

    Yamamoto, Yoshikatsu; Hara, Hideo; Yamada, Shigeo; Hirata, Nobuyuki; Komatsu, Shigenori; Nishihata, Seiji; Oniyama, Akio

    1993-01-01

    Spacecraft operations, including ground system operations, are generally realized through group work of various scales performed by operators, engineers, managers, users and others, whose positions are geographically distributed in many cases. In face-to-face work environments, it is easy for them to understand each other. However, in distributed work environments that rely on communication media, if only audio is used, participants become estranged from each other and lose interest in and continuity of the work, which is an obstacle to the smooth operation of spacecraft. NASDA has developed an experimental model of a new real-time operation control system called 'ADOLIS' (ADvanced On-Line System), adapted to such distributed environments, using a multimedia system that handles character, figure, image, handwriting, video and audio information and is applicable to a wide range of operation systems, including spacecraft and ground systems. This paper describes the results of the development of the experimental model.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steinberg, Elad; Sari, Re’em

    The Asteroid Belt and the Kuiper Belt are relics from the formation of our solar system. Understanding the size and spin distribution of the two belts is crucial for a deeper understanding of the formation of our solar system and the dynamical processes that govern it. In this paper, we investigate the effect of collisions on the evolution of the spin distribution of asteroids and KBOs. We find that the power law nature of the impactors' size distribution leads to a Lévy distribution of the spin rates. This results in a power law tail in the spin distribution, in stark contrast to the usually quoted Maxwellian distribution. We show that for bodies larger than 10 km, collisions alone lead to spin rates peaking at 0.15–0.5 revolutions per day. Comparing that to the observed spin rates of large asteroids (R > 50 km), we find that the spins of large asteroids, peaking at ∼1–2 revolutions per day, are dominated by a primordial component that reflects the formation mechanism of the asteroids. Similarly, the Kuiper Belt has undergone virtually no collisional spin evolution, assuming current densities. Collisions contribute a spin rate of ∼0.01 revolutions per day, thus the observed fast spin rates of KBOs are also primordial in nature.

  9. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation.

    PubMed

    Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-07-29

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
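
    The assimilation step is a standard EKF analysis. A generic sketch is given below, where the observation operator h (mapping the model state to SEBS-style evapotranspiration) and all symbols are illustrative rather than WEP-L's actual interfaces:

      import numpy as np

      def ekf_update(x, P, z, h, H, R):
          """One EKF analysis step.
          x: model state (n,); P: state covariance (n, n)
          z: observations, e.g. retrieved evapotranspiration (m,)
          h: observation operator; H: its Jacobian at x (m, n); R: obs covariance."""
          y = z - h(x)                                  # innovation
          S = H @ P @ H.T + R                           # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
          x_a = x + K @ y                               # analysis state
          P_a = (np.eye(len(x)) - K @ H) @ P            # analysis covariance
          return x_a, P_a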

  10. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation

    PubMed Central

    Qin, Changbo; Jia, Yangwen; Su, Z.(Bob); Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-01-01

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ. The remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems. PMID:27879946

  11. Hardware-assisted software clock synchronization for homogeneous distributed systems

    NASA Technical Reports Server (NTRS)

    Ramanathan, P.; Kandlur, Dilip D.; Shin, Kang G.

    1990-01-01

    A clock synchronization scheme that strikes a balance between hardware and software solutions is proposed. The proposed scheme is a software algorithm that uses minimal additional hardware to achieve reasonably tight synchronization. Unlike other software solutions, the guaranteed worst-case skews can be made insensitive to the maximum variation of message transit delay in the system. The scheme is particularly suitable for large partially connected distributed systems with topologies that support simple point-to-point broadcast algorithms. Examples of such topologies include the hypercube and the mesh interconnection structures.

  12. Identification of Curie temperature distributions in magnetic particulate systems

    NASA Astrophysics Data System (ADS)

    Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.

    2017-09-01

    This paper develops a methodology for extracting the Curie temperature distribution from magnetisation versus temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. The validity and practicality of the method are demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.

  13. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, catalog, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize the workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, the grid network, and the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.

  14. Electrical System Technology Working Group (WG) Report

    NASA Technical Reports Server (NTRS)

    Silverman, S.; Ford, F. E.

    1984-01-01

    The technology needs for space power systems (military, public, commercial) were assessed for the period 1995 to 2005 in the areas of power management and distribution, components, circuits, subsystems, controls and autonomy, and modeling and simulation. There was general agreement that the military requirements for pulse power would be the dominant factor in the growth of power systems. However, the growth of conventional power to the 100 to 250 kW range would be in the public sector, with low Earth orbit needs being the driver toward large 100 kW systems. An overall philosophy for large power system development is also described.

  15. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks.

    PubMed

    Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi

    2015-09-18

    Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks proves the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugmire, David; Kress, James; Choi, Jong

    Data-driven science is becoming increasingly common and complex, and is placing tremendous stresses on visualization and analysis frameworks. Data sources producing 10 GB per second (and more) are becoming increasingly commonplace in simulation, sensor, and experimental sciences. These data sources, which are often distributed around the world, must be analyzed by teams of scientists that are also distributed. Enabling scientists to view, query and interact with such large volumes of data in near-real-time requires a rich fusion of visualization and analysis techniques, middleware and workflow systems. Here, this paper discusses initial research into visualization and analysis of distributed data workflows that enables scientists to make near-real-time decisions about large volumes of time-varying data.

  17. Exact Extremal Statistics in the Classical 1D Coulomb Gas

    NASA Astrophysics Data System (ADS)

    Dhar, Abhishek; Kundu, Anupam; Majumdar, Satya N.; Sabhapandit, Sanjib; Schehr, Grégory

    2017-08-01

    We consider a one-dimensional classical Coulomb gas of N like charges in a harmonic potential, also known as the one-dimensional one-component plasma. We compute, analytically, the probability distribution of the position x_max of the rightmost charge in the limit of large N. We show that the typical fluctuations of x_max around its mean are described by a nontrivial scaling function with asymmetric tails. This distribution is different from the Tracy-Widom distribution of x_max for Dyson's log gas. We also compute the large deviation functions of x_max explicitly and show that the system exhibits a third-order phase transition, as in the log gas. Our theoretical predictions are verified numerically.
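
    The model in question, in one common unscaled convention (the paper's own scaling of the coupling with N may differ), is defined by the energy

      \[
        E[\{x_i\}] \;=\; \frac{1}{2}\sum_{i=1}^{N} x_i^{2}
        \;-\; \alpha \sum_{i<j} \big| x_i - x_j \big|, \qquad \alpha > 0,
      \]

    with harmonic confinement competing against the linear (1D Coulomb) repulsion; the object of study is then the distribution of $x_{\max} = \max_i x_i$ for large $N$.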

  18. Large-scale P2P network based distributed virtual geographic environment (DVGE)

    NASA Astrophysics Data System (ADS)

    Tan, Xicheng; Yu, Liang; Bian, Fuling

    2007-06-01

    Virtual geographic environments (VGE) have attracted wide attention as a kind of software information system that helps us understand and analyze the real geographic environment, and they have expanded into application service systems in distributed environments: distributed virtual geographic environment systems (DVGE), with some notable achievements. However, constrained by the massive data volumes of VGE, network bandwidth, the number of concurrent requests, and economic factors, DVGE still faces challenges and problems that prevent it from providing the public with high-quality service under the current network mode. The rapid development of peer-to-peer (P2P) network technology offers new ideas for solving these challenges. P2P network technology can effectively publish and search network resources and thereby realize efficient information sharing. Accordingly, this paper presents a research subject on large-scale P2P extension of DVGE, together with a deep study of the network framework, routing mechanism, and DVGE data management on a P2P network.

  19. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems.

    PubMed

    Shehzad, Danish; Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large portion of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance but adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhanced communication concurrency and improved overall runtime, making NEURON more efficient for simulation of large neuronal network models.
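
    To make the recursive-doubling idea concrete: in log2(P) rounds each rank exchanges its accumulated blocks with a partner whose rank differs in one bit, so every block reaches every rank. The sketch below uses mpi4py point-to-point calls for clarity; the paper's contribution replaces this two-sided exchange with one-sided RMA (MPI_Put/MPI_Get on exposed windows), which is not shown here.

      from mpi4py import MPI

      def allgather_recursive_doubling(my_block, comm=MPI.COMM_WORLD):
          """Allgather via recursive doubling; assumes the communicator size is a
          power of two. Returns {rank: block} after log2(P) exchange rounds."""
          rank, size = comm.Get_rank(), comm.Get_size()
          gathered = {rank: my_block}
          step = 1
          while step < size:
              partner = rank ^ step                  # flip one bit of the rank id
              received = comm.sendrecv(gathered, dest=partner, source=partner)
              gathered.update(received)              # held blocks double each round
              step <<= 1
          return gathered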

  20. Optimizing NEURON Simulation Environment Using Remote Memory Access with Recursive Doubling on Distributed Memory Systems

    PubMed Central

    Bozkuş, Zeki

    2016-01-01

    The increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets amongst multiple processors to achieve better hardware performance. On parallel machines for neuronal networks, interprocessor spike exchange consumes a large portion of the overall simulation time. In NEURON, the Message Passing Interface (MPI) is used for communication between processors, and the MPI_Allgather collective is exercised for spike exchange after each interval across distributed memory systems. Increasing the number of processors achieves concurrency and better performance but adversely affects MPI_Allgather, increasing the communication time between processors. This necessitates improving the communication methodology to decrease the spike-exchange time over distributed memory systems. This work improves the MPI_Allgather method using Remote Memory Access (RMA) by moving from two-sided to one-sided communication, and the use of a recursive doubling mechanism achieves efficient communication between the processors in precise steps. This approach enhanced communication concurrency and improved overall runtime, making NEURON more efficient for simulation of large neuronal network models. PMID:27413363

  1. Very Large Scale Distributed Information Processing Systems

    DTIC Science & Technology

    1991-09-27

    USENIX Conference Proceedings, pp. 31-43. USENIX, February 1988. [KLA90] Michael L. Kazar, Bruce W. Leverett, Owen T. Anderson, Vasilis Apostolides, Beth...

  2. Within-band spray distribution of nozzles used for herbaceous plant control

    Treesearch

    James H. Miller

    1994-01-01

    Abstract. Described are the spray patterns of nozzles set up for banded herbaceous plant control treatments. Spraying Systems Company nozzles were tested, but similar nozzles are available from other manufacturers. Desirable traits were considered to be as follows: an even distribution pattern, low volume, low height, large droplets, and a single...

  3. Distributed Name Servers: Naming and Caching in Large Distributed Computing Environments

    DTIC Science & Technology

    1985-12-01

    transmission rate of the communication medium, transmission over a 56 Kbps line costs approximately 54¢, and similarly, communication over a 9.6K...memories for modern computer systems attempt to maximize the hit ratio for a fixed-size cache by utilizing intelligent cache replacement algorithms

  4. A continuum theory for multicomponent chromatography modeling.

    PubMed

    Pfister, David; Morbidelli, Massimo; Nicoud, Roger-Marc

    2016-05-13

    A continuum theory is proposed for modeling multicomponent chromatographic systems under linear conditions. The model is based on the description of complex mixtures, possibly involving tens or hundreds of solutes, by a continuum. The present approach is shown to be very efficient when dealing with a large number of similar components presenting close elution behaviors and whose individual analytical characterization is impossible. Moreover, approximating complex mixtures by continuous distributions of solutes reduces the required number of model parameters to the few ones specific to the characterization of the selected continuous distributions. Therefore, in the frame of the continuum theory, the simulation of large multicomponent systems gets simplified and the computational effectiveness of the chromatographic model is thus dramatically improved. Copyright © 2016 Elsevier B.V. All rights reserved.
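
    The core idea can be stated compactly: replace the sum over discrete solutes by an integral over a continuous distribution of a characteristic parameter such as the Henry coefficient H (generic notation, not the paper's), so that the outlet profile becomes

      \[
        c_{\mathrm{out}}(t) \;=\; \int_{0}^{\infty} f(H)\, c(t; H)\, \mathrm{d}H,
      \]

    where $c(t; H)$ is the linear-chromatography response of a single solute with parameter $H$ and $f(H)$ is a solute-property density described by only a few parameters, which is what reduces the model's parameter count.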

  5. Computational Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD code.

  6. Integral criteria for large-scale multiple fingerprint solutions

    NASA Astrophysics Data System (ADS)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. Then we carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of real multiple fingerprint tests show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
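
    In the Neyman-Pearson sense, the integral score that minimizes FRR at a fixed FAR is the log-likelihood ratio of the per-test scores. A sketch under the idealized assumptions of independent tests and known score densities (both assumptions, not the paper's exact construction):

      import numpy as np

      def integral_score(scores, genuine_pdf, impostor_pdf):
          """Log-likelihood-ratio fusion of per-modality match scores; thresholding
          this statistic is optimal if the tests are independent and the genuine
          and impostor score densities are known (both idealizations)."""
          s = np.asarray(scores, dtype=float)
          return float(np.sum(np.log(genuine_pdf(s)) - np.log(impostor_pdf(s))))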

  7. Distributed health care imaging information systems

    NASA Astrophysics Data System (ADS)

    Thompson, Mary R.; Johnston, William E.; Guojun, Jin; Lee, Jason; Tierney, Brian; Terdiman, Joseph F.

    1997-05-01

    We have developed an ATM network-based system to collect and catalogue cardio-angiogram videos from the source at a Kaiser central facility and make them available for viewing by doctors at primary-care Kaiser facilities. This is an example of the general problem of diagnostic data being generated at tertiary facilities while the images, or other large data objects they produce, need to be used from a variety of other locations such as doctors' offices or local hospitals. We describe the use of a highly distributed computing and storage architecture to provide all aspects of collecting, storing, analyzing, and accessing such large data objects in a metropolitan-area ATM network. Our large data-object management system provides the network interface between the object sources, the data management system, and the users of the data. As the data is being stored, a cataloguing system automatically creates and stores condensed versions of the data, textual metadata, and pointers to the original data. The catalogue system provides a Web-based graphical interface to the data. The user is able to view the low-resolution data with a standard Internet connection and Web browser. If high resolution is required, a high-speed connection and special application programs can be used to view the high-resolution original data.

  8. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
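
    A minimal sketch of the accrual pattern with the Weibull swapped in for the usual normal distribution (parameter fitting from observed inter-heartbeat intervals is omitted, and all numbers are illustrative):

      import math

      def weibull_cdf(t, shape, scale):
          return 1.0 - math.exp(-((t / scale) ** shape))

      def suspicion(elapsed, shape, scale):
          """Accrual output: phi = -log10 P(the heartbeat is merely late).
          `elapsed` is the time since the last heartbeat; shape/scale come from
          fitting observed inter-arrival times (fitting step not shown)."""
          p_late = 1.0 - weibull_cdf(elapsed, shape, scale)
          return -math.log10(max(p_late, 1e-300))     # grows as a crash gets likely

      for t in (0.5, 1.0, 2.0, 4.0):                  # suspicion rises with silence
          print(t, round(suspicion(t, shape=1.5, scale=1.0), 3))

    Each application then applies its own threshold to the continuous suspicion value, which is what lets one accrual detector serve applications with different timeliness and accuracy requirements.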

  9. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  10. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  11. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  12. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  13. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  14. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  15. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  16. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  17. 46 CFR 129.356 - Battery installations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery installations. 129.356 Section 129.356 Shipping... INSTALLATIONS Power Sources and Distribution Systems § 129.356 Battery installations. (a) Large. Each large battery-installation must be located in a locker, room, or enclosed box dedicated solely to the storage of...

  18. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  19. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 4 2011-10-01 2011-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  20. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 46 Shipping 4 2014-10-01 2014-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  1. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  2. 46 CFR 120.352 - Battery categories.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 46 Shipping 4 2013-10-01 2013-10-01 false Battery categories. 120.352 Section 120.352 Shipping... and Distribution Systems § 120.352 Battery categories. This section applies to batteries installed to... sources of power to final emergency loads. (a) Large. A large battery installation is one connected to a...

  3. 46 CFR 120.354 - Battery installations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 46 Shipping 4 2012-10-01 2012-10-01 false Battery installations. 120.354 Section 120.354 Shipping... and Distribution Systems § 120.354 Battery installations. (a) Large batteries. Each large battery installation must be located in a locker, room or enclosed box solely dedicated to the storage of batteries...

  4. Qualitative Description of Electric Power System Future States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hardy, Trevor D.; Corbin, Charles D.

    The simulation and evaluation of transactive systems depends to a large extent on the context in which those efforts are performed. Assumptions regarding the composition of the electric power system, the regulatory and policy environment, the distribution of renewable and other distributed energy resources (DERs), technological advances, and consumer engagement all contribute to, and affect, the evaluation of any given transactive system, regardless of its design. It is our position that the assumptions made about the state of the future power grid will determine, to some extent, the systems ultimately deployed, and that the transactive system itself may play an important role in the evolution of the power system.

  5. A practical large scale/high speed data distribution system using 8 mm libraries

    NASA Technical Reports Server (NTRS)

    Howard, Kevin

    1993-01-01

    Eight mm tape libraries are known primarily for their small size, large storage capacity, and low cost. However, many applications require an additional attribute which, heretofore, has been lacking -- high transfer rate. Transfer rate is particularly important in a large scale data distribution environment -- an environment in which 8 mm tape should play a very important role. Data distribution is a natural application for 8 mm for several reasons: most large laboratories have access to 8 mm tape drives, 8 mm tapes are upwardly compatible, 8 mm media are very inexpensive, 8 mm media are light weight (important for shipping purposes), and 8 mm media densely pack data (5 gigabytes now and 15 gigabytes on the horizon). If the transfer rate issue were resolved, 8 mm could offer a good solution to the data distribution problem. To that end Exabyte has analyzed four ways to increase its transfer rate: native drive transfer rate increases, data compression at the drive level, tape striping, and homogeneous drive utilization. Exabyte is actively pursuing native drive transfer rate increases and drive level data compression. However, for non-transmitted bulk data applications (which include data distribution) the other two methods (tape striping and homogeneous drive utilization) hold promise.
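
    To make the transfer-rate arithmetic concrete, the toy calculation below estimates distribution time under tape striping, where N drives write stripes of the same dataset in parallel. The native rate and dataset size are illustrative assumptions, not Exabyte specifications.

```python
def striped_transfer_hours(dataset_gb, drives, native_mb_per_s=0.5,
                           compression=1.0):
    """Hours to write a dataset across `drives` striped tape drives,
    assuming ideal linear scaling and optional drive-level compression
    (both idealizations)."""
    aggregate_mb_per_s = drives * native_mb_per_s * compression
    seconds = dataset_gb * 1024 / aggregate_mb_per_s
    return seconds / 3600.0

# e.g., a 50 GB distribution mastered on 4 striped drives:
print(f"{striped_transfer_hours(50, 4):.1f} hours")
```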

  6. Design and implementation of distributed multimedia surveillance system based on object-oriented middleware

    NASA Astrophysics Data System (ADS)

    Cao, Xuesong; Jiang, Ling; Hu, Ruimin

    2006-10-01

    Currently, applications of surveillance systems have become increasingly widespread, but few surveillance platforms can meet the requirements of large-scale, cross-regional, and flexible surveillance business. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is constructed with an object-oriented middleware called the Internet Communications Engine (ICE). This middleware helps our platform integrate many of society's surveillance resources and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe the design concepts of the system in detail and introduce the traits of ICE.

  7. Large Fluctuations for Spatial Diffusion of Cold Atoms

    NASA Astrophysics Data System (ADS)

    Aghion, Erez; Kessler, David A.; Barkai, Eli

    2017-06-01

    We use a new approach to study the large fluctuations of a heavy-tailed system, where the standard large-deviations principle does not apply. Large-deviations theory deals with tails of probability distributions and the rare events of random processes, for example, spreading packets of particles. Mathematically, it concerns the exponential falloff of the density of thin-tailed systems. Here we investigate the spatial density Pt(x ) of laser-cooled atoms, where at intermediate length scales the shape is fat tailed. We focus on the rare events beyond this range, which dominate important statistical properties of the system. Through a novel friction mechanism induced by the laser fields, the density is explored with the recently proposed non-normalized infinite-covariant density approach. The small and large fluctuations give rise to a bifractal nature of the spreading packet. We derive general relations which extend our theory to a class of systems with multifractal moments.

  8. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including: antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three-major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  9. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.
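
    The "approximately linear speed-up" claim is a statement about how wall-clock time falls as clients are added; the snippet below shows the usual way such timing results are reduced to speed-up and efficiency figures. The timings are hypothetical placeholders, not the paper's measurements.

```python
# Hypothetical wall-clock times (seconds) for the same alignment job
# on 1..8 BOINC compute clients.
timings = {1: 6400, 2: 3300, 4: 1700, 8: 900}

t1 = timings[1]
for n, tn in sorted(timings.items()):
    speedup = t1 / tn            # S(n) = T(1) / T(n)
    efficiency = speedup / n     # E(n) = S(n) / n, 1.0 = ideal scaling
    print(f"{n} clients: speed-up {speedup:.2f}, efficiency {efficiency:.2f}")
```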

  10. A unifying framework for systems modeling, control systems design, and system operation

    NASA Technical Reports Server (NTRS)

    Dvorak, Daniel L.; Indictor, Mark B.; Ingham, Michel D.; Rasmussen, Robert D.; Stringfellow, Margaret V.

    2005-01-01

    Current engineering practice in the analysis and design of large-scale multidisciplinary control systems is typified by some form of decomposition (functional, physical, or discipline-based) that enables multiple teams to work in parallel and in relative isolation. Too often, the resulting system after integration is an awkward marriage of different control and data mechanisms with poor end-to-end accountability. System-of-systems engineering, which faces this problem on a large scale, cries out for a unifying framework to guide analysis, design, and operation. This paper describes such a framework based on a state-, model-, and goal-based architecture for semi-autonomous control systems that guides analysis and modeling, shapes control system software design, and directly specifies operational intent. This paper illustrates the key concepts in the context of a large-scale, concurrent, globally distributed system of systems: NASA's proposed Array-based Deep Space Network.

  11. Utility of 222Rn as a passive tracer of subglacial distributed system drainage

    NASA Astrophysics Data System (ADS)

    Linhoff, Benjamin S.; Charette, Matthew A.; Nienow, Peter W.; Wadham, Jemma L.; Tedstone, Andrew J.; Cowton, Thomas

    2017-03-01

    Water flow beneath the Greenland Ice Sheet (GrIS) has been shown to include slow-inefficient (distributed) and fast-efficient (channelized) drainage systems, in response to meltwater delivery to the bed via both moulins and surface lake drainage. This partitioning between channelized and distributed drainage systems is difficult to quantify, yet it plays an important role in bulk meltwater chemistry and glacial velocity, and thus subglacial erosion. Radon-222, which is continuously produced via the decay of 226Ra, accumulates in meltwater that has interacted with rock and sediment. Hence, elevated concentrations of 222Rn should be indicative of meltwater that has flowed through a distributed drainage system network. In the spring and summer of 2011 and 2012, we made hourly 222Rn measurements in the proglacial river of a large outlet glacier of the GrIS (Leverett Glacier, SW Greenland). Radon-222 activities were highest in the early melt season (10-15 dpm L-1), decreasing by a factor of 2-5 (3-5 dpm L-1) following the onset of widespread surface melt. Using a 222Rn mass balance model, we estimate that, on average, greater than 90% of the river 222Rn was sourced from distributed system meltwater. The distributed system 222Rn flux varied on diurnal, weekly, and seasonal time scales, with the highest fluxes generally occurring on the falling limb of the hydrograph and during expansion of the channelized drainage system. Using laboratory-based estimates of distributed system 222Rn, the distributed system water flux generally ranged between 1% and 5% of the total proglacial river discharge for both seasons. This study provides a promising new method for hydrograph separation in glacial watersheds and for estimating the timing and magnitude of distributed system fluxes expelled at ice sheet margins.
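
    The abstract does not reproduce the mass balance itself; a minimal two-component form consistent with its description is sketched below. Steady state is assumed and the decay and degassing corrections of the full model are dropped; the symbols are illustrative, not the paper's notation.

```latex
% Two-component mixing between distributed-system and channelized
% meltwater (C = 222Rn activity, Q = discharge):
%   C_riv Q_riv = C_dist Q_dist + C_chan Q_chan,  Q_riv = Q_dist + Q_chan,
% so the distributed-system fraction of river discharge is
\[
  \frac{Q_{\mathrm{dist}}}{Q_{\mathrm{riv}}}
  \;=\;
  \frac{C_{\mathrm{riv}} - C_{\mathrm{chan}}}{C_{\mathrm{dist}} - C_{\mathrm{chan}}} .
\]
```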

  12. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
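
    The optimized schemas themselves are platform-specific, but the traditional single-machine multiway merge they are benchmarked against is easy to sketch with Python's heapq, keyed on genomic location. Header handling and canonical chromosome ordering are elided: this is a baseline illustration, not the paper's implementation.

```python
import heapq

def records(path):
    """Yield (chrom, pos, line) from a position-sorted VCF file,
    skipping header lines (a stub; real parsers do much more)."""
    with open(path) as f:
        for line in f:
            if line.startswith("#"):
                continue
            chrom, pos, _ = line.split("\t", 2)
            yield (chrom, int(pos), line)

def multiway_merge(paths, out_path):
    """k-way merge of sorted VCF bodies; heapq.merge keeps only one
    pending record per input stream in memory. Chromosome names are
    compared lexicographically here, whereas production tools first
    map them onto a canonical genomic order."""
    streams = [records(p) for p in paths]
    with open(out_path, "w") as out:
        for _, _, line in heapq.merge(*streams, key=lambda r: (r[0], r[1])):
            out.write(line)
```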

  13. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Abstract Background Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  14. Data Reprocessing on Worldwide Distributed Systems

    NASA Astrophysics Data System (ADS)

    Wicke, Daniel

    The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established to supply the associated institutes with the data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much-improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high-energy physics experiment on systems distributed worldwide.

  15. Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay

    NASA Astrophysics Data System (ADS)

    Torkamani, Shahab; Butcher, Eric A.

    2013-07-01

    The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
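
    The core trick in this record, estimating unknown parameters by appending them to the state vector and filtering the augmented system, can be sketched generically. The dynamics, measurement model, and noise matrices below are user-supplied placeholders, not the paper's turning-operation model or its continuous time approximation.

```python
import numpy as np

def ekf_step(x_aug, P, z, f, h, F_jac, H_jac, Q, R, dt):
    """One extended Kalman filter step on an augmented state
    x_aug = [physical states; unknown parameters]. The parameter
    entries have zero dynamics in f and are nudged only by the small
    process noise assigned to them in Q."""
    n = len(x_aug)
    # Predict: explicit Euler step of the augmented dynamics.
    x_pred = x_aug + dt * f(x_aug)
    F = np.eye(n) + dt * F_jac(x_aug)
    P_pred = F @ P @ F.T + Q
    # Update with a (possibly incomplete) noisy measurement z.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new
```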

  16. Large 3D resistivity and induced polarization acquisition using the Fullwaver system: towards an adapted processing methodology

    NASA Astrophysics Data System (ADS)

    Leite, Orlando; Gance, Julien; Texier, Benoît; Bernard, Jean; Truffert, Catherine

    2017-04-01

    Driven by the mineral exploration market's need for ever faster and easier set-up of large 3D resistivity and induced polarization surveys, autonomous, cableless recording systems have come to the forefront. In contrast to traditional centralized acquisition, this new type of system permits a completely random distribution of receivers over the survey area, allowing true 3D imaging. This work presents the results of a 3 km2 experiment, imaging to 600 m depth, performed with a new type of autonomous distributed receiver: the I&V-Fullwaver. With such a system, the usual drawbacks of laying long cables over large 3D areas - time consumption, lack of accessibility, heavy weight, electromagnetic induction, etc. - disappear. The V-Fullwavers record the entire voltage time series on two perpendicular axes, allowing good assessment of data quality, while the I-Fullwaver simultaneously records the injected current. For this survey, despite good assessment of individual signal quality on each channel of the set of Fullwaver systems, a significant number of negative apparent resistivities and chargeabilities remain in the dataset (around 15%). Such values are commonly excluded by inversion software, although they may be due to complex geological structures of interest (e.g., linked to the presence of sulfides in the earth). Given that such a distributed recording system aims to deliver the best possible 3D resistivity and IP tomography, how can the 3D inversion be improved? In this work, we present the dataset, the processing chain, and the quality control of a large 3D survey. We show that the quality of the selected data is good enough to include in the inversion processing. We propose a second processing approach, based on the modulus of the apparent resistivity, that stabilizes the inversion, and we then discuss the results of both approaches. We conclude that an effort could be made to include negative apparent resistivities in the inversion code.

  17. Design and implementation of a UNIX based distributed computing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Love, J.S.; Michael, M.W.

    1994-12-31

    We have designed, implemented, and are running a corporate-wide distributed processing batch queue on a large number of networked workstations using the UNIX® operating system. Atlas Wireline researchers and scientists have used the system for over a year. The large increase in available computer power has greatly reduced the time required for nuclear and electromagnetic tool modeling. Use of remote distributed computing has simultaneously reduced computation costs and increased usable computer time. The system integrates equipment from different manufacturers, using various CPU architectures, distinct operating system revisions, and even multiple processors per machine. Various differences between the machines have to be accounted for in the master scheduler. These differences include shells, command sets, swap spaces, memory sizes, CPU sizes, and OS revision levels. Remote processing across a network must be performed in a manner that is seamless from the users' perspective. The system currently uses IBM RISC System/6000®, SPARCstation™, HP9000s700, HP9000s800, and DEC Alpha AXP™ machines. Each CPU in the network has its own speed rating, allowed working hours, and workload parameters. The system is designed so that all of the computers in the network can be optimally scheduled without adversely impacting the primary users of the machines. The increase in total usable computational capacity by means of distributed batch computing can change corporate computing strategy. The integration of disparate computer platforms eliminates the need to buy one type of computer for computations, another for graphics, and yet another for day-to-day operations. It might be possible, for example, to meet all research and engineering computing needs with existing networked computers.
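
    The record names the scheduling inputs (speed rating, allowed working hours, workload parameters) without giving the policy; the sketch below is one hypothetical selection rule built from exactly those fields. The Machine fields mirror the record's parameters, but the scoring heuristic is an assumption.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    speed_rating: float   # relative CPU speed
    load: float           # current load average
    max_load: float       # workload parameter (cap)
    hours: range          # allowed working hours (local time)

def pick_machine(machines, hour):
    """Return the best machine that is inside its allowed hours and
    under its workload cap, or None if the job must wait. Preference
    is a made-up heuristic: high speed rating, penalized by load."""
    candidates = [m for m in machines
                  if hour in m.hours and m.load < m.max_load]
    if not candidates:
        return None
    return max(candidates, key=lambda m: m.speed_rating / (1.0 + m.load))

fleet = [Machine("rs6000-1", 1.0, 0.2, 2.0, range(0, 24)),
         Machine("sparc-7", 0.6, 0.1, 1.0, range(19, 24))]
print(pick_machine(fleet, hour=21).name)   # -> rs6000-1
```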

  18. Direct measurements show decreasing methane emissions from natural gas local distribution systems in the United States.

    PubMed

    Lamb, Brian K; Edburg, Steven L; Ferrara, Thomas W; Howard, Touché; Harrison, Matthew R; Kolb, Charles E; Townsend-Small, Amy; Dyck, Wesley; Possolo, Antonio; Whetstone, James R

    2015-04-21

    Fugitive losses from natural gas distribution systems are a significant source of anthropogenic methane. Here, we report on a national sampling program to measure methane emissions from 13 urban distribution systems across the U.S. Emission factors were derived from direct measurements at 230 underground pipeline leaks and 229 metering and regulating facilities using stratified random sampling. When these new emission factors are combined with estimates for customer meters, maintenance, and upsets, and current pipeline miles and numbers of facilities, the total estimate is 393 Gg/yr with a 95% upper confidence limit of 854 Gg/yr (0.10% to 0.22% of the methane delivered nationwide). This fraction includes emissions from city gates to the customer meter, but does not include other urban sources or those downstream of customer meters. The upper confidence limit accounts for the skewed distribution of measurements, where a few large emitters accounted for most of the emissions. This emission estimate is 36% to 70% less than the 2011 EPA inventory (based largely on 1990s emission data) and reflects significant upgrades at metering and regulating stations, improvements in leak detection and maintenance activities, as well as potential effects from differences in methodologies between the two studies.
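
    The national total is bookkeeping over per-component emission factors and activity counts. The snippet below shows the shape of that calculation; the factors and counts are placeholders and deliberately do not reproduce the study's 393 Gg/yr figure.

```python
# (emission factor in kg CH4 per unit per year, number of units);
# placeholder values, not the paper's measured emission factors.
sources = {
    "underground_pipeline_leaks": (120.0, 60_000),
    "metering_and_regulating":    (450.0, 13_000),
    "customer_meters":            (0.9,  65_000_000),
}

total_kg_per_yr = sum(ef * count for ef, count in sources.values())
print(f"total: {total_kg_per_yr / 1e6:.0f} Gg CH4/yr")  # 1 Gg = 1e6 kg
```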

  19. Design and Realization of Online Monitoring System of Distributed New Energy and Renewable Energy

    NASA Astrophysics Data System (ADS)

    Tang, Yanfen; Zhou, Tao; Li, Mengwen; Zheng, Guotai; Li, Hao

    2018-01-01

    To address the difficulty of centralized monitoring and management of distributed new energy and renewable energy generation projects, which stems from their great variety, differing communication protocols, and large differences in scale, this paper designs an online monitoring system for new energy and renewable energy characterized by distributed deployment, tailorable functions, extendible applications, and fault self-healing. The system is designed around an international standard grid information data model, formulates a unified data acquisition and transmission standard for different types of new energy and renewable energy generation projects, and can realize unified data acquisition and real-time monitoring of generation projects such as solar energy, wind power, and biomass energy within its jurisdiction. The system has been applied in Beijing; at present, 576 projects are connected to it, good results have been achieved, and its stability and reliability have been validated.

  20. Modelling Framework and the Quantitative Analysis of Distributed Energy Resources in Future Distribution Networks

    NASA Astrophysics Data System (ADS)

    Han, Xue; Sandels, Claes; Zhu, Kun; Nordström, Lars

    2013-08-01

    A large body of work claims that the large-scale deployment of Distributed Energy Resources (DERs) could eventually reshape future distribution grid operation in numerous ways. It is therefore necessary to introduce a framework for measuring to what extent power system operation will be changed by the various parameters of DERs. This article proposes a modelling framework for an overview analysis of the correlations among DERs. To validate the framework, the authors describe reference models for different categories of DERs with their unique characteristics, comprising distributed generation, active demand, and electric vehicles. A quantitative analysis is then performed on the basis of the current and envisioned DER deployment scenarios proposed for Sweden. Simulations are performed on two typical distribution network models for four seasons. The simulation results show that, in general, DER deployment opens possibilities to reduce power losses and voltage drops by compensating power from local generation and optimizing local load profiles.

  1. Comparisons of ionospheric electron density distributions reconstructed by GPS computerized tomography, backscatter ionograms, and vertical ionograms

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua

    2015-12-01

    Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky wave ground backscatter radar are both capable of measuring large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to vertical ionosonde data, which clearly shows that both methods yield accurate ionospheric electron density information and thereby provide reliable approaches to ionospheric sounding. Our study can improve the current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.

  2. Modeling occupancy distribution in large spaces with multi-feature classification algorithm

    DOE PAGES

    Wang, Wei; Chen, Jiayu; Hong, Tianzhen

    2018-04-07

    Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and occupancy information determines the cooperation among terminals. However, a room-level person count does not adequately optimize HVAC system operation, because the movement of occupants within the room creates an uneven load distribution. Without accurate knowledge of the occupants' spatial distribution, this unevenness often results in under-cooling/heating or over-cooling/heating in some thermal zones. The lack of high-resolution occupancy distribution is therefore often perceived as a bottleneck for future improvements to HVAC operating efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the outcomes of three case studies were examined to validate detection accuracy. City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and ground truth. The results show an accuracy of over 71.4% at CBD = 1, reaching up to 92.9% at CBD = 2.
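
    A minimal sketch of the two ingredients named above: a k-NN classifier over BLE signal-strength features and a CBD score against ground truth. The feature set, distance metric for the classifier, and training data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from collections import Counter

def knn_zone(rssi_vector, train_X, train_y, k=5):
    """Classify the thermal zone of an occupant from a vector of BLE
    RSSI readings using k nearest neighbours (Euclidean distance here;
    the paper's exact multi-feature set is not reproduced)."""
    dists = np.linalg.norm(train_X - rssi_vector, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return Counter(nearest.tolist()).most_common(1)[0][0]

def cbd(detected, truth):
    """City Block Distance between detected per-zone occupant counts
    and ground truth, the accuracy measure quoted in the record."""
    return int(np.abs(np.asarray(detected) - np.asarray(truth)).sum())

# Hypothetical training data: RSSI from 3 beacons, labeled by zone.
train_X = np.array([[-40, -70, -80], [-42, -68, -79],
                    [-75, -45, -71], [-78, -43, -69]])
train_y = np.array(["zone-A", "zone-A", "zone-B", "zone-B"])
print(knn_zone(np.array([-41, -69, -81]), train_X, train_y, k=3))
print(cbd([3, 1, 0], [2, 2, 0]))  # CBD = 2
```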

  3. Modeling occupancy distribution in large spaces with multi-feature classification algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wei; Chen, Jiayu; Hong, Tianzhen

    Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and occupancy information determines the cooperation among terminals. However, a room-level person count does not adequately optimize HVAC system operation, because the movement of occupants within the room creates an uneven load distribution. Without accurate knowledge of the occupants' spatial distribution, this unevenness often results in under-cooling/heating or over-cooling/heating in some thermal zones. The lack of high-resolution occupancy distribution is therefore often perceived as a bottleneck for future improvements to HVAC operating efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the outcomes of three case studies were examined to validate detection accuracy. City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and ground truth. The results show an accuracy of over 71.4% at CBD = 1, reaching up to 92.9% at CBD = 2.

  4. Ensemble Kalman filtering in presence of inequality constraints

    NASA Astrophysics Data System (ADS)

    van Leeuwen, P. J.

    2009-04-01

    Kalman filtering in the presence of constraints is an active area of research. Given the Gaussian assumption for the probability density functions, it is hard to bring extra constraints into the formalism. On the other hand, in geophysical systems we often encounter constraints, related to e.g. the underlying physics or chemistry, that are violated by the Gaussian assumption. For instance, concentrations are always non-negative, model layers have non-negative thickness, and sea-ice concentration lies between 0 and 1. Several methods to bring inequality constraints into the Kalman-filter formalism have been proposed. One of them is probability density function (pdf) truncation, in which the Gaussian mass from the non-allowed part of the variables is distributed equally over the pdf where the variables are allowed, as proposed by Shimada et al. 1998. However, a problem with this method is that the probability that e.g. the sea-ice concentration is zero, is zero! The new method proposed here does not have this drawback. It assumes that the probability density function is a truncated Gaussian, but the truncated mass is not distributed equally over all allowed values of the variables; instead, it is put into a delta distribution at the truncation point. This delta distribution can easily be handled within Bayes' theorem, leading to posterior probability density functions that are also truncated Gaussians with delta distributions at the truncation location. In this way a much better representation of the system is obtained, while still keeping most of the benefits of the Kalman-filter formalism. The full Kalman filter is prohibitively expensive in large-scale systems, but efficient implementation is possible in ensemble variants of the Kalman filter. Applications to low-dimensional systems and large-scale systems will be discussed.
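
    A one-dimensional sketch of the construction described above, for a state x constrained to x >= 0 (notation mine, not the author's): the Gaussian mass of the forbidden region is moved into a point mass at the truncation point rather than being spread over the allowed values.

```latex
\[
  p(x) \;=\; \mathbf{1}_{\{x \ge 0\}}\,\mathcal{N}\!\left(x;\mu,\sigma^{2}\right)
  \;+\; \Phi\!\left(-\frac{\mu}{\sigma}\right)\delta(x),
\]
% where \Phi is the standard normal CDF, so the point mass at x = 0
% equals the Gaussian probability of the forbidden region x < 0.
% Multiplying this prior by a Gaussian likelihood in Bayes' theorem
% again yields a truncated Gaussian plus a delta at the truncation
% point, which keeps the filter recursion closed.
```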

  5. Browndye: A software package for Brownian dynamics

    NASA Astrophysics Data System (ADS)

    Huber, Gary A.; McCammon, J. Andrew

    2010-11-01

    A new software package, Browndye, is presented for simulating the diffusional encounter of two large biological molecules. It can be used to estimate second-order rate constants and encounter probabilities, and to explore reaction trajectories. Browndye builds upon previous knowledge and algorithms from software packages such as UHBD, SDA, and Macrodox, while implementing algorithms that scale to larger systems.
    Program summary:
    Program title: Browndye
    Catalogue identifier: AEGT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: MIT license, included in distribution
    No. of lines in distributed program, including test data, etc.: 143 618
    No. of bytes in distributed program, including test data, etc.: 1 067 861
    Distribution format: tar.gz
    Programming language: C++, OCaml (http://caml.inria.fr/)
    Computer: PC, workstation, cluster
    Operating system: Linux
    Has the code been vectorised or parallelized?: Yes; runs on multiple processors with shared memory using pthreads
    RAM: depends linearly on size of physical system
    Classification: 3
    External routines: uses the output of APBS [1] (http://www.poissonboltzmann.org/apbs/) as input; APBS must be obtained and installed separately. Expat 2.0.1, CLAPACK, ocaml-expat, and Mersenne Twister are included in the Browndye distribution.
    Nature of problem: exploration and determination of rate constants of bimolecular interactions involving large biological molecules.
    Solution method: Brownian dynamics with electrostatic, excluded volume, van der Waals, and desolvation forces.
    Running time: depends linearly on size of physical system and quadratically on precision of results. The included example executes in a few minutes.

  6. Coherent wave transmission in quasi-one-dimensional systems with Lévy disorder

    NASA Astrophysics Data System (ADS)

    Amanatidis, Ilias; Kleftogiannis, Ioannis; Falceto, Fernando; Gopar, Víctor A.

    2017-12-01

    We study the random fluctuations of the transmission in disordered quasi-one-dimensional systems such as disordered waveguides and/or quantum wires whose random configurations of disorder are characterized by density distributions with a long tail, known as Lévy distributions. The presence of Lévy disorder leads to large fluctuations of the transmission and anomalous localization, in relation to the standard exponential localization (Anderson localization). We calculate the complete distribution of the transmission fluctuations for different numbers of transmission channels in the presence and absence of time-reversal symmetry. Significant differences in the transmission statistics between disordered systems with Anderson and anomalous localizations are revealed. The theoretical predictions are independently confirmed by tight-binding numerical simulations.

  7. Towards scalable Byzantine fault-tolerant replication

    NASA Astrophysics Data System (ADS)

    Zbierski, Maciej

    2017-08-01

    Byzantine fault-tolerant (BFT) replication is a powerful technique, enabling distributed systems to remain available and correct even in the presence of arbitrary faults. Unfortunately, existing BFT replication protocols are mostly load-unscalable, i.e. they fail to respond with adequate performance increase whenever new computational resources are introduced into the system. This article proposes a universal architecture facilitating the creation of load-scalable distributed services based on BFT replication. The suggested approach exploits parallel request processing to fully utilize the available resources, and uses a load balancer module to dynamically adapt to the properties of the observed client workload. The article additionally provides a discussion on selected deployment scenarios, and explains how the proposed architecture could be used to increase the dependability of contemporary large-scale distributed systems.

  8. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Lindsay; Zéphyr, Luckny; Cardell, Judith B.

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell (buy) excess (necessary) energy from the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.

  9. Fault-tolerant clock synchronization in distributed systems

    NASA Technical Reports Server (NTRS)

    Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.

    1990-01-01

    Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
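
    Of the software families listed, convergence-averaging is the simplest to sketch: each node collects peer clock readings, discards the f largest and f smallest to bound the influence of up to f faulty clocks, and averages the remainder. The snippet below is a generic illustration of that idea, not any one published algorithm.

```python
def fault_tolerant_average(readings, f):
    """Clock correction from peer clock offsets (seconds), tolerating
    up to f faulty peers; the classical Byzantine bound requires more
    than 3f readings."""
    if len(readings) <= 3 * f:
        raise ValueError("need more than 3f readings to tolerate f faults")
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

# A faulty peer reporting an absurd offset is trimmed away:
offsets = [0.1, -0.2, 0.0, 0.3, 99.0, -0.1, 0.2]
print(f"{fault_tolerant_average(offsets, f=1):+.3f} s")
```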

  10. A likely universal model of fracture scaling and its consequence for crustal hydromechanics

    NASA Astrophysics Data System (ADS)

    Davy, P.; Le Goc, R.; Darcel, C.; Bour, O.; de Dreuzy, J. R.; Munier, R.

    2010-10-01

    We argue that most fracture systems are spatially organized according to two main regimes: a "dilute" regime for the smallest fractures, where they can grow independently of each other, and a "dense" regime for which the density distribution is controlled by the mechanical interactions between fractures. We derive a density distribution for the dense regime by acknowledging that, statistically, fractures do not cross a larger one. This very crude rule, which expresses the inhibiting role of large fractures against smaller ones but not the reverse, actually appears to be a very strong control on the eventual fracture density distribution, since it results in a self-similar distribution whose exponents and density term are fully determined by the fractal dimension D and a dimensionless parameter γ that encompasses the details of fracture correlations and orientations. The range of values for D and γ appears to be extremely limited, which makes this model quite universal. This theory is supported by quantitative data on either fault or joint networks. The transition between the dilute and dense regimes occurs at about a few tenths of a kilometer for fault systems and a few meters for joints. This remarkable difference between the two processes is likely due to a large-scale control (localization) of fracture growth in faulting that does not exist in jointing. Finally, we discuss the consequences of this model for flow properties and show that these networks are in a critical state, with a large number of nodes carrying a large amount of flow.
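
    The abstract gives the ingredients of the dense-regime distribution without the formula itself; a self-similar form consistent with the description is sketched below. This is a reconstruction from the stated ingredients, not a quotation from the paper.

```latex
% n(l, L) dl = number of fractures with length in [l, l + dl]
% per domain of size L:
\[
  n(l, L)\,\mathrm{d}l \;=\; \gamma\, L^{D}\, l^{-(D+1)}\,\mathrm{d}l ,
\]
% with both the exponent -(D+1) and the density term fixed by the
% fractal dimension D and the dimensionless parameter \gamma.
```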

  11. Control technology development

    NASA Astrophysics Data System (ADS)

    Schaechter, D. B.

    1982-03-01

    The main objectives of the control technology development task are given in the slide below. The first is to develop control design techniques based on flexible structural models, rather than simple rigid-body models. Since large space structures are distributed parameter systems, a new degree of freedom, that of sensor/actuator placement, may be exercised for improving control system performance. Another characteristic of large space structures is numerous oscillatory modes within the control bandwidth. Reduced-order controller design models must be developed which produce stable closed-loop systems when combined with the full-order system. Since the date of an actual large-space-structure flight is rapidly approaching, it is vitally important that theoretical developments are tested in actual hardware. Experimental verification is a vital counterpart of all current theoretical developments.

  12. MonALISA, an agent-based monitoring and control system for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing programs of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs, or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.

  13. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel

    Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  14. Trans-oceanic Remote Power Hardware-in-the-Loop: Multi-site Hardware, Integrated Controller, and Electric Network Co-simulation

    DOE PAGES

    Lundstrom, Blake R.; Palmintier, Bryan S.; Rowe, Daniel; ...

    2017-07-24

    Electric system operators are increasingly concerned with the potential system-wide impacts of the large-scale integration of distributed energy resources (DERs) including voltage control, protection coordination, and equipment wear. This prompts a need for new simulation techniques that can simultaneously capture all the components of these large integrated smart grid systems. This paper describes a novel platform that combines three emerging research areas: power systems co-simulation, power hardware in the loop (PHIL) simulation, and lab-lab links. The platform is distributed, real-time capable, allows for easy internet-based connection from geographically-dispersed participants, and is software platform agnostic. We demonstrate its utility by studying real-time PHIL co-simulation of coordinated solar PV firming control of two inverters connected in multiple electric distribution network models, prototypical of U.S. and Australian systems. Here, the novel trans-pacific closed-loop system simulation was conducted in real-time using a power network simulator and physical PV/battery inverter at power at the National Renewable Energy Laboratory in Golden, CO, USA and a physical PV inverter at power at the Commonwealth Scientific and Industrial Research Organisation's Energy Centre in Newcastle, NSW, Australia. This capability enables smart grid researchers throughout the world to leverage their unique simulation capabilities for multi-site collaborations that can effectively simulate and validate emerging smart grid technology solutions.

  15. A Medical Television Center; a Guide to Organizing a Large Television Center in Health Science Educational Institutions. Monograph 5.

    ERIC Educational Resources Information Center

    Potts, Robert E.

    Guidelines are presented for establishing large television centers in health science education institutions. Television distribution systems are described, and staff, equipment, space and budgetary requirements are discussed. Included are: (1) a proposed chart of organizational development and job descriptions; (2) suggested equipment purchases;…

  16. Physical consequences of large organic debris in Pacific Northwest streams.

    Treesearch

    Frederick J. Swanson; George W. Lienkaemper

    1978-01-01

    Large organic debris in streams controls the distribution of aquatic habitats, the routing of sediment through stream systems, and the stability of streambed and banks. Management activities directly alter debris loading by addition or removal of material and indirectly by increasing the probability of debris torrents and removing standing streamside trees. We propose...

  17. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  18. Forbidden regimes in the distribution of bipartite quantum correlations due to multiparty entanglement

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh; Dhar, Himadri Shekhar; Prabhu, R.; Sen(De), Aditi; Sen, Ujjwal

    2017-05-01

    Monogamy is a nonclassical property that limits the distribution of quantum correlation among subparts of a multiparty system. We show that monogamy scores for different quantum correlation measures are bounded above by functions of genuine multipartite entanglement for a large majority of pure multiqubit states. The bound is universal for all three-qubit pure states. We derive necessary conditions to characterize the states that violate the bound, which can also be observed by numerical simulation for a small set of states, generated Haar uniformly. The results indicate that genuine multipartite entanglement restricts the distribution of bipartite quantum correlations in a multiparty system.
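
    The monogamy score referred to above has a standard definition for a bipartite measure Q on an (n+1)-party state with A as the nodal observer; it is reproduced below as a sketch. The bound by genuine multipartite entanglement established in the record is not reproduced here.

```latex
\[
  \delta_Q \;=\; Q_{A:B_1 B_2 \cdots B_n} \;-\; \sum_{i=1}^{n} Q_{A:B_i} ,
\]
% Q is said to be monogamous for a given state precisely when
% \delta_Q \ge 0; the record's result bounds \delta_Q from above by a
% function of the genuine multipartite entanglement for most pure
% multiqubit states.
```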

  19. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The graphical middleware and 3D desktop prototypes developed were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutant spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data, which originates from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations to enhance the end-users' interactions.

  20. Implementation strategies for load center automation on the space station module/power management and distribution testbed

    NASA Technical Reports Server (NTRS)

    Watson, Karen

    1990-01-01

    The Space Station Module/Power Management and Distribution (SSM/PMAD) testbed was developed to study tertiary power management in the modules of large spacecraft. The main goal was to study automation techniques, not necessarily to develop flight-ready systems. Because of the confidence gained in many of the automation strategies investigated, it is appropriate to study implementation strategies in more detail, in order to find better trade-offs for nearer-to-flight-ready systems. These trade-offs particularly concern the weight, volume, power consumption, and performance of the automation system. These systems, in their present implementation, are described.

  1. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the experiment's requirements for processing huge data sets and providing a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS), used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  2. Large-Scale Wireless Temperature Monitoring System for Liquefied Petroleum Gas Storage Tanks

    PubMed Central

    Fan, Guangwen; Shen, Yu; Hao, Xiaowei; Yuan, Zongming; Zhou, Zhi

    2015-01-01

    Temperature distribution is a critical indicator of the health condition of Liquefied Petroleum Gas (LPG) storage tanks. In this paper, we present a large-scale wireless temperature monitoring system to evaluate the safety of LPG storage tanks. The system includes wireless sensor networks, high-temperature fiber-optic sensors, and monitoring software. Finally, a case study on real-world LPG storage tanks demonstrates the feasibility of the system. The unique features of wireless transmission, automatic data acquisition and management, and local and remote access make the developed system a good alternative for temperature monitoring of LPG storage tanks in practical applications. PMID:26393596

  3. Chaotic dynamics of large-scale double-diffusive convection in a porous medium

    NASA Astrophysics Data System (ADS)

    Kondo, Shutaro; Gotoda, Hiroshi; Miyano, Takaya; Tokuda, Isao T.

    2018-02-01

    We have studied chaotic dynamics of large-scale double-diffusive convection of a viscoelastic fluid in a porous medium from the viewpoint of dynamical systems theory. A fifth-order nonlinear dynamical system modeling the double-diffusive convection is theoretically obtained by incorporating the Darcy-Brinkman equation into transport equations through a physical dimensionless parameter representing porosity. We clearly show that the chaotic convective motion becomes much more complicated with increasing porosity. The degree of dynamic instability during chaotic convective motion is quantified by two important measures: the network entropy of the degree distribution in the horizontal visibility graph and the Kaplan-Yorke dimension in terms of Lyapunov exponents. We also present an interesting on-off intermittent phenomenon in the probability distribution of time intervals exhibiting nearly complete synchronization.
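
    A minimal sketch of the Kaplan-Yorke dimension computed from a Lyapunov spectrum, the second instability measure named above; the five example exponents are hypothetical, not values from the paper:

```python
import numpy as np

def kaplan_yorke_dimension(lyap):
    """Kaplan-Yorke dimension D = j + (sum of j largest exponents)/|lambda_{j+1}|,
    where j is the largest index whose partial sum is still non-negative."""
    lam = np.sort(np.asarray(lyap, dtype=float))[::-1]  # descending order
    partial = np.cumsum(lam)
    if partial[0] < 0:
        return 0.0                      # all exponents negative: fixed point
    if partial[-1] >= 0:
        return float(len(lam))          # partial sums never go negative
    j = int(np.max(np.where(partial >= 0.0)[0])) + 1
    return j + partial[j - 1] / abs(lam[j])

# Hypothetical spectrum for a fifth-order chaotic flow
# (one positive, one near-zero, three negative exponents):
print(kaplan_yorke_dimension([0.8, 0.0, -1.2, -2.5, -4.0]))  # ~2.67
```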

  4. Near real-time finite fault source inversion for moderate-large earthquakes in Taiwan using teleseismic P waveform

    NASA Astrophysics Data System (ADS)

    Wong, T. P.; Lee, S. J.; Gung, Y.

    2017-12-01

    Taiwan is located at one of the most active tectonic regions in the world. Rapid estimation of the spatial slip distribution of a moderate-to-large earthquake (Mw ≥ 6.0) is important for emergency response, so a real-time system is needed to provide a report immediately after an earthquake happens. The earthquake activities in the vicinity of Taiwan are monitored by the Real-Time Moment Tensor Monitoring System (RMT), which provides rapid focal mechanisms and source parameters. In this study, we build on the RMT system to develop a near real-time finite fault source inversion system for moderate-to-large earthquakes occurring in Taiwan. The system is triggered by the RMT system when an event of Mw ≥ 6.0 is detected. According to the RMT report, our system automatically determines the fault dimension, record length, and rise time. We adopted a one-segment fault plane with variable rake angle. The generalized ray theory was applied to calculate the Green's function for each subfault. The primary objective of the system is to provide a first-order image of the coseismic slip pattern and to identify the centroid location on the fault plane. The performance of this system has been demonstrated successfully on 23 large earthquakes that occurred in Taiwan. The results show excellent data fits and are consistent with the solutions from other studies. The preliminary spatial slip distribution will be provided within 25 minutes after an earthquake occurs.

  5. Performance Monitoring of Residential Hot Water Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Anna; Lanzisera, Steven; Lutz, Jim

    Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end use point and the water heater in 20 homes and collected 1 s flow and temperature data over an 8 month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
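
    The waste accounting described here can be computed directly from 1 s flow and temperature series; below is a minimal sketch under assumed thresholds (t_cold and t_useful are illustrative, not the study's definitions):

```python
import numpy as np

RHO_CP = 4186.0  # J/(kg.K), specific heat of water; 1 L of water ~ 1 kg

def draw_event_waste(flow_lps, temp_c, t_cold=20.0, t_useful=40.0):
    """Split one hot-water draw event into useful and wasted energy.

    flow_lps, temp_c: 1 Hz arrays of flow (L/s) and delivered temperature (C).
    Water delivered below t_useful counts as waste; energy is measured
    relative to the cold inlet temperature t_cold.
    """
    flow = np.asarray(flow_lps, float)
    temp = np.asarray(temp_c, float)
    energy = RHO_CP * flow * np.maximum(temp - t_cold, 0.0)  # J per 1 s sample
    useful = temp >= t_useful
    return {
        "litres_wasted": float(flow[~useful].sum()),
        "energy_wasted_kJ": float(energy[~useful].sum() / 1e3),
        "energy_useful_kJ": float(energy[useful].sum() / 1e3),
    }

# Hypothetical 60 s shower start: temperature ramps from 20 C to 45 C.
t = np.arange(60.0)
print(draw_event_waste(np.full(60, 0.1), 20.0 + 25.0 * np.minimum(t / 30.0, 1.0)))
```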

  6. Research on distributed virtual reality system in electronic commerce

    NASA Astrophysics Data System (ADS)

    Xue, Qiang; Wang, Jiening; Sun, Jizhou

    2004-03-01

    In this paper, Distributed Virtual Reality (DVR) technology applied in Electronic Commerce (EC) is discussed. DVR provides a new means for humans to recognize, analyze, and resolve large-scale, complex problems, which has driven its rapid development in EC fields. The technologies of CSCW (Computer Supported Cooperative Work) and middleware are introduced into the development of the EC-DVR system to meet the need for a platform that provides the necessary cooperation and communication services and avoids repeatedly re-developing basic modules. Finally, the paper gives a platform structure of the EC-DVR system.

  7. Distributed Energy Systems: Security Implications of the Grid of the Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stamber, Kevin L.; Kelic, Andjelka; Taylor, Robert A.

    2017-01-01

    Distributed Energy Resources (DER) are being added to the nation's electric grid, and as penetration of these resources increases, they have the potential to displace or offset large-scale, capital-intensive, centralized generation. Integration of DER into operation of the traditional electric grid requires automated operational control and communication of DER elements, from system measurement to control hardware and software, in conjunction with a utility's existing automated and human-directed control of other portions of the system. Implementation of DER technologies suggests a number of gaps from both a security and a policy perspective.

  8. Accounting and Accountability for Distributed and Grid Systems

    NASA Technical Reports Server (NTRS)

    Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.

    2001-01-01

    While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.

  9. Deceit: A flexible distributed file system

    NASA Technical Reports Server (NTRS)

    Siegel, Alex; Birman, Kenneth; Marzullo, Keith

    1989-01-01

    Deceit, a distributed file system (DFS) being developed at Cornell, focuses on flexible file semantics in relation to efficiency, scalability, and reliability. Deceit servers are interchangeable and collectively provide the illusion of a single, large server machine to any clients of the Deceit service. Non-volatile replicas of each file are stored on a subset of the file servers. The user is able to set parameters on a file to achieve different levels of availability, performance, and one-copy serializability. Deceit also supports a file version control mechanism. In contrast with many recent DFS efforts, Deceit can behave like a plain Sun Network File System (NFS) server and can be used by any NFS client without modifying any client software. The current Deceit prototype uses the ISIS Distributed Programming Environment for all communication and process group management, an approach that reduces system complexity and increases system robustness.

  10. Exploiting geo-distributed clouds for an e-health monitoring system with minimum service delay and privacy preservation.

    PubMed

    Shen, Qinghua; Liang, Xiaohui; Shen, Xuemin; Lin, Xiaodong; Luo, Henry Y

    2014-03-01

    In this paper, we propose an e-health monitoring system with minimum service delay and privacy preservation by exploiting geo-distributed clouds. In the system, the resource allocation scheme enables the distributed cloud servers to cooperatively assign servers to the requesting users under a load-balance condition, so that the service delay for users is minimized. In addition, a traffic-shaping algorithm is proposed that converts user health data traffic into non-health data traffic, greatly reducing the effectiveness of traffic analysis attacks. Through numerical analysis, we show the efficiency of the proposed traffic-shaping algorithm in terms of service delay and privacy preservation. Furthermore, through simulations, we demonstrate that the proposed resource allocation scheme significantly reduces the service delay compared to two alternatives that jointly use the shortest-queue and distributed control laws.
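
    A minimal sketch of the kind of load-balanced server assignment the abstract describes, assuming a greedy shortest-expected-delay rule; the server names, rates, and delays are hypothetical, and the paper's actual allocation scheme is not reproduced here:

```python
import heapq

def assign_requests(servers, requests):
    """Greedy load-balanced assignment: each request goes to the server
    with the smallest (network delay + queueing delay) estimate.

    servers: dict name -> {'rate': requests/s served, 'rtt': network delay (s)}
    requests: iterable of request ids.
    """
    # priority queue of (estimated delay, server name, jobs already queued)
    heap = [(s['rtt'], name, 0) for name, s in servers.items()]
    heapq.heapify(heap)
    placement = {}
    for req in requests:
        delay, name, queued = heapq.heappop(heap)
        placement[req] = name
        queued += 1
        # the next request at this server would wait one more service time
        heapq.heappush(heap, (servers[name]['rtt'] + queued / servers[name]['rate'],
                              name, queued))
    return placement

servers = {'dc-east': {'rate': 50.0, 'rtt': 0.01},
           'dc-west': {'rate': 30.0, 'rtt': 0.03}}
print(assign_requests(servers, range(10)))
```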

  11. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.
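
    To make the ordering guarantees concrete, here is a minimal sketch of per-sender FIFO delivery with sequence numbers; it illustrates one of the simplest guarantees such protocols provide, not a full fault-tolerant broadcast:

```python
from collections import defaultdict

class FifoReceiver:
    """Per-sender FIFO delivery: buffer out-of-order messages and deliver
    each sender's messages in sequence-number order exactly once."""
    def __init__(self):
        self.next_seq = defaultdict(int)   # sender -> next expected seq
        self.pending = defaultdict(dict)   # sender -> {seq: payload}
        self.delivered = []

    def receive(self, sender, seq, payload):
        if seq < self.next_seq[sender]:
            return  # duplicate of an already-delivered message
        self.pending[sender][seq] = payload
        # deliver any now-contiguous prefix
        while self.next_seq[sender] in self.pending[sender]:
            msg = self.pending[sender].pop(self.next_seq[sender])
            self.delivered.append((sender, self.next_seq[sender], msg))
            self.next_seq[sender] += 1

r = FifoReceiver()
for seq in (1, 0, 2):          # the network reorders messages from node "A"
    r.receive("A", seq, f"m{seq}")
print(r.delivered)             # delivered in FIFO order: m0, m1, m2
```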

  12. Bringing modeling to the masses: A web based system to predict potential species distributions

    USGS Publications Warehouse

    Graham, Jim; Newman, Greg; Kumar, Sunil; Jarnevich, Catherine S.; Young, Nick; Crall, Alycia W.; Stohlgren, Thomas J.; Evangelista, Paul

    2010-01-01

    Predicting current and potential species distributions and abundance is critical for managing invasive species, preserving threatened and endangered species, and conserving native species and habitats. Accurate predictive models are needed at local, regional, and national scales to guide field surveys, improve monitoring, and set priorities for conservation and restoration. Modeling capabilities, however, are often limited by access to software and environmental data required for predictions. To address these needs, we built a comprehensive web-based system that: (1) maintains a large database of field data; (2) provides access to field data and a wealth of environmental data; (3) accesses values in rasters representing environmental characteristics; (4) runs statistical spatial models; and (5) creates maps that predict the potential species distribution. The system is available online at www.niiss.org, and provides web-based tools for stakeholders to create potential species distribution models and maps under current and future climate scenarios.

  13. Detecting fission from special nuclear material sources

    DOEpatents

    Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA

    2012-06-05

    A neutron detector system for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays a plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate it, in order to accentuate differences of its neutron emission from Poisson distributions and/or environmental sources.
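
    The "excess grouped neutrons" idea can be illustrated with a Feynman-Y (variance-to-mean) statistic, which vanishes for a Poisson source; this sketch uses a crude synthetic stand-in for fission bursts, not the patented processing chain:

```python
import numpy as np

def feynman_y(counts):
    """Feynman-Y excess variance of gated neutron counts.

    For a Poisson (non-multiplying) source var == mean, so Y == 0;
    correlated fission chains give grouped counts and Y > 0.
    """
    c = np.asarray(counts, float)
    return c.var(ddof=1) / c.mean() - 1.0

rng = np.random.default_rng(0)
background = rng.poisson(4.0, 10000)                 # Poisson-like background
# Crude stand-in for fission grouping: occasional correlated bursts
bursts = rng.poisson(0.5, 10000) * rng.integers(1, 5, 10000)
print(feynman_y(background))           # ~0: consistent with Poisson
print(feynman_y(background + bursts))  # > 0: grouped (fission-like) neutrons
```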

  14. Quantum coherence: Reciprocity and distribution

    NASA Astrophysics Data System (ADS)

    Kumar, Asutosh

    2017-03-01

    Quantum coherence is the outcome of the superposition principle. Recently, it has been theorized as a quantum resource, and it is the premise of quantum correlations in multipartite systems. It is therefore interesting to study the coherence content and its distribution in a multipartite quantum system. In this work, we show analytically as well as numerically the reciprocity between the coherence and the mixedness of a quantum state. We find that this trade-off is a general feature, in the sense that it holds for a large spectrum of measures of coherence and of mixedness. We also study the distribution of coherence in multipartite systems by looking at a monogamy-type relation, which we refer to as an additivity relation, between the coherences of different parts of the system. We show that for the Dicke states, while the normalized measures of coherence violate the additivity relation, the unnormalized ones satisfy it.
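
    A minimal sketch of the coherence-mixedness reciprocity for a qubit, using the standard l1-norm coherence and normalized linear entropy; the one-parameter noise family is illustrative, and the paper's specific measures and states may differ:

```python
import numpy as np

def l1_coherence(rho):
    """l1-norm coherence: sum of absolute off-diagonal elements."""
    return np.abs(rho).sum() - np.abs(np.diag(rho)).sum()

def mixedness(rho):
    """Normalized linear entropy: (d/(d-1)) * (1 - Tr rho^2), in [0, 1]."""
    d = rho.shape[0]
    return d / (d - 1) * (1.0 - np.real(np.trace(rho @ rho)))

# One-parameter qubit family: pure superposition mixed with white noise.
for p in (0.0, 0.5, 1.0):
    plus = np.full((2, 2), 0.5)                 # |+><+| in the incoherent basis
    rho = (1 - p) * plus + p * np.eye(2) / 2
    print(f"p={p:.1f}  C_l1={l1_coherence(rho):.2f}  M={mixedness(rho):.2f}")
# As mixedness M grows, the attainable coherence C_l1 shrinks: the trade-off.
```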

  15. Shear-transformation-zone theory of linear glassy dynamics.

    PubMed

    Bouchbinder, Eran; Langer, J S

    2011-06-01

    We present a linearized shear-transformation-zone (STZ) theory of glassy dynamics in which the internal STZ transition rates are characterized by a broad distribution of activation barriers. For slowly aging or fully aged systems, the main features of the barrier-height distribution are determined by the effective temperature and other near-equilibrium properties of the configurational degrees of freedom. Our theory accounts for the wide range of relaxation rates observed in both metallic glasses and soft glassy materials such as colloidal suspensions. We find that the frequency-dependent loss modulus is not just a superposition of Maxwell modes. Rather, it exhibits an α peak that rises near the viscous relaxation rate and, for nearly jammed, glassy systems, extends to much higher frequencies in accord with experimental observations. We also use this theory to compute strain recovery following a period of large, persistent deformation and then abrupt unloading. We find that strain recovery is determined in part by the initial barrier-height distribution, but that true structural aging also occurs during this process and determines the system's response to subsequent perturbations. In particular, we find by comparison with experimental data that the initial deformation produces a highly disordered state with a large population of low activation barriers, and that this state relaxes quickly toward one in which the distribution is dominated by the high barriers predicted by the near-equilibrium analysis. The nonequilibrium dynamics of the barrier-height distribution is the most important of the issues raised and left unresolved in this paper.

  16. Implementation of Fiber Optic Sensing System on Sandwich Composite Cylinder Buckling Test

    NASA Technical Reports Server (NTRS)

    Pena, Francisco; Richards, W. Lance; Parker, Allen R.; Piazza, Anthony; Schultz, Marc R.; Rudd, Michelle T.; Gardner, Nathaniel W.; Hilburger, Mark W.

    2018-01-01

    The National Aeronautics and Space Administration (NASA) Engineering and Safety Center Shell Buckling Knockdown Factor Project is a multicenter project tasked with developing new analysis-based shell buckling design guidelines and design factors (i.e., knockdown factors) through high-fidelity buckling simulations and advanced test technologies. To validate these new buckling knockdown factors for future launch vehicles, the Shell Buckling Knockdown Factor Project is carrying out structural testing on a series of large-scale metallic and composite cylindrical shells at the NASA Marshall Space Flight Center (Marshall Space Flight Center, Alabama). A fiber optic sensor system was used to measure strain on a large-scale sandwich composite cylinder that was tested under multiple axial compressive loads up to more than 850,000 lb and equivalent bending loads over 22 million in-lb. During the structural testing of the composite cylinder, strain data were collected from optical cables containing distributed fiber Bragg gratings using a custom fiber optic sensor system interrogator developed at the NASA Armstrong Flight Research Center. A total of 16 fiber-optic strands, each containing nearly 1,000 strain-measuring fiber Bragg gratings, were installed on the inner and outer cylinder surfaces to monitor the test article's global structural response through high-density real-time and post-test strain measurements. The distributed sensing system provided evidence of local epoxy failure at the attachment-ring-to-barrel interface that would not have been detected with conventional instrumentation. Results from the fiber optic sensor system were used to further refine and validate structural models for buckling of large-scale composite structures. This paper discusses the techniques employed for real-time structural monitoring of the composite cylinder during load introduction, and for distributed bending-strain measurements over a large section of the cylinder, utilizing the unique sensing capabilities of fiber optic sensors.
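
    Strain from a fiber Bragg grating follows from the fractional shift of its Bragg wavelength; a minimal sketch using a typical photo-elastic coefficient for silica fiber (the interrogator's actual calibration is an assumption here):

```python
def fbg_strain(wavelength_nm, ref_wavelength_nm, p_e=0.22):
    """Strain (microstrain) from a fiber Bragg grating wavelength shift.

    Standard FBG relation: d(lambda)/lambda0 = (1 - p_e) * strain, with p_e
    the effective photo-elastic coefficient (~0.22 for silica fiber).
    Assumes temperature effects are compensated separately.
    """
    shift = (wavelength_nm - ref_wavelength_nm) / ref_wavelength_nm
    return shift / (1.0 - p_e) * 1e6  # microstrain

# Hypothetical grating at 1550.00 nm reading 1550.60 nm under load:
print(f"{fbg_strain(1550.60, 1550.00):.0f} microstrain")  # ~496
```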

  17. Cerebral palsy in Victoria: motor types, topography and gross motor function.

    PubMed

    Howard, Jason; Soo, Brendan; Graham, H Kerr; Boyd, Roslyn N; Reid, Sue; Lanigan, Anna; Wolfe, Rory; Reddihough, Dinah S

    2005-01-01

    To study the relationships between motor type, topographical distribution and gross motor function in a large, population-based cohort of children with cerebral palsy (CP), from the State of Victoria, and compare this cohort to similar cohorts from other countries. An inception cohort was generated from the Victorian Cerebral Palsy Register (VCPR) for the birth years 1990-1992. Demographic information, motor types and topographical distribution were obtained from the register and supplemented by grading gross motor function according to the Gross Motor Function Classification System (GMFCS). Complete data were obtained on 323 (86%) of 374 children in the cohort. Gross motor function varied from GMFCS level I (35%) to GMFCS level V (18%) and was similar in distribution to a contemporaneous Swedish cohort. There was a fairly even distribution across the topographical distributions of hemiplegia (35%), diplegia (28%) and quadriplegia (37%) with a large majority of young people having the spastic motor type (86%). The VCPR is ideal for population-based studies of gross motor function in children with CP. Gross motor function is similar in populations of children with CP in developed countries but the comparison of motor types and topographical distribution is difficult because of lack of consensus with classification systems. Use of the GMFCS provides a valid and reproducible method for clinicians to describe gross motor function in children with CP using a universal language.

  18. Emerging Technologies for Software-Reliant Systems of Systems

    DTIC Science & Technology

    2010-09-01

    (Abstract fragments only.) Sensor networks monitoring conditions such as temperature, sound, vibration, light intensity, motion, or proximity to objects [Raghavendra 2006]; cognitive networks; systems-of-systems (SoS) characteristics of evolutionary development, emergent behavior, and geographic distribution; Maier's four types of SoS based on their management; software development by multinational teams, with offshoring used by many organizations to reduce development costs; large web-based systems.

  19. Design optimization of large-size format edge-lit light guide units

    NASA Astrophysics Data System (ADS)

    Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.

    2016-04-01

    In this paper, we present an original method of dot pattern generation dedicated to large-size format light guide plate (LGP) design optimization for applications such as photo-bioreactors, where the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot-size variation is less than the typical resolution of ink-dot printing. These sections are then replaced by equivalent cells with a continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into a dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. This significantly reduces the total time needed for dot pattern optimization.

  20. Alex Swindler | NREL

    Science.gov Websites

    Research interests include distributed computing, Web information systems engineering, software engineering, and computer graphics. Project work includes dashboard development, NREL Energy Story visualization, and Green Button data integration, as well as a large number of Web projects; recipient of an R&D 100 Award. Prior to joining NREL, Alex worked as a system administrator and Web developer.

  1. Intrusion-Tolerant Replication under Attack

    ERIC Educational Resources Information Center

    Kirsch, Jonathan

    2010-01-01

    Much of our critical infrastructure is controlled by large software systems whose participants are distributed across the Internet. As our dependence on these critical systems continues to grow, it becomes increasingly important that they meet strict availability and performance requirements, even in the face of malicious attacks, including those…

  2. Space and energy. [space systems for energy generation, distribution and control

    NASA Technical Reports Server (NTRS)

    Bekey, I.

    1976-01-01

    Potential contributions of space to energy-related activities are discussed. Advanced concepts presented include worldwide energy distribution to substation-sized users using low-altitude space reflectors; powering large numbers of large aircraft worldwide using laser beams reflected from space mirror complexes; providing night illumination via sunlight-reflecting space mirrors; fine-scale power programming and monitoring in transmission networks by monitoring millions of network points from space; prevention of undetected hijacking of nuclear reactor fuels by space tracking of signals from tagging transmitters on all such materials; and disposal of nuclear power plant radioactive wastes in space.

  3. Research study on multi-KW-DC distribution system

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1975-01-01

    A detailed definition of the HVDC test facility and the equipment required to implement the test program are provided. The basic elements of the test facility are illustrated, and consist of: the power source, conventional and digital supervision and control equipment, power distribution harness and simulated loads. The regulated dc power supplies provide steady-state power up to 36 KW at 120 VDC. Power for simulated line faults will be obtained from two banks of 90 ampere-hour lead-acid batteries. The relative merits of conventional and multiplexed power control will be demonstrated by the Supervision and Monitor Unit (SMU) and the Automatically Controlled Electrical Systems (ACES) hardware. The distribution harness is supported by a metal duct which is bonded to all component structures and functions as the system ground plane. The load banks contain passive resistance and reactance loads, solid state power controllers and active pulse width modulated loads. The HVDC test facility is designed to simulate a power distribution system for large aerospace vehicles.

  4. 3D beam shape estimation based on distributed coaxial cable interferometric sensor

    NASA Astrophysics Data System (ADS)

    Cheng, Baokai; Zhu, Wenge; Liu, Jie; Yuan, Lei; Xiao, Hai

    2017-03-01

    We present a coaxial cable interferometer based distributed sensing system for 3D beam shape estimation. By making a series of reflectors on a coaxial cable, multiple Fabry-Perot cavities are created on it. Two cables are mounted on the beam at proper locations, and a vector network analyzer (VNA) is connected to them to obtain the complex reflection signal, which is used to calculate the strain distribution of the beam in the horizontal and vertical planes. With 6 GHz swept bandwidth on the VNA, the spatial resolution for distributed strain measurement is 0.1 m, and the sensitivity is 3.768 MHz per millistrain (mε) at the interferogram dip near 3.3 GHz. Using a displacement-strain transformation, the shape of the beam is reconstructed. With only two modified cables and a VNA, this system is easy to implement and manage. Compared to optical fiber based sensor systems, coaxial cable sensors have the advantages of large strain range and robustness, making this system suitable for structural health monitoring applications.
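
    The displacement-strain transformation amounts to integrating bending curvature twice; a minimal cantilever sketch, where the sensor offset c and the strain values are illustrative assumptions:

```python
import numpy as np

def beam_deflection(x, strain, c):
    """Deflection of a cantilever from distributed surface strain.

    Bending curvature kappa = strain / c (c: distance from the neutral axis
    to the strain sensor). Deflection follows from integrating curvature
    twice (trapezoidal rule) with clamped-end conditions w(0) = w'(0) = 0.
    """
    kappa = np.asarray(strain, float) / c
    slope = np.concatenate(([0.0],
        np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
    w = np.concatenate(([0.0],
        np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
    return w

# Hypothetical case: constant strain (uniform curvature) over a 2 m beam,
# sampled every 0.1 m, with the sensor 5 mm from the neutral axis.
x = np.arange(0.0, 2.01, 0.1)
w = beam_deflection(x, np.full_like(x, 500e-6), c=0.005)
print(f"tip deflection ~ {w[-1]*1000:.1f} mm")  # kappa*L^2/2 = 200 mm
```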

  5. 11-kW direct diode laser system with homogenized 55 × 20 mm² Top-Hat intensity distribution

    NASA Astrophysics Data System (ADS)

    Köhler, Bernd; Noeske, Axel; Kindervater, Tobias; Wessollek, Armin; Brand, Thomas; Biesenbach, Jens

    2007-02-01

    In comparison with other laser systems diode lasers are characterized by a unique overall efficiency, a small footprint and high reliability. However, one major drawback of direct diode laser systems is the inhomogeneous intensity distribution in the far field. Furthermore the output power of current commercially available systems is limited to about 6 kW. We report on a diode laser system with 11 kW output power at a single wavelength of 940 nm aiming for customer specific large area treatment. To the best of our knowledge this is the highest output power reported so far for a direct diode laser system. In addition to the high output power the intensity distribution of the laser beam is homogenized in both axes leading to a 55 × 20 mm² Top-Hat intensity profile at a working distance of 400 mm. Homogeneity of the intensity distribution is better than 90%. The intensity in the focal plane is 1 kW/cm². We will present a detailed characterization of the laser system, including measurements of power, power stability and intensity distribution of the homogenized laser beam. In addition we will compare the experimental data with the results of non-sequential raytracing simulations.

  6. Augmentation of the space station module power management and distribution breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Hall, David K.; Lollar, Louis F.

    1991-01-01

    The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors, (LLP's) which execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. Above the LLP's are three cooperating artificial intelligence (AI) systems which manage load prioritizations, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, SSM/PMAD, interface to the Lewis Research Center test bed, the large autonomous spacecraft electrical power system, and future plans are discussed.

  7. Multiplicity fluctuations and collective flow in small colliding systems

    NASA Astrophysics Data System (ADS)

    Kawaguchi, Koji; Murase, Koichi; Hirano, Tetsufumi

    2017-11-01

    Recent observation of collective-flow-like behaviours in small colliding systems attracts significant theoretical and experimental interest. In large colliding systems, large collective flow has been interpreted as a manifestation of the almost-perfect fluidity of the quark gluon plasma (QGP). It is therefore quite intriguing to explore how small the QGP can be as a fluid. Multiplicity fluctuations play a crucial role in the centrality definition of events in small colliding systems, since fluctuations are, in general, more important as the system size gets smaller. To capture the correct multiplicity fluctuations, we employ PYTHIA, which naturally describes the multiplicity distribution in p+p collisions. We superpose p+p collisions by taking into account the number of participants and the number of binary collisions from a Monte-Carlo version of the Glauber model, and we evaluate initial entropy density distributions which contain not only multiplicity fluctuations but also fluctuations of longitudinal profiles. Solving hydrodynamic equations followed by the hadronic afterburner, we calculate transverse momentum spectra and elliptic and triangular flow parameters in p+Au, d+Au and 3He+Au collisions at the RHIC energy and p+Pb collisions at the LHC energy. Although a large fraction of the final anisotropic flow parameters comes from the fluid-dynamical stage, the effects of hadronic rescatterings turn out to be important as well in understanding the flow data in small colliding systems.

  8. A Data Analysis Expert System For Large Established Distributed Databases

    NASA Astrophysics Data System (ADS)

    Gnacek, Anne-Marie; An, Y. Kim; Ryan, J. Patrick

    1987-05-01

    The purpose of this work is to analyze the applicability of artificial intelligence techniques for developing a user-friendly, parallel interface to large isolated, incompatible NASA databases for the purpose of assisting the management decision process. To carry out this work, a survey was conducted to establish the data access requirements of several key NASA user groups. In addition, current NASA database access methods were evaluated. The results of this work are presented in the form of a design for a natural language database interface system, called the Deductively Augmented NASA Management Decision Support System (DANMDS). This design is feasible principally because of recently announced commercial hardware and software product developments which allow cross-vendor compatibility. The goal of the DANMDS system is commensurate with the central dilemma confronting most large companies and institutions in America, the retrieval of information from large, established, incompatible database systems. The DANMDS system implementation would represent a significant first step toward this problem's resolution.

  9. The historical distribution of Gunnison Sage-Grouse in Colorado

    USGS Publications Warehouse

    Braun, Clait E.; Oyler-McCance, Sara J.; Nehring, Jennifer A.; Commons, Michelle L.; Young, Jessica R.; Potter, Kim M.

    2014-01-01

    The historical distribution of Gunnison Sage-Grouse (Centrocercus minimus) in Colorado is described based on published literature, observations, museum specimens, and the known distribution of sagebrush (Artemisia spp.). Historically, Gunnison Sage-Grouse were widely but patchily distributed in up to 22 counties in south-central and southwestern Colorado. The historical distribution of this species was south of the Colorado-Eagle river drainages primarily west of the Continental Divide. Potential contact areas with Greater Sage-Grouse (C. urophasianus) were along the Colorado-Eagle river system in Mesa, Garfield, and Eagle counties, west of the Continental Divide. Gunnison Sage-Grouse historically occupied habitats that were naturally highly fragmented by forested mountains and plateaus/mesas, intermountain basins without robust species of sagebrush, and river systems. This species adapted to use areas with more deciduous shrubs (i.e., Quercus spp., Amelanchier spp., Prunus spp.) in conjunction with sagebrush. Most areas historically occupied were small, linear, and patchily distributed within the overall landscape matrix. The exception was the large intermountain basin in Gunnison, Hinsdale, and Saguache counties. The documented distribution east of the Continental Divide within the large expanse of the San Luis Valley (Alamosa, Conejos, Costilla, and Rio Grande counties) was minimal and mostly on the eastern, northern, and southern fringes. Many formerly occupied habitat patches were vacant by the mid 1940s with extirpations continuing to the late 1990s. Counties from which populations were recently extirpated include Archuleta and Pitkin (1960s), and Eagle, Garfield, Montezuma, and Ouray (1990s).

  10. Using an object-based grid system to evaluate a newly developed EP approach to formulate SVMs as applied to the classification of organophosphate nerve agents

    NASA Astrophysics Data System (ADS)

    Land, Walker H., Jr.; Lewis, Michael; Sadik, Omowunmi; Wong, Lut; Wanekaya, Adam; Gonzalez, Richard J.; Balan, Arun

    2004-04-01

    This paper extends the classification approaches described in reference [1] in the following ways: (1) developing and evaluating a new method for evolving organophosphate nerve agent Support Vector Machine (SVM) classifiers using Evolutionary Programming, (2) conducting research experiments using a larger database of organophosphate nerve agents, and (3) upgrading the architecture to an object-based grid system for evaluating the classification of EP-derived SVMs. Due to the increased threats of chemical and biological weapons of mass destruction (WMD) by international terrorist organizations, a significant effort is underway to develop tools that can be used to detect and effectively combat biochemical warfare. This paper reports the integration of multi-array sensors with Support Vector Machines (SVMs) for the detection of organophosphate nerve agents using a grid computing system called Legion. Grid computing is the use of large collections of heterogeneous, distributed resources (including machines, databases, devices, and users) to support large-scale computations and wide-area data access. Finally, preliminary results using EP-derived support vector machines designed to operate on distributed systems have provided accurate classification results. In addition, distributed training-time architectures are 50 times faster when compared to standard iterative training methods.
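
    A minimal sketch of the EP-over-SVM idea: evolve (C, gamma) pairs by Gaussian mutation and select on cross-validated accuracy. The operators, search ranges, and synthetic data below are generic assumptions, not the authors' exact configuration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def evolve_svm(X, y, pop=10, gens=15, seed=0):
    """Evolutionary-programming search over SVM hyperparameters.

    Individuals are (log10 C, log10 gamma) pairs; each parent spawns one
    Gaussian-mutated offspring, and the best `pop` of parents+offspring
    survive, scored by 3-fold cross-validated accuracy.
    """
    rng = np.random.default_rng(seed)
    def fitness(ind):
        clf = SVC(C=10.0 ** ind[0], gamma=10.0 ** ind[1])
        return cross_val_score(clf, X, y, cv=3).mean()
    population = [rng.uniform(-2, 2, size=2) for _ in range(pop)]
    for _ in range(gens):
        offspring = [ind + rng.normal(0, 0.3, size=2) for ind in population]
        population = sorted(population + offspring, key=fitness,
                            reverse=True)[:pop]
    best = population[0]
    return 10.0 ** best[0], 10.0 ** best[1], fitness(best)

X, y = make_classification(n_samples=200, n_features=8, random_state=0)
print(evolve_svm(X, y))   # (C, gamma, CV accuracy)
```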

  11. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate joint movements such as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and we applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while dealing with the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
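
    For the simplest static case, a weighted-effort cost with a torque-sum constraint has a closed-form split; a minimal sketch (the paper's singular-perturbation formulation handles the full actuator dynamics, which this omits, and the weights are hypothetical):

```python
import numpy as np

def allocate_torque(tau_des, weights):
    """Distribute a desired joint torque across parallel actuators.

    Minimizes the weighted effort sum_i w_i * u_i**2 subject to
    sum_i u_i = tau_des. By Lagrange multipliers, the optimum is
    u_i = (tau_des / w_i) / sum_j (1 / w_j), so low-cost (small-w)
    actuators carry more of the load.
    """
    w = np.asarray(weights, float)
    return (tau_des / w) / np.sum(1.0 / w)

# Hypothetical pneumatic-electric pair: the pneumatic muscle is cheap to
# drive (w=1), the electric motor expensive (w=4) -> an 80/20 torque split.
u = allocate_torque(10.0, [1.0, 4.0])
print(u, u.sum())  # [8. 2.] 10.0
```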

  12. Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System.

    PubMed

    Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin

    2016-08-18

    Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment. Harmonics measurement is necessary in power quality evaluation. However, under the large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which is the result of latencies in sensing or the communication process and introduces deviations in data fusion. This paper depicts a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive model with exogenous inputs (NARX) network to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameter with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement in the harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems.
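
    A minimal NARX-style retrodiction sketch: regress each sample on lagged outputs and exogenous inputs, so a late-arriving (out-of-sequence) value can be re-estimated from its context. The synthetic signal, lag count, and network size are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(y, u, lags=4):
    """NARX-style design matrix: past outputs y and exogenous inputs u."""
    X, t = [], []
    for k in range(lags, len(y)):
        X.append(np.r_[y[k - lags:k], u[k - lags:k + 1]])
        t.append(y[k])
    return np.array(X), np.array(t)

# Hypothetical harmonic magnitude driven by an exogenous load signal.
rng = np.random.default_rng(0)
n = 2000
u = np.sin(np.linspace(0, 40, n)) + 0.1 * rng.normal(size=n)
y = 0.8 * np.roll(u, 1) + 0.2 * rng.normal(size=n)

X, t = make_lagged(y, u)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:1500], t[:1500])
# "Retrodict" late-arriving samples from their surrounding context:
pred = model.predict(X[1500:])
print(np.sqrt(np.mean((pred - t[1500:]) ** 2)))  # retrodiction RMSE
```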

  13. Evaluating online data of water quality changes in a pilot drinking water distribution system with multivariate data exploration methods.

    PubMed

    Mustonen, Satu M; Tissari, Soile; Huikko, Laura; Kolehmainen, Mikko; Lehtola, Markku J; Hirvonen, Arja

    2008-05-01

    The distribution of drinking water generates soft deposits and biofilms in the pipelines of distribution systems. Disturbances in water distribution can detach these deposits and biofilms and thus deteriorate the water quality. We studied the effects of simulated pressure shocks on the water quality with online analysers. The study was conducted with copper and composite plastic pipelines in a pilot distribution system. The online data gathered during the study were evaluated with the Self-Organising Map (SOM) and Sammon's mapping, which are useful methods for exploring large amounts of multivariate data. The objective was to test the usefulness of these methods in pinpointing abnormal water quality changes in the online data. The pressure shocks temporarily increased the number of particles, the turbidity, and the electrical conductivity. SOM and Sammon's mapping were able to separate these situations from the normal data and thus make them visible. These methods therefore make it possible to detect abrupt changes in water quality and to react rapidly to any disturbances in the system. They are useful in developing alert systems and predictive applications connected to online monitoring.
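
    A from-scratch sketch of SOM-based anomaly flagging via quantization error (written in plain NumPy rather than any specific SOM library); the grid size, training schedule, and synthetic "sensor" data are all assumptions:

```python
import numpy as np

def train_som(data, grid=(6, 6), iters=3000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organising Map trained with sequential updates."""
    rng = np.random.default_rng(seed)
    gx, gy = grid
    w = rng.normal(size=(gx * gy, data.shape[1]))
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        bmu = np.argmin(((w - x) ** 2).sum(axis=1))      # best-matching unit
        h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        w += lr * h[:, None] * (x - w)                   # pull the neighbourhood
    return w

def quantization_error(w, x):
    """Distance from a sample to its best-matching unit; large values flag
    water-quality states unseen during normal operation."""
    return np.sqrt(((w - x) ** 2).sum(axis=1)).min()

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(500, 4))  # stand-in for scaled sensor data
w = train_som(normal)
print(quantization_error(w, normal[0]))                # small: normal state
print(quantization_error(w, np.array([6, 6, 6, 6.])))  # large: anomaly
```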

  14. Service-oriented architecture for the ARGOS instrument control software

    NASA Astrophysics Data System (ADS)

    Borelli, J.; Barl, L.; Gässler, W.; Kulas, M.; Rabien, Sebastian

    2012-09-01

    The Advanced Rayleigh Guided ground layer Adaptive optic System, ARGOS, equips the Large Binocular Telescope (LBT) with a constellation of six Rayleigh laser guide stars. By correcting atmospheric turbulence near the ground, the system is designed to increase the image quality of the multi-object spectrograph LUCIFER by approximately a factor of 3 over a field of 4 arc minutes diameter. The control software has the critical task of orchestrating several devices, instruments, and high-level services, including the already existing adaptive optics system and the telescope control software. All these components are widely distributed over the telescope, adding more complexity to the system design. The approach used by the ARGOS engineers is to write loosely coupled and distributed services under the control of different ownership systems, providing a uniform mechanism to offer, discover, interact with, and use these distributed capabilities. The control system includes several finite state machines, vibration and flexure compensation loops, and safety mechanisms such as interlocks and aircraft and satellite avoidance systems.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Jacob; Edgar, Thomas W.; Daily, Jeffrey A.

    With an ever-evolving power grid, concerns regarding how to maintain system stability, efficiency, and reliability remain constant because of increasing uncertainties and decreasing rotating inertia. To alleviate some of these concerns, demand response represents a viable solution and is virtually an untapped resource in the current power grid. This work describes a hierarchical control framework that allows coordination between distributed energy resources and demand response. This control framework is composed of two control layers: a coordination layer that ensures aggregations of resources are coordinated to achieve system objectives and a device layer that controls individual resources to assure the predetermined power profile is tracked in real time. Large-scale simulations are executed to study the hierarchical control, requiring advancements in simulation capabilities. Technical advancements necessary to investigate and answer control interaction questions, including the Framework for Network Co-Simulation platform and Arion modeling capability, are detailed. Insights into the interdependencies of controls across a complex system and how they must be tuned, as well as validation of the effectiveness of the proposed control framework, are yielded using a large-scale integrated transmission system model coupled with multiple distribution systems.

  16. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
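
    A minimal sketch of how the communication interval enters a loosely coupled co-simulation: two toy subsystems exchange interface values every h_comm seconds and hold them constant (zero-order hold) in between. The toy dynamics are illustrative only; the aircraft power system models are far richer:

```python
def cosimulate(h_comm, t_end=5.0, h_int=1e-3):
    """Co-simulate two coupled subsystems
        x' = -x + y_held,    y' = -2*y + x_held
    that exchange interface variables only every h_comm seconds.
    A larger h_comm means cheaper communication but more coupling error.
    """
    x, y = 1.0, 0.0
    t = 0.0
    while t < t_end:
        x_held, y_held = x, y          # exchange interface variables
        t_next = min(t + h_comm, t_end)
        while t < t_next:              # each side integrates independently
            x += h_int * (-x + y_held)
            y += h_int * (-2.0 * y + x_held)
            t += h_int
    return x, y

for h in (0.001, 0.05, 0.5):           # tighten or loosen the interval
    print(h, cosimulate(h))            # results drift as h_comm grows
```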

  17. Storage in alluvial deposits controls the timing of particle delivery from large watersheds, filtering upland erosional signals and delaying benefits from watershed best management practices

    NASA Astrophysics Data System (ADS)

    Pizzuto, J. E.; Skalak, K.; Karwan, D. L.

    2017-12-01

    Transport of suspended sediment and sediment-borne constituents (here termed fluvial particles) through large river systems can be significantly influenced by episodic storage in floodplains and other alluvial deposits. Geomorphologists quantify the importance of storage using sediment budgets, but these data alone are insufficient to determine how storage influences the routing of fluvial particles through river corridors across large spatial scales. For steady state systems, models that combine sediment budget data with "waiting time distributions" (to define how long deposited particles remain stored until being remobilized) and velocities during transport events can provide useful predictions. Limited field data suggest that waiting time distributions are well represented by power laws, extending from <1 to >10⁴ years, while the probability of storage defined by sediment budgets varies from 0.1 km⁻¹ for small drainage basins to 0.001 km⁻¹ for the world's largest watersheds. Timescales of particle delivery from large watersheds are determined by storage rather than by transport processes, with most particles requiring 10² to 10⁴ years to reach the basin outlet. These predictions suggest that erosional "signals" induced by climate change, tectonics, or anthropogenic activity will be transformed by storage before delivery to the outlets of large watersheds. In particular, best management practices (BMPs) implemented in upland source areas, designed to reduce the loading of fluvial particles to estuarine receiving waters, will not achieve their intended benefits for centuries (or longer). For transient systems, waiting time distributions cannot be constant, but will vary as portions of transient sediment "pulses" enter and are later released from storage. The delivery of sediment pulses under transient conditions can be predicted by adopting the hypothesis that the probability of erosion of stored particles will decrease with increasing "age" (where age is defined as the elapsed time since deposition). Then, waiting time and age distributions for stored particles become predictions based on the architecture of alluvial storage and the tendency for erosional processes to preferentially remove younger deposits, improving assessment of watershed BMPs and other important applications.
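
    A minimal Monte Carlo sketch of storage-dominated particle routing under a power-law waiting-time distribution; the storage probability, Pareto exponent, minimum wait, and velocity below are illustrative assumptions, not field-calibrated values:

```python
import numpy as np

def transit_times(n, length_km, p_store_per_km, alpha=1.2, t_min=10.0,
                  v_kmyr=1000.0, seed=0):
    """Monte Carlo transit times of fluvial particles through a corridor.

    Each kilometre, a particle is deposited with probability p_store_per_km;
    each storage episode lasts a Pareto-distributed (power-law, exponent
    alpha) time of at least t_min years before remobilization. In-transport
    travel at v_kmyr is fast, so storage dominates total delivery time.
    """
    rng = np.random.default_rng(seed)
    stops = rng.binomial(int(length_km), p_store_per_km, size=n)
    waits = np.array([t_min * (rng.pareto(alpha, k) + 1.0).sum() for k in stops])
    return waits + length_km / v_kmyr   # storage waits + in-transport time

t = transit_times(20000, length_km=500, p_store_per_km=0.01)
# Storage, not transport, sets the delivery timescale:
print(np.median(t), np.percentile(t, 90))
```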

  18. The Ophidia Stack: Toward Large Scale, Big Data Analytics Experiments for Climate Change

    NASA Astrophysics Data System (ADS)

    Fiore, S.; Williams, D. N.; D'Anca, A.; Nassisi, P.; Aloisio, G.

    2015-12-01

    The Ophidia project is a research effort on big data analytics facing scientific data analysis challenges in multiple domains (e.g. climate change). It provides a "datacube-oriented" framework responsible for atomically processing and manipulating scientific datasets, by providing a common way to run distributive tasks on large sets of data fragments (chunks). Ophidia provides declarative, server-side, and parallel data analysis, jointly with an internal storage model able to efficiently deal with multidimensional data and a hierarchical data organization to manage large data volumes. The project relies on a strong background in high performance database management and On-Line Analytical Processing (OLAP) systems to manage large scientific datasets. The Ophidia analytics platform provides several data operators to manipulate datacubes (about 50), and array-based primitives (more than 100) to perform data analysis on large scientific data arrays. To address interoperability, Ophidia provides multiple server interfaces (e.g. OGC-WPS). From a client standpoint, a Python interface enables the exploitation of the framework in Python-based eco-systems/applications (e.g. IPython) and the straightforward adoption of a strong set of related libraries (e.g. SciPy, NumPy). The talk will highlight a key feature of the Ophidia framework stack: the "Analytics Workflow Management System" (AWfMS). The Ophidia AWfMS coordinates, orchestrates, optimises and monitors the execution of multiple scientific data analytics and visualization tasks, thus supporting "complex analytics experiments". Some real use cases related to the CMIP5 experiment will be discussed. In particular, with regard to the "Climate models intercomparison data analysis" case study proposed in the EU H2020 INDIGO-DataCloud project, workflows related to (i) anomalies, (ii) trend, and (iii) climate change signal analysis will be presented. Such workflows will be distributed across multiple sites, according to the distribution of the datasets, and will include intercomparison, ensemble, and outlier analysis. The two-level workflow solution envisioned in INDIGO (coarse grain for distributed task orchestration, and fine grain at the level of a single data analytics cluster instance) will be presented and discussed.

  19. Network-scale spatial and temporal variation in Chinook salmon (Oncorhynchus tshawytscha) redd distributions: patterns inferred from spatially continuous replicate surveys

    Treesearch

    Daniel J. Isaak; Russell F. Thurow

    2006-01-01

    Spatially continuous sampling designs, when temporally replicated, provide analytical flexibility and are unmatched in their ability to provide a dynamic system view. We have compiled such a data set by georeferencing the network-scale distribution of Chinook salmon (Oncorhynchus tshawytscha) redds across a large wilderness basin (7330 km²) in...

  20. An optimal beam alignment method for large-scale distributed space surveillance radar system

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Wang, Dongya; Xia, Shuangzhi

    2018-06-01

    Large-scale distributed space surveillance radar is a very important ground-based equipment class for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the transmitting/receiving (T/R) beams over a vast region of space using narrow beams poses a considerable technical challenge in the space surveillance area. Based on a common coordinate transformation model and a radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using direction angles, which visually describes and assesses beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location, and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
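
    One ingredient of such alignment is computing each site's look angles toward a common point in space; a minimal WGS-84 sketch in which the site and target coordinates are hypothetical, not AFSSS parameters:

```python
import numpy as np

A, E2 = 6378137.0, 6.69437999e-3  # WGS-84 semi-major axis, eccentricity^2

def geodetic_to_ecef(lat, lon, h):
    """Geodetic (deg, deg, m) to Earth-centered Earth-fixed coordinates."""
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1 - E2) + h) * np.sin(lat)])

def look_angles(site, target):
    """Azimuth/elevation (deg) from a site to a target, both given as
    (lat_deg, lon_deg, height_m): rotate the ECEF difference vector into
    the site's local East-North-Up frame."""
    lat, lon = np.radians(site[0]), np.radians(site[1])
    d = geodetic_to_ecef(*target) - geodetic_to_ecef(*site)
    e = np.array([-np.sin(lon), np.cos(lon), 0.0]) @ d
    n_ = np.array([-np.sin(lat) * np.cos(lon),
                   -np.sin(lat) * np.sin(lon), np.cos(lat)]) @ d
    u = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon), np.sin(lat)]) @ d
    return (np.degrees(np.arctan2(e, n_)) % 360.0,
            np.degrees(np.arcsin(u / np.linalg.norm(d))))

# Hypothetical transmit and receive sites steering to one debris point at
# 1000 km altitude; both beams must intersect there.
target = (35.0, 110.0, 1.0e6)
for site in [(30.0, 105.0, 0.0), (32.0, 118.0, 0.0)]:
    print(site, "-> az/el", look_angles(site, target))
```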

  1. Variability of invertebrate abundance in drinking water distribution systems in the Netherlands in relation to biostability and sediment volumes.

    PubMed

    van Lieverloo, J Hein M; Hoogenboezem, Wim; Veenendaal, Gerrit; van der Kooij, Dick

    2012-10-15

    A survey of invertebrates in drinking water from treatment works, internal taps and hydrants on mains was carried out by almost all water companies in the Netherlands from September 1993 to August 1995. Aquatic sow bugs (Asellidae, 1-12 mm) and oligochaete worms (Oligochaeta, 1-100 mm), both known to have caused rare though embarrassing consumer complaints, were found to form 98% of the mean biomass in water flushed from mains. Their numbers in the mains water ranged up to 1500 (mean 37) Asellidae m⁻³ and up to 9900 (mean 135) Oligochaeta m⁻³. Smaller crustaceans (0.5-2 mm), e.g. water fleas (Cladocera and Copepoda, up to 14,000 m⁻³), dominated the numbers in water from mains. Common invertebrates in treated water and in tap water were rotifers (Rotifera, <1 mm) and nematode worms (Nematoda, <2 mm). No Asellidae, large Oligochaeta (>5 mm) or other large invertebrates were found in 1560 samples of 200 L of treated water or tap water. Large variations in invertebrate abundance were found within and between distribution systems. Of the variability of mean biomass in mains per system, 55%, 60% and 63% could be statistically explained by differences in the Biofilm Formation Rate, the non-particulate organic matter and the permanganate index of the treated water of the treatment works, respectively. A similar correlation was found between mean invertebrate biomass and mean sediment volumes in the distribution systems (R² = 52%). Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Derivation of hydrous pyrolysis kinetic parameters from open-system pyrolysis

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Hsin; Huang, Wuu-Liang

    2010-05-01

    Kinetic information is essential to predict the temperature, timing, or depth of hydrocarbon generation within a hydrocarbon system. The most common experiments for deriving kinetic parameters are based on open-system pyrolysis. However, it has been shown that the conditions of open-system pyrolysis deviate from nature in their low, near-ambient pressure and high temperatures. Also, the extrapolation of open-system heating rates to geological conditions may be questionable. A recent study by Lewan and Ruble shows that hydrous-pyrolysis conditions simulate natural conditions better, and its applications are supported by two case studies with natural thermal-burial histories. Nevertheless, performing hydrous pyrolysis experiments is tedious and requires large amounts of sample, while open-system pyrolysis is convenient and efficient. Therefore, the present study aims at deriving convincing distributed hydrous-pyrolysis activation energies (Ea) from only routine open-system Rock-Eval data. Our results reveal a good correlation between the open-system Rock-Eval parameter Tmax and the Ea derived from hydrous pyrolysis. The single hydrous-pyrolysis Ea can be predicted from Tmax based on this correlation, while the frequency factor (A0) is estimated from the linear relationship between the single Ea and log A0. Because a distributed Ea is more realistic than a single Ea, we convert the predicted single hydrous-pyrolysis Ea into a distributed Ea by shifting the pattern of the Ea distribution from open-system pyrolysis until the weighted mean of the distribution equals the single hydrous-pyrolysis Ea. Moreover, the shape of the Ea distribution closely resembles the shape of the Tmax curve; thus, in the absence of an open-system Ea distribution, the shape of the Tmax curve may be used to obtain the distributed hydrous-pyrolysis Ea. The study offers a simple new approach for obtaining a distributed hydrous-pyrolysis Ea from only routine open-system Rock-Eval data, which will allow better estimates of hydrocarbon generation.
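
    A minimal sketch of the distributed-Ea concept: first-order parallel reactions with a Gaussian Ea distribution integrated along a constant-heating-rate path. All kinetic parameters below are illustrative assumptions, not the values derived in the study:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol.K)

def conversion(T_c, Ea_mean=220.0, Ea_sd=10.0, logA0=14.0, beta=25.0, n=41):
    """Fraction of kerogen converted on heating from 300 C to T_c (deg C).

    First-order parallel reactions with a Gaussian distribution of
    activation energies Ea (kJ/mol), a common frequency factor A0 (1/s),
    and a constant heating rate beta (K/min); the temperature integral is
    evaluated numerically for each Ea component (dt = dT / beta).
    """
    T = np.linspace(300.0, T_c, 2000) + 273.15          # heating path, K
    Ea = np.linspace(Ea_mean - 3 * Ea_sd, Ea_mean + 3 * Ea_sd, n)
    wt = np.exp(-0.5 * ((Ea - Ea_mean) / Ea_sd) ** 2)
    wt /= wt.sum()                                       # discretized Gaussian
    A0 = 10.0 ** logA0 * 60.0                            # 1/s -> 1/min
    x = 0.0
    for e, w in zip(Ea, wt):
        k = A0 * np.exp(-e / (R * T))                    # Arrhenius rate
        x += w * (1.0 - np.exp(-np.trapz(k, T) / beta))
    return x

for Tc in (400, 440, 480):   # pyrolysis temperatures, deg C
    print(Tc, round(conversion(Tc), 3))   # conversion rises through Tmax
```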

  3. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    Convex optimization problems can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems that exchange information among connected neighbors, which greatly improves system fault tolerance: a task can be completed even in the presence of partial agent failures. By problem decomposition, a large-scale problem is divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, avoiding the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems; our research focuses on framework and local-optimizer design in practical engineering applications. In the first application, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridors and exit areas. In the third project, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation. Moreover, a hybrid control scheme is presented for minimizing travel time on a highway network; compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces total travel time on the test highway network.
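
    The fault-tolerance and decomposition pattern described above can be made concrete with the simplest member of this algorithm family. The sketch below is a hedged illustration, not the thesis's Bellman-Ford Dual-Subgradient method: agents on a ring each hold a private quadratic cost and repeatedly mix their estimates with neighbors while taking a local (sub)gradient step, so every local estimate approaches the global minimizer (the mean of the private targets).

      import numpy as np

      # Distributed subgradient sketch: minimize sum_i 0.5*(x - a_i)^2 where
      # agent i knows only its own a_i and talks only to its ring neighbors.
      rng = np.random.default_rng(0)
      n = 6
      targets = rng.uniform(0, 10, n)        # private data a_i

      # Doubly stochastic mixing weights over self and the two ring neighbors.
      W = np.zeros((n, n))
      for i in range(n):
          W[i, i] = 0.5
          W[i, (i - 1) % n] = 0.25
          W[i, (i + 1) % n] = 0.25

      x = np.zeros(n)                        # each agent's local estimate
      for k in range(2000):
          # Consensus mixing plus a local gradient step with diminishing step size.
          x = W @ x - (1.0 / (k + 10)) * (x - targets)

      print(x)                # every entry is close to targets.mean()
      print(targets.mean())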

  4. Flow and Temperature Distribution Evaluation on Sodium Heated Large-sized Straight Double-wall-tube Steam Generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kisohara, Naoyuki; Moribe, Takeshi; Sakai, Takaaki

    2006-07-01

    The sodium heated steam generator (SG) being designed in the feasibility study on commercialized fast reactor cycle systems is a straight double-wall-tube type. The SG is large-sized to reduce its manufacturing cost through economies of scale. This paper addresses the multi-dimensional temperature and flow distributions at steady state to obtain the prospect of the SG. Large-sized heat exchanger components are prone to non-uniform flow and temperature distributions. These phenomena might lead to tube buckling or tube to tube-sheet junction failure in straight-tube-type SGs, owing to differences in tube thermal expansion. The flow adjustment devices installed in the SG are optimized to prevent these issues, and the temperature distribution properties are uncovered by analysis methods. The analysis model of the SG consists of two parts, a sodium inlet distribution plenum (the plenum) and a heat transfer tube bundle region (the bundle). The flow and temperature distributions in the plenum and the bundle are evaluated by the three-dimensional code 'FLUENT' and the two-dimensional thermal-hydraulic code 'MSG', respectively. The MSG code was developed specifically for sodium heated SGs at JAEA. These codes have revealed that the sodium flow is distributed uniformly by the flow adjustment devices, and that the lateral tube temperature distributions remain within the allowable temperature range for the structural integrity of the tubes and the tube to tube-sheet junctions. (authors)

  5. Installation Restoration Program. Phase II. Confirmation/Quantification. Stage 1 Report for Williams Air Force Base, Chandler, Arizona.

    DTIC Science & Technology

    1986-01-24

    at F7PTA, 3) old AVGAS distribution system at LFSA, and 4) southwest drainage system. Magnetic anomalies (buried drums) were identified at the...identified magnetic anomalies (buried metals) and determine whether any are pesticide drums or cans - Dispose of excavated material in an appropriate...intervals. The survey was hindered by the presence of three large iron warning signs at the site. These signs created a large magnetic anomaly in the

  6. A thermally driven differential mutation approach for the structural optimization of large atomic systems

    NASA Astrophysics Data System (ADS)

    Biswas, Katja

    2017-09-01

    A computational method is presented which is capable of obtaining low-lying energy structures of topological amorphous systems. The method merges a differential-mutation genetic algorithm with simulated annealing by incorporating a thermal selection criterion, which makes it possible to reliably obtain low-lying minima with just a small population size and is suitable for multimodal structural optimization. The method is tested on the structural optimization of amorphous graphene from unbiased atomic starting configurations. With a population of just six systems, energetically very low structures are obtained. While each of the structures represents a distinctly different arrangement of the atoms, their properties, such as energy, distribution of rings, radial distribution function, coordination number, and distribution of bond angles, are very similar.
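
    The key step, differential mutation followed by thermal (Metropolis-style) selection rather than greedy replacement, fits in a few lines. The sketch below uses a toy one-dimensional multimodal energy as a stand-in for the interatomic potential of amorphous graphene; the population size of six mirrors the abstract, but all other parameters are illustrative.

      import math
      import random

      def energy(x):
          return math.sin(3 * x) + 0.1 * x * x     # toy multimodal landscape

      def thermal_de_step(population, temperature, f_weight=0.7):
          """One generation: differential mutation + thermal selection."""
          new_pop = []
          for i, xi in enumerate(population):
              others = [x for j, x in enumerate(population) if j != i]
              a, b, c = random.sample(others, 3)
              trial = a + f_weight * (b - c)        # differential mutation
              d_e = energy(trial) - energy(xi)
              # Thermal selection: accept uphill moves with Boltzmann probability,
              # which lets a tiny population escape local minima.
              if d_e <= 0 or random.random() < math.exp(-d_e / temperature):
                  new_pop.append(trial)
              else:
                  new_pop.append(xi)
          return new_pop

      pop = [random.uniform(-4, 4) for _ in range(6)]   # population of six
      t = 1.0
      for gen in range(300):
          pop = thermal_de_step(pop, t)
          t *= 0.99                                     # simulated-annealing schedule
      print(sorted(energy(x) for x in pop))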

  7. Testing large volume water treatment and crude oil ...

    EPA Pesticide Factsheets

    Report EPA’s Homeland Security Research Program (HSRP) partnered with the Idaho National Laboratory (INL) to build the Water Security Test Bed (WSTB) at the INL test site outside of Idaho Falls, Idaho. The WSTB was built using an 8-inch (20 cm) diameter cement-mortar lined drinking water pipe that was previously taken out of service. The pipe was exhumed from the INL grounds and oriented in the shape of a small drinking water distribution system. Effluent from the pipe is captured in a lagoon. The WSTB can support drinking water distribution system research on a variety of drinking water treatment topics including biofilms, water quality, sensors, and homeland security related contaminants. Because the WSTB is constructed of real drinking water distribution system pipes, research can be conducted under conditions similar to those in a real drinking water system. In 2014, WSTB pipe was experimentally contaminated with Bacillus globigii spores, a non-pathogenic surrogate for the pathogenic B. anthracis, and then decontaminated using chlorine dioxide. In 2015, the WSTB was used to perform the following experiments: • Four mobile disinfection technologies were tested for their ability to disinfect large volumes of biologically contaminated “dirty” water from the WSTB. B. globigii spores acted as the biological contaminant. The four technologies evaluated included: (1) Hayward Saline C™ 6.0 Chlorination System, (2) Advanced Oxidation Process (A

  8. NASA's MERBoard: An Interactive Collaborative Workspace Platform. Chapter 4

    NASA Technical Reports Server (NTRS)

    Trimble, Jay; Wales, Roxana; Gossweiler, Rich

    2003-01-01

    This chapter describes the ongoing process by which a multidisciplinary group at NASA's Ames Research Center is designing and implementing a large interactive work surface called the MERBoard Collaborative Workspace. A MERBoard system involves several distributed, large, touch-enabled plasma display systems with custom MERBoard software. A centralized server and database back the system. We are continually tuning MERBoard to support over two hundred scientists and engineers during the surface operations of the Mars Exploration Rover Missions. These scientists and engineers come from various disciplines and are working both in small and large groups over a span of space and time. We describe the multidisciplinary, human-centered process by which this MERBoard system is being designed, the usage patterns and social interactions that we have observed, and issues we are currently facing.

  9. Hierarchical Data Distribution Scheme for Peer-to-Peer Networks

    NASA Astrophysics Data System (ADS)

    Bhushan, Shashi; Dave, M.; Patel, R. B.

    2010-11-01

    In the past few years, peer-to-peer (P2P) networks have become an extremely popular mechanism for large-scale content sharing. P2P systems have focused on specific application domains (e.g. music files, video files) or on providing file-system-like capabilities. P2P is a powerful paradigm that provides a large-scale and cost-effective mechanism for data sharing, and it may also be used for storing data globally. Can a conventional database be implemented on a P2P system? Successful implementations of conventional databases on P2P systems are yet to be reported. In this paper we present a mathematical model for the replication of partitions and a hierarchical data distribution scheme for P2P networks. We also analyze the resource utilization and throughput of the P2P system with respect to availability when a conventional database is implemented over the P2P system with a variable query rate. Simulation results show that database partitions placed on peers with a higher availability factor perform better. Degradation index, throughput, and resource utilization are evaluated with respect to the availability factor.
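
    As a hedged illustration of the placement result (partitions perform better on highly available peers), the sketch below ranks peers by an availability factor and spreads partition replicas round-robin over the most available ones. The ranking rule, peer counts, and replica counts are invented for illustration and are not the paper's model.

      import random

      peers = [{"id": i, "availability": random.uniform(0.3, 0.99)}
               for i in range(20)]
      partitions = [f"P{k}" for k in range(8)]
      replicas_per_partition = 3

      # Prefer the most-available peers, round-robin so load stays spread out.
      ranked = sorted(peers, key=lambda p: p["availability"], reverse=True)
      placement = {}
      for k, part in enumerate(partitions):
          placement[part] = [
              ranked[(k * replicas_per_partition + r) % len(ranked)]["id"]
              for r in range(replicas_per_partition)
          ]

      print(placement)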

  10. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1981-01-01

    Progress is reported in reading MAGSAT tapes and in a modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere. The modeling technique utilizes a linear current element representation of the large-scale space-current system.

  11. Evaluating the Effectiveness of a Personal Response System in the Classroom

    ERIC Educational Resources Information Center

    Shaffer, Dennis M.; Collura, Michael J.

    2009-01-01

    We evaluated the effectiveness of the use of an electronic personal response system (or "clickers") during an introductory psychology lecture on perceptual constancy. We graphed and projected student responses to questions during the lecture onto a large-screen display in Microsoft PowerPoint. The distributions of answers corresponded…

  12. Speech Perception as a Cognitive Process: The Interactive Activation Model.

    ERIC Educational Resources Information Center

    Elman, Jeffrey L.; McClelland, James L.

    Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive--but computationally primitive--elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…

  13. Funding California Schools: The Revenue Limit System

    ERIC Educational Resources Information Center

    Weston, Margaret

    2010-01-01

    Tax revenue flows to California's nearly 1,000 school districts through many different channels. According to the Governor's Committee on Education Excellence (2007), this system is so complex that the state cannot determine how revenues are distributed among school districts, and after reviewing a large number of academic studies in the Getting…

  14. Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

    NASA Astrophysics Data System (ADS)

    Venkataraman, Sitalakshmi; Kulkarni, Siddhivinayak

    Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

  15. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on the fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used to take test data related to particular failure modes and calculate failure-rate distributions of electronic and electromechanical components. How can these individual failure-time distributions of structural, electronic, and electromechanical component failure modes be effectively combined into a top-level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools, e.g., the Quantitative Risk Assessment Software (QRAS). Hypothetical PSA results for a number of structural components, along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure, were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to overall Mission Success (MS) are also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, and the inclusion of new sensor detection of faults were evaluated in determining overall turbine engine reliability.
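
    One way to picture the combination step is a Monte Carlo roll-up: sample each component's time to failure from its fitted distribution, apply the probability that the failure propagates past mitigation to Loss of Mission, and tally missions lost. The sketch below is not the QRAS model; all Weibull parameters and mitigation factors are invented.

      import numpy as np

      rng = np.random.default_rng(1)
      components = {
          # name: (Weibull shape, scale in hours, P(failure propagates to LOM))
          "turbine_blade": (3.0, 20000.0, 0.10),
          "fuel_pump":     (1.5, 30000.0, 0.30),
          "controller":    (1.2, 50000.0, 0.05),
      }
      n_missions, mission_hours = 100_000, 5000.0

      lom = np.zeros(n_missions, dtype=bool)
      contribution = {}
      for name, (shape, scale, p_prop) in components.items():
          ttf = scale * rng.weibull(shape, n_missions)   # sampled failure times
          fails = (ttf < mission_hours) & (rng.random(n_missions) < p_prop)
          contribution[name] = fails.mean()              # per-mode LOM contribution
          lom |= fails

      print("P(LOM) ~", lom.mean())
      print("per-mode contribution:", contribution)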

  16. Review of probabilistic analysis of dynamic response of systems with random parameters

    NASA Technical Reports Server (NTRS)

    Kozin, F.; Klosner, J. M.

    1989-01-01

    The various methods that have been studied in the past to allow probabilistic analysis of dynamic response for systems with random parameters are reviewed. Dynamic response could be obtained deterministically if the variations about the nominal values were small; however, for space structures which require precise pointing, the variations about the nominal values of the structural details and of the environmental conditions are too large to be considered negligible. These uncertainties are accounted for in terms of probability distributions about their nominal values. The quantities of concern for describing the response of the structure include displacements, velocities, and the distributions of natural frequencies. The exact statistical characterization of the response would yield joint probability distributions for the response variables. Since the random quantities appear as coefficients, determining the exact distributions will be difficult at best, so certain approximations must be made. A number of available techniques are discussed, including the nonlinear case. The methods described are: (1) Liouville's equation; (2) perturbation methods; (3) mean square approximate systems; and (4) nonlinear systems approximated by linear systems.

  17. Work probability distribution and tossing a biased coin

    NASA Astrophysics Data System (ADS)

    Saha, Arnab; Bhattacharjee, Jayanta K.; Chakraborty, Sagar

    2011-01-01

    We show that the rare events present in the dissipated work that enters the Jarzynski equality, when mapped appropriately onto the phenomenon of large deviations in a biased coin toss, are enough to yield a quantitative work probability distribution for the Jarzynski equality. This allows us to propose a recipe for constructing the work probability distribution independent of the details of any relevant system. The underlying framework, developed herein, is expected to be of use in modeling other physical phenomena where rare events play an important role.
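
    For reference, the equality in question relates the nonequilibrium work W to the equilibrium free-energy difference ΔF; written in terms of the dissipated work it makes explicit why the rare realizations with small or negative W_d dominate the exponential average:

      \langle e^{-\beta W} \rangle = e^{-\beta \Delta F},
      \qquad
      W_{\mathrm{d}} \equiv W - \Delta F
      \;\Longrightarrow\;
      \langle e^{-\beta W_{\mathrm{d}}} \rangle = 1 .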

  18. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  19. Complexity, Robustness, and Multistability in Network Systems with Switching Topologies: A Hierarchical Hybrid Control Approach

    DTIC Science & Technology

    2015-05-22

    sensor networks for managing power levels of wireless networks; air and ground transportation systems for air traffic control and payload transport and... network systems, large-scale systems, adaptive control, discontinuous systems ...cover a broad spectrum of applications including cooperative control of unmanned air vehicles, autonomous underwater vehicles, distributed sensor

  20. Cascades in the Threshold Model for varying system sizes

    NASA Astrophysics Data System (ADS)

    Karampourniotis, Panagiotis; Sreenivasan, Sameet; Szymanski, Boleslaw; Korniss, Gyorgy

    2015-03-01

    A classical model in opinion dynamics is the Threshold Model (TM), which aims to model the spread of a new opinion based on the social drive of peer pressure. Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. Cascades in the TM depend on multiple parameters, such as the number and selection strategy of the initially active nodes (initiators) and the threshold distribution of the nodes. For a uniform threshold in the network there is a critical fraction of initiators at which a transition from small to large cascades occurs, which for ER graphs is largely independent of the system size. Here, we study the spread contribution of each newly assigned initiator under the TM for different initiator selection strategies on synthetic graphs of various sizes. We observe that for ER graphs, when large cascades occur, the spread contribution of the initiator added at the transition point is independent of the system size, while the contribution of the rest of the initiators converges to zero at infinite system size. This property is used for the identification of large transitions for various threshold distributions. Supported in part by ARL NS-CTA, ARO, ONR, and DARPA.
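
    The adoption rule is simple enough to simulate directly. Below is a minimal sketch on an ER graph using networkx; the uniform threshold, mean degree, and initiator fraction are illustrative values, not the study's swept parameters.

      import random
      import networkx as nx

      def tm_cascade(n=5000, avg_degree=10, threshold=0.15, frac_initiators=0.12):
          """Run the Threshold Model to exhaustion; return final cascade size."""
          g = nx.erdos_renyi_graph(n, avg_degree / n)
          active = set(random.sample(list(g.nodes), int(frac_initiators * n)))
          changed = True
          while changed:
              changed = False
              for v in g.nodes:
                  if v in active:
                      continue
                  nbrs = list(g.neighbors(v))
                  # Adopt once the active fraction of neighbors exceeds the threshold.
                  if nbrs and sum(u in active for u in nbrs) / len(nbrs) > threshold:
                      active.add(v)
                      changed = True
          return len(active) / n

      print(tm_cascade())   # fraction of nodes active when the cascade stops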

  1. Parallel task processing of very large datasets

    NASA Astrophysics Data System (ADS)

    Romig, Phillip Richardson, III

    This research concerns the use of distributed computer technologies for the analysis and management of very large datasets. Improvements in sensor technology, an emphasis on global change research, and greater access to data warehouses all increase the number of non-traditional users of remotely sensed data. We present a framework for distributed solutions to the challenges of datasets which exceed the online storage capacity of individual workstations. This framework, called parallel task processing (PTP), incorporates both the task- and data-level parallelism exemplified by many image processing operations. An implementation based on the principles of PTP, called Tricky, is also presented. Additionally, we describe the challenges and practical issues in modeling the performance of parallel task processing with large datasets. We present a mechanism for estimating the running time of each unit of work within a system and an algorithm that uses these estimates to simulate the execution environment and produce estimated runtimes. Finally, we describe and discuss experimental results which validate the design. Specifically, the system (a) is able to perform computation on datasets which exceed the capacity of any one disk, (b) provides a reduction in overall computation time as a result of the task distribution, even with the additional cost of data transfer and management, and (c) in simulation mode accurately predicts the performance of the real execution environment.

  2. Surviving the Glut: The Management of Event Streams in Cyberphysical Systems

    NASA Astrophysics Data System (ADS)

    Buchmann, Alejandro

    Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de

  3. Generic solar photovoltaic system dynamic simulation model specification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Behnke, Michael Robert; Elliott, Ryan Thomas

    This document is intended to serve as a specification for generic solar photovoltaic (PV) system positive-sequence dynamic models to be implemented by software developers and approved by the WECC MVWG for use in bulk system dynamic simulations in accordance with NERC MOD standards. Two specific dynamic models are included in the scope of this document. The first, a Central Station PV System model, is intended to capture the most important dynamic characteristics of large scale (> 10 MW) PV systems with a central Point of Interconnection (POI) at the transmission level. The second, a Distributed PV System model, is intended to represent an aggregation of smaller, distribution-connected systems that comprise a portion of a composite load that might be modeled at a transmission load bus.

  4. A steering law for a roof-type configuration for a single-gimbal control moment gyro system

    NASA Technical Reports Server (NTRS)

    Yoshikawa, T.

    1974-01-01

    Single-Gimbal Control Moment Gyro (SGCMG) systems have been investigated for attitude control of the Large Space Telescope (LST) and the High Energy Astronomy Observatory (HEAO). However, various proposed steering laws for the SGCMG systems thus far have some defects because of singular states of the system. In this report, a steering law for a roof-type SGCMG system is proposed which is based on a new momentum distribution scheme that makes all the singular states unstable. This momentum distribution scheme is formulated by a treatment of the system as a sampled-data system. From analytical considerations, it is shown that this steering law gives control performance which is satisfactory for practical applications. Results of the preliminary computer simulation entirely support this premise.

  5. GLAD: a system for developing and deploying large-scale bioinformatics grid.

    PubMed

    Teo, Yong-Meng; Wang, Xianbing; Ng, Yew-Kwong

    2005-03-01

    Grid computing is used to solve large-scale bioinformatics problems with gigabyte-scale databases by distributing the computation across multiple platforms. Until now, in developing bioinformatics grid applications, it has been extremely tedious to design and implement the component algorithms and parallelization techniques for different classes of problems, and to access remotely located sequence database files of varying formats across the grid. In this study, we propose a grid programming toolkit, GLAD (Grid Life sciences Applications Developer), which facilitates the development and deployment of bioinformatics applications on a grid. GLAD has been developed using ALiCE (Adaptive scaLable Internet-based Computing Engine), a Java-based grid middleware which exploits task-based parallelism. Two benchmark bioinformatics applications, distributed sequence comparison and distributed progressive multiple sequence alignment, have been developed using GLAD.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Zhao, Changhong; Guggilam, Swaroop

    Power networks have to withstand a variety of disturbances that affect system frequency, and the problem is compounded by the increasing integration of intermittent renewable generation. Following a large-signal generation or load disturbance, system frequency is arrested by leveraging the primary frequency control provided by governor action in synchronous generators. In this work, we propose a framework for distributed energy resources (DERs) deployed in distribution networks to provide (supplemental) primary frequency response. In particular, we demonstrate how power-frequency droop slopes for individual DERs can be designed so that the distribution feeder presents a guaranteed frequency-regulation characteristic at the feeder head. Furthermore, the droop slopes are engineered such that the injections of individual DERs conform to a well-defined fairness objective that does not penalize them for their location on the distribution feeder. Time-domain simulations for an illustrative network composed of a combined transmission network and distribution network with frequency-responsive DERs are provided to validate the approach.
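
    A hedged sketch of the design idea: choose per-DER droop gains that sum to a target feeder-head regulation constant, here allocating shares in proportion to each DER's power headroom as one plausible fairness rule. The paper's exact objective and network-aware design are not reproduced; every number below is invented.

      # Target aggregate regulation characteristic at the feeder head (assumed).
      target_feeder_gain = 50.0                  # MW per Hz
      ders = {"pv_1": 0.8, "pv_2": 1.2, "battery_1": 2.0}   # headroom in MW

      total_headroom = sum(ders.values())
      droop_gains = {name: target_feeder_gain * h / total_headroom
                     for name, h in ders.items()}           # MW/Hz per DER

      def der_response(name, freq_dev_hz):
          """Supplemental primary-frequency response of one DER (linear droop)."""
          return -droop_gains[name] * freq_dev_hz

      dev = -0.05                                # a 50 mHz under-frequency event
      responses = {n: der_response(n, dev) for n in ders}
      # Individual injections are headroom-proportional; their sum meets the
      # feeder-head characteristic: 50 MW/Hz * 0.05 Hz = 2.5 MW.
      print(responses, "total =", round(sum(responses.values()), 3), "MW")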

  7. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.

  8. System analysis for the Huntsville Operation Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.

    1986-01-01

    A simulation model of the NASA Huntsville Operational Support Center (HOSC) was developed. This simulation model emulates the HYPERchannel Local Area Network (LAN) that ties together the various computers of HOSC. The HOSC system is a large installation of mainframe computers such as the Perkin Elmer 3200 series and the DEC VAX series. A series of six simulation exercises of the HOSC model, using data sets provided by NASA, is described. An analytical analysis of the ETHERNET LAN and the video terminal (VT) distribution system is presented, together with an interface analysis of the smart terminal network model, which allows the data flow requirements due to VTs on the ETHERNET LAN to be estimated.

  9. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  10. RICIS Symposium 1988

    NASA Technical Reports Server (NTRS)

    1988-01-01

    Integrated Environments for Large, Complex Systems is the theme for the RICIS symposium of 1988. Distinguished professionals from industry, government, and academia have been invited to participate and present their views and experiences regarding research, education, and future directions related to this topic. Within RICIS, more than half of the research being conducted is in the area of Computer Systems and Software Engineering. The focus of this research is on the software development life-cycle for large, complex, distributed systems. Within the education and training component of RICIS, the primary emphasis has been to provide education and training for software professionals.

  11. Thermal management of batteries

    NASA Astrophysics Data System (ADS)

    Gibbard, H. F.; Chen, C.-C.

    Control of the internal temperature during high rate discharge or charge can be a major design problem for large, high energy density battery systems. A systematic approach to the thermal management of such systems is described for different load profiles based on: thermodynamic calculations of internal heat generation; calorimetric measurements of heat flux; analytical and finite difference calculations of the internal temperature distribution; appropriate system designs for heat removal and temperature control. Examples are presented of thermal studies on large lead-acid batteries for electrical utility load levelling and nickel-zinc and lithium-iron sulphide batteries for electric vehicle propulsion.
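
    As a toy version of the finite-difference step mentioned above, the sketch below integrates 1-D transient conduction in a cell slab with uniform internal heat generation, a symmetry plane at the centerline, and a convective surface. Every material and load number is illustrative, not a measured battery property.

      import numpy as np

      L, n = 0.02, 41                   # half-thickness of slab (m), grid points
      dx = L / (n - 1)
      k, rho, cp = 1.0, 2000.0, 1000.0  # W/m-K, kg/m3, J/kg-K (assumed)
      q_gen = 50_000.0                  # W/m3 internal heat generation (assumed)
      h, t_amb = 25.0, 298.0            # convection coefficient (W/m2-K), ambient (K)
      alpha = k / (rho * cp)
      dt = 0.4 * dx * dx / alpha        # stable explicit time step

      T = np.full(n, t_amb)
      for step in range(20_000):
          lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
          T[1:-1] += dt * (alpha * lap + q_gen / (rho * cp))
          T[0] = T[1]                   # adiabatic symmetry at the centerline
          # Convective surface: -k dT/dx = h (T_s - T_amb), discretized.
          T[-1] = (k / dx * T[-2] + h * t_amb) / (k / dx + h)

      print("centerline-to-surface rise:", round(T[0] - T[-1], 2), "K")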

  12. Aging transition in systems of oscillators with global distributed-delay coupling.

    PubMed

    Rahman, B; Blyuss, K B; Kyrychko, Y N

    2017-09-01

    We consider a globally coupled network of active (oscillatory) and inactive (nonoscillatory) oscillators with distributed-delay coupling. Conditions for the aging transition, associated with suppression of oscillations, are derived for uniform and gamma delay distributions in terms of the coupling parameters and the proportion of inactive oscillators. The results suggest that for the uniform distribution, increasing the width of the distribution for the same mean delay allows the aging transition to happen for a smaller coupling strength and a smaller proportion of inactive elements. For the gamma distribution with sufficiently large mean time delay, it may be possible to achieve the aging transition for an arbitrary proportion of inactive oscillators, as long as the coupling strength lies in a certain range.
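
    A typical form of such a model, given here as a sketch under assumptions since the paper's exact equations and conventions are not reproduced, couples Stuart-Landau units through a delay kernel g(τ):

      \dot{z}_j(t) = \left( \alpha_j + i\omega - |z_j(t)|^2 \right) z_j(t)
        + \frac{K}{N} \sum_{k=1}^{N} \int_0^{\infty} g(\tau)\, z_k(t-\tau)\, d\tau
        - K\, z_j(t),

    with α_j > 0 for active and α_j < 0 for inactive elements, and g(τ) a uniform or gamma density whose mean and width are the distribution parameters referred to above.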

  13. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    NASA Astrophysics Data System (ADS)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit their industrial implementation to medium-scale, slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Popular proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: the substantial communication delays present in control systems and problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications with large communication delays across their networks. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The algorithm parallelizes its main linear-algebra computations using iterative methods. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy-efficient building control. Buildings represent 40% of energy consumption in the United States, so improving their energy efficiency is significant. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators, the desired operating constraints (thermal comfort of the occupants), and the heat load on the system. In this thesis, we describe the control systems of forced-air building systems in practice. We discuss the "Trim and Respond" algorithm, a distributed control algorithm used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades.
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts with previous work on MPC for ramp metering, where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and then apply the inexact interior point method to this nonlinear, non-convex ramp metering problem.
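
    For context on the baseline algorithms named above, here is a minimal consensus-ADMM sketch, deliberately not the thesis's primal-dual active-set or inexact interior-point methods: N agents with private least-squares costs iteratively agree on a shared decision variable. Problem data are invented.

      import numpy as np

      # Consensus ADMM: minimize sum_i 0.5*||A_i x - b_i||^2 with agreement on x.
      rng = np.random.default_rng(2)
      n_agents, dim, rho = 4, 3, 1.0
      A = [rng.standard_normal((5, dim)) for _ in range(n_agents)]
      b = [rng.standard_normal(5) for _ in range(n_agents)]

      x = [np.zeros(dim) for _ in range(n_agents)]   # local primal copies
      u = [np.zeros(dim) for _ in range(n_agents)]   # scaled dual variables
      z = np.zeros(dim)                              # shared consensus variable

      for k in range(200):
          for i in range(n_agents):
              # Local x-update: a small regularized least-squares, solvable in parallel.
              lhs = A[i].T @ A[i] + rho * np.eye(dim)
              rhs = A[i].T @ b[i] + rho * (z - u[i])
              x[i] = np.linalg.solve(lhs, rhs)
          z = np.mean([x[i] + u[i] for i in range(n_agents)], axis=0)  # averaging
          for i in range(n_agents):
              u[i] += x[i] - z                       # dual update on the consensus gap

      print(z)    # matches the centralized least-squares solution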

  14. Worldwide distribution and diversity of seabird ticks: implications for the ecology and epidemiology of tick-borne pathogens.

    PubMed

    Dietrich, Muriel; Gómez-Díaz, Elena; McCoy, Karen D

    2011-05-01

    The ubiquity of ticks and their importance in the transmission of pathogens involved in human and livestock diseases are reflected by the growing number of studies focusing on tick ecology and the epidemiology of tick-borne pathogens. Likewise, the involvement of wild birds in dispersing pathogens and their role as reservoir hosts are now well established. However, studies on tick-bird systems have mainly focused on land birds, and the role of seabirds in the ecology and epidemiology of tick-borne pathogens is rarely considered. Seabirds typically have large population sizes, wide geographic distributions, and high mobility, which make them significant potential players in the maintenance and dispersal of disease agents at large spatial scales. They are parasitized by at least 29 tick species found across all biogeographical regions of the world. We know that these seabird-tick systems can harbor a large diversity of pathogens, although detailed studies of this diversity remain scarce. In this article, we review current knowledge on the diversity and global distribution of ticks and tick-borne pathogens associated with seabirds. We discuss the relationship between seabirds, ticks, and their pathogens and examine the interesting characteristics of these relationships from ecological and epidemiological points of view. We also highlight some future research directions required to better understand the evolution of these systems and to assess the potential role of seabirds in the epidemiology of tick-borne pathogens.

  15. Conceptual study of superconducting urban area power systems

    NASA Astrophysics Data System (ADS)

    Noe, Mathias; Bach, Robert; Prusseit, Werner; Willén, Dag; Gold-acker, Wilfried; Poelchau, Juri; Linke, Christian

    2010-06-01

    Efficient transmission, distribution and usage of electricity are fundamental requirements for providing citizens, societies and economies with essential energy resources. It will be a major future challenge to integrate more sustainable generation resources, to meet growing electricity demand and to renew electricity networks. Research and development on superconducting equipment and components have an important role to play in addressing these challenges. Up to now, most studies on superconducting applications in power systems have concentrated on specific devices, for example cables and current limiters. In contrast, the main focus of our study is to show the consequences of a large-scale integration of superconducting power equipment in distribution-level urban power systems. Specific objectives are to summarize the state of the art of superconducting power equipment, including cooling systems, and to compare the superconducting power system with conventional solutions in terms of energy and economic efficiency. Several scenarios were considered, starting from the replacement of an existing distribution-level sub-grid up to a full superconducting urban area distribution-level power system. One major result is that a full superconducting urban area distribution-level power system could be cost-competitive with existing solutions in the future. In addition, superconducting power systems offer higher energy efficiency as well as a number of technical advantages, such as lower voltage drops and improved stability.

  16. Landscape heterogeneity shapes predation in a newly restored predator-prey system.

    PubMed

    Kauffman, Matthew J; Varley, Nathan; Smith, Douglas W; Stahler, Daniel R; MacNulty, Daniel R; Boyce, Mark S

    2007-08-01

    Because some native ungulates have lived without top predators for generations, it has been uncertain whether runaway predation would occur when predators are newly restored to these systems. We show that landscape features and vegetation, which influence predator detection and capture of prey, shape large-scale patterns of predation in a newly restored predator-prey system. We analysed the spatial distribution of wolf (Canis lupus) predation on elk (Cervus elaphus) on the Northern Range of Yellowstone National Park over 10 consecutive winters. The influence of wolf distribution on kill sites diminished over the course of this study, a result that was likely caused by territorial constraints on wolf distribution. In contrast, landscape factors strongly influenced kill sites, creating distinct hunting grounds and prey refugia. Elk in this newly restored predator-prey system should be able to mediate their risk of predation by movement and habitat selection across a heterogeneous risk landscape.

  17. Landscape heterogeneity shapes predation in a newly restored predator-prey system

    USGS Publications Warehouse

    Kauffman, M.J.; Varley, N.; Smith, D.W.; Stahler, D.R.; MacNulty, D.R.; Boyce, M.S.

    2007-01-01

    Because some native ungulates have lived without top predators for generations, it has been uncertain whether runaway predation would occur when predators are newly restored to these systems. We show that landscape features and vegetation, which influence predator detection and capture of prey, shape large-scale patterns of predation in a newly restored predator-prey system. We analysed the spatial distribution of wolf (Canis lupus) predation on elk (Cervus elaphus) on the Northern Range of Yellowstone National Park over 10 consecutive winters. The influence of wolf distribution on kill sites diminished over the course of this study, a result that was likely caused by territorial constraints on wolf distribution. In contrast, landscape factors strongly influenced kill sites, creating distinct hunting grounds and prey refugia. Elk in this newly restored predator-prey system should be able to mediate their risk of predation by movement and habitat selection across a heterogeneous risk landscape. © 2007 Blackwell Publishing Ltd/CNRS.

  18. Convection-Enhanced Delivery for the Treatment of Pediatric Neurologic Disorders

    PubMed Central

    Song, Debbie K.; Lonser, Russell R.

    2013-01-01

    Direct perfusion of specific regions of the central nervous system by convection-enhanced delivery is becoming more widely used for the delivery of compounds in the research and treatment of various neural disorders. In contrast to other currently available central nervous system delivery techniques, convection-enhanced delivery relies on bulk flow for distribution of solute. This allows for safe, targeted, reliable, and homogeneous delivery of small- and large-molecular-weight substances over clinically relevant volumes in a manner that bypasses the blood-central nervous system barrier. Recent studies have also shown that coinfused imaging surrogate tracers can be used to monitor and control the convective distribution of therapeutic agents in vivo. The unique features of convection-enhanced delivery, including the ability to monitor distribution in real-time, provide an opportunity to develop new research and treatment paradigms for pediatric patients with a variety of intrinsic central nervous system disorders. PMID:18952590

  19. Security of subcarrier wave quantum key distribution against the collective beam-splitting attack.

    PubMed

    Miroshnichenko, G P; Kozubov, A V; Gaidash, A A; Gleim, A V; Horoshko, D B

    2018-04-30

    We consider a subcarrier wave quantum key distribution (QKD) system, where quantum encoding is carried out at weak sidebands generated around a coherent optical beam as a result of electro-optical phase modulation. We study the security of two protocols, B92 and BB84, against one of the most powerful attacks for this class of systems, the collective beam-splitting attack. Our analysis includes the case of high modulation index, where the sidebands are essentially multimode. We demonstrate numerically and experimentally that a subcarrier wave QKD system with realistic parameters is capable of distributing cryptographic keys over large distances in the presence of collective attacks. We also show that a BB84 protocol modification with discrimination of only one state in each basis performs no worse than the original BB84 protocol in this class of QKD systems, thus significantly simplifying the development of cryptographic networks using the considered QKD technique.

  20. Accelerator science and technology in Europe: EuCARD 2012

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2012-05-01

    Accelerator science and technology is one of the key enablers of developments in particle physics and photon physics, as well as of applications in medicine and industry. The paper presents a digest of research results in the domain of accelerator science and technology in Europe, shown during the third annual meeting of EuCARD - the European Coordination of Accelerator Research and Development. The conference concerns the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement-control networks of large geometrical extent, multichannel systems for the acquisition of large amounts of metrological data, and precision photonic networks for the distribution of reference time, frequency, and phase.

  1. Introduction

    NASA Astrophysics Data System (ADS)

    Zhao, Ben; Garbacki, Paweł; Gkantsidis, Christos; Iamnitchi, Adriana; Voulgaris, Spyros

    After a decade of intensive investigation, peer-to-peer computing has established itself as an accepted research field in the general area of distributed systems. Peer-to-peer computing can be seen as the democratization of computing, overthrowing the traditional hierarchical designs favored in client-server systems, largely brought about by last-mile network improvements which have made individual PCs first-class citizens in the network community. Much of the early focus in peer-to-peer systems was on best-effort file sharing applications. In recent years, however, research has focused on peer-to-peer systems that provide operational properties and functionality similar to those shown by more traditional distributed systems. These properties include stronger consistency, reliability, and security guarantees suitable for supporting traditional applications such as databases.

  2. Megatux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-25

    The Megatux platform enables the emulation of large-scale (multi-million node) distributed systems. In particular, it allows for the emulation of large-scale networks interconnecting a very large number of emulated computer systems. It does this by leveraging virtualization and associated technologies to allow hundreds of virtual computers to be hosted on a single moderately sized server or workstation. Virtualization technology provided by modern processors allows multiple guest OSs to run at the same time, sharing the hardware resources. The Megatux platform can be deployed on a single PC, a small cluster of a few boxes, or a large cluster of computers. With a modest cluster, the Megatux platform can emulate complex organizational networks. By using virtualization, we emulate the hardware but run actual software, enabling large scale without sacrificing fidelity.

  3. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the Data Acquisition and Control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and Control system of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, failure tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately, these problems are not unique to particle and high-energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist, planning to link towns, nations, and the world in a single "Data Highway". Teleconferencing, Video on Demand, and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering area. The purpose of our group is to collect the outcome of these different research efforts and to establish a working environment where the different ideas and techniques can be tested, evaluated, and possibly extended to address the requirements of a DAQ and Control System suitable for LHC. Our work started in the second half of 1994 with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, such as Posix, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications; we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and where we will be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system suitable for simulation and large-scale verification.

  4. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework

    PubMed Central

    2012-01-01

    Background For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. Results We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. Conclusion The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources. PMID:23216909
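
    Hydra itself is implemented in Java on Hadoop; purely to illustrate the MapReduce decomposition (map each spectrum to scored peptide candidates, then reduce by keeping the best score per spectrum), here is a Hadoop-Streaming-style mapper sketch with placeholder lookup and scoring stubs. The real K-score algorithm, database index, and record format are not reproduced.

      #!/usr/bin/env python3
      """Mapper sketch: one tab-separated spectrum per stdin line ->
      (spectrum_id, peptide, score) lines for a best-score reducer."""
      import sys

      def candidate_peptides(precursor_mass, tolerance=0.5):
          # Placeholder: a real implementation queries an indexed sequence
          # database (plus modifications) by precursor-mass window.
          return [("PEPTIDEK", 800.4), ("SAMPLEK", 801.0)]

      def stub_score(peaks, peptide):
          # Placeholder scorer -- NOT the K-score algorithm.
          return sum(1.0 for mz, inten in peaks if inten > 0.1)

      for line in sys.stdin:
          spectrum_id, mass_str, peaks_str = line.rstrip("\n").split("\t")
          mass = float(mass_str)
          peaks = [tuple(map(float, p.split(":"))) for p in peaks_str.split(",")]
          for pep, pep_mass in candidate_peptides(mass):
              if abs(pep_mass - mass) < 0.5:
                  print(f"{spectrum_id}\t{pep}\t{stub_score(peaks, pep)}")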

  5. Hydra: a scalable proteomic search engine which utilizes the Hadoop distributed computing framework.

    PubMed

    Lewis, Steven; Csordas, Attila; Killcoyne, Sarah; Hermjakob, Henning; Hoopmann, Michael R; Moritz, Robert L; Deutsch, Eric W; Boyle, John

    2012-12-05

    For shotgun mass spectrometry based proteomics the most computationally expensive step is in matching the spectra against an increasingly large database of sequences and their post-translational modifications with known masses. Each mass spectrometer can generate data at an astonishingly high rate, and the scope of what is searched for is continually increasing. Therefore solutions for improving our ability to perform these searches are needed. We present a sequence database search engine that is specifically designed to run efficiently on the Hadoop MapReduce distributed computing framework. The search engine implements the K-score algorithm, generating comparable output for the same input files as the original implementation. The scalability of the system is shown, and the architecture required for the development of such distributed processing is discussed. The software is scalable in its ability to handle a large peptide database, numerous modifications and large numbers of spectra. Performance scales with the number of processors in the cluster, allowing throughput to expand with the available resources.

  6. Autonomous chemical and biological miniature wireless-sensor

    NASA Astrophysics Data System (ADS)

    Goldberg, Bar-Giora

    2005-05-01

    The presentation discusses a new concept and a paradigm shift in biological, chemical, and explosive sensor system design and deployment: from large, heavy, centralized, and expensive systems to distributed wireless sensor networks utilizing miniature platforms (nodes) that are lightweight, low cost, and wirelessly connected. These new systems are possible due to the emergence and convergence of new innovative radio, imaging, networking, and sensor technologies. Miniature integrated radio-sensor networks are a technology whose time has come. These network systems are based on large numbers of distributed low-cost, short-range wireless platforms that sense and process their environment and communicate data through a network to a command center. The recent emergence of chemical and explosive sensor technology based on silicon nanostructures, coupled with the fast evolution of low-cost CMOS imagers, low-power DSP engines, and integrated radio chips, has created an opportunity to realize the vision of autonomous wireless networks. These threat detection networks will perform sophisticated analysis at the sensor node and convey alarm information up the command chain. Sensor networks of this type are expected to revolutionize the ability to detect and locate biological, chemical, or explosive threats. The ability to distribute large numbers of low-cost sensors over large areas enables these devices to be close to the targeted threats and therefore improves detection efficiency and enables rapid counter-responses. These sensor networks will be used for homeland security, shipping-container monitoring, and other applications such as laboratory medical analysis, drug discovery, and automotive, environmental, and/or in-vivo monitoring. Avaak's system concept is to image a chromatic biological, chemical, and/or explosive sensor with a digital imager, analyze the images, and distribute alarm or image data wirelessly through the network. All the imaging, processing, and communications would take place within the miniature, low-cost distributed sensor platforms. This concept, however, presents a significant challenge due to the combination and convergence of required new technologies mentioned above. Passive biological and chemical sensors with very high sensitivity, which require no assaying, are in development using a technique to optically and chemically encode silicon wafers with tailored nanostructures. The silicon wafer is patterned with nanostructures designed to change colors and patterns when exposed to the target analytes (TICs, TIMs, VOCs). A small video camera detects the color and pattern changes on the sensor. To determine if an alarm condition is present, an on-board DSP processor, using specialized image-processing algorithms and statistical analysis, determines if color-gradient changes occurred on the sensor array. These sensors can detect several agents simultaneously. This system is currently under development by Avaak, with funding from DARPA through an SBIR grant.

  7. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
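
    The abstract does not spell the estimation problem out; one common way to pose it, shown here only as a hedged illustration, is to treat the model error as an unknown forcing term in the elliptic equation and estimate it from measurements by regularized least squares:

        \nabla \cdot \bigl( k(x)\, \nabla u(x) \bigr) = f(x) + m(x), \quad x \in \Omega,
        \qquad y_i = u(x_i) + v_i, \quad i = 1, \dots, N,

        \hat{m} = \arg\min_{m} \; \sum_{i=1}^{N} \bigl( y_i - u_m(x_i) \bigr)^2 + \alpha \, \| m \|^2 ,

    where $u_m$ solves the model equation for a trial error field $m$, $v_i$ is measurement noise, and $\alpha$ is a regularization weight; comparing $\hat{m}$ against actual residuals is the kind of comparison between actual and estimated errors that the paper's analysis techniques address.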

  8. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    In a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
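
    The core of any such scheme is locating a known synchronization frame in the received sequence despite an unknown phase rotation. A minimal sketch follows (illustrative, not the paper's algorithm): taking the magnitude of the complex correlation makes the peak location invariant to a common phase shift, and lengthening the frame buys noise margin at low SNR.

        import numpy as np

        def find_frame_offset(received, sync_frame):
            # Slide the known frame over the received samples; |.| makes the
            # statistic invariant to a common channel phase rotation.
            L = len(sync_frame)
            corr = [abs(np.vdot(sync_frame, received[k:k + L]))
                    for k in range(len(received) - L + 1)]
            return int(np.argmax(corr))

        # Toy check: frame buried at offset 100 with a 0.3 rad phase shift.
        rng = np.random.default_rng(0)
        frame = rng.standard_normal(64) + 1j * rng.standard_normal(64)
        rx = np.concatenate([0.1 * rng.standard_normal(100),
                             frame * np.exp(1j * 0.3),
                             0.1 * rng.standard_normal(50)])
        rx = rx + 0.05 * (rng.standard_normal(len(rx))
                          + 1j * rng.standard_normal(len(rx)))
        assert find_frame_offset(rx, frame) == 100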

  9. Large-Scale Ichthyoplankton and Water Mass Distribution along the South Brazil Shelf

    PubMed Central

    de Macedo-Soares, Luis Carlos Pinto; Garcia, Carlos Alberto Eiras; Freire, Andrea Santarosa; Muelbert, José Henrique

    2014-01-01

    Ichthyoplankton is an essential component of pelagic ecosystems, and environmental factors play an important role in determining its distribution. We have investigated simultaneous latitudinal and cross-shelf gradients in ichthyoplankton abundance to test the hypothesis that the large-scale distribution of fish larvae in the South Brazil Shelf is associated with water mass composition. Vertical plankton tows were collected between 21°27′ and 34°51′S at 107 stations, in austral late spring and early summer seasons. Samples were taken with a conical-cylindrical plankton net from the depth of chlorophyll maxima to the surface in deep stations, or from 10 m from the bottom to the surface in shallow waters. Salinity and temperature were obtained with a CTD/rosette system, which provided seawater for chlorophyll-a and nutrient concentrations. The influence of water mass on larval fish species was studied using Indicator Species Analysis, whereas environmental effects on the distribution of larval fish species were analyzed by Distance-based Redundancy Analysis. Larval fish species were associated with specific water masses: in the north, Sardinella brasiliensis was found in Shelf Water; whereas in the south, Engraulis anchoita inhabited the Plata Plume Water. At the slope, Tropical Water was characterized by the bristlemouth Cyclothone acclinidens. The concurrent analysis showed the importance of both cross-shelf and latitudinal gradients on the large-scale distribution of larval fish species. Our findings reveal that ichthyoplankton composition and large-scale spatial distribution are determined by water mass composition in both latitudinal and cross-shelf gradients. PMID:24614798

  11. Scaling laws of strategic behavior and size heterogeneity in agent dynamics

    NASA Astrophysics Data System (ADS)

    Vaglica, Gabriella; Lillo, Fabrizio; Moro, Esteban; Mantegna, Rosario N.

    2008-03-01

    We consider the financial market as a model system and study empirically how agents strategically adjust the properties of large orders in order to meet their preferences and minimize their impact. We quantify this strategic behavior by detecting scaling relations between the variables characterizing the trading activity of different institutions. We also observe power-law distributions in the investment time horizon, in the number of transactions needed to execute a large order, and in the traded value exchanged by large institutions, and we show that heterogeneity of agents is a key ingredient for the emergence of some aggregate properties characterizing this complex system.
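
    The abstract does not give the fitting procedure; for readers who want to reproduce this kind of tail analysis, a standard maximum-likelihood estimator for a continuous power-law exponent (the Clauset-Shalizi-Newman form) is sketched below, with the cutoff xmin assumed known:

        import numpy as np

        def powerlaw_mle(x, xmin):
            # alpha_hat = 1 + n / sum(ln(x_i / xmin)) over the tail x_i >= xmin,
            # with the usual large-n standard error (alpha_hat - 1) / sqrt(n).
            tail = np.asarray(x, dtype=float)
            tail = tail[tail >= xmin]
            alpha = 1.0 + len(tail) / np.log(tail / xmin).sum()
            return alpha, (alpha - 1.0) / np.sqrt(len(tail))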

  12. Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrosiano, John; Newkirk, Ryan; Mc Donald, Mark P

    2010-12-03

    The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and commodity flow models are instantiated, a risk consequence analysis can be performed by injecting contaminant at chosen points in the system and propagating the event through the overarching system to arrive at morbidity and mortality figures. A generic chocolate snack cake model, consisting of fluid milk, liquid eggs, and cocoa, is described as an intended proof of concept for multi-ingredient food systems. We aim for an eventual tool that can be used directly by policy makers and planners.
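
    The flow-reconstruction step can be illustrated with a toy version of that optimization. This is a hedged sketch with made-up aggregates: non-negative least squares picks the edge flows most consistent with observed manufacturer outflows and retailer inflows (the paper's actual formulation and data are richer).

        import numpy as np
        from scipy.optimize import lsq_linear

        # Hypothetical 2x2 commodity network: two manufacturers ship to two
        # retailers; rows of A encode the observed aggregate totals in b.
        A = np.array([[1.0, 1.0, 0.0, 0.0],   # manufacturer M1 total outflow
                      [0.0, 0.0, 1.0, 1.0],   # manufacturer M2 total outflow
                      [1.0, 0.0, 1.0, 0.0]])  # retailer R1 total inflow
        b = np.array([100.0, 60.0, 90.0])

        # Non-negative flows most consistent with the aggregate observations.
        res = lsq_linear(A, b, bounds=(0.0, np.inf))
        print(res.x)  # edge flows M1->R1, M1->R2, M2->R1, M2->R2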

  13. Use and Distribution of Rehabilitation Services: A Register Linkage Study in One Hospital District Area in Finland

    ERIC Educational Resources Information Center

    Pulkki, Jutta Maarit; Rissanen, Pekka; Raitanen, Jani A.; Viitanen, Elina A.

    2011-01-01

    This study focuses on a large set of rehabilitation services used between 2004 and 2005 in one hospital district area in Finland. The rehabilitation system consists of several subsystems, and this complex system has been suggested to produce arbitrary rehabilitation services. Despite decades of criticism of the system, no attempts have been…

  14. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task, since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream (and thus the classification performance achieved by the ensemble) and the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
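
    Contribution 1) can be illustrated with a toy utility that couples classification performance to queueing delay. The sketch below is an assumption-laden simplification (M/M/1 stages, a single delay weight), not the paper's metric, but it has the same structure: filtering thins the downstream stream, and an overloaded stage drives utility to minus infinity.

        def chain_utility(stages, arrival_rate, w_delay=0.1):
            """stages: list of (p_detect, service_rate, keep_fraction)."""
            utility, rate, detect = 0.0, arrival_rate, 1.0
            for p_d, mu, keep in stages:
                if rate >= mu:
                    return float("-inf")          # overloaded stage
                utility -= w_delay / (mu - rate)  # M/M/1 sojourn-time penalty
                detect *= p_d                     # end-to-end detection probability
                rate *= keep                      # filtering thins the stream
            return utility + detect

        print(chain_utility([(0.95, 10.0, 0.5), (0.9, 6.0, 0.4)], arrival_rate=4.0))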

  15. Large Scale System Defense

    DTIC Science & Technology

    2008-10-01

    [Only fragments of the abstract survive in this record; the rest is report documentation page residue.] The project developed anomaly detectors (ADs); Aeolos, a distributed intrusion detection and event correlation infrastructure; and STAND, a training-set sanitization technique applicable to ADs. Findings are summarized under (a) automatic patch generation, (b) better patch management, (c) artificial diversity, and (d) distributed anomaly detection.

  16. Long-Term Bacterial Dynamics in a Full-Scale Drinking Water Distribution System

    PubMed Central

    Prest, E. I.; Weissbrodt, D. G.; Hammes, F.; van Loosdrecht, M. C. M.; Vrouwenvelder, J. S.

    2016-01-01

    Large seasonal variations in microbial drinking water quality can occur in distribution networks, but are often not taken into account when evaluating results from short-term water sampling campaigns. Temporal dynamics in bacterial community characteristics were investigated during a two-year drinking water monitoring campaign in a full-scale distribution system operating without detectable disinfectant residual. A total of 368 water samples were collected on a biweekly basis at the water treatment plant (WTP) effluent and at one fixed location in the drinking water distribution network (NET). The samples were analysed for heterotrophic plate counts (HPC), Aeromonas plate counts, adenosine-tri-phosphate (ATP) concentrations, and flow cytometric (FCM) total and intact cell counts (TCC, ICC), water temperature, pH, conductivity, total organic carbon (TOC) and assimilable organic carbon (AOC). Multivariate analysis of the large dataset was performed to explore correlative trends between microbial and environmental parameters. The WTP effluent displayed considerable seasonal variations in TCC (from 90 × 10³ cells mL⁻¹ in winter time up to 455 × 10³ cells mL⁻¹ in summer time) and in bacterial ATP concentrations (<1–3.6 ng L⁻¹), which were congruent with water temperature variations. These fluctuations were not detected with HPC and Aeromonas counts. The water in the network was predominantly influenced by the characteristics of the WTP effluent. The increase in ICC between the WTP effluent and the network sampling location was small (34 × 10³ cells mL⁻¹ on average) compared to seasonal fluctuations in ICC in the WTP effluent. Interestingly, the extent of bacterial growth in the NET was inversely correlated to AOC concentrations in the WTP effluent (Pearson’s correlation factor r = -0.35), and positively correlated with water temperature (r = 0.49). Collecting a large dataset at high frequency over a two-year period enabled the characterization of previously undocumented seasonal dynamics in the distribution network. Moreover, high-resolution FCM data enabled prediction of bacterial cell concentrations at specific water temperatures and time of year. The study highlights the need to systematically assess temporal fluctuations in parallel to spatial dynamics for individual drinking water distribution systems. PMID:27792739

  18. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative distribution to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving aperture diameter is comparable to the atmospheric coherence radius, the proposed IGG model can reproduce the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived with Meijer's G-function.
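
    Because the IGG model is defined by that modulation process, its statistics are easy to explore by Monte Carlo even without the closed-form pdf. A hedged sketch with illustrative (not fitted) parameters: small-scale fluctuations are drawn from a unit-mean gamma distribution, large-scale ones from an inverse Gaussian (Wald) distribution, and the received irradiance is their product.

        import numpy as np

        rng = np.random.default_rng(1)

        def sample_igg(n, alpha=4.0, mu=1.0, lam=8.0):
            # I = X * Y: X ~ Gamma(alpha, 1/alpha) has unit mean (small scale),
            # Y ~ InverseGaussian(mu, lam) models the large-scale fluctuations.
            x = rng.gamma(alpha, 1.0 / alpha, size=n)
            y = rng.wald(mu, lam, size=n)
            return x * y

        I = sample_igg(100_000)
        print(I.mean(), I.var() / I.mean() ** 2)  # mean and scintillation index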

  19. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

    The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe these two paradigms and their implementations. Then they compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication and the all-pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications. Significant speedups were obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  20. Design for Run-Time Monitor on Cloud Computing

    NASA Astrophysics Data System (ADS)

    Kang, Mikyung; Kang, Dong-In; Yun, Mira; Park, Gyung-Leen; Lee, Junghoon

    Cloud computing is a new information technology trend that moves computing and data away from desktops and portable PCs into large data centers. The basic principle of cloud computing is to deliver applications as services over the Internet, along with the infrastructure they run on. A cloud is a type of parallel and distributed system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources. Large-scale distributed applications on a cloud require adaptive service-based software, which has the capability of monitoring system status changes, analyzing the monitored information, and adapting its service configuration while considering tradeoffs among multiple QoS features simultaneously. In this paper, we design a Run-Time Monitor (RTM), system software that monitors application behavior at run-time, analyzes the collected information, and optimizes resources on cloud computing. RTM monitors application software through library instrumentation, as well as the underlying hardware through performance counters, and optimizes the computing configuration based on the analyzed data.

  1. Low eigenvalues of the entanglement Hamiltonian, localization length, and rare regions in one-dimensional disordered interacting systems

    NASA Astrophysics Data System (ADS)

    Berkovits, Richard

    2018-03-01

    The properties of the low-lying eigenvalues of the entanglement Hamiltonian and their relation to the localization length of a disordered interacting one-dimensional many-particle system are studied. The average of the first entanglement Hamiltonian level spacing is proportional to the ground-state localization length and shows the same dependence on the disorder and interaction strength as the localization length. This is the result of the fact that entanglement is limited to distances of the order of the localization length. The distribution of the first entanglement level spacing shows a Gaussian-type behavior, as expected for level spacings much larger than the disorder broadening. For weakly disordered systems (localization length larger than the sample length), the distribution shows an additional peak at low level spacings. This stems from rare regions in some samples which exhibit metallic-like behavior with large entanglement and large particle-number fluctuations. These intermediate microemulsion metallic regions embedded in the insulating phase are discussed.

  2. Equation solvers for distributed-memory computers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    A large number of scientific and engineering problems require the rapid solution of large systems of simultaneous equations. The performance of parallel computers in this area now dwarfs traditional vector computers by nearly an order of magnitude. This talk describes the major issues involved in parallel equation solvers with particular emphasis on the Intel Paragon, IBM SP-1 and SP-2 processors.

  3. Optimal Sizing and Placement of Battery Energy Storage in Distribution System Based on Solar Size for Voltage Regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazaripouya, Hamidreza; Wang, Yubo; Chu, Peter

    2016-07-26

    This paper proposes a new strategy to achieve voltage regulation in distribution systems in the presence of solar energy sources and battery storage systems. The goal is to find the minimum size of battery storage and its corresponding location in the network based on the size and place of the integrated solar generation. The proposed method formulates the problem by employing the network impedance matrix to obtain an analytical solution instead of using a recursive algorithm such as power flow. The required modifications for modeling the slack and PV buses (generator buses) are utilized to increase the accuracy of the approach. Reactive power control alone is not always an optimal way to regulate voltage, since R/X ratios are large in distribution systems. In this paper, the minimum size and the best placement of battery storage are achieved by optimizing the amount of both active and reactive power exchanged by the battery storage and its grid-tie inverter (GTI), based on the network topology and R/X ratios in the distribution system. Simulation results for the IEEE 14-bus system verify the effectiveness of the proposed approach.
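
    The impedance-matrix idea can be made concrete with a toy calculation. In the sketch below (illustrative numbers, not the paper's IEEE 14-bus data), dV ≈ Zbus · dI maps battery/PV injections directly to bus-voltage deviations, which is what lets sizing and placement be screened without repeated power-flow runs:

        import numpy as np

        # Toy 3-bus feeder impedance matrix (per unit); note R and X are
        # comparable, which is why active power moves voltage so strongly.
        Zbus = np.array([[0.02 + 0.04j, 0.01 + 0.02j, 0.01 + 0.02j],
                         [0.01 + 0.02j, 0.03 + 0.05j, 0.02 + 0.03j],
                         [0.01 + 0.02j, 0.02 + 0.03j, 0.04 + 0.06j]])

        def voltage_deviation(p_inj, q_inj, v0=1.0):
            # Linearized around ~1 pu: injection currents from complex power.
            s = np.asarray(p_inj, float) + 1j * np.asarray(q_inj, float)
            di = np.conj(s / v0)
            return np.abs(v0 + Zbus @ di) - v0

        # 0.1 pu of solar at bus 3, then with a battery absorbing half of it.
        print(voltage_deviation([0.0, 0.0, 0.10], [0.0, 0.0, 0.0]))
        print(voltage_deviation([0.0, 0.0, 0.05], [0.0, 0.0, 0.0]))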

  4. Dynamics of Polydisperse Foam-like Emulsion

    NASA Astrophysics Data System (ADS)

    Hicock, Harry; Feitosa, Klebert

    2011-10-01

    Foam is a complex fluid whose relaxation properties are associated with the continuous diffusion of gas from small to large bubbles, driven by differences in Laplace pressure. We study the dynamics of bubble rearrangements by tracking droplets of a clear, buoyantly neutral emulsion that coarsens like a foam. The droplets are imaged in three dimensions using confocal microscopy. Analysis of the images allows us to measure their positions and radii, and track their evolution in time. We find that the droplet size distribution fits a Weibull distribution characteristic of foam systems. Additionally, we observe that droplets undergo continuous evolution interspersed with occasional large rearrangements, in line with the local relaxation behavior typical of foams.

  5. Detection of Temporally and Spatially Limited Periodic Earthquake Recurrence in Synthetic Seismic Records

    NASA Astrophysics Data System (ADS)

    Zielke, O.; Arrowsmith, R. J.

    2005-12-01

    The nonlinear dynamics of fault behavior are dominated by complex interactions among the multiple processes controlling the system. For example, temporal and spatial variations in pore pressure, healing effects, and stress transfer cause significant heterogeneities in fault properties and the stress-field at the sub-fault level. Numerical and laboratory fault models show that the interaction of large systems of fault elements causes the entire system to develop into a state of self-organized criticality. Once in this state, small perturbations of the system may result in chain reactions (i.e., earthquakes) which can affect any number of fault segments. This sensitivity to small perturbations is strong evidence for chaotic fault behavior, which implies that exact event prediction is not possible. However, earthquake prediction with a useful accuracy is nevertheless possible. Studies of other natural chaotic systems have shown that they may enter states of metastability, in which the system's behavior is predictable. Applying this concept to earthquake faults, these windows of metastable behavior should be characterized by periodic earthquake recurrence. The observed periodicity of the Parkfield, CA (M= 6) events may resemble such a window of metastability. I am statistically analyzing numerically generated seismic records to study these phases of periodic behavior. In this preliminary study, seismic records were generated using a model introduced by Nakanishi [Phys. Rev. A, 43, 6613-6621, 1991]. It consists of a one-dimensional chain of blocks (interconnected by springs) with a relaxation function that mimics velocity-weakened frictional behavior. The earthquakes occurring in this model show generally a power-law frequency-size distribution. However, for large events the distribution has a shoulder where the frequency of events is higher than expected from the power law. I have analyzed time-series of single block motions within the system. These time-series include noticeable periodicity during certain intervals in an otherwise aperiodic record. The observed periodic signal is not equally distributed over the range of offsets but shows a multi-modal distribution with increased periodicity for the smallest events and for large events that show a specific offset. These large events also form a shoulder in the frequency-size distribution. Apparently, the model exhibits characteristic earthquakes (defined by similar coseismic slip) that occur more frequently than expected from a power law distribution, and also are significantly more periodic. The wavelength of the periodic signal generally equals the minimum loading time, which is related to the loading velocity and the amount of coseismic slip (i.e., stress drop). No significant event occurs between the characteristic events as long as the system stays in a window of periodic behavior. Within the windows of periodic behavior, earthquake prediction is straightforward. Therefore, recognition of these windows not only in synthetic data but also in real seismic records, may improve the intra-window forecast of earthquakes. Further studies will attempt to determine the characteristics of onset, duration, and end of these windows of periodic earthquake recurrence. Only the motion of a single block within a bigger system was analyzed so far. 
Going from a zero-dimensional scenario to a two-dimensional case, where not only the offsets of a single block but the displacement patterns caused by a given event are analyzed, will improve the reliability of detecting periodic earthquake recurrence within an otherwise chaotic seismic record.
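
    The class of model used here is easy to caricature in code. The sketch below is a schematic one-dimensional slider-block automaton in the spirit of, but much simpler than, Nakanishi's model: uniform loading to the first failure, threshold slip, and dissipative nearest-neighbour stress transfer. Recording cascade sizes yields the kind of broad frequency-size statistics on which such periodicity analyses operate.

        import numpy as np

        rng = np.random.default_rng(2)

        def run_model(n_blocks=256, n_events=5000, transfer=0.45):
            stress = rng.uniform(0.0, 1.0, n_blocks)
            sizes = []
            for _ in range(n_events):
                stress += 1.0 - stress.max()        # load until first failure
                failing = stress >= 1.0
                size = 0
                while failing.any():                # cascade of slips
                    idx = np.flatnonzero(failing)
                    size += len(idx)
                    drop = stress[idx].copy()
                    stress[idx] = 0.0               # slipped blocks relax fully
                    for i, d in zip(idx, drop):     # dissipative: 2 * 0.45 < 1
                        if i > 0:
                            stress[i - 1] += transfer * d
                        if i < n_blocks - 1:
                            stress[i + 1] += transfer * d
                    failing = stress >= 1.0
                sizes.append(size)
            return np.array(sizes)

        sizes = run_model()
        print(np.median(sizes), sizes.max())        # broad, heavy-tailed sizes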

  6. A Distributed Processing Approach to Payroll Time Reporting for a Large School District.

    ERIC Educational Resources Information Center

    Freeman, Raoul J.

    1983-01-01

    Describes a system for payroll reporting from geographically disparate locations in which data is entered, edited, and verified locally on minicomputers and then uploaded to a central computer for the standard payroll process. Communications and hardware, time-reporting software, data input techniques, system implementation, and its advantages are…

  7. Investigation of the effects of external current systems on the MAGSAT data utilizing grid cell modeling techniques

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M. (Principal Investigator)

    1982-01-01

    The status of initial testing of the modeling procedure developed to compute the magnetic fields at satellite orbit due to current distributions in the ionosphere and magnetosphere is reported. The modeling technique utilizes a linear current element representation of the large-scale space-current system.
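
    A linear current element is a finite straight segment carrying constant current, and its field follows from the Biot-Savart law in closed form. A small sketch of that building block is given below (a generic textbook formula, not the report's code); summing it over the elements of a model current system gives the field at satellite altitude.

        import numpy as np

        MU0 = 4e-7 * np.pi  # vacuum permeability (T m / A)

        def b_segment(p, a, b, current):
            """Field at point p from a straight current segment a->b (SI units)."""
            p, a, b = (np.asarray(v, float) for v in (p, a, b))
            u = (b - a) / np.linalg.norm(b - a)
            r1, r2 = p - a, p - b
            perp = r1 - np.dot(r1, u) * u            # from wire axis to p
            d = np.linalg.norm(perp)
            if d < 1e-12:
                return np.zeros(3)                   # on the axis: undefined
            mag = MU0 * current / (4 * np.pi * d) * (
                np.dot(u, r1) / np.linalg.norm(r1)
                - np.dot(u, r2) / np.linalg.norm(r2))
            return mag * np.cross(u, perp) / d

        # Sanity check: a long segment approaches the infinite-wire field
        # mu0 I / (2 pi d) = 0.02 T for I = 1e5 A at d = 1 m.
        print(b_segment([0.0, 1.0, 0.0], [-1e6, 0, 0], [1e6, 0, 0], 1e5))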

  8. COMPARISON OF MYCOBACTERIUM AVIUM ISOLATES FROM A DRINKING WATER DISTRIBUTION SYSTEM AND FROM THE POPULATION SERVED BY THE SYSTEM

    EPA Science Inventory

    Background: Current evidence suggests that drinking water, soil, and produce are potential sources of Mycobacterium avium infections, a pathogen not known to be transmitted person-to-person.

    Methods: We sampled water during 2000-2002 from a large municipal drinking water ...

  9. A Study of Students' Reasoning about Probabilistic Causality: Implications for Understanding Complex Systems and for Instructional Design

    ERIC Educational Resources Information Center

    Grotzer, Tina A.; Solis, S. Lynneth; Tutwiler, M. Shane; Cuzzolino, Megan Powell

    2017-01-01

    Understanding complex systems requires reasoning about causal relationships that behave or appear to behave probabilistically. Features such as distributed agency, large spatial scales, and time delays obscure co-variation relationships and complex interactions can result in non-deterministic relationships between causes and effects that are best…

  10. Evaluation of repetitive element polymerase chain reaction for surveillance of methicillin-resistant Staphylococcus aureus at a large academic medical center and community hospitals.

    PubMed

    Wang, Shu-Hua; Stevenson, Kurt B; Hines, Lisa; Mediavilla, José R; Khan, Yosef; Soni, Ruchi; Dutch, Wendy; Brandt, Eric; Bannerman, Tammy; Kreiswirth, Barry N; Pancholi, Preeti

    2015-01-01

    Repetitive element polymerase chain reaction (rep-PCR) typing has been used for methicillin-resistant Staphylococcus aureus (MRSA) strain characterization. The goal of this study was to determine if a rapid commercial rep-PCR system, DiversiLab™ (DL; bioMérieux, Durham, NC, USA), could be used for MRSA surveillance at a large medical center and community hospitals. A total of 1286 MRSA isolates genotyped by the DL system were distributed into 84 distinct rep-PCR patterns: 737/1286 (57%) were clustered into 6 major rep-PCR patterns. A subset of 220 isolates was further typed by pulsed-field gel electrophoresis (PFGE), spa typing, and SCCmec typing. The 220 isolates were distributed into 80 rep-PCR patterns, 94 PFGE pulsotypes, 27 spa, and 3 SCCmec types. The DL rep-PCR system is sufficient for surveillance, but the DL system alone cannot be used to compare data to other institutions until a standardized nomenclature is established and the DL MRSA reference library is expanded.

  11. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    NASA Astrophysics Data System (ADS)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

  12. A system of 99mTc production based on distributed electron accelerators and thermal separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, R.G.; Christian, J.D.; Petti, D.A.

    1999-04-01

    A system has been developed for the production of 99mTc based on distributed electron accelerators and thermal separation. The radioactive decay parent of 99mTc, 99Mo, is produced from 100Mo by a photoneutron reaction. Two alternative thermal separation processes have been developed to extract 99mTc. Experiments have been performed to verify the technical feasibility of the production and assess the efficiency of the extraction processes. A system based on this technology enables the economical supply of 99mTc for a large nuclear pharmacy. Twenty such production centers distributed near major metropolitan areas could produce the entire US supply of 99mTc at a cost less than the current subsidized price.

  13. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    PubMed

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into the local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication load between each subsystem and its neighbors is relatively low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite-horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable.
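
    The flavor of each local optimization can be conveyed with a drastically simplified sketch: a linear toy subsystem stands in for the paper's nonlinear dynamics, the quasi-infinite-horizon terminal machinery and compatibility constraint are omitted, and coupling enters only as a penalty on deviation from the neighbour's last communicated plan. The cvxpy modeling library is assumed for the resulting QP.

        import cvxpy as cp
        import numpy as np

        A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator subsystem
        B = np.array([[0.0], [0.1]])

        def local_mpc_step(x0, neighbor_plan, N=10):
            x = cp.Variable((N + 1, 2))
            u = cp.Variable((N, 1))
            cost, constr = 0, [x[0] == x0]
            for t in range(N):
                cost += cp.sum_squares(x[t]) + 0.1 * cp.sum_squares(u[t])
                cost += 0.5 * cp.sum_squares(x[t] - neighbor_plan[t])  # coupling
                constr += [x[t + 1] == A @ x[t] + B @ u[t],
                           cp.abs(u[t]) <= 1.0]                        # input bound
            cp.Problem(cp.Minimize(cost), constr).solve()
            return u.value[0], x.value   # apply first input, broadcast the plan

        u0, plan = local_mpc_step(np.array([1.0, 0.0]), np.zeros((10, 2)))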

  14. Statistical Analyses of Satellite Cloud Object Data from CERES. Part II; Tropical Convective Cloud Objects During 1998 El Nino and Validation of the Fixed Anvil Temperature Hypothesis

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce a.; Parker, Lindsay; Lin, Bing; Eitzen, Zachary A.; Branson, Mark

    2006-01-01

    Characteristics of tropical deep convective cloud objects observed over the tropical Pacific during January-August 1998 are examined using the Tropical Rainfall Measuring Mission/Clouds and the Earth's Radiant Energy System single scanner footprint (SSF) data. These characteristics include the frequencies of occurrence and statistical distributions of cloud physical properties. Their variations with cloud-object size, sea surface temperature (SST), and satellite precessing cycle are analyzed in detail. A cloud object is defined as a contiguous patch of the Earth composed of satellite footprints within a single dominant cloud-system type. It is found that statistical distributions of cloud physical properties are significantly different among three size categories of cloud objects with equivalent diameters of 100 - 150 km (small), 150 - 300 km (medium), and > 300 km (large), respectively, except for the distributions of ice particle size. The distributions for the larger-size category of cloud objects are more skewed towards high SSTs, high cloud tops, low cloud-top temperature, large ice water path, high cloud optical depth, low outgoing longwave (LW) radiation, and high albedo than the smaller-size category. As SST varied from one satellite precessing cycle to another, the changes in macrophysical properties of cloud objects over the entire tropical Pacific were small for the large-size category of cloud objects, relative to those of the small- and medium-size categories. This result suggests that the fixed anvil temperature hypothesis of Hartmann and Larson may be valid for the large-size category. Combined with the result that a higher percentage of the large-size category of cloud objects occurs during higher-SST subperiods, this implies that macrophysical properties of cloud objects would be less sensitive to further warming of the climate. On the other hand, when cloud objects are classified according to SSTs, where large-scale dynamics plays important roles, statistical characteristics of cloud microphysical properties, optical depth and albedo are not sensitive to the SST, but those of cloud macrophysical properties are strongly dependent upon the SST. Frequency distributions of vertical velocity from the European Center for Medium-range Weather Forecasts model that is matched to each cloud object are used to interpret some of the findings in this study.

  15. Large-N kinetic theory for highly occupied systems

    NASA Astrophysics Data System (ADS)

    Walz, R.; Boguslavski, K.; Berges, J.

    2018-06-01

    We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.

  16. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  17. Unveiling adaptation using high-resolution lineage tracking

    NASA Astrophysics Data System (ADS)

    Blundell, Jamie; Levy, Sasha; Fisher, Daniel; Petrov, Dmitri; Sherlock, Gavin

    2013-03-01

    Human diseases such as cancer and microbial infections are adaptive processes inside the human body with enormous population sizes: between 10⁶ and 10¹² cells. In spite of this, our understanding of adaptation in large populations is limited. The key problem is the difficulty in identifying anything more than a handful of rare, large-effect beneficial mutations. The development and use of molecular barcodes allows us to uniquely tag hundreds of thousands of cells and enables us to track tens of thousands of adaptive mutations in large yeast populations. We use this system to test some of the key theories on which our understanding of adaptation in large populations is based. We (i) measure the fitness distribution in an evolving population at different times, (ii) identify when an appreciable fraction of clones in the population have at most a single adaptive mutation and isolate a large number of clones with independent single adaptive mutations, and (iii) use this clone collection to determine the distribution of fitness effects of single beneficial mutations.

  18. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick; et al.

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference is summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) Debakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  19. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
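
    For readers unfamiliar with the algorithm being parallelized, a compact single-threaded reference version is sketched below (plain Python; the paper's implementations partition the input text and/or the automaton across threads, cores and GPUs):

        from collections import deque

        def build_automaton(patterns):
            # Trie of patterns plus breadth-first failure links.
            goto, fail, out = [{}], [0], [set()]
            for pat in patterns:
                s = 0
                for ch in pat:
                    if ch not in goto[s]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[s][ch] = len(goto) - 1
                    s = goto[s][ch]
                out[s].add(pat)
            queue = deque(goto[0].values())      # depth-1 states keep fail = 0
            while queue:
                s = queue.popleft()
                for ch, t in goto[s].items():
                    f = fail[s]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[t] = goto[f].get(ch, 0)
                    out[t] |= out[fail[t]]       # inherit suffix matches
                    queue.append(t)
            return goto, fail, out

        def search(text, automaton):
            # One left-to-right scan of the text, whatever the dictionary size.
            goto, fail, out = automaton
            s, hits = 0, []
            for i, ch in enumerate(text):
                while s and ch not in goto[s]:
                    s = fail[s]
                s = goto[s].get(ch, 0)
                hits += [(i - len(p) + 1, p) for p in out[s]]
            return hits

        print(sorted(search("ushers", build_automaton(["he", "she", "his", "hers"]))))
        # -> [(1, 'she'), (2, 'he'), (2, 'hers')]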

  20. The Design of Large Geothermally Powered Air-Conditioning Systems Using an Optimal Control Approach

    NASA Astrophysics Data System (ADS)

    Horowitz, F. G.; O'Bryan, L.

    2010-12-01

    The direct use of geothermal energy from Hot Sedimentary Aquifer (HSA) systems for large scale air-conditioning projects involves many tradeoffs. Aspects contributing towards making design decisions for such systems include: the inadequately known permeability and thermal distributions underground; the combinatorial complexity of selecting pumping and chiller systems to match the underground conditions to the air-conditioning requirements; the future price variations of the electricity market; any uncertainties in future Carbon pricing; and the applicable discount rate for evaluating the financial worth of the project. Expanding upon the previous work of Horowitz and Hornby (2007), we take an optimal control approach to the design of such systems. By building a model of the HSA system, the drilling process, the pumping process, and the chilling operations, along with a specified objective function, we can write a Hamiltonian for the system. Using the standard techniques of optimal control, we use gradients of the Hamiltonian to find the optimal design for any given set of permeabilities, thermal distributions, and the other engineering and financial parameters. By using this approach, optimal system designs could potentially evolve in response to the actual conditions encountered during drilling. Because the granularity of some current models is so coarse, we will be able to compare our optimal control approach to an exhaustive search of parameter space. We will present examples from the conditions appropriate for the Perth Basin of Western Australia, where the WA Geothermal Centre of Excellence is involved with two large air-conditioning projects using geothermal water from deep aquifers at 75 to 95 degrees C.
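
    As a purely illustrative sketch of the formalism (the cost terms and dynamics below are generic placeholders, not the WA Centre's model), the design Hamiltonian takes the standard Pontryagin form:

        H(x, u, \lambda, t) = e^{-\rho t} \bigl[ c_{\mathrm{elec}}(t)\, P_{\mathrm{pump+chill}}(x, u) - v_{\mathrm{cool}}\, Q_{\mathrm{cool}}(x, u) \bigr] + \lambda^{\mathsf{T}} f(x, u),

    with reservoir state $x$ (aquifer temperature and pressure), controls $u$ (flow rates, well and chiller selection), discount rate $\rho$, and dynamics $\dot{x} = f(x, u)$; candidate optimal designs satisfy $\partial H / \partial u = 0$ together with the adjoint equation $\dot{\lambda} = -\partial H / \partial x$, and it is the gradients of $H$ that drive the design updates described above.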

  1. A general-purpose development environment for intelligent computer-aided training systems

    NASA Technical Reports Server (NTRS)

    Savely, Robert T.

    1990-01-01

    Space station training will be a major task, requiring the creation of large numbers of simulation-based training systems for crew, flight controllers, and ground-based support personnel. Given the long duration of space station missions and the large number of activities supported by the space station, the extension of space shuttle training methods to space station training may prove to be impractical. The application of artificial intelligence technology to simulation training can provide the ability to deliver individualized training to large numbers of personnel in a distributed workstation environment. The principal objective of this project is the creation of a software development environment which can be used to build intelligent training systems for procedural tasks associated with the operation of the space station. Current NASA Johnson Space Center projects and joint projects with other NASA operational centers will result in specific training systems for existing space shuttle crew, ground support personnel, and flight controller tasks. Concurrently with the creation of these systems, a general-purpose development environment for intelligent computer-aided training systems will be built. Such an environment would permit the rapid production, delivery, and evolution of training systems for space station crew, flight controllers, and other support personnel. The widespread use of such systems will serve to preserve task and training expertise, support the training of many personnel in a distributed manner, and ensure the uniformity and verifiability of training experiences. As a result, significant reductions in training costs can be realized while safety and the probability of mission success can be enhanced.

  2. Double stars with wide separations in the AGK3 - II. The wide binaries and the multiple systems*

    NASA Astrophysics Data System (ADS)

    Halbwachs, J.-L.; Mayor, M.; Udry, S.

    2017-02-01

    A large observation programme was carried out to measure the radial velocities of the components of a selection of common proper motion (CPM) stars to select the physical binaries. 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. By adding CPM stars with separations close enough to be almost certain that they are physical, a bias-controlled sample of 116 WBs was obtained, and used to derive the distribution of separations from 100 to 30 000 au. The distribution obtained does not match the log-constant distribution, but agrees with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be like those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems agrees with the no correlation hypothesis; this indicates that an environment conducive to the formation of WBs does not favour the formation of subsystems with periods shorter than 10 yr.

  3. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.
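
    A toy version of such a growth-plus-copy process is easy to simulate and already produces broad domain-sharing statistics alongside narrow protein sizes. The sketch below is illustrative only; its rates are arbitrary, not the paper's fitted probability rates.

        import random
        from collections import Counter

        random.seed(3)

        def grow_bipartite(steps, p_copy=0.5, p_new_domain=0.3):
            proteins, n_domains = [[0]], 1     # protein -> list of domain types
            for _ in range(steps):
                if random.random() < p_copy:
                    domains = list(random.choice(proteins))  # copy mechanism
                else:
                    domains = []
                if not domains or random.random() < p_new_domain:
                    domains.append(n_domains)                # growth: new type
                    n_domains += 1
                proteins.append(domains)
            return proteins

        proteins = grow_bipartite(10_000)
        sizes = [len(p) for p in proteins]                   # tends to stay small
        sharing = Counter(d for p in proteins for d in set(p))
        print(max(sizes), max(sharing.values()))  # narrow sizes, broad sharing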

  4. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  5. Method and infrastructure for cycle-reproducible simulation on large scale digital circuits on a coordinated set of field-programmable gate arrays (FPGAs)

    DOEpatents

    Asaad, Sameh W; Bellofatto, Ralph E; Brezzo, Bernard; Haymes, Charles L; Kapur, Mohit; Parker, Benjamin D; Roewer, Thomas; Tierno, Jose A

    2014-01-28

    A plurality of target field programmable gate arrays are interconnected in accordance with a connection topology and map portions of a target system. A control module is coupled to the plurality of target field programmable gate arrays. A balanced clock distribution network is configured to distribute a reference clock signal, and a balanced reset distribution network is coupled to the control module and configured to distribute a reset signal to the plurality of target field programmable gate arrays. The control module and the balanced reset distribution network are cooperatively configured to initiate and control a simulation of the target system with the plurality of target field programmable gate arrays. A plurality of local clock control state machines reside in the target field programmable gate arrays. The local clock state machines are configured to generate a set of synchronized free-running and stoppable clocks to maintain cycle-accurate and cycle-reproducible execution of the simulation of the target system. A method is also provided.

  6. Modeling stochastic noise in gene regulatory systems

    PubMed Central

    Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung

    2014-01-01

    The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
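
    As a concrete reference for the simulation side of the review, here is a minimal Gillespie stochastic simulation algorithm for the simplest gene-expression Master equation: constitutive synthesis plus first-order degradation (rates are illustrative; the stationary law is Poisson with mean k_on/k_off).

        import numpy as np

        rng = np.random.default_rng(4)

        def gillespie_birth_death(k_on=10.0, k_off=1.0, t_end=50.0, n0=0):
            # Exact trajectories of the Master equation, one reaction per step.
            t, n = 0.0, n0
            times, counts = [t], [n]
            while t < t_end:
                rates = (k_on, k_off * n)          # synthesis, degradation
                total = rates[0] + rates[1]
                t += rng.exponential(1.0 / total)  # time to next reaction
                n += 1 if rng.random() < rates[0] / total else -1
                times.append(t); counts.append(n)
            return np.array(times), np.array(counts)

        _, counts = gillespie_birth_death()
        print(counts.mean())  # approaches k_on / k_off = 10 after the transient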

  7. Iron and copper release in drinking-water distribution systems.

    PubMed

    Shi, Baoyou; Taylor, James S

    2007-09-01

A large-scale pilot study was carried out to evaluate the impacts of changes in water source and treatment process on iron and copper release in water distribution systems. Finished surface waters, groundwaters, and desalinated waters were produced with seven different treatment systems and supplied to 18 pipe distribution systems (PDSs). The major water treatment processes included lime softening, ferric sulfate coagulation, reverse osmosis, nanofiltration, and integrated membrane systems. PDSs were constructed from PVC, lined cast iron, unlined cast iron, and galvanized pipes. Copper pipe loops were set up for corrosion monitoring. Results showed that surface water treated by ferric sulfate coagulation had low alkalinity and high sulfates, and consequently caused the highest iron release. Finished groundwater treated by a conventional method produced the lowest iron release but the highest copper release. The iron release of desalinated water was relatively high because of the water's high chloride level and low alkalinity. Both iron and copper release behaviors were influenced by temperature.

  8. Research on Fault Characteristics and Line Protections Within a Large-scale Photovoltaic Power Plant

    NASA Astrophysics Data System (ADS)

    Zhang, Chi; Zeng, Jie; Zhao, Wei; Zhong, Guobin; Xu, Qi; Luo, Pandian; Gu, Chenjie; Liu, Bohan

    2017-05-01

Centralized photovoltaic (PV) systems have fault characteristics different from those of distributed PV systems because of their different structures and controls. This makes the fault analysis and protection methods used in distribution networks with distributed PV unsuitable for a centralized PV power plant. Therefore, a consolidated expression for the fault current within a PV power plant under different controls was derived, taking into account the fault response of the PV array. Then, supported by the fault current analysis and on-site testing data, the overcurrent relay (OCR) performance was evaluated in the collection system of an 850 MW PV power plant. The evaluation reveals that the OCRs at the downstream side of overhead lines may malfunction. To address this, a new relay scheme using directional distance elements was proposed. A detailed PV system model was built in PSCAD/EMTDC and verified against the on-site testing data. Simulation results indicate that the proposed relay scheme effectively solves the problems under various fault scenarios and PV plant output levels.

  9. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) has prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area face the task of uniting their resources for future productive work while also supporting large physics collaborations. In our project we address the fundamental problem of designing a computing architecture that integrates distributed storage resources for LHC experiments and other data-intensive science applications and provides access to data from heterogeneous computing facilities. Studies include the development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. The project implements a federated distributed storage for all kinds of operations, such as read/write/transfer, with access via WAN from Grid centres, university clusters, supercomputers, and academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests, including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns, and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reformation of computing style, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.

  10. Data-Aware Retrodiction for Asynchronous Harmonic Measurement in a Cyber-Physical Energy System

    PubMed Central

    Liu, Youda; Wang, Xue; Liu, Yanchi; Cui, Sujin

    2016-01-01

Cyber-physical energy systems provide a networked solution for safety, reliability and efficiency problems in smart grids. On the demand side, a secure and trustworthy energy supply requires real-time supervision and online power quality assessment, and harmonics measurement is necessary in power quality evaluation. However, under a large-scale distributed metering architecture, harmonic measurement faces the out-of-sequence measurement (OOSM) problem, which results from latencies in the sensing or communication process and introduces deviations in data fusion. This paper describes a distributed measurement network for large-scale asynchronous harmonic analysis and exploits a nonlinear autoregressive network with exogenous inputs (NARX) to reorder the out-of-sequence measuring data. The NARX network learns the characteristics of the electrical harmonics from practical data rather than from kinematic equations. Thus, the data-aware network approximates the behavior of the practical electrical parameters with real-time data and improves the retrodiction accuracy. Theoretical analysis demonstrates that the data-aware method maintains a reasonable consumption of computing resources. Experiments on a practical testbed of a cyber-physical system are implemented, and harmonic measurement and analysis accuracy are adopted to evaluate the measuring mechanism under a distributed metering network. Results demonstrate an improvement in harmonics analysis precision and validate the asynchronous measuring method in cyber-physical energy systems. PMID:27548171
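
    The paper's retrodiction uses a trained NARX neural network; as a rough, hypothetical illustration of the underlying idea (predicting a harmonic signal from its own lagged values plus an exogenous input), a linear ARX fit on synthetic data might look as follows. The signal frequencies and lag count are invented for the example.

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.arange(2000) * 1e-3
      u = np.sin(2 * np.pi * 50 * t)               # exogenous input (fundamental)
      y = u + 0.3 * np.sin(2 * np.pi * 150 * t) + 0.01 * rng.standard_normal(t.size)

      p = 4                                        # number of output lags (invented)
      # regression matrix [y(k-1) ... y(k-p), u(k)] -> y(k)
      X = np.column_stack([y[p - i - 1:-i - 1] for i in range(p)] + [u[p:]])
      coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)

      # "retrodict" samples from their neighbours, as an OOSM fix-up would
      y_hat = X @ coef
      print("max residual:", np.abs(y_hat - y[p:]).max())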

  11. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.
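
    A toy sketch of the ranking idea in component (1): rather than thresholding an absolute match score, the gallery of candidate detections is ordered by feature distance and presented to the operator. The feature vectors and dimensions here are synthetic placeholders, not the features learned by the actual system.

      import numpy as np

      def rank_gallery(probe, gallery):
          """Order gallery descriptors by distance to the probe; the operator is
          shown the top-ranked candidates instead of a thresholded yes/no match."""
          d = np.linalg.norm(gallery - probe, axis=1)
          order = np.argsort(d)                    # best match first
          return order, d[order]

      rng = np.random.default_rng(0)
      gallery = rng.standard_normal((100, 16))     # 100 synthetic descriptors
      probe = gallery[42] + 0.05 * rng.standard_normal(16)
      order, dist = rank_gallery(probe, gallery)
      print(order[:5])                             # index 42 should rank first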

  12. From PACS to Web-based ePR system with image distribution for enterprise-level filmless healthcare delivery.

    PubMed

    Huang, H K

    2011-07-01

The concept of PACS (picture archiving and communication system) was initiated in 1982 during the SPIE medical imaging conference in Newport Beach, CA. Since then PACS has matured to become an everyday clinical tool for image archiving, communication, display, and review. This paper follows the continuous development of PACS technology, including Web-based PACS, PACS and the ePR (electronic patient record), and enterprise PACS to ePR with image distribution (ID). The concept of a large-scale Web-based enterprise PACS and ePR with image distribution is presented along with its implementation, clinical deployment, and operation. The Hong Kong Hospital Authority's (HKHA) integration of its home-grown clinical management system (CMS) with PACS and ePR with image distribution is used as a case study. The current concept and design criteria of the HKHA enterprise integration of the CMS, PACS, and ePR-ID for filmless healthcare delivery are discussed, followed by its work in progress and current status.

  13. Design and Operation of Distribution Markets

    NASA Astrophysics Data System (ADS)

    Parhizi, Sina

The growing penetration of distributed prosumers, especially microgrids, poses new challenges to the operation of wholesale markets and distribution power systems; price spikes and higher uncertainty are among the consequences. Distribution markets are envisioned as a remedy to streamline the integration of distributed resources and microgrids into the electricity market. This dissertation offers an analytical formulation of electricity markets at the distribution level, considering various prevailing aspects of the market operation problem. The prevailing challenges with regard to the integration of microgrids in electricity markets are illustrated first, and the distribution market operator (DMO) construct is outlined. The day-ahead scheduling of a microgrid participating in a DMO market is formulated and studied. Then the operation of distribution markets integrated with large numbers of responsive participants is considered, and their transactions with the distribution market participants on the one hand, and the wholesale market on the other, are modeled and studied. The market settlement and clearing, essential to the operation of distribution markets, is considered and solved. A pricing mechanism for a distribution market is proposed, and the relation between distribution and transmission prices is studied. A more advanced pricing mechanism considering voltages and reactive power is developed and studied. In order to offer a more accurate pricing structure within the distribution system, a linearized distribution power flow is utilized. The performance of the proposed methods is analyzed and the results are presented. Markets have recently been envisioned as a suitable instrument for the integration of distributed energy resources in the distribution system, but most of the discussion surrounding this topic is at the conceptual level. In this work, it is demonstrated that distribution markets are effective in integrating microgrids and distributed resources into the electricity markets, and an analytical model is presented for the design and operation of such markets.

  14. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We discuss our experience with SSDs in the PROOF environment, compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios, and in particular discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.

  15. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

Well-organized component-oriented programming can bring a large increase in efficiency in the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. The paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components; in our model, an aggregation application is a word in a language.

  16. Comparison of a hybrid medication distribution system to simulated decentralized distribution models.

    PubMed

    Gray, John P; Ludwig, Brad; Temple, Jack; Melby, Michael; Rough, Steve

    2013-08-01

    The results of a study to estimate the human resource and cost implications of changing the medication distribution model at a large medical center are presented. A two-part study was conducted to evaluate alternatives to the hospital's existing hybrid distribution model (64% of doses dispensed via cart fill and 36% via automated dispensing cabinets [ADCs]). An assessment of nurse, pharmacist, and pharmacy technician workloads within the hybrid system was performed through direct observation, with time standards calculated for each dispensing task; similar time studies were conducted at a comparator hospital with a decentralized medication distribution system involving greater use of ADCs. The time study data were then used in simulation modeling of alternative distribution scenarios: one involving no use of cart fill, one involving no use of ADCs, and one heavily dependent on ADC dispensing (89% via ADC and 11% via cart fill). Simulation of the base-case and alternative scenarios indicated that as the modeled percentage of doses dispensed from ADCs rose, the calculated pharmacy technician labor requirements decreased, with a proportionately greater increase in the nursing staff workload. Given that nurses are a higher-cost resource than pharmacy technicians, the projected human resource opportunity cost of transitioning from the hybrid system to a decentralized system similar to the comparator facility's was estimated at $229,691 per annum. Based on the simulation results, it was decided that a transition from the existing hybrid medication distribution system to a more ADC-dependent model would result in an unfavorable shift in staff skill mix and corresponding human resource costs at the medical center.

  17. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D. G.

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  18. A New Concept of Controller for Accelerators' Magnet Power Supplies

    NASA Astrophysics Data System (ADS)

    Visintini, Roberto; Cleva, Stefano; Cautero, Marco; Ciesla, Tomasz

    2016-04-01

The complexity of a particle accelerator implies the remote control of a very large number of devices of many different types, either distributed along the accelerator or concentrated in locations often far away from each other. Local and global control systems handle the devices through dedicated communication channels and interfaces. Each controlled device is practically a “smart node” performing a specific task, and very often those tasks are managed in real-time mode. The performance required of the control interface influences the cost of the distributed nodes as well as their hardware and software implementation. In large facilities (e.g. CERN) the “smart nodes” derive from specific in-house developments. Alternatively, it is possible to find commercial devices on the market whose performance (and prices) are spread over a broad range, spanning from proprietary designs (customizable to the user's needs) to open source/design. In this paper, we describe some applications of smart nodes in the particle accelerator field, with special focus on the power supplies for magnets. In modern accelerators, in fact, magnets and their associated power supplies constitute systems distributed along the accelerator itself, strongly interfaced with the remote control system as well as with more specific (and often more demanding) orbit/trajectory feedback systems. We give examples of actual systems, installed and operational on two light sources, Elettra and FERMI, located in the Elettra Research Center in Trieste, Italy.

  19. Real time testing of intelligent relays for synchronous distributed generation islanding detection

    NASA Astrophysics Data System (ADS)

    Zhuang, Davy

As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. Distributed generators need to be disconnected for safety reasons when isolated from the main feeder, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology is developed to assess the performance of these intelligent relays in a real-time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms the frequency, voltage, and rate-of-change-of-frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.

  20. A new study on the emission of EM waves from large EAS

    NASA Technical Reports Server (NTRS)

    Pathak, K. M.; Mazumdar, G. K. D.

    1985-01-01

    A method used in locating the core of individual cosmic ray showers is described. Using a microprocessor-based detecting system, the density distribution and hence, energy of each detected shower was estimated.

  1. Unscheduled load flow effect due to large variation in the distributed generation in a subtransmission network

    NASA Astrophysics Data System (ADS)

    Islam, Mujahidul

A sustainable energy delivery infrastructure implies the safe and reliable accommodation of large-scale penetration of renewable sources in the power grid. In this dissertation it is assumed there will be no significant change in the power transmission and distribution structure currently in place, except in operating strategy and regulatory policy. That is to say, with the same old structure, the path towards unveiling a high penetration of switching power converters in the power system will be challenging. Some of the dimensions of this challenge are power quality degradation, frequent false trips due to power system imbalance, and losses due to a large neutral current. The ultimate result is the reduced life of many power distribution components - transformers, switches and sophisticated loads. Numerous ancillary services are being developed and offered by the utility operators to mitigate these problems. These services will likely raise the system's operational cost, not only at the utility operators' end but also as reflected on the Independent System Operators and the Regional Transmission Operators (RTO), due to an unforeseen backlash of frequent variation in the load-side generation or distributed generation. The North American transmission grid is an interconnected system similar to a large electrical circuit. This circuit was not planned as a whole but was built up over 100 years. The natural laws of physics govern the power flow among loads and generators except where control mechanisms are installed. The control mechanisms have not matured enough to withstand a high penetration of variable generators at uncontrolled distribution ends. Unlike a radial distribution system, mesh or loop networks can provide complex channels for real and reactive power flow. Significant variation in real power injection and absorption on the distribution side can emerge as a bias signal on the routing of reactive power in some physical links or channels that are not distinguishable within the vast network. A path tracing methodology is developed to identify the power lines that are vulnerable to an unscheduled flow effect in the sub-transmission network. It is much harder to aggregate power system network sensitivity information from physical load flow measurements than to simulate it in software. System dynamics is one of the key factors in determining an appropriate dynamic control mechanism at an optimum network location. Once a model of a deterministic but variable power generator is used, the simulation can be meaningful in justifying this claim. The method used to model the variable generator is named the two-component phase distortion model. The model was validated with high-resolution data collected from three pilot photovoltaic sites in Florida - two in the city of St. Petersburg and one in the city of Tampa. The high-resolution data were correlated with the weather radar closest to the sites during the design stage of the model. Technically the deterministic model cannot replicate a stochastic model, which is more realistically applicable to solar insolation and involves a Markov chain. The author justifies the proposition on the grounds that, for analysis of the response functions of different systems, the excitation function should be common for comparison. Moreover, there could be many possible simulation scenarios but fewer worst cases. Almost all commercial systems are protected against potential faults and contingencies to a certain extent.
Hence, the proposed model for worst case studies was designed within a reasonable limit. The simulation includes steady state and transient modes using multiple software modules, including Matlab, PSCAD and Paladin Design Base. It is shown that by identifying vulnerable or sensitive branches in the network, the control mechanisms can be coordinated reliably. In the long run this can save money by preventing unscheduled power flow in the network and eventually stabilizing the energy market.

  2. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve using single-processor-based dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
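
    As a hypothetical illustration of the distributed-memory variant, the sketch below partitions generators across MPI ranks and exchanges the shared network state each step; the toy update equation and all names (n_gen, coupling, dynsim.py) are invented stand-ins for the paper's differential-algebraic solver.

      # Run with e.g. "mpiexec -n 4 python dynsim.py" (file name hypothetical).
      from mpi4py import MPI
      import numpy as np

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_gen = 64                                   # total generators (illustrative)
      local = np.array_split(np.arange(n_gen), size)[rank]
      state = np.ones(local.size)                  # toy per-generator state

      dt, steps = 0.01, 100
      for _ in range(steps):
          # each step, gather the full state needed by the algebraic network part
          full = np.concatenate(comm.allgather(state))
          coupling = full.mean()                   # stand-in for the network solution
          state += dt * (coupling - state)         # toy dynamic update per generator

      full = np.concatenate(comm.allgather(state))
      if rank == 0:
          print("final mean state:", full.mean())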

  3. School Finance: A Primer. A Practical Guide to the Structural Components of, Alternative Approaches to, and Policy Questions about State School Finance Systems.

    ERIC Educational Resources Information Center

    Augenblick, John; And Others

Although school funding structures are similar in many ways across the states, no two states have school finance systems that are precisely the same. School finance systems, which are used to achieve multiple objectives, must consider the characteristics of numerous school districts, distribute large amounts of money, and have developed incrementally…

  4. Heat Pumps and Combined Heat and Power | Climate Neutral Research Campuses

    Science.gov Websites

Combined heat and power (CHP) systems on research campuses can reduce climate impact by 15% to 30%. Such systems can take advantage of the large central heating plants and steam distribution systems that are already available on many campuses. The material handling and combustion systems used for coal are often suitable for partial…

  5. Joint Sensing/Sampling Optimization for Surface Drifting Mine Detection with High-Resolution Drift Model

    DTIC Science & Technology

    2012-09-01

Sensor networks have been proposed as potential tools for large-area detection coverage while being moderately inexpensive (Wettergren, Performance of Search via Track-Before-Detect for Distributed Sensor Networks, 2008). These statements highlight three specific needs for further sensor network research… Cited: …Bay hydrography, Journal of Marine Systems, 12, 221–236; Wettergren, T. A. (2008), Performance of search via track-before-detect for distributed sensor networks.

  6. Ropes: Support for collective operations among distributed threads

    NASA Technical Reports Server (NTRS)

    Haines, Matthew; Mehrotra, Piyush; Cronk, David

    1995-01-01

    Lightweight threads are becoming increasingly useful in supporting parallelism and asynchronous control structures in applications and language implementations. Recently, systems have been designed and implemented to support interprocessor communication between lightweight threads so that threads can be exploited in a distributed memory system. Their use, in this setting, has been largely restricted to supporting latency hiding techniques and functional parallelism within a single application. However, to execute data parallel codes independent of other threads in the system, collective operations and relative indexing among threads are required. This paper describes the design of ropes: a scoping mechanism for collective operations and relative indexing among threads. We present the design of ropes in the context of the Chant system, and provide performance results evaluating our initial design decisions.

  7. Quantifying Availability in SCADA Environments Using the Cyber Security Metric MFC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Anis Ben; Rabai, Latifa Ben Arfa; Abercrombie, Robert K

    2014-01-01

Supervisory Control and Data Acquisition (SCADA) systems are distributed networks dispersed over large geographic areas that aim to monitor and control industrial processes from remote areas and/or a centralized location. They are used in the management of critical infrastructures such as electric power generation, transmission and distribution, water and sewage, manufacturing and industrial production, as well as oil and gas production. The availability of SCADA systems is tantamount to assuring safety, security and profitability. SCADA systems are the backbone of the national cyber-physical critical infrastructure. Herein, we explore the definition and quantification of an econometric measure of availability as it applies to SCADA systems; our metric is a specialization of the generic measure of mean failure cost.

  8. Effects of voltage control in utility interactive dispersed storage and generation systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkham, H.; Das, R.

    1983-03-15

When a small generator is connected to the distribution system, the voltage at the point of interconnection is determined largely by the system and not the generator. This report examines the effect on the generator, on the load voltage and on the distribution system of a number of different voltage control strategies in the generator. Synchronous generators with three kinds of exciter control are considered, as well as induction generators and dc/ac inverters, with and without capacitor compensation. The effect of varying input power during operation (which may be experienced by generators based on renewable resources) is explored, as well as the effect of connecting and disconnecting the generator at ten percent of its rated power.

  9. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better way to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large-system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it produces computationally more efficient results equivalent to those of the more costly radial distribution function method.
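
    The two estimators being compared can be summarized compactly; a minimal sketch under simplifying assumptions (single species shown for the fluctuation route; the ideal-gas check at the end is purely illustrative):

      import numpy as np

      def kb_running_integral(r, g):
          """Traditional route: G(R) = 4*pi * int_0^R (g(r) - 1) r^2 dr as a
          running (cumulative trapezoid) integral over the RDF."""
          f = 4.0 * np.pi * (g - 1.0) * r**2
          return np.concatenate(([0.0],
                                 np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(r))))

      def kb_fluctuation(ni_counts, nj_counts, volume, same_species):
          """Fluctuation route: particle counts in small open sub-volumes give
          G_ij = V (<NiNj> - <Ni><Nj> - delta_ij <Ni>) / (<Ni><Nj>)."""
          ni, nj = ni_counts.mean(), nj_counts.mean()
          cov = np.mean(ni_counts * nj_counts) - ni * nj
          if same_species:
              cov -= ni
          return volume * cov / (ni * nj)

      # Ideal-gas sanity check: g(r) = 1 and Poisson counts both give G = 0.
      r = np.linspace(0.01, 5.0, 500)
      print(kb_running_integral(r, np.ones_like(r))[-1])                      # 0
      counts = np.random.default_rng(0).poisson(50.0, 10000).astype(float)
      print(kb_fluctuation(counts, counts, volume=1.0, same_species=True))    # ~0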

  10. Life Sciences Data Archive (LSDA)

    NASA Technical Reports Server (NTRS)

    Fitts, M.; Johnson-Throop, Kathy; Thomas, D.; Shackelford, K.

    2008-01-01

In the early days of spaceflight, space life sciences data were collected and stored in numerous databases, formats, media types and geographical locations. While serving the needs of individual research teams, these data were largely unknown or unavailable to the scientific community at large. As a result, the Space Act of 1958 and the Science Data Management Policy mandated that research data collected by the National Aeronautics and Space Administration be made available to the science community at large. The Biomedical Informatics and Health Care Systems Branch of the Space Life Sciences Directorate at JSC and the Data Archive Project at ARC, with funding from the Human Research Program through the Exploration Medical Capability Element, are fulfilling these requirements through the systematic population of the Life Sciences Data Archive. This program constitutes a formal system for the acquisition, archival and distribution of data for Life Sciences-sponsored experiments and investigations. The general goal of the archive is to acquire, preserve, and distribute these data using a variety of media which are accessible and responsive to inquiries from the science communities.

  11. Globally distributed software defined storage (proposal)

    NASA Astrophysics Data System (ADS)

    Shevel, A.; Khoruzhnikov, S.; Grudinin, V.; Sadov, O.; Kairkanov, A.

    2017-10-01

The volume of data coming from HEP is growing, as is the volume of data to be held for a long time. This large volume of data - big data - is distributed around the planet, and methods and approaches to organize and manage globally distributed data storage are required. Several examples of distributed storage exist for personal needs, such as own-cloud.org, pydio.com, seafile.com, and sparkleshare.org. At the enterprise level there are a number of systems: SWIFT, a distributed storage system (part of OpenStack), CEPH, and the like, which are mostly object storage. When the resources of several data centers are integrated, the organization of data links becomes a very important issue, especially if several parallel data links between data centers are used. The situation in data centers and in data links may vary from hour to hour, which means each part of the distributed data storage has to be able to rearrange its usage of data links and storage servers in each data center. In addition, different requirements may arise for each customer of the distributed storage. These topics are discussed in this data storage proposal.

  12. DREAM: Distributed Resources for the Earth System Grid Federation (ESGF) Advanced Management

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2015-12-01

The data associated with climate research is often generated, accessed, stored, and analyzed on a mix of unique platforms. The volume, variety, velocity, and veracity of this data create unique challenges as climate research attempts to move beyond stand-alone platforms to a system that truly integrates dispersed resources. Today, sharing data across multiple facilities is often a challenge due to the large variance in supporting infrastructures. This results in data being accessed and downloaded many times, which requires significant amounts of resources, places a heavy analytic development burden on the end users, and leads to mismanaged resources. Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) has begun to solve this problem. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces. However, significant challenges remain, including workflow provenance, modular and flexible deployment, and scalability across a diverse set of computational resources. Expanding on the existing ESGF, the Distributed Resources for the Earth System Grid Federation Advanced Management (DREAM) project will ensure that the access, storage, movement, and analysis of the large quantities of data that are processed and produced by diverse science projects can be dynamically distributed with proper resource management. This system will enable data from a virtually unlimited number of diverse sources to be organized and accessed from anywhere on any device (including mobile platforms). The approach offers a powerful roadmap for the creation and integration of a unified knowledge base of an entire ecosystem, including its many geophysical, geographical, social, political, agricultural, energy, transportation, and cyber aspects. The resulting aggregation of data combined with analytics services has the potential to generate an informational universe and knowledge system of unprecedented size and value to the scientific community, downstream applications, decision makers, and the public.

  13. Low-authority control synthesis for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1982-01-01

    The control of vibrations of large space structures by distributed sensors and actuators is studied. A procedure is developed for calculating the feedback loop gains required to achieve specified amounts of damping. For moderate damping (Low Authority Control) the procedure is purely algebraic, but it can be applied iteratively when larger amounts of damping are required and is generalized for arbitrary time invariant systems.

  14. Two coupled, driven Ising spin systems working as an engine.

    PubMed

    Basu, Debarshi; Nandi, Joydip; Jayannavar, A M; Marathe, Rahul

    2017-05-01

Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless otherwise supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where the system undergoes the typical two isothermal and two adiabatic changes. Unlike these models, we study a prototype system of two classical Ising spins driven by time-dependent, phase-different, external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends only on a single parameter, namely the phase between the two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We find the full distributions of these quantities numerically and study the tails of these distributions. We also study the reliability of the engine. We find that the fluctuations dominate the mean values of efficiency and COP, and that their probability distributions are broad with power-law tails.
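
    A minimal numerical sketch of such a two-spin engine, assuming discrete-time Glauber dynamics with each spin attached to its own bath; the coupling, temperatures and driving phase below are invented, and the work/heat bookkeeping follows the standard stochastic-thermodynamics convention rather than the paper's exact protocol.

      import numpy as np

      # Hypothetical parameters: coupling J, hot/cold bath temperatures, field
      # amplitude h0, and the phase phi between the two drivings (the single
      # parameter that, per the abstract, selects engine vs. refrigerator modes).
      J, Th, Tc, h0, phi = 0.5, 4.0, 1.0, 1.0, np.pi / 2
      steps, cycles = 400, 2000
      rng = np.random.default_rng(1)

      def energy(s1, s2, h1, h2):
          return -J * s1 * s2 - h1 * s1 - h2 * s2

      s1 = s2 = 1
      effs = []
      for c in range(cycles):
          W_on, Q_hot = 0.0, 0.0
          for k in range(steps):
              t0 = 2 * np.pi * (c * steps + k) / steps
              t1 = 2 * np.pi * (c * steps + k + 1) / steps
              h1o, h2o = h0 * np.cos(t0), h0 * np.cos(t0 - phi)
              h1, h2 = h0 * np.cos(t1), h0 * np.cos(t1 - phi)
              # work done on the spins by the field update at fixed configuration
              W_on += energy(s1, s2, h1, h2) - energy(s1, s2, h1o, h2o)
              # Glauber flip attempt for spin 1 (hot bath) and spin 2 (cold bath)
              dE1 = energy(-s1, s2, h1, h2) - energy(s1, s2, h1, h2)
              if rng.random() < 1.0 / (1.0 + np.exp(dE1 / Th)):
                  s1, Q_hot = -s1, Q_hot + dE1   # heat drawn from the hot bath
              dE2 = energy(s1, -s2, h1, h2) - energy(s1, s2, h1, h2)
              if rng.random() < 1.0 / (1.0 + np.exp(dE2 / Tc)):
                  s2 = -s2                       # heat exchange with cold bath
          if Q_hot > 0:
              effs.append(-W_on / Q_hot)         # stochastic efficiency, one cycle

      effs = np.array(effs)
      print("cycles kept:", effs.size, "mean eff:", effs.mean(), "std:", effs.std())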

  15. Two coupled, driven Ising spin systems working as an engine

    NASA Astrophysics Data System (ADS)

    Basu, Debarshi; Nandi, Joydip; Jayannavar, A. M.; Marathe, Rahul

    2017-05-01

Miniaturized heat engines constitute a fascinating field of current research. Many theoretical and experimental studies are being conducted that involve colloidal particles in harmonic traps as well as bacterial baths acting like thermal baths. These systems are micron-sized and are subjected to large thermal fluctuations. Hence, for these systems average thermodynamic quantities, such as work done, heat exchanged, and efficiency, lose meaning unless otherwise supported by their full probability distributions. Earlier studies on microengines are concerned with applying Carnot or Stirling engine protocols to miniaturized systems, where the system undergoes the typical two isothermal and two adiabatic changes. Unlike these models, we study a prototype system of two classical Ising spins driven by time-dependent, phase-different, external magnetic fields. These spins are simultaneously in contact with two heat reservoirs at different temperatures for the full duration of the driving protocol. Performance of the model as an engine or a refrigerator depends only on a single parameter, namely the phase between the two external drivings. We study this system in terms of fluctuations in efficiency and coefficient of performance (COP). We find the full distributions of these quantities numerically and study the tails of these distributions. We also study the reliability of the engine. We find that the fluctuations dominate the mean values of efficiency and COP, and that their probability distributions are broad with power-law tails.

  16. Electric and magnetic microfields inside and outside space-limited configurations of ions and ionic currents

    NASA Astrophysics Data System (ADS)

    Romanovsky, M. Yu; Ebeling, W.; Schimansky-Geier, L.

    2005-01-01

The problem of electric and magnetic microfields inside finite spherical systems of stochastically moving ions, and outside them, is studied. The first possible field of application is high temperature ion clusters created by laser fields [1]. Other possible applications are nearly spherical liquid systems at room temperature containing electrolytes. Looking for biological applications, we may also think about a cell, which is a complicated electrolytic system, or even a brain, which is a still more complicated system of electrolytic currents. The essential model assumption is the random character of the charges' motion. We assume in our basic model that we have a finite, nearly spherical system of randomly moving charges. Even taking into account that this is at best a caricature of any real system, it might be of interest as a limiting case which admits a full theoretical treatment. For symmetry reasons, a random configuration of moving charges cannot generate a macroscopic magnetic field, but there will be microscopic fluctuating magnetic fields. Distributions for electric and magnetic microfields inside and outside such space-limited systems are calculated. Spherical systems of randomly distributed moving charges are investigated. Starting from earlier results for infinitely large systems, which lead to Holtsmark-type distributions, we show that the fluctuations in finite charge distributions are larger in comparison to infinite systems of the same charge density.
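
    The electric-microfield distribution for a finite sphere of random charges is straightforward to estimate by Monte Carlo; a sketch in reduced units (unit charges, unit Coulomb constant; N, R and the sample count are arbitrary choices):

      import numpy as np

      rng = np.random.default_rng(0)
      N, R, trials = 200, 1.0, 5000        # ions per sphere, radius, MC samples

      emag = np.empty(trials)
      for i in range(trials):
          # uniform random ion positions inside a sphere of radius R
          r = R * rng.random(N) ** (1.0 / 3.0)
          cost = rng.uniform(-1.0, 1.0, N)
          phi = rng.uniform(0.0, 2.0 * np.pi, N)
          sint = np.sqrt(1.0 - cost**2)
          xyz = np.column_stack([r * sint * np.cos(phi),
                                 r * sint * np.sin(phi),
                                 r * cost])
          # Coulomb field at the sphere centre, unit charges, reduced units
          E = (xyz / np.linalg.norm(xyz, axis=1)[:, None] ** 3).sum(axis=0)
          emag[i] = np.linalg.norm(E)

      # As N grows at fixed density the histogram of emag approaches the
      # Holtsmark form; finite systems show larger fluctuations, per the abstract.
      print("median |E| in reduced units:", np.median(emag))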

  17. Large Fluvial Fans and Exploration for Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Wilkinson, Murray Justin

    2005-01-01

    A report discusses the geological phenomena known, variously, as modern large (or large modern) fluvial fans or large continental fans, from a perspective of exploring for hydrocarbons. These fans are partial cones of river sediment that spread out to radii of 100 km or more. Heretofore, they have not been much recognized in the geological literature probably because they are difficult to see from the ground. They can, however, be seen in photographs taken by astronauts and on other remotely sensed imagery. Among the topics discussed in the report is the need for research to understand what seems to be an association among fluvial fans, alluvial fans, and hydrocarbon deposits. Included in the report is an abstract that summarizes the global distribution of large modern fluvial fans and a proposal to use that distribution as a guide to understanding paleo-fluvial reservoir systems where oil and gas have formed. Also included is an abstract that summarizes what a continuing mapping project has thus far revealed about the characteristics of large fans that have been found in a variety of geological environments.

  18. CD-ROM technology at the EROS data center

    USGS Publications Warehouse

    Madigan, Michael E.; Weinheimer, Mary C.

    1993-01-01

    The vast amount of digital spatial data often required by a single user has created a demand for media alternatives to 1/2" magnetic tape. One such medium that has been recently adopted at the U.S. Geological Survey's EROS Data Center is the compact disc (CD). CD's are a versatile, dynamic, and low-cost method for providing a variety of data on a single media device and are compatible with various computer platforms. CD drives are available for personal computers, UNIX workstations, and mainframe systems, either directly connected, or through a network. This medium furnishes a quick method of reproducing and distributing large amounts of data on a single CD. Several data sets are already available on CD's, including collections of historical Landsat multispectral scanner data and biweekly composites of Advanced Very High Resolution Radiometer data for the conterminous United States. The EROS Data Center intends to provide even more data sets on CD's. Plans include specific data sets on a customized disc to fulfill individual requests, and mass production of unique data sets for large-scale distribution. Requests for a single compact disc-read only memory (CD-ROM) containing a large volume of data either for archiving or for one-time distribution can be addressed with a CD-write once (CD-WO) unit. Mass production and large-scale distribution will require CD-ROM replication and mastering.

  19. Event-by-event gluon multiplicity, energy density, and eccentricities in ultrarelativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Schenke, Björn; Tribedy, Prithwish; Venugopalan, Raju

    2012-09-01

The event-by-event multiplicity distribution, the energy densities and the energy-density-weighted eccentricity moments ɛn (up to n=6) at early times in heavy-ion collisions at both the BNL Relativistic Heavy Ion Collider (RHIC) (√s_NN = 200 GeV) and the CERN Large Hadron Collider (LHC) (√s_NN = 2.76 TeV) are computed in the IP-Glasma model. This framework combines the impact-parameter-dependent saturation model (IP-Sat) for nucleon parton distributions (constrained by HERA deeply inelastic scattering data) with an event-by-event classical Yang-Mills description of early-time gluon fields in heavy-ion collisions. The model produces multiplicity distributions that are convolutions of negative binomial distributions without further assumptions or parameters. In the limit of large dense systems, the n-particle gluon distribution predicted by the Glasma flux-tube model is demonstrated to be nonperturbatively robust. In the general case, the effect of additional geometrical fluctuations is quantified. The eccentricity moments are compared to the MC-KLN model; a noteworthy feature is that the fluctuation-dominated odd moments are consistently larger than in the MC-KLN model.
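
    The convolution-of-negative-binomials statement is easy to illustrate numerically: the multiplicity of an event built from many independent sources is the sum, i.e. the convolution, of per-source negative binomial draws. A sketch with invented per-source parameters (not fitted to RHIC or LHC data):

      import numpy as np

      rng = np.random.default_rng(0)
      n_sources, n_events = 50, 20000               # invented, for illustration
      mean, kshape = 2.0, 1.5                       # per-source NBD mean and shape
      p = kshape / (kshape + mean)                  # numpy's (n, p) parameterization
      mult = rng.negative_binomial(kshape, p, size=(n_events, n_sources)).sum(axis=1)

      # A sum of independent NBD(k, p) variables is NBD(n_sources * k, p), so the
      # event-level distribution stays negative binomial with a larger shape.
      print("mean:", mult.mean(), "variance:", mult.var())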

  20. Research on the novel FBG detection system for temperature and strain field distribution

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-chao; Yang, Jin-hua

    2017-10-01

To collect temperature and strain field distribution information, a novel FBG detection system was designed. The system applies a linearly chirped FBG structure to obtain a large bandwidth. The novel FBG cover was designed with a linearly varying thickness so that different locations have different responses, allowing the temperature and strain field distribution information to be obtained simultaneously from the reflection spectrum. The structure of the novel FBG cover was designed, its theoretical function was calculated, and its solution was derived for the strain field distribution. Simulation analysis showed the trends of the temperature and strain field distributions under different strain strengths and action positions, demonstrating that the strain field distribution can be resolved. FOB100-series equipment was used to test the temperature in the experiment, and JSM-A10-series equipment was used to test the strain field distribution. The average error of the experimental results was better than 1.1% for temperature and better than 1.3% for strain, with individual errors in the test data when the strain was small. Feasibility was established by theoretical analysis, simulation and experiment, and the system is well suited to practical applications.

  1. (abstract) Satellite Physical Oceanography Data Available From an EOSDIS Archive

    NASA Technical Reports Server (NTRS)

    Digby, Susan A.; Collins, Donald J.

    1996-01-01

The Physical Oceanography Distributed Active Archive Center (PO.DAAC) at the Jet Propulsion Laboratory archives and distributes data as part of the Earth Observing System Data and Information System (EOSDIS). Products available from JPL are largely satellite-derived and include sea-surface height, surface-wind speed and vectors, integrated water vapor, atmospheric liquid water, sea-surface temperature, heat flux, and in-situ data as it pertains to satellite data. Much of the data is global and spans fourteen years. There is email access, a WWW site, product catalogs, and FTP capabilities. Data is free of charge.

  2. Supporting scalability and flexibility in a distributed management platform

    NASA Astrophysics Data System (ADS)

    Jardin, P.

    1996-06-01

The TeMIP management platform was developed to manage very large distributed systems such as telecommunications networks. The management of these networks imposes a number of fairly stringent requirements, including the partitioning of the network, the division of work based on skills and target system types, and the ability to adjust functions to specific operational requirements. This in turn requires the ability to cluster managed resources into domains that are defined entirely at runtime based on operator policies. This paper discusses some of the issues that must be addressed in order to add such a dynamic dimension to a management solution.

  3. Inverter design for high frequency power distribution

    NASA Technical Reports Server (NTRS)

    King, R. J.

    1985-01-01

A class of simple resonantly commutated inverters is investigated for use in a high power (100 kW - 1000 kW), high frequency (10 kHz - 20 kHz) AC power distribution system. The Mapham inverter is found to provide a unique combination of large thyristor turn-off angle and good utilization factor, much better than an alternate 'current-fed' inverter. The effects of loading the Mapham inverter entirely with rectifier loads are investigated by simulation and with an experimental 3 kW, 20 kHz inverter. This inverter is found to be well suited to a power system with heavy rectifier loading.

  4. Using Approximate Bayesian Computation to Probe Multiple Transiting Planet Systems

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.

    2015-08-01

The large number of multiple transiting planet systems (MTPSs) uncovered with Kepler suggests a population of well-aligned planetary systems. Previously, the distribution of transit duration ratios in MTPSs has been used to place constraints on the distributions of mutual orbital inclinations and orbital eccentricities in these systems. However, degeneracies with the underlying number of planets in these systems pose added challenges and make explicit likelihood functions intractable. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC draws proposals from a prior on the population parameters to produce synthetic datasets via a physically motivated model. Samples are accepted or rejected based on how closely they reproduce the actual observed dataset to some tolerance. The accepted samples then form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We will demonstrate the utility of ABC for exoplanet populations by presenting new constraints on the mutual inclination and eccentricity distributions in the Kepler MTPSs. We will also introduce Simple-ABC, a new open-source Python package designed for ease of use and rapid specification of general models, suitable for use in a wide variety of applications in both exoplanet science and astrophysics as a whole.
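
    For readers unfamiliar with the method, simplest-form rejection ABC fits in a few lines. The sketch below infers the scale of a Rayleigh distribution (a common stand-in for mutual-inclination widths) from a summary statistic; all numbers are illustrative and unconnected to the actual Kepler analysis or the Simple-ABC package.

      import numpy as np

      def abc_rejection(observed, simulate, prior_draw, distance, eps, n_prop, seed=0):
          """Simplest-form ABC: propose from the prior, simulate synthetic data,
          keep the parameter when the data lie within eps of the observation."""
          rng = np.random.default_rng(seed)
          kept = [theta for theta in (prior_draw(rng) for _ in range(n_prop))
                  if distance(simulate(theta, rng), observed) < eps]
          return np.array(kept)                  # approximate posterior sample

      # Invented toy problem: recover the Rayleigh scale from a sample mean.
      rng0 = np.random.default_rng(42)
      obs = rng0.rayleigh(2.0, 200).mean()       # "observed" summary statistic
      post = abc_rejection(
          observed=obs,
          simulate=lambda s, rng: rng.rayleigh(s, 200).mean(),
          prior_draw=lambda rng: rng.uniform(0.1, 10.0),
          distance=lambda a, b: abs(a - b),
          eps=0.05, n_prop=20000)
      print(post.mean(), post.std())             # concentrates near sigma = 2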

  5. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Agarwal, Khushbu

In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  6. CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals

    NASA Astrophysics Data System (ADS)

    Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen

A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, however, MDEs have mainly been designed to support a single “smart room”, and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to supporting activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES, and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and using a public display environment can be mitigated by the use of CLINICAL SURFACES.

  7. Locating inefficient links in a large-scale transportation network

    NASA Astrophysics Data System (ADS)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT < 0 or ΔT > 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
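
    The link-closure scan behind ΔT can be sketched on a toy graph; note that the sketch below routes on fixed travel times, so ΔT is always non-negative here, whereas the paper's negative ΔT (the Braess effect) arises only under congestion-dependent user-equilibrium assignment. Edge weights and OD pairs are invented.

      import networkx as nx

      # Toy weighted road network; weights are fixed free-flow travel times.
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("A", "B", 4), ("B", "D", 1), ("A", "C", 1),
          ("C", "D", 4), ("B", "C", 1),
      ])
      od_pairs = [("A", "D"), ("D", "A")]          # invented commuting demand

      def total_travel_time(graph):
          return sum(nx.shortest_path_length(graph, s, t, weight="weight")
                     for s, t in od_pairs)

      base = total_travel_time(G)
      for u, v in list(G.edges):
          H = G.copy()
          H.remove_edge(u, v)                      # close one road segment
          if not all(nx.has_path(H, s, t) for s, t in od_pairs):
              continue                             # closure disconnects the demand
          print(f"closing {u}-{v}: dT = {total_travel_time(H) - base}")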

  8. Verification of Space Station Secondary Power System Stability Using Design of Experiment

    NASA Technical Reports Server (NTRS)

    Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce

    1998-01-01

    This paper describes analytical methods used in the verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative-resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of a large, complex distributed power system in its entirety is not practical due to its size and complexity, so computer modeling has been used extensively to develop hardware specifications and to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on system performance, characterizes various operating scenarios, and identifies those with the potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given, and examples applying DoE to the analysis and verification of the ISS power system are provided.
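
    To make the run-count reduction concrete, here is a minimal generic sketch, not the ISS study's actual experiment plan: seven hypothetical converter parameters (A-G) are screened at two levels each in 8 runs instead of the 2^7 = 128 runs a full factorial would need, using the standard 2^(7-4) construction in which factors D-G are generated as products of the base columns.

        # Hedged sketch: a generic two-level 2^(7-4) fractional factorial design.
        from itertools import product

        BASE = ("A", "B", "C")                                       # independent factors
        GENERATORS = {"D": "AB", "E": "AC", "F": "BC", "G": "ABC"}   # aliased factors

        def fractional_factorial():
            runs = []
            for levels in product((-1, +1), repeat=len(BASE)):
                run = dict(zip(BASE, levels))
                for name, word in GENERATORS.items():
                    value = 1
                    for factor in word:          # generated column = product of base columns
                        value *= run[factor]
                    run[name] = value
                runs.append(run)
            return runs

        for run in fractional_factorial():       # 8 runs covering 7 factors
            print(run)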

  9. The dynamics of the multi-planet system orbiting Kepler-56

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Gongjie; Naoz, Smadar; Johnson, John Asher

    2014-10-20

    Kepler-56 is a multi-planet system containing two coplanar inner planets whose orbits are misaligned with respect to the spin axis of the host star, plus an outer planet. Various mechanisms have been proposed to explain the broad distribution of spin-orbit angles among exoplanets, and these theories fall under two broad categories. The first is based on dynamical interactions in a multi-body system, while the other assumes that disk migration is the driving mechanism in planetary configuration and that the star (or disk) is tilted with respect to the planetary plane. Here we show that the large observed obliquity of the Kepler-56 system is consistent with a dynamical origin. In addition, we use observations by Huber et al. to derive the obliquity's probability distribution function, thus improving the constraint on its lower limit. The outer planet may be the cause of the inner planets' large obliquities, and we give the probability distribution function of its inclination, which depends on the initial orbital configuration of the planetary system. We show that even with a precise measurement of the true obliquity, one cannot distinguish between the initial configurations. Finally, we consider the fate of the system as the star continues to evolve beyond the main sequence, and we find that the obliquity of the system will not undergo major variations as the star climbs the red giant branch. We follow the evolution of the system and find that the innermost planet will be engulfed in ∼129 Myr. Furthermore, we place an upper limit of ∼155 Myr on the engulfment of the second planet. This corresponds to ∼3% of the current age of the star.

  10. A Vision for Co-optimized T&D System Interaction with Renewables and Demand Response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, C. Lindsay; Zéphyr, Luckny; Liu, Jialin

    The evolution of the power system to the reliable, efficient and sustainable system of the future will involve development of both demand- and supply-side technology and operations. The use of demand response to counterbalance the intermittency of renewable generation brings the consumer into the spotlight. Though individual consumers are interconnected at the low-voltage distribution system, these resources are typically modeled as variables at the transmission network level. In this paper, a vision for co-optimized interaction of distribution systems, or microgrids, with the high-voltage transmission system is described. In this framework, microgrids encompass consumers, distributed renewables and storage. The energy management system of the microgrid can also sell excess energy to, or buy needed energy from, the transmission system. Preliminary work explores price mechanisms to manage the microgrid and its interactions with the transmission system. Wholesale market operations are addressed through the development of scalable stochastic optimization methods that provide the ability to co-optimize interactions between the transmission and distribution systems. Modeling challenges of the co-optimization are addressed via solution methods for large-scale stochastic optimization, including decomposition and stochastic dual dynamic programming.
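
    As a toy illustration of the co-optimization idea, and emphatically not the paper's stochastic dual dynamic programming formulation, the sketch below solves a two-stage stochastic dispatch: a day-ahead purchase is fixed before the microgrid's net demand is known, and each demand scenario is then balanced in real time by buying at a premium or selling back at a discount. All prices, demands, and probabilities are invented.

        # Hedged sketch: two-stage stochastic LP via scenario enumeration.
        import numpy as np
        from scipy.optimize import linprog

        C_DA, C_RT, C_SB = 30.0, 80.0, 10.0        # day-ahead, real-time, sell-back $/MWh
        demand = np.array([50.0, 80.0, 120.0])     # scenario net demands (MWh)
        prob = np.array([0.3, 0.5, 0.2])           # scenario probabilities

        S = len(demand)
        # Variable order: [x, y+_1, y-_1, ..., y+_S, y-_S]
        cost = np.concatenate(([C_DA], np.ravel([[p * C_RT, -p * C_SB] for p in prob])))
        A_eq = np.zeros((S, 1 + 2 * S))
        A_eq[:, 0] = 1.0                           # x enters every scenario balance
        for s in range(S):
            A_eq[s, 1 + 2 * s] = 1.0               # real-time purchase y+_s
            A_eq[s, 2 + 2 * s] = -1.0              # sell-back y-_s
        res = linprog(cost, A_eq=A_eq, b_eq=demand, bounds=[(0, None)] * (1 + 2 * S))
        print(f"day-ahead purchase x* = {res.x[0]:.1f} MWh, expected cost = ${res.fun:.0f}")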

  11. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    PubMed

    Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D

    2015-01-01

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the east and west coasts of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  12. The R-Shell approach - Using scheduling agents in complex distributed real-time systems

    NASA Technical Reports Server (NTRS)

    Natarajan, Swaminathan; Zhao, Wei; Goforth, Andre

    1993-01-01

    Large, complex real-time systems such as space and avionics systems are extremely demanding in their scheduling requirements. Current OS design approaches are quite limited in the capabilities they provide for task scheduling. Typically, they simply implement a particular uniprocessor scheduling strategy and do not provide any special support for network scheduling, overload handling, fault tolerance, distributed processing, etc. Our design of the R-Shell real-time environment facilitates the implementation of a variety of sophisticated but efficient scheduling strategies, including incorporation of all these capabilities. This is accomplished by the use of scheduling agents, which reside in the application run-time environment and are responsible for coordinating the scheduling of the application.
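
    As a minimal sketch of a scheduling agent residing in the application's run-time environment (an illustration only; the abstract does not specify R-Shell's internals), the toy agent below owns its application's task queue and applies a pluggable policy, here earliest-deadline-first:

        # Hedged sketch: a per-application scheduling agent with an EDF policy.
        import heapq
        from dataclasses import dataclass, field

        @dataclass(order=True)
        class Task:
            deadline: float                       # tasks compare by deadline only
            name: str = field(compare=False)

        class SchedulingAgent:
            """Coordinates scheduling for one application from its run-time."""
            def __init__(self):
                self._queue: list[Task] = []

            def submit(self, name: str, deadline: float) -> None:
                heapq.heappush(self._queue, Task(deadline, name))

            def next_task(self) -> Task | None:
                return heapq.heappop(self._queue) if self._queue else None

        agent = SchedulingAgent()
        agent.submit("telemetry", deadline=5.0)
        agent.submit("attitude-control", deadline=1.0)
        print(agent.next_task().name)             # attitude-control: earliest deadline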

  13. Practical gigahertz quantum key distribution robust against channel disturbance.

    PubMed

    Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; He, De-Yong; Hui, Cong; Hao, Peng-Lei; Fan-Yuan, Guan-Jie; Wang, Chao; Zhang, Li-Jun; Kuang, Jie; Liu, Shu-Feng; Zhou, Zheng; Wang, Yong-Gang; Guo, Guang-Can; Han, Zheng-Fu

    2018-05-01

    Quantum key distribution (QKD) provides an attractive solution for secure communication. However, channel disturbance severely limits its application when a QKD system is moved from the laboratory to the field. Here a high-speed Faraday-Sagnac-Michelson QKD system is proposed that automatically compensates for channel polarization disturbance, largely avoiding the intermittent operation that environmental changes otherwise impose. Over a 50 km fiber channel with 30 Hz polarization scrambling, the practicality of this phase-coding QKD system was demonstrated with an interference fringe visibility of 99.35% over 24 h and a stable secure key rate of 306 kbit/s over seven days without active polarization alignment.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    This report contains papers on the following topics: NREN Security Issues: Policies and Technologies; Layer Wars: Protect the Internet with Network Layer Security; Electronic Commission Management; Workflow 2000 - Electronic Document Authorization in Practice; Security Issues of a UNIX PEM Implementation; Implementing Privacy Enhanced Mail on VMS; Distributed Public Key Certificate Management; Protecting the Integrity of Privacy-enhanced Electronic Mail; Practical Authorization in Large Heterogeneous Distributed Systems; Security Issues in the Truffles File System; Issues surrounding the use of Cryptographic Algorithms and Smart Card Applications; Smart Card Augmentation of Kerberos; and An Overview of the Advanced Smart Card Access Control System. Selected papers were processed separately for inclusion in the Energy Science and Technology Database.

  15. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1993-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X DataSlice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.

  16. An interactive environment for the analysis of large Earth observation and model data sets

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Walsh, John E.; Wilhelmson, Robert B.

    1992-01-01

    We propose to develop an interactive environment for the analysis of large Earth science observation and model data sets. We will use a standard scientific data storage format and a large capacity (greater than 20 GB) optical disk system for data management; develop libraries for coordinate transformation and regridding of data sets; modify the NCSA X Image and X Data Slice software for typical Earth observation data sets by including map transformations and missing data handling; develop analysis tools for common mathematical and statistical operations; integrate the components described above into a system for the analysis and comparison of observations and model results; and distribute software and documentation to the scientific community.

  17. Distributed resource allocation under communication constraints

    NASA Astrophysics Data System (ADS)

    Dodin, Pierre; Nimier, Vincent

    2001-03-01

    This paper presents a study of the multi-sensor management problem for multi-target tracking. Collaboration between sensors observing the same target means that they can fuse their data during processing, so this possibility must be taken into account when computing the optimal sensor-target assignment at each time step. To solve this problem for a real large-scale system, one must consider both the information aspect and the control aspect of the problem. One way to unify them is a decentralized filtering algorithm locally driven by an assignment algorithm. The decentralized filtering algorithm we use in our model is that of Grime, which relaxes the usual fully-connected hypothesis. By fully connected, one means that information in a fully-connected system is distributed everywhere at the same moment, which is unrealistic for a real large-scale system. We model the distributed assignment decision with a greedy algorithm: each sensor performs a global optimization in order to estimate the other sensors' information sets. A consequence of relaxing the fully-connected hypothesis is that the sensors' information sets are not identical at each time step, producing an information asymmetry in the system. The assignment algorithm uses local knowledge of this asymmetry. By testing the reactions and coherence of our system's local assignment decisions against maneuvering targets, we show that decentralized assignment control remains feasible even though the system is not fully connected.
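
    The greedy assignment step can be sketched as follows. This is an illustration only, not the paper's algorithm: the per-pair utilities (an "information gain" for each sensor-target pairing) are invented, each sensor receives exactly one target, and several sensors may share a target, which is what makes later data fusion possible.

        # Hedged sketch: greedy sensor-to-target assignment by descending gain.
        def greedy_assignment(gain):
            """gain[s][t]: assumed information gain if sensor s tracks target t."""
            pairs = sorted(((g, s, t)
                            for s, row in enumerate(gain)
                            for t, g in enumerate(row)), reverse=True)
            assigned, assignment = set(), {}
            for g, s, t in pairs:                 # best remaining pair first
                if s not in assigned:
                    assigned.add(s)
                    assignment[s] = t
            return assignment

        gain = [[0.9, 0.3, 0.5],                  # sensor 0
                [0.4, 0.8, 0.1],                  # sensor 1
                [0.2, 0.7, 0.6]]                  # sensor 2
        print(greedy_assignment(gain))            # {0: 0, 1: 1, 2: 1}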

  18. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation-time probability density and hazard functions from large speeded word-naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics, that is, interaction-dominant dynamics. Lognormal and inverse power-law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power-law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald, alternatives corresponding to additive, superposed component processes. The evidence for interaction-dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation-time distributions. PMID:22783213
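
    A minimal sketch of this style of model comparison, not the paper's lognormal plus inverse power-law mixture estimator: it fits a lognormal and an ex-Gaussian (scipy's exponnorm) to synthetic naming times and compares log-likelihoods. The data and all parameters are invented.

        # Hedged sketch: lognormal vs. ex-Gaussian fit to synthetic naming times.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        rt = rng.lognormal(mean=6.2, sigma=0.25, size=2000)   # fake times (ms)

        lognorm_params = stats.lognorm.fit(rt, floc=0)
        ll_lognorm = np.sum(stats.lognorm.logpdf(rt, *lognorm_params))

        exg_params = stats.exponnorm.fit(rt)                  # Gaussian + exponential
        ll_exg = np.sum(stats.exponnorm.logpdf(rt, *exg_params))

        print(f"log-likelihood  lognormal {ll_lognorm:.1f}  ex-Gaussian {ll_exg:.1f}")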

  19. Evaluating effective swath width and droplet distribution of aerial spraying systems on M-18B and Thrush 510G airplanes

    USDA-ARS's Scientific Manuscript database

    Aerial spraying plays an important role in promoting agricultural production and protecting the biological environment due to its flexibility, high effectiveness, and large operational area per unit of time. In order to evaluate the performance parameters of the spraying systems on two fixed wing ai...

  20. Thermostatic Radiator Valve Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dentz, Jordan; Ansanelli, Eric

    2015-01-01

    A large stock of multifamily buildings in the Northeast and Midwest is heated by steam distribution systems. Losses from these systems are typically high, and a significant number of apartments are overheated much of the time. Thermostatically controlled radiator valves (TRVs) are one potential strategy to combat this problem, but they have not been widely accepted by the residential retrofit market.
