Derivation of hydrous pyrolysis kinetic parameters from open-system pyrolysis
NASA Astrophysics Data System (ADS)
Tseng, Yu-Hsin; Huang, Wuu-Liang
2010-05-01
Kinetic information is essential for predicting the temperature, timing, and depth of hydrocarbon generation within a hydrocarbon system. Kinetic parameters are most commonly derived from open-system pyrolysis. However, it has been shown that open-system pyrolysis conditions deviate from nature in their low, near-ambient pressures and high temperatures, and the extrapolation of open-system heating rates to geological conditions may be questionable. A recent study by Lewan and Ruble showed that hydrous-pyrolysis conditions simulate natural conditions more closely, and its applications are supported by two case studies with natural thermal-burial histories. Nevertheless, hydrous pyrolysis experiments are tedious and require large amounts of sample, whereas open-system pyrolysis is convenient and efficient. The present study therefore aims to derive a reliable distributed hydrous-pyrolysis Ea from routine open-system Rock-Eval data alone. Our results reveal a good correlation between the open-system Rock-Eval parameter Tmax and the activation energy (Ea) derived from hydrous pyrolysis. The single hydrous-pyrolysis Ea can be predicted from Tmax on the basis of this correlation, while the frequency factor (A0) is estimated from the linear relationship between the single Ea and log A0. Because a distributed Ea is more physically reasonable than a single Ea, we convert the predicted single hydrous-pyrolysis Ea into a distributed Ea by shifting the open-system Ea distribution until its weighted mean equals the single hydrous-pyrolysis Ea. Moreover, the shape of the Ea distribution has been shown to closely resemble the shape of the Tmax curve; in the absence of an open-system Ea distribution, the shape of the Tmax curve can therefore be used to obtain the distributed hydrous-pyrolysis Ea.
The study offers a simple new approach for obtaining a distributed hydrous-pyrolysis Ea from routine open-system Rock-Eval data alone, allowing better estimates of hydrocarbon generation.
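The workflow described above can be sketched in a few lines: predict a single Ea from Tmax via the correlation, then rigidly shift an open-system Ea distribution until its weighted mean matches the prediction. The correlation coefficients and the example distribution below are hypothetical placeholders, not the calibrated values from the study.

```python
# Sketch of the proposed workflow. The slope/intercept of the Tmax-Ea
# correlation are invented for illustration, not the study's fit.

def predict_single_ea(tmax_c, slope=0.55, intercept=-180.0):
    """Hypothetical linear correlation: Ea (kcal/mol) = slope*Tmax(degC) + intercept."""
    return slope * tmax_c + intercept

def shift_distribution(energies, weights, target_mean):
    """Shift the whole Ea distribution so its weighted mean equals target_mean."""
    total = sum(weights)
    current_mean = sum(e * w for e, w in zip(energies, weights)) / total
    delta = target_mean - current_mean
    return [e + delta for e in energies]

# Hypothetical open-system distributed Ea (kcal/mol) with fractional weights.
energies = [48.0, 50.0, 52.0, 54.0, 56.0]
weights = [0.1, 0.2, 0.4, 0.2, 0.1]

ea_single = predict_single_ea(440.0)  # e.g. Tmax = 440 degC
shifted = shift_distribution(energies, weights, ea_single)
mean = sum(e * w for e, w in zip(shifted, weights)) / sum(weights)
```

The key property is that the shift preserves the shape of the distribution while moving its weighted mean onto the predicted single Ea.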
Rafael Moreno-Sanchez
2006-01-01
The aim of this paper is to provide a conceptual framework for the session: "The role of web-based Geographic Information Systems in supporting sustainable management." The concepts of sustainability, sustainable forest management, Web Services, Distributed Geographic Information Systems, interoperability, Open Specifications, and Open Source Software are defined...
Open-source framework for power system transmission and distribution dynamics co-simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Renke; Fan, Rui; Daily, Jeff
The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with a decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
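The decoupled approach can be illustrated with a toy synchronization loop: a middleware advances the two simulators in lock-step and exchanges boundary-bus values between steps. The simulator models below are stand-ins invented for illustration, not the FNCS API.

```python
# Toy decoupled transmission/distribution co-simulation loop in the spirit of
# FNCS. Both "simulators" are one-line algebraic models; only the exchange and
# synchronization pattern is the point of the sketch.

class TransmissionSim:
    def step(self, dist_load):
        # Toy model: boundary-bus voltage sags slightly with distribution load.
        return 1.0 - 0.01 * dist_load

class DistributionSim:
    def step(self, boundary_voltage):
        # Toy model: load drawn scales with the boundary voltage.
        return 5.0 * boundary_voltage

def cosimulate(steps):
    trans, dist = TransmissionSim(), DistributionSim()
    load, trace = 5.0, []
    for _ in range(steps):
        voltage = trans.step(load)     # transmission solves with latest load
        load = dist.step(voltage)      # distribution solves with latest voltage
        trace.append((voltage, load))  # middleware exchanges both values
    return trace

trace = cosimulate(10)
```

Because each simulator only sees the other's boundary values from the previous exchange, the loop converges to the coupled solution when the interaction is weak, which is the usual justification for decoupled co-simulation.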
A Framework for Open, Flexible and Distributed Learning.
ERIC Educational Resources Information Center
Khan, Badrul H.
Designing open, flexible distance learning systems on the World Wide Web requires thoughtful analysis and investigation combined with an understanding of both the Web's attributes and resources and the ways instructional design principles can be applied to tap the Web's potential. A framework for open, flexible, and distributed learning has been…
NASA Astrophysics Data System (ADS)
Arias, Carolina; Brovelli, Maria Antonia; Moreno, Rafael
2015-04-01
We are in an age when water resources are increasingly scarce and the impacts of human activities on them are ubiquitous. These problems do not respect administrative or political boundaries, and they must be addressed by integrating information from multiple sources at multiple spatial and temporal scales. Communication, coordination and data sharing are critical for addressing the water conservation and management issues of the 21st century. However, different countries, provinces, local authorities and agencies dealing with water resources have diverse organizational, socio-cultural, economic, environmental and information technology (IT) contexts that raise challenges to the creation of information systems capable of integrating and distributing information across their areas of responsibility in an efficient and timely manner. Tight and disparate financial resources, and dissimilar IT infrastructures (data, hardware, software and personnel expertise) further complicate the creation of these systems. There is a pressing need for distributed interoperable water information systems that are user friendly, easily accessible and capable of managing and sharing large volumes of spatial and non-spatial data. In a distributed system, data and processes are created and maintained in different locations, each with competitive advantages for carrying out specific activities. Open Data (data that can be freely distributed) is available in the water domain, and it should be further promoted across countries and organizations. Compliance with Open Specifications for data collection, storage and distribution is the first step toward the creation of systems that are capable of interacting and exchanging data in a seamless (interoperable) way. The features of Free and Open Source Software (FOSS) offer low access costs that facilitate scalability and long-term viability of information systems.
The World Wide Web (the Web) will be the platform of choice to deploy and access these systems. Geospatial capabilities for mapping, visualization, and spatial analysis will be important components of this new generation of Web-based interoperable information systems in the water domain. The purpose of this presentation is to increase the awareness of scientists, IT personnel and agency managers of the advantages offered by the combined use of Open Data, Open Specifications for geospatial and water-related data collection, storage and sharing, and mature FOSS projects for the creation of interoperable Web-based information systems in the water domain. A case study is used to illustrate how these principles and technologies can be integrated to create a system with the previously mentioned characteristics for managing and responding to flood events.
Omnetric Group Demonstrates Distributed Grid-Edge Control Hierarchy at NREL
Omnetric Group demonstrated a distributed control hierarchy based on an open field message bus (OpenFMB) at NREL's Energy Systems Integration Facility (ESIF), where the group first developed and validated the system.
Optimal Power Scheduling for a Medium Voltage AC/DC Hybrid Distribution Network
Zhu, Zhenshan; Liu, Dichen; Liao, Qingfen; ...
2018-01-26
With the great increase of renewable generation as well as DC loads in the distribution network, DC distribution technology is receiving more attention, since a DC distribution network can improve operating efficiency and power quality by reducing the number of energy conversion stages. This paper presents a new architecture for the medium-voltage AC/DC hybrid distribution network, where the AC and DC subgrids are looped by normally closed AC soft open points (ACSOPs) and DC soft open points (DCSOPs), respectively. The proposed AC/DC hybrid distribution system contains renewable generation (i.e., wind power and photovoltaic (PV) generation), energy storage systems (ESSs), soft open points (SOPs), and both AC and DC flexible demands. An energy management strategy for the hybrid system is presented based on the dynamic optimal power flow (DOPF) method. The main objective of the proposed power scheduling strategy is to minimize the operating cost and reduce the curtailment of renewable generation while meeting operational and technical constraints. The proposed approach is verified in five scenarios: a pure AC system, a hybrid AC/DC system, a hybrid system with an interlinking converter, a hybrid system with DC flexible demand, and a hybrid system with SOPs. Results show that the proposed scheduling method can successfully dispatch the controllable elements, and that the presented architecture for the AC/DC hybrid distribution system is beneficial for reducing operating cost and renewable generation curtailment.
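The scheduling objective described above, minimizing grid cost and renewable curtailment, can be illustrated with a one-interval greedy dispatch rule: renewables serve demand first, storage absorbs any surplus, and the grid covers the residual. This is a toy merit-order rule, not the paper's DOPF formulation, and all ratings are invented.

```python
# Toy single-interval dispatch for a hybrid feeder. All quantities in MW/MWh;
# values are illustrative, not from the paper.

def dispatch(demand, renewable, soc, soc_max, p_ess):
    used_res = min(renewable, demand)
    residual = demand - used_res              # unmet demand after renewables
    surplus = renewable - used_res            # excess renewable generation
    charge = min(surplus, soc_max - soc, p_ess)  # ESS absorbs surplus first
    curtail = surplus - charge                # curtail only when ESS is limited
    discharge = min(residual, soc, p_ess)     # ESS helps cover the residual
    grid = residual - discharge               # grid import covers the rest
    return {"charge": charge, "discharge": discharge,
            "curtail": curtail, "grid": grid}

surplus_case = dispatch(demand=8, renewable=10, soc=1, soc_max=2, p_ess=3)
deficit_case = dispatch(demand=10, renewable=4, soc=2, soc_max=2, p_ess=3)
```

A full DOPF instead optimizes all intervals jointly under network constraints; the greedy rule only shows why storage and SOP flexibility reduce both curtailment and grid cost.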
Lewin, Keith F.
1997-04-15
A multi-port valve for regulating, as a function of ambient air having varying wind velocity and wind direction in an open-field control area, the distribution of a fluid, particularly carbon dioxide (CO2) gas, in a fluid distribution system so that the control area remains generally at an elevated concentration or level of said fluid. The multi-port valve generally includes a multi-port housing having a plurality of outlets therethrough disposed in a first pattern of outlets and at least one second pattern of outlets, and a movable plate having a plurality of apertures extending therethrough disposed in a first pattern of apertures and at least one second pattern of apertures. The first pattern of apertures is alignable with the first pattern of outlets, and the at least one second pattern of apertures is alignable with the second pattern of outlets; the first pattern of apertures has a predetermined orientation relative to the at least one second pattern of apertures. For an open-field control area subject to ambient wind of low velocity from any direction, the movable plate is positioned to distribute the fluid supply equally to the open-field control area. For an open-field control area subject to high-velocity ambient wind from a given direction, the movable plate is positioned to distribute the fluid supply generally to the portion of the open-field control area located upwind.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiike, S.; Okazaki, Y.
This paper describes a performance estimation tool developed for modeling and simulation of open distributed energy management systems to support their design. Discrete event simulation with detailed models is adopted for efficient performance estimation. The tool includes basic models constituting a platform, e.g., Ethernet, communication protocol, operating system, etc. Application software is modeled by specifying CPU time, disk access size, communication data size, etc. Different types of system configurations for various system activities can be easily studied. Simulation examples show how the tool is utilized for the efficient design of open distributed energy management systems.
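The modeling idea above, characterizing an application task only by its CPU time and message size and letting an event queue advance simulated time, can be sketched in a few lines. The parameters and the single-link network model are invented for illustration.

```python
# Toy discrete-event sketch of performance estimation: tasks are released at
# given times and served sequentially, paying CPU time plus serialization
# delay on a shared network link. All parameters are hypothetical.
import heapq

def simulate(tasks, bandwidth_bps=1e6, cpu_factor=1.0):
    """tasks: list of (release_time, cpu_seconds, message_bytes).
    Returns the finish time of each task in service order."""
    events = list(tasks)
    heapq.heapify(events)                 # order tasks by release time
    clock, finishes = 0.0, []
    while events:
        release, cpu, msg = heapq.heappop(events)
        clock = max(clock, release)       # wait for the task to be released
        clock += cpu * cpu_factor         # CPU service time
        clock += msg * 8 / bandwidth_bps  # serialize the message on the link
        finishes.append(clock)
    return finishes

# One heavy task (0.5 s CPU, 125 kB message) and one light task released later.
finishes = simulate([(0.0, 0.5, 125000), (0.2, 0.1, 0)])
```

Varying `bandwidth_bps` or `cpu_factor` mimics studying different platform configurations, which is the kind of trade-off study the tool is built for.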
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan
A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.
The open research system: a web-based metadata and data repository for collaborative research
Charles M. Schweik; Alexander Stepanov; J. Morgan Grove
2005-01-01
Beginning in 1999, a web-based metadata and data repository we call the "open research system" (ORS) was designed and built to assist geographically distributed scientific research teams. The purpose of this innovation was to promote the open sharing of data within and across organizational lines and across geographic distances. As the use of the system...
43 CFR 418.27 - Distribution system operation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... authorized employees or agents to open and close individual turnouts and operate the distribution system... variable field conditions, weather, etc., they must immediately notify the District so proper adjustments...
EPRI and Schneider Electric Demonstrate Distributed Resource Communications
The Electric Power Research Institute (EPRI) and Schneider Electric are designing, building, and testing flexible, open-source distributed resource communications spanning the Schneider Electric ADMS, open software platforms, and an open-platform home energy management system.
Learning from Multiple Collaborating Intelligent Tutors: An Agent-based Approach.
ERIC Educational Resources Information Center
Solomos, Konstantinos; Avouris, Nikolaos
1999-01-01
Describes an open distributed multi-agent tutoring system (MATS) and discusses issues related to learning in such open environments. Topics include modeling a one student-many teachers approach in a computer-based learning context; distributed artificial intelligence; implementation issues; collaboration; and user interaction. (Author/LRW)
Software Comparison for Renewable Energy Deployment in a Distribution Network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, David Wenzhong; Muljadi, Eduard; Tian, Tian
The main objective of this report is to evaluate different software options for performing robust distributed generation (DG) power system modeling. The features and capabilities of four simulation tools, OpenDSS, GridLAB-D, CYMDIST, and PowerWorld Simulator, are compared to analyze their effectiveness in analyzing distribution networks with DG. OpenDSS and GridLAB-D, both open-source software packages, have the capability to simulate networks with fluctuating data values. These packages allow a simulation to be run at each time instant by iterating only the main script file. CYMDIST, a commercial package, allows time-series simulation to study variations in network controls. PowerWorld Simulator, another commercial tool, has a batch-mode simulation function through the 'Time Step Simulation' tool, which obtains solutions for a list of specified time points. PowerWorld Simulator is intended for analysis of transmission-level systems, while the other three are designed for distribution systems. CYMDIST and PowerWorld Simulator feature easy-to-use graphical user interfaces (GUIs). OpenDSS and GridLAB-D, on the other hand, are based on command-line programs, which increase the time necessary to become familiar with the software packages.
NASA Astrophysics Data System (ADS)
Liang, J.; Sédillot, S.; Traverson, B.
1997-09-01
This paper addresses federation of a transactional object standard - the Object Management Group (OMG) object transaction service (OTS) - with the X/Open distributed transaction processing (DTP) model and the International Organization for Standardization (ISO) open systems interconnection (OSI) transaction processing (TP) communication protocol. The two-phase commit propagation rules within a distributed transaction tree are similar in the X/Open, ISO and OMG models. Building an OTS on an OSI TP protocol machine is possible because the two specifications are somewhat complementary: OTS defines a set of external interfaces without a specific internal protocol machine, while OSI TP specifies an internal protocol machine without any application programming interface. Given these observations, and having already implemented an X/Open two-phase commit transaction toolkit based on an OSI TP protocol machine, we analyse the feasibility of using this implementation as a transaction service provider for OMG interfaces. Based on the favourable result of this feasibility study, we are implementing an OTS-compliant system which, by building on the extensibility and openness of OSI TP, is able to provide interoperability between the X/Open DTP and OMG OTS models.
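The two-phase commit propagation rule shared by the X/Open, ISO and OMG models can be sketched generically: a coordinator asks every participant to prepare, and commits only if all vote yes. This is the abstract protocol, not the OSI TP wire format or the OTS interfaces.

```python
# Generic two-phase commit sketch. Participants are local objects; a real
# X/Open or OTS implementation would propagate these calls down a distributed
# transaction tree instead.

class Participant:
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1 vote: a participant that votes "no" aborts immediately.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit

    def finish(self, commit):
        # Phase 2: apply the coordinator's uniform outcome.
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants):
    # Phase 1: collect votes; any "no" vote dooms the whole transaction.
    decision = all(p.prepare() for p in participants)
    # Phase 2: propagate the single outcome to every participant.
    for p in participants:
        p.finish(decision)
    return decision

ok = two_phase_commit([Participant(True), Participant(True)])
vetoed = two_phase_commit([Participant(True), Participant(False)])
```

The key invariant is atomicity: after phase 2, every participant ends in the same state, which is exactly the property all three models' propagation rules are designed to guarantee.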
Energy Systems Integration News - September 2016 | Energy Systems
Smarter Grid Solutions demonstrated a new distributed energy resources (DER) software control platform, and OpenFMB distributed applications were run on the microgrid test site to locally optimize renewable energy resources.
NASA Astrophysics Data System (ADS)
Aydiner, Ekrem; Cherstvy, Andrey G.; Metzler, Ralf
2018-01-01
We study by Monte Carlo simulations a kinetic exchange trading model for both fixed and distributed saving propensities of the agents and rationalize the person and wealth distributions. We show that the newly introduced wealth distribution - which may be more amenable in certain situations - features a different power-law exponent, particularly for distributed saving propensities of the agents. For open agent-based systems, we analyze the person and wealth distributions and find that the presence of trap agents alters their amplitude, while leaving the scaling exponents nearly unaffected. For an open system, we show that the total wealth - for different trap agent densities and saving propensities of the agents - decreases in time according to the classical Kohlrausch-Williams-Watts stretched exponential law. Interestingly, this decay does not depend on the trap agent density, but rather on the saving propensities. The system relaxation is found to differ between fixed and distributed saving schemes.
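The underlying kinetic exchange dynamics with a fixed saving propensity can be sketched directly: in each trade a random pair of agents pools its non-saved wealth and splits it by a random fraction (the Chakraborti-Chakrabarti rule). Trap agents and distributed savings from the paper are omitted here for brevity.

```python
# Monte Carlo sketch of a kinetic exchange model with fixed saving propensity
# lam. Each trade conserves the pair's total wealth exactly.
import random

def trade_rounds(wealth, lam, rounds, rng):
    w = list(wealth)
    n = len(w)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        eps = rng.random()
        total = w[i] + w[j]
        pool = (1 - lam) * total        # wealth at stake in this trade
        w[i] = lam * w[i] + eps * pool  # agent i keeps savings + random share
        w[j] = total - w[i]             # pairwise conservation
    return w

rng = random.Random(42)
final = trade_rounds([1.0] * 100, lam=0.5, rounds=10000, rng=rng)
```

With lam = 0 this reduces to pure random exchange (exponential wealth distribution); positive saving propensities broaden the distribution, which is the regime the paper's power-law analysis concerns.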
OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing
NASA Astrophysics Data System (ADS)
Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping
2017-02-01
The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.
QuTiP: An open-source Python framework for the dynamics of open quantum systems
NASA Astrophysics Data System (ADS)
Johansson, J. R.; Nation, P. D.; Nori, Franco
2012-08-01
We present an object-oriented open-source framework for solving the dynamics of open quantum systems, written in Python. Arbitrary Hamiltonians, including time-dependent systems, may be built up from operators and states defined by a quantum object class, and then passed on to a choice of master equation or Monte Carlo solvers. We give an overview of the basic structure of the framework before detailing the numerical simulation of open system dynamics. Several examples are given to illustrate the build-up to a complete calculation. Finally, we measure the performance of our library against that of current implementations. The framework described here is particularly well suited to the fields of quantum optics, superconducting circuit devices, nanomechanics, and trapped ions, while also being ideal for use in classroom instruction.
Catalogue identifier: AEMB_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMB_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License, version 3
No. of lines in distributed program, including test data, etc.: 16 482
No. of bytes in distributed program, including test data, etc.: 213 438
Distribution format: tar.gz
Programming language: Python
Computer: i386, x86-64
Operating system: Linux, Mac OSX, Windows
RAM: 2+ Gigabytes
Classification: 7
External routines: NumPy (http://numpy.scipy.org/), SciPy (http://www.scipy.org/), Matplotlib (http://matplotlib.sourceforge.net/)
Nature of problem: Dynamics of open quantum systems.
Solution method: Numerical solutions to the Lindblad master equation or the Monte Carlo wave-function method.
Restrictions: Problems must meet the criteria for using the master equation in Lindblad form.
Running time: A few seconds up to several tens of minutes, depending on the size of the underlying Hilbert space.
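The kind of problem the solvers above address can be reduced to a minimal NumPy sketch: Lindblad evolution of a decaying qubit, drho/dt = gamma (L rho L+ - 1/2 {L+L, rho}) with L the lowering operator, integrated by forward Euler. Real QuTiP usage would go through its master-equation solver rather than this hand-rolled loop.

```python
# Minimal Lindblad evolution of a qubit decaying from the excited state.
# Forward-Euler integration; QuTiP provides far more accurate solvers.
import numpy as np

gamma, dt, steps = 1.0, 1e-3, 1000
L = np.array([[0, 1], [0, 0]], dtype=complex)    # lowering operator sigma_minus
rho = np.array([[0, 0], [0, 1]], dtype=complex)  # start in excited state |1><1|

for _ in range(steps):
    Ld = L.conj().T
    diss = L @ rho @ Ld - 0.5 * (Ld @ L @ rho + rho @ Ld @ L)
    rho = rho + dt * gamma * diss                # forward-Euler step

excited_pop = rho[1, 1].real                     # should track exp(-gamma * t)
```

The dissipator is trace-free, so the evolution preserves Tr(rho) = 1, and the excited-state population decays exponentially at rate gamma, which is the standard amplitude-damping benchmark.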
Empirical tests of Zipf's law mechanism in open source Linux distribution.
Maillart, T; Sornette, D; Spaeth, S; von Krogh, G
2008-11-21
Zipf's power law is a ubiquitous empirical regularity found in many systems, thought to result from proportional growth. Here, we establish empirically the usually assumed ingredients of stochastic growth models that have been previously conjectured to be at the origin of Zipf's law. We use exceptionally detailed data on the evolution of open source software projects in Linux distributions, which offer a remarkable example of a growing complex self-organizing adaptive system, exhibiting Zipf's law over four full decades.
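The proportional-growth ingredient the paper establishes empirically can be sketched with a Yule-Simon process: each new unit either founds a new package or attaches to an existing package with probability proportional to its size. The parameters below are illustrative, not fitted to the Linux data.

```python
# Yule-Simon proportional growth sketch: the mechanism conjectured to
# generate Zipf's law in package sizes. Parameters are illustrative.
import random

def simon_growth(units, new_prob, rng):
    sizes = [1]
    for _ in range(units - 1):
        if rng.random() < new_prob:
            sizes.append(1)           # a brand-new package of size one
        else:
            # preferential attachment: pick a unit uniformly at random,
            # so a package is chosen with probability proportional to size
            r = rng.randrange(sum(sizes))
            for i, s in enumerate(sizes):
                r -= s
                if r < 0:
                    sizes[i] += 1
                    break
    return sorted(sizes, reverse=True)

sizes = simon_growth(2000, 0.1, random.Random(7))
```

Plotting rank against size on log-log axes for a run like this produces the approximately straight line characteristic of Zipf's law, with a slope controlled by the innovation probability.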
Grid Integrated Distributed PV (GridPV) Version 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reno, Matthew J.; Coogan, Kyle
2014-12-01
This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.
ERIC Educational Resources Information Center
Jonach, Rafael; Ebner, Martin; Grigoriadis, Ypatios
2015-01-01
Lectures of courses at universities are increasingly being recorded and offered through various distribution channels to support students' learning activities. This research work aims to create an automatic system for producing and distributing high quality lecture recordings. Opencast Matterhorn is an open source platform for automated video…
Tomograms for open quantum systems: In(finite) dimensional optical and spin systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thapliyal, Kishore, E-mail: tkishore36@yahoo.com; Banerjee, Subhashish, E-mail: subhashish@iitj.ac.in; Pathak, Anirban, E-mail: anirban.pathak@gmail.com
Tomograms are obtained as probability distributions and are used to reconstruct a quantum state from experimentally measured values. We study the evolution of tomograms for different quantum systems, both finite and infinite dimensional. In realistic experimental conditions, quantum states are exposed to the ambient environment and hence subject to effects like decoherence and dissipation, which are dealt with here, consistently, using the formalism of open quantum systems. This is extremely relevant from the perspective of experimental implementation and issues related to state reconstruction in quantum computation and communication. These considerations are also expected to affect the quasiprobability distribution obtained from experimentally generated tomograms and the nonclassicality observed from them.
Highlights:
• Tomograms are constructed for open quantum systems.
• Finite and infinite dimensional quantum systems are studied.
• Finite dimensional systems (phase states, single and two-qubit spin states) are studied.
• A dissipative harmonic oscillator is considered as an infinite dimensional system.
• Both pure dephasing and dissipation effects are studied.
Memory Effects and Nonequilibrium Correlations in the Dynamics of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Morozov, V. G.
2018-01-01
We propose a systematic approach to the dynamics of open quantum systems in the framework of Zubarev's nonequilibrium statistical operator method. The approach is based on the relation between ensemble means of the Hubbard operators and the matrix elements of the reduced statistical operator of an open quantum system. This key relation allows deriving master equations for open systems following a scheme conceptually identical to the scheme used to derive kinetic equations for distribution functions. The advantage of the proposed formalism is that some relevant dynamical correlations between an open system and its environment can be taken into account. To illustrate the method, we derive a non-Markovian master equation containing the contribution of nonequilibrium correlations associated with energy conservation.
ERIC Educational Resources Information Center
Hoppe, H. Ulrich
2016-01-01
The 1998 paper by Martin Mühlenbrock, Frank Tewissen, and myself introduced a multi-agent architecture and a component engineering approach for building open distributed learning environments to support group learning in different types of classroom settings. It took up prior work on "multiple student modeling" as a method to configure…
Yan, Chongjun; Tang, Jiafu; Jiang, Bowen; Fung, Richard Y K
2015-01-01
This paper compares the performance measures of traditional appointment scheduling (AS) with those of an open-access appointment scheduling (OA-AS) system with exponentially distributed service times. A queueing model is formulated for the traditional AS system with no-show probability. The OA-AS models assume that all patients who call before the session begins will show up for the appointment on time. Two types of OA-AS systems are considered: with a same-session policy and with a same-or-next-session policy. Numerical results indicate that the superiority of OA-AS systems is not as obvious as it is under deterministic scenarios. The same-session system has a threshold of relative waiting cost, above which the traditional system always has higher total costs, and the same-or-next-session system is always preferable, except when the no-show probability or the weight of patients' waiting is low. It is concluded that open-access policies can be viewed as alternative approaches to mitigate the negative effects of no-show patients.
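The comparison described above can be explored with a small Monte Carlo sketch of the traditional system. The slot length, service rate, and no-show probability below are illustrative assumptions, not the paper's parameter values.

```python
import random

def simulate_session(n_slots=10, slot_len=1.0, mean_service=1.0,
                     no_show=0.3, n_runs=5000, seed=1):
    """Average waiting time of served patients in a traditional
    appointment system: punctual scheduled arrivals, exponentially
    distributed service times, independent no-shows."""
    rng = random.Random(seed)
    total_wait, n_served = 0.0, 0
    for _ in range(n_runs):
        server_free = 0.0
        for k in range(n_slots):
            if rng.random() < no_show:
                continue                  # patient does not show up
            arrival = k * slot_len        # punctual scheduled arrival
            start = max(arrival, server_free)
            total_wait += start - arrival
            server_free = start + rng.expovariate(1.0 / mean_service)
            n_served += 1
    return total_wait / n_served

avg_wait = simulate_session()
```

Lowering the no-show probability increases congestion and hence the mean wait, which is one side of the cost trade-off the paper quantifies.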
NASA Astrophysics Data System (ADS)
Amme, J.; Pleßmann, G.; Bühler, J.; Hülk, L.; Kötter, E.; Schwaegerl, P.
2018-02-01
The increasing integration of renewable energy into the electricity supply system creates new challenges for distribution grids. The planning and operation of distribution systems requires appropriate grid models that consider the heterogeneity of existing grids. In this paper, we describe a novel method to generate synthetic medium-voltage (MV) grids, which we applied in our DIstribution Network GeneratOr (DINGO). DINGO is open-source software and uses freely available data. Medium-voltage grid topologies are synthesized based on location and electricity demand in defined demand areas. For this purpose, we use GIS data containing demand areas with high-resolution spatial data on physical properties, land use, energy, and demography. The grid topology is treated as a capacitated vehicle routing problem (CVRP) combined with a local-search metaheuristic. We also consider the current planning principles for MV distribution networks, paying special attention to line congestion and voltage limit violations. In the modelling process, we included power flow calculations for validation. The resulting grid model datasets contain 3608 synthetic MV grids in high resolution, covering all of Germany and taking local characteristics into account. We compared the modelled networks with real network data. In terms of the number of transformers and total cable length, we conclude that the method presented in this paper generates realistic grids that could be used to implement a cost-optimised electrical energy system.
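DINGO treats topology synthesis as a capacitated vehicle routing problem. As a rough illustration of the CVRP core (unrelated to DINGO's actual code or data model), here is a nearest-neighbour construction heuristic of the kind a local search would subsequently improve:

```python
import math

def greedy_cvrp(depot, customers, demand, capacity):
    """Nearest-neighbour construction heuristic for the capacitated
    vehicle routing problem: extend each route to the closest unserved
    customer that still fits within the remaining vehicle capacity."""
    unserved = set(customers)
    routes = []
    while unserved:
        route, load, pos = [], 0.0, depot
        while True:
            feasible = [c for c in unserved
                        if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: math.dist(pos, c))
            route.append(nxt)
            load += demand[nxt]
            unserved.discard(nxt)
            pos = nxt
        if not route:
            raise ValueError("a customer's demand exceeds capacity")
        routes.append(route)
    return routes

# toy instance: a depot at the origin and four demand points
pts = [(1, 0), (2, 0), (0, 3), (0, 4)]
routes = greedy_cvrp((0, 0), pts, {p: 1.0 for p in pts}, capacity=2.0)
```

In the grid-modelling analogy, the depot plays the role of the substation, customers are load areas, and the capacity constraint stands in for feeder loading limits.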
Edge Vortex Flow Due to Inhomogeneous Ion Concentration
NASA Astrophysics Data System (ADS)
Sugioka, Hideyuki
2017-04-01
The ion distribution of an open parallel electrode system is not known even though it is often used to measure the electrical characteristics of an electrolyte. Thus, for an open electrode system, we perform a non-steady direct multiphysics simulation based on the coupled Poisson-Nernst-Planck and Stokes equations and find that inhomogeneous ion concentrations at edges cause vortex flows and suppress the anomalous increase in the ion concentration near the electrodes. A surprising aspect of our findings is that the large vortex flows at the edges approximately maintain the ion-conserving condition, and thus the ion distribution of an open electrode system can be approximated by the solution of a closed electrode system that considers the ion-conserving condition rather than the Gouy-Chapman solution, which neglects the ion-conserving condition. We believe that our findings make a significant contribution to the understanding of surface science.
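The full simulation couples the Poisson-Nernst-Planck and Stokes equations; the much-reduced sketch below captures only the ion-conservation aspect emphasized in the abstract: a single ion species drifting in a fixed field between no-flux walls, written in conservative (flux) form so that the total ion content is preserved exactly. All parameter values are arbitrary illustrations.

```python
import numpy as np

def drift_diffusion_step(c, E, D, mu, dx, dt):
    """One explicit step of a 1D Nernst-Planck-type equation
    dc/dt = -d/dx (mu*E*c - D*dc/dx) with zero-flux walls."""
    # fluxes at interior cell faces (central averages/differences)
    c_face = 0.5 * (c[1:] + c[:-1])
    E_face = 0.5 * (E[1:] + E[:-1])
    flux = mu * E_face * c_face - D * (c[1:] - c[:-1]) / dx
    flux = np.concatenate(([0.0], flux, [0.0]))  # no-flux boundaries
    return c - dt / dx * (flux[1:] - flux[:-1])

n, dx, dt = 100, 0.01, 1e-5
c = np.ones(n)        # uniform initial ion concentration
E = np.ones(n)        # fixed applied field (a simplifying assumption)
for _ in range(2000):
    c = drift_diffusion_step(c, E, D=0.1, mu=0.5, dx=dx, dt=dt)
```

Because the update is in flux form with zero wall fluxes, the discrete ion content is conserved to round-off, which is the closed-system, ion-conserving condition the abstract argues approximates the open electrode system.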
NASA Astrophysics Data System (ADS)
Yetman, G.; Downs, R. R.
2011-12-01
Software deployment is needed to process and distribute scientific data throughout the data lifecycle. Developing software in-house can take software development teams away from other software development projects and can require efforts to maintain the software over time. Adopting and reusing software and system modules that have been previously developed by others can reduce in-house software development and maintenance costs and can contribute to the quality of the system being developed. A variety of models are available for reusing and deploying software and systems that have been developed by others. These deployment models include open source software, vendor-supported open source software, commercial software, and combinations of these approaches. Deployment in Earth science data processing and distribution has demonstrated the advantages and drawbacks of each model. Deploying open source software offers advantages for developing and maintaining scientific data processing systems and applications. By joining an open source community that is developing a particular system module or application, a scientific data processing team can contribute to aspects of the software development without having to commit to developing the software alone. Communities of interested developers can share the work while focusing on activities that utilize in-house expertise and address internal requirements. Maintenance is also shared by members of the community. Deploying vendor-supported open source software offers similar advantages to open source software. However, by procuring the services of a vendor, the in-house team can rely on the vendor to provide, install, and maintain the software over time. Vendor-supported open source software may be ideal for teams that recognize the value of an open source software component or application and would like to contribute to the effort, but do not have the time or expertise to contribute extensively.
Vendor-supported software may also have the additional benefits of guaranteed up-time, bug fixes, and vendor-added enhancements. Deploying commercial software can be advantageous for obtaining system or software components offered by a vendor that meet in-house requirements. The vendor can be contracted to provide installation, support and maintenance services as needed. Combining these options offers a menu of choices, enabling selection of system components or software modules that meet the evolving requirements encountered throughout the scientific data lifecycle.
Ontology-Based Peer Exchange Network (OPEN)
ERIC Educational Resources Information Center
Dong, Hui
2010-01-01
In current Peer-to-Peer networks, distributed and semantic-free indexing is widely used by systems adopting "Distributed Hash Table" ("DHT") mechanisms. Although such systems typically solve a user query rather fast in a deterministic way, they only support a very narrow search scheme, namely the exact hash key match. Furthermore, DHT systems put…
Open Quantum Walks with Noncommuting Jump Operators
NASA Astrophysics Data System (ADS)
Caballar, Roland Cristopher; Petruccione, Francesco; Sinayskiy, Ilya
2014-03-01
We examine homogeneous open quantum walks along a line, wherein each forward step is due to one quantum jump operator, and each backward step due to another quantum jump operator. We assume that these two quantum jump operators do not commute with each other. We show that if the system has N internal degrees of freedom, for particular forms of these quantum jump operators, we can obtain exact probability distributions which fall into two distinct classes, namely Gaussian distributions and solitonic distributions. We also show that it is possible for a maximum of 2 solitonic distributions to be present simultaneously in the system. Finally, we consider applications of these classes of jump operators in quantum state preparation and quantum information. We acknowledge support from the National Institute for Theoretical Physics (NITheP).
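The walks described above can be prototyped directly: at each step the density matrix at site x is pushed one step right via B·ρ·B† and one step left via C·ρ·C†, with B†B + C†C = I guaranteeing probability conservation. The particular noncommuting pair below is an arbitrary illustration, not one of the forms for which the paper derives exact distributions.

```python
import numpy as np

def oqw_step(rho, B, C):
    """One step of a homogeneous open quantum walk on a line: each site
    sends B rho B^dagger one step right and C rho C^dagger one step left."""
    new = {}
    for x, r in rho.items():
        new[x + 1] = new.get(x + 1, 0) + B @ r @ B.conj().T
        new[x - 1] = new.get(x - 1, 0) + C @ r @ C.conj().T
    return new

# noncommuting jump operators satisfying B^dagger B + C^dagger C = I:
# scaled rotation (forward) and scaled Hadamard (backward)
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
H2 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
B, C = np.sqrt(0.5) * R, np.sqrt(0.5) * H2

rho = {0: np.array([[1, 0], [0, 0]], dtype=complex)}  # walker at origin
for _ in range(20):
    rho = oqw_step(rho, B, C)
probs = {x: np.trace(r).real for x, r in rho.items()}
```

Tracing out the internal (2-dimensional) degree of freedom at each site yields the position distribution whose Gaussian or solitonic character the paper classifies.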
NASA Technical Reports Server (NTRS)
Isar, Aurelian
1995-01-01
The harmonic oscillator with dissipation is studied within the framework of the Lindblad theory for open quantum systems. By using the Wang-Uhlenbeck method, the Fokker-Planck equation, obtained from the master equation for the density operator, is solved for the Wigner distribution function, subject to either the Gaussian type or the delta-function type of initial conditions. The obtained Wigner functions are two-dimensional Gaussians with different widths. Then a closed expression for the density operator is extracted. The entropy of the system is subsequently calculated and its temporal behavior shows that this quantity relaxes to its equilibrium value.
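The qualitative behaviour reported above — Gaussian Wigner functions whose widths relax while the entropy approaches its equilibrium value — can be illustrated with a simple assumed relaxation law. The exponential form and rates below are illustrative stand-ins, not the paper's exact solution of the Fokker-Planck equation.

```python
import numpy as np

def gaussian_width(t, var0, var_eq, lam):
    """Assumed exponential relaxation of a Gaussian Wigner-function
    variance toward its equilibrium value (damping rate lam)."""
    return var_eq + (var0 - var_eq) * np.exp(-2.0 * lam * t)

def gaussian_entropy(var_q, var_p):
    """Entropy surrogate for an uncorrelated Gaussian state; it grows
    monotonically with the occupied phase-space area."""
    return 0.5 * np.log(var_q * var_p)

t = np.linspace(0.0, 10.0, 200)
var_q = gaussian_width(t, var0=0.1, var_eq=1.0, lam=0.5)
var_p = gaussian_width(t, var0=0.1, var_eq=1.0, lam=0.5)
S = gaussian_entropy(var_q, var_p)
```

Starting from a squeezed (narrow) initial Gaussian, the entropy rises monotonically and saturates, mirroring the relaxation to equilibrium described in the abstract.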
Using WNTR to Model Water Distribution System Resilience
The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of di...
Open Source Service Agent (OSSA) in the intelligence community's Open Source Architecture
NASA Technical Reports Server (NTRS)
Fiene, Bruce F.
1994-01-01
The Community Open Source Program Office (COSPO) has developed an architecture for the intelligence community's new Open Source Information System (OSIS). The architecture is a multi-phased program featuring connectivity, interoperability, and functionality. OSIS is based on a distributed architecture concept. The system is designed to function as a virtual entity. OSIS will be a restricted (non-public), user configured network employing Internet communications. Privacy and authentication will be provided through firewall protection. Connection to OSIS can be made through any server on the Internet or through dial-up modems provided the appropriate firewall authentication system is installed on the client.
Code of Federal Regulations, 2012 CFR
2012-10-01
... header or manifold. SMYS means specified minimum yield strength is: (1) For steel pipe manufactured in... the waters from the mean high water mark of the coast of the Gulf of Mexico and its inlets open to the... water. High-pressure distribution system means a distribution system in which the gas pressure in the...
Code of Federal Regulations, 2011 CFR
2011-10-01
... header or manifold. SMYS means specified minimum yield strength is: (1) For steel pipe manufactured in... the waters from the mean high water mark of the coast of the Gulf of Mexico and its inlets open to the... water. High-pressure distribution system means a distribution system in which the gas pressure in the...
Code of Federal Regulations, 2013 CFR
2013-10-01
... header or manifold. SMYS means specified minimum yield strength is: (1) For steel pipe manufactured in... the waters from the mean high water mark of the coast of the Gulf of Mexico and its inlets open to the... water. High-pressure distribution system means a distribution system in which the gas pressure in the...
Code of Federal Regulations, 2014 CFR
2014-10-01
... header or manifold. SMYS means specified minimum yield strength is: (1) For steel pipe manufactured in... the waters from the mean high water mark of the coast of the Gulf of Mexico and its inlets open to the... water. High-pressure distribution system means a distribution system in which the gas pressure in the...
Code of Federal Regulations, 2010 CFR
2010-10-01
... header or manifold. SMYS means specified minimum yield strength is: (1) For steel pipe manufactured in... the waters from the mean high water mark of the coast of the Gulf of Mexico and its inlets open to the... water. High-pressure distribution system means a distribution system in which the gas pressure in the...
The investigation of the lateral interaction effect's on traffic flow behavior under open boundaries
NASA Astrophysics Data System (ADS)
Bouadi, M.; Jetto, K.; Benyoussef, A.; El Kenz, A.
2017-11-01
In this paper, a traffic flow system with open boundaries is studied, taking into account lateral interaction with spatial defects. For a random defect distribution, if the vehicle velocities are weakly correlated, the traffic phases can be predicted from the corresponding inflow and outflow functions. Conversely, if the vehicle velocities are strongly correlated, a phase segregation appears in the system's bulk, which induces the appearance of the maximum current. This velocity correlation depends mainly on the defect densities and the probabilities of lateral deceleration. However, for a compact defect distribution, the traffic phases are predictable using the inflow at the system entrance, the inflow entering the defect zone, and the outflow function.
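Open-boundary traffic with a defect zone can be prototyped as a totally asymmetric simple exclusion process (TASEP) with site-dependent hopping rates, a common minimal stand-in for such studies. All rates and the defect geometry below are arbitrary choices, not the authors' model.

```python
import random

def tasep_step(sites, hop, alpha, beta, rng):
    """One random-sequential sweep of an open-boundary TASEP with
    site-dependent hop probabilities (defect sites are slower)."""
    n = len(sites)
    for _ in range(n + 1):
        i = rng.randrange(-1, n)
        if i == -1:                       # injection at the left edge
            if not sites[0] and rng.random() < alpha:
                sites[0] = 1
        elif i == n - 1:                  # extraction at the right edge
            if sites[i] and rng.random() < beta:
                sites[i] = 0
        elif sites[i] and not sites[i + 1] and rng.random() < hop[i]:
            sites[i], sites[i + 1] = 0, 1  # hop forward if cell is free

rng = random.Random(0)
n = 200
hop = [0.5 if 80 <= i < 120 else 1.0 for i in range(n)]  # defect zone
sites = [0] * n
for _ in range(5000):
    tasep_step(sites, hop, alpha=0.3, beta=0.8, rng=rng)
density = sum(sites) / n
```

Varying the injection rate alpha and extraction rate beta against the defect-limited current reproduces the kind of boundary-induced phase behaviour the abstract analyzes.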
A History of the Andrew File System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brashear, Derrick
2011-02-22
Derrick Brashear and Jeffrey Altman will present a technical history of the evolution of the Andrew File System, starting with the early days of the Andrew Project at Carnegie Mellon, through the commercialization by Transarc Corporation and IBM, and a decade of OpenAFS. The talk will be technical, with a focus on the various decisions and implementation trade-offs that were made over the course of AFS versions 1 through 4, the development of the Distributed Computing Environment Distributed File System (DCE DFS), and the course of the OpenAFS development community. The speakers will also discuss the various AFS branches developed at the University of Michigan, the Massachusetts Institute of Technology, and Carnegie Mellon University.
Research on Closed Residential Area Based on Balanced Distribution Theory
NASA Astrophysics Data System (ADS)
Lan, Si; Fang, Ni; Lin, Hai Peng; Ye, Shi Qi
2018-06-01
With the promotion of the open-street policy, residential quarters and unit compounds are gradually opening. In this paper, a traffic flow relationship is established for external roads, and a road resistance model is established for internal roads. We propose a balanced distribution model considering both road opening conditions and the traffic flow inside and outside the district, and quantitatively analyze the impact of opening closed residential areas on the surrounding roads. Finally, feasible suggestions are put forward to improve the traffic situation and optimize the network structure.
Kramer, Tobias; Noack, Matthias; Reinefeld, Alexander; Rodríguez, Mirta; Zelinskyy, Yaroslav
2018-06-11
Time- and frequency-resolved optical signals provide insights into the properties of light-harvesting molecular complexes, including excitation energies, dipole strengths and orientations, as well as in the exciton energy flow through the complex. The hierarchical equations of motion (HEOM) provide a unifying theory, which allows one to study the combined effects of system-environment dissipation and non-Markovian memory without making restrictive assumptions about weak or strong couplings or separability of vibrational and electronic degrees of freedom. With increasing system size the exact solution of the open quantum system dynamics requires memory and compute resources beyond a single compute node. To overcome this barrier, we developed a scalable variant of HEOM. Our distributed memory HEOM, DM-HEOM, is a universal tool for open quantum system dynamics. It is used to accurately compute all experimentally accessible time- and frequency-resolved processes in light-harvesting molecular complexes with arbitrary system-environment couplings for a wide range of temperatures and complex sizes. © 2018 Wiley Periodicals, Inc.
Entropy generation in biophysical systems
NASA Astrophysics Data System (ADS)
Lucia, U.; Maino, G.
2013-03-01
Recently, in theoretical biology and in biophysical engineering the entropy production has been verified to approach asymptotically its maximum rate, by using the probability of individual elementary modes distributed in accordance with the Boltzmann distribution. The basis of this approach is the hypothesis that the entropy production rate is maximum at the stationary state. In the present work, this hypothesis is explained and motivated, starting from the entropy generation analysis. This latter quantity is obtained from the entropy balance for open systems considering the lifetime of the natural real process. The Lagrangian formalism is introduced in order to develop an analytical approach to the thermodynamic analysis of the open irreversible systems. The stationary conditions of the open systems are thus obtained in relation to the entropy generation and the least action principle. Consequently, the considered hypothesis is analytically proved and it represents an original basic approach in theoretical and mathematical biology and also in biophysical engineering. It is worth remarking that the present results show that entropy generation not only increases but increases as fast as possible.
An OpenACC-Based Unified Programming Model for Multi-accelerator Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jungwon; Lee, Seyong; Vetter, Jeffrey S
2015-01-01
This paper proposes a novel SPMD programming model of OpenACC. Our model integrates the different granularities of parallelism from vector-level parallelism to node-level parallelism into a single, unified model based on OpenACC. It allows programmers to write programs for multiple accelerators using a uniform programming model whether they are in shared or distributed memory systems. We implement a prototype of our model and evaluate its performance with a GPU-based supercomputer using three benchmark applications.
No degree of treatment will ensure the delivery of a safe water supply to the consumer's tap when the distribution system is subject to cross-connections, water pressure losses, frequent line breaks, open reservoirs, and infrastructure deterioration. In one recent U.S. outbreak, wate...
Content Management Middleware for the Support of Distributed Teaching
ERIC Educational Resources Information Center
Tsalapatas, Hariklia; Stav, John B.; Kalantzis, Christos
2004-01-01
eCMS is a web-based federated content management system for the support of distributed teaching based on an open, distributed middleware architecture for the publication, discovery, retrieval, and integration of educational material. The infrastructure supports the management of both standalone material and structured courses, as well as the…
Challenges of Using CSCL in Open Distributed Learning.
ERIC Educational Resources Information Center
Nilsen, Anders Grov; Instefjord, Elen J.
As a compulsory part of the study in Pedagogical Information Science at the University of Bergen and Stord/Haugesund College (Norway) during the spring term of 1999, students participated in a distributed group activity that provided experience on distributed collaboration and use of online groupware systems. The group collaboration process was…
MPPhys—A many-particle simulation package for computational physics education
NASA Astrophysics Data System (ADS)
Müller, Thomas
2014-03-01
In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which can be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 111327. No. of bytes in distributed program, including test data, etc.: 608411. Distribution format: tar.gz. Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: source code 4.5 MB; complete package 242 MB. Classification: 14, 16.9. External routines: OpenGL, OpenCL. Nature of problem: integrate N-body simulations and mass-spring models. Solution method: numerical integration of N-body simulations; 3D rendering via OpenGL. Running time: problem dependent.
NASA Astrophysics Data System (ADS)
Maksimov, P. P.; Tsvyk, A. I.; Shestopalov, V. P.
1985-10-01
The effect of local phase nonuniformities of the diffraction gratings and the field distribution of the open cavity on the electronic efficiency of a diffraction-radiation generator (DRG) is analyzed numerically on the basis of a self-consistent system of nonlinear stationary equations for the DRG. It is shown that the interaction power and efficiency of a DRG can be increased by the use of an open cavity with a nonuniform diffraction grating and a complex form of microwave field distribution over the interaction space.
NASA Astrophysics Data System (ADS)
Poat, M. D.; Lauret, J.; Betts, W.
2015-12-01
The STAR online computing infrastructure has become an intensive dynamic system used for first-hand data collection and analysis, resulting in a dense collection of data output. As we transitioned to our current state, inefficient, limited storage systems became an impediment to fast feedback for online shift crews. A centrally accessible, scalable, and redundant distributed storage system had become a necessity in this environment. OpenStack Swift Object Storage and Ceph Object Storage are two promising technologies, as community use and development have led to success elsewhere. In this contribution, OpenStack Swift and Ceph have been put to the test with single and parallel I/O tests emulating real-world scenarios for data processing and workflows. The Ceph file system storage, offering a POSIX-compliant file system mounted similarly to an NFS share, was of particular interest, as it aligned with our requirements and was retained as our solution. I/O performance tests run against the Ceph POSIX file system presented surprising results, indicating true potential for fast I/O and reliability. Historically, STAR's online compute farm has been used for job submission and first-hand data analysis. Reusing the online compute farm to maintain a storage cluster alongside job submission will be an efficient use of the current infrastructure.
24 CFR 3280.602 - Definitions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... opening of a door. Air gap (water distribution system) means the unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, water supplied appliances, or other device and the flood level rim of the receptacle...
24 CFR 3280.602 - Definitions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... opening of a door. Air gap (water distribution system) means the unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, plumbing fixture, water supplied appliances, or other device and the flood level rim of the receptacle...
OpenDanubia - An integrated, modular simulation system to support regional water resource management
NASA Astrophysics Data System (ADS)
Muerth, M.; Waldmann, D.; Heinzeller, C.; Hennicker, R.; Mauser, W.
2012-04-01
The already completed, multi-disciplinary research project GLOWA-Danube has developed a regional scale, integrated modeling system, which was successfully applied on the 77,000 km2 Upper Danube basin to investigate the impact of Global Change on both the natural and anthropogenic water cycle. At the end of the last project phase, the integrated modeling system was transferred into the open source project OpenDanubia, which now provides both the core system and all major model components to the general public. First, this will enable decision makers from government, business and management to use OpenDanubia as a tool for proactive management of water resources in the context of global change. Second, the model framework to support integrated simulations and all simulation models developed for OpenDanubia in the scope of GLOWA-Danube remain available for future developments and research questions. OpenDanubia allows for the investigation of water-related scenarios considering different ecological and economic aspects to support both scientists and policy makers in designing policies for sustainable environmental management. OpenDanubia is designed as a framework-based, distributed system. The model system couples spatially distributed physical and socio-economic processes during run-time, taking into account their mutual influence. To simulate the potential future impacts of Global Change on agriculture, industrial production, water supply, households and tourism businesses, so-called deep actor models are implemented in OpenDanubia. All important water-related fluxes and storages in the natural environment are implemented in OpenDanubia as spatially explicit, process-based modules. This includes the land surface water and energy balance, dynamic plant water uptake, groundwater recharge and flow, as well as river routing and reservoirs.
Although the complete system is relatively demanding in terms of data and hardware requirements, the modular structure and the generic core system (Core Framework, Actor Framework) allow application in new regions and the selection of a reduced number of modules for simulation. As part of the Open Source Initiative in GLOWA-Danube (opendanubia.glowa-danube.de), comprehensive documentation for the system installation was created, and both the program code of the framework and that of all major components is licensed under the GNU General Public License. In addition, some helpful programs and scripts necessary for the operation and processing of input and result datasets are provided.
NASA Astrophysics Data System (ADS)
Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.
2015-03-01
The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed state of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem, as many switching combinations are possible in a distribution system. Optimization techniques therefore play an important role in reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DG). The algorithm is tested on the IEEE 33-bus system installed with DGs, and the results are compared with a binary genetic algorithm. It is found that the binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, hence enhancing the performance of the radial distribution system. Results are found to be optimal when DGs are added to the test system, confirming the impact of DGs on the distribution system.
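The binary firefly mechanics referenced above can be sketched generically. The objective below is a hypothetical stand-in (Hamming distance to a target switch pattern) rather than a power-flow-based loss, and the move rule is one common discretization of firefly attraction.

```python
import random

def binary_firefly(loss, n_bits, n_fireflies=12, n_iter=80,
                   beta=0.6, alpha=0.05, seed=3):
    """Minimal binary firefly algorithm: each firefly copies bits from
    every brighter (lower-loss) firefly with probability beta, then
    flips bits at random with probability alpha; the best solution
    found so far is kept (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(n_fireflies)]
    best = min(pop, key=loss)[:]
    for _ in range(n_iter):
        fitness = [loss(x) for x in pop]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fitness[j] < fitness[i]:      # j is brighter
                    for k in range(n_bits):
                        if rng.random() < beta:
                            pop[i][k] = pop[j][k]
            for k in range(n_bits):              # random exploration
                if rng.random() < alpha:
                    pop[i][k] ^= 1
        cand = min(pop, key=loss)
        if loss(cand) < loss(best):
            best = cand[:]
    return best

# hypothetical objective: distance to a desired open/closed pattern
target = [1, 0, 1, 1, 0, 0, 1, 0]
best = binary_firefly(lambda x: sum(a != b for a, b in zip(x, target)),
                      n_bits=len(target))
```

In the reconfiguration setting, each bit would encode a tie/sectionalizing switch state, with the loss evaluated by a radial power flow subject to radiality and voltage constraints.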
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-29
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... proposes to revise its standards to reflect changes and updates for Express Mail[supreg] Open and Distribute and Priority Mail[supreg] Open and Distribute to improve efficiencies in processing and to control...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-24
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and... to reflect changes and updates for Express Mail[supreg] Open and Distribute and Priority Mail[supreg] Open and Distribute to improve efficiencies in processing and to control costs. DATES: Effective Date...
75 FR 56920 - Express Mail Open and Distribute and Priority Mail Open and Distribute
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-17
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and...] Open and Distribute containers. The Postal Service also proposes to revise the service commitment for Express Mail Open and Distribute as a guaranteed end of day product; and to add a five-pound minimum...
Dissipation and entropy production in open quantum systems
NASA Astrophysics Data System (ADS)
Majima, H.; Suzuki, A.
2010-11-01
A microscopic description of an open system is generally expressed by a Hamiltonian of the form Htot = Hsys + Henviron + Hsys-environ. We developed a microscopic theory of entropy and derived a general formula, the so-called "entropy-Hamiltonian relation" (EHR), that connects the entropy of the system to the interaction Hamiltonian Hsys-environ for a nonequilibrium open quantum system. To derive the EHR formula, we mapped the open quantum system to the representation space of the Liouville-space formulation, or thermo field dynamics (TFD), and thus worked on the representation space L := H ⊗ H̃, where H denotes the ordinary Hilbert space and H̃ its tilde conjugate. We show that the natural transformation (mapping) of nonequilibrium open quantum systems is accomplished within the theoretical structure of TFD. Using the obtained EHR formula, we also derived the equation of motion for the distribution function of the system. We demonstrated that, by knowing the microscopic description of the interaction, namely the specific form of Hsys-environ on the representation space L, the EHR formula enables us to evaluate the entropy of the system and to gain some information about entropy for nonequilibrium open quantum systems.
the System Modeling & Geospatial Data Science Group in the Strategic Energy Analysis Center. Publication: Oliveira, R. and Moreno, R. 2016. Harvesting, Integrating and Distributing Large Open Geospatial Datasets Using Free and Open-Source Software. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLI-B7
Interactive Video-Based Industrial Training in Basic Electronics.
ERIC Educational Resources Information Center
Mirkin, Barry
The Wisconsin Foundation for Vocational, Technical, and Adult Education is currently involved in the development, implementation, and distribution of a sophisticated interactive computer and video learning system. Designed to offer trainees an open entry and open exit opportunity to pace themselves through a comprehensive competency-based,…
The Integration of CloudStack and OCCI/OpenNebula with DIRAC
NASA Astrophysics Data System (ADS)
Méndez Muñoz, Víctor; Fernández Albor, Víctor; Graciani Diaz, Ricardo; Casajús Ramo, Adriàn; Fernández Pena, Tomás; Merino Arévalo, Gonzalo; José Saborido Silva, Juan
2012-12-01
The increasing availability of Cloud resources is arising as a realistic alternative to the Grid as a paradigm for enabling scientific communities to access large distributed computing resources. The DIRAC framework for distributed computing is an easy way to efficiently access resources from both systems. This paper explains the integration of DIRAC with two open-source Cloud managers: OpenNebula (taking advantage of the OCCI standard) and CloudStack. These are computing tools to manage the complexity and heterogeneity of distributed data center infrastructures, allowing virtual clusters to be created on demand, including public, private and hybrid clouds. This approach has required the development of an extension to the previous DIRAC Virtual Machine engine, which was developed for Amazon EC2, allowing the connection with these new cloud managers. In the OpenNebula case, the development has been based on the CernVM Virtual Software Appliance with appropriate contextualization, while in the case of CloudStack, the infrastructure has been kept more general, which permits other Virtual Machine sources and operating systems to be used. In both cases, the CernVM File System has been used to facilitate software distribution to the computing nodes. With the resulting infrastructure, the cloud resources are transparent to the users through a friendly interface, such as the DIRAC Web Portal. The main purpose of this integration is to get a system that can manage cloud and grid resources at the same time. This particular feature pushes DIRAC to a new conceptual denomination as interware, integrating different middleware. Users from different communities do not need to care about the installation of the standard software that is available at the nodes, nor about the operating system of the host machine, which is transparent to the user. This paper presents an analysis of the overhead of the virtual layer, with tests comparing the proposed approach with the existing Grid solution.
License Notice: Published under licence in Journal of Physics: Conference Series by IOP Publishing Ltd.
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland
2003-01-01
In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
Open-Source Intelligence in the Czech Military: Knowledge System and Process Design
2002-06-01
Open-Source Intelligence (OSINT), as one of the intelligence disciplines, bears some of the general problems of the intelligence "business" ... ADAPTING KNOWLEDGE MANAGEMENT THEORY TO THE CZECH MILITARY INTELLIGENCE: Knowledge work is the core business of military intelligence. ... NAVAL POSTGRADUATE SCHOOL, Monterey, California. THESIS. Approved for public release; distribution is unlimited. OPEN-SOURCE INTELLIGENCE IN THE ...
A Disk-Based System for Producing and Distributing Science Products from MODIS
NASA Technical Reports Server (NTRS)
Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael
2007-01-01
Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.
Distributed Control Architecture for Gas Turbine Engine. Chapter 4
NASA Technical Reports Server (NTRS)
Culley, Dennis; Garg, Sanjay
2009-01-01
The transformation of engine control systems from centralized to distributed architecture is both necessary and enabling for future aeropropulsion applications. The continued growth of adaptive control applications and the trend to smaller, lightweight cores are counter influences on the weight and volume of control system hardware. A distributed engine control system using high-temperature electronics and open systems communications will reverse the growing ratio of control system weight to total engine weight and also be a major factor in decreasing the overall cost of ownership for aeropropulsion systems. The implementation of distributed engine control is not without significant challenges: the need for high-temperature electronics, the development of simple, robust communications, and power supply for the on-board electronics.
Web Service Distributed Management Framework for Autonomic Server Virtualization
NASA Astrophysics Data System (ADS)
Solomon, Bogdan; Ionescu, Dan; Litoiu, Marin; Mihaescu, Mircea
Virtualization for the x86 platform has recently imposed itself as a new technology that can improve the usage of machines in data centers and decrease the cost and energy of running a high number of servers. Similar to virtualization, autonomic computing, and more specifically self-optimization, aims to improve server farm usage through provisioning and deprovisioning of instances as needed by the system. Autonomic systems are able to determine the optimal number of server machines, real or virtual, to use at a given time, and to add or remove servers from a cluster in order to achieve optimal usage. While provisioning and deprovisioning of servers is very important, the way the autonomic system is built matters as well: a robust and open framework is needed. One such management framework is Web Service Distributed Management (WSDM), an open standard of the Organization for the Advancement of Structured Information Standards (OASIS). This paper presents an open framework built on top of the WSDM specification, which aims to provide self-optimization for application servers residing on virtual machines.
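The self-optimization loop described in this abstract can be sketched in a few lines. This is an illustrative toy, not the WSDM-based framework itself; the per-server capacity figure and the one-instance-per-cycle policy are assumptions:

```python
# Toy autonomic provisioning loop: choose the number of server instances
# needed to keep per-server load under a capacity target, adding or
# removing one instance per control cycle (all parameters hypothetical).
import math

def desired_instances(load, capacity_per_server, minimum=1):
    """Smallest cluster size that keeps each server at or under capacity."""
    return max(minimum, math.ceil(load / capacity_per_server))

def control_step(current, load, capacity=100.0):
    target = desired_instances(load, capacity)
    if target > current:
        return current + 1          # provision one VM
    if target < current:
        return current - 1          # deprovision one VM
    return current

# Rising then falling demand drives the cluster size up and back down.
sizes, n = [], 1
for load in (80, 250, 420, 300, 90):
    n = control_step(n, load)
    sizes.append(n)
print(sizes)
```

Changing one instance per cycle is a common damping choice: it avoids oscillation when the load estimate is noisy, at the cost of reacting to spikes over several cycles.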
Pryce, Joanna; Albertsen, Karen; Nielsen, Karina
2006-05-01
To evaluate the impact of an open-rota scheduling system on the health, work-life balance and job satisfaction of nurses working in a psychiatric ward in Denmark. The effects of shift rotation and scheduling are well known; however, little is known about the wider benefits of open-rota systems. A structured questionnaire was distributed to control and intervention groups preintervention and postintervention (20 months). Nurses within the intervention group trialed an open-rota system in which they designed their own work-rest schedules. Nurses in the intervention group reported that they were more satisfied with their work hours and less likely to swap their shifts when working within the open-rota system, and reported significant increases in work-life balance, job satisfaction, social support and community spirit when compared with nurses in the control groups. Ownership of and choice over work-rest schedules has benefits for nurses, and potentially for the hospital.
Proton beam therapy control system
Baumann, Michael A [Riverside, CA; Beloussov, Alexandre V [Bernardino, CA; Bakir, Julide [Alta Loma, CA; Armon, Deganit [Redlands, CA; Olsen, Howard B [Colton, CA; Salem, Dana [Riverside, CA
2008-07-08
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
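The channel-reduction idea behind the agent tier can be illustrated with a small sketch. This is not the patented implementation; the `Agent` class and its single shared upstream channel are hypothetical simplifications of the pattern:

```python
# Sketch of channel multiplexing: without an agent tier, each
# (client, device) pair would need its own socket; here all device traffic
# is routed through one agent that keeps a single upstream channel open.

class Agent:
    """Routes requests to many devices through one shared channel."""
    def __init__(self, devices):
        self.devices = devices      # device_id -> handler callable
        self.open_channels = 1      # one channel serves all devices

    def send(self, device_id, message):
        # Every request shares the agent's single channel instead of
        # opening a dedicated socket per device.
        handler = self.devices[device_id]
        return handler(message)

def make_device(name):
    # Stand-in for a hardware device endpoint.
    return lambda msg: f"{name}:ack:{msg}"

agent = Agent({i: make_device(f"dev{i}") for i in range(4)})
replies = [agent.send(i, "status") for i in range(4)]
print(len(replies), "devices serviced over", agent.open_channels, "channel")
```

A direct-connection design with 4 clients and 4 devices would hold up to 16 sockets; the agent/monitor tier collapses that to one channel per tier boundary, which is the scalability gain the abstract describes.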
Proton beam therapy control system
Baumann, Michael A.; Beloussov, Alexandre V.; Bakir, Julide; Armon, Deganit; Olsen, Howard B.; Salem, Dana
2010-09-21
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-06-25
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
Proton beam therapy control system
Baumann, Michael A; Beloussov, Alexandre V; Bakir, Julide; Armon, Deganit; Olsen, Howard B; Salem, Dana
2013-12-03
A tiered communications architecture for managing network traffic in a distributed system. Communication between client or control computers and a plurality of hardware devices is administered by agent and monitor devices whose activities are coordinated to reduce the number of open channels or sockets. The communications architecture also improves the transparency and scalability of the distributed system by reducing network mapping dependence. The architecture is desirably implemented in a proton beam therapy system to provide flexible security policies which improve patient safety and facilitate system maintenance and development.
NREL's Energy Systems Integration Supporting Facilities - Continuum
NREL's Energy Systems Integration Facility opened in December 2012 (photos by Dennis Schroeder, NREL). Its research electrical distribution bus (REDB) works as a power ...
NASA Astrophysics Data System (ADS)
Yu, Z. P.; Yue, Z. F.; Liu, W.
2018-05-01
With the development of artificial intelligence, reliability experts have increasingly recognized the role of subjective information in the reliability design of complex systems. Based on a limited number of experimental data and expert judgments, we divide reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. To illustrate this modification, we take information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and complete the reliability estimation for the opening function of a cabin door affected by imprecise judgments corresponding to the distribution hypothesis.
75 FR 72686 - Express Mail Open and Distribute and Priority Mail Open and Distribute
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-26
... POSTAL SERVICE 39 CFR Part 111 Express Mail Open and Distribute and Priority Mail Open and ... "DB" prefix along with new Tag 257, Tag 267, or Label 257S, on all Express Mail® Open and Distribute containers. The Postal Service is also revising the service commitment for Express Mail Open and ...
Experiences Using OpenMP Based on Compiler Directed Software DSM on a PC Cluster
NASA Technical Reports Server (NTRS)
Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland; Biegel, Bryan (Technical Monitor)
2002-01-01
In this work we report on our experiences running OpenMP (shared-memory parallel) programs on a commodity cluster of PCs (personal computers) running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS (NASA Advanced Supercomputing) Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lundstrom, Blake; Gotseff, Peter; Giraldez, Julieta
Continued deployment of renewable and distributed energy resources is fundamentally changing the way that electric distribution systems are controlled and operated; more sophisticated active system control and greater situational awareness are needed. Real-time measurements and distribution system state estimation (DSSE) techniques enable more sophisticated system control and, when combined with visualization applications, greater situational awareness. This paper presents a novel demonstration of a high-speed, real-time DSSE platform and related control and visualization functionalities, implemented using existing open-source software and distribution system monitoring hardware. Live scrolling strip charts of meter data and intuitive annotated map visualizations of the entire state (obtained via DSSE) of a real-world distribution circuit are shown. The DSSE implementation is validated to demonstrate provision of accurate voltage data. This platform allows for enhanced control and situational awareness using only a minimum quantity of distribution system measurement units and modest data and software infrastructure.
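The state estimation step at the heart of DSSE typically reduces to a weighted least-squares fit over redundant meter readings. The following toy two-node feeder (all readings, weights and the linear measurement model are assumed for illustration; this is not NREL's platform) shows the idea:

```python
# Weighted least-squares state estimation for a toy two-node feeder:
# states x = [V1, V2], three redundant measurements z = H x + noise,
# with weights w reflecting meter accuracy (all values hypothetical).

H = [[1.0,  0.0],   # meter on V1
     [0.0,  1.0],   # meter on V2
     [1.0, -1.0]]   # measured voltage drop V1 - V2
z = [1.02, 0.98, 0.05]       # per-unit readings
w = [100.0, 100.0, 25.0]     # higher weight = more trusted meter

# Normal equations (H^T W H) x = H^T W z, solved directly for 2 states.
A = [[sum(w[k] * H[k][i] * H[k][j] for k in range(3)) for j in range(2)]
     for i in range(2)]
b = [sum(w[k] * H[k][i] * z[k] for k in range(3)) for i in range(2)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
x = [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
     (b[1] * A[0][0] - b[0] * A[1][0]) / det]
print([round(v, 4) for v in x])  # estimated [V1, V2]
```

The estimate lands between the direct voltmeter readings and the implied drop measurement, weighted toward the more trusted meters, which is exactly how redundancy lets DSSE correct noisy field data.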
An Open Service Provider Concept for Enterprise Complex Automation
NASA Astrophysics Data System (ADS)
Ivaschenko, A. V.; Sitnikov, P. V.; Tanonykhina, M. O.
2017-01-01
The paper introduces a solution for IT services representation and management in the integrated information space of distributed enterprises. It is proposed to develop an Open Service Provider as a software platform for interaction between IT services providers and their users. Implementation of the proposed concept and approach is illustrated by an after-sales customer support system for a large manufacturing corporation delivered by SEC “Open Code”.
Managing multicentre clinical trials with open source.
Raptis, Dimitri Aristotle; Mettler, Tobias; Fischer, Michael Alexander; Patak, Michael; Lesurtel, Mickael; Eshmuminov, Dilmurodjon; de Rougemont, Olivier; Graf, Rolf; Clavien, Pierre-Alain; Breitenstein, Stefan
2014-03-01
Multicentre clinical trials are challenged by high administrative burden, data management pitfalls and costs. This leads to reduced enthusiasm and commitment of the physicians involved and thus to a reluctance to conduct multicentre clinical trials. The purpose of this study was to develop a web-based open source platform to support a multi-centre clinical trial. Using Drupal, an open source software platform distributed under the terms of the General Public License, we developed a web-based, multi-centre clinical trial management system following the design science research approach. The system was evaluated by user-testing, has supported several completed and ongoing clinical trials, and is available for free download. Open source clinical trial management systems are capable of supporting multi-centre clinical trials by enhancing efficiency, quality of data management and collaboration.
Switched Broadband Services For The Home
NASA Astrophysics Data System (ADS)
Sawyer, Don M.
1990-01-01
In considering the deployment of fiber optics to the residence, two critical questions arise: what are the leading services that could be offered to justify the required investment, and what is the nature of the business that would offer these services to the consumer? This talk addresses these two questions together with the related issue of how the "financial engine" of today's television distribution infrastructure, TV advertising, would be affected by an open access system based on fiber optics coupled with broadband switching. On the business side, the talk concludes that the potential for open-ended capacity expansion, fair competition between service providers, and new interactive services inherent in an open access, switched broadband system are the critical items differentiating it from existing video and TV distribution systems. On the question of broadband services, the talk highlights several new opportunities together with some findings from recent market research conducted by BNR. It shows that there are variations on existing services plus many new services that could be offered and which have real consumer appeal. The postulated open access system discussed here is visualized as having ultimately 1,000 to 2,000 video channels available to the consumer. Although this may appear to hopelessly fragment the TV audience and destroy the current TV advertising infrastructure, the technology of open access, switched broadband will present many new advertising techniques, which have the potential to be far more effective than those available today. Some of these techniques are described in this talk.
Secure communications using quantum cryptography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hughes, R.J.; Buttler, W.T.; Kwiat, P.G.
1997-08-01
The secure distribution of the secret random bit sequences known as "key" material is an essential precursor to their use for the encryption and decryption of confidential communications. Quantum cryptography is an emerging technology for secure key distribution with single-photon transmissions, which an eavesdropper cannot tap without detection (eavesdropping raises the key error rate above a threshold value). We have developed experimental quantum cryptography systems based on the transmission of non-orthogonal single-photon states to generate shared key material over multi-kilometer optical fiber paths and over line-of-sight links. In both cases, key material is built up using the transmission of a single photon per bit of an initial secret random sequence. A quantum-mechanically random subset of this sequence is identified, becoming the key material after a data reconciliation stage with the sender. In our optical fiber experiment we have performed quantum key distribution over 24 km of underground optical fiber using single-photon interference states, demonstrating that secure, real-time key generation over "open" multi-km node-to-node optical fiber communications links is possible. We have also constructed a quantum key distribution system for free-space, line-of-sight transmission using single-photon polarization states, which is currently undergoing laboratory testing. 7 figs.
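The step by which "a quantum-mechanically random subset of this sequence is identified" can be sketched for a BB84-style protocol: sender and receiver keep only the bits where their independently chosen measurement bases happen to match. This toy simulates the bases classically and omits transmission, eavesdropping checks and reconciliation entirely:

```python
# Toy BB84-style basis sifting (illustrative only, not the LANL system):
# bits measured in mismatched bases carry no shared information and are
# discarded after a public basis comparison; the rest become raw key.
import random

random.seed(7)                                  # reproducible demo
n = 32
bits       = [random.randint(0, 1) for _ in range(n)]  # sender's random bits
send_basis = [random.choice("+x") for _ in range(n)]   # sender's bases
recv_basis = [random.choice("+x") for _ in range(n)]   # receiver's bases

# Positions where the bases disagree are discarded; on average half survive.
sifted = [bits[i] for i in range(n) if send_basis[i] == recv_basis[i]]
print(len(sifted), "of", n, "bits survive sifting")
```

In the real systems described above, this sifted string is then error-corrected against the sender (data reconciliation) and privacy-amplified before use as key material.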
Suhanic, West; Crandall, Ian; Pennefather, Peter
2009-07-17
Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. 
The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatics model highlights how the larger issue of access to generic commoditized measurement, information processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive, but substantially equivalent to those currently in use in high-income countries.
OpenFLUID: an open-source software environment for modelling fluxes in landscapes
NASA Astrophysics Data System (ADS)
Fabre, Jean-Christophe; Rabotin, Michaël; Crevoisier, David; Libres, Aline; Dagès, Cécile; Moussa, Roger; Lagacherie, Philippe; Raclot, Damien; Voltz, Marc
2013-04-01
Integrative landscape functioning has become a common concept in environmental management. Landscapes are complex systems where many processes interact in time and space. In agro-ecosystems, these processes are mainly physical processes (including hydrological processes), biological processes and human activities. Modelling such systems requires an interdisciplinary approach, coupling models coming from different disciplines and developed by different teams. To support collaborative work involving many models coupled in time and space for integrative simulations, an open software modelling platform is a relevant answer. OpenFLUID is an open source software platform for modelling landscape functioning, mainly focused on spatial fluxes. It provides an advanced object-oriented architecture making it possible to i) couple models developed de novo or from existing source code, dynamically plugged into the platform, ii) represent landscapes as hierarchical graphs, taking into account multiple scales, spatial heterogeneities and the connectivity of landscape objects, and iii) run and explore simulations in many ways: through the OpenFLUID user interfaces (command line or graphical), or through external applications such as GNU R via the provided ROpenFLUID package. OpenFLUID is developed in C++ and relies on open source libraries only (Boost, libXML2, GLib/GTK, OGR/GDAL, …). For modelers and developers, OpenFLUID provides a dedicated environment for model development, based on an open source toolchain including the Eclipse editor, the GCC compiler and the CMake build system. OpenFLUID is distributed under the GPLv3 open source license, with a special exception allowing existing models licensed under any license to be plugged in. It is clearly in the spirit of sharing knowledge and favouring collaboration in a community of modelers.
OpenFLUID has been involved in many research applications, such as modelling of hydrological network transfer, diagnosis and prediction of water quality taking into account human activities, study of the effect of spatial organization on hydrological fluxes, and modelling of surface-subsurface water exchanges. At the LISAH research unit, OpenFLUID is the supporting development platform of the MHYDAS model, a distributed model for agrosystems (Moussa et al., 2002, Hydrological Processes, 16, 393-412). OpenFLUID web site: http://www.openfluid-project.org
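The core idea of landscape objects connected in a graph, with fluxes routed from upstream units to downstream ones, can be sketched independently of the OpenFLUID API. The unit names, topology and runoff values below are invented for illustration:

```python
# Sketch of flux routing over a landscape connectivity graph: two plots
# drain into a reach, which drains into a downstream reach (the outlet).
# Units are processed upstream-first so contributions accumulate downhill.

connections = {"plot1": "reach1", "plot2": "reach1",
               "reach1": "reach2", "reach2": None}   # None marks the outlet
runoff = {"plot1": 2.0, "plot2": 3.0, "reach1": 0.0, "reach2": 0.0}

inflow = dict.fromkeys(runoff, 0.0)                  # water received so far
outlet_flux = 0.0
for unit in ("plot1", "plot2", "reach1", "reach2"):  # upstream-first order
    total = runoff[unit] + inflow[unit]              # local + upstream water
    down = connections[unit]
    if down is None:
        outlet_flux = total
    else:
        inflow[down] += total
print("outlet flux:", outlet_flux)
```

In a platform like OpenFLUID, each unit would additionally carry its own plugged-in process model (infiltration, routing, uptake); the graph traversal above is only the connectivity skeleton.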
DEVELOPMENT OF A SORBENT DISTRIBUTION AND RECOVERY SYSTEM
This report describes the design, fabrication, and test of a prototype system for the recovery of spilled oil from the surface of river, estuarine, and harbor waters. The system utilizes an open cell polyurethane foam in small cubes to absorb the floating oil. The system is highl...
Altered Standards of Care: An Analysis of Existing Federal, State, and Local Guidelines
2011-12-01
Approved for public release; distribution is unlimited. ... data systems for communications and the transference of data. Losing data systems during disasters cuts off access to electronic medical records ... emergency procedures such as mouth-to-mouth resuscitation, external chest compression, electric shock, and insertion of a tube to open the patient's airway.
Approaches to Legacy System Evolution.
1997-12-01
... such as migrating legacy systems to more distributed, open environments. This framework draws out the important global issues early in the planning ... ongoing system evolution initiatives, for drawing out important global issues early in the planning cycle using the checklists as a guide, and for ...
ADMS State of the Industry and Gap Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.
2016-03-31
An advanced distribution management system (ADMS) is a platform for optimized distribution system operational management. This platform comprises distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), an outage management system (OMS), and a distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems (SCADA, geographic information systems (GISs), DMSs, AMRs/AMIs, OMSs, and DERMS) are discussed in this report. Their current-generation technologies are analyzed, and their integration with (or evolution toward) an ADMS technology is discussed. A state-of-the-art review and gap analysis of ADMS technology is also presented. Two technical gaps are observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge concerns essential ADMS applications, for instance fault location, isolation, and service restoration (FLISR) and volt-var optimization (VVO): there are relatively few ADMS application developers because the ADMS software platform is not open source. A third critical gap, while not technical in nature when compared with the two above, is still important to consider. The data models currently residing in utility GIS systems are incomplete, inaccurate, or both. These data are essential for planning and operations because they are typically one of the primary sources from which power system models are created. To achieve the full potential of ADMS, the ability to execute an accurate power flow solution is an important prerequisite. These critical gaps are hindering wider utility adoption of ADMS technology.
The development of an open architecture platform can eliminate many of these barriers and also aid seamless integration of distribution utility legacy systems with an ADMS.
Kasthurirathne, Suranga N; Mamlin, Burke; Grieve, Grahame; Biondich, Paul
2015-01-01
Interoperability is essential to address limitations caused by the ad hoc implementation of clinical information systems and the distributed nature of modern medical care. The HL7 V2 and V3 standards have played a significant role in ensuring interoperability for healthcare. FHIR is a next-generation standard created to address fundamental limitations in HL7 V2 and V3. FHIR is particularly relevant to OpenMRS, an open source medical record system widely used across emerging economies. FHIR has the potential to allow OpenMRS to move away from a bespoke, application-specific API to a standards-based API. We describe efforts to design and implement a FHIR-based API for the OpenMRS platform. Lessons learned from this effort were used to define long-term plans to transition from the legacy OpenMRS API to a FHIR-based API that greatly reduces the learning curve for developers and helps enhance adherence to standards.
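A standards-based API of the kind described here exchanges FHIR resources, most commonly as JSON over REST. The sketch below builds a minimal, hypothetical FHIR-style Patient resource (the field values are invented, and this is not OpenMRS's actual payload) and round-trips it through the JSON wire format:

```python
# Minimal FHIR-style Patient resource as JSON. The elements used
# (resourceType, name, gender, birthDate) follow the FHIR Patient
# structure; the concrete values are invented for illustration.
import json

patient = {
    "resourceType": "Patient",
    "id": "example",                                  # hypothetical id
    "name": [{"family": "Doe", "given": ["Jane"]}],   # HumanName element
    "gender": "female",
    "birthDate": "1980-04-01",
}

wire = json.dumps(patient)        # what a REST client would PUT/POST
decoded = json.loads(wire)        # what the server parses back
print(decoded["resourceType"], decoded["name"][0]["family"])
```

The gain over a bespoke API is that any FHIR-aware client can parse this payload without knowing anything OpenMRS-specific; the resource type and element names are fixed by the standard rather than by the application.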
2010-09-01
[Front-matter residue: table of contents, list of figures (Figure 1, SCIL architecture), and acronym list. Recoverable acronyms: SCIL, Social-Cultural Content in Language; LAN, Local Area Network; ODBC, Open Database Connectivity.]
Stocker, Gernot; Rieder, Dietmar; Trajanoski, Zlatko
2004-03-22
ClusterControl is a web interface that simplifies distributing and monitoring bioinformatics applications on Linux cluster systems. We have developed a modular concept that enables integration of command-line oriented programs into the application framework of ClusterControl. The system facilitates integration of different applications accessed through one interface and executed on a distributed cluster system. The package is based on freely available technologies (Apache as web server, PHP as server-side scripting language, and OpenPBS as queuing system) and is available free of charge for academic and non-profit institutions: http://genome.tugraz.at/Software/ClusterControl
Building A Cloud Based Distributed Active Data Archive Center
NASA Technical Reports Server (NTRS)
Ramachandran, Rahul; Baynes, Katie; Murphy, Kevin
2017-01-01
NASA's Earth Science Data System (ESDS) Program facilitates the implementation of NASA's Earth Science strategic plan, which is committed to the full and open sharing of Earth science data obtained from NASA instruments to all users. The Earth Science Data information System (ESDIS) project manages the Earth Observing System Data and Information System (EOSDIS). Data within EOSDIS are held at Distributed Active Archive Centers (DAACs). One of the key responsibilities of the ESDS Program is to continuously evolve the entire data and information system to maximize returns on the collected NASA data.
OpenID Connect as a security service in cloud-based medical imaging systems.
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-04-01
The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to adoption of cloud computing by healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer (REST)-based federated identity solution. It is one of the most widely adopted open standards, has the potential to become the de facto standard for securing cloud computing and mobile applications, and has been called the "Kerberos of the cloud." We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow this technology to be incorporated within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repositories (DI-r) and heterogeneous picture archiving and communication systems (PACS), as well as Web-based and mobile clients, in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while deployment of DI-r and PACS to private or community clouds should provide security levels equivalent to those of the traditional computing model.
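At the core of OpenID Connect is the ID token, a JWT whose base64url-encoded payload carries identity claims about the authenticated user. The sketch below builds and decodes an unsigned toy token to show the structure; the issuer, subject and audience values are invented, and a real relying party must verify the signature and issuer before trusting any claim:

```python
# Toy ID-token structure (JWT): header.payload.signature, each segment
# base64url-encoded JSON. This one is unsigned ("alg": "none") and
# exists only to show how a relying party reads the claims segment.
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "none", "typ": "JWT"}
claims = {"iss": "https://idp.example.org",   # hypothetical identity provider
          "sub": "user-42",                   # hypothetical subject id
          "aud": "pacs-client"}               # hypothetical client id
token = ".".join([b64url(json.dumps(header).encode()),
                  b64url(json.dumps(claims).encode()),
                  ""])                        # empty signature segment

# A relying party splits the token and decodes the claims segment.
payload = token.split(".")[1]
payload += "=" * (-len(payload) % 4)          # restore base64 padding
decoded = json.loads(base64.urlsafe_b64decode(payload))
print(decoded["sub"])
```

In a PACS/DI-r deployment, the `sub` and `aud` claims are what let distributed services agree on who the user is without each service holding its own credentials, which is the single sign-on property the abstract relies on.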
The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geospatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ananthakrishnan, Rachana; Bell, Gavin; Cinquini, Luca
2013-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
The Earth System Grid Federation: An Open Infrastructure for Access to Distributed Geo-Spatial Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cinquini, Luca; Crichton, Daniel; Miller, Neill
2012-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
The Earth System Grid Federation : an Open Infrastructure for Access to Distributed Geospatial Data
NASA Technical Reports Server (NTRS)
Cinquini, Luca; Crichton, Daniel; Mattmann, Chris; Harney, John; Shipman, Galen; Wang, Feiyi; Ananthakrishnan, Rachana; Miller, Neill; Denvil, Sebastian; Morgan, Mark;
2012-01-01
The Earth System Grid Federation (ESGF) is a multi-agency, international collaboration that aims at developing the software infrastructure needed to facilitate and empower the study of climate change on a global scale. The ESGF's architecture employs a system of geographically distributed peer nodes, which are independently administered yet united by the adoption of common federation protocols and application programming interfaces (APIs). The cornerstones of its interoperability are the peer-to-peer messaging that is continuously exchanged among all nodes in the federation; a shared architecture and API for search and discovery; and a security infrastructure based on industry standards (OpenID, SSL, GSI and SAML). The ESGF software is developed collaboratively across institutional boundaries and made available to the community as open source. It has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the entire model output used for the next international assessment report on climate change (IPCC-AR5) and a suite of satellite observations (obs4MIPs) and reanalysis data sets (ANA4MIPs).
Wealth condensation in pareto macroeconomies
NASA Astrophysics Data System (ADS)
Burda, Z.; Johnston, D.; Jurkiewicz, J.; Kamiński, M.; Nowak, M. A.; Papp, G.; Zahed, I.
2002-02-01
We discuss a Pareto macroeconomy (a) in a closed system with fixed total wealth and (b) in an open system with average mean wealth, and compare our results to a similar analysis in a super-open system (c) with unbounded wealth [J.-P. Bouchaud and M. Mézard, Physica A 282, 536 (2000)]. Wealth condensation takes place in the social phase for closed and open economies, while it occurs in the liberal phase for super-open economies. In the first two cases, the condensation is related to a mechanism known from the balls-in-boxes model, while in the last case, to the nonintegrable tails of the Pareto distribution. For a closed macroeconomy in the social phase, we point to the emergence of a ``corruption'' phenomenon: a sizeable fraction of the total wealth is always amassed by a single individual.
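The contrast the abstract draws between condensation driven by the balls-in-boxes mechanism and condensation driven by the nonintegrable tails of the Pareto distribution can be illustrated numerically. This is an illustrative sketch, not the authors' model: the `top_share` function, its seed, and its sample sizes are assumptions, and it only shows how heavier Pareto tails let a single individual amass a sizeable fraction of total wealth.

```python
import random

def top_share(alpha, n, seed=7):
    """Draw n wealths from a Pareto(alpha) distribution and return the
    fraction of total wealth held by the single richest individual."""
    rng = random.Random(seed)
    wealths = [rng.paretovariate(alpha) for _ in range(n)]
    return max(wealths) / sum(wealths)

# For alpha < 1 the tail is nonintegrable: one individual dominates.
# For alpha = 3 the tail is integrable and the top share is tiny.
print(top_share(0.8, 10_000) > top_share(3.0, 10_000))  # -> True
```

With a heavy tail (alpha < 1) the maximum draw grows faster than the sum of the rest, which is the hallmark of the condensation regime discussed above.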
Safety System for Controlling Fluid Flow into a Suction Line
NASA Technical Reports Server (NTRS)
England, John Dwight (Inventor); Kelley, Anthony R. (Inventor); Cronise, Raymond J. (Inventor)
2015-01-01
A safety system includes a sleeve fitted within a pool's suction line at the inlet thereof. An open end of the sleeve is approximately aligned with the suction line's inlet. The sleeve terminates with a plate that resides within the suction line. The plate has holes formed therethrough. A housing defining a plurality of distinct channels is fitted in the sleeve so that the distinct channels lie within the sleeve. Each of the distinct channels has a first opening on one end thereof and a second opening on another end thereof. The second openings reside in the sleeve. Each of the distinct channels is at least approximately three feet in length. The first openings are in fluid communication with the water in the pool, and are distributed around a periphery of an area of the housing that prevents coverage of all the first openings when a human interacts therewith.
Filmless PACS in a multiple facility environment
NASA Astrophysics Data System (ADS)
Wilson, Dennis L.; Glicksman, Robert A.; Prior, Fred W.; Siu, Kai-Yeung; Goldburgh, Mitchell M.
1996-05-01
A Picture Archiving and Communication System centered on a shared image file server can support a filmless hospital. Systems based on this architecture have proven themselves in over four years of clinical operation. Changes in healthcare delivery are causing radiology groups to support multiple facilities for remote clinic support and consolidation of services. There will be a corresponding need for communicating over a standardized wide area network (WAN). Interactive workflow, a natural extension to the single facility case, requires a means to work effectively and seamlessly across moderate to low speed communication networks. Several schemes for supporting a consortium of medical treatment facilities over a WAN are explored. Both centralized and distributed database approaches are evaluated against several WAN scenarios. Likewise, several architectures for distributing image file servers or buffers over a WAN are explored, along with the caching and distribution strategies that support them. An open system implementation is critical to the success of a wide area system. The role of the Digital Imaging and Communications in Medicine (DICOM) standard in supporting multi-facility and multi-vendor open systems is also addressed. An open system can be achieved by using a DICOM server to provide a view of the system-wide distributed database. The DICOM server interface to a local version of the global database lets a local workstation treat the multiple, distributed data servers as though they were one local server for purposes of examination queries. The query will recover information about the examination that will permit retrieval over the network from the server on which the examination resides. For efficiency reasons, the ability to build cross-facility radiologist worklists and clinician-oriented patient folders is essential. The technologies of the World-Wide-Web can be used to generate worklists and patient folders across facilities.
A reliable broadcast protocol may be a convenient way to notify many different users and many image servers about new activities in the network of image servers. In addition to ensuring reliability of message delivery and global serialization of each broadcast message in the network, the broadcast protocol should not introduce significant communication overhead.
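Global serialization of broadcast messages, as described above, is commonly achieved with a sequencer-based total-order broadcast: one node stamps every message with a global sequence number, and receivers buffer out-of-order arrivals until the gap closes. The sketch below is a hypothetical illustration of that idea, not the protocol proposed in the paper; all class and method names are invented.

```python
class Sequencer:
    """Central sequencer: stamps each broadcast with a global sequence number."""
    def __init__(self):
        self.next_seq = 0

    def stamp(self, msg):
        seq, self.next_seq = self.next_seq, self.next_seq + 1
        return (seq, msg)

class Receiver:
    """Buffers out-of-order messages and delivers them in sequence order."""
    def __init__(self):
        self.expected = 0
        self.pending = {}
        self.delivered = []

    def receive(self, stamped):
        seq, msg = stamped
        self.pending[seq] = msg
        # Deliver every consecutive message starting at the expected number.
        while self.expected in self.pending:
            self.delivered.append(self.pending.pop(self.expected))
            self.expected += 1

seq = Sequencer()
a, b, c = seq.stamp("new-study"), seq.stamp("update"), seq.stamp("purge")
r = Receiver()
for m in (c, a, b):           # messages arrive out of order
    r.receive(m)
print(r.delivered)            # -> ['new-study', 'update', 'purge']
```

Every receiver delivers the same sequence regardless of network reordering, which is the global-serialization property the text asks of the broadcast protocol.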
Building a Better Grid, in Partnership with the OMNETRIC Group and Siemens
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waight, Jim; Grover, Shailendra; Wiedetz, Clark
In collaboration with Siemens and the National Renewable Energy Laboratory (NREL), OMNETRIC Group developed a distributed control hierarchy—based on an open field message bus (OpenFMB) framework—that allows control decisions to be made at the edge of the grid. The technology was validated and demonstrated at NREL’s Energy Systems Integration Facility.
On Open Access to Research: The Green, the Gold, and the Public Good
ERIC Educational Resources Information Center
Roach, Audra K.; Gainer, Jesse
2013-01-01
In this column the authors discuss barriers to worldwide open access to peer-reviewed journal articles online and how they might be addressed by literacy scholars. They highlight economic and ethical problems associated with the current subscription-based system for distributing articles (which sometimes works against the ideals of research and…
NASA Astrophysics Data System (ADS)
Tavakkoli-Moghaddam, Reza; Forouzanfar, Fateme; Ebrahimnejad, Sadoullah
2013-07-01
This paper considers a single-sourcing network design problem for a three-level supply chain. For the first time, a novel mathematical model is presented that simultaneously considers risk pooling, inventory held at distribution centers (DCs) under demand uncertainty, several alternatives for transporting the product between facilities, and the routing of vehicles from distribution centers to customers in a stochastic supply chain system. This problem is formulated as a bi-objective stochastic mixed-integer nonlinear programming model. The model determines the number of opened distribution centers, their locations and capacity levels, and the allocation of customers to distribution centers and of distribution centers to suppliers. It also determines the inventory control decisions on the amount of ordered products and the amount of safety stock at each opened DC, as well as the type of vehicle selected for transportation. Moreover, it determines routing decisions, such as the routes vehicles follow from an opened distribution center to serve its allocated customers before returning to that center. All of this is done so that the total system cost and the total transportation time are minimized. The Lingo software is used to solve the presented model, and the computational results are illustrated in this paper.
Collaboration using open standards and open source software (examples of DIAS/CEOS Water Portal)
NASA Astrophysics Data System (ADS)
Miura, S.; Sekioka, S.; Kuroiwa, K.; Kudo, Y.
2015-12-01
The DIAS/CEOS Water Portal is a part of the DIAS (Data Integration and Analysis System, http://www.editoria.u-tokyo.ac.jp/projects/dias/?locale=en_US) systems for data distribution for users including, but not limited to, scientists, decision makers and officers such as river administrators. One of the functions of this portal is to enable one-stop search of, and access to, various water-related data archived at multiple data centers located all over the world. This portal itself does not store data. Instead, according to requests made by users on the web page, it retrieves data from distributed data centers on-the-fly and lets users download the data and see rendered images/plots. Our system mainly relies on the open source software GI-cat (http://essi-lab.eu/do/view/GIcat) and open standards such as OGC-CSW, Opensearch and the OPeNDAP protocol to enable the above functions. Details on how it works will be introduced during the presentation. Although some data centers have unique metadata formats and/or data search protocols, our portal's brokering function enables users to search across various data centers at one time. And this portal is also connected to other data brokering systems, including the GEOSS DAB (Discovery and Access Broker). As a result, users can search over thousands of datasets and millions of files at one time. Users can access the DIAS/CEOS Water Portal system at http://waterportal.ceos.org/.
Staghorn: An Automated Large-Scale Distributed System Analysis Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gabert, Kasimir; Burns, Ian; Elliott, Steven
2016-09-01
Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr
2016-07-01
RTS2 is an open source observatory control system. Developed since early 2000, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system, featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to a SQL database. Running on all continents of the planet, it has accumulated extensive capabilities for controlling parts of, or complete, observatory setups.
Present and future free-space quantum key distribution
NASA Astrophysics Data System (ADS)
Nordholt, Jane E.; Hughes, Richard J.; Morgan, George L.; Peterson, C. Glen; Wipf, Christopher C.
2002-04-01
Free-space quantum key distribution (QKD), more popularly known as quantum cryptography, uses single-photon free-space optical communications to distribute the secret keys required for secure communications. At Los Alamos National Laboratory we have demonstrated a fully automated system that is capable of operation at any time of day over a horizontal range of several kilometers. This has proven the technology is capable of operation from a spacecraft to the ground, opening up the possibility of QKD between any group of users anywhere on Earth. This system, the prototyping of a new system for use on a spacecraft, and the techniques required for world-wide quantum key distribution will be described. The operational parameters and performance of a system designed to operate between low earth orbit (LEO) and the ground will also be discussed.
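The abstract does not describe the key-agreement protocol in detail, but free-space QKD systems of this kind typically build on BB84-style basis sifting: sender and receiver each choose a random measurement basis per photon, and only the bits where the bases match enter the sifted key. The simulation below is a generic, classical illustration of that sifting step, not the Los Alamos implementation.

```python
import random

def bb84_sift(n, seed=42):
    """Simulate the BB84 sifting step: sender and receiver independently
    pick random bases ('x' or 'z'); only bits measured in matching bases
    survive into the sifted key (~half of them, on average)."""
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    send_basis = [rng.choice("xz") for _ in range(n)]
    recv_basis = [rng.choice("xz") for _ in range(n)]
    return [b for b, s, r in zip(bits, send_basis, recv_basis) if s == r]

key = bb84_sift(1000)
# Roughly half of the bases match, so the sifted key is about 500 bits.
print(len(key))
```

In a real system the surviving bits would still undergo error estimation and privacy amplification before use as a secret key.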
The development and performance of smud grid-connected photovoltaic projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, D.E.; Collier, D.E.
1995-11-01
The utility grid-connected market has been identified as a key market to be developed to accelerate the commercialization of photovoltaics. The Sacramento Municipal Utility District (SMUD) has completed the first two years of a continuing commercialization effort based on the sustained, orderly development of the grid-connected, utility PV market. This program is aimed at developing the experience needed to successfully integrate PV as distributed generation into the utility system and to stimulate the collaborative processes needed to accelerate the cost reductions necessary for PV to be cost-effective in these applications by the year 2000. In the first two years, SMUD has installed over 240 residential and commercial building, grid-connected, rooftop "PV Pioneer" systems totaling over 1 MW of capacity and four substation-sited, grid-support PV systems totaling 600 kW, bringing the SMUD distributed PV power systems to over 3.7 MW. The 1995 SMUD PV Program will add approximately another 800 kW of PV systems to the District's distributed PV power system. SMUD also established a partnership with its customers through the PV Pioneer "green pricing" program to advance PV commercialization.
The NATO III 5 MHz Distribution System
NASA Technical Reports Server (NTRS)
Vulcan, A.; Bloch, M.
1981-01-01
A high performance 5 MHz distribution system is described which has extremely low phase noise and jitter characteristics and provides multiple buffered outputs. The system is completely redundant with automatic switchover and is self-testing. Since the 5 MHz reference signals distributed by the NATO III distribution system are used for up-conversion and multiplicative functions, a high degree of phase stability and isolation between outputs is necessary. Unique circuit design and packaging concepts ensure that the isolation between outputs is sufficient to guarantee a phase perturbation of less than 0.0016 deg when other outputs are open circuited, short circuited or terminated in 50 ohms. Circuit design techniques include high isolation cascode amplifiers. Negative feedback stabilizes system gain and minimizes circuit phase noise contributions. Balanced lines, in lieu of single ended coaxial transmission media, minimize pickup.
48 CFR 813.202 - Purchase guidelines.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase guidelines. 813.202 Section 813.202 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS CONTRACTING... Threshold 813.202 Purchase guidelines. Open market micro-purchases shall be equitably distributed among all...
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture
NASA Technical Reports Server (NTRS)
Behbahani, Alireza; Culley, Dennis; Garg, Sanjay; Millar, Richard; Smith, Bert; Wood, Jim; Mahoney, Tim; Quinn, Ronald; Carpenter, Sheldon; Mailander, Bill;
2007-01-01
A Distributed Engine Control Working Group (DECWG) consisting of the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC) and industry has been formed to examine the current and future requirements of propulsion engine systems. The scope of this study will include an assessment of the paradigm shift from centralized engine control architecture to an architecture based on distributed control utilizing open system standards. Included will be a description of the work begun in the 1990s, which continues today, followed by the identification of the remaining technical challenges which present barriers to on-engine distributed control.
Case study of open-source enterprise resource planning implementation in a small business
NASA Astrophysics Data System (ADS)
Olson, David L.; Staley, Jesse
2012-02-01
Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietary systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences for efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. While these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of risk dimensions.
Protecting Cryptographic Keys and Functions from Malware Attacks
2010-12-01
registers. modifies RSA private key signing in OpenSSL to use the technique. The resulting system has the following features: 1. No special hardware is...the above method based on OpenSSL, by exploiting the Streaming SIMD Extension (SSE) XMM registers of modern Intel and AMD x86-compatible CPUs [22...one can store a 2048-bit exponent. Our prototype is based on OpenSSL 0.9.8e, the Ubuntu 6.06 Linux distribution with a 2.6.15 kernel, and SSE2 which
NASA Astrophysics Data System (ADS)
Shao, Yuxiang; Chen, Qing; Wei, Zhenhua
Logistics distribution center location evaluation is a dynamic, fuzzy, open and complicated nonlinear system, which makes it difficult to evaluate distribution center locations by traditional analysis methods. The paper proposes a distribution center location evaluation system which uses a fuzzy neural network combined with a genetic algorithm. In this model, the neural network is adopted to construct the fuzzy system. By using the genetic algorithm, the parameters of the neural network are optimized and trained so as to improve the fuzzy system's abilities of self-learning and self-adaptation. Finally, the sampled data are used to train and test the model in Matlab. The simulation results indicate that the proposed identification model has very small errors.
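The combination described above, a model whose parameters are tuned by a genetic algorithm rather than gradient descent, can be shown in miniature. The sketch below is a toy illustration, not the authors' system: it substitutes a one-parameter linear model for the fuzzy neural network, and the population size, mutation scale, and fitness function are all assumptions.

```python
import random

def fitness(w, data):
    """Negative mean squared error of a one-parameter model y = w * x
    (higher is better, so the GA can maximize it)."""
    return -sum((w * x - y) ** 2 for x, y in data) / len(data)

def evolve(data, pop_size=20, gens=60, seed=1):
    """Toy genetic algorithm: truncation selection keeps the fitter half
    of the population, and Gaussian mutation of the survivors refills it."""
    rng = random.Random(seed)
    pop = [rng.uniform(-5, 5) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [p + rng.gauss(0, 0.3) for p in parents]
    return max(pop, key=lambda w: fitness(w, data))

data = [(x, 2.0 * x) for x in range(1, 6)]   # true weight is 2.0
best = evolve(data)
print(round(best, 2))  # should land close to 2.0
```

In the paper's setting the "genome" would be the full vector of fuzzy-network weights rather than a single scalar, but the select-mutate-refill loop is the same idea.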
Research into a distributed fault diagnosis system and its application
NASA Astrophysics Data System (ADS)
Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei
2005-12-01
CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems, establishing a communication protocol between distributed objects. It places great emphasis on realizing interoperation between distributed objects. However, only after developing application approaches and practical techniques for monitoring and diagnosis can customers share monitoring and diagnosis information, so that remote online multi-expert cooperative diagnosis can be achieved. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, the Web, and agents. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA soft-bus, enabling resource sharing and online multi-expert cooperative diagnosis, and overcoming disadvantages such as limited diagnostic knowledge, reliance on a single diagnostic technique, and incomplete analysis functions, so that more complicated and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate distributed diagnosis based on CORBA. This shows that more efficient approaches can be found to settle problems such as real-time monitoring and diagnosis over the network and the decomposition of complicated tasks, by integrating CORBA, Web techniques, and an agent frame model in complementary research. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. Besides, the system offers an open environment, making it easy for diagnosis objects to be upgraded and for new diagnosis server objects to join.
A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations
NASA Astrophysics Data System (ADS)
Demir, I.; Agliamzanov, R.
2014-12-01
Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications, and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Website owners can easily enable visitors to volunteer their computing resources toward running advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational units. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.
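A central piece of such a platform is the queue manager that hands small work units to volunteer nodes and re-queues units from volunteers who close their browser before reporting back. The sketch below is only an illustration of that queue-management idea, in Python rather than the paper's JavaScript client side; all names are invented and it omits the database layer the abstract describes.

```python
from collections import deque

class WorkQueue:
    """Splits a simulation into small units, hands them out to volunteer
    nodes, and re-queues units whose results never come back."""
    def __init__(self, units):
        self.pending = deque(units)   # units waiting for a volunteer
        self.in_flight = {}           # unit -> node currently working on it
        self.results = {}             # unit -> submitted result

    def checkout(self, node_id):
        """Give the next pending unit to a volunteer node (None if empty)."""
        if not self.pending:
            return None
        unit = self.pending.popleft()
        self.in_flight[unit] = node_id
        return unit

    def submit(self, unit, value):
        """Record a result returned by a volunteer."""
        self.in_flight.pop(unit, None)
        self.results[unit] = value

    def requeue_stale(self):
        """Return all unreturned units to the pending queue."""
        for unit in list(self.in_flight):
            self.in_flight.pop(unit)
            self.pending.append(unit)

q = WorkQueue(range(4))
u = q.checkout("browser-1")
q.submit(u, u * 10)         # this volunteer returns a result
q.checkout("browser-2")     # this one never reports back
q.requeue_stale()
print(sorted(q.results), len(q.pending))  # -> [0] 3
```

A production system would add timeouts and redundant assignment of each unit to multiple volunteers to guard against wrong or malicious results.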
Wilms, C T; Schober, P; Kalb, R; Loer, S A
2006-01-01
During partial liquid ventilation perfluorocarbons are instilled into the airways, from where they subsequently evaporate via the bronchial system. This process is influenced by multiple factors, such as the vapour pressure of the perfluorocarbons, the instilled volume, intrapulmonary perfluorocarbon distribution, postural positioning and ventilatory settings. In our study we compared the effects of open and closed breathing systems, a heat-and-moisture-exchanger and a sodalime absorber on perfluorocarbon evaporation during partial liquid ventilation. Isolated rat lungs were suspended from a force transducer. After intratracheal perfluorocarbon instillation (10 mL kg(-1)) the lungs were either ventilated with an open breathing system (n = 6), a closed breathing system (n = 6), an open breathing system with an integrated heat-and-moisture-exchanger (n = 6), an open breathing system with an integrated sodalime absorber (n = 6), or a closed breathing system with an integrated heat-and-moisture-exchanger and a sodalime absorber (n = 6). Evaporative perfluorocarbon elimination was determined gravimetrically. When compared to the elimination half-life in an open breathing system (1.2 +/- 0.07 h), elimination half-life was significantly longer with a closed system (6.4 +/- 0.9 h, P < 0.05). Evaporative perfluorocarbon loss can be reduced effectively with closed breathing systems, followed by the use of sodalime absorbers and heat-and-moisture-exchangers.
OpenID Connect as a security service in cloud-based medical imaging systems
Ma, Weina; Sartipi, Kamran; Sharghigoorabi, Hassan; Koff, David; Bak, Peter
2016-01-01
Abstract. The evolution of cloud computing is driving the next generation of medical imaging systems. However, privacy and security concerns have been consistently regarded as the major obstacles to adoption of cloud computing by healthcare domains. OpenID Connect, which combines OpenID and OAuth, is an emerging representational state transfer-based federated identity solution. It is one of the most widely adopted open standards, with the potential to become the de facto standard for securing cloud computing and mobile applications, and has been called the “Kerberos of the cloud.” We introduce OpenID Connect as an authentication and authorization service in cloud-based diagnostic imaging (DI) systems, and propose enhancements that allow for incorporating this technology within distributed enterprise environments. The objective of this study is to offer solutions for secure sharing of medical images among diagnostic imaging repository (DI-r) and heterogeneous picture archiving and communication systems (PACS) as well as Web-based and mobile clients in the cloud ecosystem. The main objective is to use the OpenID Connect open-source single sign-on and authorization service in a user-centric manner, while deploying DI-r and PACS to private or community clouds that provide security levels equivalent to the traditional computing model. PMID:27340682
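The single sign-on flow described above begins, in standard OpenID Connect, with the client redirecting the user to the provider's authorization endpoint. The sketch below assembles such an authorization-code request; the issuer URL, client ID, and `/authorize` path are hypothetical placeholders, and a real PACS deployment would use a certified OIDC library rather than hand-built URLs.

```python
import secrets
from urllib.parse import urlencode

def build_auth_request(issuer, client_id, redirect_uri):
    """Construct an OpenID Connect authorization-code request URL.
    'state' guards against CSRF; 'nonce' binds the ID token to this
    request to prevent replay."""
    params = {
        "response_type": "code",
        "scope": "openid profile",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "state": secrets.token_urlsafe(16),
        "nonce": secrets.token_urlsafe(16),
    }
    return f"{issuer}/authorize?{urlencode(params)}"

url = build_auth_request("https://idp.example.org",
                         "pacs-client",
                         "https://pacs.example.org/cb")
print(url.split("?")[0])  # -> https://idp.example.org/authorize
```

After the user authenticates, the provider redirects back with a short-lived code that the client exchanges at the token endpoint for an ID token and access token.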
Parallel, Distributed Scripting with Python
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P J
2002-05-24
Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
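The password-checker example in the abstract parallelizes naturally: split the dictionary into chunks, check each chunk on a separate worker, and gather the matches. As a rough sketch of the idea (using `multiprocessing` and SHA-256 hashes rather than the MPI bindings and encrypted passwords the abstract has in mind):

```python
import hashlib
from multiprocessing import Pool

def check_chunk(args):
    """Hash each candidate word and compare against the target digest."""
    target, words = args
    return [w for w in words if hashlib.sha256(w.encode()).hexdigest() == target]

def crack(target, dictionary, workers=4):
    """Split the dictionary across workers and check chunks in parallel."""
    chunks = [dictionary[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(check_chunk, [(target, c) for c in chunks])
    return [w for r in results for w in r]   # flatten per-chunk matches

if __name__ == "__main__":
    words = ["alpha", "bravo", "charlie", "hunter2"]
    digest = hashlib.sha256(b"hunter2").hexdigest()
    print(crack(digest, words))  # -> ['hunter2']
```

With MPI the same decomposition would scatter the chunks to ranks and gather the matches, but the coordination pattern is identical.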
Search for supporting methodologies - Or how to support SEI for 35 years
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Masline, Richard C.
1991-01-01
Concepts relevant to the development of an evolvable information management system are examined in terms of support for the Space Exploration Initiative. The issues of interoperability within NASA and industry initiatives are studied, including the Open Systems Interconnection standard and the operating system of the Open Software Foundation. The requirements of partitioning functionality into separate areas are determined, with attention given to the infrastructure required to ensure system-wide compliance. The need for a decision-making context is a key to the distributed implementation of the program, and this environment is concluded to be the next step in developing an evolvable, interoperable, and securable support network.
Safety System for Controlling Fluid Flow into a Suction Line
NASA Technical Reports Server (NTRS)
England, John Dwight (Inventor); Kelley, Anthony R. (Inventor); Cronise, Raymond J. (Inventor)
2018-01-01
A safety system includes a sleeve fitted within a pool's suction line at its inlet. The sleeve terminates with a plate that resides within the suction line. The plate has holes formed therethrough. A housing defining distinct channels is fitted in the sleeve so that the distinct channels lie within the sleeve. Each of the distinct channels has a first opening on one end thereof and a second opening on another end thereof. The second openings reside in the sleeve. The first openings are in fluid communication with the water in the pool, and are distributed around a periphery of an area of the housing that prevents coverage of all the first openings when a human interacts therewith. A first sensor is coupled to the sleeve to sense pressure therein, and a second pressure sensor is coupled to the plate to sense pressure in one of the plate's holes.
Rock fracture processes in chemically reactive environments
NASA Astrophysics Data System (ADS)
Eichhubl, P.
2015-12-01
Rock fracture is traditionally viewed as a brittle process involving damage nucleation and growth in a zone ahead of a larger fracture, resulting in fracture propagation once a threshold loading stress is exceeded. It is now increasingly recognized that coupled chemical-mechanical processes influence fracture growth in a wide range of subsurface conditions that include igneous, metamorphic, and geothermal systems, and diagenetically reactive sedimentary systems with possible applications to hydrocarbon extraction and CO2 sequestration. Fracture processes aided or driven by chemical change can affect the onset of fracture, fracture shape and branching characteristics, and fracture network geometry, thus influencing mechanical strength and flow properties of rock systems. We are investigating two fundamental modes of chemical-mechanical interactions associated with fracture growth: 1. Fracture propagation may be aided by chemical dissolution or hydration reactions at the fracture tip allowing fracture propagation under subcritical stress loading conditions. We are evaluating effects of environmental conditions on critical (fracture toughness KIc) and subcritical (subcritical index) fracture properties using double torsion fracture mechanics tests on shale and sandstone. Depending on rock composition, the presence of reactive aqueous fluids can increase or decrease KIc and/or subcritical index. 2. Fracture may be concurrent with distributed dissolution-precipitation reactions in the hostrock beyond the immediate vicinity of the fracture tip. Reconstructing the fracture opening history recorded in crack-seal fracture cement of deeply buried sandstone we find that fracture length growth and fracture opening can be decoupled, with a phase of initial length growth followed by a phase of dominant fracture opening.
This suggests that mechanical crack-tip failure processes, possibly aided by chemical crack-tip weakening, and distributed solution-precipitation creep in the host rock can independently affect fracture opening displacement and thus fracture aperture profiles and aperture distribution.
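Subcritical crack growth of this kind is commonly described by a power law in the normalized stress intensity factor, with the subcritical index as the exponent. A minimal sketch of that relation (the velocity prefactor v0 and all numeric values are illustrative assumptions, not measurements from these experiments):

```python
def crack_velocity(K, KIc, n, v0=1e-3):
    """Charles-type power law for subcritical crack growth:
    v = v0 * (K / KIc)**n, valid only below the fracture toughness KIc.
    n is the subcritical index; v0 is an assumed reference velocity (m/s)."""
    if K >= KIc:
        raise ValueError("K >= KIc: critical propagation, power law no longer applies")
    return v0 * (K / KIc) ** n

# A large subcritical index makes growth extremely sensitive to load:
slow = crack_velocity(K=0.5, KIc=1.0, n=20)
fast = crack_velocity(K=0.9, KIc=1.0, n=20)
```

Because n is typically large for rocks, small chemical reductions of the effective KIc (or of n itself) translate into order-of-magnitude changes in growth rate.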
NASA Astrophysics Data System (ADS)
Kearns, E. J.
2017-12-01
NOAA's Big Data Project is conducting an experiment in the collaborative distribution of open government data to non-governmental cloud-based systems. Through Cooperative Research and Development Agreements signed in 2015 between NOAA and Amazon Web Services, Google Cloud Platform, IBM, Microsoft Azure, and the Open Commons Consortium, NOAA is distributing open government data to a wide community of potential users. There are a number of significant advantages related to the use of open data on commercial cloud platforms, but through this experiment NOAA is also discovering significant challenges for those stewarding and maintaining NOAA's data resources in support of users in the wider open data ecosystem. Among the challenges that will be discussed are: the need to provide effective interpretation of the data content to enable their use by data scientists from other expert communities; effective maintenance of Collaborators' open data stores through coordinated publication of new data and new versions of older data; the provenance and verification of open data as authentic NOAA-sourced data across multiple management boundaries and analytical tools; and keeping pace with the accelerating expectations of users with regard to improved quality control, data latency, availability, and discoverability. Suggested strategies to address these challenges will also be described.
Method for image reconstruction of moving radionuclide source distribution
Stolin, Alexander V.; McKisson, John E.; Lee, Seung Joon; Smith, Mark Frederick
2012-12-18
A method for image reconstruction of moving radionuclide distributions. Its particular embodiment is for single photon emission computed tomography (SPECT) imaging of awake animals, though its techniques are general enough to be applied to other moving radionuclide distributions as well. The invention eliminates motion and blurring artifacts for image reconstructions of moving source distributions. This opens new avenues in the area of small animal brain imaging with radiotracers, which can now be performed without the perturbing influences of anesthesia or physical restraint on the biological system.
Industry Day Workshops | Energy Systems Integration Facility | NREL
2017: Siemens-OMNETRIC Industry Day. OMNETRIC Group demonstrated a distributed control hierarchy for grid-edge communications and control, based on an OpenFMB framework, at NREL's Energy Systems Integration Facility. Presenters included Murali Baggu, Manager, Power Systems Operations and Control Group, NREL, and Santosh Veda, Research
Towards G2G: Systems of Technology Database Systems
NASA Technical Reports Server (NTRS)
Maluf, David A.; Bell, David
2005-01-01
We present an approach and methodology for developing Government-to-Government (G2G) Systems of Technology Database Systems. G2G will deliver technologies for distributed and remote integration of technology data for internal use in analysis and planning as well as for external communications. G2G enables NASA managers, engineers, operational teams and information systems to "compose" technology roadmaps and plans by selecting, combining, extending, specializing and modifying components of technology database systems. G2G will interoperate information and knowledge distributed across the organizational entities involved, which is ideal for NASA's future Exploration Enterprise. Key contributions of the G2G system will include the creation of an integrated approach to sustain effective management of technology investments that supports the ability of various technology database systems to be independently managed. The integration technology will comply with emerging open standards. Applications can thus be customized for local needs while enabling an integrated management-of-technology approach that serves the global needs of NASA. The G2G capabilities will use NASA's breakthrough in database "composition" and integration technology, will use and advance emerging open standards, and will use commercial information technologies to enable effective Systems of Technology Database Systems.
Squid - a simple bioinformatics grid.
Carvalho, Paulo C; Glória, Rafael V; de Miranda, Antonio B; Degrave, Wim M
2005-08-03
BLAST is a widely used genetic research tool for analysis of similarity between nucleotide and protein sequences. This paper presents a software application entitled "Squid" that makes use of grid technology. The current version, as an example, is configured for BLAST applications, but adaptation for other computationally intensive repetitive tasks can be easily accomplished in the open source version. This enables the allocation of remote resources to perform distributed computing, making large BLAST queries viable without the need for high-end computers. Most distributed computing/grid solutions have complex installation procedures requiring a computer specialist, or have limitations regarding operating systems. Squid is a multi-platform, open-source program designed to "keep things simple" while offering high-end computing power for large scale applications. Squid also has an efficient fault tolerance and crash recovery system against data loss, being able to re-route jobs upon node failure and recover even if the master machine fails. Our results show that a Squid application, working with N nodes and proper network resources, can process BLAST queries almost N times faster than if working with only one computer. Squid offers high-end computing, even for the non-specialist, and is freely available at the project web site. Its open-source and binary Windows distributions contain detailed instructions and a "plug-n-play" installation containing a pre-configured example.
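The near-linear speedup claim can be framed with Amdahl's law: if a fraction p of the workload parallelizes, N nodes give a speedup of 1 / ((1 - p) + p/N), which approaches N as p approaches 1. A small illustrative sketch (the parallel fraction below is an assumed value, not a measured property of Squid):

```python
def speedup(n_nodes, parallel_fraction=0.99):
    """Amdahl's-law speedup estimate: the serial fraction caps the gain.
    With parallel_fraction near 1 (as for independent BLAST queries),
    the speedup is almost n_nodes."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

ten_nodes = speedup(10)  # close to, but slightly below, the ideal factor of 10
```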
Ji, Haoran; Wang, Chengshan; Li, Peng; ...
2017-09-20
The integration of distributed generators (DGs) exacerbates the feeder power flow fluctuation and load unbalanced condition in active distribution networks (ADNs). The unbalanced feeder load causes inefficient use of network assets and network congestion during system operation. The flexible interconnection based on the multi-terminal soft open point (SOP) significantly benefits the operation of ADNs. The multi-terminal SOP, which is a controllable power electronic device installed to replace the normally open point, provides accurate active and reactive power flow control to enable the flexible connection of feeders. An enhanced SOCP-based method for feeder load balancing using the multi-terminal SOP is proposed in this paper. Furthermore, by regulating the operation of the multi-terminal SOP, the proposed method can mitigate the unbalanced condition of feeder load and simultaneously reduce the power losses of ADNs. Then, the original non-convex model is converted into a second-order cone programming (SOCP) model using convex relaxation. In order to tighten the SOCP relaxation and improve the computation efficiency, an enhanced SOCP-based approach is developed to solve the proposed model. Finally, case studies are performed on the modified IEEE 33-node system to verify the effectiveness and efficiency of the proposed method.
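The idea of feeder load balancing can be illustrated with a simple utilization-based index; note that this toy metric and the megawatt figures below are illustrative assumptions, not the SOCP objective actually formulated in the paper:

```python
def balance_index(loads, capacities):
    """Sum of squared feeder utilization ratios; lower means the load is
    spread more evenly across feeders of the given capacities."""
    return sum((l / c) ** 2 for l, c in zip(loads, capacities))

# Hypothetical two-feeder case: an SOP transfers 2 MW from the heavy feeder
# to the light one, which also tends to reduce I^2*R losses on the heavy feeder.
before = balance_index([8.0, 2.0], [10.0, 10.0])
after = balance_index([6.0, 4.0], [10.0, 10.0])
```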
OpenID connect as a security service in Cloud-based diagnostic imaging systems
NASA Astrophysics Data System (ADS)
Ma, Weina; Sartipi, Kamran; Sharghi, Hassan; Koff, David; Bak, Peter
2015-03-01
The evolution of cloud computing is driving the next generation of diagnostic imaging (DI) systems. Cloud-based DI systems are able to deliver better services to patients without being constrained by their own physical facilities. However, privacy and security concerns have been consistently regarded as the major obstacle to adoption of cloud computing by healthcare domains. Furthermore, traditional computing models and interfaces employed by DI systems are not ready for accessing diagnostic images through mobile devices. RESTful is an ideal technology for provisioning both mobile services and cloud computing. OpenID Connect, which combines OpenID and OAuth, is an emerging REST-based federated identity solution. It is one of the most promising open standards, with the potential to become the de facto standard for securing cloud computing and mobile applications, and has been called the "Kerberos of the Cloud". We introduce OpenID Connect as an identity and authentication service in cloud-based DI systems and propose enhancements that allow for incorporating this technology within a distributed enterprise environment. The objective of this study is to offer solutions for secure radiology image sharing among DI-r (Diagnostic Imaging Repository) and heterogeneous PACS (Picture Archiving and Communication Systems) as well as mobile clients in the cloud ecosystem. Through using OpenID Connect as an open-source identity and authentication service, deploying DI-r and PACS to private or community clouds should attain a security level equivalent to that of the traditional computing model.
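OpenID Connect conveys identity claims in an ID token, a JSON Web Token (JWT) whose payload is base64url-encoded JSON. The sketch below decodes such a payload with the standard library only; the issuer, subject, and audience values are made up for illustration, and a real client must verify the token signature against the provider's published keys rather than trust the payload as-is:

```python
import base64
import json

def decode_id_token_payload(id_token):
    """Decode the middle (payload) segment of a JWT-format ID token.
    NOTE: no signature verification -- for illustration only."""
    payload_b64 = id_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def b64url(obj):
    """base64url-encode a JSON object, stripping padding as JWTs do."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Hypothetical token assembled for illustration (issuer/subject/audience invented):
claims = {"iss": "https://idp.example.org", "sub": "radiologist-42", "aud": "pacs-client"}
token = b64url({"alg": "none"}) + "." + b64url(claims) + ".sig"
decoded = decode_id_token_payload(token)
```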
Transparent Ada rendezvous in a fault tolerant distributed system
NASA Technical Reports Server (NTRS)
Racine, Roger
1986-01-01
There are many problems associated with distributing an Ada program over a loosely coupled communication network. Some of these problems involve the various aspects of the distributed rendezvous. The problems addressed involve supporting the delay statement in a selective call and supporting the else clause in a selective call. These difficulties are compounded by the need for an efficient communication system, and compounded further by the possibility of hardware faults occurring while the program is running. With a hardware fault tolerant computer system, it is possible to design a distribution scheme and communication software which is efficient and allows Ada semantics to be preserved. An Ada design for the communications software of one such system will be presented, including a description of the services provided in the seven layers of an International Standards Organization (ISO) Open Systems Interconnection (OSI) model communications system. The system capabilities (hardware and software) that allow this communication system will also be described.
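The "delay" and "else" alternatives of Ada's selective call can be mimicked in other runtimes with a timed or non-blocking receive. A rough analogy over a message queue (this is a Python sketch for intuition, not the paper's Ada design):

```python
import queue

def selective_call(q, delay=None):
    """Sketch of Ada selective-call semantics over a message queue:
    delay=None plays the role of an 'else' clause (never block);
    delay=t plays the role of a 'delay t' alternative (block at most t)."""
    try:
        if delay is None:
            return q.get_nowait()       # 'else': return immediately if no entry is ready
        return q.get(timeout=delay)     # 'delay': wait at most t for the rendezvous
    except queue.Empty:
        return None                     # the alternative branch was taken instead

q = queue.Queue()
took_else = selective_call(q)            # no entry call pending: else branch fires
q.put("entry-call")
accepted = selective_call(q, delay=0.1)  # rendezvous completes within the delay
```

In a distributed setting the hard part, as the abstract notes, is making these semantics hold efficiently when the queue is on the far side of a network and nodes can fail mid-call.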
Influence of Applying Additional Forcing Fans for the Air Distribution in Ventilation Network
NASA Astrophysics Data System (ADS)
Szlązak, Nikodem; Obracaj, Dariusz; Korzec, Marek
2016-09-01
Mining progress in underground mines causes the ongoing movement of working areas. Consequently, it becomes necessary to adapt the ventilation network of a mine to direct airflow into newly-opened districts. For economic reasons, opening new fields is often achieved via underground workings. The length of primary intake and return routes increases, and with it the total resistance of the complex ventilation network. The development of a subsurface structure can make it necessary to change the air distribution in a ventilation network. Increasing airflow into newly-opened districts is necessary. In mines where extraction does not entail gas-related hazards, there is the possibility of implementing a push-pull ventilation system in order to supplement airflows to newly developed mining fields. This is achieved by installing subsurface fan stations with forcing fans at the bottom of the downcast shaft. In push-pull systems with multiple main fans, it is vital to select forcing fans with characteristic curves matching those of the existing exhaust fans to prevent undesirable mutual interaction. In complex ventilation networks it is necessary to calculate the distribution of airflow (especially in networks with a large number of installed fans). In this article, the influence of additional forcing fans on the air distribution in the ventilation network of an underground mine is considered. The extent of the overpressure caused by an additional forcing fan in the branches of the ventilation network (its operating range) is also analysed. Possibilities of increasing the airflow rate in working areas are also examined.
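Airflow distribution in such a network is governed by Atkinson's square law, p = R * Q^2: parallel branches sharing the same pressure drop carry flows proportional to 1/sqrt(R). A minimal sketch with assumed resistance values (not data from the article):

```python
import math

def parallel_airflow(total_q, resistances):
    """Split a total airflow among parallel branches so that each branch
    sees the same pressure drop p = R * Q**2 (Atkinson's square law).
    Equal pressure drops imply Q_i proportional to 1/sqrt(R_i)."""
    weights = [1.0 / math.sqrt(r) for r in resistances]
    total_w = sum(weights)
    return [total_q * w / total_w for w in weights]

# Two hypothetical branches, the second four times as resistive as the first:
flows = parallel_airflow(100.0, [0.5, 2.0])  # m^3/s
```

Adding a forcing fan to a branch effectively lowers its net resistance to flow, pulling a larger share of the total airflow through it; the same square-law bookkeeping underlies full network solvers.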
Opendf - An Implementation of the Dual Fermion Method for Strongly Correlated Systems
NASA Astrophysics Data System (ADS)
Antipov, Andrey E.; LeBlanc, James P. F.; Gull, Emanuel
The dual fermion method is a multiscale approach for solving lattice problems of interacting strongly correlated systems. In this paper, we present opendf, an open-source implementation of the dual fermion method applicable to fermionic single-orbital lattice models in dimensions D = 1, 2, 3 and 4. The method is built on a dynamical mean field starting point, which neglects all non-local correlations, and perturbatively adds spatial correlations. Our code is distributed as an open-source package under the GNU General Public License, version 2.
Ecosystem variability in the offshore northeastern Chukchi Sea
NASA Astrophysics Data System (ADS)
Blanchard, Arny L.; Day, Robert H.; Gall, Adrian E.; Aerts, Lisanne A. M.; Delarue, Julien; Dobbins, Elizabeth L.; Hopcroft, Russell R.; Questel, Jennifer M.; Weingartner, Thomas J.; Wisdom, Sheyna S.
2017-12-01
Understanding influences of cumulative effects from multiple stressors in marine ecosystems requires an understanding of the sources for and scales of variability. A multidisciplinary ecosystem study in the offshore northeastern Chukchi Sea during 2008-2013 investigated the variability of the study area's two adjacent sub-ecosystems: a pelagic system influenced by interannual and/or seasonal temporal variation at large, oceanographic (regional) scales, and a benthic-associated system more influenced by small-scale spatial variations. Variability in zooplankton communities reflected interannual oceanographic differences in waters advected northward from the Bering Sea, whereas variation in benthic communities was associated with seafloor and bottom-water characteristics. Variations in the planktivorous seabird community were correlated with prey distributions, whereas interaction effects in ANOVA for walruses were related to declines of sea-ice. Long-term shifts in seabird distributions were also related to changes in sea-ice distributions that led to more open water. Although characteristics of the lower trophic-level animals within sub-ecosystems result from oceanographic variations and interactions with seafloor topography, distributions of apex predators were related to sea-ice as a feeding platform (walruses) or to its absence (i.e., open water) for feeding (seabirds). The stability of prey resources appears to be a key factor in mediating predator interactions with other ocean characteristics. Seabirds reliant on highly-variable zooplankton prey show long-term changes as open water increases, whereas walruses taking benthic prey in biomass hotspots respond to sea-ice changes in the short-term. 
A better understanding of how variability scales up from prey to predators and how prey resource stability (including how critical prey respond to environmental changes over space and time) might be altered by climate and anthropogenic stressors is essential to predicting the future state of both the Chukchi and other arctic systems.
Hierarchical resilience with lightweight threads.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wheeler, Kyle Bruce
2011-10-01
This paper proposes methodology for providing robustness and resilience for a highly threaded distributed- and shared-memory environment based on well-defined inputs and outputs to lightweight tasks. These inputs and outputs form a failure 'barrier', allowing tasks to be restarted or duplicated as necessary. These barriers must be expanded based on task behavior, such as communication between tasks, but do not prohibit any given behavior. One trend in high-performance computing codes seems to be toward self-contained functions that mimic functional programming. Software designers are trending toward a model of software design where their core functions are specified in side-effect-free or low-side-effect ways, wherein the inputs and outputs of the functions are well-defined. This provides the ability to copy the inputs to wherever they need to be - whether that's the other side of the PCI bus or the other side of the network - do work on that input using local memory, and then copy the outputs back (as needed). This design pattern is popular among new distributed threading environment designs. Such designs include the Barcelona STARS system, distributed OpenMP systems, the Habanero-C and Habanero-Java systems from Vivek Sarkar at Rice University, the HPX/ParalleX model from LSU, as well as our own Scalable Parallel Runtime effort (SPR) and the Trilinos stateless kernels. This design pattern is also shared by CUDA and several OpenMP extensions for GPU-type accelerators (e.g. the PGI OpenMP extensions).
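The failure-barrier idea can be sketched directly: if a task's only state is its well-defined inputs and outputs, a failed attempt can simply be re-run from the same inputs, with no recovery logic beyond retry. A toy illustration (the transient-failure simulation is contrived for the example):

```python
def run_with_restart(task, inputs, max_attempts=3):
    """Re-execute a side-effect-free task from its inputs until it succeeds.
    The inputs/outputs form the failure barrier: no state outside them
    needs to be rolled back on failure."""
    for _ in range(max_attempts):
        try:
            return task(*inputs)
        except RuntimeError:
            continue  # safe to retry: all state lives in inputs/outputs
    raise RuntimeError("task failed after %d attempts" % max_attempts)

calls = {"n": 0}

def flaky_sum(xs):
    """Pure function of its input, but the first attempt simulates a lost node."""
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("node lost")
    return sum(xs)

result = run_with_restart(flaky_sum, ([1, 2, 3],))
```

Duplication works the same way: the same inputs can be copied to two nodes and the first output accepted, which is why communication between tasks forces the barrier to expand.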
Review of Supervisory Control and Data Acquisition (SCADA) Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reva Nickelson; Briam Johnson; Ken Barnes
2004-01-01
A review using open source information was performed to obtain data related to Supervisory Control and Data Acquisition (SCADA) systems used to supervise and control domestic electric power generation, transmission, and distribution. This report provides the technical details for the types of systems used, system disposal, cyber and physical security measures, network connections, and a gap analysis of SCADA security holes.
75 FR 76282 - Domestic Shipping Services Pricing and Mailing Standards Changes
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-08
... when using a qualifying shipping label managed by the PC Postage system used. b. Permit imprint... system used. b. Permit imprint customers. * * * * * 1.8 Determining Single-Piece Weight [Revise the last... imprint customers. c. Priority Mail Open and Distribute (PMOD) customers whose account volume exceeds 600...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudgins, Andrew P.; Carrillo, Ismael M.; Jin, Xin
This document is the final report of a two-year development, test, and demonstration project, 'Cohesive Application of Standards-Based Connected Devices to Enable Clean Energy Technologies.' The project was part of the National Renewable Energy Laboratory's (NREL's) Integrated Network Testbed for Energy Grid Research and Technology (INTEGRATE) initiative hosted at the Energy Systems Integration Facility (ESIF). This project demonstrated techniques to control distribution grid events using the coordination of traditional distribution grid devices and high-penetration renewable resources and demand response. Using standard communication protocols and semantic standards, the project examined the use cases of high/low distribution voltage, requests for volt-ampere-reactive (VAR) power support, and transactive energy strategies using Volttron. Open source software, written by EPRI to control distributed energy resources (DER) and demand response (DR), was used by an advanced distribution management system (ADMS) to abstract the resources reporting to a collection of capabilities rather than needing to know specific resource types. This architecture allows for scaling both horizontally and vertically. Several new technologies were developed and tested. Messages from the ADMS based on the common information model (CIM) were developed to control the DER and DR management systems. The OpenADR standard was used to help manage grid events by turning loads off and on. Volttron technology was used to simulate a homeowner choosing the price at which to enter the demand response market. Finally, the ADMS used newly developed algorithms to coordinate these resources with a capacitor bank and voltage regulator to respond to grid events.
Web Monitoring of EOS Front-End Ground Operations, Science Downlinks and Level 0 Processing
NASA Technical Reports Server (NTRS)
Cordier, Guy R.; Wilkinson, Chris; McLemore, Bruce
2008-01-01
This paper addresses the efforts undertaken and the technology deployed to aggregate and distribute the metadata characterizing the real-time operations associated with NASA Earth Observing Systems (EOS) high-rate front-end systems and the science data collected at multiple ground stations and forwarded to the Goddard Space Flight Center for level 0 processing. Station operators, mission project management personnel, spacecraft flight operations personnel and data end-users for various EOS missions can retrieve the information at any time from any location having access to the internet. The users are distributed and the EOS systems are distributed but the centralized metadata accessed via an external web server provide an effective global and detailed view of the enterprise-wide events as they are happening. The data-driven architecture and the implementation of applied middleware technology, open source database, open source monitoring tools, and external web server converge nicely to fulfill the various needs of the enterprise. The timeliness and content of the information provided are key to making timely and correct decisions which reduce project risk and enhance overall customer satisfaction. The authors discuss security measures employed to limit access of data to authorized users only.
NASA Astrophysics Data System (ADS)
Mwakabuta, Ndaga Stanslaus
Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses, economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. In the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. In the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. In the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities make a decision on how to operate their system economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some advantages.
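The genetic algorithm ingredients referred to above (a population of candidate solutions, selection, crossover, and mutation driving down an operating-cost objective) can be sketched in a few lines; the quadratic cost curve below is a made-up stand-in for the dissertation's actual distribution-system model:

```python
import random

def genetic_minimize(cost, lo, hi, pop_size=20, generations=60, seed=1):
    """Toy real-coded genetic algorithm over a single decision variable
    (e.g. a DG injection level or price), with truncation selection,
    arithmetic crossover, and Gaussian mutation. Parents are carried over,
    so the best candidate never worsens (elitism)."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]               # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                      # arithmetic crossover
            child += rng.gauss(0, (hi - lo) * 0.02)  # small Gaussian mutation
            children.append(min(hi, max(lo, child))) # respect operating bounds
        pop = parents + children
    return min(pop, key=cost)

# Hypothetical operating-cost curve with its minimum at 4 MW of DG injection:
best = genetic_minimize(lambda p: (p - 4.0) ** 2 + 10.0, 0.0, 10.0)
```

In the dissertation's setting the cost function would embed the power-flow model and security constraints, which is where the improved linear models of the lower layers come in.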
Reproducible Large-Scale Neuroimaging Studies with the OpenMOLE Workflow Management System.
Passerat-Palmbach, Jonathan; Reuillon, Romain; Leclaire, Mathieu; Makropoulos, Antonios; Robinson, Emma C; Parisot, Sarah; Rueckert, Daniel
2017-01-01
OpenMOLE is a scientific workflow engine with a strong emphasis on workload distribution. Workflows are designed using a high level Domain Specific Language (DSL) built on top of Scala. It exposes natural parallelism constructs to easily delegate the workload resulting from a workflow to a wide range of distributed computing environments. OpenMOLE hides the complexity of designing complex experiments thanks to its DSL. Users can embed their own applications and scale their pipelines from a small prototype running on their desktop computer to a large-scale study harnessing distributed computing infrastructures, simply by changing a single line in the pipeline definition. The construction of the pipeline itself is decoupled from the execution context. The high-level DSL abstracts the underlying execution environment, contrary to classic shell-script based pipelines. These two aspects allow pipelines to be shared and studies to be replicated across different computing environments. Workflows can be run as traditional batch pipelines or coupled with OpenMOLE's advanced exploration methods in order to study the behavior of an application, or perform automatic parameter tuning. In this work, we briefly present the strong assets of OpenMOLE and detail recent improvements targeting re-executability of workflows across various Linux platforms. We have tightly coupled OpenMOLE with CARE, a standalone containerization solution that allows re-executing on a Linux host any application that has been packaged on another Linux host previously. The solution is evaluated against a Python-based pipeline involving packages such as scikit-learn as well as binary dependencies. All were packaged and re-executed successfully on various HPC environments, with identical numerical results (here prediction scores) obtained on each environment. Our results show that the pair formed by OpenMOLE and CARE is a reliable solution to generate reproducible results and re-executable pipelines. 
A demonstration of the flexibility of our solution showcases three neuroimaging pipelines harnessing distributed computing environments as heterogeneous as local clusters or the European Grid Infrastructure (EGI).
Extracting Message Inter-Departure Time Distributions from the Human Electroencephalogram
Mišić, Bratislav; Vakorin, Vasily A.; Kovačević, Nataša; Paus, Tomáš; McIntosh, Anthony R.
2011-01-01
The complex connectivity of the cerebral cortex is a topic of much study, yet the link between structure and function is still unclear. The processing capacity and throughput of information at individual brain regions remains an open question and one that could potentially bridge these two aspects of neural organization. The rate at which information is emitted from different nodes in the network and how this output process changes under different external conditions are general questions that are not unique to neuroscience, but are of interest in multiple classes of telecommunication networks. In the present study we show how some of these questions may be addressed using tools from telecommunications research. An important system statistic for modeling and performance evaluation of distributed communication systems is the time between successive departures of units of information at each node in the network. We describe a method to extract and fully characterize the distribution of such inter-departure times from the resting-state electroencephalogram (EEG). We show that inter-departure times are well fitted by the two-parameter Gamma distribution. Moreover, they are not spatially or neurophysiologically trivial and instead are regionally specific and sensitive to the presence of sensory input. In both the eyes-closed and eyes-open conditions, inter-departure time distributions were more dispersed over posterior parietal channels, close to regions which are known to have the most dense structural connectivity. The biggest differences between the two conditions were observed at occipital sites, where inter-departure times were significantly more variable in the eyes-open condition. Together, these results suggest that message departure times are indicative of network traffic and capture a novel facet of neural activity. PMID:21673866
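The two-parameter Gamma fit can be illustrated with method-of-moments estimators, which follow directly from the Gamma mean k*theta and variance k*theta^2 (this is a generic estimation sketch on synthetic data, not the fitting procedure used in the study):

```python
import random

def gamma_moments_fit(samples):
    """Method-of-moments estimates for the two-parameter Gamma distribution:
    shape k = mean**2 / variance, scale theta = variance / mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean ** 2 / var, var / mean

rng = random.Random(0)
# Synthetic inter-departure times drawn from Gamma(shape=2, scale=0.5):
data = [rng.gammavariate(2.0, 0.5) for _ in range(20000)]
k_hat, theta_hat = gamma_moments_fit(data)
```

A larger fitted scale (with shape held fixed) corresponds to more dispersed inter-departure times, which is the kind of regional difference the study reports between eyes-open and eyes-closed conditions.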
A numerical differentiation library exploiting parallel architectures
NASA Astrophysics Data System (ADS)
Voglis, C.; Hadjidoukas, P. E.; Lagaris, I. E.; Papageorgiou, D. G.
2009-08-01
We present a software library for numerically estimating first and second order partial derivatives of a function by finite differencing. Various truncation schemes are offered, resulting in corresponding formulas that are accurate to order O(h), O(h²), and O(h⁴), h being the differencing step. The derivatives are calculated via forward, backward and central differences. Care has been taken that only feasible points are used in the case where bound constraints are imposed on the variables. The Hessian may be approximated either from function or from gradient values. There are three versions of the software: a sequential version, an OpenMP version for shared memory architectures and an MPI version for distributed systems (clusters). The parallel versions exploit the multiprocessing capability offered by computer clusters, as well as modern multi-core systems, and due to the independent character of the derivative computation, the speedup scales almost linearly with the number of available processors/cores.
Program summary
Program title: NDL (Numerical Differentiation Library)
Catalogue identifier: AEDG_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDG_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 73 030
No. of bytes in distributed program, including test data, etc.: 630 876
Distribution format: tar.gz
Programming language: ANSI FORTRAN-77, ANSI C, MPI, OpenMP
Computer: Distributed systems (clusters), shared memory systems
Operating system: Linux, Solaris
Has the code been vectorised or parallelized?: Yes
RAM: The library uses O(N) internal storage, N being the dimension of the problem
Classification: 4.9, 4.14, 6.5
Nature of problem: The numerical estimation of derivatives at several accuracy levels is a common requirement in many computational tasks, such as optimization, solution of nonlinear systems, etc. A parallel implementation that exploits systems with multiple CPUs is very important for large scale and computationally expensive problems.
Solution method: Finite differencing is used with a carefully chosen step that minimizes the sum of the truncation and round-off errors. The parallel versions employ both OpenMP and MPI libraries.
Restrictions: The library uses only double precision arithmetic.
Unusual features: The software takes into account bound constraints, in the sense that only feasible points are used to evaluate the derivatives, and given the level of the desired accuracy, the proper formula is automatically employed.
Running time: Running time depends on the function's complexity. The test run took 15 ms for the serial distribution, 0.6 s for the OpenMP and 4.2 s for the MPI parallel distribution on 2 processors.
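The central-difference idea behind such a library can be sketched in a few lines. This simplified Python version (not the NDL interface itself) uses the eps**(1/3) step rule that balances truncation against round-off error:

```python
import numpy as np

def central_gradient(f, x):
    """O(h^2) central-difference gradient with a per-component step near
    eps**(1/3) * max(1, |x_i|), balancing truncation and round-off error."""
    x = np.asarray(x, dtype=float)
    g = np.empty_like(x)
    for i in range(x.size):
        h = np.finfo(float).eps ** (1.0 / 3.0) * max(1.0, abs(x[i]))
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

g = central_gradient(lambda v: v[0] ** 2 + 3.0 * v[1], [1.0, 2.0])  # ≈ [2., 3.]
```

As in the library, each component's difference is independent of the others, which is why the computation parallelizes almost perfectly.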
openBIS: a flexible framework for managing and analyzing complex data in biology research
2011-01-01
Background Modern data generation techniques used in distributed systems biology research projects often create datasets of enormous size and diversity. We argue that in order to overcome the challenge of managing those large quantitative datasets and maximise the biological information extracted from them, a sound information system is required. Ease of integration with data analysis pipelines and other computational tools is a key requirement for it. Results We have developed openBIS, an open source software framework for constructing user-friendly, scalable and powerful information systems for data and metadata acquired in biological experiments. openBIS enables users to collect, integrate, share, publish data and to connect to data processing pipelines. This framework can be extended and has been customized for different data types acquired by a range of technologies. Conclusions openBIS is currently being used by several SystemsX.ch and EU projects applying mass spectrometric measurements of metabolites and proteins, High Content Screening, or Next Generation Sequencing technologies. The attributes that make it interesting to a large research community involved in systems biology projects include versatility, simplicity in deployment, scalability to very large data, flexibility to handle any biological data type and extensibility to the needs of any research domain. PMID:22151573
Theoretical Framework for Integrating Distributed Energy Resources into Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lian, Jianming; Wu, Di; Kalsi, Karanjit
This paper focuses on developing a novel theoretical framework for effective coordination and control of a large number of distributed energy resources in distribution systems in order to more reliably manage the future U.S. electric power grid under the high penetration of renewable generation. The proposed framework provides a systematic view of the overall structure of the future distribution systems along with the underlying information flow, functional organization, and operational procedures. It is characterized by being open, flexible, and interoperable, with the potential to support dynamic system configuration. Under the proposed framework, the energy consumption of various DERs is coordinated and controlled in a hierarchical way by using market-based approaches. Real-time voltage control is simultaneously considered to complement the real power control in order to keep nodal voltages stable within acceptable ranges in real time. In addition, computational challenges associated with the proposed framework are also discussed with recommended practices.
Choi, James J.; Wang, Shougang; Tung, Yao-Sheng; Morrison, Barclay; Konofagou, Elisa E.
2009-01-01
Focused ultrasound (FUS) is hereby shown to noninvasively and selectively deliver compounds at pharmacologically relevant molecular weights through the opened blood-brain barrier (BBB). The size of the FUS-induced BBB opening, the spatial distribution of the delivered agents, and their dependence on the agent's molecular weight were imaged and quantified using fluorescence microscopy. BBB opening in mice (n=13) was achieved in vivo after systemic administration of microbubbles and subsequent application of pulsed FUS (frequency: 1.525 MHz, peak-rarefactional pressure in situ: 569 kPa) to the left murine hippocampus through the intact skin and skull. BBB-impermeant, fluorescent-tagged dextrans at three distinct molecular weights spanning several orders of magnitude were systemically administered and acted as model therapeutic compounds. First, dextrans of 3 and 70 kDa were delivered trans-BBB while 2000 kDa dextran was not. Second, compared to 70 kDa dextran, a higher concentration of 3 kDa dextran was delivered through the opened BBB. Third, the 3 and 70 kDa dextrans were both diffusely distributed throughout the targeted brain region. However, high concentrations of 70 kDa dextran appeared more punctate throughout the targeted region. In conclusion, FUS combined with microbubbles opened the BBB sufficiently to allow passage of compounds of at least 70 kDa, but not greater than 2000 kDa, into the brain parenchyma. This noninvasive and localized BBB opening technique could thus provide a unique means for the delivery of compounds of several magnitudes of kDa that include agents with shown therapeutic promise in vitro, but whose in vivo translation has been hampered by their associated BBB impermeability. PMID:19900750
Accounting and Accountability for Distributed and Grid Systems
NASA Technical Reports Server (NTRS)
Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.
2001-01-01
While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.
2013-01-01
Background The openEHR project and the closely related ISO 13606 standard have defined structures supporting the content of Electronic Health Records (EHRs). However, there is not yet any finalized openEHR specification of a service interface to aid application developers in creating, accessing, and storing the EHR content. The aim of this paper is to explore how the Representational State Transfer (REST) architectural style can be used as a basis for a platform-independent, HTTP-based openEHR service interface. Associated benefits and tradeoffs of such a design are also explored. Results The main contribution is the formalization of the openEHR storage, retrieval, and version-handling semantics and related services into an implementable HTTP-based service interface. The modular design makes it possible to prototype, test, replicate, distribute, cache, and load-balance the system using ordinary web technology. Other contributions are approaches to query and retrieval of the EHR content that take caching, logging, and distribution into account. Triggering on EHR change events is also explored. A final contribution is an open source openEHR implementation using the above-mentioned approaches to create LiU EEE, an educational EHR environment intended to help newcomers and developers experiment with and learn about the archetype-based EHR approach and enable rapid prototyping. Conclusions Using REST addressed many architectural concerns in a successful way, but an additional messaging component was needed to address some architectural aspects. Many of our approaches are likely of value to other archetype-based EHR implementations and may contribute to associated service model specifications. PMID:23656624
Sundvall, Erik; Nyström, Mikael; Karlsson, Daniel; Eneling, Martin; Chen, Rong; Örman, Håkan
2013-05-09
The openEHR project and the closely related ISO 13606 standard have defined structures supporting the content of Electronic Health Records (EHRs). However, there is not yet any finalized openEHR specification of a service interface to aid application developers in creating, accessing, and storing the EHR content. The aim of this paper is to explore how the Representational State Transfer (REST) architectural style can be used as a basis for a platform-independent, HTTP-based openEHR service interface. Associated benefits and tradeoffs of such a design are also explored. The main contribution is the formalization of the openEHR storage, retrieval, and version-handling semantics and related services into an implementable HTTP-based service interface. The modular design makes it possible to prototype, test, replicate, distribute, cache, and load-balance the system using ordinary web technology. Other contributions are approaches to query and retrieval of the EHR content that take caching, logging, and distribution into account. Triggering on EHR change events is also explored. A final contribution is an open source openEHR implementation using the above-mentioned approaches to create LiU EEE, an educational EHR environment intended to help newcomers and developers experiment with and learn about the archetype-based EHR approach and enable rapid prototyping. Using REST addressed many architectural concerns in a successful way, but an additional messaging component was needed to address some architectural aspects. Many of our approaches are likely of value to other archetype-based EHR implementations and may contribute to associated service model specifications.
NASA Technical Reports Server (NTRS)
Hart, Andrew F.; Verma, Rishi; Mattmann, Chris A.; Crichton, Daniel J.; Kelly, Sean; Kincaid, Heather; Hughes, Steven; Ramirez, Paul; Goodale, Cameron; Anton, Kristen;
2012-01-01
For the past decade, the NASA Jet Propulsion Laboratory, in collaboration with Dartmouth University has served as the center for informatics for the Early Detection Research Network (EDRN). The EDRN is a multi-institution research effort funded by the U.S. National Cancer Institute (NCI) and tasked with identifying and validating biomarkers for the early detection of cancer. As the distributed network has grown, increasingly formal processes have been developed for the acquisition, curation, storage, and dissemination of heterogeneous research information assets, and an informatics infrastructure has emerged. In this paper we discuss the evolution of EDRN informatics, its success as a mechanism for distributed information integration, and the potential sustainability and reuse benefits of emerging efforts to make the platform components themselves open source. We describe our experience transitioning a large closed-source software system to a community driven, open source project at the Apache Software Foundation, and point to lessons learned that will guide our present efforts to promote the reuse of the EDRN informatics infrastructure by a broader community.
THE AUTOMATED GEOSPATIAL WATERSHED ASSESSMENT TOOL
A toolkit for distributed hydrologic modeling at multiple scales using a geographic information system is presented. This open-source, freely available software was developed through a collaborative endeavor involving two Universities and two government agencies. Called the Auto...
Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Chen, Yousu; Wu, Di
2015-12-09
Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with a single-processor based dynamic simulation solution. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation using Open Multi-processing (OpenMP) on a shared-memory platform, and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performances for running parallel dynamic simulation are compared and demonstrated.
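As a toy analogue of the shared-memory variant: within one integration step the per-generator equations are independent, so they can be farmed out to worker threads (standing in for OpenMP). The swing-equation model and all parameter values below are illustrative, not taken from the paper:

```python
from concurrent.futures import ThreadPoolExecutor

def euler_step(state, p_mech=1.0, p_elec=0.8, damping=0.1, dt=0.01):
    """One explicit-Euler step of a classical per-unit swing equation."""
    delta, omega = state
    new_delta = delta + dt * omega
    new_omega = omega + dt * (p_mech - p_elec - damping * omega)
    return (new_delta, new_omega)

def step_all(states, workers=4):
    # Each generator's equations advance independently within a time step,
    # so the loop over machines parallelizes cleanly
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(euler_step, states))

new_states = step_all([(0.0, 0.0)] * 3)
```

An MPI version would instead partition the machine list across ranks and exchange boundary (network) quantities each step, which is where the two architectures genuinely differ.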
NASA Technical Reports Server (NTRS)
Utz, Hans Heinrich
2011-01-01
This talk gives an overview of the the Robot Applications Programmers Interface Delegate (RAPID) as well as the distributed systems middleware Data Distribution Service (DDS). DDS is an open software standard, RAPID is cleared for open-source release under NOSA. RAPID specifies data-structures and semantics for high-level telemetry published by NASA robotic software. These data-structures are supported by multiple robotic platforms at Johnson Space Center (JSC), Jet Propulsion Laboratory (JPL) and Ames Research Center (ARC), providing high-level interoperability between those platforms. DDS is used as the middleware for data transfer. The feature set of the middleware heavily influences the design decision made in the RAPID specification. So it is appropriate to discuss both in this introductory talk.
NOMADS-NOAA Operational Model Archive and Distribution System
On distributed wavefront reconstruction for large-scale adaptive optics systems.
de Visser, Cornelis C; Brunner, Elisabeth; Verhaegen, Michel
2016-05-01
The distributed-spline-based aberration reconstruction (D-SABRE) method is proposed for distributed wavefront reconstruction with applications to large-scale adaptive optics systems. D-SABRE decomposes the wavefront sensor domain into any number of partitions and solves a local wavefront reconstruction problem on each partition using multivariate splines. D-SABRE accuracy is within 1% of a global approach with a speedup that scales quadratically with the number of partitions. The D-SABRE is compared to the distributed cumulative reconstruction (CuRe-D) method in open-loop and closed-loop simulations using the YAO adaptive optics simulation tool. D-SABRE accuracy exceeds CuRe-D for low levels of decomposition, and D-SABRE proved to be more robust to variations in the loop gain.
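The partition-then-stitch idea can be illustrated with a drastically simplified 1-D analogue: cumulative integration of slope measurements on each partition, stitched by matching boundary values. This is not D-SABRE's multivariate-spline method, only a sketch of the decomposition principle:

```python
import numpy as np

def reconstruct_partitioned(slopes, n_parts):
    """Integrate slopes locally on each partition, then stitch partitions
    together by matching their shared boundary values."""
    chunks = np.array_split(slopes, n_parts)
    phases, offset = [], 0.0
    for c in chunks:
        local = np.concatenate(([0.0], np.cumsum(c))) + offset  # local solve
        phases.append(local)
        offset = local[-1]  # next partition starts at this boundary value
    # adjacent partitions share a boundary sample; drop the duplicates
    return np.concatenate([p[:-1] for p in phases[:-1]] + [phases[-1]])

phi_true = np.array([0.0, 1.0, 0.5, 2.0, 3.0, 2.5, 4.0])
phi_rec = reconstruct_partitioned(np.diff(phi_true), n_parts=3)
```

The local solves are independent, which is the source of the speedup with partition count; only the stitching step couples the partitions.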
Distributed numerical controllers
NASA Astrophysics Data System (ADS)
Orban, Peter E.
2001-12-01
While the basic principles of Numerical Controllers (NC) have not changed much over the years, the implementation of NCs has changed tremendously. NC equipment has evolved from yesterday's hard-wired specialty control apparatus to today's graphics-intensive, networked, increasingly PC-based open systems, controlling a wide variety of industrial equipment with positioning needs. One of the newest trends in NC technology is the distributed implementation of the controllers. Distributed implementation promises to offer robustness, lower implementation costs, and a scalable architecture. Historically, partitioning has been done along the hierarchical levels, moving individual modules into self-contained units. The paper discusses various NC architectures, the underlying technology for distributed implementation, and relevant design issues. First, the functional requirements of individual NC modules are analyzed. Module functionality, cycle times, and data requirements are examined. Next, the infrastructure for distributed node implementation is reviewed. Various communication protocols and distributed real-time operating system issues are investigated and compared. Finally, a different, vertical system partitioning, offering true scalability and reconfigurability, is presented.
NASA Technical Reports Server (NTRS)
Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond
2001-01-01
The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a COTS-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.
NASA Technical Reports Server (NTRS)
Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond; Schkolnik, Gerald (Technical Monitor)
1998-01-01
The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a commercial off-the-shelf (COTS)-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.
Fall 2014 SEI Research Review Edge-Enabled Tactical Systems (EETS)
2014-10-29
Effective communication and reasoning despite connectivity issues • More generally, how to make programming distributed algorithms with extensible... distributed collaboration in VREP simulations for 5-12 quadcopters and ground robots • Open-source middleware and algorithms released to community... Integration into CMU Drone-RK quadcopter and Platypus autonomous boat platforms • Presentations at DARPA (CODE), AFRL C4I Workshop, and AFRL Eglin
Open source tools for large-scale neuroscience.
Freeman, Jeremy
2015-06-01
New technologies for monitoring and manipulating the nervous system promise exciting biology but pose challenges for analysis and computation. Solutions can be found in the form of modern approaches to distributed computing, machine learning, and interactive visualization. But embracing these new technologies will require a cultural shift: away from independent efforts and proprietary methods and toward an open source and collaborative neuroscience. Copyright © 2015 The Author. Published by Elsevier Ltd. All rights reserved.
Cloud-Based Distributed Control of Unmanned Systems
2015-04-01
during mission execution. At best, the data is saved onto hard-drives and is accessible only by the local team. Data history in a form available and... following open source technologies: GeoServer, OpenLayers, PostgreSQL, and PostGIS are chosen to implement the back-end database and server. A brief... geospatial map data. 3. PostgreSQL: An SQL-compliant object-relational database that easily scales to accommodate large amounts of data - upwards to
Ko, Heasin; Choi, Byung-Seok; Choe, Joong-Seon; Kim, Kap-Joong; Kim, Jong-Hoi; Youn, Chun Ju
2017-08-21
Most polarization-based BB84 quantum key distribution (QKD) systems utilize multiple lasers to generate one of four polarization quantum states randomly. However, random bit generation with multiple lasers can potentially open critical side channels that significantly endanger the security of QKD systems. In this paper, we show unnoticed side channels of temporal disparity and intensity fluctuation, which possibly exist in the operation of multiple semiconductor laser diodes. Experimental results show that the side channels can enormously degrade security performance of QKD systems. An important system issue for the improvement of the quantum bit error rate (QBER), related to the laser driving conditions, is further addressed with experimental results.
NASA Astrophysics Data System (ADS)
Schneider, Uwe; Strack, Ruediger
1992-04-01
apART reflects the structure of an open, distributed environment. According to the general trend in the area of imaging, network-capable, general purpose workstations with capabilities of open system image communication and image input are used. Several heterogeneous components like CCD cameras, slide scanners, and image archives can be accessed. The system is driven by an object-oriented user interface where devices (image sources and destinations), operators (derived from a commercial image processing library), and images (of different data types) are managed and presented uniformly to the user. Browsing mechanisms are used to traverse devices, operators, and images. An audit trail mechanism is offered to record interactive operations on low-resolution image derivatives. These operations are processed off-line on the original image. Thus, the processing of extremely high-resolution raster images is possible, and the performance of resolution dependent operations is enhanced significantly during interaction. An object-oriented database system (APRIL), which can be browsed, is integrated into the system. Attribute retrieval is supported by the user interface. Other essential features of the system include: implementation on top of the X Window System (X11R4) and the OSF/Motif widget set; a SUN4 general purpose workstation, including Ethernet, magneto-optical disc, etc., as the hardware platform for the user interface; complete graphical-interactive parametrization of all operators; support of different image interchange formats (GIF, TIFF, IIF, etc.); consideration of current IPI standard activities within ISO/IEC for further refinement and extensions.
TEAM (Technologies Enabling Agile Manufacturing) shop floor control requirements guide: Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-03-28
TEAM will create a shop floor control system (SFC) to link the pre-production planning to shop floor execution. SFC must meet the requirements of a multi-facility corporation, where control must be maintained between co-located facilities down to individual workstations within each facility. SFC must also meet the requirements of a small corporation, where there may only be one small facility. A hierarchical architecture is required to meet these diverse needs. The hierarchy contains the following levels: Enterprise, Factory, Cell, Station, and Equipment. SFC is focused on the top three levels. Each level of the hierarchy is divided into three basic functions: Scheduler, Dispatcher, and Monitor. The requirements of each function depend on the hierarchical level in which it is to be used. For example, the scheduler at the Enterprise level must allocate production to individual factories and assign due-dates; the scheduler at the Cell level must provide detailed start and stop times of individual operations. Finally, the system shall have the following features: distributed and open-architecture. Open architecture software is required in order that the appropriate technology be used at each level of the SFC hierarchy, and even at different instances within the same hierarchical level (for example, Factory A uses discrete-event simulation scheduling software, and Factory B uses an optimization-based scheduler). A distributed implementation is required to reduce the computational burden of the overall system, and allow for localized control. A distributed, open-architecture implementation will also require standards for communication between hierarchical levels.
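A minimal sketch of the hierarchical scheduling idea described above. All class and method names are hypothetical, not from the TEAM specification, and round-robin allocation stands in for whatever scheduler a given level would actually plug in:

```python
from dataclasses import dataclass, field

@dataclass
class ControlLevel:
    """One node in the Enterprise/Factory/Cell hierarchy."""
    name: str
    children: list = field(default_factory=list)

    def schedule(self, jobs):
        # Leaf levels execute jobs themselves; higher levels allocate them to
        # children. Because the architecture is open, each level (or factory)
        # could substitute its own scheduler implementation here.
        if not self.children:
            return {self.name: list(jobs)}
        allocation = {}
        for i, job in enumerate(jobs):
            child = self.children[i % len(self.children)]
            allocation.setdefault(child.name, []).append(job)
        return allocation

enterprise = ControlLevel("Enterprise",
                          [ControlLevel("FactoryA"), ControlLevel("FactoryB")])
plan = enterprise.schedule(["J1", "J2", "J3", "J4"])
```

In a distributed deployment each `ControlLevel` would run on its own host, with the inter-level calls replaced by the standardized communication the requirements call for.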
Distributed Virtual System (DIVIRS) Project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on contract NCC 2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to program parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the virtual system model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1994-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
DIstributed VIRtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, Clifford B.
1995-01-01
As outlined in our continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC2-539, we are (1) developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; (2) developing communications routines that support the abstractions implemented in item one; (3) continuing the development of file and information systems based on the Virtual System Model; and (4) incorporating appropriate security measures to allow the mechanisms developed in items 1 through 3 to be used on an open network. The goal throughout our work is to provide a uniform model that can be applied to both parallel and distributed systems. We believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. Our work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Distributed Virtual System (DIVIRS) project
NASA Technical Reports Server (NTRS)
Schorr, Herbert; Neuman, B. Clifford
1993-01-01
As outlined in the continuation proposal 92-ISI-50R (revised) on NASA cooperative agreement NCC 2-539, the investigators are developing software, including a system manager and a job manager, that will manage available resources and that will enable programmers to develop and execute parallel applications in terms of a virtual configuration of processors, hiding the mapping to physical nodes; developing communications routines that support the abstractions implemented; continuing the development of file and information systems based on the Virtual System Model; and incorporating appropriate security measures to allow the mechanisms developed to be used on an open network. The goal throughout the work is to provide a uniform model that can be applied to both parallel and distributed systems. The authors believe that multiprocessor systems should exist in the context of distributed systems, allowing them to be more easily shared by those that need them. The work provides the mechanisms through which nodes on multiprocessors are allocated to jobs running within the distributed system and the mechanisms through which files needed by those jobs can be located and accessed.
Calvani, Dario; Cuccoli, Alessandro; Gidopoulos, Nikitas I; Verrucchi, Paola
2013-04-23
The behavior of most physical systems is affected by their natural surroundings. A quantum system with an environment is referred to as open, and its study varies according to the classical or quantum description adopted for the environment. We propose an approach to open quantum systems that allows us to follow the cross-over from quantum to classical environments; to achieve this, we devise an exact parametric representation of the principal system, based on generalized coherent states for the environment. The method is applied to the s = 1/2 Heisenberg star with frustration, where the quantum character of the environment varies with the couplings entering the Hamiltonian H. We find that when the star is in an eigenstate of H, the central spin behaves as if it were in an effective magnetic field, pointing in the direction set by the environmental coherent-state angle variables (θ, ϕ), and broadened according to their quantum probability distribution. Such distribution is independent of ϕ, whereas, as a function of θ, it is seen to get narrower as the quantum character of the environment is reduced, collapsing into a Dirac-δ function in the classical limit. In this limit, because ϕ is left undetermined, the Von Neumann entropy of the central spin remains finite; in fact, it is equal to the entanglement of the original fully quantum model, a result that establishes a relation between this latter quantity and the Berry phase characterizing the dynamics of the central spin in the effective magnetic field.
A distributed system for fast alignment of next-generation sequencing data.
Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D
2010-12-01
We developed a scalable distributed computing system using the Berkeley Open Infrastructure for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite these benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to those of a microarray sample. Results indicate that the distributed alignment system achieves an approximately linear speed-up and correctly distributes sequence data to, and gathers alignment results from, multiple compute clients.
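The master/worker division of labor described in this abstract can be sketched in a few lines. This is an illustrative sketch only, not the paper's software: the toy exact-match "aligner" stands in for a real NGS alignment algorithm, and an in-process thread pool stands in for remote BOINC clients; all names are invented.

```python
# Master/worker sketch: split reads into work units, farm them out,
# and merge the per-chunk alignment results on the master.
from concurrent.futures import ThreadPoolExecutor

def align_chunk(reads, reference):
    """Toy aligner: report the first exact-match position (-1 = unmapped)."""
    return {read: reference.find(read) for read in reads}

def distribute_alignment(reads, reference, n_workers=4):
    """Split reads into work units, dispatch them, and merge the results."""
    chunks = [reads[i::n_workers] for i in range(n_workers)]
    merged = {}
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        for partial in pool.map(align_chunk, chunks, [reference] * n_workers):
            merged.update(partial)
    return merged

reference = "ACGTACGTTTGACCA"
reads = ["ACGT", "TTGA", "CCA", "GGG"]
hits = distribute_alignment(reads, reference, n_workers=2)
```

Because each work unit is independent, the wall-clock time scales roughly as T1/n for n workers, which is the near-linear speed-up the abstract reports.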
High-uniformity centimeter-wide Si etching method for MEMS devices with large opening elements
NASA Astrophysics Data System (ADS)
Okamoto, Yuki; Tohyama, Yukiya; Inagaki, Shunsuke; Takiguchi, Mikio; Ono, Tomoki; Lebrasseur, Eric; Mita, Yoshio
2018-04-01
We propose a compensated mesh pattern filling method to achieve highly uniform deep wafer etching (over hundreds of microns) with a large-area opening (over a centimeter). The mesh opening diameter is gradually changed between the center and the edge of a large etching area. With such a design, the etching depth distribution that depends on sidewall distance (known as the local loading effect) inversely compensates for the over-centimeter-scale etching depth distribution, known as the global or within-die(chip)-scale loading effect. A single DRIE run with test structure patterns provides a micro-electromechanical systems (MEMS) designer with the etched depth dependence on the mesh opening size as well as on the distance from the chip edge, and the designer only has to set the opening size so as to obtain a uniform etching depth over the entire chip. This method is useful when process optimization cannot be performed, such as when using the standard conditions of a foundry service or in short turn-around-time prototyping. As a demonstration, a large MEMS mirror that needed over 1 cm² of backside etching was successfully fabricated using as-is-provided DRIE conditions.
National Utility Rate Database: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ong, S.; McKeel, R.
2012-08-01
When modeling solar energy technologies and other distributed energy systems, high-quality, comprehensive electricity rate data are essential. The National Renewable Energy Laboratory (NREL) developed a utility rate platform for entering, storing, updating, and accessing a large collection of utility rates from around the United States. This utility rate platform lives on the Open Energy Information (OpenEI) website, OpenEI.org, allowing the data to be accessed from a web browser or programmatically through an application programming interface (API). The semantic-based utility rate platform currently holds records for 1,885 utility rates and covers over 85% of the electricity consumption in the United States.
Karpievitch, Yuliya V; Almeida, Jonas S
2006-01-01
Background: Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. Results: mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Conclusion: Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. 
Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet. PMID:16539707
Karpievitch, Yuliya V; Almeida, Jonas S
2006-03-15
Matlab, a powerful and productive language that allows for rapid prototyping, modeling and simulation, is widely used in computational biology. Modeling and simulation of large biological systems often require more computational resources than are available on a single computer. Existing distributed computing environments like the Distributed Computing Toolbox, MatlabMPI, Matlab*G and others allow for the remote (and possibly parallel) execution of Matlab commands with varying support for features like an easy-to-use application programming interface, load-balanced utilization of resources, extensibility over the wide area network, and minimal system administration skill requirements. However, all of these environments require some level of access to participating machines to manually distribute the user-defined libraries that the remote call may invoke. mGrid augments the usual process distribution seen in other similar distributed systems by adding facilities for user code distribution. mGrid's client-side interface is an easy-to-use native Matlab toolbox that transparently executes user-defined code on remote machines (i.e. the user is unaware that the code is executing somewhere else). Run-time variables are automatically packed and distributed with the user-defined code, and automated load-balancing of remote resources enables smooth concurrent execution. mGrid is an open source environment. Apart from the programming language itself, all other components are also open source, freely available tools: light-weight PHP scripts and the Apache web server. Transparent, load-balanced distribution of user-defined Matlab toolboxes and rapid prototyping of many simple parallel applications can now be done with a single easy-to-use Matlab command. Because mGrid utilizes only Matlab, light-weight PHP scripts and the Apache web server, installation and configuration are very simple. 
Moreover, the web-based infrastructure of mGrid allows it to be easily extended over the Internet.
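The core idea behind mGrid's user-code distribution — pack the user's code together with its run-time variables, ship the bundle to a worker, execute it there, and return the result — can be sketched as follows. This is a conceptual sketch in Python rather than Matlab, and the function names and wire format are invented, not mGrid's.

```python
# Sketch of code-plus-variables distribution: the client packs user code
# with its run-time variables; the worker unpacks and executes the bundle.
import pickle

def pack_job(source, **variables):
    """Client side: bundle user code and its run-time variables."""
    return pickle.dumps({"source": source, "vars": variables})

def run_job(payload):
    """Worker side: unpack, execute, and return the value bound to 'result'."""
    job = pickle.loads(payload)
    scope = dict(job["vars"])
    exec(job["source"], scope)   # the user code runs in its shipped scope
    return scope["result"]

user_code = "result = sum(x * w for x, w in zip(data, weights))"
payload = pack_job(user_code, data=[1.0, 2.0, 3.0], weights=[0.2, 0.3, 0.5])
value = run_job(payload)   # in mGrid this step happens on a remote machine
```

Shipping the variables alongside the code is what removes the need for manual pre-installation of user libraries on every participating machine.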
Application of enthalpy model for floating zone silicon crystal growth
NASA Astrophysics Data System (ADS)
Krauze, A.; Bergfelds, K.; Virbulis, J.
2017-09-01
A simplified 2D crystal growth model based on the enthalpy method, coupled with a low-frequency harmonic electromagnetic model, is developed to simulate silicon crystal growth near the external triple point (ETP) and crystal melting on the open melting front of a polycrystalline feed rod in FZ crystal growth systems. Simulations of crystal growth near the ETP show a significant influence of inhomogeneities in the EM power distribution on the crystal growth rate for a 4-inch floating zone (FZ) system. The generated growth rate fluctuations are shown to be larger in the system with the higher crystal pull rate. Simulations of crystal melting on the open melting front of the polycrystalline rod show the development of melt-filled grooves at the open melting front surface. The distance between the grooves is shown to grow with increasing skin-layer depth in the solid material.
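The enthalpy method at the heart of such models can be illustrated with a minimal 1-D sketch: enthalpy H is the conserved variable, temperature is recovered from H, and the melt front emerges wherever H sits on the latent-heat plateau. The material numbers below are arbitrary placeholders, not silicon's, and the scheme is a plain explicit one rather than the paper's coupled model.

```python
# 1-D enthalpy-method sketch: a hot boundary melts an initially solid bar.
n, dx, dt = 20, 0.1, 0.002         # grid cells, spacing, explicit time step
c, k, L, Tm = 1.0, 1.0, 1.0, 0.0   # heat capacity, conductivity, latent heat, melt T

def temperature(H):
    """Invert the enthalpy-temperature relation (plateau at Tm = mushy zone)."""
    if H < 0.0:
        return Tm + H / c           # fully solid
    if H > L:
        return Tm + (H - L) / c     # fully liquid
    return Tm                       # melting in progress

H = [-0.5 * c] * n                  # start as solid, 0.5 below the melt point
T_hot = 1.0                         # fixed temperature at the left boundary
for _ in range(500):
    T = [temperature(h) for h in H]
    Tpad = [T_hot] + T + [T[-1]]    # Dirichlet left, insulated right
    H = [h + dt * k * (Tpad[i] - 2.0 * Tpad[i + 1] + Tpad[i + 2]) / dx**2
         for i, h in enumerate(H)]

T = [temperature(h) for h in H]     # melt front = first cell still at/below Tm
```

The appeal of the method, as in the abstract above, is that the phase boundary never has to be tracked explicitly: it falls out of the enthalpy field.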
Standardization as an Arena for Open Innovation
NASA Astrophysics Data System (ADS)
Grøtnes, Endre
This paper argues that anticipatory standardization can be viewed as an arena for open innovation and shows this through two cases from mobile telecommunication standardization. One case is the Android initiative by Google and the Open Handset Alliance, while the second case is the general standardization work of the Open Mobile Alliance. The paper shows how anticipatory standardization intentionally uses inbound and outbound streams of research and intellectual property to create new innovations. This is at the heart of the open innovation model. The standardization activities use both pooling of R&D and the distribution of freely available toolkits to create products and architectures that can be utilized by the participants and third parties to leverage their innovation. The paper shows that the technology being standardized needs to have a systemic nature to be part of an open innovation process.
OpenStereo: Open Source, Cross-Platform Software for Structural Geology Analysis
NASA Astrophysics Data System (ADS)
Grohmann, C. H.; Campanha, G. A.
2010-12-01
Free and open source software (FOSS) is increasingly seen as synonymous with innovation and progress. The freedom to run, copy, distribute, study, change and improve the software (through access to the source code) assures a high level of positive feedback between users and developers, which results in stable, secure and constantly updated systems. Several software packages for structural geology analysis are available to the user, either under commercial licenses or downloadable at no cost from the Internet. Some provide basic stereographic projection tools such as plotting poles, great circles, density contouring, eigenvector analysis and data rotation, while others perform more specific tasks, such as paleostress or geotechnical/rock stability analysis. This variety also means a wide range of input data formats, Graphical User Interface (GUI) designs and graphic export formats. The majority of packages are built for MS-Windows, and even though there are packages for the UNIX-based MacOS, there are no native packages for *nix (UNIX, Linux, BSD etc.) operating systems (OS), forcing users to run these programs with emulators or virtual machines. These limitations led us to develop OpenStereo, an open source, cross-platform software package for stereographic projections and structural geology. The software is written in Python, a high-level, cross-platform programming language, and the GUI is designed with wxPython, which provides a consistent look regardless of the OS. Numeric operations (such as matrix and linear algebra) are performed with the Numpy module, and all graphic capabilities are provided by the Matplotlib library, including on-screen plotting and graphic export to common desktop formats (emf, eps, ps, pdf, png, svg). Data input is done with simple ASCII text files, with values of dip direction and dip/plunge separated by spaces, tabs or commas. 
The user can open multiple files at the same time (or the same file more than once) and overlay different elements of each dataset (poles, great circles etc.). The GUI shows the opened files in a tree structure, similar to the “layers” of many illustration programs, where the vertical order of the files in the tree reflects the drawing order of the selected elements. At this stage, the software performs plotting of poles to planes, lineations, great circles, density contours and rose diagrams. A set of statistics is calculated for each file, and its eigenvalues and eigenvectors are used to suggest whether the data are clustered about a mean value or distributed along a girdle. Modified Flinn, Triangular and histogram plots are also available. The next development steps will focus on tools such as merging and rotation of datasets, the ability to save 'projects', and paleostress analysis. In its current state, OpenStereo requires Python, wxPython, Numpy and Matplotlib installed on the system. We recommend installing PythonXY or the Enthought Python Distribution on MS-Windows and MacOS machines, since all dependencies are provided. Most Linux distributions provide an easy way to install all dependencies through software repositories. OpenStereo is released under the GNU General Public License. Programmers willing to contribute are encouraged to contact the authors directly. FAPESP Grant #09/17675-5
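The core computation behind the pole plotting described above can be sketched in a few lines (this is not OpenStereo's actual code, just the standard geometry): the pole to a plane is the line perpendicular to it, and each pole is projected onto a lower-hemisphere equal-area (Schmidt) net.

```python
# Pole-to-plane and equal-area projection, the basic stereonet math.
import math

def pole_of_plane(dip_direction, dip):
    """Pole to a plane: trend opposite the dip direction, plunge = 90 - dip."""
    return (dip_direction + 180.0) % 360.0, 90.0 - dip

def equal_area_xy(trend, plunge):
    """Schmidt projection: unit circle at plunge 0, net centre at plunge 90."""
    r = math.sqrt(2.0) * math.sin(math.radians(90.0 - plunge) / 2.0)
    t = math.radians(trend)
    return r * math.sin(t), r * math.cos(t)   # x east, y north

trend, plunge = pole_of_plane(90.0, 30.0)     # plane dipping 30° toward east
x, y = equal_area_xy(trend, plunge)           # point to scatter on the net
```

In a tool like the one described, the (x, y) pairs for a whole input file would simply be passed to a Matplotlib scatter call drawn inside the net's unit circle.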
Dissipative open systems theory as a foundation for the thermodynamics of linear systems.
Delvenne, Jean-Charles; Sandberg, Henrik
2017-03-06
In this paper, we advocate the use of open dynamical systems, i.e. systems sharing input and output variables with their environment, and the dissipativity theory initiated by Jan Willems, as models of thermodynamical systems at the microscopic and macroscopic level alike. We take linear systems as a study case, where we show how to derive a global Lyapunov function to analyse networks of interconnected systems. We define a suitable notion of dynamic non-equilibrium temperature that allows us to derive a discrete Fourier law ruling the exchange of heat between lumped, discrete-space systems, enriched with the Maxwell-Cattaneo correction. We complete these results with a brief recap of the steps that allow a complete derivation of dissipation and fluctuation in macroscopic systems (i.e. at the level of probability distributions) from lossless and deterministic systems. This article is part of the themed issue 'Horizons of cybernetical physics'. © 2017 The Author(s).
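Willems' dissipation inequality, which underlies the approach sketched in this abstract, can be stated compactly. The symbols below follow the standard generic formulation, not necessarily the paper's notation:

```latex
% A system with state x, input u and output y is dissipative with respect
% to a supply rate w(u, y) if there exists a storage function S(x) >= 0 with
S\bigl(x(t_1)\bigr) \;\le\; S\bigl(x(t_0)\bigr)
  + \int_{t_0}^{t_1} w\bigl(u(t),\, y(t)\bigr)\,\mathrm{d}t
\qquad \text{for all admissible trajectories and } t_1 \ge t_0 .
```

For linear systems with quadratic supply rates, the storage function can be taken quadratic, S(x) = xᵀPx with P ⪰ 0, which is exactly the kind of object that can serve as the global Lyapunov function for interconnected networks mentioned above.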
NASA Astrophysics Data System (ADS)
Ujevic, Sebastian; Mendoza, Michel
2010-07-01
We propose numerical simulations of longitudinal magnetoconductance through a finite antidot lattice located inside an open quantum dot, with a magnetic field applied perpendicular to the plane. The system is connected to reservoirs through quantum point contacts. We discuss the relationship between the longitudinal magnetoconductance and the generation of transversal couplings between the induced open quantum dots in the system. The system presents longitudinal magnetoconductance maps with crossovers (between transversal bands) and closings (longitudinal decoupling) of fundamental quantum states related to the open quantum dots induced by the antidot lattice. A relationship is observed between the distribution of antidots and the conductance bands that form, allowing a systematic follow-up of the bands as a function of the applied magnetic field and quantum point-contact width. We observed a high conductance intensity [between n and (n+1) quanta of conductance, n=1,2,…] in the regions of crossover and closing of states. This suggests transversal couplings between the induced open quantum dots of the system that can be modulated by varying both the antidot potential and the quantum point-contact width. An unexpected new continuous channel is induced by the variation in the contact width and generates Fano resonances in the conductance. These resonances can be manipulated by the applied magnetic field.
NASA Astrophysics Data System (ADS)
Gros, J.-B.; Kuhl, U.; Legrand, O.; Mortessagne, F.
2016-03-01
The effective Hamiltonian formalism is extended to vectorial electromagnetic waves in order to describe statistical properties of the field in reverberation chambers, which are commonly used in electromagnetic compatibility tests. As a first step, the distribution of wave intensities in chaotic systems with varying degrees of opening is derived for scalar quantum waves in the weak coupling limit by means of random matrix theory. In this limit the only parameters are the modal overlap and the number of open channels. Using the extended effective Hamiltonian, we describe the intensity statistics of the vectorial electromagnetic eigenmodes of lossy reverberation chambers. Finally, the typical quantity of interest in such chambers, namely the distribution of the electromagnetic response, is discussed. By determining the distribution of the phase rigidity, which describes the coupling to the environment, from random matrix numerical data, we find good agreement between the theoretical prediction and numerical calculations of the response.
Accessing and managing open medical resources in Africa over the Internet
NASA Astrophysics Data System (ADS)
Hussein, Rada; Khalifa, Aly; Jimenez-Castellanos, Ana; de la Calle, Guillermo; Ramirez-Robles, Maximo; Crespo, Jose; Perez-Rey, David; Garcia-Remesal, Miguel; Anguita, Alberto; Alonso-Calvo, Raul; de la Iglesia, Diana; Barreiro, Jose M.; Maojo, Victor
2014-10-01
Recent commentaries have proposed the advantages of using open exchange of data and informatics resources for improving health-related policies and patient care in Africa. Yet, in many African regions, both private medical and public health information systems are still unaffordable. Open exchange over the social Web 2.0 could encourage more altruistic support of medical initiatives. We have carried out experiments to demonstrate the feasibility of using this approach to disseminate open data and informatics resources in Africa. After these experiments we developed the AFRICA BUILD Portal, the first social network for African biomedical researchers. Through the AFRICA BUILD Portal, users can transparently access several resources. Currently, over 600 researchers are using distributed and open resources through this platform, which is designed to cope with low-bandwidth connections.
Expeditionary Oblong Mezzanine
2016-03-01
…providing infrastructure as a service (IaaS) and software as a service (SaaS) cloud computing technologies. IaaS is a way of providing computing services such as servers, storage, and network equipment services (Mell & Grance, 2009). SaaS is a means of providing software and applications as an on…
Implementation of Open-Source Web Mapping Technologies to Support Monitoring of Governmental Schemes
NASA Astrophysics Data System (ADS)
Pulsani, B. R.
2015-10-01
Several schemes are undertaken by the government to uplift the social and economic condition of the people. The monitoring of these schemes is done through information technology, where the involvement of Geographic Information Systems (GIS) is lacking. To demonstrate the benefits of thematic mapping as a tool for assisting officials in making decisions, a web mapping application was built for three government programs: the Mother and Child Tracking System (MCTS), Telangana State Housing Corporation Limited (TSHCL) and Ground Water Quality Mapping (GWQM). The three applications depicted the distribution of various parameters thematically and helped in identifying areas with higher and weaker distributions. Based on the three applications, the study finds that many government schemes share this thematic-mapping character and hence proposes implementing this kind of approach for other schemes as well. The applications were developed using the SharpMap C# library, a free and open source mapping library for developing geospatial applications. The study highlights the cost benefits of SharpMap, brings out the advantages of this library over proprietary vendors, and further discusses its advantages over other open source libraries as well.
Open source software integrated into data services of Japanese planetary explorations
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Ishihara, Y.; Otake, H.; Imai, K.; Masuda, K.
2015-12-01
Scientific data obtained by Japanese scientific satellites and lunar and planetary explorations are archived in DARTS (Data ARchives and Transmission System). DARTS provides the data through simple methods such as HTTP directory listing for long-term preservation, while also offering rich web applications for ease of access, built on modern web technologies and open source software. This presentation showcases the use of open source software across our services. KADIAS is a web-based application to search, analyze, and obtain scientific data measured by SELENE (Kaguya), a Japanese lunar orbiter. KADIAS uses OpenLayers to display maps distributed from a Web Map Service (WMS); as the WMS server, the open source software MapServer is adopted. KAGUYA 3D GIS (KAGUYA 3D Moon NAVI) provides a virtual globe for SELENE data; its main purpose is public outreach, and it was developed with the NASA World Wind Java SDK. C3 (Cross-Cutting Comparisons) is a tool to compare data from various observations and simulations; it uses Highcharts to draw graphs in web browsers. Flow is a tool to simulate the field of view of an instrument onboard a spacecraft. Flow itself is open source software developed by JAXA/ISAS under the BSD 3-Clause License. The SPICE Toolkit is essential to compile Flow; the SPICE Toolkit is also open source software developed by NASA/JPL, and its website distributes data for many spacecraft. Nowadays, open source software is an indispensable tool for DARTS services.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George
This paper describes one such reference process that can be deployed to provide continuous, automated, condition-based maintenance management for buildings that have BIM, a building automation system (BAS) and a computerized maintenance management system (CMMS). The process can be deployed using VOLTTRON™, an open source transactional network platform designed for distributed sensing and control that supports both energy efficiency and grid services.
ERIC Educational Resources Information Center
Goglio, Valentina; Parigi, Paolo
2016-01-01
This paper sheds light on the development of a peculiar organizational form in the Italian higher education system: satellite campuses. In comparison with other European countries, the Italian system shows peculiarities in terms of differentiation and power distribution among institutional actors. Building on the idea that the opening of a…
Wilson, William G.; Lundberg, Per
2004-01-01
Theoretical interest in the distributions of species abundances observed in ecological communities has focused recently on the results of models that assume all species are identical in their interactions with one another, and rely upon immigration and speciation to promote coexistence. Here we examine a one-trophic level system with generalized species interactions, including species-specific intraspecific and interspecific interaction strengths, and density-independent immigration from a regional species pool. Comparisons between results from numerical integrations and an approximate analytic calculation for random communities demonstrate good agreement, and both approaches yield abundance distributions of nearly arbitrary shape, including bimodality for intermediate immigration rates. PMID:15347523
Wilson, William G; Lundberg, Per
2004-09-22
Theoretical interest in the distributions of species abundances observed in ecological communities has focused recently on the results of models that assume all species are identical in their interactions with one another, and rely upon immigration and speciation to promote coexistence. Here we examine a one-trophic level system with generalized species interactions, including species-specific intraspecific and interspecific interaction strengths, and density-independent immigration from a regional species pool. Comparisons between results from numerical integrations and an approximate analytic calculation for random communities demonstrate good agreement, and both approaches yield abundance distributions of nearly arbitrary shape, including bimodality for intermediate immigration rates.
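The model described in this abstract — per-capita growth with species-specific interaction strengths plus density-independent immigration — can be sketched with a small forward-Euler integration. The parameter values below are illustrative placeholders, not the paper's, and the 3-species community stands in for the random communities the authors analyze.

```python
# Sketch of a one-trophic-level community: dx_i/dt = x_i (b_i - sum_j a_ij x_j) + m
b = [1.0, 0.8, 0.6]                       # intrinsic growth rates
a = [[1.0, 0.4, 0.2],                     # species-specific intra- and
     [0.3, 1.0, 0.5],                     # inter-specific interaction strengths
     [0.2, 0.6, 1.0]]
m = 0.05                                  # immigration from the regional pool
x = [0.1, 0.1, 0.1]                       # initial abundances
dt = 0.01
for _ in range(50_000):                   # forward-Euler to (near) steady state
    rates = [x[i] * (b[i] - sum(a[i][j] * x[j] for j in range(3))) + m
             for i in range(3)]
    x = [x[i] + dt * rates[i] for i in range(3)]
```

Repeating such integrations over many randomly drawn communities, and histogramming the equilibrium abundances, is what yields the abundance distributions (including the bimodal ones at intermediate immigration) discussed above.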
Mercury- Distributed Metadata Management, Data Discovery and Access System
NASA Astrophysics Data System (ADS)
Palanisamy, Giri; Wilson, Bruce E.; Devarakonda, Ranjeet; Green, James M.
2007-12-01
Mercury is a federated metadata harvesting, search and retrieval tool based on both open source and ORNL- developed software. It was originally developed for NASA, and the Mercury development consortium now includes funding from NASA, USGS, and DOE. Mercury supports various metadata standards including XML, Z39.50, FGDC, Dublin-Core, Darwin-Core, EML, and ISO-19115 (under development). Mercury provides a single portal to information contained in disparate data management systems. It collects metadata and key data from contributing project servers distributed around the world and builds a centralized index. The Mercury search interfaces then allow the users to perform simple, fielded, spatial and temporal searches across these metadata sources. This centralized repository of metadata with distributed data sources provides extremely fast search results to the user, while allowing data providers to advertise the availability of their data and maintain complete control and ownership of that data. Mercury supports various projects including: ORNL DAAC, NBII, DADDI, LBA, NARSTO, CDIAC, OCEAN, I3N, IAI, ESIP and ARM. The new Mercury system is based on a Service Oriented Architecture and supports various services such as Thesaurus Service, Gazetteer Web Service and UDDI Directory Services. This system also provides various search services including: RSS, Geo-RSS, OpenSearch, Web Services and Portlets. Other features include: Filtering and dynamic sorting of search results, book-markable search results, save, retrieve, and modify search criteria.
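The harvest-and-index pattern Mercury uses — metadata pulled from distributed providers into one central searchable index, while the data itself stays at the source — can be shown with a toy sketch. The record fields and provider names below are invented for illustration, not Mercury's schema.

```python
# Toy harvest/index sketch: central keyword index over distributed providers.
providers = {
    "daac": [{"title": "Leaf area index", "keywords": ["vegetation", "MODIS"],
              "url": "https://daac.example/lai"}],
    "cdiac": [{"title": "CO2 flux towers", "keywords": ["carbon", "flux"],
               "url": "https://cdiac.example/flux"}],
}

def harvest(providers):
    """Build a central keyword -> records index from all providers."""
    index = {}
    for source, records in providers.items():
        for rec in records:
            for kw in rec["keywords"]:
                index.setdefault(kw.lower(), []).append({**rec, "source": source})
    return index

def search(index, keyword):
    """Fielded search against the central index; data stays at its source URL."""
    return index.get(keyword.lower(), [])

index = harvest(providers)
hits = search(index, "carbon")
```

Searching the local index rather than querying each provider live is what gives the fast search results noted above, while providers retain ownership of the data behind each URL.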
Real-time sensor validation and fusion for distributed autonomous sensors
NASA Astrophysics Data System (ADS)
Yuan, Xiaojing; Li, Xiangshang; Buckles, Bill P.
2004-04-01
Multi-sensor data fusion has found widespread application in industrial and research sectors. The purpose of real-time multi-sensor data fusion is to dynamically estimate an improved system model from a set of different data sources, i.e., sensors. This paper presents a systematic and unified real-time sensor validation and fusion framework (RTSVFF) based on distributed autonomous sensors. The RTSVFF is an open architecture consisting of four layers: the transaction layer, the process fusion layer, the control layer, and the planning layer. This paradigm facilitates the distribution of intelligence to the sensor level and the sharing of information among sensors, controllers, and other devices in the system. The openness of the architecture also provides a platform for testing different sensor validation and fusion algorithms, and thus facilitates the selection of near-optimal algorithms for a specific sensor fusion application. In the version of the model presented in this paper, confidence-weighted averaging is employed to estimate the dynamic system state, which is computed using an adaptive estimator and a dynamic validation curve for numeric data fusion, and a robust diagnostic map for decision-level qualitative fusion. The framework is then applied to automatic monitoring of a gas-turbine engine, including a performance comparison of the proposed real-time sensor fusion algorithms and a traditional numerical weighted average.
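Confidence-weighted averaging, the numeric fusion step named above, can be sketched minimally. This is an illustrative sketch, not the RTSVFF code: deriving each sensor's confidence weight from an assumed per-sensor variance (inverse-variance weighting) is one common choice, not necessarily the paper's.

```python
# Inverse-variance confidence-weighted averaging of redundant sensor readings.
def fuse(readings, variances):
    """Fuse readings with weights inversely proportional to their variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * r for w, r in zip(weights, readings)) / total
    fused_variance = 1.0 / total       # variance of the fused estimate
    return estimate, fused_variance

# Three sensors observing the same quantity, one much noisier than the others.
readings = [10.1, 9.9, 12.0]
variances = [0.1, 0.1, 2.0]
estimate, var = fuse(readings, variances)
```

The noisy sensor is automatically down-weighted, and the fused variance is smaller than any single sensor's, which is the point of fusing redundant sensors at all.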
Use of Open Architecture Middleware for Autonomous Platforms
NASA Astrophysics Data System (ADS)
Naranjo, Hector; Diez, Sergio; Ferrero, Francisco
2011-08-01
Network Enabled Capabilities (NEC) is the vision for next-generation systems in the defence domain formulated by governments, the European Defence Agency (EDA) and the North Atlantic Treaty Organization (NATO). It involves the federation of military information systems, rather than just a simple interconnection, to provide each user with the "right information, right place, right time - and not too much". It defines openness, standardization and flexibility principles for military systems, likewise applicable to civilian space applications. This paper provides the conclusions drawn from the "Architecture for Embarked Middleware" (EMWARE) study, funded by the European Defence Agency (EDA). The aim of the EMWARE project was to provide the information and understanding needed to facilitate informed decisions regarding the specification and implementation of Open Architecture Middleware in future distributed systems, linking it with the NEC goal. The EMWARE project included the definition of four business cases, each devoted to a different field of application (Unmanned Aerial Vehicles, Helicopters, Unmanned Ground Vehicles and the Satellite Ground Segment).
NASA Technical Reports Server (NTRS)
Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin
2000-01-01
The purpose of this paper is to describe the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open, interoperable systems software development and software reuse. It will address what is meant by the terminology object component software, give an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerate the benefits of this approach, and give examples of application prototypes demonstrating its usage and advantages. Utilization of the object-oriented component technology approach for system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.
Population Response to Habitat Fragmentation in a Stream-Dwelling Brook Trout Population
Letcher, Benjamin H.; Nislow, Keith H.; Coombs, Jason A.; O'Donnell, Matthew J.; Dubreuil, Todd L.
2007-01-01
Fragmentation can strongly influence population persistence and expression of life-history strategies in spatially-structured populations. In this study, we directly estimated size-specific dispersal, growth, and survival of stream-dwelling brook trout in a stream network with connected and naturally-isolated tributaries. We used multiple-generation, individual-based data to develop and parameterize a size-class and location-based population projection model, allowing us to test effects of fragmentation on population dynamics at local (i.e., subpopulation) and system-wide (i.e., metapopulation) scales, and to identify demographic rates which influence the persistence of isolated and fragmented populations. In the naturally-isolated tributary, persistence was associated with higher early juvenile survival (∼45% greater), shorter generation time (one-half) and strong selection against large body size compared to the open system, resulting in a stage-distribution skewed towards younger, smaller fish. Simulating barriers to upstream migration into two currently-connected tributary populations caused rapid (2–6 generations) local extinction. These local extinctions in turn increased the likelihood of system-wide extinction, as tributaries could no longer function as population sources. Extinction could be prevented in the open system if sufficient immigrants from downstream areas were available, but the influx of individuals necessary to counteract fragmentation effects was high (7–46% of the total population annually). In the absence of sufficient immigration, a demographic change (higher early survival characteristic of the isolated tributary) was also sufficient to rescue the population from fragmentation, suggesting that the observed differences in size distributions between the naturally-isolated and open system may reflect an evolutionary response to isolation. 
Combined with strong genetic divergence between the isolated tributary and open system, these results suggest that local adaptation can ‘rescue’ isolated populations, particularly in one-dimensional stream networks where both natural and anthropogenically-mediated isolation is common. However, whether rescue will occur before extinction depends critically on the race between adaptation and reduced survival in response to fragmentation. PMID:18188404
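The size-class population projection model described in the abstract can be illustrated with a toy stage-structured projection. All vital rates below are hypothetical, chosen only to show how higher early juvenile survival can tip a declining population into growth; they are not the study's estimates.

```python
# Toy 3-stage projection (young-of-year, juvenile, adult) for a stream fish.
# All rates are illustrative assumptions, not values from the brook trout study.
def project(yoy_survival, years=50):
    n = [100.0, 50.0, 20.0]             # initial abundances: [YOY, juvenile, adult]
    for _ in range(years):
        n = [
            40.0 * n[2],                # recruitment: adults produce surviving YOY
            yoy_survival * n[0],        # early juvenile survival (rate of interest)
            0.40 * n[1] + 0.70 * n[2],  # juvenile maturation + adult survival
        ]
    return sum(n)

connected = project(0.015)          # baseline early survival
isolated = project(0.015 * 1.45)    # ~45% higher, as in the isolated tributary
```

With these illustrative rates, the 45% increase in early survival moves the dominant eigenvalue of the projection matrix from below 1 to above 1, mirroring the persistence contrast the abstract reports.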
Population response to habitat fragmentation in a stream-dwelling brook trout population
Responsive systems - The challenge for the nineties
NASA Technical Reports Server (NTRS)
Malek, Miroslaw
1990-01-01
A concept of responsive computer systems will be introduced. The emerging responsive systems demand fault-tolerant and real-time performance in parallel and distributed computing environments. The design methodologies for fault-tolerant, real-time and responsive systems will be presented. Novel techniques of introducing redundancy for improved performance and dependability will be illustrated. The methods of system responsiveness evaluation will be proposed. The issues of determinism, closed and open systems will also be discussed from the perspective of responsive systems design.
NCI's Distributed Geospatial Data Server
NASA Astrophysics Data System (ADS)
Larraondo, P. R.; Evans, B. J. K.; Antony, J.
2016-12-01
Earth systems, environmental and geophysics datasets are an extremely valuable source of information about the state and evolution of the Earth. However, different disciplines and applications require this data to be post-processed in different ways before it can be used. For researchers experimenting with algorithms across large datasets or combining multiple data sets, the traditional approach of batch data processing and storing all the output for later analysis rapidly becomes infeasible, and often requires additional work to publish for others to use. Recent developments in distributed computing, using interactive access to significant cloud infrastructure, open the door for new ways of processing data on demand, hence alleviating the need for storage space for each individual copy of each product. The Australian National Computational Infrastructure (NCI) has developed a highly distributed geospatial data server which supports interactive processing of large geospatial data products, including satellite Earth Observation data and global model data, using flexible user-defined functions. This system dynamically and efficiently distributes the required computations among cloud nodes and thus provides a scalable analysis capability. In many cases this completely alleviates the need to preprocess and store the data as products. This system presents a standards-compliant interface, allowing ready accessibility for users of the data. Typical data wrangling problems such as handling different file formats and data types, or harmonising the coordinate projections or temporal and spatial resolutions, can now be handled automatically by this service. The geospatial data server exposes functionality for specifying how the data should be aggregated and transformed. 
The resulting products can be served using several standards such as the Open Geospatial Consortium's (OGC) Web Map Service (WMS) or Web Feature Service (WFS), Open Street Map tiles, or raw binary arrays under different conventions. We will show some cases where we have used this new capability to provide a significant improvement over previous approaches.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsenin, V. V.; Terekhin, P. N.
2010-08-15
The Kruskal-Oberman kinetic model is used to determine the conditions for the convective stability of a plasma in a system of coupled axisymmetric adiabatic open cells in which the magnetic field curvature has opposite signs. For a combination of a nonparaxial simple mirror cell and a semicusp, the boundaries of the interval of values of the flux coordinate where the plasma can be stable are determined, as well as the range in which the ratio of the pressures in the component cells should lie. Numerical simulations were carried out for different particle distributions over the pitch angle.
Data management in Oceanography at SOCIB
NASA Astrophysics Data System (ADS)
Joaquin, Tintoré; March, David; Lora, Sebastian; Sebastian, Kristian; Frontera, Biel; Gómara, Sonia; Pau Beltran, Joan
2014-05-01
SOCIB, the Balearic Islands Coastal Ocean Observing and Forecasting System (http://www.socib.es), is a Marine Research Infrastructure: a multiplatform, distributed and integrated system, a facility of facilities that extends from the nearshore to the open sea and provides free, open and quality-controlled data. SOCIB has three major infrastructure components: (1) a distributed multiplatform observing system, (2) a numerical forecasting system, and (3) a data management and visualization system. We present the spatial data infrastructure and applications developed at SOCIB. One of the major goals of the SOCIB Data Centre is to provide users with a system to locate and download the data of interest (near real-time and delayed mode) and to visualize and manage the information. Following SOCIB principles, data need to be (1) discoverable and accessible, (2) freely available, and (3) interoperable and standardized. In consequence, the SOCIB Data Centre Facility is implementing a general data management system to guarantee international standards, quality assurance and interoperability. The combination of different sources and types of information requires appropriate methods to ingest, catalogue, display, and distribute this information. The SOCIB Data Centre is responsible for directing the different stages of data management, ranging from data acquisition to its distribution and visualization through web applications. The system implemented relies on open-source solutions. The data life cycle comprises the following stages: • Acquisition: The data managed by SOCIB mostly come from its own observation platforms, numerical models or information generated from the activities in the SIAS Division. • Processing: Applications developed at SOCIB deal with all collected platform data, performing data calibration, derivation, quality control and standardization. • Archival: Storage in netCDF and spatial databases. 
• Distribution: Data web services using Thredds, Geoserver and SOCIB's own RESTful services. • Catalogue: Metadata are provided through the ncISO plugin in Thredds and Geonetwork. • Visualization: web and mobile applications to present SOCIB data to different user profiles. SOCIB data services and applications have been developed to respond to science and society needs (e.g., European initiatives such as Emodnet or Copernicus), by targeting different user profiles (e.g., researchers, technicians, policy and decision makers, educators, students, and society in general). For example, SOCIB has developed applications to: 1) allow researchers and technicians to access oceanographic information; 2) provide decision support for oil spill response; 3) disseminate information about the coastal state for tourists and recreational users; 4) present coastal research in educational programs; and 5) offer easy and fast access to marine information through mobile devices. In conclusion, the organizational and conceptual structure of SOCIB's Data Centre and the components developed provide an example of marine information systems within the framework of new ocean observatories and/or marine research infrastructures.
Distributed information system architecture for Primary Health Care.
Grammatikou, M; Stamatelopoulos, F; Maglaris, B
2000-01-01
We present a distributed architectural framework for Primary Health Care (PHC) Centres. Distribution is handled through the introduction of the Roaming Electronic Health Care Record (R-EHCR) and the use of local caching and incremental update of a global index. The proposed architecture is designed to accommodate a specific PHC workflow model. Finally, we discuss a pilot implementation in progress, which is based on CORBA and web-based user interfaces. However, the conceptual architecture is generic and open to other middleware approaches like the DHE or HL7.
Model for calorimetric measurements in an open quantum system
NASA Astrophysics Data System (ADS)
Donvil, Brecht; Muratore-Ginanneschi, Paolo; Pekola, Jukka P.; Schwieger, Kay
2018-05-01
We investigate the experimental setup proposed in New J. Phys. 15, 115006 (2013), 10.1088/1367-2630/15/11/115006 for calorimetric measurements of thermodynamic indicators in an open quantum system. As a theoretical model we consider a periodically driven qubit coupled with a large yet finite electron reservoir, the calorimeter. The calorimeter is initially at equilibrium with an infinite phonon bath. As time elapses, the temperature of the calorimeter varies as a consequence of energy exchanges with the qubit and the phonon bath. We show how, under weak-coupling assumptions, the evolution of the qubit-calorimeter system can be described by a generalized quantum jump process that includes the temperature of the calorimeter as a dynamical variable. We study the jump process by numerical and analytical methods. Asymptotically with the duration of the drive, the qubit-calorimeter attains a steady state. In this same limit, we use multiscale perturbation theory to derive a Fokker-Planck equation governing the calorimeter temperature distribution. We examine the properties of the temperature probability distribution near and at the steady state. In particular, we predict the behavior of measurable statistical indicators versus the qubit-calorimeter coupling constant.
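The Fokker-Planck equation mentioned above is not reproduced in the abstract; as a sketch, a one-dimensional equation for the calorimeter temperature distribution p(T, t) would take the generic drift-diffusion form, where the coefficients a(T) and b(T) are placeholders standing in for whatever the multiscale expansion yields in the paper:

```latex
\partial_t p(T,t) \;=\; -\,\partial_T\!\left[a(T)\,p(T,t)\right]
\;+\; \tfrac{1}{2}\,\partial_T^{2}\!\left[b(T)\,p(T,t)\right]
```

Here a(T) plays the role of the deterministic heating/cooling drift and b(T) the strength of temperature fluctuations; setting \partial_t p = 0 yields the stationary distribution whose properties the authors analyze near the steady state.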
Value Creation Through Integrated Networks and Convergence
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Martini, Paul; Taft, Jeffrey D.
2015-04-01
Customer adoption of distributed energy resources and public policies are driving changes in the uses of the distribution system. A system originally designed and built for one-way energy flows from central generating facilities to end-use customers is now experiencing injections of energy from customers anywhere on the grid and frequent reversals in the direction of energy flow. In response, regulators and utilities are re-thinking the design and operations of the grid to create more open and transactive electric networks. This evolution has the opportunity to unlock significant value for customers and utilities. Alternatively, failure to seize this potential may instead lead to an erosion of value if customers seek to defect and disconnect from the system. This paper will discuss how current grid modernization investments may be leveraged to create open networks that increase value through the interaction of intelligent devices on the grid and prosumerization of customers. Moreover, even greater value can be realized through the synergistic effects of convergence of multiple networks. This paper will highlight examples of the emerging nexus of non-electric networks with electricity.
Hale, M W; Hay-Schmidt, A; Mikkelsen, J D; Poulsen, B; Shekhar, A; Lowry, C A
2008-08-26
Anxiety states and anxiety-related behaviors appear to be regulated by a distributed and highly interconnected system of brain structures including the basolateral amygdala. Our previous studies demonstrate that exposure of rats to an open-field in high- and low-light conditions results in a marked increase in c-Fos expression in the anterior part of the basolateral amygdaloid nucleus (BLA) compared with controls. The neural mechanisms underlying the anatomically specific effects of open-field exposure on c-Fos expression in the BLA are not clear; however, it is likely that this reflects activation of specific afferent input to this region of the amygdala. In order to identify candidate brain regions mediating anxiety-induced activation of the basolateral amygdaloid complex in rats, we used cholera toxin B subunit (CTb) as a retrograde tracer to identify neurons with direct afferent projections to this region in combination with c-Fos immunostaining to identify cells responding to exposure to an open-field arena in low-light (8-13 lux) conditions (an anxiogenic stimulus in rats). Adult male Wistar rats received a unilateral microinjection of 4% CTb in phosphate-buffered saline into the basolateral amygdaloid complex. Rats were housed individually for 11 days after CTb injections and handled (HA) for 2 min each day. On the test day rats were either: (1) exposed to an open-field in low-light conditions (8-13 lux) for 15 min (OF); (2) briefly HA; or (3) left undisturbed (control). We report that dual immunohistochemical staining for c-Fos and CTb revealed an increase in the percentage of c-Fos-immunopositive basolateral amygdaloid complex-projecting neurons in open-field-exposed rats compared with HA and control rats in the ipsilateral CA1 region of the ventral hippocampus, subiculum and lateral entorhinal cortex. 
These data are consistent with the hypothesis that exposure to the open-field arena activates an anxiety-related neuronal system with convergent input to the basolateral amygdaloid complex.
MFIRE-2: A Multi Agent System for Flow-Based Intrusion Detection Using Stochastic Search
2012-03-01
attacks that are distributed in nature, but may not protect individual systems effectively without incurring large bandwidth penalties while collecting... system-level information to help prepare for more significant attacks. The type of information potentially revealed by footprinting includes account... key areas where MAS may be appropriate: • The environment is open, highly dynamic, uncertain, or complex • Agents are a natural metaphor—Many
A Case for Open Network Health Systems: Systems as Networks in Public Mental Health.
Rhodes, Michael Grant; de Vries, Marten W
2017-01-08
Increases in incidents involving so-called confused persons have brought attention to the potential costs of recent changes to public mental health (PMH) services in the Netherlands. With services decentralized under the (Community) Participation Act (2014), local governments must find resources to compensate for reduced central funding to such services or "innovate." But innovation, even when pressure for change is intense, is difficult. This perspective paper describes experience during and after an investigation into a particularly violent incident and murder. The aim was to provide recommendations to improve the functioning of local PMH services. The investigation concluded that no specific failure by an individual professional or service provider facility led to the murder. Instead, also as a result of the Participation Act severing communication lines between individuals and organizations, information sharing failures were likely to have reduced system-level capacity to identify risks. The methods and analytical frameworks employed to reach this conclusion also lead to discussion of the plausibility of an unconventional solution. If improving communication is the primary problem, non-hierarchical information and organizational networks arise as possible and innovative system solutions. The proposal for debate is that traditional "health system" definitions, literature and narratives, and operating assumptions in public (mental) health are 'locked in', constraining technical and organizational innovations. If we view a "health system" as an adaptive system of economic and social "networks," it becomes clear that the current orthodox solution, the so-called integrated health system, typically results in a "centralized hierarchical" or "tree" network. An overlooked alternative that breaks out of the established policy narratives is the view of a 'health system' as a non-hierarchical organizational structure or 'Open Network.' 
In turn, this opens new technological and organizational possibilities in seeking policy solutions, and suggests an alternative governance model of huge potential value in public health both locally and globally. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Experimental Research on Boundary Shear Stress in Typical Meandering Channel
NASA Astrophysics Data System (ADS)
Chen, Kai-hua; Xia, Yun-feng; Zhang, Shi-zhao; Wen, Yun-cheng; Xu, Hua
2018-06-01
A novel instrument, the Micro-Electro-Mechanical System (MEMS) flexible hot-film shear stress sensor, was used to study the boundary shear stress distribution in a generalized natural meandering open channel; both the mean sidewall shear stress distribution along the meandering channel and the lateral boundary shear stress distribution in typical cross-sections were analysed. Based on the measurements, a semi-empirical, semi-theoretical approach for computing the boundary shear stress was derived that includes the effects of the secondary flow, the sidewall roughness factor, eddy viscosity and the additional Reynolds stress and, for the first time, incorporates the cross-section central angle and the Reynolds number into the expressions. A comparison between previous research and this study shows that the semi-empirical, semi-theoretical algorithm predicts the boundary shear stress distribution precisely. Finally, a single-factor analysis was conducted on the relationship between the average sidewall shear stress on the convex and concave banks and the flow rate, water depth, slope ratio, or cross-section central angle of the open channel bend. The functional relationship with each of these factors was established, and the distance from the location of the extreme sidewall shear stress to the bottom of the open channel was deduced based on statistical theory.
Integrating the Web and continuous media through distributed objects
NASA Astrophysics Data System (ADS)
Labajo, Saul P.; Garcia, Narciso N.
1998-09-01
The Web has rapidly grown to become the standard for document interchange on the Internet. At the same time, interest in transmitting continuous media flows over the Internet, and in associated applications like multimedia on demand, is also growing. Integrating both kinds of systems should allow building true hypermedia systems in which any media object can be linked from any other, taking temporal and spatial synchronization into account. One way to achieve this integration is the CORBA architecture, a standard for open distributed systems; there are also recent efforts to integrate Web and CORBA systems. We use this architecture to build a service for the distribution of data flows with timing restrictions. To integrate it with the Web we use, on one side, Java applets that can use the CORBA architecture and are embedded in HTML pages; on the other side, we also benefit from the efforts to integrate CORBA and the Web.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J
The purpose of this model was to facilitate the design of a control system that uses fine-grained control of residential and small commercial HVAC loads to counterbalance voltage swings caused by intermittent solar power sources (e.g., rooftop panels) installed in that distribution circuit. Included are the source code and a pre-compiled 64-bit DLL for adding building HVAC loads to an OpenDSS distribution circuit. As written, the Makefile assumes you are using the Microsoft C++ development tools.
Open Rotor Tone Shielding Methods for System Noise Assessments Using Multiple Databases
NASA Technical Reports Server (NTRS)
Bahr, Christopher J.; Thomas, Russell H.; Lopes, Leonard V.; Burley, Casey L.; Van Zante, Dale E.
2014-01-01
Advanced aircraft designs such as the hybrid wing body, in conjunction with open rotor engines, may allow for significant improvements in the environmental impact of aviation. System noise assessments allow for the prediction of the aircraft noise of such designs while they are still in the conceptual phase. Due to the significant computational requirements of such methods, these predictions still rely on experimental data to account for the interaction of the open rotor tones with the hybrid wing body airframe. Recently, multiple aircraft system noise assessments have been conducted for hybrid wing body designs with open rotor engines. These assessments utilized measured benchmark data from a Propulsion Airframe Aeroacoustic interaction effects test. The measured data demonstrated airframe shielding of open rotor tonal and broadband noise with legacy F7/A7 open rotor blades. Two methods are proposed for improving the use of these data on general open rotor designs in a system noise assessment. The first, direct difference, is a simple octave band subtraction which does not account for tone distribution within the rotor acoustic signal. The second, tone matching, is a higher-fidelity process incorporating additional physical aspects of the problem, where isolated rotor tones are matched by their directivity to determine tone-by-tone shielding. A case study is conducted with the two methods to assess how well each reproduces the measured data and identify the merits of each. Both methods perform similarly for system level results and successfully approach the experimental data for the case study. The tone matching method provides additional tools for assessing the quality of the match to the data set. Additionally, a potential path to improve the tone matching method is provided.
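The "direct difference" method is described in the abstract as a simple octave-band subtraction; a minimal sketch of that idea follows, where the band levels and the function name are ours, not the assessment's actual data or code:

```python
# "Direct difference" octave-band shielding: subtract installed (shielded)
# from isolated levels per band, then apply that delta to a new rotor spectrum.
def direct_difference(isolated_db, shielded_db, new_rotor_db):
    """Inputs are dicts mapping octave-band centre frequency (Hz) -> SPL (dB).
    Returns the estimated shielded spectrum of the new rotor."""
    return {f: new_rotor_db[f] - (isolated_db[f] - shielded_db[f])
            for f in new_rotor_db}

isolated = {500: 92.0, 1000: 95.0, 2000: 90.0}   # hypothetical legacy-rotor levels
shielded = {500: 84.0, 1000: 83.0, 2000: 75.0}   # hypothetical installed levels
new_rotor = {500: 90.0, 1000: 93.0, 2000: 88.0}  # hypothetical new rotor, isolated
estimate = direct_difference(isolated, shielded, new_rotor)
```

The tone-matching method would instead operate tone by tone using directivity, which this band-level sketch deliberately does not capture.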
Concepts for Distributed Engine Control
NASA Technical Reports Server (NTRS)
Culley, Dennis E.; Thomas, Randy; Saus, Joseph
2007-01-01
Gas turbine engines for aero-propulsion systems are found to be highly optimized machines after over 70 years of development. Still, additional performance improvements are sought while reduction in the overall cost is increasingly a driving factor. Control systems play a vitally important part in these metrics but are severely constrained by the operating environment and the consequences of system failure. The considerable challenges facing future engine control system design have been investigated. A preliminary analysis has been conducted of the potential benefits of distributed control architecture when applied to aero-engines. In particular, reductions in size, weight, and cost of the control system are possible. NASA is conducting research to further explore these benefits, with emphasis on the particular benefits enabled by high temperature electronics and an open-systems approach to standardized communications interfaces.
Security Criteria for Distributed Systems: Functional Requirements.
1995-09-01
Open Company Limited. Ziv, J. and A. Lempel. 1977. A Universal Algorithm for Sequential Data Compression. IEEE Transactions on Information Theory Vol...3, SCF-5 DCF-7. Configurable Cryptographic Algorithms (a) It shall be possible to configure the system such that the data confidentiality functions... use different cryptographic algorithms for different protocols (e.g., mail or interprocess communication data). (b) The modes of encryption
ERIC Educational Resources Information Center
Brewer, Denise
2012-01-01
The air transport industry (ATI) is a dynamic, communal, international, and intercultural environment in which the daily operations of airlines, airports, and service providers are dependent on information technology (IT). Many of the IT legacy systems are more than 30 years old, and current regulations and the globally distributed workplace have…
Li, Peng; Ji, Haoran; Wang, Chengshan; ...
2017-03-22
The increasing penetration of distributed generators (DGs) exacerbates the risk of voltage violations in active distribution networks (ADNs). Conventional voltage regulation devices, limited by their physical constraints, have difficulty meeting the requirement of high-precision real-time voltage and VAR control (VVC) when DG output fluctuates frequently. However, the soft open point (SOP), a flexible power electronic device, can be used as a continuous reactive power source to realize fast voltage regulation. Considering the cooperation of SOPs and multiple regulation devices, this paper proposes a coordinated VVC method based on SOPs for ADNs. First, a time-series model of coordinated VVC is developed to minimize operation costs and eliminate voltage violations in ADNs. Then, by applying linearization and conic relaxation, the original nonconvex mixed-integer nonlinear optimization model is converted into a mixed-integer second-order cone programming (MISOCP) model, which can be solved efficiently to meet the requirement of rapid voltage regulation. Case studies on the IEEE 33-node system and IEEE 123-node system illustrate the effectiveness of the proposed method.
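The conic-relaxation step is not spelled out in the abstract; a common choice in distribution-network optimization (assumed here, not confirmed by the paper) is the branch-flow relaxation, where the nonconvex coupling between branch flows P_{ij}, Q_{ij}, squared current \ell_{ij} and squared voltage v_i is relaxed to an inequality and rewritten as a second-order cone constraint:

```latex
P_{ij}^{2} + Q_{ij}^{2} \le \ell_{ij}\, v_i
\quad\Longleftrightarrow\quad
\left\lVert \left( 2P_{ij},\; 2Q_{ij},\; \ell_{ij} - v_i \right) \right\rVert_2 \le \ell_{ij} + v_i
```

Squaring both sides of the cone form recovers 4P_{ij}^2 + 4Q_{ij}^2 \le 4\ell_{ij} v_i, so the two are equivalent whenever \ell_{ij} + v_i \ge 0; with integer device decisions added on top, the result is an MISOCP of the kind the abstract describes.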
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Ji, Haoran; Wang, Chengshan
Distributed generators (DGs), including photovoltaic panels (PVs), have been integrated dramatically into active distribution networks (ADNs). Due to their strong volatility and uncertainty, the high penetration of PV generation immensely exacerbates voltage violations in ADNs. However, the emerging flexible interconnection technology based on soft open points (SOPs) provides increased controllability and flexibility to system operation. To fully exploit the regulation ability of SOPs to address the problems caused by PV, this paper proposes a robust optimization method to achieve robust optimal operation of SOPs in ADNs. A two-stage adjustable robust optimization model is built to tackle the uncertainties of PV outputs, in which robust operation strategies of SOPs are generated to eliminate voltage violations and reduce the power losses of ADNs. A column-and-constraint generation (C&CG) algorithm is developed to solve the proposed robust optimization model, which is formulated as a second-order cone program (SOCP) to facilitate accuracy and computational efficiency. Case studies on the modified IEEE 33-node system and comparisons with a deterministic optimization approach verify the effectiveness and robustness of the proposed method.
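The two-stage adjustable robust model has the generic min-max-min structure; the symbols below are ours, sketching the standard form rather than the paper's exact formulation. First-stage decisions x are fixed before the PV uncertainty u \in \mathcal{U} is revealed, and recourse decisions y (here, the SOP operation strategies) adjust afterwards:

```latex
\min_{x \in X} \;\; \max_{u \in \mathcal{U}} \;\; \min_{y \in Y(x,\,u)} \;\; c^{\top} x + d^{\top} y
```

C&CG solves such a problem by alternating between a master problem over x, built from a growing set of worst-case scenarios, and a subproblem that, given x, searches \mathcal{U} for a scenario violating feasibility or optimality; each iteration adds that scenario's recourse variables and constraints to the master until the bounds converge.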
Autonomous watersheds: Reducing flooding and stream erosion through real-time control
NASA Astrophysics Data System (ADS)
Kerkez, B.; Wong, B. P.
2017-12-01
We introduce an analytical toolchain, based on dynamical systems theory and feedback control, to determine how many control points (valves, gates, pumps, etc.) are needed to transform urban watersheds from static to adaptive. Advances in distributed sensing and control stand to fundamentally change how we manage urban watersheds. In lieu of new and costly infrastructure, the real-time control of stormwater systems will reduce flooding, mitigate stream erosion, and improve the treatment of polluted runoff. We discuss how open-source technologies, in the form of wireless sensor nodes and remotely-controllable valves (open-storm.org), have been deployed to build "smart" stormwater systems in the Midwestern US. Unlike "static" infrastructure, which cannot readily adapt to changing inputs and land uses, these distributed control assets allow entire watersheds to be reconfigured on a storm-by-storm basis. Our results show how the control of even just a few valves within urban catchments (1-10 km^2) allows for the real-time "shaping" of hydrographs, which reduces downstream erosion and flooding. We also introduce an equivalence framework that decision-makers can use to objectively compare investments in "smart" systems to more traditional solutions, such as gray and green stormwater infrastructure.
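As a toy illustration of the valve-control idea described above, consider a single detention pond under a proportional valve rule; all geometry, rates, and the control law are hypothetical, not taken from the open-storm deployments:

```python
import math

def simulate(inflow, dt=60.0, area=5000.0, c=0.4, u_fixed=None, q_target=0.05):
    """Toy detention-pond mass balance with an orifice outlet.
    u in [0, 1] is the valve opening; if u_fixed is None, a simple
    proportional rule throttles the valve so the release stays at or
    below the erosion-safe rate q_target (m^3/s). All values hypothetical."""
    g = 9.81
    storage = 0.0       # stored volume, m^3
    peak_out = 0.0
    for q_in in inflow:
        head = storage / area                              # water depth, m
        q_max = c * math.sqrt(2 * g * head) if head > 0 else 0.0
        if u_fixed is not None:
            u = u_fixed                                    # passive, always-open outlet
        else:
            u = min(1.0, q_target / q_max) if q_max > 0 else 1.0
        q_out = u * q_max
        storage = max(0.0, storage + (q_in - q_out) * dt)  # mass balance
        peak_out = max(peak_out, q_out)
    return peak_out

# synthetic triangular storm hydrograph (m^3/s)
storm = [0.02 * t for t in range(10)] + [0.2 - 0.02 * t for t in range(10)]
passive = simulate(storm, u_fixed=1.0)
controlled = simulate(storm)
```

Even this one-valve sketch "shapes" the hydrograph: the controlled outflow never exceeds the target release rate, while the passive outlet passes a larger peak downstream, which is the flooding-and-erosion mechanism the abstract targets at watershed scale.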
NASA Astrophysics Data System (ADS)
Almeida, W. G.; Ferreira, A. L.; Mendes, M. V.; Ribeiro, A.; Yoksas, T.
2007-05-01
CPTEC, a division of Brazil’s INPE, has been using several open-source software packages for a variety of tasks in its Data Division. Among these tools are ones traditionally used in research and educational communities, such as GrADS (the Grid Analysis and Display System from the Center for Ocean-Land-Atmosphere Studies (COLA)), the Local Data Manager (LDM) and GEMPAK (from Unidata), and operational tools such as the Automatic File Distributor (AFD) that are popular among National Meteorological Services. In addition, some tools developed locally at CPTEC are also being made available as open-source packages. One package is being used to manage the data from the Automatic Weather Stations that INPE operates. This system uses only open-source tools such as the MySQL database, PERL scripts and Java programs for web access, and Unidata’s Internet Data Distribution (IDD) system and AFD for data delivery. All of these packages are bundled into a low-cost, easy-to-install package called the Meteorological Data Operational System. Recently, in cooperation with the SICLIMAD project, this system has been modified for use by Portuguese-speaking countries in Africa to manage data from the many Automatic Weather Stations being installed in these countries under SICLIMAD sponsorship. In this presentation we describe the tools included in, and the architecture of, the Meteorological Data Operational System.
Open discovery: An integrated live Linux platform of Bioinformatics tools.
Vetrivel, Umashankar; Pilla, Kalabharath
2008-01-01
Historically, live Linux distributions for Bioinformatics have paved the way for portability of the Bioinformatics workbench in a platform-independent manner. However, most of the existing live Linux distributions limit their usage to sequence analysis and basic molecular visualization programs and are devoid of data persistence. Hence, Open Discovery, a live Linux distribution, has been developed with the capability to perform complex tasks like molecular modeling, docking and molecular dynamics in a swift manner. Furthermore, it is also equipped with a complete sequence analysis environment and is capable of running Windows executable programs in a Linux environment. Open Discovery portrays the advanced customizable configuration of Fedora, with data persistence accessible via USB drive or DVD. Open Discovery is distributed free under the Academic Free License (AFL) and can be downloaded from http://www.OpenDiscovery.org.in. PMID:19238235
Michel, Anna P M; Kapit, Jason; Witinski, Mark F; Blanchard, Romain
2017-04-10
Methane is a powerful greenhouse gas that has both natural and anthropogenic sources. The ability to measure methane using an integrated path length approach such as an open/long-path length sensor would be beneficial in several environments for examining anthropogenic and natural sources, including tundra landscapes, rivers, lakes, landfills, estuaries, fracking sites, pipelines, and agricultural sites. Here a broadband monolithic distributed feedback-quantum cascade laser array was utilized as the source for an open-path methane sensor. Two telescopes were utilized for the launch (laser source) and receiver (detector) in a bistatic configuration for methane sensing across a 50 m path length. Direct-absorption spectroscopy was utilized with intrapulse tuning. Ambient methane levels were detectable, and an instrument precision of 70 ppb with 100 s averaging and 90 ppb with 10 s averaging was achieved. The sensor system was designed to work "off the grid" and utilizes batteries that are rechargeable with solar panels and wind turbines.
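The direct-absorption retrieval behind such an open-path measurement reduces, at its simplest, to the Beer-Lambert law: the measured transmission over the known path length gives an absorber column, which divided by the air number density gives a mixing ratio. The cross-section and conditions below are illustrative placeholders, not the instrument's calibration.

```python
import math

# Back-of-envelope Beer-Lambert retrieval sketch for open-path sensing.

SIGMA = 1.5e-20   # effective absorption cross-section, cm^2/molecule (assumed)
PATH = 5000.0     # path length in cm (50 m, as in the abstract)
N_AIR = 2.5e19    # air number density near 1 atm, 20 C, molecules/cm^3

def methane_ppb(i_over_i0):
    """Mixing ratio (ppb) from the measured transmission I/I0."""
    density = -math.log(i_over_i0) / (SIGMA * PATH)  # molecules/cm^3
    return density / N_AIR * 1e9

ppb = methane_ppb(0.9985)  # 0.15% absorption over the 50 m path
```

A real instrument fits the full pressure-broadened line shape across the laser's intrapulse tuning range rather than using a single effective cross-section, but the path-length scaling is the same.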
NASA Astrophysics Data System (ADS)
Mohammed, Touseef Ahmed Faisal
Since 2000, renewable electricity installations in the United States (excluding hydropower) have more than tripled. Renewable electricity has grown at a compounded annual average of nearly 14% per year from 2000-2010. Wind, Concentrated Solar Power (CSP) and solar photovoltaic (PV) are the fastest growing renewable energy sectors. In 2010 in the U.S., solar PV grew over 71% and CSP grew by 18% from the previous year. Globally, renewable electricity installations have more than quadrupled from 2000-2010. Solar PV generation grew by a factor of more than 28 between 2000 and 2010. The amount of CSP and solar PV installed on the distribution grid is increasing. These PV installations inject electrical power from the load centers back toward the generating stations, but the transmission and distribution grids have been designed for unidirectional flow of electrical energy from generating stations to load centers. This causes voltage imbalances and stresses on the switchgear of the electrical circuitry. With the continuous rise in PV installations, analysis of voltage profiles and penetration levels remains an active area of research. Standard distributed photovoltaic (PV) generators represented in simulation studies do not reflect the exact location and variability properties, such as the distance from interconnection points to substations and voltage regulators, solar irradiance, and other environmental factors. Quasi-static simulations assist in hour- and day-ahead peak-load planning, as they provide a time-sequence analysis that helps with generation allocation. Simulation models can be daily, hourly or yearly depending on the duty cycle and dynamics of the system. High penetration of PV into the power grid changes the voltage profile and power flow dynamically in the distribution circuits due to the inherent variability of PV. There are a number of modeling and simulation tools available for the study of such high-penetration PV scenarios.
This thesis will specifically utilize OpenDSS, an open-source Distribution System Simulator developed by the Electric Power Research Institute, to simulate the grid voltage profile with a large-scale PV system under quasi-static time series, considering variations of PV output in seconds and minutes as well as the average daily load variations. A 13-bus IEEE distribution feeder model is utilized, with distributed residential- and commercial-scale PV at different buses, for simulation studies. Time-series simulations are discussed for various modes of operation, considering dynamic PV penetration at different time periods in a day. In addition, this thesis demonstrates simulations taking into account the presence of moving cloud cover for solar forecasting studies.
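The quasi-static time-series idea can be sketched without OpenDSS itself: at each timestep the net feeder load is demand minus PV output, and the end-of-feeder voltage follows the standard drop approximation dV ≈ (P·R + Q·X)/V, so high PV output can push the voltage above the source value. The two-bus figures below are illustrative, not the IEEE 13-bus data.

```python
# Toy QSTS loop: one voltage-drop calculation per timestep.

V_SRC = 1.0          # source voltage, p.u.
R, X = 0.05, 0.08    # assumed aggregate feeder impedance, p.u.

def bus_voltage(p_load, q_load, p_pv):
    """End-of-feeder voltage (p.u.); negative net P raises the voltage."""
    p_net = p_load - p_pv
    return V_SRC - (p_net * R + q_load * X) / V_SRC

demand = [0.6, 0.8, 1.0, 0.9]   # p.u. load over four periods
pv     = [0.0, 0.7, 1.4, 0.2]   # p.u. PV output (cloud-driven dip at the end)
volts = [bus_voltage(p, 0.2, g) for p, g in zip(demand, pv)]
```

OpenDSS performs the same stepping with a full unbalanced power-flow solution per interval; the toy model only shows why midday PV surplus produces the voltage rise the thesis studies.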
Kaufman, Scott Barry; Benedek, Mathias; Jung, Rex E.; Kenett, Yoed N.; Jauk, Emanuel; Neubauer, Aljoscha C.; Silvia, Paul J.
2015-01-01
Abstract The brain's default network (DN) has been a topic of considerable empirical interest. In fMRI research, DN activity is associated with spontaneous and self‐generated cognition, such as mind‐wandering, episodic memory retrieval, future thinking, mental simulation, theory of mind reasoning, and creative cognition. Despite large literatures on developmental and disease‐related influences on the DN, surprisingly little is known about the factors that impact normal variation in DN functioning. Using structural equation modeling and graph theoretical analysis of resting‐state fMRI data, we provide evidence that Openness to Experience—a normally distributed personality trait reflecting a tendency to engage in imaginative, creative, and abstract cognitive processes—underlies efficiency of information processing within the DN. Across two studies, Openness predicted the global efficiency of a functional network comprised of DN nodes and corresponding edges. In Study 2, Openness remained a robust predictor—even after controlling for intelligence, age, gender, and other personality variables—explaining 18% of the variance in DN functioning. These findings point to a biological basis of Openness to Experience, and suggest that normally distributed personality traits affect the intrinsic architecture of large‐scale brain systems. Hum Brain Mapp 37:773–779, 2016. © 2015 Wiley Periodicals, Inc. PMID:26610181
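Global efficiency, the graph metric this study relates to Openness, is the mean of 1/d(i, j) over all node pairs, where d is the shortest-path length. A self-contained sketch (the four-node graph is illustrative, not the DN connectivity data):

```python
from collections import deque

def global_efficiency(nodes, edges):
    """Mean inverse shortest-path length over all ordered node pairs."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    total, pairs = 0.0, 0
    for src in nodes:
        # BFS shortest-path lengths from src (unweighted graph)
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:          # unreachable pairs contribute 0
                    total += 1.0 / dist[dst]
    return total / pairs

eff = global_efficiency([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)])
```

In resting-state fMRI work the edges would be thresholded functional correlations between DN regions rather than this toy path graph.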
Integrating Socioeconomic and Earth Science Data Using Geobrowsers and Web Services: A Demonstration
NASA Astrophysics Data System (ADS)
Schumacher, J. A.; Yetman, G. G.
2007-12-01
The societal benefit areas identified as the focus for the Global Earth Observing System of Systems (GEOSS) 10- year implementation plan are an indicator of the importance of integrating socioeconomic data with earth science data to support decision makers. To aid this integration, CIESIN is delivering its global and U.S. demographic data to commercial and open source Geobrowsers and providing open standards based services for data access. Currently, data on population distribution, poverty, and detailed census data for the U.S. are available for visualization and access in Google Earth, NASA World Wind, and a browser-based 2-dimensional mapping client. The mapping client allows for the creation of web map documents that pull together layers from distributed servers and can be saved and shared. Visualization tools with Geobrowsers, user-driven map creation and sharing via browser-based clients, and a prototype for characterizing populations at risk to predicted precipitation deficits will be demonstrated.
NASA Astrophysics Data System (ADS)
Salcedo Ulerio, Reynaldo Odalis
The analysis of overvoltages in electrical distribution networks is of considerable significance, since overvoltages may damage the power system infrastructure and the associated electrical equipment. Overvoltages in distribution networks arise due to switching transients, resonance, lightning strikes and ground faults, among other causes. The operation of network protectors (NWPs), low-voltage circuit breakers with directional power relays, in a secondary network prevents the continuous flow of reverse power. There are three modes of operation for the network protectors: sensitive, time-delayed, and insensitive. In case of a fault, although all of the network protectors sense the fault at the same time, their operation is not simultaneous. Many of them open very quickly, with opening times similar to those of the feeder breaker. However, some operate a few cycles later, others take several seconds to open, and a few might even fail to operate. Therefore, depending on the settings of the network protectors, faults can last for a significantly long time due to backfeeding of current from the low-voltage (LV) network into the medium-voltage (MV) network. In this work, low voltages are defined as 208 V/460 V and medium voltages as 25 kV/35 kV. This thesis presents overvoltages which arise because of the occurrence of a single-line-to-ground (SLG) fault on the MV side (connected in delta) of the system. The thesis reveals that overvoltage stresses are imposed on insulation, microprocessor-controlled equipment, and switching devices by overvoltages during current backfeeding. Also, it establishes a relationship between overvoltage magnitude, its duration, and the network loading conditions. Overvoltages above 3 p.u. may develop as a result of the simultaneous occurrence of three phenomena: neutral displacement, the Ferranti effect, and magnetic current chopping.
Furthermore, this thesis exposes the possibility of the ferroresonance phenomenon occurring in a distribution network with a secondary grid, making the study of extreme importance, especially in the case of a misoperating network protector. The test systems for both studies were designed following the conventional distribution network with secondary grid, similar to those in the New York City area. Simulations were performed using the Electromagnetic Transients Program, revised version (EMTP-RV), considering detailed representation of system components as well as the non-linear magnetization and losses of transformers.
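The neutral-displacement contribution named above can be checked with a small phasor calculation (a textbook idealization, not the EMTP-RV model): for a bolted SLG fault on phase A of an effectively ungrounded delta system, the neutral shifts to the faulted phase and the healthy phase-to-ground voltages rise from 1.0 p.u. toward √3 p.u., before the Ferranti and chopping effects add more.

```python
import cmath, math

def healthy_phase_voltage():
    """Phase-B-to-ground magnitude (p.u.) with phase A solidly faulted."""
    a = cmath.exp(2j * math.pi / 3)     # 120-degree rotation operator
    van, vbn, vcn = 1.0, a**2, a        # balanced phase-to-neutral set
    v_shift = -van                      # neutral displaced onto phase A
    return abs(vbn + v_shift)

v_pu = healthy_phase_voltage()
```

The result is exactly √3 ≈ 1.73 p.u., which is why sustained backfeeding of an SLG fault is already stressful even before the additional phenomena the thesis analyzes.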
Baghirov, Habib; Snipstad, Sofie; Sulheim, Einar; Berg, Sigrid; Hansen, Rune; Thorsen, Frits; Mørch, Yrr; Åslund, Andreas K. O.
2018-01-01
The treatment of brain diseases is hindered by the blood-brain barrier (BBB) preventing most drugs from entering the brain. Focused ultrasound (FUS) with microbubbles can open the BBB safely and reversibly. Systemic drug injection might induce toxicity, but encapsulation into nanoparticles reduces accumulation in normal tissue. Here we used a novel platform based on poly(2-ethyl-butyl cyanoacrylate) nanoparticle-stabilized microbubbles to permeabilize the BBB in a melanoma brain metastasis model. With a dual-frequency ultrasound transducer generating FUS at 1.1 MHz and 7.8 MHz, we opened the BBB using nanoparticle-microbubbles and low-frequency FUS, and applied high-frequency FUS to generate acoustic radiation force and push nanoparticles through the extracellular matrix. Using confocal microscopy and image analysis, we quantified nanoparticle extravasation and distribution in the brain parenchyma. We also evaluated haemorrhage, as well as the expression of P-glycoprotein, a key BBB component. FUS and microbubbles distributed nanoparticles in the brain parenchyma, and the distribution depended on the extent of BBB opening. The results from acoustic radiation force were not conclusive, but in a few animals some effect could be detected. P-glycoprotein was not significantly altered immediately after sonication. In summary, FUS with our nanoparticle-stabilized microbubbles can achieve accumulation and displacement of nanoparticles in the brain parenchyma. PMID:29338016
Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3
NASA Astrophysics Data System (ADS)
Endsley, K. A.; Billmire, M. G.
2016-01-01
Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool - the Carbon Data Explorer - that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Pratt, Annabelle; Bialek, Tom
2016-11-21
This paper reports on tools and methodologies developed to study the impact of adding rooftop photovoltaic (PV) systems, with and without the ability to provide voltage support, on the voltage profile of distribution feeders. Simulation results are provided from a study of a specific utility feeder. The simulation model of the utility distribution feeder was built in OpenDSS and verified by comparing the simulated voltages to field measurements. First, we set all PV systems to operate at unity power factor and analyzed the impact on feeder voltages. Then we conducted multiple simulations with voltage support activated for all the smart PV inverters. These included different constant power factor settings and volt/VAR controls.
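A volt/VAR control of the kind activated on the smart inverters is typically a piecewise-linear droop: no reactive response inside a deadband, then increasing VAR injection (undervoltage) or absorption (overvoltage) up to the inverter's capability. The breakpoints below are assumptions for illustration, not the study's actual settings.

```python
# Piecewise-linear volt/VAR droop sketch (assumed breakpoints).

DEADBAND = (0.98, 1.02)   # p.u. band with no reactive response
Q_MAX = 0.44              # available reactive capability, p.u. of rating
SLOPE_END = 0.06          # voltage deviation at which Q saturates

def volt_var(v_pu):
    """Reactive setpoint (p.u.): inject below the band, absorb above it."""
    lo, hi = DEADBAND
    if v_pu < lo:
        return min(Q_MAX, Q_MAX * (lo - v_pu) / SLOPE_END)
    if v_pu > hi:
        return -min(Q_MAX, Q_MAX * (v_pu - hi) / SLOPE_END)
    return 0.0

q = volt_var(1.05)   # overvoltage case: the inverter absorbs VARs
```

In OpenDSS this behavior is configured declaratively on the inverter model rather than coded by hand; the function above only shows the shape of the curve.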
Changes to Quantum Cryptography
NASA Astrophysics Data System (ADS)
Sakai, Yasuyuki; Tanaka, Hidema
Quantum cryptography has become a subject of widespread interest. In particular, quantum key distribution, which provides secure key agreement by using quantum systems, is believed to be the most important application of quantum cryptography. Quantum key distribution has the potential to achieve an “unconditionally” secure infrastructure. At present we also have many cryptographic tools based on “modern cryptography”. They are being used in an effort to guarantee secure communication over open networks such as the Internet. Unfortunately, their ultimate efficacy is in doubt. Quantum key distribution systems are believed to be close to practical and commercial use. In this paper, we discuss what we should do to apply quantum cryptography to our communications. We also discuss how quantum key distribution can be combined with or used to replace cryptographic tools based on modern cryptography.
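The key-agreement step of quantum key distribution can be illustrated with the sifting phase of the BB84 protocol (an idealized, noiseless toy with no eavesdropper, error correction, or privacy amplification): Alice and Bob each choose random bases, and only the positions where their bases match contribute to the shared key.

```python
import random

# Toy BB84 sifting sketch (idealized channel, classical simulation).

random.seed(7)
N = 64
alice_bits  = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]   # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(N)]

# On a noiseless channel Bob's measurement equals Alice's bit whenever the
# bases match; mismatched positions give random outcomes and are discarded.
sifted = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
          if ab == bb]
```

On average half the positions survive sifting; the security argument, of course, lives in the physics that this classical sketch cannot capture.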
The event notification and alarm system for the Open Science Grid operations center
NASA Astrophysics Data System (ADS)
Hayashi, S.; Teige, S.; Quick, R.
2012-12-01
The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability, and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, an instant-message server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
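The prioritized, configurable routing described above can be sketched as a severity-to-channel table (the severity names and channel mapping are assumptions for illustration, not the OSG operations configuration):

```python
# Severity-based alarm routing sketch (assumed configuration).

ROUTES = {
    "critical": ["sms", "email", "im"],   # page a human immediately
    "warning":  ["email", "im"],
    "info":     ["im"],
}

def route_alarm(severity, message):
    """Return (channel, message) pairs for each configured channel."""
    return [(ch, message) for ch in ROUTES.get(severity, ["im"])]

notifications = route_alarm("critical", "CE host unresponsive")
```

A production system would add deduplication, escalation timers, and acknowledgement tracking on top of this dispatch step.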
Advanced and secure architectural EHR approaches.
Blobel, Bernd
2006-01-01
Electronic Health Records (EHRs), provided as a lifelong patient record, are advancing towards core applications of distributed and co-operating health information systems and health networks. To meet the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and be model-driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives comprise the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, and so on. Thus, security-related component models reflecting all of the views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides the decomposition and simplification of systems through the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both the structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles.
In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. The future-proof EHR architecture, as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks, has to follow advanced architectural paradigms.
Event parallelism: Distributed memory parallel computing for high energy physics experiments
NASA Astrophysics Data System (ADS)
Nash, Thomas
1989-12-01
This paper describes the present and expected future development of distributed-memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.
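Event parallelism works because collider events are statistically independent, so each worker can process whole events with no inter-worker communication. A minimal sketch of the farm pattern, with a thread pool standing in for the ACP's processor nodes and a trivial stand-in for reconstruction:

```python
from multiprocessing.pool import ThreadPool

def reconstruct(event):
    """Stand-in reconstruction: sum the 'hit energies' of one event."""
    return sum(event)

# Each inner list is one independent event; the pool farms them out whole.
events = [[1, 2, 3], [4, 5], [6], [7, 8, 9, 10]]
with ThreadPool(4) as pool:
    results = pool.map(reconstruct, events)   # one event per worker task
```

Because no event needs data from any other, throughput scales almost linearly with the number of workers, which is exactly the economics that made the ACP farms cost-effective.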
Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion
NASA Astrophysics Data System (ADS)
Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger
2007-12-01
Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (center for art and media) in Karlsruhe.
Cloud-based distributed control of unmanned systems
NASA Astrophysics Data System (ADS)
Nguyen, Kim B.; Powell, Darren N.; Yetman, Charles; August, Michael; Alderson, Susan L.; Raney, Christopher J.
2015-05-01
By enabling warfighters to execute dangerous missions efficiently and safely, unmanned systems have become an increasingly valuable component of modern warfare. The evolving use of unmanned systems leads to vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the necessary tools to perform one of the following functions: controlling the unmanned vehicle, or analyzing and processing the sensory data from unmanned vehicles. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three-year effort is to provide three major capabilities: 1) unmanned vehicle control using an open service-oriented architecture; 2) data distribution utilizing cloud technologies; and 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and the Ozone Widget Framework (OWF).
An analysis of water data systems to inform the Open Water Data Initiative
Blodgett, David L.; Read, Emily K.; Lucido, Jessica M.; Slawecki, Tad; Young, Dwane
2016-01-01
Improving access to data and fostering open exchange of water information is foundational to solving water resources issues. In this vein, the Department of the Interior's Assistant Secretary for Water and Science put forward the charge to undertake an Open Water Data Initiative (OWDI) that would prioritize and accelerate work toward better water data infrastructure. The goal of the OWDI is to build out the Open Water Web (OWW). We therefore considered the OWW in terms of four conceptual functions: water data cataloging, water data as a service, enriching water data, and community for water data. To describe the current state of the OWW and identify areas needing improvement, we conducted an analysis of existing systems using a standard model for describing distributed systems and their business requirements. Our analysis considered three OWDI-focused use cases—flooding, drought, and contaminant transport—and then examined the landscape of other existing applications that support the Open Water Web. The analysis, which includes a discussion of observed successful practices of cataloging, serving, enriching, and building community around water resources data, demonstrates that we have made significant progress toward the needed infrastructure, although challenges remain. The further development of the OWW can be greatly informed by the interpretation and findings of our analysis.
PACS for Bhutan: a cost effective open source architecture for emerging countries.
Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine
2016-10-01
This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referring hospital of Bhutan equipped with a CT, an MRI, digital radiology, and a suite of several ultrasound units. They lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distribution of images to clinical wards. The solution developed for this project combines several open source software platforms in a robust and versatile archiving and communication system connected to analysis workstations equipped with a FDA-certified version of the highly popular Open-Source software. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly capable of reading and interpreting studies on the diagnostic workstations, which had a significant benefit on their workflow and ability to perform diagnostic tasks more efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.
Strategies for Competition Beyond Open Architecture (OA): Acquisition at the Edge of Chaos
2014-11-08
critical region in which the global properties of the system take on regular behavior, such as a power-law distribution of event sizes. Such ideas are...standardization can improve system developments. For example, satellite development can be improved by using standard interfaces for the sensors...installed on the satellite bus. This allows for more adaptability within the system.1 The adaptability is a key foundation for realizing capability on demand
Focused Logistics and Support for Force Projection in Force XXI and Beyond
1999-12-09
business system linking trading partners with point of sale demand and real time manufacturing for clothing items.17 Quick Response achieved $1.7...be able to determine the real - time status and supply requirements of units. With "distributed logistics system software model hosts൨ and active...location, quantity, condition, and movement of assets. The system is designed to be fully automated, operate in near- real time with an open-architecture
THE BERKELEY DATA ANALYSIS SYSTEM (BDAS): AN OPEN SOURCE PLATFORM FOR BIG DATA ANALYTICS
2017-09-01
Evan Sparks, Oliver Zahn, Michael J. Franklin, David A. Patterson, Saul Perlmutter. Scientific Computing Meets Big Data Technology: An Astronomy ... Processing Astronomy Imagery Using Big Data Technology. IEEE Transactions on Big Data, 2016. Approved for Public Release; Distribution Unlimited.
Kricheldorf, Fabio; Bueno, Cleuber Rodrigo de Souza; Amaral, Wilson da Silva; Junior, Joel Ferreira Santiago; Filho, Hugo Nary
2018-01-01
The objective of this study was to compare the marginal adaptation of feldspathic porcelain crowns fabricated using two computer-aided design/computer-aided manufacturing systems, one open and the other closed. Twenty identical titanium abutments were divided into two groups: open system (OS), where ceramic crowns were created using varied equipment and software, and closed system (CS), where ceramic crowns were created using the CEREC system. Through optical microscopy analysis, we assessed the marginal adaptation of the prosthetic interfaces. The data were tested for normality and homogeneity of variance. The t-test was used for the comparison between the groups, and one-way ANOVA was used to compare the variance across crown analysis regions within each group. A significance level of 5% was adopted for the analyses. There was a significant difference between the systems (P = 0.007), with the CS group having the higher mean marginal discrepancy (23.75 μm ± 3.05) compared to the OS group (17.94 μm ± 4.77). Furthermore, there were no differences in marginal discrepancy between the different measurement points between the groups (P ≥ 0.05). Both groups presented results within the requirements set out in the literature. However, the OS presented better results in marginal adaptation.
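The group comparison reported above is a standard pooled two-sample t-test, which can be computed by hand. The measurements below are made-up marginal-gap values in the ballpark of the reported means, not the study's data.

```python
import math

# Pooled two-sample t statistic, computed from scratch (hypothetical data).

os_gaps = [17.0, 18.5, 16.2, 19.1, 18.9, 17.9]   # open system (made up)
cs_gaps = [23.0, 24.5, 22.8, 24.1, 23.9, 24.2]   # closed system (made up)

def t_statistic(x, y):
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled = ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)
    return (mx - my) / math.sqrt(pooled * (1 / nx + 1 / ny))

t = t_statistic(os_gaps, cs_gaps)   # large negative t: OS gaps are smaller
```

The P value then comes from comparing t against a Student distribution with nx + ny - 2 degrees of freedom, which a statistics library would supply.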
Telescience - Optimizing aerospace science return through geographically distributed operations
NASA Technical Reports Server (NTRS)
Rasmussen, Daryl N.; Mian, Arshad M.
1990-01-01
The paper examines the objectives and requirements of teleoperations, defined as the means and process for scientists, NASA operations personnel, and astronauts to conduct payload operations as if these were colocated. This process is described in terms of Space Station era platforms. Some of the enabling technologies are discussed, including open architecture workstations, distributed computing, transaction management, expert systems, and high-speed networks. Recent testbedding experiments are surveyed to highlight some of the human factors requirements.
ERIC Educational Resources Information Center
Pardos, Zachary A.; Whyte, Anthony; Kao, Kevin
2016-01-01
In this paper, we address issues of transparency, modularity, and privacy with the introduction of an open source, web-based data repository and analysis tool tailored to the Massive Open Online Course community. The tool integrates data request/authorization and distribution workflow features as well as provides a simple analytics module upload…
The Tithing of Higher Education, Out-of-Pocket Spending by Faculty. A Research Report.
ERIC Educational Resources Information Center
Maury, Kathleen; And Others
This study was done to determine how much faculty in the Minnesota State University System spend out of their own pocket to support their work. A survey was distributed to all system faculty (n=2,370) and included demographic and spending pattern items as well as open-ended items. Seven hundred and eleven surveys were returned. Results indicated…
Klise, Katherine A.; Bynum, Michael; Moriarty, Dylan; ...
2017-07-07
Water utilities are vulnerable to a wide variety of human-caused and natural disasters. The Water Network Tool for Resilience (WNTR) is a new open source Python package designed to help water utilities investigate resilience of water distribution systems to hazards and evaluate resilience-enhancing actions. In this paper, the WNTR modeling framework is presented and a case study is described that uses WNTR to simulate the effects of an earthquake on a water distribution system. The case study illustrates that the severity of damage is not only a function of system integrity and earthquake magnitude, but also of the available resources and repair strategies used to return the system to normal operating conditions. While earthquakes are particularly concerning since buried water distribution pipelines are highly susceptible to damage, the software framework can be applied to other types of hazards, including power outages and contamination incidents.
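As a rough illustration of the kind of question WNTR addresses (this is not the WNTR API, and the five-node layout is hypothetical), a purely topological resilience check can be sketched in a few lines: remove earthquake-damaged pipes and ask what fraction of junctions remain connected to the source.

```python
from collections import deque

def reachable_fraction(pipes, source, nodes, damaged):
    """Fraction of demand nodes still connected to the source after
    removing damaged pipes (a crude topological resilience metric)."""
    adj = {n: set() for n in nodes}
    for name, (a, b) in pipes.items():
        if name not in damaged:
            adj[a].add(b)
            adj[b].add(a)
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in adj[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    demand = [n for n in nodes if n != source]
    return sum(n in seen for n in demand) / len(demand)

# Hypothetical 5-node system fed from reservoir "R"
nodes = ["R", "J1", "J2", "J3", "J4"]
pipes = {"P1": ("R", "J1"), "P2": ("J1", "J2"),
         "P3": ("J2", "J3"), "P4": ("J1", "J4"), "P5": ("J4", "J3")}
print(reachable_fraction(pipes, "R", nodes, damaged={"P2", "P5"}))  # J2 and J3 lose supply: 0.5
```

A real WNTR analysis layers hydraulics, repair crews, and time-to-restoration on top of this kind of connectivity reasoning.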
Study on process design of partially-balanced, hydraulically lifting vertical ship lift
NASA Astrophysics Data System (ADS)
Xin, Shen; Xiaofeng, Xu; Lu, Zhang; Bing, Zhu; Fei, Li
2017-11-01
The hub ship lift in Panjin is the first navigation structure in China linking inland waters and the open sea. It adopts a novel partially balanced, hydraulically lifting design that can accommodate fast, sharp water-level changes in the open sea, the large draft of a yacht, and the launching of a ship reception chamber. Its balancing-weight system effectively reduces the load on the primary lifting cylinder and optimizes the force distribution of the ship reception chamber. The paper introduces the main equipment, basic principles, main features, and system composition of the ship lift. The unique power and balancing systems of the completed ship lift offer experience for the construction of tourism-type ship lifts with lower lifting heights.
Experimental study and modelling of Arthrospira platensis cultivation in open raceway ponds.
Ranganathan, Panneerselvam; Amal, J C; Savithri, S; Haridas, Ajith
2017-10-01
In this study, the growth of Arthrospira platensis was studied in an open raceway pond, and a dynamic model of algal growth and a CFD model of the pond's hydrodynamics were developed. The dynamic behaviour of the algal system was modelled by solving mass balance equations for the various components, accounting for light intensity and gas-liquid mass transfer. The hydrodynamics of the open raceway pond were modelled by solving mass and momentum balance equations for the liquid medium. The algae concentrations predicted by the dynamic model were compared with the experimental data, and the hydrodynamic behaviour was compared with literature data for model validation; the model predictions match the experimental findings. Furthermore, the hydrodynamic behaviour and residence time distribution in our small raceway pond were predicted. These models can serve as tools to assess pond performance criteria. Copyright © 2017 Elsevier Ltd. All rights reserved.
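A toy version of the coupled growth dynamics can be sketched as a single light-limited ODE integrated by forward Euler. All parameter values below are hypothetical, and the study's actual model also includes gas-liquid mass transfer and CFD-derived hydrodynamics.

```python
import math

def grow(x0, days, mu_max, half_sat, ext, i0, depth, dt=0.01):
    """Forward-Euler integration of a toy light-limited growth model:
    dX/dt = mu(I_avg) * X, with Beer-Lambert attenuation over depth."""
    x = x0
    for _ in range(int(days / dt)):
        atten = ext * x * depth                      # optical depth of the culture
        i_avg = i0 * (1 - math.exp(-atten)) / atten  # depth-averaged irradiance
        mu = mu_max * i_avg / (half_sat + i_avg)     # Monod-type light limitation
        x += mu * x * dt
    return x

# All parameter values hypothetical (per-day rates; irradiance in arbitrary units)
final = grow(x0=0.05, days=10, mu_max=0.5, half_sat=100.0, ext=80.0, i0=200.0, depth=0.2)
print(round(final, 3))
```

Note how self-shading (the Beer-Lambert term) makes specific growth rate fall as biomass accumulates, which is the qualitative behaviour such raceway-pond models reproduce.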
NASA Astrophysics Data System (ADS)
Drescher, Anushka C.; Yost, Michael G.; Park, Doo Y.; Levine, Steven P.; Gadgil, Ashok J.; Fischer, Marc L.; Nazaroff, William W.
1995-05-01
Optical remote sensing and iterative computed tomography (CT) can be combined to measure the spatial distribution of gaseous pollutant concentrations in a plane. We have conducted chamber experiments to test this combination of techniques using an Open Path Fourier Transform Infrared Spectrometer (OP-FTIR) and a standard algebraic reconstruction technique (ART). ART was found to converge to solutions that showed excellent agreement with the ray integral concentrations measured by the FTIR but were inconsistent with simultaneously gathered point sample concentration measurements. A new CT method was developed based on (a) the superposition of bivariate Gaussians to model the concentration distribution and (b) a simulated annealing minimization routine to find the parameters of the Gaussians that resulted in the best fit to the ray integral concentration data. This new method, named smooth basis function minimization (SBFM), generated reconstructions that agreed well, both qualitatively and quantitatively, with the concentration profiles generated from point sampling. We present one set of illustrative experimental data to compare the performance of ART and SBFM.
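A minimal ART sketch, using a hypothetical 2×2 concentration grid and four ray integrals (rows and columns), shows the Kaczmarz-style projection step at the core of the technique: each ray constraint in turn pulls the current image toward agreement with its measured ray integral.

```python
def art(rays, measurements, n_cells, sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps): project the
    current image onto each ray-integral constraint in turn."""
    x = [0.0] * n_cells
    for _ in range(sweeps):
        for a, m in zip(rays, measurements):
            norm = sum(v * v for v in a)
            if norm == 0.0:
                continue
            resid = (m - sum(v * xi for v, xi in zip(a, x))) / norm
            x = [xi + relax * resid * v for xi, v in zip(x, a)]
    return x

# Toy 2x2 image; rays along rows and columns with unit path lengths assumed
rays = [[1, 1, 0, 0],   # top row
        [0, 0, 1, 1],   # bottom row
        [1, 0, 1, 0],   # left column
        [0, 1, 0, 1]]   # right column
true = [1.0, 2.0, 3.0, 4.0]
meas = [sum(a * t for a, t in zip(r, true)) for r in rays]
recon = art(rays, meas, 4)
print([round(v, 2) for v in recon])  # → [1.0, 2.0, 3.0, 4.0]
```

With only a handful of rays the system is underdetermined in general, which is exactly the situation where a parametric approach like SBFM (fitting smooth Gaussian basis functions instead of independent pixels) can outperform pixel-wise ART.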
KeyWare: an open wireless distributed computing environment
NASA Astrophysics Data System (ADS)
Shpantzer, Isaac; Schoenfeld, Larry; Grindahl, Merv; Kelman, Vladimir
1995-12-01
Deployment of distributed applications in the wireless domain lacks the equivalent tools, methodologies, architectures, and network management that exist in LAN-based applications. A wireless distributed computing environment (KeyWareTM) based on intelligent agents within a multiple-client multiple-server scheme was developed to resolve this problem. KeyWare renders concurrent application services to wireline and wireless client nodes encapsulated in multiple paradigms such as message delivery, database access, e-mail, and file transfer. These services and paradigms are optimized to cope with temporal and spatial radio coverage, high latency, limited throughput, and transmission costs. A unified network management paradigm for both wireless and wireline nodes facilitates seamless extension of LAN-based management tools to include wireless nodes. A set of object-oriented tools and methodologies enables direct asynchronous invocation of agent-based services, supplemented by tool-sets matched to supported KeyWare paradigms. The open architecture embodiment of KeyWare enables a wide selection of client node computing platforms, operating systems, transport protocols, radio modems and infrastructures while maintaining application portability.
NASA Astrophysics Data System (ADS)
Qi, Bing; Lougovski, Pavel; Pooser, Raphael; Grice, Warren; Bobrek, Miljko
2015-10-01
Continuous-variable quantum key distribution (CV-QKD) protocols based on coherent detection have been studied extensively in both theory and experiment. In all the existing implementations of CV-QKD, both the quantum signal and the local oscillator (LO) are generated from the same laser and propagate through the insecure quantum channel. This arrangement may open security loopholes and limit the potential applications of CV-QKD. In this paper, we propose and demonstrate a pilot-aided feedforward data recovery scheme that enables reliable coherent detection using a "locally" generated LO. Using two independent commercial laser sources and a spool of 25-km optical fiber, we construct a coherent communication system. The variance of the phase noise introduced by the proposed scheme is measured to be 0.04 rad², which is small enough to enable secure key distribution. This technology also opens the door for other quantum communication protocols, such as the recently proposed measurement-device-independent CV-QKD, where independent light sources are employed by different users.
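The pilot-aided idea can be illustrated with a toy baseband model: estimate the phase offset between the two free-running lasers from known pilot symbols, then rotate the signal samples back. All symbol values below are hypothetical and the real scheme operates on homodyne quadrature data, not clean complex symbols.

```python
import cmath

def estimate_phase(pilot_tx, pilot_rx):
    """Estimate the phase offset between transmitter laser and local LO
    from known pilot symbols (least-squares over the pilot block)."""
    acc = sum(rx * tx.conjugate() for tx, rx in zip(pilot_tx, pilot_rx))
    return cmath.phase(acc)

theta = 0.7                          # unknown drift between the two lasers
rot = cmath.exp(1j * theta)
pilot_tx = [1 + 0j, -1 + 0j, 1j, -1j]       # pilot block known to both sides
pilot_rx = [s * rot for s in pilot_tx]      # received pilots carry the drift
signal_rx = [(0.3 + 0.1j) * rot]            # quantum-signal sample, same drift

est = estimate_phase(pilot_tx, pilot_rx)
corrected = [s * cmath.exp(-1j * est) for s in signal_rx]
print(round(est, 3))  # recovers the drift, ≈ 0.7
```

In practice the residual error of this feedforward correction is what shows up as the 0.04 rad² phase-noise variance quoted in the abstract.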
An approach for the semantic interoperability of ISO EN 13606 and OpenEHR archetypes.
Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2010-10-01
The communication between health information systems of hospitals and primary care organizations is currently an important challenge to improve the quality of clinical practice and patient safety. However, clinical information is usually distributed among several independent systems that may be syntactically or semantically incompatible. This fact prevents healthcare professionals from accessing clinical information of patients in an understandable and normalized way. In this work, we address the semantic interoperability of two EHR standards: OpenEHR and ISO EN 13606. Both standards follow the dual model approach which distinguishes information and knowledge, this being represented through archetypes. The solution presented here is capable of transforming OpenEHR archetypes into ISO EN 13606 and vice versa by combining Semantic Web and Model-driven Engineering technologies. The resulting software implementation has been tested using publicly available collections of archetypes for both standards.
Method and apparatus for active tamper indicating device using optical time-domain reflectometry
Smith, D. Barton; Muhs, Jeffrey D.; Pickett, Chris A.; Earl, D. Duncan
1999-01-01
An optical time-domain reflectometer (OTDR) launches pulses of light into a link or a system of multiplexed links and records the waveform of pulses reflected by the seals in the link(s). If a seal is opened, the link of cables will become a discontinuous transmitter of the light pulses and the OTDR can immediately detect that a seal has been opened. By analyzing the waveform, the OTDR can also quickly determine which seal(s) were opened. In this way the invention functions as a system of active seals. The invention is intended for applications that require long-term surveillance of a large number of closures. It provides immediate tamper detection, allows for periodic access to secured closures, and can be configured for many different distributions of closures. It can monitor closures in indoor and outdoor locations and it can monitor containers or groups of containers located many kilometers apart.
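The seal-localization logic reduces to comparing the return waveform against the expected reflection position of each seal; a missing peak identifies an opened seal. A minimal sketch, with hypothetical seal positions and amplitudes:

```python
def opened_seals(seal_positions, waveform, threshold=0.5):
    """Compare an OTDR return waveform against the expected reflection
    position of each seal; a missing peak means the seal was opened."""
    return [name for name, pos in seal_positions.items()
            if waveform.get(pos, 0.0) < threshold]

# Hypothetical link: three seals at 120 m, 450 m and 900 m
seals = {"seal_A": 120, "seal_B": 450, "seal_C": 900}
# Reflection amplitude by position; seal_B's peak has vanished
waveform = {120: 0.9, 450: 0.05, 900: 0.8}
print(opened_seals(seals, waveform))  # ['seal_B']
```

An opened seal also breaks the optical path, so in the real system everything downstream of the break goes dark as well, which further constrains which seal was opened.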
Recent insights into instability and transition to turbulence in open-flow systems
NASA Technical Reports Server (NTRS)
Morkovin, Mark V.
1988-01-01
Roads to turbulence in open-flow shear layers are interpreted as sequences of often competing instabilities. These correspond to primary and higher order restructurings of vorticity distributions which culminate in convected spatial disorder (with some spatial coherence on the scale of the shear layer) traditionally called turbulence. Attempts are made to interpret these phenomena in terms of concepts of convective and global instabilities on one hand, and of chaos and strange attractors on the other. The first is fruitful, and together with a review of mechanisms of receptivity provides a unifying approach to understanding and estimating transition to turbulence. In contrast, current evidence indicates that concepts of chaos are unlikely to help in predicting transition in open-flow systems. Furthermore, a distinction should apparently be made between temporal chaos and the convected spatial disorder of turbulence past Reynolds numbers where boundary layers and separated shear layers are formed.
Grefenstette, John J; Brown, Shawn T; Rosenfeld, Roni; DePasse, Jay; Stone, Nathan T B; Cooley, Phillip C; Wheaton, William D; Fyshe, Alona; Galloway, David D; Sriram, Anuroop; Guclu, Hasan; Abraham, Thomas; Burke, Donald S
2013-10-08
Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels. FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations. State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.
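FRED itself is a large open-source system; as a hedged illustration of the agent-based mechanism it builds on, a toy susceptible-infectious-recovered step over an explicit contact network might look like this (all parameters and the four-person household are hypothetical):

```python
import random

def step(population, contacts, p_transmit, recover_after):
    """One day of a toy agent-based epidemic: infectious agents expose
    their contacts, then advance toward recovery."""
    newly = set()
    for agent, state in population.items():
        if state["status"] == "I":
            for other in contacts[agent]:
                if population[other]["status"] == "S" and random.random() < p_transmit:
                    newly.add(other)
            state["days"] += 1
            if state["days"] >= recover_after:
                state["status"] = "R"
    for agent in newly:
        population[agent] = {"status": "I", "days": 0}

random.seed(1)
# Hypothetical household of four with one index case
contacts = {a: [b for b in "ABCD" if b != a] for a in "ABCD"}
population = {a: {"status": "S", "days": 0} for a in "ABCD"}
population["A"] = {"status": "I", "days": 0}
for _ in range(10):
    step(population, contacts, p_transmit=0.4, recover_after=3)
print(sorted(p["status"] for p in population.values()))
```

FRED scales this same mechanic to census-derived synthetic populations with realistic household, school, and workplace networks, which is what makes county-level intervention comparisons possible.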
Power System Simulation for Policymaking and Making Policymakers
NASA Astrophysics Data System (ADS)
Cohen, Michael Ari
Power system simulation is a vital tool for anticipating, planning for and ultimately addressing future conditions on the power grid, especially in light of contemporary shifts in power generation, transmission and use that are being driven by a desire to utilize more environmentally responsible energy sources. This dissertation leverages power system simulation and engineering-economic analysis to provide initial answers to one open question about future power systems: how will high penetrations of distributed (rooftop) solar power affect the physical and economic operation of distribution feeders? We find that the overall impacts of distributed solar power (both positive and negative) on the feeders we modeled are minor compared to the overall cost of energy, but that there is on average a small net benefit provided by distributed generation. We then describe an effort to make similar analyses more accessible to a non-engineering (high school) audience by developing an educational video game called "Griddle" that is based on the same power system simulation techniques used in the first study. We describe the design and evaluation of Griddle and find that it demonstrates potential to provide students with insights about key power system learning objectives.
Open Source Next Generation Visualization Software for Interplanetary Missions
NASA Technical Reports Server (NTRS)
Trimble, Jay; Rinker, George
2016-01-01
Mission control is evolving quickly, driven by the requirements of new missions, and enabled by modern computing capabilities. Distributed operations, access to data anywhere, data visualization for spacecraft analysis that spans multiple data sources, flexible reconfiguration to support multiple missions, and operator use cases, are driving the need for new capabilities. NASA's Advanced Multi-Mission Operations System (AMMOS), Ames Research Center (ARC) and the Jet Propulsion Laboratory (JPL) are collaborating to build a new generation of mission operations software for visualization, to enable mission control anywhere, on the desktop, tablet and phone. The software is built on an open source platform that is open for contributions (http://nasa.github.io/openmct).
Selbig, William R.
2017-01-01
Collection of water-quality samples that accurately characterize average particle concentrations and distributions in channels can be complicated by large sources of variability. The U.S. Geological Survey (USGS) developed a fully automated Depth-Integrated Sample Arm (DISA) as a way to reduce bias and improve accuracy in water-quality concentration data. The DISA was designed to integrate with existing autosampler configurations commonly used for the collection of water-quality samples in vertical profile thereby providing a better representation of average suspended sediment and sediment-associated pollutant concentrations and distributions than traditional fixed-point samplers. In controlled laboratory experiments, known concentrations of suspended sediment ranging from 596 to 1,189 mg/L were injected into a 3 foot diameter closed channel (circular pipe) with regulated flows ranging from 1.4 to 27.8 ft3 /s. Median suspended sediment concentrations in water-quality samples collected using the DISA were within 7 percent of the known, injected value compared to 96 percent for traditional fixed-point samplers. Field evaluation of this technology in open channel fluvial systems showed median differences between paired DISA and fixed-point samples to be within 3 percent. The range of particle size measured in the open channel was generally that of clay and silt. Differences between the concentration and distribution measured between the two sampler configurations could potentially be much larger in open channels that transport larger particles, such as sand.
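The accuracy comparison amounts to a median percent error against the known injected concentration; the sketch below uses hypothetical paired measurements, not the study's data, to show the computation.

```python
def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def percent_error(measured, known):
    """Median absolute percent difference between measured and known
    suspended sediment concentrations (mg/L)."""
    return median([abs(m - k) / k * 100 for m, k in zip(measured, known)])

# Hypothetical paired injections: the depth-integrated arm tracks the known
# value closely, while a fixed-point intake misses part of the profile
known = [596, 800, 1000, 1189]
disa  = [620, 770, 1050, 1150]
fixed = [350, 420, 520, 600]
print(round(percent_error(disa, known), 1))
print(round(percent_error(fixed, known), 1))
```

The gap between the two error figures mirrors the study's finding that a vertically integrated sample better represents the average concentration than any single fixed point in the profile.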
Root canal penetration of a sodium hypochlorite mixture using sonic or ultrasonic activation.
Sáinz-Pardo, Marta; Estevez, Roberto; Pablo, Óliver Valencia de; Rossi-Fedele, Giampiero; Cisneros, Rafael
2014-01-01
The purpose of this ex vivo study was to determine, in "open" and "closed" systems, whether the design has an influence on the penetration length of sodium hypochlorite mixed with a radiopaque contrast medium, measured in millimeters, when delivered using positive pressure (PP) and using sonic (SI) or passive ultrasonic (PUI) activation. Sixty single-rooted teeth were divided into two groups: open and closed systems (n=30). Root canal shaping was performed to a working length of 17 mm. The samples were divided into three sub-groups (n=10) according to irrigant delivery and activation: PP, and SI or PUI activation. By using radiographs, penetration length was measured, and vapor lock was assessed. For the closed group, the penetration distance means were: PP 15.715 (±0.898) mm, SI 16.299 (±0.738) mm and PUI 16.813 (±0.465) mm, with vapor lock occurring in 53.3% of the specimens. In the open group, penetration to 17 mm occurred in 97.6% of the samples, and no vapor lock occurred. Irrigant penetration and distribution evaluation using open and closed systems provide significantly different results. For closed systems, PUI is the most effective in delivering the irrigant to working length, followed by SI.
Building an Open Source Framework for Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Jagers, B.; Meijers, E.; Villars, M.
2015-12-01
In order to develop effective strategies and associated policies for environmental management, we need to understand the dynamics of the natural system as a whole and the human role therein. This understanding is gained by comparing our mental model of the world with observations from the field. However, to properly understand the system we should look at the dynamics of water, sediments, water quality, and ecology throughout the whole system from catchment to coast, both at the surface and in the subsurface. Numerical models are indispensable in helping us understand the interactions of the overall system, but we need to be able to update and adjust them to improve our understanding and test our hypotheses. To support researchers around the world with this challenging task, we started a few years ago the development of a new open source modeling environment, DeltaShell, that integrates distributed hydrological models with 1D, 2D, and 3D hydraulic models, including generic components for tracking sediment, water quality, and ecological quantities throughout the hydrological cycle. The open source approach, combined with a modular approach based on open standards that allows for easy adjustment and expansion as demands and knowledge grow, provides an ideal starting point for addressing challenging integrated environmental questions.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-20
... initiated a rulemaking proceeding in accordance with provisions added by the Digital Millennium Copyright... available in digital copies. Proponent: The Open Book Alliance. 2. Literary works, distributed electronically, that: (1) Contain digital rights management and/or other access controls which either prevent the...
LiveInventor: An Interactive Development Environment for Robot Autonomy
NASA Technical Reports Server (NTRS)
Neveu, Charles; Shirley, Mark
2003-01-01
LiveInventor is an interactive development environment for robot autonomy developed at NASA Ames Research Center. It extends the industry-standard OpenInventor graphics library and scenegraph file format to include kinetic and kinematic information, a physics-simulation library, an embedded Scheme interpreter, and a distributed communication system.
NASA Astrophysics Data System (ADS)
Wang, Yongli; Wang, Gang; Zuo, Yi; Fan, Lisha; Wei, Jiaxiang
2017-03-01
On March 15, 2015, the Central Office issued the "Opinions on Further Deepening the Reform of the Electric Power System" (Zhong Fa [2015] No. 9). This policy marks the central government's official opening of a new round of electricity reform. As a programmatic document for comprehensively promoting power system reform under the new situation, Document No. 9 makes the separate approval of transmission and distribution electricity prices the first task of the reform. Grid tariff reform involves not only the separate approval of transmission and distribution prices but also deep adjustments to the grid companies' input-output relationships and many other aspects. Against the background of this reform, the main factors affecting the input-output relationship, such as the main business, electricity pricing, investment approval, and financial accounting, have changed significantly. The paper designs a comprehensive evaluation index system for power grid enterprises' credit ratings under the transmission and distribution price reform, to reduce the impact of the reform on the companies' international rating results and their ability to raise funds.
NASA Astrophysics Data System (ADS)
Wang, Yongli; Wang, Gang; Zuo, Yi; Fan, Lisha; Ling, Yunpeng
2017-03-01
On March 15, 2015, the Central Office issued the "Opinions on Further Deepening the Reform of the Electric Power System" (Zhong Fa No. 9). This policy marks the central government's official opening of a new round of electricity reform. As a programmatic document for comprehensively promoting power system reform under the new situation, Document No. 9 makes the separate approval of transmission and distribution electricity prices the first task of the reform. Grid tariff reform involves not only the separate approval of transmission and distribution prices but also deep adjustments to the grid companies' input-output relationships and many other aspects. Against the background of this reform, the main factors affecting the input-output relationship, such as the main business, electricity pricing, investment approval, and financial accounting, have changed significantly. The paper designs a comprehensive evaluation index system for the investment benefits of power grid projects under the transmission and distribution price reform, to improve the investment efficiency of power grid projects after the power reform in China.
NASA Astrophysics Data System (ADS)
Xu, Dazhi; Cao, Jianshu
2016-08-01
The concept of the polaron, which emerged from condensed matter physics, describes the dynamical interaction of a moving particle with its surrounding bosonic modes. This concept has been developed into a useful method for treating open quantum systems across the complete range of system-bath coupling strengths. In particular, the polaron transformation approach is valid in the intermediate coupling regime, where the Redfield equation or Fermi's golden rule fails. In the polaron frame, the equilibrium distribution obtained by perturbative expansion deviates from the canonical distribution, going beyond the usual weak-coupling assumption in thermodynamics. A polaron-transformed Redfield equation (PTRE) not only reproduces the dissipative quantum dynamics but also provides an accurate and efficient way to calculate non-equilibrium steady states. Applications of the PTRE approach to problems such as exciton diffusion, heat transport, and light-harvesting energy transfer are presented.
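For concreteness, in the standard spin-boson convention $H = \frac{\varepsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x + \sigma_z\sum_k g_k \left(b_k^{\dagger} + b_k\right) + \sum_k \omega_k b_k^{\dagger} b_k$, one common form of the polaron transformation is

```latex
\tilde{H} = e^{S} H e^{-S},
\qquad
S = \sigma_z \sum_k \frac{g_k}{\omega_k}\left(b_k^{\dagger} - b_k\right).
```

This eliminates the diagonal system-bath coupling (up to an energy shift) and dresses the tunneling term, $\frac{\Delta}{2}\sigma_x \to \frac{\Delta}{2}\left(\sigma_+ e^{2D} + \sigma_- e^{-2D}\right)$ with $D = \sum_k \frac{g_k}{\omega_k}\left(b_k^{\dagger} - b_k\right)$; the thermal average of $e^{\pm 2D}$ gives a renormalized tunneling amplitude, which is what allows perturbation theory in the polaron frame to remain accurate at intermediate coupling. Signs and numerical factors depend on convention and may differ from those used in the paper.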
Versioned distributed arrays for resilience in scientific applications: Global view resilience
Chien, A.; Balaji, P.; Beckman, P.; ...
2015-06-01
Exascale studies project reliability challenges for future high-performance computing (HPC) systems. We propose the Global View Resilience (GVR) system, a library that enables applications to add resilience in a portable, application-controlled fashion using versioned distributed arrays. We describe GVR’s interfaces to distributed arrays, versioning, and cross-layer error recovery. Using several large applications (OpenMC, the preconditioned conjugate gradient solver PCG, ddcMD, and Chombo), we evaluate the programmer effort to add resilience. The required changes are small (<2% LOC), localized, and machine-independent, requiring no software architecture changes. We also measure the overhead of adding GVR versioning and show that overheads of <2% are generally achieved. We conclude that GVR’s interfaces and implementation are flexible and portable and create a gentle-slope path to tolerate growing error rates in future systems.
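The versioned-array idea can be sketched independently of GVR's actual distributed interfaces; a toy single-process analogue with commit and restore (all names hypothetical) looks like this:

```python
class VersionedArray:
    """Toy analogue of GVR-style versioned arrays: mutate the current
    view, commit snapshots, and roll back to any version after an error."""
    def __init__(self, size):
        self.current = [0.0] * size
        self.versions = []

    def commit(self):
        """Snapshot the current state; returns the new version number."""
        self.versions.append(list(self.current))
        return len(self.versions) - 1

    def restore(self, version):
        """Replace the current state with an earlier committed snapshot."""
        self.current = list(self.versions[version])

arr = VersionedArray(4)
arr.current[0] = 1.0
v0 = arr.commit()
arr.current[1] = 2.0
arr.commit()
arr.current[2] = 99.0   # simulated silent data corruption
arr.restore(v0)         # recover the last known-good state
print(arr.current)      # → [1.0, 0.0, 0.0, 0.0]
```

The application-controlled aspect is the key design point: the program decides when a state is worth committing and which version to fall back to, rather than relying on system-level checkpointing.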
Open Source Cloud-Based Technologies for Bim
NASA Astrophysics Data System (ADS)
Logothetis, S.; Karachaliou, E.; Valari, E.; Stylianidis, E.
2018-05-01
This paper presents a Cloud-based open source system for storing and processing data from a 3D survey approach. More specifically, we provide an online service for viewing, storing and analysing BIM. Cloud technologies were used to develop a web interface as a BIM data centre, which can handle large BIM data using a server. The server can be accessed by many users through various electronic devices anytime and anywhere, so they can view online 3D models using browsers. Nowadays, Cloud computing is engaged progressively in facilitating BIM-based collaboration between the multiple stakeholders and disciplinary groups involved in complicated Architectural, Engineering and Construction (AEC) projects. Besides, the development of Open Source Software (OSS) has been growing rapidly and its use is increasingly widespread. Although BIM and Cloud technologies are extensively known and used, there is a lack of integrated open source Cloud-based platforms able to support all stages of BIM processes. The present research aims to create an open source Cloud-based BIM system that is able to handle geospatial data. In this effort, only open source tools will be used, from the starting point of creating the 3D model with FreeCAD to its online presentation through BIMserver. Python plug-ins will be developed to link the two software packages and will be distributed freely to a large community of professionals. The research work will be completed by benchmarking four Cloud-based BIM systems: Autodesk BIM 360, BIMserver, Graphisoft BIMcloud and Onuma System, which present remarkable results.
Simulators Sustainment Management: Advance Planning Briefing to Industry
2007-05-15
DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release; distribution unlimited. Training Systems Business Opportunities FY07/08: T-1A Ground Based Training System CLS recompete; C-130 Landing Gear Trainer; MC-130W Weapon... Estimated value $150M; full and open competition; one year basic with nine option years. Name: Capt. Greg Purnell; Organization: 507 ACSS/GFLA
Quality Attribute-Guided Evaluation of NoSQL Databases: A Case Study
2015-01-16
evaluations of NoSQL databases specifically, and big data systems in general, that have become apparent during our study. Keywords—NoSQL, distributed...technology, namely that of big data software systems [1]. At the heart of big data systems are a collection of database technologies that are more...born organizations such as Google and Amazon [3][4], along with those of numerous other big data innovators, have created a variety of open source and
Towards an Open, Distributed Software Architecture for UxS Operations
NASA Technical Reports Server (NTRS)
Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette
2015-01-01
To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.
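As a sketch of the data-centric pattern that DDS standardizes (this is not the DDS API, and the topic names are made up), a minimal topic-based publish/subscribe bus shows why it decouples onboard components from ground-station tooling:

```python
from collections import defaultdict

class Bus:
    """Minimal topic-based publish/subscribe bus, echoing the
    data-centric pattern DDS provides (not the DDS API itself)."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        # every subscriber to the topic receives the sample;
        # publisher and subscribers never reference each other directly
        for callback in self.subscribers[topic]:
            callback(sample)

bus = Bus()
log = []
bus.subscribe("vehicle/pose", lambda s: log.append(("planner", s)))
bus.subscribe("vehicle/pose", lambda s: log.append(("ground_station", s)))
bus.publish("vehicle/pose", {"x": 1.0, "y": 2.0})
print(log)
```

Real DDS adds typed topics, discovery, and quality-of-service contracts (reliability, deadlines, history) on top of this decoupling, which is what makes it attractive for plug-and-play integration of third-party UxS platforms.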
Quantum hacking on quantum key distribution using homodyne detection
NASA Astrophysics Data System (ADS)
Huang, Jing-Zheng; Kunz-Jacques, Sébastien; Jouguet, Paul; Weedbrook, Christian; Yin, Zhen-Qiang; Wang, Shuang; Chen, Wei; Guo, Guang-Can; Han, Zheng-Fu
2014-03-01
Imperfect devices in commercial quantum key distribution systems open security loopholes that an eavesdropper may exploit. An example of one such imperfection is the wavelength-dependent coupling ratio of the fiber beam splitter. Utilizing this loophole, the eavesdropper can vary the transmittances of the fiber beam splitter at the receiver's side by inserting light with wavelengths different from those normally used. Here, we propose a wavelength attack on a practical continuous-variable quantum key distribution system using homodyne detection. By inserting light pulses at different wavelengths, this attack allows the eavesdropper to bias the shot-noise estimation even if it is done in real time. Based on experimental data, we discuss the feasibility of this attack and suggest a prevention scheme that improves on the previously proposed countermeasures.
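To see how a transmittance change biases the estimate, consider a toy numeric sketch (the model, the `k` constant, and the transmittance values below are assumptions for illustration, not the paper's experimental parameters): homodyne shot-noise variance scales with the local-oscillator power that actually passes the beam splitter, so shifting the effective transmittance shifts the receiver's shot-noise estimate in proportion.

```python
# Toy sketch, not the paper's attack model: homodyne shot-noise variance
# scales with the local-oscillator (LO) power reaching the detector.
def shot_noise_estimate(lo_power, transmittance, k=1.0):
    # k lumps detector gain and vacuum-noise constants (hypothetical)
    return k * lo_power * transmittance

t_design = 0.50  # 50/50 beam splitter at the design wavelength
t_attack = 0.70  # transmittance seen by off-wavelength light (assumed)

true_var = shot_noise_estimate(lo_power=100.0, transmittance=t_design)
biased_var = shot_noise_estimate(lo_power=100.0, transmittance=t_attack)
bias = biased_var / true_var
print(bias)  # the receiver overestimates shot noise by a factor of ~1.4
```

Since excess noise is judged against the shot-noise reference, an inflated estimate can mask the disturbance the eavesdropper introduces, which is the sense in which even real-time estimation does not by itself close the loophole.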
Open source system OpenVPN in a function of Virtual Private Network
NASA Astrophysics Data System (ADS)
Skendzic, A.; Kovacic, B.
2017-05-01
The use of a Virtual Private Network (VPN) can establish a high level of security in network communication. VPN technology enables highly secure networking over distributed or public network infrastructure. VPNs apply their own security and management rules inside networks. A VPN can be set up over different communication channels, such as the Internet or separate ISP communication infrastructure, and creates a secure communication channel over a public network between two endpoints (computers). OpenVPN is an open-source software product under the GNU General Public License (GPL) that can be used to establish VPN communication between two computers inside a business local network over public communication infrastructure. It uses security protocols with 256-bit encryption and is capable of traversing network address translators (NATs) and firewalls. It allows computers to authenticate each other using a pre-shared secret key, certificates, or a username and password. This work reviews VPN technology with particular emphasis on OpenVPN. The paper also compares solutions and discusses the financial benefits of using open-source VPN software in a business environment.
NASA Astrophysics Data System (ADS)
Gunawardena, N.; Pardyjak, E. R.; Stoll, R.; Khadka, A.
2018-02-01
Over the last decade there has been a proliferation of low-cost sensor networks that enable highly distributed sensor deployments in environmental applications. The technology is easily accessible and rapidly advancing due to the use of open-source microcontrollers. While this trend is extremely exciting, and the technology provides unprecedented spatial coverage, these sensors and associated microcontroller systems have not been well evaluated in the literature. Given the large number of new deployments and proposed research efforts using these technologies, it is necessary to quantify the overall instrument and microcontroller performance for specific applications. In this paper, an Arduino-based weather station system is presented in detail. These low-cost energy-budget measurement stations, or LEMS, have now been deployed for continuous measurements as part of several different field campaigns, which are described herein. The LEMS are low-cost, flexible, and simple to maintain. In addition to presenting the technical details of the LEMS, its errors are quantified in laboratory and field settings. A simple artificial neural network-based radiation-error correction scheme is also presented. Finally, challenges and possible improvements to microcontroller-based atmospheric sensing systems are discussed.
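The paper's correction scheme uses an artificial neural network; as a much simpler stand-in under assumed predictors (the irradiance and wind-speed data below are synthetic, invented for illustration), a least-squares fit of the radiation-induced temperature error conveys the idea:

```python
import numpy as np

# Simpler stand-in for the paper's neural-network scheme: fit a linear
# model of radiation-induced temperature error from synthetic predictors.
rng = np.random.default_rng(0)
irradiance = rng.uniform(0, 1000, 200)  # W m^-2 (synthetic)
wind = rng.uniform(0.5, 10, 200)        # m s^-1 (synthetic)
# Assumed behavior: error grows with sun, shrinks with ventilation.
error = 0.003 * irradiance / wind + rng.normal(0, 0.05, 200)

# Least-squares fit of error against the irradiance/wind ratio.
X = np.column_stack([irradiance / wind, np.ones_like(wind)])
coef, *_ = np.linalg.lstsq(X, error, rcond=None)
corrected = error - X @ coef
print(float(coef[0]))  # recovers a slope close to the true 0.003
```

A neural network generalizes this by learning a nonlinear mapping from the same kinds of predictors to the error, but the train-then-subtract structure is identical.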
NASA Astrophysics Data System (ADS)
Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu
2014-03-01
With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets is becoming an urgent issue. Web-based Geographical Information System (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to make efficient use of the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution, and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery displayed as an overlay on the map, and the exclusive use of Open Source Software (OSS) throughout the platform. The functions of the platform include browsing imagery on the map-based navigation interface, ordering and downloading data online, and image dataset and user management. At present, the system is under testing at RADI and will enter regular operation soon.
Terabytes to Megabytes: Data Reduction Onsite for Remote Limited Bandwidth Systems
NASA Astrophysics Data System (ADS)
Hirsch, M.
2016-12-01
Inexpensive, battery-powered embedded computer systems such as the Intel Edison and Raspberry Pi have inspired makers of all ages to create and deploy sensor systems. Geoscientists are also leveraging such inexpensive embedded computers in solar-powered and other resource-limited systems for ionospheric observation. We have developed OpenCV-based machine vision algorithms that reduce terabytes per night of high-speed aurora video down to megabytes, aiding the automated sifting and retention of high-value data from the mountains of less interesting data. Given prohibitively expensive data connections in many parts of the world, such techniques may be generalizable beyond the auroral video and passive FM radar implemented so far. After the automated algorithm decides which data to keep, automated upload and distribution techniques are relevant to avoid excessive delay and consumption of researcher time. Open-source collaborative software development enables data audiences from experts through citizen enthusiasts to access the data and make exciting plots. Open software and data aid in cross-disciplinary collaboration opportunities, STEM outreach, and increasing public awareness of the contributions each geoscience data collection system makes.
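The authors' pipeline is OpenCV-based; the core keep/discard idea can be sketched with plain NumPy frame differencing (the threshold and synthetic frames below are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

# Sketch of the data-reduction idea: keep only frames whose change from
# the previous frame exceeds a threshold, discarding quiescent video.
def select_active_frames(frames, threshold=5.0):
    keep = []
    prev = frames[0].astype(float)
    for i, frame in enumerate(frames[1:], start=1):
        cur = frame.astype(float)
        if np.abs(cur - prev).mean() > threshold:  # mean absolute difference
            keep.append(i)
        prev = cur
    return keep

rng = np.random.default_rng(1)
quiet = rng.normal(100, 1, size=(10, 32, 32))  # static noisy background
burst = quiet.copy()
burst[5] += 50                                 # one bright "auroral" frame
print(select_active_frames(burst))  # frames 5 (onset) and 6 (decay) flagged
```

Only the flagged frames (plus perhaps a small buffer around them) would be retained for upload, which is how terabytes of raw video shrink to megabytes of candidate events.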
Business logic for geoprocessing of distributed geodata
NASA Astrophysics Data System (ADS)
Kiehle, Christian
2006-12-01
This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).
NASA's EOSDIS Cumulus: Ingesting, Archiving, Managing, and Distributing from Commercial Cloud
NASA Astrophysics Data System (ADS)
Baynes, K.; Ramachandran, R.; Pilone, D.; Quinn, P.; Schuler, I.; Gilman, J.; Jazayeri, A.
2017-12-01
NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.
2015-06-01
abstract constraints along six dimensions for expansion: user, actions, data, business rules, interfaces, and quality attributes [Gottesdiener 2010]...relevant open source systems. For example, the CONNECT and HADOOP Distributed File System (HDFS) projects have many user stories that deal with...Iteration Zero involves architecture planning before writing any code. An overly long Iteration Zero is equivalent to the dysfunctional "Big Up-Front...
Analytic Considerations and Design Basis for the IEEE Distribution Test Feeders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, K. P.; Mather, B. A.; Pal, B. C.
For nearly 20 years the Test Feeder Working Group of the Distribution System Analysis Subcommittee has been developing openly available distribution test feeders for use by researchers. The purpose of these test feeders is to provide models of distribution systems that reflect the wide diversity in design and their various analytic challenges. Because of their utility and accessibility, the test feeders have been used for a wide range of research, some of which has been outside the original scope of intended uses. This paper provides an overview of the existing distribution feeder models and clarifies the specific analytic challenges that they were originally designed to examine. Additionally, the paper will provide guidance on which feeders are best suited for various types of analysis. The purpose of this paper is to provide the original intent of the Working Group and to provide the information necessary so that researchers may make an informed decision on which of the test feeders are most appropriate for their work.
The KATP channel in migraine pathophysiology: a novel therapeutic target for migraine.
Al-Karagholi, Mohammad Al-Mahdi; Hansen, Jakob Møller; Severinsen, Johanne; Jansen-Olesen, Inger; Ashina, Messoud
2017-08-23
To review the distribution and function of KATP channels, describe the use of KATP channel openers in clinical trials, and make the case that these channels may play a role in headache and migraine. KATP channels are widely present in the trigeminovascular system and play an important role in the regulation of tone in cerebral and meningeal arteries. Clinical trials using synthetic KATP channel openers report headache as a prevalent side effect in non-migraine sufferers, indicating that KATP channel opening may cause headache, possibly due to vascular mechanisms. Whether KATP channel openers can provoke migraine in migraine sufferers is not known. We suggest that KATP channels may play an important role in migraine pathogenesis and could be a potential novel therapeutic anti-migraine target.
Using Approximate Bayesian Computation to Probe Multiple Transiting Planet Systems
NASA Astrophysics Data System (ADS)
Morehead, Robert C.
2015-08-01
The large number of multiple transiting planet systems (MTPSs) uncovered with Kepler suggests a population of well-aligned planetary systems. Previously, the distribution of transit duration ratios in MTPSs has been used to place constraints on the distributions of mutual orbital inclinations and orbital eccentricities in these systems. However, degeneracies with the underlying number of planets in these systems pose added challenges and make explicit likelihood functions intractable. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC proposes from a prior on the population parameters to produce synthetic datasets via a physically motivated model. Samples are accepted or rejected based on how closely they reproduce the actual observed dataset to some tolerance. The accepted samples then form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We will demonstrate the utility of ABC in exoplanet populations by presenting new constraints on the mutual inclination and eccentricity distributions in the Kepler MTPSs. We will also introduce Simple-ABC, a new open-source Python package designed for ease of use and rapid specification of general models, suitable for use in a wide variety of applications in both exoplanet science and astrophysics as a whole.
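The simplest form described above, ABC rejection sampling, can be sketched in a few lines (this is an illustration, not the Simple-ABC package API; the Bernoulli model and tolerance are invented for the example):

```python
import random

# Minimal ABC rejection sampler: infer the success probability of a
# Bernoulli process from an observed count, accepting prior draws whose
# simulated data lie close to the observation.
def abc_rejection(observed, simulate, prior, distance, tol, n_draws=20000):
    accepted = []
    for _ in range(n_draws):
        theta = prior()              # draw parameters from the prior
        synthetic = simulate(theta)  # physically motivated forward model
        if distance(synthetic, observed) <= tol:
            accepted.append(theta)   # keep draws that nearly reproduce data
    return accepted                  # approximate posterior sample

random.seed(42)
n, observed_successes = 100, 30
posterior = abc_rejection(
    observed=observed_successes,
    simulate=lambda p: sum(random.random() < p for _ in range(n)),
    prior=lambda: random.random(),  # Uniform(0, 1) prior on p
    distance=lambda a, b: abs(a - b),
    tol=2,
)
estimate = sum(posterior) / len(posterior)
print(estimate)  # concentrates near the true rate of 0.30
```

In the exoplanet application the forward model simulates transit observables for a proposed population, and the distance compares summary statistics such as transit duration ratios rather than raw counts.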
Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.
2002-01-01
A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright ?? 2002 John Wiley and Sons, Ltd.
ERIC Educational Resources Information Center
Hobbs, Charles Eugene
The author investigates elementary school students' performance when solving selected open distributive sentences in relation to three factors (Open Sentence Type, Context, Number Size) and identifies and classifies solution methods attempted by students and students' errors in performance. Eighty fifth-grade students participated in the…
System Engineering Strategy for Distributed Multi-Purpose Simulation Architectures
NASA Technical Reports Server (NTRS)
Bhula, Dlilpkumar; Kurt, Cindy Marie; Luty, Roger
2007-01-01
This paper describes the system engineering approach used to develop distributed multi-purpose simulations. The multi-purpose simulation architecture focuses on user needs, operations, flexibility, cost and maintenance. This approach was used to develop an International Space Station (ISS) simulator, which is called the International Space Station Integrated Simulation (ISIS). The ISIS runs unmodified ISS flight software, system models, and the astronaut command and control interface in an open system design that allows for rapid integration of multiple ISS models. The initial intent of ISIS was to provide a distributed system that allows access to ISS flight software and models for the creation, test, and validation of crew and ground controller procedures. This capability reduces the cost and scheduling issues associated with utilizing standalone simulators in fixed locations, and facilitates discovering unknowns and errors earlier in the development lifecycle. Since its inception, the flexible architecture of the ISIS has allowed its purpose to evolve to include ground operator system and display training, flight software modification testing, and use as a realistic test bed for Exploration automation technology research and development.
Linear prediction and single-channel recording.
Carter, A A; Oswald, R E
1995-08-01
The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
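The singular-value step of such an analysis can be sketched as follows (an assumed form for illustration, not the authors' exact procedure): build a Hankel matrix from samples of a noisy sum of decaying exponentials and count the singular values that stand above the noise floor.

```python
import numpy as np

# Assumed test signal: two decaying exponentials sampled every 0.05
# time units, plus a little noise (stand-in for a dwell-time density).
t = np.arange(0, 20, 0.05)
signal = 1.0 * np.exp(-t / 1.0) + 0.5 * np.exp(-t / 5.0)
rng = np.random.default_rng(0)
noisy = signal + rng.normal(0, 1e-3, t.size)

# Hankel (linear prediction) matrix: each row is a sliding window.
m = 50
hankel = np.array([noisy[i:i + m] for i in range(t.size - m)])
s = np.linalg.svd(hankel, compute_uv=False)

# Count singular values standing well above the noise floor.
n_components = int(np.sum(s > 10 * s[5:].mean()))
print(n_components)  # 2
```

The corresponding singular vectors can then feed a linear-prediction (Prony-type) step to recover the decay rates themselves, which is the role singular value decomposition plays in the analysis described above.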
High-Surety Telemedicine in a Distributed, 'Plug-and-Play' Environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, Richard L.; Funkhouser, Donald R.; Gallagher, Linda K.
1999-05-17
Commercial telemedicine systems are increasingly functional, incorporating video-conferencing capabilities, diagnostic peripherals, medication reminders, and patient education services. However, these systems (1) rarely utilize information architectures which allow them to be easily integrated with existing health information networks and (2) do not always protect patient confidentiality with adequate security mechanisms. Using object-oriented methods and software wrappers, we illustrate the transformation of an existing stand-alone telemedicine system into `plug-and-play' components that function in a distributed medical information environment. We show, through the use of open standards and published component interfaces, that commercial telemedicine offerings which were once incompatible with electronic patient record systems can now share relevant data with clinical information repositories while at the same time hiding the proprietary implementations of the respective systems. Additionally, we illustrate how leading-edge technology can secure this distributed telemedicine environment, maintaining patient confidentiality and the integrity of the associated electronic medical data. Information surety technology also encourages the development of telemedicine systems that have both read and write access to electronic medical records containing patient-identifiable information. The win-win approach to telemedicine information system development preserves investments in legacy software and hardware while promoting security and interoperability in a distributed environment.
Software Management for the NOνA Experiment
NASA Astrophysics Data System (ADS)
Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.
2015-12-01
The NOνA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework, which uses the ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on 3 continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX-based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOνA software via the CernVM File System distributed file system. We will also describe our recent work to use a CMake build system and Jenkins, the open source continuous integration system, for NOνASoft.
Networking and AI systems: Requirements and benefits
NASA Technical Reports Server (NTRS)
1988-01-01
The price/performance benefits of networked systems are well documented. The ability to share expensive resources drove the adoption of timesharing on mainframes, departmental clusters of minicomputers, and now local area networks of workstations and servers. In the process, other fundamental system requirements emerged. These have now been generalized into open system requirements for hardware, software, applications, and tools. The ability to interconnect a variety of vendor products has led to the specification of interfaces that allow new techniques to extend existing systems for new and exciting applications. As an example of a message-passing system, local area networks provide a testbed for many of the issues addressed by future concurrent architectures: synchronization, load balancing, fault tolerance, and scalability. Gold Hill has been working with a number of vendors on distributed architectures that range from a network of workstations to a hypercube of microprocessors with distributed memory. Results from early applications are promising both for performance and scalability.
Access to Higher Education in China: Differences in Opportunity
ERIC Educational Resources Information Center
Wang, Houxiong
2011-01-01
Access to higher education in China has opened up significantly in the move towards a mass higher education system. However, aggregate growth does not necessarily imply fair or reasonable distribution of opportunity. In fact, the expansion of higher education has a rather more complex influence on opportunity when admissions statistics are viewed…
Podcast Pilots for Distance Planning, Programming, and Development
ERIC Educational Resources Information Center
Cordes, Sean
2005-01-01
This paper examines podcasting as a library support for distance learning and information systems and services. The manuscript provides perspective on the knowledge base in the growing area of podcasting in libraries and academia. A walkthrough of the podcast creation and distribution process using basic computing skills and open source tools is…
Acquire: an open-source comprehensive cancer biobanking system.
Dowst, Heidi; Pew, Benjamin; Watkins, Chris; McOwiti, Apollo; Barney, Jonathan; Qu, Shijing; Becnel, Lauren B
2015-05-15
The probability of effective treatment of cancer with a targeted therapeutic can be improved for patients with defined genotypes containing actionable mutations. To this end, many human cancer biobanks are integrating more tightly with genomic sequencing facilities and with those creating and maintaining patient-derived xenografts (PDX) and cell lines to provide renewable resources for translational research. To support the complex data management needs and workflows of several such biobanks, we developed Acquire. It is a robust, secure, web-based, database-backed open-source system that supports all major needs of a modern cancer biobank. Its modules allow for i) up-to-the-minute 'scoreboard' and graphical reporting of collections; ii) end user roles and permissions; iii) specimen inventory through caTissue Suite; iv) shipping forms for distribution of specimens to pathology, genomic analysis and PDX/cell line creation facilities; v) robust ad hoc querying; vi) molecular and cellular quality control metrics to track specimens' progress and quality; vii) public researcher request; viii) resource allocation committee distribution request review and oversight; and ix) linkage to available derivatives of a specimen. © The Author 2015. Published by Oxford University Press.
NASA Technical Reports Server (NTRS)
Harrington, Douglas E.; Burley, Richard R.; Corban, Robert R.
1986-01-01
Wall Mach number distributions were determined over a range of test-section free-stream Mach numbers from 0.2 to 0.92. The test section was slotted and had a nominal porosity of 11 percent. Reentry flaps located at the test-section exit were varied from 0 (fully closed) to 9 (fully open) degrees. Flow was bled through the test-section slots by means of a plenum evacuation system (PES) and varied from 0 to 3 percent of tunnel flow. Variations in reentry flap angle or PES flow rate had little or no effect on the Mach number distributions in the first 70 percent of the test section. However, in the aft region of the test section, flap angle and PES flow rate had a major impact on the Mach number distributions. Optimum PES flow rates were nominally 2 to 2.5 percent with the flaps fully closed and less than 1 percent when the flaps were fully open. The standard deviation of the test-section wall Mach numbers at the optimum PES flow rates was 0.003 or less.
Network-based reading system for lung cancer screening CT
NASA Astrophysics Data System (ADS)
Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio
2006-03-01
This research aims to support chest computed tomography (CT) medical checkups in order to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, a secure transmission function, and a cooperative reading environment. It is called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if it is used over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system considering both the human-machine interface and security. It consists of data entry terminals, a database server, a computer aided diagnosis (CAD) system, and some reading terminals. It uses a secure Digital Imaging and Communications in Medicine (DICOM) encrypting method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), which is the testbed for the Japanese next-generation network, and conducted verification experiments of secure screening image distribution, several kinds of data addition, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance.
Program summary
Program title: NWChem
Catalogue identifier: AEGI_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Open Source Educational Community License
No. of lines in distributed program, including test data, etc.: 11 709 543
No. of bytes in distributed program, including test data, etc.: 680 696 106
Distribution format: tar.gz
Programming language: Fortran 77, C
Computer: all Linux based workstations and parallel supercomputers, Windows and Apple machines
Operating system: Linux, OS X, Windows
Has the code been vectorised or parallelized?: Code is parallelized
Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13
Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics.
Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, many-body perturbation approach, and coupled cluster expansion. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations.
Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when download or e-mail is requested. Instead an html file giving details of how the program can be obtained is sent.
Running time: Running time depends on the size of the chemical system, the complexity of the method, the number of CPUs, and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab-initio molecular dynamics simulations on hundreds of atoms.
Castelló, Damià; Cobo, Ana; Mestres, Enric; Garcia, Maria; Vanrell, Ivette; Alejandro Remohí, José; Calderón, Gloria; Costa-Borges, Nuno
2018-04-01
Vitrification is currently a well-established technique for the cryopreservation of oocytes and embryos. It can be achieved either by direct (open systems) or indirect (closed systems) contact with liquid nitrogen. While there is no direct evidence of disease transmission by transferred cryopreserved embryos, cross-contamination between liquid nitrogen and embryos has been demonstrated experimentally, and the use of closed devices has therefore been recommended to avoid the risk of contamination. Unfortunately, closed systems may yield lower cooling rates than open systems, owing to the thermal insulation of the samples, which can cause ice crystal formation and impair outcomes. In this study, we aimed to validate a newly developed vitrification device (Cryotop SC) specifically designed for use as a closed system. The cooling and warming rates calculated for the closed system were 5,254 °C/min and 43,522 °C/min, respectively. Results obtained with the closed system were equivalent to those with the classic Cryotop (open system), with survival rates in oocytes close to 100%. Similarly, the potential of surviving oocytes to develop into good-quality blastocysts after parthenogenetic activation was statistically equivalent between the two groups. Assessment of the meiotic spindle and chromosome distribution by fluorescence microscopy in vitrified oocytes showed similar morphologies between the open and closed systems. No differences were found between the two systems in the survival rates of one-cell-stage embryos or blastocysts, or in the potential of vitrified/warmed blastocysts to develop to full term after transfer to surrogate females. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Bohrson, W. A.; Spera, F. J.; Fowler, S.; Belkin, H.; de Vivo, B.
2005-12-01
The Campanian Ignimbrite, a large-volume (~200 km³ DRE) trachytic to phonolitic ignimbrite, was deposited at ~39.3 ka and represents the largest of a number of highly explosive volcanic events in the region near Naples, Italy. Thermodynamic modeling of the major element evolution using the MELTS algorithm (see the companion contribution by Fowler et al.) provides detailed information about the identity of, and changes in the proportions of, solids along the liquid line of descent during isobaric fractional crystallization. We have derived trace element mass balance equations that explicitly accommodate changing mineral-melt bulk distribution coefficients during crystallization and simultaneously satisfy energy and major element mass conservation. Although major element patterns are reasonably modeled assuming closed-system fractional crystallization, modeling of trace elements that represent a range of behaviors (e.g. Zr, Nb, Th, U, Rb, Sm, Sr) yields trends for closed-system fractionation that are distinct from those observed. These results suggest open-system processes were also important in the evolution of the Campanian magmatic system. Th isotope data yield an apparent isochron that is ~20 kyr younger than the age of the deposit, and age-corrected Th isotope data indicate that the magma body was an open system at the time of eruption. Because open-system processes can profoundly change the isotopic characteristics of a magma body, these results illustrate that it is critical to understand the contribution that open-system processes make to silicic magma bodies before assigning significance to age or timescale information derived from isotope systematics. Fluid-magma interaction has been proposed as a mechanism to change the isotopic and elemental characteristics of magma bodies, but an evaluation of the mass and thermal constraints on such a process suggests that large-scale fluid-melt interaction at liquidus temperatures is unlikely.
In the case of the magma body associated with the Campanian Ignimbrite, the most likely source of open-system signatures is assimilation of partial melts of compositionally heterogeneous basement composed of older cumulates and intrusive equivalents of volcanic activity within the Campanian region. Additional trace element modeling, explicitly evaluating the mass and energy balance effects that fluid, solids, and melt have on trace element evolution, will further elucidate the contributions of open vs. closed system processes within the Campanian magma body.
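The closed-system fractional-crystallization trends tested above are conventionally computed with the Rayleigh law, C = C0·F^(D−1). A minimal sketch that also lets the bulk distribution coefficient D vary along the liquid line of descent, as the mass-balance treatment described above requires (the values are illustrative, not the Campanian data):

```python
def rayleigh_stepwise(c0, d_of_f, steps=1000):
    """Integrate the Rayleigh law dC/C = (D - 1) dF/F numerically, letting
    the bulk distribution coefficient D vary with remaining melt fraction F.
    With constant D this telescopes to the familiar C = C0 * F**(D - 1)."""
    c = c0
    fs = [1.0 - 0.9 * i / steps for i in range(steps + 1)]  # F: 1.0 -> 0.1
    for f_prev, f_next in zip(fs, fs[1:]):
        c *= (f_next / f_prev) ** (d_of_f(f_prev) - 1.0)
    return c

# Hypothetical case: an element that grows more compatible as crystallization
# proceeds (e.g. Sr once feldspar joins the fractionating assemblage)
conc = rayleigh_stepwise(c0=500.0, d_of_f=lambda f: 0.5 + 2.0 * (1.0 - f))
```

With a melt-fraction-dependent D, the modeled trend bends away from the straight line that a constant-D Rayleigh curve traces in log-log space, which is one way a closed-system model can still fail to match observed trace element arrays.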
Qcorp: an annotated classification corpus of Chinese health questions.
Guo, Haihong; Na, Xu; Li, Jiao
2018-03-22
Health question-answering (QA) systems have become a typical application scenario of artificial intelligence (AI). An annotated question corpus is a prerequisite for training machines to understand users' health information needs. We therefore aimed to develop an annotated classification corpus of Chinese health questions (Qcorp) and make it openly accessible. We developed a two-layered classification schema and corresponding annotation rules on the basis of our previous work. Using the schema, we annotated 5000 questions randomly selected from five Chinese health websites across six broad sections. Eight annotators participated in the annotation task, and inter-annotator agreement was evaluated to ensure corpus quality. Furthermore, the distribution and relationships of the annotated tags were measured by descriptive statistics and a social network map. The questions were annotated with 7101 tags covering 29 topic categories in the two-layered schema. In the released corpus, the distribution of questions across the top-layer categories was: treatment, 64.22%; diagnosis, 37.14%; epidemiology, 14.96%; healthy lifestyle, 10.38%; and health provider choice, 4.54% (questions could carry multiple tags). Both the annotated health questions and the annotation schema are openly accessible on the Qcorp website; users can download the annotated Chinese questions in CSV, XML, and HTML formats. We developed a Chinese health question corpus of 5000 manually annotated questions. It is openly accessible and should contribute to the development of intelligent health QA systems.
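Inter-annotator agreement of the kind reported above is commonly quantified with Cohen's kappa. A minimal two-annotator sketch (the labels are toy values, not the Qcorp annotations):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labeling the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label distribution
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1.0 - expected)

a = ["treatment", "diagnosis", "treatment", "epidemiology"]
b = ["treatment", "diagnosis", "diagnosis", "epidemiology"]
print(round(cohens_kappa(a, b), 3))  # prints 0.636
```

For more than two annotators, as in the eight-annotator task above, a generalization such as Fleiss' kappa would be used instead.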
Performance evaluation of multi-channel wireless mesh networks with embedded systems.
Lam, Jun Huy; Lee, Sang-Gon; Tan, Whye Kit
2012-01-01
Many commercial wireless mesh network (WMN) products are available in the marketplace, each with its own proprietary standard, so interoperability among different vendors is not possible. Open source communities have their own WMN implementations based on the IEEE 802.11s draft standard: the Linux open80211s project and the FreeBSD WMN implementation. While some studies have focused on WMN test beds based on the open80211s project, none are based on FreeBSD. In this paper, we built an embedded system using the FreeBSD WMN implementation that utilizes two channels and evaluated its performance. This implementation allows legacy systems to connect to the WMN regardless of platform and distributes the load between the two non-overlapping channels: one channel is used for the backhaul connection and the other to connect stations to the wireless mesh network. By using power-efficient 802.11 technology, this device can also serve as a gateway for a wireless sensor network (WSN).
Autocorrel I: A Neural Network Based Network Event Correlation Approach
2005-05-01
which concern any component of the network. 2.1.1 Existing Intrusion Detection Systems EMERALD [8] is a distributed, scalable, hierarchical, customizable...writing this paper, the updaters of this system had not released their correlation unit to the public. EMERALD explicitly divides statistical analysis... EMERALD, NetSTAT is scalable and composable. QuidSCOR [12] is an open-source IDS, though it requires a subscription from its publisher, Qualys Inc
TU-H-BRC-05: Stereotactic Radiosurgery Optimized with Orthovoltage Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagerstrom, J; Culberson, W; Bender, E
2016-06-15
Purpose: To achieve improved stereotactic radiosurgery (SRS) dose distributions using orthovoltage energy fluence modulation with inverse planning optimization techniques. Methods: A pencil beam model was used to calculate dose distributions from the institution's orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods as well as measurements with radiochromic film. The orthovoltage photon spectra, modulated by varying thicknesses of attenuating material, were approximated using open-source software. A genetic algorithm search heuristic was used to optimize added tungsten filtration thicknesses to approach rectangular-function dose distributions at depth. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 8, 10, and 12 mm. Results: Circularly symmetric tungsten filters were designed based on the results of the optimization to modulate the orthovoltage beam across the aperture of an SRS cone collimator. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams and the optimized, filtered beams. For all configurations tested, the modulated beams achieved improved penumbra widths and flatness statistics at depth, with flatness improving between 33 and 52% and penumbrae improving between 18 and 25% relative to the unmodulated beams. Conclusion: A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam, yielding dose distributions at depth with improved flatness and penumbrae compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system.
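The genetic-algorithm search heuristic mentioned above can be sketched generically. The objective below is a hypothetical stand-in for the authors' pencil-beam dose model (here, simply driving a vector of filter thicknesses toward a flat 1.0 mm profile), not their actual fitness function:

```python
import random

def evolve(fitness, n_genes, pop_size=40, generations=100, mut=0.1):
    """Minimal genetic algorithm: truncation selection, uniform crossover,
    Gaussian mutation. Genes here are filter thicknesses in mm (0-2 range)."""
    pop = [[random.uniform(0, 2) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)          # lower fitness is better
        elite = scored[: pop_size // 2]            # keep the best half
        children = []
        while len(children) < pop_size - len(elite):
            p1, p2 = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(p1, p2)]              # crossover
            child = [max(0.0, g + random.gauss(0, mut)) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

# Hypothetical objective: all thicknesses should approach 1.0 mm
best = evolve(lambda t: sum((x - 1.0) ** 2 for x in t), n_genes=8)
```

In the study above, the fitness evaluation would instead score the pencil-beam-computed dose profile against a rectangular target, with one gene per radial filter zone.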
Scalable and fail-safe deployment of the ATLAS Distributed Data Management system Rucio
NASA Astrophysics Data System (ADS)
Lassnig, M.; Vigne, R.; Beermann, T.; Barisits, M.; Garonne, V.; Serfon, C.
2015-12-01
This contribution details the deployment of Rucio, the ATLAS Distributed Data Management system. The main complication is that Rucio interacts with a wide variety of external services, and connects globally distributed data centres under different technological and administrative control, at an unprecedented data volume. It is therefore not possible to create a duplicate instance of Rucio for testing or integration. Every software upgrade or configuration change is thus potentially disruptive and requires fail-safe software and automatic error recovery. Rucio uses a three-layer scaling and mitigation strategy based on quasi-realtime monitoring. This strategy mainly employs independent stateless services, automatic failover, and service migration. The technologies used for deployment and mitigation include OpenStack, Puppet, Graphite, HAProxy and Apache. In this contribution, the interplay between these components, their deployment, software mitigation, and the monitoring strategy are discussed.
Direct-write graded index materials realized in protein hydrogels
Kaehr, Bryan; Scrymgeour, David A.
2016-09-20
Here, the ability to create optical materials with arbitrary index distributions would prove transformative for optics design and applications. However, current fabrication techniques for graded index (GRIN) materials rely on diffusion profiles and therefore are unable to realize arbitrary-distribution GRIN designs. Here, we demonstrate the laser direct writing of graded index structures in protein-based hydrogels using multiphoton lithography. We show index changes spanning a range of 10⁻², which is comparable with laser-densified glass and polymer systems. Further, we demonstrate the conversion of these written density-variation structures into SiO₂, opening up the possibility of transforming GRIN hydrogels to a wide range of material systems.
Optimized Orthovoltage Stereotactic Radiosurgery
NASA Astrophysics Data System (ADS)
Fagerstrom, Jessica M.
Because of its ability to treat intracranial targets effectively and noninvasively, stereotactic radiosurgery (SRS) is a prevalent treatment modality in modern radiation therapy. This work focused on SRS delivering rectangular-function dose distributions, which are desirable for some targets, such as those with functional tissue included within the target volume. In order to achieve such distributions, this work used fluence modulation and energies lower than those utilized in conventional SRS. First, the relationship between prescription isodose and dose gradients was examined for standard, unmodulated orthovoltage SRS dose distributions. Monte Carlo-generated energy deposition kernels were used to calculate 4π, isocentric dose distributions for a polyenergetic orthovoltage spectrum as well as monoenergetic orthovoltage beams. The relationship between dose gradients and prescription isodose was found to be field-size and energy dependent, and values of prescription isodose that optimize dose gradients were identified. Next, a pencil-beam model was used with a genetic algorithm search heuristic to optimize the spatial distribution of added tungsten filtration within the apertures of cone collimators in a moderately filtered 250 kVp beam. Four cone sizes at three depths were examined with a Monte Carlo model to determine the effects of the optimized modulation compared to open cones, and the simulations found that the optimized cones achieved both improved penumbra and flatness statistics at depth compared to the open cones. Prototypes of the filter designs calculated using mathematical optimization techniques and Monte Carlo simulations were then manufactured and inserted into custom-built orthovoltage SRS cone collimators. A positioning system built in-house was used to place the collimator and filter assemblies temporarily in the 250 kVp beam line.
Measurements were performed in water using radiochromic film scanned with both a standard white light flatbed scanner as well as a prototype laser densitometry system. Measured beam profiles showed that the modulated beams could more closely approach rectangular function dose profiles compared to the open cones. A methodology has been described and implemented to achieve optimized SRS delivery, including the development of working prototypes. Future work may include the construction of a full treatment platform.
Open star clusters and Galactic structure
NASA Astrophysics Data System (ADS)
Joshi, Yogesh C.
2018-04-01
In order to understand Galactic structure, we perform a statistical analysis of the distribution of various cluster parameters based on an almost complete sample of Galactic open clusters available to date. The geometrical and physical characteristics of a large number of open clusters given in the MWSC catalogue are used to study the spatial distribution of clusters in the Galaxy and to determine the scale height, solar offset, local mass density, and distribution of reddening material in the solar neighbourhood. We also explore the mass-radius and mass-age relations in Galactic open star clusters. We find that the estimated parameters of the Galactic disk are strongly influenced by the choice of cluster sample.
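The scale height quoted in such studies is commonly obtained by fitting an exponential profile to the distribution of cluster distances from the Galactic plane. A minimal maximum-likelihood sketch on synthetic data (not the MWSC sample):

```python
import random

def fit_scale_height(z_values):
    """Maximum-likelihood scale height h for an exponential disk profile
    N(z) ~ exp(-|z|/h): for a two-sided (Laplace) profile the MLE is the
    mean absolute distance from the plane."""
    return sum(abs(z) for z in z_values) / len(z_values)

# Synthetic cluster heights above/below the plane (pc), true h = 70 pc
random.seed(0)
z = [random.choice((-1, 1)) * random.expovariate(1 / 70.0) for _ in range(5000)]
h_est = fit_scale_height(z)  # recovers roughly 70 pc
```

Real analyses must additionally correct for sample incompleteness at large distances and for the Sun's own offset from the plane, which is why the estimated disk parameters depend so strongly on the chosen cluster sample.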
Open Platform for Limit Protection with Carefree Maneuver Applications
NASA Technical Reports Server (NTRS)
Jeram, Geoffrey J.
2004-01-01
This Open Platform for Limit Protection guides the open design of maneuver limit protection systems in general, and manned rotorcraft aerospace applications in particular. The platform uses three stages of limit protection modules: limit cue creation, limit cue arbitration, and control system interface. A common set of limit cue modules provides commands that can include constraints, alerts, transfer functions, and friction. An arbitration module selects the "best" limit protection cues and distributes them to the most appropriate control path interface. This platform adopts a holistic approach to limit protection, considering all potential interface points, including the pilot's visual, aural, and tactile displays, and automatic command restraint shaping for autonomous limit protection. For each functional module, this thesis guides the control system designer through the design choices and information interfaces among the modules. Limit cue module design choices include the type of prediction, the prediction mechanism, the method of critical control calculation, and the type of limit cue. Special consideration is given to the nature of the limit, particularly the level of knowledge about it, and the ramifications for limit protection design, especially with respect to intelligent control methods such as fuzzy inference systems and neural networks.
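The arbitration stage described above, which selects the "best" limit cue per control path, can be sketched as a most-severe-cue-wins rule. The cue fields below are hypothetical illustrations, not the thesis's actual data structures:

```python
from dataclasses import dataclass

@dataclass
class LimitCue:
    """One protection cue proposed by a limit cue module (hypothetical fields)."""
    parameter: str    # the protected limit, e.g. "rotor_speed"
    severity: float   # 0..1, proximity to limit violation
    interface: str    # "tactile", "aural", "visual", or "command_shaping"

def arbitrate(cues):
    """Keep only the most severe cue per (parameter, interface) pair, so that
    competing limit modules cannot flood a single control path interface."""
    best = {}
    for cue in cues:
        key = (cue.parameter, cue.interface)
        if key not in best or cue.severity > best[key].severity:
            best[key] = cue
    return list(best.values())

cues = [LimitCue("rotor_speed", 0.4, "tactile"),
        LimitCue("rotor_speed", 0.9, "tactile"),
        LimitCue("engine_torque", 0.6, "aural")]
selected = arbitrate(cues)  # the 0.9 rotor_speed cue wins its control path
```

A production arbiter would weigh more than severity (e.g. cue type and pilot workload), but the partition-by-interface structure mirrors the module layout described above.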
Rubinson, K A
1992-01-01
The underlying principles of the kinetics and equilibrium of a solitary sodium channel in the steady state are examined. Both the open and closed kinetics are postulated to result from round-trip excursions from a transition region that separates the openable and closed forms. Exponential behavior of the kinetics can have origins different from those in small-molecule systems. These differences suggest that the probability density functions (PDFs) that describe the time dependences of the open and closed forms arise from a distribution of rate constants. The distribution is likely to arise from a thermal modulation of the channel structure, and this provides a physical basis for the following three-variable equation: [formula; see text] Here, A0 is a scaling term, k is the mean rate constant, and sigma quantifies the Gaussian spread for the contributions of a range of effective rate constants. The maximum contribution is made by k, with faster and slower rates contributing less. (When sigma, the standard deviation of the spread, goes to zero, then p(t) = A0·e^(-kt).) The equation is applied to the single-channel steady-state probability density functions for batrachotoxin-treated sodium channels (Keller et al., 1986, J. Gen. Physiol. 88:1-23). The following characteristics are found: (a) The data for both open and closed forms of the channel are fit well with the above equation, which represents a Gaussian distribution of first-order rate processes. (b) The simple relationship [formula; see text] holds for the mean effective rate constants; equivalently, the values of P(open) calculated from the k values closely agree with the P(open) values found directly from the PDF data. (c) In agreement with the known behavior of voltage-dependent rate constants, the voltage dependences of the mean effective rate constants for the opening and closing of the channel are equal and opposite over the voltage range studied.
That is, [formula; see text] "Bursts" are related to the well-known cage effect of solution chemistry. PMID:1312365
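The dwell-time density described above, an exponential decay whose rate constant carries a Gaussian spread, can be evaluated numerically. A minimal sketch (trapezoid quadrature over ±4σ; parameter values are illustrative), which reduces to p(t) = A0·e^(−kt) as σ → 0:

```python
import math

def pdf_gaussian_rates(t, a0, k_mean, sigma, n=2001):
    """Dwell-time density as a Gaussian-weighted mixture of exponentials:
    p(t) = A0 * integral over k of exp(-k t) * N(k; k_mean, sigma) dk."""
    if sigma == 0.0:
        return a0 * math.exp(-k_mean * t)
    ks = [k_mean + sigma * (-4 + 8 * i / (n - 1)) for i in range(n)]  # ±4 sigma grid
    w = [math.exp(-((k - k_mean) ** 2) / (2 * sigma ** 2))
         / (sigma * math.sqrt(2 * math.pi)) for k in ks]
    vals = [wi * math.exp(-k * t) for wi, k in zip(w, ks)]
    dk = ks[1] - ks[0]
    return a0 * (sum(vals) - 0.5 * (vals[0] + vals[-1])) * dk  # trapezoid rule

# With a vanishingly narrow spread, the mixture matches the single exponential
print(round(pdf_gaussian_rates(1.0, 1.0, 2.0, 1e-6), 4))  # ~ exp(-2), i.e. 0.1353
```

With a finite σ the mixture decays faster than a single exponential at short times and slower at long times, which is the signature used to distinguish a rate-constant distribution from simple first-order kinetics in the dwell-time data.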
An Open Source Extensible Smart Energy Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rankin, Linda
Aggregated distributed energy resources are the subject of much interest in the energy industry and are expected to play an important role in meeting our future energy needs by changing how we use, distribute, and generate electricity. This energy future includes an increased amount of energy from renewable resources, load management techniques to improve resiliency and reliability, and distributed energy storage and generation capabilities that can be managed to meet the needs of the grid as well as individual customers. These energy assets are commonly referred to as Distributed Energy Resources (DER). DERs rely on a means to communicate information between an energy provider and multitudes of devices. Today, DER control systems are typically vendor-specific, using custom hardware and software solutions. As a result, customers are locked into communication transport protocols, applications, tools, and data formats. Today's systems are often difficult to extend to meet new application requirements, resulting in stranded assets when business requirements or energy management models evolve. By partnering with industry advisors and researchers, a DER research platform called the Smart Energy Framework (SEF) was developed. The hypothesis of this research was that an open source Internet of Things (IoT) framework could play a role in creating a commodity-based ecosystem for DER assets that would reduce costs and provide interoperable products. SEF is based on the AllJoyn™ IoT open source framework. The demonstration system incorporated DER assets, specifically batteries and smart water heaters. To verify the behavior of the distributed system, models of water heaters and batteries were also developed. An IoT interface for communicating between the assets and a control server was defined. This interface supports a series of "events" and telemetry reporting, similar to those defined by current smart grid communication standards.
The results of this effort demonstrated the feasibility and application potential of using IoT frameworks for the creation of commodity-based DER systems. All of the identified commodity-based system requirements were met by the AllJoyn framework. With commodity solutions, small vendors can enter the market and the cost of implementation for all parties is reduced. Utilities and aggregators can choose from multiple interoperable products, reducing the risk of stranded assets. Based on this research, it is recommended that interfaces based on existing smart grid communication protocol standards be created for these emerging IoT frameworks. These interfaces should be standardized as part of the IoT framework, allowing for interoperability testing and certification. Similarly, IoT frameworks are introducing application-level security; this type of security is needed for protecting applications and platforms and will be important moving forward. It is also recommended that, along with DER-based data model interfaces, platform and application security requirements be prescribed when IoT devices support DER applications.
Open architecture of smart sensor suites
NASA Astrophysics Data System (ADS)
Müller, Wilmuth; Kuwertz, Achim; Grönwall, Christina; Petersson, Henrik; Dekker, Rob; Reinert, Frank; Ditzel, Maarten
2017-10-01
Experiences from recent conflicts show the strong need for smart sensor suites comprising different multi-spectral imaging sensors as core elements as well as additional non-imaging sensors. Smart sensor suites should be part of a smart sensor network - a network of sensors, databases, evaluation stations and user terminals. Its goal is to optimize the use of various information sources for military operations such as situation assessment, intelligence, surveillance, reconnaissance, target recognition and tracking. Such a smart sensor network will enable commanders to achieve higher levels of situational awareness. Within the study at hand, an open system architecture was developed in order to increase the efficiency of sensor suites. The open system architecture for smart sensor suites, based on a system-of-systems approach, enables combining different sensors in multiple physical configurations, such as distributed sensors, co-located sensors combined in a single package, tower-mounted sensors, sensors integrated in a mobile platform, and trigger sensors. The architecture was derived from a set of system requirements and relevant scenarios. Its mode of operation is adaptable to a series of scenarios with respect to relevant objects of interest, activities to be observed, available transmission bandwidth, etc. The presented open architecture is designed in accordance with the NATO Architecture Framework (NAF). The architecture allows smart sensor suites to be part of a surveillance network, linked e.g. to a sensor planning system and a C4ISR center, and to be used in combination with future RPAS (Remotely Piloted Aircraft Systems) for supporting a more flexible dynamic configuration of RPAS payloads.
Towards multifocal ultrasonic neural stimulation: pattern generation algorithms
NASA Astrophysics Data System (ADS)
Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy
2010-10-01
Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
Numerical Simulation of Dispersion from Urban Greenhouse Gas Sources
NASA Astrophysics Data System (ADS)
Nottrott, Anders; Tan, Sze; He, Yonggang; Winkler, Renato
2017-04-01
Cities are characterized by complex topography, inhomogeneous turbulence, and variable pollutant source distributions. These features create a scale separation between local sources and urban-scale emissions estimates known as the Grey Zone. Modern computational fluid dynamics (CFD) techniques provide a quasi-deterministic, physically based toolset to bridge the scale-separation gap between source-level dynamics, local measurements, and urban-scale emissions inventories. CFD can represent complex building topography and capture detailed 3D turbulence fields in the urban boundary layer. This presentation discusses the application of OpenFOAM to urban CFD simulations of natural gas leaks in cities. OpenFOAM is an open-source software package for advanced numerical simulation of engineering and environmental fluid flows. When combined with free or low-cost computer-aided drawing and GIS tools, OpenFOAM generates a detailed 3D representation of urban wind fields. OpenFOAM was applied to model scalar emissions from various components of the natural gas distribution system, to study the impact of urban meteorology on mobile greenhouse gas measurements. The numerical experiments demonstrate that CH4 concentration profiles are highly sensitive to the relative location of emission sources and buildings. Sources separated by distances of 5-10 meters showed significant differences in the vertical dispersion of plumes due to building wake effects. The OpenFOAM flow fields were combined with an inverse, stochastic dispersion model to quantify and visualize the sensitivity of point sensors to upwind sources in various built environments. The Boussinesq approximation was applied to investigate the effects of canopy-layer temperature gradients and convection on sensor footprints.
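The sensitivity of a point measurement to source geometry can be illustrated even with a far simpler model than CFD. A minimal Gaussian plume sketch (a deliberate simplification standing in for the OpenFOAM simulation, with crude neutral-stability dispersion coefficients) shows how concentration falls off downwind:

```python
import math

def gaussian_plume(q, u, x, y, z, h_src, a=0.08, b=0.06):
    """Ground-reflected Gaussian plume concentration.
    q: emission rate (g/s), u: wind speed (m/s), (x, y, z): receptor position
    downwind of the source (m), h_src: source height (m). Dispersion widths
    grow linearly with distance x (a crude neutral-stability assumption)."""
    sigma_y, sigma_z = a * x, b * x
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    vertical = (math.exp(-(z - h_src) ** 2 / (2 * sigma_z ** 2))
                + math.exp(-(z + h_src) ** 2 / (2 * sigma_z ** 2)))  # image source
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical leak: 1 g/s CH4 at 1 m height, 3 m/s wind, sensor at 2 m height
c_near = gaussian_plume(q=1.0, u=3.0, x=50.0, y=0.0, z=2.0, h_src=1.0)
c_far = gaussian_plume(q=1.0, u=3.0, x=100.0, y=0.0, z=2.0, h_src=1.0)
```

What the CFD study above captures, and this analytic form cannot, is the building-wake effect: two sources a few meters apart can produce very different vertical profiles once obstacles distort the flow.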
NASA Astrophysics Data System (ADS)
Arias Muñoz, C.; Brovelli, M. A.; Kilsedar, C. E.; Moreno-Sanchez, R.; Oxoli, D.
2017-09-01
The availability of water-related data and information across different geographical and jurisdictional scales is of critical importance for the conservation and management of water resources in the 21st century. Today information assets are often found fragmented across multiple agencies that use incompatible data formats and procedures for data collection, storage, maintenance, analysis, and distribution. The growing adoption of Web mapping systems in the water domain is reducing the gap between data availability and its practical use and accessibility. Nevertheless, more attention must be given to the design and development of these systems to achieve high levels of interoperability and usability while fulfilling different end user informational needs. This paper first presents a brief overview of technologies used in the water domain, and then presents three examples of Web mapping architectures based on free and open source software (FOSS) and the use of open specifications (OS) that address different users' needs for data sharing, visualization, manipulation, scenario simulations, and map production. The purpose of the paper is to illustrate how the latest developments in OS for geospatial and water-related data collection, storage, and sharing, combined with the use of mature FOSS projects facilitate the creation of sophisticated interoperable Web-based information systems in the water domain.
High Level Analysis, Design and Validation of Distributed Mobile Systems with CoreASM
NASA Astrophysics Data System (ADS)
Farahbod, R.; Glässer, U.; Jackson, P. J.; Vajihollahi, M.
System design is a creative activity calling for abstract models that facilitate reasoning about the key system attributes (desired requirements and resulting properties) so as to ensure these attributes are properly established prior to actually building a system. We explore here the practical side of using the abstract state machine (ASM) formalism in combination with the CoreASM open source tool environment for high-level design and experimental validation of complex distributed systems. Emphasizing the early phases of the design process, a guiding principle is to support freedom of experimentation by minimizing the need for encoding. CoreASM has been developed and tested building on a broad scope of applications, spanning computational criminology, maritime surveillance and situation analysis. We critically reexamine here the CoreASM project in light of three different application scenarios.
Application of the actor model to large scale NDE data analysis
NASA Astrophysics Data System (ADS)
Coughlin, Chris
2018-03-01
The Actor model of concurrent computation discretizes a problem into a series of independent units or actors that interact only through the exchange of messages. Without direct coupling between individual components, an Actor-based system is inherently concurrent and fault-tolerant. These traits lend themselves to so-called "Big Data" applications in which the volume of data to analyze requires a distributed multi-system design. For a practical demonstration of the Actor computational model, a system was developed to assist with the automated analysis of Nondestructive Evaluation (NDE) datasets using the open source Myriad Data Reduction Framework. A machine learning model trained to detect damage in two-dimensional slices of C-Scan data was deployed in a streaming data processing pipeline. To demonstrate the flexibility of the Actor model, the pipeline was deployed on a local system and re-deployed as a distributed system without recompiling, reconfiguring, or restarting the running application.
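The actor pattern described above can be sketched with standard threads and queues. This is an illustrative toy (the pipeline stage and threshold are hypothetical), not the Myriad framework's actual API:

```python
import queue
import threading

class Actor:
    """Minimal actor: a private mailbox drained by a dedicated thread.
    Actors share no state and interact only by exchanging messages."""
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        self.mailbox.put(msg)

    def stop(self):
        self.mailbox.put(None)   # poison pill: drain remaining work, then exit
        self.thread.join()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:
                break
            self.handler(msg)

# Hypothetical two-stage pipeline: a detector actor flags C-scan slices whose
# peak value exceeds a threshold and forwards the result to a collector actor.
results = []
collector = Actor(results.append)
detector = Actor(lambda scan: collector.send(max(scan) > 0.8))
for scan in ([0.1, 0.9, 0.2], [0.3, 0.4, 0.1]):
    detector.send(scan)
detector.stop()
collector.stop()
# results is now [True, False]
```

Because each stage owns its state and communicates only via `send`, the same pipeline can be redistributed across processes or machines by replacing the in-memory queue with a network transport, which is the portability property the abstract emphasizes.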
NASA Technical Reports Server (NTRS)
Baynes, Katie; Ramachandran, Rahul; Pilone, Dan; Quinn, Patrick; Gilman, Jason; Schuler, Ian; Jazayeri, Alireza
2017-01-01
NASA's Earth Observing System Data and Information System (EOSDIS) has been working towards a vision of a cloud-based, highly flexible ingest, archive, management, and distribution system for its ever-growing and evolving data holdings. This system, Cumulus, is emerging from its prototyping stages and is poised to make a huge impact on how NASA manages and disseminates its Earth science data. This talk will outline the motivation for this work, present the achievements and hurdles of the past 18 months, and chart a course for the future expansion of Cumulus. We will explore not just the technical but also the socio-technical challenges that we face in evolving a system of this magnitude into the cloud, and how we are rising to meet those challenges through open collaboration and intentional stakeholder engagement.
Facilitating the openEHR approach - organizational structures for defining high-quality archetypes.
Kohl, Christian Dominik; Garde, Sebastian; Knaup, Petra
2008-01-01
Using openEHR archetypes to establish an electronic patient record promises rapid development and system interoperability by using or adopting existing archetypes. However, internationally accepted, high quality archetypes which enable a comprehensive semantic interoperability require adequate development and maintenance processes. Therefore, structures have to be created involving different health professions. In the following we present a model which facilitates and governs distributed but cooperative development and adoption of archetypes by different professionals including peer reviews. Our model consists of a hierarchical structure of professional committees and descriptions of the archetype development process considering these different committees.
An authentication infrastructure for today and tomorrow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engert, D.E.
1996-06-01
The Open Software Foundation's Distributed Computing Environment (OSF/DCE) was originally designed to provide a secure environment for distributed applications. By combining it with Kerberos Version 5 from MIT, it can be extended to provide network security as well. This combination can be used to build both an inter- and intra-organizational infrastructure while providing single sign-on for the user with overall improved security. The ESnet community of the Department of Energy is building just such an infrastructure. ESnet has modified these systems to improve their interoperability, while encouraging the developers to incorporate these changes and work more closely together to continue to improve the interoperability. The success of this infrastructure depends on its flexibility to meet the needs of many applications and network security requirements. The open nature of Kerberos, combined with the vendor support of OSF/DCE, provides the infrastructure for today and tomorrow.
Method of determining interwell oil field fluid saturation distribution
Donaldson, Erle C.; Sutterfield, F. Dexter
1981-01-01
A method of determining the oil and brine saturation distribution in an oil field by taking electrical current and potential measurements among a plurality of open-hole wells geometrically distributed throughout the oil field. Poisson's equation is utilized to develop fluid saturation distributions from the electrical current and potential measurements. Both signal-generating equipment and chemical means are used to develop current flow among the several open-hole wells.
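As a rough illustration of the numerical step such a method relies on, here is a minimal finite-difference Poisson solve by Jacobi iteration. The grid size, source placement, and boundary conditions are hypothetical stand-ins, not values from the patent: the two interior cells play the role of a current source well and a sink well.

```python
# Hedged sketch: solving a discrete Poisson equation on a 2-D grid by
# Jacobi iteration, one numerical route from injected-current sources
# to a potential field. All parameters here are illustrative.
import numpy as np

def solve_poisson(source, n_iter=5000, h=1.0):
    """Solve laplacian(phi) = -source with phi = 0 on the boundary."""
    phi = np.zeros_like(source)
    for _ in range(n_iter):
        # Jacobi update: each interior cell is the average of its four
        # neighbours plus the local source contribution.
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:] +
                                  h * h * source[1:-1, 1:-1])
    return phi

# Two hypothetical "wells": a current source and a sink inside the field.
src = np.zeros((21, 21))
src[5, 5], src[15, 15] = 1.0, -1.0
phi = solve_poisson(src)
print(round(float(phi[5, 5]), 4))  # potential is positive near the source well
```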
Financial Effect of a Drug Distribution Model Change on a Health System.
Turingan, Erin M; Mekoba, Bijan C; Eberwein, Samuel M; Roberts, Patricia A; Pappas, Ashley L; Cruz, Jennifer L; Amerine, Lindsey B
2017-06-01
Background: Drug manufacturers change distribution models based on patient safety and product integrity needs. These model changes can limit health-system access to medications, and the financial impact on health systems can be significant. Objective: The primary aim of this study was to determine the health-system financial impact of a manufacturer's change from open to limited distribution for bevacizumab (Avastin), rituximab (Rituxan), and trastuzumab (Herceptin). The secondary aim was to identify opportunities to shift administration to outpatient settings to support formulary change. Methods: To assess the financial impact on the health system, the cost minus discount was applied to total drug expenditure during a 1-year period after the distribution model change. The opportunity analysis was conducted for three institutions within the health system through chart review of each inpatient administration. Opportunity cost was the sum of the inpatient administration cost and outpatient administration margin. Results: The total drug expenditure for the study period was $26,427,263. By applying the cost minus discount, the financial effect of the distribution model change was $1,393,606. A total of 387 administrations were determined to be opportunities to be shifted to the outpatient setting. During the study period, the total opportunity cost was $1,766,049. Conclusion: Drug expenditure increased for the health system due to the drug distribution model change and loss of cost minus discount. The opportunity cost of shifting inpatient administrations could offset the increase in expenditure. It is recommended to restrict bevacizumab, rituximab, and trastuzumab through Pharmacy & Therapeutics Committees to outpatient use where clinically appropriate.
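The reported figures can be tied together with simple arithmetic. The dollar amounts below are taken from the abstract; the implied discount rate is a derived quantity, not a number the study states.

```python
# Sketch of the study's arithmetic. The three dollar figures are reported
# in the abstract; the implied discount rate is derived, not reported.
total_expenditure = 26_427_263      # 1-year drug expenditure (reported)
financial_effect = 1_393_606        # lost cost-minus-discount (reported)
opportunity_cost = 1_766_049        # shiftable-administration value (reported)

# The lost discount implies an effective discount rate on expenditure.
implied_discount_rate = financial_effect / total_expenditure
print(f"implied discount rate: {implied_discount_rate:.1%}")

# The outpatient-shift opportunity exceeds the loss, supporting the
# conclusion that shifting administrations could offset the increase.
net_offset = opportunity_cost - financial_effect
print(f"net offset: ${net_offset:,}")
```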
TRIQS: A toolbox for research on interacting quantum systems
NASA Astrophysics Data System (ADS)
Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka
2015-11-01
We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.
Flight dynamics software in a distributed network environment
NASA Technical Reports Server (NTRS)
Jeletic, J.; Weidow, D.; Boland, D.
1995-01-01
As with all NASA facilities, the announcement of reduced budgets, reduced staffing, and the desire to implement smaller/quicker/cheaper missions has required the Agency's organizations to become more efficient in what they do. To accomplish these objectives, the FDD has initiated the development of the Flight Dynamics Distributed System (FDDS). The underlying philosophy of FDDS is to build an integrated system that breaks down the traditional barriers of attitude, mission planning, and navigation support software to provide a uniform approach to flight dynamics applications. Through the application of open systems concepts and state-of-the-art technologies, including object-oriented specification concepts, object-oriented software, and common user interface, communications, data management, and executive services, the FDD will reengineer most of its six million lines of code.
NASA Astrophysics Data System (ADS)
Kishcha, P.; Starobinets, B.; Bozzano, R.; Pensieri, S.; Canepa, E.; Nickovie, S.; di Sarra, A.; Udisti, R.; Becagli, S.; Alpert, P.
2012-03-01
Sea-salt aerosol (SSA) could influence the Earth's climate by acting as cloud condensation nuclei. However, there were no regular measurements of SSA in the open sea. At Tel-Aviv University, the DREAM-Salt prediction system has been producing daily forecasts of the 3-D distribution of sea-salt aerosol concentrations over the Mediterranean Sea (http://wind.tau.ac.il/saltina/salt.html). In order to evaluate the model performance in the open sea, daily modeled concentrations were compared directly with SSA measurements taken at the tiny island of Lampedusa, in the Central Mediterranean. In order to further test the robustness of the model, the model performance over the open sea was indirectly verified by comparing modeled SSA concentrations with wave height measurements collected by the ODAS Italia 1 buoy and the Llobregat buoy. Model-vs.-measurement comparisons show that the model is capable of producing realistic SSA concentrations and their day-to-day variations over the open sea, in accordance with observed wave height and wind speed.
NASA Technical Reports Server (NTRS)
Jung, S. Y.; Sanandres, Luis A.; Vance, J. M.
1991-01-01
Measurements of pressure distributions and force coefficients were carried out in two types of squeeze film dampers executing a circular centered orbit, an open-ended configuration and a partially sealed one, in order to investigate the effects of fluid inertia and cavitation on pressure distributions and force coefficients. Dynamic pressure measurements were carried out for two orbit radii, ε = 0.5 and 0.8. It was found that the partially sealed configuration was less influenced by fluid inertia than the open-ended configuration.
Drug policy in China: pharmaceutical distribution in rural areas.
Dong, H; Bogg, L; Rehnberg, C; Diwan, V
1999-03-01
In 1978, China decided to reform its economy and since then has gradually opened up to the world. The economy has grown rapidly at an average of 9.8% per year from 1978 to 1994. Medical expenditure, especially for drugs, has grown even more rapidly. The increase in medical expenditure can be attributed to changing disease patterns, a higher proportion of older people in the population and fee-for-service incentives for hospitals. Due to the changing economic system and higher cost of health care, the Chinese government has reformed its health care system, including its health and drug policy. The drug policy reform has led to more comprehensive policy elements, including registration, production, distribution, utilization and administration. As a part of drug policy reform, the drug distribution network has also been changed, from a centrally controlled supply system (push system) to a market-oriented demand system (pull system). Hospitals can now purchase drugs directly from drug companies, factories and retailers, leading to increased price competition. Patients have easier access to drugs as more drugs are available on the market. At the same time, this has also entailed negative effects. The old drug administrative system is not suitable for the new drug distribution network. It is easy for people to get drugs on the market and this can lead to overuse and misuse. Marketing factors have influenced drug distribution so strongly that there is a risk of fake or low quality drugs being distributed. The government has taken some measures to fight these negative effects. This paper describes the drug policy reform in China, particularly the distribution of drugs to health care facilities.
Characterizing Crowd Participation and Productivity of Foldit Through Web Scraping
2016-03-01
(Abstract not extracted cleanly; the recoverable fragments concern the Berkeley Open Infrastructure for Network Computing (BOINC), a software-based distributed computing platform originally created for the search for extraterrestrial life, whose principal investigator Anderson notes that many ordinary computers working at once can create comparable capacity, along with cumulative distribution functions (CDF), CPU usage, and crowdsourced serious games (CSSG).)
Swan: A tool for porting CUDA programs to OpenCL
NASA Astrophysics Data System (ADS)
Harvey, M. J.; De Fabritiis, G.
2011-04-01
The use of modern, high-performance graphical processing units (GPUs) for acceleration of scientific computation has been widely reported. The majority of this work has used the CUDA programming model supported exclusively by GPUs manufactured by NVIDIA. An industry standardisation effort has recently produced the OpenCL specification for GPU programming. This offers the benefits of hardware-independence and reduced dependence on proprietary tool-chains. Here we describe a source-to-source translation tool, "Swan", for facilitating the conversion of an existing CUDA code to use the OpenCL model, as a means to aid programmers experienced with CUDA in evaluating OpenCL and alternative hardware. While the performance of equivalent OpenCL and CUDA code on fixed hardware should be comparable, we find that a real-world CUDA application ported to OpenCL exhibits an overall 50% increase in runtime, a reduction in performance attributable to the immaturity of contemporary compilers. The ported application is shown to have platform independence, running on both NVIDIA and AMD GPUs without modification. We conclude that OpenCL is a viable platform for developing portable GPU applications but that the more mature CUDA tools continue to provide best performance.
Program summary
Program title: Swan
Catalogue identifier: AEIH_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIH_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU Public License version 2
No. of lines in distributed program, including test data, etc.: 17 736
No. of bytes in distributed program, including test data, etc.: 131 177
Distribution format: tar.gz
Programming language: C
Computer: PC
Operating system: Linux
RAM: 256 Mbytes
Classification: 6.5
External routines: NVIDIA CUDA, OpenCL
Nature of problem: Graphical Processing Units (GPUs) from NVIDIA are preferentially programmed with the proprietary CUDA programming toolkit. An alternative programming model promoted as an industry standard, OpenCL, provides similar capabilities to CUDA and is also supported on non-NVIDIA hardware (including multicore x86 CPUs, AMD GPUs and IBM Cell processors). The adaptation of a program from CUDA to OpenCL is relatively straightforward but laborious. The Swan tool facilitates this conversion.
Solution method: Swan performs a translation of CUDA kernel source code into an OpenCL equivalent. It also generates the C source code for entry point functions, simplifying kernel invocation from the host program. A concise host-side API abstracts the CUDA and OpenCL APIs. A program adapted to use Swan has no dependency on the CUDA compiler for the host-side program. The converted program may be built for either CUDA or OpenCL, with the selection made at compile time.
Restrictions: No support for CUDA C++ features
Running time: Nominal
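As a hedged illustration of what such a source-to-source conversion involves (this is not Swan's actual implementation), a few of the standard CUDA-to-OpenCL token correspondences can be applied mechanically; real translation also has to handle pointer qualifiers, launch syntax, and multi-dimensional indices.

```python
# Illustrative sketch of token-level rewriting in a CUDA-to-OpenCL
# translator. The mapping table lists well-known keyword equivalences;
# it is not Swan's code.
import re

CUDA_TO_OPENCL = {
    r"__global__":        "__kernel",
    r"__shared__":        "__local",
    r"__syncthreads\(\)": "barrier(CLK_LOCAL_MEM_FENCE)",
    r"threadIdx\.x":      "get_local_id(0)",
    r"blockIdx\.x":       "get_group_id(0)",
    r"blockDim\.x":       "get_local_size(0)",
}

def translate(cuda_src):
    """Rewrite CUDA keywords and builtins into their OpenCL equivalents."""
    for pattern, repl in CUDA_TO_OPENCL.items():
        cuda_src = re.sub(pattern, repl, cuda_src)
    return cuda_src

kernel = "__global__ void scale(float *a) { a[threadIdx.x] *= 2.0f; }"
print(translate(kernel))
# __kernel void scale(float *a) { a[get_local_id(0)] *= 2.0f; }
```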
Design of a modulated orthovoltage stereotactic radiosurgery system.
Fagerstrom, Jessica M; Bender, Edward T; Lawless, Michael J; Culberson, Wesley S
2017-07-01
To achieve stereotactic radiosurgery (SRS) dose distributions with sharp gradients using orthovoltage energy fluence modulation with inverse planning optimization techniques. A pencil beam model was used to calculate dose distributions from an orthovoltage unit at 250 kVp. Kernels for the model were derived using Monte Carlo methods. A Genetic Algorithm search heuristic was used to optimize the spatial distribution of added tungsten filtration to achieve dose distributions with sharp dose gradients. Optimizations were performed for depths of 2.5, 5.0, and 7.5 cm, with cone sizes of 5, 6, 8, and 10 mm. In addition to the beam profiles, 4π isocentric irradiation geometries were modeled to examine dose at 0.07 mm depth, a representative skin depth, for the low energy beams. Profiles from 4π irradiations of a constant target volume, assuming maximally conformal coverage, were compared. Finally, dose deposition in bone compared to tissue in this energy range was examined. Based on the results of the optimization, circularly symmetric tungsten filters were designed to modulate the orthovoltage beam across the apertures of SRS cone collimators. For each depth and cone size combination examined, the beam flatness and 80-20% and 90-10% penumbrae were calculated for both standard, open cone-collimated beams as well as for optimized, filtered beams. For all configurations tested, the modulated beam profiles had decreased penumbra widths and flatness statistics at depth. Profiles for the optimized, filtered orthovoltage beams also offered decreases in these metrics compared to measured linear accelerator cone-based SRS profiles. The dose at 0.07 mm depth in the 4π isocentric irradiation geometries was higher for the modulated beams compared to unmodulated beams; however, the modulated dose at 0.07 mm depth remained <0.025% of the central, maximum dose. 
The 4π profiles irradiating a constant target volume showed improved statistics for the modulated, filtered distribution compared to the standard, open cone-collimated distribution. Simulations of tissue and bone confirmed previously published results that a higher energy beam (≥ 200 keV) would be preferable, but the 250 kVp beam was chosen for this work because it is available for future measurements. A methodology has been described that may be used to optimize the spatial distribution of added filtration material in an orthovoltage SRS beam to result in dose distributions with decreased flatness and penumbra statistics compared to standard open cones. This work provides the mathematical foundation for a novel, orthovoltage energy fluence-modulated SRS system. © 2017 American Association of Physicists in Medicine.
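The search heuristic named above can be illustrated with a toy Genetic Algorithm. Everything below (the thickness profile, the squared-error objective, and the operator settings) is an illustrative assumption, not the paper's pencil-beam dose model or its actual filter optimization.

```python
# Toy Genetic Algorithm over a hypothetical filter thickness profile.
# The objective is a stand-in for the paper's dose-gradient metric.
import random

random.seed(0)

TARGET = [0.0, 0.4, 0.8, 0.4, 0.0]   # hypothetical ideal thickness profile

def fitness(profile):
    # Lower is better: squared error against the target profile.
    return sum((p - t) ** 2 for p, t in zip(profile, TARGET))

def mutate(profile, rate=0.3):
    # Perturb each gene with probability `rate`, clamped to [0, 1].
    return [min(1.0, max(0.0, p + random.uniform(-0.1, 0.1)))
            if random.random() < rate else p for p in profile]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in TARGET] for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                # elitist selection keeps the best
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]

best = min(pop, key=fitness)
print(round(fitness(best), 3))
```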
Avalanches and power-law behaviour in lung inflation
NASA Astrophysics Data System (ADS)
Suki, Béla; Barabási, Albert-László; Hantos, Zoltán; Peták, Ferenc; Stanley, H. Eugene
1994-04-01
When lungs are emptied during exhalation, peripheral airways close up [1]. For people with lung disease, they may not reopen for a significant portion of inhalation, impairing gas exchange [2,3]. A knowledge of the mechanisms that govern reinflation of collapsed regions of lungs is therefore central to the development of ventilation strategies for combating respiratory problems. Here we report measurements of the terminal airway resistance, Rt, during the opening of isolated dog lungs. When inflated by a constant flow, Rt decreases in discrete jumps. We find that the probability distributions of the sizes of the jumps and of the time intervals between them exhibit power-law behaviour over two decades. We develop a model of the inflation process in which 'avalanches' of airway openings are seen, with power-law distributions of both the size of avalanches and the time intervals between them, which agree quantitatively with those seen experimentally and are reminiscent of the power-law behaviour observed for self-organized critical systems [4]. Thus power-law distributions, arising from avalanches associated with threshold phenomena propagating down a branching tree structure, appear to govern the recruitment of terminal airspaces.
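A power-law claim of this kind is typically checked by fitting the exponent directly. The sketch below does so on a synthetic sample that stands in for the measured jump sizes; the exponent value 2.5 is an assumption chosen for illustration, not a figure from the paper.

```python
# Generate power-law-distributed "jump sizes" by inverse-transform
# sampling, then recover the exponent with the maximum-likelihood
# (Hill) estimator. The true exponent here is an illustrative choice.
import math
import random

random.seed(1)
alpha, xmin, n = 2.5, 1.0, 50_000

# Inverse-transform sampling from p(x) ~ x^(-alpha), x >= xmin.
sample = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha - 1.0))
          for _ in range(n)]

# Maximum-likelihood estimate of the exponent from the sample.
alpha_hat = 1.0 + n / sum(math.log(x / xmin) for x in sample)
print(round(alpha_hat, 2))  # close to the true exponent 2.5
```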
Feng, Liang; Yuan, Shuai; Zhang, Liang-Liang; Tan, Kui; Li, Jia-Luo; Kirchon, Angelo; Liu, Ling-Mei; Zhang, Peng; Han, Yu; Chabal, Yves J; Zhou, Hong-Cai
2018-02-14
Sufficient pore size, appropriate stability, and hierarchical porosity are three prerequisites for open frameworks designed for drug delivery, enzyme immobilization, and catalysis involving large molecules. Herein, we report a powerful and general strategy, linker thermolysis, to construct ultrastable hierarchically porous metal-organic frameworks (HP-MOFs) with tunable pore size distribution. Linker instability, usually an undesirable trait of MOFs, was exploited to create mesopores by generating crystal defects throughout a microporous MOF crystal via thermolysis. The crystallinity and stability of HP-MOFs remain after thermolabile linkers are selectively removed from multivariate metal-organic frameworks (MTV-MOFs) through a decarboxylation process. A domain-based linker spatial distribution was found to be critical for creating hierarchical pores inside MTV-MOFs. Furthermore, linker thermolysis promotes the formation of ultrasmall metal oxide nanoparticles immobilized in an open framework that exhibits high catalytic activity for Lewis acid-catalyzed reactions. Most importantly, this work provides fresh insights into the connection between linker apportionment and vacancy distribution, which may shed light on probing the disordered linker apportionment in multivariate systems, a long-standing challenge in the study of MTV-MOFs.
Shape and Reinforcement Optimization of Underground Tunnels
NASA Astrophysics Data System (ADS)
Ghabraie, Kazem; Xie, Yi Min; Huang, Xiaodong; Ren, Gang
Designing the support system and selecting an optimum shape for the opening are two important steps in designing excavations in rock masses. Currently, shape selection and support design are based mainly on the designer's judgment and experience. Both of these problems can be viewed as material distribution problems, in which one needs to find the optimum distribution of a material in a domain. Topology optimization techniques have proved to be useful in solving these kinds of problems in structural design. Recently, the application of topology optimization techniques in reinforcement design around underground excavations has been studied by some researchers. In this paper a three-phase material model is introduced, changing between normal rock, reinforced rock, and void. Using such a material model, both problems of shape and reinforcement design can be solved together. A well-known topology optimization technique used in structural design is bi-directional evolutionary structural optimization (BESO). In this paper the BESO technique has been extended to simultaneously optimize the shape of the opening and the distribution of reinforcements. The validity and capability of the proposed approach have been investigated through some examples.
Hu, L; Zhao, Z; Song, J; Fan, Y; Jiang, W; Chen, J
2001-02-01
The distribution of stress on the surface of the condylar cartilage was investigated. A three-dimensional model of the temporomandibular joint-mandible-Herbst appliance system was set up with the SUPER SAP software (version 9.3). On this model, various bite reconstructions were simulated according to specified advancement and vertical bite opening. The distributions of maximum and minimum principal stress on the surface of the condylar cartilage were computed and analyzed. When the Herbst appliance drove the mandible forward, the anterior condyle surface was compressed while the posterior surface was under tension. The trend of stress at a given point on the condyle surface was consistent across reconstruction conditions, but the trends at different points differed under the same reconstruction conditions. All five groups of bite reconstruction (3-7 mm advancement, 4-2 mm vertical bite opening of the mandible) designed in this study can be selected in the clinic according to the patient's adaptive capacity, the extent of malocclusion, and the potential and direction of growth.
Qi, Bing; Lougovski, Pavel; Pooser, Raphael C.; ...
2015-10-21
Continuous-variable quantum key distribution (CV-QKD) protocols based on coherent detection have been studied extensively in both theory and experiment. In all the existing implementations of CV-QKD, both the quantum signal and the local oscillator (LO) are generated from the same laser and propagate through the insecure quantum channel. This arrangement may open security loopholes and limit the potential applications of CV-QKD. In our paper, we propose and demonstrate a pilot-aided feedforward data recovery scheme that enables reliable coherent detection using a "locally" generated LO. Using two independent commercial laser sources and a spool of 25-km optical fiber, we construct a coherent communication system. The variance of the phase noise introduced by the proposed scheme is measured to be 0.04 rad², which is small enough to enable secure key distribution. This technology opens the door for other quantum communication protocols, such as the recently proposed measurement-device-independent CV-QKD, where independent light sources are employed by different users.
The double LHPR system, a high speed micro- and macroplankton sampler
NASA Astrophysics Data System (ADS)
Williams, R.; Collins, N. R.; Conway, D. V. P.
1983-03-01
A double-net sampling system, consisting of two separate Longhurst-Hardy Plankton Recorders, capable of being towed at speeds up to 3 m s⁻¹ and of capturing plankton organisms such as copepod nauplii with a minimum of damage, is described. The systems are mounted in a Lowestoft sampler and connected to 53 and 280-μm mesh nets; both nets are fitted with doors opened by a variable timer unit. The system is suitable for neritic and oceanic deployment. The ratio of the area of the net mouth to the net open area (R) is 9 for the 280-μm net and 14 to 121 for the 53-μm net, depending on the nose cone used. These R values are considerably better than those of previously described systems. Examples are given showing how the instrument has been used to resolve spatially the vertical distribution of the nauplii and copepodite stages of Calanus helgolandicus.
OpenSim: A Flexible Distributed Neural Network Simulator with Automatic Interactive Graphics.
Jarosch, Andreas; Leber, Jean Francois
1997-06-01
An object-oriented simulator called OpenSim is presented that achieves a high degree of flexibility by relying on a small set of building blocks. The state variables and algorithms put in this framework can easily be accessed through a command shell. This allows one to distribute a large-scale simulation over several workstations and to generate the interactive graphics automatically. OpenSim opens new possibilities for cooperation among Neural Network researchers. Copyright 1997 Elsevier Science Ltd.
A Scalable Infrastructure for Lidar Topography Data Distribution, Processing, and Discovery
NASA Astrophysics Data System (ADS)
Crosby, C. J.; Nandigam, V.; Krishnan, S.; Phan, M.; Cowart, C. A.; Arrowsmith, R.; Baru, C.
2010-12-01
High-resolution topography data acquired with lidar (light detection and ranging) technology have emerged as a fundamental tool in the Earth sciences, and are also being widely utilized for ecological, planning, engineering, and environmental applications. Collected from airborne, terrestrial, and space-based platforms, these data are revolutionary because they permit analysis of geologic and biologic processes at resolutions essential for their appropriate representation. Public domain lidar data collection by federal, state, and local agencies are a valuable resource to the scientific community, however the data pose significant distribution challenges because of the volume and complexity of data that must be stored, managed, and processed. Lidar data acquisition may generate terabytes of data in the form of point clouds, digital elevation models (DEMs), and derivative products. This massive volume of data is often challenging to host for resource-limited agencies. Furthermore, these data can be technically challenging for users who lack appropriate software, computing resources, and expertise. The National Science Foundation-funded OpenTopography Facility (www.opentopography.org) has developed a cyberinfrastructure-based solution to enable online access to Earth science-oriented high-resolution lidar topography data, online processing tools, and derivative products. OpenTopography provides access to terabytes of point cloud data, standard DEMs, and Google Earth image data, all co-located with computational resources for on-demand data processing. The OpenTopography portal is built upon a cyberinfrastructure platform that utilizes a Services Oriented Architecture (SOA) to provide a modular system that is highly scalable and flexible enough to support the growing needs of the Earth science lidar community. 
OpenTopography strives to host and provide access to datasets as soon as they become available, and also to expose greater application-level functionality to our end-users (such as generation of custom DEMs via various gridding algorithms, and hydrological modeling algorithms). In the future, the SOA will enable direct authenticated access to back-end functionality through simple Web service Application Programming Interfaces (APIs), so that users may access our data and compute resources via clients other than Web browsers. In addition to an overview of the OpenTopography SOA, this presentation will discuss our recently developed lidar data ingestion and management system for point cloud data delivered in the binary LAS standard. This system complements our existing partitioned database approach for data delivered in ASCII format, and permits rapid ingestion of data. The system has significantly reduced data ingestion times and has implications for data distribution in emergency response situations. We will also address ongoing work to develop a community lidar metadata catalog based on the OGC Catalogue Service for the Web (CSW) standard, which will help to centralize discovery of public domain lidar data.
NASA Astrophysics Data System (ADS)
Banerjee, J.; Verma, M. K.; Manna, S.; Ghosh, S.
2006-02-01
The noise profile of the Voltage Dependent Anion Channel (VDAC) is investigated in the open channel state. Single-channel currents through VDAC from mitochondria of rat brain, reconstituted into a planar lipid bilayer, are recorded under different voltage-clamped conditions across the membrane. Power spectrum analysis of the current indicates power-law noise of 1/f nature. Moreover, this 1/f nature of the open channel noise is seen throughout the range of applied membrane potential from -30 to +30 mV. We propose that 1/f noise in the open ion channel arises from obstruction in the passage of ions across the membrane. The process is recognised as a phenomenon of self-organized criticality (SOC), like sandpile avalanches and other physical systems. Based on SOC, it has been theoretically established that the ion channel system follows power-law noise, as observed in our experiments. We also show that the first-time return probability of the current fluctuations obeys a power-law distribution.
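The spectral analysis described above can be sketched as follows: synthesize 1/f ("pink") noise by spectral shaping, then recover the power-law slope from its periodogram. The synthetic trace is a stand-in for recorded channel current; a slope near -1 on a log-log plot is the signature of 1/f noise.

```python
# Synthesize 1/f noise by dividing a white spectrum by sqrt(f), invert
# to the time domain, then fit the log-log slope of the periodogram.
import numpy as np

rng = np.random.default_rng(0)
n = 2 ** 16
freqs = np.fft.rfftfreq(n, d=1.0)

# Shape complex white Gaussian noise so that power ~ 1/f.
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum[1:] /= np.sqrt(freqs[1:])
spectrum[0] = 0.0                    # no DC component
signal = np.fft.irfft(spectrum, n)   # time-domain 1/f trace

# Periodogram and a straight-line fit on log-log axes.
power = np.abs(np.fft.rfft(signal)) ** 2
f, p = freqs[1:], power[1:]
slope, _ = np.polyfit(np.log(f), np.log(p), 1)
print(round(slope, 1))  # close to -1 for 1/f noise
```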
Open quantum random walk in terms of quantum Bernoulli noise
NASA Astrophysics Data System (ADS)
Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling
2018-03-01
In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.
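The classical benchmark invoked above is the position law of a symmetric random walk, which is binomial and tends to a Gaussian. A minimal sketch, with the step count chosen purely for illustration, shows the binomial law already matching its Gaussian limit closely:

```python
# Exact position distribution of a symmetric +/-1 random walk (binomial
# law), compared with its Gaussian limit N(0, n) at the origin.
from math import comb, pi, sqrt

def walk_distribution(n_steps):
    """P(position = k) after n unbiased +/-1 steps."""
    return {2 * j - n_steps: comb(n_steps, j) / 2 ** n_steps
            for j in range(n_steps + 1)}

n = 100
dist = walk_distribution(n)
# Gaussian density at 0, times 2 because the walk lives on a spacing-2 lattice.
gauss_at_zero = 2.0 / sqrt(2 * pi * n)
print(round(dist[0], 4), round(gauss_at_zero, 4))
```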
Field-size dependence of doses of therapeutic carbon beams.
Kusano, Yohsuke; Kanai, Tatsuaki; Yonai, Shunsuke; Komori, Masataka; Ikeda, Noritoshi; Tachikawa, Yuji; Ito, Atsushi; Uchida, Hirohisa
2007-10-01
To estimate the physical dose at the center of spread-out Bragg peaks (SOBP) for various conditions of the irradiation system, a semiempirical approach was applied. The dose at the center of the SOBP depends on the field size because of large-angle scattering of particles in the water phantom. For a small field of 5 × 5 cm², the dose was reduced to 99.2%, 97.5%, and 96.5% of the open-field dose in the case of 290, 350, and 400 MeV/n carbon beams, respectively. Based on the three-Gaussian form of the lateral dose distributions of the carbon pencil beam, which has previously been shown to be effective for describing scattered carbon beams, we reconstructed the dose distributions of the SOBP beam. The reconstructed lateral dose distribution reproduced the measured lateral dose distributions very well, and the field-size dependencies calculated from it agreed with the measured dependency. The reconstructed beam was also used for irregularly shaped fields, and the resultant dose distribution agreed with the measured dose distribution. The reconstructed beams were found to be applicable to the treatment-planning system.
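A three-Gaussian lateral profile of the kind used here can be sketched directly: a narrow core plus broader terms representing large-angle scattering. The amplitudes and widths below are illustrative placeholders, not the paper's fitted values.

```python
# Hedged sketch of a three-Gaussian lateral dose model. Each component is
# an (amplitude, sigma) pair; the values below are hypothetical.
import math

def lateral_dose(r, components):
    """Sum of Gaussian terms evaluated at off-axis distance r (cm)."""
    return sum(a * math.exp(-r * r / (2 * s * s)) for a, s in components)

# Narrow core plus two broad scattering tails (illustrative parameters).
beam = [(1.0, 0.3), (0.05, 1.5), (0.005, 6.0)]

for r in [0.0, 1.0, 5.0]:
    print(f"r = {r} cm: relative dose {lateral_dose(r, beam):.4f}")
```

Summing such single-pencil-beam profiles over a scanned or collimated field is what makes the central dose field-size dependent: the broad components of neighbouring pencils still contribute dose at the field center.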
Building an Efficient and Effective Test Management System in an ODL Institution
ERIC Educational Resources Information Center
Yusof, Safiah Md; Lim, Tick Meng; Png, Leo; Khatab, Zainuriyah Abd; Singh, Harvinder Kaur Dharam
2017-01-01
Open University Malaysia (OUM) is progressively moving towards implementing assessment on demand and online assessment. This move is deemed necessary for OUM to continue to be the leading provider of flexible learning. OUM serves a very large number of students each semester and these students are widely distributed throughout the country. As the…
New Forum Addresses Microbiologically Influenced Corrosion
2012-06-01
Subject terms: MIC, biofilm formation, localized corrosion, microorganisms. Campbell described a monitoring system for a Type 316L stainless steel (UNS S31603) drinking water distribution system that measured open circuit
The acquisition of toxicity test data of sufficient quality from the open literature to fulfill taxonomic diversity requirements can be a limiting factor in the creation of new 304(a) Aquatic Life Criteria. The use of existing models (WebICE and ACE) that estimate acute and chronic eff...
Network-Aware Mechanisms for Tolerating Byzantine Failures in Distributed Systems
2012-01-01
In Digest, we use SHA-256 as the collision-resistant hash function. We use the realization of SHA-256 from the OpenSSL toolkit (http://www.openssl.org/).
Effects of elevated atmospheric CO2 and N fertilization on bahiagrass root distribution
USDA-ARS?s Scientific Manuscript database
The effects of elevated atmospheric CO2 on pasture systems remain understudied in the Southeastern US. A 10-year study of bahiagrass (Paspalum notatum Flüggé) response to elevated CO2 was established in 2005 using open top field chambers on a Blanton loamy sand (loamy siliceous, thermic, Grossarenic...
NASA Technical Reports Server (NTRS)
Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.
2000-01-01
The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based, object-oriented component approach to open, interoperable software development and software reuse.
OpenPET Hardware, Firmware, Software, and Board Design Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abu-Nimeh, Faisal; Choong, Woon-Seng; Moses, William W.
OpenPET is an open source, flexible, high-performance, and modular data acquisition system for a variety of applications. The OpenPET electronics are capable of reading analog voltage or current signals from a wide variety of sensors. The electronics boards make extensive use of field programmable gate arrays (FPGAs) to provide flexibility and scalability. Firmware and software for the FPGAs and computer are used to control and acquire data from the system. The command and control flow is similar to the data flow; however, commands are initiated from the computer and propagate down a tree topology (i.e., from top to bottom). Each node in the tree discovers its parent and children, and all addresses are configured accordingly. A user (or a script) initiates a command from the computer. This command is translated and encoded for the corresponding child (e.g., SB, MB, DB, etc.). Consecutively, each node passes the command to its corresponding child(ren) by looking at the destination address. Finally, once the command reaches its desired destination(s), the corresponding node(s) execute(s) the command and send(s) a reply, if required. All the firmware, software, and electronics board design files are distributed through the OpenPET website (http://openpet.lbl.gov).
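The top-to-bottom command flow can be sketched as a recursive walk of an address tree. The class, method names, and addresses below are illustrative stand-ins, not the actual OpenPET firmware interfaces.

```python
class Node:
    """Toy sketch of tree-routed command dispatch: a command enters at
    the root (the computer) and is forwarded toward its destination
    address; the target node executes it and returns a reply."""
    def __init__(self, address):
        self.address = address
        self.children = []

    def add_child(self, child):
        self.children.append(child)

    def dispatch(self, dest, command):
        if dest == self.address:
            return (self.address, "executed:" + command)
        for child in self.children:
            # try each child; only the subtree holding `dest` replies
            reply = child.dispatch(dest, command)
            if reply is not None:
                return reply
        return None

# Hypothetical topology: computer -> support board -> multiplexer
# board -> detector board (SB/MB/DB naming borrowed from the abstract).
root = Node("computer")
sb, mb, db = Node("SB0"), Node("MB0"), Node("DB3")
root.add_child(sb); sb.add_child(mb); mb.add_child(db)
reply = root.dispatch("DB3", "read_status")
```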
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbara Chapman
OpenMP was not well recognized at the beginning of the project, around 2003, because of its limited use in DoE production applications and the immature hardware support for an efficient implementation. Yet in recent years it has gradually been adopted both in HPC applications, mostly in the form of MPI+OpenMP hybrid code, and in mid-scale desktop applications for scientific and experimental studies. We have observed this trend and worked diligently to improve our OpenMP compiler and runtimes, as well as to work with the OpenMP standard organization to make sure OpenMP evolves in a direction close to DoE missions. In the Center for Programming Models for Scalable Parallel Computing project, the HPCTools team at the University of Houston (UH), directed by Dr. Barbara Chapman, has been working with project partners, external collaborators and hardware vendors to increase the scalability and applicability of OpenMP for multi-core (and future manycore) platforms and for distributed memory systems by exploring different programming models, language extensions, compiler optimizations, as well as runtime library support.
NASA Astrophysics Data System (ADS)
Messina, S.; Lanzafame, A. C.; Malo, L.; Desidera, S.; Buccino, A.; Zhang, L.; Artemenko, S.; Millward, M.; Hambsch, F.-J.
2017-10-01
Context. Low-mass members of young loose stellar associations and open clusters exhibit a wide spread of rotation periods. Such a spread originates from the distributions of masses and initial rotation periods. However, multiplicity can also play a significant role. Aims: We aim to investigate the role played by physical companions in multiple systems in shortening the primordial disk lifetime, bringing forward the rotational spin-up relative to single stars. Methods: We have compiled the most extensive list to date of low-mass bona fide and candidate members of the young 25-Myr β Pictoris association. We have measured the rotation periods of almost all members from our own or archival photometric time series. In a few cases the rotation periods were retrieved from the literature. We used updated UVWXYZ components to assess the membership of the whole stellar sample. Using the known basic properties of most members, we built the rotation period distribution, distinguishing between bona fide and candidate members and according to their multiplicity status. Results: We find that single stars and components of multiple systems in wide orbits (>80 AU) have rotation periods that exhibit a well defined sequence arising from the mass distribution, with some spread likely arising from the initial rotation period distribution. All components of multiple systems in close orbits (<80 AU) have rotation periods that are significantly shorter than their equal-mass single counterparts. For these close components of multiple systems a linear dependence of rotation rate on separation is only barely detected. A comparison with the younger 13-Myr h Per cluster, the older 40-Myr open clusters and stellar associations NGC 2547, IC 2391, Argus, and IC 2602, and the 130-Myr Pleiades shows that whereas the evolution of F-G stars is well reproduced by angular momentum evolution models, this is not the case for the slowly rotating K and early-M stars. 
Finally, we find that the amplitude of their light curves is correlated neither with rotation nor with mass. Conclusions: Once single stars and wide components of multiple systems are separated from close components of multiple systems, the rotation period distributions exhibit a well defined dependence on mass that allows us to make a meaningful comparison with similar distributions of either younger or older associations and clusters. Such cleaned distributions allow us to use the stellar rotation period meaningfully as an age indicator for F and G type stars. Tables 2 and 3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A3
Intelligent Distribution Voltage Control with Distributed Generation
NASA Astrophysics Data System (ADS)
Castro Mendieta, Jose
In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks have been proposed, developed, and tested. These methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing the results with other methods reported in the literature. It was implemented using Simulink of Matlab and OpenDSS; the optimization techniques and the presentation of results are from Matlab. The co-simulation with the Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem on the unbalanced IEEE 13- and 34-node test feeders. The results from this work showed a better loss reduction compared to the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation at the pilot bus of the distribution network using DGs. It uses Pareto and Fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. Simulink of Matlab and OpenDSS are used in the development of the proposed approach. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses on distribution networks using DGs. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that integrating DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. 
The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy Systems, the second is published in the international journal Energies, and the third was submitted to the journal Electrical Power and Energy Systems. Two other articles have been published at peer-reviewed conferences. This work comprises six chapters, which are detailed in the various sections of the thesis.
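The idea of DG reactive-power participation can be illustrated on a toy single-line feeder: sweep the DG's reactive output, keep the bus voltage in band, and pick the setting with the lowest line loss. The linearized voltage-drop and loss formulas and all per-unit numbers below are illustrative assumptions, not the thesis' models.

```python
# Toy volt/VAR sketch: one line feeding one load bus, with a DG at the
# bus that can inject reactive power. Illustrative per-unit values.
V0 = 1.0              # substation voltage (p.u.)
R, X = 0.02, 0.04     # line resistance and reactance (p.u.)
P, Q_LOAD = 0.8, 0.4  # active and reactive load at the bus (p.u.)

def evaluate(q_dg):
    q_net = Q_LOAD - q_dg                # reactive power carried by the line
    v = V0 - (R * P + X * q_net) / V0    # linearized voltage drop
    loss = R * (P ** 2 + q_net ** 2)     # line loss, taking |V| ~ 1 p.u.
    return v, loss

best_q, best_loss = None, float("inf")
for i in range(0, 61):                   # candidate DG outputs 0 .. 0.6 p.u.
    q_dg = i / 100.0
    v, loss = evaluate(q_dg)
    if 0.95 <= v <= 1.05 and loss < best_loss:   # voltage must stay in band
        best_q, best_loss = q_dg, loss
```

As expected, the loss-minimizing setting supplies the load's reactive demand locally, so the line carries no reactive power.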
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
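Building a DAG of commands from event dependencies, as libWater's runtime does, amounts to dependency-respecting scheduling. A minimal sketch with invented command names (this is not libWater's actual API, just the underlying idea):

```python
from collections import defaultdict, deque

# Commands and the events each one must wait for (invented example).
commands = ["write_A", "kernel_1", "read_A", "kernel_2"]
depends_on = {
    "kernel_1": ["write_A"],
    "read_A": ["kernel_1"],
    "kernel_2": ["kernel_1"],
}

def topo_order(nodes, deps):
    """Kahn's algorithm: emit commands only after their dependencies."""
    indeg = {n: len(deps.get(n, [])) for n in nodes}
    out = defaultdict(list)
    for n, ds in deps.items():
        for d in ds:
            out[d].append(n)
    ready = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in out[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                ready.append(m)
    return order

schedule = topo_order(commands, depends_on)
```

A real runtime would additionally look for structure in this DAG, e.g. the collective communication patterns and redundant device-host-device copies the article mentions.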
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
Using WNTR to Model Water Distribution System Resilience ...
The Water Network Tool for Resilience (WNTR) is a new open source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (i.e. pipes, tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics capabilities. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open source project. This pres
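Selecting damage probabilistically from a fragility curve can be sketched in a few lines. This is an illustration of the concept, not the WNTR API; the lognormal curve parameters and pipe data are assumed values.

```python
import math
import random

def failure_probability(pga, median=0.5, beta=0.4):
    """Lognormal fragility curve: P(fail | peak ground acceleration).
    `median` (g) and `beta` are assumed parameters, not calibrated ones."""
    if pga <= 0:
        return 0.0
    z = (math.log(pga) - math.log(median)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

random.seed(1)
pipes = {"p1": 0.1, "p2": 0.5, "p3": 1.2}   # pipe -> PGA at its location (g)
damaged = {name: random.random() < failure_probability(pga)
           for name, pga in pipes.items()}   # Bernoulli draw per pipe
```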
The Geogenomic Mutational Atlas of Pathogens (GoMAP) Web System
Sargeant, David P.; Hedden, Michael W.; Deverasetty, Sandeep; Strong, Christy L.; Alaniz, Izua J.; Bartlett, Alexandria N.; Brandon, Nicholas R.; Brooks, Steven B.; Brown, Frederick A.; Bufi, Flaviona; Chakarova, Monika; David, Roxanne P.; Dobritch, Karlyn M.; Guerra, Horacio P.; Levit, Kelvy S.; Mathew, Kiran R.; Matti, Ray; Maza, Dorothea Q.; Mistry, Sabyasachy; Novakovic, Nemanja; Pomerantz, Austin; Rafalski, Timothy F.; Rathnayake, Viraj; Rezapour, Noura; Ross, Christian A.; Schooler, Steve G.; Songao, Sarah; Tuggle, Sean L.; Wing, Helen J.; Yousif, Sandy; Schiller, Martin R.
2014-01-01
We present a new approach for pathogen surveillance we call Geogenomics. Geogenomics examines the geographic distribution of the genomes of pathogens, with a particular emphasis on those mutations that give rise to drug resistance. We engineered a new web system called Geogenomic Mutational Atlas of Pathogens (GoMAP) that enables investigation of the global distribution of individual drug resistance mutations. As a test case we examined mutations associated with HIV resistance to FDA-approved antiretroviral drugs. GoMAP-HIV makes use of existing public drug resistance and HIV protein sequence data to examine the distribution of 872 drug resistance mutations in ∼502,000 sequences for many countries in the world. We also implemented a broadened classification scheme for HIV drug resistance mutations. Several patterns for geographic distributions of resistance mutations were identified by visual mining using this web tool. GoMAP-HIV is an open access web application available at http://www.bio-toolkit.com/GoMap/project/ PMID:24675726
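The kind of aggregation behind such an atlas, counting each resistance mutation per country across sequence records, can be sketched with standard containers. The records and mutation names below are invented examples, not GoMAP data.

```python
from collections import Counter, defaultdict

# Each record: a sequence's country of origin and its observed
# drug-resistance mutations (invented illustrative data).
records = [
    {"country": "US", "mutations": ["M184V", "K103N"]},
    {"country": "US", "mutations": ["M184V"]},
    {"country": "ZA", "mutations": ["K103N"]},
]

# country -> Counter of mutation occurrences, the table a map view
# of geographic mutation distributions would be drawn from.
by_country = defaultdict(Counter)
for rec in records:
    by_country[rec["country"]].update(rec["mutations"])
```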
Effect of soil-rock system on speleothems weathering in Bailong Cave, Yunnan Province, China
Wang, Jing; Song, Lin-hua
2005-01-01
Bailong Cave, developed in Middle Triassic calcareous dolomite, was opened as a show cave for visitors in 1988. The speleothem scenery has been strongly weathered, leaving white powder on the outer layers. Study of the cave winds, the permeability of the soil-rock system, and the chemical compositions of the dripping water indicated: (1) The cave's dimensional structure distinctly affects the cave winds, which are stronger at narrow places. (2) Soil grain-size distributions showed clay to be the dominant fraction of the soil, and the response of the dripping water to rainwater percolation was slow. The density of joints and other openings makes the dolomite a mesh seepage body, forming piles of thin, high columns and stalactites. (3) Analysis of 9 dripping-water samples with the HYDROWIN computer program showed that the major mineral in the water was dolomite. PMID:15682505
Aircraft Brake Systems Testing Handbook.
1981-05-01
Distribution is unlimited. Air Force Flight Test Center, Edwards Air Force Base, California; Air Force Systems Command, United States Air Force. This handbook has been reviewed and cleared for open publication and/or public release by the AFFTC Office of Public Affairs. Nomenclature (excerpt): Ft, engine thrust (lbs); Fvrt, vertical force acting on the tire at the ground (lbs); g, acceleration due to gravity (32.17 ft/sec2); h, distance.
Meng, Dewei; Ansari, Farhad
2016-12-01
Detection of cracks at their early stages of evolution is important in the health monitoring of civil structures. A review of the technical literature reveals that single or sparsely distributed multiple cracks can be detected by Brillouin-scattering-based optical fiber sensor systems. In a recent study, a pre-pump-pulse Brillouin optical time-domain analysis (PPP-BOTDA) system was employed for detection of a single microcrack. Specific characteristics of the Brillouin gain spectrum, such as the Brillouin frequency shift and the Brillouin gain spectrum width, were utilized in order to detect the formation and growth of microcracks with crack opening displacements as small as 25 μm. In most situations, formations of neighboring microcracks are not detected due to inherent limitations of Brillouin-based systems. In the study reported here, the capability of PPP-BOTDA for detection of two neighboring microcracks was investigated in terms of the proximity of the microcracks to each other, i.e., the crack spacing distance, the crack opening displacement, and the spatial resolution of the PPP-BOTDA. The study encompassed both theoretical and experimental investigations. The concept of a shape index is introduced in order to establish an analytical method for gauging the influence of neighboring microcracks on the detection and microcrack-differentiation capabilities of Brillouin-based optical fiber sensor systems.
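The resolution limitation can be illustrated numerically: model each microcrack as a narrow strain spike along the fiber and the interrogator as a boxcar average over its spatial resolution. Cracks spaced closer than the resolution then merge into a single measured peak. The positions and the 0.2-m resolution below are illustrative numbers, not the study's PPP-BOTDA parameters.

```python
def measured_profile(crack_positions, spatial_resolution, z_points):
    """Boxcar-average ideal crack spikes over the spatial resolution:
    at each position z, count cracks within half a resolution cell."""
    half = spatial_resolution / 2.0
    return [sum(1.0 for c in crack_positions if abs(z - c) <= half)
            for z in z_points]

def peak_count(profile):
    """Count distinct runs at the profile's maximum value."""
    m = max(profile)
    runs, inside = 0, False
    for v in profile:
        if v == m and not inside:
            runs, inside = runs + 1, True
        elif v != m:
            inside = False
    return runs

z = [i * 0.01 for i in range(0, 201)]              # 0..2 m, 1 cm steps
well_separated = measured_profile([0.5, 1.5], 0.2, z)   # 1 m apart
too_close = measured_profile([0.95, 1.05], 0.2, z)      # 0.1 m apart
```

With a 0.2-m resolution, the 1-m-spaced cracks remain two peaks, while the 0.1-m-spaced pair collapses into one.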
Open solutions to distributed control in ground tracking stations
NASA Technical Reports Server (NTRS)
Heuser, William Randy
1994-01-01
The advent of high-speed local area networks has made it possible to interconnect small, powerful computers to function together as a single large computer. Today, distributed computer systems are the new paradigm for large-scale computing systems. However, the communications provided by the local area network are only one part of the solution. The services and protocols used by the application programs to communicate across the network are as indispensable as the local area network itself, and a selection of services and protocols that does not match the system requirements will limit the capabilities, performance, and expansion of the system. Proprietary solutions are available but are usually limited to a select set of equipment. There are, however, two solutions based on 'open' standards, and the question that must be answered is 'which one is the best for my job?' This paper examines a model for tracking stations and their requirements for interprocessor communications in the next century. The model and requirements are matched with the models and services provided by five different software architectures and supporting protocol solutions. Several key services are examined in detail to determine which services and protocols most closely match the requirements of the tracking station environment. The study reveals that the protocols are tailored to the problem domains for which they were originally designed. Further, the study reveals that the process control model is the closest match to the tracking station model.
M.T. Tyree; H. Cochard; P. Cruziat
2003-01-01
When petioles of transpiring leaves are cut in the air, according to the 'Scholander assumption', the vessels cut open should fill with air as the water is drained away by continued transpiration. The distribution of air-filled vessels versus distance from the cut surface should then match the distribution of lengths of 'open vessels', i.e. vessels cut...
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data.
Delussu, Giovanni; Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured heterogeneous biomedical and clinical data. PyEHR adopts the openEHR's formalisms to guarantee the decoupling of data descriptions from implementation details and exploits structure indexing to accelerate searches. Data persistence is guaranteed by a driver layer with a common driver interface. Interfaces for two NoSQL Database Management Systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called "Constant Load" and "Constant Number of Records", with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes.
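The driver layer described above, a common interface behind which concrete backends sit, is a standard abstraction and can be sketched briefly. The class and method names below are illustrative, not the actual PyEHR driver interface.

```python
from abc import ABC, abstractmethod

class Driver(ABC):
    """Common driver interface: callers are decoupled from the backend,
    so MongoDB, Elasticsearch, or a test double can sit behind it."""
    @abstractmethod
    def save(self, record_id, record): ...

    @abstractmethod
    def find(self, **criteria): ...

class InMemoryDriver(Driver):
    """Stand-in backend for illustration (a real system would wrap a
    MongoDB or Elasticsearch client behind the same interface)."""
    def __init__(self):
        self._store = {}

    def save(self, record_id, record):
        self._store[record_id] = record

    def find(self, **criteria):
        return [r for r in self._store.values()
                if all(r.get(k) == v for k, v in criteria.items())]

db = InMemoryDriver()
db.save(1, {"archetype": "blood_pressure", "systolic": 120})
db.save(2, {"archetype": "heart_rate", "rate": 60})
matches = db.find(archetype="blood_pressure")
```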
NASA Astrophysics Data System (ADS)
Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou
2014-10-01
The existing distributed quantum gates required physical particles to be transmitted between two distant nodes in the quantum network. We here demonstrate the possibility to implement distributed quantum computation without transmitting any particles. We propose a scheme for a distributed controlled-phase gate between two distant quantum-dot electron-spin qubits in optical microcavities. The two quantum-dot-microcavity systems are linked by a nested Michelson-type interferometer. A single photon acting as ancillary resource is sent in the interferometer to complete the distributed controlled-phase gate, but it never enters the transmission channel between the two nodes. Moreover, we numerically analyze the effect of experimental imperfections and show that the present scheme can be implemented with high fidelity in the ideal asymptotic limit. The scheme provides further evidence of quantum counterfactuality and opens promising possibilities for distributed quantum computation.
Celesti, Antonio; Fazio, Maria; Romano, Agata; Bramanti, Alessia; Bramanti, Placido; Villari, Massimo
2018-05-01
The Open Archival Information System (OAIS) is a reference model for organizing people and resources in a system, and it is already adopted in care centers and medical systems to efficiently manage clinical data, medical personnel, and patients. Archival storage systems are typically implemented using traditional relational database systems, but relation-oriented technology strongly limits efficiency in the management of huge amounts of patients' clinical data, especially in emerging cloud-based systems, which are distributed. In this paper, we present an OAIS healthcare architecture able to manage a huge number of HL7 clinical documents in a scalable way. Specifically, it is based on a NoSQL column-oriented Database Management System deployed in the cloud, so as to benefit from big tables and wide rows over a virtual distributed infrastructure. We developed a prototype of the proposed architecture at the IRCCS, and we evaluated its efficiency in a real case study.
Thermodynamic Vent System for an On-Orbit Cryogenic Reaction Control Engine
NASA Technical Reports Server (NTRS)
Hurlbert, Eric A.; Romig, Kris A.; Jimenez, Rafael; Flores, Sam
2012-01-01
A report discusses a cryogenic reaction control system (RCS) that integrates a Joule-Thomson (JT) device (expansion valve) and a thermodynamic vent system (TVS) with a cryogenic distribution system to allow fine control of the propellant quality (subcooled liquid) during operation of the device. It enables zero venting when coupled with an RCS engine. Proper attachment locations and sizing of the orifice with respect to the propellant distribution line are required to facilitate line conditioning. During operations, system instrumentation was strategically installed along the distribution/TVS line assembly, and temperature control bands were identified. A sub-scale run tank, a full-scale distribution line, an open-loop TVS, and a combination of procured and custom-fabricated cryogenic components were used in the cryogenic RCS build-up. Simulated on-orbit activation and thruster firing profiles were performed to quantify system heat gain and evaluate the TVS's capability to maintain the required propellant conditions at the inlet to the engine valves. Test data determined that a small control valve, such as a piezoelectric one, is optimal for continuously providing the required thermal control. The data obtained from testing have also assisted with the development of fluid and thermal models of an RCS to refine integrated cryogenic propulsion system designs. This system allows a liquid oxygen-based main propulsion and reaction control system for a spacecraft, which improves performance, safety, and cost over conventional hypergolic systems due to higher performance, use of nontoxic propellants, potential for integration with life support and power subsystems, and compatibility with in-situ produced propellants.
Vortex Noise from Rotating Cylindrical Rods
NASA Technical Reports Server (NTRS)
Stowell, E Z; Deming, A F
1935-01-01
A series of round rods of the same diameter were rotated individually about the mid-point of each rod. Vortices are shed from the rods when in motion, giving rise to the emission of sound. With the rotating system placed in the open air, the distribution of sound in space, the acoustical power output, and the spectral distribution have been studied. The frequency of emission of vortices from any point on the rod is given by the formula of von Kármán. From the spectrum, estimates are made of the distribution of acoustical power along the rod, the amount of air concerned in sound production, the "equivalent size" of the vortices, and the acoustical energy content of each vortex.
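The von Kármán shedding relation gives the emission frequency at a point on the rod as f = St · U / d, where U = ω·r is the local speed at radius r. A short sketch with illustrative numbers (rod size and speed are assumed, not the paper's test conditions; St ≈ 0.2 is the textbook value for a circular cylinder):

```python
import math

St = 0.2                              # Strouhal number, circular cylinder
d = 0.01                              # rod diameter, m (assumed)
rpm = 1200.0                          # rotation speed (assumed)
omega = 2.0 * math.pi * rpm / 60.0    # angular speed, rad/s

def shedding_frequency(r):
    """Vortex-shedding frequency (Hz) at radius r along the rod:
    f = St * (omega * r) / d."""
    return St * omega * r / d

tip = shedding_frequency(0.5)    # at the rod tip, r = 0.5 m
mid = shedding_frequency(0.25)   # halfway out
```

Because the local speed grows linearly with radius, so does the shedding frequency, which is why a rotating rod emits a spectrum rather than a single tone.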
Making Peer-Assisted Content Distribution Robust to Collusion Using Bandwidth Puzzles
NASA Astrophysics Data System (ADS)
Reiter, Michael K.; Sekar, Vyas; Spensky, Chad; Zhang, Zhenghao
Many peer-assisted content-distribution systems reward a peer based on the amount of data that this peer serves to others. However, validating that a peer did so is, to our knowledge, an open problem; e.g., a group of colluding attackers can earn rewards by claiming to have served content to one another, when they have not. We propose a puzzle mechanism to make contribution-aware peer-assisted content distribution robust to such collusion. Our construction ties solving the puzzle to possession of specific content and, by issuing puzzle challenges simultaneously to all parties claiming to have that content, our mechanism prevents one content-holder from solving many others' puzzles. We prove (in the random oracle model) the security of our scheme, describe our integration of bandwidth puzzles into a media streaming system, and demonstrate the resulting attack resilience via simulations.
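The abstract does not reproduce the paper's exact construction; a minimal illustrative sketch of a possession-binding puzzle is shown below in Python, with hypothetical parameters (block size, number of challenged blocks) chosen only for the demo:

```python
import hashlib, os, random

def make_challenge(num_blocks, k, rng):
    """Verifier: pick k random block indices and a fresh nonce."""
    nonce = rng.randbytes(16)            # requires Python 3.9+
    indices = rng.sample(range(num_blocks), k)
    return nonce, indices

def solve(nonce, indices, blocks):
    """Prover: hashing the challenged blocks requires actually having them."""
    h = hashlib.sha256(nonce)
    for i in indices:
        h.update(blocks[i])
    return h.hexdigest()

rng = random.Random(42)
blocks = [os.urandom(1024) for _ in range(256)]   # the content, in 1 KiB blocks
nonce, idx = make_challenge(len(blocks), 8, rng)
answer = solve(nonce, idx, blocks)

assert solve(nonce, idx, blocks) == answer        # a real holder can re-derive it
missing = list(blocks)
missing[idx[0]] = b""                             # a peer lacking one block fails
assert solve(nonce, idx, missing) != answer
```

Issuing such challenges to all claimants simultaneously, under a tight response deadline, is what prevents a single content-holder from solving puzzles on behalf of colluders.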
NASA Astrophysics Data System (ADS)
Zhang, Yang; Mohanty, Debapriya P.; Tomar, Vikas
2016-11-01
Inconel 617 (IN-617) is a solid solution alloy, which is widely used in applications that require high-temperature component operation due to its high-temperature stability and strength as well as strong resistance to oxidation and carburization. The current work focuses on in situ measurements of stress distribution in IN-617 under 3-point bending at elevated temperature. A nanomechanical Raman spectroscopy (NMRS) measurement platform was designed and built by combining a customized open Raman spectroscopy system, incorporating motorized scanning and imaging, with a nanomechanical loading platform. Based on NMRS scans of the crack-tip notch area, the stress distribution under applied load is predicted with micron-scale resolution for the analyzed microstructures. A finite element method-based formulation to predict crack-tip stresses is presented and validated using the presented experimental data.
The Future of the Andrew File System
Brashear, Derrick; Altman, Jeffrey
2018-05-25
The talk will discuss the ten operational capabilities that have made AFS unique in the distributed file system space and how these capabilities are being expanded upon to meet the needs of the 21st century. Derrick Brashear and Jeffrey Altman will present a technical road map of new features and technical innovations that are under development by the OpenAFS community and Your File System, Inc., funded by a U.S. Department of Energy Small Business Innovative Research grant. The talk will end with a comparison of AFS to its modern-day competitors.
Real-time monitoring of Lévy flights in a single quantum system
NASA Astrophysics Data System (ADS)
Issler, M.; Höller, J.; Imamoǧlu, A.
2016-02-01
Lévy flights are random walks where the dynamics is dominated by rare events. Even though they have been studied in vastly different physical systems, their observation in a single quantum system has remained elusive. Here we analyze a periodically driven open central spin system and demonstrate theoretically that the dynamics of the spin environment exhibits Lévy flights. For the particular realization in a single-electron charged quantum dot driven by periodic resonant laser pulses, we use Monte Carlo simulations to confirm that the long waiting times between successive nuclear spin-flip events are governed by a power-law distribution; the corresponding exponent η = -3/2 can be directly measured in real time by observing the waiting time distribution of successive photon emission events. Remarkably, the dominant intrinsic limitation of the scheme arising from nuclear quadrupole coupling can be minimized by adjusting the magnetic field or by implementing spin echo.
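A waiting-time density with exponent η = -3/2 can be sampled and checked by inverse-transform Monte Carlo; the sketch below is a generic illustration of such a power-law check, not the paper's quantum-dot simulation:

```python
import math, random

def sample_waiting_time(t0, rng):
    """Inverse-transform sample from p(t) ∝ t**(-3/2) for t >= t0:
    the survival function is S(t) = (t/t0)**(-1/2), so t = t0 * u**(-2)."""
    u = rng.random() or 1e-12   # guard against u == 0
    return t0 * u ** -2

rng = random.Random(1)
times = [sample_waiting_time(1.0, rng) for _ in range(200_000)]

# Estimate the tail exponent from the empirical survival function: for a
# t**(-3/2) density, log S(t) vs log t has slope -1/2.
n = len(times)
ts = sorted(times)
t_lo, t_hi = ts[n // 2], ts[int(n * 0.99)]
s_lo = sum(t > t_lo for t in times) / n
s_hi = sum(t > t_hi for t in times) / n
slope = (math.log(s_hi) - math.log(s_lo)) / (math.log(t_hi) - math.log(t_lo))
print(round(slope, 2))  # ≈ -0.5, consistent with eta = -3/2
```

The heavy tail is what makes the dynamics Lévy-like: a few waiting times in the sample are orders of magnitude longer than the typical one.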
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
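As a rough illustration of the receding-horizon idea (not the paper's model), the following toy dispatch searches a short look-ahead window over discretized battery actions and applies only the first move of the cheapest plan; all loads, forecasts, and costs are made-up numbers:

```python
import itertools

# Toy receding-horizon dispatch: at each step, search all discretized battery
# plans over a short horizon, keep the cheapest, apply only its first action.
LOAD = 5.0                             # constant demand (MW)
WIND = [4.0, 1.0, 0.5, 3.0, 5.0, 2.0]  # wind forecast per step (MW)
ACTIONS = [-1.0, 0.0, 1.0]             # battery: discharge / idle / charge (MW)
CAP, H, FUEL = 3.0, 3, 10.0            # storage capacity (MWh), horizon, $/MWh

def plan_cost(soc, step, actions):
    """Fuel cost of a candidate battery plan; inf if storage limits are violated."""
    cost = 0.0
    for k, a in enumerate(actions):
        w = WIND[min(step + k, len(WIND) - 1)]
        soc += a
        if not 0.0 <= soc <= CAP:
            return float("inf")
        cost += FUEL * max(0.0, LOAD - w + a)  # diesel covers the remaining gap
    return cost

def mpc_dispatch(soc0):
    soc, schedule = soc0, []
    for step in range(len(WIND)):
        best = min(itertools.product(ACTIONS, repeat=H),
                   key=lambda acts: plan_cost(soc, step, acts))
        a = best[0]                    # receding horizon: apply the first move
        soc += a
        schedule.append((a, max(0.0, LOAD - WIND[step] + a)))
    return schedule

sched = mpc_dispatch(soc0=1.0)
print(sched)
```

The look-ahead lets the controller hold stored energy for forecast wind lulls instead of reacting myopically, which is the qualitative advantage the paper attributes to MPC over open-loop dispatch.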
The QuakeSim Project: Web Services for Managing Geophysical Data and Applications
NASA Astrophysics Data System (ADS)
Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet
2008-04-01
We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.
Building a Trustworthy Environmental Science Data Repository: Lessons Learned from the ORNL DAAC
NASA Astrophysics Data System (ADS)
Wei, Y.; Santhana Vannan, S. K.; Boyer, A.; Beaty, T.; Deb, D.; Hook, L.
2017-12-01
The Oak Ridge National Laboratory Distributed Active Archive Center (ORNL DAAC, https://daac.ornl.gov) for biogeochemical dynamics is one of NASA's Earth Observing System Data and Information System (EOSDIS) data centers. The mission of the ORNL DAAC is to assemble, distribute, and provide data services for a comprehensive archive of terrestrial biogeochemistry and ecological dynamics observations and models to facilitate research, education, and decision-making in support of NASA's Earth Science. Since its establishment in 1994, the ORNL DAAC has been continuously building itself into a trustworthy environmental science data repository by not only ensuring the quality and usability of its data holdings, but also optimizing its data publication and management processes. This paper describes the lessons learned from the ORNL DAAC's effort toward this goal. The ORNL DAAC has been proactively implementing international community standards throughout its data management life cycle, including data publication, preservation, discovery, visualization, and distribution. Data files in standard formats, detailed documentation, and metadata following standard models are prepared to improve the usability and longevity of data products. Assignment of a Digital Object Identifier (DOI) ensures the identifiability and accessibility of every data product, including the different versions and revisions of its life cycle. The ORNL DAAC's data citation policy assures that data producers receive appropriate recognition for the use of their products. Web service standards, such as OpenSearch and Open Geospatial Consortium (OGC) services, promote the discovery, visualization, distribution, and integration of the ORNL DAAC's data holdings. Recently, the ORNL DAAC began efforts to optimize and standardize its data archival and publication workflows, to improve the efficiency and transparency of its data management processes.
Service Oriented Architecture for Wireless Sensor Networks in Agriculture
NASA Astrophysics Data System (ADS)
Sawant, S. A.; Adinarayana, J.; Durbha, S. S.; Tripathy, A. K.; Sudharsan, D.
2012-08-01
Rapid advances in Wireless Sensor Networks (WSNs) for agricultural applications have provided a platform for better decision making in crop planning and management, particularly in precision agriculture. Due to the ever-increasing spread of WSNs, there is a need for standards, i.e., a set of specifications and encodings to bring multiple sensor networks onto a common platform. Distributed sensing systems, when brought together, can facilitate better decision making in the agricultural domain. The Open Geospatial Consortium (OGC), through Sensor Web Enablement (SWE), provides guidelines for semantic and syntactic standardization of sensor networks. In this work, two distributed sensing systems (Agrisens and FieldServer) were selected to implement OGC SWE standards through a Service Oriented Architecture (SOA) approach. Online interoperable data processing was developed through SWE components such as Sensor Model Language (SensorML) and Sensor Observation Service (SOS). An integrated web client was developed to visualize the sensor observations and measurements, enabling systematic retrieval of crop water resource availability and requirements for both sensing devices. Further, the client also has the ability to operate in an interoperable manner with any other OGC-standardized WSN system. The study of WSN systems has shown that there is a need to augment the operations and processing capabilities of SOS in order to interpret collected sensor data and implement modelling services. Also, given the anticipated low-cost availability of WSN systems in the future, it will be possible to implement the OGC-standardized SWE framework for agricultural applications with open source software tools.
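As an illustration of the kind of SOS request such a client issues, the sketch below builds a GetObservation call in a common key-value-pair encoding; the endpoint, offering, and property names are hypothetical, not the actual Agrisens/FieldServer identifiers:

```python
from urllib.parse import urlencode

# Hypothetical endpoint and identifiers; the real Agrisens/FieldServer service
# addresses and offering names are not given in the abstract.
SOS_ENDPOINT = "http://example.org/sos"

def get_observation_url(offering, observed_property, start, end):
    """Build an SOS 1.0 GetObservation request in a common KVP encoding."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "eventTime": f"{start}/{end}",
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }
    return SOS_ENDPOINT + "?" + urlencode(params)

url = get_observation_url("SOIL_MOISTURE", "urn:ogc:def:property:soil_moisture",
                          "2012-06-01T00:00:00Z", "2012-06-02T00:00:00Z")
print(url)
```

Because every standardized service answers the same request shape, the same client code can query any OGC-compliant sensor network, which is the interoperability claim made above.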
Implementation of Grid Tier 2 and Tier 3 facilities on a Distributed OpenStack Cloud
NASA Astrophysics Data System (ADS)
Limosani, Antonio; Boland, Lucien; Coddington, Paul; Crosby, Sean; Huang, Joanna; Sevior, Martin; Wilson, Ross; Zhang, Shunde
2014-06-01
The Australian Government is making a AUD 100 million investment in Compute and Storage for the academic community. The Compute facilities are provided in the form of 30,000 CPU cores located at 8 nodes around Australia in a distributed virtualized Infrastructure as a Service facility based on OpenStack. The storage will eventually consist of over 100 petabytes located at 6 nodes. All will be linked via a 100 Gb/s network. This proceeding describes the development of a fully connected WLCG Tier-2 grid site as well as a general purpose Tier-3 computing cluster based on this architecture. The facility employs an extension to Torque to enable dynamic allocations of virtual machine instances. A base Scientific Linux virtual machine (VM) image is deployed in the OpenStack cloud and automatically configured as required using Puppet. Custom scripts are used to launch multiple VMs, integrate them into the dynamic Torque cluster and to mount remote file systems. We report on our experience in developing this nation-wide ATLAS and Belle II Tier 2 and Tier 3 computing infrastructure using the national Research Cloud and storage facilities.
NASA Astrophysics Data System (ADS)
Choi, D. H.; Noh, J. H.; Selph, K. E.; Lee, C. M.
2016-02-01
Photosynthetic picoeukaryotes (PPEs) are major oceanic primary producers. However, the diversity of such communities remains poorly understood, especially in the northwestern (NW) Pacific. We investigated the abundance and diversity of PPEs, and recorded environmental variables, along a transect from the coast to the open Pacific Ocean. High-throughput tag sequencing (using the MiSeq system) revealed the diversity of plastid 16S rRNA genes. The dominant PPEs changed at the class level along the transect. Prymnesiophyceae were the only dominant PPEs in the warm pool of the NW Pacific, but Mamiellophyceae dominated in coastal waters of the East China Sea. Phylogenetically, most Prymnesiophyceae sequences could not be resolved at lower taxonomic levels because no close relatives have been cultured. Within the Mamiellophyceae, the genera Micromonas and Ostreococcus dominated in marginal coastal areas affected by open water, whereas Bathycoccus dominated in the lower euphotic depths of open oligotrophic waters. Cryptophyceae and Phaeocystis (of the Prymnesiophyceae) dominated in areas affected principally by coastal water. We also defined the biogeographical distributions of Chrysophyceae, Prasinophyceae, Bacillariophyceae, and Pelagophyceae. These distributions were influenced by temperature, salinity, and chlorophyll a and nutrient concentrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago
2014-07-15
A MATLAB toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving GridPV Toolbox information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for visualizing the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions.
A New Architecture for Visualization: Open Mission Control Technologies
NASA Technical Reports Server (NTRS)
Trimble, Jay
2017-01-01
Open Mission Control Technologies (MCT) is a new architecture for visualization of mission data. Driven by requirements for new mission capabilities, including distributed mission operations, access to data anywhere, customization by users, synthesis of multiple data sources, and flexibility for multi-mission adaptation, Open MCT provides users with an integrated, customizable environment. Developed at NASA's Ames Research Center (ARC), in collaboration with NASA's Advanced Multimission Operations System (AMMOS) and NASA's Jet Propulsion Laboratory (JPL), Open MCT is getting its first mission use on the Jason 3 Mission, and is also available in the testbed for the Mars 2020 Rover and for development use for NASA's Resource Prospector Lunar Rover. The open source nature of the project provides for use outside of space missions, including open source contributions from a community of users. The defining features of Open MCT for mission users are data integration, end-user composition, and multiple views. Data integration provides access to mission data across domains in one place, making data such as activities, timelines, telemetry, imagery, event timers, and procedures available without application switching. End-user composition provides users with layouts, which act as a canvas to assemble visualizations. Multiple views provide the capability to view the same data in different ways, with live switching of data views in place. Open MCT is browser based, and works on the desktop as well as tablets and phones, providing access to data anywhere. An early use case for mobile data access took place on the Resource Prospector (RP) Mission Distributed Operations Test, in which rover engineers in the field were able to view telemetry on their phones. We envision this capability providing decision support to on-console operators from off-duty personnel. The plug-in architecture also allows for adaptation to different mission capabilities.
Different data types and capabilities may be added or removed using plugins. An API provides a means to write new capabilities and to create data adaptors. Data plugins exist for NASA mission data sources, and adaptors have been written by international and commercial users. Open MCT is open source, which enables collaborative development across organizations and also makes the product available outside of the space community, providing a source of usage and ideas to drive product design and development. The combination of open source with an Apache 2 license, and distribution on GitHub, has enabled an active community of users and contributors. The spectrum of users for Open MCT is, to our knowledge, unprecedented for mission software. In addition to our NASA users, we have, through open source, had users and inquiries from projects ranging from Internet of Things, to radio hobbyists, to farming projects. This community of contributors enables a flow of ideas inside and outside of the space community.
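Open MCT itself is a JavaScript framework and its actual plugin API differs; the Python sketch below only illustrates the general pattern described here, in which views and data adaptors are installed as plugins against a small registry:

```python
# Views and adaptors are registered by plugins; a plugin is just a callable
# that receives the registry and installs what it provides.
class Registry:
    def __init__(self):
        self.views = {}      # view name -> renderer for tabular data
        self.adapters = {}   # source name -> telemetry adaptor

    def install(self, plugin):
        plugin(self)

def table_view_plugin(registry):
    registry.views["table"] = lambda rows: [list(r) for r in rows]

def csv_adapter_plugin(registry):
    registry.adapters["csv"] = lambda text: [r.split(",") for r in text.splitlines()]

registry = Registry()
registry.install(table_view_plugin)
registry.install(csv_adapter_plugin)

rows = registry.adapters["csv"]("t,value\n0,1.5\n1,1.7")
print(registry.views["table"](rows))  # [['t', 'value'], ['0', '1.5'], ['1', '1.7']]
```

The point of the pattern is that the core never enumerates its capabilities: adding a new telemetry source or view is a matter of installing another plugin, not modifying the framework.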
A security mechanism based on evolutionary game in fog computing.
Sun, Yan; Lin, Fuhong; Zhang, Nan
2018-02-01
Fog computing is a distributed computing paradigm at the edge of the network and requires the cooperation of users and sharing of resources. When users in fog computing open their resources, their devices are easily intercepted and attacked because they are accessed over wireless networks and are widely geographically distributed. In this study, a credible third party was introduced to supervise the behavior of users and protect the security of user cooperation. A fog computing security mechanism based on the human nervous system is proposed, and the strategy for a stable system evolution is calculated. The MATLAB simulation results show that the proposed mechanism can effectively reduce the number of attack behaviors and positively stimulate users to cooperate in application tasks.
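The paper's game model is not reproduced in the abstract; the sketch below illustrates the general evolutionary-game mechanism with two strategies (cooperate vs. attack) under replicator dynamics, using illustrative payoffs in which third-party supervision punishes attacks:

```python
def step(x, payoff, dt=0.01):
    """One Euler step of replicator dynamics; x = fraction of cooperators."""
    fc = payoff[0][0] * x + payoff[0][1] * (1 - x)   # cooperator fitness
    fa = payoff[1][0] * x + payoff[1][1] * (1 - x)   # attacker fitness
    fbar = x * fc + (1 - x) * fa                     # population mean fitness
    return x + dt * x * (fc - fbar)

# Illustrative payoffs with supervision: the third party punishes attacks, so
# attacking pays worse than cooperating against any opponent.
supervised = [[3.0, 1.0],    # cooperator vs (cooperator, attacker)
              [2.0, -1.0]]   # attacker vs (cooperator, attacker)

x = 0.2                      # start with 80% attackers
for _ in range(20_000):
    x = step(x, supervised)
print(round(x, 3))           # → 1.0: cooperation takes over
```

With these payoffs the cooperator fitness exceeds the attacker fitness at every population mix, so full cooperation is the evolutionarily stable outcome; that is the kind of stable-evolution strategy the mechanism is designed to induce.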
Deducing growth mechanisms for minerals from the shapes of crystal size distributions
Eberl, D.D.; Drits, V.A.; Srodon, J.
1998-01-01
Crystal size distributions (CSDs) of natural and synthetic samples are observed to have several distinct and different shapes. We have simulated these CSDs using three simple equations: the Law of Proportionate Effect (LPE), a mass balance equation, and equations for Ostwald ripening. The following crystal growth mechanisms are simulated using these equations and their modifications: (1) continuous nucleation and growth in an open system, during which crystals nucleate at either a constant, decaying, or accelerating nucleation rate, and then grow according to the LPE; (2) surface-controlled growth in an open system, during which crystals grow with an essentially unlimited supply of nutrients according to the LPE; (3) supply-controlled growth in an open system, during which crystals grow with a specified, limited supply of nutrients according to the LPE; (4) supply- or surface-controlled Ostwald ripening in a closed system, during which the relative rate of crystal dissolution and growth is controlled by differences in specific surface area and by diffusion rate; and (5) supply-controlled random ripening in a closed system, during which the rate of crystal dissolution and growth is random with respect to specific surface area. Each of these mechanisms affects the shapes of CSDs. For example, mechanism (1) above with a constant nucleation rate yields asymptotically-shaped CSDs for which the variance of the natural logarithms of the crystal sizes (β²) increases exponentially with the mean of the natural logarithms of the sizes (α). Mechanism (2) yields lognormally-shaped CSDs, for which β² increases linearly with α, whereas mechanisms (3) and (5) do not change the shapes of CSDs, with β² remaining constant with increasing α. During supply-controlled Ostwald ripening (4), initial lognormally-shaped CSDs become more symmetric, with β² decreasing with increasing α. Thus, crystal growth mechanisms often can be deduced by noting trends in α versus β² of CSDs for a series of related samples.
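Mechanism (2) can be illustrated numerically: under the LPE each crystal grows by a random proportion of its current size, so ln(size) performs a random walk and the CSD tends toward lognormal, with the variance of ln(size) growing alongside its mean. A small sketch (the uniform growth-rate distribution is an arbitrary assumption, not the paper's):

```python
import math, random

def grow_lpe(sizes, cycles, rng):
    """Law of Proportionate Effect: each cycle a crystal grows by a random
    fraction of its current size, x -> x * (1 + e), e uniform on [0, 0.1]."""
    for _ in range(cycles):
        sizes = [x * (1 + 0.1 * rng.random()) for x in sizes]
    return sizes

def alpha_beta2(sizes):
    """CSD shape parameters: mean (alpha) and variance (beta^2) of ln(size)."""
    logs = [math.log(x) for x in sizes]
    a = sum(logs) / len(logs)
    return a, sum((v - a) ** 2 for v in logs) / len(logs)

rng = random.Random(0)
sizes = [1.0] * 5000            # a cohort of identical nuclei
stats = []
for stage in range(3):          # three successive growth stages
    sizes = grow_lpe(sizes, 200, rng)
    stats.append(alpha_beta2(sizes))
for a, b2 in stats:
    print(f"alpha={a:.2f}  beta2={b2:.4f}")
```

Both parameters grow with each stage, and β² grows roughly linearly with α, reproducing the signature attributed above to surface-controlled open-system growth.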
Vision for an Open, Global Greenhouse Gas Information System (GHGIS)
NASA Astrophysics Data System (ADS)
Duren, R. M.; Butler, J. H.; Rotman, D.; Ciais, P.; Greenhouse Gas Information System Team
2010-12-01
Over the next few years, an increasing number of entities ranging from international, national, and regional governments, to businesses and private land-owners, are likely to become more involved in efforts to limit atmospheric concentrations of greenhouse gases. In such a world, geospatially resolved information about the location, amount, and rate of greenhouse gas (GHG) emissions will be needed, as well as the stocks and flows of all forms of carbon through the earth system. The ability to implement policies that limit GHG concentrations would be enhanced by a global, open, and transparent greenhouse gas information system (GHGIS). An operational and scientifically robust GHGIS would combine ground-based and space-based observations, carbon-cycle modeling, GHG inventories, synthesis analysis, and an extensive data integration and distribution system, to provide information about anthropogenic and natural sources, sinks, and fluxes of greenhouse gases at temporal and spatial scales relevant to decision making. The GHGIS effort was initiated in 2008 as a grassroots inter-agency collaboration intended to identify the needs for such a system, assess the capabilities of current assets, and suggest priorities for future research and development. We will present a vision for an open, global GHGIS including latest analysis of system requirements, critical gaps, and relationship to related efforts at various agencies, the Group on Earth Observations, and the Intergovernmental Panel on Climate Change.
Demographic management in a federated healthcare environment.
Román, I; Roa, L M; Reina-Tosina, J; Madinabeitia, G
2006-09-01
The purpose of this paper is to provide a further step toward the decentralization of identification and demographic information about persons by solving issues related to the integration of demographic agents in a federated healthcare environment. The aim is to identify a particular person in every system of a federation and to obtain a unified view of his/her demographic information stored in different locations. This work is based on semantic models and techniques, and pursues the reconciliation of several current standardization works, including ITU-T's Open Distributed Processing, CEN's prEN 12967, OpenEHR's dual and reference models, CEN's General Purpose Information Components, and CORBAmed's PID service. We propose a new paradigm for the management of person identification and demographic data, based on the development of an open architecture of specialized distributed components together with techniques for the efficient management of domain ontologies, in order to provide a federated demographic service. This new service enhances previous correlation solutions, sharing ideas with different standards and domains such as semantic techniques and database systems. The federation philosophy forces us to devise solutions to the semantic, functional, and instance incompatibilities in our approach. Although this work is based on several models and standards, we have improved on them by combining their contributions and developing a federated architecture that does not require the centralization of demographic information. The solution is thus a good approach to integration problems, and the applied methodology can be easily extended to other tasks in the healthcare organization.
Blood Vessel Adaptation with Fluctuations in Capillary Flow Distribution
Hu, Dan; Cai, David; Rangan, Aaditya V.
2012-01-01
Throughout the lives of animals and human beings, blood vessel systems continuously adapt their structures – the diameter of vessel lumina, the thickness of vessel walls, and the number of micro-vessels – to meet the changing metabolic demand of the tissue. The competition between an intrinsic shrinking tendency of luminal diameters and a growth stimulus from the wall shear stress plays a key role in the adaptation of luminal diameters. However, previous studies have shown that adaptation dynamics based only on these two effects is unstable. In this work, we propose a minimal adaptation model of vessel luminal diameters, in which we take into account the effects of metabolic flow regulation in addition to wall shear stresses and the shrinking tendency of luminal diameters. In particular, we study the role, in the adaptation process, of fluctuations in the capillary flow distribution, which are an important means of metabolic flow regulation. The fluctuation in the flow of a capillary group is idealized as a switch between two states, i.e., an open state and a closed state. Using this model, we show that the adaptation of a blood vessel system driven by wall shear stress can be efficiently stabilized when the open-time ratio responds sensitively to capillary flows. As micro-vessel rarefaction is observed in our simulations with a uniformly decreased open-time ratio of capillary flows, our results point to a possible origin of micro-vessel rarefaction, which is believed to induce hypertension. PMID:23029014
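A reduced one-dimensional caricature of the argument (not the paper's network model): at fixed pressure drop, wall shear stress rises with radius, so shear-only adaptation is unstable, while an open-time ratio that falls off sensitively with flow damps the stimulus and stabilizes a common equilibrium radius. The functional forms and constants below are illustrative assumptions:

```python
def adapt(r, stimulus, steps=4000, dt=0.01, shrink=1.0):
    """Euler-integrate dr/dt = stimulus(r) - shrink for two parallel vessels."""
    for _ in range(steps):
        r = [max(1e-6, x + dt * (stimulus(x) - shrink)) for x in r]
    return r

# Shear-only adaptation: at fixed pressure drop, wall shear stress rises with
# radius (stimulus = x in scaled units), so the larger vessel keeps growing
# and the smaller one collapses -- the known instability.
shear_only = lambda x: x

# With metabolic regulation: the open-time ratio of the downstream capillary
# group falls off sensitively with flow (~x**4 for Poiseuille flow), damping
# the effective shear stimulus at large radii.
regulated = lambda x: 2 * x / (1 + x ** 4)

unstable = adapt([1.05, 0.95], shear_only)
stable = adapt([1.05, 0.95], regulated)
print(unstable)  # radii diverge: one vessel grows without bound, one collapses
print(stable)    # both vessels settle at the same equilibrium radius
```

The collapsing branch of the unregulated case is the toy analogue of micro-vessel rarefaction when the open-time ratio is uniformly suppressed.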
NASA Astrophysics Data System (ADS)
Thomas, W. A.; McAnally, W. H., Jr.
1985-07-01
TABS-2 is a generalized numerical modeling system for open-channel flows, sedimentation, and constituent transport. It consists of more than 40 computer programs to perform modeling and related tasks. The major modeling components--RMA-2V, STUDH, and RMA-4--calculate two-dimensional, depth-averaged flows, sedimentation, and dispersive transport, respectively. The other programs in the system perform digitizing, mesh generation, data management, graphical display, output analysis, and model interfacing tasks. Utilities include file management and automatic generation of computer job control instructions. TABS-2 has been applied to a variety of waterways, including rivers, estuaries, bays, and marshes. It is designed for use by engineers and scientists who may not have a rigorous computer background. Use of the various components is described in Appendices A-O. The bound version of the report does not include the appendices. A looseleaf form with Appendices A-O is distributed to system users.
General Formalism of Decision Making Based on Theory of Open Quantum Systems
NASA Astrophysics Data System (ADS)
Asano, M.; Ohya, M.; Basieva, I.; Khrennikov, A.
2013-01-01
We present the general formalism of decision making based on the theory of open quantum systems. A person (decision maker), say Alice, is considered as a quantum-like system, i.e., a system whose information processing follows the laws of quantum information theory. To make a decision, Alice interacts with a huge mental bath. Depending on the context of decision making, this bath can include her social environment, mass media (TV, newspapers, the Internet), and memory. The dynamics of an ensemble of such Alices is described by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) equation. We speculate that in the course of evolution, biosystems (especially human beings) developed such "mental Hamiltonians" and GKSL operators that any solution of the corresponding GKSL equation stabilizes to a diagonal density operator (in the basis of decision making). This limiting density operator describes a population in which all superpositions of possible decisions have already been resolved. In principle, this approach can be used for the prediction of the distribution of possible decisions in human populations.
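The stabilization to a diagonal density operator can be seen in the simplest GKSL example: pure dephasing of a single qubit in the decision basis (a generic textbook illustration, not the authors' mental Hamiltonian):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b, s=1):
    """Return a + s*b, elementwise."""
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def scale(a, s):
    return [[s * a[i][j] for j in range(2)] for i in range(2)]

SZ = [[1, 0], [0, -1]]          # sigma_z, also the dephasing jump operator L
H = scale(SZ, 0.5)              # H = (omega/2) sigma_z, with omega = 1
GAMMA = 0.5                     # dephasing rate

def gksl_rhs(rho):
    """GKSL right-hand side: -i[H, rho] + gamma (L rho L - rho), since L†L = I."""
    comm = mat_add(mat_mul(H, rho), mat_mul(rho, H), s=-1)
    diss = mat_add(mat_mul(mat_mul(SZ, rho), SZ), rho, s=-1)
    return mat_add(scale(comm, -1j), scale(diss, GAMMA))

rho = [[0.5, 0.5], [0.5, 0.5]]  # equal superposition: maximal off-diagonal terms
dt = 0.001
for _ in range(10_000):         # Euler integration to t = 10
    rho = mat_add(rho, scale(gksl_rhs(rho), dt))

print(abs(rho[0][1]))           # coherence has decayed: the state is near-diagonal
print(abs(rho[0][0]), abs(rho[1][1]))  # populations preserved at 0.5 each
```

The off-diagonal "superposition of decisions" decays while the diagonal populations are untouched, which is exactly the limiting diagonal density operator invoked in the abstract.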
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays, writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
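One common way to partition a spatial index, consistent with (though not necessarily identical to) the scheme described, is to key voxels by Morton (Z-order) codes and assign key ranges to nodes; nearby voxels then get nearby keys, so spatially local reads tend to land on the same machine. An illustrative sketch:

```python
def morton3(x, y, z, bits=10):
    """Interleave the bits of (x, y, z) into a single Z-order key."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for(x, y, z, num_nodes=8, bits=10):
    """Assign a voxel to a cluster node by contiguous Z-order key range."""
    return morton3(x, y, z, bits) * num_nodes >> (3 * bits)

print(morton3(1, 0, 0), morton3(0, 1, 0), morton3(0, 0, 1))  # 1 2 4
print(node_for(5, 7, 3), node_for(5, 7, 4))
```

Because each key range covers a contiguous Z-order run, each node owns a compact cuboid-like region of the image volume rather than a scatter of unrelated voxels.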
A Mobile Multi-Agent Information System for Ubiquitous Fetal Monitoring
Su, Chuan-Jun; Chu, Ta-Wei
2014-01-01
Electronic fetal monitoring (EFM) systems integrate many previously separate clinical activities related to fetal monitoring. Promoting the use of ubiquitous fetal monitoring services with real-time status assessments requires a robust information platform equipped with an automatic diagnosis engine. This paper presents the design and development of a mobile multi-agent platform-based open information system (IMAIS) with an automated diagnosis engine to support intensive and distributed ubiquitous fetal monitoring. The automatic diagnosis engine we developed is capable of analyzing data in both traditional paper-based and digital formats. Issues related to interoperability, scalability, and openness in heterogeneous e-health environments are addressed through the adoption of a FIPA2000-standard-compliant agent development platform, the Java Agent Development Environment (JADE). Integrating the IMAIS with light-weight, portable fetal monitor devices allows for continuous long-term monitoring without interfering with a patient's everyday activities and without restricting her mobility. The system architecture can also be applied to other monitoring scenarios such as elder care and vital sign monitoring. PMID:24452256
Implementation of an OAIS Repository Using Free, Open Source Software
NASA Astrophysics Data System (ADS)
Flathers, E.; Gessler, P. E.; Seamon, E.
2015-12-01
The Northwest Knowledge Network (NKN) is a regional data repository located at the University of Idaho that focuses on the collection, curation, and distribution of research data. To support our home institution and others in the region, we offer services to researchers at all stages of the data lifecycle, from grant application and data management planning to data distribution and archiving. In this role, we recognize the need to work closely with other data management efforts at partner institutions and agencies, as well as with larger aggregation efforts such as our state geospatial data clearinghouses, data.gov, DataONE, and others. In the past, one of our challenges with monolithic, prepackaged data management solutions has been that customization can be difficult to implement and maintain, especially as new versions of the software are released that are incompatible with our local codebase. Our solution is to break the monolith up into its constituent parts, which offers several advantages. First, any customizations that we make tend to fall in areas that can be accessed through Application Program Interfaces (APIs) likely to remain stable over time, so our code stays compatible. Second, as components become obsolete or insufficient to meet new demands, we can replace individual components with minimal effect on the rest of the infrastructure, causing less disruption to operations. Other advantages include increased system reliability, staggered rollout of new features, enhanced compatibility with legacy systems, reduced dependence on a single software company as a point of failure, and the separation of development into manageable tasks. In this presentation, we describe our application of the Service Oriented Architecture (SOA) design paradigm to assemble a data repository that conforms to the Open Archival Information System (OAIS) Reference Model, primarily using a collection of free and open-source software.
We detail the design of the repository, based upon open standards to support interoperability with other institutions' systems and with future versions of our own software components. We also describe the implementation process, including our use of GitHub as a collaboration tool and code repository.
Cunningham, James; Ainsworth, John
2017-01-01
The rise of distributed ledger technology, initiated and exemplified by the Bitcoin blockchain, is having an increasing impact on information technology environments in which there is an emphasis on trust and security. Management of electronic health records, where both conformance with legislative regulations and maintenance of public trust are paramount, is an area where the impact of these new technologies may be particularly beneficial. We present a system that enables fine-grained personalized control of third-party access to patients' electronic health records, allowing individuals to specify when and how their records are accessed for research purposes. The use of the smart-contract-based Ethereum blockchain technology to implement this system allows it to operate in a verifiably secure, trustless, and openly auditable environment, features crucial to health information systems moving forward.
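As a rough sketch of the consent rules such a smart contract might enforce (the actual system is implemented in Solidity on Ethereum and is not reproduced here; all names and the time-window model below are hypothetical):

```python
class ConsentRegistry:
    """In-memory sketch of fine-grained consent: a patient grants a
    researcher access to a record category until an expiry time, and
    every access attempt is appended to an openly auditable log."""

    def __init__(self):
        self.grants = {}   # (patient, researcher, category) -> expiry
        self.audit = []    # append-only access log

    def grant(self, patient, researcher, category, expires_at):
        self.grants[(patient, researcher, category)] = expires_at

    def revoke(self, patient, researcher, category):
        self.grants.pop((patient, researcher, category), None)

    def request_access(self, patient, researcher, category, now):
        """Return True only if a matching, unexpired grant exists;
        record the attempt either way, mimicking on-chain auditability."""
        expiry = self.grants.get((patient, researcher, category))
        allowed = expiry is not None and now < expiry
        self.audit.append((patient, researcher, category, now, allowed))
        return allowed
```

On a blockchain, the audit log and grant table would be contract state, so neither the patient nor the researcher could tamper with the access history after the fact.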
Aerosol and Cloud Observations and Data Products by the GLAS Polar Orbiting Lidar Instrument
NASA Technical Reports Server (NTRS)
Spinhirne, J. D.; Palm, S. P.; Hlavka, D. L.; Hart, W. D.; Mahesh, A.; Welton, E. J.
2005-01-01
The Geoscience Laser Altimeter System (GLAS), launched in 2003, is the first polar-orbiting satellite lidar. The instrument was designed for high-performance observations of the distribution and optical scattering cross sections of clouds and aerosol. The backscatter lidar operates at two wavelengths, 532 and 1064 nm. Both receiver channels meet and exceed their design goals, and beginning with a two-month period spanning October and November 2003, an excellent global lidar data set now exists. The data products for atmospheric observations include the calibrated, attenuated backscatter cross section for cloud and aerosol; height detection for multiple cloud layers; planetary boundary layer height; cirrus and aerosol optical depth; and the height distribution of aerosol and cloud scattering cross section profiles. The data sets are now in open release through the NASA data distribution system. Initial results on global statistics for cloud and aerosol distribution have been produced and in some cases compared to other satellite observations. The sensitivity of the cloud measurements is such that the 70% global cloud coverage result should be the most accurate to date. Results on the global distribution of aerosol are the first to produce the true height distribution for model inter-comparison.
Distributed computer system enhances productivity for SRB joint optimization
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.
1987-01-01
Initial calculations for a redesign of the solid rocket booster joint that failed during the Shuttle tragedy showed that the design carried a weight penalty. Optimization techniques were applied to determine whether the weight could be reduced while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed from existing software that coupled structural analysis with optimization and executed on a network of computer workstations. To improve turnaround, the system exploited the parallelism inherent in the finite-difference technique for computing gradients, allowing several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the time to complete one optimization cycle from two hours to half an hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle; the sequential system would take four hours.
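The finite-difference parallelism exploited above can be sketched as follows, with each perturbed analysis standing in for one workstation's job; the quadratic objective and step size are illustrative, not the SRB joint model.

```python
from concurrent.futures import ThreadPoolExecutor

def fd_gradient(f, x, h=1e-6, workers=4):
    """Forward-difference gradient of f at design point x.

    Each perturbed evaluation f(x + h*e_i) is independent, so the
    evaluations are farmed out to a worker pool, mimicking one
    workstation per design variable."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        f0 = f(x)
        perturbed = [x[:i] + [x[i] + h] + x[i + 1:] for i in range(len(x))]
        vals = list(pool.map(f, perturbed))
    return [(v - f0) / h for v in vals]
```

Because a structural analysis run dominates the cycle time, n variables on n workstations cut the gradient cost from n + 1 sequential analyses to roughly two, which matches the reported two-hour to half-hour reduction in spirit.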
A Structured Approach for Reviewing Architecture Documentation
2009-12-01
as those found in ISO 12207 [ISO/IEC 12207:2008] (for software engineering), ISO 15288 [ISO/IEC 15288:2008] (for systems engineering), the Rational...Open Distributed Processing - Reference Model: Foundations (ISO/IEC 10746-2). 1996. [ISO/IEC 12207:2008] International Organization for Standardization & International Electrotechnical Commission. Systems and software engineering - Software life cycle processes (ISO/IEC 12207). 2008. [ISO
Free and Open Source Software for Geospatial in the field of planetary science
NASA Astrophysics Data System (ADS)
Frigeri, A.
2012-12-01
Information technology applied to geospatial analysis has spread quickly in the last ten years. The availability of OpenData and data from collaborative mapping projects has increased interest in the tools, procedures, and methods used to handle spatially-related information. Free and Open Source Software projects devoted to geospatial data handling are proving successful, as interoperable formats and protocols allow users to choose the pipeline of tools and libraries needed to solve a particular task, adapting the software landscape to their specific problem. In particular, the Free and Open Source model of development mimics the scientific method very well, and researchers should feel naturally encouraged to take part in the development of these software projects, as this represents a very agile way to collaborate across institutions. When it comes to planetary science, geospatial Free and Open Source Software is gaining a key role in projects that commonly involve several subjects in an international scenario. Very popular software suites for processing scientific mission data (for example, ISIS) and for navigation/planning (SPICE) are distributed along with their source code, and the interaction between user and developer is often very close, creating a continuum between these two roles. A widely used library for handling geospatial data (GDAL) has started to support planetary data from the Planetary Data System, and recent contributions have enabled support for other popular data formats used in planetary science, such as VICAR. The use of Geographic Information Systems in planetary science is now widespread, and Free and Open Source GIS, open GIS formats, and network protocols make it possible to extend tools and methods originally developed to solve Earth-based problems to the study of solar system bodies.
A day in the working life of a researcher using Free and Open Source Software for geospatial analysis will be presented, along with the benefits of this approach and ways to offset the effort required to use, support, and contribute to these projects.
A Transparent Translation from Legacy System Model into Common Information Model: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Simpson, Jeffrey; Zhang, Yingchen
Advances in smart grid technology are pushing utilities toward better monitoring, control, and analysis of distribution systems, and require extensive cyber-based intelligent systems and applications to realize various functionalities. The ability of systems, or components within systems, to interact and exchange services or information with each other is key to the success of smart grid technologies, and it requires an efficient information exchange and data sharing infrastructure. The Common Information Model (CIM) is a standard that allows different applications to exchange information about an electrical system, and it has become a widely accepted solution for information exchange among different platforms and applications. However, most existing legacy systems were not developed using CIM, but using their own languages. Integrating such legacy systems is a challenge for utilities, and the appropriate utilization of the integrated legacy systems is even more intricate. Thus, this paper develops an approach and open-source tool to translate legacy system models into CIM format. The developed tool is tested on a commercial distribution management system, and simulation results prove its effectiveness.
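A minimal sketch of one such translation step, mapping a hypothetical legacy line record onto a CIM `ACLineSegment` element in RDF/XML; the legacy field names are invented for illustration, and a real CIM profile carries many more attributes and associations.

```python
import xml.etree.ElementTree as ET

CIM_NS = "http://iec.ch/TC57/2012/CIM-schema-cim16#"
RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def legacy_line_to_cim(rec):
    """Translate one legacy feeder-line record (a plain dict here)
    into a CIM ACLineSegment element serialized as RDF/XML."""
    ET.register_namespace("cim", CIM_NS)
    ET.register_namespace("rdf", RDF_NS)
    seg = ET.Element("{%s}ACLineSegment" % CIM_NS,
                     {"{%s}ID" % RDF_NS: rec["id"]})
    name = ET.SubElement(seg, "{%s}IdentifiedObject.name" % CIM_NS)
    name.text = rec["name"]
    r = ET.SubElement(seg, "{%s}ACLineSegment.r" % CIM_NS)
    r.text = str(rec["resistance_ohm"])
    return ET.tostring(seg, encoding="unicode")
```

A full translator would walk the legacy model's topology (buses, switches, transformers) and emit the corresponding CIM class for each, preserving connectivity via `Terminal` and `ConnectivityNode` references.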
Assessment of velocity fields through open-channel flows with an empiric law.
Bardiaux, J B; Vazquez, J; Mosé, R
2008-01-01
Most sewer managers are currently confronted with evaluating the water discharges that flow through their networks or into the receiving system, i.e. rivers in the majority of cases. In this context, the Urban Hydraulic Systems laboratory of the ENGEES is working on the relation between velocity fields and metrology through a partnership with the Fluid and Solid Mechanics Institute of Strasbourg (IMFS). The objective is to transform a velocity profile measurement, given by a Doppler sensor developed by the IMFS team, into a water discharge evaluation. The velocity distribution in a cross section of channel flow has attracted the interest of many researchers over the years due to its practical applications. In free-surface flows in narrow open channels, the maximum velocity occurs below the free surface. This phenomenon, usually called the "dip-phenomenon", raises, amongst other things, the problem of which area to explore in the measurement section. The work presented here develops a simple relation that associates the flow with the velocity distribution. This step allows the sensor position to be incorporated into the flow calculation.
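One common form of dip-modified logarithmic law adds a free-surface term that pulls the velocity maximum below the surface; the sketch below is illustrative of that family of laws, not necessarily the empirical relation developed in this work, and the parameter values are uncalibrated placeholders.

```python
import math

def dip_modified_log_law(y, h, u_star=0.05, y0=1e-4, kappa=0.41, alpha=0.2):
    """Streamwise velocity at height y above the bed in a channel of
    depth h. The classic log law is augmented by an alpha-weighted
    ln(1 - y/h) term; for alpha > 0 the maximum velocity falls at
    y = h / (1 + alpha), i.e. below the free surface (the dip)."""
    return (u_star / kappa) * (math.log(y / y0)
                               + alpha * math.log(1.0 - y / h))
```

Integrating such a profile over the wetted cross section, anchored to the Doppler sensor's measured velocities, is one way to turn a single vertical profile into a discharge estimate.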
Database technology and the management of multimedia data in the Mirror project
NASA Astrophysics Data System (ADS)
de Vries, Arjen P.; Blanken, H. M.
1998-10-01
Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.
End-to-end security for personal telehealth.
Koster, Paul; Asim, Muhammad; Petkovic, Milan
2011-01-01
Personal telehealth is in rapid development, with innovative emerging applications like disease management. With personal telehealth, people participate in their own care, supported by an open distributed system of health services. This poses new end-to-end security and privacy challenges. In this paper we introduce new end-to-end security requirements and present a design for consent management in the context of the Continua Health Alliance architecture. Thus, we empower patients to control how their health information is shared and used in a personal telehealth ecosystem.
An Automated Tool to Enable the Distributed Operations of Air Force Satellites
2002-01-01
workstations, home PCs, PDAs, pagers) over connections with various bandwidths (e.g., dial-up 56k, wireless 9.6k), SERS has different USis to support the...demonstration and evaluation activities, and (3) CERES employs more modern and open ground systems than are currently deployed in the space operations...COTS or custom tools. • Yes, we demonstrated that our software can interface with a modern Air Force ground system (CERES' COBRA). • We identified new
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Fei; Nagarajan, Adarsh; Chakraborty, Sudipta
This report presents an impact assessment study of distributed photovoltaic (PV) with smart inverter Volt-VAR control on conservation voltage reduction (CVR) energy savings and distribution system power quality. CVR is a methodology of flattening and lowering a distribution system voltage profile in order to conserve energy. Traditional CVR relies on operating utility voltage regulators and switched capacitors. However, with the increased penetration of distributed PV systems, smart inverters provide a new opportunity to control local voltage and power factor by regulating reactive power output, leading to a potential increase in CVR energy savings. This report proposes a methodology to implement a CVR scheme by operating voltage regulators, capacitors, and autonomous smart inverter Volt-VAR control in order to achieve increased CVR benefit. Power quality is an important consideration when operating a distribution system, especially when implementing CVR. It is easy to measure the individual components that make up power quality, but a comprehensive method to incorporate all of these values into a single score has yet to be established. As a result, this report proposes a power quality scoring mechanism to measure the relative power quality of distribution systems using a single number, aptly named the 'power quality score' (PQS). Both the CVR and PQS methodologies were applied to two distribution system models, one obtained from the Hawaiian Electric Company (HECO) and another from Pacific Gas and Electric (PG&E). These two models were converted to the OpenDSS platform using model conversion tools previously developed by NREL. Multiple scenarios, including various PV penetration levels and smart inverter densities, were simulated to analyze the impact of smart inverter Volt-VAR support on CVR energy savings and feeder power quality.
In order to analyze the CVR benefit and PQS, an annual simulation was conducted for each scenario.
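The report's actual PQS formulation is not given in the abstract; as a generic sketch of the same idea, individual power-quality components can be normalized and collapsed into one number with a weighted average (the component names and weights below are invented):

```python
def power_quality_score(metrics, weights=None):
    """Collapse per-component power-quality measurements into one
    score. Each metric is assumed pre-normalized to [0, 1], where
    1 is ideal; with no weights given, all components count equally."""
    weights = weights or {m: 1.0 for m in metrics}
    total = sum(weights[m] for m in metrics)
    return sum(metrics[m] * weights[m] for m in metrics) / total
```

The value of such a scalar is comparability: the same feeder can be scored before and after enabling smart inverter Volt-VAR support, turning a multi-dimensional power-quality comparison into a single ranking.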
C3-PRO: Connecting ResearchKit to the Health System Using i2b2 and FHIR.
Pfiffner, Pascal B; Pinyol, Isaac; Natter, Marc D; Mandl, Kenneth D
2016-01-01
A renewed interest by consumer information technology giants in the healthcare domain is focused on transforming smartphones into personal health data storage devices. With the introduction of the open source ResearchKit, Apple provides a framework for researchers to inform and consent research subjects, and to readily collect personal health data and patient reported outcomes (PRO) from distributed populations. However, being research backend agnostic, ResearchKit does not provide data transmission facilities, leaving research apps disconnected from the health system. Personal health data and PROs are of the most value when presented in context along with health system data. Our aim was to build a toolchain that allows easy and secure integration of personal health and PRO data into an open source platform widely adopted across 140 academic medical centers. We present C3-PRO: the Consent, Contact, and Community framework for Patient Reported Outcomes. This open source toolchain connects, in a standards-compliant fashion, any ResearchKit app to the widely-used clinical research infrastructure Informatics for Integrating Biology and the Bedside (i2b2). C3-PRO leverages the emerging health data standard Fast Healthcare Interoperability Resources (FHIR).
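As a sketch of the kind of standards-compliant payload involved, one patient-reported value can be wrapped in a minimal FHIR-style Observation resource; C3-PRO's actual resource mapping may differ, and the LOINC code below (pain severity rating) is only an example.

```python
def pro_to_fhir_observation(patient_id, loinc_code, value, unit):
    """Build a minimal FHIR Observation (JSON-ready dict) carrying a
    single patient-reported outcome, linked to a Patient resource."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org",
                             "code": loinc_code}]},
        "subject": {"reference": "Patient/%s" % patient_id},
        "valueQuantity": {"value": value, "unit": unit},
    }
```

Because both ResearchKit surveys and i2b2 facts can be expressed through such resources, FHIR serves as the neutral interchange layer between the app and the clinical research backend.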
Aguiar, Paulo; Mendonça, Luís; Galhardo, Vasco
2007-10-15
Operant animal behavioral tests require the interaction of the subject with sensors and actuators distributed in the experimental arena. Providing user-independent, reliable results and versatile control of these devices requires an automated control system. Commercial systems for controlling animal mazes are usually based on software implementations that restrict their application to the proprietary hardware of the vendor. In this paper we present OpenControl: an open-source Visual Basic program that permits a Windows-based computer to run fully automated behavioral experiments. OpenControl integrates video-tracking of the animal, definition of zones from the video signal for real-time assignment of animal position in the maze, control of the maze actuators from either hardware sensors or the online video tracking, and recording of experimental data. Bidirectional communication with the maze hardware is achieved through the parallel-port interface, without the need for expensive AD-DA cards, while video tracking is attained using an inexpensive FireWire digital camera. The OpenControl Visual Basic code is structurally general and versatile, allowing it to be easily modified or extended to fulfill specific experimental protocols and custom hardware configurations. The Visual Basic environment was chosen to allow experimenters to easily adapt and expand the code to their own needs.
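The real-time zone assignment step can be sketched as a point-in-rectangle test over user-defined zones; this is a simplification of OpenControl's video-based zone logic (which is written in Visual Basic), with zone names and coordinates invented for illustration.

```python
def assign_zone(position, zones):
    """Return the name of the first zone whose rectangle contains the
    animal's tracked (x, y) frame position, or None if outside all
    zones. Zones map a name to an (x0, y0, x1, y1) rectangle defined
    on the video frame during experiment setup."""
    x, y = position
    for name, (x0, y0, x1, y1) in zones.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

In the full system, each zone transition can trigger actuators (doors, feeders) via the parallel port, closing the loop between tracking and maze hardware.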
Building Better Planet Populations for EXOSIMS
NASA Astrophysics Data System (ADS)
Garrett, Daniel; Savransky, Dmitry
2018-01-01
The Exoplanet Open-Source Imaging Mission Simulator (EXOSIMS) software package simulates ensembles of space-based direct imaging surveys to provide a variety of science and engineering yield distributions for proposed mission designs. These mission simulations rely heavily on assumed distributions of planetary population parameters including semi-major axis, planetary radius, eccentricity, albedo, and orbital orientation to provide heuristics for target selection and to simulate planetary systems for detection and characterization. The distributions are encoded in PlanetPopulation modules within EXOSIMS which are selected by the user in the input JSON script when a simulation is run. The earliest written PlanetPopulation modules available in EXOSIMS are based on planet population models where the planetary parameters are considered to be independent from one another. While independent parameters allow for quick computation of heuristics and sampling for simulated planetary systems, results from planet-finding surveys have shown that many parameters (e.g., semi-major axis/orbital period and planetary radius) are not independent. We present new PlanetPopulation modules for EXOSIMS which are built on models based on planet-finding survey results where semi-major axis and planetary radius are not independent and provide methods for sampling their joint distribution. These new modules enhance the ability of EXOSIMS to simulate realistic planetary systems and give more realistic science yield distributions.
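A toy illustration of a joint, deliberately non-independent (semi-major axis, radius) population and how it might be sampled; the functional forms and coefficients below are invented for demonstration and are not those of any actual EXOSIMS PlanetPopulation module.

```python
import math
import random

def sample_planet(rng):
    """Draw one (a, R) pair: semi-major axis a in AU, planet radius R
    in Earth radii. a is log-uniform on [0.1, 30]; the mean of log R
    shifts with a, so radius and semi-major axis are correlated
    rather than independent."""
    a = math.exp(rng.uniform(math.log(0.1), math.log(30.0)))
    # Conditional radius model: close-in planets skew smaller,
    # distant planets larger (illustrative coefficients only).
    mean_log_r = 0.1 + 0.3 * math.log10(a)
    r = 10.0 ** rng.gauss(mean_log_r, 0.15)
    return a, r
```

Sampling the conditional P(R | a) after the marginal P(a), as here, is one standard way to draw from a joint distribution when a closed-form joint sampler is inconvenient.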
Contingency theoretic methodology for agent-based web-oriented manufacturing systems
NASA Astrophysics Data System (ADS)
Durrett, John R.; Burnell, Lisa J.; Priest, John W.
2000-12-01
The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the task-uncertainty of organizational-information-processing relationships from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are treated as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes, and well researched in the organizational sciences, are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.
NASA Astrophysics Data System (ADS)
Rodrigo-Ilarri, J.; Li, T.; Grathwohl, P.; Blum, P.; Bayer, P.
2009-04-01
The design of geothermal systems such as aquifer thermal energy storage (ATES) systems must be based on a comprehensive characterisation of all parameters relevant to the numerical design model. Hydraulic and thermal conductivities are the most relevant parameters, and their distributions determine not only the technical design but also the economic viability of such systems. Hence, knowledge of the spatial distribution of these parameters is essential for successful design and operation. This work shows the first results obtained when applying geostatistical techniques to the characterisation of the Esseling site in Germany, where a long-term (> 1 year) thermal tracer test was performed. On this open system, the spatial temperature distribution inside the aquifer was observed over time in order to obtain as much information as possible for a detailed characterisation of both the hydraulic and thermal parameters. This poster shows the preliminary results obtained for the Esseling site. It has been observed that the common homogeneous approach is not sufficient to explain the observations obtained from the TRT, and that parameter heterogeneity must be taken into account.
Deployment of Directory Service for IEEE N Bus Test System Information
NASA Astrophysics Data System (ADS)
Barman, Amal; Sil, Jaya
2008-10-01
Exchanging information over the Internet and intranets has become a de facto standard in computer applications among various users and organizations. Distributed system studies, e-governance, and similar applications require transparent information exchange between applications, constituencies, manufacturers, and vendors. To serve these purposes, a database system is needed for storing system data and other relevant information. A directory service, which is a specialized database combined with an access protocol, could be a single solution, since it runs over TCP/IP, is supported by all POSIX-compliant platforms, and is based on open standards. This paper describes a way to deploy a directory service to store IEEE n-bus test system data and to integrate a load flow program with it.
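A sketch of how one bus record might be rendered as a directory entry in LDIF form, the text format LDAP directories import; the attribute names and directory tree layout below are invented for illustration, since the paper's actual schema is not given in the abstract.

```python
def bus_to_ldif(bus):
    """Render one test-system bus record as an LDIF entry. The DN
    places the bus under an 'ou=buses' branch of a hypothetical
    directory tree; a load flow program could then fetch bus data
    with ordinary LDAP searches instead of flat files."""
    lines = [
        "dn: busNumber=%d,ou=buses,dc=ieee,dc=test" % bus["number"],
        "objectClass: top",
        "busNumber: %d" % bus["number"],
        "baseKV: %s" % bus["base_kv"],
        "loadMW: %s" % bus["load_mw"],
    ]
    return "\n".join(lines)
```

Because LDAP entries are hierarchical and schema-typed, adding a new test system (e.g. a 30-bus case alongside a 14-bus case) is just another subtree, with no change to consuming applications.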
A resilient and secure software platform and architecture for distributed spacecraft
NASA Astrophysics Data System (ADS)
Otte, William R.; Dubey, Abhishek; Karsai, Gabor
2014-06-01
A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objectives of this layer.
Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Procacci, Piero
2015-04-21
In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example of the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks-derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
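For a single Gaussian work component, the unbiased estimate reduces to the closed form ΔF = ⟨W⟩ − βσ²/2, which the mixture method generalizes to several components. The sketch below contrasts that closed form with the direct Jarzynski estimator; both are standard textbook results, not the paper's full mixture machinery.

```python
import math
import random

def jarzynski_estimate(works, beta=1.0):
    """Direct Jarzynski estimator: dF = -(1/beta) ln <exp(-beta W)>.
    With few fast-pulling samples this is badly biased, which is the
    problem the Gaussian-mixture treatment addresses."""
    n = len(works)
    m = min(works)  # shift for a numerically stable log-sum-exp
    s = sum(math.exp(-beta * (w - m)) for w in works)
    return m - (1.0 / beta) * math.log(s / n)

def gaussian_component_estimate(works, beta=1.0):
    """Closed-form free energy for work samples drawn from one
    Gaussian component: dF = <W> - beta * var(W) / 2 (the one-
    component special case of the mixture approach)."""
    n = len(works)
    mean = sum(works) / n
    var = sum((w - mean) ** 2 for w in works) / (n - 1)
    return mean - beta * var / 2.0
```

The Gaussian form needs only the first two moments of the work distribution, so it converges far faster than the exponential average when the Gaussian assumption holds.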
Device Access Abstractions for Resilient Information Architecture Platform for Smart Grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dubey, Abhishek; Karsai, Gabor; Volgyesi, Peter
An open application platform distributes intelligence and control capability to local endpoints (or nodes), reducing total network traffic, improving the speed of local actions by avoiding latency, and improving reliability by reducing dependencies on numerous devices and communication interfaces. The platform must be multi-tasking and able to host multiple applications running simultaneously. Given such a system, the core functions of power grid control systems, including grid state determination, low-level control, fault intelligence and reconfiguration, outage intelligence, power quality measurement, remote asset monitoring, configuration management, and power and energy management (including local distributed energy resources such as wind, solar, and energy storage), can eventually be distributed. However, making this move requires extensive regression testing of systems to prove out new technologies, such as phasor measurement units (PMUs). Additionally, as the complexity of the systems increases with the inclusion of new functionality (especially at the distribution and consumer levels), hidden coupling becomes a challenge, with possible N-way interactions known and unknown to device and application developers. Therefore, it is very important to provide core abstractions that ensure uniform operational semantics across such interactions. In this paper, we describe the pattern for abstracting device interactions that we have developed for the RIAPS platform, in the context of a microgrid control application we have developed.
Padilla-Valverde, David; Sanchez-Garcia, Susana; García-Santos, Esther; Marcote-Ibañez, Carlos; Molina-Robles, Mercedes; Martín-Fernández, Jesús; Villarejo-Campos, Pedro
2016-09-30
The aim was to determine the effectiveness of thermography for monitoring the distribution of abdominal temperature during the development of a closed chemohyperthermia model. For thermographic analysis, we divided the abdominopelvic cavity into nine regions according to a modification of the peritoneal carcinomatosis index. A difference of 2.5 °C between and within quadrants, and thermographic colours, were used as asymmetry criteria. Preclinical study. Rat model: six male athymic nude rats (rnu/rnu), treated with the closed technique and the open technique. Porcine model: 12 female Large White pigs; four were treated with the open technique and eight with the closed CO2-recirculation technique. Clinical pilot study (EUDRACT 2011-006319-69): 18 patients with ovarian cancer were treated with cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC) with a closed recirculating CO2 system. Thermographic control and intra-abdominal temperature assessment were performed at baseline, when the outflow temperature reached 41 °C, and at 30 min. The thermographic images showed greater homogeneity of the intra-abdominal temperature in the closed model than with the open technique. The thermogram showed a homogeneous temperature distribution once circulation of the chemotherapy started. The temperature thermographic maps of the closed porcine model and the pilot study were correlated, with inflow/outflow temperatures at the half-time of HIPEC of 42/41.4 °C and 42 ± 0.2/41 ± 0.8 °C, respectively. There was no significant impact on patients' core temperature after the homogeneous temperature distribution was reached. Controlling the homogeneity of temperature distribution is feasible using infrared digital imaging in a closed HIPEC with CO2 recirculation.
NASA Astrophysics Data System (ADS)
Halpern, J. B.
2017-12-01
Libretexts is an online open system for distributing educational materials, with over 5 million page views per month. Covering geophysics, chemistry, physics, and more, it offers a platform for authors and users, including faculty and students, to access curated educational materials. Currently there are online texts covering geology, geobiology, natural hazards, and understanding the refusal to accept climate change, as well as relevant materials in other sections on aquatic and atmospheric chemistry. In addition to "written" materials, Libretexts provides access to relevant simulations and demonstrations. Most importantly, the Libretexts project welcomes new contributors. Faculty can use available materials to construct their own texts or supplementary materials in relatively short order. Since all material is covered by a Creative Commons license, it can be extended as needed for teaching.
A Scalable Data Access Layer to Manage Structured Heterogeneous Biomedical Data
Lianas, Luca; Frexia, Francesca; Zanetti, Gianluigi
2016-01-01
This work presents a scalable data access layer, called PyEHR, designed to support the implementation of data management systems for secondary use of structured, heterogeneous biomedical and clinical data. PyEHR adopts openEHR formalisms to guarantee the decoupling of data descriptions from implementation details, and exploits structure indexing to accelerate searches. Data persistence is provided by a driver layer with a common driver interface. Interfaces for two NoSQL database management systems are already implemented: MongoDB and Elasticsearch. We evaluated the scalability of PyEHR experimentally through two types of tests, called “Constant Load” and “Constant Number of Records”, with queries of increasing complexity on synthetic datasets of ten million records each, containing very complex openEHR archetype structures, distributed on up to ten computing nodes. PMID:27936191
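The driver-layer idea described above (a common interface hiding MongoDB/Elasticsearch specifics, plus a structure index consulted before touching the data) can be sketched with an in-memory toy driver. All names here are invented for illustration and are not PyEHR's actual API:

```python
from abc import ABC, abstractmethod

class DriverInterface(ABC):
    """Common persistence interface; concrete drivers would wrap
    MongoDB, Elasticsearch, etc."""
    @abstractmethod
    def add_record(self, record_id, structure, data): ...
    @abstractmethod
    def find_by_structure(self, structure): ...

class InMemoryDriver(DriverInterface):
    def __init__(self):
        self._store = {}   # record_id -> (structure_id, data)
        self._index = {}   # structure_id -> set of record_ids

    def add_record(self, record_id, structure, data):
        self._store[record_id] = (structure, data)
        self._index.setdefault(structure, set()).add(record_id)

    def find_by_structure(self, structure):
        # the structure index narrows the search before any data is read
        return [self._store[rid][1]
                for rid in sorted(self._index.get(structure, ()))]

db = InMemoryDriver()
db.add_record("r1", "blood_pressure.v1", {"systolic": 120})
db.add_record("r2", "heart_rate.v1", {"rate": 72})
print(db.find_by_structure("blood_pressure.v1"))  # [{'systolic': 120}]
```

Swapping drivers then changes only the persistence backend, not the query-facing code, which is the decoupling the abstract emphasizes.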
NASA Astrophysics Data System (ADS)
Davies, Nigel; Raymond, Kerry; Blair, Gordon
1999-03-01
In recent years the distributed systems community has witnessed a growth in the number of conferences, leading to difficulties in tracking the literature and a consequent loss of awareness of work done by others in this important research domain. In an attempt to synthesize many of the smaller workshops and conferences in the field, and to bring together research communities which were becoming fragmented, IFIP staged Middleware'98: The IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing. The conference was widely publicized and attracted over 150 technical submissions including 135 full paper submissions. The final programme consisted of 28 papers, giving an acceptance ratio of a little over one in five. More crucially, the programme accurately reflected the state of the art in middleware research, addressing issues such as ORB architectures, engineering of large-scale systems and multimedia. The traditional role of middleware as a point of integration and service provision was clearly intact, but the programme stressed the importance of emerging `must-have' features such as support for extensibility, mobility and quality of service. The Middleware'98 conference was held in the Lake District, UK in September 1998. Over 160 delegates made the journey to one of the UK's most beautiful regions and contributed to a lively series of presentations and debates. A permanent record of the conference, including transcripts of the panel discussions which took place, is available at: http://www.comp.lancs.ac.uk/computing/middleware98/ Based on their original reviews and the reactions of delegates to the ensuing presentations we have selected six papers from the conference for publication in this special issue of Distributed Systems Engineering. 
The first paper, entitled `Jonathan: an open distributed processing environment in Java', by Dumant et al describes a minimal, modular ORB framework which can be used for supporting real-time and multimedia applications. The framework provides mechanisms by which services such as CORBA ORBs can be constructed as personalities which exploit the services provided by the underlying minimal kernel. The issue of engineering ORBs is taken further in the second paper, `The implementation of a high-performance ORB over multiple network transports' by Lo and Pope. This paper is of particular interest since it presents the concrete results of running a modern ORB, i.e. omniORB2, over a range of transport mechanisms, including TCP/IP, shared memory and ATM AAL5. However, in order for middleware to progress, future platforms must tackle the issue of scalability as well as that of performance. For this reason we have included two papers, `Systems support for scalable and fault tolerant Internet services' by Chawathe and Brewer and `A scalable middleware solution for advanced wide-area Web services' by van Steen et al, which address the problems inherent in developing scalable middleware. Although the two papers focus on different problems in this area, they are both motivated by the explosion of services and information made available through the World Wide Web. Indeed, the role of the World Wide Web as a component in middleware platforms featured prominently in the conference and this is reflected in our choice of the paper by Cao et al entitled `Active Cache: caching dynamic contents on the Web'. Motivated once again by the problems of scalability, Cao et al propose a system to support the caching of dynamic documents. This is achieved by enabling small applets to be cached along with pages and run by the cache servers. The issues of security, trust and resource utilization raised by such a system are explored in detail by the authors. 
Finally, `Mobile Java objects' by Hayton et al considers these issues still further as part of the authors' work on adding object mobility to Java. Together, the six papers contained within this issue of Distributed Systems Engineering capture the essence of Middleware'98 and demonstrate the progress that has been made in the field. Of particular note is the systems-oriented focus of these papers: the field has clearly matured beyond modelling and into the domain of advanced systems development. We hope that the papers contained here stimulate and inform you and we look forward to meeting you at a future Middleware conference.
Karthikeyan, M; Krishnan, S; Pandey, Anil Kumar; Bender, Andreas; Tropsha, Alexander
2008-04-01
We present the application of a Java remote method invocation (RMI) based open source architecture to distributed chemical computing. This architecture was previously employed for distributed data harvesting of chemical information from the Internet via the Google application programming interface (API; ChemXtreme). Due to its open source character and its flexibility, the underlying server/client framework can be quickly adapted to virtually every computational task that can be parallelized. Here, we present the server/client communication framework as well as an application to distributed computing of chemical properties on a large scale (currently the size of PubChem; about 18 million compounds), using both the Marvin toolkit and the open source JOELib package. As an application, the agreement of log P and TPSA between the two packages was compared for this set of compounds. Outliers were found to be mostly non-druglike compounds, and differences could usually be explained by differences in the underlying algorithms. ChemStar is the first open source distributed chemical computing environment built on Java RMI, and it is easily adaptable to user demands due to its "plug-in architecture". The complete source codes as well as calculated properties, along with links to PubChem resources, are available on the Internet via a graphical user interface at http://moltable.ncl.res.in/chemstar/.
Building Energy Management Open Source Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
This is the repository for Building Energy Management Open Source Software (BEMOSS), an open source operating system engineered to improve sensing and control of equipment in small- and medium-sized commercial buildings. BEMOSS offers the following key features: (1) Open source, open architecture – BEMOSS is an open source operating system built upon VOLTTRON, a distributed agent platform developed by Pacific Northwest National Laboratory (PNNL). BEMOSS was designed to make it easy for hardware manufacturers to seamlessly interface their devices with BEMOSS. Software developers can also contribute additional BEMOSS functionalities and applications. (2) Plug & play – BEMOSS was designed to automatically discover supported load controllers (including smart thermostats, VAV/RTUs, lighting load controllers and plug load controllers) in commercial buildings. (3) Interoperability – BEMOSS was designed to work with load control devices from different manufacturers that operate on different communication technologies and data exchange protocols. (4) Cost effectiveness – BEMOSS is deemed cost-effective because it is built upon a robust open source platform that can operate on a low-cost single-board computer, such as an Odroid. This feature could contribute to its rapid deployment in small- or medium-sized commercial buildings. (5) Scalability and ease of deployment – With its multi-node architecture, BEMOSS allows load controllers in a multi-floor, high-occupancy building to be monitored and controlled by multiple single-board computers hosting BEMOSS. This makes it possible for a building engineer to deploy BEMOSS in one zone of a building, gain confidence in its operation, and later expand the deployment to the entire building to make it more energy efficient.
(6) Ability to provide local and remote monitoring – BEMOSS provides both local and remote monitoring ability with role-based access control. (7) Security – In addition to built-in security features provided by VOLTTRON, BEMOSS provides enhanced security features, including a BEMOSS discovery approval process, encrypted core-to-node communication, a thermostat anti-tampering feature and more. (8) Support from the Advisory Committee – BEMOSS was developed in consultation with an advisory committee from the beginning of the project. The BEMOSS advisory committee comprises representatives from 22 organizations from government and industry.
Short-term airing by natural ventilation - implication on IAQ and thermal comfort.
Heiselberg, P; Perino, M
2010-04-01
The need to improve the energy efficiency of buildings calls for new and more efficient ventilation systems. It has been demonstrated that innovative operating concepts that make use of natural ventilation tend to be more appreciated by occupants. Among the ventilation strategies currently available, buoyancy-driven, single-sided natural ventilation has proved very effective and can provide high air change rates for temperature and Indoor Air Quality (IAQ) control. However, promoting a wider distribution of these systems requires a better understanding of their working principles. The present study analyses and presents the results of an experimental evaluation of airing performance in terms of ventilation characteristics, IAQ and thermal comfort. It includes investigations of the consequences of opening time, opening frequency and opening area for the expected airflow rate, ventilation efficiency, thermal comfort and dynamic temperature conditions. A suitable laboratory test rig was developed to perform extensive experimental analyses of the phenomenon under controlled and repeatable conditions. The results showed that short-term window airing is very effective and can provide both acceptable IAQ and thermal comfort conditions in buildings. Practical implications: This study gives the necessary background and in-depth knowledge of the performance of window airing by single-sided natural ventilation, needed for developing control strategies for window airing (length of opening period and opening frequency) for optimum IAQ and thermal comfort in naturally ventilated buildings.
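As context for the air change rates discussed above, a standard textbook estimate (not the model used in the paper) for buoyancy-driven single-sided ventilation through an opening of area A and height H is Q = (Cd/3)·A·sqrt(g·H·ΔT/T_mean). A quick sketch with illustrative numbers:

```python
import math

def single_sided_buoyancy_flow(area_m2, height_m, t_in_c, t_out_c, cd=0.6):
    """Textbook estimate of buoyancy-driven single-sided ventilation:
    Q = (Cd / 3) * A * sqrt(g * H * dT / T_mean)   [m^3/s]"""
    g = 9.81
    t_mean_k = (t_in_c + t_out_c) / 2 + 273.15
    d_t = abs(t_in_c - t_out_c)
    return (cd / 3.0) * area_m2 * math.sqrt(g * height_m * d_t / t_mean_k)

# 1.5 m^2 window, 1.2 m tall, 24 C inside, 14 C outside (invented values)
q = single_sided_buoyancy_flow(1.5, 1.2, 24.0, 14.0)
ach = q * 3600 / 50.0   # air changes per hour for a 50 m^3 room
print(f"{q:.3f} m^3/s -> {ach:.1f} ACH")
```

Even this crude formula reproduces the paper's qualitative point: short openings of a modestly sized window can deliver air change rates far above typical mechanical-ventilation values.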
Sailfish: A flexible multi-GPU implementation of the lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Januszewski, M.; Kostur, M.
2014-09-01
We present Sailfish, an open source fluid simulation package implementing the lattice Boltzmann method (LBM) on modern Graphics Processing Units (GPUs) using CUDA/OpenCL. We take a novel approach to GPU code implementation and use run-time code generation techniques and a high level programming language (Python) to achieve state of the art performance, while allowing easy experimentation with different LBM models and tuning for various types of hardware. We discuss the general design principles of the code, scaling to multiple GPUs in a distributed environment, as well as the GPU implementation and optimization of many different LBM models, both single component (BGK, MRT, ELBM) and multicomponent (Shan-Chen, free energy). The paper also presents results of performance benchmarks spanning the last three NVIDIA GPU generations (Tesla, Fermi, Kepler), which we hope will be useful for researchers working with this type of hardware and similar codes. Catalogue identifier: AETA_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AETA_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU Lesser General Public License, version 3 No. of lines in distributed program, including test data, etc.: 225864 No. of bytes in distributed program, including test data, etc.: 46861049 Distribution format: tar.gz Programming language: Python, CUDA C, OpenCL. Computer: Any with an OpenCL or CUDA-compliant GPU. Operating system: No limits (tested on Linux and Mac OS X). RAM: Hundreds of megabytes to tens of gigabytes for typical cases. Classification: 12, 6.5. External routines: PyCUDA/PyOpenCL, Numpy, Mako, ZeroMQ (for multi-GPU simulations), scipy, sympy Nature of problem: GPU-accelerated simulation of single- and multi-component fluid flows. 
Solution method: A wide range of relaxation models (LBGK, MRT, regularized LB, ELBM, Shan-Chen, free energy, free surface) and boundary conditions within the lattice Boltzmann method framework. Simulations can be run in single or double precision using one or more GPUs. Restrictions: The lattice Boltzmann method works for low Mach number flows only. Unusual features: The actual numerical calculations run exclusively on GPUs. The numerical code is built dynamically at run-time in CUDA C or OpenCL, using templates and symbolic formulas. The high-level control of the simulation is maintained by a Python process. Additional comments: !!!!! The distribution file for this program is over 45 Mbytes and therefore is not delivered directly when Download or Email is requested. Instead a html file giving details of how the program can be obtained is sent. !!!!! Running time: Problem-dependent, typically minutes (for small cases or short simulations) to hours (large cases or long simulations).
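For readers unfamiliar with the method, a minimal single-component BGK lattice Boltzmann update on a D2Q9 lattice can be sketched in a few lines of NumPy. This is a didactic toy with periodic boundaries, not Sailfish's GPU implementation:

```python
import numpy as np

# D2Q9 lattice weights and discrete velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])

def equilibrium(rho, ux, uy):
    """Second-order Maxwell-Boltzmann equilibrium distribution."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.8):
    """One stream-collide cycle with BGK (single relaxation time)."""
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    f += (equilibrium(rho, ux, uy) - f) / tau        # collision
    for i, (cx, cy) in enumerate(c):                 # periodic streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

nx = ny = 32
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
f[1] *= 1.01                                         # small perturbation
mass0 = f.sum()
for _ in range(100):
    f = step(f)
print(abs(f.sum() - mass0))   # mass is conserved up to round-off
```

The per-node locality of collision and the fixed stencil of streaming are exactly what makes the method map so well onto GPUs, which is the point of packages like Sailfish.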
Distributed data analysis in ATLAS
NASA Astrophysics Data System (ADS)
Nilsson, Paul; Atlas Collaboration
2012-12-01
Data analysis using grid resources is one of the fundamental challenges to be addressed before the start of LHC data taking. The ATLAS detector will produce petabytes of data per year, and roughly one thousand users will need to run physics analyses on this data. Appropriate user interfaces and helper applications have been made available to ensure that the grid resources can be used without requiring expertise in grid technology. These tools expand the grid user base from a few production administrators to potentially all participating physicists. ATLAS makes use of three grid infrastructures for distributed analysis: the EGEE sites, the Open Science Grid, and NorduGrid. These grids are managed by the gLite workload management system, the PanDA workload management system, and ARC middleware; many sites can be accessed via both the gLite WMS and PanDA. Users can choose between two front-end tools to access the distributed resources. Ganga is a tool co-developed with LHCb to provide a common interface to the multitude of execution backends (local, batch, and grid). The PanDA workload management system provides a set of utilities called PanDA Client; with these tools users can easily submit Athena analysis jobs to the PanDA-managed resources. Distributed data is managed by Don Quijote 2 (DQ2), a system developed by ATLAS; DQ2 is used to replicate datasets according to the data distribution policies and maintains a central catalog of file locations. The operation of the grid resources is continually monitored by the Ganga Robot functional testing system, and infrequent site stress tests are performed using the HammerCloud system. In addition, the DAST shift team is a group of power users who take shifts to provide distributed analysis user support; this team has effectively relieved the support burden from the developers.
On the Interaction between Marine Boundary Layer Cellular Cloudiness and Surface Heat Fluxes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kazil, J.; Feingold, G.; Wang, Hailong
2014-01-02
The interaction between marine boundary layer cellular cloudiness and surface fluxes of sensible and latent heat is investigated. The investigation focuses on the non-precipitating closed-cell state and the precipitating open-cell state at low geostrophic wind speed. The Advanced Research WRF model is used to conduct cloud-system-resolving simulations with interactive surface fluxes of sensible heat, latent heat, and sea salt aerosol, and with a detailed representation of the interaction between aerosol particles and clouds. The mechanisms responsible for the temporal evolution and spatial distribution of the surface heat fluxes in the closed- and open-cell states are investigated and explained. It is found that the horizontal spatial structure of the closed-cell state determines, by entrainment of dry free-tropospheric air, the spatial distribution of surface air temperature and water vapor, and, to a lesser degree, of the surface sensible and latent heat flux. The synchronized dynamics of the open-cell state drives oscillations in surface air temperature, water vapor, and the surface fluxes of sensible and latent heat and of sea salt aerosol. Open-cell cloud formation, cloud optical depth and liquid water path, and cloud and rain water path are identified as good predictors of the spatial distribution of surface air temperature and sensible heat flux, but not of surface water vapor and latent heat flux. It is shown that by enhancing the surface sensible heat flux, the open-cell state creates conditions by which it is maintained. While the open-cell state under consideration is not depleted in aerosol, and is insensitive to variations in sea-salt fluxes, it also enhances the sea-salt flux relative to the closed-cell state. In aerosol-depleted conditions, this enhancement may replenish the aerosol needed for cloud formation, and hence contribute to the perpetuation of the open-cell state as well.
Spatial homogenization of the surface fluxes is found to have only a small effect on cloud properties in the investigated cases. This indicates that sub-grid scale spatial variability in the surface fluxes of sensible and latent heat and of sea salt aerosol may not be required in large-scale and global models to describe marine boundary layer cellular cloudiness.
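The link between near-surface air temperature and sensible heat flux discussed above is commonly expressed through a bulk aerodynamic formula; the sketch below uses generic textbook coefficients as an illustration, not the surface-layer scheme of the study:

```python
def sensible_heat_flux(t_sea_c, t_air_c, wind_ms, c_h=1.1e-3,
                       rho=1.2, cp=1005.0):
    """Bulk aerodynamic formula for surface sensible heat flux:
    SH = rho * cp * C_H * U * (T_sea - T_air)   [W/m^2]
    rho: air density [kg/m^3], cp: heat capacity [J/(kg K)],
    c_h: bulk transfer coefficient (illustrative value)."""
    return rho * cp * c_h * wind_ms * (t_sea_c - t_air_c)

# Open-cell downdrafts bring cooler, drier air to the surface, enlarging
# the air-sea temperature difference and hence the flux (invented values):
flux_closed = sensible_heat_flux(20.0, 18.5, 5.0)   # closed-cell-like
flux_open = sensible_heat_flux(20.0, 16.0, 5.0)     # open-cell-like
print(flux_closed, flux_open)
```

The formula makes the paper's feedback loop concrete: anything that cools the surface air (such as open-cell downdrafts) directly enhances the sensible heat flux that helps sustain the open-cell state.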
An Inverse Modeling Plugin for HydroDesktop using the Method of Anchored Distributions (MAD)
NASA Astrophysics Data System (ADS)
Ames, D. P.; Osorio, C.; Over, M. W.; Rubin, Y.
2011-12-01
The CUAHSI Hydrologic Information System (HIS) software stack is based on an open and extensible architecture that facilitates the addition of new functions and capabilities at both the server side (using HydroServer) and the client side (using HydroDesktop). The HydroDesktop client plugin architecture is used here to expose a new scripting-based plugin that makes use of the R statistics software for conducting inverse modeling using the Method of Anchored Distributions (MAD). MAD is a Bayesian inversion technique that conditions computational model parameters on relevant field observations, yielding probabilistic distributions of the model parameters related to the spatial random variable of interest by assimilating multi-type and multi-scale data. The implementation of a desktop software tool for using the MAD technique is expected to significantly lower the barrier to use of inverse modeling in education, research, and resource management. The HydroDesktop MAD plugin is being developed following a community-based, open-source approach that will help both its adoption and long-term sustainability as a user tool. This presentation will briefly introduce MAD, HydroDesktop, and the MAD plugin and software development effort.
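The Bayesian conditioning idea behind such inversion can be illustrated with a toy grid-based update of a single parameter; the forward model and all numbers below are invented for illustration and are unrelated to the actual MAD plugin (which uses anchors and multi-type, multi-scale data):

```python
import numpy as np

theta = np.linspace(0.0, 2.0, 201)          # candidate parameter values
prior = np.ones_like(theta) / theta.size    # flat prior

def forward(t):
    """Hypothetical forward model linking parameter to observable."""
    return 3.0 * t + 1.0

obs = np.array([4.1, 3.9, 4.05])            # noisy "field observations"
sigma = 0.2                                 # assumed observation noise

# Gaussian likelihood of all observations for each candidate theta
resid = obs[None, :] - forward(theta)[:, None]
log_like = -0.5 * (resid / sigma) ** 2
posterior = prior * np.exp(log_like.sum(axis=1))
posterior /= posterior.sum()                # normalize to a distribution

theta_map = theta[posterior.argmax()]
print(f"MAP estimate: {theta_map:.2f}")     # near (4.0 - 1.0) / 3.0 = 1.0
```

The output of the inversion is a full probability distribution over the parameter rather than a single calibrated value, which is the key property the abstract highlights.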
An inexpensive Arduino-based LED stimulator system for vision research.
Teikari, Petteri; Najjar, Raymond P; Malkki, Hemi; Knoblauch, Kenneth; Dumortier, Dominique; Gronfier, Claude; Cooper, Howard M
2012-11-15
Light emitting diodes (LEDs) are being used increasingly as light sources in life sciences applications such as vision research, fluorescence microscopy and brain-computer interfacing. Here we present an inexpensive but effective visual stimulator based on LEDs and the open-source Arduino microcontroller prototyping platform. The main design goals of our system were to use off-the-shelf and open-source components as much as possible, and to reduce design complexity so that end-users without advanced electronics skills can use the system. The core of the system is a USB-connected Arduino microcontroller platform, originally designed with an emphasis on ease of use for creating interactive physical computing environments. The pulse-width modulation (PWM) output of the Arduino was used to drive the LEDs, allowing linear light intensity control. The visual stimulator was demonstrated in applications such as murine pupillometry, rodent models for cognitive research, and heterochromatic flicker photometry in human psychophysics. These examples illustrate some of the possible applications that can be easily implemented and that are advantageous for students, educational purposes and universities with limited resources. The LED stimulator system was developed as an open-source project. The software interface was developed in Python, with simplified examples provided for Matlab and LabVIEW. Source code and hardware information are distributed under the GNU General Public License (GPL, version 3). Copyright © 2012 Elsevier B.V. All rights reserved.
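The linear intensity control mentioned above follows from the fact that mean LED output under PWM dimming is proportional to the duty cycle. A small sketch of the duty-value computation that a host-side script might perform before sending values to the board (illustrative only, not the authors' code):

```python
def pwm_duty(relative_intensity, resolution_bits=8):
    """Map a desired relative light intensity (0-1) to a PWM duty value.
    With PWM dimming the mean LED output is linear in duty cycle, so the
    mapping itself is a simple scaling (8-bit by default, as on Arduino)."""
    if not 0.0 <= relative_intensity <= 1.0:
        raise ValueError("relative intensity must be in [0, 1]")
    return round(relative_intensity * (2 ** resolution_bits - 1))

# values one might stream over USB serial to the board (hypothetical framing)
levels = [pwm_duty(i / 4) for i in range(5)]
print(levels)  # [0, 64, 128, 191, 255]
```

Contrast this with analog current dimming, where LED output is nonlinear in drive current and would require a per-device calibration curve.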
Datacube Services in Action, Using Open Source and Open Standards
NASA Astrophysics Data System (ADS)
Baumann, P.; Misev, D.
2016-12-01
Array Databases comprise novel, promising technology for massive spatio-temporal datacubes, extending the SQL paradigm of "any query, anytime" to n-D arrays. On the server side, such queries can be optimized, parallelized, and distributed based on partitioned array storage. The rasdaman ("raster data manager") system, which has pioneered Array Databases, is available in open source on www.rasdaman.org. Its declarative query language extends SQL with array operators that are optimized and parallelized on the server side. The rasdaman engine, which is part of OSGeo Live, is mature and in operational use, with individual databases holding dozens of terabytes. Further, the rasdaman concepts have strongly impacted international Big Data standards in the field, including the forthcoming MDA ("Multi-Dimensional Array") extension to ISO SQL, the OGC Web Coverage Service (WCS) and Web Coverage Processing Service (WCPS) standards, and the forthcoming INSPIRE WCS/WCPS; in both OGC and INSPIRE, rasdaman serves as the WCS Core Reference Implementation. In our talk we present concepts, architecture, operational services, and standardization impact of open-source rasdaman, as well as experiences made.
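To give a flavor of the "any query, anytime" paradigm, the sketch below composes a WCPS-style query and the corresponding WCS request URL. The endpoint, coverage name, and time slice are placeholders (not a live service), and the exact request framing may differ between servers:

```python
from urllib.parse import urlencode

def wcps_request_url(endpoint, query):
    """Build a WCS ProcessCoverages request carrying a WCPS query.
    Endpoint and parameter framing are illustrative assumptions."""
    params = {
        "service": "WCS",
        "version": "2.0.1",
        "request": "ProcessCoverages",
        "query": query,
    }
    return endpoint + "?" + urlencode(params)

# average of a hypothetical temperature datacube over one time slice
query = 'for $c in (AvgLandTemp) return avg($c[ansi("2014-07")])'
url = wcps_request_url("https://example.org/rasdaman/ows", query)
print(url)
```

The point of the declarative form is that the server, not the client, decides how to partition, parallelize, and distribute the array evaluation.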
Understanding User Behavioral Patterns in Open Knowledge Communities
ERIC Educational Resources Information Center
Yang, Xianmin; Song, Shuqiang; Zhao, Xinshuo; Yu, Shengquan
2018-01-01
Open knowledge communities (OKCs) have become popular in the era of knowledge economy. This study aimed to explore how users collaboratively create and share knowledge in OKCs. In particular, this research identified the behavior distribution and behavioral patterns of users by conducting frequency distribution and lag sequential analyses. Some…
Takeoka, Masahiro; Seshadreesan, Kaushik P; Wilde, Mark M
2017-10-13
We consider quantum key distribution (QKD) and entanglement distribution using a single-sender multiple-receiver pure-loss bosonic broadcast channel. We determine the unconstrained capacity region for the distillation of bipartite entanglement and secret key between the sender and each receiver, whenever they are allowed arbitrary public classical communication. A practical implication of our result is that the demonstrated capacity region drastically improves upon the rates achievable using a naive time-sharing strategy, which has been employed in previously demonstrated network QKD systems. We show a simple example of a broadcast QKD protocol overcoming the limit of the point-to-point strategy. Our result is thus an important step toward opening a new framework of network channel-based quantum communication technology.
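For context, the benchmark against which such broadcast protocols are compared can be written down. The formulas below are the well-known point-to-point secret-key capacity of a pure-loss channel (the PLOB bound) and the naive time-sharing rates it implies, not the broadcast capacity region derived in the paper:

```latex
% Point-to-point secret-key capacity of a pure-loss bosonic channel
% of transmissivity \eta (PLOB bound):
K(\eta) = -\log_2(1 - \eta)

% Naive time-sharing among m receivers with time fractions t_i:
R_i^{\mathrm{TS}} = t_i \, K(\eta_i), \qquad \sum_{i=1}^{m} t_i = 1
```

Under time-sharing, serving one receiver necessarily subtracts from the others; the paper's claim is that the broadcast channel's capacity region allows rate tuples well outside this time-sharing simplex.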
Local instability driving extreme events in a pair of coupled chaotic electronic circuits
NASA Astrophysics Data System (ADS)
de Oliveira, Gilson F.; Di Lorenzo, Orlando; de Silans, Thierry Passerat; Chevrollier, Martine; Oriá, Marcos; Cavalcante, Hugo L. D. de Souza
2016-06-01
For a long time, extreme events happening in complex systems, such as financial markets, earthquakes, and neurological networks, were thought to follow power-law size distributions. More recently, evidence suggests that in many systems the largest and rarest events differ from the other ones. They are dragon kings, outliers that make the distribution deviate from a power law in the tail. Understanding the processes that form extreme events, and what circumstances lead to dragon kings rather than a power-law distribution, is an open question, and an important one for assessing whether extreme events will occur too often in a specific system. In the particular system studied in this paper, we show that the rate of occurrence of dragon kings is controlled by the value of a parameter. The system under study is composed of two nearly identical chaotic oscillators which fail to remain in a permanently synchronized state when coupled. We analyze the statistics of the desynchronization events in this specific example of two coupled chaotic electronic circuits and find that modifying a parameter associated with the local instability responsible for the loss of synchronization reduces the occurrence of dragon kings, while preserving the power-law distribution of small- to intermediate-size events with the same scaling exponent. Our results support the hypothesis that dragon kings are caused by local instabilities in the phase space.
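A common first step in such analyses is estimating the power-law tail exponent of the event-size distribution; dragon kings then show up as excess probability mass beyond the fitted tail. The sketch below applies a maximum-likelihood (Hill-type) estimator to synthetic Pareto data and is purely illustrative of the statistics involved, not the authors' procedure:

```python
import numpy as np

# Synthetic power-law (Pareto) "event sizes" via inverse-CDF sampling
rng = np.random.default_rng(42)
alpha_true, x_min, n = 2.5, 1.0, 5000
u = rng.random(n)
x = x_min * u ** (-1.0 / alpha_true)

# MLE / Hill estimator with known x_min:
#   alpha_hat = n / sum(log(x_i / x_min))
alpha_hat = n / np.log(x / x_min).sum()
print(f"estimated tail exponent: {alpha_hat:.2f}")  # close to 2.5
```

In empirical data the delicate part is choosing x_min and testing whether the largest events are consistent with the fitted power law; systematic excess there is the dragon-king signature.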
Halligan, Brian D.; Geiger, Joey F.; Vallejos, Andrew K.; Greene, Andrew S.; Twigger, Simon N.
2009-01-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step by step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center website (http://proteomics.mcw.edu/vipdac). PMID:19358578
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Bing; Lougovski, Pavel; Pooser, Raphael C.
Continuous-variable quantum key distribution (CV-QKD) protocols based on coherent detection have been studied extensively in both theory and experiment. In all existing implementations of CV-QKD, both the quantum signal and the local oscillator (LO) are generated from the same laser and propagate through the insecure quantum channel. This arrangement may open security loopholes and limit the potential applications of CV-QKD. In this paper, we propose and demonstrate a pilot-aided feedforward data recovery scheme that enables reliable coherent detection using a “locally” generated LO. Using two independent commercial laser sources and a spool of 25-km optical fiber, we construct a coherent communication system. The variance of the phase noise introduced by the proposed scheme is measured to be 0.04 rad², which is small enough to enable secure key distribution. This technology opens the door for other quantum communication protocols, such as the recently proposed measurement-device-independent CV-QKD, where independent light sources are employed by different users.
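The feedforward idea can be illustrated with a noiseless toy calculation: estimate the channel phase from a known pilot, then rotate the received data symbol by that estimate. This is a conceptual sketch only; the real scheme must contend with noise, laser drift, and pilots interleaved with quantum signals:

```python
import numpy as np

rng = np.random.default_rng(7)
true_phase = rng.uniform(0, 2 * np.pi)     # unknown relative laser phase

pilot_tx = 1.0 + 0.0j                      # known reference symbol
data_tx = 0.6 + 0.3j                       # stand-in for a quantum signal
pilot_rx = pilot_tx * np.exp(1j * true_phase)
data_rx = data_tx * np.exp(1j * true_phase)

phase_est = np.angle(pilot_rx / pilot_tx)  # feedforward phase estimate
data_corr = data_rx * np.exp(-1j * phase_est)
print(abs(data_corr - data_tx))            # ~0: phase rotation removed
```

In the noiseless case the correction is exact; in practice the residual (here the reported 0.04 rad² variance) is what must stay below the threshold for secure key distillation.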
Halligan, Brian D; Geiger, Joey F; Vallejos, Andrew K; Greene, Andrew S; Twigger, Simon N
2009-06-01
One of the major difficulties for many laboratories setting up proteomics programs has been obtaining and maintaining the computational infrastructure required for the analysis of the large flow of proteomics data. We describe a system that combines distributed cloud computing and open source software to allow laboratories to set up scalable virtual proteomics analysis clusters without the investment in computational hardware or software licensing fees. Additionally, the pricing structure of distributed computing providers, such as Amazon Web Services, allows laboratories or even individuals to have large-scale computational resources at their disposal at a very low cost per run. We provide detailed step-by-step instructions on how to implement the virtual proteomics analysis clusters as well as a list of current available preconfigured Amazon machine images containing the OMSSA and X!Tandem search algorithms and sequence databases on the Medical College of Wisconsin Proteomics Center Web site ( http://proteomics.mcw.edu/vipdac ).
Dynamic federation of grid and cloud storage
NASA Astrophysics Data System (ADS)
Furano, Fabrizio; Keeble, Oliver; Field, Laurence
2016-09-01
The Dynamic Federations project ("Dynafed") enables the deployment of scalable, distributed storage systems composed of independent storage endpoints. While the Uniform Generic Redirector at the heart of the project is protocol-agnostic, we have focused our effort on HTTP-based protocols, including S3 and WebDAV. The system has been deployed on testbeds covering the majority of the ATLAS and LHCb data, and supports geography-aware replica selection. The work done exploits the federation potential of HTTP to build systems that offer uniform, scalable, catalogue-less access to the storage and metadata ensemble and the possibility of seamless integration of other compatible resources such as those from cloud providers. Dynafed can exploit the potential of the S3 delegation scheme, effectively federating on the fly any number of S3 buckets from different providers and applying a uniform authorization to them. This feature has been used to deploy in production the BOINC Data Bridge, which uses the Uniform Generic Redirector with S3 buckets to harmonize the BOINC authorization scheme with the Grid/X509. The Data Bridge has been deployed in production with good results. We believe that the features of a loosely coupled federation of open-protocol-based storage elements open many possibilities of smoothly evolving the current computing models and of supporting new scientific computing projects that rely on massive distribution of data and that would appreciate systems that can more easily be interfaced with commercial providers and can work natively with Web browsers and clients.
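Geography-aware replica selection of the kind a redirector performs can be sketched as choosing the closest live endpoint to the client. The URLs, coordinates, and distance metric below are illustrative assumptions for the sketch, not Dynafed's actual selection logic.

```python
import math

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def pick_replica(client, replicas):
    """Redirect the client to the geographically closest live endpoint.

    replicas : list of (url, (lat, lon), alive) tuples
    """
    live = [(url, loc) for url, loc, alive in replicas if alive]
    if not live:
        raise RuntimeError("no live replica")
    return min(live, key=lambda r: haversine_km(client, r[1]))[0]

# Hypothetical federated endpoints (names and coordinates are made up).
replicas = [
    ("https://cern.example/dav/f", (46.2, 6.1), True),    # near Geneva
    ("https://bnl.example/dav/f", (40.9, -72.9), True),   # Long Island
    ("https://ral.example/dav/f", (51.6, -1.3), False),   # endpoint down
]
best = pick_replica((48.9, 2.4), replicas)  # client near Paris
```

A real federation would also weigh endpoint load and health probes, but distance-plus-liveness already captures the redirect decision.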
XTALOPT: An open-source evolutionary algorithm for crystal structure prediction
NASA Astrophysics Data System (ADS)
Lonie, David C.; Zurek, Eva
2011-02-01
The implementation and testing of XTALOPT, an evolutionary algorithm for crystal structure prediction, is outlined. We present our new periodic displacement (ripple) operator which is ideally suited to extended systems. It is demonstrated that hybrid operators, which combine two pure operators, reduce the number of duplicate structures in the search. This allows for better exploration of the potential energy surface of the system in question, while simultaneously zooming in on the most promising regions. A continuous workflow, which makes better use of computational resources as compared to traditional generation-based algorithms, is employed. Various parameters in XTALOPT are optimized using a novel benchmarking scheme. XTALOPT is available under the GNU Public License, has been interfaced with various codes commonly used to study extended systems, and has an easy to use, intuitive graphical interface.
Program summary
Program title: XTALOPT
Catalogue identifier: AEGX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GPL v2.1 or later [1]
No. of lines in distributed program, including test data, etc.: 36 849
No. of bytes in distributed program, including test data, etc.: 1 149 399
Distribution format: tar.gz
Programming language: C++
Computer: PCs, workstations, or clusters
Operating system: Linux
Classification: 7.7
External routines: QT [2], OpenBabel [3], AVOGADRO [4], SPGLIB [8] and one of: VASP [5], PWSCF [6], GULP [7]
Nature of problem: Predicting the crystal structure of a system from its stoichiometry alone remains a grand challenge in computational materials science, chemistry, and physics.
Solution method: Evolutionary algorithms are stochastic search techniques which use concepts from biological evolution in order to locate the global minimum on their potential energy surface.
Our evolutionary algorithm, XTALOPT, is freely available to the scientific community for use and collaboration under the GNU Public License. Running time: User dependent. The program runs until stopped by the user.
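The search strategy described above, hybrid operators, duplicate rejection, and a continuous (non-generational) workflow, can be caricatured by a generic steady-state evolutionary loop on a toy energy function. Everything here (operators, parameters, the duplicate tolerance) is an illustrative stand-in for XTALOPT's structure-specific machinery, not its actual implementation.

```python
import random

def evolve(energy, bounds, pop_size=20, steps=500, seed=1):
    """Steady-state evolutionary search for a low-energy candidate.

    energy : callable mapping a candidate (list of floats) to a scalar
    bounds : per-coordinate (lo, hi) limits
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(steps):
        a, b = rng.sample(pop, 2)
        # Hybrid operator: crossover followed by a small Gaussian perturbation
        # (a crude stand-in for XTALOPT's periodic-displacement "ripple").
        child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
        child = [min(max(c + rng.gauss(0, 0.1), lo), hi)
                 for c, (lo, hi) in zip(child, bounds)]
        # Reject duplicates, mirroring the duplicate filtering described above.
        if any(sum((c - x) ** 2 for c, x in zip(child, q)) < 1e-6 for q in pop):
            continue
        # Continuous workflow: replace the worst member as soon as a better
        # candidate appears, rather than waiting for a full generation.
        worst = max(range(pop_size), key=lambda i: energy(pop[i]))
        if energy(child) < energy(pop[worst]):
            pop[worst] = child
    return min(pop, key=energy)

best = evolve(lambda x: sum(xi ** 2 for xi in x), [(-5.0, 5.0)] * 3)
```

In the real code the "energy" evaluation is an external relaxation by VASP, PWSCF, or GULP, which is why the continuous workflow matters: finished relaxations feed back into the pool immediately.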
ERIC Educational Resources Information Center
Armbruster, Chris
2008-01-01
Open source, open content and open access are set to fundamentally alter the conditions of knowledge production and distribution. Open source, open content and open access are also the most tangible result of the shift towards e-science and digital networking. Yet, widespread misperceptions exist about the impact of this shift on knowledge…
The Schrödinger–Langevin equation with and without thermal fluctuations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, R., E-mail: roland.katz@subatech.in2p3.fr; Gossiaux, P.B., E-mail: Pol-Bernard.Gossiaux@subatech.in2p3.fr
2016-05-15
The Schrödinger–Langevin equation (SLE) is considered as an effective open quantum system formalism suitable for phenomenological applications involving a quantum subsystem interacting with a thermal bath. We focus on two open issues relative to its solutions: the stationarity of the excited states of the non-interacting subsystem when one considers the dissipation only, and the thermal relaxation toward asymptotic distributions with the additional stochastic term. We first show that a proper application of the Madelung/polar transformation of the wave function leads to a nonzero damping of the excited states of the quantum subsystem. We then study analytically and numerically the SLE's ability to bring a quantum subsystem to the thermal equilibrium of statistical mechanics. To do so, concepts about statistical mixed states and quantum noises are discussed and a detailed analysis is carried out with two kinds of noise and potential. We show that within our assumptions the use of the SLE as an effective open quantum system formalism is possible and discuss some of its limitations.
Cyber-physical geographical information service-enabled control of diverse in-situ sensors.
Chen, Nengcheng; Xiao, Changjiang; Pu, Fangling; Wang, Xiaolei; Wang, Chao; Wang, Zhili; Gong, Jianya
2015-01-23
Realization of open online control of diverse in-situ sensors is a challenge. This paper proposes a Cyber-Physical Geographical Information Service-enabled method for control of diverse in-situ sensors, based on location-based instant sensing of sensors, which provides closed-loop feedback. The method adopts the concepts and technologies of newly developed cyber-physical systems (CPSs) to combine control with sensing, communication, and computation; takes advantage of geographical information services, such as those provided by Tianditu (a basic geographic information service platform in China), and of Sensor Web services to establish geo-sensor applications; and builds well-designed human-machine interfaces (HMIs) to support online and open interactions between human beings and physical sensors through cyberspace. The method was tested with experiments carried out in two geographically distributed scientific experimental fields, the Baoxie Sensor Web Experimental Field in Wuhan City and the Yemaomian Landslide Monitoring Station in the Three Gorges, with three typical sensors chosen as representatives, using the prototype system Geospatial Sensor Web Common Service Platform. The results show that the proposed method is an open, online, closed-loop means of control.
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites can be used to calculate a seismic hazard map, representing the seismic hazard across a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership-class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region.
For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
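The final step of a PSHA calculation, combining per-rupture rates with per-rupture ground-motion exceedance probabilities into a site hazard curve, can be sketched as follows. For illustration each event's ground motion is modeled as lognormal (CyberShake instead derives exceedance probabilities from its simulated seismograms), and the rates and levels below are made up.

```python
import math

def hazard_curve(events, levels):
    """Annual exceedance rate at each ground-motion level.

    events : (annual_rate, median_im, sigma_ln) per rupture, with a
             lognormal intensity-measure distribution assumed per event
    levels : ground-motion levels of interest (e.g. spectral acceleration in g)
    """
    def p_exceed(x, median, sigma):
        # P(IM > x) under a lognormal intensity-measure distribution
        z = (math.log(x) - math.log(median)) / sigma
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    return [sum(rate * p_exceed(x, median, sigma)
                for rate, median, sigma in events)
            for x in levels]

# Two made-up ruptures: a frequent moderate one and a rare strong one.
events = [(1e-2, 0.10, 0.6), (1e-3, 0.45, 0.6)]
levels = [0.05, 0.1, 0.2, 0.4, 0.8]
curve = hazard_curve(events, levels)
```

CyberShake performs this summation over the full rupture forecast (~415,000 events per site), which is why a regional map of hundreds of sites requires hundreds of millions of tasks.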
Architectures for mission control at the Jet Propulsion Laboratory
NASA Technical Reports Server (NTRS)
Davidson, Reger A.; Murphy, Susan C.
1992-01-01
JPL is currently converting to an innovative control center data system: a distributed, open architecture for telemetry delivery that is enabling improved automation, operability, and new technology in mission operations at JPL. The scope of mission control within mission operations is examined. The concepts of a mission control center, and how operability can affect the design of a control center data system, are discussed. Examples of JPL's mission control architecture, data system development, and prototype efforts at the JPL Operations Engineering Laboratory are provided. Strategies for the future of mission control architectures are outlined.
Derived virtual devices: a secure distributed file system mechanism
NASA Technical Reports Server (NTRS)
VanMeter, Rodney; Hotz, Steve; Finn, Gregory
1996-01-01
This paper presents the design of derived virtual devices (DVDs). DVDs are the mechanism used by the Netstation Project to provide secure shared access to network-attached peripherals distributed in an untrusted network environment. DVDs improve input/output efficiency by allowing user processes to perform I/O operations directly from devices without intermediate transfer through the controlling operating system kernel. The security enforced at the device through the DVD mechanism includes resource boundary checking, user authentication, and restricted operations, e.g., read-only access. To illustrate the application of DVDs, we present the interactions between a network-attached disk and a file system designed to exploit the DVD abstraction. We further discuss third-party transfer as a mechanism intended to provide for efficient data transfer in a typical network-attached peripheral (NAP) environment. We show how DVDs facilitate third-party transfer, and provide the security required in a more open network environment.
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
Fully Quantum Fluctuation Theorems
NASA Astrophysics Data System (ADS)
Åberg, Johan
2018-02-01
Systems that are driven out of thermal equilibrium typically dissipate random quantities of energy on microscopic scales. Crooks fluctuation theorem relates the distribution of these random work costs to the corresponding distribution for the reverse process. By an analysis that explicitly incorporates the energy reservoir that donates the energy and the control system that implements the dynamics, we obtain a quantum generalization of Crooks theorem that not only includes the energy changes in the reservoir but also the full description of its evolution, including coherences. Moreover, this approach opens up the possibility for generalizations of the concept of fluctuation relations. Here, we introduce "conditional" fluctuation relations that are applicable to nonequilibrium systems, as well as approximate fluctuation relations that allow for the analysis of autonomous evolution generated by global time-independent Hamiltonians. We furthermore extend these notions to Markovian master equations, implicitly modeling the influence of the heat bath.
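For orientation, the classical Crooks relation that this work generalizes reads

```latex
\frac{P_{+}(w)}{P_{-}(-w)} \;=\; e^{\beta\,(w - \Delta F)},
\qquad \beta = \frac{1}{k_{B} T},
```

where \(P_{+}\) and \(P_{-}\) are the work distributions of the forward and reverse processes and \(\Delta F\) is the free-energy difference between the initial and final equilibrium states. The quantum generalization described above promotes the work-donating reservoir and the control system to explicit quantum degrees of freedom, so the relation constrains their full evolution, coherences included, rather than only the scalar work statistics.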
Scheduling policies of intelligent sensors and sensor/actuators in flexible structures
NASA Astrophysics Data System (ADS)
Demetriou, Michael A.; Potami, Raffaele
2006-03-01
In this note, we revisit the problem of actuator/sensor placement in large civil infrastructures and flexible space structures within the context of spatial robustness. The positioning of these devices becomes more important in systems employing wireless sensor and actuator networks (WSAN) for improved control performance and for rapid failure detection. The ability of the sensing and actuating devices to possess the property of spatial robustness results in reduced control energy, and therefore the spatial distribution of disturbances is integrated into the location optimization measures. In our studies, the structure under consideration is a flexible plate clamped at all sides. First, we consider the case of sensor placement, and the optimization scheme attempts to produce those locations that minimize the effects of the spatial distribution of disturbances on the state estimation error; thus the sensor locations produce state estimators with minimized disturbance-to-error transfer function norms. A two-stage optimization procedure is employed whereby one first considers the open-loop system and finds the spatial distribution of disturbances that produces the maximal effects on the entire open-loop state. Once this "worst" spatial distribution of disturbances is found, the optimization scheme subsequently finds the locations that produce state estimators with minimum transfer function norms. In the second part, we consider collocated actuator/sensor pairs, and the optimization scheme produces those locations that result in compensators with the smallest norms of the disturbance-to-state transfer functions. Going a step further, an intelligent control scheme is presented which, at each time interval, activates a subset of the actuator/sensor pairs in order to provide robustness against spatiotemporally moving disturbances and to minimize power consumption by keeping some sensor/actuator pairs in sleep mode.
Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations
NASA Technical Reports Server (NTRS)
Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique for solving sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications: ordering significantly improves overall performance on both distributed and distributed shared-memory systems; cache reuse may be more important than reducing communication; it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
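The unpreconditioned CG iteration at the heart of the study can be sketched as follows. The paper's runs use ILU(0)-preconditioned, parallel, reordered variants; this serial NumPy version, on a small made-up SPD system, only illustrates the base algorithm.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A (plain CG)."""
    x = np.zeros_like(b)
    r = b - A @ x               # residual
    p = r.copy()                # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # next A-conjugate direction
        rs = rs_new
    return x

# Small SPD test system: M^T M + 50 I is symmetric positive definite.
rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))
A = M.T @ M + 50 * np.eye(50)
b = rng.normal(size=50)
x = conjugate_gradient(A, b)
```

For the ill-conditioned systems the paper targets, a preconditioner such as ILU(0) would be applied to the residual at each iteration; the ordering strategies studied change the sparsity pattern that both the matrix-vector product and the ILU factorization see.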
Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mather, Barry A; Hunsberger, Randolph J
This paper documents the first phase of an investigation into reducing runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify, and further mitigate, errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run times of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.
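The temporal-decomposition idea can be sketched as follows: the simulation horizon is split into chunks that run independently, and each chunk is preceded by a short warm-up window that stands in for the paper's controls initialization, so that slow controller states start near their correct values and the boundary errors shrink. The toy "simulation" (a single first-order state) and all numbers are illustrative, not OpenDSS.

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(load):
    """Toy stand-in for a quasi-static time-series run: one slow internal
    state (think of a regulator or inverter control) tracking a profile."""
    out, s = [], 0.0
    for p in load:
        s = 0.9 * s + 0.1 * p
        out.append(s)
    return out

def simulate_parallel(load, n_chunks=4, warmup=20):
    """Temporal decomposition: independent chunks, each with a warm-up
    window whose samples are discarded (controls re-initialization)."""
    size = -(-len(load) // n_chunks)              # ceiling division
    spans = [(max(0, i * size - warmup),          # warm-up start
              i * size,                           # chunk start
              min(len(load), (i + 1) * size))     # chunk end
             for i in range(n_chunks)]

    def run(span):
        w, lo, hi = span
        return simulate(load[w:hi])[lo - w:]      # drop warm-up samples

    with ThreadPoolExecutor() as pool:            # real runs: processes/nodes
        parts = list(pool.map(run, spans))
    return [y for part in parts for y in part]

load = [1.0] * 100 + [3.0] * 100                  # step change mid-horizon
ref = simulate(load)                              # monolithic reference run
par = simulate_parallel(load)
max_err = max(abs(a - b) for a, b in zip(ref, par))
naive_err = max(abs(a - b) for a, b in
                zip(ref, simulate_parallel(load, warmup=0)))
```

With the warm-up window the boundary error decays by the controller's time constant before any kept sample, which mirrors the paper's observation that controls initialization reduces, but does not eliminate, transition errors.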
NASA Technical Reports Server (NTRS)
Murphy, James R.; Otto, Neil M.
2017-01-01
NASA's Unmanned Aircraft Systems Integration in the National Airspace System Project is conducting human-in-the-loop simulations and flight testing intended to reduce barriers associated with enabling routine airspace access for unmanned aircraft. The primary focus of these tests is the interaction of the unmanned aircraft pilot with the display of detect-and-avoid alerting and guidance information. The project's integrated test and evaluation team was charged with developing the test infrastructure. As with any development effort, compromises in the underlying system architecture and design were made to allow for the rapid prototyping and open-ended nature of the research. In order to accommodate these design choices, a distributed test environment was developed incorporating Live, Virtual, Constructive (LVC) concepts. The LVC components form the core infrastructure supporting simulation of UAS operations by integrating live and virtual aircraft in a realistic air traffic environment. This LVC infrastructure enables efficient testing by leveraging the use of existing assets distributed across multiple NASA Centers. Using standard LVC concepts enables future integration with existing simulation infrastructure.
Drinking water for dairy cattle: always a benefit or a microbiological risk?
Van Eenige, M J E M; Counotte, G H M; Noordhuizen, J P T M
2013-02-01
Drinking water can be considered an essential nutrient for dairy cattle. However, because it comes from different sources, its chemical and microbiological quality does not always reach accepted standards. Moreover, water quality is not routinely assessed on dairy farms. The microecology of drinking water sources and distribution systems is rather complex and still not fully understood. Water quality is adversely affected by the formation of biofilms in distribution systems, which form a persistent reservoir for potentially pathogenic bacteria. Saprophytic microorganisms associated with such biofilms interact with organic and inorganic matter in water, with pathogens, and even with each other. In addition, the presence of biofilms in water distribution systems makes cleaning and disinfection difficult and sometimes impossible. This article describes the complex dynamics of microorganisms in water distribution systems. Water quality is diminished primarily as a result of faecal contamination and rarely as a result of putrefaction in water distribution systems. The design of such systems (with/without anti-backflow valves and pressure) and the materials used (polyethylene enhances biofilm formation; stainless steel does not) affect the quality of water they provide. The best option is an open, funnel-shaped galvanized drinking trough, possibly with a pressure system, air inlet, and anti-backflow valves. A poor microbiological quality of drinking water may adversely affect feed intake, and herd health and productivity. In turn, public health may be affected because cattle can become a reservoir of microorganisms hazardous to humans, such as some strains of E. coli, Yersinia enterocolitica, and Campylobacter jejuni. A better understanding of the biological processes in water sources and distribution systems, and of the viability of microorganisms in these systems, may contribute to better advice on herd health and productivity at a farm level.
Certain on-farm risk factors for water quality have been identified. A practical approach will facilitate the control and management of these risks, and thereby improve herd health and productivity.
Open exchange of scientific knowledge and European copyright: The case of biodiversity information
Egloff, Willi; Patterson, David J.; Agosti, Donat; Hagedorn, Gregor
2014-01-01
Background. The 7th Framework Programme for Research and Technological Development is helping the European Union to prepare for an integrative system for intelligent management of biodiversity knowledge. The infrastructure that is envisaged and that will be further developed within the Programme “Horizon 2020” aims to provide open and free access to taxonomic information to anyone with a requirement for biodiversity data, without the need for individual consent of other persons or institutions. Open and free access to information will foster the re-use and improve the quality of data, will accelerate research, and will promote new types of research. Progress towards the goal of free and open access to content is hampered by numerous technical, economic, sociological, legal, and other factors. The present article addresses barriers to the open exchange of biodiversity knowledge that arise from European laws, in particular European legislation on copyright and database protection rights. We present a legal point of view as to what will be needed to bring distributed information together and facilitate its re-use by data mining, integration into semantic knowledge systems, and similar techniques. We address exceptions and limitations of copyright or database protection within Europe, and we point to the importance of data use agreements. We illustrate how exceptions and limitations have been transformed into national legislations within some European states to create inconsistencies that impede access to biodiversity information.
This is a major stumbling block to international collaboration and is an impediment to the open exchange of biodiversity knowledge. Such differences should be removed by unifying exceptions and limitations for research purposes in a binding, Europe-wide regulation. PMID:25009418
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test with benchmark codes and a cloud modeling code.
Katzman, G L; Morris, D; Lauman, J; Cochella, C; Goede, P; Harnsberger, H R
2001-06-01
To foster a community-supported evaluation process for open-source digital teaching file (DTF) development and maintenance. The mechanisms used to support this process will include standard web browsers, web servers, forum software, and custom additions to the forum software to potentially enable a mediated voting protocol. The web server will also serve as a focal point for beta and release software distribution, which is the desired end goal of this process. We foresee that www.mdtf.org will provide for widespread distribution of open-source DTF software that will include function and interface design decisions from community participation on the website forums.
Triplet correlation in sheared suspensions of Brownian particles
NASA Astrophysics Data System (ADS)
Yurkovetsky, Yevgeny; Morris, Jeffrey F.
2006-05-01
Triplet microstructure of sheared concentrated suspensions of Brownian monodisperse spherical particles is studied by sampling realizations of a three-dimensional unit cell subject to periodic boundary conditions obtained in accelerated Stokesian dynamics simulations. Triplets are regarded as a bridge between particle pairs and many-particle clusters thought responsible for shear thickening. Triplet-correlation data for weakly sheared near-equilibrium systems display an excluded volume effect of accumulated correlation for equilateral contacting triplets. As the Péclet number increases, there is a change in the preferred contacting isosceles triplet configuration, away from the "closed" triplet where the particles lie at the vertices of an equilateral triangle and toward the fully extended rod-like linear arrangement termed the "open" triplet. This transition is most pronounced for triplets lying in the plane of shear, where the open triplets' angular orientation with respect to the flow is very similar to that of a contacting pair. The correlation of suspension rheology to observed structure signals onset of larger clusters. An investigation of the predictive ability of Kirkwood's superposition approximation (KSA) provides valuable insights into the relationship between the pair and triplet probability distributions and helps achieve a better and more detailed understanding of the interplay of the pair and triplet dynamics. 
The KSA is seen to predict the shape of isosceles contacting-triplet nonequilibrium distributions in the plane of shear more successfully than it does for similar configurations in equilibrium hard-sphere systems. In the sheared case, the discrepancies in the magnitudes of the distribution peaks are attributable to two interaction effects: first, pair-averaged trajectories and particle locations change in response to real ("hard") and probabilistically favored ("soft") neighboring excluded volumes; second, for open triplets, the correlation of the most widely separated pair changes because of the fixed presence of the particle in the middle.
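The closure being tested here is the standard form of Kirkwood's superposition approximation (the abstract does not spell out the equation itself): the triplet distribution is approximated as a product of pair correlation functions,

```latex
g^{(3)}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)\;\approx\;
g^{(2)}(r_{12})\,g^{(2)}(r_{13})\,g^{(2)}(r_{23}),
\qquad r_{ij}=\lvert\mathbf{r}_i-\mathbf{r}_j\rvert ,
```

so the discrepancies discussed above are precisely the configurations where the measured \(g^{(3)}\) departs from this pairwise product.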
Magma Vesiculation and Infrasonic Activity in Open Conduit Volcanoes
NASA Astrophysics Data System (ADS)
Colo', L.; Baker, D. R.; Polacci, M.; Ripepe, M.
2007-12-01
At persistently active basaltic volcanoes such as Stromboli, Italy, degassing of the magma column can occur under "passive" and "active" conditions. Passive degassing is generally understood as a continuous, non-explosive release of gas, mainly from the open summit vents and subordinately from the conduit walls or from fumaroles. In passive degassing the gas is generally in equilibrium with atmospheric pressure, while in active degassing the gas approaches the surface in an overpressured state. During active degassing (or puffing), the magma column is affected by the bursting of small gas bubbles at the magma free surface; as a consequence, the active degassing process generates infrasonic signals. We postulate in this study that the rate and amplitude of infrasonic activity are linked to the rate and volume of the overpressured gas bubbles generated in the magma column. Our hypothesis is that infrasound is controlled by the quantity of gas exsolved in the magma column and that, therefore, a relationship between infrasound and the vesiculation process should exist. To test this hypothesis, we compared infrasonic records with bubble size distributions of scoria samples from normal explosive activity at Stromboli, processed via X-ray tomography. We observed that the cumulative distributions for both data sets follow similar power laws, indicating that both processes are controlled by a scale-invariant phenomenon. However, the power law is not stable but changes between scoria clasts, reflecting whether gas-bubble nucleation dominates over bubble coalescence or vice versa. The power law for the infrasonic activity also changes from time to time, suggesting that infrasound may likewise be controlled by varying gas exsolution within the magma column. Changes in the power-law distributions are the same for infrasound and scoria, indicating that they are linked to the same process acting in the magmatic system. 
We suggest that monitoring infrasound on an active volcano could represent an alternative way to monitor the vesiculation process of an open conduit system.
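The power-law fits central to this comparison can be illustrated with a minimal sketch. Here, synthetic "bubble volumes" drawn from a Pareto distribution stand in for the tomography-derived bubble size data (all values are hypothetical, not from the study), and the tail exponent is recovered with the standard maximum-likelihood (Hill-type) estimator:

```python
import math
import random

# Synthetic stand-in for tomography-derived bubble volumes (hypothetical):
# a Pareto distribution with minimum size 1 and tail exponent alpha_true,
# i.e. P(V > v) = v**(-alpha_true) for v >= 1.
random.seed(42)
alpha_true = 1.5
volumes = [random.paretovariate(alpha_true) for _ in range(20000)]

# Maximum-likelihood estimate of the power-law exponent (with x_min = 1):
# alpha_hat = n / sum(ln v_i). A shift in this exponent between samples is
# the kind of signal the abstract associates with nucleation-dominated
# versus coalescence-dominated vesiculation.
alpha_hat = len(volumes) / sum(math.log(v) for v in volumes)
```

Comparing such exponents across scoria clasts, or across windows of infrasonic event amplitudes, is the comparison the study describes; in this toy case the estimator simply recovers the exponent used to generate the data.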
Differential Distributions of Synechococcus Subgroups Across the California Current System
Paerl, Ryan W.; Johnson, Kenneth S.; Welsh, Rory M.; Worden, Alexandra Z.; Chavez, Francisco P.; Zehr, Jonathan P.
2011-01-01
Synechococcus is an abundant marine cyanobacterial genus composed of different populations that vary physiologically. Synechococcus narB gene sequences (encoding nitrate reductase in cyanobacteria) obtained previously from isolates and the environment (e.g., North Pacific Gyre Station ALOHA, Hawaii, or Monterey Bay, CA, USA) were used to develop quantitative PCR (qPCR) assays. These qPCR assays were used to quantify populations from specific narB phylogenetic clades across the California Current System (CCS), a region composed of dynamic zones between a coastal upwelling zone and the oligotrophic Pacific Ocean. The targeted populations (narB subgroups) had different biogeographic patterns across the CCS, which appear to be driven by environmental conditions. Subgroups C_C1, D_C1, and D_C2 were abundant in coastal-upwelling to coastal-transition zone waters with relatively high to intermediate ammonium, nitrate, and chl a concentrations. Subgroups A_C1 and F_C1 were most abundant in coastal-transition zone waters with intermediate nutrient concentrations. E_O1 and G_O1 were most abundant at different depths of oligotrophic open-ocean waters (either in the upper mixed layer or just below it). The distributions of E_O1, A_C1, and F_C1 differed from those of the other narB subgroups; these subgroups likely possess unique ecologies enabling them to be most abundant in waters between coastal and open-ocean waters. Different CCS zones possessed distinct Synechococcus communities: core California Current water had low numbers of narB subgroups relative to counted Synechococcus cells, whereas coastal-transition waters contained high abundances of Synechococcus cells and high total numbers of narB subgroups. The presented biogeographic data provide insight into the distributions and ecologies of Synechococcus present in an eastern boundary current system. PMID:21833315
NASA Astrophysics Data System (ADS)
Nguyen, L.; Chee, T.; Palikonda, R.; Smith, W. L., Jr.; Bedka, K. M.; Spangenberg, D.; Vakhnin, A.; Lutz, N. E.; Walter, J.; Kusterer, J.
2017-12-01
Cloud computing offers new opportunities for large-scale scientific data producers to utilize Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) IT resources to process and deliver data products in an operational environment where timely delivery, reliability, and availability are critical. The NASA Langley Research Center Atmospheric Science Data Center (ASDC) is building and testing a private and a public-facing cloud for users in the Science Directorate to utilize as an everyday production environment. The NASA SatCORPS (Satellite ClOud and Radiation Property Retrieval System) team processes and derives near-real-time (NRT) global cloud products from operational geostationary (GEO) satellite imager datasets. To deliver these products, we will utilize the public-facing cloud and OpenShift to deploy a load-balanced webserver for data storage, access, and dissemination. The OpenStack private cloud will host data ingest and computational capabilities for SatCORPS processing. This paper will discuss the SatCORPS migration toward, and usage of, the ASDC Cloud Services in an operational environment. Detailed lessons learned from the use of prior cloud providers, specifically the Amazon Web Services (AWS) GovCloud and the Government Cloud administered by the Langley Managed Cloud Environment (LMCE), will also be discussed.
NASA Astrophysics Data System (ADS)
Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda
2017-08-01
This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden increases, and their spatial extension, in the distribution of shrinkage-induced strains measured by the sensor, which is based on pulse pre-pump Brillouin optical time-domain analysis. The distributed sensor is demonstrated to be effective in detecting delamination openings from microns to hundreds of microns. A three-dimensional finite element model with experimental material properties is proposed to understand the complete delamination process measured by the distributed sensor. The model is validated using the distributed sensor data. The finite element model, with cohesive elements for the overlay-substrate interface, can predict the complete delamination process.
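The detection idea can be sketched minimally: a delamination edge appears as a sudden jump in the distributed strain profile, so a simple threshold on the point-to-point strain increase locates the opening. The profile values and threshold below are illustrative assumptions, not numbers from the paper:

```python
def detect_delamination(strain, jump_threshold=200.0):
    """Return indices where strain rises abruptly between neighboring
    sensing points (a proxy for a delamination edge along the fiber)."""
    return [i for i in range(1, len(strain))
            if strain[i] - strain[i - 1] > jump_threshold]

# Hypothetical distributed-strain profile (microstrain) along the overlay:
# a localized region of elevated strain mimics a delamination opening.
profile = [100.0] * 30 + [600.0] * 6 + [100.0] * 14
edges = detect_delamination(profile)  # rising edge at sensing point 30
```

A real BOTDA trace would be noisy and continuously sampled, so in practice the threshold would be applied to a smoothed profile; the sketch only shows the "sudden increase" criterion the abstract describes.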
NASA Astrophysics Data System (ADS)
Burnett, M.
2010-12-01
One topic that is beginning to influence the systems that support these goals is Information Technology (IT) security. Insecure systems are vulnerable to increasing attacks and other negative consequences; sponsoring agencies are correspondingly responding with more refined policies and more stringent security requirements. These affect how EO systems can meet the goals of data and service interoperability and harmonization through open access, transformation, and visualization services. Contemporary systems, including the vision of a system-of-systems (such as GEOSS, the Global Earth Observation System of Systems), utilize technologies that support a distributed, global, net-centric environment. These types of systems rely heavily on open systems, web services, shared infrastructure, and data standards. The broader IT industry has developed and used these technologies in its business- and mission-critical systems for many years, and the industry and its customers have learned the importance of protecting their assets and resources (computing and information) as they have been forced to respond to an ever-increasing number of increasingly complex illegitimate “attackers”. This presentation will offer an overview of work done by the CEOS WGISS organization in summarizing security threats, the challenges of responding to them, and the current state of the practice within the EO community.
SeeStar: an open-source, low-cost imaging system for subsea observations
NASA Astrophysics Data System (ADS)
Cazenave, F.; Kecy, C. D.; Haddock, S.
2016-02-01
Scientists and engineers at the Monterey Bay Aquarium Research Institute (MBARI) have collaborated to develop SeeStar, a modular, lightweight, self-contained, low-cost subsea imaging system for short- to long-term monitoring of marine ecosystems. SeeStar is composed of separate camera, battery, and LED lighting modules. Two versions of the system exist: one rated to 300 meters depth, the other rated to 1500 meters. Users can download plans and instructions from an online repository and build the system using low-cost off-the-shelf components. The system utilizes an easily programmable Arduino-based controller and the widely distributed GoPro camera. It can take still images and video in a variety of deployment scenarios and can be operated either autonomously or tethered on a range of platforms, including ROVs, AUVs, landers, piers, and moorings. Several SeeStar systems have been built and used for scientific studies and engineering tests. The long-term goal of this project is a widely distributed marine imaging network spanning thousands of locations, to develop baselines of biological information.
A Distributed Signature Detection Method for Detecting Intrusions in Sensor Systems
Kim, Ilkyu; Oh, Doohwan; Yoon, Myung Kuk; Yi, Kyueun; Ro, Won Woo
2013-01-01
Sensor nodes in wireless sensor networks are easily exposed in open and unprotected regions. A security solution is strongly recommended to protect networks against malicious attacks. Although many intrusion detection systems have been developed, most are difficult to implement on sensor nodes owing to limited computation resources. To address this problem, we develop a novel distributed network intrusion detection system based on the Wu–Manber algorithm. In the proposed system, the algorithm is divided into two steps: the first step is dedicated to a sensor node, and the second step is assigned to a base station. In addition, the first step is modified to achieve efficient performance under limited computation resources. We conduct evaluations with random string sets and actual intrusion signatures to show the performance improvement of the proposed method. The proposed method achieves a speedup factor of 25.96 and reduces packet transmissions to the base station by 43.94% compared with the previously proposed method. The system achieves efficient utilization of the sensor nodes and provides a structural basis for cooperative systems among the sensors. PMID:23529146
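The two-step split can be sketched as follows. This is not the authors' implementation; it is a toy illustration of the general idea of a cheap first-stage filter on the sensor node (here a 2-byte prefix check, loosely in the spirit of Wu-Manber's block-based tables) followed by exact signature matching at the base station. The signatures and payloads are made up:

```python
# Hypothetical intrusion signatures (illustrative only).
SIGNATURES = [b"attack", b"exploit", b"malware"]

# Stage 1 (sensor node): cheap filter using 2-byte signature prefixes.
# The node forwards a packet only if some prefix occurs in the payload,
# which is what saves radio transmissions to the base station.
PREFIXES = {sig[:2] for sig in SIGNATURES}

def sensor_stage(payload: bytes) -> bool:
    for i in range(len(payload) - 1):
        if payload[i:i + 2] in PREFIXES:
            return True   # suspicious: forward to base station
    return False          # clean: drop locally

# Stage 2 (base station): exact matching, run only on forwarded packets.
def base_station_stage(payload: bytes) -> list:
    return [sig for sig in SIGNATURES if sig in payload]
```

The transmission savings reported in the abstract correspond to packets rejected at stage 1 that never reach the radio; the heavier exact match runs only on the small fraction that passes the filter.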
NASA Astrophysics Data System (ADS)
Dib, Alain; Kavvas, M. Levent
2018-03-01
The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
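For contrast with the FPE approach, the Monte Carlo side of the comparison is easy to sketch. Assuming, hypothetically, a uniform uncertainty on Manning's roughness coefficient n and steady uniform flow, the ensemble statistics of the velocity follow from repeated sampling; the FPE methodology obtains such statistics from a single solve of the evolution equation for the probability density. All numbers below are illustrative:

```python
import math
import random

def manning_velocity(n, hydraulic_radius=1.0, slope=0.001):
    """Steady uniform-flow velocity from Manning's equation (SI units)."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(slope)

# Monte Carlo over an assumed uniform uncertainty in the roughness
# coefficient n ~ U(0.02, 0.04); each sample is one "realization" of the
# channel, and the ensemble mean/variance emerge from the sample set.
random.seed(0)
samples = [manning_velocity(random.uniform(0.02, 0.04))
           for _ in range(50_000)]

mean_v = sum(samples) / len(samples)
var_v = sum((v - mean_v) ** 2 for v in samples) / len(samples)
```

With E[1/n] = 50 ln 2 over this interval, the ensemble mean velocity is about 1.10 m/s. The point of the abstract is that the FPE route delivers the full time-space evolving density of the flow variables in one deterministic computation instead of tens of thousands of such samples.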
Thermal control of low-pressure fractionation processes. [in basaltic magma solidification
NASA Technical Reports Server (NTRS)
Usselman, T. M.; Hodge, D. S.
1978-01-01
Thermal models detailing the solidification paths of shallow basaltic magma chambers (both open and closed systems) were calculated using finite-difference techniques. The total solidification times for closed chambers are comparable to previously published calculations; however, the temperature-time paths are not. These paths depend on the phase relations and the crystallinity of the system, because both affect the manner in which the latent heat of crystallization is distributed. In open systems, where a chamber would be periodically replenished with additional parental liquid, calculations indicate a strong possibility that a steady-state temperature interval is achieved near a major phase boundary. In these cases it is straightforward to analyze fractionation models of the basaltic liquid evolution and their corresponding cumulate sequences. This steady thermal fractionating state can be invoked to explain large volumes of erupted basalts of similar composition over long time periods from the same volcanic center, as well as some rhythmically layered basic cumulate sequences.
Falcão-Reis, Filipa; Correia, Manuel E
2010-01-01
With the advent of more sophisticated and comprehensive healthcare information systems, system builders are becoming more interested in patient interaction and in what patients can do to help improve their own health care. Information systems nowadays play a crucial and fundamental role in hospital workflows, thus providing great opportunities to introduce and improve upon "patient empowerment" processes for the personalization and management of Electronic Health Records (EHRs). In this paper, we present generic patient privacy control mechanisms and scenarios based on the Extended OpenID (eOID), a user-centric digital identity provider previously developed by our group, which leverages a secured OpenID 2.0 infrastructure with the recently released Portuguese Citizen Card (CC) for secure authentication in a distributed health information environment. eOID also takes advantage of OAuth assertion-based mechanisms to implement patient-controlled, secure, qualified role-based access to the patient's EHR by third parties.
Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob
2013-01-01
We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes—neural connectivity maps of the brain—using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems—reads to parallel disk arrays and writes to solid-state storage—to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
Design of an elemental analysis system for CELSS research
NASA Technical Reports Server (NTRS)
Schwartzkopf, Steven H.
1987-01-01
The results of experiments conducted with higher plants in tightly sealed growth chambers provide definite evidence that the physical closure of a chamber has significant effects on many aspects of a plant's biology. One of these effects is seen in the change in rates of uptake, distribution, and re-release of nutrient elements by the plant (mass balance). Experimental data indicate that these rates differ from those recorded for plants grown in open-field agriculture or in open growth chambers. Since higher plants are a crucial component of a controlled ecological life support system (CELSS), it is important that the consequences of these rate differences be understood with regard to the growth and yield of the plants. A description of a system for elemental analysis that can be used to monitor the mass balance of nutrient elements in CELSS experiments is given. Additionally, data on the uptake of nutrient elements by higher plants grown in a growth chamber are presented.
Rnomads: An R Interface with the NOAA Operational Model Archive and Distribution System
NASA Astrophysics Data System (ADS)
Bowman, D. C.; Lees, J. M.
2014-12-01
The National Oceanic and Atmospheric Administration Operational Model Archive and Distribution System (NOMADS) facilitates rapid delivery of real time and archived environmental data sets from multiple agencies. These data are distributed free to the scientific community, industry, and the public. The rNOMADS package provides an interface between NOMADS and the R programming language. Like R itself, rNOMADS is open source and cross platform. It utilizes server-side functionality on the NOMADS system to subset model outputs for delivery to client R users. There are currently 57 real time and 10 archived models available through rNOMADS. Atmospheric models include the Global Forecast System and North American Mesoscale. Oceanic models include WAVEWATCH III and U. S. Navy Operational Global Ocean Model. rNOMADS has been downloaded 1700 times in the year since it was released. At the time of writing, it is being used for wind and solar power modeling, climate monitoring related to food security concerns, and storm surge/inundation calculations, among others. We introduce this new package and show how it can be used to extract data for infrasonic waveform modeling in the atmosphere.
Data Access Tools And Services At The Goddard Distributed Active Archive Center (GDAAC)
NASA Technical Reports Server (NTRS)
Pham, Long; Eng, Eunice; Sweatman, Paul
2003-01-01
As one of the largest providers of Earth Science data from the Earth Observing System, the GDAAC provides the latest data from the Moderate Resolution Imaging Spectroradiometer (MODIS), Atmospheric Infrared Sounder (AIRS), and Solar Radiation and Climate Experiment (SORCE) data products via the GDAAC's data pool (50 TB of disk cache). In order to make this huge volume of data more accessible to the public and science communities, the GDAAC offers multiple data access tools and services: the Open Source Project for Network Data Access Protocol (OPeNDAP), the Grid Analysis and Display System (GrADS/DODS) server (GDS), the Live Access Server (LAS), the OpenGIS Web Map Server (WMS), and Near Archive Data Mining (NADM). The objective is to assist users in electronically retrieving a smaller, usable portion of data for further analysis. The OPeNDAP server, formerly known as the Distributed Oceanographic Data System (DODS), allows the user to retrieve data without worrying about the data format. OPeNDAP is capable of server-side subsetting of HDF, HDF-EOS, netCDF, JGOFS, ASCII, DSP, FITS, and binary data formats. The GrADS/DODS server is capable of serving the same data formats as OPeNDAP. GDS has an additional feature of server-side analysis: users can analyze the data on the server, thereby decreasing the computational load on their client systems. The LAS is a flexible server that allows users to graphically visualize data on the fly, to request different file formats, and to compare variables from distributed locations. Users of LAS have the option to use other available graphics viewers such as IDL, Matlab, or GrADS. WMS is based on OPeNDAP for serving geospatial information. WMS supports the OpenGIS protocol to provide data in GIS-friendly formats for analysis and visualization. NADM is another access point to the GDAAC's data pool. NADM gives users the capability to use a browser to upload their C, FORTRAN, or IDL algorithms, test the algorithms, and mine data in the data pool. 
With NADM, the GDAAC provides an environment physically close to the data source. NADM will benefit users who mine data or apply data-reduction algorithms by reducing large volumes of data before transmission over the network to the user.
Weiqi games as a tree: Zipf's law of openings and beyond
NASA Astrophysics Data System (ADS)
Xu, Li-Gong; Li, Ming-Xia; Zhou, Wei-Xing
2015-06-01
Weiqi is one of the most complex board games played by two persons. The placement strategies adopted by Weiqi players are often used as analogies for the philosophy of human wars. In contrast to Western chess, Weiqi games are less studied by academics, partially because Weiqi is popular only in East Asia, especially in China, Japan, and Korea. Here, we propose to construct a directed tree from an extensive database of Weiqi games and perform a quantitative analysis of the Weiqi tree. We find that the popularity distribution of Weiqi openings with the same number of moves has a power-law tail whose exponent increases with the number of moves. Intriguingly, the superposition of the popularity distributions of Weiqi openings with a number of moves not greater than a given value also has a power-law tail whose exponent increases with that number, and the superposed distribution approaches the Zipf law. These findings match those for chess and support the conjecture that the popularity distribution of board game openings follows the Zipf law with a universal exponent. We also find that the distribution of out-degrees has a power-law form, the distribution of branching ratios has a very complicated pattern, and the distribution of uniqueness scores, defined by the path lengths from the root vertex to the leaf vertices, exhibits a unimodal shape. Our work provides a promising direction for the study of the decision-making process of Weiqi playing from the perspective of directed branching trees.
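The popularity counts behind the Zipf-law claim can be sketched with a toy move database. The game records below are invented; the point is only the construction: group games by their first k moves, count each opening, and examine the rank-frequency list (whose log-log tail, for a real database, is what the authors find to approach a power law):

```python
from collections import Counter

# Toy database of move sequences (board coordinates are illustrative).
games = [
    ("Q16", "D4", "Q3"), ("Q16", "D4", "C16"), ("Q16", "D4", "Q3"),
    ("D4", "Q16", "D16"), ("Q16", "D4", "Q3"), ("D4", "Q16", "Q3"),
    ("Q16", "C4", "Q3"), ("Q16", "D4", "C16"),
]

def opening_popularity(games, k):
    """Count how often each k-move opening occurs across the database.
    Each distinct opening is a node at depth k of the directed game tree."""
    return Counter(tuple(g[:k]) for g in games)

# Rank-frequency list for 2-move openings; on a large database the slope
# of this list on log-log axes estimates the Zipf/power-law exponent.
pop2 = opening_popularity(games, 2)
freqs = [count for _, count in pop2.most_common()]
```

Repeating the count for each k, and for all openings of length at most k, gives exactly the per-depth and superposed distributions the abstract analyzes.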
Flexible distributed architecture for semiconductor process control and experimentation
NASA Astrophysics Data System (ADS)
Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.
1997-01-01
Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD-interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: the specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web-based) clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket-based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers, independent of hardware or software platform.
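The "predefined set of TCP/IP socket based messages" is the glue of this architecture. The abstract does not give the actual message formats, so the following is only a generic sketch of one common way to implement such a protocol: a length-prefixed frame carrying a small structured body, with the message type and payload names invented for illustration:

```python
import json
import struct

def encode_message(msg_type: str, payload: dict) -> bytes:
    """Frame a controller message: 4-byte big-endian length + JSON body.
    The length prefix lets a receiver read exactly one message per frame
    from a TCP stream."""
    body = json.dumps({"type": msg_type, "payload": payload}).encode("utf-8")
    return struct.pack(">I", len(body)) + body

def decode_message(data: bytes):
    """Inverse of encode_message; returns (msg_type, payload)."""
    (length,) = struct.unpack(">I", data[:4])
    body = json.loads(data[4:4 + length].decode("utf-8"))
    return body["type"], body["payload"]

# Round trip of a hypothetical cell-controller command to an equipment host.
frame = encode_message("SET_RECIPE", {"tool": "AME5000", "step": 3})
msg_type, payload = decode_message(frame)
```

Because every component speaks the same framed format over a socket, each controller can run on any machine and platform, which is the interoperability property the architecture description emphasizes.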
Pietrobon, Ricardo; Lima, Raquel; Shah, Anand; Jacobs, Danny O; Harker, Matthew; McCready, Mariana; Martins, Henrique; Richardson, William
2007-01-01
Background: Studies have shown that 4% of hospitalized patients suffer from an adverse event caused by the medical treatment administered. Some institutions have created systems to encourage medical workers to report these adverse events. However, these systems often prove to be inadequate and/or ineffective for reviewing the data collected and improving the outcomes in patient safety. Objective: To describe the Web application Duke Surgery Patient Safety (DSPS), designed for the anonymous reporting of adverse and near-miss events as well as scheduled reporting to surgeons and hospital administration. Software architecture: DSPS was developed primarily using the Java language, running on a Tomcat server with a MySQL database as its backend. Results: Formal and field usability tests were used to aid in the development of DSPS. Extensive experience with DSPS at our institution indicates that DSPS is easy to learn and use, has good speed, provides needed functionality, and is well received by both adverse-event reporters and administrators. Discussion: This is the first description of an open-source application for reporting patient safety events; the application can be distributed to other institutions and adapted to the needs of different departments. DSPS provides a mechanism for anonymous reporting of adverse events and helps to administer patient safety initiatives. Conclusion: The modifiable framework of DSPS allows adherence to evolving national data standards. The open-source design of DSPS permits surgical departments with existing reporting mechanisms to integrate them with DSPS. The DSPS application is distributed under the GNU General Public License. PMID:17472749
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erlich, R.N.; Sofer, Z.; Pratt, L.M.
1993-02-01
Late Cretaceous "source rocks" from Costa Rica, western and eastern Venezuela, and Trinidad were studied using organic and inorganic geochemistry, biostratigraphy, and sedimentology in order to determine their depositional environments. Bulk mineralogy and major element geochemistry for 304 samples were combined with Rock-Eval data and extract biomarker analysis to infer the types and distributions of the various Late Cretaceous productivity systems represented in the dataset. When data from this study are combined with published and proprietary data from offshore West Africa, Guyana/Suriname, and the central Caribbean, they show that these Late Cretaceous units can be correlated by their biogeochemical characteristics to establish their temporal and spatial relationships. Paleogeographic maps constructed for the early to late Cenomanian, Turonian, Coniacian to middle Santonian, and late Santonian to latest Campanian show that upwelling and excessive fluvial runoff were probably the dominant sources of nutrient supply to the coastal productivity systems. The late Santonian to Maastrichtian rocks examined in this study indicate that organic material was poorly preserved after deposition, even though biologic productivity remained constant or changed only slightly. A rapid influx of oxygenated bottom water may have occurred following the opening of a deep-water connection between the North and South Atlantic oceans, and/or the separation of India from Africa and the establishment of an Antarctic oceanic connection. This study suggests that the most important factors controlling source rock quality in northern South America were productivity, preservation, degree of clastic dilution, and subsurface diagenesis.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important approach to pedestrian behavior because visual perception of space is the most direct way to acquire environmental information and to navigate. Based on agent modeling and a bottom-up method, this paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on agent simulation to direct agent motion in urban space. We analyze pedestrian behavior at both the micro-scale and macro-scale of urban open space. At the micro-scale, an individual agent uses visual affordance to determine its direction of motion along an urban street or within a district. At the macro-scale, we compare the distribution of pedestrian flow with spatial configuration in the urban environment, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban function. The paper first computes the visibility conditions at vantage points in urban open space, such as a street network, and quantifies the visibility parameters. Multiple agents then use these visibility parameters to decide their directions of motion, and pedestrian flow finally reaches a stable state through multi-agent simulation of the urban environment. We compare the morphology of the visibility parameters and the pedestrian distribution with urban function and facilities layout to confirm their consistency, which can be used for decision support in urban design.
Choi, Dong H; An, Sung M; Chun, Sungjun; Yang, Eun C; Selph, Karen E; Lee, Charity M; Noh, Jae H
2016-02-01
Photosynthetic picoeukaryotes (PPEs) are major oceanic primary producers. However, the diversity of such communities remains poorly understood, especially in the northwestern (NW) Pacific. We investigated the abundance and diversity of PPEs, and recorded environmental variables, along a transect from the coast to the open Pacific Ocean. High-throughput tag sequencing (using the MiSeq system) revealed the diversity of plastid 16S rRNA genes. The dominant PPEs changed at the class level along the transect. Prymnesiophyceae were the only dominant PPEs in the warm pool of the NW Pacific, but Mamiellophyceae dominated in coastal waters of the East China Sea. Phylogenetically, most Prymnesiophyceae sequences could not be resolved at lower taxonomic levels because no close relatives have been cultured. Within the Mamiellophyceae, the genera Micromonas and Ostreococcus dominated in marginal coastal areas affected by open water, whereas Bathycoccus dominated in the lower euphotic depths of oligotrophic open waters. Cryptophyceae and Phaeocystis (of the Prymnesiophyceae) dominated in areas affected principally by coastal water. We also defined the biogeographical distributions of Chrysophyceae, prasinophytes, Bacillariophyceae and Pelagophyceae. These distributions were influenced by temperature, salinity and chlorophyll a and nutrient concentrations. © FEMS 2015. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Magee, Jeff; Moffett, Jonathan
1996-06-01
Special Issue on Management This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, Sysman and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue. To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. 
Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform. Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture. 
The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, are coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.
GBU-X bounding requirements for highly flexible munitions
NASA Astrophysics Data System (ADS)
Bagby, Patrick T.; Shaver, Jonathan; White, Reed; Cafarelli, Sergio; Hébert, Anthony J.
2017-04-01
This paper presents the results of an investigation into requirements for existing software and hardware solutions for open digital communication architectures that support weapon subsystem integration. The underlying requirement of such a communication architecture is to achieve the lowest latency possible at a reasonable cost point with respect to the mission objective of the weapon. The latency requirements of the open-architecture software and hardware were derived through control-system and stability-margin analyses. Studies were performed on the throughput and latency of different existing communication transport methods. The two architectures tested in this study were the Data Distribution Service (DDS) and the Modular Open Network Architecture (MONARCH). This paper defines what levels of latency can be achieved with current technology and how this capability may translate to future weapons. The requirements moving forward for communications solutions are discussed.
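The throughput-and-latency studies described above can be outlined in miniature. The following sketch (purely illustrative; it stands in for a real DDS or MONARCH transport, which the paper does not detail) times message round trips through an in-process queue and summarizes the resulting latency distribution:

```python
import queue
import statistics
import threading
import time

def echo_worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Echo messages back, standing in for a transport endpoint."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        outbox.put(msg)

def measure_round_trips(n: int = 1000) -> list:
    """Time n message round trips through a pair of queues (seconds)."""
    inbox, outbox = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=echo_worker, args=(inbox, outbox))
    worker.start()
    samples = []
    for i in range(n):
        t0 = time.perf_counter()
        inbox.put(i)
        outbox.get()             # block until the echo returns
        samples.append(time.perf_counter() - t0)
    inbox.put(None)
    worker.join()
    return samples

if __name__ == "__main__":
    lat = measure_round_trips()
    print(f"median={statistics.median(lat) * 1e6:.1f} us, "
          f"worst={max(lat) * 1e6:.1f} us")
```

A real evaluation would replace the queue pair with the middleware under test and compare the tail of the latency distribution against the control-loop stability margins.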
Experimental research on the infrared gas fire detection system
NASA Astrophysics Data System (ADS)
Jiang, Yalong; Liu, Yangyang
2018-02-01
Open fires and smoldering fires were differentiated using five experiments: wood pyrolysis, polyurethane smoldering, wood fire, polyurethane fire, and cotton-rope smoldering. At the same time, the distribution of CO2 and CO concentrations in the combustion products at different heights was studied. Real fires and environmental interference were distinguished using burning cigarettes and sandalwood. The results showed that open fires and smoldering fires produce significantly different ratios of CO2 to CO concentration. By judging the order of magnitude of the ratio of CO2 to CO concentration in the combustion products, open fire and smoldering fire could be effectively distinguished. In addition, the comparison experiment showed that the rate of increase of CO concentration in a smoldering fire is higher than under non-fire conditions. With the rate of increase of CO concentration as the criterion, smoldering fire and non-fire conditions could be distinguished.
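The order-of-magnitude criterion above can be sketched as a simple classifier. The ratio threshold here is illustrative only; the paper reports that the two fire types differ by an order of magnitude but the working value would have to come from its experimental data:

```python
def classify_fire(co2_ppm: float, co_ppm: float,
                  ratio_threshold: float = 10.0) -> str:
    """Classify combustion by the CO2/CO concentration ratio.

    Open fires (flaming combustion) burn more completely and so
    produce a much higher CO2/CO ratio than smoldering fires.
    The threshold value is an assumption for illustration.
    """
    if co_ppm <= 0:
        return "no fire signature"
    ratio = co2_ppm / co_ppm
    return "open fire" if ratio >= ratio_threshold else "smoldering fire"
```

The complementary smoldering-versus-non-fire criterion (rate of increase of CO concentration) would be implemented the same way, thresholding a finite difference of successive CO readings.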
Fils, D.; Cervato, C.; Reed, J.; Diver, P.; Tang, X.; Bohling, G.; Greer, D.
2009-01-01
CHRONOS's purpose is to transform Earth history research by seamlessly integrating stratigraphic databases and tools into a virtual on-line stratigraphic record. In this paper, we describe the various components of CHRONOS's distributed data system, including the encoding of semantic and descriptive data into a service-based architecture. We give examples of how we have integrated well-tested resources available from the open-source and geoinformatic communities, like the GeoSciML schema and the simple knowledge organization system (SKOS), into the services-oriented architecture to encode timescale and phylogenetic synonymy data. We also describe on-going efforts to use geospatially enhanced data syndication and informally including semantic information by embedding it directly into the XHTML Document Object Model (DOM). XHTML DOM allows machine-discoverable descriptive data such as licensing and citation information to be incorporated directly into data sets retrieved by users. ?? 2008 Elsevier Ltd. All rights reserved.
Sundvall, Erik; Wei-Kleiner, Fang; Freire, Sergio M; Lambrix, Patrick
2017-01-01
Archetype-based Electronic Health Record (EHR) systems using generic reference models from, e.g., openEHR, ISO 13606 or CIMI should be easy to update and reconfigure with new types (or versions) of data models or entries, ideally with very limited programming or manual database tweaking. Exploratory research (e.g. epidemiology) leading to ad-hoc querying on a population-wide scale can be a challenge in such environments. This publication describes the implementation and testing of an archetype-aware Dewey encoding optimization that can be used to produce such systems in environments supporting relational operations, e.g. RDBMSs and distributed map-reduce frameworks like Hadoop. Initial testing was done using a nine-node 2.2 GHz quad-core Hadoop cluster querying a dataset consisting of targeted extracts from more than 4 million real patient EHRs; query results with sub-minute response times were obtained.
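The key property of a Dewey encoding, which the optimization above builds on, is that hierarchical ancestor/descendant relationships reduce to string-prefix and string-range predicates that relational engines and map-reduce joins handle natively. A minimal sketch (the label scheme is generic Dewey, not the paper's archetype-aware variant):

```python
def dewey_key(path) -> str:
    """Encode a node path (a list of sibling indices) as a Dewey label."""
    return ".".join(str(p) for p in path)

def is_ancestor(a: str, b: str) -> bool:
    """True if Dewey label a is a proper ancestor of label b."""
    return b.startswith(a + ".")

def descendant_range(a: str):
    """Half-open string range covering every descendant of a.

    All descendant labels start with a + "." and, since "/" is the
    ASCII successor of ".", they sort strictly below a + "/". This
    lets a hierarchy query run as a plain BETWEEN-style range scan.
    """
    return (a + ".", a + "/")
```

In an RDBMS or Hadoop setting, `descendant_range` corresponds to a sorted-key range predicate, so subtree retrieval over millions of EHR entries needs no recursive joins.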
Development of a digital solar simulator based on full-bridge converter
NASA Astrophysics Data System (ADS)
Liu, Chen; Feng, Jian; Liu, Zhilong; Tong, Weichao; Ji, Yibo
2014-02-01
With the development of solar photovoltaics, distributed generation schemes have become common in the power grid, and the photovoltaic (PV) inverter is an essential piece of grid equipment. In this paper, a digital solar simulator based on a full-bridge structure is presented. The output characteristic curve of the system is electrically similar to that of silicon solar cells, which greatly simplifies PV-inverter research methods and improves research and development efficiency. The proposed simulator consists of a main control board based on the TMS320F28335, a phase-shifted zero-voltage-switching (ZVS) DC-DC full-bridge converter, and voltage- and current-sampling circuits, and it emulates a voltage-current curve with an open-circuit voltage (Voc) of 900 V and a short-circuit current (Isc) of 18 A. When the system is connected to a PV inverter, the inverter can quickly track from the open-circuit point to the maximum power point and remain stable.
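A silicon-like I-V curve with the stated ratings can be sketched with a single-exponential approximation. The shape factor `a` below (a lumped "thermal voltage" for the whole array) is an assumed illustrative value, not a parameter from the paper; only Voc = 900 V and Isc = 18 A come from the abstract:

```python
import math

def pv_current(v: float, isc: float = 18.0, voc: float = 900.0,
               a: float = 50.0) -> float:
    """Single-exponential approximation of a silicon PV array I-V curve.

    The saturation term i0 is chosen so that I(voc) = 0 exactly and
    I(0) = isc, matching the simulator's ratings at both endpoints.
    """
    i0 = isc / math.expm1(voc / a)
    return isc - i0 * math.expm1(v / a)
```

A digital simulator of this kind would evaluate such a curve in the control loop and command the full-bridge converter to hold the operating point on it; the maximum power point is found by scanning v for the maximum of v * pv_current(v).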
Safety drain system for fluid reservoir
NASA Technical Reports Server (NTRS)
England, John Dwight (Inventor); Kelley, Anthony R. (Inventor); Cronise, Raymond J. (Inventor)
2012-01-01
A safety drain system includes a plurality of drain sections, each of which defines distinct fluid flow paths. At least a portion of the fluid flow paths commence at a side of the drain section that is in fluid communication with a reservoir's fluid. Each fluid flow path at the side communicating with the reservoir's fluid defines an opening having a smallest dimension not to exceed approximately one centimeter. The drain sections are distributed over at least one surface of the reservoir. A manifold is coupled to the drain sections.
Isolation contactor state control system
Bissontz, Jay E.
2017-05-16
A controller area network (CAN) installed on a hybrid electric vehicle provides one node with control of high voltage power distribution system isolation contactors and the capacity to energize a secondary electro-mechanical relay device. The output of the secondary relay provides a redundant and persistent backup signal to the output of the node. The secondary relay is relatively immune to CAN message traffic interruptions and, as a result, the high voltage isolation contactor(s) are less likely to transition open in the event that the intelligent output driver should fail.
NASA Astrophysics Data System (ADS)
Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.
2009-12-01
This paper presents easily accessible, integrated web-based analysis of satellite images with plug-in-based open-source software. The paper is targeted at both users and developers of geospatial software. Guided by a use-case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine their own data with remotely available data and processing functionality without much effort. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open-source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open-source software and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-)services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been carried out since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open-source geospatial software, using the GPL license. 
GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes available satellite images via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on their machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter in particular can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even its execution in scalable and on-demand cloud computing environments.
Modern Grid Initiative Distribution Taxonomy Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Kevin P.; Chen, Yousu; Chassin, David P.
2008-11-01
This is the final report for the development of a taxonomy of prototypical electrical distribution feeders. Two of the primary goals of the Department of Energy's (DOE) Modern Grid Initiative (MGI) are 'to accelerate the modernization of our nation's electricity grid' and to 'support demonstrations of systems of key technologies that can serve as the foundation for an integrated, modern power grid'. A key component of realizing these goals is the effective implementation of new, as well as existing, 'smart grid technologies'. Possibly the largest barrier that has been identified in the deployment of smart grid technologies is the inability to evaluate how their deployment will affect the electricity infrastructure, both locally and on a regional scale. The inability to evaluate the impacts of these technologies is primarily due to the lack of detailed electrical distribution feeder information. While detailed distribution feeder information does reside with the various distribution utilities, there is no central repository of information that can be openly accessed. The role of Pacific Northwest National Laboratory (PNNL) in the MGI for FY08 was to collect distribution feeder models, in the SynerGEE® format, from electric utilities around the nation so that they could be analyzed to identify regional differences in feeder design and operation. Based on this analysis, PNNL developed a taxonomy of 24 prototypical feeder models in the GridLAB-D simulation environment that contain the fundamental characteristics of non-urban-core, radial distribution feeders from the various regions of the U.S. Weighting factors for these feeders are also presented so that they can be used to generate a representative sample for various regions within the United States. The final product presented in this report is a toolset that enables the evaluation of new smart grid technologies, with the ability to aggregate their effects to regional and national levels. 
The distribution feeder models presented in this report are based on actual utility models but do not contain any proprietary or system-specific information. As a result, the models discussed in this report can be openly distributed to industry, academia, or any interested entity, in order to facilitate the ability to evaluate smart grid technologies.
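The weighting-factor mechanism described above amounts to a weighted aggregation: a metric simulated on each prototypical feeder is scaled by the number of real feeders that prototype represents. A minimal sketch, with feeder identifiers and numbers entirely hypothetical (the actual factors are tabulated in the report):

```python
def regional_estimate(feeder_results: dict, weights: dict) -> float:
    """Aggregate per-prototype feeder results to a regional value.

    feeder_results: {feeder_id: simulated metric, e.g. peak-load change in kW}
    weights:        {feeder_id: count of real feeders the prototype represents}
    """
    missing = set(feeder_results) - set(weights)
    if missing:
        raise ValueError(f"no weighting factor for: {sorted(missing)}")
    return sum(feeder_results[f] * weights[f] for f in feeder_results)
```

This is how a technology's effect, evaluated on the 24 taxonomy feeders in GridLAB-D, could be scaled up to a regional or national estimate.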
Global EOS: exploring the 300-ms-latency region
NASA Astrophysics Data System (ADS)
Mascetti, L.; Jericho, D.; Hsu, C.-Y.
2017-10-01
EOS, the CERN open-source distributed disk storage system, provides the high-performance storage solution for HEP analysis and the back-end for various workflows. Recently EOS became the back-end of CERNBox, the cloud synchronisation service for CERN users. EOS can take advantage of wide-area distributed installations: for the last few years CERN EOS has used a common deployment across two computer centres (Geneva-Meyrin and Budapest-Wigner) about 1,000 km apart (∼20 ms latency) with about 200 PB of disk (JBOD). In late 2015, the CERN-IT Storage group and AARNET (Australia) set up a challenging R&D project: a single EOS instance between CERN and AARNET with more than 300 ms latency (16,500 km apart). This paper reports on the successful deployment and operation of a distributed storage system between Europe (Geneva, Budapest), Australia (Melbourne) and later Asia (ASGC Taipei), allowing different types of data placement and data access across these four sites.
Mean, covariance, and effective dimension of stochastic distributed delay dynamics
NASA Astrophysics Data System (ADS)
René, Alexandre; Longtin, André
2017-11-01
Dynamical models are often required to incorporate both delays and noise. However, the inherently infinite-dimensional nature of delay equations makes formal solutions to stochastic delay differential equations (SDDEs) challenging. Here, we present an approach, similar in spirit to the analysis of functional differential equations, but based on finite-dimensional matrix operators. This results in a method for obtaining both transient and stationary solutions that is directly amenable to computation, and applicable to first order differential systems with either discrete or distributed delays. With fewer assumptions on the system's parameters than other current solution methods and no need to be near a bifurcation, we decompose the solution to a linear SDDE with arbitrary distributed delays into natural modes, in effect the eigenfunctions of the differential operator, and show that relatively few modes can suffice to approximate the probability density of solutions. Thus, we are led to conclude that noise makes these SDDEs effectively low dimensional, which opens the possibility of practical definitions of probability densities over their solution space.
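A representative member of the class of equations treated above, with symbols chosen for illustration rather than taken from the paper, is a linear SDDE with distributed delay kernel ρ, together with the characteristic equation obtained from a modal ansatz x(t) ∝ e^{λt}:

```latex
dx(t) = \left[ A\,x(t) + B \int_{0}^{\infty} \rho(\tau)\, x(t-\tau)\, d\tau \right] dt
        + \Sigma\, dW(t),
\qquad
\det\!\left( \lambda I - A - B\,\hat{\rho}(\lambda) \right) = 0,
```

where ρ̂ is the Laplace transform of ρ. The characteristic equation has infinitely many roots, but only the slowly decaying ones (largest real parts) matter at moderate times; these are the "relatively few modes" whose retention makes the stochastic dynamics effectively low dimensional.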
Needs assessment under the Maternal and Child Health Services Block Grant: Massachusetts.
Guyer, B; Schor, L; Messenger, K P; Prenney, B; Evans, F
1984-09-01
The Massachusetts maternal and child health (MCH) agency has developed a needs assessment process which includes four components: a statistical measure of need based on indirect, proxy health and social indicators; clinical standards for services to be provided; an advisory process which guides decision making and involves constituency groups; and a management system for implementing funds distribution, namely open competitive bidding in response to a Request for Proposals. In Fiscal Years 1982 and 1983, the process was applied statewide in the distribution of primary prenatal (MIC) and pediatric (C&Y) care services and lead poisoning prevention projects. Both processes resulted in clearer definitions of services to be provided under contract to the state as well as redistribution of funds to serve localities that had previously received no resources. Although the needs assessment process does not provide a direct measure of unmet need in a complex system of private and public services, it can be used to advocate for increased MCH funding and guide the distribution of new MCH service dollars.
NASA Astrophysics Data System (ADS)
Neto, José Antônio Baptista; Gingele, Franz Xaver; Leipe, Thomas; Brehme, Isa
2006-04-01
Ninety-two surface sediment samples were collected in Guanabara Bay, one of the most prominent urban bays in SE Brazil, to investigate the spatial distribution of anthropogenic pollutants. The concentrations of heavy metals, organic carbon and particle size were examined in all samples. Large spatial variations of heavy metals and particle size were observed. The highest concentrations of heavy metals were found in the muddy sediments from the northwestern region of the bay near the main outlets of the most polluted rivers, municipal waste drainage systems and one of the major oil refineries. Another anomalous concentration of metals was found adjacent to Rio de Janeiro Harbour. The heavy metal concentrations decrease to the northeast, due to intact rivers and the mangrove systems in this area, and to the south where the sand fraction and open-marine processes dominate. The geochemical normalization of metal data to Li or Al has also demonstrated that the anthropogenic input of heavy metals has altered the natural sediment heavy-metal distribution.
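The normalization to Li or Al mentioned above is conventionally expressed as an enrichment factor: the metal-to-reference ratio of the sample divided by that of an uncontaminated background. A minimal sketch, with the concentration values in the test purely illustrative (background values are site-specific and come from the local geology, not this abstract):

```python
def enrichment_factor(metal_sample: float, ref_sample: float,
                      metal_bg: float, ref_bg: float) -> float:
    """Geochemical enrichment factor:

        EF = (Me/Ref)_sample / (Me/Ref)_background

    Ref is a conservative lithogenic element (e.g. Al or Li) that
    tracks the natural sediment fraction; EF values well above 1
    suggest anthropogenic input on top of the natural signal.
    """
    return (metal_sample / ref_sample) / (metal_bg / ref_bg)
```

Because the reference element scales with grain size and mineralogy, this normalization separates true contamination from the particle-size effects that also concentrate metals in muddy sediments.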
A Walk through TRIDEC's intermediate Tsunami Early Warning System
NASA Astrophysics Data System (ADS)
Hammitzsch, M.; Reißland, S.; Lendholt, M.
2012-04-01
The management of natural crises is an important application field of the technology developed in the project Collaborative, Complex, and Critical Decision-Support in Evolving Crises (TRIDEC), co-funded by the European Commission in its Seventh Framework Programme. TRIDEC is based on the development of the German Indonesian Tsunami Early Warning System (GITEWS) and the Distant Early Warning System (DEWS) providing a service platform for both sensor integration and warning dissemination. In TRIDEC new developments in Information and Communication Technology (ICT) are used to extend the existing platform realising a component-based technology framework for building distributed tsunami warning systems for deployment, e.g. in the North-eastern Atlantic, the Mediterranean and Connected Seas (NEAM) region. The TRIDEC system will be implemented in three phases, each with a demonstrator. Successively, the demonstrators are addressing challenges, such as the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulation tools and data fusion tools. In addition to conventional sensors also unconventional sensors and sensor networks play an important role in TRIDEC. The system version presented is based on service-oriented architecture (SOA) concepts and on relevant standards of the Open Geospatial Consortium (OGC), the World Wide Web Consortium (W3C) and the Organization for the Advancement of Structured Information Standards (OASIS). In this way the system continuously gathers, processes and displays events and data coming from open sensor platforms to enable operators to quickly decide whether an early warning is necessary and to send personalized warning messages to the authorities and the population at large through a wide range of communication channels. 
The system integrates OGC Sensor Web Enablement (SWE) compliant sensor systems for the rapid detection of hazardous events, like earthquakes, sea level anomalies, ocean floor occurrences, and ground displacements. Using OGC Web Map Service (WMS) and Web Feature Service (WFS) spatial data are utilized to depict the situation picture. The integration of a simulation system to identify affected areas is considered using the OGC Web Processing Service (WPS). Warning messages are compiled and transmitted in the OASIS Common Alerting Protocol (CAP) together with addressing information defined via the OASIS Emergency Data Exchange Language - Distribution Element (EDXL-DE). The first system demonstrator has been designed and implemented to support plausible scenarios demonstrating the treatment of simulated tsunami threats with an essential subset of a National Tsunami Warning Centre (NTWC). The feasibility and the potentials of the implemented approach are demonstrated covering standard operations as well as tsunami detection and alerting functions. The demonstrator presented addresses information management and decision-support processes in a hypothetical natural crisis situation caused by a tsunami in the Eastern Mediterranean. Developments of the system are based to the largest extent on free and open source software (FOSS) components and industry standards. Emphasis has been and will be placed on leveraging open source technologies that support mature system architecture models wherever appropriate. All open source software produced is foreseen to be published on a publicly available software repository thus allowing others to reuse results achieved and enabling further development and collaboration with a wide community including scientists, developers, users and stakeholders. 
This live demonstration is linked with the talk "TRIDEC Natural Crisis Management Demonstrator for Tsunamis" (EGU2012-7275) given in the session "Architecture of Future Tsunami Warning Systems" (NH5.7/ESSI1.7).
Designing spin-channel geometries for entanglement distribution
NASA Astrophysics Data System (ADS)
Levi, E. K.; Kirton, P. G.; Lovett, B. W.
2016-09-01
We investigate different geometries of spin-1/2 nitrogen impurity channels for distributing entanglement between pairs of remote nitrogen vacancy centers (NVs) in diamond. To go beyond the system size limits imposed by directly solving the master equation, we implement a matrix product operator method to describe the open system dynamics. In so doing, we provide an early demonstration of how the time-evolving block decimation algorithm can be used for answering a problem related to a real physical system that could not be accessed by other methods. For a fixed NV separation there is an interplay between incoherent impurity spin decay and coherent entanglement transfer: Long-transfer-time, few-spin systems experience strong dephasing that can be overcome by increasing the number of spins in the channel. We examine how missing spins and disorder in the coupling strengths affect the dynamics, finding that in some regimes a spin ladder is a more effective conduit for information than a single-spin chain.
Livermore Big Artificial Neural Network Toolkit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essen, Brian Van; Jacobs, Sam; Kim, Hyojin
2016-07-01
LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training: specifically, low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.
Wigner Function Reconstruction in Levitated Optomechanics
NASA Astrophysics Data System (ADS)
Rashid, Muddassar; Toroš, Marko; Ulbricht, Hendrik
2017-10-01
We demonstrate the reconstruction of the Wigner function from marginal distributions of the motion of a single trapped particle using homodyne detection. We show that it is possible to generate quantum states of levitated optomechanical systems even under the effect of continuous measurement by the trapping laser light. We describe the optomechanical coupling for the case of the particle trapped by a free-space focused laser beam, explicitly for the case without an optical cavity. We use the scheme to reconstruct the Wigner function of experimental data in perfect agreement with the expected Gaussian distribution of a thermal state of motion. This opens a route for quantum state preparation in levitated optomechanics.
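The key property behind the thermal-state result above can be checked numerically. The sketch below (an illustration with an assumed occupation number, not the paper's inverse-Radon reconstruction) samples phase-space points from the Gaussian Wigner function of a thermal state and shows that the homodyne marginal at any phase theta has the same variance, which is why the reconstructed Wigner function comes out as an isotropic Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
nbar = 2.0               # assumed mean thermal occupation (hypothetical value)
var = nbar + 0.5         # quadrature variance of a thermal state (hbar = 1)
n = 100_000

# Sample phase-space points from the thermal state's Gaussian Wigner function.
x = rng.normal(0.0, np.sqrt(var), n)
p = rng.normal(0.0, np.sqrt(var), n)

def marginal_variance(theta):
    # Homodyne quadrature at phase theta: X_theta = x cos(theta) + p sin(theta).
    return np.var(x * np.cos(theta) + p * np.sin(theta))

# Phase symmetry of the thermal state: every marginal has the same variance.
for theta in (0.0, np.pi / 4, np.pi / 2):
    print(theta, marginal_variance(theta))
```

For a non-classical or squeezed state the marginals would depend on theta, and tomographic inversion of the full set of marginals would be needed to recover the Wigner function.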
2010-06-01
Scenario: 12 gallons of a readily available toxic substance, a pump ($150 rental), and a wrench to open a fire hydrant ($10); one (1) terrorist, or... [Flattened table: substance, quantity per 6 gallons of water, general comments] Aflatoxin, 7.6, potent carcinogen; Aldicarb, 1.1; Cycloheximide, 2.1; LSD, 0.2, highly toxic, psychoactive; Mercuric Chloride, ...; also listed: Chlorfenvinphos, Formetanate Hydrochloride, Acrolein, Chloropicrin, Sodium chloroacetate, Thyoglycolate medium, Crotoxyphos, Glyphosate, Jimsonweed, Methanol.
Modeling Surfzone/Inner-shelf Exchange
2013-09-30
The goal here is to use a wave-resolving Boussinesq model to figure out how to parameterize the vorticity generation due to short-crested breaking of individual waves. The Boussinesq model funwaveC used here, developed by the PI and distributed as open-source software, has been validated in ONR-funded... [Figure caption residue: shading of bottom bathymetry, mooring locations (green squares), and the local coordinate system (black arrows); positive x is directed towards the...]
Software Design Description for the Tidal Open-boundary Prediction System (TOPS)
2010-05-04
Naval Research Laboratory, Stennis Space Center, MS 39529-5004. NRL/MR/7320--10-9209. Approved for public release; distribution is unlimited.
D-RATS 2011: RAFT Protocol Overview
NASA Technical Reports Server (NTRS)
Utz, Hans
2011-01-01
A brief overview presentation on the protocol used during the D-RATS 2011 field test for file transfer from the field-test robots at Black Point Lava Flow, AZ, to Johnson Space Center, Houston, TX, over a simulated time delay. The file transfer uses a commercial implementation of an open communications standard. The focus of the work lies on how to make the state of the distributed system observable.
Introduction: The SERENITY vision
NASA Astrophysics Data System (ADS)
Maña, Antonio; Spanoudakis, George; Kokolakis, Spyros
In this chapter we present an overview of the SERENITY approach. We describe the SERENITY model of secure and dependable applications and show how it addresses the challenge of developing, integrating and dynamically maintaining security and dependability mechanisms in open, dynamic, distributed and heterogeneous computing systems and in particular Ambient Intelligence scenarios. The chapter describes the basic concepts used in the approach and introduces the different processes supported by SERENITY, along with the tools provided.
NASA Astrophysics Data System (ADS)
Partridge, Jamie; Linden, Paul
2013-11-01
We examine the flows and stratification established in a naturally ventilated enclosure containing both a localised and a vertically distributed source of buoyancy. The enclosure is ventilated through upper and lower openings which connect the space to an external ambient. Small-scale laboratory experiments were carried out with water as the working medium and buoyancy being driven directly by temperature differences. A point source plume gave localised heating while the distributed source was driven by a controllable heater mat located in the side wall of the enclosure. The transient temperatures, as well as steady-state temperature profiles, were recorded and are reported here. The temperature profiles inside the enclosure were found to be dependent on the effective opening area A*, a combination of the upper and lower openings, and the ratio of buoyancy fluxes from the distributed and localised sources, Ψ = Bw/Bp. Industrial CASE award with ARUP.
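The localized-plume-only limit of this kind of ventilated enclosure (Ψ = 0, i.e. no distributed source) has a classic closed-form model, the emptying-filling box of Linden, Lane-Serff & Smeed (1990), in which the steady interface height h/H depends only on A*/H². The sketch below solves that relation by bisection; the entrainment constant alpha is an assumed illustrative value, and the abstract's distributed source is not included.

```python
from math import pi

def interface_height(a_star, height, alpha=0.1):
    # Steady-state interface position xi = h/H for a single localized plume in
    # a ventilated box (emptying-filling-box model, Linden et al. 1990):
    #   A*/H^2 = C^(3/2) * (xi^5 / (1 - xi))^(1/2)
    # with plume constant C = (6*alpha/5) * (9*alpha/10)^(1/3) * pi^(2/3).
    C = (6 * alpha / 5) * (9 * alpha / 10) ** (1 / 3) * pi ** (2 / 3)
    target = a_star / (C ** 1.5 * height ** 2)
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):  # bisection; the right-hand side grows monotonically in xi
        xi = 0.5 * (lo + hi)
        if (xi ** 5 / (1.0 - xi)) ** 0.5 < target:
            lo = xi
        else:
            hi = xi
    return xi

# A larger effective opening area ventilates the warm layer more
# efficiently, so the interface sits higher in the enclosure.
print(interface_height(0.01, 1.0), interface_height(0.05, 1.0))
```

Adding the distributed wall source, as in the experiments above, replaces the sharp two-layer interface with a profile that depends on Ψ as well as A*.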