Sample records for distribution system experience

  1. Solar thermal power systems point-focusing thermal and electric applications projects. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Marriott, A.

    1980-01-01

    The activities of the Point-Focusing Thermal and Electric Applications (PFTEA) Project for the fiscal year 1979 are summarized. The main thrust of the PFTEA Project, the small community solar thermal power experiment, was completed. Concept definition studies included a small central receiver approach, a point-focusing distributed receiver system with central power generation, and a point-focusing distributed receiver concept with distributed power generation. The first experiment in the Isolated Application Series was initiated. Planning for the third engineering experiment series, which addresses the industrial market sector, was also initiated. In addition to the experiment-related activities, several contracts to industry were let, and studies were conducted to explore the market potential for point-focusing distributed receiver (PFDR) systems. System analysis studies were completed that compared PFDR technology with other small power system technology candidates for the utility market sector.

  2. Data Reprocessing on Worldwide Distributed Systems

    NASA Astrophysics Data System (ADS)

    Wicke, Daniel

    The DØ experiment faces many challenges in terms of enabling access to large datasets for physicists on four continents. The strategy for solving these problems on worldwide distributed computing clusters is presented. Since the beginning of Run II of the Tevatron (March 2001), all Monte-Carlo simulations for the experiment have been produced at remote systems. For data analysis, a system of regional analysis centers (RACs) was established to supply the associated institutes with data. This structure, which is similar to the tiered structure foreseen for the LHC, was used in Fall 2003 to reprocess all DØ data with a much improved version of the reconstruction software. This makes DØ the first running experiment to have implemented and operated all important computing tasks of a high-energy physics experiment on systems distributed worldwide.

  3. The UCLA Design Diversity Experiment (DEDIX) system: A distributed testbed for multiple-version software

    NASA Technical Reports Server (NTRS)

    Avizienis, A.; Gunningberg, P.; Kelly, J. P. J.; Strigini, L.; Traverse, P. J.; Tso, K. S.; Voges, U.

    1986-01-01

    To establish a long-term research facility for experimental investigations of design diversity as a means of achieving fault-tolerant systems, a distributed testbed for multiple-version software was designed. It is part of a local network, which utilizes the Locus distributed operating system to operate a set of 20 VAX 11/750 computers. It is used in experiments to measure the efficacy of design diversity and to investigate reliability increases under large-scale, controlled experimental conditions.

  4. Cometabolism of Monochloramine by Nitrosomonas europaea under Distribution System Conditions

    EPA Science Inventory

    Batch kinetic experiments were carried out with a pure culture of N. europaea to characterize the kinetics of NH2Cl cometabolism. Nitrite, nitrate, NH2Cl, ammonia and pH were measured. The experiments were performed at a variety of conditions relevant to distribution system nitri...

  5. Methods and tools for profiling and control of distributed systems

    NASA Astrophysics Data System (ADS)

    Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.

    2018-02-01

    This article is devoted to profiling and control of distributed systems. Distributed systems have a complex architecture: applications are spread across various computing nodes, and many network operations are performed, so it is important today to develop methods and tools for profiling them. The article analyzes and standardizes profiling methods for distributed systems that rely on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems that receive and process user requests. To automate this profiling method, a software application with a modular structure, similar to a SCADA system, was developed.
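    As a concrete illustration of the queueing-theoretic approach, a single M/M/1 node has closed-form steady-state metrics of the kind such a profiling model estimates per computing node. This is a minimal sketch, not the article's actual network model:

```python
# Minimal M/M/1 queueing-node sketch (illustrative only; the article's
# model is a full queueing network, which is not reproduced here).

def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state metrics for a single M/M/1 queue."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate          # utilization
    mean_n = rho / (1 - rho)                   # mean number in system
    mean_w = 1 / (service_rate - arrival_rate) # mean response time
    return {"utilization": rho, "mean_in_system": mean_n,
            "mean_response_time": mean_w}

# Hypothetical node: 8 requests/s arriving, 10 requests/s service capacity.
m = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
print(m)  # utilization 0.8, mean_in_system 4.0, mean_response_time 0.5
```

    Per-node metrics like these, combined over a graph of nodes, are what a queueing-network profiling simulation reports.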

  6. Ramsey Interference in One-Dimensional Systems: The Full Distribution Function of Fringe Contrast as a Probe of Many-Body Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kitagawa, Takuya; Pielawa, Susanne; Demler, Eugene

    2010-06-25

    We theoretically analyze Ramsey interference experiments in one-dimensional quasicondensates and obtain explicit expressions for the time evolution of full distribution functions of fringe contrast. We show that distribution functions contain unique signatures of the many-body mechanism of decoherence. We argue that Ramsey interference experiments provide a powerful tool for analyzing the strongly correlated nature of 1D interacting systems.

  7. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: investigating the effects of heterogeneity in distributed real-time systems; studying the requirements of TRAC toward building a heterogeneous database system; studying the effects of performance modeling on distributed database performance; and experimenting with an ORACLE-based heterogeneous system.

  8. Injection System for Multi-Well Injection Using a Single Pump

    PubMed Central

    Wovkulich, Karen; Stute, Martin; Protus, Thomas J.; Mailloux, Brian J.; Chillrud, Steven N.

    2015-01-01

    Many hydrological and geochemical studies rely on data resulting from injection of tracers and chemicals into groundwater wells. The even distribution of liquids to multiple injection points can be challenging or expensive, especially when using multiple pumps. An injection system was designed using one chemical metering pump to evenly distribute the desired influent simultaneously to 15 individual injection points through an injection manifold. The system was constructed with only one metal part contacting the fluid due to the low pH of the injection solutions. The injection manifold system was used during a three-month pilot scale injection experiment at the Vineland Chemical Company Superfund site. During the two injection phases of the experiment (Phase I = 0.27 L/min total flow, Phase II = 0.56 L/min total flow), flow measurements were made 20 times over three months; an even distribution of flow to each injection well was maintained (RSD <4%). This durable system is expandable to at least 16 injection points and should be adaptable to other injection experiments that require distribution of air-stable liquids to multiple injection points with a single pump. PMID:26140014
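    The RSD criterion used above to quantify flow evenness is simply the standard deviation expressed as a percentage of the mean. A minimal sketch with hypothetical per-well flow readings (not data from the study):

```python
import statistics

def relative_std_dev(values: list[float]) -> float:
    """Relative standard deviation (coefficient of variation) in percent."""
    mean = statistics.mean(values)
    return 100.0 * statistics.stdev(values) / mean

# Hypothetical per-well flow readings in L/min (illustrative, not the
# study's measurements).
flows = [0.0180, 0.0178, 0.0182, 0.0179, 0.0181]
print(f"RSD = {relative_std_dev(flows):.2f}%")  # well under a 4% criterion
```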

  9. MECDAS: A distributed data acquisition system for experiments at MAMI

    NASA Astrophysics Data System (ADS)

    Krygier, K. W.; Merle, K.

    1994-02-01

    For the coincidence experiments with the three-spectrometer setup at MAMI, an experiment control and data acquisition system was built and put successfully into final operation in 1992. MECDAS is designed as a distributed system communicating via Ethernet and optical links. At the front end, VMEbus systems are used for real-time purposes and direct hardware access via CAMAC, Fastbus, or VMEbus. RISC workstations running UNIX are used for monitoring, data archiving, and online and offline analysis of the experiment. MECDAS consists of several fixed programs and libraries, but large parts of the readout and analysis can be configured by the user. Experiment-specific configuration files are used to generate efficient and powerful code well adapted to special problems without additional programming. The experiment description is added to the raw collection of partially analyzed data to yield self-descriptive data files.

  10. Experience in Construction and Operation of the Distributed Information Systems on the Basis of the Z39.50 Protocol

    NASA Astrophysics Data System (ADS)

    Zhizhimov, Oleg; Mazov, Nikolay; Skibin, Sergey

    Questions concerning the construction and operation of distributed information systems based on the ANSI/NISO Z39.50 Information Retrieval Protocol are discussed in the paper. The paper is based on the authors' practice in developing the ZooPARK server. The architecture of distributed information systems, the reliability of such systems, minimization of search time, and administration are examined. Problems in developing distributed information systems are also described.

  11. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  12. A controlled experiment on the impact of software structure on maintainability

    NASA Technical Reports Server (NTRS)

    Rombach, Dieter H.

    1987-01-01

    The impact of software structure on maintainability aspects including comprehensibility, locality, modifiability, and reusability in a distributed system environment is studied in a controlled maintenance experiment involving six medium-size distributed software systems implemented in LADY (language for distributed systems) and six in an extended version of sequential PASCAL. For all maintenance aspects except reusability, the results were quantitatively given in terms of complexity metrics which could be automated. The results showed LADY to be better suited to the development of maintainable software than the extension of sequential PASCAL. The strong typing combined with high parametrization of units is suggested to improve the reusability of units in LADY.

  13. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model 100s to 1000s of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  14. Distriblets: Java-Based Distributed Computing on the Web.

    ERIC Educational Resources Information Center

    Finkel, David; Wills, Craig E.; Brennan, Brian; Brennan, Chris

    1999-01-01

    Describes a system, written in the Java programming language, that uses the World Wide Web to distribute computational tasks to multiple hosts on the Web. Describes the programs written to carry out the load distribution, the structure of a "distriblet" class, and experiences in using this system. (Author/LRW)

  15. The Healthcare Administrator's Associate: an experiment in distributed healthcare information systems.

    PubMed Central

    Fowler, J.; Martin, G.

    1997-01-01

    The Healthcare Administrator's Associate is a collection of portable tools designed to support analysis of data retrieved via the Internet from diverse distributed healthcare information systems by means of the InfoSleuth system of distributed software agents. Development of these tools is part of an effort to enhance access to diverse and geographically distributed healthcare data in order to improve the basis upon which administrative and clinical decisions are made. PMID:9357686

  16. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
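    An accrual failure detector outputs a continuous suspicion level φ rather than a binary alive/dead verdict: φ is derived from the probability that a heartbeat this overdue would still arrive under the fitted inter-arrival distribution, and a process is suspected once φ crosses an application-chosen threshold. A minimal sketch of the Weibull variant (the shape, scale, and threshold values are illustrative assumptions, not the paper's fitted parameters):

```python
import math

def weibull_phi(t_elapsed: float, shape: float, scale: float) -> float:
    """Suspicion level phi = -log10 P(inter-arrival > t_elapsed), with
    heartbeat inter-arrival times modeled by a Weibull(shape, scale)."""
    survival = math.exp(-((t_elapsed / scale) ** shape))  # Weibull survival fn
    return -math.log10(survival)

# Illustrative parameters: heartbeats roughly every second; shape > 1 makes
# suspicion grow faster than linearly once a heartbeat is overdue.
SHAPE, SCALE = 2.0, 1.0
THRESHOLD = 2.0  # application-chosen cutoff
for t in (0.5, 1.0, 2.0, 4.0):
    phi = weibull_phi(t, SHAPE, SCALE)
    status = "SUSPECT" if phi > THRESHOLD else "trusted"
    print(f"t = {t:.1f} s  phi = {phi:.2f}  {status}")
```

    Because φ is continuous, each application can pick its own threshold, which is what lets one detector serve multiple applications.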

  17. Superstatistics model for T₂ distribution in NMR experiments on porous media.

    PubMed

    Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S

    2014-07-01

    We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on a superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for inversion of data decay in NMR experiments. We exemplify the method with q-exponential functions and χ²-distributions to describe, respectively, the data decay and the T2 distribution in high-field experiments on fully water-saturated glass microsphere bed packs and sedimentary rocks from outcrop, and in a noisy low-field experiment on rocks. The method is general and can also be applied to biological systems.
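    The q-exponential generalizes ordinary exponential decay and reduces to it as q → 1; a heavier tail (q > 1) is one way a distribution of T2 values shows up in the aggregate decay. A minimal sketch of the functional form only (parameter values are illustrative, not fits from the paper):

```python
import math

def q_exponential_decay(t: float, m0: float, t2: float, q: float) -> float:
    """M(t) = m0 * [1 - (1 - q) * t / t2]**(1 / (1 - q)); as q -> 1 this
    reduces to ordinary exponential decay m0 * exp(-t / t2)."""
    if abs(q - 1.0) < 1e-12:
        return m0 * math.exp(-t / t2)
    base = 1.0 - (1.0 - q) * (t / t2)
    return m0 * base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

# Larger q gives a heavier (power-law-like) tail than a single exponential.
for q in (1.0, 1.2, 1.5):
    print(f"q = {q}: M(3*T2) = {q_exponential_decay(3.0, 1.0, 1.0, q):.4f}")
```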

  18. Research on the novel FBG detection system for temperature and strain field distribution

    NASA Astrophysics Data System (ADS)

    Liu, Zhi-chao; Yang, Jin-hua

    2017-10-01

    To collect temperature and strain field distribution information, a novel FBG detection system was designed. The system uses a linearly chirped FBG structure for large bandwidth. The novel FBG cover was designed with a linearly varying thickness so that different locations respond differently, allowing the temperature and strain field distributions to be obtained simultaneously from the reflection spectrum. The cover structure was designed, its theoretical function was calculated, and its solution was derived for the strain field distribution. Simulation analysis of the temperature and strain field distributions under different strain strengths and action positions showed that the strain field distribution can be resolved. FOB100 series equipment was used to measure temperature in the experiment, and JSM-A10 series equipment was used to measure the strain field distribution. The average experimental error was better than 1.1% for temperature and better than 1.3% for strain, with individual errors appearing in the test data when the strain was small. The feasibility of the approach is demonstrated by theoretical analysis, simulation, and experiment, and it is well suited to practical application.

  19. Reconfiguring practice: the interdependence of experimental procedure and computing infrastructure in distributed earthquake engineering.

    PubMed

    De La Flor, Grace; Ojaghi, Mobin; Martínez, Ignacio Lamata; Jirotka, Marina; Williams, Martin S; Blakeborough, Anthony

    2010-09-13

    When transitioning local laboratory practices into distributed environments, the interdependent relationship between experimental procedure and the technologies used to execute experiments becomes highly visible and a focal point for system requirements. We present an analysis of ways in which this reciprocal relationship is reconfiguring laboratory practices in earthquake engineering as a new computing infrastructure is embedded within three laboratories in order to facilitate the execution of shared experiments across geographically distributed sites. The system has been developed as part of the UK Network for Earthquake Engineering Simulation e-Research project, which links together three earthquake engineering laboratories at the universities of Bristol, Cambridge and Oxford. We consider the ways in which researchers have successfully adapted their local laboratory practices through the modification of experimental procedure so that they may meet the challenges of coordinating distributed earthquake experiments.

  20. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  1. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of accessibility (hundreds of petabytes). The paper considers the ATLAS Grid Information System (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  2. Developing Fluorescence Sensor Systems for Early Detection of Nitrification Events in Chloraminated Drinking Water Distribution Systems

    EPA Science Inventory

    Detection of nitrification events in chloraminated drinking water distribution systems remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification events ...

  3. Advanced Operating System Technologies

    NASA Astrophysics Data System (ADS)

    Cittolin, Sergio; Riccardi, Fabio; Vascotto, Sandro

    In this paper we describe an R&D effort to define an OS architecture suitable for the requirements of the data acquisition and control of an LHC experiment. Large distributed computing systems are foreseen to be the core part of the DAQ and control systems of the future LHC experiments. Networks of thousands of processors, handling dataflows of several gigabytes per second with very strict timing constraints (microseconds), will become a common experience in the following years. Problems like distributed scheduling, real-time communication protocols, fault tolerance, and distributed monitoring and debugging will have to be faced. A solid software infrastructure will be required to manage this very complicated environment; at this moment neither does CERN have the necessary expertise to build it, nor does any similar commercial implementation exist. Fortunately these problems are not unique to particle and high-energy physics experiments, and current research in the distributed systems field, especially in the distributed operating systems area, is trying to address many of the above-mentioned issues. The world that we are going to face in the next ten years will be quite different and surely much more interconnected than the one we see now. Very ambitious projects exist that plan to link towns, nations, and the world in a single "data highway". Teleconferencing, video on demand, and distributed multimedia applications are just a few examples of the very demanding tasks to which the computer industry is committing itself. These projects are triggering a great research effort in the distributed, real-time, micro-kernel-based operating systems field and in the software engineering area.
    The purpose of our group is to collect the outcome of these different research efforts and to establish a working environment where the different ideas and techniques can be tested, evaluated, and possibly extended to address the requirements of a DAQ and control system suitable for LHC. Our work started in the second half of 1994 with a research agreement between CERN and Chorus Systemes (France), world leader in micro-kernel OS technology. The Chorus OS is targeted at distributed real-time applications, and it can very efficiently support different "OS personalities" in the same environment, such as POSIX, UNIX, and a CORBA-compliant distributed object architecture. Projects are being set up to verify the suitability of our work for LHC applications: we are building a scaled-down prototype of the DAQ system foreseen for the CMS experiment at LHC, where we will directly test our protocols and be able to make measurements and benchmarks, guiding our development and allowing us to build an analytical model of the system suitable for simulation and large-scale verification.

  4. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  5. Pressure Distribution and Air Data System for the Aeroassist Flight Experiment

    NASA Technical Reports Server (NTRS)

    Gibson, Lorelei S.; Siemers, Paul M., III; Kern, Frederick A.

    1989-01-01

    The Aeroassist Flight Experiment (AFE) is designed to provide critical flight data necessary for the design of future Aeroassist Space Transfer Vehicles (ASTV). This flight experiment will provide aerodynamic, aerothermodynamic, and environmental data for verification of experimental and computational flow field techniques. The Pressure Distribution and Air Data System (PD/ADS), one of the measurement systems incorporated into the AFE spacecraft, is designed to provide accurate pressure measurements on the windward surface of the vehicle. These measurements will be used to determine the pressure distribution and air data parameters (angle of attack, angle of sideslip, and free-stream dynamic pressure) encountered by the blunt-bodied vehicle over an altitude range of 76.2 km to 94.5 km. Design and development data are presented and include: measurement requirements, measurement heritage, theoretical studies to define the vehicle environment, flush-mounted orifice configuration, pressure transducer selection and performance evaluation data, and pressure tubing response analysis.

  6. Evaluation of the Performance of the Distributed Phased-MIMO Sonar.

    PubMed

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-11

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments.

  7. Evaluation of the Performance of the Distributed Phased-MIMO Sonar

    PubMed Central

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-01

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments. PMID:28085071

  8. Observing System Evaluations Using GODAE Systems

    DTIC Science & Technology

    2009-09-01

    Global ocean forecast systems, developed under the Global Ocean Data Assimilation Experiment (GODAE), are a powerful means of assessing the impact of different components of the Global Ocean Observing System (GOOS). Using a range of analysis tools and approaches, GODAE systems are useful for quantifying the ...

  9. N-body experiments and missing mass in clusters of galaxies

    NASA Technical Reports Server (NTRS)

    Smith, H.; Hintzen, P.; Sofia, S.; Oegerle, W.; Scott, J.; Holman, G.

    1979-01-01

    It is commonly assumed that the distributions of surface density and radial-velocity dispersion in clusters of galaxies are sensitive tracers of the underlying distribution of any unseen mass. N-body experiments have been used to test this assumption. Calculations with equal-mass systems indicate that the effects of the underlying mass distribution cannot be detected by observations of the surface-density or radial-velocity distributions, and the existence of an extended binding mass in all well-studied clusters would be consistent with available observations.

  10. RF-based power distribution system for optogenetic experiments

    NASA Astrophysics Data System (ADS)

    Filipek, Tomasz A.; Kasprowicz, Grzegorz H.

    2017-08-01

    In this paper, a wireless power distribution system for optogenetic experiments is demonstrated. The design and analysis of the power transfer system's development are described in detail, and the architecture is outlined in the context of the performance requirements that had to be met. We show how to design a wireless power transfer system using resonantly coupled circuits, consisting of a number of receivers and one transmitter covering the entire cage area with a specific power density. The transmitter design, with a fully automated protection stage, is described with detailed consideration of the specification and construction of the transmitting loop antenna. In addition, the receiver design is described, including simplification of implementation and minimization of the impact of component tolerances on the performance of the distribution system. The analysis has been confirmed by calculations and measurement results. The presented distribution system was designed to supply 100 mW to each of up to ten receivers in a limited 490 x 350 mm cage space using a single transmitter operating at a coupling resonant frequency of 27 MHz.
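    The 27 MHz coupling resonance quoted above is fixed by the loop inductance and tuning capacitance through f = 1/(2π√(LC)). A minimal sketch with hypothetical component values (the paper's actual L and C are not given here):

```python
import math

def resonant_frequency_hz(inductance_h: float, capacitance_f: float) -> float:
    """Resonant frequency of an ideal LC tank: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Hypothetical values chosen to land near 27 MHz (assumed, not from the paper).
L = 1.0e-6      # 1 uH loop inductance
C = 34.7e-12    # 34.7 pF tuning capacitance
print(f"{resonant_frequency_hz(L, C) / 1e6:.1f} MHz")
```

    Component tolerances shift this frequency, which is why the paper emphasizes minimizing their impact on the receivers.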

  11. A Closer Look at Split Visual Attention in System- and Self-Paced Instruction in Multimedia Learning

    ERIC Educational Resources Information Center

    Schmidt-Weigand, Florian; Kohnert, Alfred; Glowalla, Ulrich

    2010-01-01

    Two experiments examined visual attention distribution in learning from text and pictures. Participants watched a 16-step multimedia instruction on the formation of lightning. In Experiment 1 (N=90) the instruction was system-paced (fast, medium, slow pace), while it was self-paced in Experiment 2 (N=31). In both experiments the text modality was…

  12. A Web-Based Multi-Database System Supporting Distributed Collaborative Management and Sharing of Microarray Experiment Information

    PubMed Central

    Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco

    2006-01-01

    We developed MicroGen, a multi-database, Web-based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storage according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all the multidisciplinary actors involved in spotted microarray experiments. PMID:17238488

  13. Effects of dispersal on total biomass in a patchy, heterogeneous system: analysis and experiment.

    USGS Publications Warehouse

    Zhang, Bo; Liu, Xin; DeAngelis, Donald L.; Ni, Wei-Ming; Wang, G Geoff

    2015-01-01

    An intriguing recent result from mathematics is that a population diffusing at an intermediate rate in an environment in which resources vary spatially will reach a higher total equilibrium biomass than the population in an environment in which the same total resources are distributed homogeneously. We extended the current mathematical theory to apply to logistic growth and also showed that the result applies to patchy systems with dispersal among patches, both for continuous and discrete time. This allowed us to make specific predictions, through simulations, concerning the biomass dynamics, which were verified by a laboratory experiment. The experiment was a study of biomass growth of duckweed (Lemna minor Linn.), where the resources (nutrients added to water) were distributed homogeneously among a discrete series of water-filled containers in one treatment, and distributed heterogeneously in another treatment. The experimental results showed that total biomass peaked at an intermediate, relatively low diffusion rate, exceeding the total carrying capacity of the system and agreeing with the simulation model. The implications of the experiment for source, sink, and pseudo-sink dynamics are discussed.
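The patch model described in the abstract can be sketched in a few lines. This is a hedged illustration (not the authors' code): logistic growth on discrete patches with all-to-all dispersal, du_i/dt = u_i*(m_i - u_i) + d * sum_j (u_j - u_i), with purely illustrative parameter values.

```python
def equilibrium_biomass(m, d, steps=20000, dt=0.01):
    """Forward-Euler integration to (near) equilibrium; returns total biomass.

    m: per-patch resource levels (carrying capacities); d: dispersal rate.
    """
    u = [mi / 2.0 for mi in m]      # start each patch at half capacity
    n = len(m)
    for _ in range(steps):
        total = sum(u)
        # dispersal term: d * sum_j (u_j - u_i) = d * (total - n * u_i)
        u = [ui + dt * (ui * (mi - ui) + d * (total - n * ui))
             for ui, mi in zip(u, m)]
    return sum(u)
```

With heterogeneous resources m = [3, 1] and a moderate dispersal rate, the total equilibrium biomass exceeds the summed carrying capacity of 4, while the homogeneous case m = [2, 2] stays at 4, mirroring the effect the abstract describes.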

  14. Distributed gas sensing with optical fibre photothermal interferometry.

    PubMed

    Lin, Yuechuan; Liu, Fei; He, Xiangge; Jin, Wei; Zhang, Min; Yang, Fan; Ho, Hoi Lut; Tan, Yanzhen; Gu, Lijuan

    2017-12-11

    We report the first distributed optical fibre trace-gas detection system based on photothermal interferometry (PTI) in a hollow-core photonic bandgap fibre (HC-PBF). Absorption of a modulated pump propagating in the gas-filled HC-PBF generates distributed phase modulation along the fibre, which is detected by a dual-pulse heterodyne phase-sensitive optical time-domain reflectometry (OTDR) system. A quasi-distributed sensing experiment with two 28-meter-long HC-PBF sensing sections connected by single-mode transmission fibres demonstrated a limit of detection (LOD) of ∼10 ppb acetylene with a pump power level of 55 mW and an effective noise bandwidth (ENBW) of 0.01 Hz, corresponding to a normalized detection limit of 5.5 ppb⋅W/Hz. A distributed sensing experiment over a 200-meter-long sensing cable made of serially connected HC-PBFs demonstrated a LOD of ∼5 ppm with 62.5 mW peak pump power and 11.8 Hz ENBW, or a normalized detection limit of 312 ppb⋅W/Hz. The spatial resolution of the current distributed detection system is limited to ∼30 m, but it could be reduced to 1 m or less by optimizing the phase detection system.

  15. Evolution of the ATLAS PanDA workload management system for exascale computational science

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Klimentov, A.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.; Yu, D.; Atlas Collaboration

    2014-06-01

    An important foundation underlying the impressive success of data processing and analysis in the ATLAS experiment [1] at the LHC [2] is the Production and Distributed Analysis (PanDA) workload management system [3]. PanDA was designed specifically for ATLAS and proved to be highly successful in meeting all the distributed computing needs of the experiment. However, the core design of PanDA is not experiment specific. The PanDA workload management system is capable of meeting the needs of other data intensive scientific applications. Alpha-Magnetic Spectrometer [4], an astro-particle experiment on the International Space Station, and the Compact Muon Solenoid [5], an LHC experiment, have successfully evaluated PanDA and are pursuing its adoption. In this paper, a description of the new program of work to develop a generic version of PanDA will be given, as well as the progress in extending PanDA's capabilities to support supercomputers and clouds and to leverage intelligent networking. PanDA has demonstrated at a very large scale the value of automated dynamic brokering of diverse workloads across distributed computing resources. The next generation of PanDA will allow other data-intensive sciences and a wider exascale community employing a variety of computing platforms to benefit from ATLAS' experience and proven tools.

  16. Distribution of Off-Diagonal Cross Sections in Quantum Chaotic Scattering: Exact Results and Data Comparison.

    PubMed

    Kumar, Santosh; Dietz, Barbara; Guhr, Thomas; Richter, Achim

    2017-12-15

    The recently derived distributions for the scattering-matrix elements in quantum chaotic systems are not accessible in the majority of experiments, whereas the cross sections are. We analytically compute distributions for the off-diagonal cross sections in the Heidelberg approach, which is applicable to a wide range of quantum chaotic systems. Thus, eventually, we fully solve a problem that already arose more than half a century ago in compound-nucleus scattering. We compare our results with data from microwave and compound-nucleus experiments, particularly addressing the transition from isolated resonances towards the Ericson regime of strongly overlapping ones.

  17. Distribution of Off-Diagonal Cross Sections in Quantum Chaotic Scattering: Exact Results and Data Comparison

    NASA Astrophysics Data System (ADS)

    Kumar, Santosh; Dietz, Barbara; Guhr, Thomas; Richter, Achim

    2017-12-01

    The recently derived distributions for the scattering-matrix elements in quantum chaotic systems are not accessible in the majority of experiments, whereas the cross sections are. We analytically compute distributions for the off-diagonal cross sections in the Heidelberg approach, which is applicable to a wide range of quantum chaotic systems. Thus, eventually, we fully solve a problem that already arose more than half a century ago in compound-nucleus scattering. We compare our results with data from microwave and compound-nucleus experiments, particularly addressing the transition from isolated resonances towards the Ericson regime of strongly overlapping ones.

  18. Security in the CernVM File System and the Frontier Distributed Database Caching System

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Blomer, J.

    2014-06-01

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.

  19. Optoelectronics in TESLA, LHC, and pi-of-the-sky experiments

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Wrochna, Grzegorz; Simrock, Stefan

    2004-09-01

    Optical and optoelectronic technologies are more and more widely used in the biggest world experiments of high energy and nuclear physics, as well as in astronomy. The paper is a broad digest describing the usage of optoelectronics in such experiments and information about some of the involved teams. The described experiments include the TESLA linear accelerator and FEL, the Compact Muon Solenoid at the LHC, and the recently started π-of-the-sky experiment for global observation of gamma ray bursts (with associated optical flashes). Optoelectronics and photonics offer several key features which either extend the technical parameters of existing solutions or add quite new practical application possibilities. Some of these favorable features of photonic systems are: high selectivity of optical sensors, immunity to some kinds of noise processes, extremely broad bandwidth exchangeable for either terabit-rate transmission or ultrashort pulse generation, parallel image processing capability, etc. The following groups of photonic components and systems are described: (1) applications of discrete components such as LEDs, photodiodes, laser diodes, CCD and CMOS cameras, active optical crystals, and optical fibers, in radiation dosimetry, astronomical image processing, and the building of more complex photonic systems; (2) optical fiber networks providing very stable phase distribution, clock signal distribution, distributed dosimetry, and distributed gigabit transmission for control, diagnostics, and data acquisition/processing; (3) fast and stable coherent femtosecond laser systems with active optical components for electro-optical sampling and photocathode excitation in the RF electron gun for the linac. The parameters of some of these systems are quoted and discussed. A number of the discussed solutions seem to be competitive with classical ones. Several future fields seem to emerge, involving direct coupling between ultrafast photonic and VLSI FPGA-based technologies.

  20. A rocket-borne microprocessor-based experiment for investigation of energetic particles in the D and E regions

    NASA Technical Reports Server (NTRS)

    Braswell, F. M.

    1981-01-01

    An energetic-particle experiment using the Z80 family of microcomputer components is described. Data collected from the experiment allowed fast and efficient postprocessing, yielding both the energy spectrum and the pitch-angle distribution of energetic particles in the D and E regions. Advanced microprocessor system architecture and software concepts were used in the design to cope with the large amount of data being processed. This required the Z80 system to operate at over 80% of its total capacity. The microprocessor system was included in the payloads of three rockets launched during the Energy Budget Campaign at ESRANGE, Kiruna, Sweden in November 1980. Based on preliminary examination of the data, the performance of the experiment was satisfactory and good data were obtained on the energy spectrum and pitch-angle distribution of the particles.

  1. CHLORINE DECAY AND BIOFILM STUDIES IN A PILOT SCALE DRINKING WATER DISTRIBUTION DEAD END PIPE SYSTEM

    EPA Science Inventory

    Chlorine decay experiments using a pilot-scale water distribution dead end pipe system were conducted to define relationships between chlorine decay and environmental factors. These included flow rate, biomass concentration and biofilm density, and initial chlorine concentrations...

  2. Clinical Physiologic Research Instrumentation: An Approach Using Modular Elements and Distributed Processing

    PubMed Central

    Hagen, R. W.; Ambos, H. D.; Browder, M. W.; Roloff, W. R.; Thomas, L. J.

    1979-01-01

    The Clinical Physiologic Research System (CPRS) developed from our experience in applying computers to medical instrumentation problems. This experience revealed a set of applications with a commonality in data acquisition, analysis, input/output, and control needs that could be met by a portable system. The CPRS demonstrates a practical methodology for integrating commercial instruments with distributed modular elements of local design in order to make facile responses to changing instrumentation needs in clinical environments.

  3. Distributed Processing System for Restoration of Electric Power Distribution Network Using Two-Layered Contract Net Protocol

    NASA Astrophysics Data System (ADS)

    Kodama, Yu; Hamagami, Tomoki

    A distributed processing system for restoration of an electric power distribution network using a two-layered CNP is proposed. The goal of this study is to develop a restoration system that adjusts to the future power network with distributed generators. The state of the art of this study is that the two-layered CNP is applied in a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, named field agents and operating agents. In order to avoid conflicts between tasks, the operating agent controls the privilege for managers to send task announcement messages in the CNP. This technique realizes coordination between agents which work asynchronously in parallel with others. Moreover, this study implements the distributed processing system using a de facto standard multi-agent framework, JADE (Java Agent DEvelopment framework). This study conducts simulation experiments of power distribution network restoration and compares the proposed system with the previous system. The results confirm the effectiveness of the proposed system.
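The privilege-gated announcement idea can be sketched as follows. This is a hedged illustration in plain Python, not the authors' JADE implementation; all class and method names are hypothetical. An operating agent serializes the privilege to announce tasks, so managers working asynchronously in parallel cannot issue conflicting announcements to the field agents.

```python
class OperatingAgent:
    def __init__(self):
        self.holder = None                  # manager currently announcing

    def request_privilege(self, manager):
        if self.holder is None:
            self.holder = manager
            return True
        return False                        # busy: another manager holds it

    def release_privilege(self, manager):
        if self.holder == manager:
            self.holder = None


class FieldAgent:
    def __init__(self, name, capacity):
        self.name, self.capacity = name, capacity

    def bid(self, task_load):
        # bid spare capacity; decline (None) if the task cannot be served
        return self.capacity - task_load if self.capacity >= task_load else None


def announce(operating, manager, task_load, contractors):
    """One CNP round (announce -> bid -> award), gated by the privilege."""
    if not operating.request_privilege(manager):
        return None                         # a real system would wait and retry
    bids = [(a.bid(task_load), a) for a in contractors]
    bids = [(b, a) for b, a in bids if b is not None]
    winner = max(bids, key=lambda p: p[0])[1] if bids else None
    operating.release_privilege(manager)
    return winner.name if winner else None
```

A second manager requesting the privilege while a round is in progress is simply refused, which is the conflict-avoidance mechanism the abstract describes.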

  4. Derivation of hydrous pyrolysis kinetic parameters from open-system pyrolysis

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Hsin; Huang, Wuu-Liang

    2010-05-01

    Kinetic information is essential to predict the temperature, timing, or depth of hydrocarbon generation within a hydrocarbon system. The most common experiments for deriving kinetic parameters use open-system pyrolysis. However, it has been shown that the conditions of open-system pyrolysis deviate from nature in their low, near-ambient pressure and high temperatures. Also, the extrapolation of heating rates in open-system pyrolysis to geological conditions may be questionable. A recent study by Lewan and Ruble shows that hydrous-pyrolysis conditions can better simulate natural conditions, and its applications are supported by two case studies with natural thermal-burial histories. Nevertheless, performing hydrous pyrolysis experiments is tedious and requires a large amount of sample, while open-system pyrolysis is convenient and efficient. Therefore, the present study aims at deriving convincing distributed hydrous-pyrolysis activation energies (Ea) from only routine open-system Rock-Eval data. Our results reveal a good correlation between the open-system Rock-Eval parameter Tmax and the activation energy derived from hydrous pyrolysis. The single hydrous-pyrolysis Ea can be predicted from Tmax based on this correlation, while the frequency factor (A0) is estimated from the linear relationship between the single Ea and log A0. Because an Ea distribution is more realistic than a single Ea, we modify the predicted single hydrous-pyrolysis Ea into a distributed Ea by shifting the pattern of the Ea distribution from open-system pyrolysis until the weighted mean of the Ea distribution equals the single hydrous-pyrolysis Ea. Moreover, it has been shown that the shape of the Ea distribution closely resembles the shape of the Tmax curve. Thus, in the absence of an open-system Ea distribution, we may use the shape of the Tmax curve to obtain the distributed hydrous-pyrolysis Ea. The study offers a simple new approach for obtaining distributed hydrous-pyrolysis Ea from only routine open-system Rock-Eval data, which will allow better estimates of hydrocarbon generation.

  5. Air data system optimization using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    Deshpande, Samir M.; Kumar, Renjith R.; Seywald, Hans; Siemers, Paul M., III

    1992-01-01

    An optimization method for flush-orifice air data system design has been developed using the Genetic Algorithm approach. The optimization of the orifice array minimizes the effect of normally distributed random noise in the pressure readings on the calculation of air data parameters, namely, angle of attack, sideslip angle and freestream dynamic pressure. The optimization method is applied to the design of Pressure Distribution/Air Data System experiment (PD/ADS) proposed for inclusion in the Aeroassist Flight Experiment (AFE). Results obtained by the Genetic Algorithm method are compared to the results obtained by conventional gradient search method.
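A genetic search over orifice arrays of the kind the abstract describes can be sketched as below. This is a hedged, minimal sketch, not the NASA design code: the fitness function is a hypothetical surrogate that scores well-spread orifice arrays as less sensitive to pressure noise, and all parameters are illustrative.

```python
import random


def fitness(subset):
    # surrogate objective: penalize closely spaced orifices (higher is better)
    return -sum(1.0 / (abs(a - b) + 1)
                for i, a in enumerate(subset) for b in subset[i + 1:])


def genetic_search(n_orifices=16, k=4, pop_size=30, gens=60, seed=0):
    """Evolve a population of k-orifice subsets of n_orifices candidate sites."""
    rng = random.Random(seed)
    pop = [sorted(rng.sample(range(n_orifices), k)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]                       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = sorted(rng.sample(sorted(set(a + b)), k))  # crossover
            if rng.random() < 0.2:                             # mutation
                i = rng.randrange(k)
                child[i] = rng.choice(
                    [x for x in range(n_orifices) if x not in child])
                child.sort()
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In the experiment design problem, the surrogate would be replaced by the propagated effect of Gaussian pressure noise on the computed air data parameters, which is what the abstract's objective minimizes.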

  6. Experimental demonstration of an active phase randomization and monitor module for quantum key distribution

    NASA Astrophysics Data System (ADS)

    Sun, Shi-Hai; Liang, Lin-Mei

    2012-08-01

    Phase randomization is a very important assumption in BB84 quantum key distribution (QKD) systems with a weak coherent source; otherwise, an eavesdropper may gain information about the final key. In this Letter, a stable and monitored active phase randomization scheme for one-way and two-way QKD systems is proposed and demonstrated in experiments. Furthermore, our scheme gives Alice an easy way to monitor the degree of randomization in experiments. Therefore, we expect our scheme to become a standard part of future QKD systems due to its security significance and feasibility.

  7. Electric Transport Traction Power Supply System With Distributed Energy Sources

    NASA Astrophysics Data System (ADS)

    Abramov, E. Y.; Schurov, N. I.; Rozhkova, M. V.

    2016-04-01

    The paper addresses the problem of leveling the daily load curve of traction substations (TSS) for urban electric transport. A circuit for a traction power supply system (TPSS) with distributed autonomous energy sources (AES) based on photovoltaic (PV) and energy storage (ES) units is presented. A distribution algorithm of power flow for leveling the daily traction load curve is also introduced. In addition, an implemented experimental model of the power supply system is illustrated.

  8. Fluorescence Sensors for Early Detection of Nitrification in Drinking Water Distribution Systems – Interference Corrections (Abstract)

    EPA Science Inventory

    Nitrification event detection in chloraminated drinking water distribution systems (DWDSs) remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification eve...

  9. Fluorescence Sensors for Early Detection of Nitrification in Drinking Water Distribution Systems – Interference Corrections (Poster)

    EPA Science Inventory

    Nitrification event detection in chloraminated drinking water distribution systems (DWDSs) remains an ongoing challenge for many drinking water utilities, including Dallas Water Utilities (DWU) and the City of Houston (CoH). Each year, these utilities experience nitrification eve...

  10. Thermal Power Systems (TPS); Point-Focusing Thermal and Electric Applications (PFTEA). Volume 2: Detailed report, fiscal year 1979

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Progress in the development of systems which employ point-focusing distributed receiver technology is reported. Emphasis is placed on the first engineering experiment, the Small Community Solar Thermal Power Experiment. Procurement activities for the Military Module Power Experiment, the first of a series of experiments planned as part of the Isolated Load Series, are included.

  11. A High-Availability, Distributed Hardware Control System Using Java

    NASA Technical Reports Server (NTRS)

    Niessner, Albert F.

    2011-01-01

    Two independent coronagraph experiments that require 24/7 availability, with different optical layouts and different motion control requirements, are commanded and controlled by the same Java software system executing on many geographically scattered computer systems interconnected via TCP/IP. High availability of a distributed system requires that the computers have a robust communication messaging system, making the mix of TCP/IP (a robust transport) and XML (a robust message format) a natural choice. XML also adds configuration flexibility. Java then adds object-oriented paradigms, exception handling, heavily tested libraries, and many third-party tools for implementation robustness. The result is a software system that provides users 24/7 access to two diverse experiments, with XML files defining the differences between them.

  12. Determining the strengths of HCP slip systems using harmonic analyses of lattice strain distributions

    DOE PAGES

    Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang; ...

    2017-10-15

    A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
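The coefficient-matching idea can be reduced to a small sketch. This is a hedged illustration only: the function names and the simple cosine basis are assumptions for demonstration, not the authors' harmonic formulation. A sampled lattice-strain distribution is expanded in a few harmonic coefficients, and a simulation is scored against a measurement by the coefficient mismatch.

```python
import math


def harmonic_coeffs(samples, n_terms=3):
    """Leading cosine-series coefficients of a sampled distribution."""
    n = len(samples)
    return [sum(math.cos(k * x) for x in samples) / n for k in range(n_terms)]


def mismatch(measured, simulated, n_terms=3):
    """Squared distance between measured and simulated coefficient vectors."""
    a = harmonic_coeffs(measured, n_terms)
    b = harmonic_coeffs(simulated, n_terms)
    return sum((x - y) ** 2 for x, y in zip(a, b))
```

In the paper's scheme, the slip-system strength ratios in the crystal-plasticity simulation would be adjusted until a mismatch of this kind is minimized.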

  13. Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)

    Treesearch

    Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline

    2008-01-01

    The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...

  14. Staghorn: An Automated Large-Scale Distributed System Analysis Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gabert, Kasimir; Burns, Ian; Elliott, Steven

    2016-09-01

    Conducting experiments on large-scale distributed computing systems is becoming significantly easier with the assistance of emulation. Researchers can now create a model of a distributed computing environment and then generate a virtual, laboratory copy of the entire system composed of potentially thousands of virtual machines, switches, and software. The use of real software, running at clock rate in full virtual machines, allows experiments to produce meaningful results without necessitating a full understanding of all model components. However, the ability to inspect and modify elements within these models is bound by the limitation that such modifications must compete with the model, either running in or alongside it. This inhibits entire classes of analyses from being conducted upon these models. We developed a mechanism to snapshot an entire emulation-based model as it is running. This allows us to "freeze time" and subsequently fork execution, replay execution, modify arbitrary parts of the model, or deeply explore the model. This snapshot includes capturing packets in transit and other input/output state along with the running virtual machines. We were able to build this system in Linux using Open vSwitch and Kernel Virtual Machines on top of Sandia's emulation platform Firewheel. This primitive opens the door to numerous subsequent analyses on models, including state space exploration, debugging distributed systems, performance optimizations, improved training environments, and improved experiment repeatability.

  15. Research on distributed temperature sensor (DTS) applied in underground tunnel

    NASA Astrophysics Data System (ADS)

    Hu, Chuanlong; Wang, Jianfeng; Zhang, Zaixuan; Shen, Changyu; Jin, Yongxing; Jin, Shangzhong

    2011-11-01

    A distributed temperature sensor (DTS) system with a sensing distance of 4 km was developed for applications in tunnel temperature measurement and fire alarm. Characteristics of DTS and experiment results are introduced. The results show that DTS system can play an important role in tunnel fire alarm.

  16. DYNAMIC ENERGY SAVING IN BUILDINGS WITH UNDERFLOOR AIR DISTRIBUTION SYSTEM – EXPERIMENTAL AND SIMULATION STUDIES

    EPA Science Inventory

    The present study is aimed at seeking a better understanding of the thermodynamics involved in the air distribution strategies associated with UFAD systems and their impact on energy-saving dynamics.
    The objectives are:

    • Experiment...

    • Second-Order Chlorine Decay and Trihalomethanes Formation in a Pilot-Scale Water Distribution Systems

      EPA Science Inventory

      It is well known that model-building of chlorine decay in real water distribution systems is difficult because chlorine decay is influenced by many factors (e.g., bulk water demand, pipe-wall demand, piping material, flow velocity, and residence time). In this paper, experiments ...

    • Online monitoring of seismic damage in water distribution systems

      NASA Astrophysics Data System (ADS)

      Liang, Jianwen; Xiao, Di; Zhao, Xinhua; Zhang, Hongwei

      2004-07-01

      It is shown that water distribution systems can be damaged by earthquakes, and seismic damage cannot easily be located, especially immediately after the event. Earthquake experience shows that accurate and quick location of seismic damage is critical to the emergency response of water distribution systems. This paper develops a methodology to locate seismic damage -- multiple breaks in a water distribution system -- by monitoring water pressure online at a limited number of positions in the water distribution system. For the purpose of online monitoring, supervisory control and data acquisition (SCADA) technology can be used. A neural network-based inverse analysis method is constructed for locating the seismic damage based on the variation of water pressure. The neural network is trained using analytically simulated data from the water distribution system, and validated using a set of data never used in training. It is found that the methodology provides an effective and practical way to accurately and quickly locate seismic damage in a water distribution system.
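The inverse-analysis idea can be sketched compactly. The paper trains a neural network on simulated pressure data; to keep this hedged sketch short, a nearest-neighbour lookup over the same kind of simulated (pressure signature -> break location) pairs stands in for the network, and all numbers and labels are illustrative.

```python
def locate_break(observed, training):
    """Return the break location whose simulated pressure signature is closest."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda pair: sq_dist(pair[0], observed))[1]


# illustrative simulated signatures: the normalized pressure drop is
# largest at the monitoring points nearest the break
training = [
    ([0.8, 0.3, 0.1], "pipe-A"),
    ([0.2, 0.9, 0.3], "pipe-B"),
    ([0.1, 0.4, 0.7], "pipe-C"),
]
```

A trained network generalizes between such signatures rather than merely matching the nearest one, which is why the paper validates on simulated data never seen in training.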

    • Autonomic Management in a Distributed Storage System

      NASA Astrophysics Data System (ADS)

      Tauber, Markus

      2010-07-01

      This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.

    • Space station data management system - A common GSE test interface for systems testing and verification

      NASA Technical Reports Server (NTRS)

      Martinez, Pedro A.; Dunn, Kevin W.

      1987-01-01

      This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  1. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is designed to integrate configuration and status information about the resources, services, and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. As an intermediate middleware system between clients and external information sources (such as the central BDII, GOCDB, and MyOSG), AGIS defines the relations between experiment-specific resources and physical distributed computing capabilities. In production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS, and it is continuously evolving to fulfil new user requests, enable enhanced operations, and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services continuously evolve to fit newer requirements from the ADC community. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of technologies that have recently become widely used in ATLAS Computing, such as flexible use of opportunistic cloud and HPC resources, ObjectStore service integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, and the unified storage protocol declaration required for PanDA pilot site movers. Improvements to the information model and general updates are also shown; in particular, we explain how collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  2. Effects of dispersal on total biomass in a patchy, heterogeneous system: Analysis and experiment.

    PubMed

    Zhang, Bo; Liu, Xin; DeAngelis, D L; Ni, Wei-Ming; Wang, G Geoff

    2015-06-01

    An intriguing recent result from mathematics is that a population diffusing at an intermediate rate in an environment in which resources vary spatially will reach a higher total equilibrium biomass than the population in an environment in which the same total resources are distributed homogeneously. We extended the current mathematical theory to apply to logistic growth and also showed that the result applies to patchy systems with dispersal among patches, in both continuous and discrete time. This allowed us to make specific predictions, through simulations, concerning the biomass dynamics, which were verified by a laboratory experiment. The experiment studied biomass growth of duckweed (Lemna minor Linn.), where the resources (nutrients added to water) were distributed homogeneously among a discrete series of water-filled containers in one treatment, and heterogeneously in another. The experimental results showed that total biomass peaked, at an intermediate and relatively low diffusion rate, at a value higher than the total carrying capacity of the system, in agreement with the simulation model. The implications of the experiment for source, sink, and pseudo-sink dynamics are discussed. Copyright © 2015 Elsevier Inc. All rights reserved.
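
    The patch-model prediction above can be reproduced with a minimal simulation. The sketch below (an illustration, not the authors' code) integrates a two-patch model dx_i/dt = x_i(m_i - x_i) + D * sum_j (x_j - x_i), where patch quality m_i plays the role of both growth rate and carrying capacity, and shows that the heterogeneous system's equilibrium total exceeds the total carrying capacity at an intermediate dispersal rate D, while the homogeneous system does not.

```python
def equilibrium_total(m, D, dt=0.005, steps=40000):
    """Euler-integrate dx_i/dt = x_i*(m_i - x_i) + D*sum_j (x_j - x_i)
    to equilibrium and return the total biomass."""
    x = list(m)  # start each patch at its own carrying capacity
    n = len(m)
    for _ in range(steps):
        total = sum(x)
        x = [xi + dt * (xi * (mi - xi) + D * (total - n * xi))
             for xi, mi in zip(x, m)]
    return sum(x)

hetero = [1.0, 3.0]   # patchy resources, total carrying capacity = 4
homo = [2.0, 2.0]     # same total resources, spread evenly

no_dispersal = equilibrium_total(hetero, D=0.0)   # each patch sits at m_i: total 4.0
intermediate = equilibrium_total(hetero, D=0.5)   # exceeds 4.0
homogeneous = equilibrium_total(homo, D=0.5)      # stays at 4.0
```

    At D = 0 and in the homogeneous treatment the total equals the summed carrying capacity exactly; only the heterogeneous system with moderate dispersal overshoots it, mirroring the peak at an intermediate diffusion rate reported above.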

  3. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME.

    PubMed

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2016-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments.
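
    The kind of computation MOLNs distributes can be illustrated with a toy spatial stochastic simulation. The sketch below assumes nothing about the PyURDME API; it is a plain direct-method SSA (Gillespie) on a one-dimensional row of voxels, with diffusive hops between neighbouring voxels and a degradation reaction A -> 0.

```python
import random

def spatial_ssa(counts, d, k, t_end, seed=1):
    """Direct-method SSA on a 1-D voxel row: hops (rate d per molecule to
    each neighbour) and degradation A -> 0 (rate k per molecule)."""
    rng = random.Random(seed)
    counts = list(counts)
    t = 0.0
    while True:
        # build the propensity list: (rate, voxel index, event kind)
        events = []
        for i, n in enumerate(counts):
            if n == 0:
                continue
            if i > 0:
                events.append((d * n, i, "left"))
            if i < len(counts) - 1:
                events.append((d * n, i, "right"))
            events.append((k * n, i, "degrade"))
        a0 = sum(rate for rate, _, _ in events)
        if a0 == 0.0:
            break                      # system is empty
        wait = rng.expovariate(a0)     # exponential waiting time
        if t + wait > t_end:
            break
        t += wait
        # pick one event with probability proportional to its rate
        r = rng.random() * a0
        for rate, i, kind in events:
            r -= rate
            if r <= 0.0:
                break
        counts[i] -= 1
        if kind == "left":
            counts[i - 1] += 1
        elif kind == "right":
            counts[i + 1] += 1
    return counts

# 100 molecules start in the leftmost voxel and spread while degrading
final = spatial_ssa([100, 0, 0, 0, 0], d=1.0, k=0.1, t_end=5.0)
```

    A MOLNs-style workflow would run many independent realizations of exactly this kind of kernel in parallel across cloud workers, which is what makes the Monte Carlo cost an embarrassingly parallel problem.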

  4. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  5. Formation and Release Behavior of Iron Corrosion Products under the Influence of Bacterial Communities in a Simulated Water Distribution System

    EPA Science Inventory

    Understanding the effects of biofilm on the iron corrosion, iron release and associated corrosion by-products is critical for maintaining the water quality and the integrity of drinking water distribution system (DWDS). In this work, iron corrosion experiments under sterilized a...

  6. Comparison between Synthesized Lead Particles and Lead Solids Formed on Surfaces in Real Drinking Water Distribution Systems

    EPA Science Inventory

    The objective of this work is to compare the properties of lead solids formed during bench-scale precipitation experiments to solids found on lead pipe removed from real drinking water distribution systems and metal coupons used in pilot scale corrosion testing. Specifically, so...

  7. Monochloramine Cometabolism by Mixed-Culture Nitrifiers ...

    EPA Pesticide Factsheets

    The current research investigated monochloramine cometabolism by nitrifying mixed cultures grown under drinking water relevant conditions and harvested from sand-packed reactors before conducting suspended growth batch kinetic experiments. Three batch reactors were used in each experiment: (1) a positive control to estimate ammonia kinetic parameters, (2) a negative control to account for abiotic reactions, and (3) a cometabolism reactor to estimate cometabolism kinetic constants. Kinetic parameters were estimated in AQUASIM with a simultaneous fit to all experimental data. Cometabolism kinetics were best described by a first order model. Monochloramine cometabolism kinetics were similar to those of ammonia metabolism, and monochloramine cometabolism was a significant loss mechanism (30% of the observed monochloramine loss). These results demonstrated that monochloramine cometabolism occurred in mixed cultures similar to those found in drinking water distribution systems; thus, cometabolism may be a significant contribution to monochloramine loss during nitrification episodes in drinking water distribution systems.
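
    A first-order loss model of the kind referred to above can be illustrated with a small parameter-estimation sketch (the rate constant and concentrations below are hypothetical, not the study's data): for first-order loss dC/dt = -kC, a log-linear least-squares fit of batch concentration data recovers k.

```python
import math

def fit_first_order(times, concs):
    """Least-squares slope of ln(C) vs t; returns the first-order rate
    constant k, since dC/dt = -k*C implies ln C = ln C0 - k*t."""
    logs = [math.log(c) for c in concs]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope

# synthetic batch data: C(t) = 2.0 * exp(-0.12 t), hypothetical k = 0.12 h^-1
times = [0.0, 1.0, 2.0, 4.0, 8.0, 12.0]
concs = [2.0 * math.exp(-0.12 * t) for t in times]
k_est = fit_first_order(times, concs)   # recovers 0.12 for noiseless data
```

    In practice a tool such as AQUASIM fits the full reactor model simultaneously; the log-linear fit is just the simplest way to see what "first order" means for the monochloramine data.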

  8. Distributed energy store railguns experiment and analysis

    NASA Astrophysics Data System (ADS)

    Holland, L. D.

    1984-02-01

    Electromagnetic acceleration of projectiles holds the potential for achieving higher velocities than those yet achieved by any other means. A railgun is the simplest form of electromagnetic macroparticle accelerator and can generate the highest sustained accelerating force. The practical length of conventional railguns is limited by the impedance of the rails, because current must be carried along their entire length. A railgun and power supply system called the distributed energy store railgun was proposed as a solution to this limitation. A distributed energy store railgun was constructed and successfully operated. In addition to this demonstration of the distributed energy store railgun principle, a theoretical model of the system was constructed. A simple simulation of the railgun system based on this model, but ignoring frictional drag, was compared with the experimental results. In the course of comparing the simulation and the experiment, significant frictional drag of the projectile against the sidewalls of the bore was observed.
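
    A lumped-parameter simulation of the kind described (constant inductance gradient, frictionless, with one energy store supplying current while the projectile traverses its rail section) can be sketched as follows; all numbers are illustrative, not those of the reported experiment.

```python
def railgun_sim(currents, section=1.0, Lp=0.4e-6, mass=0.05, dt=1.0e-6):
    """Frictionless distributed-energy-store railgun: currents[i] is the
    rail current (A) supplied by store i while the projectile is inside
    section i. Armature force: F = 0.5 * L' * I**2, L' = inductance
    gradient (H/m). Returns (muzzle velocity, exit time)."""
    x = v = t = 0.0
    length = len(currents) * section
    while x < length:
        i = min(int(x // section), len(currents) - 1)
        force = 0.5 * Lp * currents[i] ** 2
        v += (force / mass) * dt
        x += v * dt
        t += dt
    return v, t

# five stores along the rails, each firing as the projectile enters its section
v_muzzle, t_exit = railgun_sim([2.0e5, 1.9e5, 1.8e5, 1.7e5, 1.6e5])
```

    Because each store only feeds its own section, the current path (and hence the rail impedance seen by each store) stays short regardless of total rail length, which is the advantage the distributed energy store concept claims over a single breech-fed supply.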

  9. Optimization of an interactive distributive computer network

    NASA Technical Reports Server (NTRS)

    Frederick, V.

    1985-01-01

    The activities under a cooperative agreement for the development of a computer network are briefly summarized. Research activities covered are: computer operating systems optimization and integration; software development and implementation of the IRIS (Infrared Imaging of Shuttle) Experiment; and software design, development, and implementation of the APS (Aerosol Particle System) Experiment.

  10. Laboratory for Computer Science Progress Report 19, 1 July 1981-30 June 1982.

    DTIC Science & Technology

    1984-05-01

    Contents include: Multiprocessor Architectures; TRIX Operating System; VLSI Tools; Systematic Program Development (Introduction; Specification). ...exploring distributed operating systems and the architecture of powerful single-user computers that are interconnected by communication networks. ...we expect to experiment with languages, operating systems, and applications that establish the feasibility of distributed computing.

  11. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Methods for distributed computing are currently receiving much attention; one such method is the use of multi-agent systems. Distributed computing organized over a conventional network of computers can be exposed to security threats arising from the computational processes themselves. The authors have developed a unified agent algorithm for the control system governing the operation of computing-network nodes, with ordinary networked PCs serving as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task by creating a distributed computation. Agents running on the networked computers can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the machines on the network. The number of computers involved can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (a dynamically varying number of computers on the network). The developed multi-agent system detects cases of falsification of the results of the distributed computation, which could otherwise lead to wrong decisions; in addition, the system checks and corrects wrong results.
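
    The falsification check described can be sketched as redundant task assignment with majority voting (a common technique for untrusted volunteer nodes; the agents and tasks below are hypothetical stand-ins, not the authors' algorithm).

```python
from collections import Counter

def run_distributed(tasks, agents, redundancy=3):
    """Run each task on `redundancy` agents and keep the majority answer,
    so a minority of faulty or malicious agents cannot corrupt a result."""
    results = {}
    for t_idx, task in enumerate(tasks):
        answers = []
        for r in range(redundancy):
            agent = agents[(t_idx + r) % len(agents)]  # round-robin placement
            answers.append(agent(task))
        winner, _ = Counter(answers).most_common(1)[0]
        results[t_idx] = winner
    return results

honest = lambda x: x * x
faulty = lambda x: x * x + 1          # falsifies every result it returns
agents = [honest, honest, faulty, honest]

checked = run_distributed([2, 3, 4, 5], agents)
# every task still gets the correct square despite the faulty agent
```

    The redundancy factor trades throughput for vitality: each extra replica consumes compute but raises the number of simultaneously faulty agents the vote can tolerate.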

  12. Development of a web service for analysis in a distributed network.

    PubMed

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
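
    The core idea, aggregating per-site sufficient statistics rather than patient records, can be sketched in a few lines (a toy re-implementation, not GLORE's actual code): each site returns only the gradient and Hessian of its local logistic log-likelihood, and the centre runs Newton's method on their sums, which yields exactly the estimates a pooled fit would give.

```python
import math

def local_stats(beta, rows):
    """What a site shares: gradient and Hessian of its local logistic
    log-likelihood. No patient-level rows leave the site."""
    g = [0.0, 0.0]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for x, y in rows:
        f = (1.0, x)                                  # intercept + one covariate
        p = 1.0 / (1.0 + math.exp(-(beta[0] * f[0] + beta[1] * f[1])))
        for i in range(2):
            g[i] += (y - p) * f[i]
            for j in range(2):
                H[i][j] -= p * (1.0 - p) * f[i] * f[j]
    return g, H

def fit(sites, iters=25):
    """Newton-Raphson on the summed site statistics."""
    beta = [0.0, 0.0]
    for _ in range(iters):
        G = [0.0, 0.0]
        Hs = [[0.0, 0.0], [0.0, 0.0]]
        for rows in sites:
            g, H = local_stats(beta, rows)
            G = [a + b for a, b in zip(G, g)]
            Hs = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(Hs, H)]
        det = Hs[0][0] * Hs[1][1] - Hs[0][1] * Hs[1][0]
        # Newton step beta <- beta - H^-1 g, 2x2 solve by Cramer's rule
        beta = [beta[0] - (Hs[1][1] * G[0] - Hs[0][1] * G[1]) / det,
                beta[1] - (Hs[0][0] * G[1] - Hs[1][0] * G[0]) / det]
    return beta

site_a = [(0, 0), (1, 0), (2, 1), (3, 0)]   # hypothetical (covariate, outcome) rows
site_b = [(1, 1), (2, 0), (3, 1), (4, 1)]
distributed = fit([site_a, site_b])
pooled = fit([site_a + site_b])             # same estimates, data pooled centrally
```

    Because the pooled gradient and Hessian are simply the sums of the site-level ones, the distributed fit is mathematically identical to the centralized fit; only aggregate statistics cross institutional boundaries.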

  13. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. 
Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586

  14. Contents of the NASA ocean data system archive, version 11-90

    NASA Technical Reports Server (NTRS)

    Smith, Elizabeth A. (Editor); Lassanyi, Ruby A. (Editor)

    1990-01-01

    The National Aeronautics and Space Administration (NASA) Ocean Data System (NODS) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea-surface height, surface-wind vector, sea-surface temperature, atmospheric liquid water, and surface pigment concentration. NODS will become the Data Archive and Distribution Service of the JPL Distributed Active Archive Center for the Earth Observing System Data and Information System (EOSDIS) and will be the United States distribution site for Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  15. Proposal for optimal placement platform of bikes using queueing networks.

    PubMed

    Mizuno, Shinya; Iwamoto, Shogo; Seki, Mutsumi; Yamaki, Naokazu

    2016-01-01

    In recent social experiments, rental motorbikes and rental bicycles have been placed at nodes, and environments where users can ride these bikes have been improved. When people borrow bikes, they return them to nearby nodes. Experiments have been conducted using the models of Hamachari of Yokohama, the Niigata Rental Cycle, and Bicing. From these experiments, however, the effectiveness of distributing bikes was unclear, and many models were discontinued midway. Thus, we need to consider whether these models effectively represent the distribution system. We therefore construct a model for arranging the nodes that distribute bikes using a queueing network. To adopt realistic values in our model, we use the Google Maps application program interface, which readily provides distances and transit times between nodes in various places in the world. Moreover, we apply a population distribution to a gravity model and compute the effective transition probabilities for the queueing network. If the arrangement of the nodes and the number of bikes at each node are known, we can precisely design the system. We illustrate our system using convenience stores as nodes and optimize the node configuration. As a result, we can simultaneously optimize the number of nodes, the node locations, and the number of bikes at each node, and we can construct a basis for a rental cycle business using our system.
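
    The gravity-model step can be sketched as follows (toy populations and distances, not the paper's data): flows T_ij proportional to P_i * P_j / d_ij^2 are normalized per row to give the transition probabilities of the queueing network, and the chain's stationary distribution then indicates the long-run share of bikes expected at each node.

```python
def gravity_transitions(pop, dist):
    """Row-normalized gravity flows: T_ij proportional to P_i * P_j / d_ij^2."""
    n = len(pop)
    P = []
    for i in range(n):
        flows = [0.0 if i == j else pop[i] * pop[j] / dist[i][j] ** 2
                 for j in range(n)]
        s = sum(flows)
        P.append([f / s for f in flows])
    return P

def stationary(P, iters=500):
    """Power iteration for the fixed point pi = pi * P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

pop = [900, 400, 700]                                   # hypothetical node populations
dist = [[0, 2.0, 4.0], [2.0, 0, 1.5], [4.0, 1.5, 0]]    # km between nodes
P = gravity_transitions(pop, dist)
pi = stationary(P)   # long-run share of bikes expected at each node
```

    In the real system the distances would come from the Google Maps API rather than a hand-written matrix, and the stationary shares would be fed into the node/bike-count optimization.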

  16. Challenges of Using CSCL in Open Distributed Learning.

    ERIC Educational Resources Information Center

    Nilsen, Anders Grov; Instefjord, Elen J.

    As a compulsory part of the study in Pedagogical Information Science at the University of Bergen and Stord/Haugesund College (Norway) during the spring term of 1999, students participated in a distributed group activity that provided experience on distributed collaboration and use of online groupware systems. The group collaboration process was…

  17. A self-organizing neural network for job scheduling in distributed systems

    NASA Astrophysics Data System (ADS)

    Newman, Harvey B.; Legrand, Iosif C.

    2001-08-01

    The aim of this work is to describe a possible approach to optimizing job scheduling in large distributed systems, based on a self-organizing neural network. This dynamic scheduling system should be seen as adaptive middle-layer software, aware of currently available resources and making scheduling decisions using "past experience." It aims to optimize job-specific parameters as well as resource utilization. The scheduling system is able to dynamically learn and cluster information in a large-dimensional parameter space and at the same time to explore new regions of that space. This self-organizing scheduling system may offer a solution for the effective use of resources by off-line data-processing jobs in future HEP experiments.
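
    A minimal self-organizing map of the kind alluded to can be sketched as follows (an illustration, not the authors' middleware): jobs described by hypothetical (CPU demand, I/O demand) parameters are clustered onto a small line of prototype units, which a scheduler could then map to resource classes.

```python
import math

def train_som(data, n_units=4, epochs=100, lr0=0.5, radius0=1.5):
    """1-D self-organizing map: each unit is a prototype in parameter space.
    Deterministic initialization along the diagonal of the unit square."""
    w = [[(k + 1) / (n_units + 1)] * 2 for k in range(n_units)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)             # decaying learning rate
        radius = max(radius0 * (1.0 - epoch / epochs), 0.5)
        for x in data:
            # best-matching unit = nearest prototype
            bmu = min(range(n_units),
                      key=lambda k: sum((a - b) ** 2 for a, b in zip(w[k], x)))
            for k in range(n_units):
                h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
                w[k] = [a + lr * h * (b - a) for a, b in zip(w[k], x)]
    return w

def quant_error(w, data):
    """Mean squared distance from each sample to its nearest prototype."""
    return sum(min(sum((a - b) ** 2 for a, b in zip(unit, x)) for unit in w)
               for x in data) / len(data)

# hypothetical job profiles (cpu_demand, io_demand): two workload families
jobs = [(0.1, 0.9), (0.15, 0.85), (0.2, 0.8),
        (0.9, 0.1), (0.85, 0.15), (0.8, 0.2)]
before = quant_error([[(k + 1) / 5] * 2 for k in range(4)], jobs)
som = train_som(jobs)
after = quant_error(som, jobs)   # prototypes have moved onto the job clusters
```

    The shrinking neighbourhood lets early training organize the map globally while late training specializes each unit, which is the "learn and cluster, yet keep exploring" behaviour the abstract describes.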

  18. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
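
    The atomic distributed commitment that Raid's servers implement is classically realized by two-phase commit; a minimal single-process sketch of that protocol (not Raid's actual code) looks like this.

```python
class Server:
    """Stand-in for a per-site transaction server."""
    def __init__(self, can_commit=True):
        self.can_commit = can_commit
        self.state = "active"

    def prepare(self):
        # Phase 1: vote yes only if the local transaction can be made durable
        self.state = "prepared" if self.can_commit else "refused"
        return self.can_commit

    def finish(self, decision):
        # Phase 2: apply the coordinator's global decision
        self.state = decision

def two_phase_commit(participants):
    """Commit only if every participant votes yes; otherwise abort everywhere."""
    votes = [p.prepare() for p in participants]
    decision = "commit" if all(votes) else "abort"
    for p in participants:
        p.finish(decision)
    return decision

ok = two_phase_commit([Server(), Server(), Server()])
servers = [Server(), Server(can_commit=False), Server()]
vetoed = two_phase_commit(servers)   # one refusal aborts the transaction at every site
```

    Real systems must also log each phase to stable storage so that sites can recover a consistent decision after a crash, which is exactly the kind of failure behaviour the Raid experiments measure.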

  19. Apollo experience report: Command and service module electrical power distribution subsystem

    NASA Technical Reports Server (NTRS)

    Munford, R. E.; Hendrix, B.

    1974-01-01

    A review of the design philosophy and development of the Apollo command and service modules electrical power distribution subsystem, a brief history of the evolution of the total system, and some of the more significant components within the system are discussed. The electrical power distribution primarily consisted of individual control units, interconnecting units, and associated protective devices. Because each unit within the system operated more or less independently of other units, the discussion of the subsystem proceeds generally in descending order of complexity; the discussion begins with the total system, progresses to the individual units of the system, and concludes with the components within the units.

  20. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  1. Software Management System

    NASA Technical Reports Server (NTRS)

    1994-01-01

    A software management system, originally developed for Goddard Space Flight Center (GSFC) by Century Computing, Inc., has evolved from a menu and command oriented system to a state-of-the-art user interface development system supporting high resolution graphics workstations. Transportable Applications Environment (TAE) was initially distributed through COSMIC and backed by a TAE support office at GSFC. In 1993, Century Computing assumed the support and distribution functions and began marketing TAE Plus, the system's latest version. The software is easy to use and does not require programming experience.

  2. Distribution and Recovery of Crude Oil in Various Types of Porous Media and Heterogeneity Configurations

    NASA Astrophysics Data System (ADS)

    Tick, G. R.; Ghosh, J.; Greenberg, R. R.; Akyol, N. H.

    2015-12-01

    A series of pore-scale experiments were conducted to understand the interfacial processes contributing to the removal of crude oil from various porous media during surfactant-induced remediation. Effects of physical heterogeneity (i.e. media uniformity) and carbonate soil content on oil recovery and distribution were evaluated through pore scale quantification techniques. Additionally, experiments were conducted to evaluate impacts of tetrachloroethene (PCE) content on crude oil distribution and recovery under these same conditions. Synchrotron X-ray microtomography (SXM) was used to obtain high-resolution images of the two-fluid-phase oil/water system, and quantify temporal changes in oil blob distribution, blob morphology, and blob surface area before and after sequential surfactant flooding events. The reduction of interfacial tension in conjunction with the sufficient increase in viscous forces as a result of surfactant flushing was likely responsible for mobilization and recovery of lighter fractions of crude oil. Corresponding increases in viscous forces were insufficient to initiate and maintain the displacement of the heavy crude oil in more homogeneous porous media systems during surfactant flushing. Interestingly, higher relative recoveries of heavy oil fractions were observed within more heterogeneous porous media indicating that wettability may be responsible for controlling mobilization in these systems. Compared to the "pure" crude oil experiments, preliminary results show that crude oil with PCE produced variability in oil distribution and recovery before and after each surfactant-flooding event. Such effects were likely influenced by viscosity and interfacial tension modifications associated with the crude-oil/solvent mixed systems.

  3. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on a powerful RISC system, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point to point, rather than bussed, communication will be required. Developments in this direction are described.
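
    Event parallelism works because events are independent: a farm runs the same reconstruction on different events with no communication between nodes. The sketch below illustrates the pattern with a thread pool standing in for the farm nodes and a hypothetical per-event reconstruction function.

```python
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event):
    """Stand-in for per-event reconstruction. Events never interact, which is
    what makes farm-style parallelism trivial for HEP workloads."""
    hits = event["hits"]
    return {"id": event["id"], "energy": sum(hits), "n_hits": len(hits)}

events = [{"id": i, "hits": [0.5 * i, 1.0, 2.0]} for i in range(100)]

# farm of 8 "nodes"; Executor.map preserves input order in its results
with ThreadPoolExecutor(max_workers=8) as farm:
    results = list(farm.map(reconstruct, events))

serial = [reconstruct(e) for e in events]   # one "node" gives the same answers
```

    Because there is no cross-event state, throughput scales by adding nodes, which is why bussed communication mattered less to the ACP farms than per-node cost; the paper's point is that tighter point-to-point communication only becomes necessary when work within an event is split up.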

  4. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  5. Fermilab Muon Campus g-2 Cryogenic Distribution Remote Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pei, L.; Theilacker, J.; Klebaner, A.

    2015-11-05

    The Muon Campus (MC) is able to measure the muon g-2 with high precision and compare its value to the theoretical prediction. The MC has four 300 kW screw compressors and four liquid helium refrigerators. The centerpiece of the Muon g-2 experiment at Fermilab is a large, 50-foot-diameter superconducting muon storage ring. This one-of-a-kind ring, made of steel, aluminum and superconducting wire, was built for the previous g-2 experiment at Brookhaven. Because the subsystems must be far from one another and are placed in distant locations, the Siemens Process Control System PCS7-400, Automation Direct DL205 and DL05 PLCs, Synoptic, and the Fermilab ACNET HMI are the ideal choices for the MC g-2 cryogenic distribution real-time and on-line remote control system. This paper presents a method that has been used successfully by many Fermilab cryogenic distribution real-time and on-line remote control systems.

  6. New directions in the CernVM file system

    NASA Astrophysics Data System (ADS)

    Blomer, Jakob; Buncic, Predrag; Ganis, Gerardo; Hardi, Nikola; Meusel, Rene; Popescu, Radu

    2017-10-01

    The CernVM File System today is commonly used to host and distribute application software stacks. In addition to this core task, recent developments expand the scope of the file system into two new areas. Firstly, CernVM-FS emerges as a good match for container engines to distribute the container image contents. Compared to native container image distribution (e.g. through the “Docker registry”), CernVM-FS massively reduces the network traffic for image distribution. This has been shown, for instance, by a prototype integration of CernVM-FS into Mesos developed by Mesosphere, Inc. We present a path for a smooth integration of CernVM-FS and Docker. Secondly, CernVM-FS recently raised new interest as an option for the distribution of experiment conditions data. Here, the focus is on the improved versioning capabilities of CernVM-FS, which allow the conditions data of a run period to be linked to the state of a CernVM-FS repository. Lastly, CernVM-FS has been extended to provide a name space for physics data for the LIGO and CMS collaborations. Searching through a data namespace is often done by a central, experiment-specific database service. A name space on CernVM-FS can particularly benefit from an existing, scalable infrastructure and from the POSIX file system interface.

  7. An Investigation into the Potential Benefits of Distributed Electric Propulsion on Small UAVs at Low Reynolds Numbers

    NASA Astrophysics Data System (ADS)

    Baris, Engin

    Distributed electric propulsion systems benefit from the inherent scale independence of electric propulsion. This property allows the designer to place multiple small electric motors along the wing of an aircraft instead of using one or several internal combustion engines with gearboxes or other power-train components. Aircraft operating at low Reynolds numbers are ideal candidates for benefiting from the increased local flow velocities provided by distributed propulsion systems. In this study, a distributed electric propulsion system made up of eight motor/propeller units was integrated into the leading edge of a small fixed wing-body model to investigate the expected aerodynamic improvements available to small UAVs operating at low Reynolds numbers. Wind tunnel tests featuring a Design of Experiments (DOE) methodology were used for aerodynamic characterization. Experiments were performed in four modes: all propellers on, wing-tip propellers alone on, wing alone, and the two inboard propellers alone on. In addition, the all-propellers-on, wing-alone, and a single-tractor configuration were analyzed using VSPAERO, a vortex lattice code, to make comparisons between these different configurations. Results show that the distributed propulsion system offers higher normal force, endurance, and range, despite a potential weight penalty.

  8. The Nimrod computational workbench: a case study in desktop metacomputing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramson, D.; Sosic, R.; Foster, I.

    The coordinated use of geographically distributed computers, or metacomputing, can in principle provide more accessible and cost-effective supercomputing than conventional high-performance systems. However, we lack evidence that metacomputing systems can be made easily usable, or that there exist large numbers of applications able to exploit metacomputing resources. In this paper, we present work that addresses both these concerns. The basis for this work is a system called Nimrod that provides a desktop problem-solving environment for parametric experiments. We describe how Nimrod has been extended to support the scheduling of computational resources located in a wide-area environment, and report on an experiment in which Nimrod was used to schedule a large parametric study across the Australian Internet. The experiment provided both new scientific results and insights into Nimrod capabilities. We relate the results of this experiment to lessons learned from the I-WAY distributed computing experiment, and draw conclusions as to how Nimrod and I-WAY-like computing environments should be developed to support desktop metacomputing.

  9. Monitoring and control requirement definition study for Dispersed Storage and Generation (DSG), volume 1

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Twenty-four functional requirements were prepared under six categories and serve to indicate how to integrate dispersed storage and generation (DSG) systems with the distribution and other portions of the electric utility system. Results indicate that there are no fundamental technical obstacles to prevent the connection of dispersed storage and generation to the distribution system. However, a communication system of some sophistication is required to integrate the distribution system and the dispersed generation sources for effective control. The wide size range of generators, from 10 kW to 30 MW, means that a variety of remote monitoring and control approaches may be required. Increased effort is required to develop demonstration equipment to perform the DSG monitoring and control functions and to acquire experience with this equipment in the utility distribution environment.

  10. Capillary Discharge Thruster Experiments and Modeling (Briefing Charts)

    DTIC Science & Technology

    2016-06-01

    Briefing charts by R. S. Martin (ERC Inc.), In-Space Propulsion Branch, Air Force Research Laboratory, Edwards Air Force Base, CA, June 2016. Topics covered include electric propulsion models and experiments, spacecraft-propulsion-relevant plasmas (from Hall thrusters to plumes and fluxes on components), complex reaction physics, and the FRC chamber environment. DISTRIBUTION A: Approved for public release; distribution unlimited; PA# 16279.

  11. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in cooperation with NASA in earlier research, the project staff is building a kernel for a multiple-processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built, and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load balancing, routing, software fault tolerance, distributed database design, and real-time processing.

  12. Modeling a space-based quantum link that includes an adaptive optics system

    NASA Astrophysics Data System (ADS)

    Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.

    2017-10-01

    Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses consist of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings, which can then be converted to keys for encryption systems. When these keys are used with symmetric encryption techniques such as the one-time pad, this method of key generation and encryption is resistant to future advances in quantum computing, which will significantly degrade the effectiveness of current asymmetric key-sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments and, finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground, including the effects of turbulence on the transmitted pulse, is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth-orbit-to-ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system, together with the future work necessary to show this impact, is described.
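
    On the symmetric side of the scheme described above, the one-time pad is simply a byte-wise XOR with a key as long as the message, so any source of shared random bits can drive it. A minimal sketch, with the key drawn locally as a stand-in for a QKD-generated bit string:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # One-time-pad encryption and decryption are the same XOR operation.
    return bytes(x ^ y for x, y in zip(a, b))

message = b"attack at dawn"
# In a real QKD link the shared key comes from sifted, error-corrected
# photon measurements; here we just draw random bytes as a stand-in.
key = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)  # XOR with the same key inverts the pad
```

Because the key is as long as the message and used only once, the ciphertext leaks no information about the plaintext, which is why a steady supply of fresh QKD key material matters.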

  13. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  14. Capacitors in Series: A Laboratory Activity to Promote Critical Thinking.

    ERIC Educational Resources Information Center

    Noll, Ellis D.; Kowalski, Ludwik

    1996-01-01

    Describes experiments designed to explore the distribution of potential difference between two uncharged capacitors when they are suddenly connected to a source of constant voltage. Enables students to explore the evolution of a system in which initial voltage distribution depends on capacitor values, and the final voltage distribution depends on…
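    The initial split the activity explores follows from charge conservation: the same charge Q lands on both uncharged series capacitors, so each voltage is Q/C and the split is inversely proportional to capacitance. A worked sketch with assumed example values (12 V across 1 µF and 2 µF, not values from the article):

```python
def series_voltages(v_source, c1, c2):
    """Initial voltage split across two uncharged capacitors in series.

    Equal charge Q flows onto both, with Q = V * C_series, so the
    capacitor with the smaller capacitance takes the larger voltage."""
    c_series = c1 * c2 / (c1 + c2)
    q = v_source * c_series
    return q / c1, q / c2

v1, v2 = series_voltages(12.0, 1e-6, 2e-6)
# The 1 uF capacitor takes twice the voltage of the 2 uF one (8 V vs 4 V),
# and the two voltages sum to the source voltage.
```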

  15. First Experiences Using XACML for Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lorch, Marcus; Proctor, Seth; Lepro, Rebekah; Kafura, Dennis; Shah, Sumit

    2003-01-01

    Authorization systems today are increasingly complex. They span domains of administration, rely on many different authentication sources, and manage permissions that can be as complex as the system itself. Worse still, while there are many standards that define authentication mechanisms, the standards that address authorization are less well defined and tend to work only within homogeneous systems. This paper presents XACML, a standard access control language, as one component of a distributed and inter-operable authorization framework. Several emerging systems which incorporate XACML are discussed. These discussions illustrate how authorization can be deployed in distributed, decentralized systems. Finally, some new and future topics are presented to show where this work is heading and how it will help connect the general components of an authorization system.

  16. Laser diffraction particle sizing in STRESS

    NASA Astrophysics Data System (ADS)

    Agrawal, Y. C.; Pottsmith, H. C.

    1994-08-01

    An autonomous instrument system for measuring particle size spectra in the sea is described. The instrument records the small-angle scattering characteristics of the particulate ensemble present in the water, and the small-angle scattering distribution is inverted into size spectra. The discussion of the instrument is accompanied by a review of the information content of the data. It is noted that the inverse problem is sensitive to the forward model for light scattering employed in the construction of the matrix. The instrument system is validated using monodisperse polystyrene and NIST-standard distributions of glass spheres. Data from a long-term deployment on the California shelf during the field experiment Sediment Transport Events on Shelves and Slopes (STRESS) are included. The size distribution in STRESS, measured at a fixed height above bed of 1.2 m, showed significant variability over time; in particular, the volume distribution sometimes changed from mono-modal to bi-modal during the experiment. The particle-size distribution data are combined with friction velocity measurements in the current boundary layer to produce a size-dependent estimate of the suspended mass at 10 cm above bottom. It is argued that these concentrations represent the reference concentration at the bed for the smaller size classes. The suspended mass at all sizes shows a strong correlation with wave variance. Using the size distribution, corrections to the optical transmissometry calibration factor are estimated for the duration of the experiment. The change in calibration at 1.2 m above bed (mab) is shown to have a standard error of 30% over the duration of the experiment, with a range of 1.8-0.8.
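
    The inversion described above solves a linear forward model: the detector signal is s = K n, where n holds the concentration in each size class and the matrix K encodes each class's small-angle scattering signature. A toy sketch with a random stand-in kernel (a real instrument uses a diffraction/Mie kernel plus regularization and a non-negativity constraint, all omitted here):

```python
import numpy as np

# Toy forward model: scattering signal s = K @ n, with n the concentration
# per size class and K an assumed stand-in kernel (NOT a real optical model).
rng = np.random.default_rng(0)
n_rings, n_sizes = 32, 8
K = np.abs(rng.normal(size=(n_rings, n_sizes)))

n_true = np.array([0.0, 0.5, 2.0, 1.0, 0.2, 0.0, 0.0, 0.0])
s = K @ n_true  # synthetic noiseless measurement

# Invert by ordinary least squares; with noiseless data and a full-rank
# kernel this recovers the size spectrum exactly.
n_est, *_ = np.linalg.lstsq(K, s, rcond=None)
```

The abstract's point about sensitivity to the forward model shows up here directly: any error in K biases every entry of the recovered spectrum.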

  17. MOLNs: A CLOUD PLATFORM FOR INTERACTIVE, REPRODUCIBLE, AND SCALABLE SPATIAL STOCHASTIC COMPUTATIONAL EXPERIMENTS IN SYSTEMS BIOLOGY USING PyURDME

    PubMed Central

    Drawert, Brian; Trogdon, Michael; Toor, Salman; Petzold, Linda; Hellander, Andreas

    2017-01-01

    Computational experiments using spatial stochastic simulations have led to important new biological insights, but they require specialized tools and a complex software stack, as well as large and scalable compute and data analysis resources due to the large computational cost associated with Monte Carlo computational workflows. The complexity of setting up and managing a large-scale distributed computation environment to support productive and reproducible modeling can be prohibitive for practitioners in systems biology. This results in a barrier to the adoption of spatial stochastic simulation tools, effectively limiting the type of biological questions addressed by quantitative modeling. In this paper, we present PyURDME, a new, user-friendly spatial modeling and simulation package, and MOLNs, a cloud computing appliance for distributed simulation of stochastic reaction-diffusion models. MOLNs is based on IPython and provides an interactive programming platform for development of sharable and reproducible distributed parallel computational experiments. PMID:28190948

  18. Decision Criteria for Distributed Versus Non-Distributed Information Systems in the Health Care Environment

    PubMed Central

    McGinnis, John W.

    1980-01-01

    The very same technological advances that support distributed systems have also dramatically increased the efficiency and capabilities of centralized systems, making it more complex for health care managers to select the “right” system architecture to meet their particular needs. How this selection can be made with a reasonable degree of managerial comfort is the focus of this paper. The approach advocated is based on experience in developing the Tri-Service Medical Information System (TRIMIS) program. Along with this, technical standards and configuration management procedures were developed that provided the necessary guidance to implement the selected architecture and to allow it to change in a controlled way over its life cycle.

  19. Network topology of an experimental futures exchange

    NASA Astrophysics Data System (ADS)

    Wang, S. C.; Tseng, J. J.; Tai, C. C.; Lai, K. H.; Wu, W. S.; Chen, S. H.; Li, S. P.

    2008-03-01

    Many systems of different nature exhibit scale free behaviors. Economic systems with power law distribution in the wealth are one of the examples. To better understand the working behind the complexity, we undertook an experiment recording the interactions between market participants. A Web server was setup to administer the exchange of futures contracts whose liquidation prices were coupled to event outcomes. After free registration, participants started trading to compete for the money prizes upon maturity of the futures contracts at the end of the experiment. The evolving `cash' flow network was reconstructed from the transactions between players. We show that the network topology is hierarchical, disassortative and small-world with a power law exponent of 1.02±0.09 in the degree distribution after an exponential decay correction. The small-world property emerged early in the experiment while the number of participants was still small. We also show power law-like distributions of the net incomes and inter-transaction time intervals. Big winners and losers are associated with high degree, high betweenness centrality, low clustering coefficient and low degree-correlation. We identify communities in the network as groups of the like-minded. The distribution of the community sizes is shown to be power-law distributed with an exponent of 1.19±0.16.
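
    The exponent estimate quoted above can be illustrated with a toy calculation: for a degree distribution of the form p(k) ~ k^(-gamma) * exp(-k/k0), dividing out the exponential decay leaves a pure power law whose log-log slope is -gamma. A sketch with assumed values (gamma = 1.02, k0 = 50; these are illustrative, not the paper's fitted parameters):

```python
import math

# Synthetic degree distribution: power law with exponential decay correction.
gamma_true, k0 = 1.02, 50.0
ks = range(1, 31)
pk = [k ** (-gamma_true) * math.exp(-k / k0) for k in ks]

# Divide out the exponential factor, then read gamma off the log-log slope.
corrected = [p * math.exp(k / k0) for k, p in zip(ks, pk)]
xs = [math.log(k) for k in ks]
ys = [math.log(c) for c in corrected]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
gamma_est = -slope  # recovers gamma exactly on this noiseless data
```

With real transaction data the fit is done on binned, noisy counts, so the paper's ±0.09 uncertainty comes from the regression residuals rather than from this clean algebra.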

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Blomer, J.

    Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
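
    The integrity guarantee both systems rely on can be sketched schematically: content is published with a secure hash, the hash is signed, and the client recomputes and verifies both, so tampering anywhere along the proxy chain is detected. This toy uses an HMAC as a stand-in for the X.509 public-key signatures the real systems use; the key and payload names are illustrative only:

```python
import hashlib
import hmac

signing_key = b"publisher-secret"  # stand-in for the publisher's private key

def publish(content: bytes):
    # Publisher side: hash the content, then sign the hash.
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return content, digest, signature

def verify(content: bytes, digest: str, signature: str) -> bool:
    # Client side: recompute the hash, then check the signature over it.
    if hashlib.sha256(content).hexdigest() != digest:
        return False  # content corrupted in transit or in a proxy cache
    expected = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

content, digest, sig = publish(b"conditions-data-v42")
ok = verify(content, digest, sig)
bad = verify(b"tampered", digest, sig)
```

Note that, as the abstract says, none of this hides the data: the caches see everything, and only authenticity and integrity are protected.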

  1. A precise clock distribution network for MRPC-based experiments

    NASA Astrophysics Data System (ADS)

    Wang, S.; Cao, P.; Shang, L.; An, Q.

    2016-06-01

    In high energy physics experiments, MRPC (Multi-gap Resistive Plate Chamber) detectors, which provide higher-resolution measurements for particle identification, have recently been widely used. However, the application of MRPC detectors leads to a series of challenges in electronics design given the large number of front-end electronics channels, especially for distributing the clock precisely. To address these challenges, this paper presents a universal clock-distribution scheme for MRPC-based experiments with the advantages of both precise clock distribution and global command synchronization. For precise clock distribution, the network is designed as a two-stage tree: the first stage is a point-to-multipoint, long-range, bidirectional distribution over optical channels; the second is a fan-out structure over copper links inside the readout crates. To guarantee the precision of the clock frequency and phase, the r-PTP (reduced Precision Time Protocol) and DDMTD (digital Dual Mixer Time Difference) methods are used for frequency synthesis and for phase measurement and adjustment, implemented in real time in an FPGA (Field Programmable Gate Array). In addition, to synchronize global command execution, synchronization signals are encoded with the clock for transmission over this distribution network. With encoding/decoding and clock data recovery, signals such as global triggers and system control commands can be distributed to all front-end channels synchronously, which greatly simplifies the system design. The experimental results show that both the clock jitter (RMS) and the clock skew can be kept below 100 ps.
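
    The DDMTD method mentioned above measures phase by sampling each clock with a helper clock at f·N/(N+1), which stretches phase offsets in time by a factor of N+1. A back-of-the-envelope sketch (f and N are assumed illustrative values, not taken from the paper) shows why sub-100 ps resolution is plausible with an ordinary counter:

```python
# DDMTD principle: beating a clock of frequency f against a helper clock at
# f * N/(N+1) produces a beat at f/(N+1), magnifying phase offsets by N+1.
f = 125e6          # system clock frequency, Hz (assumed for illustration)
N = 16383          # helper-PLL ratio (assumed)

f_beat = f / (N + 1)
magnification = f / f_beat       # time-domain stretch factor, = N + 1
t_sample = 1 / f                 # one 8 ns counter tick at the system clock
resolution = t_sample / magnification
# resolution is about 0.5 ps: a plain counter on the beat signal easily
# resolves offsets far below the <100 ps jitter/skew reported above.
```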

  2. Simulating Sustainment for an Unmanned Logistics System Concept of Operation in Support of Distributed Operations

    DTIC Science & Technology

    2017-06-01

    A designed experiment to model and explore a ship-to-shore logistics process supporting dispersed units via three types of ULSs, which vary primarily in... Keywords: systems, simulation, discrete event simulation, design of experiments, data analysis, simplekit, nearly orthogonal and balanced designs.

  3. Converting Existing Copper Wire Firing System to a Fiber Optically Controlled Firing System for Electromagnetic Pulsed Power Experiments

    DTIC Science & Technology

    2017-12-19

    By Robert Borys Jr and Colby Adams, Belcamp, MD. Approved for public release; distribution is unlimited.

  4. Kristin Munch | NREL

    Science.gov Websites

    Areas of expertise: scientific data management, database and data systems design, database clusters, storage systems integration, and distributed data analytics. She has used her experience in laboratory data management systems... Related presentation: Information Management System, Materials Research Society Fall Meeting (2013), Photovoltaics Informatics.

  5. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture in which individual modules are made self-scheduling through a message-based communication system used to request input data from the module that is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments was run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant, which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. That report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.
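
    The duplication technique described above (timing the same module update with and without remote messaging, then subtracting) can be sketched as follows; the fixed sleep is an arbitrary stand-in for CORBA/DCOM marshalling and network cost, and all names are illustrative:

```python
import time

def module_update(n=50_000):
    # Stand-in for one iteration of a simulation module's computation.
    return sum(i * i for i in range(n))

def remote_call(fn, *args):
    # Stand-in for a CORBA/DCOM round trip; the real overhead comes from
    # marshalling and the network, simulated here as a fixed 1 ms delay.
    time.sleep(0.001)
    return fn(*args)

iters = 20

t0 = time.perf_counter()
for _ in range(iters):
    module_update()          # pure computation
compute = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(iters):
    remote_call(module_update)  # computation plus messaging
total = time.perf_counter() - t0

# Per-call messaging overhead is the difference between the duplicated runs.
overhead_per_call = (total - compute) / iters
```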

  6. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Some people call them generically Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those systems dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework the monitoring of the computer nodes, the communications network, and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.

  7. [Analysis of foreign experience of usage of automation systems of medication distribution in prevention and treatment facilities].

    PubMed

    Miroshnichenko, Iu V; Umarov, S Z

    2012-12-01

    One way to increase the effectiveness and safety of patients' medication supply is the use of automated distribution systems. These substantially increase the efficiency and safety of medication supply, achieve significant savings in the material and financial resources devoted to medication assistance, and make it possible to systematically improve its accessibility and quality.

  8. The planning and implementation of a demand-side management/distribution automation system at Taiwan Power Company

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, S.S.; Chen, Yun-Wu

    1994-12-31

    This paper describes Taipower's experience of DSM/DAS development. For the past 5 years, the demand for electricity has maintained a high annual growth rate of 8.45% due to economic prosperity in Taiwan. As environmental protection consciousness has recently made it difficult for Taipower to develop and construct new power plants, substations, and transmission and distribution lines, and since our power grid is an independent system, we need to consider how to apply DSM to manage the load problems. Since 1984, Taipower has established two pilot systems, and these systems performed the fault detection and isolation functions well for Distribution Automation. With the rapid development of computer, communication, and control technology, the concept of the DAS has gradually been implemented in real cases. Taipower organized an engineering task group to study DAS several years ago, and based on the operating experience of the existing systems, Taipower is now planning to launch a new DAS project for the Tai-Chung area. According to Taipower requirements, the DAS will have the functions of feeder automation, automatic meter reading, load management, and distribution system analysis.

  9. Acquisition of He3 Cryostat Insert for Experiments on Topological Insulators

    DTIC Science & Technology

    2016-02-03

    The award enabled the PI to acquire a complete cryogenic system with a 9-Tesla superconducting magnet. The system facilitated transport experiments on topological insulators and Dirac and Weyl semimetals, which resulted in several notable achievements. Approved for public release; distribution unlimited.

  10. Learning from Experience? Evidence on the Impact and Distribution of Teacher Experience and the Implications for Teacher Policy

    ERIC Educational Resources Information Center

    Rice, Jennifer King

    2013-01-01

    Teacher experience has long been a central pillar of teacher workforce policies in U.S. school systems. The underlying assumption behind many of these policies is that experience promotes effectiveness, but is this really the case? What does existing evidence tell us about how, why, and for whom teacher experience matters? This policy brief…

  11. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    PubMed

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this respect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By carefully adapting and encapsulating the capabilities of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter, and transport priority, as well as an experiment on real robots, validate the effectiveness of this work.
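
    A loosely coupled publish-subscribe model with a time-related constraint can be sketched in a few lines. This is an in-process toy, not the DDS API or the micROS-drt implementation; the freshness deadline loosely mirrors DDS lifespan-style QoS:

```python
import time
from collections import defaultdict

class Bus:
    """Minimal in-process publish-subscribe sketch: subscribers register
    per topic, optionally with a maximum acceptable sample age."""

    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback, max_age_s=None):
        self.subs[topic].append((callback, max_age_s))

    def publish(self, topic, data, stamp=None):
        stamp = time.monotonic() if stamp is None else stamp
        for callback, max_age_s in self.subs[topic]:
            age = time.monotonic() - stamp
            if max_age_s is None or age <= max_age_s:
                callback(data)  # stale samples are dropped, not delivered

bus = Bus()
seen = []
bus.subscribe("/pose", seen.append, max_age_s=0.1)
bus.publish("/pose", (1.0, 2.0))                               # fresh: delivered
bus.publish("/pose", (9.9, 9.9), stamp=time.monotonic() - 1.0)  # stale: dropped
```

The decoupling is the point: publishers never name their subscribers, which is what lets a middleware like DDS scale delivery across many robots.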

  12. Principal Investigator Microgravity Services Role in ISS Acceleration Data Distribution

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin

    1999-01-01

    Measurement of the microgravity acceleration environment on the International Space Station will be accomplished by two accelerometer systems. The Microgravity Acceleration Measurement System will record the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime comprised of vehicle, crew, and equipment disturbances will be accomplished by the Space Acceleration Measurement System-II. Due to the dynamic nature of the microgravity environment and its potential to influence sensitive experiments, Principal Investigators require distribution of microgravity acceleration in a timely and straightforward fashion. In addition to this timely distribution of the data, long term access to International Space Station microgravity environment acceleration data is required. The NASA Glenn Research Center's Principal Investigator Microgravity Services project will provide the means for real-time and post experiment distribution of microgravity acceleration data to microgravity science Principal Investigators. Real-time distribution of microgravity environment acceleration data will be accomplished via the World Wide Web. Data packets from the Microgravity Acceleration Measurement System and the Space Acceleration Measurement System-II will be routed from onboard the International Space Station to the NASA Glenn Research Center's Telescience Support Center. Principal Investigator Microgravity Services' ground support equipment located at the Telescience Support Center will be capable of generating a standard suite of acceleration data displays, including various time domain and frequency domain options. These data displays will be updated in real-time and will periodically update images available via the Principal Investigator Microgravity Services web page.
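
    A frequency-domain display of the kind described above amounts to computing an amplitude spectrum of the accelerometer time series. A sketch with a synthetic vibratory disturbance; the 500 Hz sample rate and 60 Hz tone are assumed illustrative values, not SAMS-II parameters:

```python
import numpy as np

fs = 500.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)                # 2 s of samples
accel = 1e-3 * np.sin(2 * np.pi * 60.0 * t)  # synthetic 60 Hz disturbance, in g

# Single-sided amplitude spectrum: the vibratory line stands out as a peak.
spectrum = np.abs(np.fft.rfft(accel)) / len(accel) * 2
freqs = np.fft.rfftfreq(len(accel), 1 / fs)
peak_hz = freqs[np.argmax(spectrum)]  # frequency of the dominant disturbance
```

In a real display this would be updated continuously as packets arrive from the station, with the quasi-steady (MAMS) and vibratory (SAMS-II) regimes plotted separately.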

  13. Phase 1 of the First Solar Small Power System Experiment (experimental System No. 1). Volume 1: Technical Studies for Solar Point-focusing, Distributed Collector System, with Energy Conversion at the Collector, Category C

    NASA Technical Reports Server (NTRS)

    Clark, T. B. (Editor)

    1979-01-01

    The technical and economic feasibility of a solar electric power plant for a small community is evaluated and specific system designs for development and demonstration are selected. All systems investigated are defined as point focusing, distributed receiver concepts, with energy conversion at the collector. The preferred system is comprised of multiple parabolic dish concentrators employing Stirling cycle engines for power conversion. The engine, AC generator, cavity receiver, and integral sodium pool boiler/heat transport system are combined in a single package and mounted at the focus of each concentrator. The output of each concentrator is collected by a conventional electrical distribution system which permits grid-connected or stand-alone operation, depending on the storage system selected.

  14. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  15. The Archive Solution for Distributed Workflow Management Agents of the CMS Experiment at LHC

    DOE PAGES

    Kuznetsov, Valentin; Fischer, Nils Leif; Guo, Yuyi

    2018-03-19

    The CMS experiment at the CERN LHC developed the Workflow Management Archive system to persistently store unstructured framework job report documents produced by distributed workflow management agents. In this paper we present its architecture, implementation, deployment, and integration with the CMS and CERN computing infrastructures, such as central HDFS and the Hadoop Spark cluster. The system leverages modern technologies such as a document-oriented database and the Hadoop eco-system to provide the necessary flexibility to reliably process, store, and aggregate O(1M) documents on a daily basis. We describe the data transformation, the short- and long-term storage layers, and the query language, along with the aggregation pipeline developed to visualize various performance metrics to assist CMS data operators in assessing the performance of the CMS computing system.

  16. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in the cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes, together with a sub-channel code and a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code through a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  17. Complementary Density Measurements for the 200W Busek Hall Thruster (PREPRINT)

    DTIC Science & Technology

    2006-07-12

    Complementary density measurements for the 200 W Busek Hall thruster are presented. Both a Faraday probe and a microwave interferometry system are used to examine the density distribution of the thruster plasma at regular spatial intervals. Both experiments are performed in situ under the same conditions, and the resulting density distributions obtained from each are presented. Advantages and uncertainties of both methods are discussed, as well as how comparison between the two data sets can account for the uncertainties of each method.

  18. Service Discovery Oriented Management System Construction Method

    NASA Astrophysics Data System (ADS)

    Li, Huawei; Ren, Ying

    2017-10-01

    To address the lack of a uniform method for designing service quality management systems in large-scale, complex service environments, this paper proposes a construction method for a distributed, service-discovery-oriented management system. Three measurement functions are proposed to compute nearest-neighbor user similarity at different levels. In view of the low efficiency of current service quality management systems, three solutions are proposed to improve the efficiency of the system. Finally, the key technologies of a distributed service quality management system based on service discovery are summarized through quantitative experiments with factor addition and subtraction.

  19. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.

  20. Phase-Reference-Free Experiment of Measurement-Device-Independent Quantum Key Distribution

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Song, Xiao-Tian; Yin, Zhen-Qiang; Wang, Shuang; Chen, Wei; Zhang, Chun-Mei; Guo, Guang-Can; Han, Zheng-Fu

    2015-10-01

    Measurement-device-independent quantum key distribution (MDI QKD) is a substantial step toward practical information-theoretic security for key sharing between remote legitimate users (Alice and Bob). As with other standard device-dependent quantum key distribution protocols, such as BB84, MDI QKD assumes that the reference frames have been shared between Alice and Bob. In practice, a nontrivial alignment procedure is often necessary, which requires system resources and may significantly reduce the secure key generation rate. Here, we propose a phase-coding reference-frame-independent MDI QKD scheme that requires no phase alignment between the interferometers of two distant legitimate parties. As a demonstration, a proof-of-principle experiment using Faraday-Michelson interferometers is presented. The experimental system worked at 1 MHz, and an average secure key rate of 8.309 bps was obtained at a fiber length of 20 km between Alice and Bob. The system can maintain a positive key generation rate without phase compensation under normal conditions. The results exhibit the feasibility of our system for use in mature MDI QKD devices and its value for network scenarios.

  1. Fermilab DART run control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1996-02-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. The authors discuss the unique and interesting concepts of the run control and some of the experiences in developing it. They also give a brief update and status of the whole DART system.

  2. Fermilab DART run control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-05-01

    DART is the high speed, Unix based data acquisition system being developed by Fermilab in collaboration with seven High Energy Physics Experiments. This paper describes DART run control, which has been developed over the past year and is a flexible, distributed, extensible system for the control and monitoring of the data acquisition systems. We discuss the unique and interesting concepts of the run control and some of our experiences in developing it. We also give a brief update and status of the whole DART system.

  3. Distributed Storage Algorithm for Geospatial Image Data Based on Data Access Patterns.

    PubMed

    Pan, Shaoming; Li, Yongkai; Xu, Zhengquan; Chong, Yanwen

    2015-01-01

    Declustering techniques are widely used in distributed environments to reduce query response time through parallel I/O by splitting large files into several small blocks and then distributing those blocks among multiple storage nodes. However, many small geospatial image data files cannot be further split for distributed storage. In this paper, we propose a complete theoretical system for the distributed storage of small geospatial image data files based on mining the access patterns of geospatial image data from their historical access log information. First, an algorithm is developed to construct an access correlation matrix based on analysis of the log information, which reveals the patterns of access to the geospatial image data. Then, a practical heuristic algorithm is developed to determine a reasonable solution based on the access correlation matrix. Finally, a number of comparative experiments are presented, demonstrating that our algorithm achieves a total parallel access probability approximately 10-15% higher than that of other algorithms and that performance can be further improved by more than 20% by simultaneously applying a copy storage strategy. These experiments show that the algorithm can be applied in distributed environments to help realize parallel I/O and thereby improve system performance.
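    The two-stage scheme this abstract describes (mine an access correlation matrix from the log, then place co-accessed files on different nodes) can be sketched in a few lines. The snippet below is a simplified illustration, not the authors' algorithm; `access_correlation`, `assign_nodes`, and the greedy placement rule are hypothetical names and choices:

    ```python
    from itertools import combinations

    def access_correlation(sessions, files):
        """Count how often each pair of files is requested in the same session
        (a stand-in for the historical access log analysis)."""
        idx = {f: i for i, f in enumerate(files)}
        m = [[0] * len(files) for _ in files]
        for session in sessions:
            for a, b in combinations(sorted(set(session)), 2):
                m[idx[a]][idx[b]] += 1
                m[idx[b]][idx[a]] += 1
        return m

    def assign_nodes(files, corr, n_nodes):
        """Greedy heuristic: put each file on the node where it has the least
        accumulated correlation with files already stored there, so files that
        tend to be accessed together land on different nodes (parallel I/O)."""
        idx = {f: i for i, f in enumerate(files)}
        nodes = [[] for _ in range(n_nodes)]
        order = sorted(files, key=lambda f: -sum(corr[idx[f]]))  # busiest first
        for f in order:
            cost = [sum(corr[idx[f]][idx[g]] for g in node) for node in nodes]
            nodes[cost.index(min(cost))].append(f)
        return nodes
    ```

    Files that always appear in the same request then end up on different storage nodes, which is the property that raises the total parallel access probability.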

  4. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.

  5. Federated data storage system prototype for LHC experiments and data intensive science

    NASA Astrophysics Data System (ADS)

    Kiryanov, A.; Klimentov, A.; Krasnopevtsev, D.; Ryabinkin, E.; Zarochentsev, A.

    2017-10-01

    The rapid increase of data volume from the experiments running at the Large Hadron Collider (LHC) prompted the physics computing community to evaluate new data handling and processing solutions. Russian grid sites and university clusters scattered over a large area aim to unite their resources for future productive work, while also supporting large physics collaborations. In our project we address the fundamental problem of designing a computing architecture to integrate distributed storage resources for LHC experiments and other data-intensive science applications and to provide access to data from heterogeneous computing facilities. Studies include development and implementation of a federated data storage prototype for Worldwide LHC Computing Grid (WLCG) centres of different levels and university clusters within one National Cloud. The prototype is based on computing resources located in Moscow, Dubna, Saint Petersburg, Gatchina and Geneva. This project intends to implement a federated distributed storage for all kinds of operations such as read/write/transfer and access via WAN from Grid centres, university clusters, supercomputers, academic and commercial clouds. The efficiency and performance of the system are demonstrated using synthetic and experiment-specific tests including real data processing and analysis workflows from the ATLAS and ALICE experiments, as well as compute-intensive bioinformatics applications (PALEOMIX) running on supercomputers. We present the topology and architecture of the designed system, report performance and statistics for different access patterns and show how federated data storage can be used efficiently by physicists and biologists. We also describe how sharing data on a widely distributed storage system can lead to a new computing model and a reformation of computing practices, for instance how a bioinformatics program running on supercomputers can read and write data from the federated storage.

  6. Contents of the JPL Distributed Active Archive Center (DAAC) archive, version 2-91

    NASA Technical Reports Server (NTRS)

    Smith, Elizabeth A. (Editor); Lassanyi, Ruby A. (Editor)

    1991-01-01

    The Distributed Active Archive Center (DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea surface height, surface wind vector, sea surface temperature, atmospheric liquid water, and surface pigment concentration. The Jet Propulsion Laboratory DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and will be the United States distribution site for the Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  7. JPL Physical Oceanography Distributed Active Archive Center (PO.DAAC) data availability, version 1-94

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC) archive at the Jet Propulsion Laboratory (JPL) includes satellite data sets for the ocean sciences and global-change research to facilitate multidisciplinary use of satellite ocean data. Parameters include sea-surface height, surface-wind vector, sea-surface temperature, atmospheric liquid water, and integrated water vapor. The JPL PO.DAAC is an element of the Earth Observing System Data and Information System (EOSDIS) and is the United States distribution site for Ocean Topography Experiment (TOPEX)/POSEIDON data and metadata.

  8. Use of an automated drug distribution cabinet system in a disaster response mobile emergency department.

    PubMed

    Morchel, Herman; Ogedegbe, Chinwe; Desai, Nilesh; Faley, Brian; Mahmood, Nasir; Moro, Gary Del; Feldman, Joseph

    2015-01-01

    This article describes the innovative use of an automated drug distribution cabinet system for medication supply in a disaster response mobile Emergency Department vehicle. Prior to the use of the automated drug distribution cabinet system described in this article, the mobile hospitals were stocked as needed with drugs in individual boxes and drawers. Experience with multiple deployments found this method to be very cumbersome and labor intensive in preparation, operational use, and demobilization. For a recent deployment to provide emergency medical care at the 2014 Super Bowl football event, the automated drug distribution cabinet system in the institution's main campus Emergency Department was duplicated and incorporated into the mobile Emergency Department. This method of drug stocking and dispensing was found to be far more efficient than gathering and placing drugs in onboard drawers and racks. Automated drug distribution cabinet systems can be used to significantly improve patient care and overall efficiency in mobile hospital deployments.

  9. Multi-heuristic dynamic task allocation using genetic algorithms in a heterogeneous distributed system

    PubMed Central

    Page, Andrew J.; Keane, Thomas M.; Naughton, Thomas J.

    2010-01-01

    We present a multi-heuristic evolutionary task allocation algorithm to dynamically map tasks to processors in a heterogeneous distributed system. It utilizes a genetic algorithm, combined with eight common heuristics, in an effort to minimize the total execution time. It operates on batches of unmapped tasks and can preemptively remap tasks to processors. The algorithm has been implemented on a Java distributed system and evaluated with a set of six problems from the areas of bioinformatics, biomedical engineering, computer science and cryptography. Experiments using up to 150 heterogeneous processors show that the algorithm achieves better efficiency than other state-of-the-art heuristic algorithms. PMID:20862190
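    The abstract does not name the eight heuristics it combines with the genetic algorithm; a typical member of that family is min-min, which repeatedly assigns the task with the smallest achievable completion time to the processor that finishes it earliest. The sketch below is a generic textbook version under that assumption (the function name and data layout are illustrative, not from the paper):

    ```python
    def min_min(task_times, n_procs):
        """Min-min heuristic for mapping tasks to heterogeneous processors.
        task_times[t][p] = execution time of task t on processor p."""
        ready = [0.0] * n_procs            # time at which each processor is free
        unmapped = set(range(len(task_times)))
        schedule = {}
        while unmapped:
            # Best (completion time, processor) achievable for each unmapped task.
            best = {t: min((ready[p] + task_times[t][p], p) for p in range(n_procs))
                    for t in unmapped}
            t = min(unmapped, key=lambda u: best[u][0])  # earliest-finishing task
            finish, p = best[t]
            schedule[t] = p
            ready[p] = finish
            unmapped.remove(t)
        return schedule, max(ready)        # task->processor map and makespan
    ```

    A genetic algorithm can seed its initial population with mappings produced by heuristics like this one and then search for allocations with lower total execution time.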

  10. Diagnostic layer integration in FPGA-based pipeline measurement systems for HEP experiments

    NASA Astrophysics Data System (ADS)

    Pozniak, Krzysztof T.

    2007-08-01

    Integrated triggering and data acquisition systems for high energy physics experiments may be considered as fast, multichannel, synchronous, distributed, pipeline measurement systems. A considerable extension of functional, technological and monitoring demands, which has recently been imposed on them, forced a common usage of large field-programmable gate array (FPGA), digital signal processing-enhanced matrices and fast optical transmission for their realization. This paper discusses modelling, design, realization and testing of pipeline measurement systems. A distribution of synchronous data stream flows is considered in the network. A general functional structure of a single network node is presented. A suggested, novel block structure of the node model facilitates full implementation in the FPGA chip, circuit standardization and parametrization, as well as integration of functional and diagnostic layers. A general method for pipeline system design was derived. This method is based on a unified model of the synchronous data network node. A few examples of practically realized, FPGA-based, pipeline measurement systems were presented. The described systems were applied in ZEUS and CMS.

  11. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation-minus-analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project, which quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios, to assess the major factors governing uncertainty in global and regional trace gas distributions.

  12. Experimental investigation of static ice refrigeration air conditioning system driven by distributed photovoltaic energy system

    NASA Astrophysics Data System (ADS)

    Xu, Y. F.; Li, M.; Luo, X.; Wang, Y. F.; Yu, Q. F.; Hassanien, R. H. E.

    2016-08-01

    A static ice refrigeration air conditioning system (SIRACS) driven by a distributed photovoltaic energy system (DPES) is proposed, and a test experiment is reported in this paper. Results revealed that the system's energy utilization efficiency was low because energy losses were high in the ice-making process of the ice slide maker. An immersed evaporator and a co-integrated exchanger were therefore suggested in the system structure optimization analysis, and the system COP was improved by nearly 40%. We also measured how the ice thickness and ice super-cooled temperature changed over time and obtained the relationship between system COP and ice thickness.

  13. DIRAC3 - the new generation of the LHCb grid software

    NASA Astrophysics Data System (ADS)

    Tsaregorodtsev, A.; Brook, N.; Casajus Ramo, A.; Charpentier, Ph; Closier, J.; Cowan, G.; Graciani Diaz, R.; Lanciotti, E.; Mathe, Z.; Nandakumar, R.; Paterson, S.; Romanovsky, V.; Santinelli, R.; Sapunov, M.; Smith, A. C.; Seco Miguelez, M.; Zhelezov, A.

    2010-04-01

    DIRAC, the LHCb community Grid solution, was considerably reengineered in order to meet all the requirements for processing the data coming from the LHCb experiment. It covers all tasks, from raw data transportation from the experiment area to grid storage and data processing, up to the final user analysis. The reengineered DIRAC3 version of the system includes a fully grid-security-compliant framework for building service oriented distributed systems; a complete Pilot Job framework for creating efficient workload management systems; and several subsystems to manage high level operations like data production and distribution management. The user interfaces of the DIRAC3 system, providing rich command line and scripting tools, are complemented by a full-featured Web portal providing users with secure access to all the details of the system status and ongoing activities. We will present an overview of the DIRAC3 architecture, new innovative features and the achieved performance. Extending DIRAC3 to manage computing resources beyond the WLCG grid will be discussed. Experience with using DIRAC3 by user communities other than LHCb and in application domains other than High Energy Physics will be shown to demonstrate the general-purpose nature of the system.

  14. Characterization of mixing of suspension in a mechanically stirred precipitation system

    NASA Astrophysics Data System (ADS)

    Farkas, B.; Blickle, T.; Ulbert, Zs.; Hasznos-Nezdei, M.

    1996-09-01

    In the case of precipitational crystallization, the particle size distribution of the resulting product is greatly influenced by the mixing rate of the system. We have worked out a method of characterizing the mixing of precipitated suspensions by applying a function of mean residence time and particle size distribution. For the experiments a precipitated suspension of β-lactam-type antibiotic has been used in a mechanically stirred tank.

  15. Distributed electrical time domain reflectometry (ETDR) structural sensors: design models and proof-of-concept experiments

    NASA Astrophysics Data System (ADS)

    Stastny, Jeffrey A.; Rogers, Craig A.; Liang, Chen

    1993-07-01

    A parametric design model has been created to optimize the sensitivity of the sensing cable in a distributed sensing system. The system consists of electrical time domain reflectometry (ETDR) signal processing equipment and specially designed sensing cables. The ETDR equipment sends a high-frequency electric pulse (in the gigahertz range) along the sensing cable. Some portion of the electric pulse is reflected back to the ETDR equipment as a result of variation in the cable impedance. The electric impedance variation in the sensing cable can be related to its mechanical deformation, such as cable elongation (change in resistance), shear deformation (change in capacitance), corrosion of the cable or the materials around the cable (change in inductance and capacitance), etc. The time delay, amplitude, and shape of the reflected pulse provide the means to locate, determine the magnitude of, and indicate the nature of the change in electrical impedance, which is then related to the distributed structural deformation. The sensing cables are an essential part of the health-monitoring system. By using the parametric design model, the optimum cable parameters can be determined for a specific deformation. Proof-of-concept experiments are also presented in the paper to demonstrate the utility of an electrical TDR system in distributed sensing applications.
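    The time-of-flight arithmetic behind locating an impedance change is straightforward. The sketch below (the function names and the 0.7 velocity factor are illustrative assumptions, not values from the paper) converts a reflected-pulse delay into a position along the cable and gives the classic reflection coefficient at an impedance step:

    ```python
    C = 299_792_458.0  # speed of light in vacuum, m/s

    def fault_distance(round_trip_delay_s, velocity_factor=0.7):
        """Distance along the cable to the impedance change, from the round-trip
        delay of the reflected pulse. velocity_factor is the pulse speed in the
        cable as a fraction of c (0.7 is a typical assumed value)."""
        return velocity_factor * C * round_trip_delay_s / 2.0

    def reflection_coefficient(z_after, z0=50.0):
        """Fraction of the incident pulse amplitude reflected where the cable
        impedance steps from z0 to z_after; its sign and size indicate the
        nature and magnitude of the local change."""
        return (z_after - z0) / (z_after + z0)
    ```

    A 100 ns round-trip delay at a 0.7 velocity factor, for example, places the disturbance roughly 10.5 m down the cable.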

  16. Mentoring in a Distributed Learning Social Work Program

    ERIC Educational Resources Information Center

    Jensen, Donna

    2017-01-01

    Students in alternative education programs often experience differential access to faculty, advisors, university support systems, and the supportive culture established by being on campus. This study is a descriptive-exploratory program evaluation of the distributed learning social work mentoring program at California State University, Chico. The…

  17. BES-III distributed computing status

    NASA Astrophysics Data System (ADS)

    Belov, S. D.; Deng, Z. Y.; Korenkov, V. V.; Li, W. D.; Lin, T.; Ma, Z. T.; Nicholson, C.; Pelevanyuk, I. S.; Suo, B.; Trofimov, V. V.; Tsaregorodtsev, A. U.; Uzhinskiy, A. V.; Yan, T.; Yan, X. F.; Zhang, X. M.; Zhemchugov, A. S.

    2016-09-01

    The BES-III experiment at the Institute of High Energy Physics (Beijing, China) is aimed at precision measurements in e+e- annihilation in the energy range from 2.0 to 4.6 GeV. The world's largest samples of J/psi and psi' events and unique samples of XYZ data have already been collected. The expected increase of the data volume in the coming years required a significant evolution of the computing model, namely a shift from centralized data processing to a distributed one. This report summarizes the current design of the BES-III distributed computing system, some of the key decisions, and the experience gained during two years of operations.

  18. Distributions of underdense meteor trail amplitudes and its application to meteor scatter communication system design

    NASA Astrophysics Data System (ADS)

    Weitzen, J. A.; Bourque, S.; Ostergaard, J. C.; Bench, P. M.; Baily, A. D.

    1991-04-01

    Analysis of data from recent experiments leads to the observation that distributions of underdense meteor trail peak signal amplitudes differ from classic predictions. In this paper the distribution of trail amplitudes in decibels relative to 1 W (dBW) is considered, and it is shown that Lindeberg's theorem can be used to apply central limit arguments to this problem. It is illustrated that a Gaussian model for the distribution of the logarithm of the peak received signal level of underdense trails provides a better fit to data than classic approaches. Distributions of underdense meteor trail amplitudes at five frequencies are compared to a Gaussian distribution and the classic model. Implications of the Gaussian assumption for the design of communication systems are discussed.
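    Under the Gaussian-in-dB model argued for here, the peak power in watts is lognormal. As a hedged sketch of how a link designer might use this model (all function names and numbers are illustrative placeholders, not values from the study):

    ```python
    import random

    def sample_trail_levels_dbw(n, mean_dbw=-110.0, sigma_db=8.0, seed=1):
        """Draw peak underdense-trail signal levels under the Gaussian-in-dB
        model: the level in dBW is normal, so the power in watts is lognormal."""
        rng = random.Random(seed)
        return [rng.gauss(mean_dbw, sigma_db) for _ in range(n)]

    def dbw_to_watts(dbw):
        """Convert a level in dB relative to 1 W back to watts."""
        return 10.0 ** (dbw / 10.0)

    def fraction_usable(levels_dbw, threshold_dbw):
        """Link-design quantity: fraction of trails whose peak level clears
        the receiver threshold."""
        return sum(s >= threshold_dbw for s in levels_dbw) / len(levels_dbw)
    ```

    Sampling levels in dBW and thresholding them this way is the kind of calculation the Gaussian assumption simplifies for meteor-burst link budgets.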

  19. Design and implementation of a prototype data system for earth radiation budget, cloud, aerosol, and chemistry data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baum, B.A.; Barkstrom, B.R.

    1993-04-01

    The Earth Observing System (EOS) will collect data from a large number of satellite-borne instruments beginning later in this decade. To make these data accessible to the scientific community, NASA will build an EOS Data and Information System (EOSDIS). As an initial effort to accelerate the development of EOSDIS and to gain experience with such an information system, NASA and other agencies are working on a prototype system called Version 0 (V0). This effort will provide improved access to pre-EOS earth science data throughout the early EOSDIS period. Based on recommendations from the EOSDIS Science Advisory Panel, EOSDIS will have several distributed active archive centers (DAACs). Each DAAC will specialize in particular data sets. This paper describes work at the NASA Langley Research Center (LaRC) DAAC. The Version 0 Langley DAAC began archiving and distributing existing data sets pertaining to the earth's radiation budget, clouds, aerosols, and tropospheric chemistry in late 1992. The primary goals of the LaRC V0 effort are the following: (1) enhance scientific use of existing data; (2) develop institutional expertise in maintaining and distributing data; (3) use institutional capability for processing data from previous missions such as the Earth Radiation Budget Experiment and the Stratospheric Aerosol and Gas Experiment to prepare for processing future EOS satellite data; (4) encourage cooperative interagency and international involvement with data sets and research; and (5) incorporate technological hardware and software advances quickly.

  20. Apollo 17 ultraviolet spectrometer experiment (S-169)

    NASA Technical Reports Server (NTRS)

    Fastie, W. G.

    1974-01-01

    The scientific objectives of the ultraviolet spectrometer experiment are discussed, along with design and operational details, instrument preparation and performance, and scientific results. Information gained from the experiment is given concerning the lunar atmosphere and albedo, zodiacal light, astronomical observations, spacecraft environment, and the distribution of atomic hydrogen in the solar system and in the earth's atmosphere.

  1. A Performance Comparison of Tree and Ring Topologies in Distributed System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Min

    A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together, minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover from the unexpected failure of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.

  2. Design of distributed FBG vibration measuring system based on Fabry-Perot tunable filter

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng; Miao, Changyun; Li, Hongqiang; Gao, Hua; Gan, Jingmeng

    2011-11-01

    A distributed fiber Bragg grating (FBG) wavelength interrogator based on a fiber Fabry-Perot tunable filter (FFP-TF) is proposed, which can measure the dynamic strain or vibration of multiple sensing fiber gratings in one optical fiber in a time-division manner. A mathematical model of the wavelength demodulation was built, the formulas for system output voltage and sensitivity were deduced, and the method of finding the static operating point was determined. The wavelength drifting characteristic of the FFP-TF was discussed for the case when the center wavelength of the FFP-TF is set at the static operating point, and a wavelength locking method was proposed that introduces a high-frequency driving voltage signal. A demodulation system was established based on LabVIEW; its demodulated wavelength dynamic range is 290 pm in theory. In the experiment, with digital filtering applied to the system output data, 100 Hz and 250 Hz vibration signals were measured. The experimental results demonstrate the feasibility of the demodulation method.

  3. Distributed sensor for water and pH measurements using fiber optics and swellable polymeric systems

    NASA Astrophysics Data System (ADS)

    Michie, W. C.; Culshaw, B.; McKenzie, I.; Konstantakis, M.; Graham, N. B.; Moran, C.; Santos, F.; Bergqvist, E.; Carlstrom, B.

    1995-01-01

    We report on the design, construction and test of a generic form of sensor for making distributed measurements of a range of chemical parameters. The technique combines optical time-domain reflectometry with chemically sensitive water-swellable polymers (hydrogels). Initial experiments have concentrated on demonstrating a distributed water detector; however, gels have been developed that enable this sensor to be

  4. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detection at sub-second exposure times. We developed a method to perform the interference experiment using an asymmetric double slit fabricated by a focused ion beam instrument, operating the microscope under a "pre-Fraunhofer" condition that differs from the Fraunhofer condition of conventional double-slit experiments. Here, the pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high dose for calculation of the electron probability distribution and low dose for the distribution of individual electrons. Finally, we presented the distribution of single electrons as a composite image, color-coded according to the three types of experiments above.
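    The distinction behind the "pre-Fraunhofer" condition is usually expressed through the Fresnel number N = a^2/(lambda*L): N << 1 is the far-field (Fraunhofer) regime, while N of order 1 or larger is the Fresnel regime. Because N grows with the square of the aperture scale, each narrow single slit can sit in the Fraunhofer regime while the much wider double-slit span is still Fresnel at the same camera length. The sketch below (an illustration, not code from the paper) computes the relativistic electron wavelength and the Fresnel number:

    ```python
    import math

    # CODATA constants
    H = 6.62607015e-34       # Planck constant, J s
    M_E = 9.1093837015e-31   # electron rest mass, kg
    Q_E = 1.602176634e-19    # elementary charge, C
    C = 2.99792458e8         # speed of light, m/s

    def electron_wavelength(volts):
        """Relativistically corrected de Broglie wavelength (m) of an electron
        accelerated through the given potential."""
        e_v = Q_E * volts
        return H / math.sqrt(2.0 * M_E * e_v * (1.0 + e_v / (2.0 * M_E * C**2)))

    def fresnel_number(half_width_m, wavelength_m, distance_m):
        """N = a^2 / (lambda * L) for an aperture of half-width a observed at
        distance L."""
        return half_width_m**2 / (wavelength_m * distance_m)
    ```

    At 1.2 MV the wavelength evaluates to about 0.76 pm, so nanometre-scale slits make both regimes experimentally accessible.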

  5. Department Networks and Distributed Leadership in Schools

    ERIC Educational Resources Information Center

    de Lima, Jorge Avila

    2008-01-01

    Many schools are organised into departments which function as contexts that frame teachers' professional experiences in important ways. Some educational systems have adopted distributed forms of leadership within schools that rely strongly on the departmental structure and on the role of the department coordinator as teacher leader. This paper…

  6. Estimating parameters with pre-specified accuracies in distributed parameter systems using optimal experiment design

    NASA Astrophysics Data System (ADS)

    Potters, M. G.; Bombois, X.; Mansoori, M.; Hof, Paul M. J. Van den

    2016-08-01

    Estimation of physical parameters in dynamical systems driven by linear partial differential equations is an important problem. In this paper, we introduce the least costly experiment design framework for these systems. It enables parameter estimation with an accuracy that is specified by the experimenter prior to the identification experiment, while at the same time minimising the cost of the experiment. We show how to adapt the classical framework for these systems and take into account scaling and stability issues. We also introduce a progressive subdivision algorithm that further generalises the experiment design framework in the sense that it returns the lowest cost by finding the optimal input signal, and optimal sensor and actuator locations. Our methodology is then applied to a relevant problem in heat transfer studies: estimation of conductivity and diffusivity parameters in front-face experiments. We find good correspondence between numerical and theoretical results.

  7. Anomalous yet Brownian.

    PubMed

    Wang, Bo; Anthony, Stephen M; Bae, Sung Chul; Granick, Steve

    2009-09-08

    We describe experiments using single-particle tracking in which mean-square displacement is simply proportional to time (Fickian), yet the distribution of displacement probability is not Gaussian as should be expected of a classical random walk but, instead, is decidedly exponential for large displacements, the decay length of the exponential being proportional to the square root of time. The first example is when colloidal beads diffuse along linear phospholipid bilayer tubes whose radius is the same as that of the beads. The second is when beads diffuse through entangled F-actin networks, bead radius being less than one-fifth of the actin network mesh size. We explore the relevance to dynamic heterogeneity in trajectory space, which has been extensively discussed regarding glassy systems. Data for the second system might suggest activated diffusion between pores in the entangled F-actin networks, in the same spirit as activated diffusion and exponential tails observed in glassy systems. But the first system shows exceptionally rapid diffusion, nearly as rapid as for identical colloids in free suspension, yet still displaying an exponential probability distribution as in the second system. Thus, although the exponential tail is reminiscent of glassy systems, in fact, these dynamics are exceptionally rapid. We also compare with particle trajectories that are at first subdiffusive but Fickian at the longest measurement times, finding that displacement probability distributions fall onto the same master curve in both regimes. The need is emphasized for experiments, theory, and computer simulation to allow definitive interpretation of this simple and clean exponential probability distribution.
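
    The "Fickian yet exponential" statistics described above are easy to reproduce in a toy model. The sketch below (an illustration of the statistics only, not of the paper's data) draws displacements from a Laplace (two-sided exponential) distribution whose decay length grows as the square root of time, and checks that the mean-square displacement is nonetheless linear in time:

```python
import numpy as np

# Toy model: at each lag time t, displacements follow a Laplace
# (two-sided exponential) distribution with decay length b = b0*sqrt(t).
# The Laplace variance is 2*b^2, so MSD(t) = 2*b0^2*t -- linear in t
# (Fickian) even though the distribution is exponential, not Gaussian.
rng = np.random.default_rng(0)
b0 = 1.0  # decay length at t = 1 (arbitrary units)

def sample_displacements(t, n=200_000):
    """Draw n displacements at lag time t from the toy model."""
    return rng.laplace(0.0, b0 * np.sqrt(t), size=n)

for t in (1.0, 4.0, 9.0):
    x = sample_displacements(t)
    print(f"t = {t:4.1f}  MSD/t = {np.mean(x**2) / t:.3f}")
```

    MSD/t stays near the constant 2*b0^2 at every lag, which is the sense in which an exponential displacement distribution is compatible with simple Fickian scaling.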

  8. Sparse distributed memory overview

    NASA Technical Reports Server (NTRS)

    Raugh, Mike

    1990-01-01

The Sparse Distributed Memory (SDM) project is investigating the theory and applications of a massively parallel computing architecture, called sparse distributed memory, that will support the storage and retrieval of sensory and motor patterns characteristic of autonomous systems. The immediate objectives of the project are centered in studies of the memory itself and in the use of the memory to solve problems in speech, vision, and robotics. Investigation of methods for encoding sensory data is an important part of the research. Examples of NASA missions that may benefit from this work are Space Station, planetary rovers, and solar exploration. Sparse distributed memory offers promising technology for systems that must learn through experience and be capable of adapting to new circumstances, and for operating any large complex system requiring automatic monitoring and control. Sparse distributed memory is a massively parallel architecture motivated by efforts to understand how the human brain works. Sparse distributed memory is an associative memory, able to retrieve information from cues that only partially match patterns stored in the memory. It is able to store long temporal sequences derived from the behavior of a complex system, such as progressive records of the system's sensory data and correlated records of the system's motor controls.
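
    The associative read/write mechanism described above can be sketched in a few lines. The following minimal Kanerva-style memory (dimensions, location count, and activation radius are illustrative choices, not the project's parameters) stores a binary pattern at its own address and retrieves it exactly from a noisy cue:

```python
import numpy as np

# Minimal sparse distributed memory sketch.  Hard locations are random
# binary addresses; a pattern is written into every location within a
# Hamming radius of its address, and read back by summing counters over
# the locations activated by a (possibly noisy) cue.
rng = np.random.default_rng(1)
DIM, LOCATIONS, RADIUS = 256, 2_000, 112

hard_addresses = rng.integers(0, 2, size=(LOCATIONS, DIM), dtype=np.int8)
counters = np.zeros((LOCATIONS, DIM), dtype=np.int32)

def _active(address):
    """Boolean mask of hard locations within RADIUS of the address."""
    distances = np.sum(hard_addresses != address, axis=1)
    return distances <= RADIUS

def write(address, pattern):
    """Add +1/-1 to the counters of every activated location."""
    counters[_active(address)] += np.where(pattern == 1, 1, -1)

def read(cue):
    """Majority vote over the counters of the locations the cue activates."""
    sums = counters[_active(cue)].sum(axis=0)
    return (sums >= 0).astype(np.int8)

pattern = rng.integers(0, 2, size=DIM, dtype=np.int8)
write(pattern, pattern)   # autoassociative storage
noisy = pattern.copy()
noisy[:10] ^= 1           # flip 10 of 256 bits in the cue
recalled = read(noisy)    # partial match still retrieves the pattern
```

    Because unwritten locations hold zero counters, only the overlap between the cue's activated set and the written set contributes to the vote, which is what makes retrieval from partial matches work.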

  9. Channel Formation in Physical Experiments: Examples from Deep and Shallow Water Clastic Sedimentary Systems

    NASA Astrophysics Data System (ADS)

    Hoyal, D. C.; Sheets, B. A.

    2005-12-01

The degree to which experimental sedimentary systems form channels has an important bearing on their applicability as analogs of large-scale natural systems, where channels and their associated landforms are ubiquitous. The internal geometry and properties (e.g., grain size, vertical succession and stacking) of many depositional landforms can be directly linked to the processes of channel initiation and evolution. Unfortunately, strong self-channelization, a prerequisite for certain natural phenomena (e.g. mouth lobe development, meandering, etc.), has been difficult to reproduce at laboratory scales. In shallow-water experiments (sub-aerial), although weak channelization develops relatively easily, as is commonly observed in gutters after a rain storm, strong channelization with well-developed banks has proved difficult to model. In deep water experiments the challenge is even greater. Despite considerable research effort, experimental conditions for deep water channel initiation have only recently been identified. Experiments on the requisite conditions for channelization in shallow and deep water have been ongoing at the ExxonMobil Upstream Research Company (EMURC) for several years. By primarily manipulating the cohesiveness of the sediment supply we have developed models of distributive systems with well-defined channels in shallow water, reminiscent of fine grained river-dominated deltas like the Mississippi. In deep water we have developed models that demonstrate strong channelization and associated lobe behavior in a distributive setting, by scaling up an approach developed by another group using salt-water flows and low-density plastic sediment. The experiments highlight a number of important controls on experimental channel formation, including: (1) bed strength or cohesiveness; (2) bedform development; and (3) Reynolds number. Among these controls, bedforms disrupt the channel forming instability, reducing the energy available for channelization.
The fundamental channel instability develops in both laminar and turbulent flow but with important differences. The scaling of these effects is the focus of ongoing research. In general it was observed that there are strong similarities between the processes and sedimentary products in shallow and deep water systems. Further, strong channelization in EMURC experiments provides insights into the evolution of distributive systems including: (1) the cyclic process of lobe formation and channel growth at a channel mouth, (2) types of channel fill, (3) architectural differences between channel fill and lobe deposits, (4) channel backfilling and avulsion, (5) channel initiation vs. entrenched channel phases, (6) knickpoints and channel erosion, (7) structure of overbank, levee-building flows, and (8) the role of levees in altering the distributive channel pattern.

  10. TiD-Introducing and Benchmarking an Event-Delivery System for Brain-Computer Interfaces.

    PubMed

    Breitwieser, Christian; Tavella, Michele; Schreuder, Martijn; Cincotti, Febo; Leeb, Robert; Muller-Putz, Gernot R

    2017-12-01

In this paper, we present and analyze an event distribution system for brain-computer interfaces. Events are commonly used to mark and describe incidents during an experiment and are therefore critical for later data analysis or immediate real-time processing. The presented approach, called Tools for brain-computer interaction interface D (TiD), delivers messages in XML format via a bus-like system using transmission control protocol connections or shared memory. A dedicated server dispatches TiD messages to distributed or local clients. The TiD message is designed to be flexible and contains time stamps for event synchronization, whereas events describe incidents that occur during an experiment. TiD was tested extensively with respect to stability and latency. The effect of event jitter was analyzed and benchmarked on a reference implementation under different conditions, such as gigabit and 100-Mb Ethernet or Wi-Fi, with different numbers of event receivers. A 3-dB signal attenuation, which occurs when averaging jitter-influenced trials aligned by events, starts to become visible at around 1-2 kHz in the case of a gigabit connection. Mean event distribution times across operating systems range from 0.3 to 0.5 ms for a gigabit network connection for 10^6 events. Results for other environmental conditions are available in this paper. References already using TiD for event distribution are provided, showing the applicability of TiD for event delivery with distributed or local clients.
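
    As a rough illustration of such a message, the sketch below builds and parses a TiD-style XML event. The element and attribute names here are our own guesses for illustration, not the actual TiD schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch of a TiD-style XML event message.  The key ideas
# from the paper: events describe incidents during an experiment and
# carry time stamps so that receivers can synchronize them later.
def build_event(family, event, position, timestamp):
    msg = ET.Element("tidMessage", version="1.0")
    ET.SubElement(msg, "description", family=family, event=str(event))
    ET.SubElement(msg, "absolute", value=str(timestamp))  # time stamp
    ET.SubElement(msg, "relative", value=str(position))   # sample position
    return ET.tostring(msg, encoding="unicode")

def parse_event(xml_text):
    root = ET.fromstring(xml_text)
    return {
        "family": root.find("description").get("family"),
        "event": int(root.find("description").get("event")),
        "timestamp": int(root.find("absolute").get("value")),
        "position": int(root.find("relative").get("value")),
    }

wire = build_event("biosig", 781, position=1024, timestamp=1_700_000_000)
decoded = parse_event(wire)
```

    A dedicated server would dispatch such strings over TCP or shared memory to every subscribed client, as the abstract describes.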

  11. Devices development and techniques research for space life sciences

    NASA Astrophysics Data System (ADS)

    Zhang, A.; Liu, B.; Zheng, C.

The development process and current status of devices and techniques for space life science in China, together with the main research results achieved in this field by the Shanghai Institute of Technical Physics (SITP, CAS), are reviewed concisely in this paper. Based on an analysis of the device and technique requirements for supporting space life science experiments and research, a design concept is put forward: developing intelligent modules with professional functions and standard interfaces that are easy to integrate into a system. The realization of an experiment system with intelligent distributed control based on a field bus is discussed at three hierarchical levels. Typical sensing or control function cells with a degree of autonomous control, data management, and communication capability, called intelligent agents, are designed and developed. A digital hardware network system consisting of the distributed agents as intelligent nodes is constructed with open, standards-based field bus technology. Multitask, real-time control application software is developed in an embedded RTOS environment and implanted into the system hardware, so that a space life science experiment system platform characterized by multiple tasks, multiple processes, professional function, and rapid integration can be constructed.

  12. Scheduling based on a dynamic resource connection

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.

    2017-02-01

The practical use of distributed computing systems is associated with many problems, including organising effective interaction between the agents located at the nodes of the system, configuring each node to perform a specific task, distributing the available information and computational resources of the system effectively, and controlling the multithreading that implements the logic of the research problems being solved. This article describes a method of computing load balancing in distributed automatic systems oriented towards multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices is offered, providing effective dynamic scaling of computing power under peak load. The results of model experiments with the developed load scheduling algorithm are set out. These results show that the algorithm remains effective even with a significant increase in the number of connected nodes and in the scale of the distributed computing system architecture.

  13. Research on simulation based material delivery system for an automobile company with multi logistics center

    NASA Astrophysics Data System (ADS)

    Luo, D.; Guan, Z.; Wang, C.; Yue, L.; Peng, L.

    2017-06-01

Distribution of different parts to assembly lines is significant for companies seeking to improve production. This research investigates the optimization of the distribution method of a logistics system in a third-party logistics company that provides professional services to an automobile manufacturing case company in China. It examines the material distribution levelling and unloading platform of the automobile logistics enterprise and proposes a logistics distribution strategy, a material classification method, and a logistics scheduling approach. Moreover, the simulation tool Simio is employed to model the assembly line logistics system, which helps to find and validate an optimized distribution scheme through simulation experiments. Experimental results indicate that the proposed scheme solves the material levelling problem and the congestion of the unloading pattern more efficiently than the original method employed by the case company.

  14. Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, 6th, Williamsburg, VA, May 15-19, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Pordes, Ruth (Editor)

    1989-01-01

Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert system tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer-based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structured macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.

  15. Persistence of exponential bed thickness distributions in the stratigraphic record: Experiments and theory

    NASA Astrophysics Data System (ADS)

    Straub, K. M.; Ganti, V. K.; Paola, C.; Foufoula-Georgiou, E.

    2010-12-01

    Stratigraphy preserved in alluvial basins houses the most complete record of information necessary to reconstruct past environmental conditions. Indeed, the character of the sedimentary record is inextricably related to the surface processes that formed it. In this presentation we explore how the signals of surface processes are recorded in stratigraphy through the use of physical and numerical experiments. We focus on linking surface processes to stratigraphy in 1D by quantifying the probability distributions of processes that govern the evolution of depositional systems to the probability distribution of preserved bed thicknesses. In this study we define a bed as a package of sediment bounded above and below by erosional surfaces. In a companion presentation we document heavy-tailed statistics of erosion and deposition from high-resolution temporal elevation data recorded during a controlled physical experiment. However, the heavy tails in the magnitudes of erosional and depositional events are not preserved in the experimental stratigraphy. Similar to many bed thickness distributions reported in field studies we find that an exponential distribution adequately describes the thicknesses of beds preserved in our experiment. We explore the generation of exponential bed thickness distributions from heavy-tailed surface statistics using 1D numerical models. These models indicate that when the full distribution of elevation fluctuations (both erosional and depositional events) is symmetrical, the resulting distribution of bed thicknesses is exponential in form. Finally, we illustrate that a predictable relationship exists between the coefficient of variation of surface elevation fluctuations and the scale-parameter of the resulting exponential distribution of bed thicknesses.
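
    The surface-to-stratigraphy bookkeeping described above can be captured by a toy 1-D model (our illustration, not the authors' code): deposition thickens the bed currently being built, while erosion truncates the column from the top and closes the current bed with an erosional surface:

```python
import random

# Toy 1-D surface-to-stratigraphy model.  A bed is a package of sediment
# bounded above and below by erosional surfaces, so each erosion event
# closes the bed currently being built and may also truncate older beds.
def preserved_beds(increments):
    beds = []        # completed preserved bed thicknesses, oldest first
    current = 0.0    # thickness of the bed currently being deposited
    for dz in increments:
        if dz >= 0:
            current += dz
        else:
            erode = -dz
            removed = min(erode, current)
            current -= removed
            erode -= removed
            while erode > 1e-12 and beds:   # truncate older beds from the top
                removed = min(erode, beds[-1])
                beds[-1] -= removed
                erode -= removed
                if beds[-1] <= 1e-12:
                    beds.pop()
            if current > 1e-12:             # erosion surface closes the bed
                beds.append(current)
                current = 0.0
    if current > 1e-12:
        beds.append(current)
    return beds

random.seed(0)
# Symmetric elevation fluctuations, as in the experiments described above.
steps = [random.gauss(0.0, 1.0) for _ in range(50_000)]
thicknesses = preserved_beds(steps)
```

    A deterministic example: preserved_beds([2, -1, 2]) returns [1, 2], because the erosion event closes a 1-unit bed before the final 2 units are deposited. The distribution of `thicknesses` under symmetric fluctuations can then be compared against an exponential fit.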

  16. Electrical Capacitance Volume Tomography for the Packed Bed Reactor ISS Flight Experiment

    NASA Technical Reports Server (NTRS)

    Marashdeh, Qussai; Motil, Brian; Wang, Aining; Liang-Shih, Fan

    2013-01-01

Fixed packed bed reactors are compact, require minimum power and maintenance to operate, and are highly reliable. These features make this technology a highly desirable unit operation for long-duration life support systems in space. NASA is developing an ISS experiment to address this technology with particular focus on water reclamation and air revitalization. Earlier research and development efforts funded by NASA have resulted in two hydrodynamic models which require validation with appropriate instrumentation in an extended microgravity environment. To validate these models, the instantaneous distribution of the gas and liquid phases must be measured. Electrical Capacitance Volume Tomography (ECVT) is a non-invasive imaging technology recently developed for multi-phase flow applications. It is based on distributing flexible capacitance plates on the periphery of a flow column and collecting real-time measurements of inter-electrode capacitances. Capacitance measurements here are directly related to the dielectric constant distribution, a physical property that is also related to material distribution in the imaging domain. Reconstruction algorithms are employed to map volume images of the dielectric distribution in the imaging domain, which is in turn related to phase distribution. ECVT is suitable for imaging interacting materials of different dielectric constants, typical in multi-phase flow systems. ECVT is being used extensively for measuring flow variables in various gas-liquid and gas-solid flow systems. Recent applications of ECVT include flows in risers and exit regions of circulating fluidized beds, gas-liquid and gas-solid bubble columns, trickle beds, and slurry bubble columns. ECVT is also used to validate flow models and CFD simulations. The technology is uniquely qualified for imaging phase concentrations in packed bed reactors for the ISS flight experiments as it exhibits favorable features of compact size, low-profile sensors, high imaging speed, and flexibility to fit around columns of various shapes and sizes. ECVT is also safer than other commonly used imaging modalities as it operates at low frequencies (around 1 MHz) and does not emit ionizing radiation. In this effort, ECVT is being used to image flow parameters in a packed bed reactor for an ISS flight experiment.

  17. Cooperative action of coherent groups in broadly heterogeneous populations of interacting chemical oscillators

    PubMed Central

    Mikhailov, A. S.; Zanette, D. H.; Zhai, Y. M.; Kiss, I. Z.; Hudson, J. L.

    2004-01-01

    We present laboratory experiments on the effects of global coupling in a population of electrochemical oscillators with a multimodal frequency distribution. The experiments show that complex collective signals are generated by this system through spontaneous emergence and joint operation of coherently acting groups representing hierarchically organized resonant clusters. Numerical simulations support these experimental findings. Our results suggest that some forms of internal self-organization, characteristic for complex multiagent systems, are already possible in simple chemical systems. PMID:15263084

  18. Contraceptive social marketing and community-based distribution systems in Colombia.

    PubMed

    Vernon, R; Ojeda, G; Townsend, M C

    1988-01-01

    Three operations research experiments were carried out in three provinces of Colombia to improve the cost-effectiveness of Profamilia's nonclinic-based programs. The experiments tested: (a) whether a contraceptive social marketing (CSM) strategy can replace a community-based distribution (CBD) program in a high contraceptive use area; (b) if wage incentives for salaried CBD instructors will increase contraceptive sales; and (c) whether a specially equipped information, education, and communication (IEC) team can replace a cadre of rural promoters to expand family planning coverage. All three strategies proved to be effective, but only the CSM system yielded a profit. Despite this, Profamilia discontinued its CSM program soon after the experiment was completed. Unexpected government controls regulating the price and sale of contraceptives in Colombia made the program unprofitable. As a result, family planning agencies are cautioned against replacing CBD programs with CSM. Instead, CBD programs might adopt a more commercial approach to become more efficient.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang

A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.

  20. A distributed, graphical user interface based, computer control system for atomic physics experiments

    NASA Astrophysics Data System (ADS)

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.
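
    The buffer-shortening idea can be illustrated with a simple run-length encoding sketch (our illustration, not the authors' implementation): with a fixed clock, a long hold occupies one buffer entry per tick, whereas a variable-frequency clock can play back one (value, duration) pair per tick:

```python
# With a fixed 10 MHz clock, a channel holding one value for a second
# consumes 10^7 identical buffer entries.  Encoding the sequence as
# (value, duration) pairs removes that redundancy, which is what a
# variable-frequency output clock makes possible in hardware.
def compress(samples):
    """Run-length encode a list of output samples into (value, count) pairs."""
    pairs = []
    for s in samples:
        if pairs and pairs[-1][0] == s:
            pairs[-1] = (s, pairs[-1][1] + 1)
        else:
            pairs.append((s, 1))
    return pairs

def expand(pairs):
    """Inverse: reconstruct the fixed-clock sample stream."""
    return [v for v, n in pairs for _ in range(n)]

sequence = [0] * 1000 + [1] * 3 + [0] * 1000  # short pulse between long holds
encoded = compress(sequence)                   # only 3 buffer entries
```

    The 2003-entry fixed-clock stream collapses to 3 pairs, while `expand` recovers it exactly; in the actual system the FPGA-generated clock plays the role of `expand` at output time.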

  1. A distributed, graphical user interface based, computer control system for atomic physics experiments.

    PubMed

    Keshet, Aviv; Ketterle, Wolfgang

    2013-01-01

    Atomic physics experiments often require a complex sequence of precisely timed computer controlled events. This paper describes a distributed graphical user interface-based control system designed with such experiments in mind, which makes use of off-the-shelf output hardware from National Instruments. The software makes use of a client-server separation between a user interface for sequence design and a set of output hardware servers. Output hardware servers are designed to use standard National Instruments output cards, but the client-server nature should allow this to be extended to other output hardware. Output sequences running on multiple servers and output cards can be synchronized using a shared clock. By using a field programmable gate array-generated variable frequency clock, redundant buffers can be dramatically shortened, and a time resolution of 100 ns achieved over effectively arbitrary sequence lengths.

  2. Application of SLURM, BOINC, and GlusterFS as Software System for Sustainable Modeling and Data Analytics

    NASA Astrophysics Data System (ADS)

    Kashansky, Vladislav V.; Kaftannikov, Igor L.

    2018-02-01

Modern numerical modeling experiments and data analytics problems in various fields of science and technology impose a wide variety of serious requirements on distributed computing systems. Many scientific computing projects exceed the limits of the available resource pool, requiring extra scalability and sustainability. In this paper we share our own experience and findings on combining the power of SLURM, BOINC, and GlusterFS into a software system for scientific computing. In particular, we suggest a complete architecture and highlight important aspects of systems integration.
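
    As a concrete flavor of such an integration, a SLURM job operating on data shared via GlusterFS might look like the following. This is a hypothetical sketch: the partition name, mount paths, and binary are invented for illustration and are not taken from the paper.

```shell
#!/bin/bash
# Hypothetical minimal SLURM batch script for a modeling run whose
# working data live on a shared GlusterFS volume (names illustrative).
#SBATCH --job-name=model-run
#SBATCH --partition=compute
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=16
#SBATCH --time=04:00:00
#SBATCH --output=/mnt/gluster/results/%x-%j.out

srun ./simulate --input /mnt/gluster/datasets/run042.cfg
```

    Because every node mounts the same GlusterFS volume, inputs and outputs need no staging, which is one of the sustainability benefits the paper's architecture aims at.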

  3. Artificial intelligence and space power systems automation

    NASA Technical Reports Server (NTRS)

    Weeks, David J.

    1987-01-01

    Various applications of artificial intelligence to space electrical power systems are discussed. An overview is given of completed, on-going, and planned knowledge-based system activities. These applications include the Nickel-Cadmium Battery Expert System (NICBES) (the expert system interfaced with the Hubble Space Telescope electrical power system test bed); the early work with the Space Station Experiment Scheduler (SSES); the three expert systems under development in the space station advanced development effort in the core module power management and distribution system test bed; planned cooperation of expert systems in the Core Module Power Management and Distribution (CM/PMAD) system breadboard with expert systems for the space station at other research centers; and the intelligent data reduction expert system under development.

  4. Displaced path integral formulation for the momentum distribution of quantum particles.

    PubMed

    Lin, Lin; Morrone, Joseph A; Car, Roberto; Parrinello, Michele

    2010-09-10

    The proton momentum distribution, accessible by deep inelastic neutron scattering, is a very sensitive probe of the potential of mean force experienced by the protons in hydrogen-bonded systems. In this work we introduce a novel estimator for the end-to-end distribution of the Feynman paths, i.e., the Fourier transform of the momentum distribution. In this formulation, free particle and environmental contributions factorize. Moreover, the environmental contribution has a natural analogy to a free energy surface in statistical mechanics, facilitating the interpretation of experiments. The new formulation is not only conceptually but also computationally advantageous. We illustrate the method with applications to an empirical water model, ab initio ice, and one dimensional model systems.
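
    Schematically, the relation described above can be written as follows (1-D form, notation ours): the momentum distribution is the Fourier transform of the end-to-end distribution of the open Feynman path, which factorizes into a free-particle part and an environmental part that plays the role of a free-energy surface:

```latex
\begin{align}
  n(p) &= \frac{1}{2\pi\hbar}\int \mathrm{d}x\, e^{-\mathrm{i}px/\hbar}\,
          \widetilde{n}(x), \\
  \widetilde{n}(x) &= \widetilde{n}_{\mathrm{free}}(x)\,
          e^{-\Delta F(x)/k_{\mathrm{B}}T},
\end{align}
```

    where $\widetilde{n}(x)$ is the end-to-end distribution and $\Delta F(x)$ is the environmental contribution interpreted as a free-energy surface, which is what makes the formulation convenient for interpreting deep inelastic neutron scattering experiments.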

  5. A system for intelligent teleoperation research

    NASA Technical Reports Server (NTRS)

    Orlando, N. E.

    1983-01-01

The Automation Technology Branch of NASA Langley Research Center is developing a research capability in the field of artificial intelligence, particularly as applicable to teleoperator/robotics development for remote space operations. As a testbed for experimentation in these areas, a system concept has been developed and is being implemented. This system, termed DAISIE (Distributed Artificially Intelligent System for Interacting with the Environment), interfaces the key processes of perception, reasoning, and manipulation by linking hardware sensors and manipulators to a modular artificial intelligence (AI) software system in a hierarchical control structure. Verification experiments have been performed: one experiment used a blocksworld database and planner embedded in the DAISIE system to intelligently manipulate a simple physical environment; the other implemented a joint-space collision avoidance algorithm. Continued system development is planned.

  6. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad range of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel, our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as the data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; and AFS for legacy distributed-file-system services. In this paper we summarise the experience of supporting the LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment, with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  7. A novel method for the investigation of liquid/liquid distribution coefficients and interface permeabilities applied to the water-octanol-drug system.

    PubMed

    Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette

    2011-09-01

In this work a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables simultaneous analysis of the drug content in the octanol and the water phase without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also provide new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.
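
    The quantity being measured reduces to a ratio of phase concentrations, which localized NMR obtains from the drug signal in each phase without physically separating them. A minimal sketch of the arithmetic (the concentration values are invented for illustration):

```python
import math

# The distribution coefficient is D = [drug]_octanol / [drug]_water,
# usually reported as logD at a fixed pH.  With localized NMR, the drug
# signal integral in each phase (proportional to concentration) can be
# used in place of separately assayed concentrations.
def log_d(conc_octanol, conc_water):
    """log10 of the octanol/water distribution coefficient."""
    return math.log10(conc_octanol / conc_water)

# Example with made-up numbers: 4.2 mM in octanol vs 0.042 mM in water
# at pH 7.4 gives logD = 2.0.
value = log_d(4.2, 0.042)
```

    The same ratio computed at a series of time points is what lets the NMR experiment follow the drug's migration across the interface.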

  8. Can airborne ultrasound monitor bubble size in chocolate?

    NASA Astrophysics Data System (ADS)

    Watson, N.; Hazlehurst, T.; Povey, M.; Vieira, J.; Sundara, R.; Sandoz, J.-P.

    2014-04-01

    Aerated chocolate products consist of solid chocolate with the inclusion of bubbles and are a popular consumer product in many countries. The volume fraction and size distribution of the bubbles affect their sensory properties and manufacturing cost. For these reasons it is important to have an online, real-time process monitoring system capable of measuring the bubble size distribution. As these products are eaten by consumers, it is desirable that the monitoring system is non-contact to avoid food contamination. In this work we assess the feasibility of using an airborne ultrasound system to monitor the bubble size distribution in aerated chocolate bars. The experimental results from the airborne acoustic experiments were compared with theoretical results for known bubble size distributions using COMSOL Multiphysics. This combined experimental and theoretical approach is used to develop a greater understanding of how ultrasound propagates through aerated chocolate and to assess the feasibility of using airborne ultrasound to monitor bubble size distribution in these systems. The results indicated that a smaller bubble size distribution results in an increase in attenuation through the product.

  9. Influence of coupling distribution on some acoustic resonant effects in amorphous compounds

    NASA Astrophysics Data System (ADS)

    Devaud, M.; Prieur, J.-Y.

    1988-08-01

    The consequences of taking into account the distribution of the coupling constants between phonons and two-level systems in amorphous compounds are considered in two concrete experimental situations: saturation and hole-burning experiments. It is shown that, although the general shapes of the variation curves are not much influenced by the distribution, it is essential to take it into account when determining the critical power.

  10. Experience with an integrated control and monitoring system at the El Segundo generating station

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papilla, R.P.; McKinley, J.H.; Blanco, M.A.

    1992-01-01

    This paper describes the EPRI/Southern California Edison (SCE) El Segundo Integrated Control and Monitoring System (ICMS) project and relates key project experiences. The ICMS project is a cost-shared effort between EPRI and SCE designed to address the issues involved with integrating power plant diagnostic and condition monitoring with control. A digital distributed control system retrofit for SCE's El Segundo Units 3 and 4 provided the case study. Although many utilities have retrofitted power plant units with distributed control systems (DCSs) and have applied diagnostics and monitoring programs to improve operations and performance, the approach taken in this project, that is, integrating the monitoring function with the control function, is profoundly new and unique. Over the life of the El Segundo ICMS, SCE expects to realize savings from life optimization, increased operating flexibility, improved heat rate, reduced NO{sub x} emissions, and lower maintenance costs. These savings are expected to be significant over the life of the system.

  11. Focusing Intense Charged Particle Beams with Achromatic Effects for Heavy Ion Fusion

    NASA Astrophysics Data System (ADS)

    Mitrani, James; Kaganovich, Igor

    2012-10-01

    Final focusing systems designed to minimize the effects of chromatic aberrations in the Neutralized Drift Compression Experiment (NDCX-II) are described. NDCX-II is a linear induction accelerator, designed to accelerate short bunches at high current. Previous experiments showed that neutralized drift compression significantly compresses the beam longitudinally (~60×) in the z-direction, resulting in a narrow distribution in z-space, but a wide distribution in pz-space. Using simple lenses (e.g., solenoids, quadrupoles) to focus beam bunches with wide distributions in pz-space results in chromatic aberrations, leading to lower beam intensities (J/cm^2). Therefore, the final focusing system must be designed to compensate for chromatic aberrations. The paraxial ray equations and beam envelope equations are numerically solved for parameters appropriate to NDCX-II. Based on these results, conceptual designs for final focusing systems using a combination of solenoids and/or quadrupoles are optimized to compensate for chromatic aberrations. Lens aberrations and emittance growth will be investigated, and analytical results will be compared with results from numerical particle-in-cell (PIC) simulation codes.

  12. James Reilly | NREL

    Science.gov Websites

    Experience and expertise in energy projects ranging from 50 kW to 150 MW across distribution and transmission... security for the Department of Defense... transmission and distribution system design. Education: B.S., Energy... Working in the Engineering and Modeling Group of NREL's Integrated Applications Center under the Energy...

  13. Atomic pair distribution function at the Brazilian Synchrotron Light Laboratory: application to the Pb1-xLaxZr0.40Ti0.60O3 ferroelectric system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleta, M. E.; Eleotério, M.; Mesquita, A.

    2017-07-29

    This work reports the setting up of the X-ray diffraction and spectroscopy beamline at the Brazilian Synchrotron Light Laboratory for performing total scattering experiments to be analyzed by atomic pair distribution function (PDF) studies. The results of a PDF refinement for an Al2O3 standard are presented and compared with data acquired at a beamline of the Advanced Photon Source, where it is common to perform this type of experiment. A preliminary characterization of the Pb1-xLaxZr0.40Ti0.60O3 ferroelectric system, with x = 0.11, 0.12 and 0.15, is also shown.

  14. Structural Heterogeneity and Quantitative FRET Efficiency Distributions of Polyprolines through a Hybrid Atomistic Simulation and Monte Carlo Approach

    PubMed Central

    Hoefling, Martin; Lima, Nicola; Haenni, Dominik; Seidel, Claus A. M.; Schuler, Benjamin; Grubmüller, Helmut

    2011-01-01

    Förster Resonance Energy Transfer (FRET) experiments probe molecular distances via distance dependent energy transfer from an excited donor dye to an acceptor dye. Single molecule experiments not only probe average distances, but also distance distributions or even fluctuations, and thus provide a powerful tool to study biomolecular structure and dynamics. However, the measured energy transfer efficiency depends not only on the distance between the dyes, but also on their mutual orientation, which is typically inaccessible to experiments. Thus, assumptions on the orientation distributions and averages are usually made, limiting the accuracy of the distance distributions extracted from FRET experiments. Here, we demonstrate that by combining single molecule FRET experiments with the mutual dye orientation statistics obtained from Molecular Dynamics (MD) simulations, improved estimates of distances and distributions are obtained. From the simulated time-dependent mutual orientations, FRET efficiencies are calculated and the full statistics of individual photon absorption, energy transfer, and photon emission events is obtained from subsequent Monte Carlo (MC) simulations of the FRET kinetics. All recorded emission events are collected to bursts from which efficiency distributions are calculated in close resemblance to the actual FRET experiment, taking shot noise fully into account. Using polyproline chains with attached Alexa 488 and Alexa 594 dyes as a test system, we demonstrate the feasibility of this approach by direct comparison to experimental data. We identified cis-isomers and different static local environments as sources of the experimentally observed heterogeneity. Reconstructions of distance distributions from experimental data at different levels of theory demonstrate how the respective underlying assumptions and approximations affect the obtained accuracy. 
Our results show that dye fluctuations obtained from MD simulations, combined with MC single photon kinetics, provide a versatile tool to improve the accuracy of distance distributions that can be extracted from measured single molecule FRET efficiencies. PMID:21629703
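The underlying transfer relation and the shot-noise Monte Carlo idea above can be sketched compactly. This is an illustrative sketch only: it uses the idealized isotropic-average efficiency E = 1/(1 + (r/R0)^6), and the Förster radius and burst size below are invented values, not the paper's parameters:

```python
# Hedged sketch: idealized FRET efficiency plus a toy Monte Carlo of
# shot noise, where each burst of n photons is split donor/acceptor
# with probability E. R0 and burst sizes are illustrative assumptions.
import random

def fret_efficiency(r_nm: float, r0_nm: float = 5.4) -> float:
    """Isotropic-average transfer efficiency E = 1 / (1 + (r/R0)^6)."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

def burst_efficiencies(r_nm, n_bursts=1000, photons_per_burst=50, seed=1):
    """Simulate per-burst efficiencies, including shot noise."""
    random.seed(seed)
    e = fret_efficiency(r_nm)
    effs = []
    for _ in range(n_bursts):
        acceptor = sum(random.random() < e for _ in range(photons_per_burst))
        effs.append(acceptor / photons_per_burst)
    return effs

print(round(fret_efficiency(5.4), 2))  # 0.5 at r = R0
```

Histogramming `burst_efficiencies(...)` yields the kind of broadened efficiency distribution that shot noise alone produces, against which additional experimental heterogeneity can be compared.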

  15. Validation of the MCNP6 electron-photon transport algorithm: multiple-scattering of 13- and 20-MeV electrons in thin foils

    NASA Astrophysics Data System (ADS)

    Dixon, David A.; Hughes, H. Grady

    2017-09-01

    This paper presents a validation test comparing angular distributions from an electron multiple-scattering experiment with those generated using the MCNP6 Monte Carlo code system. In this experiment, 13- and 20-MeV electron pencil beams are deflected by thin foils with atomic numbers from 4 to 79. To determine the angular distribution, the fluence is measured down range of the scattering foil at various radii orthogonal to the beam line. The characteristic angle (the angle at which the maximum of the distribution is reduced by 1/e) is then determined from the angular distribution and compared with experiment. Multiple-scattering foils tested herein include beryllium, carbon, aluminum, copper, and gold. For the default electron-photon transport settings, the calculated characteristic angle was statistically distinguishable from measurement and generally broader than the measured distributions. The average relative difference ranged from 5.8% to 12.2% over all of the foils, source energies, and physics settings tested. This validation illuminated a deficiency in the computation of the underlying angular distributions that is well understood. As a result, code enhancements were made to stabilize the angular distributions in the presence of very small substeps. However, the enhancement only marginally improved results, indicating that additional algorithmic details should be studied.
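Extracting the characteristic angle from a measured angular distribution amounts to finding where the fluence falls to 1/e of its maximum. A minimal sketch, using a synthetic Gaussian profile rather than the experiment's data:

```python
# Hedged sketch: characteristic angle (1/e point) of an angular fluence
# distribution, found by linear interpolation on the descending side.
# The test profile below is a synthetic Gaussian, not measured data.
import numpy as np

def characteristic_angle(angles_deg, fluence):
    """Angle at which the fluence falls to 1/e of its maximum."""
    target = fluence.max() / np.e
    i_max = fluence.argmax()
    desc_ang, desc_flu = angles_deg[i_max:], fluence[i_max:]
    j = np.argmax(desc_flu < target)      # first point below the target
    a0, a1 = desc_ang[j - 1], desc_ang[j]
    f0, f1 = desc_flu[j - 1], desc_flu[j]
    return a0 + (f0 - target) * (a1 - a0) / (f0 - f1)

# Gaussian test profile: the 1/e half-width is at theta = 3.0 degrees
theta = np.linspace(0, 10, 1001)
f = np.exp(-(theta / 3.0) ** 2)
print(round(characteristic_angle(theta, f), 2))  # 3.0
```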

  16. The Separatory Cylinder: A Novel Solvent Extraction System for the Study of Chemical Equilibrium in Solution.

    ERIC Educational Resources Information Center

    Cwikel, Dori; And Others

    1986-01-01

    Discusses the use of the separatory cylinder in student laboratory experiments for investigating equilibrium distribution of a solute between immiscible phases. Describes the procedures for four sets of experiments of this nature. Lists of materials needed and quantities of reagents are provided. (TW)

  17. The ISIS Project: Real Experience with a Fault Tolerant Programming System

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert

    1990-01-01

    The ISIS project has developed a distributed programming toolkit and a collection of higher level applications based on these tools. ISIS is now in use at more than 300 locations worldwide. The lessons (and surprises) gained from this experience with the real world are discussed.

  18. Case study of open-source enterprise resource planning implementation in a small business

    NASA Astrophysics Data System (ADS)

    Olson, David L.; Staley, Jesse

    2012-02-01

    Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietary systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences for efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. While these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of risk dimension.

  19. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    PubMed

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
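The iterative idea can be illustrated in a toy, centralized form: a gradient (Richardson) iteration on the weighted least-squares normal equations, in which each sub-system could evaluate its own slice of the correction term and share it with neighbors. This is a sketch under simplifying assumptions, not the paper's algorithm, which additionally uses a scaling parameter and preconditioning to maximize the convergence rate:

```python
# Hedged sketch: iterative solution of a weighted least-squares problem.
# Each block row of A could live on a different sub-system; the example
# matrices and weights are invented for illustration.
import numpy as np

A = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [1, 1, 0], [0, 1, 1], [1, 0, 1]], dtype=float)
W = np.diag([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])   # inverse noise variances
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                                 # noiseless for a clean check

# Gradient iteration on the normal equations:  x <- x + alpha * A^T W (b - A x).
# Each sub-system can compute its slice of A^T W r locally; the slices are
# summed via neighborhood communication.
M = A.T @ W @ A
alpha = 1.0 / np.linalg.norm(M, 2)             # step size; M is SPD, so this converges
x = np.zeros(3)
for _ in range(500):
    x = x + alpha * (A.T @ W @ (b - A @ x))
print(np.allclose(x, x_true, atol=1e-8))       # True
```

The convergence rate of this plain iteration degrades with the condition number of `M`, which is exactly what the paper's scaling and preconditioning address.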

  20. Local oscillator distribution using a geostationary satellite

    NASA Technical Reports Server (NTRS)

    Bardin, Joseph; Weinreb, Sander; Bagri, Durga

    2004-01-01

    A satellite communication system suitable for distribution of local oscillator reference signals for a widely spaced microwave array has been developed and tested experimentally. The system uses a round-trip correction method through the satellite. This experiment was carried out using Telstar-5, a commercial Ku-band geostationary satellite. For this initial experiment, both earth stations were located at the same site to facilitate direct comparison of the received signals. The local oscillator reference frequency was chosen to be 300 MHz and was sent as the difference between two Ku-band tones. The residual error after applying the round-trip correction has been measured to be better than 3 ps for integration times ranging from 1 to 2000 seconds. For integration times greater than 500 seconds, the system outperforms a pair of hydrogen masers, with the limitation believed to be ground-based equipment phase stability. The idea of distributing local oscillators using a geostationary satellite is not new; several researchers experimented with this technique in the eighties, but the accuracy achieved then was 3 to 100 times worse than the present results. Since then, costs have dropped substantially and the performance of various components has improved. An important factor is the leasing of small amounts of satellite communication bandwidth: we lease three 100 kHz bands at approximately one hundredth the cost of a full 36 MHz transponder. Further tests of the system using terminals separated by large distances, comparison tests with two hydrogen masers, and radio interferometry are needed.

  1. Spectral atmospheric observations at Nantucket Island, May 7-14, 1981

    NASA Technical Reports Server (NTRS)

    Talay, T. A.; Poole, L. R.

    1981-01-01

    An experiment was conducted by the NASA Langley Research Center to measure atmospheric optical conditions using a 10-channel solar spectral photometer system. This experiment was part of a larger series of multidisciplinary experiments performed in the area of Nantucket Shoals aimed at studying the dynamics of phytoplankton production processes. Analysis of the collected atmospheric data yields total and aerosol optical depths, transmittances, normalized sky radiance distributions, and total and sky irradiances. Results of this analysis may aid in atmospheric corrections of remote sensor data obtained by several sensors overflying the Nantucket Shoals area. Recommendations are presented concerning future experiments using the described solar photometer system and the calibration and operational deficiencies uncovered during the experiment.

  2. Data acquisition software for DIRAC experiment

    NASA Astrophysics Data System (ADS)

    Olshevsky, V.; Trusov, S.

    2001-08-01

    The structure and basic processes of the data acquisition software of the DIRAC experiment for the measurement of the π+π− atom lifetime are described. The experiment is running on the PS accelerator at CERN. The developed software allows one to accept, record and distribute up to 3 Mbytes of data to consumers in one accelerator supercycle of 14.4 s duration. The described system has been in successful use in the experiment since its startup in 1998.

  3. Equilibrium distribution of rare earth elements between molten KCl-LiCl eutectic salt and liquid cadmium

    NASA Astrophysics Data System (ADS)

    Sakata, Masahiro; Kurata, Masaki; Hijikata, Takatoshi; Inoue, Tadashi

    1991-11-01

    Distribution experiments for several rare earth elements (La, Ce, Pr, Nd and Y) between molten KCl-LiCl eutectic salt and liquid Cd were carried out at 450, 500 and 600°C. The material balance of the rare earth elements after reaching equilibrium, and their distribution and chemical states in a Cd sample frozen after the experiment, were examined. The results suggested the formation of solid intermetallic compounds at concentrations of rare earth metals dissolved in liquid Cd lower than the solubilities measured in the binary alloy systems. The distribution coefficients of the rare earth elements between the two phases (mole fraction in the Cd phase divided by mole fraction in the salt phase) were determined at each temperature. These distribution coefficients were explained satisfactorily by using the activity coefficients of the chlorides and metals in the salt and Cd phases. Both the activity coefficients of the metal and of the chloride caused a much smaller distribution coefficient for Y relative to those of the other elements.

  4. A Down-to-Earth Educational Operating System for Up-in-the-Cloud Many-Core Architectures

    ERIC Educational Resources Information Center

    Ziwisky, Michael; Persohn, Kyle; Brylow, Dennis

    2013-01-01

    We present "Xipx," the first port of a major educational operating system to a processor in the emerging class of many-core architectures. Through extensions to the proven Embedded Xinu operating system, Xipx gives students hands-on experience with system programming in a distributed message-passing environment. We expose the software primitives…

  5. Modeling occupancy distribution in large spaces with multi-feature classification algorithm

    DOE PAGES

    Wang, Wei; Chen, Jiayu; Hong, Tianzhen

    2018-04-07

    Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and the occupancy information determines the cooperation among terminals. However, a person count at room level does not adequately optimize HVAC system operation, because the movement of occupants within the room creates an uneven load distribution. Without accurate knowledge of the occupants’ spatial distribution, the uneven distribution of occupants often results in under-cooling/heating or over-cooling/heating in some thermal zones. Therefore, the lack of high-resolution occupancy distribution is often perceived as a bottleneck for future improvements to HVAC operation efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the experiment outcomes of three case studies were examined to validate detection accuracy. A method based on City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and ground truth and to assess the results. The results show that the accuracy at CBD = 1 is over 71.4% and the accuracy at CBD = 2 reaches up to 92.9%.
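The two ingredients named in the abstract, a k-NN vote over BLE signal features and the City Block Distance between a detected and a true zone-occupancy distribution, can be sketched minimally. The zone labels, RSSI fingerprints, and counts below are invented for illustration, not the study's data:

```python
# Hedged sketch: L1 (City Block) distance between occupancy distributions,
# and a minimal k-NN vote on BLE RSSI feature vectors. All values invented.
from collections import Counter

def cbd(detected, truth):
    """City Block (L1) distance between two zone-count vectors."""
    return sum(abs(d - t) for d, t in zip(detected, truth))

def knn_zone(rssi, labeled, k=3):
    """Classify an RSSI vector to a zone by majority vote of k nearest."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labeled, key=lambda lv: dist(rssi, lv[1]))[:k]
    return Counter(zone for zone, _ in nearest).most_common(1)[0][0]

# Fingerprints: (zone label, RSSI from two hypothetical BLE beacons)
fingerprints = [("A", (-40, -70)), ("A", (-42, -68)),
                ("B", (-75, -45)), ("B", (-70, -48)), ("B", (-72, -50))]
print(knn_zone((-41, -69), fingerprints))  # A
print(cbd([3, 2, 0], [2, 2, 1]))           # 2
```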

  6. Modeling occupancy distribution in large spaces with multi-feature classification algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wei; Chen, Jiayu; Hong, Tianzhen

    Occupancy information enables robust and flexible control of heating, ventilation, and air-conditioning (HVAC) systems in buildings. In large spaces, multiple HVAC terminals are typically installed to provide cooperative services for different thermal zones, and the occupancy information determines the cooperation among terminals. However, a person count at room level does not adequately optimize HVAC system operation, because the movement of occupants within the room creates an uneven load distribution. Without accurate knowledge of the occupants’ spatial distribution, the uneven distribution of occupants often results in under-cooling/heating or over-cooling/heating in some thermal zones. Therefore, the lack of high-resolution occupancy distribution is often perceived as a bottleneck for future improvements to HVAC operation efficiency. To fill this gap, this study proposes a multi-feature k-Nearest-Neighbors (k-NN) classification algorithm to extract occupancy distribution through reliable, low-cost Bluetooth Low Energy (BLE) networks. An on-site experiment was conducted in a typical office of an institutional building to demonstrate the proposed methods, and the experiment outcomes of three case studies were examined to validate detection accuracy. A method based on City Block Distance (CBD) was used to measure the distance between the detected occupancy distribution and ground truth and to assess the results. The results show that the accuracy at CBD = 1 is over 71.4% and the accuracy at CBD = 2 reaches up to 92.9%.

  7. The effect of entrapped nonaqueous phase liquids on tracer transport in heterogeneous porous media: Laboratory experiments at the intermediate scale

    USGS Publications Warehouse

    Barth, Gilbert R.; Illangasekare, T.H.; Rajaram, H.

    2003-01-01

    This work considers the applicability of conservative tracers for detecting high-saturation nonaqueous-phase liquid (NAPL) entrapment in heterogeneous systems. For this purpose, a series of experiments and simulations was performed using a two-dimensional heterogeneous system (10 × 1.2 m), which represents an intermediate scale between laboratory and field scales. Tracer tests performed prior to injecting the NAPL provide the baseline response of the heterogeneous porous medium. Two NAPL spill experiments were performed and the entrapped-NAPL saturation distribution measured in detail using a gamma-ray attenuation system. Tracer tests following each of the NAPL spills produced breakthrough curves (BTCs) reflecting the impact of entrapped NAPL on conservative transport. To evaluate significance, the impact of NAPL entrapment on the conservative-tracer breakthrough curves was compared to simulated breakthrough-curve variability for different realizations of the heterogeneous distribution. Analysis of the results reveals that the NAPL entrapment has a significant impact on the temporal moments of conservative-tracer breakthrough curves. © 2003 Elsevier B.V. All rights reserved.

  8. High-Penetration PV Integration Handbook for Distribution Engineers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seguin, Rich; Woyak, Jeremy; Costyk, David

    2016-01-01

    This handbook has been developed as part of a five-year research project which began in 2010. The National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed together to analyze the impacts of high penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to leverage the experience that SCE and the project team would gain during the installation of 500 MW of commercial-scale PV systems (typically 1-5 MW) starting in 2010 and completing in 2015 within SCE’s service territory, through a program approved by the California Public Utilities Commission (CPUC).

  9. Power Distribution System Planning with GIS Consideration

    NASA Astrophysics Data System (ADS)

    Wattanasophon, Sirichai; Eua-Arporn, Bundhit

    This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested on an actual distribution system, and the results indicate that it can provide satisfactory plans.
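Feeder routing with geographical constraints can be framed as a shortest-path problem: obstacles are omitted edges and high-cost passages carry inflated weights. A minimal sketch of a minimum-path computation (Dijkstra's algorithm) on an invented graph; the paper's actual MPA formulation and cost model are not specified here:

```python
# Hedged sketch: feeder routing as shortest path. Geographical constraints
# are encoded as edge weights; the graph and costs are illustrative.
import heapq

def min_path(graph, src, dst):
    """Dijkstra's algorithm; graph maps node -> [(neighbor, cost), ...]."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, c in graph.get(u, []):
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

# Substation S to load L: the direct route crosses a high-cost passage (10)
g = {"S": [("A", 2), ("L", 10)], "A": [("B", 2)], "B": [("L", 2)]}
print(min_path(g, "S", "L"))  # (6.0, ['S', 'A', 'B', 'L'])
```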

  10. Implementation and characterization of a stable optical frequency distribution system.

    PubMed

    Bernhardt, Birgitta; Hänsch, Theodor W; Holzwarth, Ronald

    2009-09-14

    An optical frequency distribution system has been developed that continuously delivers a stable optical frequency of 268 THz (corresponding to a wavelength of 1118 nm) to different experiments in our institute. For that purpose, a continuous-wave (cw) fiber laser has been stabilized onto a frequency comb and distributed across the building through a fiber network. While the light propagates through the fiber, acoustic and thermal effects degrade the stability and accuracy of the system. However, by employing proper stabilization methods a stability of 2 × 10^(-13) τ^(-1/2) is achieved, limited by the available radio frequency (RF) reference. Furthermore, the issue of counter-dependent results of the Allan deviation was examined during the data evaluation.
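The quoted stability figure is an Allan deviation that averages down as τ^(-1/2), the signature of white frequency noise. A minimal sketch of the non-overlapping Allan deviation on synthetic fractional-frequency data (the noise level is an invented placeholder, not the system's measurement):

```python
# Hedged sketch: non-overlapping Allan deviation of fractional frequency
# data. For white frequency noise it falls roughly as 1/sqrt(m), the
# tau^(-1/2) scaling quoted in the abstract. Data here is synthetic.
import math, random

def allan_deviation(y, m):
    """Allan deviation at averaging factor m (tau = m * tau0)."""
    means = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(len(means) - 1)]
    return math.sqrt(sum(diffs) / (2 * len(diffs)))

random.seed(0)
y = [random.gauss(0, 1e-13) for _ in range(4096)]  # white frequency noise
a1, a16 = allan_deviation(y, 1), allan_deviation(y, 16)
print(a16 / a1 < 0.5)  # True: averages down roughly as 1/sqrt(16)
```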

  11. Challenges facing the distribution of an artificial-intelligence-based system for nursing.

    PubMed

    Evans, S

    1985-04-01

    The marketing and successful distribution of artificial-intelligence-based decision-support systems for nursing face special barriers and challenges. Issues that must be confronted arise particularly from the present culture of the nursing profession as well as the typical organizational structures in which nurses predominantly work. Generalizations in the literature based on the limited experience of physician-oriented artificial intelligence applications (predominantly in diagnosis and pharmacologic treatment) must be modified for applicability to other health professions.

  12. Experiences with an Augmented Human Intellect System: A Revolution in Communication.

    ERIC Educational Resources Information Center

    Bair, James H.

    The Augmented Human Intellect System (AHI) has been designed to facilitate communication among knowledge workers who may accomplish their entire job utilizing this advanced technology. The system is capable of sending information to geographically distributed users. It permits access to and modification of stored information by a number of persons…

  13. Cronus, A Distributed Operating System: Functional Definition and System Concept.

    DTIC Science & Technology

    1984-02-01

    Report No. 5041, Bolt Beranek and Newman Inc. Interprocess Communication: the objective of the DOS interprocess communication (IPC) facility is to... comprehensive enough to support performance monitoring experiments. System Integrity and Survivability: Users...

  14. Spread and SpreadRecorder An Architecture for Data Distribution

    NASA Technical Reports Server (NTRS)

    Wright, Ted

    2006-01-01

    The Space Acceleration Measurement System (SAMS) project at the NASA Glenn Research Center (GRC) has been measuring the microgravity environment of the space shuttle, the International Space Station, MIR, sounding rockets, drop towers, and aircraft since 1991. The Principal Investigator Microgravity Services (PIMS) project at NASA GRC has been collecting, analyzing, reducing, and disseminating over 3 terabytes of collected SAMS and other microgravity sensor data to scientists so they can understand the disturbances that affect their microgravity science experiments. The years of experience with space flight data generation, telemetry, operations, analysis, and distribution give the SAMS/PIMS team a unique perspective on space data systems. In 2005, the SAMS/PIMS team was asked to look into generalizing their data system and combining it with the nascent medical instrumentation data systems being proposed for ISS and beyond, specifically the Medical Computer Interface Adapter (MCIA) project. The SpreadRecorder software is a prototype system developed by SAMS/PIMS to explore ways of meeting the needs of both the medical and microgravity measurement communities. It is hoped that the system is general enough to be used for many other purposes.

  15. Contrasting distribution patterns of invasive and naturalized non-native species along environmental gradients in a semi-arid montane ecosystem

    Treesearch

    Kelly M. Andersen; Bridgett J. Naylor; Bryan A. Endress; Catherine G. Parks

    2015-01-01

    Questions: Mountain systems have high abiotic heterogeneity over local spatial scales, offering natural experiments for examining plant species invasions. We ask whether functional groupings explain non-native species spread into native vegetation and up elevation gradients. We examine whether non-native species distribution patterns are related to environmental...

  16. Application of distributed optical fiber sensing technologies to the monitoring of leakage and abnormal disturbance of oil pipeline

    NASA Astrophysics Data System (ADS)

    Yang, Xiaojun; Zhu, Xiaofei; Deng, Chi; Li, Junyi; Liu, Cheng; Yu, Wenpeng; Luo, Hui

    2017-10-01

    To improve the management and monitoring of leakage and abnormal disturbances of long-distance oil pipelines, a distributed optical fiber temperature and vibration sensing system was employed to test the feasibility of health monitoring on a domestic oil pipeline. Simulated leakage and abnormal disturbance events were performed in the experiment. It is demonstrated that leakage and abnormal disturbance events can be monitored and located accurately with the distributed optical fiber sensing system, which exhibits good performance in sensitivity, reliability, operation, and maintenance, and shows good prospects for market application.
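Locating an event along the fiber rests on a simple time-of-flight relation: the position is z = c·t / (2n), where t is the round-trip time of the backscattered light and n is the fiber's group index. A minimal sketch with illustrative values (the group index and timing below are assumptions, not the system's parameters):

```python
# Hedged sketch: event location along a sensing fiber from the round-trip
# time of backscattered light, z = c * t / (2 * n). Values illustrative.
def event_position_m(round_trip_s, n_group=1.468, c=2.99792458e8):
    """Distance along the fiber to the scattering event, in meters."""
    return c * round_trip_s / (2.0 * n_group)

# A 100-microsecond round trip corresponds to roughly 10.2 km of fiber
print(round(event_position_m(100e-6) / 1000, 2))  # 10.21
```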

  17. MonALISA, an agent-based monitoring and control system for the LHC experiments

    NASA Astrophysics Data System (ADS)

    Balcas, J.; Kcira, D.; Mughal, A.; Newman, H.; Spiropulu, M.; Vlimant, J. R.

    2017-10-01

    MonALISA, which stands for Monitoring Agents using a Large Integrated Services Architecture, has been developed over the last fifteen years by the California Institute of Technology (Caltech) and its partners with the support of the software and computing programs of the CMS and ALICE experiments at the Large Hadron Collider (LHC). The framework is based on a Dynamic Distributed Service Architecture and is able to provide complete system monitoring, performance metrics of applications, jobs, or services, system control, and global optimization services for complex systems. A short overview and status of MonALISA is given in this paper.

  18. One-Dimensional Hydraulic Theory Applied to Experimental Subaqueous Fans with Supercritical Distributaries

    NASA Astrophysics Data System (ADS)

    Hamilton, P.; Strom, K.; Hoyal, D. C. J. D.

    2015-12-01

    Subaqueous fans are distributive channel systems that form in a variety of settings including offshore marine, sub-lacustrine, and reservoirs. These distributive systems create complex sedimentation patterns through repeated avulsion as they fill a basin. Here we ran a series of experiments to explore the intrinsic controls on avulsion cycles on subaqueous fans. Experiments are a convenient way to study these systems since the time-scale of fan development is dramatically shortened compared to natural settings, all boundary conditions can be controlled, and the experimental domain can be instrumented to monitor the pertinent hydraulic and morphologic variables. Experiments in this study used saline underflows and crushed plastic sediment fed down an imposed slope mantled with the same sediment. Avulsion cycles are a central feature in these experiments and are characterized by: (1) channel extension and stagnation; (2) bar aggradation and hydraulic jump initiation; (3) upstream retreat; and (4) flow avulsion. Analysis of these cycles yields the following conclusions: (1) distributive channels cease progradation due to a drop in sediment transport capacity in an expanded region ahead of the channel; (2) mouth bar aggradation creates a flow obstacle large enough to trigger the hydraulic jump feedback; (3) hydraulic jump regions are a significant locus of deposition; and (4) the upstream retreat rate is a function of sediment supply and the strength of the jump. We found that simple one-dimensional hydraulic principles such as the choked-flow condition and the sequent depth ratio help to explain hydraulic jump initiation and emplaced lobe thickness, respectively.
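    The sequent depth ratio invoked above follows from one-dimensional momentum conservation across a hydraulic jump (the Bélanger equation), here written in terms of the upstream (densimetric, for saline underflows) Froude number. A minimal sketch; this is the textbook relation, not a fit from this study:

```python
import math

def sequent_depth_ratio(fr1):
    """Belanger equation: sequent (conjugate) depth ratio h2/h1 across a
    hydraulic jump as a function of the upstream Froude number fr1.
    For density underflows, fr1 is the densimetric Froude number."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)
```

    At fr1 = 1 the ratio is 1 (no jump); stronger supercritical inflow gives a larger sequent depth, consistent with the thicker emplaced lobes the authors relate to jump strength.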

  19. Simulations and Experiments Reveal the Relative Significance of the Free Chlorine/Nitrite Reaction in Chloraminated Systems

    EPA Science Inventory

    Nitrification can be a problem in distribution systems where chloramines are used as secondary disinfectants. A very rapid monochloramine residual loss is often associated with the onset of nitrification. During nitrification, ammonia-oxidizing bacteria biologically oxidize fre...

  20. Proceedings : Seminar on the use of Composite Third Rail in Electrified Transit and Commuter Rail Systems

    DOT National Transportation Integrated Search

    1978-11-01

    The seminar was organized at the request of UMTA to disseminate accurate information on, and experience with, composite (aluminum and steel) third, or contact rail, in wayside power distribution systems of electrified urban rail properties. The semin...

  1. An adaptable product for material processing and life science missions

    NASA Technical Reports Server (NTRS)

    Wassick, Gregory; Dobbs, Michael

    1995-01-01

    The Experiment Control System II (ECS-II) is designed to make available to the microgravity research community the same tools and mode of automated experimentation that their ground-based counterparts have enjoyed for the last two decades. The design goal was accomplished by combining commercial automation tools familiar to the experimenter community with system control components that interface with the on-orbit platform in a distributed architecture. The architecture insulates the experimenter's tools from the platform-specific components necessary for managing a payload. By using commercial software and hardware components whenever possible, development costs were greatly reduced compared to traditional space development projects. Using commercial-off-the-shelf (COTS) components also improved the usability of the system by providing familiar user interfaces and a wealth of readily available documentation, and reduced the need for training on system-specific details. The modularity of the distributed architecture makes it very amenable to modification for different on-orbit experiments requiring robotics-based automation.

  2. Astronaut David Wolf in medical experiment in SLS-2

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Astronaut David A. Wolf, mission specialist, participates in an experiment that investigates in-space distribution and movement of blood and gas in the pulmonary system. The data gathered during the two-week flight will be compared with results of tests performed on Earth to determine the changes that occur in pulmonary functions.

  3. Transition path time distributions

    NASA Astrophysics Data System (ADS)

    Laleman, M.; Carlon, E.; Orland, H.

    2017-12-01

    Biomolecular folding, at least in simple systems, can be described as a two-state transition in a free energy landscape with two deep wells separated by a high barrier. Transition paths are the short segments of the trajectories that actually cross the barrier. Average transition path times and, recently, their full probability distribution have been measured for several biomolecular systems, e.g., in the folding of nucleic acids or proteins. Motivated by these experiments, we have calculated the full transition path time distribution for a single stochastic particle crossing a parabolic barrier, including inertial terms which were neglected in previous studies. These terms influence the short time scale dynamics of a stochastic system and can be of experimental relevance in view of the short duration of transition paths. We derive the full transition path time distribution as well as the average transition path times and discuss the similarities and differences with the high friction limit.
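    The setup described here can be sketched numerically: an underdamped Langevin particle (inertia included) launched over an inverted parabolic barrier, counting only paths that run from -a to +a without recrossing -a. Parameter values below are illustrative, not those of the paper:

```python
import math
import random

def transition_path_times(n_paths=100, k=2.0, gamma=1.0, kT=1.0,
                          a=1.0, dt=1e-3, seed=0):
    """Sample transition path times for a unit-mass particle crossing an
    inverted parabolic barrier V(x) = -k*x**2/2 via underdamped Langevin
    dynamics. A transition path starts at -a and reaches +a without first
    recrossing -a; other launches are discarded and retried."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * gamma * kT * dt)  # thermal kick per step
    times = []
    while len(times) < n_paths:
        x, v, t = -a, abs(rng.gauss(0.0, math.sqrt(kT))), 0.0  # launch inward
        while -a <= x <= a:
            v += (k * x - gamma * v) * dt + sigma * rng.gauss(0.0, 1.0)
            x += v * dt
            t += dt
        if x > a:  # crossed without recrossing -a: a transition path
            times.append(t)
    return times
```

    Histogramming the returned times approximates the transition path time distribution; reducing gamma makes the inertial effects on the short-time part of the distribution visible.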

  4. Framework of distributed coupled atmosphere-ocean-wave modeling system

    NASA Astrophysics Data System (ADS)

    Wen, Yuanqiao; Huang, Liwen; Deng, Jian; Zhang, Jinfeng; Wang, Sisi; Wang, Lijun

    2006-05-01

    To study the interactions between the atmosphere and ocean as well as their important role in the intense weather systems of coastal areas, and to improve the forecasting ability for hazardous weather processes in coastal areas, a coupled atmosphere-ocean-wave modeling system has been developed. The agent-based environment framework for linking models allows flexible and dynamic information exchange between models. For the purpose of flexibility, portability and scalability, the framework of the whole system takes a multi-layer architecture that includes a user interface layer, computational layer and service-enabling layer. The numerical experiment presented in this paper demonstrates the performance of the distributed coupled modeling system.

  5. Proof-of-principle experiment of reference-frame-independent quantum key distribution with phase coding

    PubMed Central

    Liang, Wen-Ye; Wang, Shuang; Li, Hong-Wei; Yin, Zhen-Qiang; Chen, Wei; Yao, Yao; Huang, Jing-Zheng; Guo, Guang-Can; Han, Zheng-Fu

    2014-01-01

    We have demonstrated a proof-of-principle experiment of reference-frame-independent phase coding quantum key distribution (RFI-QKD) over an 80-km optical fiber. After considering the finite-key bound, we still achieve a distance of 50 km. In this scenario, the phases of the basis states are related by a slowly time-varying transformation. Furthermore, we developed and realized a new decoy state method for RFI-QKD systems with weak coherent sources to counteract the photon-number-splitting attack. With the help of a reference-frame-independent protocol and a Michelson interferometer with Faraday rotator mirrors, our system is rendered immune to the slow phase changes of the interferometer and the polarization disturbances of the channel, making the procedure very robust. PMID:24402550
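    The reference-frame independence rests on a correlation quantity, usually called C, built from the X and Y basis correlations, which stays constant under a slow relative rotation of the two frames. A toy sketch under an idealized correlation model; the visibility V and misalignment angle beta parametrization is an assumption for illustration, not the experiment's data model:

```python
import math

def rfi_C(beta, visibility=1.0):
    """RFI-QKD quality parameter C = E_XX^2 + E_XY^2 + E_YX^2 + E_YY^2,
    assuming a frame misalignment beta rotates the X/Y correlations as
    E_XX = E_YY = V*cos(beta) and E_XY = -E_YX = V*sin(beta)."""
    exx = visibility * math.cos(beta)
    exy = visibility * math.sin(beta)
    eyx = -visibility * math.sin(beta)
    eyy = visibility * math.cos(beta)
    return exx ** 2 + exy ** 2 + eyx ** 2 + eyy ** 2
```

    C evaluates to 2*V**2 for any beta, which is why a slowly time-varying phase relation between the basis states does not degrade the protocol.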

  6. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1995-01-01

    The scope of this project dealt with the investigation of the requirements to support distributed computing of scientific computations over a cluster of cooperative workstations. Various experiments on computations for the solution of simultaneous linear equations were performed in the early phase of the project to gain experience in the general nature and requirements of scientific applications. A specification of a distributed integrated computing environment, DICE, based on a distributed shared memory communication paradigm has been developed and evaluated. The distributed shared memory model facilitates porting existing parallel algorithms that have been designed for shared memory multiprocessor systems to the new environment. The potential of this new environment is to provide supercomputing capability through the utilization of the aggregate power of workstations cooperating in a cluster interconnected via a local area network. Workstations, generally, do not have the computing power to tackle complex scientific applications, making them primarily useful for visualization, data reduction, and filtering as far as complex scientific applications are concerned. There is a tremendous amount of computing power that is left unused in a network of workstations. Very often a workstation is simply sitting idle on a desk. A set of tools can be developed to take advantage of this potential computing power to create a platform suitable for large scientific computations. The integration of several workstations into a logical cluster of distributed, cooperative, computing stations presents an alternative to shared memory multiprocessor systems. In this project we designed and evaluated such a system.
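    The simultaneous-linear-equations experiments mentioned above map naturally onto a row-partitioned Jacobi iteration, the kind of computation DICE targets. In this sketch the per-block loops stand in for workstations reading a shared iterate through distributed shared memory; it is not the DICE implementation itself:

```python
def jacobi_distributed(A, b, n_workers=2, iters=100):
    """Row-partitioned Jacobi iteration for A x = b. Each 'worker' owns an
    interleaved block of rows and updates its unknowns from the shared
    current iterate x; in a DSM environment like DICE these blocks would
    run on separate workstations sharing x over the network."""
    n = len(b)
    x = [0.0] * n
    blocks = [range(w, n, n_workers) for w in range(n_workers)]
    for _ in range(iters):
        x_new = x[:]
        for block in blocks:
            for i in block:
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x_new[i] = (b[i] - s) / A[i][i]
        x = x_new
    return x
```

    Because every worker reads the same previous iterate and writes disjoint entries, the update is embarrassingly parallel within an iteration, which is exactly the property a shared-memory paradigm exploits.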

  7. Distributed weighted least-squares estimation with fast convergence for large-scale systems

    PubMed Central

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm is maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods. PMID:25641976
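    The global WLS estimate solves the normal equations (Σᵢ Hᵢᵀ Wᵢ Hᵢ) x = Σᵢ Hᵢᵀ Wᵢ yᵢ. A generic gradient-style iteration toward that solution, in which each sub-system contributes only a term computed from its local data, conveys the distributed idea; this is a sketch, not the paper's preconditioned algorithm:

```python
def distributed_wls(subsystems, alpha=0.1, iters=500, n_params=2):
    """Iterate x <- x + alpha * sum_i H_i^T W_i (y_i - H_i x).
    Each tuple (H, W, y) is one sub-system's local model and measurement;
    the summation of local gradients stands in for neighborhood
    communication. Converges to the global WLS estimate for small alpha."""
    x = [0.0] * n_params
    for _ in range(iters):
        grad = [0.0] * n_params
        for H, W, y in subsystems:
            for r in range(len(y)):
                resid = W[r] * (y[r] - sum(H[r][c] * x[c] for c in range(n_params)))
                for c in range(n_params):
                    grad[c] += H[r][c] * resid
        x = [x[c] + alpha * grad[c] for c in range(n_params)]
    return x
```

    The scaling parameter alpha plays the same role as the paper's convergence-rate tuning: too small is slow, too large diverges.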

  8. Attitudes of Germans towards distributive issues in the German health system.

    PubMed

    Ahlert, Marlies; Pfarr, Christian

    2016-05-01

    Social health care systems are inevitably confronted with the scarcity of resources and the resulting distributional challenges. Since prioritization implies distributional effects, decisions regarding respective rules should take citizens' preferences into account. In this study we concentrate on two distributive issues in the German health system: firstly, we analyze the acceptance of prioritizing decisions concerning the treatment of certain patient groups, in this case patients who all need a heart operation. We focus on the patient criteria smoking behavior, age and whether the patient has or does not have young children. Secondly, we investigate Germans' opinions towards income-dependent health services. The results reveal the strong effects of individuals' attitudes regarding general aspects of the health system on priorities, e.g. that individuals with an unhealthy lifestyle should not be prioritized. In addition, experience of limited access to health services is found to have a strong influence on citizens' attitudes, too. Finally, decisions on different prioritization criteria are found to be not independent.

  9. Space Object and Light Attribute Rendering (SOLAR) Projection System

    DTIC Science & Technology

    2017-05-08

    DISTRIBUTION A: public release, distribution unlimited. A state-of-the-art planetarium-style projection system called Space Object and Light Attribute Rendering (SOLAR) was developed for emulation of a variety of close proximity and long range imaging experiments. University at Buffalo's Space...

  10. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  11. ITEL Experiment Module and its Flight on MASER9

    NASA Astrophysics Data System (ADS)

    Löth, K.; Schneider, H.; Larsson, B.; Jansson, O.; Houltz, Y.

    2002-01-01

    The ITEL (Interfacial Turbulence in Evaporating Liquid) module is built under contract from the European Space Agency (ESA) and is scheduled to fly onboard a Sounding Rocket (MASER 9) in March 2002. The project is conducted by Swedish Space Corporation (SSC) with Lambda-X as a subcontractor responsible for the optical system. The Principal Investigator is Pierre Colinet from Université Libre de Bruxelles (ULB). The experiment in ITEL on Maser 9 is part of a research program that will make use of the International Space Station. The purpose of the flight on Maser 9 is to observe the cellular convection (Marangoni-Bénard instability) which arises when the surface tension varies with temperature, yielding thermocapillary instabilities. During the 6 minutes of microgravity of the ITEL experiment, a highly volatile liquid layer (ethyl alcohol) will be evaporated, and the convection phenomena generated by the evaporation process will be visualized. Due to the cooling by latent heat consumption at the level of the evaporating free surface, a temperature gradient is induced perpendicularly to it. The flight experiment module contains one experiment cell, including a gas system for regulation of nitrogen flow over the evaporating surface and an injection unit that is used for injection of liquid into the cell both initially and during surface regulation. The experiment cell is equipped with pressure and flow sensors as well as thermocouples both inside the liquid and at different positions in the cell. Two optical diagnostic systems have been developed around the experiment cell. An interferometric optical tomograph measures the 3-dimensional distribution of temperature in the evaporating liquid and a Schlieren system visualizes the temperature gradients inside the liquid together with the liquid surface deformation. A PC/104 based electronic system is used for management and control of the experiment.
The electronic system handles measurements, housekeeping, the image capture system, surface and pressure regulation, as well as storage of data. The images are stored onboard on three DV tape recorders. During flight, video images as well as data are sent to the ground, and the experiment can be controlled via telecommands. In this presentation we will focus on the technical parts of the experiment, the overall module and the preliminary technical results obtained from the flight, including reconstructions of 3-dimensional temperature distributions.

  12. Simulations and Experiments Reveal the Relative Significance of the Free Chlorine/Nitrite Reaction in Chloraminated Systems - slides

    EPA Science Inventory

    Nitrification can be a problem in distribution systems where chloramines are used as secondary disinfectants. A very rapid monochloramine residual loss is often associated with the onset of nitrification. During nitrification, ammonia-oxidizing bacteria biologically oxidize fre...

  13. A model for a drug distribution system in remote Australia as a social determinant of health using event structure analysis.

    PubMed

    Rovers, John P; Mages, Michelle D

    2017-09-25

    The social determinants of health include the health systems under which people live and utilize health services. One social determinant, for which pharmacists are responsible, is designing drug distribution systems that ensure patients have safe and convenient access to medications. This is critical for settings with poor access to health care. Rural and remote Australia is one example of a setting where the pharmacy profession, schools of pharmacy, and regulatory agencies require pharmacists to assure medication access. Studies of drug distribution systems in such settings are uncommon. This study describes a model for a drug distribution system in an Aboriginal Health Service in remote Australia. The results may be useful for policy setting, pharmacy system design, health professions education, benchmarking, or quality assurance efforts for health system managers in similarly remote locations. The results also suggest that pharmacists can promote access to medications as a social determinant of health. The primary objective of this study was to propose a model for a drug procurement, storage, and distribution system in a remote region of Australia. The secondary objective was to learn the opinions and experiences of healthcare workers under the model. Qualitative research methods were used. Semi-structured interviews were performed with a convenience sample of 11 individuals employed by an Aboriginal health service. Transcripts were analyzed using Event Structure Analysis (ESA) to develop the model. Transcripts were also analyzed to determine the opinions and experiences of health care workers. The model was comprised of 24 unique steps with seven distinct components: choosing a supplier; creating a list of preferred medications; budgeting and ordering; supply and shipping; receipt and storage in the clinic; prescribing process; dispensing and patient counseling. 
Interviewees described opportunities for quality improvement in choosing suppliers, legal issues and staffing, cold chain integrity, medication shortages and wastage, and adherence to policies. The model illustrates how pharmacists address medication access as a social determinant of health, and may be helpful for policy setting, system design, benchmarking, and quality assurance by health system designers. ESA is an effective and novel method of developing such models.

  14. Cyber Security Research Frameworks For Coevolutionary Network Defense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rush, George D.; Tauritz, Daniel Remy

    Several architectures have been created for developing and testing systems used in network security, but most are meant to provide a platform for running cyber security experiments as opposed to automating experiment processes. In the first paper, we propose a framework termed Distributed Cyber Security Automation Framework for Experiments (DCAFE) that enables experiment automation and control in a distributed environment. Predictive analysis of adversaries is another thorny issue in cyber security. Game theory can be used to mathematically analyze adversary models, but its scalability limitations restrict its use. Computational game theory allows us to scale classical game theory to larger, more complex systems. In the second paper, we propose a framework termed Coevolutionary Agent-based Network Defense Lightweight Event System (CANDLES) that can coevolve attacker and defender agent strategies and capabilities and evaluate potential solutions with a custom network defense simulation. The third paper is a continuation of the CANDLES project in which we rewrote key parts of the framework. Attackers and defenders have been redesigned to evolve pure strategies, and a new network security simulation is devised which specifies network architecture and adds a temporal aspect. We also add a hill climber algorithm to evaluate the search space and justify the use of a coevolutionary algorithm.
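    The coevolutionary core of CANDLES — attacker and defender strategies adapting against each other, each evaluated through a simulation — can be caricatured in a few lines. The bitstring encoding, mutation-only hill climbing, and matching-game payoff below are invented for illustration; the actual system evolves richer strategies against a custom network defense simulation:

```python
import random

def coevolve(rounds=200, n_bits=8, seed=1):
    """Minimal coevolutionary loop: attacker and defender bitstrings are
    mutated each round and the mutant is kept only if it does at least as
    well against the current opponent. The payoff is a toy covering game:
    the attacker scores on each bit it sets that the defender leaves unset."""
    rng = random.Random(seed)

    def payoff(att, dfn):
        return sum(1 for a, d in zip(att, dfn) if a == 1 and d == 0)

    def mutate(s):
        i = rng.randrange(len(s))
        return s[:i] + [1 - s[i]] + s[i + 1:]

    att = [rng.randint(0, 1) for _ in range(n_bits)]
    dfn = [rng.randint(0, 1) for _ in range(n_bits)]
    for _ in range(rounds):
        cand = mutate(att)
        if payoff(cand, dfn) >= payoff(att, dfn):  # attacker maximizes
            att = cand
        cand = mutate(dfn)
        if payoff(att, cand) <= payoff(att, dfn):  # defender minimizes
            dfn = cand
    return att, dfn, payoff(att, dfn)
```

    Even this caricature shows the arms-race dynamic: each side's fitness is defined only relative to the other's current strategy, which is what distinguishes coevolution from ordinary hill climbing.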

  15. Enhanced Remedial Amendment Delivery to Subsurface Using Shear Thinning Fluid and Aqueous Foam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Lirong; Szecsody, James E.; Oostrom, Martinus

    2011-04-23

    A major issue with in situ subsurface remediation is the ability to achieve an even spatial distribution of remedial amendments to the contamination zones in an aquifer or vadose zone. Delivery of amendment to the aquifer using shear thinning fluid and to the vadose zone using aqueous foam has the potential to enhance the amendment distribution into desired locations and improve the remediation. 2-D saturated flow cell experiments were conducted to evaluate the enhanced sweeping, contaminant removal, and amendment persistence achieved by shear thinning fluid delivery. Bio-polymer xanthan gum solution was used as the shear thinning fluid. Unsaturated 1-D column and 2-D flow cell experiments were conducted to evaluate the mitigation of contaminant mobilization, enhancement of uniform amendment distribution, and improvement of lateral delivery by foam delivery. Surfactant sodium lauryl ether sulfate was used as the foaming agent. It was demonstrated that the shear thinning fluid injection enhanced the fluid sweeping over a heterogeneous system and increased the delivery of remedial amendment into low-permeability zones. The persistence of the amendment distributed into the low-permeability zones by the shear thinning fluid was prolonged compared to that of amendment distributed by water injection. Foam delivery of amendment was shown to mitigate the mobilization of highly mobile contaminant from sediments under vadose zone conditions. Foam delivery also achieved more uniform amendment distribution in a heterogeneous unsaturated system, and demonstrated a remarkable increase in lateral distribution of the injected liquid compared to direct liquid injection.
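    Shear-thinning behavior of xanthan gum solutions is commonly described with a power-law model: apparent viscosity falls as shear rate rises, so the fluid flows readily near the injection point (high shear) but resists flow in low-shear regions, helping divert amendment into low-permeability zones. A sketch with illustrative, not measured, parameters:

```python
def apparent_viscosity(shear_rate, K=1.0, n=0.4):
    """Power-law (Ostwald-de Waele) model of a shear-thinning fluid:
    mu_app = K * shear_rate**(n - 1). A flow behavior index n < 1 gives
    shear thinning; K and n here are illustrative values, not fitted
    xanthan gum properties."""
    return K * shear_rate ** (n - 1.0)
```

    With n = 0.4, raising the shear rate tenfold cuts the apparent viscosity by roughly a factor of four, which is the mechanism behind the improved sweep of heterogeneous media reported above.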

  16. Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing the Hybrid-Electric Integrated Systems Testbed (HEIST) as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid electric and distributed electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid electric and distributed electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high-performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.

  17. ENGINEERING APPLICATIONS OF ANALOG COMPUTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryant, L.T.; Janicke, M.J.; Just, L.C.

    1963-10-31

    Six experiments from the fields of reactor engineering, heat transfer, and dynamics are presented to illustrate the engineering applications of analog computers. The steps required for producing the analog solution are shown, as well as complete information for duplicating the solution. Graphical results are provided. The experiments include: deceleration of a reactor control rod, pressure variations through a packed bed, reactor kinetics over many decades with thermal feedback, a vibrating system with two degrees of freedom, temperature distribution in a radiating fin, temperature distribution in an infinite slab considering variable thermal properties, and iodine -xenon buildup in a reactor. (M.C.G.)
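    The reactor kinetics experiment can be re-created digitally. Below is a one-delayed-group point kinetics integration (explicit Euler), the kind of coupled-ODE problem these analog setups solved; the parameter values are textbook-style illustrations, not those of the original experiments:

```python
def point_kinetics(rho, beta=0.0065, Lam=1e-4, lam=0.08, t_end=1.0, dt=1e-5):
    """One-delayed-group point reactor kinetics, explicit Euler:
        dn/dt = ((rho - beta)/Lam) * n + lam * C
        dC/dt = (beta/Lam) * n - lam * C
    Starts from equilibrium (n = 1, C = beta/(Lam*lam)) and returns the
    relative power n at t_end after a reactivity step rho."""
    n = 1.0
    C = beta * n / (Lam * lam)
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lam) * n + lam * C
        dC = (beta / Lam) * n - lam * C
        n += dn * dt
        C += dC * dt
    return n
```

    A positive step below prompt critical (rho < beta) produces the familiar prompt jump followed by a slow rise governed by the delayed-neutron precursors, the same transient the analog computer traced continuously.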

  18. Estimating Pore Properties from NMR Relaxation Time Measurements in Heterogeneous Media

    NASA Astrophysics Data System (ADS)

    Grunewald, E.; Knight, R.

    2008-12-01

    The link between pore geometry and the nuclear magnetic resonance (NMR) relaxation time T2 is well-established for simple systems but is poorly understood for complex media with heterogeneous pores. Conventional interpretation of NMR relaxation data employs a model of isolated pores in which each hydrogen proton samples only one pore type, and the T2-distribution is directly scaled to estimate a pore-size distribution. During an actual NMR measurement, however, each proton diffuses through a finite volume of the pore network, and so may sample multiple pore types encountered within this diffusion cell. For cases in which heterogeneous pores are strongly coupled by diffusion, the meaning of the T2-distribution is not well understood and further research is required to determine how such measurements should be interpreted. In this study we directly investigate the implications of pore coupling in two groups of laboratory NMR experiments. We conduct two suites of experiments, in which samples are synthesized to exhibit a range of pore coupling strengths using two independent approaches: (a) varying the scale of the diffusion cell and (b) varying the scale over which heterogeneous pores are encountered. In the first set of experiments, we vary the scale of the diffusion cell in silica gels which have a bimodal pore-size distribution comprised of intragranular micropores and much larger intergranular pores. The untreated gel exhibits strong pore coupling with a single broad peak observed in the T2-distribution. By treating the gel with varied amounts of paramagnetic iron surface coatings, we decrease the surface relaxation time, T2S, and effectively decrease both the size of the diffusion cell and the degree of pore coupling. As more iron is coated onto the grain surfaces, we observe a separation of the broad T2-distribution into two peaks that more accurately represent the true bimodal pore-size distribution.
In the second set of experiments, we vary the scale over which heterogeneous pores are encountered in bimodal grain packs of pure quartz (long T2S) and hematite (short T2S). The scale of heterogeneity is varied by changing the mean grain size and relative mineral concentrations. When the mean grain size is small and the mineral concentrations are comparable, the T2-distribution is roughly monomodal, indicating strong pore coupling. As the grain size is increased or the mineral concentrations are made increasingly uneven, the T2-distribution develops a bimodal character, more representative of the actual distribution of pore types. Numerical simulations of measurements in both experiment groups allow us to more closely investigate how the relaxing magnetization evolves in both time and space. Collectively, these experiments provide important insights into the effects of pore coupling on NMR measurements in heterogeneous systems and contribute to our ultimate goal of improving the interpretation of these data in complex near-surface sediments.
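    The isolated-pore scaling that conventional interpretation relies on can be stated compactly: in the fast-diffusion (surface-limited) regime 1/T2 ≈ ρ2·(S/V), and for a spherical pore S/V = 3/r, so r ≈ 3·ρ2·T2. A sketch with an assumed surface relaxivity; it is exactly this per-pore scaling that the abstract argues breaks down when pores are diffusively coupled:

```python
def pore_radius_from_t2(t2_s, rho2=10e-6):
    """Isolated-pore, fast-diffusion estimate of pore size from NMR T2:
    1/T2 ~ rho2 * (S/V), and S/V = 3/r for a spherical pore, giving
    r ~ 3 * rho2 * T2. rho2 is the surface relaxivity in m/s; the default
    is an assumed illustrative value, not a measured one."""
    return 3.0 * rho2 * t2_s
```

    Applying this mapping bin-by-bin to a T2-distribution yields the conventional pore-size distribution; under strong pore coupling the merged T2 peaks make that mapping misleading.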

  19. Impact of the Glymphatic System on the Kinetic and Distribution of Gadodiamide in the Rat Brain: Observations by Dynamic MRI and Effect of Circadian Rhythm on Tissue Gadolinium Concentrations.

    PubMed

    Taoka, Toshiaki; Jost, Gregor; Frenzel, Thomas; Naganawa, Shinji; Pietsch, Hubertus

    2018-04-12

    The glymphatic system is a recently hypothesized waste clearance system of the brain in which the perivascular space constitutes a pathway similar to the lymphatic system in other body regions. Sleep and anesthesia are reported to influence the activity of the glymphatic system. Because rats are nocturnal animals, the glymphatic system is expected to be more active during the day. We attempted to elucidate the influence of the glymphatic system on intravenously injected gadodiamide in the rat brain in 2 experiments. One was a magnetic resonance imaging (MRI) experiment to evaluate the short-term dynamics of signal intensity changes after gadodiamide administration. The other was a quantification experiment to evaluate the concentration of retained gadolinium within the rat brain after repeated intravenous administration of gadodiamide at different times of day and levels of anesthesia. The imaging experiment was performed on 6 rats that received an intravenous injection of gadodiamide (1 mmol/kg) and dynamic MRI for 3 hours at 2.4-minute intervals. The time course of the signal intensity changes was evaluated for different brain structures. The tissue quantification experiment was performed on 24 rats divided into 4 groups by injection time (morning, late afternoon) and anesthesia (none, short, long) during administration. All animals received gadodiamide (1.8 mmol/kg, 8 times over 2 weeks). Gadolinium concentration of dissected brain tissues was quantified 5 weeks after the last administration by inductively coupled plasma mass spectrometry. In the imaging experiment, muscle and the fourth ventricle showed an instantaneous signal intensity increase immediately after gadodiamide injection. The signal curve of the cerebral cortex and deep cerebellar nuclei reached the peak signal intensity later than the fourth ventricle but earlier than that of the prepontine cistern.
In the gadolinium quantification experiment, the concentration in the group with the morning injection showed a significantly lower concentration than the late afternoon injection group. The lowest tissue gadolinium concentrations were found in the groups injected in the morning during long anesthesia. Instantaneous transition of gadodiamide from blood to cerebrospinal fluid was indicated by dynamic MRI. The gadodiamide distribution to the cerebral cortex and deep cerebellar nuclei seemed to depend on both blood flow and cerebrospinal fluid. This confirms previous studies indicating that the cerebrospinal fluid is one potential pathway of gadolinium-based contrast agent entry into the brain. For the distribution and clearance of the gadodiamide from brain tissue, involvement of the glymphatic system seemed to be indicated in terms of the influence of sleep and anesthesia.

  20. High sensitivity optical molecular imaging system

    NASA Astrophysics Data System (ADS)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost, and ease of use. By labeling regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics, and other biological studies. In preclinical and clinical applications, imaging depth, resolution, and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm while maintaining high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were performed. The results show that, combined with a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light-path system, and highly efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence, and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  1. Automated crystallographic system for high-throughput protein structure determination.

    PubMed

    Brunzelle, Joseph S; Shafaee, Padram; Yang, Xiaojing; Weigand, Steve; Ren, Zhong; Anderson, Wayne F

    2003-07-01

    High-throughput structural genomics efforts require software that is highly automated and distributive and that requires minimal user intervention to determine protein structures. Preliminary experiments were set up to test whether automated scripts could utilize a minimum set of input parameters and produce a set of initial protein coordinates. From this starting point, a highly distributive system was developed that could determine macromolecular structures at a high throughput rate, and warehouse and harvest the associated data. The system uses a web interface to obtain input data and display results. It utilizes a relational database to store the initial data needed to start the structure-determination process as well as the generated data. A distributive program interface administers the crystallographic programs which determine the protein structures. Using a test set of 19 protein targets, 79% were determined automatically.

  2. Fusion cross sections for reactions involving medium and heavy nucleus-nucleus systems

    NASA Astrophysics Data System (ADS)

    Atta, Debasis; Basu, D. N.

    2014-12-01

    Existing data on near-barrier fusion excitation functions of medium and heavy nucleus-nucleus systems have been analyzed by using a simple diffused-barrier formula derived by assuming a Gaussian shape for the barrier-height distribution. The fusion cross section is obtained by folding the Gaussian barrier distribution with the classical expression for the fusion cross section at a fixed barrier. The energy dependence of the fusion cross section thus obtained provides a good description of the existing data on near-barrier fusion and capture excitation functions for medium and heavy nucleus-nucleus systems. Theoretical values for the parameters of the barrier distribution are estimated, which can be used for fusion or capture cross-section predictions that are especially important for planning experiments aimed at synthesizing new superheavy elements.
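    The folding construction can be sketched numerically. The sketch below assumes the classical sharp-barrier form σ(E, B) = πR²(1 − B/E) for E > B and folds it with a Gaussian over barrier heights by trapezoidal quadrature; the parameter values are illustrative placeholders, not the fitted values reported in the paper.

    ```python
    import math

    def sigma_classical(E, B, R):
        """Sharp-barrier fusion cross section (fm^2): pi R^2 (1 - B/E) for E > B."""
        return math.pi * R**2 * (1.0 - B / E) if E > B else 0.0

    def sigma_fused(E, B0, w, R, n=2000):
        """Fold the classical cross section with a Gaussian barrier-height
        distribution of mean B0 and width w (trapezoidal quadrature)."""
        lo, hi = B0 - 6.0 * w, B0 + 6.0 * w
        dB = (hi - lo) / n
        total = 0.0
        for i in range(n + 1):
            B = lo + i * dB
            g = math.exp(-((B - B0) ** 2) / (2.0 * w**2)) / (w * math.sqrt(2.0 * math.pi))
            weight = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
            total += weight * g * sigma_classical(E, B, R)
        return total * dB
    ```

    Well above the mean barrier the folded result approaches the sharp-barrier value, while near and below the barrier the Gaussian smearing produces the smooth sub-barrier tail that the diffused-barrier formula captures.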

  3. High Reynolds Number Hybrid Laminar Flow Control (HLFC) Flight Experiment. Report 4; Suction System Design and Manufacture

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This document describes the design of the leading edge suction system for flight demonstration of hybrid laminar flow control on the Boeing 757 airplane. The exterior pressures on the wing surface and the required suction quantity and distribution were determined in previous work. A system consisting of porous skin, sub-surface spanwise passages ("flutes"), pressure regulating screens and valves, collection fittings, ducts and a turbocompressor was defined to provide the required suction flow. Provisions were also made for flexible control of suction distribution and quantity for HLFC research purposes. Analysis methods are defined for determining pressure drops and flows, and for the transpiration heating used for thermal anti-icing. The control scheme used to observe and modulate suction distribution in flight is described.

  4. OAI and NASA's Scientific and Technical Information

    NASA Technical Reports Server (NTRS)

    Nelson, Michael L.; Rocker, JoAnne; Harrison, Terry L.

    2002-01-01

    The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) is an evolving protocol and philosophy regarding interoperability for digital libraries (DLs). Previously, "distributed searching" models were popular for DL interoperability. However, experience has shown distributed searching systems across large numbers of DLs to be difficult to maintain in an Internet environment. The OAI-PMH is a move away from distributed searching, focusing on the arguably simpler model of "metadata harvesting". We detail NASA's involvement in defining and testing the OAI-PMH and experience to date with adapting existing NASA distributed searching DLs (such as the NASA Technical Report Server) to use the OAI-PMH and metadata harvesting. We discuss some of the entirely new DL projects that the OAI-PMH has made possible, such as the Technical Report Interchange project. We explain the strategic importance of the OAI-PMH to the mission of NASA's Scientific and Technical Information Program.
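    The harvesting model the abstract contrasts with distributed searching is mechanically simple. As a minimal sketch, the snippet below builds an OAI-PMH ListRecords request and extracts the resumptionToken used to page through a harvest; the base URL and sample response are hypothetical, not the actual NASA Technical Report Server endpoint.

    ```python
    import urllib.parse
    import xml.etree.ElementTree as ET

    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

    def list_records_url(base_url, metadata_prefix="oai_dc", resumption_token=None):
        """Build an OAI-PMH ListRecords request URL per the protocol spec."""
        params = {"verb": "ListRecords"}
        if resumption_token:
            # resumptionToken is an exclusive argument in OAI-PMH
            params["resumptionToken"] = resumption_token
        else:
            params["metadataPrefix"] = metadata_prefix
        return base_url + "?" + urllib.parse.urlencode(params)

    def next_token(response_xml):
        """Extract the resumptionToken from a ListRecords response, if any."""
        root = ET.fromstring(response_xml)
        token = root.find(f".//{OAI_NS}resumptionToken")
        return token.text if token is not None and token.text else None

    # Hypothetical response fragment used for illustration only
    sample = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
      <ListRecords>
        <resumptionToken>batch-2</resumptionToken>
      </ListRecords>
    </OAI-PMH>"""
    ```

    A harvester simply loops: issue the request, store the returned metadata records, and repeat with the new resumptionToken until none is returned.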

  5. Distributed Deliberative Recommender Systems

    NASA Astrophysics Data System (ADS)

    Recio-García, Juan A.; Díaz-Agudo, Belén; González-Sanz, Sergio; Sanchez, Lara Quijano

    Case-Based Reasoning (CBR) is one of the most successful applied AI technologies of recent years. Although many CBR systems reason locally on a previous experience base to solve new problems, in this paper we focus on distributed retrieval processes working on a network of collaborating CBR systems. In such systems, each node in a network of CBR agents collaborates, arguing and counterarguing its local results with other nodes to improve the performance of the system's global response. We describe D2ISCO: a framework to design and implement deliberative and collaborative CBR systems, integrated as a part of jcolibri2, an established framework in the CBR community. We apply D2ISCO to one particular simplified type of CBR system: recommender systems. We perform a first case study for a collaborative music recommender system and present the results of an experiment on the accuracy of the system's results, using a fuzzy version of the argumentation system AMAL and a network topology based on a social network. Besides individual recommendation, we also discuss how D2ISCO can be used to improve recommendations to groups, and we present a second case study based on the movie recommendation domain, with heterogeneous groups formed according to group personality composition and a group topology based on a social network.

  6. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng; Zhang, Kai; Xiong, Jian

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT). However, distributed MPPT under module mismatch might polarize the distribution of the ac output voltages as well as the dc-link voltages among the modules, distort the grid currents, and even cause system instability. For better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree defined to account for both the active power distribution and the maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in a loss-reducing mode or a range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and dc-voltage balance can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. The effectiveness of the proposed modulation strategy is verified by experiments.

  7. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE PAGES

    Wang, Cheng; Zhang, Kai; Xiong, Jian; ...

    2017-09-26

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT). However, distributed MPPT under module mismatch might polarize the distribution of the ac output voltages as well as the dc-link voltages among the modules, distort the grid currents, and even cause system instability. For better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree defined to account for both the active power distribution and the maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in a loss-reducing mode or a range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and dc-voltage balance can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. The effectiveness of the proposed modulation strategy is verified by experiments.

  8. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  9. Directed Design of Experiments (DOE) for Determining Probability of Detection (POD) Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Ed

    2007-01-01

    This viewgraph presentation reviews some of the issues that specialists in nondestructive evaluation (NDE) face in determining the statistics of the probability of detection. The use of the binomial distribution and the probability of hit are discussed. The presentation then reviews the concepts of Directed Design of Experiments for Validating Probability of Detection of Inspection Systems (DOEPOD). Several cases are reviewed and discussed. The concept of false calls is also reviewed.

  10. Information Systems Should Be Both Useful and Used: The Benetton Experience.

    ERIC Educational Resources Information Center

    Zuccaro, Bruno

    1990-01-01

    Describes the information systems strategy and network development of the Benetton clothing business. Applications in the areas of manufacturing, scheduling, centralized distribution, and centralized cash flow are discussed; the GEIS managed network service is described; and internal and external electronic data interchange (EDI) is explained.

  11. Proceedings of Tenth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Papers are presented on the following topics: measurement of software technology, recent studies of the Software Engineering Lab, software management tools, expert systems, error seeding as a program validation technique, software quality assurance, software engineering environments (including knowledge-based environments), the Distributed Computing Design System, and various Ada experiments.

  12. Run control techniques for the Fermilab DART data acquisition system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, G.; Engelfried, J.; Mengel, L.

    1995-10-01

    DART is the high speed, Unix based data acquisition system being developed by the Fermilab Computing Division in collaboration with eight High Energy Physics experiments. This paper describes DART run-control, which implements flexible, distributed, extensible and portable paradigms for the control and monitoring of data acquisition systems. We discuss the unique and interesting aspects of the run-control: why we chose the concepts we did, the benefits we have seen from the choices we made, as well as our experiences in deploying and supporting it for experiments during their commissioning and sub-system testing phases. We emphasize the software and techniques we believe are extensible to future use, and potential future modifications and extensions for those we feel are not.

  13. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are universally used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement, owing to the tension between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). In addition, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  14. Distributed Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2005-01-01

    We demonstrate a new framework for analyzing and controlling distributed systems, by solving constrained optimization problems with an algorithm based on that framework. The framework is an information-theoretic extension of conventional full-rationality game theory that allows bounded rational agents. The associated optimization algorithm is a game in which agents control the variables of the optimization problem. They do this by jointly minimizing a Lagrangian of (the probability distribution of) their joint state. The updating of the Lagrange parameters in that Lagrangian is a form of automated annealing, one that focuses the multi-agent system on the optimal pure strategy. We present computer experiments for the k-sat constraint satisfaction problem and for unconstrained minimization of NK functions.
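    The Lagrangian-minimization game can be illustrated with a toy sketch: two agents each maintain an independent distribution over a binary variable and repeatedly apply a Boltzmann update against the expected value of a trivial cost G(x1, x2) = x1 + x2 at a fixed temperature. This is a deliberately minimal stand-in for the paper's algorithm, which anneals the Lagrange parameters and tackles k-sat and NK problems; the cost function and temperature here are illustrative assumptions.

    ```python
    import math

    def expected_cost(q_other, my_value):
        # E[G | x_i = v] for the toy cost G(x1, x2) = x1 + x2,
        # with the other agent's binary variable distributed as q_other.
        return my_value + q_other[1]

    def boltzmann_update(q_other, T):
        # Each agent sets q_i(v) proportional to exp(-E[G | x_i = v] / T),
        # i.e. a Boltzmann distribution over its own moves.
        weights = [math.exp(-expected_cost(q_other, v) / T) for v in (0, 1)]
        z = sum(weights)
        return [w / z for w in weights]

    def run(T=0.2, steps=10):
        # Simultaneous updates from uniform initial distributions; at low T
        # both agents concentrate on x = 0, the joint minimizer of G.
        q1 = q2 = [0.5, 0.5]
        for _ in range(steps):
            q1, q2 = boltzmann_update(q2, T), boltzmann_update(q1, T)
        return q1, q2
    ```

    Lowering T over the iterations, rather than holding it fixed, gives the annealing behavior the abstract describes.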

  15. Foundations of data-intensive science: Technology and practice for high throughput, widely distributed, data management and analysis systems

    NASA Astrophysics Data System (ADS)

    Johnston, William; Ernst, M.; Dart, E.; Tierney, B.

    2014-04-01

    Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions. This is true for ATLAS and CMS at the LHC, and it is equally true for the climate sciences, Belle-II at the KEK collider, the genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science-discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even when the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk addresses, namely: 1. optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. network router and switch requirements to support high-speed international data transfer; 3. data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g., error-free transmission); 4. network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. operating system evolution to support very high-speed network I/O; 6. new network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems; and 8. new approaches to widely distributed workflow systems that can support the data movement and analysis required by the science.
All of these areas must be addressed to enable large-scale, widely distributed data analysis systems, and the experience of the LHC can be applied to other scientific disciplines. In particular, specific analogies to the SKA will be cited in the talk.

  16. The ATLAS PanDA Pilot in Operation

    NASA Astrophysics Data System (ADS)

    Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.

  17. Synchronization Design and Error Analysis of Near-Infrared Cameras in Surgical Navigation.

    PubMed

    Cai, Ken; Yang, Rongqian; Chen, Huazhou; Huang, Yizhou; Wen, Xiaoyan; Huang, Wenhua; Ou, Shanxing

    2016-01-01

    The accuracy of optical tracking systems is important to scientists. With the improvements reported in this regard, such systems have been applied to an increasing number of operations. To enhance the accuracy of these systems further and to reduce the effect of synchronization and visual field errors, this study introduces a field-programmable gate array (FPGA)-based synchronization control method, a method for measuring synchronous errors, and an error distribution map in field of view. Synchronization control maximizes the parallel processing capability of FPGA, and synchronous error measurement can effectively detect the errors caused by synchronization in an optical tracking system. The distribution of positioning errors can be detected in field of view through the aforementioned error distribution map. Therefore, doctors can perform surgeries in areas with few positioning errors, and the accuracy of optical tracking systems is considerably improved. The system is analyzed and validated in this study through experiments that involve the proposed methods, which can eliminate positioning errors attributed to asynchronous cameras and different fields of view.

  18. Design and Evaluation of Candidate Pressure Distribution and Air Data System Tile Penetration for the Aeroassist Flight Experiment

    NASA Technical Reports Server (NTRS)

    Vontheumer, Alfred E.

    1990-01-01

    The objective of this program was to produce a pressure measurement system that penetrates the thermal protection system of a spacecraft and obtains accurate pressure data. The design was tested vibro-acoustically, aerothermally, and structurally, and was found to be adequate. This design is a possible replacement for the current pressure system on the orbiter.

  19. Few-mode fiber based distributed curvature sensor through quasi-single-mode Brillouin frequency shift.

    PubMed

    Wu, Hao; Wang, Ruoxu; Liu, Deming; Fu, Songnian; Zhao, Can; Wei, Huifeng; Tong, Weijun; Shum, Perry Ping; Tang, Ming

    2016-04-01

    We proposed and demonstrated a few-mode fiber (FMF) based optical-fiber sensor for distributed curvature measurement through the quasi-single-mode Brillouin frequency shift (BFS). By center-aligned splicing of FMF and single-mode fiber (SMF) with a fusion taper, an SMF-component-compatible distributed curvature sensor based on FMF is realized using a conventional Brillouin optical time-domain analysis system. The distributed BFS change induced by bending in the FMF has been theoretically and experimentally investigated, and the BFS response to the curvature along the fiber link has been precisely calibrated. A proof-of-concept experiment was implemented to validate its effectiveness in distributed curvature measurement.
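    Once the BFS response is calibrated, converting a measured shift back to curvature is a one-line relation. The sketch below assumes a linear strain coefficient C_eps and a mode (or core) offset d from the fiber's neutral axis, so that strain = d x curvature; the coefficient and offset values in the comments are illustrative assumptions, not the calibration from the paper.

    ```python
    def curvature_from_bfs(delta_nu_hz, c_eps_hz_per_strain, offset_m):
        """Infer curvature (1/m) from a bend-induced Brillouin frequency shift.

        Assumes strain = offset * curvature for a core (or mode) displaced by
        offset_m from the neutral axis, and delta_nu = C_eps * strain with a
        linear strain coefficient C_eps in Hz per unit strain."""
        return delta_nu_hz / (c_eps_hz_per_strain * offset_m)

    # Illustrative numbers: C_eps ~ 0.05 MHz per microstrain (5e10 Hz/strain)
    # and a 62.5 um offset; a 31.25 MHz shift then maps to a 0.1 m bend radius.
    ```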

  20. Acceleration Environment of the International Space Station

    NASA Technical Reports Server (NTRS)

    McPherson, Kevin; Kelly, Eric; Keller, Jennifer

    2009-01-01

    Measurement of the microgravity acceleration environment on the International Space Station has been accomplished by two accelerometer systems since 2001. The Microgravity Acceleration Measurement System records the quasi-steady microgravity environment, including the influences of aerodynamic drag, vehicle rotation, and venting effects. Measurement of the vibratory/transient regime, comprised of vehicle, crew, and equipment disturbances, has been accomplished by the Space Acceleration Measurement System-II. Until the arrival of the Columbus Orbital Facility and the Japanese Experiment Module, the location of these sensors, and therefore the measurement of the microgravity acceleration environment, was limited to within the United States Laboratory. The Japanese Aerospace Exploration Agency has developed a vibratory acceleration measurement system called the Microgravity Measurement Apparatus which will be deployed within the Japanese Experiment Module to make distributed measurements of the Japanese Experiment Module's vibratory acceleration environment. Two Space Acceleration Measurement System sensors from the United States Laboratory will be re-deployed to support vibratory acceleration data measurement within the Columbus Orbital Facility. The additional measurement opportunities resulting from the arrival of these new laboratories allow Principal Investigators with facilities located in these International Space Station research laboratories to obtain microgravity acceleration data in support of their sensitive experiments. The Principal Investigator Microgravity Services (PIMS) project, at NASA Glenn Research Center in Cleveland, Ohio, has supported acceleration measurement systems and the microgravity scientific community through the processing, characterization, distribution, and archival of the microgravity acceleration data obtained from the International Space Station acceleration measurement systems.
This paper summarizes the PIMS capabilities available to the International Space Station scientific community, introduces plans for extending microgravity analysis results to the newly arrived scientific laboratories, and provides summary information for known microgravity environment disturbers.

  1. Rotation And Scale Invariant Object Recognition Using A Distributed Associative Memory

    NASA Astrophysics Data System (ADS)

    Wechsler, Harry; Zimmerman, George Lee

    1988-04-01

    This paper describes an approach to 2-dimensional object recognition. Complex-log conformal mapping is combined with a distributed associative memory to create a system which recognizes objects regardless of changes in rotation or scale. Recalled information from the memorized database is used to classify an object, reconstruct the memorized version of the object, and estimate the magnitude of changes in scale or rotation. The system response is resistant to moderate amounts of noise and occlusion. Several experiments, using real, gray scale images, are presented to show the feasibility of our approach.
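    The key property of the complex-log map, that rotation and scaling of the input become translations in the mapped coordinates, can be checked in a few lines. A minimal sketch for a single point follows (applying the map pixel-by-pixel over an image works the same way):

    ```python
    import math

    def complex_log_map(x, y):
        """Map Cartesian (x, y) to complex-log coordinates (log r, theta).

        Under this conformal map, uniform scaling about the origin becomes a
        shift along the log-r axis and rotation becomes a shift along the
        theta axis, so both reduce to translations."""
        r = math.hypot(x, y)
        return math.log(r), math.atan2(y, x)
    ```

    Because a distributed associative memory recalls patterns up to translation, storing the mapped images makes recognition insensitive to the original rotation and scale, and the magnitude of the recalled shift estimates the change, as the abstract describes.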

  2. Job monitoring on DIRAC for Belle II distributed computing

    NASA Astrophysics Data System (ADS)

    Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo

    2015-12-01

    We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.
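    The kind of passive check described, thresholding characteristic variables to detect issues, can be sketched as follows; the per-site counters and thresholds are hypothetical stand-ins for the variables the system actually stores in and reads from the DIRAC database.

    ```python
    def flag_problem_sites(site_stats, max_failure_rate=0.3, min_jobs=20):
        """Flag sites whose job failure rate exceeds a threshold.

        site_stats maps site name -> (done_jobs, failed_jobs). The shape of
        these counters and the thresholds are illustrative assumptions; sites
        with too few jobs are skipped to avoid noisy small-sample flags."""
        flagged = []
        for site, (done, failed) in site_stats.items():
            total = done + failed
            if total >= min_jobs and failed / total > max_failure_rate:
                flagged.append(site)
        return sorted(flagged)
    ```

    A check like this is the natural precursor to the automated notification and site-disabling steps mentioned at the end of the abstract.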

  3. The evaluation of a shuttle borne lidar experiment to measure the global distribution of aerosols and their effect on the atmospheric heat budget

    NASA Technical Reports Server (NTRS)

    Shipley, S. T.; Joseph, J. H.; Trauger, J. T.; Guetter, P. J.; Eloranta, E. W.; Lawler, J. E.; Wiscombe, W. J.; Odell, A. P.; Roesler, F. L.; Weinman, J. A.

    1975-01-01

    A shuttle-borne lidar system is described which will provide basic data about aerosol distributions for developing climatological models. Topics discussed include: (1) present knowledge of the physical characteristics of desert aerosols and the absorption characteristics of atmospheric gases, (2) radiative heating computations, and (3) general circulation models. The characteristics of a shuttle-borne lidar are presented along with some laboratory studies which identify schemes that permit the implementation of a high spectral resolution lidar system.

  4. Distributed Control with Collective Intelligence

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Wheeler, Kevin R.; Tumer, Kagan

    1998-01-01

    We consider systems of interacting reinforcement learning (RL) algorithms that do not work at cross purposes, in that their collective behavior maximizes a global utility function. We call such systems COllective INtelligences (COINs). We present the theory of designing COINs. Then we present experiments validating that theory in the context of two distributed control problems: we show that COINs perform near-optimally in a difficult variant of Arthur's bar problem [Arthur] (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance in the master-slave problem.

  5. Modeling nurses' attitude toward using automated unit-based medication storage and distribution systems: an extension of the technology acceptance model.

    PubMed

    Escobar-Rodríguez, Tomás; Romero-Alonso, María Mercedes

    2013-05-01

    This article analyzes the attitude of nurses toward the use of automated unit-based medication storage and distribution systems and identifies influencing factors. Understanding these factors provides an opportunity to explore actions that might be taken to boost adoption by potential users. The theoretical grounding for this research is the Technology Acceptance Model. The Technology Acceptance Model specifies the causal relationships between perceived usefulness, perceived ease of use, attitude toward using, and actual usage behavior. The research model has six constructs, and nine hypotheses were generated from connections between these six constructs. These constructs include perceived risks, experience level, and training. The findings indicate that these three external variables are related to the perceived ease of use and perceived usefulness of automated unit-based medication storage and distribution systems, and therefore, they have a significant influence on attitude toward the use of these systems.

  6. EMC effect for light nuclei: New results from Jefferson Lab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aji Daniel

    High energy lepton scattering has been the primary tool for mapping out the quark distributions of nucleons and nuclei. Measurements of deep inelastic scattering in nuclei show that the quark distributions in heavy nuclei are not simply the sum of the quark distributions of the constituent protons and neutrons, as one might expect for a weakly bound system. This modification of the quark distributions in nuclei is known as the EMC effect. I will discuss the results from Jefferson Lab (JLab) experiment E03-103, a precise measurement of the EMC effect in few-body nuclei with emphasis on the large-x region. Data from the light nuclei suggest that the nuclear dependence of the high-x quark distribution may depend on the nucleon's local environment, rather than being a purely bulk effect. In addition, I will also discuss a future experiment at the upgraded 12 GeV Jefferson Lab facility which will further investigate the role of the local nuclear environment and the influence of detailed nuclear structure on the modification of quark distributions.

  7. Fault tolerant features and experiments of ANTS distributed real-time system

    NASA Astrophysics Data System (ADS)

    Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.

    1995-01-01

    The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.

  8. Sodium-sulfur battery flight experiment definition study

    NASA Technical Reports Server (NTRS)

    Chang, Rebecca; Minck, Robert

    1990-01-01

    Sodium-sulfur batteries are considered to be one of the most likely battery systems for space applications. Compared with the Ni-H2 or Ni-Cd battery systems, Na-S batteries offer a mass reduction by a factor of 2 to 4, representing significant launch cost savings or increased payload mass capabilities. The Na-S battery operates between 300 and 400 C, using liquid sodium and sulfur/polysulfide electrodes and a solid ceramic electrolyte; the transport of the electrode materials to the surface of the electrolyte is through wicking/capillary forces. This paper describes five tests identified for the Na-S battery flight experiment definition study: the cell characterization test, the reactant distribution test, the current/temperature distribution test, the freeze/thaw test, and the multicell LEO test. A schematic diagram of the Na-S cell is included.

  9. Study of CT Scan Flooding System at High Temperature and Pressure

    NASA Astrophysics Data System (ADS)

    Chen, X. Y.

    2017-12-01

    A CT scan flooding experiment uses CT imaging to scan the micro-pores of a core at different flooding stages, without changing the external morphology or internal structure of the core, and to observe how different flooding fluids are distributed in the pore medium under different pressures; the oil-water distribution at each flooding stage can thus be reconstructed as images. Under extremely high pressure and temperature, however, a conventional CT scan system cannot meet the requirements: a container of low-density material or with a thin shell cannot withstand high pressure, while a high-density material or thick shell causes attenuation and scattering of the X-rays. The experiment uses a simple CT scanning system. X-rays from a point source pass through a thin beryllium window in a high-pressure stainless steel container and continuously irradiate a core holder that can rotate 360° about the core axis. A rare-earth intensifying screen behind the core holder emits light when irradiated with X-rays, displaying the X-ray section image of the core. An optical camera records these images through a transparent high-pressure window in the container. A series of core X-ray section images can then be processed to reconstruct the core in 3D. The experiment shows that both the beryllium window and the rare-earth intensifying screen work in the high-temperature, high-pressure environment inside the stainless steel container. Passing the X-rays through a thin beryllium window, rather than the thick high-pressure steel shell, avoids the attenuation and scattering caused by the container wall while still meeting the high-pressure experimental requirements.
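The case for the beryllium window can be made quantitative with the Beer-Lambert law. The attenuation coefficients below are illustrative round numbers (beryllium is low-Z and low-density, steel is not), not measured values for this apparatus:

```python
import math

# Illustrative linear attenuation coefficients at a mid-range X-ray energy
# (assumed values, for the shape of the argument only):
MU_BE = 0.05     # 1/cm, beryllium
MU_STEEL = 3.0   # 1/cm, stainless steel

def transmission(mu, thickness_cm):
    """Beer-Lambert: fraction of X-ray intensity surviving the wall."""
    return math.exp(-mu * thickness_cm)

t_be = transmission(MU_BE, 0.2)        # 2 mm beryllium window
t_steel = transmission(MU_STEEL, 2.0)  # 2 cm structural steel wall
```

Even a generous beryllium thickness transmits essentially the whole beam, while a pressure-bearing steel wall would absorb practically all of it, which is why the beam enters and leaves through thin beryllium rather than the steel shell.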

  10. Quantitative accuracy of the closed-form least-squares solution for targeted SPECT.

    PubMed

    Shcherbinin, S; Celler, A

    2010-10-07

    The aim of this study is to investigate the quantitative accuracy of the closed-form least-squares solution (LSS) for single photon emission computed tomography (SPECT). The main limitation to employing this method in actual clinical reconstructions is the computational cost of operations with a large system matrix. However, in some clinical situations, the size of the system matrix can be decreased using targeted reconstruction. For example, some oncology SPECT studies are characterized by intense tracer uptakes localized in relatively small areas, while the remaining parts of the patient body have only a low-activity background. Conventional procedures reconstruct the activity distribution in the whole object, which leads to relatively poor image accuracy/resolution for tumors while computer resources are wasted trying to rebuild diagnostically useless background. In this study, we apply the concept of targeted reconstruction to SPECT phantom experiments imitating such oncology scans. Our approach includes two major components: (i) decomposition of the entire imaging system of equations, extracting only those parts that correspond to the targets, i.e., regions of interest (ROI) encompassing the active containers/tumors, and (ii) generation of the closed-form LSS for each target ROI. We compared these ROI-based LSS reconstructions with those produced by the conventional MLEM approach. The analysis of the five processed cases from two phantom experiments demonstrated that the LSS approach outperformed MLEM in terms of the noise level inside the ROI. On the other hand, MLEM better recovered total activity if the number of iterations was large enough. For the experiment without background activity, the ROI-based LSS led to noticeably better spatial activity distribution inside the ROI; however, the distributions from the two approaches were practically identical for the experiment with a 7:1 concentration ratio between the containers and the background.
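The targeted closed-form step reduces to keeping only the system-matrix columns that belong to the ROI and solving the reduced least-squares problem directly. A toy sketch (the dimensions and the noiseless, zero-background setting are assumptions for illustration, not the phantom geometry of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imaging system: 40 detector bins, 12 image pixels, of which the
# first 4 form the target ROI (hot containers); background is zero.
n_bins, n_pix = 40, 12
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))  # full system matrix
x_true = np.zeros(n_pix)
x_true[:4] = [50.0, 60.0, 55.0, 45.0]
y = A @ x_true                                   # noiseless projections

# Targeted reconstruction: extract only the ROI part of the system of
# equations and form the closed-form least-squares solution for it.
A_roi = A[:, :4]
x_roi = np.linalg.lstsq(A_roi, y, rcond=None)[0]
```

With zero background the reduced system is exact and `x_roi` recovers the ROI activities; with a low-activity background the same solve returns the least-squares estimate while spending no effort on background pixels, which is the computational point of targeting.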

  11. Rapid prototyping, astronaut training, and experiment control and supervision: distributed virtual worlds for COLUMBUS, the European Space Laboratory module

    NASA Astrophysics Data System (ADS)

    Freund, Eckhard; Rossmann, Juergen

    2002-02-01

    In 2004, the European COLUMBUS Module is to be attached to the International Space Station. On the way to the successful planning, deployment and operation of the module, computer-generated and animated models are being used to optimize performance. Under contract from the German Space Agency DLR, it has become IRF's task to provide a Projective Virtual Reality system: a virtual world built after the planned layout of the COLUMBUS module that lets astronauts and experimenters practice operational procedures and the handling of experiments. The key features of the system currently being realized comprise the possibility of distributed multi-user access to the virtual lab and the visualization of real-world experiment data. Because the virtual world can be shared, cooperative operations can be practiced easily, and trainers and trainees can work together more effectively in the same virtual environment. The capability to visualize real-world data will be used to introduce measured experiment data into the virtual world online in order to interact realistically with the science-reference model hardware: the user's actions in the virtual world are translated into corresponding changes of the inputs of the science-reference model hardware; the measured data is then in turn fed back into the virtual world. During the operation of COLUMBUS, the capabilities for distributed access and for visualizing measured data through metaphors and augmentations of the virtual world may be used to provide virtual access to the COLUMBUS module, e.g. via the Internet. Currently, finishing touches are being put to the system. In November 2001 the virtual world is to be operational, so that besides the design and the key ideas, first experimental results can be presented.

  12. Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Ton, Dan

    Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As distribution grids remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed decision support tool include the optimal and efficient allocation of repair crews and resources, the expediting of the restoration process, and the reduction of outage durations for customers in response to severe blackouts due to extreme weather hazards.

  13. Stabilizing operation point technique based on the tunable distributed feedback laser for interferometric sensors

    NASA Astrophysics Data System (ADS)

    Mao, Xuefeng; Zhou, Xinlei; Yu, Qingxu

    2016-02-01

    We describe a technique for stabilizing the operation point of interferometric sensors, based on a tunable Distributed Feedback (DFB) laser, for quadrature demodulation. By introducing automatic quadrature-point locking and periodic wavelength-tuning compensation into an interferometric system, the operation point is stabilized even when the system suffers various environmental perturbations. To demonstrate the feasibility of this technique, experiments have been performed using a tunable DFB laser as the light source to interrogate an extrinsic Fabry-Perot interferometric vibration sensor and a diaphragm-based acoustic sensor. Experimental results show that good tracking of the Q-point was effectively realized.
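Why the quadrature (Q-) point is worth locking follows from the two-beam interference law I = A + B·cos(φ): a small phase perturbation produces its largest (and most linear) intensity change when the bias phase sits at π/2. A numeric sketch with arbitrary illustrative values:

```python
import math

A, B = 1.0, 0.5   # illustrative mean intensity and fringe visibility term
DPHI = 1e-3       # small phase perturbation from the measurand (rad)

def response(bias):
    """Intensity change caused by DPHI about the bias phase `bias`."""
    return abs((A + B * math.cos(bias + DPHI)) - (A + B * math.cos(bias)))

r_quad = response(math.pi / 2)  # operating at the Q-point: ~ B * DPHI
r_peak = response(0.0)          # at a fringe peak: only second order in DPHI
```

Environmental drift moves the bias phase away from π/2 toward a fringe peak, collapsing the first-order sensitivity, which is exactly what the automatic locking and wavelength-tuning compensation prevent.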

  14. Product Distribution Theory and Semi-Coordinate Transformations

    NASA Technical Reports Server (NTRS)

    Airiau, Stephane; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for doing distributed adaptive control of a multiagent system (MAS). We introduce the technique of "semi-coordinate transformations" in PD theory gradient descent. These transformations selectively couple a few agents with each other into "meta-agents". Intuitively, this can be viewed as a generalization of forming binding contracts between those agents. Doing this sacrifices a bit of the distributed nature of the MAS, in that there must now be communication among multiple agents to determine which joint move is finally implemented. However, as we demonstrate in computer experiments, these transformations improve the performance of the MAS.

  15. The Hall D solenoid helium refrigeration system at JLab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laverdure, Nathaniel A.; Creel, Jonathan D.; Dixon, Kelly D.

    Hall D, the new Jefferson Lab experimental facility built for the 12 GeV upgrade, features the LASS 1.85 m bore solenoid magnet supported by a 4.5 K helium refrigerator system. This system consists of a CTI 2800 4.5 K refrigerator cold box, three 150 hp screw compressors, helium gas management and storage, and liquid helium and nitrogen storage for stand-alone operation. The magnet interfaces with the cryogenic refrigeration system through an LN2-shielded distribution box and transfer line system, both designed and fabricated by JLab. The distribution box uses a thermosiphon design to cool the four magnet coils with liquid helium and the shields with liquid nitrogen. We describe the salient design features of the cryogenic system and discuss our recent commissioning experience.

  16. An online detection system for aggregate sizes and shapes based on digital image processing

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, taxing, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and while lying flat. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which have good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
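The size-distribution step of such a system amounts to counting connected pixel regions in the thresholded camera image. A minimal stdlib sketch on a synthetic binary image (a real pipeline would first threshold the CCD frame and calibrate pixel areas against the standard globules):

```python
from collections import deque

# Synthetic thresholded frame: 1 = aggregate pixel, 0 = background.
# Two particles of different sizes stand in for imaged aggregates.
IMG = [
    "0110000000",
    "0110000000",
    "0000011100",
    "0000011100",
    "0000011100",
]
grid = [[int(c) for c in row] for row in IMG]
H, W = len(grid), len(grid[0])

def particle_areas(grid):
    """4-connected component labeling; returns the pixel area of each particle."""
    seen = [[False] * W for _ in range(H)]
    areas = []
    for r in range(H):
        for c in range(W):
            if grid[r][c] and not seen[r][c]:
                area, q = 0, deque([(r, c)])
                seen[r][c] = True
                while q:                       # flood fill one particle
                    y, x = q.popleft()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                areas.append(area)
    return sorted(areas)
```

Binning the returned areas (after pixel-to-millimetre calibration) gives the online size distribution; shape descriptors would be computed per component in the same pass.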

  17. Design of the Trigger Interface and Distribution Board for CEBAF 12 GeV Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gu, Jianhui; Dong, Hai; Cuevas, R

    The design of the Trigger Interface and Distribution (TID) board for the 12 GeV Upgrade at the Continuous Electron Beam Accelerator Facility (CEBAF) at TJNAF is described. The TID board distributes a low-jitter system clock, a synchronized trigger, and a synchronized multi-purpose SYNC signal. The TID also initiates data acquisition for the crate. With the TID boards, a multi-crate system can be set up for experiment testing and commissioning. The TID board can be selectively populated as a Trigger Interface (TI) board or a Trigger Distribution (TD) board for the 12 GeV upgrade experiments. When the TID is populated as a TI, it can be located in the VXS crate and distribute the CLOCK/TRIGGER/SYNC through the VXS P0 connector; it can also be located in the standard VME64 crate and distribute the CLOCK/TRIGGER/SYNC through the VME P2 connector or front panel. It initiates the data acquisition for the front-end crate in which the TI is positioned. When the TID is populated as a TD, it fans out the CLOCK/TRIGGER/SYNC from the trigger supervisor (TS) to the front-end crates through optical fibres. The TD monitors the trigger processing on the TIs and gives feedback to the TS for trigger flow control. A Field-Programmable Gate Array (FPGA) is utilised on the TID board to provide programmability. The TID boards were intensively tested on the bench and in various setups.

  18. Experimental quantum key distribution with source flaws

    NASA Astrophysics Data System (ADS)

    Xu, Feihu; Wei, Kejin; Sajeed, Shihan; Kaiser, Sarah; Sun, Shihai; Tang, Zhiyuan; Qian, Li; Makarov, Vadim; Lo, Hoi-Kwong

    2015-09-01

    Decoy-state quantum key distribution (QKD) is a standard technique in current quantum cryptographic implementations. Unfortunately, existing experiments have two important drawbacks: the state preparation is assumed to be perfect, without errors, and the employed security proofs do not fully consider finite-key effects for general attacks. These two drawbacks mean that existing experiments cannot be guaranteed to be secure in practice. Here, we perform an experiment that shows secure QKD with imperfect state preparations over long distances and achieves rigorous finite-key security bounds for decoy-state QKD against coherent attacks in the universally composable framework. We quantify the source flaws experimentally and demonstrate a QKD implementation that is tolerant to channel loss despite the source flaws. Our implementation considers more real-world problems than most previous experiments, and our theory can be applied to general discrete-variable QKD systems. These features constitute a step towards secure QKD with imperfect devices.

  19. Modeling and Compensation Design for a Power Hardware-in-the-Loop Simulation of an AC Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ainsworth, Nathan; Hariri, Ali; Prabakar, Kumaraguru

    Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation are then shown by experiment.
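The stability issue can be illustrated with a deliberately tiny discrete-time model (an assumed toy, not the paper's interface model): the simulation/hardware interface behaves like delayed feedback whose loop gain is set by the impedance ratio of the two sides, and a first-order low-pass filter is one common form of interface compensation.

```python
def simulate(loop_gain, filter_alpha, steps=200):
    """|signal| after `steps` of a one-sample-delay feedback loop with a
    first-order low-pass compensation filter (alpha = 1: no filtering)."""
    x, y = 1.0, 0.0   # x: interface signal, y: filter state
    for _ in range(steps):
        y = filter_alpha * x + (1 - filter_alpha) * y  # compensation filter
        x = -loop_gain * y                             # delayed feedback
    return abs(x)

uncompensated = simulate(loop_gain=1.2, filter_alpha=1.0)  # grows without bound
compensated = simulate(loop_gain=1.2, filter_alpha=0.3)    # settles to zero
```

With alpha = 1 the loop is x[n+1] = -1.2·x[n], so a loop gain above unity diverges; the filter reduces the effective per-step gain to |1 - alpha - gain·alpha| = 0.34 < 1, stabilizing the same interface. Real PHIL compensation is designed against the measured interface dynamics, but the mechanism, trading bandwidth for stability margin, is the same.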

  20. A system of {sup 99m}Tc production based on distributed electron accelerators and thermal separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, R.G.; Christian, J.D.; Petti, D.A.

    1999-04-01

    A system has been developed for the production of {sup 99m}Tc based on distributed electron accelerators and thermal separation. The radioactive decay parent of {sup 99m}Tc, {sup 99}Mo, is produced from {sup 100}Mo by a photoneutron reaction. Two alternative thermal separation processes have been developed to extract {sup 99m}Tc. Experiments have been performed to verify the technical feasibility of the production and assess the efficiency of the extraction processes. A system based on this technology enables the economical supply of {sup 99m}Tc for a large nuclear pharmacy. Twenty such production centers distributed near major metropolitan areas could produce the entire US supply of {sup 99m}Tc at a cost less than the current subsidized price.
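The timing of such a production-and-separation cycle is governed by the well-known {sup 99}Mo → {sup 99m}Tc decay chain. A minimal Bateman-equation sketch (the branching fraction of {sup 99}Mo decays that feed {sup 99m}Tc is omitted for simplicity):

```python
import math

# Well-known half-lives (hours) for the 99Mo -> 99mTc chain.
T_MO, T_TC = 65.94, 6.01
lam1, lam2 = math.log(2) / T_MO, math.log(2) / T_TC

def tc99m_atoms(t, n_mo0=1.0):
    """Bateman solution: 99mTc atoms at time t hours, starting from n_mo0
    atoms of 99Mo and no 99mTc (branching fraction omitted)."""
    return n_mo0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

# Time of maximum 99mTc content -- roughly when to run the separation.
t_max = math.log(lam2 / lam1) / (lam2 - lam1)
```

The maximum lands near 23 hours, the familiar result that a {sup 99m}Tc source is best milked roughly once a day.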

  1. Modeling and Compensation Design for a Power Hardware-in-the-Loop Simulation of an AC Distribution System: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prabakar, Kumaraguru; Ainsworth, Nathan; Pratt, Annabelle

    Power hardware-in-the-loop (PHIL) simulation, where actual hardware under test is coupled with a real-time digital model in closed loop, is a powerful tool for analyzing new methods of control for emerging distributed power systems. However, without careful design and compensation of the interface between the simulated and actual systems, PHIL simulations may exhibit instability and modeling inaccuracies. This paper addresses issues that arise in the PHIL simulation of a hardware battery inverter interfaced with a simulated distribution feeder. Both the stability and accuracy issues are modeled and characterized, and a methodology for design of PHIL interface compensation to ensure stability and accuracy is presented. The stability and accuracy of the resulting compensated PHIL simulation are then shown by experiment.

  2. Slow crack growth test method for polyethylene gas pipes. Volume 1. Topical report, December 1992

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leis, B.; Ahmad, J.; Forte, T.

    1992-12-01

    In spite of the excellent performance record of polyethylene (PE) pipes used for gas distribution, a small number of leaks occur in distribution systems each year because of slow growth of cracks through pipe walls. The Slow Crack Growth (SCG) test has been developed as a key element in a methodology for assessing the ability of polyethylene gas distribution systems to resist such leaks. This topical report describes work conducted in the first part of the research, directed at the initial development of the SCG test, including a critical evaluation of the applicability of the SCG test as an element in a PE gas pipe system performance methodology. Results of extensive experiments and analysis are reported. The results show that the SCG test should be very useful in performance assessment.

  3. [Coupling of brain oscillatory systems with cognitive (experience and valence) and physiological (cardiovascular reactivity) components of emotion].

    PubMed

    Aftanas, L I; Reva, N V; Pavlov, S V; Korenek, V V; Brak, I V

    2014-02-01

    We investigated the coupling of EEG oscillators with cognitive (experience and valence) and physiological (cardiovascular reactivity) components of emotion. Emotions of anger and joy were evoked in healthy males (n = 49) using a guided imagery method, while multichannel EEG and cardiovascular reactivity (Finometer) were simultaneously recorded. Correlational analysis revealed that spatially distributed EEG oscillators seem to be selectively involved in the cognitive (experience and valence) and physiological (cardiovascular reactivity) components of emotional responding. We showed that low theta (4-6 Hz) activity from the medial and lateral frontal cortex of the right hemisphere correlated predominantly with the anger experience; high alpha (10-12 and 12-14 Hz) and gamma (30-45 Hz) activity from central-parieto-occipital regions of the left hemisphere with cardiovascular reactivity to anger and joy; and gamma (30-45 Hz) activity from the left parietal areas with cardiovascular reactivity to joy. The findings suggest that spatially distributed neuronal networks oscillating at different frequencies may be regarded as a putative neurobiological mechanism coordinating the dynamical balance between the cognitive and physiological components of emotion, as well as psycho-neuro-somatic relationships within the mind-brain-body system.
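The EEG side of such a correlational analysis starts from band power. A stdlib sketch of extracting low-theta (4-6 Hz) and alpha-range power from one synthetic epoch via a DFT (the sampling rate, epoch length, and component amplitudes are invented for illustration); in the study, per-band powers like these would then be correlated with the cardiovascular reactivity scores:

```python
import math, cmath

FS, SECONDS = 128, 2          # assumed sampling rate and epoch length
N = FS * SECONDS
t = [i / FS for i in range(N)]
# Synthetic epoch: a 5 Hz theta component plus a weaker 12 Hz alpha component.
x = [1.5 * math.sin(2 * math.pi * 5 * ti) + 0.5 * math.sin(2 * math.pi * 12 * ti)
     for ti in t]

def band_power(x, fs, lo, hi):
    """Sum of normalized DFT power over bins whose frequency lies in [lo, hi]."""
    n = len(x)
    power = 0.0
    for k in range(n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            X = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            power += abs(X) ** 2 / n ** 2
    return power

theta = band_power(x, FS, 4, 6)     # captures the 1.5-amplitude component
alpha = band_power(x, FS, 10, 14)   # captures the 0.5-amplitude component
```

For a real sinusoid of amplitude a at an exact bin, this normalization returns a²/4, so the theta and alpha powers here come out at 0.5625 and 0.0625 respectively.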

  4. Deuteron spin-lattice relaxation in the presence of an activation energy distribution: application to methanols in zeolite NaX.

    PubMed

    Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M

    2013-02-01

    A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. By assuming three parameters for every subsystem, namely the mean activation energy E(0), the distribution width σ and the pre-exponential factor τ(0) of the Arrhenius equation defining the correlation time, the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the grouping of the rates into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E(0), σ and τ(0) are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD(3)OH (200% and 100% loading) and CD(3)OD (200%) in NaX zeolite, and analyzed by the described method between 20 K and 170 K. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
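The core of the model can be sketched numerically: an Arrhenius correlation time with a Gaussian-distributed activation energy yields a distribution of relaxation rates, and averaging the corresponding recoveries is what makes the net magnetization recovery non-exponential. All parameter values below are invented placeholders, not the fitted E(0), σ, τ(0) of the paper:

```python
import math

# Invented placeholder parameters (NOT the paper's fitted values):
E0, SIGMA = 10.0, 2.0        # kJ/mol: mean and width of the E distribution
TAU0 = 1e-12                 # s: Arrhenius prefactor tau(0)
OMEGA = 2 * math.pi * 46e6   # rad/s: assumed deuteron resonance frequency
R = 8.314e-3                 # kJ/(mol K)
K = 1e9                      # assumed overall rate scale

def rate(E, temp):
    """BPP-like relaxation rate for a single activation energy E."""
    tau = TAU0 * math.exp(E / (R * temp))   # Arrhenius correlation time
    return K * tau / (1.0 + (OMEGA * tau) ** 2)

def recovery(t, temp, n=81):
    """Magnetization recovery at time t (s), averaged over a Gaussian
    distribution of activation energies -- non-exponential overall."""
    Es = [E0 + SIGMA * (-3 + 6 * i / (n - 1)) for i in range(n)]
    w = [math.exp(-((E - E0) ** 2) / (2 * SIGMA ** 2)) for E in Es]
    total = sum(w)
    return sum(wi * (1 - math.exp(-rate(E, temp) * t)) for wi, E in zip(w, Es)) / total
```

The paper's fitting loop would then approximate curves like `recovery(t, temp)` by three exponentials and iterate E(0), σ and τ(0) against the measured recoveries at each temperature.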

  5. Complete distributed computing environment for a HEP experiment: experience with ARC-connected infrastructure for ATLAS

    NASA Astrophysics Data System (ADS)

    Read, A.; Taga, A.; O-Saada, F.; Pajchel, K.; Samset, B. H.; Cameron, D.

    2008-07-01

    Computing and storage resources connected by the NorduGrid ARC middleware in the Nordic countries, Switzerland and Slovenia are a part of the ATLAS computing Grid. This infrastructure is being commissioned with the ongoing ATLAS Monte Carlo simulation production in preparation for the commencement of data taking in 2008. The unique non-intrusive architecture of ARC, its straightforward interplay with the ATLAS Production System via the Dulcinea executor, and its performance during the commissioning exercise are described. ARC support for flexible and powerful end-user analysis within the GANGA distributed analysis framework is also shown. Whereas the storage solution for this Grid was earlier based on a large, distributed collection of GridFTP servers, the ATLAS computing design includes a structured SRM-based system with a limited number of storage endpoints. The characteristics, integration and performance of the old and new storage solutions are presented. Although the hardware resources in this Grid are quite modest, it has provided more than double the agreed contribution to the ATLAS production with an efficiency above 95% during long periods of stable operation.

  6. High-Throughput Computing on High-Performance Platforms: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, D; Panitkin, S; Matteo, Turilli

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership computing facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next-generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  7. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc., they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.
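The overreaction mechanism can be reproduced with a toy route-choice simulation (an invented minimal model, not the authors' experimental setup): drivers see only last round's aggregate travel information, and the strength of their reaction to it controls how volatile the road loads become.

```python
import random

random.seed(7)
N, ROUNDS = 100, 300   # drivers and rounds (arbitrary)

def volatility(p_switch):
    """Std. deviation of the load on road A when each driver on the slower
    road switches with probability p_switch, reacting to last round only."""
    choices = [i < N // 2 for i in range(N)]    # True = road A, start 50/50
    loads = []
    for _ in range(ROUNDS):
        n_a = sum(choices)
        a_faster = n_a < N - n_a                # the less-used road is faster
        for i in range(N):
            if choices[i] != a_faster and random.random() < p_switch:
                choices[i] = a_faster
        loads.append(sum(choices))
    mean = sum(loads) / len(loads)
    return (sum((x - mean) ** 2 for x in loads) / len(loads)) ** 0.5

overreaction = volatility(0.9)  # nearly everyone jumps to yesterday's faster road
damped = volatility(0.1)        # only a few switch each round
```

Strong reactions overload whichever road was reported faster, producing large oscillations around the user equilibrium; damped reactions, which is what a good recommendation strategy effectively induces, keep the system close to the 50/50 equilibrium.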

  8. Energy Systems Integration News | Energy Systems Integration Facility |

    Science.gov Websites

    ... distribution feeder models for use in hardware-in-the-loop (HIL) experiments. Using this method, a full feeder ... proposes an additional control loop to improve frequency support while ensuring stable operation. ... "and Frequency Deviation," also proposes an additional control loop, this time to smooth the wind ...

  9. Hypervelocity Launcher for Aerothermodynamic Experiments. Phase 2

    NASA Technical Reports Server (NTRS)

    Scholz, Timothy J.; Bauer, David P.

    1995-01-01

    The capability of an Ultra Distributed Energy Store System (UDESS) powered electromagnetic (EM) launcher is experimentally assessed. The UDESS system was developed specifically to address the velocity limit seen in plasma-armature EM launchers. Metal-armature launch packages were also developed and tested to assess the usefulness of the UDESS concept for low-velocity applications.

  10. Spatiotemporal stick-slip phenomena in a coupled continuum-granular system

    NASA Astrophysics Data System (ADS)

    Ecke, Robert

    In sheared granular media, stick-slip behavior is ubiquitous, especially at very small shear rates and weak drive coupling. The resulting slips are characteristic of natural phenomena such as earthquakes, as well as being a delicate probe of the collective dynamics of the granular system. In that spirit, we developed a laboratory experiment consisting of sheared elastic plates separated by a narrow gap filled with quasi-two-dimensional granular material (bidisperse nylon rods). We directly determine the spatial and temporal distributions of strain displacements of the elastic continuum at over 200 spatial points located adjacent to the gap. Slip events can be divided into large system-spanning events and spatially distributed smaller events. The small events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling and a Poisson-distributed recurrence time. Large events have a broad, log-normal moment distribution and a mean repetition time. As the applied normal force increases, there are fractionally more (fewer) large (small) events, and the large-event moment distribution broadens. The magnitude of the slip motion of the plates is well correlated with the root-mean-square displacements of the granular matter. Our results are consistent with mean-field descriptions of statistical models of earthquakes and avalanches. We further explore the high-speed dynamics of system events and also discuss the effective granular friction of the sheared layer. We find that large events result from stored elastic energy in the plates in this coupled granular-continuum system.
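The reported small-event statistics have a simple operational form: event moments following a p(M) ∝ M^(-3/2) density above some cutoff, and Poisson (exponentially distributed) recurrence times. A sketch that samples both with synthetic numbers and recovers the power-law exponent with a Hill estimator:

```python
import math, random

random.seed(42)
XMIN = 1.0     # moment cutoff (arbitrary units)
ALPHA = 1.5    # the M^(-3/2) exponent reported for small events

def sample_moment():
    """Inverse-transform sample from p(M) ~ M^(-3/2), M >= XMIN."""
    u = random.random()
    return XMIN * (1.0 - u) ** (-1.0 / (ALPHA - 1.0))

events = [sample_moment() for _ in range(20000)]

# Hill estimator recovers the power-law exponent from the sample.
alpha_hat = 1.0 + len(events) / sum(math.log(m / XMIN) for m in events)

# Poisson recurrence: exponential waiting times (assumed rate 0.2 per unit time).
waits = [random.expovariate(0.2) for _ in range(20000)]
mean_wait = sum(waits) / len(waits)
```

The same estimator applied to measured small-event moments is one way to check the M^(-3/2) scaling, while an exponential fit to the waiting times checks the Poisson recurrence claim.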

  11. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the (probability distribution of the) joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
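The annealing idea can be illustrated with a deliberately tiny sketch (not the paper's implementation): each agent keeps an independent distribution over its moves and is repeatedly set to the Boltzmann distribution of its expected team cost, while the temperature is lowered so the product distribution concentrates on an optimal joint pure strategy. The game, schedule, and starting distributions below are all invented:

```python
import math

def boltzmann(costs, T):
    """Distribution proportional to exp(-cost / T) over an agent's moves."""
    w = [math.exp(-c / T) for c in costs]
    s = sum(w)
    return [x / s for x in w]

# Two agents each pick a bit; team cost is 1 if the bits match, else 0.
# Under independent distributions q0, q1, the expected cost for agent 1
# playing bit b is P(agent 0 plays b) = q0[b], and symmetrically for agent 0.
q0, q1 = [0.6, 0.4], [0.5, 0.5]   # slightly asymmetric start breaks the tie
T = 1.0
for _ in range(60):
    q1 = boltzmann(q0, T)          # agent 1 responds to agent 0's distribution
    q0 = boltzmann(q1, T)          # agent 0 responds in turn
    T = max(0.01, 0.9 * T)         # anneal the temperature toward zero

expected_cost = q0[0] * q1[0] + q0[1] * q1[1]
```

As T falls, each agent's distribution sharpens onto its best response, and the product distribution converges to the anti-coordinated pure strategy with expected cost near zero, a miniature version of focusing the MAS on the optimal joint move.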

  12. The Real-Time ObjectAgent Software Architecture for Distributed Satellite Systems

    DTIC Science & Technology

    2001-01-01

    real-time operating system selection are also discussed. The fourth section describes a simple demonstration of real-time ObjectAgent. Finally, the...experience with C++. After selecting the programming language, it was necessary to select a target real-time operating system (RTOS) and embedded...ObjectAgent software to run on the OSE Real-Time Operating System. In addition, she is responsible for the integration of ObjectAgent

  13. Migration impact on load balancing - an experience on Amoeba

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, W.; Socko, P.

    1996-12-31

    Load balancing has been extensively studied by simulation, and positive results were obtained in most of this research. With the increasing availability of distributed systems, a few experiments have been carried out on different systems. These experimental studies depend either on task initiation or on task initiation plus task migration. In this paper, we present the results of a study of load balancing using a centralized policy to manage the load on a set of processors, carried out on an Amoeba system consisting of a set of 386s linked by 10 Mbps Ethernet. The results on one hand indicate the necessity of a load balancing facility for a distributed system. On the other hand, the results question the impact of using process migration to increase system performance under the configuration used in our experiments.

  14. Overview of ATLAS PanDA Workload Management

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  15. Overview of ATLAS PanDA Workload Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maeno T.; De K.; Wenaus T.

    2011-01-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early data-taking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  16. An overview of the EOSDIS V0 information management system: Lessons learned from the implementation of a distributed data system

    NASA Technical Reports Server (NTRS)

    Ryan, Patrick M.

    1994-01-01

    The EOSDIS Version 0 system, released in July, 1994, is a working prototype of a distributed data system. One of the purposes of the V0 project is to take several existing data systems and coordinate them into one system while maintaining the independent nature of the original systems. The project is a learning experience and the lessons are being passed on to the architects of the system which will distribute the data received from the planned EOS satellites. In the V0 system, the data resides on heterogeneous systems across the globe but users are presented with a single, integrated interface. This interface allows users to query the participating data centers based on a wide set of criteria. Because this system is a prototype, we used many novel approaches in trying to connect a diverse group of users with the huge amount of available data. Some of these methods worked and others did not. Now that V0 has been released to the public, we can look back at the design and implementation of the system and also consider some possible future directions for the next generation of EOSDIS.

  17. GEECS (Generalized Equipment and Experiment Control System)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    GONSALVES, ANTHONY; DESHMUKH, AALHAD

    2017-01-12

    GEECS (Generalized Equipment and Experiment Control System) monitors and controls equipment distributed across a network, performs experiments by scanning input variables, and collects and stores various types of data synchronously from devices. Examples of devices include cameras, motors and pressure gauges. GEECS is based upon LabVIEW graphical object-oriented programming (GOOP), allowing for a modular and scalable framework. Data for an arbitrary number of variables is published for subscription over TCP. A secondary framework allows easy development of graphical user interfaces for combined control of any available devices on the control system without the need for programming knowledge. This allows for rapid integration of GEECS into a wide variety of systems. A database interface provides for device and process configuration while allowing the user to save large quantities of data to local or network drives.

  18. Lightning protection of distribution systems

    NASA Astrophysics Data System (ADS)

    Darveniza, M.; Uman, M. A.

    1982-09-01

    Research work on the lightning protection of distribution systems is described. The rationale behind the planning of the first major phase of the work - the field experiments conducted in the Tampa Bay area during August 1978 and July to September 1979 - is explained. The aims of the field work were to characterize lightning in the Tampa Bay area, and to identify the lightning parameters associated with the occurrence of line outages and equipment damage on the distribution systems of the participating utilities. The equipment developed for these studies is fully described. The field work provided: general data on lightning - e.g., electric and magnetic fields of cloud and ground flashes; data from automated monitoring of lightning activity; stroke current waveshapes and peak currents measured at distribution arresters; and line outage and equipment damage on 13 kV networks in the Tampa Bay area. Computer-aided analyses were required to collate and to process the accumulated data. The computer programs developed for this work are described.

  19. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF - is a distributed analysis system which allows exploitation of the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  20. NASA Langley Atmospheric Science Data Center (ASDC) Experience with Aircraft Data

    NASA Astrophysics Data System (ADS)

    Perez, J.; Sorlie, S.; Parker, L.; Mason, K. L.; Rinsland, P.; Kusterer, J.

    2011-12-01

    Over the past decade the NASA Langley ASDC has archived and distributed a variety of aircraft mission data sets. These datasets posed unique challenges for archiving, from the rigidity of the archiving system and formats to the lack of metadata. The ASDC developed a state-of-the-art data archive and distribution system to serve the atmospheric sciences data provider and researcher communities. The system, called Archive - Next Generation (ANGe), is designed with a distributed, multi-tier, service-based, message-oriented architecture enabling new methods for searching, accessing, and customizing data. The ANGe system provides the ease and flexibility to ingest and archive aircraft data through an ad hoc workflow or to develop a new workflow to suit the provider's needs. The ASDC will describe the challenges encountered in preparing aircraft data for archiving and distribution. The ASDC is currently providing guidance to the DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) Earth Venture-1 project on developing collection, granule, and browse metadata as well as supporting the ADAM (Airborne Data For Assessing Models) site.

  1. Summary of Research: Study of Substrates in Microgravity

    NASA Technical Reports Server (NTRS)

    Bingham, Gail E.; Yendler, Boris S.; Kliss, Mark

    1996-01-01

    An upcoming series of joint US-Russian plant experiments will use the granular Substrate Nutrient Delivery System (NDS) equipment developed by Russian and Bulgarian scientists for the Mir Space Station's Svet greenhouse. The purpose of this study was to develop a better understanding of granular substrate water relations and to provide the ability to document water distribution in the Svet NDS during the space experiments. To this end, we conducted a study to expand our understanding of substrate water behavior in granular substrates in microgravity. This report documents the results of our experiments with the Svet substrate water content sensor; explains the results observed in the Svet NDS during the 1990 Greenhouse experiment; describes the development of a miniature version of the Svet-type (heat pulse) sensor that has been used to measure the distribution of water content inside the Svet NDS in space; and documents the calibration of these sensors and measurements conducted in both ground and space experiments.

  2. Human factor engineering based design and modernization of control rooms with new I and C systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larraz, J.; Rejas, L.; Ortega, F.

    2012-07-01

    Instrumentation and Control (I and C) systems of the latest nuclear power plants are based on the use of digital technology, distributed control systems and the integration of information in data networks (Distributed Control and Instrumentation Systems). This has a repercussion on Control Rooms (CRs), where the operations and monitoring interfaces correspond to these systems. These technologies are also used in modernizing I and C systems in currently operative nuclear power plants. The new interfaces provide additional capabilities for operation and supervision, as well as a high degree of flexibility, versatility and reliability. An example of this is the implementation of solutions such as compact stations, high-level supervision screens, overview displays, computerized procedures, new operational support systems or intelligent alarm processing systems in the modernized Man-Machine Interface (MMI). These changes in the MMI are accompanied by newly added Software (SW) controls and new solutions in automation. Tecnatom has been leading various projects in this area for several years, both in Asian countries and in the United States, using in all cases international standards from which Tecnatom's own methodologies have been developed and optimized. The experience acquired in applying this methodology to the design of new control rooms is to a large extent applicable also to the modernization of current control rooms. An adequate design of the interface between the operator and the systems will facilitate safe operation, contribute to the prompt identification of problems and help in the distribution of tasks and communications between the different members of the operating shift. Based on Tecnatom's experience in the field, this article presents the methodological approach used as well as the most relevant aspects of this kind of project. (authors)

  3. Laboratory information management system for membrane protein structure initiative--from gene to crystal.

    PubMed

    Troshin, Petr V; Morris, Chris; Prince, Stephen M; Papiz, Miroslav Z

    2008-12-01

    Membrane Protein Structure Initiative (MPSI) exploits laboratory competencies to work collaboratively and distribute work among the different sites. This is possible as protein structure determination requires a series of steps, starting with target selection, through cloning, expression, purification, crystallization and finally structure determination. Distributed sites create a unique set of challenges for integrating and passing on information on the progress of targets. This role is played by the Protein Information Management System (PIMS), which is a laboratory information management system (LIMS), serving as a hub for MPSI, allowing collaborative structural proteomics to be carried out in a distributed fashion. It holds key information on the progress of cloning, expression, purification and crystallization of proteins. PIMS is employed to track the status of protein targets and to manage constructs, primers, experiments, protocols, sample locations and their detailed histories: thus playing a key role in MPSI data exchange. It also serves as the centre of a federation of interoperable information resources such as local laboratory information systems and international archival resources, like PDB or NCBI. During the challenging task of PIMS integration, within the MPSI, we discovered a number of prerequisites for successful PIMS integration. In this article we share our experiences and provide invaluable insights into the process of LIMS adaptation. This information should be of interest to partners who are thinking about using LIMS as a data centre for their collaborative efforts.

  4. Spatial Distribution of Fate and Transport Parameters Using Cxtfit in a Karstified Limestone Model

    NASA Astrophysics Data System (ADS)

    Toro, J.; Padilla, I. Y.

    2017-12-01

    Karst environments have a high capacity to transport and store large amounts of water. This makes karst aquifers a productive resource for human consumption and ecological integrity, but also makes them vulnerable to potential contamination by hazardous chemical substances. The high heterogeneity and anisotropy of karst aquifer properties make them very difficult to characterize for accurate prediction of contaminant mobility and persistence in groundwater. Current technologies to characterize and quantify flow and transport processes at field scale are limited by the low resolution of spatiotemporal data. To enhance this resolution and provide the essential knowledge of karst groundwater systems, studies at laboratory scale can be conducted. This work uses an intermediate karstified lab-scale physical model (IKLPM) to study fate and transport processes and assess viable tools to characterize heterogeneities in karst systems. Transport experiments are conducted in the IKLPM using step injections of calcium chloride, uranine, and rhodamine WT tracers. Temporal concentration distributions (TCDs) obtained from the experiments are analyzed using the method of moments and CXTFIT to quantify fate and transport parameters in the system at various flow rates. The spatial distribution of the estimated fate and transport parameters for the tracers revealed high variability related to preferential flow heterogeneities and scale dependence. Results are integrated to define spatially-variable transport regions within the system and assess their fate and transport characteristics.
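    The method of temporal moments mentioned above can be sketched as follows. This is an illustrative implementation (not the authors' code, and separate from CXTFIT itself): for a pulse breakthrough curve C(t) measured at travel distance L, the zeroth, first, and second moments give the mean arrival time and spread, from which a mean velocity and a small-variance dispersion estimate follow.

```python
import numpy as np

def temporal_moments(t, c, L):
    """Moment analysis of a breakthrough curve on a uniform time grid."""
    dt = t[1] - t[0]
    m0 = c.sum() * dt                              # zeroth moment (mass)
    mean_t = (t * c).sum() * dt / m0               # mean arrival time
    var_t = ((t - mean_t) ** 2 * c).sum() * dt / m0  # temporal variance
    v = L / mean_t                                 # mean pore-water velocity
    D = var_t * v**3 / (2 * L)                     # dispersion (small-variance approx.)
    return mean_t, var_t, v, D

# Synthetic Gaussian-like breakthrough curve centered at t = 50 h.
t = np.linspace(0.0, 100.0, 2001)
c = np.exp(-((t - 50.0) ** 2) / (2 * 5.0**2))
mean_t, var_t, v, D = temporal_moments(t, c, 1.0)  # L = 1 m (hypothetical)
print(round(mean_t, 1), round(var_t, 1))  # → 50.0 25.0
```

    Fitting CXTFIT to the same curve would instead solve the advection-dispersion equation directly; the moment estimates are the usual quick first pass.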

  5. A Simple and Automatic Method for Locating Surgical Guide Hole

    NASA Astrophysics Data System (ADS)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method of automatically locating the surgical guide hole, which can reduce reliance on operator experience and improve the design efficiency and quality of surgical guides. Few publications can be found on this topic, and the paper proposes a novel and simple method to solve this problem. In this paper, a local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system well represents dental anatomical features, and the center axis of the objective tooth (coinciding with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified by comparing two types of benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and an automatic approach using the popular commercial package Simplant (used in few hospitals). Both the benchmarks and the proposed method are analyzed for their stress distributions under chewing and biting loads. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method has a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.
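    One common way to extract a center axis from a tooth surface is principal component analysis of the surface points; the following is a hedged stand-in for the paper's anatomical construction (which is not specified in the abstract), showing only the general idea that the axis of an elongated point cloud is the leading eigenvector of its covariance matrix.

```python
import numpy as np

def principal_axis(points):
    """Leading principal axis of an (N, 3) point cloud."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvecs[:, -1]                   # eigenvector of largest eigenvalue

# Synthetic "tooth": a point cloud stretched 10x along z.
rng = np.random.default_rng(1)
pts = rng.normal(size=(1000, 3)) * np.array([1.0, 1.0, 10.0])
axis = principal_axis(pts)
print(abs(axis[2]) > 0.99)  # → True: recovered axis aligns with z
```

    A real anatomical coordinate system would also anchor the axis to landmarks (e.g. the cervical margin), which pure PCA does not do.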

  6. Distribution of Ejecta in Analog Tephra Rings from Discrete Single and Multiple Subsurface Explosions

    NASA Astrophysics Data System (ADS)

    Graettinger, A. H.; Valentine, G. A.; Sonder, I.; Ross, P. S.; White, J. D. L.

    2015-12-01

    Buried-explosion experiments were used to investigate the spatial and volumetric distribution of extra-crater ejecta resulting from a range of explosion configurations with and without a crater present. Explosion configuration is defined in terms of scaled depth, the relationship between depth of burial and the cube root of explosion energy, where an optimal scaled depth explosion produces the largest crater diameter for a given energy. The multiple explosion experiments provide an analog for the formation of maar-diatreme ejecta deposits and the deposits of discrete explosions through existing conduits and hydrothermal systems. Experiments produced meter-sized craters with ejecta distributed between three major facies based on morphology and distance from the crater center. The proximal deposits form a constructional steep-sided ring that extends no more than two-times the crater radius away from center. The medial deposits form a low-angle continuous blanket that transitions with distance into the isolated clasts of the distal ejecta. Single explosion experiments produce a trend of increasing volume proportion of proximal ejecta as scaled depth increases (from 20-90% vol.). Multiple explosion experiments are dominated by proximal deposits (>90% vol.) for all but optimal scaled depth conditions (40-70% vol.). In addition to scaled depth, the presence of a crater influences jet shape and how the jet collapses, resulting in two end-member depositional mechanisms that produce distinctive facies. The experiments use one well-constrained explosion mechanism and, consequently, the variations in depositional facies and distribution are the result of conditions independent of that mechanism. Previous interpretations have invoked variations in fragmentation as the cause of this variability, but these experiments should help with a more complete reconstruction of the configuration and number of explosions that produce a tephra ring.
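    The scaled-depth definition used above is simple enough to state directly; this is a minimal sketch of the relationship as the abstract defines it (burial depth divided by the cube root of explosion energy), with illustrative numbers only.

```python
def scaled_depth(depth_m, energy_j):
    """Scaled depth of burial: depth / energy^(1/3), in m / J^(1/3)."""
    return depth_m / energy_j ** (1.0 / 3.0)

# Hypothetical example: a 1 kJ charge buried at 0.5 m.
print(round(scaled_depth(0.5, 1000.0), 3))  # → 0.05
```

    The "optimal" scaled depth is then the value of this quantity that maximizes crater diameter for a given energy, which the experiments determine empirically.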

  7. Towards Information Enrichment through Recommendation Sharing

    NASA Astrophysics Data System (ADS)

    Weng, Li-Tung; Xu, Yue; Li, Yuefeng; Nayak, Richi

    Nowadays most existing recommender systems operate on a single-organisation basis, i.e. a recommender system recommends items to customers of one organisation based on that organisation's datasets only. Very often the datasets of a single organisation are not sufficient to generate quality recommendations. Therefore, it would be beneficial if recommender systems of different organisations with a similar nature could cooperate to share their resources and recommendations. In this chapter, we present an Ecommerce-oriented Distributed Recommender System (EDRS) that consists of multiple recommender systems from different organisations. By sharing resources and recommendations with each other, the recommenders in the distributed recommendation system can provide a better recommendation service to their users. As in most distributed systems, peer selection is an important aspect. This chapter also presents a recommender selection technique for the proposed EDRS, which selects and profiles recommenders based on their stability, average performance and selection frequency. Our experiments show that recommendation quality can be effectively improved by adopting the proposed EDRS and the associated peer selection technique.
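    The peer-selection criteria named above (stability, average performance, selection frequency) can be combined into a simple score. The following is an illustrative sketch only; the chapter's exact profiling formula and weights are not given in the abstract, so the scoring function and peer data here are hypothetical.

```python
import numpy as np

def score(perf_history, times_selected, w=(1.0, 0.5, 0.1)):
    """Rank a peer recommender by performance, stability, and novelty."""
    perf = np.mean(perf_history)                 # average performance
    stability = 1.0 / (1.0 + np.var(perf_history))  # low variance -> stable
    novelty = 1.0 / (1.0 + times_selected)       # damp over-selected peers
    return w[0] * perf + w[1] * stability + w[2] * novelty

peers = {
    "A": ([0.8, 0.82, 0.79], 10),  # good and stable, often used
    "B": ([0.9, 0.4, 0.85], 2),    # good on average but erratic
    "C": ([0.7, 0.71, 0.69], 0),   # modest, very stable, never used
}
best = max(peers, key=lambda k: score(*peers[k]))
print(best)  # → A
```

    The weights control the trade-off: raising the novelty weight would spread requests across peers, while raising the stability weight penalizes erratic recommenders like B.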

  8. Fault Injection and Monitoring Capability for a Fault-Tolerant Distributed Computation System

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo; Yates, Amy M.; Malekpour, Mahyar R.

    2010-01-01

    The Configurable Fault-Injection and Monitoring System (CFIMS) is intended for the experimental characterization of effects caused by a variety of adverse conditions on a distributed computation system running flight control applications. A product of research collaboration between NASA Langley Research Center and Old Dominion University, the CFIMS is the main research tool for generating actual fault response data with which to develop and validate analytical performance models and design methodologies for the mitigation of fault effects in distributed flight control systems. Rather than a fixed design solution, the CFIMS is a flexible system that enables the systematic exploration of the problem space and can be adapted to meet the evolving needs of the research. The CFIMS has the capabilities of system-under-test (SUT) functional stimulus generation, fault injection and state monitoring, all of which are supported by a configuration capability for setting up the system as desired for a particular experiment. This report summarizes the work accomplished so far in the development of the CFIMS concept and documents the first design realization.

  9. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  10. [Effect on iron release in drinking water distribution systems].

    PubMed

    Niu, Zhang-bin; Wang, Yang; Zhang, Xiao-jian; Chen, Chao; Wang, Sheng-hui

    2007-10-01

    Batch-scale experiments were done to quantitatively study the effect of inorganic chemical parameters on iron release in drinking water distribution systems. The parameters include acid-base condition, oxidation-reduction condition, and neutral ion condition. It was found that the iron release rate decreased as pH, alkalinity, and dissolved-oxygen concentration increased, and increased as chloride concentration increased. The theoretical critical formula of the iron release rate was elucidated. According to the formula, the necessary condition for controlling iron release is that pH is above 7.6, alkalinity is above 150 mg/L, dissolved oxygen is above 2 mg/L, and chloride concentration is below 150 mg/L in the distributed water.

  11. Study of the zinc-silver oxide battery system

    NASA Technical Reports Server (NTRS)

    Nanis, L.

    1973-01-01

    Theoretical and experimental models for the evaluation of current distribution in flooded, porous electrodes are discussed. An approximation for the local current distribution function was derived for conditions of a linear overpotential, a uniform concentration, and a very conductive matrix. By considering the porous electrode to be an analog of chemical catalyst structures, a dimensionless performance parameter was derived from the approximated current distribution function. In this manner the electrode behavior was characterized in terms of an electrochemical Thiele parameter and an effectiveness factor. It was shown that the electrochemical engineering approach makes possible the organization of theoretical descriptions and of practical experience in the form of dimensionless parameters, such as the electrochemical Thiele parameter, and hence provides useful information for the design of new electrochemical systems.
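    The catalyst analogy invoked above has a classical closed form worth recalling: for a planar porous slab, the effectiveness factor is eta = tanh(phi)/phi, where phi is the Thiele modulus. This sketch uses that textbook catalysis result as an illustration; the abstract's electrochemical Thiele parameter plays the same role for a flooded porous electrode, though its exact definition differs by system.

```python
import numpy as np

def effectiveness(phi):
    """Classical planar-slab effectiveness factor, eta = tanh(phi)/phi."""
    return np.tanh(phi) / phi

# Small phi: reaction-limited, the whole electrode works (eta -> 1).
# Large phi: transport-limited, only a thin layer works (eta -> 1/phi).
for phi in (0.1, 1.0, 10.0):
    print(phi, round(float(effectiveness(phi)), 3))
```

    The practical reading is the same as in catalysis: a high Thiele parameter means much of the porous electrode's interior surface is wasted, which guides electrode thickness and porosity design.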

  12. AGIS: The ATLAS Grid Information System

    NASA Astrophysics Data System (ADS)

    Anisenkov, Alexey; Belov, Sergey; Di Girolamo, Alessandro; Gayazov, Stavro; Klimentov, Alexei; Oleynik, Danila; Senchenko, Alexander

    2012-12-01

    ATLAS is a particle physics experiment at the Large Hadron Collider at CERN. The experiment produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS Computing model embraces the Grid paradigm and a high degree of decentralization, with computing resources able to meet ATLAS requirements for petabyte-scale data operations. In this paper we present the ATLAS Grid Information System (AGIS), designed to integrate configuration and status information about the resources, services and topology of the whole ATLAS Grid needed by ATLAS Distributed Computing applications and services.

  13. Polarized structure functions in a constituent quark scenario

    NASA Astrophysics Data System (ADS)

    Scopetta, Sergio; Vento, Vicente; Traini, Marco

    1998-12-01

    Using a simple picture of the constituent quark as a composite system of point-like partons, we construct the polarized parton distributions by a convolution between constituent quark momentum distributions and constituent quark structure functions. Using unpolarized data to fix the parameters, we achieve good agreement with the polarization experiments for the proton, but not for the neutron. By relaxing our assumptions for the sea distributions, we define new quark functions for the polarized case, which reproduce the proton data well and are in better agreement with the neutron data. When our results are compared with similar calculations using non-composite constituent quarks, the present scheme's agreement with experiment is impressive. We conclude that, also in the polarized case, DIS data are consistent with a low-energy scenario dominated by composite constituents of the nucleon.
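    The convolution structure described above has the generic form q(x) = \int_x^1 (dz/z) Phi(z) q0(x/z), where Phi is the constituent-quark momentum distribution in the nucleon and q0 the parton distribution inside the constituent quark. The sketch below evaluates such a convolution numerically with toy functions; both Phi and q0 here are hypothetical stand-ins, not the paper's fitted distributions.

```python
import numpy as np

def convolve(phi, q0, x, n=4000):
    """Numerically evaluate q(x) = \\int_x^1 (dz/z) phi(z) q0(x/z)."""
    z = np.linspace(x, 1.0, n)
    dz = z[1] - z[0]
    return float(np.sum(phi(z) * q0(x / z) / z) * dz)

phi = lambda z: 6.0 * z * (1.0 - z)        # toy momentum distribution
q0 = lambda y: (1.0 - y) ** 3 / y ** 0.5   # toy constituent structure function
val = convolve(phi, q0, 0.3)
print(val > 0.0)
```

    In the paper the same machinery is applied separately to polarized and unpolarized densities, with the unpolarized data fixing the free parameters.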

  14. An Overview of NASA's SubsoniC Research Aircraft Testbed (SCRAT)

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Hernandez, Joe; Ruhf, John

    2013-01-01

    National Aeronautics and Space Administration Dryden Flight Research Center acquired a Gulfstream III (GIII) aircraft to serve as a testbed for aeronautics flight research experiments. The aircraft is referred to as SCRAT, which stands for SubsoniC Research Aircraft Testbed. The aircraft's mission is to perform aeronautics research; more specifically raising the Technology Readiness Level (TRL) of advanced technologies through flight demonstrations and gathering high-quality research data suitable for verifying the technologies, and validating design and analysis tools. The SCRAT has the ability to conduct a range of flight research experiments throughout a transport class aircraft's flight envelope. Experiments ranging from flight-testing of a new aircraft system or sensor to those requiring structural and aerodynamic modifications to the aircraft can be accomplished. The aircraft has been modified to include an instrumentation system and sensors necessary to conduct flight research experiments along with a telemetry capability. An instrumentation power distribution system was installed to accommodate the instrumentation system and future experiments. An engineering simulation of the SCRAT has been developed to aid in integrating research experiments. A series of baseline aircraft characterization flights has been flown that gathered flight data to aid in developing and integrating future research experiments. This paper describes the SCRAT's research systems and capabilities.

  15. An Overview of NASA's Subsonic Research Aircraft Testbed (SCRAT)

    NASA Technical Reports Server (NTRS)

    Baumann, Ethan; Hernandez, Joe; Ruhf, John C.

    2013-01-01

    National Aeronautics and Space Administration Dryden Flight Research Center acquired a Gulfstream III (GIII) aircraft to serve as a testbed for aeronautics flight research experiments. The aircraft is referred to as SCRAT, which stands for SubsoniC Research Aircraft Testbed. The aircraft's mission is to perform aeronautics research; more specifically raising the Technology Readiness Level (TRL) of advanced technologies through flight demonstrations and gathering high-quality research data suitable for verifying the technologies, and validating design and analysis tools. The SCRAT has the ability to conduct a range of flight research experiments throughout a transport class aircraft's flight envelope. Experiments ranging from flight-testing of a new aircraft system or sensor to those requiring structural and aerodynamic modifications to the aircraft can be accomplished. The aircraft has been modified to include an instrumentation system and sensors necessary to conduct flight research experiments along with a telemetry capability. An instrumentation power distribution system was installed to accommodate the instrumentation system and future experiments. An engineering simulation of the SCRAT has been developed to aid in integrating research experiments. A series of baseline aircraft characterization flights has been flown that gathered flight data to aid in developing and integrating future research experiments. This paper describes the SCRAT's research systems and capabilities.

  16. Electrical system options for space exploration

    NASA Technical Reports Server (NTRS)

    Bercaw, Robert W.; Cull, Ronald C.

    1991-01-01

    The need for a space power utility concept is discussed and the impact of this concept on the engineering of space power systems is examined. Experiences gained from Space Station Freedom and SEI systems studies are used to discuss the factors that may affect the choice of frequency standards on which to build such a space power utility. Emphasis is given to electrical power control, conditioning, and distribution subsystems.

  17. Compact transmission system using single-sideband modulation of light for quantum cryptography.

    PubMed

    Duraffourg, L; Merolla, J M; Goedgebuer, J P; Mazurenko, Y; Rhodes, W T

    2001-09-15

    We report a new transmission system that can be used for quantum key distribution. The system uses single-sideband-modulated light in an implementation of the BB84 quantum cryptography protocol. It is formed by two integrated unbalanced Mach-Zehnder interferometers and is based on interference between phase-modulated sidebands in the spectral domain. Experiments show that high interference visibility can be obtained.

  18. Characterizing Feedbacks Between Environmental Forcing and Sediment Characteristics in Fluvial and Coastal Systems

    NASA Astrophysics Data System (ADS)

    Feehan, S.; Ruggiero, P.; Hempel, L. A.; Anderson, D. L.; Cohn, N.

    2016-12-01

    Presented at the American Geophysical Union 2016 Fall Meeting, San Francisco, CA. Linking transport processes and sediment characteristics within different environments along the source-to-sink continuum provides critical insight into the dominant feedbacks between grain size distributions and morphological evolution. This research evaluates differences in sediment size distributions across both fluvial and coastal environments in the U.S. Pacific Northwest. The Cascades' high relief is characterized by diverse flow regimes, with high-peak/flashy flows and sub-threshold flows occurring in relative proximity, and by one of the most energetic wave climates in the world. Combining analyses of both fluvial and coastal environments provides a broader understanding of the dominant forces driving differences between each system's grain size distributions, sediment transport processes, and resultant evolution. We consider sediment samples taken during a large-scale flume experiment that simulated floods representative of both high/flashy peak flows, analogous to runoff-dominated rivers, and sub-threshold flows, analogous to spring-fed rivers. High-discharge flows resulted in narrower grain size distributions, while low flows were less skewed. Relative sediment size showed clear dependence on distance from source and on the environment's dominant fluid motion. Grain size distributions and sediment transport rates were also quantified in both the wave-dominated nearshore and the aeolian-dominated backshore portions of Long Beach Peninsula, Washington during SEDEX2, the Sandbar-aEolian-Dune EXchange Experiment of summer 2016. The distributions showed spatial patterns in mean grain size, skewness, and kurtosis dependent on the dominant sediment transport process. The feedback between these grain size distributions and the predominant driver of sediment transport controls the potential for geomorphic change on societally relevant time scales in multiple settings.
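
The moment measures mentioned above (mean grain size, skewness, kurtosis) can be illustrated with a short sketch. This is not the authors' analysis; the sample values and the phi-unit framing are purely illustrative assumptions showing how adding a fine-grained tail shifts the mean and skewness of a grain-size distribution.

```python
import numpy as np

# Illustrative grain-size samples in phi units (hypothetical data).
rng = np.random.default_rng(1)
phi = rng.normal(2.0, 0.5, 5000)                               # well-sorted sand
phi_skewed = np.concatenate([phi, rng.normal(3.5, 0.3, 800)])  # plus a fine tail

def moments(x):
    """Mean, std, skewness, and excess kurtosis of a sample."""
    m, s = x.mean(), x.std()
    skew = ((x - m) ** 3).mean() / s ** 3
    kurt = ((x - m) ** 4).mean() / s ** 4 - 3.0
    return m, s, skew, kurt

m_base = moments(phi)
m_tail = moments(phi_skewed)
# The fine-grained tail increases both the mean phi and the skewness.
```

Comparing `m_base` and `m_tail` shows how a secondary transport process that adds fine material leaves a signature in exactly the moments the abstract tracks.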

  19. Seasat-A ASVT: Commercial demonstration experiments. Results analysis methodology for the Seasat-A case studies

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The SEASAT-A commercial demonstration program ASVT is described. The program consists of a set of experiments evaluating a real-time data distribution system, the SEASAT-A user data distribution system, which provided near-real-time dissemination of ocean-condition and weather data products from the U.S. Navy Fleet Numerical Weather Central to a selected set of commercial and industrial users, together with case studies performed by those users with data gathered by SEASAT-A during its operational life. The impact of the SEASAT-A data on business operations is evaluated by the commercial and industrial users. The approach followed in performing the case studies is described, along with the methodology used to analyze and integrate the case study results to estimate the actual and potential economic benefits of improved ocean-condition and weather forecast data.

  20. A deployment of fine-grained sensor network and empirical analysis of urban temperature.

    PubMed

    Thepvilojanapong, Niwat; Ono, Takahiro; Tobe, Yoshito

    2010-01-01

    Temperature in an urban area exhibits a complicated pattern due to the complexity of the infrastructure. Despite geographical proximity, the structures of groups of buildings and streets affect changes in temperature. To investigate the pattern of fine-grained distribution of temperature, we installed a densely distributed sensor network called UScan. In this paper, we describe the system architecture of UScan as well as lessons learned from installing 200 sensors in downtown Tokyo. The UScan field experiment operated for two months to collect long-term urban temperature data. To analyze the collected data efficiently, we propose a lightweight clustering methodology to study the correlation between temperature patterns and various environmental factors, including the amount of sunshine, the width of streets, and the presence of trees. The analysis reveals meaningful results and asserts the necessity of fine-grained deployment of sensors in an urban area.
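
The abstract does not specify the clustering method used, so the following is only a hedged sketch of the general idea: grouping sensors by a simple feature (here, mean observed temperature) with a minimal 1-D k-means. All sensor counts and temperature values are invented for illustration.

```python
import numpy as np

# Hypothetical mean temperatures for 200 urban sensors in three street types.
rng = np.random.default_rng(3)
temps = np.concatenate([
    rng.normal(31.0, 0.6, 80),   # wide, sunny streets
    rng.normal(29.0, 0.5, 70),   # narrow, shaded streets
    rng.normal(27.5, 0.4, 50),   # tree-lined streets
])

def kmeans_1d(x, k, iters=50):
    """Minimal 1-D k-means: quantile init, then alternate assign/update."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

centers, labels = kmeans_1d(temps, 3)
# Clusters should recover the three street-type temperature regimes.
```

A real deployment would cluster on richer features (sunshine, street width, tree cover), but the grouping step itself can be this lightweight.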

  1. Network-based reading system for lung cancer screening CT

    NASA Astrophysics Data System (ADS)

    Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio

    2006-03-01

    This research aims to support chest computed tomography (CT) medical checkups in order to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, with a secure transmission function and a cooperative reading environment, called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if it is used over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with attention to the human-machine interface and security. It consists of data entry terminals, a database server, a computer-aided diagnosis (CAD) system, and several reading terminals. It uses a secure Digital Imaging and Communications in Medicine (DICOM) encryption method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), the testbed for the Japanese next-generation network, and conducted verification experiments on secure screening image distribution, several kinds of data addition, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.

  2. Commissioning and Operational Experience with 1 kW Class Helium Refrigerator/Liquefier for SST-1

    NASA Astrophysics Data System (ADS)

    Dhard, C. P.; Sarkar, B.; Misra, Ruchi; Sahu, A. K.; Tanna, V. L.; Tank, J.; Panchal, P.; Patel, J. C.; Phadke, G. D.; Saxena, Y. C.

    2004-06-01

    The helium refrigerator/liquefier (R/L) for the Steady State Superconducting Tokamak (SST-1) has been developed to very stringent specifications for its different operational modes. The total refrigeration capacity is 650 W at 4.5 K, with a liquefaction capacity of 200 l/h. A cold circulation pump provides forced-flow cooling of the superconducting magnet system (SCMS) with 300 g/s of supercritical helium (SHe). The R/L has also been designed to absorb a 200 W transient heat load from the SCMS. The plant consists of a compressor station, an oil removal system, an on-line purifier, a Main Control Dewar (MCD) with associated heat exchangers, a cold circulation pump, and a warm gas management system. An Integrated Flow Control and Distribution System (IFDCS) has been designed, fabricated, and installed to distribute SHe to the toroidal and poloidal field coils as well as liquid helium for cooling of 10 pairs of current leads. A SCADA-based control system using PLCs has been designed for the R/L as well as the IFDCS. The R/L has been commissioned, and the required parameters were achieved, conforming to the process specifications. All test results and commissioning experiences are discussed in this paper.

  3. Reformulation of a clinical-dose system for carbon-ion radiotherapy treatment planning at the National Institute of Radiological Sciences, Japan

    NASA Astrophysics Data System (ADS)

    Inaniwa, Taku; Kanematsu, Nobuyuki; Matsufuji, Naruhiro; Kanai, Tatsuaki; Shirai, Toshiyuki; Noda, Koji; Tsuji, Hiroshi; Kamada, Tadashi; Tsujii, Hirohiko

    2015-04-01

    At the National Institute of Radiological Sciences (NIRS), more than 8,000 patients have been treated for various tumors with carbon-ion (C-ion) radiotherapy over the past 20 years based on a radiobiologically defined clinical-dose system. Through clinical experience, including extensive dose-escalation studies, optimum dose-fractionation protocols have been established for the respective tumors, which may be considered the standards in C-ion radiotherapy. Although the therapeutic appropriateness of the clinical-dose system has been widely demonstrated by clinical results, the system incorporates several oversimplifications, such as a dose-independent relative biological effectiveness (RBE), an empirical nuclear fragmentation model, and the use of dose-averaged linear energy transfer to represent the spectrum of particles. We took the opportunity to update the clinical-dose system when we started clinical treatment with pencil beam scanning, a new beam delivery method, in 2011. The requirements for the updated system were to correct the oversimplifications made in the original system while harmonizing with it to maintain the established dose-fractionation protocols. In the updated system, the radiation quality of the therapeutic C-ion beam was derived with Monte Carlo simulations, and its biological effectiveness was predicted with a theoretical model. We selected the most used C-ion beam, with αr = 0.764 Gy^-1 and β = 0.0615 Gy^-2, as the reference radiation for RBE. The C-equivalent biological dose distribution is designed to yield the prescribed survival of human salivary gland (HSG) tumor cells over the entire spread-out Bragg peak (SOBP) region, taking into account the dose dependence of the RBE. This C-equivalent biological dose distribution is scaled to a clinical dose distribution to harmonize with our clinical experience with C-ion radiotherapy. Treatment plans were made with the original and the updated clinical-dose systems, and both physical and clinical dose distributions were compared with regard to the prescribed dose level, beam energy, and SOBP width. Both systems provided uniform clinical dose distributions within the targets, consistent with the prescriptions. The mean physical doses delivered to targets by the updated system agreed with the doses from the original system within ±1.5% for all tested conditions. The updated system reflects the physical and biological characteristics of the therapeutic C-ion beam more accurately than the original system, while allowing the continued use of the dose-fractionation protocols established with the original system at NIRS.

  4. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing.

    PubMed

    Ölçer, İbrahim; Öncü, Ahmet

    2017-06-05

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is widely used in several applications. However, one of the main challenges in coherent-detection-based ϕ-OTDR systems is fading noise, which impacts detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high-frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of the algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that SNR improvements of more than 10 dB can be achieved without any reduction in system bandwidth and without additional optical amplifier stages in the hardware. We believe that the proposed adaptive processing approach can be effectively used to develop fiber-optic-based distributed acoustic vibration sensing systems.

  5. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing

    PubMed Central

    Ölçer, İbrahim; Öncü, Ahmet

    2017-01-01

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is widely used in several applications. However, one of the main challenges in coherent-detection-based ϕ-OTDR systems is fading noise, which impacts detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high-frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of the algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that SNR improvements of more than 10 dB can be achieved without any reduction in system bandwidth and without additional optical amplifier stages in the hardware. We believe that the proposed adaptive processing approach can be effectively used to develop fiber-optic-based distributed acoustic vibration sensing systems. PMID:28587240
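
The SNR-maximization principle behind matched filtering can be sketched briefly. This is not the paper's adaptive ϕ-OTDR algorithm; it only illustrates the textbook result that, for white noise, the FIR filter maximizing output peak SNR is the time-reversed, unit-norm signal template. The template and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
template = np.sin(2 * np.pi * t / 16) * np.hanning(n)  # assumed vibration template
sigma = 2.0                                            # assumed white-noise std
trace = template + rng.normal(0.0, sigma, n)           # noisy trace

# Matched filter: time-reversed template, normalized to unit energy, so the
# output noise variance for white input noise stays sigma^2.
h = template[::-1] / np.linalg.norm(template)
filtered = np.convolve(trace, h, mode="same")

# Deterministic SNR gain: peak of the noise-free filter response versus the
# peak of the noise-free input, with unchanged noise variance.
peak_in = np.max(template ** 2)
peak_out = np.max(np.convolve(template, h, mode="same") ** 2)
gain_db = 10 * np.log10(peak_out / peak_in)
```

For this template the gain exceeds 10 dB with no bandwidth penalty at the matched lag, which mirrors the kind of improvement the abstract reports.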

  6. The AMIDAS Website: An Online Tool for Direct Dark Matter Detection Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Chung-Lin

    2010-02-10

    Following our long-term work on the development of model-independent data analysis methods for reconstructing the one-dimensional velocity distribution function of halo WIMPs, as well as for determining their mass and couplings on nucleons directly from data from direct Dark Matter detection experiments, we have combined the simulation programs into a compact system: AMIDAS (A Model-Independent Data Analysis System). For users' convenience, an online system has also been established. AMIDAS is able to perform full Monte Carlo simulations and faster theoretical estimations, as well as to analyze (real) data sets recorded in direct detection experiments, without modifying the source code. In this article, I give an overview of the functions of the AMIDAS code based on the use of its website.

  7. A user-oriented synthetic workload generator

    NASA Technical Reports Server (NTRS)

    Kao, Wei-Lun

    1991-01-01

    A user-oriented synthetic workload generator that simulates users' file access behavior based on real workload characterization is described. The model for this workload generator is user-oriented and job-specific, represents file I/O operations at the system-call level, allows general distributions for the usage measures, and assumes independence in the file I/O operation stream. The workload generator consists of three parts, which handle specification of distributions, creation of an initial file system, and selection and execution of file I/O operations. Experiments on SUN NFS demonstrate the use of the workload generator.
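
The three-part structure described above can be sketched as follows. This is a hedged illustration, not the described generator: the operation mix, the log-normal size distribution, and all names are assumptions standing in for "general distributions" and independently drawn system-call-level operations.

```python
import random

rng = random.Random(42)

# (1) Specify distributions: operation mix and a request-size distribution.
op_mix = {"open": 0.20, "read": 0.45, "write": 0.25, "close": 0.10}

def sample_size():
    # Log-normal request sizes in bytes (illustrative parameters).
    return max(1, int(rng.lognormvariate(8, 1)))

# (2) Create an initial (simulated) file system.
files = [f"file_{i}" for i in range(10)]

# (3) Select operations independently, per the model's independence assumption.
def next_op():
    r = rng.random()
    cum = 0.0
    for op, p in op_mix.items():
        cum += p
        if r < cum:
            return op
    return op  # guard against floating-point rounding

trace = [(next_op(), rng.choice(files), sample_size()) for _ in range(1000)]
read_frac = sum(1 for op, _, _ in trace if op == "read") / len(trace)
```

A real generator would issue the sampled operations as actual system calls against the created file system; here the trace itself stands in for that execution step.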

  8. A digitally implemented communications experiment utilizing the Hermes (CTS) satellite

    NASA Technical Reports Server (NTRS)

    Jackson, H. D.; Fiala, J. L.

    1977-01-01

    The Hermes (CTS) experiment program made possible a significant effort directed toward new developments which will reduce the costs associated with the distribution of satellite services. Advanced satellite transponder technology and small inexpensive earth terminals were demonstrated as part of the Hermes program. Another system element that holds promise for reduced transmission cost is associated with the communication link implementation. An experiment is described which uses CTS to demonstrate digital link implementation and its advantages over conventional analog systems. A Digitally Implemented Communications experiment which demonstrates the flexibility and efficiency of digital transmission of television video and audio, telephone voice and high-bit-rate data is also described. Presentation of the experiment concept which concentrates on the evaluation of full-duplex digital television in the teleconferencing environment is followed by a description of unique equipment that was developed.

  9. Built But Not Used, Needed But Not Built: Ground System Guidance Based On Cassini-Huygens Experience

    NASA Technical Reports Server (NTRS)

    Larsen, Barbara S.

    2006-01-01

    These reflections share insight gleaned from Cassini-Huygens experience in supporting uplink operations tasks with software. Of particular interest are developed applications that were not widely adopted and tasks for which the appropriate application was not planned. After several years of operations, tasks are better understood providing a clearer picture of the mapping of requirements to applications. The impact on system design of the changing user profile due to distributed operations and greater participation of scientists in operations is also explored. Suggestions are made for improving the architecture, requirements, and design of future systems for uplink operations.

  10. Time-resolved photoion imaging spectroscopy: Determining energy distribution in multiphoton absorption experiments

    NASA Astrophysics Data System (ADS)

    Qian, D. B.; Shi, F. D.; Chen, L.; Martin, S.; Bernard, J.; Yang, J.; Zhang, S. F.; Chen, Z. Q.; Zhu, X. L.; Ma, X.

    2018-04-01

    We propose an approach to determine the excitation-energy distribution due to multiphoton absorption in the case of excited systems whose decays produce different ion species. The approach is based on measuring the time-resolved photoion position spectrum using velocity map imaging spectrometry and an unfocused laser beam with a low fluence and a homogeneous profile. Such a measurement allows us to identify the species and origin of each detected ion and to describe the energy distribution with a pure Poisson distribution involving only one variable, which is proportional to the absolute photon absorption cross section. A cascade decay model is used to build direct connections between the energy distribution and the probability of detecting each ionic species. Comparison between experiments and simulations permits the energy distribution, and accordingly the absolute photon absorption cross section, to be determined. The approach is illustrated using C60 as an example; it may be extended to a wide variety of molecules and clusters having decay mechanisms similar to those of fullerene molecules.
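
The one-variable Poisson description can be made concrete with a short sketch. For a low-fluence, homogeneous, unfocused beam, the number of absorbed photons per molecule is Poisson-distributed with mean μ = σF (cross section times photon fluence); the numerical values below are illustrative assumptions, not C60 data.

```python
import math

def absorption_probability(k, sigma, fluence):
    """P(k photons absorbed), Poisson with mean mu = sigma * fluence."""
    mu = sigma * fluence
    return math.exp(-mu) * mu ** k / math.factorial(k)

# Illustrative (hypothetical) values: mu = sigma * fluence = 2 photons on average.
sigma = 1e-17    # cm^2, assumed absorption cross section
fluence = 2e17   # photons / cm^2, assumed pulse fluence
probs = [absorption_probability(k, sigma, fluence) for k in range(6)]
```

Fitting observed ion yields against such Poisson weights, via the cascade decay model, is what ties the single free variable back to the absolute cross section.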

  11. Astronaut William McArthur in medical experiment in SLS-2

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Astronaut William S. McArthur, mission specialist, participates in an experiment using the rebreathing assembly and a gas analyzer mass spectrometer that investigates in-space distribution and movement of blood and gas in the pulmonary system. The data gathered during the two-week flight will be compared with results of tests performed on Earth to determine the changes that occur in pulmonary functions.

  12. Astronaut David Wolf in medical experiment in SLS-2

    NASA Image and Video Library

    1993-10-28

    STS058-204-014 (18 Oct.-1 Nov. 1993) --- Astronaut David A. Wolf, mission specialist, participates in an experiment that investigates in-space distribution and movement of blood and gas in the pulmonary system. The data gathered during the two-week flight will be compared with results of tests performed on Earth to determine the changes that occur in pulmonary functions. Photo credit: NASA

  13. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control, and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision, and coordination, both during experimental pulses and during 24/7 continuous operation. The former can be split into three phases: preparation of the experiment by defining all parameters; execution of the experiment, including distributed feedback control; and finally collection, archiving, analysis, and presentation of all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers both the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced, and the physical networks are defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping, and benchmarking carried out during the last year.

  14. Experience in highly parallel processing using DAP

    NASA Technical Reports Server (NTRS)

    Parkinson, D.

    1987-01-01

    Distributed Array Processors (DAPs) have been in day-to-day use for ten years, and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that, contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems, such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.

  15. Microgravity effects on water flow and distribution in unsaturated porous media: Analyses of flight experiments

    NASA Astrophysics Data System (ADS)

    Jones, Scott B.; Or, Dani

    1999-04-01

    Plants grown in porous media are part of a bioregenerative life support system designed for long-duration space missions. Reduced gravity conditions of orbiting spacecraft (microgravity) alter several aspects of liquid flow and distribution within partially saturated porous media. The objectives of this study were to evaluate the suitability of conventional capillary flow theory in simulating water distribution in porous media measured in a microgravity environment. Data from experiments aboard the Russian space station Mir and a U.S. space shuttle were simulated by elimination of the gravitational term from the Richards equation. Qualitative comparisons with media hydraulic parameters measured on Earth suggest narrower pore size distributions and inactive or nonparticipating large pores in microgravity. Evidence of accentuated hysteresis, altered soil-water characteristic, and reduced unsaturated hydraulic conductivity from microgravity simulations may be attributable to a number of proposed secondary mechanisms. These are likely spawned by enhanced and modified paths of interfacial flows and an altered force ratio of capillary to body forces in microgravity.
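
The simulation approach described above (eliminating the gravitational term from the Richards equation) can be written out explicitly. A sketch in LaTeX of the 1-D vertical form, with volumetric water content θ, matric head h_m, and unsaturated hydraulic conductivity K(θ); symbols are standard, not taken from the paper itself:

```latex
% 1-D Richards equation with the gravitational (elevation-head) term:
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\left[K(\theta)\,\frac{\partial h_m}{\partial z}\right]
  + \frac{\partial K(\theta)}{\partial z}

% Microgravity simulation: the gravitational term is eliminated,
% leaving purely capillarity-driven flow:
\frac{\partial \theta}{\partial t}
  = \frac{\partial}{\partial z}\left[K(\theta)\,\frac{\partial h_m}{\partial z}\right]
```

Dropping the ∂K/∂z term removes the body-force contribution, so any remaining mismatch with the Mir and shuttle data points to the secondary interfacial-flow mechanisms the authors propose.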

  16. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  17. Predeployment validation of fault-tolerant systems through software-implemented fault insertion

    NASA Technical Reports Server (NTRS)

    Czeck, Edward W.; Siewiorek, Daniel P.; Segall, Zary Z.

    1989-01-01

    The fault-injection-based automated testing (FIAT) environment, which can be used to experimentally characterize and evaluate distributed real-time systems under fault-free and faulted conditions, is described. A survey of validation methodologies is presented, and the need for fault insertion within these methodologies is demonstrated. The origins and models of faults, and the motivation for the FIAT concept, are reviewed. FIAT employs a validation methodology that builds confidence in the system by first providing a baseline of fault-free performance data and then characterizing the behavior of the system with faults present. Fault insertion is accomplished through software and allows faults, or the manifestations of faults, to be inserted either by seeding faults into memory or by triggering error detection mechanisms. FIAT is capable of emulating a variety of fault-tolerant strategies and architectures, can monitor system activity, and can automatically orchestrate experiments involving the insertion of faults. A common system interface improves ease of use and decreases experiment development and run time. Fault models chosen for experiments on FIAT have generated system responses that parallel those observed in real systems under faulty conditions. These capabilities are shown by two example experiments, each using a different fault-tolerance strategy.
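
The memory-seeding style of software-implemented fault insertion can be sketched in a few lines. This is a hedged illustration of the general technique, not FIAT's actual mechanism or interfaces: a bit is flipped in a simulated memory word, and a simple checksum stands in for an error-detection mechanism compared against a fault-free baseline.

```python
import random

def checksum(words):
    """Toy error-detection mechanism: 32-bit additive checksum."""
    return sum(words) & 0xFFFFFFFF

def inject_bit_flip(words, rng):
    """Seed a fault by flipping one random bit in one memory word."""
    faulted = list(words)
    i = rng.randrange(len(faulted))
    bit = rng.randrange(32)
    faulted[i] ^= 1 << bit
    return faulted

rng = random.Random(7)
memory = [rng.randrange(2 ** 32) for _ in range(64)]  # simulated memory image
baseline = checksum(memory)                           # fault-free baseline run
faulted = checksum(inject_bit_flip(memory, rng))      # run with seeded fault
detected = faulted != baseline                        # detection mechanism fires
```

The baseline-then-fault comparison mirrors FIAT's two-step validation methodology: establish fault-free behavior first, then observe the system's response with faults present.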

  18. RoMPS concept review automatic control of space robot, volume 2

    NASA Technical Reports Server (NTRS)

    Dobbs, M. E.

    1991-01-01

    Topics related to robot operated materials processing in space (RoMPS) are presented in view graph form and include: (1) system concept; (2) Hitchhiker Interface Requirements; (3) robot axis control concepts; (4) Autonomous Experiment Management System; (5) Zymate Robot Controller; (6) Southwest SC-4 Computer; (7) oven control housekeeping data; and (8) power distribution.

  19. HEPLIB `91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  20. HEPLIB 91: International users meeting on the support and environments of high energy physics computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnstad, H.

    The purpose of this meeting is to discuss current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards.

  1. A Content Markup Language for Data Services

    NASA Astrophysics Data System (ADS)

    Noviello, C.; Acampa, P.; Mango Furnari, M.

    Network content delivery and document sharing are possible using a variety of technologies, such as distributed databases and service-oriented applications. The development of such systems is a complex job, because the document life cycle involves strong cooperation between domain experts and software developers. Furthermore, emerging software methodologies, such as service-oriented architecture and knowledge organization (e.g., the semantic web), have not really solved the problems faced in a real distributed and cooperative setting. In this chapter the authors' efforts to design and deploy a distributed and cooperative content management system are described. The main features of the system are a user-configurable document type definition and a management middleware layer, which allows CMS developers to orchestrate the composition of specialized software components around the structure of a document. The chapter also reports some of the experiences gained in deploying the developed framework in a cultural heritage dissemination setting.

  2. Distributed dynamic simulations of networked control and building performance applications.

    PubMed

    Yahiaoui, Azzedine

    2018-02-01

    The use of computer-based automation and control systems for smart sustainable buildings, often so-called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible, and in doing so generally refers to Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper.

  3. Remote measurement of microwave distribution based on optical detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Zhong; Ding, Wenzheng; Yang, Sihua

    2016-01-04

    In this letter, we present the development of a remote microwave measurement system. This method employs an arc discharge lamp that serves as an energy converter from microwave to visible light, which can propagate without a transmission medium. Observed with a charge-coupled device, a quantitative microwave power distribution can be obtained while the operators and electronic instruments remain at a distance from the high-power region, reducing the potential risk. We perform the experiments using pulsed microwaves, and the results show that the system response is dependent on the microwave intensity over a certain range. Most importantly, the microwave distribution can be monitored in real time by optical observation of the response of a one-dimensional lamp array. The characteristics of low cost, a wide detection bandwidth, remote measurement, and room temperature operation make the system a preferred detector for microwave applications.

  4. Distributed dynamic simulations of networked control and building performance applications

    PubMed Central

    Yahiaoui, Azzedine

    2017-01-01

    The use of computer-based automation and control systems for smart sustainable buildings, often so-called Automated Buildings (ABs), has become an effective way to automatically control, optimize, and supervise a wide range of building performance applications over a network while achieving the minimum energy consumption possible, and in doing so generally refers to Building Automation and Control Systems (BACS) architecture. Instead of costly and time-consuming experiments, this paper focuses on using distributed dynamic simulations to analyze the real-time performance of network-based building control systems in ABs and improve the functions of the BACS technology. The paper also presents the development and design of a distributed dynamic simulation environment with the capability of representing the BACS architecture in simulation by run-time coupling two or more different software tools over a network. The application and capability of this new dynamic simulation environment are demonstrated by an experimental design in this paper. PMID:29568135

  5. PanDA for COMPASS at JINR

    NASA Astrophysics Data System (ADS)

    Petrosyan, A. Sh.

    2016-09-01

    PanDA (Production and Distributed Analysis System) is a workload management system, widely used for data processing at experiments on the Large Hadron Collider and elsewhere. COMPASS is a high-energy physics experiment at the Super Proton Synchrotron. Data processing for COMPASS runs locally at CERN, on lxbatch, with the data stored in CASTOR. In 2014 the idea arose to run COMPASS production through PanDA. Such a transformation of the experiment's data processing will allow the COMPASS community to use not only CERN resources but also Grid resources worldwide. During the spring and summer of 2015, installation, validation, and migration work was performed at JINR. Details and results of this process are presented in this paper.

  6. Ka-band MMIC arrays for ACTS Aero Terminal Experiment

    NASA Technical Reports Server (NTRS)

    Raquet, C.; Zakrajsek, R.; Lee, R.; Turtle, J.

    1992-01-01

    An antenna system consisting of three experimental Ka-band active arrays using GaAs MMIC devices at each radiating element for electronic beam steering and distributed power amplification is presented. The MMIC arrays are to be demonstrated in the ACTS Aeronautical Terminal Experiment, planned for early 1994. The experiment is outlined, with emphasis on a description of the antenna system. Attention is given to the way in which proof-of-concept MMIC arrays featuring three different state-of-the-art approaches to Ka-band MMIC insertion are being incorporated into an experimental aircraft terminal for the demonstration of an aircraft-to-satellite link, providing a basis for follow-on MMIC array development.

  7. Distributed Computing for the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Chudoba, J.

    2015-12-01

    The Pierre Auger Observatory operates the largest system of detectors for ultra-high energy cosmic ray measurements. Comparison of theoretical models of interactions with recorded data requires thousands of computing cores for Monte Carlo simulations. Since 2007, distributed resources connected via the EGI grid have been used successfully. The first and second versions of the production system, based on bash scripts and a MySQL database, were able to submit jobs to all reliable sites supporting the Virtual Organization (VO) auger. For many years VO auger has been among the top ten EGI users by total computing time used. Migration of the production system to the DIRAC interware started in 2014. Pilot jobs improve the efficiency of computing jobs and eliminate problems with the small and less reliable sites used for bulk production. The new system can also use available resources in clouds. The Dirac File Catalog replaced LFC for new files, which are organized in datasets defined via metadata. CVMFS has been used for software distribution since 2014. In the presentation we compare the old and the new production systems and report our experience in migrating to the new system.

  8. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of the intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time across the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improves the measurement accuracy of the polarimetric system.
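    The closed-form optimum described in the abstract can be illustrated with a small sketch. Assuming the estimator variance takes the generic form sum(a_i/t_i) with per-measurement coefficients a_i (the coefficients and numbers below are hypothetical, not taken from the paper), the Lagrange multiplier method gives t_i proportional to sqrt(a_i):

    ```python
    import numpy as np

    def optimal_times(a, T):
        """Minimize sum(a_i / t_i) subject to sum(t_i) = T.

        Lagrange multipliers give t_i = T * sqrt(a_i) / sum_j sqrt(a_j).
        """
        s = np.sqrt(np.asarray(a, dtype=float))
        return T * s / s.sum()

    # Hypothetical per-measurement variance coefficients and a unit time budget.
    a = np.array([1.0, 2.0, 4.0, 8.0])
    T = 1.0
    t_opt = optimal_times(a, T)
    var_opt = (a / t_opt).sum()              # variance with the optimal split
    var_uniform = (a / (T / a.size)).sum()   # variance with equal integration times
    ```

    For these made-up coefficients the optimal split yields a lower estimator variance than equal integration times; the roughly 40% figure quoted in the abstract depends on the actual system parameters.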

  9. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of the intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.

  10. US EPA Research on Monochloramine Disinfection Kinetics of Nitrosomonas europaea

    EPA Science Inventory

    Based on utility surveys, 30 to 63% of utilities practicing chloramination for secondary disinfection experience nitrification episodes (American Water Works Association 2006). Nitrification in drinking water distribution systems is undesirable and may result in water quality deg...

  11. USEPA Research on Monochloramine Disinfection Kinetics of Nitrosomonas Europaea

    EPA Science Inventory

    Based on utility surveys, 30 to 63% of utilities practicing chloramination for secondary disinfection experience nitrification episodes (American Water Works Association 2006). Nitrification in drinking water distribution systems is undesirable and may result in water quality deg...

  12. Proceedings of the 3rd Annual SCOLE Workshop

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr. (Compiler)

    1987-01-01

    Topics addressed include: modeling and controlling the Spacecraft Control Laboratory Experiment (SCOLE) configurations; slewing maneuvers; mathematical models; vibration damping; gravitational effects; structural dynamics; finite element method; distributed parameter system; on-line pulse control; stability augmentation; and stochastic processes.

  13. A Framework for the Development of Scalable Heterogeneous Robot Teams with Dynamically Distributed Processing

    NASA Astrophysics Data System (ADS)

    Martin, Adrian

    As the applications of mobile robotics evolve it has become increasingly less practical for researchers to design custom hardware and control systems for each problem. This research presents a new approach to control system design that looks beyond end-of-lifecycle performance and considers control system structure, flexibility, and extensibility. Toward these ends the Control ad libitum philosophy is proposed, stating that to make significant progress in the real-world application of mobile robot teams the control system must be structured such that teams can be formed in real-time from diverse components. The Control ad libitum philosophy was applied to the design of the HAA (Host, Avatar, Agent) architecture: a modular hierarchical framework built with provably correct distributed algorithms. A control system for exploration and mapping, search and deploy, and foraging was developed to evaluate the architecture in three sets of hardware-in-the-loop experiments. First, the basic functionality of the HAA architecture was studied, specifically the ability to: a) dynamically form the control system, b) dynamically form the robot team, c) dynamically form the processing network, and d) handle heterogeneous teams. Secondly, the real-time performance of the distributed algorithms was tested, and proved effective for the moderate sized systems tested. Furthermore, the distributed Just-in-time Cooperative Simultaneous Localization and Mapping (JC-SLAM) algorithm demonstrated accuracy equal to or better than traditional approaches in resource starved scenarios, while reducing exploration time significantly. The JC-SLAM strategies are also suitable for integration into many existing particle filter SLAM approaches, complementing their unique optimizations. Thirdly, the control system was subjected to concurrent software and hardware failures in a series of increasingly complex experiments. 
Even with unrealistically high rates of failure the control system was able to successfully complete its tasks. The HAA implementation designed following the Control ad libitum philosophy proved to be capable of dynamic team formation and extremely robust against both hardware and software failure; and, due to the modularity of the system, there is significant potential for reuse of assets and future extensibility. One future goal is to make the source code publicly available and establish a forum for the development and exchange of new agents.

  14. If it's not there, where is it? Locating illusory conjunctions.

    PubMed

    Hazeltine, R E; Prinzmetal, W; Elliott, W

    1997-02-01

    There is evidence that complex objects are decomposed by the visual system into features, such as shape and color. Consistent with this theory is the phenomenon of illusory conjunctions, which occur when features are incorrectly combined to form an illusory object. We analyzed the perceived location of illusory conjunctions to study the roles of color and shape in the location of visual objects. In Experiments 1 and 2, participants located illusory conjunctions about halfway between the veridical locations of the component features. Experiment 3 showed that the distribution of perceived locations was not the mixture of two distributions centered at the 2 feature locations. Experiment 4 replicated these results with an identification task rather than a detection task. We concluded that the locations of illusory conjunctions were not arbitrary but were determined by both constituent shape and color.

  15. DIRAC in Large Particle Physics Experiments

    NASA Astrophysics Data System (ADS)

    Stagni, F.; Tsaregorodtsev, A.; Arrabito, L.; Sailer, A.; Hara, T.; Zhang, X.; Consortium, DIRAC

    2017-10-01

    The DIRAC project is developing interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for both Workload and Data Management tasks of large scientific communities. A number of High Energy Physics and Astrophysics collaborations have adopted DIRAC as the base for their computing models. DIRAC was initially developed for the LHCb experiment at LHC, CERN. Later, the Belle II, BES III and CTA experiments as well as the linear collider detector collaborations started using DIRAC for their computing systems. Some of the experiments built their DIRAC-based systems from scratch, others migrated from previous solutions, ad-hoc or based on different middlewares. Adaptation of DIRAC for a particular experiment was enabled through the creation of extensions to meet their specific requirements. Each experiment has a heterogeneous set of computing and storage resources at their disposal that were aggregated through DIRAC into a coherent pool. Users from different experiments can interact with the system in different ways depending on their specific tasks, expertise level and previous experience using command line tools, python APIs or Web Portals. In this contribution we will summarize the experience of using DIRAC in particle physics collaborations. The problems of migration to DIRAC from previous systems and their solutions will be presented. An overview of specific DIRAC extensions will be given. We hope that this review will be useful for experiments considering an update, or for those designing their computing models.

  16. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  17. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAAs Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  18. Transportability, distributability and rehosting experience with a kernel operating system interface set

    NASA Technical Reports Server (NTRS)

    Blumberg, F. C.; Reedy, A.; Yodis, E.

    1986-01-01

    For the past two years, PRC has been transporting and installing a software engineering environment framework, the Automated Product Control Environment (APCE), at a number of PRC and government sites on a variety of different hardware. The APCE was designed using a layered architecture based on a standardized set of interfaces to host system services. This interface set, called the APCE Interface Set (AIS), was designed to support many of the same goals as the Common Ada Programming Support Environment (APSE) Interface Set (CAIS). The APCE was developed to provide support for the full software lifecycle. Specific requirements of the APCE design included: automation of labor-intensive administrative and logistical tasks; freedom for project team members to use existing tools; maximum transportability for APCE programs, interoperability of APCE database data, and distributability of both processes and data; and maximum performance on a wide variety of operating systems. A brief description is given of the APCE and AIS, a comparison of the AIS and CAIS in terms of both functionality and philosophy and approach, and a presentation of PRC's experience in rehosting the AIS and transporting APCE programs and project data. Conclusions are drawn from this experience with respect to both the CAIS efforts and Space Station plans.

  19. Tropospheric ozone and aerosols measured by airborne lidar during the 1988 Arctic boundary layer experiment

    NASA Technical Reports Server (NTRS)

    Browell, Edward V.; Butler, Carolyn F.; Kooi, Susan A.

    1991-01-01

    Ozone (O3) and aerosol distributions were measured from an aircraft using a differential absorption lidar (DIAL) system as part of the 1988 NASA Global Tropospheric Experiment - Arctic Boundary Layer Experiment (ABLE-3A) to study the sources and sinks of gases and aerosols over the tundra regions of Alaska during the summer. The tropospheric O3 budget over the Arctic was found to be strongly influenced by stratospheric intrusions. Regions of low aerosol scattering and enhanced O3 mixing ratios were usually correlated with descending air from the upper troposphere or lower stratosphere. Several cases of continental polar air masses were examined during the experiment. The aerosol scattering associated with these air masses was very low, and the atmospheric distribution of aerosols was quite homogeneous for those air masses that had been transported over the ice for 3 days or more. The transition in O3 and aerosol distributions from tundra to marine conditions was examined several times. The aerosol data clearly show an abrupt change in aerosol scattering properties within the mixed layer, from lower values over the tundra to generally higher values over the water. The distinct differences in the heights of the mixed layers in the two regions were also readily apparent. Several cases of enhanced O3 were observed during ABLE-3 in conjunction with enhanced aerosol scattering in layers in the free atmosphere. Examples are presented of the large-scale variations of O3 and aerosols observed with the airborne lidar system from near the surface to above the tropopause over the Arctic during ABLE-3.

  20. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting the memory consumption during symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires a nontrivial communication overhead. In addition, operation cache policies become critical while analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
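    As a rough illustration of the dead-node garbage collection the report addresses (a minimal single-machine sketch, not the SmArTNow implementation, and ignoring the distributed communication overhead), a reference-counted table of decision-diagram nodes might look like:

    ```python
    class NodeTable:
        """Hypothetical reference-counted table of decision-diagram nodes.

        A node is dead when no live node or external handle points to it;
        its entry can then be reclaimed. Children are node ids."""

        def __init__(self):
            self.children = {}   # node id -> tuple of child ids
            self.refs = {}       # node id -> reference count
            self._next = 0

        def make(self, kids):
            nid = self._next
            self._next += 1
            self.children[nid] = tuple(kids)
            self.refs[nid] = 1            # the caller holds one reference
            for c in kids:
                self.refs[c] += 1
            return nid

        def release(self, nid):
            """Drop one reference; reclaim the node (and recurse) if it dies."""
            self.refs[nid] -= 1
            if self.refs[nid] == 0:
                for c in self.children[nid]:
                    self.release(c)
                del self.children[nid], self.refs[nid]

    t = NodeTable()
    leaf = t.make([])
    root = t.make([leaf, leaf])   # leaf: one caller handle + two uses by root
    t.release(leaf)               # caller's handle dropped; leaf stays alive
    t.release(root)               # root dies, leaf loses its last references
    ```

    In the distributed setting each workstation owns a slice of such a table, so decrements crossing machine boundaries must be batched or deferred, which is exactly the communication overhead the report's GC scheme has to manage.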

  1. The NASA Langley Laminar-Flow-Control Experiment on a Swept Supercritical Airfoil: Basic Results for Slotted Configuration

    NASA Technical Reports Server (NTRS)

    Harris, Charles D.; Brooks, Cuyler W., Jr.; Clukey, Patricia G.; Stack, John P.

    1989-01-01

    The effects of Mach number and Reynolds number on the experimental surface pressure distributions and transition patterns for a large chord, swept supercritical airfoil incorporating an active Laminar Flow Control suction system with spanwise slots are presented. The experiment was conducted in the Langley 8 foot Transonic Pressure Tunnel. Also included is a discussion of the influence of model/tunnel liner interactions on the airfoil pressure distribution. Mach number was varied from 0.40 to 0.82 at two chord Reynolds numbers, 10 and 20 x 1,000,000, and Reynolds number was varied from 10 to 20 x 1,000,000 at the design Mach number.

  2. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Qi, Bing; Liao, Zhongfa

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks on the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB loss (367 km standard fiber) in the asymptotic limit and 60 dB loss (286 km standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
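    The distance figures quoted above follow directly from the loss budget if one assumes the standard-fiber attenuation of roughly 0.21 dB/km at 1550 nm that is commonly quoted in QKD work (the helper below is illustrative, not taken from the paper):

    ```python
    def fiber_length_km(loss_db, alpha_db_per_km=0.21):
        """Convert a tolerable channel loss (dB) into a standard-fiber length,
        assuming a constant attenuation coefficient (~0.21 dB/km at 1550 nm)."""
        return loss_db / alpha_db_per_km

    asymptotic = fiber_length_km(77.0)   # ~367 km, matching the abstract
    finite_key = fiber_length_km(60.0)   # ~286 km, matching the abstract
    ```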

  3. Research on the control strategy of distributed energy resources inverter based on improved virtual synchronous generator.

    PubMed

    Gao, Changwei; Liu, Xiaoming; Chen, Hai

    2017-08-22

    This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during the transition process. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. An adjustable parameter of the compensation section can be modified to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG in the transient process. To verify the effectiveness of the proposed control strategy for distributed energy resource inverters, a simulation model is set up on the MATLAB/Simulink platform and a physical experiment platform is established. Simulation and experiment results demonstrate the effectiveness of the proposed IVSG control strategy.

  4. Telerobotic system performance measurement - Motivation and methods

    NASA Technical Reports Server (NTRS)

    Kondraske, George V.; Khoury, George J.

    1992-01-01

    A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed, consisting of a distributed telerobotics network, and initial efforts to implement the strategy are presented. Consideration is given to general systems performance theory (GSPT), which tackles human performance problems, as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and characterization of the performance of the subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. An application is presented within the framework of a distributed telerobotics network used as a testbed. Insight into the design of test protocols that elicit application-independent data is described.

  5. Skylab

    NASA Image and Video Library

    1972-01-01

    This photograph shows the flight article of the Airlock Module (AM)/Multiple Docking Adapter (MDA) assembly being readied for testing in a clean room at the McDonnell Douglas plant in St. Louis, Missouri. Although the AM and the MDA were separate entities, they were in many respects simply two components of a single module. The AM enabled crew members to conduct extravehicular activities outside Skylab as required for experiment support. Oxygen and nitrogen storage tanks needed for Skylab's life support system were mounted on the external truss work of the AM. Major components in the AM included Skylab's electric power control and distribution station, environmental control system, communication system, and data handling and recording systems. The MDA, forward of the AM, provided docking facilities for the Command and Service Module. It also accommodated several experiment systems, among them the Earth Resource Experiment Package, the materials processing facility, and the control and display console needed for the Apollo Telescope Mount solar astronomy studies. The AM was built by McDonnell Douglas and the MDA was built by Martin Marietta. The Marshall Space Flight Center was responsible for the design and development of the Skylab hardware and experiment management.

  6. Skylab

    NASA Image and Video Library

    1972-03-01

    This photograph shows the flight article of the mated Airlock Module (AM) and Multiple Docking Adapter (MDA) being lowered into horizontal position on a transporter. Although the AM and the MDA were separate entities, they were in many respects simply two components of a single module. The AM enabled crew members to conduct extravehicular activities outside Skylab as required for experiment support. Oxygen and nitrogen storage tanks needed for Skylab's life support system were mounted on the external truss work of the AM. Major components in the AM included Skylab's electric power control and distribution station, environmental control system, communication system, and data handling and recording systems. The MDA, forward of the AM, provided docking facilities for the Command and Service Module. It also accommodated several experiment systems, among them the Earth Resource Experiment Package, the materials processing facility, and the control and display console needed for the Apollo Telescope Mount solar astronomy studies. The AM was built by McDonnell Douglas and the MDA was built by Martin Marietta. The Marshall Space Flight Center was responsible for the design and development of the Skylab hardware and experiment management.

  7. A wearable sensor system for lower-limb rehabilitation evaluation using the GRF and CoP distributions

    NASA Astrophysics Data System (ADS)

    Tao, Weijun; Zhang, Jianyun; Li, Guangyi; Liu, Tao; Liu, Fengping; Yi, Jingang; Wang, Hesheng; Inoue, Yoshio

    2016-02-01

    Wearable sensors are attractive for gait analysis because these systems can measure and obtain real-time human gait and motion information outside of the laboratory for a longer duration. In this paper, we present a new wearable ground reaction force (GRF) sensing system for ambulatory gait measurement. In addition, the GRF sensor system is also used to quantify the patients' lower-limb gait rehabilitation. We conduct a validation experiment for the sensor system on seven volunteer subjects (weight 62.39 +/- 9.69 kg and height 169.13 +/- 5.64 cm). The experiments include the use of the GRF sensing system for the subjects in the following conditions: (1) normal walking; (2) walking with the rehabilitation training device; and (3) walking with a knee brace and the rehabilitation training device. The experiment results support the hypothesis that the wearable GRF sensor system is capable of quantifying patients' lower-limb rehabilitation. The proposed GRF sensing system can also be used for assessing the effectiveness of a gait rehabilitation system and for providing bio-feedback information to the subjects.
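    Given force sensors at known positions under the foot, the total GRF and the center of pressure (CoP) follow from a force-weighted average of the sensor positions. A minimal sketch with a made-up four-sensor layout (not the paper's actual hardware):

    ```python
    import numpy as np

    # Hypothetical sensor positions under the foot (metres) and per-sensor
    # vertical forces (newtons) at one instant of the stance phase.
    positions = np.array([[0.00, 0.02],
                          [0.05, 0.02],
                          [0.10, 0.00],
                          [0.15, 0.01]])
    forces = np.array([120.0, 250.0, 180.0, 50.0])

    grf = forces.sum()                                     # total vertical GRF (N)
    cop = (positions * forces[:, None]).sum(axis=0) / grf  # force-weighted mean (m)
    ```

    Tracking how `cop` migrates from heel to toe over a gait cycle is one way such a system can quantify rehabilitation progress, as the abstract's experiments suggest.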

  8. Evolution of the ATLAS distributed computing system during the LHC long shutdown

    NASA Astrophysics Data System (ADS)

    Campana, S.; Atlas Collaboration

    2014-06-01

    The ATLAS Distributed Computing project (ADC) was established in 2007 to develop and operate a framework, following the ATLAS computing model, to enable data storage, processing and bookkeeping on top of the Worldwide LHC Computing Grid (WLCG) distributed infrastructure. ADC development has always been driven by operations and this contributed to its success. The system has fulfilled the demanding requirements of ATLAS, daily consolidating worldwide up to 1 PB of data and running more than 1.5 million payloads distributed globally, supporting almost one thousand concurrent distributed analysis users. Comprehensive automation and monitoring minimized the operational manpower required. The flexibility of the system to adjust to operational needs has been important to the success of the ATLAS physics program. The LHC shutdown in 2013-2015 affords an opportunity to improve the system in light of operational experience and scale it to cope with the demanding requirements of 2015 and beyond, most notably a much higher trigger rate and event pileup. We will describe the evolution of the ADC software foreseen during this period. This includes consolidating the existing Production and Distributed Analysis framework (PanDA) and ATLAS Grid Information System (AGIS), together with the development and commissioning of next generation systems for distributed data management (DDM/Rucio) and production (Prodsys-2). We will explain how new technologies such as Cloud Computing and NoSQL databases, which ATLAS investigated as R&D projects in past years, will be integrated in production. Finally, we will describe more fundamental developments such as breaking job-to-data locality by exploiting storage federations and caches, and event level (rather than file or dataset level) workload engines.

  9. Fluid Transport in Porous Media probed by Relaxation-Exchange NMR

    NASA Astrophysics Data System (ADS)

    Olaru, A. M.; Kowalski, J.; Sethi, V.; Blümich, B.

    2011-12-01

    The characterization of fluid transport in porous media is of high interest in fields such as the construction industry, oil exploitation, and soil science. Moisture migration or flow at low rates, such as those occurring in soil during rain, is difficult to characterize by classical high-field NMR velocimetry because of the dedicated hardware and elaborate techniques required for adequate signal encoding. The need for field studies raises additional technical problems, which can be solved only by using portable low-field NMR instruments. In this work we extend the use of low-field relaxation exchange experiments from the study of diffusive transport to that of advection. Relaxation exchange experiments were performed using a home-built Halbach magnet on model porous systems with controlled pore-size distributions and on natural porous systems (quartz sand with a broad pore-size distribution) exposed to unidirectional flow. Different flow rates leave distinctive marks on the exchange maps obtained by inverse Laplace transformation of the time-domain results, owing to the superposition of exchange, diffusion, and inflow/outflow in multiple relaxation sites of the liquids in the porous media. At slow velocities there is no loss of signal due to outflow, and the relaxation-exchange effects prevail, leading to a tilt of the diagonal distribution around a pivot point with increasing mixing time. The tilt suggests an asymmetry in the exchange between relaxation sites of large and small decay rates. Another observed phenomenon is a larger number of exchange cross-peaks compared with the exchange maps obtained for the same systems under zero-flow conditions; we attribute this to enhanced exchange caused by the superposition of flow. At high velocities the outflow effects dominate and the relaxation-time distribution collapses towards lower values of the average relaxation times.
    In both cases the pore-size distribution has a strong effect on the results: the asymmetries are more obvious in the natural porous systems than in the glass bead packs used as models, while the enhanced-exchange phenomenon appears predominantly in the maps obtained on the model systems, probably because diffusion occurs in the presence of different internal field gradients. Shifts and tilts in the exchange maps can be simulated by solving the relaxation-site-averaged Bloch-Torrey system forward in time and assuming an asymmetric closure for the transport, which might be realistic for preferential flow phenomena or for pore-size distributions with two or more clearly distinct pore-size classes. When comparing the simulation results with the experimental data we observed a correspondence of signal collapse and translation towards lower relaxation times. The asymmetries could be reproduced qualitatively by making further assumptions about the pore structure, but further work is required to characterize and model the underlying physical phenomenon. The results reveal the possibility of characterizing advective fluid transport in porous systems by simple correlation experiments performed with inexpensive and mobile hardware.

  10. Assess program: Interactive data management systems for airborne research

    NASA Technical Reports Server (NTRS)

    Munoz, R. M.; Reller, J. O., Jr.

    1974-01-01

    Two data systems were developed for use in airborne research. Both have distributed intelligence and are programmed for interactive support among computers and with human operators. The C-141 system (ADAMS) performs flight planning and telescope control functions in addition to its primary role of data acquisition; the CV-990 system (ADDAS) performs data management functions in support of many research experiments operating concurrently. Each system is arranged for maximum reliability in the first priority function, precision data acquisition.

  11. Isolation transformers for utility-interactive photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Kern, E. C., Jr.

    1982-12-01

    Isolation transformers are used in some photovoltaic systems to isolate the photovoltaic system's common-mode voltage from the utility distribution system. In early system experiments with grid-connected photovoltaics, such transformers were the source of significant power losses. A project at Lincoln Laboratory and at Allied Chemical Corporation developed an improved isolation transformer to minimize such power losses. Experimental results and an analytical model of the conventional and improved transformers are presented, showing considerable reductions in the losses associated with the improved transformer.

  12. [Research on controlling iron release of desalted water transmitted in existing water distribution system].

    PubMed

    Tian, Yi-Mei; Liu, Yang; Zhao, Peng; Shan, Jin-Lin; Yang, Suo-Yin; Liu, Wei

    2012-04-01

    Desalted water, being strongly corrosive, can cause serious "red water" when transmitted and distributed through an existing municipal water distribution network. The main cause of the red-water phenomenon is iron release in water pipes. To study methods of controlling iron release in existing drinking-water distribution pipes, tubercle analysis was carried out on steel and cast iron pipes that had served the distribution system for 30-40 years; the main corrosion products were Fe3O4 and FeOOH. Immersion experiments were then carried out on the more corrosive pipes. By varying the mixing ratio of tap water and desalted water, pH, alkalinity, chloride, and sulfate, the influence of these water quality indexes on iron release was analyzed. On the basis of controlling iron content, water quality conditions for the safe distribution of desalted water were established: the volume ratio of potable water to desalted water should be at least 2, pH should be above 7.6, and alkalinity should be above 200 mg x L(-1).
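The three reported criteria for safe distribution form a simple conjunctive check. A minimal sketch, with an illustrative function name and argument order not taken from the paper:

```python
# Hedged sketch of the safe-distribution criteria reported in the
# abstract: blend ratio (tap : desalted) >= 2, pH > 7.6, and
# alkalinity > 200 mg/L. All three must hold.

def safe_for_distribution(tap_vol, desalted_vol, ph, alkalinity_mg_per_l):
    """Return True if the blended water meets all three reported
    criteria for controlling iron release ("red water")."""
    ratio_ok = desalted_vol > 0 and tap_vol / desalted_vol >= 2
    return ratio_ok and ph > 7.6 and alkalinity_mg_per_l > 200

safe_for_distribution(2.0, 1.0, 7.8, 220)  # True: all criteria met
safe_for_distribution(1.0, 1.0, 7.8, 220)  # False: blend ratio too low
```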

  13. Air traffic control by distributed management in a MLS environment

    NASA Technical Reports Server (NTRS)

    Kreifeldt, J. G.; Parkin, L.; Hart, S.

    1977-01-01

    The microwave landing system (MLS) is a technically feasible means of increasing runway capacity, since it could support curved approaches to a short final. The shorter the final segment of the approach, the wider the variety of speed mixes possible, so that theoretically capacity would ultimately be limited only by runway occupancy time. An experiment contrasted air traffic control in an MLS environment under a centralized form of management and under distributed management, the latter supported by a traffic situation display in each of three piloted simulators. Objective flight data, verbal communication, and subjective responses were recorded on 18 trial runs lasting about 20 minutes each. The results were in general agreement with previous distributed management research. In particular, distributed management permitted a smaller spread of intercrossing times, and both pilots and controllers perceived distributed management as the more 'ideal' system in this task. It is concluded from this and previous research that distributed management offers a viable alternative to centralized management, with definite potential for dealing with dense traffic in a safe, orderly, and expeditious manner.

  14. Hydrogen bonds in concreto and in computro

    NASA Astrophysics Data System (ADS)

    Stouten, Pieter F. W.; Kroon, Jan

    1988-07-01

    Molecular dynamics simulations of liquid water and liquid methanol have been carried out. For both liquids an effective pair potential was used. The models were fitted to the heat of vaporization, the pressure, and various radial distribution functions resulting from diffraction experiments on liquids. In both simulations 216 molecules were placed in a cubic periodic box. The system was loosely coupled to a temperature bath and to a pressure bath. Following an initial equilibration period, relevant data were sampled during 15 ps. The distributions of oxygen-oxygen distances in hydrogen bonds obtained from the two simulations are essentially the same. The distribution obtained from crystal data is somewhat different: the maximum has about the same position, but the curve is much narrower, which can be expected merely from the fact that diffraction experiments supply only average atomic positions and hence average interatomic distances. When thermal motion is taken into account, a closer likeness is observed.

  15. Sample data from a Distributed Acoustic Sensing experiment at Garner Valley, California (PoroTomo Subtask 3.2)

    DOE Data Explorer

    Lancelle, Chelsea

    2013-09-10

    In September 2013, an experiment using Distributed Acoustic Sensing (DAS) was conducted at Garner Valley, a test site of the University of California Santa Barbara (Lancelle et al., 2014). This submission includes one 45 kN shear shaker (called “large shaker” on the basemap) test for three different measurement systems. The shaker swept from a rest, up to 10 Hz, and back down to a rest over 60 seconds. Lancelle, C., N. Lord, H. Wang, D. Fratta, R. Nigbor, A. Chalari, R. Karaulanov, J. Baldwin, and E. Castongia (2014), Directivity and Sensitivity of Fiber-Optic Cable Measuring Ground Motion using a Distributed Acoustic Sensing Array (abstract # NS31C-3935), AGU Fall Meeting. https://agu.confex.com/agu/fm1/meetingapp.cgi#Paper/19828 The e-poster is available at: https://agu.confex.com/data/handout/agu/fm14/Paper_19828_handout_696_0.pdf

  16. Experimental Avalanches in a Rotating Drum

    NASA Astrophysics Data System (ADS)

    Hubard, Aline; O'Hern, Corey; Shattuck, Mark

    We address the question of universality in granular avalanches and the system size effects on it. We set up an experiment made from a quasi-two-dimensional rotating drum half-filled with a monolayer of stainless-steel spheres. We measure the size of the avalanches created by the increased gravitational stress on the pile as we quasi-statically rotate the drum. We find two kinds of avalanches determined by the drum size. The size and duration distributions of the avalanches that do not span the whole system follow a power law and the avalanche shapes are self-similar and nearly parabolic. The distributions of the avalanches that span the whole system are limited by the maximal amount of potential energy stored in the system at the moment of the avalanche. NSF CMMI-1462439, CMMI-1463455.

  17. Experiment and application of soft x-ray grazing incidence optical scattering phenomena

    NASA Astrophysics Data System (ADS)

    Chen, Shuyan; Li, Cheng; Zhang, Yang; Su, Liping; Geng, Tao; Li, Kun

    2017-08-01

    For short-wavelength imaging systems, surface scattering is one of the important factors degrading imaging performance. Studying the non-intuitive surface scatter effects resulting from practical optical fabrication tolerances is necessary for evaluating the optical performance of high-resolution short-wavelength imaging systems. In this paper, the soft X-ray optical scattering distribution is measured with a soft X-ray reflectometer installed in our laboratory, for different sample mirrors, wavelengths, and grazing angles. Then, for the space solar telescope, these scattered-light distributions are combined with a surface-scattering numerical model of the grazing incidence imaging system to compute the PSF and encircled energy of the telescope's optical system. From this analysis and computation we conclude that surface scattering severely degrades the imaging performance of grazing incidence systems.

  18. Masterless Distributed Computing Over Mobile Devices

    DTIC Science & Technology

    2012-09-01

    Matrix Computations," Handbooks in OR & MS, vol. 3, pp. 247–321, 1990. [18] R. Barrett et al., Templates for the Solution of Linear Systems: Building...the truncated SVD of a matrix. The algorithm used in this thesis was developed by Halko, Martinsson, and Tropp in their journal article, Finding...experiments," dodbuzz.com, 2011. [Online]. Available: http://www.dodbuzz.com/2011/06/06/army-begins-mobile-phone-experiments/. [Accessed: 15-Feb

  19. Glenn's Telescience Support Center Provided Around-the-Clock Operations Support for Space Experiments on the International Space Station

    NASA Technical Reports Server (NTRS)

    Malarik, Diane C.

    2005-01-01

    NASA Glenn Research Center's Telescience Support Center (TSC) allows researchers on Earth to operate experiments onboard the International Space Station (ISS) and the space shuttles. NASA's continuing investment in the required software, systems, and networks provides distributed ISS ground operations that enable payload developers and scientists to monitor and control their experiments from the Glenn TSC. The quality of scientific and engineering data is enhanced while the long-term operational costs of experiments are reduced because principal investigators and engineering teams can operate their payloads from their home institutions.

  20. A system for conducting igneous petrology experiments under controlled redox conditions in reduced gravity

    NASA Technical Reports Server (NTRS)

    Williams, R. J.

    1986-01-01

    The Space Shuttle and the planned Space Station will permit experimentation under conditions of reduced gravitational acceleration, offering experimental petrologists the opportunity to study crystal growth, element distribution, and phase chemistry. In particular, the confounding effects of macro- and micro-scale buoyancy-induced convection and crystal settling or flotation can be greatly reduced relative to experiments in the terrestrial laboratory. Also, for experiments in which detailed replication of the environment is important, access to reduced gravity will permit a more complete simulation of processes that may have occurred on asteroids or in free space. A technique that was developed to control, measure, and manipulate oxygen fugacities with small quantities of gas recirculated over the sample is described. This system should be adaptable to reduced-gravity space experiments requiring redox control. Experiments done conventionally and those done using this technique yield identical results in a 1-g field.

  1. Secure free-space optical communication system based on data fragmentation multipath transmission technology.

    PubMed

    Huang, Qingchao; Liu, Dachang; Chen, Yinfang; Wang, Yuehui; Tan, Jun; Chen, Wei; Liu, Jianguo; Zhu, Ninghua

    2018-05-14

    A secure free-space optical (S-FSO) communication system based on a data fragmentation multipath transmission (DFMT) scheme is proposed and demonstrated for enhancing the security of FSO communications. By fragmenting the transmitted data and simultaneously distributing the data fragments over different atmospheric channels, the S-FSO communication system can effectively protect confidential messages from eavesdropping. A field experiment of S-FSO communication between two buildings has been successfully undertaken, and the experimental results demonstrate the feasibility of the scheme. The transmission distance is 50 m and the maximum throughput is 1 Gb/s. We also established a theoretical model to analyze the security performance of the S-FSO communication system. To the best of our knowledge, this is the first application of a DFMT scheme in an FSO communication system.
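The abstract does not specify the fragmentation granularity. One minimal way to picture DFMT is byte-level round-robin interleaving across channels, so an eavesdropper on any single channel sees only a fraction of the bytes; the sketch below illustrates that idea and is not the authors' scheme.

```python
# Hedged sketch of data fragmentation multipath transmission (DFMT):
# the payload is split round-robin over several optical channels and
# reassembled at the receiver. Byte-level granularity and the channel
# count are assumptions for illustration only.

def fragment(data: bytes, n_channels: int) -> list:
    """Distribute bytes round-robin over n_channels paths."""
    return [data[i::n_channels] for i in range(n_channels)]

def reassemble(fragments: list) -> bytes:
    """Interleave the per-channel fragments back into the original stream."""
    out = bytearray()
    for i in range(max(len(f) for f in fragments)):
        for f in fragments:
            if i < len(f):
                out.append(f[i])
    return bytes(out)

msg = b"confidential message"
parts = fragment(msg, 3)          # three atmospheric channels
assert reassemble(parts) == msg   # receiver restores the stream
```

Each fragment alone is an interleaved subsequence, so intercepting one channel yields only every third byte of the message.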

  2. A Monitoring System for the LHCb Data Flow

    NASA Astrophysics Data System (ADS)

    Barbosa, João; Gaspar, Clara; Jost, Beat; Frank, Markus; Cardoso, Luis G.

    2017-06-01

    The LHCb experiment uses the LHC accelerator for the collisions that produce the physics data necessary for analysis. The data produced by the detector, which measures the results of the collisions at a rate of 40 MHz, are read out by a complex data acquisition (DAQ) system, which is briefly described in this paper. Distributed systems of such dimensions rely on monitoring and control systems that account for the numerous faults that can happen throughout operation. With this in mind, a new system was created to extend the monitoring of the readout system by providing an overview of what is happening at each stage of the DAQ process, starting with the hardware trigger performed right after the detector measurements and ending with the local storage of the experiment. This system, a complement to the current run control (experimental control system), is intended to shorten reaction times when a problem occurs by providing the operators with detailed information on where a fault is occurring. The architecture of the tool and its use by the experiment operators are described in this paper.

  3. DOS Design/Application Tools System/Segment Specification. Volume 3

    DTIC Science & Technology

    1990-09-01

    consume the same information to obtain that information without "manual" translation by people. Solving the information management problem effectively...and consumes even more information than centralized development. Distributed systems cannot be developed successfully by experiment without...human intervention because all tools consume input from and produce output to the same repository. New tools are easily absorbed into the environment

  4. Distributed nestmate recognition in ants.

    PubMed

    Esponda, Fernando; Gordon, Deborah M

    2015-05-07

    We propose a distributed model of nestmate recognition, analogous to the one used by the vertebrate immune system, in which colony response results from the diverse reactions of many ants. The model describes how individual behaviour produces colony response to non-nestmates. No single ant knows the odour identity of the colony. Instead, colony identity is defined collectively by all the ants in the colony. Each ant responds to the odour of other ants by reference to its own unique decision boundary, which is a result of its experience of encounters with other ants. Each ant thus recognizes a particular set of chemical profiles as being those of non-nestmates. This model predicts, as experimental results have shown, that the outcome of behavioural assays is likely to be variable, that it depends on the number of ants tested, that response to non-nestmates changes over time and that it changes in response to the experience of individual ants. A distributed system allows a colony to identify non-nestmates without requiring that all individuals have the same complete information and helps to facilitate the tracking of changes in cuticular hydrocarbon profiles, because only a subset of ants must respond to provide an adequate response.
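The model's core idea, that no single ant holds the colony identity and the colony response emerges from many individual decision boundaries, can be caricatured in a few lines. The scalar threshold, the odour-distance measure, and the aggregation rule below are illustrative simplifications, not the authors' model.

```python
# Hedged sketch of distributed nestmate recognition: each ant carries
# its own decision boundary (here a scalar threshold on odour distance
# from the colony profile), and the colony response is the aggregate of
# many individual reactions. All numbers are illustrative.

import random

random.seed(1)

# Each ant's threshold reflects its own history of encounters.
ant_thresholds = [random.uniform(0.3, 0.7) for _ in range(100)]

def colony_response(odour_distance, thresholds):
    """Fraction of ants that treat an intruder at this odour distance
    from the colony profile as a non-nestmate."""
    return sum(1 for t in thresholds if odour_distance > t) / len(thresholds)

colony_response(0.1, ant_thresholds)  # below every threshold: no ant reacts
colony_response(0.9, ant_thresholds)  # above every threshold: all ants react
```

Because the thresholds vary across ants, assays on intermediate odour distances give variable outcomes that depend on how many ants are tested, matching the prediction in the abstract.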

  5. Study of density distribution in a near-critical simple fluid (19-IML-1)

    NASA Technical Reports Server (NTRS)

    Michels, Teun

    1992-01-01

    This experiment uses visual observation, interferometry, and light scattering techniques to observe and analyze the density distribution in SF6 above and below the critical temperature. Below the critical temperature, the fluid system splits into two coexisting phases, liquid and vapor. The spatial separation of these phases on Earth, liquid below and vapor above, is not an intrinsic property of the fluid system; it is merely an effect of the gravity field. At a fixed temperature, the density of each of the coexisting phases is in principle fixed. However, near Tc, where the fluid is strongly compressible, gravity-induced hydrostatic forces result in a gradual decrease in density with increasing height in the sample container. This hydrostatic density profile is even more pronounced in the one-phase fluid at temperatures slightly above Tc. The experiment is set up to study the intrinsic density distributions and equilibration rates of a critical sample in a small container. Interferometry will be used to determine the local density and the thickness of surface and interface layers. The light scattering data will reveal the size of the density fluctuations on a microscopic scale.

  6. Depo-Provera--ethical issues in its testing and distribution.

    PubMed Central

    Potts, M; Paxman, J M

    1984-01-01

    Ethical issues relating to the use of the injectable contraceptive in developed and developing countries alike involve public policy decisions concerning both criteria for testing a new drug and individual choices about using a specific form of contraception approved for national distribution. Drug testing consists of an important but still evolving set of procedures. Depo-Provera is not qualitatively different from any other drug and some unpredictable risks are inevitable, even after extensive animal experiments and clinical trials. In assessing the risks and benefits of Depo-Provera use, epidemiological data from large-scale human use is now beginning to become more important than data from animal experiments and clinical trials. The consumer's best interest is central to any ethically responsible system of drug distribution. Systems of informed choice are needed, even in societies where illiteracy remains common and medical services are weak. In the case of a contraceptive, the risks of non-use leading to unintended pregnancy, which can result in high mortality, are relevant as well as the side-effects of the method. An attempt, therefore, is made here to categorise those issues which are universal and those which are country-specific. PMID:6231379

  7. Temperature Data Assimilation with Salinity Corrections: Validation for the NSIPP Ocean Data Assimilation System in the Tropical Pacific Ocean, 1993-1998

    NASA Technical Reports Server (NTRS)

    Troccoli, Alberto; Rienecker, Michele M.; Keppenne, Christian L.; Johnson, Gregory C.

    2003-01-01

    The NASA Seasonal-to-Interannual Prediction Project (NSIPP) has developed an ocean data assimilation system to initialize the quasi-isopycnal ocean model used in our experimental coupled-model forecast system. Initial tests of the system focused on the assimilation of temperature profiles in an optimal interpolation framework. It is now recognized that correcting temperature only often introduces spurious water masses. The resulting density distribution can be statically unstable and can also have a detrimental impact on the velocity distribution. Several simple schemes have been developed to correct these deficiencies. Here the salinity field is corrected using a scheme which assumes that the temperature-salinity relationship of the model background is preserved during the assimilation. The scheme was first introduced for a z-level model by Troccoli and Haines (1999). A large set of subsurface observations of salinity and temperature is used to cross-validate two data assimilation experiments run for the 6-year period 1993-1998. In these two experiments only subsurface temperature observations are used, but in one case the salinity field is also updated whenever temperature observations are available.
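The idea of preserving the background T-S relationship can be pictured as reading the analysed salinity off the background T-S curve at the analysed temperature. The sketch below uses synthetic profiles and a plain 1-D interpolation; the actual NSIPP implementation is certainly more involved.

```python
# Hedged sketch of the salinity-correction idea attributed to Troccoli &
# Haines (1999): after temperature assimilation, salinity is reset so
# the background temperature-salinity (T-S) relation is preserved, i.e.
# S_analysis(z) is read off the background T-S curve at T_analysis(z).
# The profiles below are synthetic, not real model output.

import numpy as np

# Background profile (surface to depth): T decreases monotonically.
t_bg = np.array([28.0, 26.0, 22.0, 16.0, 10.0, 6.0])
s_bg = np.array([34.2, 34.5, 34.9, 35.1, 34.8, 34.6])

# Analysed temperature after assimilating observed T profiles:
t_an = np.array([27.5, 25.0, 21.0, 15.0, 10.5, 6.2])

# Interpolate the background T-S relation at the analysed temperatures
# (np.interp requires increasing x, so sort the curve by temperature).
order = np.argsort(t_bg)
s_an = np.interp(t_an, t_bg[order], s_bg[order])
```

The analysed salinity stays on the background T-S curve, so no water mass foreign to the background state is created, which is exactly the deficiency of temperature-only correction that the scheme addresses.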

  8. Mine fire experiments and simulation with MFIRE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laage, L.W.; Yang, Hang

    1995-12-31

    A major concern with mine fires is heat-generated ventilation disturbances, which can move products of combustion (POC) through unexpected passageways. Fire emergency planning requires simulation of the interaction of the fire and ventilation system to predict the state of the ventilation system and the subsequent distribution of temperatures and POC. Several computer models were developed by the U.S. Bureau of Mines (USBM) to perform this simulation. The most recent, MFIRE, simulates a mine's ventilation system and its response to altered ventilation parameters such as the development of new mine workings or changes in ventilation control structures, external influences such as varying outside temperatures, and internal influences such as fires. Extensive output allows quantitative analysis of the effects of a proposed alteration to the ventilation system. This paper describes recent USBM research to validate MFIRE's calculation of the temperature distribution in an airway due to a mine fire, as temperatures are the most significant source of ventilation disturbances. Fire tests were conducted at the Waldo Mine near Magdalena, NM. From these experiments, temperature profiles were developed as functions of time and distance from the fire and compared with simulations from MFIRE.

  9. Clinical experience with a high-performance ATM-connected DICOM archive for cardiology

    NASA Astrophysics Data System (ADS)

    Solomon, Harry P.

    1997-05-01

    A system to archive large image sets, such as cardiac cine runs, with near-realtime response must address several functional and performance issues, including efficient use of a high-performance network connection with standard protocols, an architecture which effectively integrates both short- and long-term mass storage devices, and a flexible data management policy which allows optimization of image distribution and retrieval strategies based on modality and site-specific operational use. Clinical experience with such an archive has allowed evaluation of these systems issues and refinement of a traffic model for cardiac angiography.

  10. Software Management for the NOνA Experiment

    NASA Astrophysics Data System (ADS)

    Davies, G. S.; Davies, J. P.; C Group; Rebel, B.; Sachdev, K.; Zirnstein, J.

    2015-12-01

    The NOvA software (NOνASoft) is written in C++ and built on the Fermilab Computing Division's art framework, which uses the ROOT analysis software. NOνASoft makes use of more than 50 external software packages, is developed by more than 50 developers, and is used by more than 100 physicists from over 30 universities and laboratories on three continents. The software builds are handled by Fermilab's custom version of Software Release Tools (SRT), a UNIX-based software management system for large, collaborative projects that is used by several experiments at Fermilab. The system provides software version control with SVN configured in a client-server mode and is based on code originally developed by the BaBar collaboration. In this paper, we present efforts towards distributing the NOvA software via the CernVM File System distributed file system. We also describe our recent work to use a CMake build system and Jenkins, the open-source continuous integration system, for NOνASoft.

  11. Distributed Prognostics based on Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.

    2014-01-01

    Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition.
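The compose step of a scheme like this can be sketched very simply: each local submodel produces its own end-of-life (EOL) prediction in parallel, and the global EOL is taken from the earliest local failure. The trivial linear-degradation submodels and the min-composition rule below are illustrative assumptions, not the paper's pump models.

```python
# Hedged sketch of composing local prognostics results into a global
# prediction. Each local submodel yields its own end-of-life (EOL)
# estimate; the subproblems run in parallel and the global remaining
# life is governed by the earliest local failure. Submodels here are
# placeholder linear-degradation models, not the paper's.

from concurrent.futures import ThreadPoolExecutor

def local_eol(health, degradation_rate):
    """Hours until this subsystem's health estimate reaches zero."""
    return health / degradation_rate

# (health in [0, 1], degradation rate per hour) for three submodels:
submodels = [(0.9, 0.01), (0.7, 0.02), (0.95, 0.005)]

with ThreadPoolExecutor() as pool:  # local problems solved in parallel
    eols = list(pool.map(lambda m: local_eol(*m), submodels))

global_eol = min(eols)  # system fails when the first subsystem does
# here global_eol is about 35 hours, set by the second submodel
```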

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage under cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
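The binary decision fusion idea in subtask (iv) can be illustrated with a standard k-out-of-n rule: each sensor reports 0/1 and the fusion centre declares a detection when at least k sensors fire. Under the common simplifying assumption of independent, identical sensors (an assumption of this sketch, not a claim about the paper's rule), the fused rates follow a binomial tail:

```python
# Hedged sketch of a k-out-of-n binary decision fusion rule. System hit
# and false-alarm rates follow a binomial tail, assuming independent,
# identical sensors; the numbers below are illustrative.

from math import comb

def fused_rate(p, n, k):
    """P(at least k of n independent sensors fire), each firing w.p. p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 5, 3                           # majority rule over five sensors
p_hit, p_fa = 0.9, 0.1                # per-sensor hit / false-alarm rates
system_hit = fused_rate(p_hit, n, k)  # ~0.991: better than one sensor
system_fa = fused_rate(p_fa, n, k)    # ~0.0086: far below one sensor
```

Sweeping k trades hit rate against false-alarm rate, which is the threshold-bound tuning the abstract refers to.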

  13. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    PubMed

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve the coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology, which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of compensating simultaneously for the actuator bias fault, the partial loss of effectiveness actuation fault, the communication link fault, the model uncertainty, and the external disturbance. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify the actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a multi-robot-arm cooperative control test-bed was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  14. Polarized radiance distribution measurement of skylight. II. Experiment and data.

    PubMed

    Liu, Y; Voss, K

    1997-11-20

    Measurements of the skylight polarized radiance distribution were performed at different measurement sites, atmospheric conditions, and three wavelengths with our newly developed Polarization Radiance Distribution Camera System (RADS-IIP), an analyzer-type Stokes polarimeter. Three Stokes parameters of skylight (I, Q, U), the degree of polarization, and the plane of polarization are presented in image format. The Arago point and neutral lines have been observed with RADS-IIP. Qualitatively, the dependence of the intensity and polarization data on wavelength, solar zenith angle, and surface albedo is in agreement with the results from computations based on a plane-parallel Rayleigh atmospheric model.

  15. Subcarrier Wave Quantum Key Distribution in Telecommunication Network with Bitrate 800 kbit/s

    NASA Astrophysics Data System (ADS)

    Gleim, A. V.; Nazarov, Yu. V.; Egorov, V. I.; Smirnov, S. V.; Bannik, O. I.; Chistyakov, V. V.; Kynev, S. M.; Anisimov, A. A.; Kozlov, S. A.; Vasiliev, V. N.

    2015-09-01

    In the course of work on creating the first quantum communication network in Russia, we demonstrated quantum key distribution in a metropolitan optical network infrastructure. A single-pass subcarrier wave quantum cryptography scheme was used in the experiments. The BB84 protocol with strong reference was chosen for key distribution. The registered sifted key rate in an optical cable with 1.5 dB loss was 800 kbit/s. Signal visibility exceeded 98%, and the quantum bit error rate was 1%. The achieved result is a record for this type of system.
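    The sifting step of BB84 that produces the quoted sifted key can be sketched with a noiseless toy simulation (an idealized single-photon channel; the actual experiment uses subcarrier wave encoding and observed a ~1% error rate): Alice and Bob keep only the positions where their randomly chosen bases agree, roughly half of the raw bits.

```python
import random

random.seed(7)

N = 1000
# Alice prepares random bits in random bases; Bob measures in random bases.
alice_bits  = [random.randrange(2) for _ in range(N)]
alice_bases = [random.randrange(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [random.randrange(2) for _ in range(N)]

# Ideal channel: Bob's result equals Alice's bit when the bases match,
# otherwise his measurement outcome is random.
bob_bits = [a if ab == bb else random.randrange(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: keep only the positions where the bases agree (~50% of N).
sifted = [(a, b)
          for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]
qber = sum(a != b for a, b in sifted) / len(sifted)   # zero in this noiseless toy
print(len(sifted), qber)
```

In a real link, channel loss and imperfections make the quantum bit error rate nonzero, which is what the reported 1% figure measures after sifting.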

  16. FRIB Cryogenic Distribution System and Status

    NASA Astrophysics Data System (ADS)

    Ganni, V.; Dixon, K.; Laverdure, N.; Yang, S.; Nellis, T.; Jones, S.; Casagrande, F.

    2015-12-01

    The MSU-FRIB cryogenic distribution system supports the 2 K primary, 4 K primary, and 35-55 K shield operation of more than 70 loads in the accelerator and the experimental areas. It is based on JLab and SNS experience with bayonet-type disconnects between the loads and the distribution system for phased commissioning and maintenance. The linac transfer line, which features three separate segments for additional independence during phased commissioning at 4 K and 2 K, connects the folded arrangement of 49 cryomodules and 4 superconducting dipole magnets; a fourth transfer line supports the separator-area cryogenic loads. The pressure reliefs for the transfer line process lines, located in the refrigeration room outside the tunnel/accelerator area, are piped to vent outdoors. The transfer line designs integrate supply and return flow paths into a combined vacuum space. The main linac distribution segments are produced in a small number of standard configurations; a prototype of one such configuration has been fabricated at Jefferson Lab and installed at MSU to support testing of a prototype FRIB cryomodule.

  17. Distribution uniformity of laser-accelerated proton beams

    NASA Astrophysics Data System (ADS)

    Zhu, Jun-Gao; Zhu, Kun; Tao, Li; Xu, Xiao-Han; Lin, Chen; Ma, Wen-Jun; Lu, Hai-Yang; Zhao, Yan-Ying; Lu, Yuan-Rong; Chen, Jia-Er; Yan, Xue-Qing

    2017-09-01

    Compared with conventional accelerators, laser plasma accelerators can generate high-energy ions at a greatly reduced scale, due to their TV/m acceleration gradient. A compact laser plasma accelerator (CLAPA) has been built at the Institute of Heavy Ion Physics at Peking University. It will be used for applied research like biological irradiation, astrophysics simulations, etc. A beamline system with multiple quadrupoles and an analyzing magnet for laser-accelerated ions is proposed here. Since laser-accelerated ion beams have broad energy spectra and large angular divergence, the parameters of the beamline system (beam waist position in the Y direction, beamline layout, drift distance, magnet angles, etc.) are carefully designed and optimised to obtain a radially symmetric proton distribution at the irradiation platform. Requirements of energy selection and differences in focusing or defocusing in application systems greatly influence the evolution of proton distributions. With optimal parameters, radially symmetric proton distributions can be achieved, and protons with energy spreads within ±5% have similar transverse areas at the experiment target. Supported by National Natural Science Foundation of China (11575011, 61631001) and National Grand Instrument Project (2012YQ030142).

  18. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    PubMed

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  19. NREL/SCE High Penetration PV Integration Project: FY13 Annual Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, B. A.; Shah, S.; Norris, B. L.

    2014-06-01

    In 2010, the National Renewable Energy Laboratory (NREL), Southern California Edison (SCE), Quanta Technology, Satcon Technology Corporation, Electrical Distribution Design (EDD), and Clean Power Research (CPR) teamed to analyze the impacts of high penetration levels of photovoltaic (PV) systems interconnected onto the SCE distribution system. This project was designed specifically to benefit from the experience that SCE and the project team would gain during the installation of 500 megawatts (MW) of utility-scale PV systems (with 1-5 MW typical ratings) starting in 2010 and completing in 2015 within SCE's service territory through a program approved by the California Public Utility Commission (CPUC). This report provides the findings of the research completed under the project to date.

  20. Distributed fiber sensing system with wide frequency response and accurate location

    NASA Astrophysics Data System (ADS)

    Shi, Yi; Feng, Hao; Zeng, Zhoumo

    2016-02-01

    A distributed fiber sensing system merging a Mach-Zehnder interferometer and a phase-sensitive optical time-domain reflectometer (Φ-OTDR) is demonstrated for vibration measurement, which requires wide frequency response and accurate location. Two narrow-linewidth lasers with slightly different wavelengths are used to constitute the interferometer and the reflectometer, respectively. A narrow-band fiber Bragg grating separates the two wavelengths. In addition, heterodyne detection is applied to maintain the signal-to-noise ratio of the locating signal. Experimental results show that the system has a wide frequency response from 1 Hz to 50 MHz, limited by the sampling frequency of the data acquisition card, and a spatial resolution of 20 m, corresponding to the 200 ns pulse width, along a 2.5 km fiber link.
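    The quoted 20 m figure follows from the standard pulse-limited Φ-OTDR resolution relation Δz = cτ/(2n); the group index n ≈ 1.47 used below is an assumed typical value for silica fiber, not taken from the paper.

```python
# Back-of-the-envelope check of the quoted Φ-OTDR spatial resolution:
# Δz = c·τ / (2·n), where τ is the probe pulse width and n the fiber
# group index (n ≈ 1.47 is a typical silica value, assumed here).
c = 299_792_458          # speed of light in vacuum, m/s
tau = 200e-9             # pulse width, s
n = 1.47                 # fiber group index (assumed)

delta_z = c * tau / (2 * n)
print(f"{delta_z:.1f} m")   # ~20 m, consistent with the reported resolution
```

The factor of 2 accounts for the round trip of the backscattered light within the pulse duration.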

  1. Economic Benefits of Improved Information on Worldwide Crop Production: An Optimal Decision Model of Production and Distribution with Application to Wheat, Corn, and Soybeans

    NASA Technical Reports Server (NTRS)

    Andrews, J.

    1977-01-01

    An optimal decision model of crop production, trade, and storage was developed for use in estimating the economic consequences of improved forecasts and estimates of worldwide crop production. The model extends earlier distribution benefits models to include production effects as well. Application to improved information systems meeting the goals set in the large area crop inventory experiment (LACIE) indicates annual benefits to the United States of $200 to $250 million for wheat, $50 to $100 million for corn, and $6 to $11 million for soybeans, using conservative assumptions on expected LANDSAT system performance.

  2. Evolution of user analysis on the grid in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.; ATLAS Collaboration

    2017-10-01

    More than one thousand physicists analyse data collected by the ATLAS experiment at the Large Hadron Collider (LHC) at CERN through 150 computing facilities around the world. Efficient distributed analysis requires optimal resource usage and the interplay of several factors: robust grid and software infrastructures, and system capability to adapt to different workloads. The continuous automatic validation of grid sites and the user support provided by a dedicated team of expert shifters have been proven to provide a solid distributed analysis system for ATLAS users. Typical user workflows on the grid, and their associated metrics, are discussed. Measurements of user job performance and typical requirements are also shown.

  3. Monitoring copper release in drinking water distribution systems.

    PubMed

    d'Antonio, L; Fabbricino, M; Panico, A

    2008-01-01

    A new procedure, recently proposed for on-line monitoring of copper released from metal pipes in household plumbing systems for drinking water distribution during the development of corrosion processes, is tested experimentally. Experiments were carried out under controlled laboratory conditions, using synthetic water and varying the water alkalinity. The possibility of using the corrosion potential as a surrogate measure of copper concentration in stagnating water is shown, verifying at the same time the effect of alkalinity on the development of passivation phenomena, which tend to protect the pipe from corrosion. Experimental data are discussed, highlighting the potential of the procedure and recognizing its limitations. Copyright IWA Publishing 2008.

  4. Advanced S-Band studies using the TDRSS communications satellite

    NASA Technical Reports Server (NTRS)

    Jenkins, Jeffrey D.; Osborne, William P.; Fan, Yiping

    1994-01-01

    This report will describe the design, implementation, and results of a propagation experiment which used TDRSS to transmit spread signals at S-Band to an instrumented mobile receiver. The results consist of fade measurements and distribution functions in 21 environments across the Continental United States (CONUS). From these distribution functions, some idea may be gained about what system designers should expect for excess path loss in many mobile environments. Some of these results may be compared against similar measurements made with narrowband beacon measurements. Such comparisons provide insight into what gains the spread signaling system may or may not have in multipath and shadowing environments.

  5. First Operational Experience With a High-Energy Physics Run Control System Based on Web Technologies

    NASA Astrophysics Data System (ADS)

    Bauer, Gerry; Beccati, Barbara; Behrens, Ulf; Biery, Kurt; Branson, James; Bukowiec, Sebastian; Cano, Eric; Cheung, Harry; Ciganek, Marek; Cittolin, Sergio; Coarasa Perez, Jose Antonio; Deldicque, Christian; Erhan, Samim; Gigi, Dominique; Glege, Frank; Gomez-Reino, Robert; Gulmini, Michele; Hatton, Derek; Hwong, Yi Ling; Loizides, Constantin; Ma, Frank; Masetti, Lorenzo; Meijers, Frans; Meschi, Emilio; Meyer, Andreas; Mommsen, Remigius K.; Moser, Roland; O'Dell, Vivian; Oh, Alexander; Orsini, Luciano; Paus, Christoph; Petrucci, Andrea; Pieri, Marco; Racz, Attila; Raginel, Olivier; Sakulin, Hannes; Sani, Matteo; Schieferdecker, Philipp; Schwick, Christoph; Shpakov, Dennis; Simon, Michal; Sumorok, Konstanty; Yoon, Andre Sungho

    2012-08-01

    Run control systems of modern high-energy particle physics experiments have requirements similar to those of today's Internet applications. The Compact Muon Solenoid (CMS) collaboration at CERN's Large Hadron Collider (LHC) therefore decided to build the run control system for its detector based on web technologies. The system is composed of Java Web Applications distributed over a set of Apache Tomcat servlet containers that connect to a database back-end. Users interact with the system through a web browser. The present paper reports on the successful scaling of the system from a small test setup to the production data acquisition system that comprises around 10,000 applications running on a cluster of about 1600 hosts. We report on operational aspects during the first phase of operation with colliding beams, including performance, stability, integration with the CMS Detector Control System, and tools to guide the operator.

  6. Telescience - Optimizing aerospace science return through geographically distributed operations

    NASA Technical Reports Server (NTRS)

    Rasmussen, Daryl N.; Mian, Arshad M.

    1990-01-01

    The paper examines the objectives and requirements of telescience, defined as the means and process for scientists, NASA operations personnel, and astronauts to conduct payload operations as if they were colocated. This process is described in terms of Space Station era platforms. Some of the enabling technologies are discussed, including open-architecture workstations, distributed computing, transaction management, expert systems, and high-speed networks. Recent testbedding experiments are surveyed to highlight some of the human factors requirements.

  7. Packaging waste prevention in the distribution of fruit and vegetables: An assessment based on the life cycle perspective.

    PubMed

    Tua, Camilla; Nessi, Simone; Rigamonti, Lucia; Dolci, Giovanni; Grosso, Mario

    2017-04-01

    In recent years, alternative food supply chains based on short-distance production and delivery have been promoted as being more environmentally friendly than those applied by the traditional retailing system. An example is the supply of seasonal and possibly locally grown fruit and vegetables directly to customers inside a returnable crate (the so-called 'box scheme'). In addition to other claimed environmental and economic advantages, the box scheme is often listed among the packaging waste prevention measures. To check whether such a claim is soundly based, a life cycle assessment was carried out to verify the real environmental effectiveness of the box scheme in comparison to the traditional Italian distribution. The study focused on two reference products, carrots and apples, which are available in the crate all year round. A box scheme operated in Italy was compared with some traditional scenarios where the product is distributed loose or packaged at the large-scale retail trade. Packaging waste generation, 13 environmental and human health impact indicators, and energy consumption were calculated. Results show that the analysed box scheme, as currently managed, cannot be considered a packaging waste prevention measure when compared with the traditional distribution of fruit and vegetables. The weaknesses of the alternative system were identified, and some recommendations were given to improve its environmental performance.

  8. Status of Centrifugal Impeller Internal Aerodynamics: Experiments and Calculations

    DTIC Science & Technology

    1979-02-01

    Dan Adler, February 1979. Approved for public release; distribution unlimited. Prepared for: Naval Air Systems Command, Washington, DC 20361. The work reported herein was supported by the Naval Air Systems Command, Washington, DC. Reproduction of all or part of this report is authorized. Contract N00019-79-WR-91115.

  9. Effect of Electron Energy Distribution on the Hysteresis of Plasma Discharge: Theory, Experiment, and Modeling.

    PubMed

    Lee, Hyo-Chang; Chung, Chin-Wook

    2015-10-20

    Hysteresis, which is the history dependence of physical systems, is one of the most important topics in physics. Interestingly, bi-stability of plasma with a huge hysteresis loop has been observed in inductive plasma discharges. Despite long plasma research, how this plasma hysteresis occurs remains an unresolved question in plasma physics. Here, we report theory, experiment, and modeling of the hysteresis. It was found experimentally and theoretically that evolution of the electron energy distribution (EED) makes a strong plasma hysteresis. In Ramsauer and non-Ramsauer gas experiments, it was revealed that the plasma hysteresis is observed only at high pressure Ramsauer gas where the EED deviates considerably from a Maxwellian shape. This hysteresis was presented in the plasma balance model where the EED is considered. Because electrons in plasmas are usually not in a thermal equilibrium, this EED-effect can be regarded as a universal phenomenon in plasma physics.

  11. Student Dust Counter I : Science Objectives

    NASA Astrophysics Data System (ADS)

    Mitchell, C.; Bryant, C.; Bunch, N.; Chanthawanich, T.; Colgan, M.; Fernandez, A.; Grogan, B.; Holland, G.; Krauss, C.; Krauss, E.; Krauss, O.; Neeland, M.; Horanyi, M.

    2003-12-01

    The New Horizons mission to Pluto and the Kuiper Belt is scheduled for launch in January 2006. As part of the Education and Public Outreach activity of the mission, undergraduate and graduate students at the Laboratory for Atmospheric and Space Physics, University of Colorado, are building a space experiment: the Student Dust Counter (SDC). This talk will summarize the scientific goals of this experiment. An accompanying poster describes the technical details of SDC. The primary goal of SDC is to map the dust distribution in the Solar System from 1 to 50 AU. It will greatly enhance our knowledge of dust production and transport in the outer Solar System by providing more sensitive observations than earlier experiments past Saturn, and the first in situ dust observations beyond 18 AU.

  12. Common Readout Unit (CRU) - A new readout architecture for the ALICE experiment

    NASA Astrophysics Data System (ADS)

    Mitra, J.; Khan, S. A.; Mukherjee, S.; Paul, R.

    2016-03-01

    The ALICE experiment at the CERN Large Hadron Collider (LHC) is presently undergoing a major upgrade in order to fully exploit the scientific potential of the upcoming high luminosity run, scheduled to start in the year 2021. The high interaction rate and the large event size will result in an experimental data flow of about 1 TB/s from the detectors, which needs to be processed before being sent to the online computing system and data storage. This processing is done in a dedicated Common Readout Unit (CRU), proposed for data aggregation, trigger and timing distribution, and control moderation. It acts as a common interface between sub-detector electronic systems, the computing system, and trigger processors. The interface links include GBT, TTC-PON, and PCIe. GBT (Gigabit Transceiver) is used for detector data payload transmission and provides a fixed-latency path for trigger distribution between the CRU and detector readout electronics. TTC-PON (Timing, Trigger and Control via Passive Optical Network) is employed for time-multiplexed trigger distribution between the CRU and the Central Trigger Processor (CTP). PCIe (Peripheral Component Interconnect Express) is the high-speed serial computer expansion bus standard used for bulk data transport between CRU boards and processors. In this article, we give an overview of the CRU architecture in ALICE and discuss the different interfaces, along with the firmware design and implementation of the CRU on the LHCb PCIe40 board.

  13. Astro-WISE: Chaining to the Universe

    NASA Astrophysics Data System (ADS)

    Valentijn, E. A.; McFarland, J. P.; Snigula, J.; Begeman, K. G.; Boxhoorn, D. R.; Rengelink, R.; Helmich, E.; Heraudeau, P.; Verdoes Kleijn, G.; Vermeij, R.; Vriend, W.-J.; Tempelaar, M. J.; Deul, E.; Kuijken, K.; Capaccioli, M.; Silvotti, R.; Bender, R.; Neeser, M.; Saglia, R.; Bertin, E.; Mellier, Y.

    2007-10-01

    The recent explosion of recorded digital data and its processed derivatives threatens to overwhelm researchers when analysing their experimental data or looking up data items in archives and file systems. While current hardware developments allow the acquisition, processing and storage of hundreds of terabytes of data at the cost of a modern sports car, the software systems to handle these data are lagging behind. This problem is very general and is well recognized by various scientific communities; several large projects have been initiated, e.g., DATAGRID/EGEE {http://www.eu-egee.org/}, which federates compute and storage power across the high-energy physics community, while the international astronomical community is building an Internet-geared Virtual Observatory {http://www.euro-vo.org/pub/} (Padovani 2006) connecting archival data. These large projects either focus on a specific distribution aspect or aim to connect many sub-communities, and have a relatively long trajectory for setting standards and a common layer. Here, we report first light of a very different solution (Valentijn & Kuijken 2004) to the problem, initiated by a smaller astronomical IT community. It provides an abstract scientific information layer which integrates distributed scientific analysis with distributed processing and federated archiving and publishing. By designing new abstractions and mixing in old ones, a Science Information System with fully scalable cornerstones has been achieved, transforming data systems into knowledge systems. This breakthrough is facilitated by the full end-to-end linking of all dependent data items, which allows full backward chaining from the observer/researcher to the experiment. Key is the notion that information is intrinsic in nature, and thus so is the data acquired by a scientific experiment. The new abstraction is that software systems guide the user to that intrinsic information by forcing full backward and forward chaining in the data modelling.

  14. A numerical study of zone-melting process for the thermoelectric material of Bi2Te3

    NASA Astrophysics Data System (ADS)

    Chen, W. C.; Wu, Y. C.; Hwang, W. S.; Hsieh, H. L.; Huang, J. Y.; Huang, T. K.

    2015-06-01

    In this study, a numerical model was established with the commercial software ProCAST to simulate the temperature variation/distribution and the subsequent microstructure of Bi2Te3 fabricated by the zone-melting technique. An experiment was then conducted to measure the temperature variation/distribution during the zone-melting process and validate the numerical model. The effects of processing parameters, such as heater moving speed and temperature, on the crystallization microstructure are also numerically evaluated. In the experiment, Bi2Te3 powder is filled into a 30 mm diameter quartz cylinder and the heater is set to 800°C with a moving speed of 12.5 mm/hr. A thermocouple inserted in the powder measures the temperature variation/distribution during the process. The measured temperatures are compared with the numerical simulation, and the results show that the model and the experiment match well. The model is then used to evaluate crystal formation for the 30 mm diameter Bi2Te3 process; it is found that columnar crystals are obtained when the moving speed is slower than 17.5 mm/hr. Finally, the model is used to predict crystal formation in the zone-melting process for Bi2Te3 with a 45 mm diameter; the results show that it is difficult to grow columnar crystals at this diameter.

  15. A system for conducting igneous petrology experiments under controlled redox conditions in reduced gravity

    NASA Technical Reports Server (NTRS)

    Williams, Richard J.

    1987-01-01

    The Space Shuttle and the planned Space Station will permit experimentation under conditions of reduced gravitational acceleration, offering experimental petrologists the opportunity to study crystal growth, element distribution, and phase chemistry. In particular, the confounding effects of macro- and micro-scale buoyancy-induced convection and crystal settling or flotation can be greatly reduced compared with experiments in the terrestrial laboratory. Also, for experiments in which detailed replication of the environment is important, access to reduced gravity will permit a more complete simulation of processes that may have occurred on asteroids or in free space. A technique was developed to control, measure, and manipulate oxygen fugacities with small quantities of gas recirculated over the sample. This system could be adapted to reduced-gravity space experiments requiring redox control.

  16. Flight Hardware Fabricated for Combustion Science in Space

    NASA Technical Reports Server (NTRS)

    OMalley, Terence F.; Weiland, Karen J.

    2005-01-01

    NASA Glenn Research Center's Telescience Support Center (TSC) allows researchers on Earth to operate experiments onboard the International Space Station (ISS) and the space shuttles. NASA's continuing investment in the required software, systems, and networks provides distributed ISS ground operations that enable payload developers and scientists to monitor and control their experiments from the Glenn TSC. The quality of scientific and engineering data is enhanced while the long-term operational costs of experiments are reduced, because principal investigators and engineering teams can operate their payloads from their home institutions.

  17. The ATLAS PanDA Monitoring System and its Evolution

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.
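    The design shift described above, separating data preparation from presentation, can be illustrated framework-free (the function and field names below are hypothetical; the production system uses Django with an Oracle back-end and AJAX in the browser):

```python
import json

# Data preparation: a plain function returns structured job-summary data
# (values invented for illustration).
def job_summary(task_id):
    return {"task": task_id, "finished": 120, "failed": 3, "running": 17}

# Old style: presentation baked into the server, HTML generated on the fly.
def render_html(task_id):
    d = job_summary(task_id)
    return f"<tr><td>{d['task']}</td><td>{d['finished']}</td></tr>"

# New style: the server only serves JSON; a browser-side AJAX view renders it,
# so the same endpoint can also feed external systems.
def render_json(task_id):
    return json.dumps(job_summary(task_id))

payload = json.loads(render_json(42))
print(payload["finished"])
```

Keeping the server's output machine-readable is what makes the richer front end, and communication with external systems, cheap to maintain.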

  18. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution, corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experimental results suggest that MAT-MI under a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction.

  19. Avionics test bed development plan

    NASA Technical Reports Server (NTRS)

    Harris, L. H.; Parks, J. M.; Murdock, C. R.

    1981-01-01

    A development plan for a proposed avionics test bed facility for the early investigation and evaluation of new concepts for the control of large space structures, orbiter attached flex body experiments, and orbiter enhancements is presented. A distributed data processing facility that utilizes the current laboratory resources for the test bed development is outlined. Future studies required for implementation, the management system for project control, and the baseline system configuration are defined. A background analysis of the specific hardware system for the preliminary baseline avionics test bed system is included.

  20. Building a Propulsion Experiment Project Management Environment

    NASA Technical Reports Server (NTRS)

    Keiser, Ken; Tanner, Steve; Hatcher, Danny; Graves, Sara

    2004-01-01

    What do you get when you cross rocket scientists with computer geeks? It is an interactive, distributed computing web of tools and services providing a more productive environment for propulsion research and development. The Rocket Engine Advancement Program 2 (REAP2) project involves researchers at several institutions collaborating on propulsion experiments and modeling. In an effort to facilitate these collaborations among researchers at different locations and with different specializations, researchers at the Information Technology and Systems Center, University of Alabama in Huntsville, are creating a prototype web-based interactive information system in support of propulsion research. This system, to be based on experience gained in creating similar systems for NASA Earth science field experiment campaigns such as the Convection and Moisture Experiments (CAMEX), will assist in the planning and analysis of model and experiment results across REAP2 participants. The initial version of the Propulsion Experiment Project Management Environment (PExPM) consists of a controlled-access web portal facilitating the drafting and sharing of working documents and publications. Interactive tools for building and searching an annotated bibliography of publications related to REAP2 research topics have been created to help organize and maintain the results of literature searches. Work is also underway, with some initial prototypes in place, on interactive project management tools that allow project managers to schedule experiment activities, track status, and report on results. This paper describes current successes, plans, and expected challenges for this project.

  1. Next Generation Workload Management System For Big Data on Heterogeneous Distributed Computing

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Buncic, P.; De, K.; Jha, S.; Maeno, T.; Mount, R.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Porter, R. J.; Read, K. F.; Vaniachine, A.; Wells, J. C.; Wenaus, T.

    2015-05-01

    The Large Hadron Collider (LHC), operating at the international CERN Laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS and ALICE are the largest collaborations ever assembled in the sciences and are at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, both experiments rely on a heterogeneous distributed computational infrastructure. The ATLAS experiment uses the PanDA (Production and Data Analysis) Workload Management System (WMS) for managing the workflow for all data processing on hundreds of data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. The scale is demonstrated by the following numbers: PanDA manages O(10²) sites, O(10⁵) cores, O(10⁸) jobs per year, and O(10³) users, and the ATLAS data volume is O(10¹⁷) bytes. In 2013, we started an ambitious program to expand PanDA to all available computing resources, including opportunistic use of commercial and academic clouds and Leadership Computing Facilities (LCF). The project titled ‘Next Generation Workload Management and Analysis System for Big Data’ (BigPanDA) is funded by DOE ASCR and HEP. Extending PanDA to clouds and LCF presents new challenges in managing heterogeneity and supporting workflow. The BigPanDA project is underway to set up and tailor PanDA at the Oak Ridge Leadership Computing Facility (OLCF) and at the National Research Center "Kurchatov Institute" together with ALICE distributed computing and ORNL computing professionals. Our approach to integration of HPC platforms at the OLCF and elsewhere is to reuse, as much as possible, existing components of the PanDA system. We will present our current accomplishments with running the PanDA WMS at OLCF and other supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facilities infrastructure for High Energy and Nuclear Physics as well as other data-intensive science applications.
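    The quoted order-of-magnitude figures translate into concrete sustained rates; a back-of-envelope sketch, under the assumption that O(10ⁿ) is read as exactly 10ⁿ:

    ```python
    # Back-of-envelope translation of the PanDA scale figures quoted above.
    # Assumption: each O(10^n) figure is taken as exactly 10^n.
    jobs_per_year = 1e8                  # O(10^8) jobs per year
    data_bytes = 1e17                    # O(10^17) bytes of ATLAS data

    seconds_per_year = 365 * 24 * 3600
    jobs_per_second = jobs_per_year / seconds_per_year  # ~3.2 jobs/s sustained
    petabytes = data_bytes / 1e15                       # 100 PB
    print(round(jobs_per_second, 2), petabytes)
    ```

    That is roughly three job launches per second, around the clock, against a 100 PB data store.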

  2. Measurements of trap dynamics of cold OH molecules using resonance-enhanced multiphoton ionization

    NASA Astrophysics Data System (ADS)

    Gray, John M.; Bossert, Jason A.; Shyur, Yomay; Lewandowski, H. J.

    2017-08-01

    Trapping cold, chemically important molecules with electromagnetic fields is a useful technique to study small molecules and their interactions. Traps provide long interaction times, which are needed to precisely examine these low-density molecular samples. However, the trapping fields lead to nonuniform molecular density distributions in these systems. Therefore, it is important to be able to experimentally characterize the spatial density distribution in the trap. Ionizing molecules at different locations in the trap using resonance-enhanced multiphoton ionization (REMPI) and detecting the resulting ions can be used to probe the density distribution even at the low density present in these experiments because of the extremely high efficiency of detection. Until recently, one of the most chemically important molecules, OH, did not have a convenient REMPI scheme identified. Here, we use a newly developed 1+1′ REMPI scheme to detect trapped cold OH molecules. We use this capability to measure the trap dynamics of the central density of the cloud and the density distribution. These types of measurements can be used to optimize loading of molecules into traps, as well as to help characterize the energy distribution, which is critical knowledge for interpreting molecular collision experiments.
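    The link between the measured density profile and the temperature rests on the Boltzmann relation n(r) = n₀·exp(−U(r)/k_BT) for a thermalized trapped sample. A minimal sketch, not from the paper: a harmonic trap approximation with an assumed 50 mK sample and an assumed spring constant chosen to give a millimeter-scale cloud.

    ```python
    import math

    # Illustrative sketch (assumed numbers, not from the paper): in thermal
    # equilibrium a trapped sample follows n(r) = n0 * exp(-U(r) / (kB*T)).
    # Scanning the REMPI ionization point across the trap samples n(r);
    # fitting the profile width then yields the temperature.
    kB = 1.380649e-23          # J/K
    T = 0.05                   # K; a 50 mK sample (assumed value)
    k_trap = 6.9e-19           # J/m^2; harmonic spring constant (assumed)

    def density(r, n0=1.0):
        U = 0.5 * k_trap * r**2          # harmonic trap potential
        return n0 * math.exp(-U / (kB * T))

    sigma = math.sqrt(kB * T / k_trap)   # Gaussian cloud radius, ~1 mm here
    print(sigma, density(sigma))          # density at r = sigma is exp(-1/2)
    ```

    Inverting the measured cloud width for T is exactly the kind of characterization the abstract describes, though the real trap potential for OH is a Stark potential rather than a harmonic well.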

  3. A Guide for Developing Human-Robot Interaction Experiments in the Robotic Interactive Visualization and Experimentation Technology (RIVET) Simulation

    DTIC Science & Technology

    2016-05-01

    research, Kunkler (2006) suggested that the similarities between computer simulation tools and robotic surgery systems (e.g., mechanized feedback...distribution is unlimited. 49 Davies B. A review of robotics in surgery. Proceedings of the Institution of Mechanical Engineers, Part H: Journal...ARL-TR-7683 ● MAY 2016 US Army Research Laboratory A Guide for Developing Human-Robot Interaction Experiments in the Robotic

  4. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
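    The Chandy-Misra algorithm is conservative: a logical process consumes a message only once its input-channel clock guarantees nothing earlier can still arrive, and idle senders emit timestamp-only "null" messages so receivers cannot deadlock. A minimal single-channel sketch (all timestamps and the service time are illustrative assumptions, not from the paper):

    ```python
    from collections import deque

    # Minimal sketch of the conservative (Chandy-Misra) idea on one channel:
    # messages arrive in timestamp order, so the server may safely process
    # each one as it comes; a trailing "null" message carries only a clock
    # update, promising no further traffic before its timestamp.
    SERVICE_TIME = 1.0  # lookahead: output is always at least this far ahead

    def run_server(channel):
        busy_until = 0.0
        departures = []
        while channel:
            t, kind = channel.popleft()      # delivered in timestamp order
            if kind == "job":
                start = max(t, busy_until)   # wait for server to free up
                busy_until = start + SERVICE_TIME
                departures.append(busy_until)
            # "null" messages advance the channel clock only; nothing to do
        return departures

    # Source LP emits jobs at t = 0, 2, 4 and a final null message promising
    # no further traffic before t = 6.
    msgs = deque([(0.0, "job"), (2.0, "job"), (4.0, "job"), (6.0, "null")])
    print(run_server(msgs))   # [1.0, 3.0, 5.0]
    ```

    In a real multi-channel Chandy-Misra simulation, the safe-to-process test takes the minimum over all input-channel clocks, and null-message traffic is the overhead the abstract's experiments weigh against sequential simulation.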

  5. Spatio-temporal patterns of bacteria caused by collective motion

    NASA Astrophysics Data System (ADS)

    Kitsunezaki, So

    2006-04-01

    In incubation experiments on bacterial colonies of Proteus mirabilis, collective motion of bacteria is found to generate macroscopic turbulent patterns on the surface of agar media. We propose a mathematical model to describe the time evolution of the positional and directional distributions of motile bacteria in such systems, and investigate this model both numerically and analytically. It is shown that as the average density of bacteria increases, nonuniform swarming patterns emerge from a uniform stationary state. For sufficiently large densities, we find that spiral patterns are caused by interactions between the local bacterial density and the rotational mode of the collective motion. Unidirectional spiral patterns similar to those observed in experiments appear in the case in which the equilibrium directional distribution is asymmetric.

  6. Probing the Fluctuations of Optical Properties in Time-Resolved Spectroscopy

    NASA Astrophysics Data System (ADS)

    Randi, Francesco; Esposito, Martina; Giusti, Francesca; Misochko, Oleg; Parmigiani, Fulvio; Fausti, Daniele; Eckstein, Martin

    2017-11-01

    We show that, in optical pump-probe experiments on bulk samples, the statistical distribution of the intensity of ultrashort light pulses after interaction with a nonequilibrium complex material can be used to measure the time-dependent noise of the current in the system. We illustrate the general arguments for a photoexcited Peierls material. The transient noise spectroscopy allows us to measure to what extent electronic degrees of freedom dynamically obey the fluctuation-dissipation theorem, and how well they thermalize during the coherent lattice vibrations. The proposed statistical measurement developed here provides a new general framework to retrieve dynamical information on the excited distributions in nonequilibrium experiments, which could be extended to other degrees of freedom of magnetic or vibrational origin.

  7. From experiments to simulations: tracing Na+ distribution around roots under different transpiration rates and salinity levels

    NASA Astrophysics Data System (ADS)

    Perelman, Adi; Jorda, Helena; Vanderborght, Jan; Pohlmeier, Andreas; Lazarovitch, Naftali

    2017-04-01

    According to the Maas and Hoffman model (1976), when salinity increases beyond a certain threshold, crop yield declines at a fixed rate. Predicting salinization and its impact on crops is therefore of great importance. Current models do not consider the impact of environmental conditions on plants' salt tolerance, even though these conditions affect plant water uptake and therefore salt accumulation around the roots. Different factors, such as transpiration rate, can influence a plant's sensitivity to salinity by influencing salt concentrations around the roots. Better model parametrization can help improve prediction of the real effects of salinity on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to study how this distribution is affected by transpiration rate and plant water uptake. Results from tomato plants growing on Rhizoslides (a capillary paper growth system) show that Na+ concentration is higher at the root-substrate interface than in the bulk. Na+ accumulation around the roots also decreased under low transpiration rates, supporting our hypothesis. Additionally, Rhizoslides make it possible to study root growth rate and architecture under different salinity levels. Root system architecture was retrieved from photos taken during the experiment, which enabled us to incorporate real root systems into a simulation. To observe the correlation of root system architecture and Na+ distribution in three dimensions, we used magnetic resonance imaging (MRI). MRI provides fine resolution of Na+ accumulation around a single root without disturbing the root system. Over time, Na+ accumulated only where roots were present in the soil, and later around specific roots. These data are being used for model calibration, which is expected to predict root water uptake in saline soils for different climatic conditions and soil water availabilities.

  8. A dynamic two-dimensional system for measuring volatile organic compound volatilization and movement in soils.

    PubMed

    Allaire, S E; Yates, S R; Ernst, F F; Gan, J

    2002-01-01

    There is an important need to develop instrumentation that allows better understanding of atmospheric emission of toxic volatile compounds associated with soil management. For this purpose, chemical movement and distribution in the soil profile should be simultaneously monitored with its volatilization. A two-dimensional rectangular soil column was constructed and a dynamic sequential volatilization flux chamber was attached to the top of the column. The flux chamber was connected through a manifold valve to a gas chromatograph (GC) for real-time concentration measurement. Gas distribution in the soil profile was sampled with gas-tight syringes at selected times and analyzed with a GC. A pressure transducer was connected to a scanivalve to automatically measure the pressure distribution in the gas phase of the soil profile. The system application was demonstrated by packing the column with a sandy loam in a symmetrical bed-furrow system. A 5-h furrow irrigation was started 24 h after the injection of a soil fumigant, propargyl bromide (3-bromo-1-propyne; 3BP). The experiment showed the importance of measuring lateral volatilization variability, pressure distribution in the gas phase, chemical distribution between the different phases (liquid, gas, and sorbed), and the effect of irrigation on the volatilization. Gas movement, volatilization, water infiltration, and distribution of degradation product (Br-) were symmetric around the bed within 10%. The system saves labor cost and time. This versatile system can be modified and used to compare management practices, estimate concentration-time indexes for pest control, study chemical movement, degradation, and emissions, and test mathematical models.

  9. Meteorological factors in Earth-satellite propagation

    NASA Technical Reports Server (NTRS)

    Levis, C. A.; Taylor, R. C.; Leonard, R.; Lin, K. T.; Pigon, B.; Weller, A.

    1982-01-01

    Using the COMSTAR D/4 28.56 GHz beacon as a source, a differential gain experiment was performed by connecting a 5-meter paraboloidal antenna and a 0.6-meter paraboloidal antenna alternately to the same receiver. Substantial differential gain changes were observed during some, but not all, rain events. A site-diversity experiment was implemented, consisting of two 28.56 GHz radiometers separated by 9 km. The look angle corresponds to that of the D/4 beacon, and data were obtained with one radiometer during several weeks of concurrent beacon operation to verify the system calibration. A theoretical study of the effect of scattering from a nonuniform rain distribution along the path is under way to aid in interpreting the results of this experiment. An improved empirical site-diversity gain model was derived from data in the literature relating to 34 diversity experiments. Work on the experiment control and data acquisition system is continuing with a view toward future experiments.

  10. STEP--a System for Teaching Experimental Psychology using E-Prime.

    PubMed

    MacWhinney, B; St James, J; Schunn, C; Li, P; Schneider, W

    2001-05-01

    Students in psychology need to learn to design and analyze their own experiments. However, software that allows students to build experiments on their own has been limited in a variety of ways. The shipping of the first full release of the E-Prime system later this year will open up a new opportunity for addressing this problem. Because E-Prime promises to become the standard for building experiments in psychology, it is now possible to construct a Web-based resource that uses E-Prime as the delivery engine for a wide variety of instructional materials. This new system, funded by the National Science Foundation, is called STEP (System for the Teaching of Experimental Psychology). The goal of the STEP Project is to provide instructional materials that will facilitate the use of E-Prime in various learning contexts. We are now compiling a large set of classic experiments implemented in E-Prime and available over the Internet from http://step.psy.cmu.edu. The Web site also distributes instructional materials for building courses in experimental psychology based on E-Prime.

  11. Modeling of N2 and O optical emissions for ionosphere HF powerful heating experiments

    NASA Astrophysics Data System (ADS)

    Sergienko, T.; Gustavsson, B.

    Analyses of experiments on F-region ionosphere modification by HF powerful radio waves show that optical observations are very useful tools for diagnosing the interaction of the probing radio wave with the ionospheric plasma. Hitherto, the emissions usually measured in heating experiments have been the 630.0 nm and 557.7 nm lines of atomic oxygen. Other emissions, for instance O 844.8 nm and N2 427.8 nm, have been measured episodically in only a few experiments, although the very rich optical spectrum of molecular nitrogen potentially involves important information about the ionospheric plasma in the heated region. This study addresses the modeling of optical emissions from O and from the N2 triplet states (first positive, second positive, Vegard-Kaplan, infrared afterglow, and Wu-Benesch band systems) excited under the conditions of an ionosphere heating experiment. The auroral triplet-state population distribution model was modified for ionosphere heating conditions by using the different electron distribution functions suggested by Mishin et al. (2000, 2003) and Gustavsson et al. (2004, 2005). Modeling results are discussed from the point of view of the efficiency of measurements of the N2 emissions in future experiments.

  12. Investigation of the Feasibility of Utilizing Gamma Emission Computed Tomography in Evaluating Fission Product Migration in Irradiated TRISO Fuel Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jason M. Harp; Paul A. Demkowicz

    2014-10-01

    In the High Temperature Gas-Cooled Reactor (HTGR), the TRISO particle fuel serves as the primary fission product containment. However, the large number of TRISO particles present in proposed HTGRs dictates that there will be a small fraction (~10⁻⁴ to 10⁻⁵) of as-manufactured and in-pile particle failures that will lead to some fission product release. The matrix material surrounding the TRISO particles in fuel compacts and the structural graphite holding the TRISO particles in place can also serve as sinks for containing any released fission products. However, data on the migration of solid fission products through these materials is lacking. One of the primary goals of the AGR-3/4 experiment is to study fission product migration from failed TRISO particles in prototypic HTGR components such as structural graphite and compact matrix material. In this work, the potential for a Gamma Emission Computed Tomography (GECT) technique to non-destructively examine the fission product distribution in AGR-3/4 components and other irradiation experiments is explored. Specifically, the feasibility of using the Idaho National Laboratory (INL) Hot Fuels Examination Facility (HFEF) Precision Gamma Scanner (PGS) system for this GECT application is considered. To test the feasibility, the response of the PGS system to idealized fission product distributions has been simulated using Monte Carlo radiation transport simulations. Previous work that applied similar techniques during the AGR-1 experiment is also discussed, as well as planned uses for the GECT technique during the post irradiation examination of the AGR-2 experiment. The GECT technique has also been applied to other irradiated nuclear fuel systems currently available in the HFEF hot cell, including oxide fuel pins, metallic fuel pins, and monolithic plate fuel.
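    The core of emission computed tomography is backprojection: each collimated scan records line integrals of the activity, and smearing those projections back across the grid localizes the source. A toy sketch with two orthogonal views of a single point deposit (grid size and source position are assumptions for illustration):

    ```python
    import numpy as np

    # Toy emission-tomography illustration: one hot fission-product deposit
    # is localized from two orthogonal collimated gamma scans by unfiltered
    # backprojection. Grid size and deposit position are assumed values.
    img = np.zeros((32, 32))
    img[20, 9] = 1.0                      # hypothetical activity deposit

    proj_rows = img.sum(axis=1)           # side-on scan: one value per row
    proj_cols = img.sum(axis=0)           # end-on scan: one value per column

    # Backproject: smear each projection across the grid and sum the views.
    back = proj_rows[:, None] + proj_cols[None, :]
    row, col = np.unravel_index(np.argmax(back), back.shape)
    print(row, col)                       # recovers (20, 9)
    ```

    A real GECT reconstruction uses many angles and a measured collimator response, but the two-view case already shows why combining projections pins down where activity sits.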

  13. Distributed MIMO chaotic radar based on wavelength-division multiplexing technology.

    PubMed

    Yao, Tingfeng; Zhu, Dan; Ben, De; Pan, Shilong

    2015-04-15

    A distributed multiple-input multiple-output chaotic radar based on wavelength-division multiplexing (WDM) technology is proposed and demonstrated. The wideband quasi-orthogonal chaotic signals generated by different optoelectronic oscillators (OEOs) are emitted by separated antennas to gain spatial diversity against fluctuations of a target's radar cross section and to enhance the detection capability. The received signals collected by the receive antennas and the reference signals from the OEOs are delivered to the central station for joint processing by exploiting WDM technology. The centralized signal processing avoids precise time synchronization of the distributed system and greatly simplifies the remote units, which improves the localization accuracy of the entire system. A proof-of-concept experiment for two-dimensional localization of a metal target is demonstrated. The maximum position error is less than 6.5 cm.
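    Ranging with a noise-like chaotic probe works by cross-correlating each echo against its stored reference: the correlation peaks at the round-trip delay. A hedged sketch in which a logistic map stands in for the OEO's chaotic waveform (the map, sample counts, and delay are illustrative assumptions):

    ```python
    import numpy as np

    # Sketch of chaotic-radar delay estimation: cross-correlate a noise-like
    # probe with its echo and read the round-trip delay off the peak. A
    # logistic map stands in for the optoelectronic oscillator's waveform;
    # all sample counts, gains, and delays below are assumed values.
    rng = np.random.default_rng(42)

    n = 4096
    x = np.empty(n)
    x[0] = 0.37
    for i in range(1, n):                 # logistic map: broadband, noise-like
        x[i] = 3.99 * x[i - 1] * (1.0 - x[i - 1])
    x -= x.mean()

    true_delay = 250                      # round-trip delay in samples
    received = np.zeros(n + true_delay)
    received[true_delay:] = 0.2 * x       # attenuated echo from the target
    received += 0.05 * rng.standard_normal(received.size)  # receiver noise

    corr = np.correlate(received, x, mode="valid")   # lags 0 .. true_delay
    est_delay = int(np.argmax(corr))
    print(est_delay)                      # 250
    ```

    The sharp, thumbtack-like autocorrelation of chaotic waveforms is what gives this class of radar its fine range resolution; with two or more transmit-receive pairs, the recovered delays intersect to give the 2-D position.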

  14. Implementation of an attack scheme on a practical QKD system

    NASA Astrophysics Data System (ADS)

    Lamas-Linares, Antia; Liu, Qin; Gerhardt, Ilja; Makarov, Vadim; Kurtsiefer, Christian

    2010-03-01

    We report on an experimental implementation of an attack on a practical quantum key distribution system [1], based on a vulnerability of single-photon detectors [2]. An intercept/resend-like attack has been carried out which revealed 100% of the raw key generated between the legitimate communication partners. No increase of the error ratio was observed, which is usually considered a reliable witness for any eavesdropping attempt. We also present an experiment which shows that this attack is not revealed by key distribution protocols probing for eavesdroppers by testing a Bell inequality [3], and discuss implications for practical quantum key distribution. [1] I. Marcikic, A. Lamas-Linares, C. Kurtsiefer, Appl. Phys. Lett. 89, 101122 (2006); [2] V. Makarov, New J. Phys. 11, 065003 (2009); [3] A. Ling et al., Phys. Rev. A 78, 020301(R) (2008).
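    What makes the zero-error result striking is that an ordinary intercept/resend attack on a BB84-style protocol imprints about 25% errors on the sifted key, because the eavesdropper guesses the wrong basis half the time. A small simulation of that textbook baseline (parameters are illustrative):

    ```python
    import random

    # Baseline that the detector-control attack evades: a plain
    # intercept/resend eavesdropper on a BB84-style protocol causes
    # ~25% errors in the sifted key. Sample size is an assumed value.
    random.seed(7)

    def intercept_resend_qber(n=200_000):
        errors = sifted = 0
        for _ in range(n):
            basis_a, bit_a = random.randrange(2), random.randrange(2)
            basis_e = random.randrange(2)
            # Eve's wrong-basis measurement gives a random outcome
            bit_e = bit_a if basis_e == basis_a else random.randrange(2)
            basis_b = random.randrange(2)
            # Bob measures the state Eve resent in her basis
            bit_b = bit_e if basis_b == basis_e else random.randrange(2)
            if basis_b == basis_a:          # keep only matching-basis rounds
                sifted += 1
                errors += bit_b != bit_a
        return errors / sifted

    print(round(intercept_resend_qber(), 3))   # ~0.25
    ```

    The attack described in the abstract sidesteps exactly this statistical witness, which is why it is so consequential for practical systems.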

  15. Direct and full-scale experimental verifications towards ground-satellite quantum key distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jian-Yu; Yang, Bin; Liao, Sheng-Kai; Zhang, Liang; Shen, Qi; Hu, Xiao-Fang; Wu, Jin-Cai; Yang, Shi-Ji; Jiang, Hao; Tang, Yan-Lin; Zhong, Bo; Liang, Hao; Liu, Wei-Yue; Hu, Yi-Hua; Huang, Yong-Mei; Qi, Bo; Ren, Ji-Gang; Pan, Ge-Sheng; Yin, Juan; Jia, Jian-Jun; Chen, Yu-Ao; Chen, Kai; Peng, Cheng-Zhi; Pan, Jian-Wei

    2013-05-01

    Quantum key distribution (QKD) provides the only intrinsically unconditionally secure method for communication, based on the principles of quantum mechanics. Compared with fibre-based demonstrations, free-space links could provide the most appealing solution for communication over much larger distances. Despite significant efforts, all realizations to date rely on stationary sites. Experimental verifications are therefore extremely crucial for applications to a typical low Earth orbit satellite. To achieve direct and full-scale verifications of our set-up, we have carried out three independent experiments with a decoy-state QKD system that together cover all the relevant conditions. The system is operated on a moving platform (using a turntable), on a floating platform (using a hot-air balloon), and with a high-loss channel to demonstrate performance under conditions of rapid motion, attitude change, vibration, random movement of satellites, and a high-loss regime. The experiments address wide ranges of all leading parameters relevant to low Earth orbit satellites. Our results pave the way towards ground-satellite QKD and a global quantum communication network.

  16. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experience parallelizing the sequential implementation of the NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high-performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the performance gain is comparable with that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  17. Research on target information optics communications transmission characteristic and performance in multi-screens testing system

    NASA Astrophysics Data System (ADS)

    Li, Hanshan

    2016-04-01

    To enhance the stability and reliability of multi-screens testing systems, this paper studies the long-distance transmission-link properties and performance of target optical information. It sets up a discrete multi-tone modulation transmission model based on the geometric model of a laser multi-screens testing system and the principles of visible-light communication; analyzes the electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system; investigates the transmission performance and transfer function of the generalized visible-light communication channel; establishes a spatial light-intensity distribution model and distribution function for the optical communication transmission link; and derives an SNR model for the transmission system. Calculation and experimental analysis show that, for a given channel modulation depth, the transmission error rate increases with transmission rate, and that when an appropriate transmission rate is selected, the bit error rate reaches 0.01.
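    The quoted bit error rate of 0.01 pins down a specific SNR once a detection model is fixed. As a hedged sketch, assume simple on-off keying with a midpoint decision threshold (the paper's DMT scheme is more elaborate); then BER = Q(√SNR / 2) with SNR defined as the electrical signal-to-noise power ratio A²/σ²:

    ```python
    import math

    # Hedged sketch: on-off keying with a midpoint threshold (an assumed
    # detection model; the paper's DMT system is more elaborate). With
    # SNR = A^2 / sigma^2, the bit error rate is Q(sqrt(SNR) / 2).
    def qfunc(x):
        """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def ook_ber(snr):
        return qfunc(math.sqrt(snr) / 2.0)

    # A BER of 0.01, as quoted in the abstract, needs roughly 13.4 dB of SNR:
    snr = 10 ** (13.4 / 10)        # ~21.9 (linear)
    print(ook_ber(snr))            # ~0.01
    ```

    Under this model, pushing the transmission rate up spreads the same received power over more symbols, lowering per-symbol SNR and raising the BER, which matches the trend the abstract reports.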

  18. A trade-off between local and distributed information processing associated with remote episodic versus semantic memory.

    PubMed

    Heisz, Jennifer J; Vakorin, Vasily; Ross, Bernhard; Levine, Brian; McIntosh, Anthony R

    2014-01-01

    Episodic memory and semantic memory produce very different subjective experiences yet rely on overlapping networks of brain regions for processing. Traditional approaches for characterizing functional brain networks emphasize static states of function and thus are blind to the dynamic information processing within and across brain regions. This study used information theoretic measures of entropy to quantify changes in the complexity of the brain's response as measured by magnetoencephalography while participants listened to audio recordings describing past personal episodic and general semantic events. Personal episodic recordings evoked richer subjective mnemonic experiences and more complex brain responses than general semantic recordings. Critically, we observed a trade-off between the relative contribution of local versus distributed entropy, such that personal episodic recordings produced relatively more local entropy whereas general semantic recordings produced relatively more distributed entropy. Changes in the relative contributions of local and distributed entropy to the total complexity of the system provide a potential mechanism that allows the same network of brain regions to represent cognitive information as either specific episodes or more general semantic knowledge.
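    The basic ingredient underlying such complexity measures is Shannon entropy: maximal when a signal's outcomes are equiprobable, lower when its distribution is peaked. A minimal sketch of that core quantity (the study itself applies richer multiscale variants to MEG signals):

    ```python
    import math

    # Core quantity behind the entropy measures discussed above: Shannon
    # entropy of a probability distribution, in bits. It is maximal for a
    # uniform distribution and decreases as the distribution becomes peaked.
    def shannon_entropy(p):
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]          # four equiprobable outcomes
    peaked = [0.70, 0.10, 0.10, 0.10]           # one dominant outcome
    print(shannon_entropy(uniform))             # 2.0 bits
    print(round(shannon_entropy(peaked), 3))    # ~1.357 bits
    ```

    Partitioning this kind of entropy into within-region ("local") and between-region ("distributed") contributions is what lets the study express the episodic-semantic trade-off as a shift in where the complexity lives.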

  19. Extracting the temperature of hot carriers in time- and angle-resolved photoemission.

    PubMed

    Ulstrup, Søren; Johannsen, Jens Christian; Grioni, Marco; Hofmann, Philip

    2014-01-01

    The interaction of light with a material's electronic system creates an out-of-equilibrium (non-thermal) distribution of optically excited electrons. Non-equilibrium dynamics relaxes this distribution on an ultrafast timescale to a hot Fermi-Dirac distribution with a well-defined temperature. The advent of time- and angle-resolved photoemission spectroscopy (TR-ARPES) experiments has made it possible to track the decay of the temperature of the excited hot electrons in selected states in the Brillouin zone, and to reveal their cooling in unprecedented detail in a variety of emerging materials. It is, however, not a straightforward task to determine the temperature with high accuracy. This is mainly attributable to an a priori unknown position of the Fermi level and the fact that the shape of the Fermi edge can be severely perturbed when the state in question is crossing the Fermi energy. Here, we introduce a method that circumvents these difficulties and accurately extracts both the temperature and the position of the Fermi level for a hot carrier distribution by tracking the occupation statistics of the carriers measured in a TR-ARPES experiment.
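    The method amounts to jointly fitting a Fermi-Dirac distribution's two free parameters, temperature and Fermi level, to the measured occupation edge. A sketch of the idea on synthetic noiseless data, using a brute-force least-squares grid rather than the paper's actual fitting procedure (all numerical values are assumptions):

    ```python
    import numpy as np

    # Sketch: jointly recover temperature and Fermi level from an occupation
    # edge by least squares, here on synthetic noiseless data over a coarse
    # parameter grid (the real method fits measured TR-ARPES statistics;
    # all numbers below are assumed values).
    kB = 8.617e-5                       # Boltzmann constant, eV/K

    def fermi_dirac(E, EF, T):
        return 1.0 / (np.exp((E - EF) / (kB * T)) + 1.0)

    E = np.linspace(-0.3, 0.4, 200)     # binding-energy axis, eV
    data = fermi_dirac(E, EF=0.05, T=1000.0)   # "measured" hot distribution

    best = min(
        ((np.sum((fermi_dirac(E, EF, T) - data) ** 2), EF, T)
         for EF in np.linspace(0.0, 0.1, 51)        # Fermi-level grid, eV
         for T in np.linspace(500.0, 1500.0, 51)),  # temperature grid, K
        key=lambda item: item[0],
    )
    print(best[1], best[2])             # recovers EF = 0.05 eV, T = 1000 K
    ```

    Fitting EF and T together is the point: fixing the Fermi level at a wrong a priori value, as the abstract notes, systematically biases the extracted temperature.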

  20. An experimental study of an adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Celik, Zeki; Roberts, Leonard

    1988-01-01

    A series of adaptive-wall ventilated wind tunnel experiments was carried out to demonstrate the feasibility of using the side-wall pressure distribution as the flow variable for the assessment of compatibility with free-air conditions. Iterative and one-step convergence methods were applied using the streamwise velocity component, the side-wall pressure distribution, and the normal velocity component in order to investigate their relative merits. The advantage of using the side-wall pressure as the flow variable is that it reduces the data-taking time, which is one of the major contributors to the total testing time. In ventilated adaptive-wall wind tunnel testing, side-wall pressure measurements require simple instrumentation, as opposed to the Laser Doppler Velocimetry used to measure the velocity components. In ventilated adaptive-wall tunnel testing, influence coefficients are required to determine the pressure corrections in the plenum compartment. Experiments were carried out to evaluate the influence coefficients from side-wall pressure distributions, and from streamwise and normal velocity distributions at two control levels. Velocity measurements were made using a two-component Laser Doppler Velocimeter system.

  1. Scattering from Rock and Rock Outcrops

    DTIC Science & Technology

    2013-09-30

whose orientations and size distributions reflect the internal fault organization of the bedrock. A mathematical model of the leeward side of an...scattering from facets oriented close to normal incidence to the sonar system. Diffraction from sharp edges may also contribute strong scattering that is...collected in a recent field experiment and are currently being analyzed. Figure 5 shows PhD student Derek Olson alongside the photogrammetry system

  2. The Starlite Project

    DTIC Science & Technology

    1990-09-01

    conflicts. The current prototyping tool also provides a multiversion data object control mechanism. From a series of experiments, we found that the...performance of a multiversion distributed database system is quite sensitive to the size of read-sets and write-sets of transactions. A multiversion database...510-512. (18) Son, S. H. and N. Haghighi, "Performance Evaluation of Multiversion Database Systems," Sixth IEEE International Conference on Data

  3. Satellite and earth science data management activities at the U.S. geological survey's EROS data center

    USGS Publications Warehouse

    Carneggie, David M.; Metz, Gary G.; Draeger, William C.; Thompson, Ralph J.

    1991-01-01

The U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center, the national archive for Landsat data, has 20 years of experience in acquiring, archiving, processing, and distributing Landsat and earth science data. The Center is expanding its satellite and earth science data management activities to support the U.S. Global Change Research Program and the National Aeronautics and Space Administration (NASA) Earth Observing System Program. The Center's current and future data management activities focus on land data and include: satellite and earth science data set acquisition, development, and archiving; data set preservation, maintenance, and conversion to more durable and accessible archive media; development of an advanced Land Data Information System; development of enhanced data packaging and distribution mechanisms; and data processing, reprocessing, and product generation systems.

  4. Teaching and Learning Activity Sequencing System using Distributed Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsunori; Ishikawa, Tomotake; Okamoto, Toshio

The purpose of this study is the development of a system to support teachers in designing lesson plans, in particular for the new subject "Information Study". The system generates teaching and learning activity sequences by interlinking lesson activities corresponding to the various conditions specified in the user's input. Because the input comprises multiple pieces of information, contradictions can arise that the system must resolve. This multiobjective optimization problem is solved with distributed genetic algorithms, in which several fitness functions are defined with reference models of lessons, thinking styles, and teaching styles. Various experiments verified the effectiveness and validity of the proposed methods and reference models; they also pointed out future work on the reference models and evaluation functions.
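A distributed (island-model) genetic algorithm of the kind described can be sketched as follows; the toy fitness function and the migration schedule are illustrative assumptions, not the authors' reference models:

```python
import random

random.seed(1)

def fitness(seq):
    # Toy stand-in for the reference-model objectives: reward sequences of
    # activity codes that are (locally) ascending.
    return sum(1 for a, b in zip(seq, seq[1:]) if a <= b)

def evolve(pop, generations):
    """One island: selection, one-point crossover, and mutation."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = []
        while len(survivors) + len(children) < len(pop):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, len(p1))
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:          # mutation
                i = random.randrange(len(child))
                child[i] = random.randrange(10)
            children.append(child)
        pop = survivors + children
    return pop

# Two islands evolve independently, then periodically exchange their best
# individuals (the "distributed" part of the algorithm).
islands = [[[random.randrange(10) for _ in range(8)] for _ in range(20)]
           for _ in range(2)]
for _ in range(5):
    islands = [evolve(pop, 10) for pop in islands]
    best0 = max(islands[0], key=fitness)
    best1 = max(islands[1], key=fitness)
    islands[0].append(best1)   # migration step
    islands[1].append(best0)

best = max(max(pop, key=fitness) for pop in islands)
```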

  5. A fiber-based quasi-continuous-wave quantum key distribution system

    PubMed Central

    Shen, Yong; Chen, Yan; Zou, Hongxin; Yuan, Jianmin

    2014-01-01

We report a fiber-based quasi-continuous-wave (CW) quantum key distribution (QKD) system with continuous variables (CV). This system employs coherent light pulses and time multiplexing to maximally reduce cross talk in the fiber. A no-switching detection scheme is adopted to optimize the repetition rate. Information is encoded on the sideband of the pulsed coherent light to fully exploit the continuous-wave nature of the laser field. With this configuration, a high secret key rate can be achieved. For the 50 MHz detected bandwidth in our experiment, when the multidimensional reconciliation protocol is applied, a secret key rate of 187 kb/s can be achieved over 50 km of optical fiber against collective attacks, which have been shown to be asymptotically optimal. Moreover, recently studied loopholes have been fixed in our system. PMID:24691409

  6. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network.

    PubMed

    Li, Jianwei; Zhang, Weimin; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing is a promising method in this field because of its advantages of safety, no need for a coupling agent, etc. To reduce the cost of eddy current stress measurement and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network was proposed to obtain the stress distribution from the stress at several discrete test points. To verify the performance of the developed test system and the validity of the proposed method, experiments were conducted on structural steel (Q235) specimens. Standard curves of the sensors at each test point were obtained, the calibrated data were used to establish the BP neural network model approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show a good linear relationship between the change of signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect changes in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, problems that appeared in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement and advancing the engineering application of eddy current stress testing.
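Given the reported average sensitivity of -0.4228 mV/MPa, converting signal-modulus changes at discrete test points into stresses and interpolating a distribution can be sketched as follows (the signal values and positions are invented, and plain linear interpolation stands in for the paper's BP-network step):

```python
import numpy as np

SENSITIVITY_MV_PER_MPA = -0.4228   # reported theoretical average sensitivity

def stress_from_signal(delta_modulus_mv):
    """Invert the linear calibration: stress = delta_signal / sensitivity."""
    return delta_modulus_mv / SENSITIVITY_MV_PER_MPA

# Hypothetical signal-modulus changes (mV) at four test points on the specimen.
positions_mm = np.array([0.0, 20.0, 40.0, 60.0])
delta_mv = np.array([-21.14, -42.28, -63.42, -42.28])
stresses = stress_from_signal(delta_mv)        # MPa at each test point

# Interpolate between test points to approximate the full stress distribution.
profile = np.interp(np.linspace(0, 60, 7), positions_mm, stresses)
```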

  7. Estimation of stress distribution in ferromagnetic tensile specimens using low cost eddy current stress measurement system and BP neural network

    PubMed Central

    Li, Jianwei; Zeng, Weiqin; Chen, Guolong; Qiu, Zhongchao; Cao, Xinyuan; Gao, Xuanyi

    2017-01-01

Estimation of the stress distribution in ferromagnetic components is very important for evaluating the working status of mechanical equipment and implementing preventive maintenance. Eddy current testing is a promising method in this field because of its advantages of safety, no need for a coupling agent, etc. To reduce the cost of eddy current stress measurement and obtain the stress distribution in ferromagnetic materials without scanning, a low-cost eddy current stress measurement system based on an Archimedes spiral planar coil was established, and a method based on a BP neural network was proposed to obtain the stress distribution from the stress at several discrete test points. To verify the performance of the developed test system and the validity of the proposed method, experiments were conducted on structural steel (Q235) specimens. Standard curves of the sensors at each test point were obtained, the calibrated data were used to establish the BP neural network model approximating the stress variation on the specimen surface, and the stress distribution curve of the specimen was obtained by interpolating with the established model. The results show a good linear relationship between the change of signal modulus and the stress over most of the elastic range of the specimen, and the established system can detect changes in stress with a theoretical average sensitivity of -0.4228 mV/MPa. The obtained stress distribution curve agrees well with the theoretical analysis. Finally, possible causes of, and remedies for, problems that appeared in the results are discussed. This research is significant for reducing the cost of eddy current stress measurement and advancing the engineering application of eddy current stress testing. PMID:29145500

  8. Data Aggregation System: A system for information retrieval on demand over relational and non-relational distributed data sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ball, G.; Kuznetsov, V.; Evans, D.

We present the Data Aggregation System (DAS), a system for information retrieval and aggregation from heterogeneous sources of relational and non-relational data for the Compact Muon Solenoid experiment at the CERN Large Hadron Collider. The experiment currently has a number of organically-developed data sources, including front-ends to a number of different relational databases and non-database data services which do not share common data structures or APIs (Application Programming Interfaces), and cannot at this stage be readily converged. DAS provides a single interface for querying all these services, a caching layer to speed up access to expensive underlying calls, and the ability to merge records from different data services pertaining to a single primary key.
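The record-merging step — combining partial records from different services that share a primary key — might look like this minimal sketch (the service names and fields are illustrative, not the actual DAS API):

```python
def merge_records(primary_key, *sources):
    """Merge records from heterogeneous services sharing a primary key.

    Each source is a list of dicts; later sources fill in fields the
    earlier ones lack, without overwriting existing values.
    """
    merged = {}
    for source in sources:
        for record in source:
            key = record[primary_key]
            merged.setdefault(key, {}).update(
                {k: v for k, v in record.items()
                 if k not in merged[key]})
    return merged

# Hypothetical records from two services, keyed on the "dataset" field.
dbs_records = [{"dataset": "/A/B", "size": 10}]
location_records = [{"dataset": "/A/B", "site": "T1_X"}]
combined = merge_records("dataset", dbs_records, location_records)
```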

  9. Design considerations, architecture, and use of the Mini-Sentinel distributed data system.

    PubMed

    Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S

    2012-01-01

    We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
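The distributed-querying pattern described — each partner executes the same code locally and returns only summary results to the coordinating center — can be sketched as follows (the sites and records are invented):

```python
# Each data partner executes the query against its own data and returns
# only a summary count; source data never leave the partner.
def run_local_query(local_data, condition):
    return sum(1 for row in local_data if condition(row))

partners = {
    "site_a": [{"age": 34, "exposed": True}, {"age": 71, "exposed": False}],
    "site_b": [{"age": 52, "exposed": True}],
}

# Coordinating center: distribute the query, then aggregate returned counts.
counts = {site: run_local_query(rows, lambda r: r["exposed"])
          for site, rows in partners.items()}
total_exposed = sum(counts.values())
```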

  10. The dish-Rankine SCSTPE program (Engineering Experiment no. 1). [systems engineering and economic analysis for a small community solar thermal electric system

    NASA Technical Reports Server (NTRS)

    Pons, R. L.; Grigsby, C. E.

    1980-01-01

Activities planned for phase 2 of the Small Community Solar Thermal Power Experiment (SCSTPE) program are summarized, with emphasis on a dish-Rankine point-focusing distributed receiver (PFDR) solar thermal electric system. Major design efforts include: (1) development of an advanced-concept indirect-heated receiver; (2) development of hardware and software for a totally unmanned power plant control system; (3) implementation of a hybrid digital simulator which will validate plant operation prior to field testing; and (4) the acquisition of an efficient organic Rankine cycle power conversion unit. Preliminary performance analyses indicate that a mass-produced dish-Rankine PFDR system is potentially capable of producing electricity at a levelized busbar energy cost of 60 to 70 mills per kWh and with a capital cost of about $1300 per kW.

  11. Complexation behavior of oppositely charged polyelectrolytes: Effect of charge distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Zhao, Mingtian; Li, Baohui; Zhou, Jihan

Complexation behavior of oppositely charged polyelectrolytes in a solution is investigated using a combination of computer simulations and experiments, focusing on the influence of the polyelectrolyte charge distributions along the chains on the structure of the polyelectrolyte complexes. The simulations are performed using Monte Carlo with the replica-exchange algorithm for three model systems, where each system is composed of a mixture of two types of oppositely charged model polyelectrolyte chains, (EGEG)₅/(KGKG)₅, (EEGG)₅/(KKGG)₅, and (EEGG)₅/(KGKG)₅, in a solution including explicit solvent molecules. Among the three model systems, only the charge distributions along the chains are not identical. Thermodynamic quantities are calculated as a function of temperature (or ionic strength), and the microscopic structures of complexes are examined. It is found that the three systems have different transition temperatures, and form complexes with different sizes, structures, and densities at a given temperature. Complex microscopic structures with an alternating arrangement of one monolayer of E/K monomers and one monolayer of G monomers, with one bilayer of E and K monomers and one bilayer of G monomers, and with a mixture of monolayer and bilayer of E/K monomers in a box shape and a trilayer of G monomers inside the box are obtained for the three mixture systems, respectively. The experiments are carried out for three systems where each is composed of a mixture of two types of oppositely charged peptide chains. Each peptide chain is composed of lysine (K) and glycine (G), or glutamate (E) and G, in solution; the chain length and amino acid sequences, and hence the charge distribution, are precisely controlled and identical with those of the corresponding model chains. The complexation behavior and complex structures are characterized through laser light scattering and atomic force microscopy measurements. The order of the apparent weight-averaged molar mass and the order of density of complexes observed from the three experimental systems are qualitatively in agreement with those predicted from the simulations.
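The replica-exchange move used in such simulations swaps configurations between two temperatures with a Metropolis acceptance criterion. A generic sketch in reduced units (not the authors' code):

```python
import math
import random

random.seed(0)

def swap_accepted(E1, T1, E2, T2):
    """Metropolis criterion for swapping replicas at temperatures T1 and T2
    with instantaneous energies E1 and E2 (reduced units, k_B = 1)."""
    delta = (1.0 / T1 - 1.0 / T2) * (E1 - E2)
    return delta >= 0 or random.random() < math.exp(delta)

# A swap that moves the higher-energy configuration to the higher temperature
# is always accepted; the reverse is accepted with probability exp(delta).
always = swap_accepted(5.0, 1.0, 2.0, 2.0)   # delta = +1.5 -> accepted
```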

  12. Results of the VPI&SU Comstar experiment. [depolarization and attenuation due to rain

    NASA Technical Reports Server (NTRS)

    Andrews, J. H.; Ozbay, C.; Pratt, T.; Bostian, C. W.; Manus, E. A.; Gaines, J. M.; Marshall, R. E.; Stutzman, W. L.; Wiley, P. H.

    1982-01-01

    This paper summarizes annual and cumulative attenuation data, depolarization data, and associated local rain rate distributions obtained with the Comstar family of 19.04- and 28.56-GHz satellite beacons during the years 1977-1981. It discusses the relationships between attenuation and rain rate and between attenuation and depolarization, compares measured data on the joint distribution of attenuation and depolarization, and examines the limitations that propagation effects will impose on future 20/30-GHz satellite communications systems.

  13. Ozone and aerosol distributions measured by airborne lidar during the 1988 Arctic Boundary Layer Experiment

    NASA Technical Reports Server (NTRS)

    Browell, Edward V.; Butler, Carolyn F.; Kooi, Susan A.

    1991-01-01

    Consideration is given to O3 and aerosol distributions measured from an aircraft using a DIAL system in order to study the sources and sinks of gases and aerosols over the tundra regions of Alaska during summer 1988. The tropospheric O3 budget over the Arctic was found to be strongly influenced by stratospheric intrusions. Regions of low aerosol scattering and enhanced O3 mixing ratios were usually correlated with descending air from the upper troposphere or lower stratosphere.

  14. Precipitating Auroral Electron Flux Characteristics Based on UV Data Obtained by the AIRS Experiment Onboard the Polar BEAR Satellite

    DTIC Science & Technology

    1992-03-01

This document has been reviewed and is approved for publication. Contract Manager: Richard Eastes/GPIM...system within this band (Vallance Jones, 1974). The MSIS86 model atmosphere (Hedin, 1987) was used to generate the model atmospheres for our analysis of

  15. The Design of a 100 GHz CARM (Cyclotron Auto-Resonance Maser) Oscillator Experiment

    DTIC Science & Technology

    1988-09-14

pulsed-power system must be considered. A model of the voltage pulse that consists of a linear voltage rise from zero to the operating voltage...to vary as the voltage to the 3/2 power in order to model space-charge-limited flow from a relativistic diode... As the current rises in the pulse, the...distribution due to a space-charge-limited, laminar flow of electrons based on a one-dimensional, planar, relativistic model. From the charge distribution

  16. Measurements in the Turbulent Boundary Layer at Constant Pressure in Subsonic and Supersonic Flow. Part I. Mean Flow

    DTIC Science & Technology

    1978-05-01

Approved for public release; distribution unlimited. Prepared for Arnold Engineering Development Center (DOTR), Air Force Systems Command, Arnold Air Force Station...section and diffuser. The measurements used the JPL multiport measuring system, which simultaneously recorded the stagnation temperature and...stagnation and static pressures were recorded by the data system. For the experiments at CIT, two techniques were employed. Within the first 100 cm from

  17. Software-Enabled Distributed Network Governance: The PopMedNet Experience.

    PubMed

    Davies, Melanie; Erickson, Kyle; Wyner, Zachary; Malenfant, Jessica; Rosen, Rob; Brown, Jeffrey

    2016-01-01

    The expanded availability of electronic health information has led to increased interest in distributed health data research networks. The distributed research network model leaves data with and under the control of the data holder. Data holders, network coordinating centers, and researchers have distinct needs and challenges within this model. The concerns of network stakeholders are addressed in the design and governance models of the PopMedNet software platform. PopMedNet features include distributed querying, customizable workflows, and auditing and search capabilities. Its flexible role-based access control system enables the enforcement of varying governance policies. Four case studies describe how PopMedNet is used to enforce network governance models. Trust is an essential component of a distributed research network and must be built before data partners may be willing to participate further. The complexity of the PopMedNet system must be managed as networks grow and new data, analytic methods, and querying approaches are developed. The PopMedNet software platform supports a variety of network structures, governance models, and research activities through customizable features designed to meet the needs of network stakeholders.
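The flexible role-based access control described can be illustrated with a minimal policy table (the roles and operations are hypothetical examples, not PopMedNet's actual policy set):

```python
# Minimal role-based access control sketch: a governance policy maps each
# role to the set of operations it may perform on the network.
POLICIES = {
    "data_partner": {"review_query", "approve_query", "upload_results"},
    "researcher": {"submit_query", "view_results"},
    "coordinating_center": {"submit_query", "audit", "manage_workflow"},
}

def is_allowed(role, operation):
    """Check an operation against the role's policy; unknown roles get nothing."""
    return operation in POLICIES.get(role, set())
```

Varying governance models then reduce to editing the policy table rather than changing the enforcement code.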

  18. Numerical and experimental analyses of the radiant heat flux produced by quartz heating systems

    NASA Technical Reports Server (NTRS)

    Turner, Travis L.; Ash, Robert L.

    1994-01-01

A method is developed for predicting the radiant heat flux distribution produced by tungsten filament, tubular fused-quartz envelope heating systems with reflectors. The method is an application of Monte Carlo simulation, which takes the form of a random walk or ray tracing scheme. The method is applied to four systems of increasing complexity, including a single lamp without a reflector, a single lamp with a flat reflector, a single lamp with a parabolic reflector, and up to six lamps in a six-lamp contoured-reflector heating unit. The application of the Monte Carlo method to the simulation of the thermal radiation generated by these systems is discussed. The procedures for numerical implementation are also presented. Experiments were conducted to study these quartz heating systems and to acquire measurements of the corresponding empirical heat flux distributions for correlation with analysis. The experiments were conducted such that several complicating factors could be isolated and studied sequentially. Comparisons of the experimental results with analysis are presented and discussed. Good agreement between the experimental and simulated results was obtained in all cases. This study shows that this method can be used to analyze very complicated quartz heating systems and can account for factors such as spectral properties, specular reflection from curved surfaces, source enhancement due to reflectors and/or adjacent sources, and interaction with a participating medium in a straightforward manner.
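The random-walk/ray-tracing idea can be illustrated by a stripped-down 2-D Monte Carlo estimate of the flux a bare lamp deposits on a plane below it (the geometry and ray count are arbitrary assumptions; no reflector, spectral properties, or participating medium):

```python
import math
import random

random.seed(42)

# Idealized 2-D sketch: a lamp 0.1 m wide, 0.2 m above a target plane,
# emitting rays isotropically into the lower half-space.
N_RAYS = 100_000
BINS, X_MIN, X_MAX = 40, -0.5, 0.5
flux = [0] * BINS   # ray tally per bin on the target plane

for _ in range(N_RAYS):
    x0 = random.uniform(-0.05, 0.05)                   # emission point on lamp
    theta = random.uniform(-math.pi / 2, math.pi / 2)  # downward ray direction
    x_hit = x0 + 0.2 * math.tan(theta)                 # hit point on the plane
    if X_MIN <= x_hit < X_MAX:
        flux[int((x_hit - X_MIN) / (X_MAX - X_MIN) * BINS)] += 1
```

The tally peaks under the lamp and falls off toward the edges; the paper's full method adds specular reflection from reflector surfaces and multiple lamps to the same random-walk loop.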

  19. Utilization of Internet Protocol-Based Voice Systems in Remote Payload Operations

    NASA Technical Reports Server (NTRS)

Chamberlain, Jim; Bradford, Bob; Best, Susan; Nichols, Kelvin

    2002-01-01

Due to limited crew availability to support science and the large number of experiments to be operated simultaneously, telescience is key to a successful International Space Station (ISS) science program. Crew, operations personnel at NASA centers, and researchers at universities and companies around the world must work closely together to perform scientific experiments on board the ISS. The deployment of reliable high-speed Internet Protocol (IP)-based networks promises to greatly enhance telescience capabilities. These networks are now being used to cost-effectively extend the reach of remote mission support systems. They reduce the need for dedicated leased lines and travel while improving distributed workgroup collaboration capabilities. NASA has initiated use of Voice over Internet Protocol (VoIP) to supplement the existing mission voice communications system used by researchers at their remote sites. The Internet Voice Distribution System (IVoDS) connects remote researchers to mission support "loops" or conferences via NASA networks and Internet2. Researchers use IVoDS software on personal computers to talk with operations personnel at NASA centers. IVoDS also has the capability, if authorized, to allow researchers to communicate with the ISS crew during experiment operations. IVoDS was developed by Marshall Space Flight Center with contractors & Technology, First Virtual Communications, Lockheed-Martin, and VoIP Group. IVoDS is currently undergoing field-testing, with full deployment for up to 50 simultaneous users expected in 2002. Research is being performed in parallel with IVoDS deployment for a next-generation system to qualitatively enhance communications among ISS operations personnel. In addition to the current voice capability, video and data/application-sharing capabilities are being investigated. IVoDS technology is also being considered for mission support systems for programs such as Space Launch Initiative and Homeland Defense.

  20. Adaptive Optical System for Retina Imaging Approaches Clinic Applications

    NASA Astrophysics Data System (ADS)

    Ling, N.; Zhang, Y.; Rao, X.; Wang, C.; Hu, Y.; Jiang, W.; Jiang, C.

We presented "A small adaptive optical system on table for human retinal imaging" at the 3rd Workshop on Adaptive Optics for Industry and Medicine. In that system, a 19-element small deformable mirror was used as the wavefront correction element, and high resolution images of photoreceptors and capillaries of the human retina were obtained. In the past two years, a new adaptive optical system for human retina imaging has been developed on the basis of that system. The wavefront correction element is a newly developed 37-element deformable mirror, and several modifications have been adopted for ease of operation. Experiments at different imaging wavelengths and axial positions were conducted, and mosaic pictures of photoreceptors and capillaries were obtained. 100 normal and abnormal eyes of different ages have been inspected. The first report in the world of highly detailed capillary distribution images covering a ±3° by ±3° field around the fovea has been demonstrated. Preliminary very-early-diagnosis experiments have been tried in the laboratory, and the system is planned to move to the hospital for clinical experiments.

  1. Scientific workflow and support for high resolution global climate modeling at the Oak Ridge Leadership Computing Facility

    NASA Astrophysics Data System (ADS)

    Anantharaj, V.; Mayer, B.; Wang, F.; Hack, J.; McKenna, D.; Hartman-Baker, R.

    2012-04-01

    The Oak Ridge Leadership Computing Facility (OLCF) facilitates the execution of computational experiments that require tens of millions of CPU hours (typically using thousands of processors simultaneously) while generating hundreds of terabytes of data. A set of ultra high resolution climate experiments in progress, using the Community Earth System Model (CESM), will produce over 35,000 files, ranging in sizes from 21 MB to 110 GB each. The execution of the experiments will require nearly 70 Million CPU hours on the Jaguar and Titan supercomputers at OLCF. The total volume of the output from these climate modeling experiments will be in excess of 300 TB. This model output must then be archived, analyzed, distributed to the project partners in a timely manner, and also made available more broadly. Meeting this challenge would require efficient movement of the data, staging the simulation output to a large and fast file system that provides high volume access to other computational systems used to analyze the data and synthesize results. This file system also needs to be accessible via high speed networks to an archival system that can provide long term reliable storage. Ideally this archival system is itself directly available to other systems that can be used to host services making the data and analysis available to the participants in the distributed research project and to the broader climate community. The various resources available at the OLCF now support this workflow. The available systems include the new Jaguar Cray XK6 2.63 petaflops (estimated) supercomputer, the 10 PB Spider center-wide parallel file system, the Lens/EVEREST analysis and visualization system, the HPSS archival storage system, the Earth System Grid (ESG), and the ORNL Climate Data Server (CDS). The ESG features federated services, search & discovery, extensive data handling capabilities, deep storage access, and Live Access Server (LAS) integration. 
The scientific workflow enabled on these systems, and developed as part of the Ultra-High Resolution Climate Modeling Project, allows users of OLCF resources to efficiently share simulated data, often multi-terabyte in volume, as well as the results from the modeling experiments and various synthesized products derived from these simulations. The final objective in the exercise is to ensure that the simulation results and the enhanced understanding will serve the needs of a diverse group of stakeholders across the world, including our research partners in U.S. Department of Energy laboratories & universities, domain scientists, students (K-12 as well as higher education), resource managers, decision makers, and the general public.

  2. A Review on Development Practice of Smart Grid Technology in China

    NASA Astrophysics Data System (ADS)

    Han, Liu; Chen, Wei; Zhuang, Bo; Shen, Hongming

    2017-05-01

Smart grid has become an inexorable trend of energy and economic development worldwide. Since the development of the smart grid was put forward in China in 2009, abundant research results and practical experience have been obtained, along with extensive attention from the international community. This paper systematically reviews China's key technologies and demonstration projects in new energy integration forecasting; energy storage; smart substations; disaster prevention and reduction for power transmission lines; flexible DC transmission; distribution automation; distributed generation access and microgrids; smart power consumption; the comprehensive demonstration of power distribution and utilization; smart power dispatching and control systems; and communication networks and information platforms, organized around five fields: renewable energy integration, smart power transmission and transformation, smart power distribution and consumption, smart power dispatching and control systems, and information and communication platforms. It also analyzes and compares the developmental level of similar technologies abroad, providing an outlook on the future development trends of these technologies.

  3. Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Magee, Jeff; Moffett, Jonathan

    1996-06-01

Special Issue on Management This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, SysMan and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue. To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. 
Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform. Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture. 
The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, is coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.

  4. Discharge transient coupling in large space power systems

    NASA Technical Reports Server (NTRS)

    Stevens, N. John; Stillwell, R. P.

    1990-01-01

Experiments have shown that plasma environments can induce discharges in solar arrays. These plasmas simulate the environments found in low earth orbits where current plans call for operation of very large power systems. The discharges could be large enough to couple into the power system and possibly disrupt operations. Here, the general concepts of the discharge mechanism and the techniques of coupling are discussed. Data from both ground and flight experiments are reviewed to obtain an experimental basis for the interactions. These concepts were applied to the Space Station solar array and distribution system as an example of a large space power system. The effect of discharges was found to be a function of the discharge site. For most sites in the array, discharges would not seriously impact performance. One location at the negative end of the array was identified as a position where discharges could couple to charge stored in system capacitors. This latter case could impact performance.

  5. A Long-Term Study of the Microbial Community Structure in a Simulated Chloraminated Drinking Water Distribution System - abstract

    EPA Science Inventory

    Many US water treatment facilities use chloramination to limit regulated disinfectant by-product formation. However, chloramination has been shown to promote nitrifying bacteria, and 30 to 63% of water utilities using secondary chloramine disinfection experience nitrification ep...

  6. Field deployment to quantify the value of real-time information by integrating driver routing decisions and route assignment strategies.

    DOT National Transportation Integrated Search

    2014-05-01

Advanced Traveler Information Systems (ATIS) have been proposed as a mechanism to generate and distribute real-time travel information to drivers for the purpose of improving travel experience, represented by experienced travel time, and enhancing ...

  7. Two-Dimensional Homogeneous Fermi Gases

    NASA Astrophysics Data System (ADS)

    Hueck, Klaus; Luick, Niclas; Sobirey, Lennart; Siegl, Jonas; Lompe, Thomas; Moritz, Henning

    2018-02-01

    We report on the experimental realization of homogeneous two-dimensional (2D) Fermi gases trapped in a box potential. In contrast to harmonically trapped gases, these homogeneous 2D systems are ideally suited to probe local as well as nonlocal properties of strongly interacting many-body systems. As a first benchmark experiment, we use a local probe to measure the density of a noninteracting 2D Fermi gas as a function of the chemical potential and find excellent agreement with the corresponding equation of state. We then perform matter wave focusing to extract the momentum distribution of the system and directly observe Pauli blocking in a near unity occupation of momentum states. Finally, we measure the momentum distribution of an interacting homogeneous 2D gas in the crossover between attractively interacting fermions and bosonic dimers.
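For reference, the noninteracting benchmark admits a closed-form equation of state; in standard notation (assumed here, since the abstract quotes no formulas), the density per spin component of an ideal 2D Fermi gas obeys

```latex
n\,\lambda_T^{2} = \ln\!\left(1 + e^{\mu/k_B T}\right),
\qquad
\lambda_T = \frac{h}{\sqrt{2\pi m\, k_B T}},
```

so a local density measurement as a function of the chemical potential μ traces this curve directly.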

  8. Adaptive, Distributed Control of Constrained Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

Product Distribution (PD) theory was recently developed as a broad framework for analyzing and optimizing distributed systems. Here we demonstrate its use for adaptive distributed control of Multi-Agent Systems (MASs), i.e., for distributed stochastic optimization using MASs. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution on the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. One common way to find that equilibrium is to have each agent run a Reinforcement Learning (RL) algorithm. PD theory reveals this to be a particular type of search algorithm for minimizing the Lagrangian. Typically that algorithm is quite inefficient. A more principled alternative is to use a variant of Newton's method to minimize the Lagrangian. Here we compare this alternative to RL-based search in three sets of computer experiments. These are the N-Queens problem and bin-packing problem from the optimization literature, and the Bar problem from the distributed RL literature. Our results confirm that the PD-theory-based approach outperforms the RL-based scheme in all three domains.
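The Lagrangian referred to here can be sketched in the standard notation of PD theory (the symbols below are assumptions; the abstract itself states no formulas): the joint distribution factorizes over agents, and the bounded-rational equilibrium extremizes a maxent Lagrangian

```latex
q(x) = \prod_i q_i(x_i),
\qquad
\mathcal{L}(q) = \mathbb{E}_q\!\left[G(x)\right] - T\,S(q),
```

where G is the team utility (cost convention), S(q) is the Shannon entropy of the product distribution, and the temperature T encodes the agents' degree of rationality; full rationality is recovered in the limit T → 0.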

  9. Interactions of sex and early life social experiences at two developmental stages shape nonapeptide receptor profiles.

    PubMed

    Hiura, Lisa C; Ophir, Alexander G

    2018-05-31

Early life social experiences are critical to behavioral and cognitive development, and can have a tremendous influence on developing social phenotypes. Most work has focused on outcomes of experiences at a single stage of development (e.g., perinatal, or post-weaning). Few studies have assessed the impact of social experience at multiple developmental stages and across sex. Oxytocin and vasopressin are profoundly important for modulating social behavior and these nonapeptide systems are highly sensitive to developmental social experience, particularly in brain areas important for social behavior. We investigated whether oxytocin receptor (OTR) and vasopressin receptor (V1aR) distributions of prairie voles (Microtus ochrogaster) change as a function of parental composition within the natal nest or social composition after weaning. We raised pups either in the presence or absence of their fathers. At weaning, offspring were housed either individually or with a same-sex sibling. We also examined whether changes in receptor distributions are sexually dimorphic because the impact of the developmental environment on the nonapeptide system could be sex-dependent. We found that differences in nonapeptide receptor expression were region-, sex-, and rearing condition-specific, indicating a high level of complexity in the ways that early life experiences shape the social brain. We found many more differences in V1aR density compared to OTR density, indicating that nonapeptide receptors demonstrate differential levels of neural plasticity and sensitivity to environmental and biological variables. Our data highlight that critical factors including biological sex and multiple experiences across the developmental continuum interact in complex ways to shape the social brain. This article is protected by copyright. All rights reserved.

  10. Programming a Detector Emulator on NI's FlexRIO Platform

    NASA Astrophysics Data System (ADS)

    Gervais, Michelle; Crawford, Christopher; Sprow, Aaron; Nab Collaboration

    2017-09-01

    Recently digital detector emulators have been on the rise as a means to test data acquisition systems and analysis toolkits from a well understood data set. National Instruments' PXIe-7962R FPGA module and Active Technologies AT-1212 DAC module provide a customizable platform for analog output. Using a graphical programming language, we have developed a system capable of producing two time-correlated channels of analog output which sample unique amplitude spectra to mimic nuclear physics experiments. This system will be used to model the Nab experiment, in which a prompt beta decay electron is followed by a slow proton according to a defined time distribution. We will present the results of our work and discuss further development potential. DOE under Contract DE-SC0008107.
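As a software illustration of the kind of output such an emulator produces, the sketch below samples two time-correlated channels: a prompt "electron" pulse followed by a delayed "proton" pulse. All rates, spectra, and the delay window are illustrative placeholders, not the Nab experiment's actual distributions or the FlexRIO implementation.

```python
import random

def generate_events(n_events, rate_hz=100.0, seed=1):
    """Sketch: sample (electron, proton) pulse pairs with a defined
    time-of-flight delay distribution, as a software stand-in for a
    two-channel digital detector emulator. All distributions here are
    illustrative placeholders, not the Nab experiment's actual spectra."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    for _ in range(n_events):
        t += rng.expovariate(rate_hz)       # Poisson beta-decay arrivals
        e_amp = rng.gauss(300.0, 50.0)      # electron amplitude spectrum (placeholder)
        delay = rng.uniform(10e-6, 40e-6)   # proton time-of-flight window (placeholder)
        p_amp = rng.gauss(30.0, 5.0)        # proton amplitude spectrum (placeholder)
        events.append({"t_electron": t, "t_proton": t + delay,
                       "e_amp": e_amp, "p_amp": p_amp})
    return events

events = generate_events(1000)
```

In a hardware emulator the same two sampled streams would drive the DAC channels; here they can be fed directly to a DAQ analysis toolkit under test.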

  11. The NOνA DAQ System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zalesak, Jaroslav; et al.

    2014-01-01

The NOνA experiment is a long-baseline neutrino experiment designed to make measurements to determine the neutrino mass hierarchy, neutrino mixing parameters and CP violation in the neutrino sector. In order to make these measurements the NOνA collaboration has designed a highly distributed, synchronized, continuous digitization and readout system that is able to acquire and correlate data from the Fermilab accelerator complex (NuMI), the NOνA near detector at the Fermilab site and the NOνA far detector which is located 810 km away at Ash River, MN. This system has unique properties that let it fully exploit the physics capabilities of the NOνA detector. The design of the NOνA DAQ system and its capabilities are discussed in this paper.

  12. Side-channel-free quantum key distribution.

    PubMed

    Braunstein, Samuel L; Pirandola, Stefano

    2012-03-30

    Quantum key distribution (QKD) offers the promise of absolutely secure communications. However, proofs of absolute security often assume perfect implementation from theory to experiment. Thus, existing systems may be prone to insidious side-channel attacks that rely on flaws in experimental implementation. Here we replace all real channels with virtual channels in a QKD protocol, making the relevant detectors and settings inside private spaces inaccessible while simultaneously acting as a Hilbert space filter to eliminate side-channel attacks. By using a quantum memory we find that we are able to bound the secret-key rate below by the entanglement-distillation rate computed over the distributed states.
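The closing statement can be written compactly in standard notation (an assumption; the abstract gives no formulas): with K the secret-key rate and D the entanglement-distillation rate of the distributed state ρ_AB,

```latex
K \;\ge\; D(\rho_{AB}) \;\ge\; S(\rho_B) - S(\rho_{AB}),
```

where the second inequality is the usual hashing lower bound on distillable entanglement in terms of the coherent information.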

  13. The Reliability Estimation for the Open Function of Cabin Door Affected by the Imprecise Judgment Corresponding to Distribution Hypothesis

    NASA Astrophysics Data System (ADS)

    Yu, Z. P.; Yue, Z. F.; Liu, W.

    2018-05-01

With the development of artificial intelligence, more and more reliability experts have noticed the role of subjective information in the reliability design of complex systems. Therefore, based on a limited number of experimental data and expert judgments, we have divided reliability estimation under a distribution hypothesis into a cognition process and a reliability calculation. As an illustration of this modification, we have taken information fusion based on intuitionistic fuzzy belief functions as the diagnosis model of the cognition process, and completed the reliability estimation for the open function of a cabin door affected by imprecise judgment corresponding to the distribution hypothesis.

  14. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  15. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object-oriented (OO) design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
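A minimal sketch of the finite-state-machine idea described above, with hypothetical state and event names (the actual BaBar processing models and rule language are not reproduced here):

```python
class StateMachine:
    """Minimal table-driven finite state machine in the spirit of the
    control system described above: states, events, and transition rules
    drive a farm-level processing model. State/event names are
    illustrative, not BaBar's actual processing model."""
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, event): next_state}

    def handle(self, event):
        key = (self.state, event)
        if key in self.transitions:     # unknown events are ignored
            self.state = self.transitions[key]
        return self.state

farm = StateMachine("idle", {
    ("idle", "start"): "configuring",
    ("configuring", "ready"): "processing",
    ("processing", "done"): "idle",
    ("processing", "error"): "recovering",
    ("recovering", "fixed"): "processing",
})
farm.handle("start")
farm.handle("ready")
print(farm.state)  # "processing"
```

Keeping the transitions in a plain table makes the processing model declarative and easy to modify, which echoes the "simple regular language" idea in the abstract.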

  16. Rogue waves in a multistable system.

    PubMed

    Pisarchik, Alexander N; Jaimes-Reátegui, Rider; Sevilla-Escoboza, Ricardo; Huerta-Cuellar, G; Taki, Majid

    2011-12-30

Clear evidence of rogue waves in a multistable system is revealed by experiments with an erbium-doped fiber laser driven by harmonic pump modulation. The mechanism for the rogue wave formation lies in the interplay of stochastic processes with multistable deterministic dynamics. Low-frequency noise applied to a diode pump current induces rare jumps to coexisting subharmonic states with high-amplitude pulses perceived as rogue waves. The probability of these events depends on the filtered noise frequency and grows as the noise amplitude increases. The probability distribution of spike amplitudes confirms the rogue wave character of the observed phenomenon. The results of numerical simulations are in good agreement with experiments.
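As an illustration of classifying pulses from an amplitude distribution, the sketch below applies the common oceanographic rogue-wave criterion: an event qualifies if it exceeds twice the significant amplitude (the mean of the largest third of all events). This is the standard criterion, not necessarily the exact one used in the paper.

```python
def rogue_events(amplitudes, factor=2.0):
    """Sketch of the common 'rogue' criterion: an event is rogue if its
    amplitude exceeds `factor` times the significant amplitude (the mean
    of the largest third of events). Standard oceanographic definition;
    the paper's exact criterion may differ."""
    s = sorted(amplitudes, reverse=True)
    top_third = s[:max(1, len(s) // 3)]
    significant = sum(top_third) / len(top_third)
    threshold = factor * significant
    return [a for a in amplitudes if a > threshold], threshold

# Toy spike-amplitude record: 99 ordinary pulses and one extreme one.
pulses = [1.0] * 99 + [3.0]
rogues, thr = rogue_events(pulses)
```

On this toy record only the extreme pulse clears the threshold, which is the heavy-tail signature the abstract refers to.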

  17. Spacelab data management subsystem phase B study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The Spacelab data management system is described. The data management subsystem (DMS) integrates the avionics equipment into an operational system by providing the computations, logic, signal flow, and interfaces needed to effectively command, control, monitor, and check out the experiment and subsystem hardware. Also, the DMS collects/retrieves experiment data and other information by recording and by command of the data relay link to ground. The major elements of the DMS are the computer subsystem, data acquisition and distribution subsystem, controls and display subsystem, onboard checkout subsystem, and software. The results of the DMS portion of the Spacelab Phase B Concept Definition Study are analyzed.

  18. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    NASA Astrophysics Data System (ADS)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
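A minimal sketch of the variational idea: the objective below sums, over a finite point set X, the squared distance from each image point f(x) back to X, and vanishes when f(X) is contained in X, i.e. when X approximates an invariant set. The logistic map and its period-2 orbit serve as a toy example; the paper's full method also adds a Lennard-Jones term to spread the points, which is omitted here.

```python
def objective(points, f):
    """Variational objective (sketch): total squared distance from the
    image f(X) of a finite point set X back to X itself. It is zero
    exactly when every image point lands on a point of X."""
    total = 0.0
    for x in points:
        fx = f(x)
        total += min((fx - y) ** 2 for y in points)
    return total

logistic = lambda x: 4.0 * x * (1.0 - x)

# A period-2 orbit of the logistic map is an invariant set, so J = 0 there.
orbit = [(5 - 5 ** 0.5) / 8, (5 + 5 ** 0.5) / 8]
print(objective(orbit, logistic))       # ~0 (floating point)
print(objective([0.2, 0.6], logistic))  # positive: not invariant
```

Minimizing this objective over the point coordinates (e.g. with a quasi-Newton method) is the search the abstract describes.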

  19. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems.

    PubMed

    Junge, Oliver; Kevrekidis, Ioannis G

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.

  20. Experiences using OpenMP based on Compiler Directed Software DSM on a PC Cluster

    NASA Technical Reports Server (NTRS)

    Hess, Matthias; Jost, Gabriele; Mueller, Matthias; Ruehle, Roland

    2003-01-01

In this work we report on our experiences running OpenMP programs on a commodity cluster of PCs running a software distributed shared memory (DSM) system. We describe our test environment and report on the performance of a subset of the NAS Parallel Benchmarks that have been automatically parallelized for OpenMP. We compare the performance of the OpenMP implementations with that of their message passing counterparts and discuss performance differences.

  1. High-Precision Distribution of Highly Stable Optical Pulse Trains with 8.8 × 10−19 instability

    PubMed Central

    Ning, B.; Zhang, S. Y.; Hou, D.; Wu, J. T.; Li, Z. B.; Zhao, J. Y.

    2014-01-01

The high-precision distribution of optical pulse trains via fibre links has had a considerable impact in many fields. In most published work, the accuracy is still fundamentally limited by unavoidable noise sources, such as thermal and shot noise from conventional photodiodes and thermal noise from mixers. Here, we demonstrate a new high-precision timing distribution system that uses a highly precise phase detector to markedly reduce the effect of these limitations. Instead of using photodiodes and microwave mixers, we use several fibre Sagnac-loop-based optical-microwave phase detectors (OM-PDs) to achieve optical-electrical conversion and phase measurements, thereby suppressing the sources of noise and achieving ultra-high accuracy. The results of a distribution experiment using a 10-km fibre link indicate that our system exhibits a residual instability of 2.0 × 10−15 at 1 s and 8.8 × 10−19 at 40,000 s and an integrated timing jitter as low as 3.8 fs in a bandwidth of 1 Hz to 100 kHz. This low instability and timing jitter make it possible for our system to be used in the distribution of optical-clock signals or in applications that require extremely accurate frequency/time synchronisation. PMID:24870442

  2. Analysis of in-flight boundary-layer state measurements on a subsonic transport wing in high-lift configuration

    NASA Technical Reports Server (NTRS)

    vanDam, C. P.; Los, S. M.; Miley, S. J.; Yip, L. P.; Banks, D. W.; Roback, V. E.; Bertelrud, A.

    1995-01-01

    Flight experiments on NASA Langley's B737-100 (TSRV) airplane have been conducted to document flow characteristics in order to further the understanding of high-lift flow physics, and to correlate and validate computational predictions and wind-tunnel measurements. The project is a cooperative effort involving NASA, industry, and universities. In addition to focusing on in-flight measurements, the project includes extensive application of various computational techniques, and correlation of flight data with computational results and wind-tunnel measurements. Results obtained in the most recent phase of flight experiments are analyzed and presented in this paper. In-flight measurements include surface pressure distributions, measured using flush pressure taps and pressure belts on the slats, main element, and flap elements; surface shear stresses, measured using Preston tubes; off-surface velocity distributions, measured using shear-layer rakes; aeroelastic deformations of the flap elements, measured using an optical positioning system; and boundary-layer transition phenomena, measured using hot-film anemometers and an infrared imaging system. The analysis in this paper primarily focuses on changes in the boundary-layer state that occurred on the slats, main element, and fore flap as a result of changes in flap setting and/or flight condition. Following a detailed description of the experiment, the boundary-layer state phenomenon will be discussed based on data measured during these recent flight experiments.

  3. Applications of OALCLV in the high power laser systems

    NASA Astrophysics Data System (ADS)

    Huang, Dajie; Fan, Wei; Cheng, He; Wei, Hui; Wang, Jiangfeng; An, Honghai; Wang, Chao; Cheng, Yu; Xia, Gang; Li, Xuechun; Lin, Zunqi

    2017-10-01

This paper introduces the recent development of our integrated optical addressed spatial light modulator and its applications in high power laser systems. It can be used to convert the incident beam into a uniform beam for high energy efficiency, or it can realize a special distribution to meet the requirements of physics experiments. The optical addressing method can avoid the problem of the black matrix effect of the electric addressing device. Its transmittance for 1053 nm light is about 85% and the aperture of our device has reached 22 mm × 22 mm. As a transmissive device, it can be inserted into the system without affecting the original optical path. The applications of the device in three laser systems are introduced in detail in this paper. In the SGII-Up laser facility, this device demonstrates its ability to shape the output laser beam of the fundamental frequency when the output energy reaches about 2000 J. Meanwhile, there is no change in the time waveform and far field distribution. This means that it can effectively improve the capacity of the maximum output energy. In the 1J1Hz Nd-glass laser system, this device has been used to improve the uniformity of the output beam. As a result, the PV value reduces from 1.4 to 1.2, which means the beam quality has been improved effectively. In the 9th beam of the SGII laser facility, the device has been used to meet the requirements of sampling the probe light. As the transmittance distribution of the laser beam can be adjusted, the sampling spot can be realized in real time. As a result, it is easy to make the sampled spot meet the requirements of physics experiments.

  4. Contact force structure and force chains in 3D sheared granular systems

    NASA Astrophysics Data System (ADS)

    Mair, Karen; Jettestuen, Espen; Abe, Steffen

    2010-05-01

Faults often exhibit accumulations of granular debris, ground up to create a layer of rock flour or fault gouge separating the rigid fault walls. Numerical simulations and laboratory experiments of sheared granular materials suggest that applied loads are preferentially transmitted across such systems by transient force networks that carry enhanced forces. The characterisation of such features is important since their nature and persistence almost certainly influence the macroscopic mechanical stability of these systems and potentially that of natural faults. 3D numerical simulations of granular shear are a valuable investigation tool since they allow us to track individual particle motions, contact forces and their evolution during applied shear, which are difficult to view directly in laboratory experiments or natural fault zones. In characterising contact force distributions, it is important to use global structure measures that allow meaningful comparisons of granular systems having e.g. different grain size distributions, as may be expected at different stages of a fault's evolution. We therefore use a series of simple measures to characterise the structure, such as distributions and correlations of contact forces, that can be mapped onto a force network percolation problem as recently proposed by Ostojic and coworkers for 2D granular systems. This allows the use of measures from percolation theory to both define and characterise the force networks. We demonstrate the application of this method to 3D simulations of a sheared granular material. Importantly, we then compare our measure of the contact force structure with macroscopic frictional behaviour measured at the boundaries of our model to determine the influence of the force networks on macroscopic mechanical stability.
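The force-network percolation measure can be sketched as a threshold-and-connectivity computation: keep only contacts above a force threshold and test whether the surviving network spans the system. The union-find graph version below is a simplified stand-in for the Ostojic-style analysis mentioned above; the grain indices and forces are illustrative.

```python
class DisjointSet:
    """Union-find over grain indices, used to group grains connected by
    contacts that survive the force threshold."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def percolates(n_grains, contacts, threshold, top, bottom):
    """Keep only contacts carrying at least `threshold` force, then ask
    whether the remaining network connects a top-boundary grain to a
    bottom-boundary grain (a simplified spanning-cluster test)."""
    ds = DisjointSet(n_grains)
    for i, j, force in contacts:
        if force >= threshold:
            ds.union(i, j)
    roots_top = {ds.find(i) for i in top}
    return any(ds.find(j) in roots_top for j in bottom)

# Toy contact list: a chain 0-1-2-3 spans from top grain 0 to bottom grain 3.
contacts = [(0, 1, 2.0), (1, 2, 1.5), (2, 3, 2.5), (0, 3, 0.2)]
print(percolates(4, contacts, 1.0, top=[0], bottom=[3]))  # True
print(percolates(4, contacts, 3.0, top=[0], bottom=[3]))  # False
```

Sweeping the threshold and recording where spanning is lost gives the percolation-style characterisation of the force network.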

  5. Using passive fiber-optic distributed temperature sensing to estimate soil water content at a discontinuous permafrost site

    NASA Astrophysics Data System (ADS)

    Wagner, A. M.; Lindsey, N.; Ajo Franklin, J. B.; Gelvin, A.; Saari, S.; Ekblaw, I.; Ulrich, C.; Dou, S.; James, S. R.; Martin, E. R.; Freifeld, B. M.; Bjella, K.; Daley, T. M.

    2016-12-01

We present preliminary results from an experimental study targeting the use of passive fiber-optic distributed temperature sensing (DTS) in a variety of geometries to estimate moisture content evolution in a dynamic permafrost system. A 4 km continuous 2D array of multi-component fiber optic cable (6 SM/6 MM) was buried at the Fairbanks Permafrost Experiment Station to investigate the possibility of using fiber optic distributed sensing as an early detection system for permafrost thaw. A heating experiment using 120 60-W heaters was conducted in a 140 m² area to artificially thaw the topmost section of permafrost. The soils at the site are primarily silt but some disturbed areas include backfilled gravel to depths of approximately 1.0 m. Where permafrost exists, the depth to permafrost ranges from 1.5 to approximately 5 m. The experiment was also used to spatially estimate soil water content distribution throughout the fiber optic array. The horizontal fiber optic cable was buried at depths between 10 and 20 cm. Soil temperatures were monitored with a DTS system at 25 cm increments along the length of the fiber. At five locations, soil water content time-domain reflectometer (TDR) probes were also installed at two depths, in line with the fiber optic cable and 15 to 25 cm below the cable. The moisture content along the fiber optic array was estimated using diurnal effects from the dual depth temperature measurements. In addition to the horizontally installed fiber optic cable, vertical lines of fiber optic cable were also installed inside and outside the heater plot to a depth of 10 m in small diameter (2 cm) boreholes. These arrays were installed in conjunction with thermistor strings and are used to monitor the thawing process and to cross correlate with soil temperatures at the depth of the TDR probes. Results will be presented from the initiation of the artificial thawing through subsequent freeze-up.
A comparison of the DTS measured temperatures and thermistors in vertically installed PVC pipes will also be shown. Initial results from a thermal model of the artificial heating experiment and the model's correlation to the actual soil temperature measurements will also be presented. These results show the possibility of using fiber optic cable to measure moisture contents along a longer array with only limited control points.
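The dual-depth diurnal method alluded to above rests on a textbook result: a sinusoidal surface forcing decays with depth as exp(−z/d), with damping depth d = √(2α/ω), so diurnal amplitudes at two depths determine the apparent thermal diffusivity α, from which moisture is inferred via a site calibration. The sketch below works under those assumptions and is not the study's actual inversion.

```python
import math

def apparent_diffusivity(a_shallow, a_deep, z_shallow, z_deep,
                         period_s=86400.0):
    """Amplitude-ratio method: for sinusoidal surface forcing the diurnal
    amplitude decays as exp(-z/d) with damping depth d = sqrt(2*alpha/omega),
    so amplitudes at two depths give the apparent thermal diffusivity alpha.
    Soil moisture would then come from a site calibration (not shown)."""
    omega = 2.0 * math.pi / period_s
    dz = z_deep - z_shallow
    log_ratio = math.log(a_shallow / a_deep)  # equals dz / d
    return omega * dz ** 2 / (2.0 * log_ratio ** 2)

# Synthetic check: amplitudes generated from a known diffusivity are recovered.
alpha_true = 5e-7  # m^2/s, a plausible value for moist silt (assumption)
d = math.sqrt(2 * alpha_true / (2 * math.pi / 86400.0))   # damping depth
a1, a2 = math.exp(-0.10 / d), math.exp(-0.20 / d)         # depths 10 and 20 cm
alpha_est = apparent_diffusivity(a1, a2, 0.10, 0.20)
```

The inversion is exact for ideal sinusoidal forcing; with field data the diurnal amplitudes would first be extracted per depth, e.g. by harmonic fitting.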

  6. An Implicit Upwind Algorithm for Computing Turbulent Flows on Unstructured Grids

    NASA Technical Reports Server (NTRS)

Anderson, W. Kyle; Bonhaus, Daryl L.

    1994-01-01

An implicit, Navier-Stokes solution algorithm is presented for the computation of turbulent flow on unstructured grids. The inviscid fluxes are computed using an upwind algorithm and the solution is advanced in time using a backward-Euler time-stepping scheme. At each time step, the linear system of equations is approximately solved with a point-implicit relaxation scheme. This methodology provides a viable and robust algorithm for computing turbulent flows on unstructured meshes. Results are shown for subsonic flow over a NACA 0012 airfoil and for transonic flow over a RAE 2822 airfoil exhibiting a strong upper-surface shock. In addition, results are shown for 3-element and 4-element airfoil configurations. For the calculations, two one-equation turbulence models are utilized. For the NACA 0012 airfoil, a pressure distribution and force data are compared with other computational results as well as with experiment. Comparisons of computed pressure distributions and velocity profiles with experimental data are shown for the RAE airfoil and for the 3-element configuration. For the 4-element case, comparisons of surface pressure distributions with experiment are made. In general, the agreement between the computations and the experiment is good.
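The point-implicit relaxation named above is, in its simplest form, a Gauss-Seidel sweep over the unknowns of the backward-Euler linear system: each unknown is updated in turn using the latest neighbour values. A generic sketch, not the paper's unstructured-grid solver:

```python
def point_implicit_sweeps(A, b, n_sweeps=50):
    """Gauss-Seidel (point-implicit) relaxation for A x = b: solve for
    each unknown from its own row, holding off-diagonal terms at their
    most recent values, and sweep repeatedly. This is the kind of
    approximate linear solve applied at each backward-Euler time step."""
    n = len(b)
    x = [0.0] * n
    for _ in range(n_sweeps):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant example; the exact solution is x = [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = point_implicit_sweeps(A, b)
print(x)  # ~[1.0, 2.0]
```

Only an approximate solve is needed per time step, since the outer backward-Euler iteration absorbs the remaining linear-solve error.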

  7. Meningococcal Two-Partner Secretion Systems and Their Association with Outcome in Patients with Meningitis

    PubMed Central

    Piet, Jurgen R.; van Ulsen, Peter; ur Rahman, Sadeeq; Bovenkerk, Sandra; Bentley, Stephen D.

    2016-01-01

    Two-partner secretion (TPS) systems export large TpsA proteins to the surface and extracellular milieu. In meningococci, three different TPS systems exist, and of these, TPS system 2 (TPS2) and TPS3 can be detected by the host's immune system. We evaluated the distribution of TPS systems among clinical isolates from two prospective cohort studies comprising 373 patients with meningococcal meningitis. TPS system 1 was present in 91% of isolates, and system 2 and/or 3 was present in 67%. The TPS system distribution was related to clonal complexes. Infection with strains with TPS2 and/or TPS3 resulted in less severe disease and better outcomes than infection with strains without these systems. Using whole-blood stimulation experiments, we found no differences in the host cytokine response between patients infected with TPS system 2 and 3 knockout strains and patients infected with a wild-type strain. In conclusion, meningococcal TPS system 2 and/or 3 is associated with disease severity and outcome in patients with meningitis. PMID:27324486

  8. Investigating Actuation Force Fight with Asynchronous and Synchronous Redundancy Management Techniques

    NASA Technical Reports Server (NTRS)

    Hall, Brendan; Driscoll, Kevin; Schweiker, Kevin; Dutertre, Bruno

    2013-01-01

    Within distributed fault-tolerant systems the term force-fight is colloquially used to describe the level of command disagreement present at redundant actuation interfaces. This report details an investigation of force-fight using three distributed system case-study architectures. Each case study architecture is abstracted and formally modeled using the Symbolic Analysis Laboratory (SAL) tool chain from the Stanford Research Institute (SRI). We use the formal SAL models to produce k-induction based proofs of a bounded actuation agreement property. We also present a mathematically derived bound of redundant actuation agreement for sine-wave stimulus. The report documents our experiences and lessons learned developing the formal models and the associated proofs.

  9. Fire Source Localization Based on Distributed Temperature Sensing by a Dual-Line Optical Fiber System.

    PubMed

    Sun, Miao; Tang, Yuquan; Yang, Shuang; Li, Jun; Sigrist, Markus W; Dong, Fengzhong

    2016-06-06

    We propose a method for localizing a fire source using an optical fiber distributed temperature sensor system. A section of two parallel optical fibers employed as the sensing element is installed near the ceiling of a closed room in which the fire source is located. By measuring the temperature of hot air flows, the problem of three-dimensional fire source localization is reduced to two dimensions. The localization method is verified with experiments using burning alcohol as the fire source, and it is demonstrated that the method represents a robust and reliable technique for localizing a fire source, even over long sensing ranges.
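
    As an illustration of the dual-line idea, the toy sketch below recovers a hot-spot position from two parallel temperature profiles: the along-fiber coordinate from the peak location, and the cross-fiber coordinate from the relative peak amplitudes. The weighting heuristic and all numbers are assumptions for illustration, not the authors' algorithm:

    ```python
    import numpy as np

    def locate_fire(positions, temp_a, temp_b, fiber_sep):
        """Toy 2-D localization from two parallel distributed temperature
        profiles. The along-fiber coordinate comes from the hot-spot peak;
        the cross-fiber coordinate interpolates between the two fibers,
        weighted by their peak temperature rises (illustrative heuristic)."""
        rise_a = temp_a - np.median(temp_a)   # remove ambient baseline
        rise_b = temp_b - np.median(temp_b)
        x = positions[np.argmax(rise_a + rise_b)]   # along-fiber coordinate
        wa, wb = rise_a.max(), rise_b.max()
        y = fiber_sep * wb / (wa + wb)              # offset from fiber A
        return x, y

    # Synthetic hot spot at x = 12 m, closer to fiber A (y = 0)
    pos = np.linspace(0.0, 30.0, 301)
    ambient = 20.0
    temp_a = ambient + 8.0 * np.exp(-((pos - 12.0) / 1.5) ** 2)
    temp_b = ambient + 4.0 * np.exp(-((pos - 12.0) / 1.5) ** 2)
    x_est, y_est = locate_fire(pos, temp_a, temp_b, fiber_sep=2.0)
    assert abs(x_est - 12.0) < 0.2 and 0.0 < y_est < 1.0
    ```

    The sketch shows why two parallel lines are enough once the hot-air layer pins the problem to the ceiling plane: one coordinate per fiber direction, one from the amplitude contrast between the fibers.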

  10. Self-similarity of waiting times in fracture systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niccolini, G.; Bosia, F.; Carpinteri, A.

    2009-08-15

    Experimental and numerical results are presented for a fracture experiment carried out on a fiber-reinforced element under flexural loading, and a statistical analysis is performed for acoustic emission waiting-time distributions. By an optimization procedure, a recently proposed scaling law describing these distributions for different event magnitude scales is confirmed by both experimental and numerical data, thus reinforcing the idea that fracture of heterogeneous materials has scaling properties similar to those found for earthquakes. Analysis of the different scaling parameters obtained for experimental and numerical data leads us to formulate the hypothesis that the type of scaling function obtained depends on the level of correlation among fracture events in the system.

  11. First results on the Experiment FESTER on optical turbulence over False Bay South Africa: dependencies and consequences

    NASA Astrophysics Data System (ADS)

    Sprung, Detlev; van Eijk, Alexander M. J.; Sucher, Erik; Eisele, Christian; Seiffer, Dirk; Stein, Karin

    2016-10-01

    The experiment FESTER (First European South African Transmission ExpeRiment) took place in 2015 to investigate, on a long-term basis, the atmospheric influence on electro-optical system performance across False Bay, South Africa. Several permanent stations for monitoring electro-optical propagation and atmospheric parameters were set up around the Bay. Additional intensive observation periods (IOPs) allowed for boat runs to assess the inhomogeneous atmospheric propagation conditions over water. In this paper we focus on the distribution of optical turbulence over the Bay. The differing impacts of water masses originating from the Indian Ocean and from the Benguela Current on the development of optical turbulence are discussed. The seasonal behavior of optical turbulence is presented and its effect on electro-optical system performance examined.

  12. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Various kinds of multimedia application systems have recently been developed, building on advances in high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system that can effectively perform cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can treat a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of a 'domain' is defined in the system as a managing unit of retrieval; retrieval can be performed effectively through cooperative processing among multiple domains. Communication languages and protocols are also defined and are used in every communication action in the system. A language interpreter in each machine translates the communication language into that machine's internal language, so that internal modules such as the DBMS and the user-interface modules can be selected freely. A concept of a 'content set' is also introduced: a content set is a package of mutually related contents, which the system handles as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content set. To verify the function of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve target contents under the control of a number of distributed domains, and that the system works effectively even as it grows large.

  13. Joint Experiment on Scalable Parallel Processors (JESPP) Parallel Data Management

    DTIC Science & Technology

    2006-05-01

    management and analysis tool, called Simulation Data Grid (SDG). The design principles driving the design of SDG are: 1) minimize network communication...or SDG. In this report, an initial prototype implementation of this system is described. This project follows on earlier research, primarily...distributed logging system had some limitations. These limitations will be described in this report, along with how the SDG addresses them.

  14. Formal Verification of a Conflict Resolution and Recovery Algorithm

    NASA Technical Reports Server (NTRS)

    Maddalon, Jeffrey; Butler, Ricky; Geser, Alfons; Munoz, Cesar

    2004-01-01

    New air traffic management concepts distribute the duty of traffic separation among system participants. As a consequence, these concepts rely heavily on on-board software and hardware systems. One example of a new on-board capability in a distributed air traffic management system is air traffic conflict detection and resolution (CD&R). Traditional methods for safety assessment, such as human-in-the-loop simulations, testing, and flight experiments, may not be sufficient for this highly distributed system, as the set of possible scenarios is too large for reasonable coverage. This paper proposes a new method for the safety assessment of avionics systems that makes use of formal methods to drive the development of critical systems. As a case study of this approach, the mechanical verification of an algorithm for air traffic conflict resolution and recovery called RR3D is presented. The RR3D algorithm uses a geometric optimization technique to provide a choice of resolution and recovery maneuvers. If the aircraft adheres to these maneuvers, they will bring it out of conflict, and the aircraft will follow a conflict-free path to its original destination. Verification of RR3D is carried out using the Prototype Verification System (PVS).

  15. UBioLab: a web-LABoratory for Ubiquitous in-silico experiments.

    PubMed

    Bartocci, E; Di Berardini, M R; Merelli, E; Vito, L

    2012-03-01

    The huge and dynamic amount of bioinformatic resources (e.g., data and tools) available on the Internet nowadays represents a big challenge for biologists, in terms of their management and visualization, and for bioinformaticians, in terms of rapidly creating and executing in-silico experiments involving resources and activities spread over the WWW hyperspace. Any framework that aims at integrating such resources as in a physical laboratory imperatively has to tackle, and possibly handle in a transparent and uniform way, aspects concerning physical distribution, semantic heterogeneity, and the co-existence of different computational paradigms and, as a consequence, of different invocation interfaces (i.e., OGSA for Grid nodes, SOAP for Web Services, Java RMI for Java objects, etc.). The framework UBioLab has been designed and developed as a prototype following the above objective. Several architectural features, such as being fully Web-based and combining domain ontologies, Semantic Web, and workflow techniques, give evidence of an effort in this direction. The integration of a semantic knowledge-management system for distributed (bioinformatic) resources, a semantic-driven graphic environment for defining and monitoring ubiquitous workflows, and an intelligent agent-based technology for their distributed execution allows UBioLab to act as a semantic guide for bioinformaticians and biologists, providing (i) a flexible environment for visualizing, organizing, and inferring any (semantic or computational) "type" of domain knowledge (e.g., resources and activities, expressed in a declarative form), (ii) a powerful engine for defining and storing semantic-driven ubiquitous in-silico experiments on the domain hyperspace, and (iii) a transparent, automatic, and distributed environment for correct experiment executions.

  16. World Ocean Circulation Experiment

    NASA Technical Reports Server (NTRS)

    Clarke, R. Allyn

    1992-01-01

    The oceans are an equal partner with the atmosphere in the global climate system. The World Ocean Circulation Experiment is presently being implemented to improve ocean models that are useful for climate prediction, both by encouraging further model development and, more importantly, by providing quality data sets that can be used to force or to validate such models. WOCE is the first oceanographic experiment that plans to generate and to use multiparameter global ocean data sets. For WOCE to succeed, oceanographers must establish and learn to use more effective methods of assembling, quality-controlling, manipulating, and distributing oceanographic data.

  17. Isotopic fission-fragment distributions of 238U, 239Np, 240Pu, 244Cm, and 250Cf produced through inelastic scattering, transfer, and fusion reactions in inverse kinematics

    NASA Astrophysics Data System (ADS)

    Ramos, D.; Caamaño, M.; Farget, F.; Rodríguez-Tajes, C.; Audouin, L.; Benlliure, J.; Casarejos, E.; Clement, E.; Cortina, D.; Delaune, O.; Derkx, X.; Dijon, A.; Doré, D.; Fernández-Domínguez, B.; de France, G.; Heinz, A.; Jacquot, B.; Navin, A.; Paradela, C.; Rejmund, M.; Roger, T.; Salsac, M.-D.; Schmitt, C.

    2018-05-01

    Transfer- and fusion-induced fission in inverse kinematics has proved to be a powerful tool to investigate nuclear fission, widening the information available on the fission fragments and the access to unstable fissioning systems compared with other experimental approaches. An experimental campaign has been carried out at GANIL with this technique since 2008. In these experiments, a beam of 238U, accelerated to 6.1 MeV/u, impinges on a 12C target. Fissioning systems from U to Cf are populated through inelastic scattering, transfer, and fusion reactions, with excitation energies that range from a few MeV up to 46 MeV. The use of inverse kinematics, the SPIDER telescope, and the VAMOS spectrometer allows the characterization of the fissioning system in terms of mass, nuclear charge, and excitation energy, and the isotopic identification of the full fragment distribution. This work reports new data from the second experiment of the campaign on fission-fragment yields of the heavy actinides 238U, 239Np, 240Pu, 244Cm, and 250Cf, which are of interest from both fundamental and application points of view.

  18. Designing for Mathematical Abstraction

    ERIC Educational Resources Information Center

    Pratt, Dave; Noss, Richard

    2010-01-01

    Our focus is on the design of systems (pedagogical, technical, social) that encourage mathematical abstraction, a process we refer to as "designing for abstraction." In this paper, we draw on detailed design experiments from our research on children's understanding about chance and distribution to re-present this work as a case study in designing…

  19. Educational Publishing: Experiences from Asia and the Pacific.

    ERIC Educational Resources Information Center

    United Nations Educational, Scientific and Cultural Organization, Bangkok (Thailand). Asian Centre for Educational Innovation for Development.

    This resource book on educational publishing presents examples of evaluation and planning; try-out procedures; the production process; and warehousing and distribution, all reinforced by examples of systems and structures and case studies which were presented at the 1985 Manila and Tonga Seminars. Part one, Planning, Try-out and Evaluation of…

  20. Visualizing Chemistry with Infrared Imaging

    ERIC Educational Resources Information Center

    Xie, Charles

    2011-01-01

    Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…

  1. The effect of advanced treatment on chlorine decay in metallic pipes

    EPA Science Inventory

    Experiments were run to measure what effect advanced treatment might have on the kinetics of chlorine decay in water distribution systems. A recirculating loop of 6-inch diameter unlined ductile iron pipe was used to simulate turbulent flow conditions in a pipe with significant c...

  2. Teaching Analytics: A Clustering and Triangulation Study of Digital Library User Data

    ERIC Educational Resources Information Center

    Xu, Beijie; Recker, Mimi

    2012-01-01

    Teachers and students increasingly enjoy unprecedented access to abundant web resources and digital libraries to enhance and enrich their classroom experiences. However, due to the distributed nature of such systems, conventional educational research methods, such as surveys and observations, provide only limited snapshots. In addition,…

  3. Control of New Copper Corrosion in High-Alkalinity Drinking Water using Orthophosphate - article

    EPA Science Inventory

    Research and field experience have shown that high-alkalinity waters can be associated with elevated copper levels in drinking water. The objective of this study was to document the application of orthophosphate to the distribution system of a building with a copper problem asso...

  4. Influence of permeability on nanoscale zero-valent iron particle transport in saturated homogeneous and heterogeneous porous media.

    PubMed

    Strutz, Tessa J; Hornbruch, Götz; Dahmke, Andreas; Köber, Ralf

    2016-09-01

    Nanoscale zero-valent iron (NZVI) particles can be used for in situ groundwater remediation. The spatial particle distribution plays a very important role in successful and efficient remediation, especially in heterogeneous systems. In the present study, the influence of initial sand permeability (k0) on spatial particle distributions was investigated and quantified in homogeneous and heterogeneous systems. Four homogeneously filled column experiments and a heterogeneously filled tank experiment, using different median sand grain diameters (d50), were performed to determine whether NZVI particles were transported into finer sand where contaminants could be trapped. More NZVI particle retention, less particle transport, and a faster decrease in k were observed in the column studies using finer sands than in those using coarser sands, reflecting a dependence on k0. In heterogeneous media, NZVI particles were initially transported and deposited in coarse sand areas. Increasing the retained NZVI mass (decreasing k in particle deposition areas) caused NZVI particles to also be transported into finer sand areas, forming an area with a relatively homogeneous particle distribution and converged k values despite the different grain sizes present. The contribution of the deposited-particle surface area to the increase of the matrix surface area (θ) was one to two orders of magnitude higher for finer than for coarser sand. The dependency of θ on d50 presumably affects simulated k changes and NZVI distributions in numerical simulations of NZVI injections into heterogeneous aquifers. The results implied that NZVI can in principle also penetrate finer layers.

  5. Police officers' perceptions and experiences with mentally disordered suspects.

    PubMed

    Oxburgh, Laura; Gabbert, Fiona; Milne, Rebecca; Cherryman, Julie

    Despite mentally disordered suspects being over-represented within the criminal justice system, there is a dearth of published literature that examines police officers' perceptions when interviewing this vulnerable group. This is concerning given that police officers are increasingly the first point of contact with these individuals. Using a Grounded Theory approach, this study examined 35 police officers' perceptions and experiences when interviewing mentally disordered suspects. Current safeguards, such as Appropriate Adults, and their experiences of any training they received were also explored. A specially designed questionnaire was developed and distributed across six police forces in England and Wales. Nine conceptual categories emerged from the data that highlighted how police officers' level of experience impacted upon their perceptions when dealing with this cohort. As a consequence, a new model grounded within Schema Theory has emerged termed Police Experience Transitional Model. Implications include the treatment and outcome of mentally disordered suspects being heavily dependent on whom they encounter within the criminal justice system. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Impacts of Using Distributed Energy Resources to Reduce Peak Loads in Vermont

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruth, Mark F.; Lunacek, Monte S.; Jones, Birk

    To help the United States develop a modern electricity grid that provides reliable power from multiple resources as well as resiliency under extreme conditions, the U.S. Department of Energy (DOE) is leading the Grid Modernization Initiative (GMI) to help shape the future of the nation's grid. Under the GMI, DOE funded the Vermont Regional Initiative project to provide the technical support and analysis to utilities that need to mitigate possible impacts of increasing renewable generation required by statewide goals. Advanced control of distributed energy resources (DER) can both support higher penetrations of renewable energy by balancing controllable loads to wind and photovoltaic (PV) solar generation and reduce peak demand by shedding noncritical loads. This work focuses on the latter. This document reports on an experiment that evaluated and quantified the potential benefits and impacts of reducing the peak load through demand response (DR) using centrally controllable electric water heaters (EWHs) and batteries on two Green Mountain Power (GMP) feeders. The experiment simulated various hypothetical scenarios that varied the number of controllable EWHs, the amount of distributed PV systems, and the number of distributed residential batteries. The control schemes were designed with several objectives. For the first objective, the primary simulations focused on reducing the load during the independent system operator (ISO) peak when capacity charges were the primary concern. The second objective was to mitigate DR rebound to avoid new peak loads and high ramp rates. The final objective was to minimize customers' discomfort, which is defined by the lack of hot water when it is needed. 
We performed the simulations using the National Renewable Energy Laboratory's (NREL's) Integrated Energy System Model (IESM) because it can simulate both electric power distribution feeder and appliance end-use performance and it includes the ability to simulate multiple control strategies.

  7. Hadron production in diffractive deep-inelastic scattering

    NASA Astrophysics Data System (ADS)

    H1 Collaboration; Adloff, C.; Aid, S.; Anderson, M.; Andreev, V.; Andrieu, B.; Arkadov, V.; Arndt, C.; Ayyaz, I.; Babaev, A.; Bähr, J.; Bán, J.; Baranov, P.; Barrelet, E.; Barschke, R.; Bartel, W.; Bassler, U.; Bate, P.; Beck, M.; Beglarian, A.; Behrend, H.-J.; Beier, C.; Belousov, A.; Berger, Ch.; Bernardi, G.; Bertrand-Coremans, G.; Beyer, R.; Biddulph, P.; Bizot, J. C.; Borras, K.; Boudry, V.; Braemer, A.; Braunschweig, W.; Brisson, V.; Brown, D. P.; Brückner, W.; Bruel, P.; Bruncko, D.; Brune, C.; Bürger, J.; Büsser, F. W.; Buniatian, A.; Burke, S.; Buschhorn, G.; Calvet, D.; Campbell, A. J.; Carli, T.; Chabert, E.; Charlet, M.; Clarke, D.; Clerbaux, B.; Cocks, S.; Contreras, J. G.; Cormack, C.; Coughlan, J. A.; Cousinou, M.-C.; Cox, B. E.; Cozzika, G.; Cvach, J.; Dainton, J. B.; Dau, W. D.; Daum, K.; David, M.; de Roeck, A.; de Wolf, E. A.; Delcourt, B.; Diaconu, C.; Dirkmann, M.; Dixon, P.; Dlugosz, W.; Donovan, K. T.; Dowell, J. D.; Droutskoi, A.; Ebert, J.; Eckerlin, G.; Eckstein, D.; Efremenko, V.; Egli, S.; Eichler, R.; Eisele, F.; Eisenhandler, E.; Elsen, E.; Enzenberger, M.; Erdmann, M.; Fahr, A. B.; Favart, L.; Fedotov, A.; Felst, R.; Feltesse, J.; Ferencei, J.; Ferrarotto, F.; Flamm, K.; Fleischer, M.; Flügge, G.; Fomenko, A.; Formánek, J.; Foster, J. M.; Franke, G.; Gabathuler, E.; Gabathuler, K.; Gaede, F.; Garvey, J.; Gayler, J.; Gebauer, M.; Gerhards, R.; Glazov, A.; Goerlich, L.; Gogitidze, N.; Goldberg, M.; Gorelov, I.; Grab, C.; Grässler, H.; Greenshaw, T.; Griffiths, R. K.; Grindhammer, G.; Gruber, C.; Hadig, T.; Haidt, D.; Hajduk, L.; Haller, T.; Hampel, M.; Haustein, V.; Haynes, W. J.; Heinemann, B.; Heinzelmann, G.; Henderson, R. C. W.; Hengstmann, S.; Henschel, H.; Heremans, R.; Herynek, I.; Hewitt, K.; Hiller, K. H.; Hilton, C. D.; Hladký, J.; Höppner, M.; Hoffmann, D.; Holtom, T.; Horisberger, R.; Hudgson, V. L.; Hütte, M.; Ibbotson, M.; Isolarş Sever, Ç.; Itterbeck, H.; Jacquet, M.; Jaffre, M.; Janoth, J.; Jansen, D. 
M.; Jönsson, L.; Johnson, D. P.; Jung, H.; Kander, M.; Kant, D.; Kathage, U.; Katzy, J.; Kaufmann, H. H.; Kaufmann, O.; Kausch, M.; Kazarian, S.; Kenyon, I. R.; Kermiche, S.; Keuker, C.; Kiesling, C.; Klein, M.; Kleinwort, C.; Knies, G.; Köhne, J. H.; Kolanoski, H.; Kolya, S. D.; Korbel, V.; Kostka, P.; Kotelnikov, S. K.; Krämerkämper, T.; Krasny, M. W.; Krehbiel, H.; Krücker, D.; Küpper, A.; Küster, H.; Kuhlen, M.; Kurča, T.; Laforge, B.; Lahmann, R.; Landon, M. P. J.; Lange, W.; Langenegger, U.; Lebedev, A.; Lehmann, M.; Lehner, F.; Lemaitre, V.; Levonian, S.; Lindstroem, M.; Lipinski, J.; List, B.; Lobo, G.; Lubimov, V.; Lüke, D.; Lytkin, L.; Magnussen, N.; Mahlke-Krüger, H.; Malinovski, E.; Maraček, R.; Marage, P.; Marks, J.; Marshall, R.; Martin, G.; Martin, R.; Martyn, H.-U.; Martyniak, J.; Maxfield, S. J.; McMahon, S. J.; McMahon, T. R.; Mehta, A.; Meier, K.; Merkel, P.; Metlica, F.; Meyer, A.; Meyer, A.; Meyer, H.; Meyer, J.; Meyer, P.-O.; Migliori, A.; Mikocki, S.; Milstead, D.; Moeck, J.; Mohr, R.; Mohrdieck, S.; Moreau, F.; Morris, J. V.; Mroczko, E.; Müller, D.; Müller, K.; Murín, P.; Nagovizin, V.; Nahnhauer, R.; Naroska, B.; Naumann, Th.; Négri, I.; Newman, P. R.; Newton, D.; Nguyen, H. K.; Nicholls, T. C.; Niebergall, F.; Niebuhr, C.; Niedzballa, Ch.; Niggli, H.; Nix, O.; Nowak, G.; Nunnemann, T.; Oberlack, H.; Olsson, J. E.; Ozerov, D.; Palmen, P.; Panaro, E.; Panitch, A.; Pascaud, C.; Passaggio, S.; Patel, G. D.; Pawletta, H.; Peppel, E.; Perez, E.; Phillips, J. P.; Pieuchot, A.; Pitzl, D.; Pöschl, R.; Pope, G.; Povh, B.; Rabbertz, K.; Reimer, P.; Reisert, B.; Rick, H.; Riess, S.; Rizvi, E.; Robmann, P.; Roosen, R.; Rosenbauer, K.; Rostovtsev, A.; Rouse, F.; Royon, C.; Rusakov, S.; Rybicki, K.; Sankey, D. P. 
C.; Schacht, P.; Scheins, J.; Schiek, S.; Schleif, S.; Schleper, P.; von Schlippe, W.; Schmidt, D.; Schmidt, G.; Schoeffel, L.; Schöning, A.; Schröder, V.; Schultz-Coulon, H.-C.; Schwab, B.; Sefkow, F.; Semenov, A.; Shekelyan, V.; Sheviakov, I.; Shtarkov, L. N.; Siegmon, G.; Siewert, U.; Sirois, Y.; Skillicorn, I. O.; Sloan, T.; Smirnov, P.; Smith, M.; Solochenko, V.; Soloviev, Y.; Specka, A.; Spiekermann, J.; Spitzer, H.; Squinabol, F.; Steffen, P.; Steinberg, R.; Steinhart, J.; Stella, B.; Stellberger, A.; Stiewe, J.; Stolze, K.; Straumann, U.; Struczinski, W.; Sutton, J. P.; Swart, M.; Tapprogge, S.; Taševský, M.; Tchernyshov, V.; Tchetchelnitski, S.; Theissen, J.; Thompson, G.; Thompson, P. D.; Tobien, N.; Todenhagen, R.; Truöl, P.; Tsipolitis, G.; Turnau, J.; Tzamariudaki, E.; Udluft, S.; Usik, A.; Valkár, S.; Valkárová, A.; Vallée, C.; van Esch, P.; van Mechelen, P.; Vazdik, Y.; Villet, G.; Wacker, K.; Wallny, R.; Walter, T.; Waugh, B.; Weber, G.; Weber, M.; Wegener, D.; Wegner, A.; Wengler, T.; Werner, M.; West, L. R.; Wiesand, S.; Wilksen, T.; Willard, S.; Winde, M.; Winter, G.-G.; Wittek, C.; Wittmann, E.; Wobisch, M.; Wollatz, H.; Wünsch, E.; Žáček, J.; Zálešák, J.; Zhang, Z.; Zhokin, A.; Zini, P.; Zomer, F.; Zsembery, J.; Zurnedden, M.

    1998-05-01

    Characteristics of hadron production in diffractive deep-inelastic positron-proton scattering are studied using data collected in 1994 by the H1 experiment at HERA. The following distributions are measured in the centre-of-mass frame of the photon dissociation system: the hadronic energy flow, the Feynman-x (xF) variable for charged particles, the squared transverse momentum of charged particles (pT*2), and the mean pT*2 as a function of xF. These distributions are compared with results in the γ*p centre-of-mass frame from inclusive deep-inelastic scattering in the fixed-target experiment EMC, and also with the predictions of several Monte Carlo calculations. The data are consistent with a picture in which the partonic structure of the diffractive exchange is dominated at low Q2 by hard gluons.

  8. Small Aircraft Data Distribution System

    NASA Technical Reports Server (NTRS)

    Chazanoff, Seth L.; Dinardo, Steven J.

    2012-01-01

    The CARVE Small Aircraft Data Distribution System acquires the aircraft location and attitude data that are required by the various programs running on a distributed network. The system distributes the data it acquires to the data-acquisition programs for inclusion in their data files. It uses UDP (User Datagram Protocol) to broadcast data over a LAN (Local Area Network) to any programs that might have a use for the data. The program is easily adaptable to acquire additional data and log that data to disk. The current version also drives displays using precision pitch and roll information to aid the pilot in maintaining a level attitude for radar/radiometer mapping, to a degree beyond what is achievable by flying visually or using a standard gyro-driven attitude indicator. The software is designed to acquire an array of data to help the mission manager make real-time decisions about the effectiveness of the flight. These data are displayed for the mission manager and broadcast to the other experiments on the aircraft for inclusion in their data files. The program also drives real-time precision pitch and roll displays for the pilot and copilot to aid them in maintaining the desired attitude, when required, during data acquisition on mapping lines.
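
    A minimal sketch of the distribution mechanism described above, UDP broadcast of aircraft state over a LAN, might look as follows. The port number, JSON packet format, and field names are hypothetical, not the CARVE system's actual protocol:

    ```python
    import json
    import socket

    BCAST_ADDR = ("255.255.255.255", 5005)   # hypothetical broadcast port

    def make_sender():
        """UDP socket configured for LAN broadcast, in the spirit of the
        CARVE system's one-to-many data distribution."""
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        return s

    def broadcast_state(sock, lat, lon, alt, pitch, roll, addr=BCAST_ADDR):
        """Send one aircraft-state datagram. JSON keeps the packet
        self-describing for heterogeneous listening programs."""
        packet = json.dumps({"lat": lat, "lon": lon, "alt": alt,
                             "pitch": pitch, "roll": roll}).encode()
        sock.sendto(packet, addr)
    ```

    Any experiment on the network can then bind its own UDP socket to the same port and log whichever fields it needs, which is what makes the broadcast pattern attractive for loosely coupled instruments.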

  9. Distributed agile software development for the SKA

    NASA Astrophysics Data System (ADS)

    Wicenec, Andreas; Parsons, Rebecca; Kitaeff, Slava; Vinsen, Kevin; Wu, Chen; Nelson, Paul; Reed, David

    2012-09-01

    The SKA software will most probably be developed by many groups distributed across the globe and coming from different backgrounds, like industries and research institutions. The SKA software subsystems will have to cover a very wide range of different areas, but still they have to react and work together like a single system to achieve the scientific goals and satisfy the challenging data flow requirements. Designing and developing such a system in a distributed fashion requires proper tools and the setup of an environment to allow for efficient detection and tracking of interface and integration issues, in particular in a timely way. Agile development can provide much faster feedback mechanisms and also much tighter collaboration between the customer (scientist) and the developer. Continuous integration and continuous deployment, on the other hand, can provide much faster feedback of integration issues from the system level to the subsystem developers. This paper describes the results obtained from trialing a potential SKA development environment based on existing science software development processes like ALMA, the expected distribution of the groups potentially involved in the SKA development, and experience gained in the development of large-scale commercial software projects.

  10. Product Distribution Theory for Control of Multi-Agent Systems

    NASA Technical Reports Server (NTRS)

    Lee, Chia Fan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for controlling Multi-Agent Systems (MASs). First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension, the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution of the joint state of the agents. Accordingly, we can consider a team game in which the shared utility is a performance measure of the behavior of the MAS. For such a scenario, the game is at equilibrium - the Lagrangian is optimized - when the joint distribution of the agents optimizes the system's expected performance. One common way to find that equilibrium is to have each agent run a reinforcement learning algorithm. Here we investigate the alternative of exploiting PD theory to run gradient descent on the Lagrangian. We present computer experiments validating some of the predictions of PD theory for how best to do that gradient descent. We also demonstrate how PD theory can improve performance even when we are not allowed to rerun the MAS from different initial conditions, a requirement implicit in some previous work.
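
    A toy version of the proposed approach, gradient descent on a maxent Lagrangian L(q) = E_q[G] + T Σ q ln q over a product distribution, can be sketched for a two-agent team game. The cost matrix, temperature, and learning rate below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def pd_gradient_descent(G, T=0.2, lr=0.5, steps=400):
        """Minimize the maxent Lagrangian L(q) = E_q[G] + T * sum(q ln q)
        over a product distribution q(x1, x2) = q1(x1) q2(x2) for a
        2-agent team game with joint cost matrix G (toy PD-theory sketch)."""
        n1, n2 = G.shape
        th1, th2 = np.zeros(n1), np.zeros(n2)     # softmax parameters
        for _ in range(steps):
            q1 = np.exp(th1) / np.exp(th1).sum()
            q2 = np.exp(th2) / np.exp(th2).sum()
            # dL/dq_i(a) = E_{q_-i}[G | a] + T (log q_i(a) + 1)
            g1 = G @ q2 + T * (np.log(q1) + 1.0)
            g2 = G.T @ q1 + T * (np.log(q2) + 1.0)
            # chain rule through the softmax: dL/dth = q * (g - q . g)
            th1 -= lr * q1 * (g1 - q1 @ g1)
            th2 -= lr * q2 * (g2 - q2 @ g2)
        return q1, q2

    # Team cost: joint move (1, 1) has the lowest cost
    G = np.array([[1.0, 0.8],
                  [0.8, 0.2]])
    q1, q2 = pd_gradient_descent(G)
    assert q1.argmax() == 1 and q2.argmax() == 1
    ```

    Each agent descends only its own marginal, yet the coupled gradients drive the product distribution toward the Boltzmann-like equilibrium over the shared cost, which is the core claim of the PD-theory control picture.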

  11. The Emerald Mission-Based Correlation System - An Experimental Data Analysis of Air Force Research Laboratory (AFRL) Air Force Enterprise Defense (AFED) Information Security (INFOSEC) Alarms

    DTIC Science & Technology

    2003-01-01

    AFRL-IF-RS-TR-2002-315, Final Technical Report, January 2003 (reporting period January 2002 - July 2002). This project was established to experiment on the efficacy of the SRI EMERALD Mission-based

  12. Specifications Physiological Monitoring System

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The operation of a physiological monitoring system (PMS) is described. Specifications were established for performance, design, interface, and test requirements. The PMS is a compact, microprocessor-based system, which can be worn in a pack on the body or may be mounted on a Spacelab rack or other appropriate structure. It consists of two modules, the Data Control Unit (DCU) and the Remote Control/Display Unit (RCDU). Its purpose is to collect and distribute data from physiological experiments in the Spacelab and in the Orbiter.

  13. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE PAGES

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano; ...

    2018-01-01

This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.
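The fixed-point interpretation behind this family of Z-bus methods can be sketched for a single-phase equivalent: iterate V = w - Z·conj(s/V), where w is the no-load voltage profile and Z the reduced bus impedance matrix. The two-bus feeder and its per-unit numbers below are made-up illustrations, not data from the paper.

```python
import numpy as np

def zbus_load_flow(Y_LL, V_noload, s_load, tol=1e-10, max_iter=100):
    """Fixed-point (Z-bus) load flow for constant-power loads.
    Iterates V <- w - Z @ conj(s_load / V), where w is the no-load
    voltage profile and Z = inv(Y_LL) is the reduced bus impedance
    matrix (slack bus eliminated).  Single-phase equivalent sketch."""
    Z = np.linalg.inv(Y_LL)
    w = V_noload                          # slack-bus contribution
    V = w.copy()
    for _ in range(max_iter):
        I = np.conj(s_load / V)           # load currents (minus: consumption)
        V_new = w - Z @ I
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# two-bus toy feeder: slack at 1.0 pu, one line, one constant-PQ load
y = 10.0 - 30.0j                          # per-unit line admittance
Y_LL = np.array([[y]])                    # reduced admittance (load bus only)
V_noload = np.array([1.0 + 0.0j])         # no-load voltage = slack voltage
s_load = np.array([0.5 + 0.2j])           # per-unit PQ demand
V = zbus_load_flow(Y_LL, V_noload, s_load)
```

At the fixed point, the complex power delivered to the load bus, V·conj(y·(V_slack - V)), matches the specified demand, which is a convenient self-check on the iteration.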

  14. Distribution of high-dimensional entanglement via an intra-city free-space link

    PubMed Central

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-01-01

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links. PMID:28737168

  15. Load-Flow in Multiphase Distribution Networks: Existence, Uniqueness, Non-Singularity, and Linear Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernstein, Andrey; Wang, Cong; Dall'Anese, Emiliano

This paper considers unbalanced multiphase distribution systems with generic topology and different load models, and extends the Z-bus iterative load-flow algorithm based on a fixed-point interpretation of the AC load-flow equations. Explicit conditions for existence and uniqueness of load-flow solutions are presented. These conditions also guarantee convergence of the load-flow algorithm to the unique solution. The proposed methodology is applicable to generic systems featuring (i) wye connections; (ii) ungrounded delta connections; (iii) a combination of wye-connected and delta-connected sources/loads; and (iv) a combination of line-to-line and line-to-grounded-neutral devices at the secondary of distribution transformers. Further, a sufficient condition for the non-singularity of the load-flow Jacobian is proposed. Finally, linear load-flow models are derived, and their approximation accuracy is analyzed. Theoretical results are corroborated through experiments on IEEE test feeders.

  16. Spatial vs. individual variability with inheritance in a stochastic Lotka-Volterra system

    NASA Astrophysics Data System (ADS)

    Dobramysl, Ulrich; Tauber, Uwe C.

    2012-02-01

We investigate a stochastic spatial Lotka-Volterra predator-prey model with randomized interaction rates that are quenched and affixed to the lattice sites, specific to individuals in either population, or both. In the latter situation, we include rate inheritance with mutations from the particles' progenitors. Thus we arrive at a simple model for competitive evolution with environmental variability and selection pressure. We employ Monte Carlo simulations in zero and two dimensions to study the time evolution of both species' densities and their interaction rate distributions. The predator and prey concentrations in the ensuing steady states depend crucially on the environmental variability, whereas the temporal evolution of the individualized rate distributions leads to largely neutral optimization. Contrary to, e.g., linear gene expression models, this system does not experience fixation at extreme values. An approximate description of the resulting data is achieved by means of an effective master equation approach for the interaction rate distribution.
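A toy version of the zero-dimensional (well-mixed) variant with inherited, mutating predation rates can be sketched as below; the discrete-generation scheme and every parameter value are illustrative assumptions, not the paper's actual Monte Carlo rules.

```python
import random

def simulate_lv(generations=300, mu=0.05, sigma=0.25, lam0=0.5,
                eps=0.05, K=1000, seed=1):
    """Well-mixed stochastic Lotka-Volterra sketch in which each predator
    carries an individual predation rate lam, passed to its offspring
    with a small Gaussian mutation (competitive evolution of the rate)."""
    rng = random.Random(seed)
    predators = [lam0] * 100          # one entry per predator: its rate
    prey = 200
    for _ in range(generations):
        # prey branch with probability sigma, capped at carrying capacity K
        births = sum(1 for _ in range(prey) if rng.random() < sigma)
        prey = min(prey + births, K)
        survivors = []
        for lam in predators:
            if rng.random() < mu:      # predator death
                continue
            survivors.append(lam)
            if prey > 0 and rng.random() < lam * prey / K:   # predation
                prey -= 1
                child = min(max(lam + rng.gauss(0.0, eps), 0.0), 1.0)
                survivors.append(child)    # offspring inherits mutated rate
        predators = survivors
        if not predators or prey == 0:
            break
    return predators, prey

rates, prey = simulate_lv()
```

Tracking the list of individual rates over many generations is what lets one observe the (largely neutral) drift of the rate distribution described in the abstract.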

  17. Distribution of high-dimensional entanglement via an intra-city free-space link.

    PubMed

    Steinlechner, Fabian; Ecker, Sebastian; Fink, Matthias; Liu, Bo; Bavaresco, Jessica; Huber, Marcus; Scheidl, Thomas; Ursin, Rupert

    2017-07-24

    Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links.

  18. GreenLITE™: a novel approach for quantification of atmospheric methane concentrations, 2-D spatial distribution, and flux

    NASA Astrophysics Data System (ADS)

    Dobler, J. T.; Blume, N.; Pernini, T.; Zaccheo, T. S.; Braun, M.

    2017-12-01

The Greenhouse Gas Laser Imaging Tomography Experiment (GreenLITE™) was originally developed by Harris and Atmospheric and Environmental Research (AER) under a cooperative agreement with the National Energy Technology Laboratory of the Department of Energy. The system, initially conceived in 2013, used a pair of high-precision intensity-modulated continuous-wave (IMCW) transceivers and a series of retroreflectors to generate overlapping atmospheric density measurements of carbon dioxide (CO2) for continuous monitoring of ground carbon storage sites. The overlapping measurements provide an estimate of the two-dimensional (2-D) spatial distribution of the gas within the area of interest using sparsely sampled tomography methods. GreenLITE™ is a full end-to-end system that utilizes standard 4G connectivity and an all cloud-based data storage, processing, and dissemination suite to provide autonomous, near-real-time data via a web-based user interface. The system has been demonstrated for measuring and mapping CO2 over areas from approximately 0.04 km2 to 25 km2 (approximately 200 m × 200 m up to 5 km × 5 km), including a year-long demonstration over the city of Paris, France. In late 2016, the GreenLITE™ system was converted by Harris and AER to provide similar measurement capabilities for methane (CH4). Recent experiments have shown that GreenLITE™ CH4 retrieved concentrations agree with a Picarro cavity ring-down spectrometer, calibrated with World Meteorological Organization traceable gas, to within approximately 0.5% of background, or 10-15 parts per billion. The system has been tested with several controlled releases over the past year, including a weeklong experiment at an industrial oil and gas facility. Recent experiments have been exploring the use of a box model-based approach for estimating flux, and the initial results are very promising. We will present a description of the instrument, share some recent methane experimental results, and describe the flux estimation process and results of testing to date.
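The simplest form of a box-model flux estimate multiplies the downwind concentration enhancement by the wind speed and the cross-sectional area of the box. The sketch below uses the ideal gas law to convert a ppb enhancement into a mass density; the enhancement, wind, and box geometry are made-up illustrative numbers, and the actual GreenLITE™ retrieval is considerably more involved.

```python
def box_model_flux(c_down_ppb, c_up_ppb, wind_ms, height_m, width_m,
                   p_pa=101325.0, t_k=288.0):
    """Estimate a CH4 mass flux (g/s) from the enhancement across a box:
    flux = delta_C * u * H * W, with the ppb mole-fraction enhancement
    converted to a mass density via the ideal gas law."""
    R = 8.314          # gas constant, J/(mol K)
    M_CH4 = 16.04      # molar mass of CH4, g/mol
    air_mol_m3 = p_pa / (R * t_k)              # moles of air per m^3
    delta_c = (c_down_ppb - c_up_ppb) * 1e-9   # mole-fraction enhancement
    g_per_m3 = delta_c * air_mol_m3 * M_CH4    # CH4 mass enhancement
    return g_per_m3 * wind_ms * height_m * width_m

# 15 ppb enhancement, 3 m/s wind, 50 m mixing height, 200 m plume width
flux = box_model_flux(1915.0, 1900.0, 3.0, 50.0, 200.0)
```

Note that the 10-15 ppb agreement quoted in the abstract is of the same order as the enhancement used here, which is why controlled-release validation matters for flux work.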

  19. From Experiments to Simulations: Downscaling Measurements of Na+ Distribution at the Root-Soil Interface

    NASA Astrophysics Data System (ADS)

    Perelman, A.; Guerra, H. J.; Pohlmeier, A. J.; Vanderborght, J.; Lazarovitch, N.

    2017-12-01

When salinity increases beyond a certain threshold, crop yield will decrease at a fixed rate, according to the Maas and Hoffman model (1977). Thus, it is highly important to predict salinization and its impact on crops. Current models do not consider the impact of the transpiration rate on plant salt tolerance, although it affects plant water uptake and thus salt accumulation around the roots, consequently influencing the plant's sensitivity to salinity. Better model parametrization can improve the prediction of real salinity effects on crop growth and yield. The aim of this research is to study Na+ distribution around roots at different scales using different non-invasive methods, and to examine how this distribution is affected by the transpiration rate and plant water uptake. Results from tomato plants that were grown on rhizoslides (a capillary paper growth system) showed that the Na+ concentration was higher at the root-substrate interface than in the bulk. Also, Na+ accumulation around the roots decreased under a low transpiration rate, supporting our hypothesis. The rhizoslides enabled the root growth rate and architecture to be studied under different salinity levels. The root system architecture was retrieved from photos taken during the experiment, enabling us to incorporate real root systems into a simulation. Magnetic resonance imaging (MRI) was used to observe correlations between root system architectures and Na+ distribution. The MRI provided fine resolution of the Na+ accumulation around a single root without disturbing the root system. With time, Na+ accumulated only where roots were found in the soil and later around specific roots. Rhizoslides allow the root systems of larger plants to be investigated, but this method is limited by the medium (paper) and the dimension (2D). The MRI can create a 3D image of Na+ accumulation in soil on a microscopic scale. 
These data are being used for model calibration, which is expected to enable the prediction of root water uptake in saline soils for different climatic conditions and different soil water availabilities.
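The Maas-Hoffman threshold-slope response referred to above reduces to a simple piecewise-linear function of soil salinity. The tomato parameters below (threshold ≈ 2.5 dS/m, slope ≈ 9.9 %/(dS/m)) are commonly cited crop-tolerance values used here only for illustration, not numbers from this study.

```python
def maas_hoffman_yield(ec_e, threshold, slope):
    """Maas-Hoffman threshold-slope salinity response: relative yield (%)
    stays at 100 up to the threshold soil salinity (dS/m), then declines
    at a fixed slope (% per dS/m), floored at zero."""
    if ec_e <= threshold:
        return 100.0
    return max(100.0 - slope * (ec_e - threshold), 0.0)

# illustrative tomato parameters: threshold 2.5 dS/m, slope 9.9 %/(dS/m)
yields = [maas_hoffman_yield(ec, 2.5, 9.9) for ec in (1.0, 2.5, 5.0, 10.0)]
```

Replacing the fixed slope with one that depends on the transpiration rate is, in effect, the kind of reparametrization the abstract argues for.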

  20. Numerical investigation on properties of attack angle for an opposing jet thermal protection system

    NASA Astrophysics Data System (ADS)

    Lu, Hai-Bo; Liu, Wei-Qiang

    2012-08-01

The three-dimensional Navier-Stokes equations and the k-ε viscous model are used to simulate the attack-angle characteristics of a hemisphere nose-tip with an opposing jet thermal protection system under supersonic flow conditions. The numerical method is validated against the relevant experiment. The flow field parameters, aerodynamic forces, and surface heat flux distributions for attack angles of 0°, 2°, 5°, 7°, and 10° are obtained. The detailed numerical results show that the cruise attack angle has a great influence on the flow field parameters, aerodynamic force, and surface heat flux distribution of the supersonic vehicle nose-tip with an opposing jet thermal protection system. When the attack angle reaches 10°, the heat flux on the windward generatrix is close to the maximal heat flux on the wall surface of a nose-tip without a thermal protection system; thus the thermal protection has failed.
