Thermal Distribution System | Energy Systems Integration Facility | NREL
The Energy Systems Integration Facility's integrated thermal distribution system consists of a thermal water loop connected to a research boiler and ... The thermal distribution bus allows ...
Generalised Central Limit Theorems for Growth Rate Distribution of Complex Systems
NASA Astrophysics Data System (ADS)
Takayasu, Misako; Watanabe, Hayafumi; Takayasu, Hideki
2014-04-01
We introduce a solvable model of randomly growing systems consisting of many independent subunits. Scaling relations and growth rate distributions in the limit of infinite subunits are analysed theoretically. Various types of scaling properties and distributions reported for growth rates of complex systems in a variety of fields can be derived from this basic physical model. Statistical data of growth rates for about 1 million business firms are analysed as a real-world example of randomly growing systems. Not only are the scaling relations consistent with the theoretical solution, but the entire functional form of the growth rate distribution is fitted with a theoretical distribution that has a power-law tail.
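As a rough illustration of the kind of model described above, the following Python sketch simulates systems built from many independent, randomly growing subunits and reports how the spread of the aggregate growth-rate distribution typically narrows as the number of subunits increases. The Pareto subunit sizes and lognormal growth factors are assumptions made only for illustration, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rates(n_systems=100_000, n_subunits=10):
    """Growth rates of systems built from independent, randomly growing subunits."""
    # Subunit sizes from a heavy-tailed (Pareto) distribution -- an assumption for
    # illustration, not the paper's specification.
    sizes = rng.pareto(2.5, size=(n_systems, n_subunits)) + 1.0
    # Each subunit grows by an independent lognormal factor.
    factors = rng.lognormal(mean=0.0, sigma=0.3, size=(n_systems, n_subunits))
    before = sizes.sum(axis=1)
    after = (sizes * factors).sum(axis=1)
    return after / before

for k in (1, 10, 100):
    r = growth_rates(n_subunits=k)
    print(f"subunits={k:4d}  std of log growth rate = {np.log(r).std():.3f}")
```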
Consistency criteria for generalized Cuddeford systems
NASA Astrophysics Data System (ADS)
Ciotti, Luca; Morganti, Lucia
2010-01-01
General criteria to check the positivity of the distribution function (phase-space consistency) of stellar systems of assigned density and anisotropy profile are useful starting points in Jeans-based modelling. Here, we substantially extend previous results, and present the inversion formula and the analytical necessary and sufficient conditions for phase-space consistency of the family of multicomponent Cuddeford spherical systems: the distribution function of each density component of these systems is defined as the sum of an arbitrary number of Cuddeford distribution functions with arbitrary values of the anisotropy radius, but identical angular momentum exponent. The radial trend of anisotropy that can be realized by these models is therefore very general. As a surprising byproduct of our study, we found that the `central cusp-anisotropy theorem' (a necessary condition for consistency relating the values of the central density slope and of the anisotropy parameter) holds not only at the centre but also at all radii in consistent multicomponent generalized Cuddeford systems. This last result suggests that the so-called mass-anisotropy degeneracy could be less severe than what is sometimes feared.
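For reference, a hedged statement of the cusp-anisotropy condition discussed above, in notation of my own choosing (the abstract's result is that this inequality holds at all radii, not only at the centre, in consistent multicomponent generalized Cuddeford systems):

```latex
% Assumed notation: \rho is the density, \gamma(r) the local logarithmic density slope,
% \sigma_r and \sigma_t the radial and total tangential velocity dispersions, and
% \beta(r) the standard anisotropy parameter.
\gamma(r) \equiv -\frac{d\ln\rho}{d\ln r}, \qquad
\beta(r) \equiv 1 - \frac{\sigma_t^2(r)}{2\,\sigma_r^2(r)}, \qquad
\gamma(r) \;\ge\; 2\,\beta(r).
```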
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
PV System Component Fault and Failure Compilation and Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne
This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.
An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil
2012-01-01
Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.
NASA Astrophysics Data System (ADS)
Mbanjwa, Mesuli B.; Chen, Hao; Fourie, Louis; Ngwenya, Sibusiso; Land, Kevin
2014-06-01
Multiplexed or parallelised droplet microfluidic systems allow for increased throughput in the production of emulsions and microparticles, while maintaining a small footprint and utilising minimal ancillary equipment. The current paper demonstrates the design and fabrication of a multiplexed microfluidic system for producing biocatalytic microspheres. The microfluidic system consists of an array of 10 parallel microfluidic circuits, for simultaneous operation to demonstrate increased production throughput. The flow distribution was achieved using a principle of reservoirs supplying individual microfluidic circuits. The microfluidic devices were fabricated in poly (dimethylsiloxane) (PDMS) using soft lithography techniques. The consistency of the flow distribution was determined by measuring the size variations of the microspheres produced. The coefficient of variation of the particles was determined to be 9%, an indication of consistent particle formation and good flow distribution between the 10 microfluidic circuits.
Spatiotemporal stick-slip phenomena in a coupled continuum-granular system
NASA Astrophysics Data System (ADS)
Ecke, Robert
In sheared granular media, stick-slip behavior is ubiquitous, especially at very small shear rates and weak drive coupling. The resulting slips are characteristic of natural phenomena such as earthquakes, as well as being a delicate probe of the collective dynamics of the granular system. In that spirit, we developed a laboratory experiment consisting of sheared elastic plates separated by a narrow gap filled with quasi-two-dimensional granular material (bi-dispersed nylon rods). We directly determine the spatial and temporal distributions of strain displacements of the elastic continuum over 200 spatial points located adjacent to the gap. Slip events can be divided into large system-spanning events and spatially distributed smaller events. The small events have a probability distribution of event moment consistent with an M^{-3/2} power-law scaling and a Poisson-distributed recurrence time. Large events have a broad, log-normal moment distribution and a mean repetition time. As the applied normal force increases, there are fractionally more (less) large (small) events, and the large-event moment distribution broadens. The magnitude of the slip motion of the plates is well correlated with the root-mean-square displacements of the granular matter. Our results are consistent with mean field descriptions of statistical models of earthquakes and avalanches. We further explore the high-speed dynamics of system events and also discuss the effective granular friction of the sheared layer. We find that large events result from stored elastic energy in the plates in this coupled granular-continuum system.
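A hedged summary of the reported statistics, in notation of my own choosing rather than the authors':

```latex
% Small-event moments M follow a power law and their recurrence times t are
% exponentially (Poisson) distributed, while large events are approximately
% log-normal in moment.
P(M) \propto M^{-3/2}\ \text{(small events)}, \qquad
P(t) = \lambda\, e^{-\lambda t}, \qquad
P(M) \approx \frac{1}{M\sigma\sqrt{2\pi}}
  \exp\!\Big[-\frac{(\ln M-\mu)^2}{2\sigma^2}\Big]\ \text{(large events)}.
```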
Great Expectations: Distributed Financial Computing at Cornell.
ERIC Educational Resources Information Center
Schulden, Louise; Sidle, Clint
1988-01-01
The Cornell University Distributed Accounting (CUDA) system is an attempt to provide departments with a software tool for better managing their finances, creating microcomputer standards, creating a vehicle for better administrative microcomputer support, and ensuring local systems are consistent with central computer systems. (Author/MLW)
Maintaining consistency in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.
1991-01-01
In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems, often within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony to be combined with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.
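As a minimal illustration of the first style mentioned above (mutual exclusion within a single program), the following Python sketch guards a shared counter with a lock standing in for a binary semaphore or monitor; it is illustrative only and not drawn from the report.

```python
import threading

class Counter:
    """Shared data guarded by a mutual-exclusion construct (here a lock)."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()  # a binary semaphore would serve the same role

    def increment(self):
        with self._lock:               # only one thread may enter this critical section
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(10_000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.value())  # 40000 -- consistent despite concurrent access
```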
One approach for evaluating the Distributed Computing Design System (DCDS)
NASA Technical Reports Server (NTRS)
Ellis, J. T.
1985-01-01
The Distributed Computer Design System (DCDS) provides an integrated environment to support the life cycle of developing real-time distributed computing systems. The primary focus of DCDS is to significantly increase system reliability and software development productivity, and to minimize schedule and cost risk. DCDS consists of integrated methodologies, languages, and tools to support the life cycle of developing distributed software and systems. Smooth and well-defined transitions from phase to phase, language to language, and tool to tool provide a unique and unified environment. An approach to evaluating DCDS highlights its benefits.
Distributed software framework and continuous integration in hydroinformatics systems
NASA Astrophysics Data System (ADS)
Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao
2017-08-01
When hydroinformatics systems involve multiple complicated models, multisource structured and unstructured data, and complex requirements analysis, platform design and integration become a challenge. To address these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China is established.
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1985-01-01
The design and construction of embedded operating systems for real-time advanced aerospace applications was investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems are reported, including reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for Path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.
Centralized versus distributed propulsion
NASA Technical Reports Server (NTRS)
Clark, J. P.
1982-01-01
The functions and requirements of auxiliary propulsion systems are reviewed. None of the three major tasks (attitude control, stationkeeping, and shape control) can be performed by a collection of thrusters at a single central location. If a centralized system is defined as a collection of separated clusters, made up of the minimum number of propulsion units, then such a system can provide attitude control and stationkeeping for most vehicles. A distributed propulsion system is characterized by more numerous propulsion units in a regularly distributed arrangement. Various proposed large space systems are reviewed and it is concluded that centralized auxiliary propulsion is best suited to vehicles with a relatively rigid core. These vehicles may carry a number of flexible or movable appendages. A second group, consisting of one or more large flexible flat plates, may need distributed propulsion for shape control. There is a third group, consisting of vehicles built up from multiple shuttle launches, which may be forced into a distributed system because of the need to add additional propulsion units as the vehicles grow. The effects of distributed propulsion on a beam-like structure were examined. The deflection of the structure under both translational and rotational thrusts is shown as a function of the number of equally spaced thrusters. When two thrusters only are used it is shown that location is an important parameter. The possibility of using distributed propulsion to achieve minimum overall system weight is also examined. Finally, an examination of the active damping by distributed propulsion is described.
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed Embedded Systems have significant applications in automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real time system. We use heartbeat monitoring, check pointing and model based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosis and shutting down of faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
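A minimal sketch of the heartbeat-monitoring idea described above; the node names and timeout are chosen purely for illustration and are not from the paper.

```python
import time

class HeartbeatMonitor:
    """Illustrative heartbeat-based fault detector (names and timeout are assumptions)."""
    def __init__(self, nodes, timeout=0.5):
        self.timeout = timeout
        self.last_seen = {node: time.monotonic() for node in nodes}

    def record_heartbeat(self, node):
        # Called whenever a heartbeat message arrives from `node`.
        self.last_seen[node] = time.monotonic()

    def faulty_nodes(self):
        # A node silent for longer than the timeout is diagnosed as faulty and can be
        # shut down before the system becomes unsafe.
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > self.timeout]

monitor = HeartbeatMonitor(["steer_ecu", "brake_ecu", "throttle_ecu"])
monitor.record_heartbeat("steer_ecu")
time.sleep(0.6)
monitor.record_heartbeat("brake_ecu")
print(monitor.faulty_nodes())  # ['steer_ecu', 'throttle_ecu'] once the timeout elapses
```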
Distributed Issues for Ada Real-Time Systems
1990-07-23
Contract MDA 903-87-C-0056. Author: Thomas E. Griest. ... considerations. Adding to the problem of distributed real-time systems is the issue of maintaining a common sense of time among all of the processors ... because someone is waiting for the final output of a very large set of computations. However, in real-time systems, consistent meeting of short-term ...
The consistency service of the ATLAS Distributed Data Management system
NASA Astrophysics Data System (ADS)
Serfon, Cédric; Garonne, Vincent; ATLAS Collaboration
2011-12-01
With the continuously increasing volume of data produced by ATLAS and stored on the WLCG sites, the probability of data corruption or data losses, due to software and hardware failures is increasing. In order to ensure the consistency of all data produced by ATLAS a Consistency Service has been developed as part of the DQ2 Distributed Data Management system. This service is fed by the different ATLAS tools, i.e. the analysis tools, production tools, DQ2 site services or by site administrators that report corrupted or lost files. It automatically corrects the errors reported and informs the users in case of irrecoverable file loss.
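The following Python fragment sketches the kind of catalog-versus-storage checksum comparison such a consistency service automates; the function and field names are assumptions for illustration, not the DQ2 interface.

```python
# Compare the files a site is supposed to hold (catalog) against what is actually on
# storage, using checksums, to flag lost and corrupted files for correction.
def find_problem_files(catalog, storage):
    """catalog/storage: dicts mapping file name -> checksum string."""
    lost = [f for f in catalog if f not in storage]
    corrupted = [f for f in catalog
                 if f in storage and storage[f] != catalog[f]]
    return lost, corrupted

catalog = {"AOD.001.root": "ad32:9f1c", "AOD.002.root": "ad32:77b2"}
storage = {"AOD.002.root": "ad32:0000"}          # 001 is lost, 002 is corrupted
print(find_problem_files(catalog, storage))
```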
Efficient transformer study: Analysis of manufacture and utility data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burkes, Klaehn; Cordaro, Joe; McIntosh, John
Distribution transformers convert power from the distribution system voltage to the voltage used by end customers, which consist of residences, businesses, distributed generation, campus systems, and manufacturing facilities. Amorphous metal distribution transformers (AMDT) are more expensive and heavier than conventional silicon steel distribution transformers. These drawbacks, together with the difficulty of measuring the benefit from energy efficiency and low awareness of the technology, have hindered the adoption of AMDT. This report presents the cost savings for installing AMDT and the amount of energy saved based on the improved efficiency.
Exploiting replication in distributed systems
NASA Technical Reports Server (NTRS)
Birman, Kenneth P.; Joseph, T. A.
1989-01-01
Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.
NASA Technical Reports Server (NTRS)
Mintz, Toby; Maslowski, Edward A.; Colozza, Anthony; McFarland, Willard; Prokopius, Kevin P.; George, Patrick J.; Hussey, Sam W.
2010-01-01
The Lunar Surface Power Distribution Network Study team worked to define, breadboard, build and test an electrical power distribution system consistent with NASA's goal of providing electrical power to sustain life and power equipment used to explore the lunar surface. A testbed was set up to simulate the connection of different power sources and loads together to form a mini-grid and gain an understanding of how the power systems would interact. Within the power distribution scheme, each power source contributes to the grid in an independent manner without communication among the power sources and without a master-slave scenario. The grid consisted of four separate power sources and the accompanying power conditioning equipment. Overall system design and testing was performed. The tests were performed to observe the output and interaction of the different power sources as some sources are added and others are removed from the grid connection. The loads on the system were also varied from no load to maximum load to observe the power source interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
HENSINGER, DAVID M.; JOHNSTON, GABRIEL A.; HINMAN-SWEENEY, ELAINE M.
2002-10-01
A distributed reconfigurable micro-robotic system is a collection of unlimited numbers of distributed small, homogeneous robots designed to autonomously organize and reorganize in order to achieve mission-specified geometric shapes and functions. This project investigated the design, control, and planning issues for self-configuring and self-organizing robots. In the 2D space a system consisting of two robots was prototyped and successfully displayed automatic docking/undocking to operate dependently or independently. Additional modules were constructed to display the usefulness of a self-configuring system in various situations. In 3D a self-reconfiguring robot system of 4 identical modules was built. Each module connects to its neighbors using rotating actuators. An individual component can move in three dimensions on its neighbors. We have also built a self-reconfiguring robot system consisting of a 9-module Crystalline Robot. Each module in this robot is actuated by expansion/contraction. The system is fully distributed, has local communication (to neighbors) capabilities and it has global sensing capabilities.
SGR-like behaviour of the repeating FRB 121102
NASA Astrophysics Data System (ADS)
Wang, F. Y.; Yu, H.
2017-03-01
Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. Because the physical model of FRBs remains a mystery, many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also shows a power-law form, and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test the proposed models for FRBs. These distributions are consistent with the predictions from avalanche models of slowly driven nonlinear dissipative systems.
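In hedged notation of my own (not the paper's), the reported behaviour can be summarized as a power-law cumulative distribution for each burst property and a waiting-time density consistent with a non-stationary Poisson process with time-varying rate:

```latex
% x is a burst property (peak flux, fluence, or duration); \lambda(t) is the
% time-varying event rate of the non-stationary Poisson process.
N(>x) \propto x^{-\alpha_x}, \qquad
p(\Delta t \mid t) = \lambda(t+\Delta t)\,
  \exp\!\Big[-\int_{t}^{t+\Delta t}\lambda(s)\,ds\Big].
```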
Mobile hybrid LiDAR & infrared sensing for natural gas pipeline monitoring, final report.
DOT National Transportation Integrated Search
2016-01-01
The natural gas distribution system in the U.S. has a total of 1.2 million miles of mains and about 65 million service lines as of 2012 [1]. This distribution system consists of various material types and is subjected to various threats which vary ac...
An Analysis of Our Cable Distribution System: Its Current and Future Capabilities.
ERIC Educational Resources Information Center
Clarke, Tobin de Leon
Three goals have been set for San Joaquin Delta College Learning Resource Center's cable distribution system: it is to be made useable, useful, and flexible. Presently the system consists of a microwave dish installed on one building which points to a relay station with approximately one and one half miles of cable pulled to various locations. A…
Automation in the Space Station module power management and distribution Breadboard
NASA Technical Reports Server (NTRS)
Walls, Bryan; Lollar, Louis F.
1990-01-01
The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
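A minimal Python sketch of the load-shedding rule attributed to the LLPs above (shedding loads that use more than scheduled power); the load names, priorities, and power figures are invented for illustration and are not from the SSM/PMAD breadboard.

```python
# A Lowest Level Processor keeps the most critical loads and sheds the rest whenever
# total demand exceeds the power scheduled for its load center.
def shed_loads(loads, scheduled_power):
    """loads: list of (name, demand_watts, priority); lower priority number = more critical."""
    by_priority = sorted(loads, key=lambda l: l[2])   # most critical first
    kept, total = [], 0.0
    for name, demand, priority in by_priority:
        if total + demand <= scheduled_power:
            kept.append(name)
            total += demand
    shed = [name for name, _, _ in loads if name not in kept]
    return kept, shed

loads = [("life_support", 800, 0), ("experiment_A", 600, 2), ("lighting", 300, 1)]
print(shed_loads(loads, scheduled_power=1200))  # sheds experiment_A
```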
FTIR Analyses of Hypervelocity Impact Deposits: DebriSat Tests
2015-03-27
The Aerospace Concept Design Center advised on selection of materials for various subsystems. The test chamber was lined with "soft catch" foam panels to trap ... The preshot target was a multi-shock shield supplied by NASA designed to catch the projectile. It consisted of seven bumper panels consisting of ...
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.
2018-06-01
An analysis is presented of one of the key concepts of physical chemistry of condensed phases: the self-consistency of the theory in describing the rates of elementary stages of reversible processes and the equilibrium distribution of components in a reaction mixture. It posits that by equating the rates of forward and backward reactions, we must obtain the same equation for the equilibrium distribution of reaction mixture components that follows directly from equilibrium theory. Ideal reaction systems always have this property, since the theory is of a one-particle character. Problems arise in considering interparticle interactions responsible for the nonideal behavior of real systems. The Eyring and Temkin approaches to describing nonideal reaction systems are compared. Conditions for the self-consistency of the theory for mono- and bimolecular processes with different types of interparticle potentials, the degree of deviation from the equilibrium state, allowance for the internal motions of molecules in condensed phases, and the electronic polarization of the reagent environment are considered within the lattice gas model. The inapplicability of the concept of an activated complex coefficient for reaching self-consistency is demonstrated. It is also shown that one-particle approximations for considering intermolecular interactions do not provide a theory of self-consistency for condensed phases. We must at a minimum consider short-range order correlations.
Intercommunications in Real Time, Redundant, Distributed Computer System
NASA Technical Reports Server (NTRS)
Zanger, H.
1980-01-01
An investigation into the applicability of fiber optic communication techniques to real time avionic control systems, in particular the total automatic flight control system used for the VSTOL aircraft is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PE). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.
A new taxonomy for distributed computer systems based upon operating system structure
NASA Technical Reports Server (NTRS)
Foudriat, E. C.
1985-01-01
Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.
1988-09-01
The current prototyping tool also provides a multiversion data object control mechanism. In a real-time database system, synchronization protocols ... data in distributed real-time systems. The semantic information of read-only transactions is exploited for improved efficiency, and a multiversion ... are discussed. Index Terms: distributed system, replication, read-only transaction, consistency, multiversion.
2006-04-01
... and Scalability, (2) Sensors and Platforms, (3) Distributed Computing and Processing, (4) Information Management, (5) Fusion and Resource Management ... use of the deployed system. 3.3 Distributed Computing and Processing Session: The Distributed Computing and Processing Session consisted of three ...
NASA Astrophysics Data System (ADS)
Jin, Honglin; Kato, Teruyuki; Hori, Muneo
2007-07-01
An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
Networked control of microgrid system of systems
NASA Astrophysics Data System (ADS)
Mahmoud, Magdi S.; Rahman, Mohamed Saif Ur; AL-Sunni, Fouad M.
2016-08-01
The microgrid has made its mark in distributed generation and has attracted widespread research. However, microgrid is a complex system which needs to be viewed from an intelligent system of systems perspective. In this paper, a network control system of systems is designed for the islanded microgrid system consisting of three distributed generation units as three subsystems supplying a load. The controller stabilises the microgrid system in the presence of communication infractions such as packet dropouts and delays. Simulation results are included to elucidate the effectiveness of the proposed control strategy.
ERIC Educational Resources Information Center
1979
This document, designed to serve as a training manual for technical instructors and as a field resource reference for Peace Corps volunteers, consists of nine units. Unit topics focus on: (1) water supply sources; (2) water treatment; (3) planning water distribution systems; (4) characteristics of an adequate system; (5) construction techniques;…
Dynamic shared state maintenance in distributed virtual environments
NASA Astrophysics Data System (ADS)
Hamza-Lup, Felix George
Advances in computer networks and rendering systems facilitate the creation of distributed collaborative environments in which the distribution of information at remote locations allows efficient communication. Particularly challenging are distributed interactive Virtual Environments (VE) that allow knowledge sharing through 3D information. The purpose of this work is to address the problem of latency in distributed interactive VE and to develop a conceptual model for consistency maintenance in these environments based on the participant interaction model. An area that needs to be explored is the relationship between the dynamic shared state and the interaction with the virtual entities present in the shared scene. Mixed Reality (MR) and VR environments must bring the human participant interaction into the loop through a wide range of electronic motion sensors, and haptic devices. Part of the work presented here defines a novel criterion for categorization of distributed interactive VE and introduces, as well as analyzes, an adaptive synchronization algorithm for consistency maintenance in such environments. As part of the work, a distributed interactive Augmented Reality (AR) testbed and the algorithm implementation details are presented. Currently the testbed is part of several research efforts at the Optical Diagnostics and Applications Laboratory including 3D visualization applications using custom built head-mounted displays (HMDs) with optical motion tracking and a medical training prototype for endotracheal intubation and medical prognostics. An objective method using quaternion calculus is applied for the algorithm assessment. In spite of significant network latency, results show that the dynamic shared state can be maintained consistent at multiple remotely located sites. In further consideration of the latency problems and in the light of the current trends in interactive distributed VE applications, we propose a hybrid distributed system architecture for sensor-based distributed VE that has the potential to improve the system real-time behavior and scalability. (Abstract shortened by UMI.)
Log-Based Recovery in Asynchronous Distributed Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kane, Kenneth Paul
1989-01-01
A log-based mechanism is described for restoring consistent states to replicated data objects after failures. Preserving a causal form of consistency based on the notion of virtual time is focused upon in this report. Causal consistency has been shown to apply to a variety of applications, including distributed simulation, task decomposition, and mail delivery systems. Several mechanisms have been proposed for implementing causally consistent recovery, most notably those of Strom and Yemini, and Johnson and Zwaenepoel. The mechanism proposed here differs from these in two major respects. First, a roll-forward style of recovery is implemented. A functioning process is never required to roll-back its state in order to achieve consistency with a recovering process. Second, the mechanism does not require any explicit information about the causal dependencies between updates. Instead, all necessary dependency information is inferred from the orders in which updates are logged by the object servers. This basic recovery technique appears to be applicable to forms of consistency other than causal consistency. In particular, it is shown how the recovery technique can be modified to support an atomic form of consistency (grouping consistency). By combining grouping consistency with causal consistency, it may even be possible to implement serializable consistency within this mechanism.
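A minimal sketch of the roll-forward idea under assumed data structures: each object server keeps an append-only log of updates, and a recovering replica is restored by replaying that log in the order in which the updates were recorded, so no explicit causal-dependency metadata is needed. This is an illustration, not the thesis's mechanism in detail.

```python
def recover_replica(log):
    """log: list of (object_id, update_fn) pairs in the order the server logged them."""
    state = {}
    for obj, update in log:
        state[obj] = update(state.get(obj))   # roll forward; never roll back
    return state

log = [("x", lambda _: 1),
       ("y", lambda _: 10),
       ("x", lambda v: v + 5)]
print(recover_replica(log))   # {'x': 6, 'y': 10}
```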
Apollo experience report: Command and service module electrical power distribution subsystem
NASA Technical Reports Server (NTRS)
Munford, R. E.; Hendrix, B.
1974-01-01
A review of the design philosophy and development of the Apollo command and service modules electrical power distribution subsystem, a brief history of the evolution of the total system, and some of the more significant components within the system are discussed. The electrical power distribution primarily consisted of individual control units, interconnecting units, and associated protective devices. Because each unit within the system operated more or less independently of other units, the discussion of the subsystem proceeds generally in descending order of complexity; the discussion begins with the total system, progresses to the individual units of the system, and concludes with the components within the units.
An Efficient Resource Management System for a Streaming Media Distribution Network
ERIC Educational Resources Information Center
Cahill, Adrian J.; Sreenan, Cormac J.
2006-01-01
This paper examines the design and evaluation of a TV on Demand (TVoD) system, consisting of a globally accessible storage architecture where all TV content broadcast over a period of time is made available for streaming. The proposed architecture consists of idle Internet Service Provider (ISP) servers that can be rented and released dynamically…
Towards the Formal Verification of a Distributed Real-Time Automotive System
NASA Technical Reports Server (NTRS)
Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey
2010-01-01
We present the status of a project which aims at building, formally and pervasively verifying a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).
Task allocation model for minimization of completion time in distributed computer systems
NASA Astrophysics Data System (ADS)
Wang, Jai-Ping; Steidley, Carl W.
1993-08-01
A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
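One schematic way to write the allocation objective, in assumed notation rather than the paper's exact model:

```latex
% x_{ij}=1 if module i is assigned to processor j, e_{ij} is its execution time there,
% and c_{ik} is the communication cost incurred when modules i and k are placed on
% different processors.
\min_{x}\; \max_{j}\;\Big[ \sum_{i} e_{ij}\,x_{ij}
  \;+\; \sum_{i,k} c_{ik}\, x_{ij}\,(1 - x_{kj}) \Big]
\quad\text{s.t.}\quad \sum_{j} x_{ij} = 1 \;\;\forall i,\qquad x_{ij}\in\{0,1\}.
```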
Notes on a storage manager for the Clouds kernel
NASA Technical Reports Server (NTRS)
Pitts, David V.; Spafford, Eugene H.
1986-01-01
The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.
Exploiting Virtual Synchrony in Distributed Systems
1987-02-01
... for distributed systems yield the best performance relative to the level of synchronization guaranteed by the primitive. A programmer could then ... synchronization facility. Semaphores: replicated binary and general semaphores. Monitors: monitor lock, condition variables and signals. Deadlock detection ... We describe applications of a new software abstraction called the virtually synchronous process group. Such a group consists of a set of processes ...
An approach for heterogeneous and loosely coupled geospatial data distributed computing
NASA Astrophysics Data System (ADS)
Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui
2010-07-01
Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a term named as Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, a geospatial query process schema for distributed computing as well as a method for equivalent transformation from a global geospatial query to distributed local queries at SQL (Structured Query Language) level to solve the coordinating problem among heterogeneous resources are presented. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, thus to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
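A toy Python sketch of the idea of an equivalent distributed program for a global query: the same SQL-level local query is dispatched to each autonomous peer listed in the resource directory and the partial results are merged. All names here are assumptions for illustration, not the paper's interfaces.

```python
def run_global_query(peers, local_sql):
    """peers: objects exposing execute(sql) -> list of rows; results are concatenated."""
    results = []
    for peer in peers:                 # could equally be dispatched concurrently
        results.extend(peer.execute(local_sql))
    return results

class FakePeer:
    """Stands in for an autonomous local GIS database reachable over the peer-to-peer network."""
    def __init__(self, rows): self.rows = rows
    def execute(self, sql): return self.rows

peers = [FakePeer([("road", 1)]), FakePeer([("river", 2)])]
print(run_global_query(peers, "SELECT name, id FROM features WHERE ..."))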
Measuring the effects of heterogeneity on distributed systems
NASA Technical Reports Server (NTRS)
El-Toweissy, Mohamed; Zeineldine, Osman; Mukkamala, Ravi
1991-01-01
Distributed computer systems in daily use are becoming more and more heterogeneous. Currently, much of the design and analysis studies of such systems assume homogeneity. This assumption of homogeneity has been mainly driven by the resulting simplicity in modeling and analysis. A simulation study is presented which investigated the effects of heterogeneity on scheduling algorithms for hard real time distributed systems. In contrast to previous results which indicate that random scheduling may be as good as a more complex scheduler, this algorithm is shown to be consistently better than a random scheduler. This conclusion is more prevalent at high workloads as well as at high levels of heterogeneity.
Benford's law and the FSD distribution of economic behavioral micro data
NASA Astrophysics Data System (ADS)
Villas-Boas, Sofia B.; Fu, Qiuzi; Judge, George
2017-11-01
In this paper, we focus on the first significant digit (FSD) distribution of European micro income data and use information theoretic-entropy based methods to investigate the degree to which Benford's FSD law is consistent with the nature of these economic behavioral systems. We demonstrate that Benford's law is not an empirical phenomenon that occurs only in important distributions in physical statistics, but that it also arises in self-organizing dynamic economic behavioral systems. The empirical likelihood member of the minimum divergence-entropy family, is used to recover country based income FSD probability density functions and to demonstrate the implications of using a Benford prior reference distribution in economic behavioral system information recovery.
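For reference, Benford's first-significant-digit law referred to above:

```latex
% Probability that the first significant digit of an observation equals d.
P(d) = \log_{10}\!\Big(1 + \frac{1}{d}\Big), \qquad d = 1, 2, \ldots, 9 .
```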
Real-time high speed generator system emulation with hardware-in-the-loop application
NASA Astrophysics Data System (ADS)
Stroupe, Nicholas
The emerging emphasis on, and benefits of, distributed generation on smaller scale networks have prompted much attention and research in this field. The growth of research in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important: an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, a simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the expected non-linear load environment of the Navy weaponry. This test bed can also support other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. In this thesis, the test bed will be utilized to perform and validate this newly developed method of generator system emulation. The dynamics of a high speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real time. The calculated output stator voltage will then serve as a reference for a controllable three-phase inverter at the input of the test bed that will emulate and reproduce these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed, which can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the drawbacks of needing physical generator units. Some of these advantages are safety, reduced costs, and the ability to scale while still preserving the appropriate system dynamics. This thesis will introduce the ideas behind generator emulation and explain the process and necessary steps to achieve this objective. It will also demonstrate real results and verification of numerical values in real time. The final goal of this thesis is to introduce this new idea and show that it is in fact obtainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
DOT National Transportation Integrated Search
2012-07-01
For this study, a novel optical fiber sensing system was developed and tested for the monitoring of corrosion in : transportation systems. The optical fiber sensing system consists of a reference long period fiber gratings (LPFG) sensor : for corrosi...
Bibliography On Multiprocessors And Distributed Processing
NASA Technical Reports Server (NTRS)
Miya, Eugene N.
1988-01-01
Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to usual keyword searches, is used for producing citations, indexes, and cross-references. Data base contains UNIX(R) "refer"-formatted ASCII data and is implemented on any computer running under the UNIX(R) operating system. Easily convertible to other operating systems. Requires approximately one megabyte of secondary storage. Bibliography compiled in 1985.
The Influence of Manufacturing Variations on a Crash Energy Management System
DOT National Transportation Integrated Search
2008-09-24
Crash Energy Management (CEM) systems protect passengers in the event of a train collision. A CEM system distributes crush throughout designated unoccupied crush zones of a passenger rail consist. This paper examines the influence of manufacturing va...
Economic optimization of the energy transport component of a large distributed solar power plant
NASA Technical Reports Server (NTRS)
Turner, R. H.
1976-01-01
A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to the power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system consistent with minimized cost constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution and also the insulation thickness distribution associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of diseconomy of scale.
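A schematic statement of the optimization described above, in assumed notation (not the report's): pipe diameter, wall thickness, and insulation thickness are chosen over the network to minimize capital plus operating cost, with pump work and heat leak priced as operating expenses.

```latex
% D, t, s: distributions of pipe diameter, pipe wall thickness, and insulation
% thickness over the network; c_W and c_Q price the pump work and heat leak.
\min_{D,\,t,\,s}\; C_{\text{total}}
  = C_{\text{pipe}}(D,t) + C_{\text{insulation}}(s)
  + c_{W}\,W_{\text{pump}}(D) + c_{Q}\,Q_{\text{leak}}(s)
```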
Bluetooth-based distributed measurement system
NASA Astrophysics Data System (ADS)
Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng
2007-07-01
A novel distributed wireless measurement system, which consists of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instrument technology, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are done so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed with a measurement flow chart for the distributed measurement system based on Bluetooth technology. The advantages and disadvantages of the system are analyzed at the end of the paper, and the measurement system has been successfully used in the Daqing oilfield, China, for measurement of parameters such as temperature, flow rate and oil pressure at an electromotor-pump unit.
Pond fractals in a tidal flat.
Cael, B B; Lambert, Bennett; Bisson, Kelsey
2015-11-01
Studies over the past decade have reported power-law distributions for the areas of terrestrial lakes and Arctic melt ponds, as well as fractal relationships between their areas and coastlines. Here we report similar fractal structure of ponds in a tidal flat, thereby extending the spatial and temporal scales on which such phenomena have been observed in geophysical systems. Images taken during low tide of a tidal flat in Damariscotta, Maine, reveal a well-resolved power-law distribution of pond sizes over three orders of magnitude with a consistent fractal area-perimeter relationship. The data are consistent with the predictions of percolation theory for unscreened perimeters and scale-free cluster size distributions and are robust to alterations of the image processing procedure. The small spatial and temporal scales of these data suggest this easily observable system may serve as a useful model for investigating the evolution of pond geometries, while emphasizing the generality of fractal behavior in geophysical surfaces.
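In hedged notation (the symbols are mine, not the paper's), the reported fractal area-perimeter relationship and power-law pond-size distribution take the form:

```latex
% A: pond area, P: pond perimeter, D: fractal dimension of the perimeter,
% \tau: size-distribution exponent.
P \propto A^{D/2}, \qquad n(A) \propto A^{-\tau}.
```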
Packaging and distributing ecological data from multisite studies
NASA Technical Reports Server (NTRS)
Olson, R. J.; Voorhees, L. D.; Field, J. M.; Gentry, M. J.
1996-01-01
Studies of global change and other regional issues depend on ecological data collected at multiple study areas or sites. An information system model is proposed for compiling diverse data from dispersed sources so that the data are consistent, complete, and readily available. The model includes investigators who collect and analyze field measurements, science teams that synthesize data, a project information system that collates data, a data archive center that distributes data to secondary users, and a master data directory that provides broader searching opportunities. Special attention to format consistency is required, such as units of measure, spatial coordinates, dates, and notation for missing values. Often data may need to be enhanced by estimating missing values, aggregating to common temporal units, or adding other related data such as climatic and soils data. Full documentation, an efficient data distribution mechanism, and an equitable way to acknowledge the original source of data are also required.
Emergency response to an anthrax attack
Wein, Lawrence M.; Craft, David L.; Kaplan, Edward H.
2003-01-01
We developed a mathematical model to compare various emergency responses in the event of an airborne anthrax attack. The system consists of an atmospheric dispersion model, an age-dependent dose–response model, a disease progression model, and a set of spatially distributed two-stage queueing systems consisting of antibiotic distribution and hospital care. Our results underscore the need for the extremely aggressive and timely use of oral antibiotics by all asymptomatics in the exposure region, distributed either preattack or by nonprofessionals postattack, and the creation of surge capacity for supportive hospital care via expanded training of nonemergency care workers at the local level and the use of federal and military resources and nationwide medical volunteers. The use of prioritization (based on disease stage and/or age) at both queues, and the development and deployment of modestly rapid and sensitive biosensors, while helpful, produce only second-order improvements. PMID:12651951
Stronger Consistency and Semantics for Low-Latency Geo-Replicated Storage
2013-06-01
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
This report summarizes all work of the Limited Energy Study of Steam Distribution Systems, Energy Engineering Analysis Program, Hawthorne Army Ammunition Depot (HWAAD), Nevada. The purpose of this limited energy study is to evaluate steam distribution and condensate collection systems in both the Industrial Area and Ordnance Area of HWAAD to develop a set of replacement actions that will reduce energy consumption and operating costs. These efforts consist of corrections and revisions to previously submitted funding requests. A number of facilities covering over 140,000 acres constitute HWAAD; however, this study was limited to the Industrial and Ordnance Areas.
Status, Vision, and Challenges of an Intelligent Distributed Engine Control Architecture
NASA Technical Reports Server (NTRS)
Behbahani, Alireza; Culley, Dennis; Garg, Sanjay; Millar, Richard; Smith, Bert; Wood, Jim; Mahoney, Tim; Quinn, Ronald; Carpenter, Sheldon; Mailander, Bill;
2007-01-01
A Distributed Engine Control Working Group (DECWG) consisting of the Department of Defense (DoD), the National Aeronautics and Space Administration (NASA) Glenn Research Center (GRC), and industry has been formed to examine the current and future requirements of propulsion engine systems. The scope of this study includes an assessment of the paradigm shift from a centralized engine control architecture to an architecture based on distributed control utilizing open system standards. It also includes a description of the work begun in the 1990s, which continues today, followed by identification of the remaining technical challenges that present barriers to on-engine distributed control.
Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS
NASA Technical Reports Server (NTRS)
Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew
2016-01-01
EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools that enable interdisciplinary research and analysis across multiple scientific instruments. EOSDIS is composed of system elements such as geographically distributed archive centers that manage the stewardship of data. The infrastructure consists of underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure that partners can work together effectively and provide coherent services to users.
A development framework for artificial intelligence based distributed operations support systems
NASA Technical Reports Server (NTRS)
Adler, Richard M.; Cottman, Bruce H.
1990-01-01
Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge-based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications into unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems, is being constructed. SOCIAL consists of three primary language-based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and models for accessing heterogeneous information resources. DAI application subsystems, either new or existing, will access these distributed services non-intrusively via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.
Arcade: A Web-Java Based Framework for Distributed Computing
NASA Technical Reports Server (NTRS)
Chen, Zhikai; Maly, Kurt; Mehrotra, Piyush; Zubair, Mohammad; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
Distributed heterogeneous environments are being increasingly used to execute a variety of large size simulations and computational problems. We are developing Arcade, a web-based environment to design, execute, monitor, and control distributed applications. These targeted applications consist of independent heterogeneous modules which can be executed on a distributed heterogeneous environment. In this paper we describe the overall design of the system and discuss the prototype implementation of the core functionalities required to support such a framework.
A comparison of KABCO and AIS injury severity metrics using CODES linked data.
Burch, Cynthia; Cook, Lawrence; Dischinger, Patricia
2014-01-01
The research objective is to compare the consistency of distributions between crash-assigned (KABCO) and hospital-assigned (Abbreviated Injury Scale, AIS) injury severity scoring systems for 2 states. The hypothesis is that AIS scores will be more consistent between the 2 studied states (Maryland and Utah) than KABCO. The analysis involved Crash Outcome Data Evaluation System (CODES) data from 2 states, Maryland and Utah, for years 2006-2008. Crash report and hospital inpatient data were linked probabilistically, and International Classification of Diseases (CMS 2013) codes from hospital records were translated into AIS codes. KABCO scores from police crash reports were compared to the AIS scores within and between the 2 study states. Maryland appears to have the more severe crash report KABCO scoring for injured crash participants, with close to 50 percent of all injured persons coded as level B or worse, while Utah has approximately 40 percent in this group. When analyzing AIS scores, some fluctuation was seen within states over time, but the distribution of MAIS is much more comparable between states. Maryland had approximately 85 percent of hospitalized injured cases coded as MAIS = 1 or minor. In Utah this percentage was close to 80 percent for all 3 years. This is quite different from the KABCO distributions, where Maryland had a smaller percentage of cases in the lowest injury severity category as compared to Utah. This analysis examined the distribution of 2 injury severity metrics that differ in both design and collection and found that both classifications are consistent within each state from 2006 to 2008. However, the distribution of both KABCO and Maximum Abbreviated Injury Scale (MAIS) varies between the states. MAIS was found to be more consistent between states than KABCO.
Integrated Nationwide Electronic Health Records system: Semi-distributed architecture approach.
Fragidis, Leonidas L; Chatzoglou, Prodromos D; Aggelidis, Vassilios P
2016-11-14
The integration of heterogeneous electronic health record systems by building an interoperable nationwide electronic health record system provides indisputable benefits in health care, such as superior health information quality, prevention of medical errors, and cost savings. This paper proposes a semi-distributed system architecture approach for an integrated national electronic health record system that incorporates the advantages of the two dominant approaches, the centralized architecture and the distributed architecture. The high-level design of the main elements of the proposed architecture is provided, along with diagrams of execution and operation and the data synchronization architecture for the proposed solution. The proposed approach effectively handles issues related to redundancy, consistency, security, privacy, availability, load balancing, maintainability, complexity, and interoperability of citizens' health data. The proposed semi-distributed architecture offers a robust interoperability framework without requiring healthcare providers to change their local EHR systems. It is a pragmatic approach that takes into account the characteristics of the Greek national healthcare system, along with the national public administration data communication network infrastructure, to achieve EHR integration at an acceptable implementation cost.
Results on angular distributions of thermal dileptons in nuclear collisions
NASA Astrophysics Data System (ADS)
Usai, Gianluca; NA60 Collaboration
2009-11-01
The NA60 experiment at the CERN SPS has studied dimuon production in 158 AGeV In-In collisions. The strong pair excess above the known sources found in the mass region 0.2
Optimal reconstruction of historical water supply to a distribution system: A. Methodology.
Aral, M M; Guan, J; Maslia, M L; Sautner, J B; Gillig, R E; Reyes, J J; Williams, R C
2004-09-01
The New Jersey Department of Health and Senior Services (NJDHSS), with support from the Agency for Toxic Substances and Disease Registry (ATSDR) conducted an epidemiological study of childhood leukaemia and nervous system cancers that occurred in the period 1979 through 1996 in Dover Township, Ocean County, New Jersey. The epidemiological study explored a wide variety of possible risk factors, including environmental exposures. ATSDR and NJDHSS determined that completed human exposure pathways to groundwater contaminants occurred in the past through private and community water supplies (i.e. the water distribution system serving the area). To investigate this exposure, a model of the water distribution system was developed and calibrated through an extensive field investigation. The components of this water distribution system, such as number of pipes, number of tanks, and number of supply wells in the network, changed significantly over a 35-year period (1962--1996), the time frame established for the epidemiological study. Data on the historical management of this system was limited. Thus, it was necessary to investigate alternative ways to reconstruct the operation of the system and test the sensitivity of the system to various alternative operations. Manual reconstruction of the historical water supply to the system in order to provide this sensitivity analysis was time-consuming and labour intensive, given the complexity of the system and the time constraints imposed on the study. To address these issues, the problem was formulated as an optimization problem, where it was assumed that the water distribution system was operated in an optimum manner at all times to satisfy the constraints in the system. The solution to the optimization problem provided the historical water supply strategy in a consistent manner for each month of the study period. The non-uniqueness of the selected historical water supply strategy was addressed by the formulation of a second model, which was based on the first solution. Numerous other sensitivity analyses were also conducted using these two models. Both models are solved using a two-stage progressive optimality algorithm along with genetic algorithms (GAs) and the EPANET2 water distribution network solver. This process reduced the required solution time and generated a historically consistent water supply strategy for the water distribution system.
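A drastically simplified sketch of the optimization idea only: a small genetic algorithm searches for monthly well-pumping fractions under a hypothetical cost-and-demand surrogate. The real study coupled GAs with a two-stage progressive optimality algorithm and the EPANET2 hydraulic solver; none of that is reproduced here, and the fitness function below is a stand-in.

```python
# Toy GA for allocating supply among wells; the fitness function is a hypothetical
# surrogate for a hydraulic simulation of the distribution network.
import numpy as np

rng = np.random.default_rng(1)
n_wells, demand = 8, 1.0
cost_per_well = rng.uniform(0.5, 2.0, n_wells)          # assumed relative pumping costs

def fitness(x):
    # Penalize unmet demand, reward low pumping cost.
    supply_penalty = 100.0 * abs(x.sum() - demand)
    return -(cost_per_well @ x + supply_penalty)

pop = rng.random((60, n_wells))
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]              # keep the fittest 20
    children = parents[rng.integers(0, 20, 40)].copy()   # resample parents
    children += rng.normal(0.0, 0.02, children.shape)    # mutation
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("supplied fraction per well:", best.round(2), "total:", round(best.sum(), 2))
```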
DOT National Transportation Integrated Search
1990-01-01
The validation and evaluation of an expert system for traffic control in highway work zones (TRANZ) is described. The stages in the evaluation process consisted of the following: revisit the experts, selectively distribute copies of TRANZ with docume...
Two-Dimensional Automatic Measurement for Nozzle Flow Distribution Using Improved Ultrasonic Sensor
Zhai, Changyuan; Zhao, Chunjiang; Wang, Xiu; Wang, Ning; Zou, Wei; Li, Wei
2015-01-01
Spray deposition and distribution are affected by many factors, one of which is nozzle flow distribution. A two-dimensional automatic measurement system, which consisted of a conveying unit, a system control unit, an ultrasonic sensor, and a deposition collecting dish, was designed and developed. The system could precisely move an ultrasonic sensor above a pesticide deposition collecting dish to measure the nozzle flow distribution. A sensor sleeve with a PVC tube was designed for the ultrasonic sensor to limit its beam angle in order to measure the liquid level in the small troughs. System performance tests were conducted to verify the designed functions and measurement accuracy. A commercial spray nozzle was also used to measure its flow distribution. The test results showed that the relative error of the volume measurement was less than 7.27% when the liquid volume in a trough was 2 mL, while the error was less than 4.52% when the liquid volume was 4 mL or more. The developed system was also used to evaluate the flow distribution of a commercial nozzle. It was able to provide the shape and the spraying width of the flow distribution accurately. PMID:26501288
Marketing and Distributive Education: Scope and Sequence.
ERIC Educational Resources Information Center
Nashville - Davidson County Metropolitan Public Schools, TN.
This guide, which was written as an initial step in the development of a systemwide articulated curriculum sequence for all vocational programs within the Metropolitan Nashville Public School System, outlines the suggested scope and sequence of a 2-year program in marketing and distributive education. The guide consists of a course description;…
NASA Technical Reports Server (NTRS)
Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven
2010-01-01
Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.
Bilinear effect in complex systems
NASA Astrophysics Data System (ADS)
Lam, Lui; Bellavia, David C.; Han, Xiao-Pu; Alston Liu, Chih-Hui; Shu, Chang-Qing; Wei, Zhengjin; Zhou, Tao; Zhu, Jichen
2010-09-01
The distribution of the lifetime of Chinese dynasties (as well as that of the British Isles and Japan) in a linear Zipf plot is found to consist of two straight lines intersecting at a transition point. This two-section piecewise-linear distribution is different from the power law or the stretched exponent distribution, and is called the Bilinear Effect for short. With assumptions mimicking the organization of ancient Chinese regimes, a 3-layer network model is constructed. Numerical results of this model show the bilinear effect, providing a plausible explanation of the historical data. The bilinear effect in two other social systems is presented, indicating that such a piecewise-linear effect is widespread in social systems.
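A minimal sketch of how the reported bilinear shape can be detected: fit two straight lines to a linear Zipf plot with a scanned breakpoint and keep the split with the lowest total squared error. The data below are synthetic and only stand in for the dynasty lifetimes, which are not reproduced here.

```python
# Two-segment (bilinear) fit to a linear Zipf plot via breakpoint scanning.
import numpy as np

rng = np.random.default_rng(2)
rank = np.arange(1, 81, dtype=float)
# Two linear sections with different slopes plus noise (illustrative data only).
life = np.where(rank <= 30, 400 - 6 * rank, 310 - 3 * rank) + rng.normal(0, 5, rank.size)

def two_segment_sse(x, y, b):
    """Total squared error of separate least-squares lines fitted on x[:b] and x[b:]."""
    sse = 0.0
    for xs, ys in ((x[:b], y[:b]), (x[b:], y[b:])):
        coef = np.polyfit(xs, ys, 1)
        sse += ((np.polyval(coef, xs) - ys) ** 2).sum()
    return sse

breaks = range(5, rank.size - 5)
best = min(breaks, key=lambda b: two_segment_sse(rank, life, b))
print("estimated transition rank:", rank[best])   # near 30 for this synthetic data
```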
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain traditional architectures in central and homogeneous distributed database computing, followed by a possible approach that uses an architectural framework to obtain sustainability across disparate systems, i.e., heterogeneous databases, and concluded with a discussion. It is shown that, by using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
Quantum key distribution network for multiple applications
NASA Astrophysics Data System (ADS)
Tajima, A.; Kondoh, T.; Ochi, T.; Fujiwara, M.; Yoshino, K.; Iizuka, H.; Sakamoto, T.; Tomita, A.; Shimamura, E.; Asami, S.; Sasaki, M.
2017-09-01
The fundamental architecture and functions of secure key management in a quantum key distribution (QKD) network with enhanced universal interfaces for smooth key sharing between arbitrary two nodes and enabling multiple secure communication applications are proposed. The proposed architecture consists of three layers: a quantum layer, key management layer and key supply layer. We explain the functions of each layer, the key formats in each layer and the key lifecycle for enabling a practical QKD network. A quantum key distribution-advanced encryption standard (QKD-AES) hybrid system and an encrypted smartphone system were developed as secure communication applications on our QKD network. The validity and usefulness of these systems were demonstrated on the Tokyo QKD Network testbed.
Moorhead district heating, phase 2
NASA Astrophysics Data System (ADS)
Sundberg, R. E.
1981-01-01
The feasibility of developing a demonstration cogeneration hot water district heating system was studied. The district heating system would use coal and cogenerated heat from the Moorhead power plant to heat the water that would be distributed through underground pipes to customers for their space heating and domestic water heating needs, serving a substantial portion of the commercial and institutional loads as well as single- and multiple-family residences near the distribution lines. The technical feasibility effort considered the distribution network, retrofit of the power plant, and conversion of heating systems in customers' buildings to use hot water from the system. The system would be developed over six years. The economic analysis consisted of a market assessment and development of business plans for construction and operation of the system. Rate design methodology, institutional issues, development risk, and the proposal for implementation are discussed.
NASA Technical Reports Server (NTRS)
Mah, G. R.; Myers, J.
1993-01-01
The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAACs) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA-Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65,000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial characteristics of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument scheduled to be flown on the first EOS-AM spacecraft. The ASTER is designed to acquire 14 channels of land science data in the visible and near-IR (VNIR), shortwave-IR (SWIR), and thermal-IR (TIR) regions from 0.52 micron to 11.65 micron at high spatial resolutions of 15 m to 90 m. Stereo data will also be acquired in the VNIR region in a single band. The AVIRIS and TMS cover the ASTER VNIR and SWIR bands, and the TIMS covers the TIR bands. Simulated ASTER data sets have been generated over Death Valley, California; Cuprite, Nevada; and the Drum Mountains, Utah, using a combination of AVIRIS, TIMS, and TMS data, and existing digital elevation models (DEMs) for the topographic information.
Control of Groundwater Remediation Process as Distributed Parameter System
NASA Astrophysics Data System (ADS)
Mendel, M.; Kovács, T.; Hulkó, G.
2014-12-01
Pollution of groundwater requires the implementation of appropriate remediation solutions, which may need to be deployed for several years. Local groundwater contamination and its subsequent spread may result in contamination of drinking water sources or other disasters. This publication aims to design and demonstrate control of pumping wells for a model groundwater remediation task. The task consists of an appropriately spaced soil domain with input parameters, pumping wells, and a control system. The model of the controlled system is built in the MODFLOW program using the finite-difference method, treating it as a distributed parameter system. The control problem is solved with the DPS Blockset for MATLAB & Simulink.
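MODFLOW itself is not reproduced here; the toy solver below only illustrates the same finite-difference idea behind a distributed-parameter groundwater model, relaxing the steady-state head equation on a uniform grid with fixed-head boundaries and one pumping well held at a low head. All values are hypothetical.

```python
# Toy steady-state groundwater-head solver: Jacobi relaxation of the Laplace equation.
import numpy as np

n = 41
head = np.full((n, n), 10.0)          # initial head everywhere, in metres (assumed)
head[:, 0], head[:, -1] = 12.0, 12.0  # fixed-head boundaries
head[0, :], head[-1, :] = 12.0, 12.0
well = (20, 20)

for _ in range(5000):                  # Jacobi sweeps until roughly converged
    interior = 0.25 * (head[:-2, 1:-1] + head[2:, 1:-1] + head[1:-1, :-2] + head[1:-1, 2:])
    head[1:-1, 1:-1] = interior
    head[well] = 4.0                   # pumping well modelled as a fixed low head

print("head midway between well and boundary:", round(head[20, 10], 2))
```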
N-body experiments and missing mass in clusters of galaxies
NASA Technical Reports Server (NTRS)
Smith, H.; Hintzen, P.; Sofia, S.; Oegerle, W.; Scott, J.; Holman, G.
1979-01-01
It is commonly assumed that the distributions of surface density and radial-velocity dispersion in clusters of galaxies are sensitive tracers of the underlying distribution of any unseen mass. N-body experiments have been used to test this assumption. Calculations with equal-mass systems indicate that the effects of the underlying mass distribution cannot be detected by observations of the surface-density or radial-velocity distributions, and the existence of an extended binding mass in all well-studied clusters would be consistent with available observations.
Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong
Intelligent distributed electrical energy systems (IDEES) are characterized by vast numbers of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power system management approach no longer fits their operation. Thus, it is believed that blockchain technology is one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES is inherently both social and technical in character; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can be self-organized into a complex, autonomous, and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, and it is expected that such intelligence can bring stable, reliable, and efficient electrical energy production, transmission, and consumption.
Code of Federal Regulations, 2011 CFR
2011-01-01
... CONCERNING USE OF THE NOAA SPACE-BASED DATA COLLECTION SYSTEMS § 911.3 Definitions. For purposes of this part... data from fixed and moving platforms and provides platform location data. This system consists of... Data Processing and Distribution for the National Environmental Satellite, Data, and Information...
Stability and effectiveness of chlorine disinfectants in water distribution systems.
Olivieri, V P; Snead, M C; Krusé, C W; Kawata, K
1986-11-01
A test system for water distribution was used to evaluate the stability and effectiveness of three residual disinfectants--free chlorine, combined chlorine, and chlorine dioxide--when challenged with a sewage contaminant. The test distribution system consisted of the street main and internal plumbing for two barracks at Fort George G. Meade, MD. To the existing pipe network, 152 m (500 ft) of 13-mm (0.5 in.) copper pipe were added for sampling, and 60 m (200 ft) of 2.54-cm (1.0 in.) plastic pipe were added for circulation. The levels of residual disinfectants tested were 0.2 mg/L and 1.0 mg/L as available chlorine. In the absence of a disinfectant residual, microorganisms in the sewage contaminant were consistently recovered at high levels. The presence of any disinfectant residual reduced the microorganism level and frequency of occurrence at the consumer's tap. Free chlorine was the most effective residual disinfectant and may serve as a marker or flag in the distribution network. Free chlorine and chlorine dioxide were the least stable in the pipe network. The loss of disinfectant in the pipe network followed first-order kinetics. The half-life determined in static tests for free chlorine, chlorine dioxide, and combined chlorine was 140, 93, and 1680 min.
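A worked example using the half-lives reported above: under first-order kinetics, C(t) = C0·exp(-kt) with k = ln(2)/t_half, so the time for a residual to decay from 1.0 mg/L to 0.2 mg/L (the two levels tested) can be compared across the three disinfectants.

```python
# First-order decay of residual disinfectants, using the half-lives reported above.
from math import log

half_life_min = {"free chlorine": 140, "chlorine dioxide": 93, "combined chlorine": 1680}
c0, target = 1.0, 0.2   # mg/L as available chlorine (levels used in the study)

for name, t_half in half_life_min.items():
    k = log(2) / t_half                       # first-order rate constant, 1/min
    t_to_target = log(c0 / target) / k        # time to decay from c0 to target
    print(f"{name}: k = {k:.4f} 1/min, 1.0 -> 0.2 mg/L in {t_to_target / 60:.1f} h")
```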
Stability and effectiveness of chlorine disinfectants in water distribution systems.
Olivieri, V P; Snead, M C; Krusé, C W; Kawata, K
1986-01-01
A test system for water distribution was used to evaluate the stability and effectiveness of three residual disinfectants--free chlorine, combined chlorine, and chlorine dioxide--when challenged with a sewage contaminant. The test distribution system consisted of the street main and internal plumbing for two barracks at Fort George G. Meade, MD. To the existing pipe network, 152 m (500 ft) of 13-mm (0.5 in.) copper pipe were added for sampling, and 60 m (200 ft) of 2.54-cm (1.0 in.) plastic pipe were added for circulation. The levels of residual disinfectants tested were 0.2 mg/L and 1.0 mg/L as available chlorine. In the absence of a disinfectant residual, microorganisms in the sewage contaminant were consistently recovered at high levels. The presence of any disinfectant residual reduced the microorganism level and frequency of occurrence at the consumer's tap. Free chlorine was the most effective residual disinfectant and may serve as a marker or flag in the distribution network. Free chlorine and chlorine dioxide were the least stable in the pipe network. The loss of disinfectant in the pipe network followed first-order kinetics. The half-life determined in static tests for free chlorine, chlorine dioxide, and combined chlorine was 140, 93, and 1680 min. PMID:3028767
Can airborne ultrasound monitor bubble size in chocolate?
NASA Astrophysics Data System (ADS)
Watson, N.; Hazlehurst, T.; Povey, M.; Vieira, J.; Sundara, R.; Sandoz, J.-P.
2014-04-01
Aerated chocolate products consist of solid chocolate with the inclusion of bubbles and are a popular consumer product in many countries. The volume fraction and size distribution of the bubbles have an effect on the products' sensory properties and manufacturing cost. For these reasons it is important to have an online, real-time process monitoring system capable of measuring the bubble size distribution. As these products are eaten by consumers, it is desirable that the monitoring system be non-contact to avoid food contamination. In this work we assess the feasibility of using an airborne ultrasound system to monitor the bubble size distribution in aerated chocolate bars. Results from the airborne acoustic experiments were compared with theoretical results for known bubble size distributions computed using COMSOL Multiphysics. This combined experimental and theoretical approach is used to develop a greater understanding of how ultrasound propagates through aerated chocolate and to assess the feasibility of using airborne ultrasound to monitor bubble size distribution in these systems. The results indicated that a smaller bubble size distribution results in increased attenuation through the product.
Evaluation of the Performance of the Distributed Phased-MIMO Sonar.
Pan, Xiang; Jiang, Jingning; Wang, Nan
2017-01-11
A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments.
Evaluation of the Performance of the Distributed Phased-MIMO Sonar
Pan, Xiang; Jiang, Jingning; Wang, Nan
2017-01-01
A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments. PMID:28085071
Automation of Shuttle Tile Inspection - Engineering methodology for Space Station
NASA Technical Reports Server (NTRS)
Wiskerchen, M. J.; Mollakarimi, C.
1987-01-01
The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort has adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers test and evaluate new concepts and technologies within the operational world of Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.
NASA Astrophysics Data System (ADS)
Ding, Kun; Chan, C. T.
2018-04-01
The calculation of optical force density distribution inside a material is challenging at the nanoscale, where quantum and nonlocal effects emerge and macroscopic parameters such as permittivity become ill-defined. We demonstrate that the microscopic optical force density of nanoplasmonic systems can be defined and calculated using the microscopic fields generated using a self-consistent hydrodynamics model that includes quantum, nonlocal, and retardation effects. We demonstrate this technique by calculating the microscopic optical force density distributions and the optical binding force induced by external light on nanoplasmonic dimers. This approach works even in the limit when the nanoparticles are close enough to each other so that electron tunneling occurs, a regime in which classical electromagnetic approach fails completely. We discover that an uneven distribution of optical force density can lead to a light-induced spinning torque acting on individual particles. The hydrodynamics method offers us an accurate and efficient approach to study optomechanical behavior for plasmonic systems at the nanoscale.
Work distributions for random sudden quantum quenches
NASA Astrophysics Data System (ADS)
Łobejko, Marcin; Łuczka, Jerzy; Talkner, Peter
2017-05-01
The statistics of work performed on a system by a sudden random quench are investigated. Considering systems with finite-dimensional Hilbert spaces, we model a sudden random quench by randomly choosing elements from a Gaussian unitary ensemble (GUE) consisting of Hermitian matrices with identically Gaussian-distributed matrix elements. A probability density function (pdf) of work in terms of initial and final energy distributions is derived and evaluated for a two-level system. Explicit results are obtained for quenches with a sharply given initial Hamiltonian, while the work pdfs for quenches between Hamiltonians from two independent GUEs can only be determined in explicit form in the limits of zero and infinite temperature. The same work distribution as for a sudden random quench is obtained for an adiabatic, i.e., infinitely slow, protocol connecting the same initial and final Hamiltonians.
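A small numerical sketch of the two-point-measurement setup described above, assuming 2x2 Hermitian matrices with Gaussian entries as a GUE-style ensemble (the normalization here is chosen for illustration only) and a thermal initial state at an assumed inverse temperature:

```python
# Sample work values W = E'_j - E_i for a sudden random quench H0 -> H1 of a two-level system.
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0                                    # assumed inverse temperature

def gue2():
    """2x2 Hermitian matrix with Gaussian entries (illustrative normalization)."""
    x = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    return (x + x.conj().T) / 2.0

works = []
for _ in range(20000):
    e0, v0 = np.linalg.eigh(gue2())           # initial spectrum and eigenvectors
    e1, v1 = np.linalg.eigh(gue2())           # post-quench spectrum and eigenvectors
    p_init = np.exp(-beta * e0); p_init /= p_init.sum()
    i = rng.choice(2, p=p_init)               # first energy measurement (thermal weights)
    overlap = np.abs(v1.conj().T @ v0[:, i]) ** 2
    j = rng.choice(2, p=overlap / overlap.sum())
    works.append(float(e1[j] - e0[i]))        # work for this realization

print("mean work:", round(float(np.mean(works)), 3))
```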
Classroom Audio Distribution in the Postsecondary Setting: A Story of Universal Design for Learning
ERIC Educational Resources Information Center
Flagg-Williams, Joan B.; Bokhorst-Heng, Wendy D.
2016-01-01
Classroom Audio Distribution Systems (CADS) consist of amplification technology that enhances the teacher's, or sometimes the student's, vocal signal above the background noise in a classroom. Much research has supported the benefits of CADS for student learning, but most of it has focused on elementary school classrooms. This study investigated…
Substantiation of the Parameters of the Central Distributor for Mineral Fertilizers
ERIC Educational Resources Information Center
Nukeshev, Sayakhat O.; Eskhozhin, Kairat D.; Tokushev, Masgut H.; Zhazykbayeva, Zhazira M.
2016-01-01
The main problem of distribution systems with centralized seed metering in pneumatic planters is deficient feed-rate consistency of seeds in the supply coulters. Thus, the purpose of the study is to develop optimal ways of decreasing the irregular distribution of seeds and mineral fertilizers in the coulters. In order to achieve this…
Ji, Yanli; Wen, Jizhi; Veldhuisen, Barbera; Haer-Wigman, Lonneke; Wang, Zhen; Lodén-van Straaten, Martin; Wei, Ling; Luo, Guangping; Fu, Yongshui; van der Schoot, C Ellen
2017-02-01
Genotyping platforms for common red blood cell (RBC) antigens have been successfully applied in Caucasian and black populations but not in Chinese populations. In this study, a genotyping assay based on multiplex ligation-dependent probe amplification (MLPA) technology was applied in a Chinese population to validate the MLPA probes. Subsequently, the comprehensive distribution of 17 blood group systems was also obtained. DNA samples from 200 Chinese donors were extracted and genotyped using the blood-MLPA assay. To confirm the MLPA results, a second independent genotyping assay (ID Core+) was conducted in 40 donors, and serological typing of 14 blood-group antigens was performed in 91 donors. In donors who had abnormal copy numbers of an allele (DI and GYPB) determined by MLPA, additional experiments were performed (polymerase chain reaction, sequencing, and flow cytometry analysis). The genotyping results obtained using the blood-MLPA and ID Core+ assays were consistent. Serological data were consistent with the genotyping results except for one donor who had a Lu(a-b-) phenotype. Of the 17 blood group systems, the distribution of the MNS, Duffy, Kidd, Diego, Yt, and Dombrock systems was polymorphic. The Mur and St(a) antigens of the MNS system were distributed with a frequency of 9% (18 of 200) and 2% (4 of 200), respectively. One donor with chimerism and one who carried a novel DI*02(A845V) allele, which predicts the depression of Di(b) antigen expression, were identified. The blood-MLPA assay could easily identify the common blood-group alleles and correctly predicted phenotypes in the Chinese population. The Mur and St(a) antigens were distributed with high frequency in a Southern Chinese Han population. © 2016 AABB.
NASA Technical Reports Server (NTRS)
Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
Software Framework for Peer Data-Management Services
NASA Technical Reports Server (NTRS)
Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy
2007-01-01
Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
Fault-tolerant clock synchronization in distributed systems
NASA Technical Reports Server (NTRS)
Ramanathan, Parameswaran; Shin, Kang G.; Butler, Ricky W.
1990-01-01
Existing fault-tolerant clock synchronization algorithms are compared and contrasted. These include the following: software synchronization algorithms, such as convergence-averaging, convergence-nonaveraging, and consistency algorithms, as well as probabilistic synchronization; hardware synchronization algorithms; and hybrid synchronization. The worst-case clock skews guaranteed by representative algorithms are compared, along with other important aspects such as time, message, and cost overhead imposed by the algorithms. More recent developments such as hardware-assisted software synchronization and algorithms for synchronizing large, partially connected distributed systems are especially emphasized.
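As a concrete illustration of the convergence-averaging family surveyed above, the sketch below has each node discard the k largest and k smallest peer readings (to mask up to k faulty clocks) and set its clock to the mean of the rest; the clock values and read errors are hypothetical.

```python
# One round of a fault-tolerant averaging resynchronization (illustrative values only).
import numpy as np

def resync(clock_readings, k):
    """clock_readings[i][j] = node i's view of node j's clock; returns new clocks."""
    new_clocks = []
    for readings in clock_readings:
        trimmed = np.sort(readings)[k:len(readings) - k]   # drop k extremes on each side
        new_clocks.append(trimmed.mean())
    return np.array(new_clocks)

# Five nodes, one of them (index 4) faulty and wildly off; small read errors assumed.
rng = np.random.default_rng(4)
true_clocks = np.array([100.0, 100.2, 99.9, 100.1, 250.0])
views = true_clocks + rng.normal(0.0, 0.05, (5, 5))        # each row: one node's readings
print("skew among good nodes before:", round(np.ptp(true_clocks[:4]), 2))
print("skew among good nodes after :", round(np.ptp(resync(views, k=1)[:4]), 2))
```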
Distributed state machine supervision for long-baseline gravitational-wave detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rollins, Jameson Graef, E-mail: jameson.rollins@ligo.org
The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two identical yet independent, widely separated, long-baseline gravitational-wave detectors. Each Advanced LIGO detector consists of complex optical-mechanical systems isolated from the ground by multiple layers of active seismic isolation, all controlled by hundreds of fast, digital, feedback control systems. This article describes a novel state machine-based automation platform developed to handle the automation and supervisory control challenges of these detectors. The platform, called Guardian, consists of distributed, independent, state machine automaton nodes organized hierarchically for full detector control. User code is written in standard Python and the platform is designed to facilitate the fast-paced development process associated with commissioning the complicated Advanced LIGO instruments. While developed specifically for the Advanced LIGO detectors, Guardian is a generic state machine automation platform that is useful for experimental control at all levels, from simple table-top setups to large-scale multi-million dollar facilities.
Stochastic parameter estimation in nonlinear time-delayed vibratory systems with distributed delay
NASA Astrophysics Data System (ADS)
Torkamani, Shahab; Butcher, Eric A.
2013-07-01
The stochastic estimation of parameters and states in linear and nonlinear time-delayed vibratory systems with distributed delay is explored. The approach consists of first employing a continuous time approximation to approximate the delayed integro-differential system with a large set of ordinary differential equations having stochastic excitations. Then the problem of state and parameter estimation in the resulting stochastic ordinary differential system is represented as an optimal filtering problem using a state augmentation technique. By adapting the extended Kalman-Bucy filter to the augmented filtering problem, the unknown parameters of the time-delayed system are estimated from noise-corrupted, possibly incomplete measurements of the states. Similarly, the upper bound of the distributed delay can also be estimated by the proposed technique. As an illustrative example to a practical problem in vibrations, the parameter, delay upper bound, and state estimation from noise-corrupted measurements in a distributed force model widely used for modeling machine tool vibrations in the turning operation is investigated.
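The sketch below illustrates only the state-augmentation idea, not the paper's delayed-system formulation or continuous-time approximation: an unknown decay parameter is appended to the state of a simple driven first-order system and estimated with a discrete extended Kalman filter from noisy measurements. All numerical values are assumed for illustration.

```python
# State augmentation + EKF: estimate the unknown parameter a in x' = -a*x + u from noisy x.
import numpy as np

rng = np.random.default_rng(5)
dt, a_true, steps = 0.01, 2.0, 5000
x, t = 0.0, 0.0
z = np.array([0.0, 0.5])                      # augmented estimate [x, a]; a starts far from 2.0
P = np.diag([1.0, 1.0])
Q = np.diag([1e-8, 1e-8])                     # assumed small process noise
R = 1e-2                                      # assumed measurement noise variance
H = np.array([[1.0, 0.0]])                    # we measure x only

for _ in range(steps):
    u = np.sin(t); t += dt
    x += dt * (-a_true * x + u)               # true (simulated) system
    y = x + rng.normal(0.0, R ** 0.5)         # noisy measurement of x
    # Predict: f(z) = [x + dt*(-a*x + u), a]; F is the Jacobian of f w.r.t. [x, a].
    F = np.array([[1.0 - dt * z[1], -dt * z[0]], [0.0, 1.0]])
    z = np.array([z[0] + dt * (-z[1] * z[0] + u), z[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement y.
    S = (H @ P @ H.T).item() + R
    K = (P @ H.T / S).ravel()
    z = z + K * (y - z[0])
    P = (np.eye(2) - np.outer(K, H.ravel())) @ P

print("estimated parameter a:", round(z[1], 2), "(true value 2.0)")
```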
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performance of the proposed methods.
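The paper's own iterative algorithms are not reproduced here; the sketch below shows a related, commonly used distributed-WLS pattern in which each sub-system runs average consensus on its local normal-equation terms over a ring network and then recovers the same global estimate locally.

```python
# Distributed WLS via average consensus on local terms (A_i^T W_i A_i, A_i^T W_i y_i).
import numpy as np

rng = np.random.default_rng(6)
n_nodes, n_params = 6, 3
theta_true = rng.normal(size=n_params)

# Local data: each node holds a linear measurement y_i = A_i @ theta + noise (W_i = I assumed).
A = [rng.normal(size=(4, n_params)) for _ in range(n_nodes)]
y = [a @ theta_true + 0.05 * rng.normal(size=4) for a in A]
G = [a.T @ a for a in A]
b = [a.T @ yi for a, yi in zip(A, y)]

# Average consensus on a ring: each node mixes equally with its two neighbours.
for _ in range(200):
    G = [(G[i] + G[i - 1] + G[(i + 1) % n_nodes]) / 3.0 for i in range(n_nodes)]
    b = [(b[i] + b[i - 1] + b[(i + 1) % n_nodes]) / 3.0 for i in range(n_nodes)]

theta_node0 = np.linalg.solve(G[0], b[0])     # every node ends up with (nearly) the same estimate
print("error at node 0:", round(float(np.linalg.norm(theta_node0 - theta_true)), 4))
```

Because the consensus averages converge to the network-wide sums (up to a common factor), solving the averaged normal equations at any node reproduces the centralized WLS solution.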
Virtual Collaborative Environments for System of Systems Engineering and Applications for ISAT
NASA Technical Reports Server (NTRS)
Dryer, David A.
2002-01-01
This paper describes a system-of-systems, or metasystems, approach and models developed to help prepare engineering organizations for distributed engineering environments. Changes in engineering enterprises include competition in increasingly global environments, new partnering opportunities created by advances in information and communication technologies, and virtual collaboration issues associated with dispersed teams. To help address challenges and needs in this environment, a framework is proposed that can be customized and adapted for NASA to assist in improved engineering activities conducted in distributed, enhanced engineering environments. The approach is designed to prepare engineers for such distributed collaborative environments by learning and applying e-engineering methods and tools to a real-world engineering development scenario. The approach consists of two phases: an e-engineering basics phase and an e-engineering application phase. The e-engineering basics phase addresses skills required for e-engineering. The e-engineering application phase applies these skills in a distributed collaborative environment to system development projects.
Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés
2015-02-25
This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems.
Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Simarro, Raúl; Benet, Ginés
2015-01-01
This paper is part of a study of intelligent architectures for distributed control and communications systems. The study focuses on optimizing control systems by evaluating the performance of middleware through quality of service (QoS) parameters and the optimization of control using Quality of Control (QoC) parameters. The main aim of this work is to study, design, develop, and evaluate a distributed control architecture based on the Data-Distribution Service for Real-Time Systems (DDS) communication standard as proposed by the Object Management Group (OMG). As a result of the study, an architecture called Frame-Sensor-Adapter to Control (FSACtrl) has been developed. FSACtrl provides a model to implement an intelligent distributed Event-Based Control (EBC) system with support for measuring QoS and QoC parameters. The novelty consists of simultaneously using the measured QoS and QoC parameters to make decisions about the control action with a new method called the Event Based Quality Integral Cycle. To validate the architecture, the first five Braitenberg vehicles have been implemented using the FSACtrl architecture. The experimental outcomes demonstrate the convenience of using QoS and QoC parameters jointly in distributed control systems. PMID:25723145
NASA Astrophysics Data System (ADS)
Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru
A web-based distributed cooperative development environment for a sign-language animation system has been developed. We extended the previous animation system, which was constructed as a three-tiered system consisting of a sign-language animation interface layer, a sign-language data processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, are added to the previous system. The system supports a humanoid-model avatar for interoperability and can use stored sign-language animation data shared in the database. The evaluation of the system showed that the inverse kinematics function of the web client improves sign-language animation authoring.
NASA Astrophysics Data System (ADS)
Kihm, Seoneui; Seo, Seongu; Yoon, Suk-jin
2018-01-01
The presence of "anisotropic satellite distribution (ASD)" around massive galaxies is often taken as evidence against the ΛCDM cosmology. To address whether such anisotropy can be reconciled with the standard cosmology, we examine the spatial distributions of satellites around central galaxies in the hydrodynamic cosmological simulation Illustris. In an attempt to understand the ASD of our Galaxy, we limit our analysis to systems consisting of a MW-sized host and at least 11 satellites. We find that ASDs are a rather common feature in the simulation and that ASD systems tend to possess a larger fraction of recently accreted satellites than isotropic systems do. We discuss a possible link between ASD formation and the surrounding environment in the ΛCDM setting.
Liu, Wei; Huang, Jie
2018-03-01
This paper studies the cooperative global robust output regulation problem for a class of heterogeneous second-order nonlinear uncertain multiagent systems with jointly connected switching networks. The main contributions consist of the following three aspects. First, we generalize the result of the adaptive distributed observer from undirected jointly connected switching networks to directed jointly connected switching networks. Second, by performing a new coordinate and input transformation, we convert our problem into the cooperative global robust stabilization problem of a more complex augmented system via the distributed internal model principle. Third, we solve the stabilization problem by a distributed state feedback control law. Our result is illustrated by the leader-following consensus problem for a group of Van der Pol oscillators.
Julia, Chantal; Ducrot, Pauline; Péneau, Sandrine; Deschamps, Valérie; Méjean, Caroline; Fézeu, Léopold; Touvier, Mathilde; Hercberg, Serge; Kesse-Guyot, Emmanuelle
2015-09-28
Our objectives were to assess the performance of the 5-Colour nutrition label (5-CNL), a front-of-pack nutrition label based on the Food Standards Agency nutrient profiling system, in discriminating the nutritional quality of foods currently on the market in France, and its consistency with French nutritional recommendations. The nutritional composition of 7777 foods available on the French market was retrieved from the web-based collaborative project Open Food Facts. The distribution of products across the 5-CNL categories according to food groups, as arranged on supermarket shelves, was assessed. The distribution of similar products from different brands across the 5-CNL categories was also assessed. Discriminating performance was considered as the number of colour categories present in each food group. In the case of discrepancies between the category allocation and French nutritional recommendations, adaptations of the original score were proposed. Overall, the distribution of foodstuffs in the 5-CNL categories was consistent with French recommendations: 95.4% of 'Fruits and vegetables' and 72.5% of 'Cereals and potatoes' were classified as 'Green' or 'Yellow', whereas 86.0% of 'Sugary snacks' were classified as 'Pink' or 'Red'. Adaptations to the original FSA score computation model were necessary for beverages, added fats, and cheese in order to be consistent with French official nutritional recommendations. The 5-CNL label displays high performance in discriminating the nutritional quality of foods across food groups, within a food group, and for similar products from different brands. Adaptations from the original model were necessary to maintain consistency with French recommendations and the high performance of the system.
NASA'S Earth Science Data Stewardship Activities
NASA Technical Reports Server (NTRS)
Lowe, Dawn R.; Murphy, Kevin J.; Ramapriyan, Hampapuram
2015-01-01
NASA has been collecting Earth observation data for over 50 years using instruments on board satellites, aircraft, and ground-based systems. With the inception of the Earth Observing System (EOS) Program in 1990, NASA established the Earth Science Data and Information System (ESDIS) Project and initiated development of the Earth Observing System Data and Information System (EOSDIS). A set of Distributed Active Archive Centers (DAACs) was established at locations based on science discipline expertise. Today, EOSDIS consists of 12 DAACs and 12 Science Investigator-led Processing Systems (SIPS), processing data from the EOS missions, as well as the Suomi National Polar Orbiting Partnership mission, and other satellite and airborne missions. The DAACs archive and distribute the vast majority of data from NASA’s Earth science missions, with data holdings exceeding 12 petabytes. The data held by EOSDIS are available to all users consistent with NASA’s free and open data policy, which has been in effect since 1990. The EOSDIS archives consist of raw instrument data counts (level 0 data), as well as higher level standard products (e.g., geophysical parameters, products mapped to standard spatio-temporal grids, results of Earth system models using multi-instrument observations, and long time series of Earth System Data Records resulting from multiple satellite observations of a given type of phenomenon). EOSDIS data stewardship responsibilities include ensuring that the data and information content are reliable, of high quality, easily accessible, and usable for as long as they are considered to be of value.
Using Ada to implement the operations management system in a community of experts
NASA Technical Reports Server (NTRS)
Frank, M. S.
1986-01-01
An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge based systems technology. Part of the foundation of an Ada based distributed expert system was accomplished in the form of a proof of concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.
The embedded operating system project
NASA Technical Reports Server (NTRS)
Campbell, R. H.
1984-01-01
This progress report describes research towards the design and construction of embedded operating systems for real-time advanced aerospace applications. The applications concerned require reliable operating system support that must accommodate networks of computers. The report addresses the problems of constructing such operating systems, the communications media, reconfiguration, consistency and recovery in a distributed system, and the issues of realtime processing. A discussion is included on suitable theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems. In particular, this report addresses: atomic actions, fault tolerance, operating system structure, program development, reliability and availability, and networking issues. This document reports the status of various experiments designed and conducted to investigate embedded operating system design issues.
Development of the radial neutron camera system for the HL-2A tokamak
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi
2016-06-15
A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure spatially and time-resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera have been successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced from beam-target reactions in the plasma core region. The measurement results from the neutron camera are in good agreement with the results of both a standard ²³⁵U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.
NASA's Earth Science Data Systems
NASA Technical Reports Server (NTRS)
Ramapriyan, H. K.
2015-01-01
NASA's Earth Science Data Systems (ESDS) Program has evolved over the last two decades, and currently has several core and community components. Core components provide the basic operational capabilities to process, archive, manage and distribute data from NASA missions. Community components provide a path for peer-reviewed research in Earth Science Informatics to feed into the evolution of the core components. The Earth Observing System Data and Information System (EOSDIS) is a core component consisting of twelve Distributed Active Archive Centers (DAACs) and eight Science Investigator-led Processing Systems spread across the U.S. The presentation covers how the ESDS Program continues to evolve and benefits from as well as contributes to advances in Earth Science Informatics.
IEA/SPS 500 kW distributed collector system
NASA Technical Reports Server (NTRS)
Neumann, T. W.; Hartman, C. D.
1980-01-01
Engineering studies for an International Energy Agency project for the design and construction of a 500 kW solar thermal electric power generation system of the distributed collector system (DCS) type are reviewed. The DCS system design consists of a mixed field of parabolic trough type solar collectors which are used to heat a thermal heat transfer oil. Heated oil is delivered to a thermocline storage tank from which heat is extracted and delivered to a boiler by a second heat transfer loop using the same heat transfer oil. Steam is generated in the boiler, expanded through a steam turbine, and recirculated through a condenser system cooled by a wet cooling tower.
NASA Technical Reports Server (NTRS)
1982-01-01
Farmers are increasingly turning to aerial applications of pesticides, fertilizers and other materials. Sometimes uneven distribution of the chemicals is caused by worn nozzles, improper alignment of spray nozzles or system leaks. If this happens, the job must be redone, with added expense to both the pilot and the customer. Traditional pattern analysis techniques take days or weeks. Utilizing NASA's wind tunnel and computer validation technology, Dr. Roth, Oklahoma State University (OSU), developed a system for providing answers within minutes. Called the Rapid Distribution Pattern Evaluation System, the OSU system consists of a 100-foot measurement frame tied in to computerized analysis and readout equipment. The system is mobile, delivered by trailer to airfields in agricultural areas where OSU conducts educational "fly-ins." A fly-in typically draws 50 to 100 aerial applicators, researchers, chemical suppliers and regulatory officials. An applicator can have his spray pattern checked. A computerized readout, available in five to 12 minutes, provides information for correcting shortcomings in the distribution pattern.
RF-based power distribution system for optogenetic experiments
NASA Astrophysics Data System (ADS)
Filipek, Tomasz A.; Kasprowicz, Grzegorz H.
2017-08-01
In this paper, a wireless power distribution system for optogenetic experiments is demonstrated. The design and analysis of the power transfer system are described in detail. The architecture is outlined in the context of the performance requirements that had to be met. We show how to design a wireless power transfer system using resonant coupling circuits, consisting of a number of receivers and one transmitter covering the entire cage area with a specific power density. The transmitter design, with a fully automated protection stage, is described with detailed consideration of the specification and construction of the transmitting loop antenna. In addition, the design of the receiver is described, including simplification of the implementation and minimization of the impact of component tolerances on the performance of the distribution system. The analysis is confirmed by calculations and measurement results. The presented distribution system was designed to provide a 100 mW power supply to each of the ten possible receivers in a limited 490 x 350 mm cage space while using a single transmitter working at a coupling resonant frequency of 27 MHz.
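As a point of reference for the resonant-coupling design discussed above, the standard LC resonance relation fixes the loop capacitance once an inductance is chosen. In the worked line below only the 27 MHz operating frequency comes from the abstract; the 1 uH loop inductance is an assumed round number.

```latex
% Illustrative design point: f_0 = 27 MHz from the abstract, L = 1 uH assumed.
f_0 = \frac{1}{2\pi\sqrt{LC}}
\quad\Longrightarrow\quad
C = \frac{1}{(2\pi f_0)^2 L}
  = \frac{1}{\left(2\pi \times 27\ \mathrm{MHz}\right)^2 \times 1\ \mu\mathrm{H}}
  \approx 35\ \mathrm{pF}.
```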
Performance Monitoring of Residential Hot Water Distribution Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Anna; Lanzisera, Steven; Lutz, Jim
Current water distribution systems are designed such that users need to run the water for some time to achieve the desired temperature, wasting energy and water in the process. We developed a wireless sensor network for large-scale, long time-series monitoring of residential water end use. Our system consists of flow meters connected to wireless motes transmitting data to a central manager mote, which in turn posts data to our server via the internet. This project also demonstrates a reliable and flexible data collection system that could be configured for various other forms of end-use metering in buildings. The purpose of this study was to determine water and energy use and waste in hot water distribution systems in California residences. We installed meters at every end-use point and at the water heater in 20 homes and collected 1 s flow and temperature data over an 8 month period. For typical shower and dishwasher events, approximately half the energy is wasted. This relatively low efficiency highlights the importance of further examining the energy and water waste in hot water distribution systems.
A fragmentation model of earthquake-like behavior in internet access activity
NASA Astrophysics Data System (ADS)
Paguirigan, Antonino A.; Angco, Marc Jordan G.; Bantang, Johnrob Y.
We present a fragmentation model that generates almost any inverse power-law size distribution, including dual-scaled versions, consistent with the underlying dynamics of systems with earthquake-like behavior. We apply the model to explain the dual-scaled power-law statistics observed in an Internet access dataset that covers more than 32 million requests. The non-Poissonian statistics of the requested data sizes m and the amount of time τ needed for complete processing are consistent with the Gutenberg-Richter law. Inter-event times δt between subsequent requests are also shown to exhibit power-law distributions consistent with the generalized Omori law. Thus, the dataset is similar to earthquake data except that two power-law regimes are observed. Using the proposed model, we are able to identify the underlying dynamics responsible for generating the observed dual power-law distributions. The model is general enough to apply to any physical or human dynamics limited by finite resources such as space, energy, time or opportunity.
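A common first step when checking Gutenberg-Richter-like statistics of the kind described above is a maximum-likelihood fit of the power-law tail exponent. The sketch below is not the authors' fragmentation model; it is the standard continuous (Hill-type) estimator applied to synthetic sizes, with all numbers illustrative.

```python
import numpy as np

# MLE of a power-law tail exponent alpha for sizes m >= m_min, p(m) ~ m^(-alpha).
def powerlaw_alpha(sizes, m_min):
    tail = np.asarray([m for m in sizes if m >= m_min], dtype=float)
    return 1.0 + len(tail) / np.sum(np.log(tail / m_min))

# Synthetic test: draw from p(m) ~ m^(-2.5), m_min = 1, by inverse-transform sampling.
rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)
m = (1.0 - u) ** (-1.0 / 1.5)          # CDF inversion for alpha = 2.5
print(powerlaw_alpha(m, m_min=1.0))    # approximately 2.5
```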
NASA Astrophysics Data System (ADS)
Lytvtnenko, D. M.; Slyusarenko, Yu. V.; Kirdin, A. I.
2012-10-01
A consistent theory of the equilibrium states of like-sign charges above the surface of a liquid dielectric film located on a solid substrate, in the presence of an external attracting constant electric field, is proposed. The approach is based on the Thomas-Fermi model, generalized to the systems under consideration, and on the variational principle. The use of a self-consistent field model allows a theory to be formulated that contains no adjustable constants. Within the variational principle we obtain the self-consistency equations for the parameters describing the system: the distribution function of charges above the liquid dielectric surface, the electrostatic field potentials in all regions of the system, and the surface profile of the liquid dielectric. The self-consistency equations are used to describe the phase transition associated with the formation of spatially periodic structures in the system of charges on the liquid dielectric surface. Assuming a non-degenerate gas of charges above the surface of the liquid dielectric film, solutions of the self-consistency equations near the critical point are obtained. For the symmetric phase we obtain expressions for the potentials and electric fields in all regions of the studied system, and derive the distribution of the charges above the surface of the liquid dielectric film. The system parameters of the phase transition to the non-symmetric phase, i.e. states with spatially periodic ordering, are obtained. We derive an expression determining the period of the two-dimensional lattice as a function of the physical parameters of the problem: the temperature, the external attractive electric field, the number of electrons per unit area of the flat surface of the liquid dielectric, the density of the dielectric, its surface tension and permittivity, and the permittivity of the solid substrate. The possibility of generalizing the developed theory to the case of a degenerate gas of like-charged particles above the liquid dielectric surface is discussed.
Coherent Frequency Reference System for the NASA Deep Space Network
NASA Technical Reports Server (NTRS)
Tucker, Blake C.; Lauf, John E.; Hamell, Robert L.; Gonzaler, Jorge, Jr.; Diener, William A.; Tjoelker, Robert L.
2010-01-01
The NASA Deep Space Network (DSN) requires state-of-the-art frequency references that are derived and distributed from very stable atomic frequency standards. A new Frequency Reference System (FRS) and Frequency Reference Distribution System (FRD) have been developed, which together replace the previous Coherent Reference Generator System (CRG). The FRS and FRD each provide new capabilities that significantly improve operability and reliability. The FRS allows for selection and switching between frequency standards, a flywheel capability (to avoid interruptions when switching frequency standards), and a frequency synthesis system (to generate standardized 5-, 10-, and 100-MHz reference signals). The FRS is powered by redundant, specially filtered, and sustainable power systems and includes a monitor and control capability for station operations to interact and control the frequency-standard selection process. The FRD receives the standardized 5-, 10-, and 100-MHz reference signals and distributes signals to distribution amplifiers in a fan-out fashion to dozens of DSN users that require the highly stable reference signals. The FRD is also powered by redundant, specially filtered, and sustainable power systems. The new DSN Frequency Distribution System, which consists of the FRS and FRD systems described here, is central to all operational activities of the NASA DSN. The frequency generation and distribution system provides ultra-stable, coherent, and very low phase-noise references at 5, 10, and 100 MHz to between 60 and 100 separate users at each Deep Space Communications Complex.
NASA Astrophysics Data System (ADS)
Diego, Jose M.; Schmidt, Kasper B.; Broadhurst, Tom; Lam, Daniel; Vega-Ferrero, Jesús; Zheng, Wei; Lee, Slanger; Morishita, Takahiro; Bernstein, Gary; Lim, Jeremy; Silk, Joseph; Ford, Holland
2018-02-01
We derive a free-form mass distribution for the unrelaxed cluster A370 (z = 0.375), using the first release of the Hubble Frontier Fields images (76 orbits) and GLASS spectroscopy. Starting from a reliable set of 10 multiply lensed systems, we produce a free-form lens model that identifies ≈80 multiple images. Good consistency is found between models using independent subsamples of these lensed systems, with detailed agreement for the well-resolved arcs. The mass distribution has two very similar concentrations centred on the two prominent brightest cluster galaxies (or BCGs), with mass profiles that are accurately constrained by a uniquely useful system of long radially lensed images centred on both BCGs. We show that the lensing mass profiles of these BCGs are mainly accounted for by their stellar mass profiles, with a modest contribution from dark matter within r < 100 kpc of each BCG. This conclusion may favour a cooled cluster gas origin for BCGs, rather than via mergers of normal galaxies for which dark matter should dominate over stars. Growth via merging between BCGs is, however, consistent with this finding, so that stars still dominate over dark matter. We do not observe any significant offset between the positions of the peaks of the dark matter distribution and the light distribution.
Research study on multi-KW-DC distribution system
NASA Technical Reports Server (NTRS)
Berkery, E. A.; Krausz, A.
1975-01-01
A detailed definition of the HVDC test facility and the equipment required to implement the test program are provided. The basic elements of the test facility are illustrated, and consist of: the power source, conventional and digital supervision and control equipment, power distribution harness and simulated loads. The regulated dc power supplies provide steady-state power up to 36 KW at 120 VDC. Power for simulated line faults will be obtained from two banks of 90 ampere-hour lead-acid batteries. The relative merits of conventional and multiplexed power control will be demonstrated by the Supervision and Monitor Unit (SMU) and the Automatically Controlled Electrical Systems (ACES) hardware. The distribution harness is supported by a metal duct which is bonded to all component structures and functions as the system ground plane. The load banks contain passive resistance and reactance loads, solid state power controllers and active pulse width modulated loads. The HVDC test facility is designed to simulate a power distribution system for large aerospace vehicles.
Distributed weighted least-squares estimation with fast convergence for large-scale systems☆
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods. PMID:25641976
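As a rough illustration of the kind of neighbourhood-communication iteration described above (not the authors' algorithm), the sketch below runs a damped block-Jacobi sweep on the weighted normal equations, with each sub-system updating only its own parameter block; the problem data, block sizes, and damping value are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blocks, blk, rows = 3, 2, 4                    # 3 sub-systems, 2 parameters each
A = np.zeros((n_blocks * rows, n_blocks * blk))
for i in range(n_blocks):                        # each sub-system measures mainly its own block...
    A[i*rows:(i+1)*rows, i*blk:(i+1)*blk] = rng.normal(size=(rows, blk))
A += 0.1 * rng.normal(size=A.shape)              # ...plus weak coupling to neighbouring blocks
W = np.diag(1.0 / rng.uniform(0.5, 2.0, size=n_blocks * rows))   # measurement weights
x_true = rng.normal(size=n_blocks * blk)
y = A @ x_true + rng.normal(scale=0.1, size=n_blocks * rows)

H, b = A.T @ W @ A, A.T @ W @ y                  # weighted normal equations H x = b
x = np.zeros(n_blocks * blk)
omega = 0.8                                      # scaling (damping) parameter

for _ in range(200):                             # damped block-Jacobi sweep
    x_new = x.copy()
    for i in range(n_blocks):
        s = slice(i * blk, (i + 1) * blk)
        r = b[s] - H[s, :] @ x                   # needs only neighbours' current estimates
        x_new[s] = x[s] + omega * np.linalg.solve(H[s, s], r)
    x = x_new

print(np.allclose(x, np.linalg.solve(H, b)))     # matches the centralized WLS solution
```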
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
Solar-heated municipal swimming pools, a case study: Dade County, Florida
NASA Astrophysics Data System (ADS)
Levin, M.
1981-09-01
The installation of a solar energy system to heat the water in the swimming pool at one of Dade County, Florida's major parks is described. The mechanics of solar-heated swimming pools are explained. The solar heating system consists of 216 unglazed polypropylene tube collectors, a differential thermostat, and the distribution system. The system's performance and economics, as well as future plans, are discussed.
NASA Astrophysics Data System (ADS)
Zhang, Daili
Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
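To make the fully factorized Boyen-Koller idea mentioned above concrete, the toy step below propagates a two-variable binary DBN exactly for one step and then keeps only the per-variable marginals; the transition tables are invented for illustration and are not from the dissertation.

```python
import numpy as np

# P(x1' | x1, x2) and P(x2' | x1, x2) for binary variables (made-up tables).
T1 = np.array([[[0.9, 0.1], [0.6, 0.4]],
               [[0.3, 0.7], [0.2, 0.8]]])   # T1[x1, x2, x1']
T2 = np.array([[[0.8, 0.2], [0.5, 0.5]],
               [[0.4, 0.6], [0.1, 0.9]]])   # T2[x1, x2, x2']

def bk_step(m1, m2):
    joint = np.outer(m1, m2)                 # factored prior over (x1, x2)
    new1 = np.einsum('ab,abc->c', joint, T1) # exact one-step marginal of x1'
    new2 = np.einsum('ab,abc->c', joint, T2) # exact one-step marginal of x2'
    return new1, new2                        # projection: keep only the marginals

m1, m2 = np.array([1.0, 0.0]), np.array([0.5, 0.5])
for _ in range(5):
    m1, m2 = bk_step(m1, m2)
print(m1, m2)
```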
Establishment of key grid-connected performance index system for integrated PV-ES system
NASA Astrophysics Data System (ADS)
Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.
2016-08-01
In order to further promote the integrated, optimized operation of distributed new energy, energy storage, and active loads, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network and analyzes typical structures and configuration selection for the integrated PV-ES generation system. By combining practical grid-connected characteristic requirements with the technical standard specifications for photovoltaic generation systems, and taking full account of the energy storage system, this paper proposes several new grid-connected performance indexes, such as paralleled current-sharing characteristic, parallel response consistency, adjusting characteristic, virtual moment-of-inertia characteristic, and on-grid/off-grid switching characteristic. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of the integrated PV-ES system.
NASA Technical Reports Server (NTRS)
1999-01-01
This document describes the design of the leading edge suction system for flight demonstration of hybrid laminar flow control on the Boeing 757 airplane. The exterior pressures on the wing surface and the required suction quantity and distribution were determined in previous work. A system consisting of porous skin, sub-surface spanwise passages ("flutes"), pressure regulating screens and valves, collection fittings, ducts and a turbocompressor was defined to provide the required suction flow. Provisions were also made for flexible control of suction distribution and quantity for HLFC research purposes. Analysis methods for determining pressure drops and flow for transpiration heating for thermal anti-icing are defined. The control scheme used to observe and modulate suction distribution in flight is described.
Constraining Binary Asteroid Mass Distributions Based On Mutual Motion
NASA Astrophysics Data System (ADS)
Davis, Alex B.; Scheeres, Daniel J.
2017-06-01
The mutual gravitational potential and torques of binary asteroid systems result in a complex coupling of attitude and orbital motion based on the mass distribution of each body. For a doubly-synchronous binary system, observations of the mutual motion can be leveraged to identify and measure the unique mass distributions of each body. By implementing arbitrary shape and order computation of the full two-body problem (F2BP) equilibria, we study the influence of asteroid asymmetries on the separation and orientation of a doubly-synchronous system. Additionally, simulations of binary systems perturbed from doubly-synchronous behavior are studied to understand the effects of mass distribution perturbations on precession and nutation rates, such that unique behaviors can be isolated and used to measure asteroid mass distributions. We apply our investigation to the Trojan binary asteroid system 617 Patroclus and Menoetius (1906 VY), which will be the final flyby target of the recently announced LUCY Discovery mission in March 2033. This binary asteroid system is of particular interest due to the results of a recent stellar occultation study (DPS 46, id.506.09), which suggest that the system is doubly-synchronous and consists of two similarly sized oblate ellipsoids, in addition to suggesting the presence of mass asymmetries resulting from an impact crater on the southern limb of Menoetius.
Augmentation of the space station module power management and distribution breadboard
NASA Technical Reports Server (NTRS)
Walls, Bryan; Hall, David K.; Lollar, Louis F.
1991-01-01
The space station module power management and distribution (SSM/PMAD) breadboard models power distribution and management, including scheduling, load prioritization, and a fault detection, identification, and recovery (FDIR) system within a Space Station Freedom habitation or laboratory module. This 120 VDC system is capable of distributing up to 30 kW of power among more than 25 loads. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level consists of fast, simple (from a computing standpoint) switchgear that is capable of quickly safing the system. At the next level are local load center processors (LLPs), which execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. Above the LLPs are three cooperating artificial intelligence (AI) systems which manage load prioritization, load scheduling, load shedding, and fault recovery and management. Recent upgrades to hardware and modifications to software at both the LLP and AI system levels promise a drastic increase in speed, a significant increase in functionality and reliability, and potential for further examination of advanced automation techniques. The background, SSM/PMAD, the interface to the Lewis Research Center test bed, the large autonomous spacecraft electrical power system, and future plans are discussed.
Universal distribution of component frequencies in biological and technological systems
Pang, Tin Yau; Maslov, Sergei
2013-01-01
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195
Job monitoring on DIRAC for Belle II distributed computing
NASA Astrophysics Data System (ADS)
Kato, Yuji; Hayasaka, Kiyoshi; Hara, Takanori; Miyake, Hideki; Ueda, Ikuo
2015-12-01
We developed a monitoring system for Belle II distributed computing, which consists of active and passive methods. In this paper we describe the passive monitoring system, where information stored in the DIRAC database is processed and visualized. We divide the DIRAC workload management flow into steps and store characteristic variables which indicate issues. These variables are chosen carefully based on our experiences, then visualized. As a result, we are able to effectively detect issues. Finally, we discuss the future development for automating log analysis, notification of issues, and disabling problematic sites.
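A minimal sketch of the "passive" monitoring idea, under the assumption of a toy job table; the schema and thresholds here are hypothetical and do not reflect DIRAC's actual database layout.

```python
import sqlite3

# Hypothetical job table: one row per job with its execution site and final status.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE jobs (site TEXT, status TEXT)")
con.executemany("INSERT INTO jobs VALUES (?, ?)",
                [("SITE_A", "Done")] * 95 + [("SITE_A", "Failed")] * 5 +
                [("SITE_B", "Done")] * 60 + [("SITE_B", "Failed")] * 40)

rows = con.execute("""
    SELECT site,
           1.0 * SUM(status = 'Failed') / COUNT(*) AS fail_rate
    FROM jobs GROUP BY site
""").fetchall()

THRESHOLD = 0.2                       # assumed alerting threshold
for site, rate in rows:
    if rate > THRESHOLD:
        print(f"issue detected at {site}: failure rate {rate:.0%}")
```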
Application of a Fiber Optic Distributed Strain Sensor System to Woven E-Glass Composite
NASA Technical Reports Server (NTRS)
Anastasi, Robert F.; Lopatin, Craig
2001-01-01
A distributed strain sensing system utilizing a series of identically written Bragg gratings along an optical fiber is examined for potential application to Composite Armored Vehicle health monitoring. A vacuum assisted resin transfer molding process was used to fabricate a woven fabric E-glass/composite panel with an embedded fiber optic strain sensor. Test samples machined from the panel were mechanically tested in 4-point bending. Experimental results are presented that show the mechanical strain from foil strain gages comparing well to optical strain from the embedded sensors. Also, it was found that the distributed strain along the sample length was consistent with the loading configuration.
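For context on how a Bragg-grating wavelength shift maps to strain, the textbook relation below is often used; the photo-elastic coefficient and operating wavelength are typical values, not parameters reported in this work.

```latex
% Standard FBG strain response (p_e ~ 0.22 is a typical value for silica fibre):
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon
\quad\Rightarrow\quad
\Delta\lambda_B \approx 0.78 \times 1550\ \mathrm{nm} \times 10^{-6}
  \approx 1.2\ \mathrm{pm}\ \text{per microstrain at}\ \lambda_B = 1550\ \mathrm{nm}.
```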
Estimating distributions with increasing failure rate in an imperfect repair model.
Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R
2002-03-01
A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
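The order restriction described above is typically enforced with a pool-adjacent-violators step; the sketch below shows that generic isotonic-regression building block on made-up interval hazard estimates, and is not the paper's minimal-repair MLE itself.

```python
def pava_nondecreasing(y, w=None):
    """Pool-adjacent-violators: least-squares projection of y onto nondecreasing sequences."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    vals, wts, counts = [], [], []
    for yi, wi in zip(y, w):
        vals.append(yi); wts.append(wi); counts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:        # merge violating blocks
            merged = (wts[-2] * vals[-2] + wts[-1] * vals[-1]) / (wts[-2] + wts[-1])
            wts[-2] += wts[-1]; counts[-2] += counts[-1]; vals[-2] = merged
            vals.pop(); wts.pop(); counts.pop()
    out = []
    for v, c in zip(vals, counts):
        out.extend([v] * c)
    return out

# Raw interval hazard estimates that dip (violating the IFR assumption) get pooled upward.
print(pava_nondecreasing([0.10, 0.08, 0.15, 0.12, 0.30]))
```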
ERIC Educational Resources Information Center
Pulkki, Jutta Maarit; Rissanen, Pekka; Raitanen, Jani A.; Viitanen, Elina A.
2011-01-01
This study focuses on a large set of rehabilitation services used between 2004 and 2005 in one hospital district area in Finland. The rehabilitation system consists of several subsystems. This complex system is suggested to produce arbitrary rehabilitation services. Despite the criticisms against the system during decades, no attempts have been…
A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information
NASA Technical Reports Server (NTRS)
Marchionini, Gary; Barlow, Diane
1994-01-01
An evaluation was conducted comparing an information retrieval system that uses a Boolean-based retrieval engine with an inverted-file architecture against WAIS, which uses a vector-based engine. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database, which was mounted on a WAIS server and available through Dialog File 108, which served as the Boolean-based system (BBS). High-recall and high-precision searches were done in the BBS, and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high-recall BBS searches and consistently below the precision values for high-precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by rank. Advantages and limitations of both types of systems are discussed.
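For readers unfamiliar with the metrics compared above, the standard definitions are:

```latex
% Standard retrieval-evaluation definitions (not specific to this study's data):
\mathrm{precision} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{retrieved}\,|},
\qquad
\mathrm{recall} = \frac{|\,\text{relevant} \cap \text{retrieved}\,|}{|\,\text{relevant}\,|}.
```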
A Software Architecture for Intelligent Synthesis Environments
NASA Technical Reports Server (NTRS)
Filman, Robert E.; Norvig, Peter (Technical Monitor)
2001-01-01
NASA's Intelligent Synthesis Environment (ISE) program is a grand attempt to develop a system to transform the way complex artifacts are engineered. This paper discusses a "middleware" architecture for enabling the development of ISE. Desirable elements of such an Intelligent Synthesis Architecture (ISA) include remote invocation; plug-and-play applications; scripting of applications; management of design artifacts, tools, and artifact and tool attributes; common system services; system management; and systematic enforcement of policies. This paper argues that the ISA should extend conventional distributed object technology (DOT), such as CORBA and Product Data Managers, with flexible repositories of product and tool annotations and "plug-and-play" mechanisms for inserting "ility" or orthogonal concerns into the system. I describe the Object Infrastructure Framework, an Aspect Oriented Programming (AOP) environment for developing distributed systems that provides utility insertion and enables consistent annotation maintenance. This technology can be used to enforce policies such as maintaining the annotations of artifacts, particularly the provenance and access control rules of artifacts; performing automatic datatype transformations between representations; supplying alternative servers of the same service; reporting on the status of jobs and the system; conveying privileges throughout an application; supporting long-lived transactions; maintaining version consistency; and providing software redundancy and mobility.
ERIC Educational Resources Information Center
Bjorck, Ulric
Students' use of distributed Problem-Based Learning (dPBL) in university courses in social economy was studied. A sociocultural framework was used to analyze the actions of students focusing on their mastery of dPBL. The main data material consisted of messages written in an asynchronous conferencing system by 50 Swedish college students in 2…
NASA Astrophysics Data System (ADS)
Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.
2018-05-01
This study considers the problem of resource allocation in heterogeneous distributed information-processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. Algorithms for solving this problem involve a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of checking a system of linear algebraic inequalities for consistency, owing to their reducibility to stream models or the applicability of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
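A single-criterion, single-index special case of the transport-type formulation described above can be written directly as a linear program; the sketch below uses made-up costs, supplies, and capacities purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Assign processing load from 2 sources to 3 nodes at minimum total cost.
cost = np.array([[4.0, 2.0, 5.0],
                 [3.0, 6.0, 1.0]])     # assumed cost of routing a unit of work i -> j
supply = [30.0, 20.0]                   # work available at each source
capacity = [25.0, 20.0, 15.0]           # processing capacity of each node

c = cost.ravel()
A_eq = np.zeros((2, 6)); A_ub = np.zeros((3, 6))
for i in range(2):
    A_eq[i, i * 3:(i + 1) * 3] = 1.0    # all of source i's work is dispatched
for j in range(3):
    A_ub[j, j::3] = 1.0                 # node j's capacity is not exceeded

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
print(res.x.reshape(2, 3), res.fun)     # optimal allocation and its cost
```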
EOS MLS Science Data Processing System: A Description of Architecture and Capabilities
NASA Technical Reports Server (NTRS)
Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.
2006-01-01
This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.
Access control and privacy in large distributed systems
NASA Technical Reports Server (NTRS)
Leiner, B. M.; Bishop, M.
1986-01-01
Large-scale distributed systems consist of workstations, mainframe computers, supercomputers, and other types of servers, all connected by a computer network. These systems are being used in a variety of applications, including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than as the final answer.
Reducing Interprocessor Dependence in Recoverable Distributed Shared Memory
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1994-01-01
Checkpointing techniques in parallel systems use dependency tracking and/or message logging to ensure that a system rolls back to a consistent state. Traditional dependency tracking in distributed shared memory (DSM) systems is expensive because of high communication frequency. In this paper we show that, if designed correctly, a DSM system only needs to consider dependencies due to the transfer of blocks of data, resulting in reduced dependency tracking overhead and reduced potential for rollback propagation. We develop an ownership timestamp scheme to tolerate the loss of block state information and develop a passive server model of execution where interactions between processors are considered atomic. With our scheme, dependencies are significantly reduced compared to the traditional message-passing model.
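A schematic of the block-transfer-only dependency tracking idea (assumptions, not the paper's protocol): dependencies are recorded only when a block moves between processors, and a failure's rollback set is the reverse reachability over those edges since the last checkpoint.

```python
from collections import defaultdict

deps = defaultdict(set)           # deps[p] = processors whose state p depends on

def transfer_block(block_id, src, dst):
    """Receiving a block makes dst dependent on src (block_id kept only for illustration)."""
    deps[dst].add(src)

def rollback_set(failed):
    """Processors that must roll back with `failed`: reverse reachability over deps."""
    affected, frontier = {failed}, [failed]
    while frontier:
        p = frontier.pop()
        for q, d in deps.items():
            if q not in affected and p in d:   # q received a block from an affected node
                affected.add(q)
                frontier.append(q)
    return affected

transfer_block("B1", src=0, dst=1)
transfer_block("B2", src=1, dst=2)
print(rollback_set(0))   # {0, 1, 2}: rollback propagates only along block transfers
```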
Monte Carlo simulation of the radiant field produced by a multiple-lamp quartz heating system
NASA Technical Reports Server (NTRS)
Turner, Travis L.
1991-01-01
A method is developed for predicting the radiant heat flux distribution produced by a reflected bank of tungsten-filament tubular-quartz radiant heaters. The method is correlated with experimental results from two cases, one consisting of a single lamp and a flat reflector and the other consisting of a single lamp and a parabolic reflector. The simulation methodology, computer implementation, and experimental procedures are discussed. Analytical refinements necessary for comparison with experiment are discussed and applied to a multilamp, common reflector heating system.
Space Physics Data Facility Web Services
NASA Technical Reports Server (NTRS)
Candey, Robert M.; Harris, Bernard T.; Chimiak, Reine A.
2005-01-01
The Space Physics Data Facility (SPDF) Web services provide a distributed programming interface to a portion of the SPDF software. (A general description of Web services is available at http://www.w3.org/ and in many current software-engineering texts and articles focused on distributed programming.) The SPDF Web services distributed programming interface enables additional collaboration and integration of the SPDF software system with other software systems, in furtherance of the SPDF mission to lead collaborative efforts in the collection and utilization of space physics data and mathematical models. This programming interface conforms to all applicable Web services specifications of the World Wide Web Consortium. The interface is specified by a Web Services Description Language (WSDL) file. The SPDF Web services software consists of the following components: 1) A server program for implementation of the Web services; and 2) A software developer's kit that consists of a WSDL file, a less formal description of the interface, a Java class library (which further eases development of Java-based client software), and Java source code for an example client program that illustrates the use of the interface.
Method used to test the imaging consistency of binocular camera's left-right optical system
NASA Astrophysics Data System (ADS)
Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui
2016-09-01
For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional optical-system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and the right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained based on the multiple-threshold segmentation result, and the boundary is determined using the slope of contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras. When the 3σ spread of the gray-level difference D(x, y) between the left and right optical systems of the binocular camera does not exceed 5%, the design requirements are considered to have been achieved. This method can be used effectively and paves the way for imaging consistency testing of binocular cameras.
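A minimal numerical sketch of the consistency check described above, assuming two registered flat-field frames; the normalization and the 5% acceptance rule follow the abstract, everything else is illustrative.

```python
import numpy as np

def imaging_consistency(left, right):
    """Toy left/right consistency check (not the authors' exact pipeline).

    left, right: 2-D gray-level arrays captured under identical illumination,
    assumed already registered so pixel (x, y) corresponds in both images.
    """
    left = left.astype(float)
    right = right.astype(float)
    D = (left - right) / np.mean((left + right) / 2.0)   # relative gray-level difference D(x, y)
    sigma = np.std(D)
    return sigma, 3.0 * sigma <= 0.05                    # 3-sigma spread within 5% -> consistent

# Example with synthetic flat-field frames differing by ~1% gain.
rng = np.random.default_rng(1)
L = 1000 + rng.normal(0, 5, size=(256, 256))
R = 1.01 * L + rng.normal(0, 5, size=(256, 256))
print(imaging_consistency(L, R))
```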
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-13
... and reinforced concrete floors acting as diaphragms in distributing loads to vertically resisting... reinforced concrete foundation. The reactor is fueled with standard low-enriched TRIGA (Training, Research... cooled by a light water primary system consisting of the reactor pool and a heat removal system to remove...
A ferrofluid-based neural network: design of an analogue associative memory
NASA Astrophysics Data System (ADS)
Palm, R.; Korenivski, V.
2009-02-01
We analyse an associative memory based on a ferrofluid, consisting of a system of magnetic nano-particles suspended in a carrier fluid of variable viscosity subject to patterns of magnetic fields from an array of input and output magnetic pads. The association relies on forming patterns in the ferrofluid during a training phase, in which the magnetic dipoles are free to move and rotate to minimize the total energy of the system. Once equilibrated in energy for a given input-output magnetic field pattern pair, the particles are fully or partially immobilized by cooling the carrier liquid. The particle distributions thus produced control the memory states, which are read out magnetically using spin-valve sensors incorporated into the output pads. The actual memory consists of spin distributions that are dynamic in nature, realized only in response to the input patterns that the system has been trained for. Two training algorithms for storing multiple patterns are investigated. Using Monte Carlo simulations of the physical system, we demonstrate that the device is capable of storing and recalling two sets of images, each with an accuracy approaching 100%.
NASA Astrophysics Data System (ADS)
Zhao, Ben; Garbacki, Paweł; Gkantsidis, Christos; Iamnitchi, Adriana; Voulgaris, Spyros
After a decade of intensive investigation, peer-to-peer computing has established itself as an accepted research field in the general area of distributed systems. Peer-to-peer computing can be seen as the democratization of computing, overthrowing the traditional hierarchical designs favored in client-server systems, largely brought about by last-mile network improvements which have made individual PCs first-class citizens in the network community. Much of the early focus in peer-to-peer systems was on best-effort file sharing applications. In recent years, however, research has focused on peer-to-peer systems that provide operational properties and functionality similar to those shown by more traditional distributed systems. These properties include stronger consistency, reliability, and security guarantees suitable for supporting traditional applications such as databases.
WASTE HANDLING BUILDING ELECTRICAL SYSTEM DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
S.C. Khamamkar
2000-06-23
The Waste Handling Building Electrical System performs the function of receiving, distributing, transforming, monitoring, and controlling AC and DC power to all waste handling building electrical loads. The system distributes normal electrical power to support all loads that are within the Waste Handling Building (WHB). The system also generates and distributes emergency power to support designated emergency loads within the WHB within specified time limits. The system provides the capability to transfer between normal and emergency power. The system provides emergency power via independent and physically separated distribution feeds from the normal supply. The designated emergency electrical equipment will be designed to operate during and after design basis events (DBEs). The system also provides lighting, grounding, and lightning protection for the Waste Handling Building. The system is located in the Waste Handling Building System. The system consists of a diesel generator, power distribution cables, transformers, switch gear, motor controllers, power panel boards, lighting panel boards, lighting equipment, lightning protection equipment, control cabling, and grounding system. Emergency power is generated with a diesel generator located in a QL-2 structure and connected to the QL-2 bus. The Waste Handling Building Electrical System distributes and controls primary power to acceptable industry standards, and with a dependability compatible with waste handling building reliability objectives for non-safety electrical loads. It also generates and distributes emergency power to the designated emergency loads. The Waste Handling Building Electrical System receives power from the Site Electrical Power System. The primary material handling power interfaces include the Carrier/Cask Handling System, Canister Transfer System, Assembly Transfer System, Waste Package Remediation System, and Disposal Container Handling Systems. The system interfaces with the MGR Operations Monitoring and Control System for supervisory monitoring and control signals. The system interfaces with all facility support loads such as heating, ventilation, and air conditioning, office, fire protection, monitoring and control, safeguards and security, and communications subsystems.
Anomalous Transient Amplification of Waves in Non-normal Photonic Media
NASA Astrophysics Data System (ADS)
Makris, K. G.; Ge, L.; Türeci, H. E.
2014-10-01
Dissipation is a ubiquitous phenomenon in dynamical systems encountered in nature because no finite system is fully isolated from its environment. In optical systems, a key challenge facing any technological application has traditionally been the mitigation of optical losses. Recent work has shown that a new class of optical materials that consist of a precisely balanced distribution of loss and gain can be exploited to engineer novel functionalities for propagating and filtering electromagnetic radiation. Here we show a generic property of optical systems that feature an unbalanced distribution of loss and gain, described by non-normal operators, namely, that an overall lossy optical system can transiently amplify certain input signals by several orders of magnitude. We present a mathematical framework to analyze the dynamics of wave propagation in media with an arbitrary distribution of loss and gain, and we construct the initial conditions to engineer such non-normal power amplifiers. Our results point to a new design space for engineered optical systems employed in photonics and quantum optics.
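The transient-amplification mechanism can be reproduced with a two-mode toy model: a stable but non-normal matrix whose modes all decay, yet whose response to certain inputs grows before decaying. The matrix below is an arbitrary textbook-style example, not the paper's optical system.

```python
import numpy as np
from scipy.linalg import expm

# Stable but non-normal system dx/dt = A x: every eigenvalue decays, yet the norm of
# the response to a suitably chosen input grows transiently before it decays.
A = np.array([[-1.0, 50.0],
              [ 0.0, -2.0]])          # eigenvalues -1 and -2: overall "lossy"

x0 = np.array([0.0, 1.0])             # input aligned with the strongly coupled direction
times = np.linspace(0.0, 3.0, 300)
gain = [np.linalg.norm(expm(A * t) @ x0) / np.linalg.norm(x0) for t in times]

print(max(gain))                      # >> 1; larger off-diagonal coupling gives
                                      # orders-of-magnitude transient growth
```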
Significance of losses in water distribution systems in India
Raman, V.
1983-01-01
Effective management of water supply systems consists in supplying adequate quantities of clean water to the population. Detailed pilot studies of water distribution systems were carried out in 9 cities in India during 1971-81 to establish the feasibility of a programme of assessment, detection, and control of water losses from supply systems. A cost-benefit analysis was carried out. Water losses from mains and service pipes in the areas studied amounted to 20-35% of the total flow in the system. At a conservative estimate, the national loss of processed water through leaks in the water distribution systems amounts to 10¹² litres per year, which is equivalent to 500 million rupees. It is possible to bring down the water losses in the pipe mains to 3-5% of the total flow, and the cost incurred on the control programme can be recovered in 6-18 months. Appropriate conservation measures will help in achieving the goals of the International Water Supply and Sanitation Decade to provide clean water for all. PMID:6418401
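Taken at face value, the abstract's own figures imply an average value for the lost water of roughly:

```latex
% Implied average value of the water lost, from the abstract's figures:
\frac{5\times 10^{8}\ \text{rupees/year}}{10^{12}\ \text{litres/year}}
  = 5\times 10^{-4}\ \text{rupees per litre}
  = 0.5\ \text{rupees per kilolitre of processed water}.
```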
Waiting time distribution for continuous stochastic systems
NASA Astrophysics Data System (ADS)
Gernert, Robert; Emary, Clive; Klapp, Sabine H. L.
2014-12-01
The waiting time distribution (WTD) is a common tool for analyzing discrete stochastic processes in classical and quantum systems. However, there are many physical examples where the dynamics is continuous and only approximately discrete, or where it is favourable to discuss the dynamics on a discretized and a continuous level in parallel. An example is the hindered motion of particles through potential landscapes with barriers. In the present paper we propose a consistent generalization of the WTD from the discrete case to situations where the particles perform continuous barrier crossing characterized by a finite duration. To this end, we introduce a recipe to calculate the WTD from the Fokker-Planck (Smoluchowski) equation. In contrast to the closely related first passage time distribution (FPTD), which is frequently used to describe continuous processes, the WTD contains information about the direction of motion. As an application, we consider the paradigmatic example of an overdamped particle diffusing through a washboard potential. To verify the approach and to elucidate its numerical implications, we compare the WTD defined via the Smoluchowski equation with data from direct simulation of the underlying Langevin equation and find full consistency provided that the jumps in the Langevin approach are defined properly. Moreover, for sufficiently large energy barriers, the WTD defined via the Smoluchowski equation becomes consistent with that resulting from the analytical solution of a (two-state) master equation model for the short-time dynamics developed previously by us [Phys. Rev. E 86, 061135 (2012), 10.1103/PhysRevE.86.061135]. Thus, our approach "interpolates" between these two types of stochastic motion. We illustrate our approach for both symmetric systems and systems under constant force.
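A minimal sketch of extracting a forward waiting-time distribution from Langevin dynamics in a tilted washboard potential, in the spirit of the comparison described above; the potential parameters, time step, and jump definition are illustrative choices, not the paper's.

```python
import numpy as np

# Overdamped motion in U(x) = -f*x - u0*cos(x); record waiting times between
# successive completed forward jumps (crossings into the next 2*pi well).
rng = np.random.default_rng(2)
u0, f, T, dt = 2.0, 1.0, 1.0, 1e-3      # assumed barrier amplitude, tilt, temperature, step
x, t, t_last, well = 0.0, 0.0, 0.0, 0
forward_waits = []

for _ in range(1_000_000):
    force = f - u0 * np.sin(x)                      # -dU/dx
    x += force * dt + np.sqrt(2.0 * T * dt) * rng.normal()
    t += dt
    new_well = int(np.floor(x / (2.0 * np.pi)))     # index of the current potential well
    if new_well != well:
        if new_well > well:                         # a completed forward jump
            forward_waits.append(t - t_last)
        well = new_well
        t_last = t                                  # restart the clock after any jump

wt = np.array(forward_waits)
print(len(wt), wt.mean())                           # longer runs give smoother WTD histograms
```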
Distribution of the Endocannabinoid System in the Central Nervous System.
Hu, Sherry Shu-Jung; Mackie, Ken
2015-01-01
The endocannabinoid system consists of endogenous cannabinoids (endocannabinoids), the enzymes that synthesize and degrade endocannabinoids, and the receptors that transduce the effects of endocannabinoids. Much of what we know about the function of endocannabinoids comes from studies that combine localization of endocannabinoid system components with physiological or behavioral approaches. This review will focus on the localization of the best-known components of the endocannabinoid system for which the strongest anatomical evidence exists.
Consistent detection of global predicates
NASA Technical Reports Server (NTRS)
Cooper, Robert; Marzullo, Keith
1991-01-01
A fundamental problem in debugging and monitoring is detecting whether the state of a system satisfies some predicate. If the system is distributed, then the resulting uncertainty in the state of the system makes such detection, in general, ill-defined. Three algorithms are presented for detecting global predicates in a well-defined way. These algorithms do so by interpreting predicates with respect to the communication that has occurred in the system.
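A toy illustration of "possibly(φ)" detection for two processes: local states carry vector clocks, pairs of mutually concurrent states are treated as candidate consistent global states, and the predicate is evaluated on each. The traces and predicate are invented; the paper's algorithms are more general.

```python
from itertools import product

def concurrent(vc_a, vc_b):
    """Neither vector clock is component-wise <= the other: the states are concurrent."""
    return not (all(x <= y for x, y in zip(vc_a, vc_b)) or
                all(y <= x for x, y in zip(vc_a, vc_b)))

# Hypothetical traces: (local variable value, vector clock) for each local state.
p0 = [(0, (1, 0)), (5, (2, 0)), (5, (3, 2))]
p1 = [(0, (0, 1)), (7, (2, 2)), (6, (2, 3))]

phi = lambda x, y: x + y > 10          # global predicate over the two local variables

possibly_phi = any(phi(xa, xb)
                   for (xa, va), (xb, vb) in product(p0, p1)
                   if concurrent(va, vb))
print(possibly_phi)                    # True: some consistent global state satisfies phi
```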
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Globular cluster systems and their host galaxies: comparison of spatial distributions and colors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hargis, Jonathan R.; Rhode, Katherine L., E-mail: jhargis@haverford.edu
2014-11-20
We present a study of the spatial and color distributions of four early-type galaxies and their globular cluster (GC) systems observed as part of our ongoing wide-field imaging survey. We use BVR KPNO 4 m+MOSAIC imaging data to characterize the galaxies' GC populations, perform surface photometry of the galaxies, and compare the projected two-dimensional shape of the host galaxy light to that of the GC population. The GC systems of the ellipticals NGC 4406 and NGC 5813 both show an elliptical distribution consistent with that of the host galaxy light. Our analysis suggests a similar result for the giant elliptical NGC 4472, but a smaller GC candidate sample precludes a definite conclusion. For the S0 galaxy NGC 4594, the GCs have a circular projected distribution, in contrast to the host galaxy light, which is flattened in the inner regions. For NGC 4406 and NGC 5813, we also examine the projected shapes of the metal-poor and metal-rich GC subpopulations and find that both subpopulations have elliptical shapes that are consistent with those of the host galaxy light. Lastly, we use integrated colors and color profiles to compare the stellar populations of the galaxies to their GC systems. For each galaxy, we explore the possibility of color gradients in the individual metal-rich and metal-poor GC subpopulations. We find statistically significant color gradients in both GC subpopulations of NGC 4594 over the inner ∼5 effective radii (∼20 kpc). We compare our results to scenarios for the formation and evolution of giant galaxies and their GC systems.
The Hall D solenoid helium refrigeration system at JLab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laverdure, Nathaniel A.; Creel, Jonathan D.; Dixon, Kelly d.
Hall D, the new Jefferson Lab experimental facility built for the 12 GeV upgrade, features a LASS 1.85 m bore solenoid magnet supported by a 4.5 K helium refrigerator system. This system consists of a CTI 2800 4.5 K refrigerator cold box, three 150 hp screw compressors, helium gas management and storage, and liquid helium and nitrogen storage for stand-alone operation. The magnet interfaces with the cryo refrigeration system through an LN2-shielded distribution box and transfer line system, both designed and fabricated by JLab. The distribution box uses a thermosiphon design to cool the four magnet coils and the shields with liquid helium and nitrogen, respectively. We describe the salient design features of the cryo system and discuss our recent commissioning experience.
Fault-Tolerant Multiprocessor and VLSI-Based Systems.
1987-03-15
Table 1: Statistics for the Benchmark Programs ... pages are distributed amongst the groups of the reconfigured memory in proportion to the ... distances are proportional to only the logarithm of the ... a system which consists of a large number of homogeneous elements ... communication overhead resulting from faults communicating with all of the other elements in the system ... the network to degrade proportionately to
NASA Astrophysics Data System (ADS)
Tan, Zhi-Jie; Zou, Xian-Wu; Huang, Sheng-You; Zhang, Wei; Jin, Zhun-Zhi
2002-07-01
We investigate the pattern of particle distribution and its evolution with time in multiparticle systems using the model of random walks with memory enhancement and decay. This model describes some biological intelligent walks. With decrease in the memory decay exponent α, the distribution of particles changes from a random dispersive pattern to a locally dense one, and then returns to the random one. Correspondingly, the fractal dimension Df,p characterizing the distribution of particle positions increases from a low value to a maximum and then decreases to the low one again. This is determined by the degree of overlap of regions consisting of sites with remanent information. The second moment of the density ρ(2) was introduced to investigate the inhomogeneity of the particle distribution. The dependence of ρ(2) on α is similar to that of Df,p on α. ρ(2) increases with time as a power law in the process of adjusting the particle distribution, and then ρ(2) tends to a stable equilibrium value.
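The sketch below is one plausible minimal reading of such a walk, stated as assumptions rather than the paper's exact rules: each visited lattice site remembers the time of its last visit, remanent information decays as a power law with exponent α, and walkers hop preferentially toward sites carrying more remanent information; the inhomogeneity measure printed at the end is only a crude stand-in for ρ(2).

```python
import numpy as np

# Hedged sketch of a multiparticle random walk with memory enhancement and
# decay on a 2D periodic lattice (model details below are assumptions).
rng = np.random.default_rng(1)
size, n_walkers, n_steps, alpha = 64, 200, 500, 1.0

last_visit = np.full((size, size), -np.inf)        # time of last visit per site
pos = rng.integers(0, size, size=(n_walkers, 2))
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])

def remanent(t, sites):
    # assumed decay law: information ~ age**(-alpha); unvisited sites give 0
    age = np.maximum(t - last_visit[sites[:, 0], sites[:, 1]], 1.0)
    return age ** (-alpha)

for t in range(1, n_steps + 1):
    for k in range(n_walkers):
        neigh = (pos[k] + moves) % size            # periodic boundaries
        w = 1.0 + remanent(float(t), neigh)        # bias toward remembered sites
        pos[k] = neigh[rng.choice(4, p=w / w.sum())]
        last_visit[pos[k][0], pos[k][1]] = t

counts = np.zeros((size, size))
np.add.at(counts, (pos[:, 0], pos[:, 1]), 1)
rho2 = np.mean((counts / counts.mean()) ** 2)      # crude inhomogeneity measure
print("second moment of the particle density:", rho2)
```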
Principles and Foundations for Fractionated Networked Cyber-Physical Systems
2012-07-13
spectrum between autonomy and cooperation. Our distributed computing model is based on distributed knowledge sharing, and makes very few assumptions but...over the computation without the need for explicit migration. Randomization techniques will make sure that enough diversity is maintained to allow...small UAV testbed consisting of 10 inexpensive quadcopters at SRI. Hardware-wise, we added heat sinks to mitigate the impact of additional heat that
Transverse Momentum Distributions of Electron in Simulated QED Model
NASA Astrophysics Data System (ADS)
Kaur, Navdeep; Dahiya, Harleen
2018-05-01
In the present work, we have studied the transverse momentum distributions (TMDs) for the electron in a simulated QED model. We have used the overlap representation of light-front wave functions, where the spin-1/2 relativistic composite system consists of a spin-1/2 fermion and a spin-1 vector boson. The results have been obtained for the T-even TMDs in the transverse momentum plane for a fixed value of the longitudinal momentum fraction x.
Determining on-fault earthquake magnitude distributions from integer programming
NASA Astrophysics Data System (ADS)
Geist, Eric L.; Parsons, Tom
2018-02-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
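A toy version of this kind of assignment problem is sketched below using the PuLP modelling library (an assumption; the paper uses a general mixed-integer solver, not necessarily this one). The catalog, slip scaling, and fault parameters are made up, and each event is simply allowed to go unassigned rather than placed at a specific location.

```python
import random
import pulp

random.seed(0)
T_years = 4000.0
faults = {                         # hypothetical target slip rates and max magnitudes
    "A": {"rate": 5.0, "mmax": 7.8},
    "B": {"rate": 3.0, "mmax": 7.0},
}
mags = [random.uniform(6.0, 7.9) for _ in range(200)]      # stand-in synthetic catalog
slip_mm = [10 ** (0.5 * m - 1.0) for m in mags]            # ad hoc magnitude-to-slip scaling

prob = pulp.LpProblem("on_fault_magnitudes", pulp.LpMinimize)
x = {(i, f): pulp.LpVariable(f"x_{i}_{f}", cat="Binary")
     for i in range(len(mags)) for f in faults}
err = {f: pulp.LpVariable(f"err_{f}", lowBound=0) for f in faults}

for i, m in enumerate(mags):
    prob += pulp.lpSum(x[i, f] for f in faults) <= 1        # at most one fault per event
    for f, info in faults.items():
        if m > info["mmax"]:                                 # fault too small to host event
            prob += x[i, f] == 0

for f, info in faults.items():
    modeled = pulp.lpSum((slip_mm[i] / T_years) * x[i, f] for i in range(len(mags)))
    prob += modeled - info["rate"] <= err[f]                 # err_f >= |misfit|
    prob += info["rate"] - modeled <= err[f]

prob += pulp.lpSum(err.values())                             # minimize total slip-rate misfit
prob.solve(pulp.PULP_CBC_CMD(msg=False))

for f in faults:
    assigned = [mags[i] for i in range(len(mags)) if x[i, f].value() > 0.5]
    print(f, "events:", len(assigned), "misfit (mm/yr):", err[f].value())
```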
Chaotic itinerancy and power-law residence time distribution in stochastic dynamical systems.
Namikawa, Jun
2005-08-01
Chaotic itinerant motion among varieties of ordered states is described by a stochastic model based on the mechanism of chaotic itinerancy. The model consists of a random walk on a half-line and a Markov chain with a transition probability matrix. The stability of attractor ruin in the model is investigated by analyzing the residence time distribution of orbits at attractor ruins. It is shown that the residence time distribution averaged over all attractor ruins can be described by the superposition of (truncated) power-law distributions if the basin of attraction for each attractor ruin has a zero measure. This result is confirmed by simulation of models exhibiting chaotic itinerancy. Chaotic itinerancy is also shown to be absent in coupled Milnor attractor systems if the transition probability among attractor ruins can be represented as a Markov chain.
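The two ingredients named in the abstract can be put together in a short simulation, under the assumption (made here for illustration only) that the residence time at a ruin is the first-return time of a symmetric random walk on the half-line and that the next ruin is drawn from a Markov transition matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.0, 0.7, 0.3],        # hypothetical transition matrix among 3 ruins
              [0.2, 0.0, 0.8],
              [0.5, 0.5, 0.0]])

def residence_time(max_steps=10_000):
    """Steps until a +/-1 random walk started at 1 first reaches 0 (truncated)."""
    pos, t = 1, 0
    while pos > 0 and t < max_steps:
        pos += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

ruin, times = 0, []
for _ in range(2000):
    times.append(residence_time())     # residence at the current attractor ruin
    ruin = rng.choice(3, p=P[ruin])    # itinerancy: Markov hop to the next ruin

times = np.array(times)
# first-return times of a symmetric walk have a heavy (truncated power-law) tail,
# which is the qualitative behaviour described for the residence-time distribution
print("median residence:", np.median(times), "99th percentile:", np.percentile(times, 99))
```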
Enhanced interfaces for web-based enterprise-wide image distribution.
Jost, R Gilbert; Blaine, G James; Fritz, Kevin; Blume, Hartwig; Sadhra, Sarbjit
2002-01-01
Modern Web browsers support image distribution with two shortcomings: (1) image grayscale presentation at client workstations is often sub-optimal and generally inconsistent with the presentation state on diagnostic workstations and (2) an Electronic Patient Record (EPR) application usually cannot directly access images with an integrated viewer. We have modified our EPR and our Web-based image-distribution system to allow access to images from within the EPR. In addition, at the client workstation, a grayscale transformation is performed that consists of two components: a client-display-specific component based on the characteristic display function of the class of display system, and a modality-specific transformation that is downloaded with every image. The described techniques have been implemented in our institution and currently support enterprise-wide clinical image distribution. The effectiveness of the techniques is reviewed.
All-digital radar architecture
NASA Astrophysics Data System (ADS)
Molchanov, Pavlo A.
2014-10-01
An all-digital radar architecture requires eliminating the mechanical scan system. A phased antenna array is necessarily large because the array elements must be co-located with very precise dimensions, and it needs a high-accuracy phase-processing system to aggregate and distribute T/R module data to/from the antenna elements. Even a phased array cannot provide a wide field of view. A new, nature-inspired, all-digital radar architecture is proposed. The fly's eye consists of multiple angularly spaced sensors, giving the fly the simultaneous wide-area visual coverage it needs to detect and avoid the threats around it. The fly-eye radar antenna array consists of multiple directional antennas loosely distributed along the perimeter of a ground vehicle or aircraft and coupled with receiving/transmitting front-end modules connected by a digital interface to a central processor. A non-steering antenna array allows creating an all-digital radar with an extremely flexible architecture. The fly-eye radar architecture provides wide possibilities for digital modulation and different waveform generation. Simultaneous correlation and integration of thousands of signals per second from each point of the surveillance area allows not only detection of low-level signals (low-profile targets), but also helps to recognize and classify signals (targets) by using signal diversity, polarization modulation, and intelligent processing. The proposed all-digital radar architecture with a distributed directional antenna array can provide a 3D space vector to a jammer by verifying the direction of arrival of signal sources and, as a result, jam/spoof protection not only for radar systems, but also for communication systems and any navigation constellation system, for both encrypted and unencrypted signals, and for an unlimited number of closely positioned jammers.
Transition in the waiting-time distribution of price-change events in a global socioeconomic system
NASA Astrophysics Data System (ADS)
Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.
2013-12-01
The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes, in particular the waiting times in some collective dynamical system, is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair’s distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders’ decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.
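The two quantitative checks described above, an overall lognormal fit and a tail-exponent estimate for long waiting times, can be sketched as follows; the waiting times below are synthetic stand-ins, and a Hill-type estimator is assumed for the tail (the paper does not specify its estimator here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
waits = rng.lognormal(mean=-1.0, sigma=1.2, size=20_000)   # stand-in inter-event times

# 1) overall lognormal fit (location fixed at zero)
shape, loc, scale = stats.lognorm.fit(waits, floc=0)
print("fitted lognormal sigma and median:", shape, scale)

# 2) tail exponent for long waiting times, approximate Hill estimator:
#    for P(T > t) ~ t**(-alpha), alpha_hat ~= k / sum(log(x_i / x_min))
k = 500                                    # number of upper order statistics used
tail = np.sort(waits)[-k:]
alpha_hat = k / np.sum(np.log(tail / tail[0]))
print("estimated tail exponent:", alpha_hat)
```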
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier Transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations which were carried out using the EGS4 system code. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
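The core of a DFT-convolution dose calculation is a circular convolution of the activity map with a dose point kernel via FFTs. The sketch below illustrates the mechanics only; the kernel is a made-up isotropic falloff, not an I-131 kernel.

```python
import numpy as np

n = 64                                     # voxels per side of the grid
activity = np.zeros((n, n, n))
activity[24:40, 24:40, 24:40] = 1.0        # uniform cubic source region (stand-in phantom)

# illustrative dose point kernel centred in the grid (not a physical I-131 kernel)
z, y, x = np.indices((n, n, n)) - n // 2
kernel = 1.0 / (x**2 + y**2 + z**2 + 1.0)  # +1 avoids the singularity at r = 0
kernel /= kernel.sum()

# convolution theorem: dose = IFFT( FFT(activity) * FFT(kernel) );
# ifftshift moves the kernel origin to voxel (0, 0, 0) as the DFT expects
dose = np.real(np.fft.ifftn(np.fft.fftn(activity) *
                            np.fft.fftn(np.fft.ifftshift(kernel))))
print("dose at source centre vs far corner:", dose[32, 32, 32], dose[4, 4, 4])
```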
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-12-31
This report summarizes all work of the Limited Energy Study of Steam Distribution Systems, Energy Engineering Analysis Program, Hawthorne Army Ammunition Depot (HWAAD), Nevada. The project is authorized under Contract No. DACA05-92-C-0155 with the U.S. Army Corps of Engineers, Sacramento District, California. The purpose of this limited energy study is to evaluate steam distribution and condensate collection systems in both the Industrial Area and Ordnance Area of HWAAD to develop a set of replacement actions that will reduce energy consumption and operating costs. These efforts consist of corrections and revisions to previously submitted funding requests. Amended DD Forms 1391 and supporting documentation are prepared for: (1) Project 40667, Modernize Steam Distribution System, Industrial Area, and (2) Project 42166, Modernize Ordnance Area Steam Distribution, Ordnance Area. HWAAD is located next to Highway 95 near the center of Nevada's border with California, about 130 miles southeast of Reno. The elevation is about 4,100 feet. The location is depicted on Figure 1-1. A number of facilities covering over 140,000 acres constitute HWAAD; however, this study was limited to the Industrial and Ordnance Areas.
Coherent one-way quantum key distribution
NASA Astrophysics Data System (ADS)
Stucki, Damien; Fasel, Sylvain; Gisin, Nicolas; Thoma, Yann; Zbinden, Hugo
2007-05-01
Quantum Key Distribution (QKD) consists in the exchange of a secret key between two distant points [1]. Even though quantum key distribution systems exist and commercial systems are reaching the market [2], there are still improvements to be made: simplifying the construction of the system and increasing the secret key rate. To this end, we present a new protocol for QKD tailored to work with weak coherent pulses and at high bit rates [3]. The advantages of this system are that the setup is experimentally simple and it is tolerant to reduced interference visibility and to photon number splitting attacks, thus resulting in a high efficiency in terms of distilled secret bits per qubit. After having successfully tested the feasibility of the system [3], we are currently developing a fully integrated and automated prototype within the SECOQC project [4]. We present the latest results using the prototype. We also discuss the issue of the photon detection, which still remains the bottleneck for QKD.
NASA Technical Reports Server (NTRS)
1980-01-01
Computer simulations and laboratory tests were used to evaluate the hazard posed by lightning flashes to ground on the Solar Power Satellite rectenna and to make recommendations on a lightning protection system for the rectenna. The distribution of lightning over the lower 48 states of the continental United States was determined, as were the interactions of lightning with the rectenna and the modes in which those interactions could damage the rectenna. Lightning protection was both required and feasible. Several systems of lightning protection were considered and evaluated. These included two systems that employed lightning rods of different lengths placed on top of the rectenna's billboards, and a third, distributed system consisting of short lightning rods all along the length of each billboard, connected by a horizontal wire above the billboard. The distributed lightning protection system afforded greater protection than the other systems considered and was easier to integrate into the rectenna's structural design.
Distributed run of a one-dimensional model in a regional application using SOAP-based web services
NASA Astrophysics Data System (ADS)
Smiatek, Gerhard
This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
Albattat, Ali; Gruenwald, Benjamin C.; Yucelen, Tansel
2016-01-01
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consist of physically-interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches. PMID:27537894
Albattat, Ali; Gruenwald, Benjamin C; Yucelen, Tansel
2016-08-16
The last decade has witnessed an increased interest in physical systems controlled over wireless networks (networked control systems). These systems allow the computation of control signals via processors that are not attached to the physical systems, and the feedback loops are closed over wireless networks. The contribution of this paper is to design and analyze event-triggered decentralized and distributed adaptive control architectures for uncertain networked large-scale modular systems; that is, systems consist of physically-interconnected modules controlled over wireless networks. Specifically, the proposed adaptive architectures guarantee overall system stability while reducing wireless network utilization and achieving a given system performance in the presence of system uncertainties that can result from modeling and degraded modes of operation of the modules and their interconnections between each other. In addition to the theoretical findings including rigorous system stability and the boundedness analysis of the closed-loop dynamical system, as well as the characterization of the effect of user-defined event-triggering thresholds and the design parameters of the proposed adaptive architectures on the overall system performance, an illustrative numerical example is further provided to demonstrate the efficacy of the proposed decentralized and distributed control approaches.
SGR-like behaviour of the repeating FRB 121102
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, F.Y.; Yu, H., E-mail: fayinwang@nju.edu.cn, E-mail: yuhai@smail.nju.edu.cn
2017-03-01
Fast radio bursts (FRBs) are millisecond-duration radio signals occurring at cosmological distances. However, the physical model of FRBs remains a mystery, and many models have been proposed. Here we study the frequency distributions of peak flux, fluence, duration and waiting time for the repeating FRB 121102. The cumulative distributions of peak flux, fluence and duration show power-law forms. The waiting time distribution also shows a power-law distribution, and is consistent with a non-stationary Poisson process. These distributions are similar to those of soft gamma repeaters (SGRs). We also use the statistical results to test the proposed models for FRBs. These distributions are consistent with the predictions from avalanche models of slowly driven nonlinear dissipative systems.
Power Management and Distribution Trades Studies for a Deep-Space Mission Scientific Spacecraft
NASA Technical Reports Server (NTRS)
Kimnach, Greg L.; Soltis, James V.
2004-01-01
As part of NASA's Project Prometheus, the Nuclear Systems Program, NASA GRC performed trade studies on the various Power Management and Distribution (PMAD) options for a deep-space scientific spacecraft, which would have a nominal electrical power requirement of 100 kWe. These options included AC (1000 Hz and 1500 Hz) and DC primary distribution at various voltages. The distribution system efficiency, reliability, mass, thermal, corona, space radiation levels and technology readiness of devices and components were considered. The final proposed system consisted of two independent power distribution channels, sourced by two 3-phase, 110 kVA alternators nominally operating at half-rated power. Each alternator nominally supplies 50 kWe to one-half of the ion thrusters and science modules but is capable of supplying the total power requirements in the event of loss of one alternator. This paper is an introduction to the methodology for the trades done to arrive at the proposed PMAD architecture. Any opinions expressed are those of the author(s) and do not necessarily reflect the views of Project Prometheus.
Power Management and Distribution Trades Studies for a Deep-space Mission Scientific Spacecraft
NASA Astrophysics Data System (ADS)
Kimnach, Greg L.; Soltis, James V.
2004-02-01
As part of NASA's Project Prometheus, the Nuclear Systems Program, NASA GRC performed trade studies on the various Power Management and Distribution (PMAD) options for a deep-space scientific spacecraft, which would have a nominal electrical power requirement of 100 kWe. These options included AC (1000Hz and 1500Hz) and DC primary distribution at various voltages. The distribution system efficiency, reliability, mass, thermal, corona, space radiation levels, and technology readiness of devices and components were considered. The final proposed system consisted of two independent power distribution channels, sourced by two 3-phase, 110 kVA alternators nominally operating at half-rated power. Each alternator nominally supplies 50 kWe to one-half of the ion thrusters and science modules, but is capable of supplying the total power requirements in the event of loss of one alternator. This paper is an introduction to the methodology for the trades done to arrive at the proposed PMAD architecture. Any opinions expressed are those of the author(s) and do not necessarily reflect the views of Project Prometheus.
Bédard, Emilie; Fey, Stéphanie; Charron, Dominique; Lalancette, Cindy; Cantin, Philippe; Dolcé, Patrick; Laferrière, Céline; Déziel, Eric; Prévost, Michèle
2015-03-15
Legionella pneumophila is frequently detected in hot water distribution systems and thermal control is a common measure implemented by health care facilities. A risk assessment based on water temperature profiling and temperature distribution within the network is proposed, to guide effective monitoring strategies and allow the identification of high risk areas. Temperature and heat loss at control points (water heater, recirculation, representative points-of-use) were monitored in various sections of five health care facilities hot water distribution systems and results used to develop a temperature-based risk assessment tool. Detailed investigations show that defective return valves in faucets can cause widespread temperature losses because of hot and cold water mixing. Systems in which water temperature coming out of the water heaters was kept consistently above 60 °C and maintained above 55 °C across the network were negative for Legionella by culture or qPCR. For systems not meeting these temperature criteria, risk areas for L. pneumophila were identified using temperature profiling and system's characterization; higher risk was confirmed by more frequent microbiological detection by culture and qPCR. Results confirmed that maintaining sufficiently high temperatures within hot water distribution systems suppressed L. pneumophila culturability. However, the risk remains as shown by the persistence of L. pneumophila by qPCR. Copyright © 2015 Elsevier Ltd. All rights reserved.
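The temperature criteria reported above (water leaving the heaters consistently at or above 60 °C and at least 55 °C maintained across the network) translate directly into a simple screening check; the sketch below uses those thresholds with a made-up data layout and hypothetical monitoring-point names.

```python
# Minimal sketch of a temperature-based risk flag, assuming the thresholds
# quoted in the abstract (>=60 C at the heater outlet, >=55 C network-wide).
def flag_high_risk(heater_outlet_temps, network_point_temps):
    """Return True if the system fails the temperature criteria and needs review."""
    heater_ok = all(t >= 60.0 for t in heater_outlet_temps)
    network_ok = all(min(temps) >= 55.0 for temps in network_point_temps.values())
    return not (heater_ok and network_ok)

profiles = {                                   # hypothetical monitoring points (degrees C)
    "wing_A_faucet": [57.2, 55.8, 56.1],
    "wing_B_shower": [52.4, 53.0, 51.7],       # mixing via a defective return valve?
}
print(flag_high_risk(heater_outlet_temps=[61.0, 60.5, 62.3],
                     network_point_temps=profiles))   # True: wing B warrants investigation
```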
NASA Technical Reports Server (NTRS)
1983-01-01
Stress distributions were calculated for a creep law to predict a rate of plastic deformation. The expected reduction in stresses is obtained. Improved schemes for calculating growth system temperature distributions were evaluated. Temperature field modeling examined the possibility of using horizontal temperature gradients to influence stress distribution in ribbon. The defect structure of 10 cm wide ribbon grown in the cartridge system was examined. A new feature is identified from an examination of cross sectional micrographs. It consists of high density dislocation bands extending through the ribbon thickness. A four point bending apparatus was constructed for high temperature study of the creep response of silicon, to be used to generate defects for comparison with as grown defects in ribbon. The feasibility of laser interferometric techniques for sheet residual stress distribution measurement is examined. The mathematical formalism for calculating residual stress from changes in surface topology caused by an applied stress in a rectangular specimen was developed, and the system for laser interferometric measurement to obtain surface topology data was tested on CZ silicon.
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacuqes
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. This model is based on an ethnographic framework consisting of three important dimensions (i.e., distributed coordination, awareness of work, and plans and procedures), and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into three dimensions, whereas the BDI model allows such information to be mapped upon the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing mature and well-understood techniques. By the use of this model, we also address the cognitive aspects of systems design.
Verification of Faulty Message Passing Systems with Continuous State Space in PVS
NASA Technical Reports Server (NTRS)
Pilotto, Concetta; White, Jerome
2010-01-01
We present a library of Prototype Verification System (PVS) meta-theories that verifies a class of distributed systems in which agent communication is through message-passing. The theoretic work, outlined in, consists of iterative schemes for solving systems of linear equations, such as message-passing extensions of the Gauss and Gauss-Seidel methods. We briefly review that work and discuss the challenges in formally verifying it.
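For intuition about the kind of scheme being verified (this is only a numerical toy, not the PVS formalization), the sketch below runs a message-passing Jacobi-style iteration in which each agent owns one unknown of A x = b, updates it from the latest values in its inbox, and broadcasts the result.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])            # diagonally dominant, so the scheme converges
b = np.array([9.0, 20.0, 22.0])
n = len(b)

# inbox[i][j] holds the last value agent i "heard" from agent j
inbox = {i: {j: 0.0 for j in range(n) if j != i} for i in range(n)}
x = np.zeros(n)

for _ in range(50):
    # each agent computes its update from its inbox ...
    new_x = np.array([(b[i] - sum(A[i, j] * inbox[i][j]
                                  for j in range(n) if j != i)) / A[i, i]
                      for i in range(n)])
    # ... then sends a message with the new value to every peer
    for i in range(n):
        for j in range(n):
            if j != i:
                inbox[j][i] = new_x[i]
    x = new_x

print("x =", x, "residual =", np.linalg.norm(A @ x - b))
```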
A user-oriented synthetic workload generator
NASA Technical Reports Server (NTRS)
Kao, Wei-Lun
1991-01-01
A user-oriented synthetic workload generator that simulates users' file access behavior based on real workload characterization is described. The model for this workload generator is user-oriented and job-specific, represents file I/O operations at the system call level, allows general distributions for the usage measures, and assumes independence in the file I/O operation stream. The workload generator consists of three parts which handle specification of distributions, creation of an initial file system, and selection and execution of file I/O operations. Experiments on SUN NFS are shown to demonstrate the usage of the workload generator.
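The three parts named above can be illustrated with a small sketch: a distribution specification, creation of an initial file system, and an independent stream of file I/O operations. All distribution choices, sizes, and counts below are made-up examples, not the generator's actual parameters.

```python
import os
import random
import tempfile

random.seed(42)
spec = {
    "op_mix": {"read": 0.6, "write": 0.3, "create": 0.1},       # operation mix
    "file_size_bytes": lambda: int(random.lognormvariate(8, 1)),  # assumed size distribution
    "n_initial_files": 20,
    "n_operations": 200,
}

root = tempfile.mkdtemp(prefix="synthwl_")
files = []
for i in range(spec["n_initial_files"]):                          # initial file system
    path = os.path.join(root, f"f{i:04d}")
    with open(path, "wb") as fh:
        fh.write(os.urandom(spec["file_size_bytes"]()))
    files.append(path)

ops, weights = zip(*spec["op_mix"].items())
for _ in range(spec["n_operations"]):                             # independent op stream
    op = random.choices(ops, weights)[0]
    if op == "read":
        with open(random.choice(files), "rb") as fh:
            fh.read()
    elif op == "write":
        with open(random.choice(files), "ab") as fh:
            fh.write(os.urandom(spec["file_size_bytes"]()))
    else:  # create a new file
        path = os.path.join(root, f"f{len(files):04d}")
        with open(path, "wb") as fh:
            fh.write(os.urandom(spec["file_size_bytes"]()))
        files.append(path)

print(f"generated {spec['n_operations']} operations over {len(files)} files in {root}")
```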
Fujiwara, M.; Waseda, A.; Nojima, R.; Moriai, S.; Ogata, W.; Sasaki, M.
2016-01-01
Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir’s (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, assumed is that data transmission and authentication must be perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km). PMID:27363566
Fujiwara, M; Waseda, A; Nojima, R; Moriai, S; Ogata, W; Sasaki, M
2016-07-01
Distributed storage plays an essential role in realizing robust and secure data storage in a network over long periods of time. A distributed storage system consists of a data owner machine, multiple storage servers and channels to link them. In such a system, secret sharing scheme is widely adopted, in which secret data are split into multiple pieces and stored in each server. To reconstruct them, the data owner should gather plural pieces. Shamir's (k, n)-threshold scheme, in which the data are split into n pieces (shares) for storage and at least k pieces of them must be gathered for reconstruction, furnishes information theoretic security, that is, even if attackers could collect shares of less than the threshold k, they cannot get any information about the data, even with unlimited computing power. Behind this scenario, however, assumed is that data transmission and authentication must be perfectly secure, which is not trivial in practice. Here we propose a totally information theoretically secure distributed storage system based on a user-friendly single-password-authenticated secret sharing scheme and secure transmission using quantum key distribution, and demonstrate it in the Tokyo metropolitan area (≤90 km).
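Shamir's (k, n)-threshold scheme itself is compact enough to sketch directly: the secret is the constant term of a random degree-(k-1) polynomial over a prime field, shares are evaluations of that polynomial, and any k shares recover the secret by Lagrange interpolation at x = 0. (The prime and secret below are arbitrary demo values; the authentication and quantum-key-distribution layers of the proposed system are not shown.)

```python
import random

P = 2**127 - 1                    # a Mersenne prime large enough for this demo

def split(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P          # product of (0 - x_j)
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))     # any 3 of the 5 shares recover 123456789
print(reconstruct(shares[1:4]))    # a different subset works too
```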
Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A
2005-01-01
To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants) and compare the results according to different hypotheses on the bacteria distribution. We carried out two sampling programs to monitor the water distribution system in a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, while the assumption of a negative binomial distribution implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for four samples. However, the hypothesis of homogeneity was rejected (p-value <0.001), and the probability of a type II error under the assumption of heterogeneity was higher with 4 samples (beta = 0.24) than with 21 (beta = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that the water is drinkable, compared with the homogeneity assumption.
Towards Information Enrichment through Recommendation Sharing
NASA Astrophysics Data System (ADS)
Weng, Li-Tung; Xu, Yue; Li, Yuefeng; Nayak, Richi
Nowadays most existing recommender systems operate on a single-organisation basis, i.e. a recommender system recommends items to customers of one organisation based on that organisation's datasets only. Very often the datasets of a single organisation do not have sufficient resources to be used to generate quality recommendations. Therefore, it would be beneficial if recommender systems of different organisations of a similar nature could cooperate to share their resources and recommendations. In this chapter, we present an Ecommerce-oriented Distributed Recommender System (EDRS) that consists of multiple recommender systems from different organisations. By sharing resources and recommendations with each other, the recommenders in the distributed recommendation system can provide a better recommendation service to their users. As in most distributed systems, peer selection is an important aspect. This chapter also presents a recommender selection technique for the proposed EDRS, which selects and profiles recommenders based on their stability, average performance and selection frequency. Based on our experiments, it is shown that recommenders' recommendation quality can be effectively improved by adopting the proposed EDRS and the associated peer selection technique.
Contextuality in canonical systems of random variables
NASA Astrophysics Data System (ADS)
Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.
2017-10-01
Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.
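The maximal-coupling ingredient is easy to write down for binary variables: for two variables measuring the same content with P(R = 1) = p and P(R' = 1) = q, the coupling that maximizes coincidence achieves P(R = R') = 1 - |p - q|. The sketch below computes that coupling; deciding contextuality of a full system additionally requires checking these couplings against the within-context joint distributions (a linear-programming step not shown here).

```python
def maximal_coupling(p, q):
    """Joint pmf over (R, R') with margins p, q and maximal P(R = R')."""
    both_one = min(p, q)
    both_zero = min(1 - p, 1 - q)
    return {
        (1, 1): both_one,
        (0, 0): both_zero,
        (1, 0): p - both_one,
        (0, 1): q - both_one,
    }

coupling = maximal_coupling(p=0.7, q=0.4)
print(coupling)
print("max P(R = R') =", coupling[(1, 1)] + coupling[(0, 0)])   # equals 1 - |p - q| = 0.7
```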
The Self-Organization of a Spoken Word
Holden, John G.; Rajaraman, Srinivasan
2012-01-01
Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participant’s distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed, component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213
Observational constraints on the inter-binary stellar flare hypothesis for the gamma-ray bursts
NASA Astrophysics Data System (ADS)
Rao, A. R.; Vahia, M. N.
1994-01-01
The Gamma Ray Observatory/Burst and Transient Source Experiment (GRO/BATSE) results on the Gamma Ray Bursts (GRBs) have given an internally consistent set of observations of about 260 GRBs which have been released for analysis by the BATSE team. Using this database we investigate our earlier suggestion (Vahia and Rao, 1988) that GRBs are inter-binary stellar flares from a group of objects classified as Magnetically Active Stellar Systems (MASS) which includes flare stars, RS CVn binaries and cataclysmic variables. We show that there exists an observationally consistent parameter space for the number density, scale height and flare luminosity of MASS which explains the complete log(N) - log(P) distribution of GRBs as also the observed isotropic distribution. We further use this model to predict anisotropy in the GRB distribution at intermediate luminosities. We make definite predictions under the stellar flare hypothesis that can be tested in the near future.
Distinguishing remobilized ash from erupted volcanic plumes using space-borne multi-angle imaging.
Flower, Verity J B; Kahn, Ralph A
2017-10-28
Volcanic systems are comprised of a complex combination of ongoing eruptive activity and secondary hazards, such as remobilized ash plumes. Similarities in the visual characteristics of remobilized and erupted plumes, as imaged by satellite-based remote sensing, complicate the accurate classification of these events. The stereo imaging capabilities of the Multi-angle Imaging SpectroRadiometer (MISR) were used to determine the altitude and distribution of suspended particles. Remobilized ash shows distinct dispersion, with particles distributed within ~1.5 km of the surface. Particle transport is consistently constrained by local topography, limiting dispersion pathways downwind. The MISR Research Aerosol (RA) retrieval algorithm was used to assess plume particle microphysical properties. Remobilized ash plumes displayed a dominance of large particles with consistent absorption and angularity properties, distinct from emitted plumes. The combination of vertical distribution, topographic control, and particle microphysical properties makes it possible to distinguish remobilized ash flows from eruptive plumes, globally.
The Design of Optical Sensor for the Pinhole/Occulter Facility
NASA Technical Reports Server (NTRS)
Greene, Michael E.
1990-01-01
Three optical sight sensor systems were designed, built and tested. Two of the optical line-of-sight sensor systems are capable of measuring the absolute pointing angle to the Sun. The system is for use with the Pinhole/Occulter Facility (P/OF), a solar hard X-ray experiment to be flown from Space Shuttle or Space Station. The sensor consists of a pinhole camera with two pairs of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the pinhole, track and hold circuitry for data reduction, an analog to digital converter, and a microcomputer. The deflection of the image center is calculated from these data using an approximation for the solar image. A second system consists of a pinhole camera with a pair of perpendicularly mounted linear photodiode arrays, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed. A third optical sensor system is capable of measuring the internal vibration of the P/OF between the mask and base. The system consists of a white light source, a mirror and a pair of perpendicularly mounted linear photodiode arrays to detect the intensity distribution of the solar image produced by the mirror, amplification circuitry, threshold detection circuitry, and a microcomputer board. The deflection of the image and hence the vibration of the structure is calculated by knowing the position of each pixel of the photodiode array and merely counting the pixel numbers until the threshold is surpassed.
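The threshold-counting scheme described for the second and third sensors amounts to finding the first and last pixels of a linear photodiode array whose signal exceeds a threshold and taking their midpoint as the image position. The sketch below illustrates this; the array length, pixel pitch, and threshold are illustrative values only.

```python
import numpy as np

def image_position(pixels, threshold, pitch_mm=0.025):
    """Offset of the image centre from the array centre, in mm (None if off-array)."""
    above = np.flatnonzero(pixels > threshold)
    if above.size == 0:
        return None
    centre_pixel = 0.5 * (above[0] + above[-1])        # midpoint of the bright span
    return (centre_pixel - (len(pixels) - 1) / 2) * pitch_mm

# synthetic solar image: a bright plateau on a noisy background
rng = np.random.default_rng(4)
pixels = rng.normal(0.05, 0.01, 1024)
pixels[390:470] += 1.0
print("image offset (mm):", image_position(pixels, threshold=0.5))
```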
CALIBRATION OF EQUILIBRIUM TIDE THEORY FOR EXTRASOLAR PLANET SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Brad M. S., E-mail: hansen@astro.ucla.ed
2010-11-01
We provide an 'effective theory' of tidal dissipation in extrasolar planet systems by empirically calibrating a model for the equilibrium tide. The model is valid to high order in eccentricity and parameterized by two constants of bulk dissipation: one for dissipation in the planet and one for dissipation in the host star. We are able to consistently describe the distribution of extrasolar planetary systems in terms of period, eccentricity, and mass (with a lower limit of a Saturn mass) with this simple model. Our model is consistent with the survival of short-period exoplanet systems, but not with the circularization period of equal-mass stellar binaries, suggesting that the latter systems experience a higher level of dissipation than exoplanet host stars. Our model is also not consistent with the explanation of inflated planetary radii as resulting from tidal dissipation. The paucity of short-period planets around evolved A stars is explained as the result of enhanced tidal inspiral resulting from the increase in stellar radius with evolution.
7 CFR 1730.63 - IDR policy criteria.
Code of Federal Regulations, 2012 CFR
2012-01-01
... policies must be consistent with prudent electric utility practice. (2) IDR policies must incorporate the Institute of Electrical and Electronic Engineers (IEEE): IEEE 1547TM—Standard for Interconnecting... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.63 IDR...
7 CFR 1730.63 - IDR policy criteria.
Code of Federal Regulations, 2014 CFR
2014-01-01
... policies must be consistent with prudent electric utility practice. (2) IDR policies must incorporate the Institute of Electrical and Electronic Engineers (IEEE): IEEE 1547TM—Standard for Interconnecting... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.63 IDR...
7 CFR 1730.63 - IDR policy criteria.
Code of Federal Regulations, 2013 CFR
2013-01-01
... policies must be consistent with prudent electric utility practice. (2) IDR policies must incorporate the Institute of Electrical and Electronic Engineers (IEEE): IEEE 1547TM—Standard for Interconnecting... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.63 IDR...
7 CFR 1730.63 - IDR policy criteria.
Code of Federal Regulations, 2011 CFR
2011-01-01
... policies must be consistent with prudent electric utility practice. (2) IDR policies must incorporate the Institute of Electrical and Electronic Engineers (IEEE): IEEE 1547TM—Standard for Interconnecting... AGRICULTURE ELECTRIC SYSTEM OPERATIONS AND MAINTENANCE Interconnection of Distributed Resources § 1730.63 IDR...
NASA Technical Reports Server (NTRS)
Marochnik, Leonid S.; Mukhin, Lev M.; Sagdeev, Roald Z.
1991-01-01
Views of the large-scale structure of the solar system, consisting of the Sun, the nine planets and their satellites, changed when Oort demonstrated that a gigantic cloud of comets (the Oort cloud) is located on the periphery of the solar system. The following subject areas are covered: (1) the Oort cloud's mass; (2) Hill's cloud mass; (3) angular momentum distribution in the solar system; and (4) the cometary cloud around other stars.
1982-07-01
waste-heat steam generators. The applicable steam generator design concepts and general design considerations were reviewed and critical problems...a once-through forced-circulation steam generator design should be selected because of stability, reliability, compactness and light weight...consists of three sections and one appendix. In Section I, the applicable steam generator design concepts and general design considerations are reviewed
Optimized Power Generation and Distribution Unit for Mobile Applications
2006-09-01
reference commands to the overall system. This would be consistent with exoskeleton usage. Power Generation (prime mover) Power Distribution...technologies, i.e. technologies that have not yet been used in the same field. • Produce list(s) in order of ranking for different properties...developments have come through material science and bearing technology – it is the material properties of a flywheel that determine the maximum energy that can
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlov, D. A.; Bidus, N. V.; Bobrov, A. I., E-mail: bobrov@phys.unn.ru
2015-01-15
The distribution of elastic strains in a system consisting of a quantum-dot layer and a buried GaAsₓP₁₋ₓ layer is studied using geometric phase analysis. A hypothesis is offered concerning the possibility of controlling the process of the formation of InAs quantum dots in a GaAs matrix using a local isovalent phosphorus impurity.
Proceedings of the Expert Systems Workshop Held in Pacific Grove, California on 16-18 April 1986
1986-04-18
...are distributed and parallel. * - Features unimplemented at present; scheduled for phase 2. Table 1-1: Key design characteristics of ABE...data structuring techniques and a semi-deterministic scheduler. A program for the DF framework consists of a number of independent processing modules
The frontoparietal control system: A central role in mental health
Cole, Michael W.; Repovs, Grega; Anticevic, Alan
2014-01-01
Recent findings suggest the existence of a frontoparietal control system consisting of ‘flexible hubs’ that regulate distributed systems (e.g., visual, limbic, motor) according to current task goals. A growing number of studies are reporting alterations of this control system across a striking range of mental diseases. We suggest this may reflect a critical role for the control system in promoting and maintaining mental health. Specifically, we propose that this system implements feedback control to regulate symptoms as they arise (e.g., excessive anxiety reduced via regulation of amygdala), such that an intact control system is protective against a variety of mental illnesses. Consistent with this possibility, recent results indicate that several major mental illnesses involve altered brain-wide connectivity of the control system, likely altering its ability to regulate symptoms. These results suggest that this ‘immune system of the mind’ may be an especially important target for future basic and clinical research. PMID:24622818
Subsystem design in aircraft power distribution systems using optimization
NASA Astrophysics Data System (ADS)
Chandrasekaran, Sriram
2000-10-01
The research reported in this dissertation focuses on the development of optimization tools for the design of subsystems in a modern aircraft power distribution system. The baseline power distribution system is built around a 270V DC bus. One of the distinguishing features of this power distribution system is the presence of regenerative power from the electrically driven flight control actuators and structurally integrated smart actuators back to the DC bus. The key electrical components of the power distribution system are bidirectional switching power converters, which convert, control and condition electrical power between the sources and the loads. The dissertation is divided into three parts. Part I deals with the formulation of an optimization problem for a sample system consisting of a regulated DC-DC buck converter preceded by an input filter. The individual subsystems are optimized first followed by the integrated optimization of the sample system. It is shown that the integrated optimization provides better results than that obtained by integrating the individually optimized systems. Part II presents a detailed study of piezoelectric actuators. This study includes modeling, optimization of the drive amplifier and the development of a current control law for piezoelectric actuators coupled to a simple mechanical structure. Linear and nonlinear methods to study subsystem interaction and stability are studied in Part III. A multivariable impedance ratio criterion applicable to three phase systems is proposed. Bifurcation methods are used to obtain global stability characteristics of interconnected systems. The application of a nonlinear design methodology, widely used in power systems, to incrementally improve the robustness of a system to Hopf bifurcation instability is discussed.
Ion imaging study of dissociative charge transfer in the N₂⁺ + CH₄ system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pei Linsen; Farrar, James M.
The velocity map ion imaging method is applied to the dissociative charge transfer reactions of N₂⁺ with CH₄ studied in crossed beams. The velocity space images are collected at four collision energies between 0.5 and 1.5 eV, providing both product kinetic energy and angular distributions for the reaction products CH₃⁺ and CH₂⁺. The general shapes of the images are consistent with long range electron transfer from CH₄ to N₂⁺ preceding dissociation, and product kinetic energy distributions are consistent with energy resonance in the initial electron transfer step. The branching ratio for CH₃⁺:CH₂⁺ is 85:15 over the full collision energy range, consistent with literature reports.
Terrestrial Spaceflight Analogs: Antarctica
NASA Technical Reports Server (NTRS)
Crucian, Brian
2013-01-01
Alterations in immune cell distribution and function, circadian misalignment, stress and latent viral reactivation appear to persist during Antarctic winterover at Concordia Station. Some of these changes are similar to those observed in Astronauts, either during or immediately following spaceflight. Others are unique to the Concordia analog. Based on some initial immune data and environmental conditions, Concordia winterover may be an appropriate analog for some flight-associated immune system changes and mission stress effects. An ongoing smaller control study at Neumayer III will address the influence of the hypoxic variable. Changes were observed in the peripheral blood leukocyte distribution consistent with immune mobilization, and similar to those observed during spaceflight. Alterations in cytokine production profiles were observed during winterover that are distinct from those observed during spaceflight, but potentially consistent with those observed during persistent hypobaric hypoxia. Reactivation of latent herpesviruses, which is consistently associated with dysregulation of immune function, was observed during winterover/isolation.
Validation and performance of the LHC cryogenic system through commissioning of the first sector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serio, L.; Bouillot, A.; Casas-Cubillos, J.
2007-12-01
The cryogenic system [1] for the Large Hadron Collider accelerator is presently in its final phase of commissioning at nominal operating conditions. The refrigeration capacity for the LHC is produced using eight large cryogenic plants and eight 1.8 K refrigeration units installed on five cryogenic islands. The machine cryogenic equipment is installed in a 26.7-km-circumference deep underground tunnel ring and is maintained at its nominal operating conditions via a distribution system consisting of transfer lines, cold interconnection boxes at each cryogenic island and a cryogenic distribution line. The functional analysis of the whole system during all operating conditions was established and validated during the first sector commissioning in order to maximize the system availability. Analysis, operating modes, main failure scenarios, results and performance of the cryogenic system are presented.
A comparison of TPS and different measurement techniques in small-field electron beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donmez Kesen, Nazmiye, E-mail: nazo94@gmail.com; Cakir, Aydin; Okutan, Murat
In recent years, small-field electron beams have been used for the treatment of superficial lesions, which requires small circular fields. However, when using very small electron fields, some significant dosimetric problems may occur. In this study, dose distributions and outputs of circular fields with dimensions of 5 cm and smaller, for nominal energies of 6, 9, and 15 MeV from the Siemens ONCOR Linac, were measured and compared with data from a treatment planning system using the pencil-beam algorithm in electron beam calculations. All dose distribution measurements were performed using the Gafchromic EBT film; these measurements were compared with data that were obtained from the Computerized Medical Systems (CMS) XiO treatment planning system (TPS), using the gamma-index method in the PTW VeriSoft software program. Output measurements were performed using the Gafchromic EBT film, an Advanced Markus ion chamber, and thermoluminescent dosimetry (TLD). Although the pencil-beam algorithm is used to model electron beams in many clinics, there is no substantial amount of detailed information in the literature about its use. As the field size decreased, the point of maximum dose moved closer to the surface. Output factors from the different measurement techniques were consistent with one another; differences from the values obtained from the TPS were, at maximum, 42% for 6 and 15 MeV and 32% for 9 MeV. When the dose distributions from the TPS were compared with the measurements from the Gafchromic EBT films, it was observed that the results were consistent for 2-cm diameter and larger fields, but the outputs for fields of 1-cm diameter and smaller were not consistent. In the CMS XiO TPS, which calculates electron beams using the pencil-beam algorithm, the dose distributions of electron treatment fields created with circular cutouts of 1-cm diameter were not appropriate for patient treatment, and the pencil-beam algorithm is not convenient for monitor unit (MU) calculations in electron dosimetry.
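The gamma-index comparison mentioned above was performed in PTW VeriSoft on 2D film data; for readers unfamiliar with the method, a minimal 1D sketch follows, assuming the common 3% dose-difference / 3 mm distance-to-agreement criteria and toy dose profiles.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.03, dta_mm=3.0):
    """Gamma value at each reference point; a point passes if gamma <= 1."""
    d_norm = dd * d_ref.max()              # global dose-difference criterion
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        big_gamma = np.sqrt(((x_eval - xr) / dta_mm) ** 2 +
                            ((d_eval - dr) / d_norm) ** 2)
        gammas.append(big_gamma.min())     # minimize over the evaluated distribution
    return np.array(gammas)

x = np.linspace(-30, 30, 121)              # positions in mm
ref = np.exp(-(x / 12.0) ** 2)             # toy reference (TPS) profile
meas = np.exp(-((x - 1.0) / 12.5) ** 2)    # toy measured (film) profile, slightly shifted
g = gamma_1d(x, ref, x, meas)
print("gamma pass rate:", np.mean(g <= 1.0))
```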
Precision time distribution within a deep space communications complex
NASA Technical Reports Server (NTRS)
Curtright, J. B.
1972-01-01
The Precision Time Distribution System (PTDS) at the Goldstone Deep Space Communications Complex is a practical application of existing technology to the solution of a local problem. The problem was to synchronize four station timing systems to a master source with a relative accuracy consistently and significantly better than 10 microseconds. The solution involved combining a precision timing source, an automatic error detection assembly and a microwave distribution network into an operational system. Upon activation of the completed PTDS two years ago, synchronization accuracy at Goldstone (two station relative) was improved by an order of magnitude. It is felt that the validation of the PTDS mechanization is now complete. Other facilities which have site dispersion and synchronization accuracy requirements similar to Goldstone may find the PTDS mechanization useful in solving their problem. At present, the two station relative synchronization accuracy at Goldstone is better than one microsecond.
Network-based reading system for lung cancer screening CT
NASA Astrophysics Data System (ADS)
Fujino, Yuichi; Fujimura, Kaori; Nomura, Shin-ichiro; Kawashima, Harumi; Tsuchikawa, Megumu; Matsumoto, Toru; Nagao, Kei-ichi; Uruma, Takahiro; Yamamoto, Shinji; Takizawa, Hotaka; Kuroda, Chikazumi; Nakayama, Tomio
2006-03-01
This research aims to support chest computed tomography (CT) medical checkups to decrease the death rate from lung cancer. We have developed a remote cooperative reading system for lung cancer screening over the Internet, a secure transmission function, and a cooperative reading environment. It is called the Network-based Reading System. A telemedicine system involves many issues, such as network costs and data security, if we use it over the Internet, which is an open network. In Japan, broadband access is widespread and its cost is the lowest in the world. We developed our system with attention to the human-machine interface and security. It consists of data entry terminals, a database server, a computer aided diagnosis (CAD) system, and some reading terminals. It uses a secure Digital Imaging and Communication in Medicine (DICOM) encrypting method and Public Key Infrastructure (PKI) based secure DICOM image data distribution. We carried out an experimental trial over the Japan Gigabit Network (JGN), which is the testbed for the Japanese next-generation network, and conducted verification experiments of secure screening image distribution, the addition of several kinds of data, and remote cooperative reading. We found that a network bandwidth of about 1.5 Mbps enabled distribution of screening images and cooperative reading, and that the encryption and image distribution methods we proposed were applicable to the encryption and distribution of general DICOM images via the Internet.
NASA Technical Reports Server (NTRS)
Slaughter, B. C.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Main Propulsion System (MPS) hardware, generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to available data from the Rockwell Downey/NASA JSC FMEA/CIL review. The Orbiter MPS is composed of the Propellant Management Subsystem (PMS) consisting of the liquid oxygen (LO2) and liquid hydrogen (LH2) subsystems and the helium subsystem. The PMS is a system of manifolds, distribution lines, and valves by which the liquid propellants pass from the External Tank to the Space Shuttle Main Engine (SSME). The helium subsystem consists of a series of helium supply tanks and their associated regulators, control valves, and distribution lines. Volume 1 contains the MPS description, assessment results, ground rules and assumptions, and some of the IOA worksheets.
NASA Astrophysics Data System (ADS)
Yang, Can; Ma, Cheng; Hu, Linxi; He, Guangqiang
2018-06-01
We present a hierarchical modulation coherent communication protocol, which simultaneously achieves classical optical communication and continuous-variable quantum key distribution. Our hierarchical modulation scheme consists of a quadrature phase-shift keying modulation for classical communication and a four-state discrete modulation for continuous-variable quantum key distribution. The simulation results based on practical parameters show that it is feasible to transmit both quantum information and classical information on a single carrier. We obtained a secure key rate of 10^{-3} to 10^{-1} bits/pulse within 40 kilometers, while the maximum bit error rate for classical information is about 10^{-7}. Because the continuous-variable quantum key distribution protocol is compatible with standard telecommunication technology, we think our hierarchical modulation scheme can be used to upgrade digital communication systems and extend their functionality in the future.
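As a rough illustration of the hierarchical modulation idea described above, the sketch below superimposes a small four-state displacement (the CV-QKD alphabet) on a large QPSK constellation (the classical bits); the amplitudes, symbol mapping, and variable names are illustrative assumptions, not the parameters of the protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

A_classical = 10.0    # large QPSK amplitude carrying the classical bits (assumed value)
a_quantum = 0.5       # small four-state displacement carrying the CV-QKD alphabet

n = 8
classical_syms = rng.integers(0, 4, n)     # 2 classical bits per pulse
quantum_syms = rng.integers(0, 4, n)       # four-state discrete modulation for QKD

phase_c = np.pi / 4 + classical_syms * np.pi / 2
phase_q = np.pi / 4 + quantum_syms * np.pi / 2

# transmitted coherent-state amplitude: a large QPSK point plus a small displacement
alpha = A_classical * np.exp(1j * phase_c) + a_quantum * np.exp(1j * phase_q)

# the classical receiver only needs the quadrant of the received field, while the
# QKD receiver measures the small residual displacement around that QPSK point
recovered = ((np.angle(alpha) // (np.pi / 2)) % 4).astype(int)
print("sent:     ", classical_syms)
print("recovered:", recovered)
```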
NASA Technical Reports Server (NTRS)
Woodcock, G. R.
1980-01-01
The design analysis of a silicon power conversion system for the solar power satellite (SPS) is summarized. The solar array, consisting of glass encapsulated 50 micrometer silicon solar cells, is described. The general scheme for power distribution to the array/antenna interface is described. Degradation by proton irradiation is considered. The interface between the solar array and the klystron equipped power transmitter is described.
Median Filtering Methods for Non-volcanic Tremor Detection
NASA Astrophysics Data System (ADS)
Damiao, L. G.; Nadeau, R. M.; Dreger, D. S.; Luna, B.; Zhang, H.
2016-12-01
Various properties of median filtering over time and space are used to address challenges posed by the Non-volcanic tremor detection problem. As part of a "Big-Data" effort to characterize the spatial and temporal distribution of ambient tremor throughout the Northern San Andreas Fault system, continuous seismic data from multiple seismic networks with contrasting operational characteristics and distributed over a variety of regions are being used. Automated median filtering methods that are flexible enough to work consistently with these data are required. Tremor is characterized by a low-amplitude, long-duration signal-train whose shape is coherent at multiple stations distributed over a large area. There are no consistent phase arrivals or mechanisms in a given tremor's signal and even the durations and shapes among different tremors vary considerably. A myriad of masquerading noise, anthropogenic and natural-event signals must also be discriminated in order to obtain accurate tremor detections. We present here results of the median methods applied to data from four regions of the San Andreas Fault system in northern California (Geysers Geothermal Field, Napa, Bitterwater and Parkfield) to illustrate the ability of the methods to detect tremor under diverse conditions.
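A minimal sketch of the kind of median-based screening described above is given below: a short running median of the signal envelope tracks slowly varying, low-amplitude energy while a long running median estimates the background. The window lengths, threshold, and synthetic trace are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.ndimage import median_filter

def tremor_candidates(trace, fs, win_short_s=10.0, win_long_s=300.0, thresh=2.0):
    """Flag long-duration, low-amplitude energy bumps in a seismic trace.

    A short running median of the envelope follows the slowly varying tremor
    signal while rejecting spiky transients; a long running median estimates
    the background level.  Samples where the short median exceeds
    thresh * background are flagged.
    """
    env = np.abs(trace)                                   # crude envelope
    smooth = median_filter(env, size=int(win_short_s * fs), mode="nearest")
    background = median_filter(env, size=int(win_long_s * fs), mode="nearest")
    return smooth > thresh * background

# synthetic demo: white noise plus a 100-s emergent low-amplitude wave train
fs = 5.0
t = np.arange(0.0, 1800.0, 1.0 / fs)
rng = np.random.default_rng(1)
trace = rng.normal(0.0, 1.0, t.size)
in_tremor = (t > 800.0) & (t < 900.0)
trace[in_tremor] += 3.0 * np.sin(2 * np.pi * 1.0 * t[in_tremor])
mask = tremor_candidates(trace, fs)
print("fraction of samples flagged:", round(mask.mean(), 3))
```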
Sea-ice floe-size distribution in the context of spontaneous scaling emergence in stochastic systems
NASA Astrophysics Data System (ADS)
Herman, Agnieszka
2010-06-01
Sea-ice floe-size distribution (FSD) in ice-pack covered seas influences many aspects of ocean-atmosphere interactions. However, data concerning FSD in the polar oceans are still sparse and processes shaping the observed FSD properties are poorly understood. Typically, power-law FSDs are assumed, although no feasible explanation has been provided either for this or for other properties of the observed distributions. Consequently, no model exists capable of predicting FSD parameters in any particular situation. Here I show that the observed FSDs can be well represented by a truncated Pareto distribution P(x) = x^(-1-α) exp[(1-α)/x], which is an emergent property of a certain group of multiplicative stochastic systems, described by the generalized Lotka-Volterra (GLV) equation. Building upon this recognition, a possibility of developing a simple agent-based GLV-type sea-ice model is considered. Contrary to simple power-law FSDs, GLV gives consistent estimates of the total floe perimeter, as well as floe-area distribution in agreement with observations.
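A minimal numerical sketch of the quoted distribution is given below: it evaluates P(x) = x^(-1-α) exp[(1-α)/x], normalizes it numerically, and confirms that the large-x tail follows the power law x^(-1-α). The exponent value is illustrative, not a fitted FSD parameter.

```python
import numpy as np

def truncated_pareto_pdf(x, alpha):
    """Unnormalized P(x) = x**(-1-alpha) * exp((1-alpha)/x)."""
    return x ** (-1.0 - alpha) * np.exp((1.0 - alpha) / x)

alpha = 1.8                                # illustrative exponent, not a fitted value
x = np.logspace(-2, 3, 2000)               # floe sizes, arbitrary units
p = truncated_pareto_pdf(x, alpha)
p /= np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(x))    # trapezoid normalization

# the exp[(1-alpha)/x] factor suppresses small floes, while x**(-1-alpha)
# governs the large-floe tail; check the tail slope on a log-log scale
tail = x > 10.0
slope = np.polyfit(np.log(x[tail]), np.log(p[tail]), 1)[0]
print("log-log tail slope:", round(slope, 2), " expected about", -(1.0 + alpha))
```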
Factors Affecting Atrazine Concentration and Quantitative Determination in Chlorinated Water
Although the herbicide atrazine has been reported to not react measurably with free chlorine during drinking water treatment, this work demonstrates that at contact times consistent with drinking water distribution system residence times, a transformation of atrazine can be obser...
Modelling and control of a microgrid including photovoltaic and wind generation
NASA Astrophysics Data System (ADS)
Hussain, Mohammed Touseef
The extensive increase in distributed generation (DG) penetration and the existence of multiple DG units at the distribution level have introduced the notion of the micro-grid. This thesis develops detailed non-linear and small-signal dynamic models of a microgrid that includes PV, wind, and conventional small-scale generation along with their power electronics interfaces and filters. The developed models are used to evaluate the generation mix from the various DGs needed for satisfactory steady-state operation of the microgrid. In order to understand the interaction of the DGs within the microgrid, two simpler configurations were considered first. The first consists of a microalternator, PV, and their electronics; the second consists of a microalternator and a wind system, each connected to the power system grid. Nonlinear and linear state-space models of each microgrid are developed. Small-signal analysis showed that a large participation of PV/wind can drive the microgrid to the brink of instability without adequate control. Non-linear simulations are carried out to verify the results obtained through small-signal analysis. The role of the generation mix of a composite microgrid consisting of wind, PV, and conventional generation was investigated next, and the findings from the smaller systems were verified through nonlinear and small-signal modeling. A central supervisory capacitor energy storage controller interfaced through a STATCOM was proposed to monitor and enhance microgrid operation. The potential of various control inputs to provide additional damping to the system was evaluated through decomposition techniques, and the signals identified as having damping content were employed to design the supervisory control system. The controller gains were tuned through an optimal pole placement technique. Simulation studies demonstrate that the STATCOM voltage phase angle and the PV inverter phase angle were the best inputs for enhanced stability boundaries.
Telerobotic system performance measurement - Motivation and methods
NASA Technical Reports Server (NTRS)
Kondraske, George V.; Khoury, George J.
1992-01-01
A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network is presented, along with initial efforts to implement the strategy. Consideration is given to general systems performance theory (GSPT), originally developed to tackle human performance problems, as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and characterization of the performance of the subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource-economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed, and insight into the design of test protocols that elicit application-independent data is described.
Analysis of critical operating conditions for LV distribution networks with microgrids
NASA Astrophysics Data System (ADS)
Zehir, M. A.; Batman, A.; Sonmez, M. A.; Font, A.; Tsiamitros, D.; Stimoniaris, D.; Kollatou, T.; Bagriyanik, M.; Ozdemir, A.; Dialynas, E.
2016-11-01
An increase in the penetration of Distributed Generation (DG) in distribution networks raises the risk of voltage limit violations while contributing to line losses. Especially in low voltage (LV) distribution networks (secondary distribution networks), the impacts of active power flows on bus voltages and network losses are more dominant. As network operators must meet regulatory limits, they have to take into account the most critical operating conditions in their systems. This study aims to present the impact of the worst-case operating conditions of LV distribution networks comprising microgrids. Simulation studies are performed on a field-data-based virtual test-bed. The simulations are repeated for several cases consisting of different microgrid points of connection with different network loading and microgrid supply/demand conditions.
Performance Evaluation of Communication Software Systems for Distributed Computing
NASA Technical Reports Server (NTRS)
Fatoohi, Rod
1996-01-01
In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokogawa, D., E-mail: d.yokogawa@chem.nagoya-u.ac.jp; Institute of Transformative Bio-Molecules
2016-09-07
Theoretical approaches to designing bright bio-imaging molecules are among the most rapidly progressing. However, because of the system size and the required computational accuracy, the number of theoretical studies is, to our knowledge, limited. To overcome these difficulties, we developed a new method based on the reference interaction site model self-consistent field explicitly including the spatial electron density distribution, combined with time-dependent density functional theory. We applied it to the calculation of indole and 5-cyanoindole in the ground and excited states in the gas and solution phases. The changes in the optimized geometries were clearly explained with resonance structures, and the Stokes shift was correctly reproduced.
A Review of Distributed Control Techniques for Power Quality Improvement in Micro-grids
NASA Astrophysics Data System (ADS)
Zeeshan, Hafiz Muhammad Ali; Nisar, Fatima; Hassan, Ahmad
2017-05-01
A micro-grid is typically visualized as a small-scale local power supply network based on distributed energy resources (DERs) that can operate in parallel with the grid as well as in a standalone manner. The distributed generator of a micro-grid system usually has a converter-inverter type topology acting as a non-linear load and injecting harmonics into the distribution feeder. Hence, the negative effects of distributed generation sources and components on power quality are clearly witnessed. In this paper, a review of distributed control approaches for power quality improvement is presented, encompassing harmonic compensation, loss mitigation, and optimum power sharing in a multi-source, multi-load distributed power network. The decentralized subsystems for harmonic compensation and active-reactive power sharing accuracy have been analysed in detail. Results have been validated to be consistent with IEEE standards.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, R.
1993-01-01
The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on investigating the effects of data and process replication on hard and soft real-time systems, especially the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.
NASA Technical Reports Server (NTRS)
Fridlind, A. M.; Ackerman, A. S.; Grandin, A.; Dezitter, F.; Weber, M.; Strapp, J. W.; Korolev, A. V.; Williams, C. R.
2015-01-01
Occurrences of jet engine power loss and damage have been associated with flight through fully glaciated deep convection at -10 to -50 degrees Centigrade. Power loss events commonly occur during flight through radar reflectivity (Z_e) of less than 20-30 dBZ and no more than moderate turbulence, often overlying moderate to heavy rain near the surface. During 2010-2012, Airbus carried out flight tests seeking to characterize the highest ice water content (IWC) in such low-radar-reflectivity regions of large, cold-topped storm systems in the vicinity of Cayenne, Darwin, and Santiago. Within the highest IWC regions encountered, at typical sampling elevations (circa 11 kilometers), the measured ice size distributions exhibit a notably narrow concentration of mass over area-equivalent diameters of 100-500 micrometers. Given substantial and poorly quantified measurement uncertainties, here we evaluate the consistency of the Airbus in situ measurements with ground-based profiling radar observations obtained under quasi-steady, heavy stratiform rain conditions in one of the Airbus-sampled locations. We find that profiler-observed radar reflectivities and mean Doppler velocities at Airbus sampling temperatures are generally consistent with those calculated from in situ size-distribution measurements. We also find that column simulations using the in situ size distributions as an upper boundary condition are generally consistent with observed profiles of radar reflectivity (Z_e), mean Doppler velocity (MDV), and retrieved rain rate. The results of these consistency checks motivate an examination of the microphysical pathways that could be responsible for the observed size-distribution features in Ackerman et al. (2015).
NASA Astrophysics Data System (ADS)
Maksimov, P. P.; Tsvyk, A. I.; Shestopalov, V. P.
1985-10-01
The effect of local phase nonuniformities of the diffraction gratings and the field distribution of the open cavity on the electronic efficiency of a diffraction-radiation generator (DRG) is analyzed numerically on the basis of a self-consistent system of nonlinear stationary equations for the DRG. It is shown that the interaction power and efficiency of a DRG can be increased by the use of an open cavity with a nonuniform diffraction grating and a complex form of microwave field distribution over the interaction space.
Lodewyck, Jérôme; Debuisschert, Thierry; García-Patrón, Raúl; Tualle-Brouri, Rosa; Cerf, Nicolas J; Grangier, Philippe
2007-01-19
An intercept-resend attack on a continuous-variable quantum-key-distribution protocol is investigated experimentally. By varying the interception fraction, one can implement a family of attacks where the eavesdropper totally controls the channel parameters. In general, such attacks add excess noise in the channel, and may also result in non-Gaussian output distributions. We implement and characterize the measurements needed to detect these attacks, and evaluate experimentally the information rates available to the legitimate users and the eavesdropper. The results are consistent with the optimality of Gaussian attacks resulting from the security proofs.
Force Network of a 2D Frictionless Emulsion System
NASA Astrophysics Data System (ADS)
Desmond, Kenneth; Weeks, Eric R.
2010-03-01
We use a quasi-two-dimensional emulsion as a new experimental system to measure various jamming transition properties. Our system consists of oil-in-water emulsion droplets confined between two parallel plates, so that the droplets are squeezed into quasi-two-dimensional disks, analogous to granular photoelastic disks. By varying the droplet area fraction, we investigate the force network of this system as we cross through the jamming transition. At a critical area fraction, the composition of the system is no longer characterized primarily by circular disks, but by disks deformed to varying degrees. Quantifying the deformation provides information about the forces acting upon each droplet, and ultimately the force network. The probability distribution of forces is similar to that found for photoelastic disks, with the width of the force distribution narrowing with increasing packing fraction.
Analysis of Energy Storage System with Distributed Hydrogen Production and Gas Turbine
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Bartela, Łukasz; Dubiel-Jurgaś, Klaudia
2017-12-01
This paper presents the concept of an energy storage system based on power-to-gas-to-power (P2G2P) technology. The system consists of a gas turbine co-firing hydrogen, which is supplied from distributed electrolysis installations powered by wind farms located a short distance from the potential construction site of the gas turbine. In the paper, a location for this type of investment was selected. As part of the analyses, the area of wind farms covered by the storage system and the share of the electricity production subjected to storage were varied, and the dependence of the hydrogen production potential and the gas turbine operating time on these quantities was analyzed. Additionally, preliminary economic analyses of the proposed energy storage system were carried out.
Assessing the Operational Resilience of Electrical Distribution Systems
2017-09-01
... such as solar, hydro, wind, nuclear, or gas turbine power plants, produce electricity. Transmission systems move electricity in bulk from the originating ... generation facility can cause problems (Knaus, 2017). Disruptions to transmission systems, either from the loss of a high-voltage line or a substation, can ...
Distributed Engine Control Empirical/Analytical Verification Tools
NASA Technical Reports Server (NTRS)
DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan
2013-01-01
NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines.The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.
Modeling of luminance distribution in CAVE-type virtual reality systems
NASA Astrophysics Data System (ADS)
Meironke, Michał; Mazikowski, Adam
2017-08-01
At present, some of the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five, or six projection screens; in the case of six screens, they are arranged in the form of a cube. Providing the user with a high level of immersion in such systems depends largely on the optical properties of the system. The modeling of physical phenomena nowadays plays a major role in most fields of science and technology, since it allows the operation of a device to be simulated without making any changes to its physical construction. In this paper, the distribution of luminance in CAVE-type virtual reality systems was modelled. Calculations were performed for a model of a 6-walled CAVE-type installation based on the Immersive 3D Visualization Laboratory, situated at the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material in order to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modelled CAVE-type installation, as well as the results, are presented, together with a brief discussion of the results and the usefulness of the developed model.
Determining on-fault earthquake magnitude distributions from integer programming
Geist, Eric L.; Parsons, Thomas E.
2018-01-01
Earthquake magnitude distributions among faults within a fault system are determined from regional seismicity and fault slip rates using binary integer programming. A synthetic earthquake catalog (i.e., list of randomly sampled magnitudes) that spans millennia is first formed, assuming that regional seismicity follows a Gutenberg-Richter relation. Each earthquake in the synthetic catalog can occur on any fault and at any location. The objective is to minimize misfits in the target slip rate for each fault, where slip for each earthquake is scaled from its magnitude. The decision vector consists of binary variables indicating which locations are optimal among all possibilities. Uncertainty estimates in fault slip rates provide explicit upper and lower bounding constraints to the problem. An implicit constraint is that an earthquake can only be located on a fault if it is long enough to contain that earthquake. A general mixed-integer programming solver, consisting of a number of different algorithms, is used to determine the optimal decision vector. A case study is presented for the State of California, where a 4 kyr synthetic earthquake catalog is created and faults with slip ≥3 mm/yr are considered, resulting in >10^6 variables. The optimal magnitude distributions for each of the faults in the system span a rich diversity of shapes, ranging from characteristic to power-law distributions.
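A minimal sketch of the binary-integer-programming formulation described above is given below, using scipy.optimize.milp (SciPy ≥ 1.9) on a toy problem: binary variables place synthetic earthquakes on faults, slack variables absorb the slip misfit being minimized, and an event is admissible only on a fault long enough to contain it. The catalog, the slip- and length-magnitude scalings, and the fault parameters are illustrative assumptions, not the California case study.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(2)

# toy inputs; values and scaling relations are illustrative, not the California study
n_eq, n_fault = 60, 3
mags = rng.uniform(6.0, 7.5, n_eq)                  # synthetic catalog magnitudes
slip_per_event = 10 ** (0.5 * mags - 3.3)           # m, generic slip-magnitude scaling
rup_len = 10 ** (mags - 5.0)                        # km, generic length-magnitude scaling
fault_len = np.array([300.0, 120.0, 60.0])          # km
target_slip = np.array([30.0, 12.0, 8.0])           # m of cumulative slip over the catalog

# decision vector: binary x[i, j] (event i placed on fault j), then slack t[j] (slip misfit)
nx = n_eq * n_fault
c = np.concatenate([np.zeros(nx), np.ones(n_fault)])    # minimize the total slip misfit

rows, lbs, ubs = [], [], []
for j in range(n_fault):                    # enforce |sum_i slip_i * x_ij - target_j| <= t_j
    a = np.zeros(nx + n_fault)
    a[j::n_fault][:n_eq] = slip_per_event
    lo, hi = a.copy(), a.copy()
    lo[nx + j] = -1.0                       # sum - t_j <= target_j
    rows.append(lo); lbs.append(-np.inf); ubs.append(target_slip[j])
    hi[nx + j] = 1.0                        # sum + t_j >= target_j
    rows.append(hi); lbs.append(target_slip[j]); ubs.append(np.inf)
for i in range(n_eq):                       # each event is placed at most once
    a = np.zeros(nx + n_fault)
    a[i * n_fault:(i + 1) * n_fault] = 1.0
    rows.append(a); lbs.append(0.0); ubs.append(1.0)

# an event is admissible only on a fault long enough to contain its rupture
ub_x = (rup_len[:, None] <= fault_len[None, :]).astype(float).ravel()
bounds = Bounds(np.zeros(nx + n_fault), np.concatenate([ub_x, np.full(n_fault, np.inf)]))
integrality = np.concatenate([np.ones(nx), np.zeros(n_fault)])

res = milp(c, constraints=LinearConstraint(np.array(rows), lbs, ubs),
           integrality=integrality, bounds=bounds)
placement = np.round(res.x[:nx]).reshape(n_eq, n_fault)
print("slip misfit per fault (m):", placement.T @ slip_per_event - target_slip)
```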
Mean-field theory of a plastic network of integrate-and-fire neurons.
Chen, Chun-Chung; Jasnow, David
2010-01-01
We consider a noise-driven network of integrate-and-fire neurons. The network evolves as result of the activities of the neurons following spike-timing-dependent plasticity rules. We apply a self-consistent mean-field theory to the system to obtain the mean activity level for the system as a function of the mean synaptic weight, which predicts a first-order transition and hysteresis between a noise-dominated regime and a regime of persistent neural activity. Assuming Poisson firing statistics for the neurons, the plasticity dynamics of a synapse under the influence of the mean-field environment can be mapped to the dynamics of an asymmetric random walk in synaptic-weight space. Using a master equation for small steps, we predict a narrow distribution of synaptic weights that scales with the square root of the plasticity rate for the stationary state of the system given plausible physiological parameter values describing neural transmission and plasticity. The dependence of the distribution on the synaptic weight of the mean-field environment allows us to determine the mean synaptic weight self-consistently. The effect of fluctuations in the total synaptic conductance and plasticity step sizes are also considered. Such fluctuations result in a smoothing of the first-order transition for low number of afferent synapses per neuron and a broadening of the synaptic-weight distribution, respectively.
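A minimal simulation sketch of the random-walk picture described above is given below: an ensemble of synapses takes potentiation/depression steps whose bias pulls the weights toward a fixed point standing in for the mean-field environment, and the width of the stationary weight distribution scales with the square root of the step size (the plasticity rate). The drift, step sizes, and fixed point are illustrative assumptions, not the physiological parameter values of the paper.

```python
import numpy as np

def stationary_weights(step, n_syn=5000, n_iter=20000, drift=0.1, seed=3):
    """Ensemble of synapses performing a biased random walk in weight space.

    The bias pulls weights toward w* = 0.5 (a stand-in for the mean-field
    fixed point); `step` plays the role of the plasticity rate.
    """
    rng = np.random.default_rng(seed)
    w = np.full(n_syn, 0.5)
    for _ in range(n_iter):
        p_up = 0.5 - drift * (w - 0.5)       # potentiation more likely below w*
        up = rng.random(n_syn) < p_up
        w = np.clip(w + np.where(up, step, -step), 0.0, 1.0)
    return w

for step in (1e-3, 4e-3):
    w = stationary_weights(step)
    print(f"step {step:g}: std of stationary weights = {w.std():.4f}")
# quadrupling the step roughly doubles the width, i.e. the distribution
# width scales with the square root of the plasticity rate
```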
Positron Scanner for Locating Brain Tumors
DOE R&D Accomplishments Database
Rankowitz, S.; Robertson, J. S.; Higinbotham, W. A.; Rosenblum, M. J.
1962-03-01
A system is described that makes use of positron emitting isotopes for locating brain tumors. This system inherently provides more information about the distribution of radioactivity in the head in less time than existing scanners which use one or two detectors. A stationary circular array of 32 scintillation detectors scans a horizontal layer of the head from many directions simultaneously. The data, consisting of the number of counts in all possible coincidence pairs, are coded and stored in the memory of a Two-Dimensional Pulse-Height Analyzer. A unique method of displaying and interpreting the data is described that enables rapid approximate analysis of complex source distribution patterns. (auth)
Discrete shaped strain sensors for intelligent structures
NASA Technical Reports Server (NTRS)
Andersson, Mark S.; Crawley, Edward F.
1992-01-01
Design of discrete, highly distributed sensor systems for intelligent structures has been studied. Data obtained indicate that discrete strain-averaging sensors satisfy the functional requirements for distributed sensing of intelligent structures. Bartlett and Gauss-Hanning sensors, in particular, provide good wavenumber characteristics while meeting the functional requirements. They are characterized by good rolloff rates and positive Fourier transforms for all wavenumbers. For the numerical integration schemes, Simpson's rule is considered to be very simple to implement and consistently provides accurate results for five sensors or more. It is shown that a sensor system that satisfies the functional requirements can be applied to a structure that supports mode shapes with purely sinusoidal curvature.
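As a small illustration of the Simpson-rule integration mentioned above, the sketch below integrates a toy curvature profile sampled at five equally spaced sensor locations; the profile and sensor count are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import simpson

# five equally spaced strain-averaging sensor readings along a beam (toy values)
x = np.linspace(0.0, 1.0, 5)              # sensor locations, normalized span
curvature = np.sin(np.pi * x)             # sampled modal curvature

# Simpson's rule wants an odd number of equally spaced samples (5, 7, 9, ...)
estimate = simpson(curvature, x=x)
print("Simpson estimate:", round(estimate, 5), " exact:", round(2.0 / np.pi, 5))
```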
Advanced S-Band studies using the TDRSS communications satellite
NASA Technical Reports Server (NTRS)
Jenkins, Jeffrey D.; Osborne, William P.; Fan, Yiping
1994-01-01
This report will describe the design, implementation, and results of a propagation experiment which used TDRSS to transmit spread signals at S-Band to an instrumented mobile receiver. The results consist of fade measurements and distribution functions in 21 environments across the Continental United States (CONUS). From these distribution functions, some idea may be gained about what system designers should expect for excess path loss in many mobile environments. Some of these results may be compared against similar measurements made with narrowband beacon measurements. Such comparisons provide insight into what gains the spread signaling system may or may not have in multipath and shadowing environments.
Distributed Accounting on the Grid
NASA Technical Reports Server (NTRS)
Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.
2001-01-01
By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.
Cost comparison of unit dose and traditional drug distribution in a long-term-care facility.
Lepinski, P W; Thielke, T S; Collins, D M; Hanson, A
1986-11-01
Unit dose and traditional drug distribution systems were compared in a 352-bed long-term-care facility by analyzing nursing time, medication-error rate, medication costs, and waste. Time spent by nurses in preparing, administering, charting, and other tasks associated with medications was measured with a stop-watch on four different nursing units during six-week periods before and after the nursing home began using unit dose drug distribution. Medication-error rate before and after implementation of the unit dose system was determined by patient profile audits and medication inventories. Medication costs consisted of patient billing costs (acquisition cost plus fee) and cost of medications destroyed. The unit dose system required a projected 1507.2 hours less nursing time per year. Mean medication-error rates were 8.53% and 0.97% for the traditional and unit dose systems, respectively. Potential annual savings because of decreased medication waste with the unit dose system were $2238.72. The net increase in cost for the unit dose system was estimated at $615.05 per year, or approximately $1.75 per patient. The unit dose system appears safer and more time-efficient than the traditional system, although its costs are higher.
Distributed autonomous systems: resource management, planning, and control algorithms
NASA Astrophysics Data System (ADS)
Smith, James F., III; Nguyen, ThanhVu H.
2005-05-01
Distributed autonomous systems, i.e., systems that have separated distributed components, each of which, exhibit some degree of autonomy are increasingly providing solutions to naval and other DoD problems. Recently developed control, planning and resource allocation algorithms for two types of distributed autonomous systems will be discussed. The first distributed autonomous system (DAS) to be discussed consists of a collection of unmanned aerial vehicles (UAVs) that are under fuzzy logic control. The UAVs fly and conduct meteorological sampling in a coordinated fashion determined by their fuzzy logic controllers to determine the atmospheric index of refraction. Once in flight no human intervention is required. A fuzzy planning algorithm determines the optimal trajectory, sampling rate and pattern for the UAVs and an interferometer platform while taking into account risk, reliability, priority for sampling in certain regions, fuel limitations, mission cost, and related uncertainties. The real-time fuzzy control algorithm running on each UAV will give the UAV limited autonomy allowing it to change course immediately without consulting with any commander, request other UAVs to help it, alter its sampling pattern and rate when observing interesting phenomena, or to terminate the mission and return to base. The algorithms developed will be compared to a resource manager (RM) developed for another DAS problem related to electronic attack (EA). This RM is based on fuzzy logic and optimized by evolutionary algorithms. It allows a group of dissimilar platforms to use EA resources distributed throughout the group. For both DAS types significant theoretical and simulation results will be presented.
Crack surface roughness in three-dimensional random fuse networks
NASA Astrophysics Data System (ADS)
Nukala, Phani Kumar V. V.; Zapperi, Stefano; Šimunović, Srđan
2006-08-01
Using large system sizes with extensive statistical sampling, we analyze the scaling properties of crack roughness and damage profiles in the three-dimensional random fuse model. The analysis of damage profiles indicates that damage accumulates in a diffusive manner up to the peak load, and localization sets in abruptly at the peak load, starting from a uniform damage landscape. The global crack width scales as W ~ L^0.5 and is consistent with the scaling of the localization length ξ ~ L^0.5 used in the data collapse of damage profiles in the postpeak regime. This consistency between the global crack roughness exponent and the postpeak damage profile localization length supports the idea that the postpeak damage profile is predominantly due to the localization produced by the catastrophic failure, which at the same time results in the formation of the final crack. Finally, the crack width distributions can be collapsed for different system sizes and follow a log-normal distribution.
Lunar PMAD technology assessment
NASA Technical Reports Server (NTRS)
Metcalf, Kenneth J.
1992-01-01
This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
Jiang, Zhi-Shen; Wang, Fei; Xing, Da-Wei; Xu, Ting; Yan, Jian-Hua; Cen, Ke-Fa
2012-11-01
An experimental method using tunable diode laser absorption spectroscopy combined with a model and a reconstruction algorithm was studied to reconstruct the two-dimensional distribution of gas concentration. The feasibility of the reconstruction program was verified by numerical simulation. A diagnostic system consisting of 24 lasers was built for the measurement of H2O in the methane/air premixed flame. The two-dimensional distribution of H2O concentration in the flame was reconstructed, showing that the reconstruction results reflect the real two-dimensional distribution of H2O concentration in the flame. This diagnostic scheme provides a promising solution for combustion control.
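The abstract does not state which reconstruction algorithm was used, so the sketch below illustrates one common choice for such beam-integrated measurements, the algebraic reconstruction technique (Kaczmarz iteration) with a nonnegativity clamp; the 8x8 phantom and the 16 row/column beams standing in for the 24-laser fan are illustrative assumptions.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps) with nonnegativity.

    A : (n_beams, n_pixels) path-length matrix of the laser beams
    b : (n_beams,) line-integrated absorbance measurements
    """
    x = np.zeros(A.shape[1])
    row_norm2 = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] > 0.0:
                x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
                x = np.clip(x, 0.0, None)       # concentrations cannot be negative
    return x

# toy 8x8 phantom probed by 8 horizontal and 8 vertical beams
n = 8
phantom = np.zeros((n, n))
phantom[2:6, 3:6] = 1.0                          # a "hot" H2O region
A = np.zeros((2 * n, n * n))
for r in range(n):
    A[r, r * n:(r + 1) * n] = 1.0                # horizontal beams
for col in range(n):
    A[n + col, col::n] = 1.0                     # vertical beams
b = A @ phantom.ravel()
recon = art_reconstruct(A, b).reshape(n, n)
print("max mismatch to the line integrals:", np.abs(A @ recon.ravel() - b).max())
# with only 16 beams the image is a smooth approximation of the phantom,
# which is why adding more beam angles (24 lasers in the experiment) helps
```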
From microscopic taxation and redistribution models to macroscopic income distributions
NASA Astrophysics Data System (ADS)
Bertotti, Maria Letizia; Modanese, Giovanni
2011-10-01
We present here a general framework, expressed by a system of nonlinear differential equations, suitable for the modeling of taxation and redistribution in a closed society. This framework allows one to describe the evolution of income distribution over the population and to explain the emergence of collective features based on knowledge of the individual interactions. By making different choices of the framework parameters, we construct different models, whose long-time behavior is then investigated. Asymptotic stationary distributions are found, which enjoy similar properties as those observed in empirical distributions. In particular, they exhibit power law tails of Pareto type and their Lorenz curves and Gini indices are consistent with some real world ones.
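Since the abstract compares model output against empirical Lorenz curves and Gini indices, the sketch below shows how both quantities can be computed from any stationary income sample; the Pareto-tailed sample used here is illustrative and is not output of the authors' model.

```python
import numpy as np

def lorenz_and_gini(incomes):
    """Lorenz curve points and Gini index for a sample of incomes."""
    inc = np.sort(np.asarray(incomes, dtype=float))
    n = inc.size
    lorenz = np.insert(np.cumsum(inc) / inc.sum(), 0, 0.0)   # cumulative income share
    pop = np.linspace(0.0, 1.0, n + 1)                       # cumulative population share
    # Gini = 1 - 2 * (area under the Lorenz curve), trapezoid rule
    gini = 1.0 - np.sum((lorenz[1:] + lorenz[:-1]) * np.diff(pop))
    return pop, lorenz, gini

# illustrative Pareto-tailed sample standing in for a model's stationary distribution
rng = np.random.default_rng(4)
incomes = (rng.pareto(2.5, 100_000) + 1.0) * 20_000
_, _, g = lorenz_and_gini(incomes)
print("Gini index:", round(g, 3))     # a pure Pareto(2.5) distribution gives about 0.25
```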
Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.
Kounios, J; Holcomb, P J
1994-07-01
Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.
A modular Space Station/Base electrical power system - Requirements and design study.
NASA Technical Reports Server (NTRS)
Eliason, J. T.; Adkisson, W. B.
1972-01-01
The requirements and procedures necessary for definition and specification of an electrical power system (EPS) for the future space station are discussed herein. The considered space station EPS consists of a replaceable main power module with self-contained auxiliary power, guidance, control, and communication subsystems. This independent power source may 'plug into' a space station module which has its own electrical distribution, control, power conditioning, and auxiliary power subsystems. Integration problems are discussed, and a transmission system selected with local floor-by-floor power conditioning and distribution in the station module. This technique eliminates the need for an immediate long range decision on the ultimate space base power sources by providing capability for almost any currently considered option.
Low cost management of replicated data in fault-tolerant distributed systems
NASA Technical Reports Server (NTRS)
Joseph, Thomas A.; Birman, Kenneth P.
1990-01-01
Many distributed systems replicate data for fault tolerance or availability. In such systems, a logical update on a data item results in a physical update on a number of copies. The synchronization and communication required to keep the copies of replicated data consistent introduce a delay when operations are performed. A technique is described that relaxes the usual degree of synchronization, permitting replicated data items to be updated concurrently with other operations, while at the same time ensuring that correctness is not violated. The additional concurrency thus obtained results in better response time when performing operations on replicated data. How this technique performs in conjunction with a roll-back and a roll-forward failure recovery mechanism is also discussed.
Programming secure mobile agents in healthcare environments using role-based permissions.
Georgiadis, C K; Baltatzis, J; Pangalos, G I
2003-01-01
The healthcare environment consists of vast amounts of dynamic and unstructured information, distributed over a large number of information systems. Mobile agent technology is having an ever-growing impact on the delivery of medical information. It supports acquiring and manipulating information distributed over a large number of information systems, and it is suitable for medical staff untrained in computing. However, the introduction of mobile agents creates advanced threats to sensitive healthcare information unless proper countermeasures are taken. By applying the role-based approach to the authorization problem, we ease the sharing of information between hospital information systems and reduce the administration burden. Different initiatives for the agent's migration method result in different methods of assigning roles to the agent.
NASA Technical Reports Server (NTRS)
Soltis, Steven R.; Ruwart, Thomas M.; OKeefe, Matthew T.
1996-01-01
The global file system (GFS) is a prototype design for a distributed file system in which cluster nodes physically share storage devices connected via a network-like fiber channel. Networks and network-attached storage devices have advanced to a level of performance and extensibility so that the previous disadvantages of shared disk architectures are no longer valid. This shared storage architecture attempts to exploit the sophistication of storage device technologies whereas a server architecture diminishes a device's role to that of a simple component. GFS distributes the file system responsibilities across processing nodes, storage across the devices, and file system resources across the entire storage pool. GFS caches data on the storage devices instead of the main memories of the machines. Consistency is established by using a locking mechanism maintained by the storage devices to facilitate atomic read-modify-write operations. The locking mechanism is being prototyped in the Silicon Graphics IRIX operating system and is accessed using standard Unix commands and modules.
2013-06-01
... simulation of complex systems (Sterman 2000, Meadows 2008): a) Causal Loop Diagrams. A Causal Loop Diagram (CLD) is used to represent the feedback structure of the dynamic system. CLDs consist of variables in the system connected by arrows to show their causal influences and relationships. ... The distribution of orders will be included in the model. 6.4.2 Causal Loop Diagrams: The CLD, as seen in Figure 5, is derived from the WDA constructs for the ...
Advanced Manufacturing Systems in Food Processing and Packaging Industry
NASA Astrophysics Data System (ADS)
Shafie Sani, Mohd; Aziz, Faieza Abdul
2013-06-01
In this paper, several advanced manufacturing systems in the food processing and packaging industry are reviewed, including biodegradable smart packaging and nanocomposites, and advanced automation control systems consisting of fieldbus technology, distributed control systems, and food safety inspection features. The main purposes of current technology in the food processing and packaging industry are discussed, the major concerns being the efficiency of the plant process, productivity, quality, and safety. These applications were chosen because they are robust, flexible, reconfigurable, efficient, and preserve the quality of the food.
NASA Technical Reports Server (NTRS)
Bifano, W. J.; Ratajczak, A. F.; Ice, W. J.
1978-01-01
A stand-alone photovoltaic power system for installation in the Papago Indian village of Schuchuli is being designed and fabricated to provide electricity for village water pumping and basic domestic needs. The system will consist of a 3.5 kW (peak) photovoltaic array; controls, instrumentation, and storage batteries located in an electrical equipment building; and a 120 volt dc village distribution network. The system will power a 2 HP dc electric motor.
NASA Astrophysics Data System (ADS)
Mahmud, Md. Almostasim; MacDonald, Brendan D.
2017-01-01
In this paper we experimentally examine evaporation flux distributions and modes of interfacial energy transport for continuously fed evaporating spherical sessile water droplets in a regime that is relevant for applications, particularly for evaporative cooling systems. The contribution of the thermal conduction through the vapor phase was found to be insignificant compared to the thermal conduction through the liquid phase for the conditions we investigated. The local evaporation flux distributions associated with thermal conduction were found to vary along the surface of the droplet. Thermal conduction provided a majority of the energy required for evaporation but did not account for all of the energy transport, contributing 64 ±3 % , 77 ±3 % , and 77 ±4 % of the energy required for the three cases we examined. Based on the temperature profiles measured along the interface we found that thermocapillary flow was predicted to occur in our experiments, and two convection cells were consistent with the temperature distributions for higher substrate temperatures while a single convection cell was consistent with the temperature distributions for a lower substrate temperature.
Seat Interfaces for Aircrew Performance and Safety
2010-01-01
The Quantum-II Desktop System consists of a keyboard and hardware accessories (electrodes, cables, etc.), and interfaces with a desktop computer via software ... segment. Resistance and reactance data were collected to estimate blood volume changes. The Quantum-II Desktop system collected continuous data of ... The mockup also included a laptop computer, a ...
Progress in space power technology
NASA Technical Reports Server (NTRS)
Mullin, J. P.; Randolph, L. P.; Hudson, W. R.
1980-01-01
The National Aeronautics and Space Administration's Space Power Research and Technology Program has the objective of providing the technology base for future space power systems. The current technology program which consists of photovoltaic energy conversion, chemical energy conversion and storage, thermal-to-electric conversion, power systems management and distribution, and advanced energetics is discussed. In each area highlights, current programs, and near-term directions will be presented.
Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments
NASA Technical Reports Server (NTRS)
Bisws, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)
2002-01-01
The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes, that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
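The MiniMax objective described above, minimizing the maximum execution time over heterogeneous processors, can be illustrated with the greedy sketch below; it ignores communication costs and the multilevel coarsening of the actual partitioner, and the task weights and processor speeds are illustrative assumptions.

```python
def minimax_greedy(task_weights, proc_speeds):
    """Greedy mapping of weighted tasks onto heterogeneous processors.

    Each task (heaviest first) goes to the processor whose finish time after
    taking it is smallest, approximating the MiniMax objective of minimizing
    the maximum per-processor execution time.
    """
    finish = [0.0] * len(proc_speeds)
    assignment = {}
    for task, w in sorted(enumerate(task_weights), key=lambda t: -t[1]):
        p = min(range(len(proc_speeds)), key=lambda q: finish[q] + w / proc_speeds[q])
        finish[p] += w / proc_speeds[p]
        assignment[task] = p
    return assignment, max(finish)

# toy workload: 12 submesh weights mapped onto 3 processors of unequal speed
weights = [8, 7, 6, 6, 5, 4, 4, 3, 3, 2, 2, 1]
speeds = [3.0, 2.0, 1.0]                  # relative processing power
mapping, makespan = minimax_greedy(weights, speeds)
print("estimated maximum execution time:", round(makespan, 2))
print("tasks per processor:", [list(mapping.values()).count(p) for p in range(len(speeds))])
```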
Tcherniavski, Iouri; Kahrizi, Mojtaba
2008-11-20
Using a gradient optimization method with objective functions formulated in terms of a signal-to-noise ratio (SNR) calculated at given values of the prescribed spatial ground resolution, optimization problems of the geometrical parameters of a distributed optical system and a charge-coupled device of a space-based optical-electronic system are solved for sample optical systems consisting of two and three annular subapertures. The modulation transfer function (MTF) of the distributed aperture is expressed in terms of an average MTF taking residual image alignment (IA) and optical path difference (OPD) errors into account. The results show optimal solutions of the optimization problems for a range of variable parameters. The information on the SNR magnitudes can be used to determine the number and sizes of the subapertures, while the information on the SNR decrease caused by the IA and OPD errors can be useful in the design of a beam-combination control system, setting its accuracy requirements on the basis of the permissible deterioration in image quality.
NASA Astrophysics Data System (ADS)
Syaina, L. P.; Majidi, M. A.
2018-04-01
The single-impurity Anderson model describes a system consisting of non-interacting conduction electrons coupled with a localized orbital having strongly interacting electrons at a particular site. This model has proven successful in explaining the phenomenon of the metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how the model properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we propose a theoretical study of those aspects of the single-impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of the conduction-electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled electron energy distribution using the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all the distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of those controllable parameters, and discuss our results in connection with the criteria for the occurrence of the metal-insulator transition in this system.
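A heavily simplified, non-interacting sketch of the sampling-and-averaging idea is given below: each sample draws random bath levels, the small Hamiltonian is diagonalized, and the impurity spectral function is accumulated from the Lehmann-type representation of the local Green function. The interacting (finite-U) solver and the self-energy averaging of the actual distributional exact diagonalization method are not implemented; all parameter values are illustrative.

```python
import numpy as np

def impurity_dos(omega, eps_imp=0.0, v_hyb=0.3, n_bath=7, n_samples=200, eta=0.05, seed=5):
    """Average impurity density of states over many small diagonalized Hamiltonians.

    Each sample draws random bath levels, the single-particle Hamiltonian is
    diagonalized, and the local Green function on the impurity site is built
    from its Lehmann-type spectral representation and averaged.
    """
    rng = np.random.default_rng(seed)
    dos = np.zeros_like(omega)
    for _ in range(n_samples):
        eps_bath = rng.uniform(-2.0, 2.0, n_bath)            # sampled bath energies
        h = np.diag(np.concatenate(([eps_imp], eps_bath)))
        h[0, 1:] = h[1:, 0] = v_hyb                          # impurity-bath hybridization
        evals, evecs = np.linalg.eigh(h)
        weight = np.abs(evecs[0, :]) ** 2                    # impurity weight of each eigenstate
        g_loc = (weight / (omega[:, None] + 1j * eta - evals[None, :])).sum(axis=1)
        dos += -g_loc.imag / np.pi
    return dos / n_samples

w = np.linspace(-3.0, 3.0, 601)
rho = impurity_dos(w)
print("integrated spectral weight (close to 1):", round(rho.sum() * (w[1] - w[0]), 3))
```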
Consistent second-order boundary implementations for convection-diffusion lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chew, Jia Wei
2018-02-01
In this study, an alternative second-order boundary scheme is proposed under the framework of the convection-diffusion lattice Boltzmann (LB) method for both straight and curved geometries. With the proposed scheme, boundary implementations are developed for the Dirichlet, Neumann and linear Robin conditions in a consistent way. The Chapman-Enskog analysis and the Hermite polynomial expansion technique are first applied to derive the explicit expression for the general distribution function with second-order accuracy. Then, the macroscopic variables involved in the expression for the distribution function is determined by the prescribed macroscopic constraints and the known distribution functions after streaming [see the paragraph after Eq. (29) for the discussions of the "streaming step" in LB method]. After that, the unknown distribution functions are obtained from the derived macroscopic information at the boundary nodes. For straight boundaries, boundary nodes are directly placed at the physical boundary surface, and the present scheme is applied directly. When extending the present scheme to curved geometries, a local curvilinear coordinate system and first-order Taylor expansion are introduced to relate the macroscopic variables at the boundary nodes to the physical constraints at the curved boundary surface. In essence, the unknown distribution functions at the boundary node are derived from the known distribution functions at the same node in accordance with the macroscopic boundary conditions at the surface. Therefore, the advantages of the present boundary implementations are (i) the locality, i.e., no information from neighboring fluid nodes is required; (ii) the consistency, i.e., the physical boundary constraints are directly applied when determining the macroscopic variables at the boundary nodes, thus the three kinds of conditions are realized in a consistent way. It should be noted that the present focus is on two-dimensional cases, and theoretical derivations as well as the numerical validations are performed in the framework of the two-dimensional five-velocity lattice model.
NASA Astrophysics Data System (ADS)
Kobulnicky, Henry A.; Kiminki, Daniel C.; Lundquist, Michael J.; Burke, Jamison; Chapman, James; Keller, Erica; Lester, Kathryn; Rolen, Emily K.; Topel, Eric; Bhattacharjee, Anirban; Smullen, Rachel A.; Vargas Álvarez, Carlos A.; Runnoe, Jessie C.; Dale, Daniel A.; Brotherton, Michael M.
2014-08-01
We analyze orbital solutions for 48 massive multiple-star systems in the Cygnus OB2 association, 23 of which are newly presented here, to find that the observed distribution of orbital periods is approximately uniform in log P for P < 45 days, but it is not scale-free. Inflections in the cumulative distribution near 6 days, 14 days, and 45 days suggest key physical scales of ≃0.2, ≃0.4, and ≃1 A.U. where yet-to-be-identified phenomena create distinct features. No single power law provides a statistically compelling prescription, but if features are ignored, a power law with exponent β ≈ -0.22 provides a crude approximation over P = 1.4-2000 days, as does a piece-wise linear function with a break near 45 days. The cumulative period distribution flattens at P > 45 days, even after correction for completeness, indicating either a lower binary fraction or a shift toward low-mass companions. A high degree of similarity (91% likelihood) between the Cyg OB2 period distribution and that of other surveys suggests that the binary properties at P ≲ 25 days are determined by local physics of disk/clump fragmentation and are relatively insensitive to environmental and evolutionary factors. Fully 30% of the unbiased parent sample is a binary with period P < 45 days. Completeness corrections imply a binary fraction near 55% for P < 5000 days. The observed distribution of mass ratios 0.2 < q < 1 is consistent with uniform, while the observed distribution of eccentricities 0.1 < e < 0.6 is consistent with uniform plus an excess of e ≈ 0 systems. We identify six stars, all supergiants, that exhibit aperiodic velocity variations of ~30 km s⁻¹ attributed to atmospheric fluctuations.
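As a side note on the fitting described above, the sketch below shows one way to recover a power-law exponent from a sample of orbital periods by binning in log P and fitting a straight line in log-log space; the synthetic periods, bin count, and least-squares fit are illustrative assumptions rather than the authors' pipeline.

```python
import numpy as np

# Illustrative only: draw synthetic periods from f(P) ~ P**beta over [1.4, 2000] d,
# then recover beta by a least-squares fit to the binned density in log-log space.
rng = np.random.default_rng(0)
beta_true = -0.22
p_min, p_max = 1.4, 2000.0

# Inverse-CDF sampling for a power-law density P**beta on [p_min, p_max]
u = rng.uniform(size=5000)
a = beta_true + 1.0
periods = (u * (p_max**a - p_min**a) + p_min**a) ** (1.0 / a)

# Bin uniformly in log10(P) and convert counts to a density per unit P
edges = np.logspace(np.log10(p_min), np.log10(p_max), 20)
counts, _ = np.histogram(periods, bins=edges)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])
density = counts / widths

# Straight-line fit in log-log space; the slope estimates beta
mask = counts > 0
slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)
print(f"recovered beta ~ {slope:.2f} (input {beta_true})")
```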
40 CFR 56.6 - Dissemination of policy and guidance.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) REGIONAL CONSISTENCY § 56.6 Dissemination of policy and guidance. The Assistant Administrators of... to disseminate policy and guidance. They shall distribute material under foregoing systems to the...
40 CFR 56.6 - Dissemination of policy and guidance.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) REGIONAL CONSISTENCY § 56.6 Dissemination of policy and guidance. The Assistant Administrators of... to disseminate policy and guidance. They shall distribute material under foregoing systems to the...
Direct measurements of temperature-dependent laser absorptivity of metal powders
Rubenchik, A.; Wu, S.; Mitchell, S.; ...
2015-08-12
Here, a compact system is developed to measure laser absorptivity for a variety of powder materials (metals, ceramics, etc.) with different powder size distributions and thicknesses. The measured results for several metal powders are presented. The results are consistent with those from ray tracing calculations.
DOT National Transportation Integrated Search
2012-07-01
For years, specifications have focused on the water to cement ratio (w/cm) and strength of concrete, despite the majority of the volume of a concrete mixture consisting of aggregate. An aggregate distribution of roughly 60% coarse aggregate and 40%...
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris D.
2010-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by favoring large and/or intense convective systems. Convective systems were defined from TRMM data as a cluster of pixels with an 85 GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 km². The study database consisted of convective systems in West Africa from May-September 1998-2007 and in the western Pacific from May-November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences among the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity in distribution shape to the size of the sampling area, length of sample period, and climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is wetter or drier than normal.
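The operational definition used above (contiguous pixels colder than 255 K at 85 GHz covering at least 64 km²) maps naturally onto connected-component labeling; a minimal sketch under an assumed 4 km x 4 km pixel size is shown below (the toy brightness-temperature field and pixel area are illustrative, not TRMM data).

```python
import numpy as np
from scipy import ndimage

def find_convective_systems(tb, pixel_area_km2=16.0, tb_max=255.0, min_area_km2=64.0):
    """Label contiguous clusters of cold 85-GHz brightness temperature.

    tb: 2-D array of polarization-corrected brightness temperatures (K).
    Returns a list of (area_km2, min_tb) tuples, one per qualifying system.
    """
    cold = tb < tb_max
    labels, n = ndimage.label(cold)            # 4-connected clusters by default
    systems = []
    for i in range(1, n + 1):
        mask = labels == i
        area = float(mask.sum() * pixel_area_km2)
        if area >= min_area_km2:
            systems.append((area, float(tb[mask].min())))
    return systems

# Toy field: warm background with two embedded cold clusters
tb = np.full((50, 50), 280.0)
tb[10:14, 10:14] = 230.0                       # 16 px -> 256 km2, kept
tb[30:31, 30:32] = 240.0                       # 2 px  -> 32 km2, too small
print(find_convective_systems(tb))
```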
NASA Astrophysics Data System (ADS)
Bier, Martin
2018-02-01
Nonequilibrium systems commonly exhibit Lévy noise. This means that the distribution for the size of the Brownian fluctuations has a "fat" power-law tail. Large Brownian kicks are then more common as compared to the ordinary Gaussian distribution. We consider a two-state system, i.e., two wells and a barrier in between. The barrier is sufficiently high for a barrier crossing to be a rare event. When the noise is Lévy, we do not get a Boltzmann distribution between the two wells. Instead we get a situation where the distribution between the two wells also depends on the height of the barrier that is in between. Ordinarily, a catalyst, by lowering the barrier between two states, speeds up the relaxation to an equilibrium, but does not change the equilibrium distribution. In an environment with Lévy noise, on the other hand, we have the possibility of epicatalysis, i.e., a catalyst effectively altering the distribution between two states through the changing of the barrier height. After deriving formulas to quantitatively describe this effect, we discuss how this idea may apply in nuclear reactors and in the biochemistry of a living cell.
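A minimal simulation sketch of the effect described above: an overdamped particle in a tilted double well driven by symmetric α-stable (Lévy) noise, where the time spent in the shallower well can be compared for two barrier heights. The potential, time step, stability index, and use of scipy's levy_stable sampler are illustrative assumptions, not the paper's model.

```python
import numpy as np
from scipy.stats import levy_stable

def right_well_fraction(h, tilt=0.3, alpha=1.5, dt=1e-3, n_steps=100_000, seed=1):
    """Overdamped dynamics dx = -V'(x) dt + dL_alpha in the tilted double well
    V(x) = h*(x**2 - 1)**2 + tilt*x; returns the fraction of time with x > 0."""
    # alpha-stable increments over a step dt scale as dt**(1/alpha)
    noise = levy_stable.rvs(alpha, 0.0, scale=dt**(1.0 / alpha),
                            size=n_steps, random_state=seed)
    x, right = 1.0, 0
    for k in range(n_steps):
        force = -4.0 * h * x * (x * x - 1.0) - tilt
        x = np.clip(x + force * dt + noise[k], -3.0, 3.0)  # bound heavy-tailed kicks
        right += x > 0.0
    return right / n_steps

# With Gaussian noise the well populations would depend only on the energy
# difference; with Levy noise they can also shift with the barrier height h.
for h in (1.0, 4.0):
    print(f"h = {h}: fraction of time in the shallower (right) well = "
          f"{right_well_fraction(h):.2f}")
```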
Pinto, Ameet J.; Schroeder, Joanna; Lunn, Mary; Sloan, William
2014-01-01
Bacterial communities migrate continuously from the drinking water treatment plant through the drinking water distribution system and into our built environment. Understanding bacterial dynamics in the distribution system is critical to ensuring that safe drinking water is being supplied to customers. We present a 15-month survey of bacterial community dynamics in the drinking water system of Ann Arbor, MI. By sampling the water leaving the treatment plant and at nine points in the distribution system, we show that the bacterial community spatial dynamics of distance decay and dispersivity conform to the layout of the drinking water distribution system. However, the patterns in spatial dynamics were weaker than those for the temporal trends, which exhibited seasonal cycling correlating with temperature and source water use patterns and also demonstrated reproducibility on an annual time scale. The temporal trends were driven by two seasonal bacterial clusters consisting of multiple taxa with different networks of association within the larger drinking water bacterial community. Finally, we show that the Ann Arbor data set robustly conforms to previously described interspecific occupancy abundance models that link the relative abundance of a taxon to the frequency of its detection. Relying on these insights, we propose a predictive framework for microbial management in drinking water systems. Further, we recommend that long-term microbial observatories that collect high-resolution, spatially distributed, multiyear time series of community composition and environmental variables be established to enable the development and testing of the predictive framework. PMID:24865557
Distributed Prognostics based on Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, I.
2014-01-01
Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based models are constructed that describe the operation of a system and how it fails. Such approaches consist of an estimation phase, in which the health state of the system is first identified, and a prediction phase, in which the health state is projected forward in time to determine the end of life. Centralized solutions to these problems are often computationally expensive, do not scale well as the size of the system grows, and introduce a single point of failure. In this paper, we propose a novel distributed model-based prognostics scheme that formally describes how to decompose both the estimation and prediction problems into independent local subproblems whose solutions may be easily composed into a global solution. The decomposition of the prognostics problem is achieved through structural decomposition of the underlying models. The decomposition algorithm creates from the global system model a set of local submodels suitable for prognostics. Independent local estimation and prediction problems are formed based on these local submodels, resulting in a scalable distributed prognostics approach that allows the local subproblems to be solved in parallel, thus offering increases in computational efficiency. Using a centrifugal pump as a case study, we perform a number of simulation-based experiments to demonstrate the distributed approach, compare the performance with a centralized approach, and establish its scalability. Index terms: model-based prognostics, distributed prognostics, structural model decomposition
Renken, Robert A.
1996-01-01
The Southeastern Coastal Plain aquifer system consists of a thick sequence of unconsolidated to poorly consolidated Cretaceous and Tertiary rocks that extend from Mississippi to South Carolina. Four regional sand and gravel aquifers are separated by three regional confining units of clay, shale, and chalk that do not conform everywhere to stratigraphic boundaries. The change in geologic facies is the most important factor controlling the distribution of transmissivity within the aquifer system.
The Design of a 100 GHz CARM (Cyclotron Auto-Resonance Maser) Oscillator Experiment
1988-09-14
pulsed-power system must be considered. A model of the voltage pulse that consists of a linear voltage rise from zero to the operating voltage... to vary as the voltage to the 3/2 power in order to model space-charge limited flow from a relativistic diode... As the current rises in the pulse, the... distribution due to a space-charge-limited, laminar flow of electrons based on a one-dimensional, planar, relativistic model. From the charge distribution
Solar heating system installed at Troy, Ohio
NASA Technical Reports Server (NTRS)
1980-01-01
The completed system was composed of three basic subsystems: the collector system, consisting of 3,264 square feet of Owens Illinois evacuated glass tube collectors; the storage system, which included a 5,000 gallon insulated steel tank; and the distribution and control system, which included piping, pumping and heat transfer components as well as the solenoid-activated valves and control logic for the efficient and safe operation of the entire system. This solar heating system was installed in an existing facility and was, therefore, a retrofit system. Extracts from the site files, specifications, drawings, installation, operation and maintenance instructions are included.
Comparing particle-size distributions in modern and ancient sand-bed rivers
NASA Astrophysics Data System (ADS)
Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.
2011-12-01
Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical particle-size analysis, and statistical characterization in both modern and ancient settings. We consider potential error contributions and evaluate the degree to which this uncertainty might be significant in modern sediment-transport studies and ancient paleomorphodynamic reconstructions.
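One simple way to quantify the overlap between two grain-size distributions of the kind compared here is a two-sample Kolmogorov-Smirnov test; the sketch below applies it to synthetic phi-scale samples (the sample parameters are invented for illustration, not measured data).

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)
# Synthetic grain sizes in phi units (phi = -log2(diameter in mm)); larger phi = finer
bed_material = rng.normal(loc=1.8, scale=0.6, size=300)     # coarser sample
suspended_load = rng.normal(loc=2.4, scale=0.8, size=300)   # finer, broader sample

stat, p_value = ks_2samp(bed_material, suspended_load)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")

# Fraction of the suspended-load sample coarser than the bed-material median,
# a crude indicator of the overlap described in the text
overlap = np.mean(suspended_load < np.median(bed_material))
print(f"fraction of suspended load coarser than bed-material median: {overlap:.2f}")
```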
Investigating the Luminous Environment of SDSS Data Release 4 Mg II Absorption Line Systems
NASA Astrophysics Data System (ADS)
Caler, Michelle A.; Sheth, Ravi K.
2018-01-01
We investigate the luminous environment within a few hundred kiloparsecs of 3760 Mg II absorption line systems. These systems lie along 3760 lines of sight to Sloan Digital Sky Survey (SDSS) Data Release 4 QSOs, have redshifts in the range 0.37 ≤ z ≤ 0.82, and have rest equivalent widths greater than 0.18 Å. We use the SDSS Catalog Archive Server to identify galaxies projected within 3 arcminutes of the QSO's position, and a background subtraction technique to estimate the absolute magnitude distribution and luminosity function of galaxies physically associated with these Mg II absorption line systems. The Mg II absorption system sample is split into two parts, with the split occurring at rest equivalent width 0.8 Å, and the resulting absolute magnitude distributions and luminosity functions are compared on scales ranging from 50 h⁻¹ kpc to 880 h⁻¹ kpc. We find that, on scales of 100 h⁻¹ kpc and smaller, the two distributions differ: the absolute magnitude distribution of galaxies associated with systems of rest-frame equivalent width ≥ 0.8 Å (2750 lines of sight) seems to be approximated by that of elliptical-Sa type galaxies, whereas the absolute magnitude distribution of galaxies associated with systems of rest-frame equivalent width < 0.8 Å (1010 lines of sight) seems to be approximated by that of Sa-Sbc type galaxies. However, on scales greater than 200 h⁻¹ kpc, both distributions are broadly consistent with that of elliptical-Sa type galaxies. We note that, in a broader context, these results represent an estimate of the bright end of the galaxy luminosity function at a median redshift of z ≈ 0.65.
NASA Astrophysics Data System (ADS)
Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro
Recently, new distributed architectures for manufacturing systems have been proposed, aiming at more flexible control structures for the manufacturing systems. Much research has been carried out on distributed architectures for planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. In this research, a real-time scheduling method is proposed to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.
Effects of Spatial Gradients on Electron Runaway Acceleration
NASA Technical Reports Server (NTRS)
MacNeice, Peter; Ljepojevic, N. N.
1996-01-01
The runaway process is known to accelerate electrons in many laboratory plasmas and has been suggested as an acceleration mechanism in some astrophysical plasmas, including solar flares. Current calculations of the electron velocity distributions resulting from the runaway process are greatly restricted because they impose spatial homogeneity on the distribution. We have computed runaway distributions which include consistent development of spatial gradients in the energetic tail. Our solution for the electron velocity distribution is presented as a function of distance along a finite-length acceleration region, and is compared with the equivalent distribution for the infinitely long homogeneous system (i.e., no spatial gradients), as considered in the existing literature. All these results are for the weak-field regime. We also discuss the severe restrictiveness of this weak-field assumption.
Minimal Distance to Approximating Noncontextual System as a Measure of Contextuality
NASA Astrophysics Data System (ADS)
Kujala, Janne V.
2017-07-01
Let random vectors $R^c = \{R_p^c : p \in P^c\}$ represent joint measurements of certain subsets $P^c \subset P$ of properties $p \in P$ in different contexts $c \in C$. Such a system is traditionally called noncontextual if there exists a jointly distributed set $\{Q_p : p \in P\}$ of random variables such that $R^c$ has the same distribution as $\{Q_p : p \in P^c\}$ for all $c \in C$. A trivial necessary condition for noncontextuality and a precondition for many measures of contextuality is that the system is consistently connected, i.e., all $R_p^c, R_p^{c'}, \dots$ measuring the same property $p \in P$ have the same distribution. The contextuality-by-default (CbD) approach allows defining more general measures of contextuality that apply to inconsistently connected systems as well, but at the price of a higher computational cost. In this paper we propose a novel measure of contextuality that shares the generality of the CbD approach and the computational benefits of the previously proposed negative probability (NP) approach. The present approach differs from CbD in that instead of considering all possible joints of the double-indexed random variables $R_p^c$, it considers all possible approximating single-indexed systems $\{Q_p : p \in P\}$. The degree of contextuality is defined based on the minimum possible probabilistic distance of the actual measurements $R^c$ from $\{Q_p : p \in P^c\}$. We show that this measure, called the optimal approximation (OA) measure, agrees with a certain measure of contextuality of the CbD approach for all systems where each property enters in exactly two contexts. The OA measure can be calculated far more efficiently than the CbD measure and even more efficiently than the NP measure for sufficiently large systems. We also define a variant, the OA-NP measure of contextuality, that agrees with the NP measure for consistently connected (non-signaling) systems while extending it to inconsistently connected systems.
NASA Astrophysics Data System (ADS)
Želi, Velibor; Zorica, Dušan
2018-02-01
A generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of energy balance equation and constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using the Adams-Bashforth and Grünwald-Letnikov schemes to approximate derivatives in the temporal domain and the leapfrog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
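The Grünwald-Letnikov scheme mentioned above approximates a fractional time derivative by a weighted sum over the discrete history of the field; a small self-contained sketch of those weights and the resulting approximation, checked against a known analytic result, is given below (the test function and grid are illustrative, not the paper's solver).

```python
import numpy as np
from scipy.special import gamma

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)**k * binom(alpha, k),
    computed with the standard recurrence w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_fractional_derivative(f_vals, alpha, dt):
    """Approximate D^alpha f at each grid point from its discrete history."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    out = np.zeros(n + 1)
    for i in range(n + 1):
        out[i] = np.dot(w[: i + 1], f_vals[i::-1]) / dt**alpha
    return out

# Check against the known result D^alpha t = t**(1 - alpha) / Gamma(2 - alpha)
alpha, dt = 0.6, 1e-3
t = np.arange(0.0, 1.0 + dt, dt)
numeric = gl_fractional_derivative(t, alpha, dt)
exact = t**(1 - alpha) / gamma(2 - alpha)
print(f"max abs error on (0, 1]: {np.abs(numeric[1:] - exact[1:]).max():.2e}")
```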
NASA Astrophysics Data System (ADS)
Enginoglu, Ozan; Ozturk, Hasan
2016-12-01
This study presents a new mass distribution control system (MDCS) along with its analysis and simulation. The aim is to balance a system containing rotating parts in order to minimize its dynamic vibration. For this purpose, a test mechanism rotating with an angular velocity ω is simulated. The mechanism consists of a pair of MDCS units, each containing three flaps connected to the shaft. The flaps rotate relative to the shaft's plane of rotation. The center of gravity (COG) of the MDCS is concentric with the shaft axis when all three flaps are stretched out, but the COG changes as the flaps rotate. By adjusting the orientations of the flaps in both units, it is possible to create a counterforce which suppresses the imbalance force, reducing the vibration to a minimum.
Integrated multimedia medical data agent in E-health.
di Giacomo, P; Ricci, Fabrizio L; Bocchi, Leonardo
2006-01-01
E-health is having a great impact on the distribution of health-service information, both within hospitals and to the public. Previous research has addressed the development of system architectures with the aim of integrating distributed and heterogeneous medical information systems. Easing the difficulties of sharing and managing medical data, and providing timely access to these data, are critical needs for health care providers. We propose a client-server agent that provides a portal to every permitted hospital information system (PACS, RIS and HIS) via the intranet and the Internet. The proposed agent enables remote access to the usually closed information systems of the hospital, together with a server that indexes all the medical data and allows in-depth, complex search queries for data retrieval.
A study on reliability of power customer in distribution network
NASA Astrophysics Data System (ADS)
Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin
2017-05-01
The existing power supply reliability index system is oriented to the power system side and does not consider the actual availability of electricity on the customer side. In addition, it is unable to reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of power customer reliability. By comparison with power supply reliability, customer reliability is defined and its evaluation requirements are derived. An index system, consisting of seven customer indices and two contrast indices, is designed to describe customer reliability in terms of continuity and availability. In order to comprehensively and quantitatively evaluate customer reliability in distribution networks, a reliability evaluation method is proposed based on an improved entropy method and the punishment weighting principle. Practical application has shown that the proposed reliability index system and evaluation method for power customers are reasonable and effective.
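For reference, the standard entropy weight method underlying the evaluation described above assigns larger weights to indicators that discriminate more strongly between the evaluated objects; a minimal sketch is given below (the toy indicator matrix is an assumption, and the paper's improved entropy method and punishment weighting principle are not reproduced).

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weight method for an (objects x indicators) matrix X
    of benefit-type indicators (larger is better)."""
    n = X.shape[0]
    # Min-max normalize each indicator column, then form proportions p_ij
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    P = (Z + 1e-12) / (Z + 1e-12).sum(axis=0)
    # Information entropy per indicator and derived weights
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)
    d = 1.0 - e                      # degree of divergence
    return d / d.sum()

# Toy example: 4 customers scored on 3 reliability indicators
X = np.array([[0.98, 12.0, 0.2],
              [0.95,  8.0, 0.6],
              [0.99, 15.0, 0.1],
              [0.90,  5.0, 0.9]])
w = entropy_weights(X)
Zn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
print("indicator weights:", np.round(w, 3))
print("composite scores per customer:", np.round(Zn @ w, 3))
```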
Site 300 City Water Master Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, Jeff
Lawrence Livermore National Laboratory (LLNL), a scientific research facility, operates an experimental test site known as Site 300. The site is located in a remote area of southeastern Alameda County, California, and consists of about 100 facilities spread across 7,000 acres. The Site 300 water system includes groundwater wells and a system of storage tanks, booster pumps, and underground piping to distribute water to buildings and significant areas throughout the site. Site 300, which is classified as a non-transient non-community (NTNC) water system, serves approximately 110 employees through 109 service connections. The distribution system includes approximately 76,500 feet of water mains varying from 4 to 10 inches in diameter, mostly asbestos cement (AC) pipe, and eleven water storage tanks. The water system is divided into four pressure zones fed by three booster pump stations to tanks in each zone.
Architecture of PAU survey camera readout electronics
NASA Astrophysics Data System (ADS)
Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo
2012-07-01
PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus kapton cables complete the path to connect to each CCD. The overall signal distribution and grounding scheme is shown in this paper.
Status and interconnections of selected environmental issues in the global coastal zones
Shi, Hua; Singh, Ashbindu
2003-01-01
This study focuses on assessing the state of population distribution, land cover distribution, biodiversity hotspots, and protected areas in global coastal zones. The coastal zone is defined as land within 100 km of the coastline. This study attempts to answer such questions as: how crowded are the coastal zones, what is the pattern of land cover distribution in these areas, how much of these areas is designated as protected, what is the state of the biodiversity hotspots, and what are the interconnections between people and the coastal environment. This study uses globally consistent and comprehensive geospatial datasets based on remote sensing and other sources. The application of Geographic Information System (GIS) layering methods and consistent datasets has made it possible to identify and quantify selected coastal zone environmental issues and their interconnections. It is expected that such information will provide a scientific basis for global coastal zone management and assist in policy formulation at the national and international levels.
The kinematics of dense clusters of galaxies. II - The distribution of velocity dispersions
NASA Technical Reports Server (NTRS)
Zabludoff, Ann I.; Geller, Margaret J.; Huchra, John P.; Ramella, Massimo
1993-01-01
From the survey of 31 Abell R ≥ 1 cluster fields within z of 0.02-0.05, we extract 25 dense clusters with velocity dispersions σ above 300 km/s and with number densities exceeding the mean for the Great Wall of galaxies by one deviation. From the CfA Redshift Survey (in preparation), we obtain an approximately volume-limited catalog of 31 groups with velocity dispersions above 100 km/s and with the same number density limit. We combine these well-defined samples to obtain the distribution of cluster velocity dispersions. The group sample enables us to correct for incompleteness in the Abell catalog at low velocity dispersions. The clusters from the Abell cluster fields populate the high-dispersion tail. For systems with velocity dispersions above 700 km/s, approximately the median for R = 1 clusters, the group and cluster abundances are consistent. The combined distribution is consistent with cluster X-ray temperature functions.
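As a small illustration of the basic quantity involved, the sketch below estimates a cluster's line-of-sight velocity dispersion from member redshifts using the rest-frame conversion v = c(z - z_cl)/(1 + z_cl); the synthetic redshifts are an illustrative assumption, not survey data.

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

def velocity_dispersion(z_members):
    """Rest-frame line-of-sight velocity dispersion of cluster members (km/s)."""
    z = np.asarray(z_members, dtype=float)
    z_cl = z.mean()
    v = C_KM_S * (z - z_cl) / (1.0 + z_cl)   # peculiar velocities in the cluster frame
    return v.std(ddof=1)

# Synthetic cluster at z ~ 0.03 with a 700 km/s input dispersion
rng = np.random.default_rng(42)
z_cl_true, sigma_true = 0.03, 700.0
z_obs = z_cl_true + (1.0 + z_cl_true) * rng.normal(0.0, sigma_true, 50) / C_KM_S
print(f"recovered sigma ~ {velocity_dispersion(z_obs):.0f} km/s")
```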
Tomograms for open quantum systems: In(finite) dimensional optical and spin systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thapliyal, Kishore, E-mail: tkishore36@yahoo.com; Banerjee, Subhashish, E-mail: subhashish@iitj.ac.in; Pathak, Anirban, E-mail: anirban.pathak@gmail.com
Tomograms are obtained as probability distributions and are used to reconstruct a quantum state from experimentally measured values. We study the evolution of tomograms for different quantum systems, both finite and infinite dimensional. In realistic experimental conditions, quantum states are exposed to the ambient environment and hence subject to effects like decoherence and dissipation, which are dealt with here, consistently, using the formalism of open quantum systems. This is extremely relevant from the perspective of experimental implementation and issues related to state reconstruction in quantum computation and communication. These considerations are also expected to affect the quasiprobability distribution obtained from experimentally generated tomograms and the nonclassicality observed from them. -- Highlights: •Tomograms are constructed for open quantum systems. •Finite and infinite dimensional quantum systems are studied. •Finite dimensional systems (phase states, single & two qubit spin states) are studied. •A dissipative harmonic oscillator is considered as an infinite dimensional system. •Both pure dephasing as well as dissipation effects are studied.
Model-based reasoning for power system management using KATE and the SSM/PMAD
NASA Technical Reports Server (NTRS)
Morris, Robert A.; Gonzalez, Avelino J.; Carreira, Daniel J.; Mckenzie, F. D.; Gann, Brian
1993-01-01
The overall goal of this research effort has been the development of a software system which automates tasks related to monitoring and controlling electrical power distribution in spacecraft electrical power systems. The resulting software system is called the Intelligent Power Controller (IPC). The specific tasks performed by the IPC include continuous monitoring of the flow of power from a source to a set of loads, fast detection of anomalous behavior indicating a fault to one of the components of the distribution systems, generation of diagnosis (explanation) of anomalous behavior, isolation of faulty object from remainder of system, and maintenance of flow of power to critical loads and systems (e.g. life-support) despite fault conditions being present (recovery). The IPC system has evolved out of KATE (Knowledge-based Autonomous Test Engineer), developed at NASA-KSC. KATE consists of a set of software tools for developing and applying structure and behavior models to monitoring, diagnostic, and control applications.
Zhang, Zhen; Ma, Cheng; Zhu, Rong
2016-10-14
High integration of multi-functional instruments raises a critical issue in temperature control that is challenging due to its spatial-temporal complexity. This paper presents a multi-input multi-output (MIMO) self-tuning temperature sensing and control system for efficiently modulating the temperature environment within a multi-module instrument. The smart system ensures that the internal temperature of the instrument converges to a target without the need of a system model, thus making the control robust. The system consists of a fully-connected proportional-integral-derivative (PID) neural network (FCPIDNN) and an on-line self-tuning module. The experimental results show that the presented system can effectively control the internal temperature under various mission scenarios, in particular, it is able to self-reconfigure upon actuator failure. The system provides a new scheme for a complex and time-variant MIMO control system which can be widely applied for the distributed measurement and control of the environment in instruments, integration electronics, and house constructions.
Water Distribution System Risk Tool for Investment Planning (WaterRF Report 4332)
Product Description/Abstract The product consists of the Pipe Risk Screening Tool (PRST), and a report on the development and use of the tool. The PRST is a software-based screening aid to identify and rank candidate pipes for actions that range from active monitoring (including...
Recent progress in distributed optical fiber Raman photon sensors at China Jiliang University
NASA Astrophysics Data System (ADS)
Zhang, Zaixuan; Wang, Jianfeng; Li, Yi; Gong, Huaping; Yu, Xiangdong; Liu, Honglin; Jin, Yongxing; Kang, Juan; Li, Chenxia; Zhang, Wensheng; Zhang, Wenping; Niu, Xiaohui; Sun, Zhongzhou; Zhao, Chunliu; Dong, Xinyong; Jin, Shangzhong
2012-06-01
A brief review of recent progress in research, production and applications of fully distributed fiber Raman photon sensors at China Jiliang University (CJLU) is presented. In order to improve the measurement distance, the accuracy, the spatial resolution, the ability of multi-parameter measurements, and the intelligence of fully distributed fiber sensor systems, a new generation of fiber sensor technology based on the optical fiber nonlinear scattering fusion principle is proposed. A series of new generation fully distributed fiber sensors are investigated and designed, which consist of new generation ultra-long distance full distributed fiber Raman and Rayleigh scattering photon sensors integrated with a fiber Raman amplifier, auto-correction full distributed fiber Raman photon temperature sensors based on Raman correlation dual sources, full distributed fiber Raman photon temperature sensors based on a pulse coding source, full distributed fiber Raman photon temperature sensors using a fiber Raman wavelength shifter, a new type of Brillouin optical time domain analyzers (BOTDAs) integrated with a fiber Raman amplifier for replacing a fiber Brillouin amplifier, full distributed fiber Raman and Brillouin photon sensors integrated with a fiber Raman amplifier, and full distributed fiber Brillouin photon sensors integrated with a fiber Brillouin frequency shifter. The Internet of Things is regarded as one of the candidates for the next technological revolution and has driven markets numbering in the hundreds of millions. Sensor networks are important components of the Internet of Things. The fully distributed optical fiber sensor network (Rayleigh, Raman, and Brillouin scattering) is a 3S (smart materials, smart structure, and smart skill) system, which makes it easy to construct smart fiber sensor networks. The distributed optical fiber sensor can be embedded in power grids, railways, bridges, tunnels, roads, buildings, water supply systems, dams, oil and gas pipelines and other facilities, and can be integrated with wireless networks.
Polarizable atomic multipole-based force field for DOPC and POPE membrane lipids
NASA Astrophysics Data System (ADS)
Chu, Huiying; Peng, Xiangda; Li, Yan; Zhang, Yuebin; Min, Hanyi; Li, Guohui
2018-04-01
A polarizable atomic multipole-based force field for the membrane bilayer models 1,2-dioleoyl-phosphocholine (DOPC) and 1-palmitoyl-2-oleoyl-phosphatidylethanolamine (POPE) has been developed. The force field adopts the same framework as the Atomic Multipole Optimized Energetics for Biomolecular Applications (AMOEBA) model, in which the charge distribution of each atom is represented by the permanent atomic monopole, dipole and quadrupole moments. Many-body polarization including the inter- and intra-molecular polarization is modelled in a consistent manner with distributed atomic polarizabilities. The van der Waals parameters were first transferred from existing AMOEBA parameters for small organic molecules and then optimised by fitting to ab initio intermolecular interaction energies between models and a water molecule. Molecular dynamics simulations of the two aqueous DOPC and POPE membrane bilayer systems, consisting of 72 model molecules, were then carried out to validate the force field parameters. Membrane width, area per lipid, volume per lipid, deuterium order parameters, electron density profile, etc. were consistent with experimental values.
Evolution of Combustion-Generated Particles at Tropospheric Conditions
NASA Technical Reports Server (NTRS)
Tacina, Kathleen M.; Heath, Christopher M.
2012-01-01
This paper describes particle evolution measurements taken in the Particulate Aerosol Laboratory (PAL). The PAL consists of a burner capable of burning jet fuel that exhausts into an altitude chamber that can simulate temperature and pressure conditions up to 13,700 m. After presenting results from initial temperature distributions inside the chamber, particle count data measured in the altitude chamber are shown. Initial particle count data show that the sampling system can have a significant effect on the measured particle distribution: both the value of particle number concentration and the shape of the radial distribution of the particle number concentration depend on whether the measurement probe is heated or unheated.
Generation of flower high-order Poincaré sphere laser beams from a spatial light modulator
NASA Astrophysics Data System (ADS)
Lu, T. H.; Huang, T. D.; Wang, J. G.; Wang, L. W.; Alfano, R. R.
2016-12-01
We propose and experimentally demonstrate a new complex laser beam with inhomogeneous polarization distributions mapping onto high-order Poincaré spheres (HOPSs). The complex laser mode is achieved by superposition of Laguerre-Gaussian modes and manifests exotic flower-like localization on intensity and phase profiles. A simple optical system is used to generate a polarization-variant distribution on the complex laser mode by superposition of orthogonal circular polarizations with opposite topological charges. Numerical analyses of the polarization distribution are consistent with the experimental results. The novel flower HOPS beams can act as a new light source for photonic applications.
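The flower-like pattern described above arises from superposing Laguerre-Gaussian modes with opposite topological charges; the sketch below computes such a superposition for the scalar field only, giving 2l intensity petals (the mode order, charge, and beam waist are illustrative assumptions, and the polarization structure of the HOPS beam is not modelled).

```python
import numpy as np

def lg0l(r, phi, l, w0=1.0):
    """Scalar Laguerre-Gaussian mode LG_{0,l} at the beam waist (unnormalized)."""
    return (np.sqrt(2.0) * r / w0) ** abs(l) * np.exp(-(r / w0) ** 2) * np.exp(1j * l * phi)

# Full transverse intensity pattern of the superposition (renderable with matplotlib)
x = np.linspace(-3, 3, 401)
X, Y = np.meshgrid(x, x)
r, phi = np.hypot(X, Y), np.arctan2(Y, X)
l = 3
field = lg0l(r, phi, +l) + lg0l(r, phi, -l)   # equal-weight superposition
intensity = np.abs(field) ** 2                # azimuthal dependence ~ cos^2(l*phi)

# Count petals on the ring of peak amplitude, r = w0 * sqrt(l/2)
phis = np.linspace(0, 2 * np.pi, 720, endpoint=False)
ring = np.abs(lg0l(np.sqrt(l / 2.0), phis, +l) + lg0l(np.sqrt(l / 2.0), phis, -l)) ** 2
n_petals = int(np.sum((ring > np.roll(ring, 1)) & (ring > np.roll(ring, -1))))
print(f"petal count for l = {l}: {n_petals}  (expected 2*l = {2 * l})")
```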
Trading Freshness for Performance in Distributed Systems
2014-12-01
Performance Analysis of the Unitree Central File
NASA Technical Reports Server (NTRS)
Pentakalos, Odysseas I.; Flater, David
1994-01-01
This report consists of two parts. The first part briefly comments on the documentation status of two major systems at NASA's Center for Computational Sciences, specifically the Cray C98 and the Convex C3830. The second part describes the work done on improving the performance of file transfers between the Unitree Mass Storage System running on the Convex file server and the users' workstations distributed over a large geographic area.
ETICS: the international software engineering service for the grid
NASA Astrophysics Data System (ADS)
Meglio, A. D.; Bégin, M.-E.; Couvares, P.; Ronchieri, E.; Takacs, E.
2008-07-01
The ETICS system is a distributed software configuration, build and test system designed to fulfil the needs of improving the quality, reliability and interoperability of distributed software in general and grid software in particular. The ETICS project is a consortium of five partners (CERN, INFN, Engineering Ingegneria Informatica, 4D Soft and the University of Wisconsin-Madison). The ETICS service consists of a build and test job execution system based on the Metronome software and an integrated set of web services and software engineering tools to design, maintain and control build and test scenarios. The ETICS system allows taking into account complex dependencies among applications and middleware components and provides a rich environment to perform static and dynamic analysis of the software and execute deployment, system and interoperability tests. This paper gives an overview of the system architecture and functionality set and then describes how the EC-funded EGEE, DILIGENT and OMII-Europe projects are using the software engineering services to build, validate and distribute their software. Finally a number of significant use and test cases will be described to show how ETICS can be used in particular to perform interoperability tests of grid middleware using the grid itself.
Suppression of fixed pattern noise for infrared image system
NASA Astrophysics Data System (ADS)
Park, Changhan; Han, Jungsoo; Bae, Kyung-Hoon
2008-04-01
In this paper, we propose a method for suppressing fixed pattern noise (FPN) and compensating soft defects to improve object tracking in a cooled staring infrared focal plane array (IRFPA) imaging system. FPN appears in the observed image when non-uniformity compensation (NUC) is applied at a temperature different from the one at which it was calibrated. Soft defects appear as flickering black and white points caused by the time-varying non-uniformity characteristics of the IR detector. These problems are serious for object tracking as well as for image quality. The signal processing architecture in a cooled staring IRFPA imaging system uses three tables of reference gain and offset values, for low, normal and high temperature. The proposed method operates two offset tables for each of these, covering six temperature ranges in total. The proposed soft-defect compensation consists of three stages: (1) separate the image into sub-images, (2) determine the motion distribution of objects between sub-images, and (3) analyze the statistical characteristics of each stationary fixed pixel. Experimental results show that the proposed method suppresses, in real time, the FPN caused by changes in the temperature distribution of the observed scene.
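The three-stage method above is not reproduced here, but a generic flavour of the stationary-pixel test it relies on can be sketched as follows: pixels whose temporal variance stays near zero while the scene moves are flagged as soft defects and replaced by the median of their neighbours. The frame stack, threshold, and repair rule are illustrative assumptions.

```python
import numpy as np

def repair_stuck_pixels(frames, var_thresh=1e-3):
    """Generic stuck-pixel ("soft defect") repair: pixels whose temporal variance
    is near zero over a sequence of frames are treated as defective and replaced
    by the median of their 3x3 neighbourhood in the latest frame.

    frames: array of shape (T, H, W).
    """
    frames = np.asarray(frames, dtype=float)
    defective = frames.var(axis=0) < var_thresh
    out = frames[-1].copy()
    H, W = out.shape
    for i, j in zip(*np.nonzero(defective)):
        i0, i1 = max(0, i - 1), min(H, i + 2)
        j0, j1 = max(0, j - 1), min(W, j + 2)
        neighbours = frames[-1, i0:i1, j0:j1]
        good = ~defective[i0:i1, j0:j1]
        if good.any():
            out[i, j] = np.median(neighbours[good])
    return out, defective

# Toy sequence: moving gradient scene with two stuck pixels
T, H, W = 16, 32, 32
scene = np.stack([np.add.outer(np.arange(H), np.arange(W)) + 5.0 * t for t in range(T)])
scene = scene.astype(float)
scene[:, 10, 10] = 0.0       # stuck low
scene[:, 20, 5] = 255.0      # stuck high
fixed, mask = repair_stuck_pixels(scene)
print("detected defects at:", [tuple(map(int, ij)) for ij in zip(*np.nonzero(mask))])
```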
Feasibility analysis of marine ecological on-line integrated monitoring system
NASA Astrophysics Data System (ADS)
Chu, D. Z.; Cao, X.; Zhang, S. W.; Wu, N.; Ma, R.; Zhang, L.; Cao, L.
2017-08-01
In-situ water quality sensors are susceptible to biological fouling, seawater corrosion and wave impact damage, and the scattered distribution of many individual sensors makes maintenance inconvenient. This paper proposes a highly integrated marine ecological on-line monitoring system that can be housed inside a monitoring station. All sensors are classified into groups, with similar sensors connected in series and the groups connected in parallel. The system composition and workflow are described. In addition, the paper discusses issues requiring attention in the system design and the corresponding solutions. Using multi-parameter water quality measurements and five nutrient salts as verification indices, comparison experiments between in-situ and system data were carried out. The results showed good consistency for the nutrient salts, pH and salinity. The temperature and dissolved oxygen trends were consistent, but the values deviated. Turbidity fluctuated greatly, and the chlorophyll trend was similar to it. To address these issues, three directions for system optimization were proposed.
Statistical thermodynamics of clustered populations.
Matsoukas, Themis
2014-08-01
We present a thermodynamic theory for a generic population of M individuals distributed into N groups (clusters). We construct the ensemble of all distributions with fixed M and N, introduce a selection functional that embodies the physics that governs the population, and obtain the distribution that emerges in the scaling limit as the most probable among all distributions consistent with the given physics. We develop the thermodynamics of the ensemble and establish a rigorous mapping to regular thermodynamics. We treat the emergence of a so-called giant component as a formal phase transition and show that the criteria for its emergence are entirely analogous to the equilibrium conditions in molecular systems. We demonstrate the theory by an analytic model and confirm the predictions by Monte Carlo simulation.
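A minimal Monte Carlo sketch of the ensemble described above, for the unbiased case (selection functional W = 1): single-particle exchange moves sample compositions of M individuals into N non-empty clusters uniformly, and the resulting cluster-size distribution is compared against the approximately geometric form expected for large M and N at fixed M/N. The parameters and the geometric comparison are illustrative assumptions, not the paper's analytic model.

```python
import numpy as np

def sample_cluster_sizes(M=2000, N=400, n_sweeps=2000, seed=0):
    """Uniformly sample ordered configurations (compositions) of M individuals
    into N non-empty clusters by single-particle exchange moves; the unbiased
    case corresponds to a selection functional W = 1 (every move accepted)."""
    rng = np.random.default_rng(seed)
    sizes = np.full(N, M // N)
    sizes[: M - sizes.sum()] += 1          # distribute any remainder
    for _ in range(n_sweeps * N):
        donor, recipient = rng.integers(N, size=2)
        if donor != recipient and sizes[donor] > 1:
            sizes[donor] -= 1              # symmetric proposal -> uniform sampling
            sizes[recipient] += 1
    return sizes

sizes = sample_cluster_sizes()
mean_size = sizes.mean()
ks = np.arange(1, 9)
empirical = np.array([(sizes == k).mean() for k in ks])
geometric = (1.0 / mean_size) * (1.0 - 1.0 / mean_size) ** (ks - 1)
print("k, empirical, geometric:")
for k, e, g in zip(ks, empirical, geometric):
    print(f"{k:2d}  {e:.3f}  {g:.3f}")
```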
Dawson, Paul R.; Boyce, Donald E.; Park, Jun-Sang; ...
2017-10-15
A robust methodology is presented to extract slip system strengths from lattice strain distributions for polycrystalline samples obtained from high-energy x-ray diffraction (HEXD) experiments with in situ loading. The methodology consists of matching the evolution of coefficients of a harmonic expansion of the distributions from simulation to the coefficients derived from measurements. Simulation results are generated via finite element simulations of virtual polycrystals that are subjected to the loading history applied in the HEXD experiments. Advantages of the methodology include: (1) its ability to utilize extensive data sets generated by HEXD experiments; (2) its ability to capture trends in distributions that may be noisy (both measured and simulated); and (3) its sensitivity to the ratios of the family strengths. The approach is used to evaluate the slip system strengths of Ti-6Al-4V using samples having relatively equiaxed grains. These strength estimates are compared to values in the literature.
A uniform approach for programming distributed heterogeneous computing systems
Grasso, Ivan; Pellegrini, Simone; Cosenza, Biagio; Fahringer, Thomas
2014-01-01
Large-scale compute clusters of heterogeneous nodes equipped with multi-core CPUs and GPUs are getting increasingly popular in the scientific community. However, such systems require a combination of different programming paradigms making application development very challenging. In this article we introduce libWater, a library-based extension of the OpenCL programming model that simplifies the development of heterogeneous distributed applications. libWater consists of a simple interface, which is a transparent abstraction of the underlying distributed architecture, offering advanced features such as inter-context and inter-node device synchronization. It provides a runtime system which tracks dependency information enforced by event synchronization to dynamically build a DAG of commands, on which we automatically apply two optimizations: collective communication pattern detection and device-host-device copy removal. We assess libWater’s performance in three compute clusters available from the Vienna Scientific Cluster, the Barcelona Supercomputing Center and the University of Innsbruck, demonstrating improved performance and scaling with different test applications and configurations. PMID:25844015
Universality in survivor distributions: Characterizing the winners of competitive dynamics
NASA Astrophysics Data System (ADS)
Luck, J. M.; Mehta, A.
2015-11-01
We investigate the survivor distributions of a spatially extended model of competitive dynamics in different geometries. The model consists of a deterministic dynamical system of individual agents at specified nodes, which might or might not survive the predatory dynamics: all stochasticity is brought in by the initial state. Every such initial state leads to a unique and extended pattern of survivors and nonsurvivors, which is known as an attractor of the dynamics. We show that the number of such attractors grows exponentially with system size, so that their exact characterization is limited to only very small systems. Given this, we construct an analytical approach based on inhomogeneous mean-field theory to calculate survival probabilities for arbitrary networks. This powerful (albeit approximate) approach shows how universality arises in survivor distributions via a key concept—the dynamical fugacity. Remarkably, in the large-mass limit, the survivor probability of a node becomes independent of network geometry and assumes a simple form which depends only on its mass and degree.
The AI Bus architecture for distributed knowledge-based systems
NASA Technical Reports Server (NTRS)
Schultz, Roger D.; Stobie, Iain
1991-01-01
The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running on heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other without violating their security.
Distributed intelligent control and status networking
NASA Technical Reports Server (NTRS)
Fortin, Andre; Patel, Manoj
1993-01-01
Over the past two years, the Network Control Systems Branch (Code 532) has been investigating control and status networking technologies. These emerging technologies use distributed processing over a network to accomplish a particular custom task. These networks consist of small intelligent 'nodes' that perform simple tasks. Containing simple, inexpensive hardware and software, these nodes can be easily developed and maintained. Once networked, the nodes can perform a complex operation without a central host. This type of system provides an alternative to more complex control and status systems which require a central computer. This paper will provide some background and discuss some applications of this technology. It will also demonstrate the suitability of one particular technology for the Space Network (SN) and discuss the prototyping activities of Code 532 utilizing this technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albrecht, Simon; Winn, Joshua N.; Marcy, Geoffrey W.
We measure the sky-projected stellar obliquities (λ) in the multiple-transiting planetary systems KOI-94 and Kepler-25, using the Rossiter-McLaughlin effect. In both cases, the host stars are well aligned with the orbital planes of the planets. For KOI-94 we find λ = -11° ± 11°, confirming a recent result by Hirano and coworkers. Kepler-25 was a more challenging case, because the transit depth is unusually small (0.13%). To obtain the obliquity, it was necessary to use prior knowledge of the star's projected rotation rate and apply two different analysis methods to independent wavelength regions of the spectra. The two methods gave consistent results, λ = 7° ± 8° and -0.5° ± 5.7°. There are now a total of five obliquity measurements for host stars of systems of multiple-transiting planets, all of which are consistent with spin-orbit alignment. This alignment is unlikely to be the result of tidal interactions because of the relatively large orbital distances and low planetary masses in the systems. In this respect, the multiplanet host stars differ from hot-Jupiter host stars, which commonly have large spin-orbit misalignments whenever tidal interactions are weak. In particular, the weak-tide subset of hot-Jupiter hosts has obliquities consistent with an isotropic distribution (p = 0.6), but the multiplanet hosts are incompatible with such a distribution (p ≈ 10⁻⁶). This suggests that high obliquities are confined to hot-Jupiter systems, and provides further evidence that hot-Jupiter formation involves processes that tilt the planetary orbit.
Lehtola, Markku J; Juhna, Tālis; Miettinen, Ilkka T; Vartiainen, Terttu; Martikainen, Pertti J
2004-12-01
The formation of biofilms in drinking water distribution networks is a significant technical, aesthetic and hygienic problem. In this study, the effects of assimilable organic carbon, microbially available phosphorus (MAP), residual chlorine, temperature and corrosion products on the formation of biofilms were studied in two full-scale water supply systems in Finland and Latvia. Biofilm collectors consisting of polyvinyl chloride pipes were installed in several waterworks and distribution networks, which were supplied with chemically precipitated surface waters and groundwater from different sources. During a 1-year study, the biofilm density was measured by heterotrophic plate counts on R2A-agar, acridine orange direct counting and ATP-analyses. A moderate level of residual chlorine decreased biofilm density, whereas an increase of MAP in water and accumulated cast iron corrosion products significantly increased biofilm density. This work confirms, in a full-scale distribution system in Finland and Latvia, our earlier in vitro finding that biofilm formation is affected by the availability of phosphorus in drinking water.
NASA Astrophysics Data System (ADS)
Murayama, Hideaki; Kageyama, Kazuro; Kimpara, Isao; Akiyoshi, Shimada; Naruse, Hiroshi
2000-06-01
In this study, we developed a health monitoring system using a fiber optic distributed strain sensor for International America's Cup Class (IACC) yachts. Most structural components of an IACC yacht consist of an aluminum honeycomb core sandwiched between carbon fiber reinforced plastic (CFRP) laminates. In such structures, delamination, skin/core debonding and debonding between adhered members can result in serious fracture of the structure. We equipped two IACC yachts with fiber optic strain sensors designed to measure the distributed strain using a Brillouin optical time domain reflectometer (BOTDR) and to detect any deterioration or damage to the yacht's structures caused by such failures. Based on laboratory test results, we propose a structural health monitoring technique for IACC yachts that involves analyzing their strain distribution. Some important information about the structural condition of the IACC yachts could be obtained from this system through periodic strain measurements in the field.
Crovelli, R.A.; Balay, R.H.
1991-01-01
A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132 column printer. A graphics adapter and color display are optional.
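As a rough illustration of the analytic aggregation idea described above (not the TRIAGG Turbo Pascal code itself), the following Python sketch computes the mean, standard deviation and a fractile of a triangular component from its (minimum, mode, maximum) estimates, and aggregates components under the two polar situations of perfect positive correlation and complete independence; the province numbers are hypothetical.

```python
import math

def tri_mean_var(a, m, b):
    """Mean and variance of a triangular distribution with minimum a, mode m, maximum b."""
    mean = (a + m + b) / 3.0
    var = (a * a + m * m + b * b - a * m - a * b - m * b) / 18.0
    return mean, var

def tri_quantile(p, a, m, b):
    """Inverse CDF of the triangular distribution (p is a cumulative probability)."""
    p_mode = (m - a) / (b - a)
    if p <= p_mode:
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1.0 - p) * (b - a) * (b - m))

def aggregate(components, correlated=True):
    """Aggregate (min, mode, max) components: means always add; spreads add as
    standard deviations under perfect positive correlation, as variances under
    complete independence."""
    stats = [tri_mean_var(*c) for c in components]
    total_mean = sum(mu for mu, _ in stats)
    if correlated:
        total_sd = sum(math.sqrt(v) for _, v in stats)
    else:
        total_sd = math.sqrt(sum(v for _, v in stats))
    return total_mean, total_sd

provinces = [(10.0, 30.0, 90.0), (5.0, 20.0, 60.0)]   # hypothetical resource estimates
print(aggregate(provinces, correlated=True))
print(aggregate(provinces, correlated=False))
# F95 in the exceedance sense used above (95% chance of more than this value)
# is the p = 0.05 quantile of the cumulative distribution:
print(tri_quantile(0.05, *provinces[0]))
```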
The Enskog Equation for Confined Elastic Hard Spheres
NASA Astrophysics Data System (ADS)
Maynar, P.; García de Soria, M. I.; Brey, J. Javier
2018-03-01
A kinetic equation for a system of elastic hard spheres or disks confined by a hard wall of arbitrary shape is derived. It is a generalization of the modified Enskog equation in which the effects of the confinement are taken into account, and it is supposed to be valid up to moderate densities. From the equation, balance equations for the hydrodynamic fields are derived, identifying the collisional transfer contributions to the pressure tensor and heat flux. A Lyapunov functional, H[f], is identified. For any solution of the kinetic equation, H decays monotonically in time until the system reaches the inhomogeneous equilibrium distribution, that is, a Maxwellian distribution with a density field consistent with equilibrium statistical mechanics.
Carr, R A; Sanders, D S A; Stores, O P; Smew, F A; Parkes, M E; Ross‐Gilbertson, V; Chachlani, N; Simon, J
2006-01-01
Background Guidelines on staffing and workload for histopathology and cytopathology departments were published by the Royal College of Pathologists (RCPath) in July 2003. In this document, a system is provided whereby the workload of a cellular pathology department and individual pathologists can be assessed with a scoring system based on specialty and complexity of the specimens. A similar, but simplified, system of scoring specimens by specialty was developed in the Warwick District General Hospital. The system was based on the specimen type and suggested clinical diagnosis, so that specimens could be allocated prospectively by the laboratory technical staff to even out workload and support subspecialisation in a department staffed by 4.6 whole‐time equivalent consultant pathologists. Methods The pathologists were asked to indicate their reporting preferences to determine specialist reporting teams. The workload was allocated according to the “prospective” Warwick system (based on specimen type and suggested clinical diagnosis, not affected by final diagnosis or individual pathologist variation in reference to numbers of blocks, sections and special stains examined) for October 2003. The cumulative Warwick score was compared with the “retrospective” RCPath scoring system for each pathologist and between specialties. Four pathologists recorded their time for cut‐up and reporting for the month audited. Results The equitable distribution of work between pathologists was ensured by the Warwick allocation and workload system, hence facilitating specialist reporting. Less variation was observed in points reported per hour by the Warwick system (6.3 (range 5.5–6.9)) than by the RCPath system (11.5 (range 9.3–15)). Conclusions The RCPath system of scoring is inherently complex, is applied retrospectively and is not consistent across subspecialities. The Warwick system is simpler, prospective and can be run by technical staff; it facilitates even workload distribution throughout the day. Subspecialisation within a small‐sized or medium‐sized department with fair distribution of work between pathologists is also allowed for by this system. Reporting times among pathologists were shown by time and motion studies to be more consistent with Warwick points per hour than with RCPath points per hour. PMID:16524963
Performance issues in management of the Space Station Information System
NASA Technical Reports Server (NTRS)
Johnson, Marjory J.
1988-01-01
The onboard segment of the Space Station Information System (SSIS), called the Data Management System (DMS), will consist of a Fiber Distributed Data Interface (FDDI) token-ring network. The performance of the DMS in scenarios involving two kinds of network management is analyzed. In the first scenario, how the transmission of routine management messages impacts performance of the DMS is examined. In the second scenario, techniques for ensuring low latency of real-time control messages in an emergency are examined.
Specifications Physiological Monitoring System
NASA Technical Reports Server (NTRS)
1985-01-01
The operation of a physiological monitoring system (PMS) is described. Specifications were established for performance, design, interface, and test requirements. The PMS is a compact, microprocessor-based system, which can be worn in a pack on the body or may be mounted on a Spacelab rack or other appropriate structure. It consists of two modules, the Data Control Unit (DCU) and the Remote Control/Display Unit (RCDU). Its purpose is to collect and distribute data from physiological experiments in the Spacelab and in the Orbiter.
Process of producing liquid hydrocarbon fuels from biomass
Kuester, James L.
1987-07-07
A continuous thermochemical indirect liquefaction process converts various biomass materials into diesel-type transportation fuels that are compatible with current engine designs and distribution systems. The process comprises feeding the biomass into a circulating-solid fluidized-bed gasification system to produce a synthesis gas containing olefins, hydrogen and carbon monoxide, and thereafter introducing the synthesis gas into a catalytic liquefaction system to convert it into a liquid hydrocarbon fuel consisting essentially of C7-C17 paraffinic hydrocarbons having cetane indices of 50+.
Factor information retrieval system version 2.0 (FIRE) (for microcomputers). Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
FIRE Version 2.0 contains EPA's unique recommended criteria and toxic air emission estimation factors. FIRE consists of: (1) an EPA internal repository system that contains emission factor data identified and collected, and (2) an external distribution system that contains only EPA's recommended factors. The emission factors, compiled from a review of the literature, are identified by pollutant name, CAS number, process and emission source descriptions, SIC code, SCC, and control status. The factors are rated for quality using AP-42 rating criteria.
NASA Astrophysics Data System (ADS)
Liu, Yiming; Shi, Yimin; Bai, Xuchao; Zhan, Pei
2018-01-01
In this paper, we study the estimation of the reliability of a multicomponent system, named the N-M-cold-standby redundancy system, based on a progressive Type-II censored sample. In the system, there are N subsystems, each consisting of M statistically independent, identically distributed strength components; only one of these subsystems works under the impact of stresses at a time while the others remain as standbys. Whenever the working subsystem fails, one of the standbys takes its place. The system fails when all of the subsystems have failed. It is supposed that the underlying distributions of random strength and stress both belong to the generalized half-logistic distribution with different shape parameters. The reliability of the system is estimated using both classical and Bayesian statistical inference. The uniformly minimum variance unbiased estimator and the maximum likelihood estimator for the reliability of the system are derived. Under a squared error loss function, the exact expression of the Bayes estimator for the reliability of the system is developed using the Gauss hypergeometric function. The asymptotic confidence interval and corresponding coverage probabilities are derived based on both the Fisher and the observed information matrices. The approximate highest probability density credible interval is constructed using a Monte Carlo method. Monte Carlo simulations are performed to compare the performances of the proposed reliability estimators. A real data set is also analyzed for an illustration of the findings.
NASA Astrophysics Data System (ADS)
Lee, Jounghun; Choi, Yun-Young
2015-02-01
We report a detection of the effect of the large-scale velocity shear on the spatial distributions of the galactic satellites around the isolated hosts. Identifying the isolated galactic systems, each of which consists of a single host galaxy and its satellites, from the Seventh Data Release of the Sloan Digital Sky Survey and reconstructing linearly the velocity shear field in the local universe, we measure the alignments between the relative positions of the satellites from their isolated hosts and the principal axes of the local velocity shear tensors projected onto the plane of sky. We find a clear signal that the galactic satellites in isolated systems are located preferentially along the directions of the minor principal axes of the large-scale velocity shear field. Those galactic satellites that are spirals, are brighter, are located at distances larger than the projected virial radii of the hosts, and belong to the spiral hosts yield stronger alignment signals, which implies that the alignment strength depends on the formation and accretion epochs of the galactic satellites. It is also shown that the alignment strength is quite insensitive to the cosmic web environment, as well as the size and luminosity of the isolated hosts. Although this result is consistent with the numerical finding of Libeskind et al. based on an N-body experiment, owing to the very low significance of the observed signals, it remains inconclusive whether or not the velocity shear effect on the satellite distribution is truly universal.
Minsley, Burke J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, ‘best’ model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favorably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment.
Static Frequency Converter System Installed and Tested
NASA Technical Reports Server (NTRS)
Brown, Donald P.; Sadhukhan, Debashis
2003-01-01
A new Static Frequency Converter (SFC) system has been installed and tested at the NASA Glenn Research Center's Central Air Equipment Building to provide consistent, reduced motor start times and improved reliability for the building's 14 large exhausters and compressors. The operational start times have been consistent around 2 min, 20 s per machine. This is at least a 3-min improvement (per machine) over the old variable-frequency motor generator sets. The SFC was designed and built by Asea Brown Boveri (ABB) and installed by Encompass Design Group (EDG) as part of a Construction of Facilities project managed by Glenn (Robert Scheidegger, project manager). The authors designed the Central Process Distributed Control Systems interface and control between the programmable logic controller, solid-state exciter, and switchgear, which was constructed by Gilcrest Electric.
Partial Discharge Monitoring on Metal-Enclosed Switchgear with Distributed Non-Contact Sensors.
Zhang, Chongxing; Dong, Ming; Ren, Ming; Huang, Wenguang; Zhou, Jierui; Gao, Xuze; Albarracín, Ricardo
2018-02-11
Metal-enclosed switchgear, which are widely used in the distribution of electrical energy, play an important role in power distribution networks. Their safe operation is directly related to the reliability of the power system as well as the power quality on the consumer side. Partial discharge (PD) detection is an effective way to identify potential faults and can be utilized for insulation diagnosis of metal-enclosed switchgear. The transient earth voltage (TEV) method, an effective non-intrusive method, has substantial engineering application value for estimating the insulation condition of switchgear. However, the practical effectiveness of TEV detection has been unsatisfactory because of the lack of a well-developed application method, i.e., one with sufficient technical understanding and analysis. This paper proposes an innovative online PD detection system and a corresponding application strategy based on an intelligent feedback distributed TEV wireless sensor network consisting of sensing, communication, and diagnosis layers. In the proposed system, the TEV signal or status data are wirelessly transmitted to the terminal following low-energy signal preprocessing and acquisition by TEV sensors. Then, a central server analyzes the correlation of the uploaded data and gives a fault warning level according to the quantity, trend, parallel analysis, and phase-resolved partial discharge pattern recognition. In this way, a TEV detection system and strategy with distributed acquisition, unitized fault warning, and centralized diagnosis is realized. The proposed system has positive significance for reducing the fault rate of medium voltage switchgear and improving its operation and maintenance level.
NASA Astrophysics Data System (ADS)
Xie, Jibo; Li, Guoqing
2015-04-01
Earth observation (EO) data obtained by air-borne or space-borne sensors are heterogeneous and geographically distributed in storage. These data sources belong to different organizations or agencies whose data management and storage methods differ considerably, and each provides its own data publishing platform or portal. As more remote sensing sensors are used for EO missions, space agencies have archived massive volumes of EO data at distributed locations. The distribution of EO data archives and the heterogeneity of the systems make it difficult to use geospatial data efficiently for many EO applications, such as hazard mitigation. To solve the interoperability problems of different EO data systems, this paper introduces an advanced architecture of distributed geospatial data infrastructure that addresses the complexity of distributed, heterogeneous EO data integration and on-demand processing. The concept and architecture of a geospatial data service gateway (GDSG) is proposed to connect to heterogeneous EO data sources so that EO data can be retrieved and accessed through unified interfaces. The GDSG consists of a set of tools and services to encapsulate heterogeneous geospatial data sources into homogeneous service modules. The GDSG modules include EO metadata harvesters and translators, adaptors for different types of data systems, unified data query and access interfaces, EO data cache management, and a gateway GUI. The GDSG framework is used to implement interoperability and synchronization between distributed EO data sources with heterogeneous architectures. An on-demand distributed EO data platform was developed to validate the GDSG architecture and implementation techniques. Several distributed EO data archives were used for testing, and flood and earthquake response serve as two scenarios for the use cases of distributed EO data integration and interoperability.
A comparison of TPS and different measurement techniques in small-field electron beams.
Donmez Kesen, Nazmiye; Cakir, Aydin; Okutan, Murat; Bilge, Hatice
2015-01-01
In recent years, small-field electron beams have been used for the treatment of superficial lesions, which requires small circular fields. However, when using very small electron fields, some significant dosimetric problems may occur. In this study, dose distributions and outputs of circular fields with dimensions of 5 cm and smaller, for nominal energies of 6, 9, and 15 MeV from the Siemens ONCOR Linac, were measured and compared with data from a treatment planning system using the pencil-beam algorithm in electron beam calculations. All dose distribution measurements were performed using the Gafchromic EBT film; these measurements were compared with data obtained from the Computerized Medical Systems (CMS) XiO treatment planning system (TPS) using the gamma-index method in the PTW VeriSoft software program. Output measurements were performed using the Gafchromic EBT film, an Advanced Markus ion chamber, and thermoluminescent dosimetry (TLD). Although the pencil-beam algorithm is used to model electron beams in many clinics, there is little detailed information in the literature about its use. As the field size decreased, the point of maximum dose moved closer to the surface. Output factors measured with the different techniques were consistent with one another; differences from the values obtained from the TPS were, at maximum, 42% for 6 and 15 MeV and 32% for 9 MeV. When the dose distributions from the TPS were compared with the measurements from the Gafchromic EBT films, the results were consistent for fields of 2-cm diameter and larger, but the outputs for fields of 1-cm diameter and smaller were not consistent. In the CMS XiO TPS, dose distributions calculated with the pencil-beam algorithm for electron treatment fields created with circular cutouts of 1-cm diameter were not appropriate for patient treatment, and the pencil-beam algorithm is not suitable for monitor unit (MU) calculations in electron dosimetry.
Space station WP-04 power system. Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Hallinan, G. J.
1987-01-01
Major study activities and results of the phase B study contract for the preliminary design of the space station Electrical Power System (EPS) are summarized. The areas addressed include the general system design, man-tended option, automation and robotics, evolutionary growth, software development environment, advanced development, customer accommodations, operations planning, product assurance, and design and development phase planning. The EPS consists of a combination photovoltaic and solar dynamic power generation subsystem and a power management and distribution (PMAD) subsystem. System trade studies and costing activities are also summarized.
Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas
2017-04-01
Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
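A minimal simulation sketch of the basic exchange model described above (global mixing, no graph structure, and with the common convention that an agent with no money gives nothing) is shown below; the parameter values are arbitrary.

```python
import random
from collections import Counter

def simulate(n_agents=1000, avg_money=10, steps=2_000_000, seed=1):
    """One-dollar exchanges between uniformly chosen agents; a broke agent gives nothing."""
    random.seed(seed)
    money = [avg_money] * n_agents
    for _ in range(steps):
        giver = random.randrange(n_agents)
        receiver = random.randrange(n_agents)
        if giver != receiver and money[giver] > 0:
            money[giver] -= 1
            money[receiver] += 1
    return money

wealth = simulate()
hist = Counter(wealth)
# The counts should fall off roughly exponentially with m (a Boltzmann-Gibbs-like law).
for m in range(0, 31, 5):
    print(m, hist.get(m, 0))
```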
Topology determines force distributions in one-dimensional random spring networks.
Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max
2018-02-01
Networks of elastic fibers are ubiquitous in biological systems and often provide mechanical stability to cells and tissues. Fiber-reinforced materials are also common in technology. An important characteristic of such materials is their resistance to failure under load. Rupture occurs when fibers break under excessive force and when that failure propagates. Therefore, it is crucial to understand force distributions. Force distributions within such networks are typically highly inhomogeneous and are not well understood. Here we construct a simple one-dimensional model system with periodic boundary conditions by randomly placing linear springs on a circle. We consider ensembles of such networks that consist of N nodes and have an average degree of connectivity z but vary in topology. Using a graph-theoretical approach that accounts for the full topology of each network in the ensemble, we show that, surprisingly, the force distributions can be fully characterized in terms of the parameters (N,z). Despite the universal properties of such (N,z) ensembles, our analysis further reveals that a classical mean-field approach fails to capture force distributions correctly. We demonstrate that network topology is a crucial determinant of force distributions in elastic spring networks.
Gutiérrez-Flores, Carina; la Luz, José L León-de; León, Francisco J García-De; Cota-Sánchez, J Hugo
2018-01-01
Polyploidy, the possession of more than two sets of chromosomes, is a major biological process affecting plant evolution and diversification. In the Cactaceae, genome doubling has also been associated with reproductive isolation, changes in breeding systems, colonization ability, and speciation. Pachycereus pringlei (S. Watson, 1885) Britton & Rose, 1909, is a columnar cactus that has long drawn the attention of ecologists, geneticists, and systematists due to its wide distribution range and remarkable assortment of breeding systems in the Mexican Sonoran Desert and the Baja California Peninsula (BCP). However, several important evolutionary questions, such as the distribution of chromosome numbers and whether the diploid condition is dominant over a potential polyploid condition driving the evolution and diversity in floral morphology and breeding systems in this cactus, are still unclear. In this study, we determined chromosome numbers in 11 localities encompassing virtually the entire geographic range of distribution of P. pringlei . Our data revealed the first diploid (2n = 22) count in this species restricted to the hermaphroditic populations of Catalana (ICA) and Cerralvo (ICE) Islands, whereas the tetraploid (2n = 44) condition is consistently distributed throughout the BCP and mainland Sonora populations distinguished by a non-hermaphroditic breeding system. These results validate a wider distribution of polyploid relative to diploid individuals and a shift in breeding systems coupled with polyploidisation. Considering that the diploid base number and hermaphroditism are the proposed ancestral conditions in Cactaceae, we suggest that ICE and ICA populations represent the relicts of a southern diploid ancestor from which both polyploidy and unisexuality evolved in mainland BCP, facilitating the northward expansion of this species. This cytogeographic distribution in conjunction with differences in floral attributes suggests the distinction of the diploid populations as a new taxonomic entity. We suggest that chromosome doubling in conjunction with allopatric distribution, differences in neutral genetic variation, floral traits, and breeding systems has driven the reproductive isolation, evolution, and diversification of this columnar cactus.
Counting Raindrops and the Distribution of Intervals Between Them.
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Ten Veldhuis, M. C.; Hut, R.; Pape, J. J.
2017-12-01
Drop size distributions are often assumed to follow a generalized gamma function, characterized by one parameter, Λ [1]. In principle, this Λ can be estimated by measuring the arrival rate of raindrops. The arrival rate should follow a Poisson distribution. By measuring the distribution of the time intervals between drops arriving at a certain surface area, one should not only be able to estimate the arrival rate but also to assess the robustness of the underlying steady-state assumption. It is important to note that many rainfall radar systems also assume fixed drop size distributions, and associated arrival rates, to derive rainfall rates. By testing these relationships with a simple device, we will be able to improve both land-based and space-based radar rainfall estimates. Here, an open-hardware sensor design is presented, consisting of a 3D printed housing for a piezoelectric element, some simple electronics and an Arduino. The target audience for this device is citizen scientists who want to contribute to collecting rainfall information beyond the standard rain gauge. The core of the sensor is a simple piezo-buzzer, as found in many devices such as watches and fire alarms. When a raindrop falls on a piezo-buzzer, a small voltage is generated, which can be used to register the drop's arrival time. By registering the intervals between raindrops, the associated Poisson distribution can be estimated. In addition to the hardware, we will present the first results of a measuring campaign in Myanmar that ran from August to October 2017. All design files and descriptions are available through GitHub: https://github.com/nvandegiesen/Intervalometer. This research is partially supported through the TWIGA project, funded by the European Commission's H2020 program under call SC5-18-2017 'Novel in-situ observation systems'. Reference [1]: Uijlenhoet, R., and J. N. M. Stricker. "A consistent rainfall parameterization based on the exponential raindrop size distribution." Journal of Hydrology 218, no. 3 (1999): 101-127.
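As a hedged sketch of the post-processing step implied above (not part of the published intervalometer hardware or firmware), the snippet below estimates a Poisson arrival rate from inter-drop intervals and checks the exponential-interval assumption; the data here are synthetic stand-ins for the recorded piezo timestamps.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_rate = 5.0                                     # drops per second (synthetic stand-in)
intervals = rng.exponential(1.0 / true_rate, 2000)  # would come from the registered arrival times

rate_hat = 1.0 / intervals.mean()                   # maximum-likelihood arrival-rate estimate
# Kolmogorov-Smirnov check of the exponential-interval (Poisson-arrival) assumption;
# the p-value is only approximate because the scale is fitted from the same data.
ks_stat, p_value = stats.kstest(intervals, "expon", args=(0.0, intervals.mean()))
print(f"estimated rate = {rate_hat:.2f} drops/s, KS p-value = {p_value:.3f}")
```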
Efficient self-consistency for magnetic tight binding
NASA Astrophysics Data System (ADS)
Soin, Preetma; Horsfield, A. P.; Nguyen-Manh, D.
2011-06-01
Tight binding can be extended to magnetic systems by including an exchange interaction on an atomic site that favours net spin polarisation. We have used a published model, extended to include long-ranged Coulomb interactions, to study defects in iron. We have found that achieving self-consistency using conventional techniques was either unstable or very slow. By formulating the problem of achieving charge and spin self-consistency as a search for stationary points of a Harris-Foulkes functional, extended to include spin, we have derived a much more efficient scheme based on a Newton-Raphson procedure. We demonstrate the capabilities of our method by looking at vacancies and self-interstitials in iron. Self-consistency can indeed be achieved in a more efficient and stable manner, but care needs to be taken to manage this. The algorithm is implemented in the code PLATO. Program summary: Program title: PLATO. Catalogue identifier: AEFC_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFC_v2_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 228 747. No. of bytes in distributed program, including test data, etc.: 1 880 369. Distribution format: tar.gz. Programming language: C and PERL. Computer: Apple Macintosh, PC, Unix machines. Operating system: Unix, Linux, Mac OS X, Windows XP. Has the code been vectorised or parallelised?: Yes, up to 256 processors tested. RAM: Up to 2 Gbytes per processor. Classification: 7.3. External routines: LAPACK, BLAS and optionally ScaLAPACK, BLACS, PBLAS, FFTW. Catalogue identifier of previous version: AEFC_v1_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2009) 2616. Does the new version supersede the previous version?: Yes. Nature of problem: Achieving charge and spin self-consistency in magnetic tight binding can be very difficult. Our existing schemes failed altogether, or were very slow. Solution method: A new scheme for achieving self-consistency in orthogonal tight binding has been introduced that explicitly evaluates the first and second derivatives of the energy with respect to input charge and spin, and then uses these to search for stationary values of the energy. Reasons for new version: Bug fixes and new functionality. Summary of revisions: New charge and spin mixing scheme for orthogonal tight binding. Numerous small bug fixes. Restrictions: The new mixing scheme scales poorly with system size. In particular the memory usage scales as number of atoms to the power 4. It is restricted to systems with about 200 atoms or less. Running time: Test cases will run in a few minutes, large calculations may run for several days.
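The following toy Python sketch (not the PLATO implementation, which is written in C) illustrates the general idea of casting self-consistency as root finding on the input-output residual and applying Newton-Raphson with a numerical Jacobian; the linear "update map" below is a stand-in for a real tight-binding output charge/spin calculation.

```python
import numpy as np

def newton_scf(F, q0, tol=1e-10, max_iter=50, eps=1e-6):
    """Solve the fixed-point problem q = F(q) by Newton-Raphson on R(q) = F(q) - q,
    using a finite-difference Jacobian (adequate for the small vectors of this toy)."""
    q = np.asarray(q0, dtype=float)
    for it in range(max_iter):
        R = F(q) - q
        if np.linalg.norm(R) < tol:
            return q, it
        J = np.empty((q.size, q.size))
        for k in range(q.size):
            dq = np.zeros_like(q)
            dq[k] = eps
            J[:, k] = ((F(q + dq) - (q + dq)) - R) / eps
        q = q - np.linalg.solve(J, R)
    raise RuntimeError("self-consistency not reached")

# Toy 'output charge/spin' map standing in for a real tight-binding density calculation.
A = np.array([[0.3, 0.1], [0.05, 0.4]])
b = np.array([1.0, -0.5])
q_scf, n_iter = newton_scf(lambda q: A @ q + b, np.zeros(2))
print(q_scf, "reached in", n_iter, "iterations")
```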
Performance evaluation of the croissant production line with reparable machines
NASA Astrophysics Data System (ADS)
Tsarouhas, Panagiotis H.
2015-03-01
In this study, analytical probability models were developed for a bufferless automated serial production system that consists of n machines in series with a common transfer mechanism and control system. Both the time to failure and the time to repair a failure are assumed to follow exponential distributions. Applying these models, the effect of system parameters on system performance in an actual croissant production line was studied. The production line consists of six workstations with different numbers of reparable machines in series. Mathematical models of the croissant production line have been developed using a Markov process. The strength of this study is in the classification of the whole system into states representing failures of the different machines. Failure and repair data from the actual production environment have been used to estimate reliability and maintainability for each machine, each workstation, and the entire line based on the analytical models. The analysis provides useful insight into the system's behaviour, helps to find inherent design faults and suggests optimal modifications to upgrade the system and improve its performance.
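A minimal sketch of this kind of Markov availability model is given below, assuming a bufferless line in which at most one machine is down at a time, with exponential failure and repair rates; the rates are invented for illustration and are not the measured croissant-line data.

```python
import numpy as np

lam = np.array([0.010, 0.008, 0.012, 0.009, 0.011, 0.007])  # failure rates per hour (invented)
mu  = np.array([0.50, 0.40, 0.60, 0.45, 0.55, 0.35])         # repair rates per hour (invented)

n = lam.size
Q = np.zeros((n + 1, n + 1))   # state 0: line running; state i: machine i down, line stopped
Q[0, 1:] = lam                 # a failure takes the line from state 0 to state i
Q[1:, 0] = mu                  # a repair returns the line to state 0
Q[np.diag_indices(n + 1)] = -Q.sum(axis=1)

# Stationary distribution: solve pi @ Q = 0 subject to sum(pi) = 1
A = np.vstack([Q.T, np.ones(n + 1)])
b = np.zeros(n + 2)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("line availability (probability all machines are up):", pi[0])
print("closed-form check:", 1.0 / (1.0 + (lam / mu).sum()))
```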
The Distribution of Interplanetary Dust Near 1-AU: An MMS Perspective
NASA Astrophysics Data System (ADS)
Adrian, M. L.; St Cyr, O. C.; Wilson, L. B., III; Schiff, C.; Sacks, L. W.; Chai, D. J.; Queen, S. Z.; Sedlak, J. E.
2017-12-01
The distribution of dust in the ecliptic plane in the vicinity of 1 AU has been inferred from impacts on the four Magnetospheric Multiscale (MMS) mission spacecraft as detected by the Acceleration Measurement System (AMS) during periods when no other spacecraft activities are in progress. Consisting of four identically instrumented spacecraft with an inter-spacecraft separation ranging from 10 km to 400 km, the MMS constellation forms a dust "detector" with approximately four times the collection area of any previous dust monitoring framework. Here we introduce the MMS-AMS and the inferred dust impact observations, provide a preliminary comparison of the MMS distribution of dust impacts to previously reported interplanetary dust distributions, namely those of the STEREO mission, and report on our initial comparison of the MMS distribution of dust impacts with known meteor showers.
NASA Technical Reports Server (NTRS)
Mohr, Karen I.; Molinari, John; Thorncroft, Chris
2009-01-01
The characteristics of convective system populations in West Africa and the western Pacific tropical cyclone basin were analyzed to investigate whether interannual variability in convective activity in tropical continental and oceanic environments is driven by variations in the number of events during the wet season or by conditions favoring large and/or intense convective systems. Convective systems were defined from Tropical Rainfall Measuring Mission (TRMM) data as clusters of pixels with an 85-GHz polarization-corrected brightness temperature below 255 K and with an area of at least 64 square kilometers. The study database consisted of convective systems in West Africa from May to September 1998-2007, and in the western Pacific from May to November 1998-2007. Annual cumulative frequency distributions for system minimum brightness temperature and system area were constructed for both regions. For both regions, there were no statistically significant differences between the annual curves for system minimum brightness temperature. There were two groups of system area curves, split by the TRMM altitude boost in 2001. Within each set, there was no statistically significant interannual variability. Subsetting the database revealed some sensitivity of distribution shape to the size of the sampling area, the length of the sample period, and the climate zone. From a regional perspective, the stability of the cumulative frequency distributions implied that the probability that a convective system would attain a particular size or intensity does not change interannually. Variability in the number of convective events appeared to be more important in determining whether a year is wetter or drier than normal.
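A hedged sketch of the kind of interannual comparison described above: build the empirical cumulative frequency distribution of a system property for two years and test whether they differ, here with a two-sample Kolmogorov-Smirnov test on synthetic values (the study's exact statistical test is not named in the abstract).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Fake minimum brightness temperatures (K) for two years; real values would come from TRMM.
year_a = 180.0 + 40.0 * rng.random(800)
year_b = 180.0 + 40.0 * rng.random(750)

# Comparing the empirical cumulative frequency distributions of the two years.
stat, p = stats.ks_2samp(year_a, year_b)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}  (large p: no significant interannual difference)")
```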
Distributed antenna system and method
NASA Technical Reports Server (NTRS)
Fink, Patrick W. (Inventor); Dobbins, Justin A. (Inventor)
2004-01-01
System and methods are disclosed for employing one or more radiators having non-unique phase centers mounted to a body with respect to a plurality of transmitters to determine location characteristics of the body such as the position and/or attitude of the body. The one or more radiators may consist of a single, continuous element or of two or more discrete radiation elements whose received signals are combined. In a preferred embodiment, the location characteristics are determined using carrier phase measurements whereby phase center information may be determined or estimated. A distributed antenna having a wide angle view may be mounted to a moveable body in accord with the present invention. The distributed antenna may be utilized for maintaining signal contact with multiple spaced apart transmitters, such as a GPS constellation, as the body rotates without the need for RF switches to thereby provide continuous attitude and position determination of the body.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Di; Lian, Jianming; Sun, Yannan
Demand response represents a significant but largely untapped resource that can greatly enhance the flexibility and reliability of power systems. In this paper, a hierarchical control framework is proposed to facilitate the integrated coordination between distributed energy resources and demand response. The proposed framework consists of coordination and device layers. In the coordination layer, various resource aggregations are optimally coordinated in a distributed manner to achieve the system-level objectives. In the device layer, individual resources are controlled in real time to follow the optimal power generation or consumption dispatched from the coordination layer. For the purpose of practical applications, a method is presented to determine the utility functions of controllable loads by taking into account the real-time load dynamics and the preferences of individual customers. The effectiveness of the proposed framework is validated by detailed simulation studies.
The structure of the clouds distributed operating system
NASA Technical Reports Server (NTRS)
Dasgupta, Partha; Leblanc, Richard J., Jr.
1989-01-01
A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system. That is, the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we are able to demonstrate that efficient implementations are feasible for the object model on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Some of the topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.
Constraints for the Progenitor Masses of Historic Core-collapse Supernovae
NASA Astrophysics Data System (ADS)
Williams, Benjamin F.; Hillis, Tristan J.; Murphy, Jeremiah W.; Gilbert, Karoline; Dalcanton, Julianne J.; Dolphin, Andrew E.
2018-06-01
We age-date the stellar populations associated with 12 historic nearby core-collapse supernovae (CCSNe) and two supernova impostors; from these ages, we infer their initial masses and associated uncertainties. To do this, we have obtained new Hubble Space Telescope imaging covering these CCSNe. Using these images, we measure resolved stellar photometry for the stars surrounding the locations of the SNe. We then fit the color–magnitude distributions of this photometry with stellar evolution models to determine the ages of any young existing populations present. From these age distributions, we infer the most likely progenitor masses for all of the SNe in our sample. We find ages between 4 and 50 Myr, corresponding to masses from 7.5 to 59 solar masses. There were no SNe that lacked a local young population. Our sample contains four SNe Ib/c; their masses have a wide range of values, suggesting that the progenitors of stripped-envelope SNe are binary systems. Both impostors have masses constrained to be ≲7.5 solar masses. In cases with precursor imaging measurements, we find that age-dating and precursor imaging give consistent progenitor masses. This consistency implies that, although the uncertainties for each technique are significantly different, the results of both are reliable to the measured uncertainties. We combine these new measurements with those from our previous work and find that the distribution of 25 core-collapse SNe progenitor masses is consistent with a standard Salpeter power-law mass function, no upper mass cutoff, and an assumed minimum mass for core-collapse of 7.5 M⊙. The distribution is consistent with a minimum mass <9.5 M⊙.
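As an illustrative sketch of the quoted consistency check (with placeholder masses rather than the paper's measurements), one can compare a set of inferred progenitor masses against a Salpeter power-law mass function dN/dm proportional to m^(-2.35), truncated below 7.5 solar masses with no upper cutoff:

```python
import numpy as np
from scipy import stats

# Placeholder progenitor masses in solar masses (not the measured values from the paper).
measured = np.array([8.1, 9.0, 9.6, 10.5, 12.0, 13.5, 15.0, 18.0, 22.0, 30.0, 45.0, 59.0])

m_min, alpha = 7.5, 2.35
def salpeter_cdf(m):
    """CDF of a power-law mass function dN/dm ~ m**-alpha truncated below m_min."""
    return 1.0 - (m / m_min) ** (1.0 - alpha)

stat, p = stats.kstest(measured, salpeter_cdf)
print(f"KS statistic = {stat:.3f}, p = {p:.3f}  (large p: consistent with the Salpeter form)")
```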
A multi-echelon supply chain model for municipal solid waste management system.
Zhang, Yimei; Huang, Guo He; He, Li
2014-02-01
In this paper, a multi-echelon, multi-period municipal solid waste management (MSWM) system was developed by incorporating a multi-echelon supply chain. Waste managers, suppliers, industries and distributors could be engaged in joint strategic planning and operational execution. The principle of the MSWM system is interactive planning of transportation and inventory for each organization in waste collection, delivery and disposal. An efficient inventory management plan for MSWM would lead to optimized productivity levels under available capacities (e.g., transportation and operational capacities). The applicability of the proposed system was illustrated by a case with three cities, one distribution center and two waste disposal facilities. Solutions of the decision variable values under different significance levels indicate a consistent trend. With an increased significance level, the total generated waste would decrease, and the total waste transported through the distribution center to waste-to-energy and landfill facilities would decrease as well.
MECDAS: A distributed data acquisition system for experiments at MAMI
NASA Astrophysics Data System (ADS)
Krygier, K. W.; Merle, K.
1994-02-01
For the coincidence experiments with the three-spectrometer setup at MAMI, an experiment control and data acquisition system has been built and was successfully put into final operation in 1992. MECDAS is designed as a distributed system using communication via Ethernet and optical links. As the front end, VMEbus systems are used for real-time purposes and direct hardware access via CAMAC, Fastbus or VMEbus. RISC workstations running UNIX are used for monitoring, data archiving and online and offline analysis of the experiment. MECDAS consists of several fixed programs and libraries, but large parts of the readout and analysis can be configured by the user. Experiment-specific configuration files are used to generate efficient and powerful code well adapted to special problems without additional programming. The experiment description is added to the raw collection of partially analyzed data to produce self-descriptive data files.
A Distributed Online 3D-LIDAR Mapping System
NASA Astrophysics Data System (ADS)
Schmiemann, J.; Harms, H.; Schattenberg, J.; Becker, M.; Batzdorfer, S.; Frerichs, L.
2017-08-01
In this paper we present work done within the joint development project ANKommEn. It deals with the development of a highly automated robotic system for fast data acquisition in civil disaster scenarios. One of the main requirements is a versatile system; hence the concept embraces a machine cluster consisting of multiple fundamentally different robotic platforms. To cover a large variety of potential deployment scenarios, neither the number of participants nor the precise individual layout of each platform is restricted within the conceptual design. This leads to a variety of special requirements, such as onboard and online data processing capabilities for each individual participant and efficient data exchange structures allowing reliable random data exchange between individual robots. We demonstrate the functionality and performance by means of a distributed mapping system evaluated with real-world data in challenging urban and rural indoor/outdoor scenarios.
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology suitable for the creation of a performance-related remuneration system in the healthcare sector, which would meet requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking and a posteriori evaluation has been proposed and discussed. The Priority Distribution Method is applied for unbiased performance criteria weighting. Data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed a method for healthcare-specific criteria selection consisting of 8 steps, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluation of the motivational system outcomes was proposed. The described methodology for calculating performance-related payment needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step would be approbation of the methodology in a healthcare facility.
Crater Topography on Titan: Implications for Landscape Evolution
NASA Technical Reports Server (NTRS)
Neish, Catherine D.; Kirk, R.L.; Lorenz, R. D.; Bray, V. J.; Schenk, P.; Stiles, B. W.; Turtle, E.; Mitchell, K.; Hayes, A.
2013-01-01
We present a comprehensive review of available crater topography measurements for Saturn's moon Titan. In general, the depths of Titan's craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundred meters shallower than Ganymede's average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 ± 0.0003 (for the largest crater studied, Menrva, D approximately 425 km) and 0.017 ± 0.004 (for the smallest crater studied, Ksa, D approximately 39 km). When we evaluate the Anderson-Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan's craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close 'airless' analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan's surface, the only body in the outer Solar System with extensive surface-atmosphere exchange.
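A small sketch of the relative-depth test mentioned above, using illustrative placeholder values rather than the measured Titan depths, and a Kolmogorov-Smirnov test standing in for the paper's Anderson-Darling statistic:

```python
import numpy as np
from scipy import stats

# Illustrative relative depths: 0 = fresh, 1 = completely infilled.
relative_depth = np.array([0.05, 0.12, 0.22, 0.31, 0.38, 0.47, 0.55, 0.63, 0.74, 0.88])
stat, p = stats.kstest(relative_depth, "uniform")   # uniform on [0, 1] by default
print(f"KS statistic = {stat:.3f}, p = {p:.3f}  (large p: consistent with uniform infilling)")
```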
Probabilistic graphs as a conceptual and computational tool in hydrology and water management
NASA Astrophysics Data System (ADS)
Schoups, Gerrit
2014-05-01
Originally developed in the fields of machine learning and artificial intelligence, probabilistic graphs constitute a general framework for modeling complex systems in the presence of uncertainty. The framework consists of three components: 1. Representation of the model as a graph (or network), with nodes depicting random variables in the model (e.g. parameters, states, etc), which are joined together by factors. Factors are local probabilistic or deterministic relations between subsets of variables, which, when multiplied together, yield the joint distribution over all variables. 2. Consistent use of probability theory for quantifying uncertainty, relying on basic rules of probability for assimilating data into the model and expressing unknown variables as a function of observations (via the posterior distribution). 3. Efficient, distributed approximation of the posterior distribution using general-purpose algorithms that exploit model structure encoded in the graph. These attributes make probabilistic graphs potentially useful as a conceptual and computational tool in hydrology and water management (and beyond). Conceptually, they can provide a common framework for existing and new probabilistic modeling approaches (e.g. by drawing inspiration from other fields of application), while computationally they can make probabilistic inference feasible in larger hydrological models. The presentation explores, via examples, some of these benefits.
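To make the three components concrete, here is a deliberately tiny, hypothetical hydrological example in Python: one latent recharge variable, one noisy observation, local factors for the prior and the likelihood, and exact enumeration standing in for the distributed approximate inference used on larger graphs. The probabilities are invented for illustration.

```python
# Prior factor on the latent recharge state R and observation factor P(W | R).
p_R = {"high": 0.3, "low": 0.7}
p_W_given_R = {
    ("wet", "high"): 0.9, ("dry", "high"): 0.1,
    ("wet", "low"): 0.2,  ("dry", "low"): 0.8,
}

def posterior_recharge(w_obs):
    """Multiply the local factors and normalize: P(R | W = w_obs)."""
    unnorm = {r: p_R[r] * p_W_given_R[(w_obs, r)] for r in p_R}
    z = sum(unnorm.values())
    return {r: v / z for r, v in unnorm.items()}

print(posterior_recharge("wet"))   # roughly {'high': 0.66, 'low': 0.34}
```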
Nuclear Pasta at Finite Temperature with the Time-Dependent Hartree-Fock Approach
NASA Astrophysics Data System (ADS)
Schuetrumpf, B.; Klatt, M. A.; Iida, K.; Maruhn, J. A.; Mecke, K.; Reinhard, P.-G.
2016-01-01
We present simulations of neutron-rich matter at sub-nuclear densities, like supernova matter. With the time-dependent Hartree-Fock approximation we can study the evolution of the system at temperatures of several MeV employing a full Skyrme interaction in a periodic three-dimensional grid [1]. The initial state consists of α particles randomly distributed in space that have a Maxwell-Boltzmann distribution in momentum space. Adding a neutron background initialized with Fermi-distributed plane waves, the calculations reflect a reasonable approximation of astrophysical matter. The matter evolves into spherical, rod-like, connected rod-like and slab-like shapes. Further, we observe gyroid-like structures, discussed e.g. in [2], which form spontaneously for a certain choice of the simulation box length. The ρ-T map of pasta shapes is basically consistent with the phase diagrams obtained from QMD calculations [3]. By an improved topological analysis based on Minkowski functionals [4], all observed pasta shapes can be uniquely identified by only two valuations, namely the Euler characteristic and the integral mean curvature. In addition, we propose the variance in the cell-density distribution as a measure to distinguish pasta matter from uniform matter.
Data quality assessment for comparative effectiveness research in distributed data networks
Brown, Jeffrey; Kahn, Michael; Toh, Sengwee
2015-01-01
Background Electronic health information routinely collected during healthcare delivery and reimbursement can help address the need for evidence about the real-world effectiveness, safety, and quality of medical care. Often, distributed networks that combine information from multiple sources are needed to generate this real-world evidence. Objective We provide a set of field-tested best practices and a set of recommendations for data quality checking for comparative effectiveness research (CER) in distributed data networks. Methods We explore the requirements for data quality checking and describe data quality approaches undertaken by several existing multi-site networks. Results There are no established standards regarding how to evaluate the quality of electronic health data for CER within distributed networks. Data checks of increasing complexity are often employed, ranging from consistency with syntactic rules to evaluation of semantics and consistency within and across sites. Temporal trends within and across sites are widely used, as are checks of each data refresh or update. Rates of specific events and exposures by age group, sex, and month are also common. Discussion Secondary use of electronic health data for CER holds promise but is complex, especially in distributed data networks that incorporate periodic data refreshes. The viability of a learning health system is dependent on a robust understanding of the quality, validity, and optimal secondary uses of routinely collected electronic health data within distributed health data networks. Robust data quality checking can strengthen confidence in findings based on distributed data networks. PMID:23793049
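The following sketch (not any particular network's toolkit) illustrates the flavour of the checks described above, from simple syntactic/range rules to within-site temporal trend checks run on each data refresh; the column names, codes, and thresholds are assumptions.

```python
import pandas as pd

def check_ranges(df):
    """Row-level syntactic/range rules: valid sex codes and plausible ages."""
    return {
        "bad_sex_code": int((~df["sex"].isin(["F", "M"])).sum()),
        "age_out_of_range": int(((df["age"] < 0) | (df["age"] > 120)).sum()),
    }

def check_monthly_trend(df, max_rel_change=0.3):
    """Flag months whose event counts change by more than 30% versus the prior month."""
    counts = df.groupby(df["event_date"].dt.to_period("M")).size()
    rel_change = counts.pct_change().abs()
    return [str(m) for m in rel_change[rel_change > max_rel_change].index]

df = pd.DataFrame({
    "sex": ["F", "M", "X", "F"],
    "age": [34, 51, 29, 140],
    "event_date": pd.to_datetime(["2014-01-10", "2014-01-22", "2014-02-05", "2014-04-01"]),
})
print(check_ranges(df))        # {'bad_sex_code': 1, 'age_out_of_range': 1}
print(check_monthly_trend(df)) # ['2014-02'] in this toy example
```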
Modeled ground water age distributions
Woolfenden, Linda R.; Ginn, Timothy R.
2009-01-01
The age of ground water in any given sample is a distributed quantity representing distributed provenance (in space and time) of the water. Conventional analysis of tracers such as unstable isotopes or anthropogenic chemical species gives discrete or binary measures of the presence of water of a given age. Modeled ground water age distributions provide a continuous measure of contributions from different recharge sources to aquifers. A numerical solution of the ground water age equation of Ginn (1999) was tested both on a hypothetical simplified one-dimensional flow system and under real world conditions. Results from these simulations yield the first continuous distributions of ground water age using this model. Complete age distributions as a function of one and two space dimensions were obtained from both numerical experiments. Simulations in the test problem produced mean ages that were consistent with the expected value at the end of the model domain for all dispersivity values tested, although the mean ages for the two highest dispersivity values deviated slightly from the expected value. Mean ages in the dispersionless case also were consistent with the expected mean ages throughout the physical model domain. Simulations under real world conditions for three dispersivity values resulted in decreasing mean age with increasing dispersivity. This likely is a consequence of an edge effect. However, simulations for all three dispersivity values tested were mass balanced and stable demonstrating that the solution of the ground water age equation can provide estimates of water mass density distributions over age under real world conditions.
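For orientation only, a minimal sketch of the structure of such an age-transport equation, assuming the common formulation in which age a is treated as an extra coordinate advected at unit rate alongside the space-time transport terms; the exact notation and terms of Ginn (1999) may differ:

```latex
% Sketch of an advection-dispersion equation augmented with an age coordinate a
% (structure only; not a verbatim transcription of Ginn, 1999)
\[
\frac{\partial (\theta \rho)}{\partial t}
  + \nabla \cdot (\theta \rho \mathbf{v})
  - \nabla \cdot \left( \theta \mathbf{D} \nabla \rho \right)
  + \frac{\partial (\theta \rho)}{\partial a} = 0 ,
\qquad
\bar{a}(\mathbf{x}, t) =
  \frac{\int_0^\infty a\, \rho(\mathbf{x}, a, t)\, \mathrm{d}a}
       {\int_0^\infty \rho(\mathbf{x}, a, t)\, \mathrm{d}a} ,
\]
% where rho(x, a, t) is the water mass density distributed over age a, theta is porosity,
% v is the pore velocity, D is the dispersion tensor, and a-bar is the mean age.
```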
NASA Astrophysics Data System (ADS)
Abdel-Fattah, Mohamed I.; Slatt, Roger M.
2013-12-01
Understanding sequence stratigraphic architecture in incised valleys is a crucial step to understanding the effect of relative sea-level changes on reservoir characterization and architecture. This paper presents a sequence stratigraphic framework of the incised-valley strata within the late Messinian Abu Madi Formation based on seismic and borehole data. Analysis of sand-body distribution reveals that fluvial channel sandstones in the Abu Madi Formation in the Baltim Fields, offshore Nile Delta, Egypt, are not randomly distributed but are predictable in their spatial and stratigraphic position. Elucidation of the distribution of sandstones in the Abu Madi incised-valley fill within a sequence stratigraphic framework allows a better understanding of their characterization and architecture during burial. Strata of the Abu Madi Formation are interpreted to comprise two sequences, which are the most complex stratigraphically; their deposits comprise a complex incised valley fill. The lower sequence (SQ1) consists of a thick incised valley-fill of a Lowstand Systems Tract (LST1) overlain by a Transgressive Systems Tract (TST1) and Highstand Systems Tract (HST1). The upper sequence (SQ2) contains channel-fill and is interpreted as a LST2, which has thin sandstone channel deposits. Above this, channel-fill sandstone and related strata with tidal influence delineate the base of TST2, which is overlain by a HST2. Gas reservoirs of the Abu Madi Formation (present-day depth ~3552 m) in the Baltim Fields, Egypt, consist of fluvial lowstand systems tract (LST) sandstones deposited in an incised valley. LST sandstones have a wide range of porosity (15 to 28%) and permeability (1 to 5080 mD), which reflect both depositional facies and diagenetic controls. This work demonstrates the value of constraining and evaluating the impact of sequence stratigraphic distribution on reservoir characterization and architecture in incised-valley deposits, and thus has an important impact on reservoir quality evolution in hydrocarbon exploration in such settings.
NASA Astrophysics Data System (ADS)
Stastny, Jeffrey A.; Rogers, Craig A.; Liang, Chen
1993-07-01
A parametric design model has been created to optimize the sensitivity of the sensing cable in a distributed sensing system. The system consists of electrical time domain reflectometry (ETDR) signal processing equipment and specially designed sensing cables. The ETDR equipment sends a high-frequency electric pulse (in the gigahertz range) along the sensing cable. Some portion of the electric pulse will be reflected back to the ETDR equipment as a result of the variation of the cable impedance. The electric impedance variation in the sensing cable can be related to its mechanical deformation, such as cable elongation (change in the resistance), shear deformation (change in the capacitance), corrosion of the cable or the materials around the cable (change in inductance and capacitance), etc. The time delay, amplitude, and shape of the reflected pulse provide the means to locate, determine the magnitude of, and indicate the nature of the change in the electrical impedance, which is then related to the distributed structural deformation. The sensing cables are an essential part of the health-monitoring system. By using the parametric design model, the optimum cable parameters can be determined for a specific deformation. Proof-of-concept experiments also are presented in the paper to demonstrate the utility of an electrical TDR system in distributed sensing applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, C.E.; Lucas, M.C.; Tisinger, E.W.
1984-01-01
Our system consists of a LeCroy 3500 data acquisition system with a built-in CAMAC crate and eight bismuth-germanate detectors 7.62 cm in diameter and 7.62 cm long. Gamma-ray pulse-height distributions are acquired simultaneously for up to eight positions. The system was very carefully calibrated and characterized from 0.1 to 8.3 MeV using gamma-ray spectra from a variety of radioactive sources. By fitting the pulse-height distributions from the sources with a function containing 17 parameters, we determined theoretical response functions. We use these response functions to unfold the distributions to obtain flux spectra. A flux-to-dose-rate conversion curve based on the work of Dimbylow and Francis is then used to obtain dose rates. Direct use of measured spectra and flux-to-dose-rate curves to obtain dose rates avoids the errors that can arise from spectrum dependence in simple gamma-ray dosimeter instruments. We present some gamma-ray doses for the Little Boy assembly operated at low power. These results can be used to determine the exposures of the Hiroshima survivors and thus aid in the establishment of radiation exposure limits for the nuclear industry.
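As a rough illustration of the unfold-then-convert procedure described above (not the authors' 17-parameter response fit), the sketch below builds a hypothetical response matrix, unfolds a measured pulse-height distribution into a flux spectrum with non-negative least squares, and folds the result with a flux-to-dose-rate curve; all numbers are synthetic.

```python
# Illustrative sketch of the general unfold-then-convert procedure: given a detector
# response matrix R (pulse-height channel x energy bin) and a measured pulse-height
# distribution m, recover a flux spectrum and fold it with a flux-to-dose-rate curve.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_channels, n_bins = 64, 16
R = np.abs(rng.normal(size=(n_channels, n_bins)))             # hypothetical response functions
true_flux = np.abs(rng.normal(size=n_bins))                   # hypothetical flux spectrum
m = R @ true_flux + rng.normal(scale=0.01, size=n_channels)   # measured pulse-height counts

flux, _ = nnls(R, m)                                          # non-negative unfolding

dose_per_flux = np.linspace(0.5, 2.0, n_bins)                 # hypothetical flux-to-dose-rate curve
dose_rate = float(flux @ dose_per_flux)
print(dose_rate)
```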
NASA Astrophysics Data System (ADS)
Chaianong, A.; Bangviwat, A.; Menke, C.
2017-07-01
Driven by decreasing PV and energy storage prices, increasing electricity costs and policy support from the Thai government (self-consumption era), rooftop PV and energy storage systems are expected to be deployed rapidly in the country, which may disrupt the existing business model structure of Thai distribution utilities due to revenue erosion and lost earnings opportunities. The retail rates that directly affect ratepayers (non-solar customers) are expected to increase. This paper focuses on a framework for evaluating the impacts of PV with and without energy storage systems on Thai distribution utilities and ratepayers by using cost-benefit analysis (CBA). Prior to calculation of the cost/benefit components, changes in energy sales need to be addressed. Government policies for the support of PV generation will also help in accelerating rooftop PV installation. Benefit components include avoided costs due to transmission losses and deferring distribution capacity at an appropriate PV penetration level, while cost components consist of losses in revenue, program costs, integration costs and unrecovered fixed costs. It is necessary for Thailand to compare the total costs and total benefits of rooftop PV and energy storage systems in order to adopt policy support and mitigation approaches, such as business model innovation and regulatory reform, effectively.
A practical three-dimensional dosimetry system for radiation therapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo Pengyi; Adamovics, John; Oldham, Mark
2006-10-15
There is a pressing need for a practical three-dimensional (3D) dosimetry system, convenient for clinical use, and with the accuracy and resolution to enable comprehensive verification of the complex dose distributions typical of modern radiation therapy. Here we introduce a dosimetry system that can achieve this challenge, consisting of a radiochromic dosimeter (PRESAGE™) and a commercial optical computed tomography (CT) scanning system (OCTOPUS™). PRESAGE™ is a transparent material with compelling properties for dosimetry, including insensitivity of the dose response to atmospheric exposure, a solid texture negating the need for an external container (reducing edge effects), and amenability to accurate optical CT scanning due to radiochromic optical contrast as opposed to light-scattering contrast. An evaluation of the performance and viability of the PRESAGE™/OCTOPUS™ combination for routine clinical 3D dosimetry is presented. The performance of the two components (scanner and dosimeter) was investigated separately prior to the full system test. The optical CT scanner has a spatial resolution of ≤1 mm, geometric accuracy within 1 mm, and high reconstruction linearity (with an R² value of 0.9979 and a standard error of estimation of ~1%) relative to independent measurement. The overall performance of the PRESAGE™/OCTOPUS™ system was evaluated with respect to a simple known 3D dose distribution, by comparison with GAFCHROMIC® EBT film and the calculated dose from a commissioned planning system. The 'measured' dose distribution in a cylindrical PRESAGE™ dosimeter (16 cm diameter and 11 cm height) was determined by optical CT, using a filtered backprojection reconstruction algorithm. A three-way Gamma map comparison (4% dose difference and 4 mm distance to agreement), between the PRESAGE™, EBT and calculated dose distributions, showed full agreement in the measurable region of the PRESAGE™ dosimeter (~90% of radius). The EBT and PRESAGE™ distributions agreed more closely with each other than with the calculated plan, consistent with penumbral blurring in the planning data, which was acquired with an ion chamber. In summary, our results support the conclusion that the PRESAGE™/optical-CT combination represents a significant step forward in 3D dosimetry, and provides a robust, clinically effective and viable high-resolution relative 3D dosimetry system for radiation therapy.
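The 4%/4 mm criterion used above is a gamma analysis; the following is a simplified one-dimensional sketch of that criterion (the study itself performs a three-way comparison on 3D distributions), with synthetic dose profiles standing in for the measured and calculated data.

```python
# Simplified 1-D illustration of the gamma criterion (4% dose difference, 4 mm
# distance-to-agreement); the actual study performs a three-way 3-D comparison.
import numpy as np

def gamma_1d(x, dose_eval, dose_ref, dose_tol=0.04, dta_mm=4.0):
    """Brute-force gamma index of an evaluated profile against a reference profile."""
    d_norm = dose_tol * dose_ref.max()          # dose tolerance as a fraction of max dose
    gammas = np.empty_like(dose_eval)
    for i, (xi, de) in enumerate(zip(x, dose_eval)):
        dist = (x - xi) / dta_mm                # spatial term, in units of DTA
        diff = (dose_ref - de) / d_norm         # dose term, in units of tolerance
        gammas[i] = np.sqrt(dist**2 + diff**2).min()
    return gammas                               # points with gamma <= 1 "pass"

x = np.linspace(0, 100, 201)                    # positions in mm (hypothetical profile)
ref = np.exp(-((x - 50) / 20) ** 2)
evl = np.exp(-((x - 51) / 20) ** 2)             # 1 mm shifted copy of the reference
print((gamma_1d(x, evl, ref) <= 1.0).mean())    # gamma pass rate
```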
1997-12-19
Resource Consultants Inc. (RCI) Science Applications International Corp (SAIC) Veda Inc. Virtual Space Devices (VSD) 1.1 Background The Land Warrior...network. The VICs included: • VIC Alpha - a fully immersive Dismounted Soldier System developed by Veda under a STRICOM applied research effort...consists of the Dismounted Soldier System (DSS), which is characterized as follows: • Developed by Veda under a STRICOM applied research effort
Distributed Sensor Systems and Electromechanical Analog Facility
1980-01-01
interfaces (parallel I/O, modems, etc.) real time operating systems (perhaps a short survey of what is available in the industry today), data...consists of an LSI-11 microprocessor, 56K bytes of memory, and serial and parallel I/O boards. 2.1.7 Disk controller The standard disk controller...with MTS via the modems connected to the LSI-11s. This pseudodevice cannot be reassigned.
ICESat Science Investigator led Processing System (I-SIPS)
NASA Astrophysics Data System (ADS)
Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.
2003-12-01
The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF) and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the Planning, Scheduling and Data Management System that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute this data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, delivering data to the SCF within hours after the initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.
NASA Technical Reports Server (NTRS)
Quinn, Todd M.; Walters, Jerry L.
1991-01-01
Future space explorations will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
Equilibrium statistical mechanics of self-consistent wave-particle system
NASA Astrophysics Data System (ADS)
Elskens, Yves
2005-10-01
The equilibrium distribution of N particles and M waves (e.g. Langmuir) is analysed in the weak-coupling limit for the self-consistent Hamiltonian model H = ∑_r p_r²/(2m) + ∑_j φ_j I_j + ε ∑_{r,j} (β_j/k_j) cos(k_j x_r − θ_j) [1]. In the canonical ensemble, with temperature T and reservoir velocity v below the wave phase velocities φ_j/k_j, the wave intensities are almost independent and exponentially distributed, with expectation
Biofilms consist of many species of bacteria, protozoa, and other microbes living together on almost any type of moist surface. Within drinking water distribution systems, biofilms grow readily on the inner walls of pipes, even in the presence of chlorine disinfectants. Some mi...
2015-01-05
Wang. KinWrite: Handwriting-Based Authentication Using Kinect, Annual Network & Distributed System Security Symposium (NDSS), San Diego, CA, 2013 21...the large variation of different handwriting styles, neighboring characters within a word are usually connected, and we may need to segment a word
A Review of DIMPACK Version 1.0: Conditional Covariance-Based Test Dimensionality Analysis Package
ERIC Educational Resources Information Center
Deng, Nina; Han, Kyung T.; Hambleton, Ronald K.
2013-01-01
DIMPACK Version 1.0 for assessing test dimensionality based on a nonparametric conditional covariance approach is reviewed. This software was originally distributed by Assessment Systems Corporation and now can be freely accessed online. The software consists of Windows-based interfaces of three components: DIMTEST, DETECT, and CCPROX/HAC, which…
RESOLVED CO GAS INTERIOR TO THE DUST RINGS OF THE HD 141569 DISK
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flaherty, Kevin M.; Hughes, A. Meredith; Zachary, Julia
2016-02-10
The disk around HD 141569 is one of a handful of systems whose weak infrared emission is consistent with a debris disk, but still has a significant reservoir of gas. Here we report spatially resolved millimeter observations of the CO(3-2) and CO(1-0) emission as seen with the Submillimeter Array and CARMA. We find that the excitation temperature for CO is lower than expected from cospatial blackbody grains, similar to previous observations of analogous systems, and derive a gas mass that lies between that of gas-rich primordial disks and gas-poor debris disks. The data also indicate a large inner hole in the CO gas distribution and an outer radius that lies interior to the outer scattered light rings. This spatial distribution, with the dust rings just outside the gaseous disk, is consistent with the expected interactions between gas and dust in an optically thin disk. This indicates that gas can have a significant effect on the location of the dust within debris disks.
Toward unification of taxonomy databases in a distributed computer environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi
1994-12-31
All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid in biological research by computer. The taxonomy databases are, however, not consistently unified with a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results, and in investigating future research directions from existing research results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existing taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existing taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases on a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
Western Greenland Subglacial Hydrologic Modeling and Observables: Seismicity and GPS
NASA Astrophysics Data System (ADS)
Carmichael, J. D.; Joughin, I. R.
2010-12-01
I present a hydro-mechanical model of the Western Greenland ice sheet with surface observables for two modes of meltwater input. First, using input prescribed from distributed surface data, I bound the subglacial carrying capacity for both a distributed and a localized system in a typical summer. I provide observations of the ambient seismic response and its support for an established surface-to-bed connection. Second, I show that the ice sheet response to large impulsive hydraulic inputs (lake drainage events) should produce distinct seismic observables that depend upon the localization of the drainage systems. In the former case, the signal propagates as a diffusive wave, while in the channelized case, the response is localized. I provide a discussion of how these results are consistent with previous reports (Das et al., 2008; Joughin et al., 2008) of melt-induced speedup along Greenland's Western Flank. Late summer seismicity is shown for a four-receiver array deployed near a supraglacial lake at 68°44.379'N, 49°30.064'W. Clusters of seismic activity are characterized by dominant shear-wave energy, consistent with basal sliding events.
NASA Astrophysics Data System (ADS)
Jacobson, S.; Scheeres, D.; Rossi, A.; Marzari, F.; Davis, D.
2014-07-01
From the results of a comprehensive asteroid-population-evolution model, we conclude that the YORP-induced rotational-fission hypothesis has strong repercussions for the small size end of the main-belt asteroid size-frequency distribution and is consistent with observed asteroid-population statistics and with the observed sub-populations of binary asteroids, asteroid pairs and contact binaries. The foundation of this model is the asteroid-rotation model of Marzari et al. (2011) and Rossi et al. (2009), which incorporates both the YORP effect and collisional evolution. This work adds to that model the rotational fission hypothesis (i.e. when the rotation rate exceeds a critical value, erosion and binary formation occur; Scheeres 2007) and binary-asteroid evolution (Jacobson & Scheeres, 2011). The YORP-effect timescale for large asteroids with diameters D > ˜ 6 km is longer than the collision timescale in the main belt, thus the frequency of large asteroids is determined by a collisional equilibrium (e.g. Bottke 2005), but for small asteroids with diameters D < ˜ 6 km, the asteroid-population evolution model confirms that YORP-induced rotational fission destroys small asteroids more frequently than collisions. Therefore, the frequency of these small asteroids is determined by an equilibrium between the creation of new asteroids out of the impact debris of larger asteroids and the destruction of these asteroids by YORP-induced rotational fission. By introducing a new source of destruction that varies strongly with size, YORP-induced rotational fission alters the slope of the size-frequency distribution. Using the outputs of the asteroid-population evolution model and a 1-D collision evolution model, we can generate this new size-frequency distribution and it matches the change in slope observed by the SKADS survey (Gladman 2009). This agreement is achieved with both an accretional power-law or a truncated ''Asteroids were Born Big'' size-frequency distribution (Weidenschilling 2010, Morbidelli 2009). The binary-asteroid evolution model is highly constrained by the modeling done in Jacobson & Scheeres, and therefore the asteroid-population evolution model has only two significant free parameters: the ratio of low-to-high-mass-ratio binaries formed after rotational fission events and the mean strength of the binary YORP (BYORP) effect. Using this model, we successfully reproduce the observed small-asteroid sub-populations, which orthogonally constrain the two free parameters. We find the outcome of rotational fission most likely produces an initial mass-ratio fraction that is four to eight times as likely to produce high-mass-ratio systems as low-mass-ratio systems, which is consistent with rotational fission creating binary systems in a flat distribution with respect to mass ratio. We also find that the mean of the log-normal BYORP coefficient distribution B ≈ 10^{-2}.
A logical model of cooperating rule-based systems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.
1989-01-01
A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.
Principles of Considering the Effect of the Limited Volume of a System on Its Thermodynamic State
NASA Astrophysics Data System (ADS)
Tovbin, Yu. K.
2018-01-01
The features of a system with a finite volume that affect its thermodynamic state are considered in comparison to describing small bodies in macroscopic phases. Equations for unary and pair distribution functions are obtained using difference derivatives of a discrete statistical sum. The structure of the equation for the free energy of a system consisting of an ensemble of volume-limited regions with different sizes and a full set of equations describing a macroscopic polydisperse system are discussed. It is found that the equations can be applied to molecular adsorption on small faces of microcrystals, to bound and isolated pores of a polydisperse material, and to describe the spinodal decomposition of a fluid in brief periods of time and high supersaturations of the bulk phase when each local region functions the same on average. It is shown that as the size of a system diminishes, corrections must be introduced for the finiteness of the system volume and fluctuations of the unary and pair distribution functions.
Distributed computer system enhances productivity for SRB joint optimization
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.
1987-01-01
Initial calculations of a redesign of the solid rocket booster joint that failed during the shuttle tragedy showed that the design had a weight penalty associated with it. Optimization techniques were to be applied to determine if there was any way to reduce the weight while keeping the joint opening closed and limiting the stresses. To allow engineers to examine as many alternatives as possible, a system was developed consisting of existing software that coupled structural analysis with optimization and that would execute on a network of computer workstations. To improve turnaround, this system took advantage of the parallelism offered by the finite difference technique of computing gradients to allow several workstations to contribute to the solution of the problem simultaneously. The resulting system reduced the amount of time to complete one optimization cycle from two hours to one-half hour, with the potential of reducing it to 15 minutes. The current distributed system, which contains numerous extensions, requires one hour of turnaround per optimization cycle. This would take four hours for the sequential system.
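A minimal sketch of the parallelization idea described above: each component of a forward finite-difference gradient requires one independent analysis, so the perturbed analyses can be farmed out to separate workers. The objective function here is a made-up stand-in; the original system wrapped a structural-analysis code on networked workstations.

```python
# Sketch: distributing finite-difference gradient evaluations across workers so several
# analyses run simultaneously. The objective is a hypothetical stand-in.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def objective(x):
    """Hypothetical stand-in for one structural analysis (e.g. joint weight + penalties)."""
    return float(np.sum(x**2) + np.sin(x).sum())

def fd_gradient_parallel(x, h=1e-6, max_workers=4):
    """Forward-difference gradient; each perturbed analysis can run on its own worker."""
    f0 = objective(x)
    perturbed = [x + h * np.eye(len(x))[i] for i in range(len(x))]
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        f_plus = list(pool.map(objective, perturbed))
    return (np.array(f_plus) - f0) / h

if __name__ == "__main__":
    x0 = np.array([0.5, -1.0, 2.0])
    print(fd_gradient_parallel(x0))   # compare with the analytic gradient 2x + cos(x)
```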
A Statistical Study of the Mass Distribution of Neutron Stars
NASA Astrophysics Data System (ADS)
Cheng, Zheng; Zhang, Cheng-Min; Zhao, Yong-Heng; Wang, De-Hua; Pan, Yuan-Yue; Lei, Ya-Juan
2014-07-01
By reviewing the methods of mass measurements of neutron stars in four different kinds of systems, i.e., the high-mass X-ray binaries (HMXBs), low-mass X-ray binaries (LMXBs), double neutron star systems (DNSs) and neutron star-white dwarf (NS-WD) binary systems, we have collected the orbital parameters of 40 systems. By using the bootstrap method and the Monte-Carlo method, we have rebuilt the likelihood probability curves of the measured masses of 46 neutron stars. The statistical analysis of the simulation results shows that the masses of neutron stars in the X-ray neutron star systems and those in the radio pulsar systems exhibit different distributions. Besides, the Bayes statistics of these four different kinds of systems yields the most probable probability density distributions to be (1.340 ± 0.230) M☉, (1.505 ± 0.125) M☉, (1.335 ± 0.055) M☉ and (1.495 ± 0.225) M☉, respectively. It is noteworthy that the masses of neutron stars in the HMXB and DNS systems are smaller than those in the other two kinds of systems by approximately 0.16 M☉. This result is consistent with the theoretical model in which a pulsar is spun up to millisecond periods via accretion of approximately 0.2 M☉. If the HMXBs and LMXBs are respectively taken to be the precursors of the DNS and NS-WD systems, then the influence of the accretion effect on the masses of neutron stars in the HMXB systems should be exceedingly small. Their mass distributions should be very close to the initial one during the formation of neutron stars. As for the LMXB and NS-WD systems, they should have already undergone a process of sufficient accretion; hence, a rather large deviation from the initial mass distribution arises.
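A hedged sketch of the kind of bootstrap-plus-Monte-Carlo estimate described above, written in Python; the mass values and uncertainties below are invented placeholders, not the 46 neutron-star measurements analysed in the paper.

```python
# Bootstrap + Monte Carlo estimate of a population mass distribution (illustrative values).
import numpy as np

rng = np.random.default_rng(1)
masses = np.array([1.25, 1.33, 1.40, 1.44, 1.48, 1.58])   # hypothetical masses (M_sun)
errors = np.array([0.05, 0.04, 0.10, 0.06, 0.08, 0.07])   # hypothetical 1-sigma errors

n_trials = 20000
means = np.empty(n_trials)
for k in range(n_trials):
    idx = rng.integers(0, len(masses), size=len(masses))   # bootstrap resample of systems
    sample = rng.normal(masses[idx], errors[idx])          # Monte Carlo draw within errors
    means[k] = sample.mean()

lo, med, hi = np.percentile(means, [16, 50, 84])
print(f"mean mass = {med:.3f} +{hi - med:.3f} / -{med - lo:.3f} M_sun")
```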
Charge distribution and transport properties in reduced ceria phases: A review
NASA Astrophysics Data System (ADS)
Shoko, E.; Smith, M. F.; McKenzie, Ross H.
2011-12-01
The question of the charge distribution in reduced ceria phases (CeO2-x) is important for understanding the microscopic physics of oxygen storage capacity, and the electronic and ionic conductivities in these materials. All these are key properties in the application of these materials in catalysis and electrochemical devices. Several approaches have been applied to study this problem, including ab initio methods. Recently [1], we applied the bond valence model (BVM) to discuss the charge distribution in several different crystallographic phases of reduced ceria. Here, we compare the BVM results to those from atomistic simulations to determine if there is consistency in the predictions of the two approaches. Our analysis shows that the two methods give a consistent picture of the charge distribution around oxygen vacancies in bulk reduced ceria phases. We then review the transport theory applicable to reduced ceria phases, providing useful relationships which enable comparison of experimental results obtained by different techniques. In particular, we compare transport parameters obtained from the observed optical absorption spectrum, α(ω), and the dc electrical conductivity with those predicted by small polaron theory and the Harrison method. The small polaron energy is comparable to that estimated from α(ω). However, we found a discrepancy between the value of the electron hopping matrix element, t, estimated from the Marcus-Hush formula and that obtained by the Harrison method. Part of this discrepancy could be attributed to the system lying in the crossover region between the adiabatic and nonadiabatic regimes, whereas our calculations assumed the system to be nonadiabatic. Finally, by considering the relationship between the charge distribution and electronic conductivity, we suggest the possibility of low temperature metallic conductivity for intermediate phases, i.e., x ≈ 0.3. This has not yet been experimentally observed.
Thermodynamic Analyses of the LCLS-II Cryogenic Distribution System
Dalesandro, Andrew; Kaluzny, Joshua; Klebaner, Arkadiy
2016-12-29
The Linac Coherent Light Source (LCLS) at Stanford Linear Accelerator Center (SLAC) is in the process of being upgraded to a superconducting radio frequency (SRF) accelerator and renamed LCLS-II. This upgrade requires thirty-five 1.3 GHz SRF cryomodules (CM) and two 3.9 GHz CM. A cryogenic distribution system (CDS) is in development by Fermi National Accelerator Laboratory to interconnect the CM Linac with the cryogenic plant (CP). The CDS design utilizes cryogenic helium to support CM operations with a high temperature thermal shield around 55 K, low temperature thermal intercepts around 5 K, and a SRF cavity liquid helium supply and sub-atmospheric vapor return both around 2 K. Additionally, the design must accommodate a Linac consisting of two parallel cryogenic strings, supported by two independent CP utilizing CDS components such as distribution boxes, transfer lines, feed caps and endcaps. In this paper, we describe the overall layout of the cryogenic distribution system and the major thermodynamic factors which influence the CDS design, including heat loads, pressure drops, temperature profiles, and pressure relieving requirements. In addition, the paper describes how the models are created to perform the analyses.
Objectified quantification of uncertainties in Bayesian atmospheric inversions
NASA Astrophysics Data System (ADS)
Berchet, A.; Pison, I.; Chevallier, F.; Bousquet, P.; Bonne, J.-L.; Paris, J.-D.
2015-05-01
Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. When data are sparse, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results to enhance the classical Bayesian inversion framework through a marginalization on a large set of plausible errors that can be prescribed in the system. The marginalization consists in computing inversions for all possible error distributions weighted by the probability of occurrence of the error distributions. The posterior distribution of the fluxes calculated by the marginalization is not explicitly describable. As a consequence, we carry out a Monte Carlo sampling based on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested method of maximum likelihood estimation. Thus, the marginalized inversion relies on an automatic objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly accounts for the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of an emission aggregation pattern and of a sampling protocol in order to reduce the computation cost. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the mesoscale with virtual observations on a realistic network in Eurasia. Observing system simulation experiments are carried out with different transport patterns, flux distributions and total prior amounts of emitted methane. The method proves to consistently reproduce the known "truth" in most cases, with satisfactory tolerance intervals. Additionally, the method explicitly provides influence scores and posterior correlation matrices. An in-depth interpretation of the inversion results is then possible. The more objective quantification of the influence of the observations on the fluxes proposed here allows us to evaluate the impact of the observation network on the characterization of the surface fluxes. The explicit correlations between emission aggregates reveal the mis-separated regions, hence the typical temporal and spatial scales the inversion can analyse. These scales are consistent with the chosen aggregation patterns.
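The marginalization idea can be illustrated with a deliberately tiny toy problem: a scalar flux observed once through a linear operator, with the prior-error and observation-error variances (B, R) treated as uncertain, sampled, and weighted by their likelihood. This is only a sketch of the concept under those simplifying assumptions, not the mesoscale methane system used in the paper.

```python
# Toy sketch: instead of freezing one set of error statistics (B, R), average the Bayesian
# analyses over many plausible (B, R) pairs weighted by their likelihood. All numbers are
# illustrative only (scalar flux, single observation).
import numpy as np

rng = np.random.default_rng(2)
H, x_prior, x_true = 1.0, 8.0, 10.0             # observation operator, prior flux, truth
y = H * x_true + rng.normal(scale=0.5)          # one synthetic observation

B_samples = rng.uniform(0.5, 10.0, size=5000)   # candidate prior-error variances
R_samples = rng.uniform(0.1, 5.0, size=5000)    # candidate observation-error variances

innov = y - H * x_prior
S = H * B_samples * H + R_samples               # innovation variance for each (B, R)
log_like = -0.5 * (np.log(2 * np.pi * S) + innov**2 / S)
w = np.exp(log_like - log_like.max()); w /= w.sum()

K = B_samples * H / S                           # Kalman-type gain per (B, R)
x_post = x_prior + K * innov                    # posterior mean per (B, R)
x_marg = np.sum(w * x_post)                     # marginalized posterior mean
print(x_marg)
```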
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Anthony M.; Williams, Liliya L.R.; Hjorth, Jens, E-mail: amyoung@astro.umn.edu, E-mail: llrw@astro.umn.edu, E-mail: jens@dark-cosmology.dk
One usually thinks of a radial density profile as having a monotonically changing logarithmic slope, such as in NFW or Einasto profiles. However, in two different classes of commonly used systems, this is often not the case. These classes exhibit non-monotonic changes in their density profile slopes, which we call oscillations for short. We analyze these two unrelated classes separately. Class 1 consists of systems that have density oscillations and that are defined through their distribution function f(E), or differential energy distribution N(E), such as isothermal spheres, King profiles, or DARKexp, a theoretically derived model for relaxed collisionless systems. Systems defined through f(E) or N(E) generally have density slope oscillations. Class 1 system oscillations can be found at small, intermediate, or large radii, but we focus on a limited set of Class 1 systems that have oscillations in the central regions, usually at log(r/r_{-2}) ≲ −2, where r_{-2} is the largest radius where d log(ρ)/d log(r) = −2. We show that the shape of their N(E) can roughly predict the amplitude of oscillations. Class 2 systems, which are a product of dynamical evolution, consist of observed and simulated galaxies and clusters, and pure dark matter halos. Oscillations in the density profile slope seem pervasive in the central regions of Class 2 systems. We argue that in these systems, slope oscillations are an indication that a system is not fully relaxed. We show that these oscillations can be reproduced by small modifications to the N(E) of DARKexp. These affect a small fraction of the systems' mass and are confined to log(r/r_{-2}) ≲ 0. The size of these modifications serves as a potential diagnostic for quantifying how far a system is from being relaxed.
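A small sketch of the slope diagnostic discussed above: compute d log(ρ)/d log(r) numerically on a log-spaced grid and flag any interval where the slope turns back up (a non-monotonic change, i.e. an oscillation). An NFW profile is used only as a convenient analytic stand-in; it should show no oscillation.

```python
# Compute the logarithmic density slope d log(rho)/d log(r) and flag non-monotonic behaviour.
import numpy as np

r = np.logspace(-3, 2, 400)                  # radii in units of the scale radius
rho = 1.0 / (r * (1.0 + r) ** 2)             # NFW: slope runs smoothly from -1 to -3

slope = np.gradient(np.log(rho), np.log(r))  # d log(rho) / d log(r)
d_slope = np.diff(slope)
oscillating = bool(np.any(d_slope > 1e-6))   # any interval where the slope turns back up?
print(oscillating)                           # False for NFW; True would indicate oscillations
```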
User's Manual for Computer Program ROTOR. [to calculate tilt-rotor aircraft dynamic characteristics
NASA Technical Reports Server (NTRS)
Yasue, M.
1974-01-01
A detailed description of a computer program to calculate tilt-rotor aircraft dynamic characteristics is presented. This program consists of two parts: (1) the natural frequencies and corresponding mode shapes of the rotor blade and wing are developed from structural data (mass distribution and stiffness distribution); and (2) the frequency response (to gust and blade pitch control inputs) and eigenvalues of the tilt-rotor dynamic system, based on the natural frequencies and mode shapes, are derived. Sample problems are included to assist the user.
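As an illustration of part (1), natural frequencies and mode shapes follow from mass and stiffness data via the generalized eigenproblem K v = ω² M v; the sketch below uses a made-up 3-degree-of-freedom system, not data or code from the ROTOR program.

```python
# Natural frequencies and mode shapes from mass and stiffness matrices (illustrative 3-DOF system).
import numpy as np
from scipy.linalg import eigh

M = np.diag([2.0, 1.5, 1.0])                 # lumped mass distribution
K = np.array([[ 4.0, -2.0,  0.0],            # stiffness distribution
              [-2.0,  3.0, -1.0],
              [ 0.0, -1.0,  1.0]])

w2, modes = eigh(K, M)                       # generalized eigenproblem: eigenvalues are omega^2
freqs_hz = np.sqrt(w2) / (2 * np.pi)
print(freqs_hz)        # natural frequencies
print(modes)           # columns are the corresponding mode shapes
```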
LHCb Conditions database operation assistance systems
NASA Astrophysics Data System (ADS)
Clemencic, M.; Shapoval, I.; Cattaneo, M.; Degaudenzi, H.; Santinelli, R.
2012-12-01
The Conditions Database (CondDB) of the LHCb experiment provides versioned, time dependent geometry and conditions data for all LHCb data processing applications (simulation, high level trigger (HLT), reconstruction, analysis) in a heterogeneous computing environment ranging from user laptops to the HLT farm and the Grid. These different use cases impose front-end support for multiple database technologies (Oracle and SQLite are used). Sophisticated distribution tools are required to ensure timely and robust delivery of updates to all environments. The content of the database has to be managed to ensure that updates are internally consistent and externally compatible with multiple versions of the physics application software. In this paper we describe three systems that we have developed to address these issues. The first system is a CondDB state tracking extension to the Oracle 3D Streams replication technology, to trap cases when the CondDB replication was corrupted. Second, an automated distribution system for the SQLite-based CondDB, providing also smart backup and checkout mechanisms for the CondDB managers and LHCb users respectively. And, finally, a system to verify and monitor the internal (CondDB self-consistency) and external (LHCb physics software vs. CondDB) compatibility. The former two systems are used in production in the LHCb experiment and have achieved the desired goal of higher flexibility and robustness for the management and operation of the CondDB. The latter one has been fully designed and is passing currently to the implementation stage.
Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution
NASA Astrophysics Data System (ADS)
Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.
2017-10-01
Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
Stability assessment of a multi-port power electronic interface for hybrid micro-grid applications
NASA Astrophysics Data System (ADS)
Shamsi, Pourya
Migration to an industrial society increases the demand for electrical energy. Meanwhile, social causes for preserving the environment and reducing pollution seek cleaner forms of energy sources. Therefore, there has been a growth in distributed generation from renewable sources in the past decade. Existing regulations and power system coordination do not allow for massive integration of distributed generation throughout the grid. Moreover, the current infrastructures are not designed for interfacing distributed and deregulated generation. In order to remedy this problem, a hybrid micro-grid based on nano-grids is introduced. This system consists of a reliable micro-grid structure that provides a smooth transition from the current distribution networks to smart micro-grid systems. Multi-port power electronic interfaces are introduced to manage the local generation, storage, and consumption. Afterwards, a model for this micro-grid is derived. Using this model, the stability of the system under a variety of source and load induced disturbances is studied. Moreover, a pole-zero study of the micro-grid is performed under various loading conditions. An experimental setup of this micro-grid is developed, and the validity of the model in emulating the dynamic behavior of the system is verified. This study provides a theory for a novel hybrid micro-grid as well as models for stability assessment of the proposed micro-grid.
Testing the mutual information expansion of entropy with multivariate Gaussian distributions.
Goethe, Martin; Fita, Ignacio; Rubi, J Miguel
2017-12-14
The mutual information expansion (MIE) represents an approximation of the configurational entropy in terms of low-dimensional integrals. It is frequently employed to compute entropies from simulation data of large systems, such as macromolecules, for which brute-force evaluation of the full configurational integral is intractable. Here, we test the validity of MIE for systems consisting of more than m = 100 degrees of freedom (dofs). The dofs are distributed according to multivariate Gaussian distributions which were generated from protein structures using a variant of the anisotropic network model. For the Gaussian distributions, we have semi-analytical access to the configurational entropy as well as to all contributions of MIE. This allows us to accurately assess the validity of MIE for different situations. We find that MIE diverges for systems containing long-range correlations which means that the error of consecutive MIE approximations grows with the truncation order n for all tractable n ≪ m. This fact implies severe limitations on the applicability of MIE, which are discussed in the article. For systems with correlations that decay exponentially with distance, MIE represents an asymptotic expansion of entropy, where the first successive MIE approximations approach the exact entropy, while MIE also diverges for larger orders. In this case, MIE serves as a useful entropy expansion when truncated up to a specific truncation order which depends on the correlation length of the system.
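For a multivariate Gaussian the exact entropy and every low-dimensional MIE term are available in closed form, which is what makes the test described above possible. The sketch below compares the exact entropy with the first- and second-order MIE approximations for a made-up 6-dimensional covariance (not one of the protein-derived distributions used in the paper).

```python
# Second-order mutual information expansion (MIE) for a multivariate Gaussian, where the
# exact configurational entropy and all low-dimensional terms have closed forms.
import numpy as np
from itertools import combinations

def gauss_entropy(cov):
    """Differential entropy of a (multivariate) Gaussian with covariance cov."""
    cov = np.atleast_2d(cov)
    k = cov.shape[0]
    return 0.5 * np.log(((2 * np.pi * np.e) ** k) * np.linalg.det(cov))

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6))
cov = A @ A.T + 6 * np.eye(6)                 # random symmetric positive-definite covariance

S_exact = gauss_entropy(cov)
S1 = sum(gauss_entropy(cov[i, i]) for i in range(6))              # first-order MIE
pair_MI = sum(gauss_entropy(cov[i, i]) + gauss_entropy(cov[j, j])
              - gauss_entropy(cov[np.ix_([i, j], [i, j])])
              for i, j in combinations(range(6), 2))
S2 = S1 - pair_MI                                                 # second-order MIE
print(S_exact, S1, S2)
```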
Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosiano, John; Newkirk, Ryan; Mc Donald, Mark P
2010-12-03
The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and commodity flow models are instantiated, a risk consequence analysis can be performed by injecting contaminant at chosen points in the system and propagating the event through the overarching system to arrive at morbidity and mortality figures. A generic chocolate snack cake model, consisting of fluid milk, liquid eggs, and cocoa, is described as an intended proof of concept for multi-ingredient food systems. We aim for an eventual tool that can be used directly by policy makers and planners.
NASA Astrophysics Data System (ADS)
Silva, Norberto De Jesus
Previous studies have shown that the time-resolved fluorescence decay of various single tryptophan proteins is best described by a distribution of fluorescence lifetimes rather than one or two lifetimes. The thermal dependence of the lifetime distributions is consistent with the hypothesis that proteins fluctuate between a hierarchy of many conformational substates. With this scenario as a theoretical framework, the correlations between protein dynamics and structure are investigated by studying the time-resolved fluorescence and anisotropy decay of the single tryptophan (Trp) residue of human superoxide dismutase (HSOD) over a wide range of temperatures and at different denaturant concentrations. First, it is demonstrated that the center of the lifetime distribution can characterize the average deactivation environment of the excited Trp-protein system. A qualitative model is introduced to explain the time-resolved fluorescence decay of HSOD in 80% glycerol over a wide range of temperatures. The dynamical model features isoenergetic conformational substates separated by a hierarchy of energy barriers. The HSOD system is also investigated as a function of denaturant concentration in aqueous solution. As a function of guanidine hydrochloride (GdHCl), the width of the fluorescence lifetime distribution of HSOD displays a maximum which is not coincident with the fully denatured form of HSOD at 6.5 M GdHCl. Furthermore, the width for the fully denatured form of HSOD is greater than that of the native form. This is consistent with the scenario that more conformational substates are being created upon denaturation of HSOD. HSOD is a dimeric protein, and it was observed that the width of the lifetime distribution of HSOD at intermediate GdHCl concentrations increased with decreasing protein concentration. In addition, the secondary structure of HSOD at intermediate GdHCl concentration does not change with protein concentration. These results suggest that HSOD displays structural microheterogeneity, which is consistent with the hypothesis of conformational substates. Further analysis shows that, during denaturation, the monomeric form of HSOD is an intermediate which displays native-like secondary structure and fluctuating tertiary structure; i.e., the monomeric form of HSOD is a molten globule.
Is the co-seismic slip distribution fractal?
NASA Astrophysics Data System (ADS)
Milliner, Christopher; Sammis, Charles; Allam, Amir; Dolan, James
2015-04-01
Co-seismic along-strike slip heterogeneity is widely observed for many surface-rupturing earthquakes, as revealed by field and high-resolution geodetic methods. However, this co-seismic slip variability is currently a poorly understood phenomenon. Key unanswered questions include: What are the characteristics and underlying causes of along-strike slip variability? Do the properties of slip variability change from fault to fault, along strike, or at different scales? We cross-correlate optical, pre- and post-event air photos using the program COSI-Corr to measure the near-field, surface deformation pattern of the 1992 Mw 7.3 Landers and 1999 Mw 7.1 Hector Mine earthquakes in high resolution. We produce the co-seismic slip profiles of both events from over 1,000 displacement measurements and observe consistent along-strike slip variability. Although the observed slip heterogeneity seems apparently complex and disordered, a spectral analysis reveals that the slip distributions are indeed self-affine fractal, i.e., slip exhibits a consistent degree of irregularity at all observable length scales, has a 'short memory', and is not random. We find a fractal dimension of 1.58 and 1.75 for the Landers and Hector Mine earthquakes, respectively, indicating that slip is more heterogeneous for the Hector Mine event. Fractal slip is consistent with both dynamic and quasi-static numerical simulations that use non-planar faults, which in turn cause heterogeneous along-strike stress, and we attribute the observed fractal slip to fault surfaces of fractal roughness. As fault surfaces are known to smooth over geologic time due to abrasional wear and fracturing, we also test whether the fractal properties of slip distributions alter between earthquakes from immature to mature fault systems. We will present results that test this hypothesis by using the optical image correlation technique to measure historic, co-seismic slip distributions of earthquakes from structurally mature, large cumulative displacement faults and compare these slip distributions to those from immature fault systems. Our results have fundamental implications for an understanding of slip heterogeneity and the behavior of the rupture process.
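A sketch of the spectral route to a fractal dimension mentioned above: for a 1-D self-affine profile with power spectrum P(k) ∝ k^(−β), the fractal dimension follows from D = (5 − β)/2, so β can be estimated from a log-log fit. The profile below is synthetic; the actual study uses the Landers and Hector Mine displacement measurements.

```python
# Estimate the fractal dimension of a 1-D along-strike slip profile from its power spectrum,
# using the self-affine relation D = (5 - beta) / 2 (synthetic profile for illustration).
import numpy as np

rng = np.random.default_rng(4)
n, dx = 2048, 10.0                            # samples and spacing (m) of a synthetic profile
beta_true = 1.8                               # spectral slope used to synthesize the profile

k = np.fft.rfftfreq(n, d=dx)[1:]              # positive wavenumbers
phases = np.exp(2j * np.pi * rng.random(k.size))
spec = k ** (-beta_true / 2) * phases         # amplitude ~ k^(-beta/2) gives power ~ k^(-beta)
slip = np.fft.irfft(np.concatenate(([0], spec)), n=n)

power = np.abs(np.fft.rfft(slip)[1:]) ** 2
beta_est, _ = np.polyfit(np.log(k), np.log(power), 1)
beta_est = -beta_est
print("fractal dimension ~", (5 - beta_est) / 2)
```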
NASA Astrophysics Data System (ADS)
Jhan, Sin-Mu; Jin, Bih-Yaw
2017-11-01
A simple molecular orbital treatment of local current distributions inside single molecular junctions is developed in this paper. Using the first-order perturbation theory and nonequilibrium Green's function techniques in the framework of Hückel theory, we show that the leading contributions to local current distributions are directly proportional to the off-diagonal elements of transition density matrices. Under the orbital approximation, the major contributions to local currents come from a few dominant molecular orbital pairs which are mixed by the interactions between the molecule and electrodes. A few simple molecular junctions consisting of single- and multi-ring conjugated systems are used to demonstrate that local current distributions inside molecular junctions can be decomposed by partial sums of a few leading contributing transition density matrices.
Grist : grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Bunch compression efficiency of the femtosecond electron source at Chiang Mai University
NASA Astrophysics Data System (ADS)
Thongbai, C.; Kusoljariyakul, K.; Saisut, J.
2011-07-01
A femtosecond electron source has been developed at the Plasma and Beam Physics Research Facility (PBP), Chiang Mai University (CMU), Thailand. Ultra-short electron bunches can be produced with a bunch compression system consisting of a thermionic cathode RF-gun, an alpha-magnet as a magnetic bunch compressor, and a linear accelerator as a post acceleration section. To obtain effective bunch compression, it is crucial to provide a proper longitudinal phase-space distribution at the gun exit matched to the subsequent beam transport system. Via beam dynamics calculations and experiments, we investigate the bunch compression efficiency for various RF-gun fields. The particle distribution at the RF-gun exit will be tracked numerically through the alpha-magnet and beam transport. Details of the study and results leading to an optimum condition for our system will be presented.
Stress Distribution and Damage Mode of Ceramic-Dentin Bilayer Systems
NASA Astrophysics Data System (ADS)
Kurtoglu, Cem; Demiroz, S. Suna; Mehmetov, Emirullah; Uysal, Hakan
The aim of this study was to evaluate the damage modes of ceramic systems bonded to dentin under Hertzian indentation. A single-cycle Hertzian contact test over a 150-850 N load range was applied randomly to 210 ceramic-dentin bilayer disc specimens of zirconia or IPS Empress II (1 mm and 1.5 mm thick) and of feldspathic porcelain (1 mm, 1.5 mm, and 2 mm thick). Optical microscopy was employed for the identification of the quasiplastic mode and radial cracks. Finite element analysis was used to analyze the stress distribution. Our results showed that the degree of damage in both modes evolved progressively and that the origin changed with contact load. Stress location and value were consistent with the mechanical test results. It was concluded that the microstructure and thickness of the material have a significant effect on the damage modes of ceramic layer systems.
The Spallation Neutron Source accelerator system design
NASA Astrophysics Data System (ADS)
Henderson, S.; Abraham, W.; Aleksandrov, A.; Allen, C.; Alonso, J.; Anderson, D.; Arenius, D.; Arthur, T.; Assadi, S.; Ayers, J.; Bach, P.; Badea, V.; Battle, R.; Beebe-Wang, J.; Bergmann, B.; Bernardin, J.; Bhatia, T.; Billen, J.; Birke, T.; Bjorklund, E.; Blaskiewicz, M.; Blind, B.; Blokland, W.; Bookwalter, V.; Borovina, D.; Bowling, S.; Bradley, J.; Brantley, C.; Brennan, J.; Brodowski, J.; Brown, S.; Brown, R.; Bruce, D.; Bultman, N.; Cameron, P.; Campisi, I.; Casagrande, F.; Catalan-Lasheras, N.; Champion, M.; Champion, M.; Chen, Z.; Cheng, D.; Cho, Y.; Christensen, K.; Chu, C.; Cleaves, J.; Connolly, R.; Cote, T.; Cousineau, S.; Crandall, K.; Creel, J.; Crofford, M.; Cull, P.; Cutler, R.; Dabney, R.; Dalesio, L.; Daly, E.; Damm, R.; Danilov, V.; Davino, D.; Davis, K.; Dawson, C.; Day, L.; Deibele, C.; Delayen, J.; DeLong, J.; Demello, A.; DeVan, W.; Digennaro, R.; Dixon, K.; Dodson, G.; Doleans, M.; Doolittle, L.; Doss, J.; Drury, M.; Elliot, T.; Ellis, S.; Error, J.; Fazekas, J.; Fedotov, A.; Feng, P.; Fischer, J.; Fox, W.; Fuja, R.; Funk, W.; Galambos, J.; Ganni, V.; Garnett, R.; Geng, X.; Gentzlinger, R.; Giannella, M.; Gibson, P.; Gillis, R.; Gioia, J.; Gordon, J.; Gough, R.; Greer, J.; Gregory, W.; Gribble, R.; Grice, W.; Gurd, D.; Gurd, P.; Guthrie, A.; Hahn, H.; Hardek, T.; Hardekopf, R.; Harrison, J.; Hatfield, D.; He, P.; Hechler, M.; Heistermann, F.; Helus, S.; Hiatt, T.; Hicks, S.; Hill, J.; Hill, J.; Hoff, L.; Hoff, M.; Hogan, J.; Holding, M.; Holik, P.; Holmes, J.; Holtkamp, N.; Hovater, C.; Howell, M.; Hseuh, H.; Huhn, A.; Hunter, T.; Ilg, T.; Jackson, J.; Jain, A.; Jason, A.; Jeon, D.; Johnson, G.; Jones, A.; Joseph, S.; Justice, A.; Kang, Y.; Kasemir, K.; Keller, R.; Kersevan, R.; Kerstiens, D.; Kesselman, M.; Kim, S.; Kneisel, P.; Kravchuk, L.; Kuneli, T.; Kurennoy, S.; Kustom, R.; Kwon, S.; Ladd, P.; Lambiase, R.; Lee, Y. Y.; Leitner, M.; Leung, K.-N.; Lewis, S.; Liaw, C.; Lionberger, C.; Lo, C. C.; Long, C.; Ludewig, H.; Ludvig, J.; Luft, P.; Lynch, M.; Ma, H.; MacGill, R.; Macha, K.; Madre, B.; Mahler, G.; Mahoney, K.; Maines, J.; Mammosser, J.; Mann, T.; Marneris, I.; Marroquin, P.; Martineau, R.; Matsumoto, K.; McCarthy, M.; McChesney, C.; McGahern, W.; McGehee, P.; Meng, W.; Merz, B.; Meyer, R.; Meyer, R.; Miller, B.; Mitchell, R.; Mize, J.; Monroy, M.; Munro, J.; Murdoch, G.; Musson, J.; Nath, S.; Nelson, R.; Nelson, R.; O`Hara, J.; Olsen, D.; Oren, W.; Oshatz, D.; Owens, T.; Pai, C.; Papaphilippou, I.; Patterson, N.; Patterson, J.; Pearson, C.; Pelaia, T.; Pieck, M.; Piller, C.; Plawski, T.; Plum, M.; Pogge, J.; Power, J.; Powers, T.; Preble, J.; Prokop, M.; Pruyn, J.; Purcell, D.; Rank, J.; Raparia, D.; Ratti, A.; Reass, W.; Reece, K.; Rees, D.; Regan, A.; Regis, M.; Reijonen, J.; Rej, D.; Richards, D.; Richied, D.; Rode, C.; Rodriguez, W.; Rodriguez, M.; Rohlev, A.; Rose, C.; Roseberry, T.; Rowton, L.; Roybal, W.; Rust, K.; Salazer, G.; Sandberg, J.; Saunders, J.; Schenkel, T.; Schneider, W.; Schrage, D.; Schubert, J.; Severino, F.; Shafer, R.; Shea, T.; Shishlo, A.; Shoaee, H.; Sibley, C.; Sims, J.; Smee, S.; Smith, J.; Smith, K.; Spitz, R.; Staples, J.; Stein, P.; Stettler, M.; Stirbet, M.; Stockli, M.; Stone, W.; Stout, D.; Stovall, J.; Strelo, W.; Strong, H.; Sundelin, R.; Syversrud, D.; Szajbler, M.; Takeda, H.; Tallerico, P.; Tang, J.; Tanke, E.; Tepikian, S.; Thomae, R.; Thompson, D.; Thomson, D.; Thuot, M.; Treml, C.; Tsoupas, N.; Tuozzolo, J.; Tuzel, W.; Vassioutchenko, A.; Virostek, S.; Wallig, J.; Wanderer, P.; Wang, Y.; Wang, J. 
G.; Wangler, T.; Warren, D.; Wei, J.; Weiss, D.; Welton, R.; Weng, J.; Weng, W.-T.; Wezensky, M.; White, M.; Whitlatch, T.; Williams, D.; Williams, E.; Wilson, K.; Wiseman, M.; Wood, R.; Wright, P.; Wu, A.; Ybarrolaza, N.; Young, K.; Young, L.; Yourd, R.; Zachoszcz, A.; Zaltsman, A.; Zhang, S.; Zhang, W.; Zhang, Y.; Zhukov, A.
2014-11-01
The Spallation Neutron Source (SNS) was designed and constructed by a collaboration of six U.S. Department of Energy national laboratories. The SNS accelerator system consists of a 1 GeV linear accelerator and an accumulator ring providing 1.4 MW of proton beam power in microsecond-long beam pulses to a liquid mercury target for neutron production. The accelerator complex consists of a front-end negative hydrogen-ion injector system, an 87 MeV drift tube linear accelerator, a 186 MeV side-coupled linear accelerator, a 1 GeV superconducting linear accelerator, a 248-m circumference accumulator ring and associated beam transport lines. The accelerator complex is supported by ~100 high-power RF power systems, a 2 K cryogenic plant, ~400 DC and pulsed power supply systems, ~400 beam diagnostic devices and a distributed control system handling ~100,000 I/O signals. The beam dynamics design of the SNS accelerator is presented, as is the engineering design of the major accelerator subsystems.
First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions
NASA Astrophysics Data System (ADS)
Arnaldi, R.; Banicz, K.; Castor, J.; Chaurand, B.; Cicalò, C.; Colla, A.; Cortese, P.; Damjanovic, S.; David, A.; de Falco, A.; Devaux, A.; Ducroux, L.; En'Yo, H.; Fargeix, J.; Ferretti, A.; Floris, M.; Förster, A.; Force, P.; Guettet, N.; Guichard, A.; Gulkanian, H.; Heuser, J. M.; Keil, M.; Kluberg, L.; Lourenço, C.; Lozano, J.; Manso, F.; Martins, P.; Masoni, A.; Neves, A.; Ohnishi, H.; Oppedisano, C.; Parracho, P.; Pillot, P.; Poghosyan, T.; Puddu, G.; Radermacher, E.; Ramalhete, P.; Rosinsky, P.; Scomparin, E.; Seixas, J.; Serci, S.; Shahoyan, R.; Sonderegger, P.; Specht, H. J.; Tieulent, R.; Usai, G.; Veenhof, R.; Wöhri, H. K.
2009-06-01
The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158AGeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2
Reliability analysis and initial requirements for FC systems and stacks
NASA Astrophysics Data System (ADS)
Åström, K.; Fontell, E.; Virtanen, S.
In the year 2000 Wärtsilä Corporation started an R&D program to develop SOFC systems for CHP applications. The program aims to bring to the market highly efficient, clean and cost competitive fuel cell systems with rated power output in the range of 50-250 kW for distributed generation and marine applications. In the program Wärtsilä focuses on system integration and development. System reliability and availability are key issues determining the competitiveness of the SOFC technology. In Wärtsilä, methods have been implemented for analysing the system in respect to reliability and safety as well as for defining reliability requirements for system components. A fault tree representation is used as the basis for reliability prediction analysis. A dynamic simulation technique has been developed to allow for non-static properties in the fault tree logic modelling. Special emphasis has been placed on reliability analysis of the fuel cell stacks in the system. A method for assessing reliability and critical failure predictability requirements for fuel cell stacks in a system consisting of several stacks has been developed. The method is based on a qualitative model of the stack configuration where each stack can be in a functional, partially failed or critically failed state, each of the states having different failure rates and effects on the system behaviour. The main purpose of the method is to understand the effect of stack reliability, critical failure predictability and operating strategy on the system reliability and availability. An example configuration, consisting of 5 × 5 stacks (series of 5 sets of 5 parallel stacks) is analysed in respect to stack reliability requirements as a function of predictability of critical failures and Weibull shape factor of failure rate distributions.
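A minimal sketch of the series-parallel reliability arithmetic behind such an analysis is given below, assuming each stack follows a Weibull failure-time distribution and that a set of parallel stacks survives as long as at least one stack does; the Weibull parameters and the redundancy rule are illustrative assumptions, not Wärtsilä's model.

```python
import math

def stack_reliability(t_hours, eta, beta):
    """Weibull reliability of a single stack at time t."""
    return math.exp(-(t_hours / eta) ** beta)

def system_reliability(t_hours, eta, beta, n_series=5, n_parallel=5):
    """Series chain of n_series groups, each a parallel set of n_parallel stacks.
    A group is assumed to survive if at least one of its stacks survives
    (illustrative redundancy rule; the actual operating strategy may differ)."""
    r_stack = stack_reliability(t_hours, eta, beta)
    r_group = 1.0 - (1.0 - r_stack) ** n_parallel
    return r_group ** n_series

if __name__ == "__main__":
    # Hypothetical Weibull parameters: characteristic life 40,000 h, shape 1.5.
    for t in (5_000, 20_000, 40_000):
        print(t, round(system_reliability(t, eta=40_000, beta=1.5), 4))
```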
Energy management and control of active distribution systems
NASA Astrophysics Data System (ADS)
Shariatzadeh, Farshid
Advancements in communication, control, computation and information technologies have driven the transition to the next generation of active power distribution systems. Novel control techniques and management strategies are required to achieve an efficient, economic and reliable grid. The focus of this work is energy management and control of active distribution systems (ADS) with integrated renewable energy sources (RESs) and demand response (DR). Here, an ADS means an automated distribution system with remotely operated controllers and distributed energy resources (DERs). DERs, as the active part of the next-generation distribution system, include distributed generation (DG), RESs, energy storage systems (ESS), plug-in hybrid electric vehicles (PHEVs) and DR. Integration of DR and RESs into ADS is critical to realize the vision of sustainability. The objective of this dissertation is the development of a management architecture to control and operate ADS in the presence of DR and RES. One of the most challenging issues in operating ADS is the inherent uncertainty of DR and RES, as well as the conflicting objectives of DERs and electric utilities. An ADS can consist of different layers, such as a system layer and a building layer, and coordination between these layers is essential. To address these challenges, a multi-layer energy management and control architecture with robust algorithms is proposed in this work. The first layer of the proposed architecture is implemented at the system level: a developed AC optimal power flow (AC-OPF) generates a fair price for all DR and non-DR loads, which is used as a control signal for the second layer. The second layer controls DR loads at buildings using a developed look-ahead robust controller. A load aggregator collects information from all buildings and sends the aggregated load to the system optimizer. Because of the different time scales at these two management layers, a time coordination scheme is developed. Robust and deterministic controllers are developed to maximize the local energy usage from rooftop photovoltaic (PV) generation and to minimize heating, ventilation and air conditioning (HVAC) consumption while maintaining the inside temperature within the comfort zone. The performance of the developed multi-layer architecture has been analyzed using test case studies, and the results show the robustness of the developed controller in the presence of uncertainty.
Superposition and detection of two helical beams for optical orbital angular momentum communication
NASA Astrophysics Data System (ADS)
Liu, Yi-Dong; Gao, Chunqing; Gao, Mingwei; Qi, Xiaoqing; Weber, Horst
2008-07-01
A loop-like system with a Dove prism is used to generate a collinear superposition of two helical beams with different azimuthal quantum numbers. After the helical beams, distributed on a circle centered at the optical axis, are generated with a binary amplitude grating, the diffracted field is separated into two polarized fields with the same distribution. Rotated in opposite directions by the Dove prism in the loop-like system and then recombined, the two fields generate the collinear superposition of the two helical beams in a given direction. The experiment shows consistency with the theoretical analysis. This method has potential applications in optical communication using the orbital angular momentum of laser beams (optical vortices).
PIP-II Cryogenic System and the evolution of Superfluid Helium Cryogenic Plant Specifications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakravarty, Anindya; Rane, Tejas; Klebaner, Arkadiy
2017-07-06
The PIP-II cryogenic system consists of a Superfluid Helium Cryogenic Plant (SHCP) and a Cryogenic Distribution System (CDS) connecting the SHCP to the Superconducting (SC) Linac, which consists of 25 cryomodules. The dynamic heat loads of the SC cavities for continuous wave (CW) as well as pulsed modes of operation are listed. The static heat loads of the cavities along with the CDS are also discussed. A simulation study has been carried out to compute the supercritical helium (SHe) flow requirements for each cryomodule, and a comparison between the flow requirements of the cryomodules for the CW and pulsed modes of operation has also been made. From the total computed heat load and pressure drop values in the CDS, the basic specifications for the SHCP required for cooling the SC Linac have evolved.
Toward an Autonomous Telescope Network: the TBT Scheduler
NASA Astrophysics Data System (ADS)
Racero, E.; Ibarra, A.; Ocaña, F.; de Lis, S. B.; Ponz, J. D.; Castillo, M.; Sánchez-Portal, M.
2015-09-01
Within the ESA SSA program, it is foreseen to deploy several robotic telescopes to provide surveillance and tracking services for hazardous objects. The TBT project will procure a validation platform for an autonomous optical observing system in a realistic scenario, consisting of two telescopes located in Spain and Australia, to collect representative test data for precursor SSA services. In this context, the planning and scheduling of the night consists of two software modules: the TBT Scheduler, which allows the manual and autonomous planning of the night, and the RTS2 internal scheduler, which controls the real-time response of the system. The TBT Scheduler allocates tasks for both telescopes without human intervention. Every night it takes all the inputs needed and prepares the schedule following predefined rules. The main purpose of the scheduler is the distribution of time between follow-up of recently discovered targets and surveys. The TBT Scheduler considers the overall performance of the system and combines follow-up with a priori survey strategies for both kinds of objects. The strategy is defined according to the expected combined performance of both systems for the upcoming night (weather, sky brightness, object accessibility and priority). The TBT Scheduler therefore defines the global approach for the network and relies on the RTS2 internal scheduler for the final detailed distribution of tasks at each sensor.
Feasibility of Synergy-Based Exoskeleton Robot Control in Hemiplegia.
Hassan, Modar; Kadone, Hideki; Ueno, Tomoyuki; Hada, Yasushi; Sankai, Yoshiyuki; Suzuki, Kenji
2018-06-01
Here, we present a study on exoskeleton robot control based on inter-limb locomotor synergies using a robot control method developed to target hemiparesis. The robot control is based on inter-limb locomotor synergies and kinesiological information from the non-paretic leg and a walking aid cane to generate motion patterns for the assisted leg. The developed synergy-based system was tested against an autonomous robot control system in five patients with hemiparesis and varying locomotor abilities. Three of the participants were able to walk using the robot. Results from these participants showed an improved spatial symmetry ratio and more consistent step length with the synergy-based method compared with that for the autonomous method, while the increase in the range of motion for the assisted joints was larger with the autonomous system. The kinematic synergy distribution of the participants walking without the robot suggests a relationship between each participant's synergy distribution and his/her ability to control the robot: participants with two independent synergies accounting for approximately 80% of the data variability were able to walk with the robot. This observation was not consistently apparent with conventional clinical measures such as the Brunnstrom stages. This paper contributes to the field of robot-assisted locomotion therapy by introducing the concept of inter-limb synergies, demonstrating performance differences between synergy-based and autonomous robot control, and investigating the range of disability in which the system is usable.
Topology Counts: Force Distributions in Circular Spring Networks.
Heidemann, Knut M; Sageman-Furnas, Andrew O; Sharma, Abhinav; Rehfeldt, Florian; Schmidt, Christoph F; Wardetzky, Max
2018-02-09
Filamentous polymer networks govern the mechanical properties of many biological materials. Force distributions within these networks are typically highly inhomogeneous, and, although the importance of force distributions for structural properties is well recognized, they are far from being understood quantitatively. Using a combination of probabilistic and graph-theoretical techniques, we derive force distributions in a model system consisting of ensembles of random linear spring networks on a circle. We show that characteristic quantities, such as the mean and variance of the force supported by individual springs, can be derived explicitly in terms of only two parameters: (i) average connectivity and (ii) number of nodes. Our analysis shows that a classical mean-field approach fails to capture these characteristic quantities correctly. In contrast, we demonstrate that network topology is a crucial determinant of force distributions in an elastic spring network. Our results for 1D linear spring networks readily generalize to arbitrary dimensions.
Li, Wenjin
2018-02-28
The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates; however, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are strongly coupled to the reaction coordinate, while the energy on the remaining coordinates was almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.
Effect of distributive mass of spring on power flow in engineering test
NASA Astrophysics Data System (ADS)
Sheng, Meiping; Wang, Ting; Wang, Minqing; Wang, Xiao; Zhao, Xuan
2018-06-01
The mass of a spring is usually neglected in theoretical and simulation analyses, but it can be significant in practical engineering. This paper is concerned with the distributive mass of a steel spring used as an isolator to study the isolation performance of a water pipe in a heating system. A theoretical derivation of the effect of the spring's distributive mass on vibration is presented, and multiple eigenfrequencies are obtained, which shows that the distributive mass results in extra modes and complex impedance properties. Furthermore, numerical simulation visually shows several anti-resonances of the steel spring in the impedance and power flow curves. When anti-resonances emerge, the spring stores a large amount of energy, which may cause damage and unexpected consequences in practical engineering and needs to be avoided. Finally, experimental tests are conducted, and the results are consistent with those of the simulation of the spring with distributive mass.
Igneous processes and closed-system evolution of the Tharsis region of Mars
NASA Astrophysics Data System (ADS)
Finnerty, A. A.; Phillips, R. J.; Banerdt, W. B.
1988-09-01
A quantitative petrologic model for the evolution of the Tharsis region on Mars is presented, which is consistent with global gravity and topography data. It is demonstrated that it is possible to form and support the topographic relief of the Tharsis plateau by a closed-system, mass-conservative, nearly isostatic process involving generation of magmas from a mantle source region. Extrusion and/or intrusion (or underplating) of such magmas allows low-pressure solidification, with a consequent increase in volume relative to that which would be possible in the high-pressure source region, leading to elevated topography. The distribution of densities with depth obtained by the model is quantitatively consistent with the isostatic models of Sleep and Phillips (1979, 1985).
NASA Astrophysics Data System (ADS)
Yuan, Jindou; Xu, Jinliang; Wang, Yaodong
2018-03-01
Energy saving and emission reduction have become targets for modern society due to the potential energy crisis and the threat of climate change. A distributed hybrid renewable energy system (HRES), consisting of photovoltaic (PV) arrays, a wood-syngas combined heat and power (CHP) generator and back-up batteries, is designed to power a typical semi-detached rural house in China, with the aim of meeting the energy demand of the house and reducing greenhouse gas emissions from the use of fossil fuels. Based on the annual load information of the house and the local meteorological data, including solar radiation and air temperature, a system model is set up using the HOMER software and is used to simulate all practical configurations and to carry out technical and economic evaluations. The performance of the whole HRES and of each component under different configurations is evaluated, and the optimized configuration of the system is found.
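The configuration search described above is carried out with HOMER; the toy loop below only illustrates the hourly energy-balance bookkeeping such a simulation performs (PV plus CHP generation, battery charge/discharge, unmet load). The profiles, battery size and greedy dispatch rule are made-up assumptions, not the paper's model.

```python
def simulate_day(pv_kw, chp_kw, load_kw, batt_capacity_kwh, batt_kwh=0.0):
    """Greedy hourly energy balance: surplus charges the battery,
    deficits discharge it, and anything left over is counted as unmet load."""
    unmet = 0.0
    for pv, chp, load in zip(pv_kw, chp_kw, load_kw):
        net = pv + chp - load                      # kWh over a 1-hour step
        if net >= 0:
            batt_kwh = min(batt_capacity_kwh, batt_kwh + net)
        else:
            discharge = min(batt_kwh, -net)
            batt_kwh -= discharge
            unmet += (-net) - discharge
    return unmet, batt_kwh

if __name__ == "__main__":
    # Hypothetical 24-hour profiles (kW averaged over each hour).
    pv   = [0]*6 + [0.5, 1.5, 2.5, 3.0, 3.2, 3.0, 2.5, 1.5, 0.5] + [0]*9
    chp  = [1.0]*24
    load = [0.8]*7 + [2.0, 1.5, 1.2, 1.2, 1.5, 1.5, 1.2, 1.2, 1.5,
                      2.5, 3.0, 2.5, 2.0, 1.5, 1.2, 1.0, 0.9]
    print(simulate_day(pv, chp, load, batt_capacity_kwh=10.0))
```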
A compact x-ray system for two-phase flow measurement
NASA Astrophysics Data System (ADS)
Song, Kyle; Liu, Yang
2018-02-01
In this paper, a compact x-ray densitometry system consisting of a 50 kV, 1 mA x-ray tube and several linear detector arrays is developed for two-phase flow measurement. The system is capable of measuring void fraction and velocity distributions with a spatial resolution of 0.4 mm per pixel and a frequency of 1000 Hz. A novel measurement model has been established for the system which takes account of the energy spectrum of x-ray photons and the beam hardening effect. An improved measurement accuracy has been achieved with this model compared with the conventional log model that has been widely used in the literature. Using this system, void fraction and velocity distributions are measured for a bubbly and a slug flow in a 25.4 mm I.D. air-water two-phase flow test loop. The measured superficial gas velocities show an error within ±4% when compared with the gas flowmeter for both conditions.
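For context, the conventional log model referred to above can be written down directly from the single-energy Beer-Lambert law; the sketch below shows it with made-up intensities (the paper's measurement model additionally accounts for the x-ray energy spectrum and beam hardening).

```python
import numpy as np

def void_fraction_log_model(I, I_liquid, I_gas):
    """Chordal void fraction from the single-energy Beer-Lambert ('log') model:
    alpha = ln(I / I_liquid) / ln(I_gas / I_liquid),
    where I, I_liquid and I_gas are the transmitted intensities for the
    two-phase, all-liquid and all-gas cases on the same chord."""
    I = np.asarray(I, dtype=float)
    return np.log(I / I_liquid) / np.log(I_gas / I_liquid)

# Example with made-up detector counts for one chord.
print(void_fraction_log_model(I=1800.0, I_liquid=1000.0, I_gas=3000.0))  # ~0.54
```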
A Corrosion Risk Assessment Model for Underground Piping
NASA Technical Reports Server (NTRS)
Datta, Koushik; Fraser, Douglas R.
2009-01-01
The Pressure Systems Manager at NASA Ames Research Center (ARC) has embarked on a project to collect data and develop risk assessment models to support risk-informed decision making regarding future inspections of underground pipes at ARC. This paper shows progress in one area of this project: a corrosion risk assessment model for the underground high-pressure air distribution piping system at ARC. It consists of a Corrosion Model for pipe segments, a Pipe Wrap Protection Model, and a Pipe Stress Model for a pipe segment. A Monte Carlo simulation of the combined models provides a distribution of the failure probabilities. Sensitivity study results show that the model uncertainty, or lack of knowledge, is the dominant contributor to the calculated unreliability of the underground piping system. As a result, the Pressure Systems Manager may consider investing resources specifically focused on reducing these uncertainties. Future work includes completing the data collection effort for the existing ground-based pressure systems and applying the risk models to risk-based inspection strategies of the underground pipes at ARC.
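A minimal sketch of the kind of Monte Carlo combination described above is given below; the distributions, parameter values and failure criterion are hypothetical placeholders rather than the ARC model.

```python
import random

def corrosion_failure_probability(n_samples=100_000, years=20, seed=1):
    """Toy Monte Carlo: sample a corrosion rate, a wrap-protection factor and
    an initial wall thickness, then count cases where the remaining wall falls
    below the thickness required by a simple stress criterion."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        rate_mm_per_yr = rng.lognormvariate(mu=-2.0, sigma=0.8)  # hypothetical
        wrap_factor = rng.uniform(0.2, 1.0)   # 1.0 = no protection (hypothetical)
        wall_mm = rng.gauss(6.0, 0.3)         # initial wall thickness (hypothetical)
        required_mm = rng.gauss(3.0, 0.2)     # minimum wall for the stress model (hypothetical)
        remaining = wall_mm - rate_mm_per_yr * wrap_factor * years
        if remaining < required_mm:
            failures += 1
    return failures / n_samples

if __name__ == "__main__":
    print(corrosion_failure_probability())
```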
Process of producing liquid hydrocarbon fuels from biomass
Kuester, J.L.
1987-07-07
A continuous thermochemical indirect liquefaction process is described to convert various biomass materials into diesel-type transportation fuels which fuels are compatible with current engine designs and distribution systems comprising feeding said biomass into a circulating solid fluidized bed gasification system to produce a synthesis gas containing olefins, hydrogen and carbon monoxide and thereafter introducing the synthesis gas into a catalytic liquefaction system to convert the synthesis gas into liquid hydrocarbon fuel consisting essentially of C7-C17 paraffinic hydrocarbons having cetane indices of 50+. 1 fig.
The Mass Distribution of Stellar-mass Black Holes
NASA Astrophysics Data System (ADS)
Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky
2011-11-01
We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically (as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution) and non-parametrically (as histograms with varying numbers of bins). We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M1% > 4.3 Msun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M1% > 4.5 Msun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.
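As a purely illustrative companion to the quantile analysis above, the snippet below computes the 1% quantile of a power-law mass distribution truncated to [m_min, m_max] by inverting its CDF; the exponent and bounds are placeholders, and this is not the authors' Bayesian pipeline.

```python
def power_law_quantile(q, alpha, m_min, m_max):
    """Quantile of p(m) proportional to m**(-alpha) on [m_min, m_max], alpha != 1:
    F(m) = (m**(1-alpha) - m_min**(1-alpha)) / (m_max**(1-alpha) - m_min**(1-alpha))."""
    a, b = m_min ** (1.0 - alpha), m_max ** (1.0 - alpha)
    return (a + q * (b - a)) ** (1.0 / (1.0 - alpha))

# Hypothetical exponent and mass bounds (solar masses).
print(power_law_quantile(q=0.01, alpha=6.0, m_min=4.5, m_max=40.0))
```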
The Structure of the Distant Kuiper Belt in a Nice Model Scenario
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pike, R. E.; Shankman, C. J.; Kavelaars, J. J.
2017-03-01
This work explores the orbital distribution of minor bodies in the outer Solar System emplaced as a result of a Nice model migration from the simulations of Brasser and Morbidelli. This planetary migration scatters a planetesimal disk from between 29 and 34 au and emplaces a population of objects into the Kuiper Belt region. From the 2:1 Neptune resonance and outward, the test particles analyzed populate the outer resonances with orbital distributions consistent with trans-Neptunian object (TNO) detections in semimajor axis, inclination, and eccentricity, while capture into the closest resonances is too efficient. The relative populations of the simulated scattering objects and resonant objects in the 3:1 and 4:1 resonances are also consistent with observed populations based on debiased TNO surveys, but the 5:1 resonance is severely underpopulated compared to population estimates from survey results. Scattering emplacement results in the expected orbital distribution for the majority of the TNO populations; however, the origin of the large observed population in the 5:1 resonance remains unexplained.
Mullen, Douglas G; Banaszak Holl, Mark M
2011-11-15
Nanoparticles conjugated with functional ligands are expected to have a major impact in medicine, photonics, sensing, and nanoarchitecture design. One major obstacle to realizing the promise of these materials, however, is the difficulty in controlling the ligand/nanoparticle ratio. This obstacle can be segmented into three key areas: First, many designs of these systems have failed to account for the true heterogeneity of ligand/nanoparticle ratios that compose each material. Second, studies in the field often use the mean ligand/nanoparticle ratio as the accepted level of characterization of these materials. This measure is insufficient because it does not provide information about the distribution of ligand/nanoparticle species within a sample or the number and relative amount of the different species that compose a material. Without these data, researchers do not have an accurate definition of material composition necessary both to understand the material-property relationships and to monitor the consistency of the material. Third, some synthetic approaches now in use may not produce consistent materials because of their sensitivity to reaction kinetics and to the synthetic history of the nanoparticle. In this Account, we describe recent advances that we have made in understanding the material composition of ligand-nanoparticle systems. Our work has been enabled by a model system using poly(amidoamine) dendrimers and two small molecule ligands. Using reverse phase high-pressure liquid chromatography (HPLC), we have successfully resolved and quantified the relative amounts and ratios of each ligand/dendrimer combination. This type of information is rare within the field of ligand-nanoparticle materials because most analytical techniques have been unable to identify the components in the distribution. Our experimental data indicate that the actual distribution of ligand-nanoparticle components is much more heterogeneous than is commonly assumed. The mean ligand/nanoparticle ratio that is typically the only information known about a material is insufficient because the mean does not provide information on the diversity of components in the material and often does not describe the most common component (the mode). Additionally, our experimental data have provided examples of material batches with the same mean ligand/nanoparticle ratio and very different distributions. This discrepancy indicates that the mean cannot be used as the sole metric to assess the reproducibility of a system. We further found that distribution profiles can be highly sensitive to the synthetic history of the starting material as well as slight changes in reaction conditions. We have incorporated the lessons from our experimental data into the design of new ligand-nanoparticle systems to provide improved control over these ratios.
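To illustrate why the mean ligand/nanoparticle ratio alone is a weak descriptor, the toy calculation below looks at a Poisson distribution of ligands per particle, the statistics often implicitly assumed for stochastic conjugation; even this idealized case puts only about half of the particles within one ligand of the mean, and the HPLC-resolved distributions discussed above were found to be broader still.

```python
import math

def poisson_pmf(k, mean):
    """Probability of exactly k ligands per particle under a Poisson model."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

mean_ratio = 5.0                      # hypothetical mean ligand/particle ratio
pmf = [poisson_pmf(k, mean_ratio) for k in range(16)]
mode = max(range(16), key=lambda k: pmf[k])
near_mean = sum(pmf[k] for k in range(16) if abs(k - mean_ratio) <= 1)

print(f"mode = {mode}, P(|k - mean| <= 1) = {near_mean:.2f}")
# Roughly half of the particles carry a ligand number within +/-1 of the mean,
# even under this idealized (and comparatively narrow) model.
```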
The structure of the distant Kuiper belt in a Nice model scenario
NASA Astrophysics Data System (ADS)
Pike, Rosemary E.; Lawler, Samantha; Brasser, Ramon; Shankman, Cory; Alexandersen, Mike; Kavelaars, J. J.
2016-10-01
By utilizing a well-sampled migration model and characterized survey detections, we demonstrate that the Nice-model scenario results in consistent populations of scattering trans-Neptunian objects (TNOs) and several resonant TNO populations, but fails to reproduce the large population of 5:1 resonators discovered in surveys. We examine in detail the TNO populations implanted by the Nice model simulation from Brasser and Morbidelli (2013, B&M). This analysis focuses on the region from 25-155 AU, probing the classical, scattering, detached, and major resonant populations. Additional integrations were necessary to classify the test particles and determine population sizes and characteristics. The classified simulation objects are compared to the real TNOs from the Canada-France Ecliptic Plane Survey (CFEPS), CFEPS high latitude fields, and the Alexandersen (2016) survey. These surveys all include a detailed characterization of survey depth, pointing, and tracking efficiency, which allows detailed testing of this independently produced model of TNO populations. In the B&M model, the regions of the outer Solar System populated via capture of scattering objects are consistent with survey constraints. The scattering TNOs and most n:1 resonant populations have consistent orbital distributions and population sizes with the real detections, as well as a starting disk mass consistent with expectations. The B&M 5:1 resonators have a consistent orbital distribution with the real detections and previous models. However, the B&M 5:1 Neptune resonance is underpopulated by a factor of ~100 and would require a starting proto-planetesimal disk with a mass of ~100 Earth masses. The large population in the 5:1 Neptune resonance is unexplained by scattering capture in a Nice-model scenario, however this model accurately produces the TNO subpopulations that result from scattering object capture and provides additional insight into sub-population orbital distributions.
de Waal, C; Rodger, J G; Anderson, B; Ellis, A G
2014-05-01
Dispersal and breeding system traits are thought to affect colonization success. As species have attained their present distribution ranges through colonization, these traits may vary geographically. Although several theories predict associations between dispersal ability, selfing ability and the relative position of a population within its geographic range, there is little theoretical or empirical consensus on exactly how these three variables are related. We investigated relationships between dispersal ability, selfing ability and range position across 28 populations of 13 annual, wind-dispersed Asteraceae species from the Namaqualand region of South Africa. Controlling for phylogeny, relative dispersal ability--assessed from vertical fall time of fruits--was positively related to an index of autofertility--determined from hand-pollination experiments. These findings support the existence of two discrete syndromes: high selfing ability associated with good dispersal and obligate outcrossing associated with lower dispersal ability. This is consistent with the hypothesis that selection for colonization success drives the evolution of an association between these traits. However, no general effect of range position on dispersal or breeding system traits was evident. This suggests selection on both breeding system and dispersal traits acts consistently across distribution ranges. © 2014 The Authors. Journal of Evolutionary Biology © 2014 European Society For Evolutionary Biology.
A multiprocessing architecture for real-time monitoring
NASA Technical Reports Server (NTRS)
Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.
1988-01-01
A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California at the Space Telescope Test Control Center it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland at NASA/Goddard it is being used on an experimental basis in flight operations for health and safety monitoring.
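The three-process structure described above can be illustrated with a small present-day sketch: separate processes for data management, inference and operator I/O connected by message queues. This is a simplified linear-pipeline analogue under assumed message formats, not the original system, in which the Data Management Process routed data to both of the other processes.

```python
import multiprocessing as mp

def data_management(raw_q, inf_q):
    """Acquire and compress sensor data, then route it to the inference process."""
    while (sample := raw_q.get()) is not None:
        inf_q.put({"id": sample["id"], "value": round(sample["value"], 1)})
    inf_q.put(None)

def inference(inf_q, io_q):
    """Trivial rule-based health check standing in for the inference engine."""
    while (msg := inf_q.get()) is not None:
        msg["status"] = "ALARM" if msg["value"] > 100 else "OK"
        io_q.put(msg)
    io_q.put(None)

def io_display(io_q):
    """Console stand-in for the operator display."""
    while (msg := io_q.get()) is not None:
        print(msg)

if __name__ == "__main__":
    raw_q, inf_q, io_q = mp.Queue(), mp.Queue(), mp.Queue()
    procs = [mp.Process(target=data_management, args=(raw_q, inf_q)),
             mp.Process(target=inference, args=(inf_q, io_q)),
             mp.Process(target=io_display, args=(io_q,))]
    for p in procs:
        p.start()
    for i, v in enumerate([42.0, 120.5, 7.3]):      # fake telemetry samples
        raw_q.put({"id": i, "value": v})
    raw_q.put(None)                                  # sentinel propagates down the pipeline
    for p in procs:
        p.join()
```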
A Distributed Data Acquisition System for the Sensor Network of the TAWARA_RTM Project
NASA Astrophysics Data System (ADS)
Fontana, Cristiano Lino; Donati, Massimiliano; Cester, Davide; Fanucci, Luca; Iovene, Alessandro; Swiderski, Lukasz; Moretto, Sandra; Moszynski, Marek; Olejnik, Anna; Ruiu, Alessio; Stevanato, Luca; Batsch, Tadeusz; Tintori, Carlo; Lunardon, Marcello
This paper describes a distributed Data Acquisition System (DAQ) developed for the TAWARA_RTM project (TAp WAter RAdioactivity Real Time Monitor). The aim is to detect the presence of radioactive contaminants in drinking water, in order to prevent deliberate or accidental threats. Employing a set of detectors, it is possible to detect alpha, beta and gamma radiation from emitters dissolved in water. The Sensor Network (SN) consists of several heterogeneous nodes controlled by a centralized server. The SN cyber-security is guaranteed in order to protect it from external intrusions and malicious acts. The nodes were installed in different locations along the water treatment process in the waterworks plant supplying the aqueduct of Warsaw, Poland. Embedded computers control the simpler nodes and are directly connected to the SN. Local PCs (LPCs) control the more complex nodes, which consist of signal digitizers acquiring data from several detectors. The DAQ on each LPC is split into several processes communicating through sockets in a local sub-network. Each process is dedicated to a very simple task (e.g. data acquisition, data analysis, hydraulics management) in order to have a flexible and fault-tolerant system. The main SN and the local DAQ networks are separated by data routers to ensure cyber-security.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Lin, H; Chuang, K
2016-06-15
Purpose: To monitor the activity distribution and needle position during and after implantation in operating rooms. Methods: Simulation studies were conducted to assess the feasibility of measuring the activity distribution and localizing seeds using the DuPECT system. The system consists of a LaBr3-based probe and planar detection heads, a collimation system, and a coincidence circuit. The two heads can be manipulated independently. Simplified Yb-169 brachytherapy seeds were used. A water-filled cylindrical phantom with a 40-mm diameter and 40-mm length was used to model a simplified prostate of an Asian man. Two simplified seeds were placed at a radial distance of 10 mm and a tangential distance of 10 mm from the center of the phantom. The probe head was arranged perpendicular to the planar head. Results of various imaging durations were analyzed, and the accuracy of the seed localization was assessed by calculating the centroid of the seed. Results: The reconstructed images indicate that the DuPECT system can measure the activity distribution and locate seeds dwelling in different positions intraoperatively. The calculated centroid was on average accurate to within the pixel size of 0.5 mm. The two sources were identified when the duration was longer than 15 s. The sensitivity measured in water was merely 0.07 cps/MBq. Conclusion: Preliminary results show that measurement of the activity distribution and seed localization are feasible using the DuPECT system intraoperatively. This indicates that the DuPECT system has the potential to be an approach for dose-distribution validation. The efficacy of activity distribution measurement and source localization using the DuPECT system will be evaluated in more realistic phantom studies (e.g., various attenuation materials and a greater number of seeds) in future investigations.
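The centroid-based localization reported above reduces to an intensity-weighted average of pixel positions; the sketch below applies it to a made-up reconstructed image and is independent of the actual DuPECT geometry and pixel-size convention.

```python
import numpy as np

def intensity_centroid(image, pixel_size_mm=0.5):
    """Centroid (row, column) of a reconstructed activity image, in mm."""
    image = np.asarray(image, dtype=float)
    rows, cols = np.indices(image.shape)
    total = image.sum()
    return (float((rows * image).sum() / total * pixel_size_mm),
            float((cols * image).sum() / total * pixel_size_mm))

# Made-up 5x5 reconstructed image with a bright seed near one corner.
img = np.zeros((5, 5))
img[1, 3] = 10.0
img[1, 2] = 2.0
print(intensity_centroid(img))
```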
Considerations in the design of a communication network for an autonomously managed power system
NASA Technical Reports Server (NTRS)
Mckee, J. W.; Whitehead, Norma; Lollar, Louis
1989-01-01
The considerations involved in designing a communication network for an autonomously managed power system intended for use in space vehicles are examined. An overview of the design and implementation of a communication network implemented in a breadboard power system is presented. An assumption that the monitoring and control devices are distributed but physically close leads to the selection of a multidrop cable communication system. The assumption of a high-quality communication cable in which few messages are lost resulted in a simple recovery procedure consisting of a timeout-and-retransmit process.
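The recovery procedure mentioned above amounts to a bounded retry loop; the sketch below shows its shape with a stand-in send/acknowledge function rather than an actual multidrop-bus driver.

```python
import time

def send_with_retry(send_fn, message, timeout_s=0.05, max_retries=3):
    """Send a message, wait for an acknowledgement, and retransmit on timeout.
    send_fn(message, timeout_s) is a stand-in that returns True on ACK."""
    for attempt in range(1, max_retries + 1):
        if send_fn(message, timeout_s):
            return attempt                      # number of attempts used
        time.sleep(timeout_s)                   # back off before retransmitting
    raise RuntimeError("message lost after %d attempts" % max_retries)

# Toy transport that drops the first transmission of every message.
_seen = set()
def flaky_send(message, timeout_s):
    if message in _seen:
        return True
    _seen.add(message)
    return False

print(send_with_retry(flaky_send, "SWITCH_7_OPEN"))   # -> 2
```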
State-of-the-art cockpit design for the HH-65A helicopters
NASA Technical Reports Server (NTRS)
Castleberry, D. E.; Mcelreath, M. Y.
1982-01-01
In the design of a HH-65A helicopter cockpit, advanced integrated electronics systems technology was employed to achieve several important goals for this multimission aircraft. They were: (1) integrated systems operation with consistent and simplified cockpit procedures; (2) mission-task-related cockpit displays and controls, and (3) reduced pilot instrument scan effort with excellent outside visibility. The integrated avionics system was implemented to depend heavily upon distributed but complementary processing, multiplex digital bus technology, and multifunction CRT controls and displays. This avionics system was completely flight tested and will soon enter operational service with the Coast Guard.
Minsley, B.J.
2011-01-01
A meaningful interpretation of geophysical measurements requires an assessment of the space of models that are consistent with the data, rather than just a single, 'best' model which does not convey information about parameter uncertainty. For this purpose, a trans-dimensional Bayesian Markov chain Monte Carlo (MCMC) algorithm is developed for assessing frequency-domain electromagnetic (FDEM) data acquired from airborne or ground-based systems. By sampling the distribution of models that are consistent with measured data and any prior knowledge, valuable inferences can be made about parameter values such as the likely depth to an interface, the distribution of possible resistivity values as a function of depth and non-unique relationships between parameters. The trans-dimensional aspect of the algorithm allows the number of layers to be a free parameter that is controlled by the data, where models with fewer layers are inherently favoured, which provides a natural measure of parsimony and a significant degree of flexibility in parametrization. The MCMC algorithm is used with synthetic examples to illustrate how the distribution of acceptable models is affected by the choice of prior information, the system geometry and configuration and the uncertainty in the measured system elevation. An airborne FDEM data set that was acquired for the purpose of hydrogeological characterization is also studied. The results compare favourably with traditional least-squares analysis, borehole resistivity and lithology logs from the site, and also provide new information about parameter uncertainty necessary for model assessment. © 2011 Geophysical Journal International © 2011 RAS.
Study of Airflow Out of the Mouth During Speech.
ERIC Educational Resources Information Center
Catford, J.C.; And Others
Airflow outside the mouth is diagnostic of articulatory activities in the vocal tract, both total volume-velocity and the distribution of particle velocities over the flow-front being useful for this purpose. A system for recording and displaying both these types of information is described. This consists of a matrix of 16 hot-wire anemometer flow…
Checkpointing and Recovery in Distributed and Database Systems
ERIC Educational Resources Information Center
Wu, Jiang
2011-01-01
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…
Description of the SSF PMAD DC testbed control system data acquisition function
NASA Technical Reports Server (NTRS)
Baez, Anastacio N.; Mackin, Michael; Wright, Theodore
1992-01-01
The NASA LeRC in Cleveland, Ohio has completed the development and integration of a Power Management and Distribution (PMAD) DC Testbed. This testbed is a reduced scale representation of the end to end, sources to loads, Space Station Freedom Electrical Power System (SSF EPS). This unique facility is being used to demonstrate DC power generation and distribution, power management and control, and system operation techniques considered to be prime candidates for the Space Station Freedom. A key capability of the testbed is its ability to be configured to address system level issues in support of critical SSF program design milestones. Electrical power system control and operation issues like source control, source regulation, system fault protection, end-to-end system stability, health monitoring, resource allocation, and resource management are being evaluated in the testbed. The SSF EPS control functional allocation between on-board computers and ground based systems is evolving. Initially, ground based systems will perform the bulk of power system control and operation. The EPS control system is required to continuously monitor and determine the current state of the power system. The DC Testbed Control System consists of standard controllers arranged in a hierarchical and distributed architecture. These controllers provide all the monitoring and control functions for the DC Testbed Electrical Power System. Higher level controllers include the Power Management Controller, Load Management Controller, Operator Interface System, and a network of computer systems that perform some of the SSF Ground based Control Center Operation. The lower level controllers include Main Bus Switch Controllers and Photovoltaic Controllers. Power system status information is periodically provided to the higher level controllers to perform system control and operation. The data acquisition function of the control system is distributed among the various levels of the hierarchy. Data requirements are dictated by the control system algorithms being implemented at each level. A functional description of the various levels of the testbed control system architecture, the data acquisition function, and the status of its implementation is presented.
Distributed media server for the support of multimedia teaching
NASA Astrophysics Data System (ADS)
Liepert, Michael; Griwodz, Carsten; On, Giwon; Zink, Michael; Steinmetz, Ralf
1999-11-01
One major problem of using multimedia material in lecturing is the trade-off between actuality of the content and quality of the presentations. A frequent need for content refreshment exists, but high quality presentations cannot be authored by the individual teacher alone at the required rate. Several past and current projects have had the goal of developing so-called learning archives, a variation of digital libraries. On demand, these deliver material with limited structure to students. For lecturing, these systems provide a service just as insufficient as the unreliable WWW. Based on our system HyNoDe [HYN97] we address these issues in our distributed media server built of 'medianodes.' We add content management that addresses teachers' needs and provide guaranteed service for connected as well as disconnected operation of their presentation systems. Medianode aims at a scenario for non-real-time, shared creation and modification of presentations and presentation elements. It provides user authentication, administrative roles and authorization mechanisms. It requires an understanding of consistency, versioning and alternative content tailored to lecturing. To allow for predictable presentation quality, medianode provides application level QoS supporting alternative media and alternative presentations. Viable presentation tracks are dynamically generated based on user requests, user profiles and hardware profiles. For machines that are removed from the system according to a schedule, the system guarantees availability of consistent, complete tracks of selected presentations at disconnect time. In this paper we present the scope of the medianode project and afterwards its architecture, following the realization steps.
Xu, Lijun; Liu, Chang; Jing, Wenyang; Cao, Zhang; Xue, Xin; Lin, Yuzhen
2016-01-01
To monitor two-dimensional (2D) distributions of temperature and H2O mole fraction, an on-line tomography system based on tunable diode laser absorption spectroscopy (TDLAS) was developed. To the best of the authors' knowledge, this is the first report on a multi-view TDLAS-based system for simultaneous tomographic visualization of temperature and H2O mole fraction in real time. The system consists of two distributed feedback (DFB) laser diodes, a tomographic sensor, electronic circuits, and a computer. The central frequencies of the two DFB laser diodes are at 7444.36 cm(-1) (1343.3 nm) and 7185.6 cm(-1) (1391.67 nm), respectively. The tomographic sensor is used to generate fan-beam illumination from five views and to produce 60 ray measurements. The electronic circuits not only provide stable temperature and precise current controlling signals for the laser diodes but also can accurately sample the transmitted laser intensities and extract integrated absorbances in real time. Finally, the integrated absorbances are transferred to the computer, in which the 2D distributions of temperature and H2O mole fraction are reconstructed by using a modified Landweber algorithm. In the experiments, the TDLAS-based tomography system was validated by using asymmetric premixed flames with fixed and time-varying equivalent ratios, respectively. The results demonstrate that the system is able to reconstruct the profiles of the 2D distributions of temperature and H2O mole fraction of the flame and effectively capture the dynamics of the combustion process, which exhibits good potential for flame monitoring and on-line combustion diagnosis.
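The modified Landweber reconstruction referred to above iterates x <- x + lambda * A^T (b - A x); a minimal dense-matrix sketch, using a non-negativity projection as one common modification and random placeholder data in place of the 60-ray sensor geometry, is:

```python
import numpy as np

def landweber(A, b, n_iters=200, relaxation=None, nonneg=True):
    """Landweber iteration for A x ~= b with an optional non-negativity
    projection (one common 'modification'; the paper's variant may differ)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    if relaxation is None:
        relaxation = 1.0 / np.linalg.norm(A, 2) ** 2   # below 2/sigma_max**2, so convergent
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x += relaxation * A.T @ (b - A @ x)
        if nonneg:
            np.maximum(x, 0.0, out=x)
    return x

# Placeholder system: 60 ray measurements of a 10x10 (=100 pixel) field.
# The problem is under-determined, so the result is a regularized approximation.
rng = np.random.default_rng(0)
A = rng.random((60, 100))
x_true = rng.random(100)
x_rec = landweber(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```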
Zhang, Qinnan; Zhong, Liyun; Tang, Ping; Yuan, Yingjie; Liu, Shengde; Tian, Jindong; Lu, Xiaoxu
2017-05-31
Cell refractive index, an intrinsic optical parameter, is closely correlated with intracellular mass and concentration. By combining optical phase-shifting interferometry (PSI) and atomic force microscope (AFM) imaging, we constructed a label-free, non-invasive and quantitative single-cell refractive index measurement system, in which the accurate phase map of a single cell is retrieved with the PSI technique and the cell morphology with nanoscale resolution is obtained with AFM imaging. Based on the proposed AFM/PSI system, we obtained quantitative refractive index distributions of a single red blood cell and a Jurkat cell, respectively. Further, the quantitative change of the refractive index distribution during Daunorubicin (DNR)-induced Jurkat cell apoptosis was presented, and the content changes of intracellular biochemical components were then obtained. Importantly, these results were consistent with Raman spectral analysis, indicating that the proposed PSI/AFM-based refractive index system is likely to become a useful tool for the analysis of intracellular biochemical components, which will facilitate its application in revealing cell structure and pathological state from a new perspective.
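The refractive-index retrieval described above combines the PSI phase map with the AFM height map through the usual relation delta_phi = 2*pi*(n_cell - n_medium)*h/lambda; a minimal sketch with made-up arrays and an assumed immersion-medium index and wavelength is:

```python
import numpy as np

def refractive_index_map(phase_rad, height_m, wavelength_m=632.8e-9, n_medium=1.334):
    """Cell refractive index from a quantitative phase map (rad) and an AFM
    height map (m), using delta_phi = 2*pi*(n_cell - n_medium)*h / lambda.
    The wavelength and medium index are assumed values, not the paper's settings."""
    phase_rad = np.asarray(phase_rad, dtype=float)
    height_m = np.asarray(height_m, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        n_cell = n_medium + phase_rad * wavelength_m / (2.0 * np.pi * height_m)
    return np.where(height_m > 0, n_cell, np.nan)   # undefined where the cell is absent

# Made-up 2x2 example: a 2 um tall region with a 1.2 rad phase shift.
phase = np.array([[1.2, 0.0], [1.2, 0.0]])
height = np.array([[2e-6, 0.0], [2e-6, 0.0]])
print(refractive_index_map(phase, height))
```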
Development of a scintillating G-GEM detector for a 6-MeV X-band Linac for medical applications
NASA Astrophysics Data System (ADS)
Fujiwara, T.; Tanaka, S.; Mitsuya, Y.; Takahashi, H.; Tagi, K.; Kusano, J.; Tanabe, E.; Yamamoto, M.; Nakamura, N.; Dobashi, K.; Tomita, H.; Uesaka, M.
2013-12-01
We recently developed glass gas electron multipliers (G-GEMs) with an entirely new process using photo-etchable glass. The photo-etchable glass used for the substrate is called PEG3 (Hoya Corporation). Taking advantage of low outgassing material, we have envisioned a medical application of G-GEMs. A two-dimensional position-sensitive dosimetry system based on a scintillating gas detector is being developed for real-time dose distribution monitoring in X-ray radiation therapy. The dosimetry system consists of a chamber filled with an Ar/CF4 scintillating gas mixture, inside of which G-GEM structures are mounted. Photons produced by the excited Ar/CF4 gas molecules during the gas multiplication in the GEM holes are detected by a mirror-lens-CCD-camera system. We found that the intensity distribution of the measured light spot is proportional to the 2D dose distribution. In this work, we report on the first results from a scintillating G-GEM detector for a position-sensitive X-ray beam dosimeter.
Experimental study of the oscillation of spheres in an acoustic levitator.
Andrade, Marco A B; Pérez, Nicolás; Adamowski, Julio C
2014-10-01
The spontaneous oscillation of solid spheres in a single-axis acoustic levitator is experimentally investigated by using a high speed camera to record the position of the levitated sphere as a function of time. The oscillations in the axial and radial directions are systematically studied by changing the sphere density and the acoustic pressure amplitude. In order to interpret the experimental results, a simple model based on a spring-mass system is applied in the analysis of the sphere oscillatory behavior. This model requires the knowledge of the acoustic pressure distribution, which was obtained numerically by using a linear finite element method (FEM). Additionally, the linear acoustic pressure distribution obtained by FEM was compared with that measured with a laser Doppler vibrometer. The comparison between numerical and experimental pressure distributions shows good agreement for low values of pressure amplitude. When the pressure amplitude is increased, the acoustic pressure distribution becomes nonlinear, producing harmonics of the fundamental frequency. The experimental results of the spheres oscillations for low pressure amplitudes are consistent with the results predicted by the simple model based on a spring-mass system.
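The spring-mass picture used above treats the sphere as sitting at a stable zero of the axial radiation force, so the oscillation frequency follows from the local force gradient, k = -dF/dz and f = sqrt(k/m)/(2*pi). The sketch below applies this to an assumed standing-wave force profile F(z) = F0*sin(2*k_ac*z) with placeholder amplitude and sphere mass, not to the levitator's actual FEM-computed field.

```python
import numpy as np

def axial_frequency(z, force, mass_kg):
    """Oscillation frequency of a levitated sphere treated as a spring-mass
    system: k = -dF/dz at a stable equilibrium (F crosses zero with negative slope)."""
    dFdz = np.gradient(force, z)
    # Stable equilibria: the force goes from positive to non-positive as z increases.
    idx = np.where((force[:-1] > 0) & (force[1:] <= 0))[0]
    if idx.size == 0:
        raise ValueError("no stable equilibrium in the sampled range")
    k = -dFdz[idx[0]]
    return np.sqrt(k / mass_kg) / (2.0 * np.pi)

# Assumed standing-wave radiation force profile (gravity neglected);
# amplitude and sphere mass are placeholders.
k_ac = 2 * np.pi / 8.6e-3              # acoustic wavenumber for ~40 kHz in air
z = np.linspace(0, 8.6e-3, 2000)
force = 1e-4 * np.sin(2 * k_ac * z)    # N
print(axial_frequency(z, force, mass_kg=5e-6))   # Hz
```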
Advanced algorithms for distributed fusion
NASA Astrophysics Data System (ADS)
Gelfand, A.; Smith, C.; Colony, M.; Bowman, C.; Pei, R.; Huynh, T.; Brown, C.
2008-03-01
The US Military has been undergoing a radical transition from a traditional "platform-centric" force to one capable of performing in a "Network-Centric" environment. This transformation will place all of the data needed to efficiently meet tactical and strategic goals at the warfighter's fingertips. With access to this information, the challenge of fusing data from across the battlespace into an operational picture for real-time Situational Awareness emerges. In such an environment, centralized fusion approaches will have limited application due to the constraints of real-time communications networks and computational resources. To overcome these limitations, we are developing a formalized architecture for fusion and track adjudication that allows the distribution of fusion processes over a dynamically created and managed information network. This network will support the incorporation and utilization of low level tracking information within the Army Distributed Common Ground System (DCGS-A) or Future Combat System (FCS). The framework is based on Bowman's Dual Node Network (DNN) architecture that utilizes a distributed network of interlaced fusion and track adjudication nodes to build and maintain a globally consistent picture across all assets.
OPERATIONAL EXPERIENCE WITH BEAM ABORT SYSTEM FOR SUPERCONDUCTING UNDULATOR QUENCH MITIGATION*
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harkay, Katherine C.; Dooling, Jeffrey C.; Sajaev, Vadim
A beam abort system has been implemented in the Advanced Photon Source storage ring. The abort system works in tandem with the existing machine protection system (MPS), and its purpose is to control the beam loss location and, thereby, minimize beam loss-induced quenches at the two superconducting undulators (SCUs). The abort system consists of a dedicated horizontal kicker designed to kick out all the bunches in a few turns after being triggered by MPS. The abort system concept was developed on the basis of single- and multi-particle tracking simulations using elegant and bench measurements of the kicker pulse. Performance of the abort system (kick amplitudes and loss distributions of all bunches) was analyzed using beam position monitor (BPM) turn histories, and agrees reasonably well with the model. Beam loss locations indicated by the BPMs are consistent with the fast fiber-optic beam loss monitor (BLM) diagnostics described elsewhere [1,2]. Operational experience with the abort system, various issues that were encountered, limitations of the system, and quench statistics are described.
Slip distribution, strain accumulation and aseismic slip on the Chaman Fault system
NASA Astrophysics Data System (ADS)
Amelug, F.
2015-12-01
The Chaman fault system is a transcurrent fault system that developed due to the oblique convergence of the India and Eurasia plates along the western boundary of the India plate. To evaluate the contemporary rates of strain accumulation along and across the Chaman fault system, we use 2003-2011 Envisat SAR imagery and InSAR time-series methods to obtain a ground velocity field in the radar line-of-sight (LOS) direction. We correct the InSAR data for different sources of systematic bias, including phase unwrapping errors, local oscillator drift, topographic residuals and stratified tropospheric delay, and evaluate the uncertainty due to the residual delay using time series of MODIS observations of precipitable water vapor. The InSAR velocity field and modeling demonstrate the distribution of deformation across the Chaman fault system. In the central Chaman fault system, the InSAR velocity shows clear strain localization on the Chaman and Ghazaband faults, and modeling suggests a total slip rate of ~24 mm/yr distributed on the two faults with rates of 8 and 16 mm/yr, respectively, corresponding to ~80% of the total ~3 cm/yr plate motion between India and Eurasia at these latitudes and consistent with kinematic models that have predicted a slip rate of ~17-24 mm/yr for the Chaman fault. In the northern Chaman fault system (north of 30.5N), ~6 mm/yr of the relative plate motion is accommodated across the Chaman fault. North of 30.5N, where the topographic expression of the Ghazaband fault vanishes, its slip does not transfer to the Chaman fault but is rather distributed among different faults in the Kirthar range and Sulaiman lobe. Observed surface creep on the southern Chaman fault between Nushki and north of the city of Chaman indicates that the fault is partially locked, consistent with the recorded M<7 earthquakes on this segment in the last century. The Chaman fault between north of the city of Chaman and north of Kabul does not show an increase in the rate of strain accumulation; however, the lack of seismicity on this segment presents a significant hazard to Kabul. The high rate of strain accumulation on the Ghazaband fault and the lack of evidence for rupture of the fault during the 1935 Quetta earthquake present a growing earthquake hazard to Balochistan and populated areas such as the city of Quetta.
NASA Technical Reports Server (NTRS)
Khazanov, G. V.; Gallagher, D. L.; Gamayunov, K.
2007-01-01
It is well known that the effects of EMIC waves on RC ion and RB electron dynamics strongly depend on such particle/wave characteristics as the phase-space distribution function, frequency, wave-normal angle, wave energy, and the form of the wave spectral energy density. Therefore, realistic characteristics of EMIC waves should be properly determined by modeling the RC-EMIC wave evolution self-consistently. Such a self-consistent model has been progressively developed by Khazanov et al. [2002-2006]. It solves a system of two coupled kinetic equations: one equation describes the RC ion dynamics and the other describes the energy density evolution of EMIC waves. Using this model, we present the effectiveness of relativistic electron scattering and compare our results with previous work in this area of research.
Blatt, G J; Fitzgerald, C M; Guptill, J T; Booker, A B; Kemper, T L; Bauman, M L
2001-12-01
Neuropathological studies in autistic brains have shown small neuronal size and increased cell packing density in a variety of limbic system structures including the hippocampus, a change consistent with curtailment of normal development. Based on these observations in the hippocampus, a series of quantitative receptor autoradiographic studies were undertaken to determine the density and distribution of eight types of neurotransmitter receptors from four neurotransmitter systems (GABAergic, serotoninergic [5-HT], cholinergic, and glutamatergic). Data from these single-concentration ligand binding studies indicate that the GABAergic receptor system (3[H]-flunitrazepam-labeled benzodiazepine binding sites and 3[H]-muscimol-labeled GABA(A) receptors) is significantly reduced in high-binding regions, marking for the first time an abnormality in the GABA system in autism. In contrast, the density and distribution of the other six receptors studied (3[H]-8-OH-DPAT-labeled 5-HT1A receptors, 3[H]-ketanserin-labeled 5-HT2 receptors, 3[H]-pirenzepine-labeled M1 receptors, 3[H]-hemicholinium-labeled high-affinity choline uptake sites, 3[H]-MK801-labeled NMDA receptors, and 3[H]-kainate-labeled kainate receptors) in the hippocampus did not demonstrate any statistically significant differences in binding.
Aerial cooperative transporting and assembling control using multiple quadrotor-manipulator systems
NASA Astrophysics Data System (ADS)
Qi, Yuhua; Wang, Jianan; Shan, Jiayuan
2018-02-01
In this paper, a fully distributed control scheme for aerial cooperative transporting and assembling is proposed using multiple quadrotor-manipulator systems, with each quadrotor equipped with a robotic manipulator. First, the kinematic and dynamic models of a quadrotor with a multi-Degree-of-Freedom (DOF) robotic manipulator are established together using Euler-Lagrange equations. Based on the aggregated dynamic model, a control scheme consisting of a position controller, an attitude controller and a manipulator controller is presented. For cooperative transporting and assembling, the multiple quadrotor-manipulator systems should be able to form a desired formation from any initial positions without collisions among quadrotors. The desired formation is achieved by the distributed position and attitude controllers, while collision avoidance is guaranteed by an artificial potential function method. The transporting and assembling tasks then require the manipulators to reach the desired angles cooperatively, which is achieved by the distributed manipulator controller. The overall stability of the closed-loop system is proven by a Lyapunov method and Matrosov's theorem. Finally, the proposed control scheme is simplified for real application and then validated by two formation flying missions of four quadrotors with 2-DOF manipulators.
Development of a Bio-nanobattery for Distributed Power Storage Systems
NASA Technical Reports Server (NTRS)
King, Glen C.; Choi, Sang H.; Chu, Sang-Hyon; Kim, Jae-Woo; Park, Yeonjoon; Lillehei, Peter; Watt, Gerald D.; Davis, Robert; Harb, John N.
2004-01-01
Currently available power storage systems, such as those used to supply power to microelectronic devices, typically consist of a single centralized canister and a series of wires to supply electrical power to where it is needed in a circuit. As the size of electrical circuits and components becomes smaller, there is a need for a distributed power system to reduce Joule heating and wiring, and to allow autonomous operation of the various functions performed by the circuit. Our research is being conducted to develop a bio-nanobattery using ferritins reconstituted with both an iron core (Fe-ferritin) and a cobalt core (Co-ferritin). Both Co-ferritin and Fe-ferritin were synthesized and characterized as candidates for the bio-nanobattery. The reducing capability was determined as well as the half-cell electrical potentials, indicating an electrical output of nearly 0.5 V for the battery cell. Ferritins having other metallic cores are also being investigated in order to increase the overall electrical output. Two-dimensional ferritin arrays were also produced on various substrates, demonstrating the necessary building blocks for the bio-nanobattery. The bio-nanobattery will play a key role in moving to a distributed power storage system for electronic applications.
Rigorous Results for the Distribution of Money on Connected Graphs
NASA Astrophysics Data System (ADS)
Lanchier, Nicolas; Reed, Stephanie
2018-05-01
This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
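As a rough illustration of the local (graph-based) variant of the first model, the sketch below simulates uniform reshuffling on a ring graph in plain Python; the parameter choices and helper names are made up, and the code is only meant to show the type of pairwise transaction the paper analyzes.

```python
import random

# Sketch of the *local* uniform reshuffling model: agents sit on the vertices
# of a connected graph (here a simple ring, for concreteness) and a randomly
# chosen pair of neighbours repeatedly splits its combined money uniformly at
# random. Parameter values are arbitrary.
def uniform_reshuffling(edges, n_agents, money_per_agent=10.0,
                        steps=200_000, seed=0):
    rng = random.Random(seed)
    wealth = [float(money_per_agent)] * n_agents
    for _ in range(steps):
        i, j = rng.choice(edges)
        total = wealth[i] + wealth[j]
        share = rng.random() * total          # uniform split of the pair total
        wealth[i], wealth[j] = share, total - share
    return wealth

n = 100
ring_edges = [(i, (i + 1) % n) for i in range(n)]
final = uniform_reshuffling(ring_edges, n)
# For large n and long runs the wealth histogram should look roughly
# exponential, in line with the limiting distribution discussed above.
print(f"min {min(final):.2f}  mean {sum(final)/n:.2f}  max {max(final):.2f}")
```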
On Board Data Acquisition System with Intelligent Transducers for Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Rochala, Zdzisław
2012-02-01
This report presents conclusions from research project no. ON50900363, conducted at the Mechatronics Department, Military University of Technology, in the years 2007-2010. Because the main objective of the study was to develop the concept and implementation of an avionics data acquisition system for in-flight research on mini-class unmanned aerial vehicles, this article presents the design of the avionics system and describes the hardware of a distributed measurement system for data acquisition built from intelligent transducers. The data collected during a flight controlled by an operator confirmed proper operation of the individual components of the data acquisition system.
Study on the Transient Process of 500kV Substations Secondary Equipment
NASA Astrophysics Data System (ADS)
Li, Hongbo; Li, Pei; Zhang, Yanyan; Niu, Lin; Gao, Nannan; Si, Tailong; Guo, Jiadong; Xu, Min-min; Li, Guofeng; Guo, Liangfeng
2017-05-01
By analyzing the causes of lightning accidents occurring in substations, the ways in which incoming lightning surges invade the secondary system are summarized. The interference source acts on the secondary system through various coupling paths, mainly of four kinds: conductive coupling, capacitive coupling, inductive coupling, and radiated interference. These paths are then simulated with the ATP program. Finally, lightning protection measures are put forward from three aspects: the low-voltage power supply system, the potential distribution of the grounding grid under lightning impact, and the secondary and computer systems.
Multi-Head Very High Power Strobe System For Motion Picture Special Effects
NASA Astrophysics Data System (ADS)
Lovoi, P. A.; Fink, Michael L.
1983-10-01
A very large camera synchronizable strobe system has been developed for motion picture special effects. This system, the largest ever built, was delivered to MGM/UA to be used in the movie "War Games". The system consists of 12 individual strobe heads and a power supply distribution system. Each strobe head operates independently and may be flashed up to 24 times per second under computer control. An energy of 480 Joules per flash is used in six strobe heads and 240 Joules per flash in the remaining six strobe heads. The beam pattern is rectangular with a FWHM of 60° x 48°.
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-08-01
The paper considers gossip distributed estimation of a (static) distributed random field (a.k.a. a large-scale unknown parameter vector) observed by sparsely interconnected sensors, each of which only observes a small fraction of the field. We consider linear distributed estimators whose structure combines the information flow among sensors (the consensus term resulting from the local gossiping exchange among sensors when they are able to communicate) and the information gathering measured by the sensors (the sensing or innovations term). This leads to mixed time-scale algorithms: one time scale is associated with the consensus and the other with the innovations. The paper establishes a distributed observability condition (global observability plus mean connectedness) under which the distributed estimates are consistent and asymptotically normal. We introduce the distributed notion equivalent to the (centralized) Fisher information rate, which is a bound on the mean square error reduction rate of any distributed estimator; we show that under appropriate modeling and structural network communication conditions (gossip protocol) the distributed gossip estimator attains this distributed Fisher information rate, asymptotically achieving the performance of the optimal centralized estimator. Finally, we study the behavior of the distributed gossip estimator when the measurements fade (the noise variance grows) with time; in particular, we determine the maximum rate at which the noise variance can grow while the distributed estimator remains consistent, showing that, as long as the centralized estimator is consistent, so is the distributed estimator.
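The mixed time-scale structure described above can be illustrated with a toy consensus-plus-innovations update; the sketch below is not the paper's estimator, and the graph, gains and observation model are all assumptions chosen for illustration (each sensor sees only one component of a static field, and global observability holds because every component is seen by some sensor).

```python
import numpy as np

# Toy consensus + innovations estimator (a sketch, not the paper's algorithm).
# Assumptions: N sensors on a ring graph, a static p-dimensional field theta,
# and sensor i measuring only component i of theta.
rng = np.random.default_rng(1)
N = p = 10
theta = rng.normal(size=p)                        # unknown static field
I = np.eye(N)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)   # ring Laplacian

x = np.zeros((N, p))                              # row i: sensor i's estimate
for t in range(1, 5001):
    alpha = 1.0 / t                               # innovations gain (decays faster)
    beta = 0.3 / t ** 0.6                         # consensus gain (decays slower)
    noise = 0.5 * rng.normal(size=N)
    x = x - beta * (L @ x)                        # consensus: mix with neighbours
    for i in range(N):                            # innovations: correct only the
        y_i = theta[i] + noise[i]                 # component that sensor i observes
        x[i, i] += alpha * (y_i - x[i, i])

print("max estimation error across sensors:", np.abs(x - theta).max())
```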
NASA Astrophysics Data System (ADS)
Su, Huaizhi; Li, Hao; Kang, Yeyuan; Wen, Zhiping
2018-02-01
Seepage is one of the key factors affecting levee safety. Seepage that is not detected in time and met with a rapid response may lead to severe accidents such as seepage failure, slope instability, and even levee break. More than 90 percent of levee break events are caused by seepage. Accurately determining the saturation line is therefore very important for identifying seepage behavior in levee engineering; furthermore, the location of the saturation line has a major impact on slope stability. Considering the structural characteristics and service conditions of levee engineering, distributed optical fiber sensing technology is introduced to implement real-time observation of the saturation line. The distributed optical fiber temperature sensor system (DTS)-based principle for monitoring the saturation line in levee engineering is investigated. An experimental platform, which consists of a DTS, a heating system, a water-supply system, an auxiliary analysis system and a levee model, is designed and constructed, and a monitoring experiment on the saturation line in the levee model is carried out on this platform. From the experimental results, the numerical relationship between moisture content and thermal conductivity in the porous medium is identified. A line-heat-source-based distributed optical fiber method for obtaining the thermal conductivity of the porous medium is developed, and a DTS-based approach is proposed to monitor the saturation line in levee engineering. The embedment pattern of the optical fiber for monitoring the saturation line is also presented.
Enterprise-scale image distribution with a Web PACS.
Gropper, A; Doyle, S; Dreyer, K
1998-08-01
The integration of images with existing and new health care information systems poses a number of challenges in a multi-facility network: image distribution to clinicians; making DICOM image headers consistent across information systems; and integration of teleradiology into PACS. A novel, Web-based enterprise PACS architecture introduced at Massachusetts General Hospital provides a solution. Four AMICAS Web/Intranet Image Servers were installed as the default DICOM destination of 10 digital modalities. A fifth AMICAS receives teleradiology studies via the Internet. Each AMICAS includes: a Java-based interface to the IDXrad radiology information system (RIS), a DICOM autorouter to tape-library archives and to the Agfa PACS, a wavelet image compressor/decompressor that preserves compatibility with DICOM workstations, a Web server to distribute images throughout the enterprise, and an extensible interface which permits links between other HIS and AMICAS. Using wavelet compression and Internet standards as its native formats, AMICAS creates a bridge to the DICOM networks of remote imaging centers via the Internet. This teleradiology capability is integrated into the DICOM network and the PACS thereby eliminating the need for special teleradiology workstations. AMICAS has been installed at MGH since March of 1997. During that time, it has been a reliable component of the evolving digital image distribution system. As a result, the recently renovated neurosurgical ICU will be filmless and use only AMICAS workstations for mission-critical patient care.
Beam vacuum system of Brookhaven`s muon storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hseuth, H.C.; Snydstrup, L.; Mapes, M.
1995-11-01
A storage ring with a circumference of 45 m is being built at Brookhaven to measure the g-2 value of the muons to an accuracy of 0.35 ppm. The beam vacuum system of the storage ring will operate at 10^-7 Torr and has to be completely non-magnetic. It consists of twelve sector chambers. The chambers are constructed of aluminum and are approximately 3.5 m in length with a rectangular cross-section of 16.5 cm high by 45 cm at the widest point. The design features, fabrication techniques and cleaning methods for these chambers are described. The beam vacuum system will be pumped by forty-eight non-magnetic distributed ion pumps with a total pumping speed of over 2000 l/sec. Monte Carlo simulations of the pressure distribution in the muon storage region are presented.
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
Developments in fiber optics for distribution automation
NASA Technical Reports Server (NTRS)
Kirkham, H.; Friend, H.; Jackson, S.; Johnston, A.
1991-01-01
An optical fiber based communications system of unusual design is described. The system consists of a network of optical fibers overlaid on the distribution system. It is configured as a large number of interconnected rings, with some spurs. Protocols for access to and control of the network are described. Because of the way they function, the protocols are collectively called AbNET, in commemoration of the microbiologists' abbreviation Ab for antibody. Optical data links that could be optically powered are described. There are two versions, each of which has a good frequency response and minimal filtering requirements. In one, a conventional FM pulse train is used at the transmitter, and a novel form of phase-locked loop is used as demodulator. In the other, the FM transmitter is replaced with a pulse generator arranged so that the period between pulses represents the modulating signal. Transmitter and receiver designs, including temperature compensation methods, are presented. Experimental results are given.
M-OTDR sensing system based on 3D encoded microstructures
Sun, Qizhen; Ai, Fan; Liu, Deming; Cheng, Jianwei; Luo, Hongbo; Peng, Kuan; Luo, Yiyang; Yan, Zhijun; Shum, Perry Ping
2017-01-01
In this work, a quasi-distributed sensing scheme named microstructured OTDR (M-OTDR), which introduces ultra-weak microstructures along the fiber, is proposed. Owing to the relatively high reflectivity of the microstructures compared with the fiber backscattering coefficient, and to their three-dimensional (3D), i.e. wavelength/frequency/time, encoding, the M-OTDR system offers a high signal-to-noise ratio (SNR), millimeter-level spatial resolution, and a theoretical multiplexing capacity of up to several tens of thousands of units. A proof-of-concept system consisting of 64 sensing units is constructed to demonstrate the feasibility and sensing performance. With the help of a demodulation method based on 3D analysis and spectrum reconstruction of the signal light, quasi-distributed temperature sensing with a spatial resolution of 20 cm and a measurement resolution of 0.1 °C is realized. PMID:28106132
Zhang, Ridong; Tao, Jili; Lu, Renquan; Jin, Qibing
2018-02-01
Modeling of distributed parameter systems is difficult because of their nonlinearity and infinite-dimensional characteristics. Based on principal component analysis (PCA), a hybrid modeling strategy that consists of a decoupled linear autoregressive exogenous (ARX) model and a nonlinear radial basis function (RBF) neural network model are proposed. The spatial-temporal output is first divided into a few dominant spatial basis functions and finite-dimensional temporal series by PCA. Then, a decoupled ARX model is designed to model the linear dynamics of the dominant modes of the time series. The nonlinear residual part is subsequently parameterized by RBFs, where genetic algorithm is utilized to optimize their hidden layer structure and the parameters. Finally, the nonlinear spatial-temporal dynamic system is obtained after the time/space reconstruction. Simulation results of a catalytic rod and a heat conduction equation demonstrate the effectiveness of the proposed strategy compared to several other methods.
DeBlase, Andrew; Licata, Megan; Galbraith, John Morrison
2008-12-18
Three-center four-electron (3c4e) pi bonding systems analogous to that of the ozone molecule have been studied using modern valence bond theory. Molecules studied herein consist of combinations of first row atoms C, N, and O with the addition of H atoms where appropriate in order to preserve the 3c4e pi system. Breathing orbital valence bond (BOVB) calculations were preformed at the B3LYP/6-31G**-optimized geometries in order to determine structural weights, pi charge distributions, resonance energies, and pi bond energies. It is found that the most weighted VB structure depends on atomic electronegativity and charge distribution, with electronegativity as the dominant factor. By nature, these systems are delocalized, and therefore, resonance energy is the main contributor to pi bond energies. Molecules with a single dominant VB structure have low resonance energies and therefore low pi bond energies.
NASA Astrophysics Data System (ADS)
Anikushina, T. A.; Naumov, A. V.
2013-12-01
This article demonstrates the principal advantages of the technique for analysis of the long-term spectral evolution of single molecules (SM) in the study of the microscopic nature of the dynamic processes in low-temperature polymers. We performed the detailed analysis of the spectral trail of single tetra-tert-butylterrylene (TBT) molecule in an amorphous polyisobutylene matrix, measured over 5 hours at T = 7K. It has been shown that the slow temporal dynamics is in qualitative agreement with the standard model of two-level systems and stochastic sudden-jump model. At the same time the distributions of the first four moments (cumulants) of the spectra of the selected SM measured at different time points were found not consistent with the standard theory prediction. It was considered as evidence that in a given time interval the system is not ergodic
Numerical Modeling of Flow Distribution in Micro-Fluidics Systems
NASA Technical Reports Server (NTRS)
Majumdar, Alok; Cole, Helen; Chen, C. P.
2005-01-01
This paper describes an application of a general purpose computer program, GFSSP (Generalized Fluid System Simulation Program) for calculating flow distribution in a network of micro-channels. GFSSP employs a finite volume formulation of mass and momentum conservation equations in a network consisting of nodes and branches. Mass conservation equation is solved for pressures at the nodes while the momentum conservation equation is solved at the branches to calculate flowrate. The system of equations describing the fluid network is solved by a numerical method that is a combination of the Newton-Raphson and successive substitution methods. The numerical results have been compared with test data and detailed CFD (computational Fluid Dynamics) calculations. The agreement between test data and predictions is satisfactory. The discrepancies between the predictions and test data can be attributed to the frictional correlation which does not include the effect of surface tension or electro-kinetic effect.
The Distributed Space Exploration Simulation (DSES)
NASA Technical Reports Server (NTRS)
Crues, Edwin Z.; Chung, Victoria I.; Blum, Mike G.; Bowman, James D.
2007-01-01
The paper describes the Distributed Space Exploration Simulation (DSES) Project, a research and development collaboration between NASA centers which focuses on the investigation and development of technologies, processes and integrated simulations related to the collaborative distributed simulation of complex space systems in support of NASA's Exploration Initiative. This paper describes the three major components of DSES: network infrastructure, software infrastructure and simulation development. In the network work area, DSES is developing a Distributed Simulation Network that will provide agency-wide support for distributed simulation between all NASA centers. In the software work area, DSES is developing a collection of software models, tools and procedures that ease the burden of developing distributed simulations and provide a consistent interoperability infrastructure for agency-wide participation in integrated simulation. Finally, for simulation development, DSES is developing an integrated end-to-end simulation capability to support NASA development of new exploration spacecraft and missions. This paper presents the current status and plans for each of these work areas, with specific examples of simulations that support NASA's exploration initiatives.
A system for distributed intrusion detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snapp, S.R.; Brentano, J.; Dias, G.V.
1991-01-01
The study of providing security in computer networks is a rapidly growing area of interest because the network is the medium over which most attacks or intrusions on computer systems are launched. One approach to solving this problem is the intrusion-detection concept, whose basic premise is that abandoning the existing, huge infrastructure of possibly insecure computer and network systems is impossible, and that replacing it with totally secure systems may not be feasible or cost effective. Previous work on intrusion-detection systems was performed on stand-alone hosts and on a broadcast local area network (LAN) environment. The focus of our present research is to extend our network intrusion-detection concept from the LAN environment to arbitrarily wider areas, with the network topology being arbitrary as well. The generalized distributed environment is heterogeneous, i.e., the network nodes can be hosts or servers from different vendors, or some of them could be LAN managers, like the network security monitor (NSM) of our previous work. The proposed architecture for this distributed intrusion-detection system consists of the following components: a host manager in each host; a LAN manager for monitoring each LAN in the system; and a central manager, placed at a single secure location, which receives reports from the various host and LAN managers, processes these reports, correlates them, and detects intrusions. 11 refs., 2 figs.
RAMP: A fault tolerant distributed microcomputer structure for aircraft navigation and control
NASA Technical Reports Server (NTRS)
Dunn, W. R.
1980-01-01
RAMP consists of distributed sets of parallel computers partitioned on the basis of software and packaging constraints. To minimize hardware and software complexity, the processors operate asynchronously. It was shown that, through the design of asymptotically stable control laws, data errors due to the asynchronism were minimized. It was further shown that by designing control laws with this property and making minor hardware modifications to the RAMP modules, the system became inherently tolerant to intermittent faults. A laboratory version of RAMP was constructed and is described in the paper along with the experimental results.
Payne, Philip R.O.; Greaves, Andrew W.; Kipps, Thomas J.
2003-01-01
The Chronic Lymphocytic Leukemia (CLL) Research Consortium (CRC) consists of 9 geographically distributed sites conducting a program of research including both basic science and clinical components. To enable the CRC’s clinical research efforts, a system providing for real-time collaboration was required. CTMS provides such functionality, and demonstrates that the use of novel data modeling, web-application platforms, and management strategies provides for the deployment of an extensible, cost effective solution in such an environment. PMID:14728471
Distributed Topology Organization and Transmission Scheduling in Wireless Ad Hoc Networks
2004-01-01
Each time slot consists of two mini-slots that support full-duplex transfer: one for master-to-slave transmission and the other for slave-to-master... The protocol storage requirement would only be 208 bits in this case. Communication requirements: since each half-duplex mini-slot can be used by either a data or a control packet, eq. (4.2) sets the minimum half-duplex mini-slot size in the system or, equivalently, the maximum system period.
Control system development for a 1 MW/e/ solar thermal power plant
NASA Technical Reports Server (NTRS)
Daubert, E. R.; Bergthold, F. M., Jr.; Fulton, D. G.
1981-01-01
The point-focusing distributed receiver power plant considered consists of a number of power modules delivering power to a central collection point. Each power module contains a parabolic dish concentrator with a closed-cycle receiver/turbine/alternator assembly. Currently, a single-module prototype plant is under construction. The major control system tasks required are related to concentrator pointing control, receiver temperature control, and turbine speed control. Attention is given to operational control details, control hardware and software, and aspects of CRT output display.
NASA Technical Reports Server (NTRS)
Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.
1994-01-01
Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.
Elixir - how to handle 2 trillion pixels
NASA Astrophysics Data System (ADS)
Magnier, Eugene A.; Cuillandre, Jean-Charles
2002-12-01
The Elixir system at CFHT provides automatic data quality assurance and calibration for the wide-field mosaic imager camera CFH12K. Elixir consists of a variety of tools, including: a real-time analysis suite which runs at the telescope to provide quick feedback to the observers; a detailed analysis of the calibration data; and an automated pipeline for processing data to be distributed to observers. To date, 2.4 × 10^12 night-time sky pixels from CFH12K have been processed by the Elixir system.
Crater topography on Titan: implications for landscape evolution
Neish, Catherine D.; Kirk, R.L.; Lorenz, R.D.; Bray, V.J.; Schenk, P.; Stiles, B.W.; Turtle, E.; Mitchell, Ken; Hayes, A.
2013-01-01
We present a comprehensive review of available crater topography measurements for Saturn’s moon Titan. In general, the depths of Titan’s craters are within the range of depths observed for similarly sized fresh craters on Ganymede, but several hundreds of meters shallower than Ganymede’s average depth vs. diameter trend. Depth-to-diameter ratios are between 0.0012 ± 0.0003 (for the largest crater studied, Menrva, D ~ 425 km) and 0.017 ± 0.004 (for the smallest crater studied, Ksa, D ~ 39 km). When we evaluate the Anderson–Darling goodness-of-fit parameter, we find that there is less than a 10% probability that Titan’s craters have a current depth distribution that is consistent with the depth distribution of fresh craters on Ganymede. There is, however, a much higher probability that the relative depths are uniformly distributed between 0 (fresh) and 1 (completely infilled). This distribution is consistent with an infilling process that is relatively constant with time, such as aeolian deposition. Assuming that Ganymede represents a close ‘airless’ analogue to Titan, the difference in depths represents the first quantitative measure of the amount of modification that has shaped Titan’s surface, the only body in the outer Solar System with extensive surface–atmosphere exchange.
Confining the angular distribution of terrestrial gamma ray flash emission
NASA Astrophysics Data System (ADS)
Gjesteland, T.; Østgaard, N.; Collier, A. B.; Carlson, B. E.; Cohen, M. B.; Lehtinen, N. G.
2011-11-01
Terrestrial gamma ray flashes (TGFs) are bremsstrahlung emissions from relativistic electrons accelerated in electric fields associated with thunderstorms, with photon energies up to at least 40 MeV, which sets a lower limit of 40 MV on the total potential. The electric field that produces TGFs will be reflected in the initial angular distribution of the TGF emission. Here we present the first constraints on the TGF emission cone based on accurately geolocated TGFs. The source lightning discharges associated with TGFs detected by RHESSI are determined from the Atmospheric Weather Electromagnetic System for Observation, Modeling, and Education (AWESOME) network and the World Wide Lightning Location Network (WWLLN). The distribution of the observation angles for 106 TGFs is compared to Monte Carlo simulations. We find that TGF emission within a half angle >30° is consistent with the distributions of observation angle derived from the networks. In addition, 36 events occurring before 2006 are used for spectral analysis. The energy spectra are binned according to observation angle. The result is a significant softening of the TGF energy spectrum for large (>40°) observation angles, which is consistent with a TGF emission half angle <40°. The softening is due to Compton scattering, which reduces the photon energies.
NASA Technical Reports Server (NTRS)
Bose, Bimal K.; Kim, Min-Huei
1995-01-01
The report summarizes the work performed in order to satisfy the project objective. In the beginning, different energy storage devices, such as the battery, flywheel and ultracapacitor, are reviewed and compared, establishing the superiority of the battery. Then, the possible power sources, such as the IC engine, diesel engine, gas turbine and fuel cell, are reviewed and compared, and the superiority of the IC engine is established. Different types of machines for the drive motor/engine generator, such as the induction machine, PM synchronous machine and switched reluctance machine, are compared, and the induction machine is established as the superior candidate. A similar discussion is made for power converters and devices. The Insulated Gate Bipolar Transistor (IGBT) appears to be the most capable device, although the MOS-Controlled Thyristor (MCT) shows future promise. Different types of candidate distribution systems with possible combinations of power and energy sources have been discussed, and the most viable system, consisting of a battery, an IC engine and an induction machine, has been identified. Then, the HFAC system has been compared with the DC system, establishing the superiority of the former. The detailed component sizing calculations for the HFAC and DC systems reinforce this conclusion. A preliminary control strategy has been developed for the candidate HFAC system. Finally, a modeling and simulation study has been made to validate the system performance. The study demonstrates the superiority of the HFAC distribution system for the next generation of electric/hybrid vehicles.
NASA Astrophysics Data System (ADS)
Efstathiou, Angeliki; Tzanis, Andreas; Vallianatos, Filippos
2014-05-01
The context of Non-Extensive Statistical Physics (NESP) has recently been suggested as an appropriate tool for the analysis of complex dynamic systems with scale invariance, long-range interactions, long-range memory, and evolution in a fractal-like space-time. This is because the active tectonic grain is thought to comprise a (self-organizing) complex system; therefore, its expression (seismicity) should be manifested in the temporal and spatial statistics of energy release rates. In addition to energy release rates expressed by the magnitude M, measures of the temporal and spatial interactions are the time (Δt) and hypocentral distance (Δd) between consecutive events. Recent work indicated that if the distributions of M, Δt and Δd are independent, so that the joint probability p(M,Δt,Δd) factorizes into the probabilities of M, Δt and Δd, i.e. p(M,Δt,Δd) = p(M)p(Δt)p(Δd), then the frequency of earthquake occurrence is multiply related, not only to magnitude as the celebrated Gutenberg-Richter law predicts, but also to interevent time and distance by means of well-defined power laws consistent with NESP. The present work applies these concepts to investigate the self-organization and temporal/spatial dynamics of seismicity in Greece and western Turkey for the period 1964-2011. The analysis is based on the ISC earthquake catalogue, which is homogeneous by construction with consistently determined hypocenters and magnitudes. The presentation focuses on the analysis of bivariate Frequency-Magnitude-Time distributions, while using the interevent distances as spatial constraints (or spatial filters) for studying the spatial dependence of the energy and time dynamics of the seismicity. It is demonstrated that the frequency of earthquake occurrence is multiply related to the magnitude and the interevent time by means of well-defined multi-dimensional power laws consistent with NESP and has attributes of universality, as it holds for a broad range of spatial, temporal and magnitude scales. Provided that the multivariate empirical frequency distributions are based on a sufficient number of observations as an empirical lower limit, the results are stable and consistent with established knowledge, irrespective of the magnitude and spatio-temporal range of the earthquake catalogue, or of operations pertaining to re-sampling, bootstrapping or re-arrangement of the catalogue. It is also demonstrated that the expression of the regional active tectonic grain may comprise a mixture of processes significantly dependent on Δd. The analysis of the size (energy) distribution of earthquakes yielded results consistent with a correlated sub-extensive system; the results are also consistent with conventional determinations of Frequency-Magnitude distributions. The analysis of interevent times determined the existence of sub-extensivity and near-field interaction (correlation) in the complete catalogue of Greek and western Turkish seismicity (mixed background earthquake activity and aftershock processes), as well as in the pure background process (declustered catalogue). This could be attributed to the joint effect of near-field interaction between neighbouring earthquakes or seismic areas and interaction within aftershock sequences. The background process appears to be moderately to weakly correlated in the far field. Formal random temporal processes have not been detected.
A general conclusion supported by the above observations is that aftershock sequences may be an integral part of the seismogenetic process, as they appear to partake in long-range interaction. A formal explanation of this effect is pending, but it may involve delayed remote triggering of seismic activity by (transient or static) stress transfer from the main shocks and large aftershocks and/or the cascading effects already discussed by Marsan and Lengliné (2008). In this view, the effect weakens when aftershocks are removed because aftershocks are the link between the main shocks and their remote offshoots. Overall, the above results compare well with results for Northern California seismicity, which have shown that the expression of seismicity in Northern California is generally consistent with non-extensive (sub-extensive) thermodynamics. Acknowledgments: This work was supported by the THALES Program of the Ministry of Education of Greece and the European Union in the framework of the project "Integrated understanding of Seismicity, using innovative methodologies of Fracture Mechanics along with Earthquake and Non-Extensive Statistical Physics - Application to the geodynamic system of the Hellenic Arc - SEISMO FEAR HELLARC". References: Tzanis, A., Vallianatos, F. and Efstathiou, A., Multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: the interdependence of magnitude, interevent time and interevent distance in North California, Bulletin of the Geological Society of Greece, vol. XLVII, 2013, Proceedings of the 13th International Congress, Chania, Sept. 2013. Tzanis, A., Vallianatos, F. and Efstathiou, A., Generalized multidimensional earthquake frequency distributions consistent with Non-Extensive Statistical Physics: An appraisal of the universality in the interdependence of magnitude, interevent time and interevent distance, Geophysical Research Abstracts, Vol. 15, EGU2013-628, 2013, EGU General Assembly 2013. Marsan, D. and Lengliné, O., 2008. Extending earthquakes' reach through cascading, Science, 319, 1076; doi: 10.1126/science.1148783. On-line Bulletin, http://www.isc.ac.uk, Internatl. Seis. Cent., Thatcham, United Kingdom, 2011.
First Results on Angular Distributions of Thermal Dileptons in Nuclear Collisions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnaldi, R.; Colla, A.; Cortese, P.
The NA60 experiment at the CERN Super Proton Synchrotron has studied dimuon production in 158A GeV In-In collisions. The strong excess of pairs above the known sources found in the complete mass region 0.2
Adaptive Management of Computing and Network Resources for Spacecraft Systems
NASA Technical Reports Server (NTRS)
Pfarr, Barbara; Welch, Lonnie R.; Detter, Ryan; Tjaden, Brett; Huh, Eui-Nam; Szczur, Martha R. (Technical Monitor)
2000-01-01
It is likely that NASA's future spacecraft systems will consist of distributed processes which will handle dynamically varying workloads in response to perceived scientific events, the spacecraft environment, spacecraft anomalies and user commands. Since all situations and possible uses of sensors cannot be anticipated during pre-deployment phases, an approach for dynamically adapting the allocation of distributed computational and communication resources is needed. To address this, we are evolving the DeSiDeRaTa adaptive resource management approach to enable reconfigurable ground and space information systems. The DeSiDeRaTa approach embodies a set of middleware mechanisms for adapting resource allocations, and a framework for reasoning about the real-time performance of distributed application systems. The framework and middleware will be extended to accommodate (1) the dynamic aspects of intra-constellation network topologies, and (2) the complete real-time path from the instrument to the user. We are developing a ground-based testbed that will enable NASA to perform early evaluation of adaptive resource management techniques without the expense of first deploying them in space. The benefits of the proposed effort are numerous, including the ability to use sensors in new ways not anticipated at design time; the production of information technology that ties the sensor web together; the accommodation of greater numbers of missions with fewer resources; and the opportunity to leverage the DeSiDeRaTa project's expertise, infrastructure and models for adaptive resource management for distributed real-time systems.
NASA Astrophysics Data System (ADS)
Schuetz, Christopher; Martin, Richard; Dillon, Thomas; Yao, Peng; Mackrides, Daniel; Harrity, Charles; Zablocki, Alicia; Shreve, Kevin; Bonnett, James; Curt, Petersen; Prather, Dennis
2013-05-01
Passive imaging using millimeter waves (mmWs) has many advantages and applications in the defense and security markets. All terrestrial bodies emit mmW radiation, and these wavelengths are able to penetrate smoke, fog/clouds/marine layers, and even clothing. One primary obstacle to imaging in this spectrum is that longer wavelengths require larger apertures to achieve the resolutions desired for many applications. Accordingly, lens-based focal plane systems and scanning systems tend to require large-aperture optics, which increase the size and weight of such systems beyond what many applications can support. To overcome this limitation, a distributed aperture detection scheme is used in which the effective aperture size can be increased without the associated volumetric increase in imager size. This distributed aperture system is realized through conversion of the received mmW energy into sidebands on an optical carrier. This conversion serves, in essence, to scale the mmW sparse aperture array signals onto a complementary optical array. The sidebands are subsequently stripped from the optical carrier and recombined to provide a real-time snapshot of the mmW signal. Using this technique, we have constructed a real-time, video-rate imager operating at 75 GHz. A distributed aperture consisting of 220 upconversion channels is used to realize 2.5k pixels with passive sensitivity. Details of the construction and operation of this imager, as well as field testing results, are presented herein.
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands a asymmetric, s symmetric, and b benign faults simultaneously, using m+1 rounds, provided n > 2a + 2s + b + m and m ≥ a. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
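For orientation, the sketch below simulates the classical OM(m) recursion of Lamport, Shostak and Pease, on which the verified hybrid-fault algorithm builds; it treats every fault as asymmetric (Byzantine) and does not reproduce the hybrid-fault vote rules, so it should be read only as an illustration of the message-passing structure.

```python
import random
from collections import Counter

# Toy simulation of classical OM(m); every fault here is Byzantine.
rng = random.Random(42)

def send(sender, value, faulty):
    """What `sender` actually transmits: arbitrary if Byzantine-faulty."""
    return rng.choice([0, 1]) if sender in faulty else value

def majority(values):
    return Counter(values).most_common(1)[0][0]

def om(commander, lieutenants, value, m, faulty):
    """Return a dict mapping each lieutenant to its decided value."""
    received = {l: send(commander, value, faulty) for l in lieutenants}
    if m == 0:
        return received
    views = {l: {} for l in lieutenants}
    for j in lieutenants:                     # lieutenant j relays its value,
        others = [p for p in lieutenants if p != j]
        sub = om(j, others, received[j], m - 1, faulty)   # acting as commander
        for l in others:
            views[l][j] = sub[l]
    return {l: majority([received[l]] +
                        [views[l][j] for j in lieutenants if j != l])
            for l in lieutenants}

# 7 nodes, non-faulty commander 0, one Byzantine lieutenant, m = 2 rounds:
nodes = list(range(7))
print(om(0, nodes[1:], value=1, m=2, faulty={3}))
# every non-faulty lieutenant should decide the commander's value, 1
```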
Low latency messages on distributed memory multiprocessors
NASA Technical Reports Server (NTRS)
Rosing, Matthew; Saltz, Joel
1993-01-01
Many of the issues in developing an efficient interface for communication on distributed memory machines are described, and a portable interface is proposed. Although the hardware component of message latency is less than one microsecond on many distributed memory machines, the software latency associated with sending and receiving typed messages is on the order of 50 microseconds. The reason for this imbalance is that the software interface does not match the hardware. By changing the interface to match the hardware more closely, applications with fine-grained communication can be put on these machines. Based on several tests that were run on the iPSC/860, an interface that better matches current distributed memory machines is proposed. The model used in the proposed interface consists of a computation processor and a communication processor on each node. Communication between these processors and other nodes in the system is done through a buffered network. The information transmitted is either data or procedures to be executed on the remote processor. The dual-processor system is better suited to handling asynchronous communication efficiently than a single-processor system. The ability to send either data or procedures provides flexibility for minimizing message latency, depending on the type of communication being performed. The tests performed and the proposed interface are described.
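A toy rendering of the proposed model, with a computation side posting messages and a communication side draining a buffered network endpoint, might look like the following; the class and message names are invented and this is not the interface proposed in the paper.

```python
import queue
import threading

# Toy sketch of the node model described above: a message carries either raw
# data or the name of a procedure to run on the remote node. The structure and
# names are illustrative only.
class Node:
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()          # buffered "network" endpoint
        self.data = []                      # received raw data
        self.total = 0.0
        self.procs = {"accumulate": self.accumulate}

    def accumulate(self, x):
        self.total += x

    def comm_loop(self):                    # plays the communication processor
        while True:
            kind, payload = self.inbox.get()
            if kind == "stop":
                break
            if kind == "data":
                self.data.append(payload)
            elif kind == "proc":            # remote procedure request
                proc_name, args = payload
                self.procs[proc_name](*args)

a, b = Node("A"), Node("B")
t = threading.Thread(target=b.comm_loop)
t.start()
# Node A's computation processor sends data and a procedure request to B.
b.inbox.put(("data", [1, 2, 3]))
b.inbox.put(("proc", ("accumulate", (4.5,))))
b.inbox.put(("stop", None))
t.join()
print(b.data, b.total)
```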
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fabrycky, Daniel C.; Winn, Joshua N.
One possible diagnostic of planet formation, orbital migration, and tidal evolution is the angle ψ between a planet's orbital axis and the spin axis of its parent star. In general, ψ cannot be measured, but for transiting planets one can measure the angle λ between the sky projections of the two axes via the Rossiter-McLaughlin effect. Here, we show how to combine measurements of λ in different systems to derive statistical constraints on ψ. We apply the method to 11 published measurements of λ, using two different single-parameter distributions to describe the ensemble. First, assuming a Rayleigh distribution (or more precisely, a Fisher distribution on a sphere), we find that the peak value is less than 22° with 95% confidence. Second, assuming that a fraction f of the orbits have random orientations relative to the stars, and the remaining fraction (1 - f) are perfectly aligned, we find f < 0.36 with 95% confidence. This latter model fits the data better than the Rayleigh distribution, mainly because the XO-3 system was found to be strongly misaligned while the other 10 systems are consistent with perfect alignment. If the XO-3 result proves robust, then our results may be interpreted as evidence for two distinct modes of planet migration.
NASA Technical Reports Server (NTRS)
Herskovits, Edward H.; Gerring, Joan P.; Davatzikos, Christos; Bryan, R. Nick
2002-01-01
PURPOSE: To determine whether there is an association between the spatial distributions of lesions detected at magnetic resonance (MR) imaging of the brain in children, adolescents, and young adults after closed-head injury (CHI) and development of the reexperiencing symptoms of posttraumatic stress disorder (PTSD). MATERIALS AND METHODS: Data obtained in 94 subjects without a history of PTSD as determined by parental interview were analyzed. MR images were obtained 3 months after CHI. Lesions were manually delineated and registered to the Talairach coordinate system. Mann-Whitney analysis of lesion distribution and PTSD status at 1 year (again, as determined by parental interview) was performed, consisting of an analysis of lesion distribution versus the major symptoms of PTSD: reexperiencing, hyperarousal, and avoidance. RESULTS: Of the 94 subjects, 41 met the PTSD reexperiencing criterion and nine met all three PTSD criteria. Subjects who met the reexperiencing criterion had fewer lesions in limbic system structures (eg, the cingulum) on the right than did subjects who did not meet this criterion (Mann-Whitney, P =.003). CONCLUSION: Lesions induced by CHI in the limbic system on the right may inhibit subsequent manifestation of PTSD reexperiencing symptoms in children, adolescents, and young adults. Copyright RSNA, 2002.
Navia, Marlon; Campelo, José Carlos; Bonastre, Alberto; Ors, Rafael
2017-12-23
Monitoring is one of the best ways to evaluate the behavior of computer systems. When the monitored system is a distributed system, such as a wireless sensor network (WSN), the monitoring operation must also be distributed, providing a distributed trace for further analysis. The temporal sequence of occurrence of the events registered by the distributed monitoring platform (DMP) must be correctly established to provide cause-effect relationships between them, so the logs obtained at different monitor nodes must be synchronized. Many of the synchronization mechanisms applied to DMPs consist of adjusting the internal clocks of the nodes to the same value as a reference time. However, these mechanisms can create an incoherent event sequence. This article presents a new method to achieve global synchronization of the traces obtained in a DMP. It is based on periodic synchronization signals that are received by the monitor nodes and logged along with the recorded events. This mechanism processes all traces and generates a globally post-synchronized trace by proportionally scaling all registered times according to the synchronization signals. It is intended to be a simple but efficient offline mechanism. Its application in a WSN DMP demonstrates that it guarantees a correct ordering of the events, avoiding the aforementioned issues.
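The proportional-scaling step can be sketched as a piecewise-linear mapping from each node's local timestamps to a global time base defined by the logged synchronization signals; the sync period, timestamps and function names below are illustrative only, not the paper's implementation.

```python
import bisect

# Sketch of the offline post-synchronization idea: each monitor node logs the
# local time at which every global synchronization signal k was received.
# Local event times are then mapped to the global scale by linear interpolation
# (proportional scaling) between the two surrounding sync signals.
SYNC_PERIOD = 1000.0                      # assumed global time between signals

def to_global(local_t, local_sync_times):
    """Map a local timestamp to global time using the node's sync log."""
    k = bisect.bisect_right(local_sync_times, local_t) - 1
    k = max(0, min(k, len(local_sync_times) - 2))
    t0, t1 = local_sync_times[k], local_sync_times[k + 1]
    frac = (local_t - t0) / (t1 - t0)     # proportional scaling inside interval
    return (k + frac) * SYNC_PERIOD

# Node with a slightly fast, drifting clock: sync signals as logged locally.
local_syncs = [0.0, 1003.2, 2006.1, 3009.5]
events = [512.7, 1500.0, 2950.3]
print([round(to_global(e, local_syncs), 2) for e in events])
```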
Ensuring correct rollback recovery in distributed shared memory systems
NASA Technical Reports Server (NTRS)
Janssens, Bob; Fuchs, W. Kent
1995-01-01
Distributed shared memory (DSM) implemented on a cluster of workstations is an increasingly attractive platform for executing parallel scientific applications. Checkpointing and rollback techniques can be used in such a system to allow the computation to progress in spite of the temporary failure of one or more processing nodes. This paper presents the design of an independent checkpointing method for DSM that takes advantage of DSM's specific properties to reduce error-free and rollback overhead. The scheme reduces the dependencies that need to be considered for correct rollback to those resulting from transfers of pages. Furthermore, in-transit messages can be recovered without the use of logging. We extend the scheme to a DSM implementation using lazy release consistency, where the frequency of dependencies is further reduced.
A Maximum Entropy Method for Particle Filtering
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.; Kim, Sangil
2006-06-01
Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.
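The simplest instance of this moment-matching idea is easy to sketch: the maximum-entropy distribution consistent with a given mean and covariance is a Gaussian, so a weighted ensemble can be replaced by draws from a single fitted Gaussian. The paper generalizes this to mixtures of Gaussians; the single-component sketch below is only meant to show the resampling step, with a toy ensemble.

```python
import numpy as np

def moment_match_resample(particles, weights, n_out, rng=np.random.default_rng(0)):
    """Replace a weighted ensemble by samples from the maximum-entropy
    distribution consistent with its first two moments (a Gaussian).
    The paper's method uses mixtures of Gaussians; this is a
    single-component sketch of the same moment-matching idea."""
    mean = np.average(particles, weights=weights, axis=0)
    diff = particles - mean
    cov = (weights[:, None] * diff).T @ diff / weights.sum()
    return rng.multivariate_normal(mean, cov, size=n_out)

# Toy 2-D ensemble with uneven weights
rng = np.random.default_rng(1)
p = rng.normal(size=(50, 2))
w = rng.random(50)
new_ensemble = moment_match_resample(p, w / w.sum(), n_out=200)
print(new_ensemble.mean(axis=0))
print(np.cov(new_ensemble.T))
```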
Visualizing Big Data Outliers through Distributed Aggregation.
Wilkinson, Leland
2017-08-29
Visualizing outliers in massive datasets requires statistical pre-processing in order to reduce the scale of the problem to a size amenable to rendering systems like D3, Plotly or analytic systems like R or SAS. This paper presents a new algorithm, called hdoutliers, for detecting multidimensional outliers. It is unique for a) dealing with a mixture of categorical and continuous variables, b) dealing with big-p (many columns of data), c) dealing with big-n (many rows of data), d) dealing with outliers that mask other outliers, and e) dealing consistently with unidimensional and multidimensional datasets. Unlike ad hoc methods found in many machine learning papers, hdoutliers is based on a distributional model that allows outliers to be tagged with a probability. This critical feature reduces the likelihood of false discoveries.
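The core distance-gap idea behind this kind of distributional outlier tagging can be sketched briefly. The snippet below computes nearest-neighbor distances, fits an exponential model to them, and flags rows whose gap is improbably large under that model. The published hdoutliers algorithm additionally aggregates rows with the Leader algorithm for big n and handles mixed categorical/continuous columns, none of which this simplified sketch reproduces.

```python
import numpy as np
from scipy.spatial.distance import cdist

def flag_outliers(X, alpha=0.05):
    """Simplified distance-based outlier test: compute each row's nearest-
    neighbor distance, fit an exponential to those distances, and flag rows
    whose gap has a very small exceedance probability under the fit."""
    d = cdist(X, X)
    np.fill_diagonal(d, np.inf)
    nn = d.min(axis=1)
    rate = 1.0 / nn.mean()                 # ML estimate of the exponential rate
    p_exceed = np.exp(-rate * nn)          # P(gap >= observed) under the fit
    return p_exceed < alpha / len(X)       # Bonferroni-style cutoff

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 3)), [[8.0, 8.0, 8.0]]])
print(np.where(flag_outliers(X))[0])       # the appended far point is flagged
```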
DAVIS: A direct algorithm for velocity-map imaging system
NASA Astrophysics Data System (ADS)
Harrison, G. R.; Vaughan, J. C.; Hidle, B.; Laurent, G. M.
2018-05-01
In this work, we report a direct (non-iterative) algorithm to reconstruct the three-dimensional (3D) momentum-space picture of any charged particles collected with a velocity-map imaging system from the two-dimensional (2D) projected image captured by a position-sensitive detector. The method consists of fitting the measured image with the 2D projection of a model 3D velocity distribution defined by the physics of the light-matter interaction. The meaningful angle-correlated information is first extracted from the raw data by expanding the image with a complete set of Legendre polynomials. Both the particle's angular and energy distributions are then directly retrieved from the expansion coefficients. The algorithm is simple, easy to implement, fast, and explicitly takes into account the pixelization effect in the measurement.
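The Legendre-expansion step can be illustrated in a few lines. Assuming the projection axis is vertical and the image center is known (both hypothetical choices here), the angular content of one ring of a toy image is fit with Legendre polynomials in cos θ; DAVIS then relates such coefficients to the 3D distribution through the forward projection model, which is not reproduced in this sketch.

```python
import numpy as np
from numpy.polynomial import legendre

def ring_legendre_coeffs(image, center, radius, lmax=4, nsamples=360):
    """Expand the intensity on one ring of a velocity-map image in Legendre
    polynomials of cos(theta), theta measured from the (assumed vertical) axis."""
    theta = np.linspace(0.0, 2.0 * np.pi, nsamples, endpoint=False)
    x = center[0] + radius * np.sin(theta)
    y = center[1] + radius * np.cos(theta)
    vals = image[np.round(y).astype(int), np.round(x).astype(int)]
    # Least-squares fit of vals(theta) to sum_l c_l P_l(cos theta)
    basis = np.stack([legendre.legval(np.cos(theta), np.eye(lmax + 1)[l])
                      for l in range(lmax + 1)], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, vals, rcond=None)
    return coeffs

# Toy image: a Gaussian ring with a cos^2 angular modulation
n = 201
yy, xx = np.mgrid[0:n, 0:n]
r = np.hypot(xx - 100, yy - 100)
th = np.arctan2(xx - 100, yy - 100)
img = np.exp(-(r - 60) ** 2 / 20.0) * (1.0 + np.cos(th) ** 2)
print(ring_legendre_coeffs(img, (100, 100), 60))
```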
Large-deformation modal coordinates for nonrigid vehicle dynamics
NASA Technical Reports Server (NTRS)
Likins, P. W.; Fleischer, G. E.
1972-01-01
The derivation of minimum-dimension sets of discrete-coordinate and hybrid-coordinate equations of motion of a system consisting of an arbitrary number of hinge-connected rigid bodies assembled in tree topology is presented. These equations are useful for the simulation of dynamical systems that can be idealized as tree-like arrangements of substructures, with each substructure consisting of either a rigid body or a collection of elastically interconnected rigid bodies restricted to small relative rotations at each connection. Thus, some of the substructures represent elastic bodies subjected to small strains or local deformations but possibly large gross deformations. In the hybrid formulation, distributed coordinates, referred to herein as large-deformation modal coordinates, are used for the deformations of these substructures. The equations are in a form suitable for incorporation into one or more computer programs to be used as multipurpose tools in the simulation of spacecraft and other complex electromechanical systems.
Heavy residues from very mass asymmetric heavy ion reactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanold, Karl Alan
1994-08-01
The isotopic production cross sections and momenta of all residues with nuclear charge (Z) greater than 39 from the reaction of 26, 40, and 50 MeV/nucleon 129Xe + Be, C, and Al were measured. The isotopic cross sections, the momentum distribution for each isotope, and the cross section as a function of nuclear charge and momentum are presented here. The new cross sections are consistent with previous measurements of the cross sections from similar reaction systems. The shape of the cross section distribution, when considered as a function of Z and velocity, was found to be qualitatively consistent with that expected from an incomplete fusion reaction mechanism. An incomplete fusion model coupled to a statistical decay model is able to reproduce many features of these reactions: the shapes of the elemental cross section distributions, the emission velocity distributions for the intermediate mass fragments, and the Z versus velocity distributions. This model gives a less satisfactory prediction of the momentum distribution for each isotope. A very different model, based on the Boltzmann-Nordheim-Vlasov equation and also coupled to a statistical decay model, reproduces many features of these reactions: the shapes of the elemental cross section distributions, the intermediate mass fragment emission velocity distributions, and the Z versus momentum distributions. Both model calculations overestimate the average mass for each element by two mass units and underestimate the isotopic and isobaric widths of the experimental distributions. It is shown that the predicted average mass for each element can be brought into agreement with the data by small, but systematic, variation of the particle emission barriers used in the statistical model. The predicted isotopic and isobaric widths of the cross section distributions cannot be brought into agreement with the experimental data using reasonable parameters for the statistical model.
NASA Astrophysics Data System (ADS)
Hata, Yutaka; Kanazawa, Seigo; Endo, Maki; Tsuchiya, Naoki; Nakajima, Hiroshi
2012-06-01
This paper proposes a heart rate monitoring system that assesses the autonomic nervous system through heart rate variability, using an air pressure sensor, with the aim of diagnosing mental disease. Moreover, we propose a human behavior monitoring system that detects the human trajectory in the home with an infrared camera. During both day and night, the behavior monitoring system detects human movement in the home, while the heart rate monitoring system detects the heart rate in bed at night. The air pressure sensor consists of a rubber tube, a cushion cover, and a pressure sensor, and it detects the heart rate when placed on the bed. It detects the RR intervals without constraining the subject, so the autonomic nervous system can be assessed; this autonomic nervous system analysis can be used to examine mental disease. Meanwhile, the human behavior monitoring system obtains a distance distribution image from the infrared camera. It classifies adults, children, and other objects from the distance distribution obtained by the camera and records their trajectories. This behavior, i.e., the trajectory in the home, corresponds strongly to cognitive disorders. Thus, the total system can detect mental disease and cognitive disorders using sensors that do not contact the human body.
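Once RR intervals are available, standard time-domain heart-rate-variability indices give a first look at autonomic balance. The sketch below computes a few generic indices (SDNN, RMSSD, pNN50) from a hypothetical RR series; the paper does not specify its exact analysis pipeline, so this is only indicative of the kind of assessment meant.

```python
import numpy as np

def hrv_metrics(rr_ms):
    """Basic time-domain heart-rate-variability indices from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),                  # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),    # short-term (vagal) variability
        "pnn50": np.mean(np.abs(diff) > 50.0),      # fraction of successive diffs > 50 ms
    }

print(hrv_metrics([812, 845, 790, 860, 830, 805, 870, 815]))
```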
Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Bieniawski, Stefan
2005-01-01
In extensive form noncooperative game theory, at each instant t, each agent i sets its state x(sub i) independently of the other agents by sampling an associated distribution, q(sub i)(x(sub i)). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q(sub i) of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x(sub 1), x(sub 2), ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G), and the convergence of the q(sub i) to the most likely set of distributions consistent with that value. After this the target value for E(sub q)(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x(sub i) all have a finite number of possible values and G does not extend to multiple instants in time. That work also is based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions. This extends the applicability of that work to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
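For finite-move agents, one concrete way to realize the "most likely distributions consistent with a target E(G)" picture is a Boltzmann-type update of each agent's private distribution, with the target lowered in stages. The sketch below is a hedged two-agent toy with a hypothetical objective G and hand-picked temperatures; it is not the paper's algorithm, only an illustration of the product-distribution idea (lower G is treated as better; the sign convention is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.random((4, 4))   # hypothetical objective over two agents, 4 moves each

def update(q, T, nsamples=2000):
    """One Boltzmann-style pass: each agent moves its private distribution
    toward exp(-E[G | own move]/T), with the conditional expectation of G
    estimated by sampling the other agent from its current distribution."""
    new_q = []
    for i in range(2):
        other = 1 - i
        samples = rng.choice(len(q[other]), size=nsamples, p=q[other])
        eg = np.array([G[x, samples].mean() if i == 0 else G[samples, x].mean()
                       for x in range(len(q[i]))])
        w = np.exp(-(eg - eg.min()) / T)
        new_q.append(w / w.sum())
    return new_q

q = [np.full(4, 0.25), np.full(4, 0.25)]
for T in (1.0, 0.3, 0.1, 0.03):     # lower the effective target E(G) in stages
    for _ in range(10):
        q = update(q, T)
best = (int(np.argmax(q[0])), int(np.argmax(q[1])))
print("joint move", best, "G =", round(G[best], 3), "min G =", round(G.min(), 3))
```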
Campo, Xandra; Méndez, Roberto; Embid, Miguel; Ortego, Alberto; Novo, Manuel; Sanz, Javier
2018-05-01
Neutron fields inside and outside the independent spent fuel storage installation of the Trillo Nuclear Power Plant are characterized exhaustively in terms of neutron spectra and ambient dose equivalent, measured with a Bonner sphere system and an LB6411 monitor. The measurements are consistent with the storage cask and building shield characteristics, and also with the cask distribution inside the building. Outside the building, values at least five times lower than the dose limit for a free-access area are found. Measurements with the LB6411 and the spectrometer are consistent with each other. Copyright © 2018 Elsevier Ltd. All rights reserved.
Conduits and dike distribution analysis in San Rafael Swell, Utah
NASA Astrophysics Data System (ADS)
Kiyosugi, K.; Connor, C.; Wetmore, P. H.; Ferwerda, B. P.; Germa, A.
2011-12-01
Volcanic fields generally consist of scattered monogenetic volcanoes, such as cinder cones and maars. The temporal and spatial distribution of monogenetic volcanoes and the probability of future activity within volcanic fields are studied with the goals of understanding the origins of these volcano groups and forecasting potential future volcanic hazards. The subsurface magmatic plumbing systems associated with volcanic fields, however, are rarely observed or studied. Therefore, we investigated a highly eroded and exposed magmatic plumbing system in the San Rafael Swell (UT) that consists of dikes, volcano conduits, and sills. The San Rafael Swell is part of the Colorado Plateau and is located east of the Rocky Mountain seismic belt and the Basin and Range. The overburden thickness at the time of mafic magma intrusion (Pliocene; ca. 4 Ma) into Jurassic sandstone is estimated to be ~800 m based on paleotopographical reconstructions. Based on a geologic map by P. Delaney and colleagues, and new field research, a total of 63 conduits are mapped in this former volcanic field. The conduits each reveal features of root zones and/or lower diatremes, including rapid dike expansion, peperite, and brecciated intrusive and host rocks. A recrystallized baked zone of host rock is also observed around many conduits. Most conduits are basaltic or shonkinitic, with thicknesses of >10 m, and are associated with feeder dikes intruded along N-S trending joints in the host rock, whereas two conduits are syenitic, suggesting development from underlying cognate sills. The conduit distribution, analyzed by a kernel function method with an elliptical bandwidth, shows an N-S elongated area of higher conduit density regardless of the azimuths of alignments of closely spaced conduits (nearest neighbor distance <200 m). In addition, dike density was calculated as total dike length per unit area (km/km²). The conduit and sill distribution is concordant with the high dike density area. In particular, the distribution of conduits is not random with respect to the dike distribution, with greater than 99% confidence on the basis of the Kolmogorov-Smirnov test. On the other hand, the dike density at each conduit location suggests that there is no threshold of dike density for conduit formation; in other words, conduits may develop even from short mapped dikes in low dike density areas. These results show the effectiveness of studying volcanic vent distributions to infer the size of the magmatic system below volcanic fields, and they highlight the uncertainty of forecasting the location of new monogenetic volcanoes in active fields, which may be associated with a single dike intrusion.
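The non-randomness test can be sketched numerically. With hypothetical map coordinates standing in for the mapped dikes and conduits, the snippet builds a kernel density surface for the dikes (SciPy's gaussian_kde rather than the elliptical-bandwidth kernel used in the study) and compares dike-density values sampled at conduit sites against values at random sites with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp, gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical map coordinates (km): dikes concentrated along a N-S band,
# conduits drawn from a similar band (stand-ins for the field data)
dike_pts = np.column_stack([rng.normal(0.0, 0.5, 500), rng.uniform(-5, 5, 500)])
conduits = np.column_stack([rng.normal(0.0, 0.6, 60), rng.uniform(-5, 5, 60)])

# Dike density surface from a kernel estimate
density = gaussian_kde(dike_pts.T)

# Compare dike density at conduit sites with that at random sites in the area
random_pts = np.column_stack([rng.uniform(-3, 3, 1000), rng.uniform(-5, 5, 1000)])
stat, p = ks_2samp(density(conduits.T), density(random_pts.T))
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")  # small p: conduits track dike density
```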
Yu, Chun-Yang; Yang, Zhong-Zhi
2011-03-31
Hydrogen peroxide (HP) clusters (H(2)O(2))(n) (n = 1-6) and liquid-state HP have been systematically investigated by the newly constructed ABEEM/MM fluctuating charge model. Because of the explicit description of the charge distribution and the special treatment of the hydrogen-bond interaction region, the ABEEM/MM potential model gives reasonable properties of HP clusters, including geometries, interaction energies, and dipole moments, when compared with the present ab initio results. Meanwhile, the average dipole moment, static dielectric constant, heat of vaporization, radial distribution function, and diffusion constant for the dynamic properties of liquid HP at 273 K and 1 atm are fairly consistent with the available experimental data. To the best of our knowledge, this is the first theoretical investigation of condensed HP. The properties of the HP monomer are studied in detail, involving the structure, torsion potentials, molecular orbital analysis, charge distribution, dipole moment, and vibrational frequency.
A 12 GHz wavelength spacing multi-wavelength laser source for wireless communication systems
NASA Astrophysics Data System (ADS)
Peng, P. C.; Shiu, R. K.; Bitew, M. A.; Chang, T. L.; Lai, C. H.; Junior, J. I.
2017-08-01
This paper presents a multi-wavelength laser source with 12 GHz wavelength spacing based on a single distributed feedback laser. A light wave generated from the distributed feedback laser is fed into a frequency shifter loop consisting of a 50:50 coupler, a dual-parallel Mach-Zehnder modulator, an optical amplifier, an optical filter, and a polarization controller. The frequency of the input wavelength is shifted and then re-injected into the frequency shifter loop. By re-injecting the shifted wavelengths multiple times, we have generated 84 optical carriers with 12 GHz wavelength spacing and stable output power. For each channel, two wavelengths are modulated with wireless data using the phase modulator and transmitted through a 25 km single mode fiber. In contrast to previously developed schemes, the proposed laser source does not suffer from the DC bias drift problem. Moreover, it is a good candidate for radio-over-fiber systems to support multiple users using a single distributed feedback laser.
NASA Astrophysics Data System (ADS)
Shiki, Akira; Yokoyama, Akihiko; Baba, Jyunpei; Takano, Tomihiro; Gouda, Takahiro; Izui, Yoshio
Recently, because of environmental burden mitigation, energy conservation, energy security, and cost reduction, distributed generation is attracting strong attention. Distributed generators (DGs) have already been installed in distribution systems, and many more DGs are expected to be connected in the future. On the other hand, a new concept called "Microgrid", a small power supply network consisting only of DGs, has been proposed, and some prototype projects are ongoing in Japan. The purpose of this paper is to develop a three-phase instantaneous-value digital simulator of a microgrid consisting of many inverter-based DGs and to develop a supply and demand control method for an isolated microgrid. First, the microgrid is modeled using MATLAB/SIMULINK. We develop three-phase instantaneous-value models of an inverter-type CVCF generator, a PQ-specified generator, a PV-specified generator, and a PQ-specified load, representing a storage battery, photovoltaic generation, a fuel cell, and an inverter load, respectively. Then we propose an autonomous decentralized control method for supply and demand in an isolated microgrid where storage batteries, fuel cells, photovoltaic generation, and loads are connected. It is proposed here that the system frequency be used as a means to control DG output. By changing the frequency of the storage battery inverter in response to the supply-demand imbalance, all inverter-based DGs detect the frequency fluctuation and change their own outputs. Finally, a new frequency control method within the autonomous decentralized control of supply and demand is proposed. Although the frequency is used to transmit information on the supply-demand imbalance to the DGs, once it has played this role, it must finally return to a standard value. To return the frequency to the standard value, the characteristic curve of the fuel cell is shifted in parallel, and this control is carried out in response to load fluctuations. The simulation shows that the frequency can be controlled well and demonstrates the effectiveness of the frequency control system.
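The frequency-signalling idea can be sketched as a tiny discrete-time loop: the storage battery absorbs the instantaneous supply-demand imbalance and shifts the frequency in proportion, a fuel cell responds through a droop characteristic, and the fuel-cell curve is slowly shifted to bring the frequency back to its standard value. All gains and power values below are hypothetical and are not taken from the paper's three-phase instantaneous-value model.

```python
# One-bus energy-balance sketch with hypothetical parameters.
f_nom = 50.0        # Hz, standard frequency
k_batt = 0.02       # Hz per kW of battery power (imbalance -> frequency signal)
k_droop = 40.0      # kW per Hz, fuel-cell droop response
k_restore = 5.0     # kW per Hz per step, parallel shift of the fuel-cell curve

load, pv, fc_set = 80.0, 30.0, 40.0   # kW
f = f_nom
for _ in range(200):
    p_fc = fc_set - k_droop * (f - f_nom)   # fuel cell reacts to the frequency deviation
    p_batt = load - pv - p_fc               # battery covers the residual imbalance
    f = f_nom - k_batt * p_batt             # battery inverter encodes imbalance in frequency
    fc_set += k_restore * (f_nom - f)       # shift the fuel-cell curve to restore 50 Hz
print(f"f = {f:.3f} Hz, fuel cell = {p_fc:.1f} kW, battery = {p_batt:.2f} kW")
```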
Goldszal, A F; Brown, G K; McDonald, H J; Vucich, J J; Staab, E V
2001-06-01
In this work, we describe the digital imaging network (DIN), picture archival and communication system (PACS), and radiology information system (RIS) currently being implemented at the Clinical Center, National Institutes of Health (NIH). These systems are presently in clinical operation. The DIN is a redundant meshed network designed to address gigabit density and expected high bandwidth requirements for image transfer and server aggregation. The PACS projected workload is 5.0 TB of new imaging data per year. Its architecture consists of a central, high-throughput Digital Imaging and Communications in Medicine (DICOM) data repository and distributed redundant array of inexpensive disks (RAID) servers employing fiber-channel technology for immediate delivery of imaging data. On-demand distribution of images and reports to clinicians and researchers is accomplished via a clustered web server. The RIS follows a client-server model and provides tools to order exams, schedule resources, retrieve and review results, and generate management reports. The RIS-hospital information system (HIS) interfaces include admissions, discharges, and transfers (ADTs)/demographics, orders, appointment notifications, doctor updates, and results.
Maintaining Consistency in Distributed Systems
1991-11-01
Concurrency of this type is readily controlled using synchronization tools such as monitors or semaphores, which are a standard part of most threads ... It is suggested that these issues are often best solved using traditional synchronization constructs, such as monitors and semaphores, and that ... such data structures would normally arise within individual programs and be controlled using mutual exclusion constructs, such as semaphores and monitors.
Distributed File System Utilities to Manage Large Datasets, Version 0.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-05-21
FileUtils provides a suite of tools to manage large datasets typically created by large parallel MPI applications. They are written in C and use standard POSIX I/O calls. The current suite consists of tools to copy, compare, remove, and list. The tools provide dramatic speedup over existing Linux tools, which often run as a single process.
Statistical analysis of the surface figure of the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Lightsey, Paul A.; Chaney, David; Gallagher, Benjamin B.; Brown, Bob J.; Smith, Koby; Schwenker, John
2012-09-01
The performance of an optical system is best characterized by either the point spread function (PSF) or the optical transfer function (OTF). However, for system budgeting purposes, it is convenient to use a single scalar metric, or a combination of a few scalar metrics, to track performance. For the James Webb Space Telescope, the Observatory-level requirements were expressed in the metrics of Strehl ratio and encircled energy. These in turn were converted to the metrics of total rms WFE and rms WFE within spatial frequency domains. The 18 individual mirror segments for the primary mirror segment assemblies (PMSA), the secondary mirror (SM), the tertiary mirror (TM), and the Fine Steering Mirror have all been fabricated. They are polished beryllium mirrors with a protected gold reflective coating. The resulting surface figure errors of these mirrors have been analyzed statistically. The average spatial frequency distribution and the mirror-to-mirror consistency of the spatial frequency distribution are reported. The results provide insight into system budgeting processes for similar optical systems.
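The conversion from a measured surface map to these scalar budget metrics can be sketched as follows. The snippet computes the total rms WFE, the rms WFE in a few spatial-frequency bands via an FFT, and an approximate Strehl ratio from the Marechal formula S ≈ exp(-(2πσ/λ)²). The band edges, wavelength, and test map are illustrative and are not the JWST budget values.

```python
import numpy as np

def wfe_metrics(surface, dx, wavelength=2.0e-6,
                bands=((0, 5), (5, 30), (30, np.inf))):
    """Scalar budget metrics from a wavefront-error map (m): total rms,
    per-band rms (band edges in cycles per aperture), and approximate Strehl."""
    n = surface.shape[0]
    total_rms = surface.std()
    spec = np.fft.fftshift(np.fft.fft2(surface - surface.mean())) / surface.size
    fx = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * (n * dx)   # cycles per aperture
    fr = np.hypot(*np.meshgrid(fx, fx))
    power = np.abs(spec) ** 2                                  # Parseval: sums to variance
    band_rms = [np.sqrt(power[(fr >= lo) & (fr < hi)].sum()) for lo, hi in bands]
    strehl = np.exp(-(2.0 * np.pi * total_rms / wavelength) ** 2)
    return total_rms, band_rms, strehl

rng = np.random.default_rng(0)
surf = rng.normal(scale=30e-9, size=(128, 128))   # toy 30 nm rms map
print(wfe_metrics(surf, dx=1.0 / 128))
```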
Laser SRS tracker for reverse prototyping tasks
NASA Astrophysics Data System (ADS)
Kolmakov, Egor; Redka, Dmitriy; Grishkanich, Aleksandr; Tsvetkov, Konstantin
2017-10-01
Given the current great interest in Large-Scale Metrology applications in many different fields of manufacturing industry, technologies and techniques for dimensional measurement have recently shown substantial improvement. Ease of use, logistic and economic issues, as well as metrological performance, are assuming a more and more important role among system requirements. The project is planned to conduct experimental studies aimed at identifying the impact of applying the basic laws of chip lasers and microlasers as radiators on the linear-angular characteristics of existing measurement systems. The system consists of a distributed network-based layout, whose modularity allows it to fit differently sized and shaped working volumes by adequately increasing the number of sensing units. Differently from existing spatially distributed metrological instruments, the remote sensor devices are intended to provide embedded data elaboration capabilities, in order to share the overall computational load.
Design of the PET-MR system for head imaging of the DREAM Project
NASA Astrophysics Data System (ADS)
González, A. J.; Conde, P.; Hernández, L.; Herrero, V.; Moliner, L.; Monzó, J. M.; Orero, A.; Peiró, A.; Rodríguez-Álvarez, M. J.; Ros, A.; Sánchez, F.; Soriano, A.; Vidal, L. F.; Benlloch, J. M.
2013-02-01
In this paper we describe the overall design of a PET-MR system for head imaging within the framework of the DREAM Project, as well as the first detector module tests. The PET system design consists of 4 rings of 16 detector modules each, and it is expected to be integrated in a head-dedicated radio frequency coil of an MR scanner. The PET modules are based on monolithic LYSO crystals coupled by means of optical devices to an array of 256 silicon photomultipliers. These crystals preserve the scintillation light distribution and thus allow the exact photon impact position to be recovered through the proper characterization of that distribution. Every module contains 4 Application Specific Integrated Circuits (ASICs) which return detailed information on several statistical moments of the light distribution. The preliminary tests carried out on this design and controlled by means of the ASICs have shown promising results regarding the suitability of hybrid PET-MR systems.
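As a baseline for how an impact position can be recovered from the light spread over the photodetector array, the sketch below computes a simple Anger-logic centroid on a toy 16x16 SiPM readout (the pitch and spot parameters are hypothetical). The DREAM modules go further and characterize the full light distribution through its statistical moments rather than using only the centroid.

```python
import numpy as np

def centroid_position(sipm_counts, pitch_mm=3.0):
    """First-order estimate of the photon impact position on a monolithic
    crystal from an SiPM array: a simple Anger-logic centroid."""
    n = sipm_counts.shape[0]
    coords = (np.arange(n) - (n - 1) / 2.0) * pitch_mm
    total = sipm_counts.sum()
    x = (sipm_counts.sum(axis=0) * coords).sum() / total
    y = (sipm_counts.sum(axis=1) * coords).sum() / total
    return x, y

# Toy 16x16 readout: Gaussian light spot centred 4.5 mm, -1.5 mm off axis
xx, yy = np.meshgrid(np.arange(16), np.arange(16))
spot = np.exp(-(((xx - 9.0) ** 2 + (yy - 7.0) ** 2) / 8.0))
print(centroid_position(spot))
```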
Optimal Operation Method of Smart House by Controllable Loads based on Smart Grid Topology
NASA Astrophysics Data System (ADS)
Yoza, Akihiro; Uchida, Kosuke; Yona, Atsushi; Senju, Tomonobu
2013-08-01
From the perspective of global warming suppression and the depletion of energy resources, renewable energy sources such as wind generation (WG) and photovoltaic generation (PV) are receiving attention in distribution systems. Additionally, all-electric apartment houses and residences such as the DC smart house have increased in recent years. However, due to fluctuating power from renewable energy sources and loads, supply-demand balancing fluctuations of the power system become problematic. Therefore, the "smart grid" has become very popular worldwide. This article presents a methodology for the optimal operation of a smart grid to minimize the interconnection point power flow fluctuations. To achieve the proposed optimal operation, we use distributed controllable loads such as a battery and a heat pump. By minimizing the interconnection point power flow fluctuations, it is possible to reduce the maximum electric power consumption and the electricity cost. The system consists of a photovoltaic generator, a heat pump, a battery, a solar collector, and a load. In order to verify the effectiveness of the proposed system, MATLAB is used in simulations.
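A much-reduced sketch of the underlying idea, with entirely hypothetical profiles and battery parameters, is to dispatch the battery so that the interconnection-point flow stays near a target level, which directly reduces its fluctuations. The paper formulates this as an optimization over several controllable loads rather than the simple rule used here.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(96)                                # 15-min steps over one day
load = 3.0 + 1.5 * np.sin(2 * np.pi * (t - 70) / 96) + 0.3 * rng.standard_normal(96)
pv = np.clip(4.0 * np.sin(np.pi * (t - 28) / 48), 0, None)
net = load - pv                                  # interconnection flow without battery (kW)

target = net.mean()
soc, cap, p_max = 5.0, 10.0, 2.0                 # kWh stored, capacity, power limit
flow = []
for p in net:
    p_batt = np.clip(p - target, -p_max, p_max)          # discharge when above target
    p_batt = np.clip(p_batt, (soc - cap) * 4, soc * 4)   # respect state of charge (15-min steps)
    soc -= p_batt / 4.0
    flow.append(p - p_batt)
print(f"flow std without battery: {net.std():.2f} kW, with battery: {np.std(flow):.2f} kW")
```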
Single phase inverter for a three phase power generation and distribution system
NASA Technical Reports Server (NTRS)
Lindena, S. J.
1976-01-01
A breadboard design of a single-phase inverter with sinusoidal output voltage for a three-phase power generation and distribution system was developed. The three-phase system consists of three single-phase inverters, whose output voltages are connected in a delta configuration. Upon failure of one inverter the two remaining inverters will continue to deliver three-phase power. Parallel redundancy as offered by two three-phase inverters is substituted by one three-phase inverter assembly with high savings in volume, weight, components count and complexity, and a considerable increase in reliability. The following requirements must be met: (1) Each single-phase, current-fed inverter must be capable of being synchronized to a three-phase reference system such that its output voltage remains phaselocked to its respective reference voltage. (2) Each single-phase, current-fed inverter must be capable of accepting leading and lagging power factors over a range from -0.7 through 1 to +0.7.
Disordered artificial spin ices: Avalanches and criticality (invited)
NASA Astrophysics Data System (ADS)
Reichhardt, Cynthia J. Olson; Chern, Gia-Wei; Libál, Andras; Reichhardt, Charles
2015-05-01
We show that square and kagome artificial spin ices with disconnected islands exhibit disorder-induced nonequilibrium phase transitions. The critical point of the transition is characterized by a diverging length scale and the effective spin reconfiguration avalanche sizes are power-law distributed. For weak disorder, the magnetization reversal is dominated by system-spanning avalanche events characteristic of a supercritical regime, while at strong disorder, the avalanche distributions have subcritical behavior and are cut off above a length scale that decreases with increasing disorder. The different type of geometrical frustration in the two lattices produces distinct forms of critical avalanche behavior. Avalanches in the square ice consist of the propagation of locally stable domain walls separating the two polarized ground states, and we find a scaling collapse consistent with an interface depinning mechanism. In the fully frustrated kagome ice, however, the avalanches branch strongly in a manner reminiscent of directed percolation. We also observe an interesting crossover in the power-law scaling of the kagome ice avalanches at low disorder. Our results show that artificial spin ices are ideal systems in which to study a variety of nonequilibrium critical point phenomena as the microscopic degrees of freedom can be accessed directly in experiments.
Hassan, Sergio A
2012-08-21
A self-consistent method is presented for the calculation of the local dielectric permittivity and electrostatic potential generated by a solute of arbitrary shape and charge distribution in a polar and polarizable liquid. The structure and dynamic behavior of the liquid at the solute/liquid interface determine the spatial variations of the density and the dielectric response. Emphasis here is on the treatment of the interface. The method is an extension of conventional methods used in continuum protein electrostatics, and can be used to estimate changes in the static dielectric response of the liquid as it adapts to charge redistribution within the solute. This is most relevant in the context of polarizable force fields, during electron structure optimization in quantum chemical calculations, or upon charge transfer. The method is computationally efficient and well suited for code parallelization, and can be used for on-the-fly calculations of the local permittivity in dynamics simulations of systems with large and heterogeneous charge distributions, such as proteins, nucleic acids, and polyelectrolytes. Numerical calculation of the system free energy is discussed for the general case of a liquid with field-dependent dielectric response.
Visualization of superparamagnetic nanoparticles in vascular tissue using XμCT and histology.
Tietze, Rainer; Rahn, Helene; Lyer, Stefan; Schreiber, Eveline; Mann, Jenny; Odenbach, Stefan; Alexiou, Christoph
2011-02-01
In order to increase the dose of antineoplastic agents in the tumor area, the concept of magnetic drug targeting (MDT) has been developed. Magnetic nanoparticles consisting of iron oxide and a biocompatible cover layer, suspended in an aqueous solution (ferrofluid), serve as carriers for chemotherapeutics and are enriched in the desired body compartment (i.e., the tumor) by an external magnetic field after intra-arterial application. We established an ex vivo model to simulate in vivo conditions, in which magnetic iron oxide nanoparticles in a circulating system pass through an intact bovine artery and are focused by an external magnetic field, to study their distribution in the vessel. Micro-computed X-ray tomography (XμCT) and histology can elucidate the arrangement of these particles after application. XμCT analysis has been performed on arterial sections after MDT in order to determine the distribution of the nanoparticles. These measurements were carried out with a cone X-ray source, and corresponding histological sections were stained with Prussian blue. It could be shown that combining XμCT and histology offers the opportunity for a better understanding of the mechanisms of nanoparticle deposition in the vascular system after MDT.
NASA Technical Reports Server (NTRS)
Prust, Chet D.; Haufler, W. A.; Marino, A. J.
1988-01-01
The results of the Independent Orbiter Assessment (IOA) of the Failure Modes and Effects Analysis (FMEA) and Critical Items List (CIL) are presented. The IOA effort first completed an analysis of the Orbital Maneuvering System (OMS) hardware and Electrical Power Distribution and Control (EPD and C), generating draft failure modes and potential critical items. To preserve independence, this analysis was accomplished without reliance upon the results contained within the NASA FMEA/CIL documentation. The IOA results were then compared to the proposed Post 51-L NASA FMEA/CIL baseline. This report documents the results of that comparison for the Orbiter OMS hardware. The IOA analysis defined the OMS as being comprised of the following subsystems: helium pressurization, propellant storage and distribution, Orbital Maneuvering Engine, and EPD and C. The IOA product for the OMS analysis consisted of 284 hardware and 667 EPD and C failure mode worksheets that resulted in 160 hardware and 216 EPD and C potential critical items (PCIs) being identified. A comparison was made of the IOA product to the NASA FMEA/CIL baseline which consisted of 101 hardware and 142 EPD and C CIL items.
Software architecture for a distributed real-time system in Ada, with application to telerobotics
NASA Technical Reports Server (NTRS)
Olsen, Douglas R.; Messiora, Steve; Leake, Stephen
1992-01-01
The architecture and software design methodology presented are described in the context of a telerobotic application in Ada, specifically the Engineering Test Bed (ETB), which was developed to support the Flight Telerobotic Servicer (FTS) Program at GSFC. However, the nature of the architecture is such that it has applications to any multiprocessor distributed real-time system. The ETB architecture, which is a derivation of the NASA/NBS Standard Reference Model (NASREM), defines a hierarchy for representing a telerobot system. Within this hierarchy, a module is a logical entity consisting of the software associated with a set of related hardware components in the robot system. A module is comprised of submodules, which are cyclically executing processes that each perform a specific set of functions. The submodules in a module can run on separate processors. The submodules in the system communicate via command/status (C/S) interface channels, which are used to send commands down and relay status back up the system hierarchy. Submodules also communicate via setpoint data links, which are used to transfer control data from one submodule to another. A submodule invokes submodule algorithms (SMA's) to perform algorithmic operations. Data that describe or model a physical component of the system are stored as objects in the World Model (WM). The WM is a system-wide distributed database that is accessible to submodules in all modules of the system for creating, reading, and writing objects.
Operation and maintenance of the SOL-DANCE building solar system. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1980-07-29
The Sol-Dance building solar heating system consists of 136 flat plate solar collectors divided evenly into two separate building systems, each providing its total output to a common thermal storage tank. An aromatic base transformer oil is circulated through a closed loop consisting of the collectors and a heat exchanger. Water from the thermal storage tank is passed through the same heat exchanger where heat from the oil is given up to the thermal storage. Back-up heat is provided by air source heat pumps. Heat is transferred from the thermal storage to the living space by liquid-to-air coils in the distribution ducts. Separate domestic hot water systems are provided for each building. The system consists of 2 flat plate collectors with a single 66 gallon storage tank with oil circulated in a closed loop through an external tube and shell heat exchanger. Some problems encountered and lessons learned during the project construction are listed as well as beneficial aspects and a project description. As-built drawings are provided as well as system photographs. An acceptance test plan is provided that checks the collection, thermal storage, and space and water heating subsystems and the total system installation. Predicted performance data are tabulated. Details are discussed regarding operation, maintenance, and repair, and manufacturers data are provided. (LEW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schrottke, L., E-mail: lutz@pdi-berlin.de; Lü, X.; Grahn, H. T.
We present a self-consistent model for carrier transport in periodic semiconductor heterostructures completely formulated in the Fourier domain. In addition to the Hamiltonian for the layer system, all expressions for the scattering rates, the applied electric field, and the carrier distribution are treated in reciprocal space. In particular, for slowly converging cases of the self-consistent solution of the Schrödinger and Poisson equations, numerous transformations between real and reciprocal space during the iterations can be avoided by using the presented method, which results in a significant reduction of computation time. Therefore, it is a promising tool for the simulation and efficient design of complex heterostructures such as terahertz quantum-cascade lasers.
Potgieter, Sarah; Pinto, Ameet; Sigudu, Makhosazana; du Preez, Hein; Ncube, Esper; Venter, Stephanus
2018-08-01
Long-term spatial-temporal investigations of microbial dynamics in full-scale drinking water distribution systems are scarce. These investigations can reveal the process, infrastructure, and environmental factors that influence the microbial community, offering opportunities to re-think microbial management in drinking water systems. Often, these insights are missed or are unreliable in short-term studies, which are impacted by stochastic variabilities inherent to large full-scale systems. In this two-year study, we investigated the spatial and temporal dynamics of the microbial community in a large, full scale South African drinking water distribution system that uses three successive disinfection strategies (i.e. chlorination, chloramination and hypochlorination). Monthly bulk water samples were collected from the outlet of the treatment plant and from 17 points in the distribution system spanning nearly 150 km and the bacterial community composition was characterised by Illumina MiSeq sequencing of the V4 hypervariable region of the 16S rRNA gene. Like previous studies, Alpha- and Betaproteobacteria dominated the drinking water bacterial communities, with an increase in Betaproteobacteria post-chloramination. In contrast with previous reports, the observed richness, diversity, and evenness of the bacterial communities were higher in the winter months as opposed to the summer months in this study. In addition to temperature effects, the seasonal variations were also likely to be influenced by changes in average water age in the distribution system and corresponding changes in disinfectant residual concentrations. Spatial dynamics of the bacterial communities indicated distance decay, with bacterial communities becoming increasingly dissimilar with increasing distance between sampling locations. These spatial effects dampened the temporal changes in the bulk water community and were the dominant factor when considering the entire distribution system. However, temporal variations were consistently stronger as compared to spatial changes at individual sampling locations and demonstrated seasonality. This study emphasises the need for long-term studies to comprehensively understand the temporal patterns that would otherwise be missed in short-term investigations. Furthermore, systematic long-term investigations are particularly critical towards determining the impact of changes in source water quality, environmental conditions, and process operations on the changes in microbial community composition in the drinking water distribution system. Copyright © 2018 Elsevier Ltd. All rights reserved.
Cyber physical system based on resilient ICT
NASA Astrophysics Data System (ADS)
Iwatsuki, Katsumi
2016-02-01
While the development of science and technology has built up a sophisticated civilized society, it has also resulted in quite a few disadvantages for the global environment and human society. A common recognition of the need for a sustainable development society, attaching greater importance to a symbiotic relationship with nature and to social ethics, has been increasingly shared worldwide. After the Great East Japan Earthquake, it is indispensable for sustainable social development to enhance society's capacity for resistance to, and restoration from, natural disasters, the so-called "resilient society". Our society consists of various Cyber Physical Systems (CPSs) that are formed by fusing physical systems with Information Communication Technology (ICT). We describe the proposed structure of a CPS intended to realize a resilient society. The configuration of a resilient CPS consisting of ICT and a physical system is discussed, introducing an "autonomous, distributed, and cooperative" structure in which the ICT and physical subsystems are coordinated and made to cooperate through a Business Continuity Planning (BCP) engine. We show a disaster response information system and an energy network as examples of a BCP engine and a resilient CPS, respectively. We also propose the structure and key technology of resilient ICT.
Stick-slip behavior in a continuum-granular experiment.
Geller, Drew A; Ecke, Robert E; Dahmen, Karin A; Backhaus, Scott
2015-12-01
We report moment distribution results from a laboratory experiment, similar in character to an isolated strike-slip earthquake fault, consisting of sheared elastic plates separated by a narrow gap filled with a two-dimensional granular medium. Local measurement of strain displacements of the plates at 203 spatial points located adjacent to the gap allows direct determination of the event moments and their spatial and temporal distributions. We show that events consist of spatially coherent, larger motions and spatially extended (noncoherent), smaller events. The noncoherent events have a probability distribution of event moment consistent with an M^(-3/2) power-law scaling and Poisson-distributed recurrence times. Coherent events have a log-normal moment distribution and mean temporal recurrence. As the applied normal pressure increases, there are more coherent events and their log-normal distribution broadens and shifts to larger average moment.
Architecture for reactive planning of robot actions
NASA Astrophysics Data System (ADS)
Riekki, Jukka P.; Roening, Juha
1995-01-01
In this article, a reactive system for planning robot actions is described. The described hierarchical control system architecture consists of planning-executing-monitoring-modelling elements (PEMM elements). A PEMM element is a goal-oriented, combined processing and data element. It includes a planner, an executor, a monitor, a modeler, and a local model. The elements form a tree-like structure. An element receives tasks from its ancestor and sends subtasks to its descendants. The model knowledge is distributed into the local models, which are connected to each other. The elements can be synchronized. The PEMM architecture is strictly hierarchical. It integrates planning, sensing, and modelling into a single framework. A PEMM-based control system is reactive, as it can cope with asynchronous events and operate under time constraints. The control system is intended to be used primarily to control mobile robots and robot manipulators in dynamic and partially unknown environments. It is especially suitable for applications consisting of physically separated devices and computing resources.
Remote online monitoring and measuring system for civil engineering structures
NASA Astrophysics Data System (ADS)
Kujawińska, Malgorzata; Sitnik, Robert; Dymny, Grzegorz; Karaszewski, Maciej; Michoński, Kuba; Krzesłowski, Jakub; Mularczyk, Krzysztof; Bolewicki, Paweł
2009-06-01
In this paper a distributed intelligent system for on-line measurement, remote monitoring, and data archiving of civil engineering structures is presented. The system consists of a set of optical, full-field displacement sensors connected to a controlling server. The server conducts measurements according to a list of scheduled tasks and stores the primary data or initial results in a remote centralized database. Simultaneously, the server performs checks, ordered by the operator, which may in turn result in an alert or a specific action. The structure of the whole system is analyzed, along with a discussion of possible fields of application and the ways to provide relevant security during data transport. Finally, a working implementation consisting of fringe projection, geometrical moiré, digital image correlation, and grating interferometry sensors and an Oracle XE database is presented. The results from the database, utilized for on-line monitoring of a threshold value of strain for an exemplary area of interest on the engineering structure, are presented and discussed.
Point-of-entry drinking-water treatment systems for Superfund applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chambers, C.D.; Janszen, T.A.
1989-06-01
The U.S. Environmental Protection Agency (EPA) and State Superfund agencies need a technical manual to assist their personnel in the selection of an effective drinking-water treatment system for individual households in areas where the drinking water has been adversely affected by Superfund site contaminants and no other alternative water supply is available or feasible. Commercially available water treatment systems for individual households are of two basic types: point-of-use (POU) and point-of-entry (POE). A POU device consists of equipment applied to selected water taps to reduce contaminants at each tap. A POE device consists of equipment to reduce the contaminants in the water distributed throughout the entire structure of a house. The study was initiated to collect monitoring, operation and maintenance, performance, and design data on existing Superfund POE water-treatment systems. Evaluation of the collected data showed that the existing data are not sufficient for the preparation of a technical assistance document to meet the objectives of EPA and State Superfund personnel.
Embedded real-time operating system micro kernel design
NASA Astrophysics Data System (ADS)
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed, consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management, and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function, and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
Nobukawa, Teruyoshi; Nomura, Takanori
2016-09-05
A holographic data storage system using digital holography is proposed to record and retrieve multilevel complex amplitude data pages. Digital holographic techniques are capable of modulating and detecting complex amplitude distributions using current electronic devices. These techniques allow the development of a simple, compact, and stable holographic storage system that mainly consists of a single phase-only spatial light modulator and an image sensor. As a proof-of-principle experiment, complex amplitude data pages with binary amplitude and four-level phase are recorded and retrieved. Experimental results show the feasibility of the proposed holographic data storage system.
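The multilevel encoding itself is easy to illustrate numerically. The sketch below builds a data page with binary amplitude and four-level phase, adds noise to stand in for a retrieved field, and decodes the symbols; the optical modulation and detection with the phase-only SLM and image sensor are of course not reproduced, and the page size and noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Encode: 1 bit/symbol in amplitude, 2 bits/symbol in phase
amp_bits = rng.integers(0, 2, size=(64, 64))
phase_sym = rng.integers(0, 4, size=(64, 64))
page = amp_bits * np.exp(1j * phase_sym * np.pi / 2)   # complex data page

# Decode from a "retrieved" complex field (ideal page plus complex noise)
retrieved = page + 0.05 * (rng.standard_normal(page.shape)
                           + 1j * rng.standard_normal(page.shape))
amp_hat = (np.abs(retrieved) > 0.5).astype(int)
phase_hat = np.mod(np.round(np.angle(retrieved) / (np.pi / 2)), 4).astype(int)
# Phase is only meaningful where the amplitude bit is 1
errors = np.sum(amp_hat != amp_bits) + np.sum((phase_hat != phase_sym) & (amp_bits == 1))
print("symbol errors:", errors)
```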
System Simulation by Recursive Feedback: Coupling A Set of Stand-Alone Subsystem Simulations
NASA Technical Reports Server (NTRS)
Nixon, Douglas D.; Hanson, John M. (Technical Monitor)
2002-01-01
Recursive feedback is defined and discussed as a framework for development of specific algorithms and procedures that propagate the time-domain solution for a dynamical system simulation consisting of multiple numerically coupled, self-contained, stand-alone subsystem simulations. A satellite motion example containing three subsystems (orbit dynamics, attitude dynamics, and aerodynamics) has been defined and constructed using this approach. Conventional solution methods are used in the subsystem simulations. Centralized and distributed versions of the coupling structure have been addressed. Numerical results are evaluated by direct comparison with a standard total-system simultaneous-solution approach.
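A minimal sketch of the recursive-feedback idea, using two placeholder linear subsystems rather than the satellite example: each stand-alone simulation is run over the whole interval using the other subsystem's time history from the previous pass, and the passes are repeated until the exchanged histories stop changing.

```python
import numpy as np

t = np.linspace(0.0, 2.0, 201)
dt = t[1] - t[0]

def run_subsystem(rate, forcing, x0):
    """Stand-alone forward-Euler simulation of x' = -rate*x + forcing(t)."""
    x = np.empty_like(t)
    x[0] = x0
    for k in range(len(t) - 1):
        x[k + 1] = x[k] + dt * (-rate * x[k] + forcing[k])
    return x

# Recursive feedback: re-run both stand-alone simulations over the whole
# interval, each driven by the other's history from the previous pass.
x_hist = np.zeros_like(t)
y_hist = np.zeros_like(t)
for iteration in range(30):
    x_new = run_subsystem(1.0, y_hist, x0=1.0)
    y_new = run_subsystem(2.0, x_hist, x0=0.0)
    change = max(np.abs(x_new - x_hist).max(), np.abs(y_new - y_hist).max())
    x_hist, y_hist = x_new, y_new
    if change < 1e-9:
        break
print(f"converged after {iteration + 1} passes: x(T) = {x_hist[-1]:.6f}, y(T) = {y_hist[-1]:.6f}")
```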
Initial Results from the Bloomsburg University Goniometer Laboratory
NASA Technical Reports Server (NTRS)
Shepard, M. K.
2002-01-01
The Bloomsburg University Goniometer Laboratory (B.U.G. Lab) consists of three systems for studying the photometric properties of samples. The primary system is an automated goniometer capable of measuring the entire bi-directional reflectance distribution function (BRDF) of samples. Secondary systems include a reflectance spectrometer and digital video camera with macro zoom lens for characterizing and documenting other physical properties of measured samples. Works completed or in progress include the characterization of the BRDF of calibration surfaces for the 2003 Mars Exploration Rovers (MER03), Martian analog soils including JSC-Mars-1, and tests of photometric models.
Samsygina, G A; Vykhristiuk, O F
1989-01-01
The anticoagulative blood system and blood and urine fibrinolysis were studied in 95 children with pyo-inflammatory diseases (PID) and in 56 normal neonates aged 2 to 28 days. The patients afflicted with PID were distributed into 3 groups: group I included patients with uneventful localized PID, group II consisted of patients with grave PID, and group III of patients with sepsis. Hemostasis and urine fibrinolysis were compared according to 20 indicators. The intensity and involvement of certain components of the fibrinolytic and anticoagulative blood systems in PID turned out to be different and depended on the severity of the disease.