Sample records for distribution system components

  1. Hybrid solar lighting distribution systems and components

    DOEpatents

    Muhs, Jeffrey D [Lenoir City, TN]; Earl, Dennis D [Knoxville, TN]; Beshears, David L [Knoxville, TN]; Maxey, Lonnie C [Powell, TN]; Jordan, John K [Oak Ridge, TN]; Lind, Randall F [Lenoir City, TN]

    2011-07-05

    A hybrid solar lighting distribution system and components having at least one hybrid solar concentrator, at least one fiber receiver, at least one hybrid luminaire, and a light distribution system operably connected to each hybrid solar concentrator and each hybrid luminaire. A controller operates all components.

  2. Universal distribution of component frequencies in biological and technological systems

    PubMed Central

    Pang, Tin Yau; Maslov, Sergei

    2013-01-01

    Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195

  3. Efficient abstract data type components for distributed and parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastani, F.; Hilal, W.; Iyengar, S.S.

    1987-10-01

    One way of improving a software system's comprehensibility and maintainability is to decompose it into several components, each of which encapsulates some information concerning the system. These components can be classified into four categories, namely, abstract data type, functional, interface, and control components. Such a classification underscores the need for different specification, implementation, and performance-improvement methods for different types of components. This article focuses on the development of high-performance abstract data type components for distributed and parallel environments.

  4. The state-of-the-art of dc power distribution systems/components for space applications

    NASA Technical Reports Server (NTRS)

    Krauthamer, S.

    1988-01-01

    This report is a survey of the state of the art of high-voltage dc (HVdc) systems and components. This information can be used for consideration of an alternative secondary distribution (120 Vdc) system for the Space Station. All HVdc components have been prototyped or developed for terrestrial, aircraft, and spacecraft applications, and are applicable for general space application with appropriate modification and qualification. HVdc systems offer a safe, reliable, low-mass, high-efficiency, and low-EMI alternative for Space Station secondary distribution.

  5. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2014-12-16

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.
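
    The claimed flow can be sketched in a few lines. The sketch below is a loose illustration, not the patented implementation: the class names are hypothetical, CRC-32 (via `zlib.crc32`) stands in for the networking component's checksum calculation engine, and the metadata exchange is collapsed into a direct call.

```python
import zlib

class InactiveComponent:
    """Idle networking component whose checksum engine is borrowed."""
    def calculate(self, block: bytes) -> int:
        # offloaded checksum computation (CRC-32 as a stand-in engine)
        return zlib.crc32(block)

class ActiveComponent:
    """Component that actually transmits the data."""
    def send(self, block: bytes, checksum: int) -> dict:
        # data communications message carrying both payload and checksum
        return {"data": block, "checksum": checksum}

class ChecksumDistributionManager:
    def __init__(self, inactive: InactiveComponent, active: ActiveComponent):
        self.inactive, self.active = inactive, active

    def transmit(self, block: bytes) -> dict:
        # 1. delegate the checksum to the idle component's engine
        checksum = self.inactive.calculate(block)
        # 2. the active component sends the message with the returned checksum
        return self.active.send(block, checksum)

mgr = ChecksumDistributionManager(InactiveComponent(), ActiveComponent())
msg = mgr.transmit(b"payload")
print(msg["checksum"] == zlib.crc32(b"payload"))
```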

  6. Calculating a checksum with inactive networking components in a computing system

    DOEpatents

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2015-01-27

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.

  7. Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid

    NASA Technical Reports Server (NTRS)

    Armstrong, Michael J.; Blackwelder, Mark; Bollman, Andrew; Ross, Christine; Campbell, Angela; Jones, Catherine; Norman, Patrick

    2015-01-01

    The development of a wholly superconducting turboelectric distributed propulsion system presents unique opportunities for the aerospace industry. However, this transition from normally conducting systems to superconducting systems significantly increases the equipment complexity necessary to manage the electrical power systems. Because all components and systems are at a low technology readiness level (TRL), current Turboelectric Distributed Propulsion (TeDP) technology developments are driven by an ambiguous set of system-level electrical integration standards for an airborne microgrid system (Figure 1). While multiple decades' worth of advancements are still required for concept realization, current system-level studies are necessary to focus the technology development, target specific technological shortcomings, and enable accurate prediction of concept feasibility and viability. An understanding of the performance sensitivity to operating voltages and an early definition of advantageous voltage regulation standards for unconventional airborne microgrids will allow for more accurate targeting of technology development. Propulsive power-rated microgrid systems necessitate the introduction of new aircraft distribution system voltage standards. All protection, distribution, control, power conversion, generation, and cryocooling equipment is affected by voltage regulation standards. Information on the desired operating voltage and voltage regulation is required to determine nominal and maximum currents for sizing distribution and fault-isolation equipment, for developing machine topologies and machine controls, and for specifying the physical attributes of all component shielding and insulation. In short, the choice of voltage affects nearly every component and the performance of the system as a whole.

  8. Statistics of Shared Components in Complex Component Systems

    NASA Astrophysics Data System (ADS)

    Mazzolini, Andrea; Gherardi, Marco; Caselle, Michele; Cosentino Lagomarsino, Marco; Osella, Matteo

    2018-04-01

    Many complex systems are modular. Such systems can be represented as "component systems," i.e., sets of elementary components, such as LEGO bricks in LEGO sets. The bricks found in a LEGO set reflect a target architecture, which can be built following a set-specific list of instructions. In other component systems, instead, the underlying functional design and constraints are not obvious a priori, and their detection is often a challenge of both scientific and practical importance, requiring a clear understanding of component statistics. Importantly, some quantitative invariants appear to be common to many component systems, most notably a common broad distribution of component abundances, which often resembles the well-known Zipf's law. Such "laws" affect in a general and nontrivial way the component statistics, potentially hindering the identification of system-specific functional constraints or generative processes. Here, we specifically focus on the statistics of shared components, i.e., the distribution of the number of components shared by different system realizations, such as the common bricks found in different LEGO sets. To account for the effects of component heterogeneity, we consider a simple null model, which builds system realizations by random draws from a universe of possible components. Under general assumptions on abundance heterogeneity, we provide analytical estimates of component occurrence, which quantify exhaustively the statistics of shared components. Surprisingly, this simple null model can positively explain important features of empirical component-occurrence distributions obtained from large-scale data on bacterial genomes, LEGO sets, and book chapters. Specific architectural features and functional constraints can be detected from occurrence patterns as deviations from these null predictions, as we show for the illustrative case of the "core" genome in bacteria.

  9. NASA's Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    NASA's Earth Science Data Systems (ESDS) Program has evolved over the last two decades, and currently has several core and community components. Core components provide the basic operational capabilities to process, archive, manage and distribute data from NASA missions. Community components provide a path for peer-reviewed research in Earth Science Informatics to feed into the evolution of the core components. The Earth Observing System Data and Information System (EOSDIS) is a core component consisting of twelve Distributed Active Archive Centers (DAACs) and eight Science Investigator-led Processing Systems spread across the U.S. The presentation covers how the ESDS Program continues to evolve and benefits from as well as contributes to advances in Earth Science Informatics.

  10. System Lifetimes, The Memoryless Property, Euler's Constant, and Pi

    ERIC Educational Resources Information Center

    Agarwal, Anurag; Marengo, James E.; Romero, Likin Simon

    2013-01-01

    A "k"-out-of-"n" system functions as long as at least "k" of its "n" components remain operational. Assuming that component failure times are independent and identically distributed exponential random variables, we find the distribution of system failure time. After some examples, we find the limiting…
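
    The setup in this abstract is easy to simulate: a k-out-of-n system of iid Exp(λ) components fails at the (n−k+1)-th smallest component failure time, whose expectation is (1/λ)·Σ_{j=k}^{n} 1/j. A quick check with hypothetical parameters (a 3-out-of-5 system, not from the article):

```python
import random
import statistics

random.seed(1)
lam, n, k = 1.0, 5, 3   # hypothetical: 3-out-of-5 system, unit failure rate

def system_lifetime():
    times = sorted(random.expovariate(lam) for _ in range(n))
    return times[n - k]   # system fails at the (n - k + 1)-th component failure

sims = [system_lifetime() for _ in range(200_000)]

# closed form: E[T] = (1/lam) * sum_{j=k}^{n} 1/j  (here 1/3 + 1/4 + 1/5)
theory = sum(1.0 / j for j in range(k, n + 1)) / lam
print(f"simulated mean {statistics.fmean(sims):.4f} vs theory {theory:.4f}")
```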

  11. A Distributed Approach to System-Level Prognostics

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Bregon, Anibal; Roychoudhury, Indranil

    2012-01-01

    Prognostics, which deals with predicting remaining useful life of components, subsystems, and systems, is a key technology for systems health management that leads to improved safety and reliability with reduced costs. The prognostics problem is often approached from a component-centric view. However, in most cases, it is not specifically component lifetimes that are important, but, rather, the lifetimes of the systems in which these components reside. The system-level prognostics problem can be quite difficult due to the increased scale and scope of the prognostics problem and the relative lack of scalability and efficiency of typical prognostics approaches. In order to address these issues, we develop a distributed solution to the system-level prognostics problem, based on the concept of structural model decomposition. The system model is decomposed into independent submodels. Independent local prognostics subproblems are then formed based on these local submodels, resulting in a scalable, efficient, and flexible distributed approach to the system-level prognostics problem. We provide a formulation of the system-level prognostics problem and demonstrate the approach on a four-wheeled rover simulation testbed. The results show that the system-level prognostics problem can be accurately and efficiently solved in a distributed fashion.

  12. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond

    2001-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a COTS-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  13. Flight Demonstration of X-33 Vehicle Health Management System Components on the F/A-18 Systems Research Aircraft

    NASA Technical Reports Server (NTRS)

    Schweikhard, Keith A.; Richards, W. Lance; Theisen, John; Mouyos, William; Garbos, Raymond; Schkolnik, Gerald (Technical Monitor)

    1998-01-01

    The X-33 reusable launch vehicle demonstrator has identified the need to implement a vehicle health monitoring system that can acquire data that monitors system health and performance. Sanders, a Lockheed Martin Company, has designed and developed a commercial off-the-shelf (COTS)-based open architecture system that implements a number of technologies that have not been previously used in a flight environment. NASA Dryden Flight Research Center and Sanders teamed to demonstrate that the distributed remote health nodes, fiber optic distributed strain sensor, and fiber distributed data interface communications components of the X-33 vehicle health management (VHM) system could be successfully integrated and flown on a NASA F-18 aircraft. This paper briefly describes components of X-33 VHM architecture flown at Dryden and summarizes the integration and flight demonstration of these X-33 VHM components. Finally, it presents early results from the integration and flight efforts.

  14. NASA JPL Distributed Systems Technology (DST) Object-Oriented Component Approach for Software Inter-Operability and Reuse

    NASA Technical Reports Server (NTRS)

    Hall, Laverne; Hung, Chaw-Kwei; Lin, Imin

    2000-01-01

    The purpose of this paper is to describe the NASA JPL Distributed Systems Technology (DST) Section's object-oriented component approach to open, interoperable systems software development and software reuse. It addresses what is meant by the term "object component software," gives an overview of the component-based development approach and how it relates to infrastructure support of software architectures and promotes reuse, enumerates the benefits of this approach, and gives examples of application prototypes demonstrating its usage and advantages. The object-oriented component technology approach to system development and software reuse will apply to several areas within JPL, and possibly across other NASA Centers.

  15. FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)

    NASA Astrophysics Data System (ADS)

    Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.

    2017-02-01

    This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices, for example, failure rate, interruption duration, and interruption duration per year at load points. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method: one component is selected for FOR allocation, and in the next iteration another component is selected based on the magnitude of CIP. The developed algorithm is implemented on a sample radial distribution system.

  16. Software for integrated manufacturing systems, part 2

    NASA Technical Reports Server (NTRS)

    Volz, R. A.; Naylor, A. W.

    1987-01-01

    Part 1 presented an overview of the unified approach to manufacturing software. Here, the specific characteristics of the approach that allow it to realize the goals of reduced cost, increased reliability, and increased flexibility are considered: why the blending of a components view, distributed languages, generics, and formal models is important; why each individual part of this approach is essential; and why each component will typically have each of these parts. An example of a specification for a real material-handling system is presented using the approach and compared with the standard interface specification given by the manufacturer. Use of the component in a distributed manufacturing system is then compared with use of the traditional specification in a more traditional approach to designing the system. An overview is also provided of the underlying mechanisms used for implementing distributed manufacturing systems with the unified software/hardware component approach.

  17. Systems Suitable for Information Professionals.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1983-01-01

    Describes computer operating systems applicable to microcomputers, noting hardware components, advantages and disadvantages of each system, local area networks, distributed processing, and a fully configured system. Lists of hardware components (disk drives, solid state disk emulators, input/output and memory components, and processors) and…

  18. New measuring system for the distribution of a magnetic force by using an optical fiber

    NASA Astrophysics Data System (ADS)

    Ishigaki, H.; Oya, T.; Itoh, M.; Hida, A.; Iwata, K.

    1993-01-01

    A new measuring system using an optical fiber and a position-sensing photodetector was developed to measure a three-dimensional distribution of a magnetic force. A steel ball attached to a cantilever made of an optical fiber generated force in a magnetic field. The displacement of the ball due to the force was detected by a position-sensing photodetector capable of detecting the two-directional coordinates of the position. By scanning the sensing system in a magnetic field, we obtained the distribution of two directional components of the magnetic force vector. These components represent the gradient of the squared magnetic field. The usefulness of the system for measuring the magnetic field distribution in a narrow clearance and for evaluating superconducting machine components such as magnetic bearings was verified experimentally.

  19. Methodology Evaluation Framework for Component-Based System Development.

    ERIC Educational Resources Information Center

    Dahanayake, Ajantha; Sol, Henk; Stojanovic, Zoran

    2003-01-01

    Explains component-based development (CBD) for distributed information systems and presents an evaluation framework, which highlights the extent to which a methodology is component oriented. Compares prominent CBD methods, discusses ways of modeling, and suggests that this is a first step towards a components-oriented systems development…

  20. Where is the Battle-Line for Supply Contractors?

    DTIC Science & Technology

    1999-04-01

    ... military supply distribution system initiates at the Theater Distribution Management Center (TMC). ... terms of distribution success on the battlefield. There are three components which comprise the idea of distribution and distribution management. They ... throughout the distribution pipeline. Visibility is the most essential component of distribution management. History is full of examples that prove ...

  1. A distributed component framework for science data product interoperability

    NASA Technical Reports Server (NTRS)

    Crichton, D.; Hughes, S.; Kelly, S.; Hardman, S.

    2000-01-01

    Correlation of science results from multi-disciplinary communities is a difficult task. Traditionally data from science missions is archived in proprietary data systems that are not interoperable. The Object Oriented Data Technology (OODT) task at the Jet Propulsion Laboratory is working on building a distributed product server as part of a distributed component framework to allow heterogeneous data systems to communicate and share scientific results.

  2. Design challenges in nanoparticle-based platforms: Implications for targeted drug delivery systems

    NASA Astrophysics Data System (ADS)

    Mullen, Douglas Gurnett

    Characterization and control of heterogeneous distributions of nanoparticle-ligand components are major design challenges for nanoparticle-based platforms. This dissertation begins with an examination of a poly(amidoamine) (PAMAM) dendrimer-based targeted delivery platform. A folic acid targeted modular platform was developed to target human epithelial cancer cells. Although active targeting was observed in vitro, it was not found in vivo in a mouse tumor model. A major flaw of this platform design was that it did not provide for characterization or control of the component distribution. Motivated by the problems experienced with the modular design, the actual composition of nanoparticle-ligand distributions was examined using a model dendrimer-ligand system. High Pressure Liquid Chromatography (HPLC) resolved the distribution of components in samples with mean ligand/dendrimer ratios ranging from 0.4 to 13. A peak-fitting analysis enabled the quantification of the component distribution. Quantified distributions were found to be significantly more heterogeneous than commonly expected, and standard analytical parameters, namely the mean ligand/nanoparticle ratio, failed to adequately represent the component heterogeneity. The distribution of components was also found to be sensitive to particle modifications that preceded the ligand conjugation. With the knowledge gained from this detailed distribution analysis, a new platform design was developed to provide a system with dramatically improved control over the number of components and with improved batch reproducibility. Using semi-preparative HPLC, individual dendrimer-ligand components were isolated. The isolated dendrimers with precise numbers of ligands were characterized by NMR and analytical HPLC. In total, nine different dendrimer-ligand components were obtained with degrees of purity ≥80%.
This system has the potential to serve as a platform to which a precise number of functional molecules can be attached and has the potential to dramatically improve platform efficacy. An additional investigation of reproducibility challenges for current dendrimer-based platform designs is also described. The mass transport quality during the partial acetylation reaction of the dendrimer was found to have a major impact on subsequent dendrimer-ligand distributions that cannot be detected by standard analytical techniques. Consequently, this reaction should be eliminated from the platform design. Finally, optimized protocols for purification and characterization of PAMAM dendrimer were detailed.
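
    The heterogeneity quantified in this dissertation can be illustrated with an idealized model: even if ligands attached independently at random (a Poisson model, which real conjugation distributions are typically broader than), only a minority of particles would carry exactly the mean number of ligands. The mean ratio below is a hypothetical value, not a number from the work:

```python
from math import exp, factorial

mean_ratio = 5.0  # hypothetical mean ligand/dendrimer ratio

# Poisson probability that a particle carries exactly k ligands
pmf = [exp(-mean_ratio) * mean_ratio**k / factorial(k) for k in range(21)]

frac_at_mean = pmf[5]
frac_within_1 = sum(pmf[4:7])  # particles carrying 4, 5, or 6 ligands
print(f"exactly 5 ligands: {frac_at_mean:.1%}; within 1 of the mean: {frac_within_1:.1%}")
```

    Under this idealized model, fewer than one particle in five carries exactly the mean number of ligands, which is one way to see why a mean ratio alone under-describes the population.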

  3. Aerospace Systems Technical Research Operation Services (ASTROS) Industry Day (Briefing Charts)

    DTIC Science & Technology

    2014-07-01

    ... Integrated Motor Life Management; AFM 315E – Green Propellant; MCAT – Missile Component Advanced Tech; EP – Electric Propulsion ... service life estimate ... MCAT (Motor Component Assessment Technology) ... Distribution A: Approved for public release; unlimited distribution.

  4. Effect of Individual Component Life Distribution on Engine Life Prediction

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Hendricks, Robert C.; Soditus, Sherry M.

    2003-01-01

    The effect of individual engine component life distributions on engine life prediction was determined. A Weibull-based life and reliability analysis of the NASA Energy Efficient Engine was conducted. The engine's life at a 95 and 99.9 percent probability of survival was determined based upon the engine manufacturer's original life calculations and assumed values of each component's cumulative life distribution as represented by a Weibull slope. The lives of the high-pressure turbine (HPT) disks and blades were also evaluated individually and as a system in a similar manner. Knowing the statistical cumulative distribution of each engine component with reasonable engineering certainty is a condition precedent to predicting the life and reliability of an entire engine. The life of a system at a given reliability will be less than the life of the lowest-lived component in the system at the same reliability (probability of survival). Where the Weibull slopes of all the engine components are equal, the Weibull slope had a minimal effect on engine L(sub 0.1) life prediction. However, at a probability of survival of 95 percent (L(sub 5) life), life decreased with increasing Weibull slope.
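
    The series-system logic in this abstract, in which system reliability is the product of component reliabilities so that system life at a given reliability falls below that of the lowest-lived component, can be sketched numerically. The Weibull parameters below are hypothetical, not the engine's:

```python
import math

# hypothetical component Weibull parameters: (slope beta, characteristic life eta, hours)
components = [(2.0, 8000.0), (1.5, 12000.0), (3.0, 10000.0)]

def r_sys(t):
    """Series-system reliability: product of component survival probabilities."""
    return math.prod(math.exp(-(t / eta) ** beta) for beta, eta in components)

def life_at(reliability):
    """Invert R_sys(t) = reliability by bisection (R_sys is monotone decreasing)."""
    lo, hi = 0.0, 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if r_sys(mid) > reliability:
            lo = mid
        else:
            hi = mid
    return lo

def component_life(beta, eta, reliability):
    """Weibull life at a given reliability: eta * (-ln R)^(1/beta)."""
    return eta * (-math.log(reliability)) ** (1.0 / beta)

sys_L5 = life_at(0.95)
min_comp_L5 = min(component_life(b, e, 0.95) for b, e in components)
print(f"system L5 = {sys_L5:.0f} h, lowest-lived component L5 = {min_comp_L5:.0f} h")
```

    As the abstract states, the system L5 life comes out below the L5 life of even the weakest component.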

  5. The Use of Probabilistic Methods to Evaluate the Systems Impact of Component Design Improvements on Large Turbofan Engines

    NASA Technical Reports Server (NTRS)

    Packard, Michael H.

    2002-01-01

    Probabilistic Structural Analysis (PSA) is now commonly used for predicting the distribution of time/cycles to failure of turbine blades and other engine components. These distributions are typically based on fatigue/fracture and creep failure modes of these components. Additionally, reliability analysis is used for taking test data related to particular failure modes and calculating failure rate distributions of electronic and electromechanical components. How can these individual failure time distributions of structural, electronic and electromechanical component failure modes be effectively combined into a top level model for overall system evaluation of component upgrades, changes in maintenance intervals, or line replaceable unit (LRU) redesign? This paper shows an example of how various probabilistic failure predictions for turbine engine components can be evaluated and combined to show their effect on overall engine performance. A generic model of a turbofan engine was modeled using various Probabilistic Risk Assessment (PRA) tools (Quantitative Risk Assessment Software (QRAS) etc.). Hypothetical PSA results for a number of structural components along with mitigation factors that would restrict the failure mode from propagating to a Loss of Mission (LOM) failure were used in the models. The output of this program includes an overall failure distribution for LOM of the system. The rank and contribution to the overall Mission Success (MS) is also given for each failure mode and each subsystem. This application methodology demonstrates the effectiveness of PRA for assessing the performance of large turbine engines. Additionally, the effects of system changes and upgrades, the application of different maintenance intervals, inclusion of new sensor detection of faults and other upgrades were evaluated in determining overall turbine engine reliability.

  6. Business logic for geoprocessing of distributed geodata

    NASA Astrophysics Data System (ADS)

    Kiehle, Christian

    2006-12-01

    This paper describes the development of a business-logic component for the geoprocessing of distributed geodata. The business logic acts as a mediator between the data and the user, therefore playing a central role in any spatial information system. The component is used in service-oriented architectures to foster the reuse of existing geodata inventories. Based on a geoscientific case study of groundwater vulnerability assessment and mapping, the demands for such architectures are identified with special regard to software engineering tasks. Methods are derived from the field of applied Geosciences (Hydrogeology), Geoinformatics, and Software Engineering. In addition to the development of a business logic component, a forthcoming Open Geospatial Consortium (OGC) specification is introduced: the OGC Web Processing Service (WPS) specification. A sample application is introduced to demonstrate the potential of WPS for future information systems. The sample application Geoservice Groundwater Vulnerability is described in detail to provide insight into the business logic component, and demonstrate how information can be generated out of distributed geodata. This has the potential to significantly accelerate the assessment and mapping of groundwater vulnerability. The presented concept is easily transferable to other geoscientific use cases dealing with distributed data inventories. Potential application fields include web-based geoinformation systems operating on distributed data (e.g. environmental planning systems, cadastral information systems, and others).

  7. Hybrid solar lighting systems and components

    DOEpatents

    Muhs, Jeffrey D [Lenoir City, TN]; Earl, Dennis D [Knoxville, TN]; Beshears, David L [Knoxville, TN]; Maxey, Lonnie C [Powell, TN]; Jordan, John K [Oak Ridge, TN]; Lind, Randall F [Lenoir City, TN]

    2007-06-12

    A hybrid solar lighting system and components having at least one hybrid solar concentrator, at least one fiber receiver, at least one hybrid luminaire, and a light distribution system operably connected to each hybrid solar concentrator and each hybrid luminaire. A controller operates each component.

  8. Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a Particle Model

    DTIC Science & Technology

    2012-03-01

    Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a Particle Model (AFIT/GCS/ENG/12-01). Distribution is unlimited. ... challenging as the complexity of actual implementation specifics is considered. Two components common to most quantum key distribution ...

  9. Apollo experience report: Command and service module electrical power distribution subsystem

    NASA Technical Reports Server (NTRS)

    Munford, R. E.; Hendrix, B.

    1974-01-01

    A review of the design philosophy and development of the Apollo command and service module electrical power distribution subsystem, a brief history of the evolution of the total system, and some of the more significant components within the system are discussed. The electrical power distribution subsystem primarily consisted of individual control units, interconnecting units, and associated protective devices. Because each unit within the system operated more or less independently of the other units, the discussion of the subsystem proceeds generally in descending order of complexity: it begins with the total system, progresses to the individual units of the system, and concludes with the components within the units.

  10. A component-based, integrated spatially distributed hydrologic/water quality model: AgroEcoSystem-Watershed (AgES-W) overview and application

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components. The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the addition of nitrogen (N) and sediment modeling compo...

  11. Bonus-Malus System with the Claim Frequency Distribution is Geometric and the Severity Distribution is Truncated Weibull

    NASA Astrophysics Data System (ADS)

    Santi, D. N.; Purnaba, I. G. P.; Mangku, I. W.

    2016-01-01

    A Bonus-Malus system is said to be optimal if it is financially balanced for the insurance company and fair to policyholders. Previous research on Bonus-Malus systems concerned the determination of a risk premium applied to the full claim severity guaranteed by the insurance company. In fact, not every severity proposed by a policyholder is covered by the insurance company; when the insurer sets an upper bound on the severity incurred, the severity model must be modified into a bounded severity distribution. This paper discusses an optimal Bonus-Malus system composed of a claim frequency component with a geometric distribution and a severity component with a truncated Weibull distribution. The number of claims is assumed to follow a Poisson distribution whose expected number λ is exponentially distributed, so the number of claims has a geometric distribution. The severity for a given parameter θ is assumed to follow a truncated exponential distribution, and θ is modelled using the Lévy distribution, so the severity has a truncated Weibull distribution.
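The Poisson-exponential mixture argument in the abstract can be checked numerically: drawing λ from an exponential distribution with mean θ and then N from Poisson(λ) reproduces the geometric pmf P(N = k) = θ^k / (1 + θ)^(k+1). A minimal Monte Carlo sketch (θ = 2 and the sample size are illustrative choices, not values from the paper):

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's algorithm: multiply uniforms until the product drops below exp(-lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

rng = random.Random(42)
theta = 2.0          # mean of the exponential mixing distribution for lambda
n = 200_000

# Mixture: lambda ~ Exponential(mean=theta), then N | lambda ~ Poisson(lambda)
counts = {}
for _ in range(n):
    lam = rng.expovariate(1.0 / theta)
    k = poisson_sample(lam, rng)
    counts[k] = counts.get(k, 0) + 1

# Theoretical geometric pmf: P(N = k) = theta^k / (1 + theta)^(k + 1)
for k in range(5):
    empirical = counts.get(k, 0) / n
    geometric = theta**k / (1.0 + theta) ** (k + 1)
    print(f"k={k}  empirical={empirical:.4f}  geometric={geometric:.4f}")
```

The empirical frequencies should agree with the geometric pmf to within Monte Carlo error.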

  12. Failure-Time Distribution Of An m-Out-of-n System

    NASA Technical Reports Server (NTRS)

    Scheuer, Ernest M.

    1988-01-01

    Formulas for reliability are extended to more general cases. They are useful in analyses of the reliabilities of practical systems and structures, especially redundant systems of identical components among which operating loads are distributed equally.
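For the baseline case of independent, identical components with common survival probability r, the m-out-of-n system reliability is the binomial tail sum R_sys = Σ_{k=m}^{n} C(n,k) r^k (1−r)^(n−k); the report's formulas extend beyond this case. A sketch of the baseline formula:

```python
from math import comb

def m_out_of_n_reliability(m, n, r):
    """Probability that at least m of n identical, independent
    components (each surviving with probability r) are working."""
    return sum(comb(n, k) * r**k * (1 - r) ** (n - k) for k in range(m, n + 1))

# 2-out-of-3 system with component reliability 0.9:
print(round(m_out_of_n_reliability(2, 3, 0.9), 6))  # 0.972
```

Evaluating this with r = R(t) for any component survival function R(t) gives the system failure-time distribution F_sys(t) = 1 − R_sys(t).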

  13. Electronic warfare microwave components

    NASA Astrophysics Data System (ADS)

    Cosby, L. A.

    1984-09-01

    The current and projected state-of-the-art for electronic warfare (EW) microwave components is reviewed, with attention given to microwave components used extensively in EW systems for reconnaissance, threat warning, direction finding, and repeater jamming. It is emphasized that distributed EW systems must be able to operate from manned tactical and strategic platforms, with requirements including remote aerospace and space elements, as well as the need for expandable devices for detection, location, and denial/deception functions. EW coordination, or battle management, across a distributed system is a rapidly emerging requirement that must be integrated into current and projected command-and-control programs.

  14. First Experiences Using XACML for Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lorch, Marcus; Proctor, Seth; Lepro, Rebekah; Kafura, Dennis; Shah, Sumit

    2003-01-01

    Authorization systems today are increasingly complex. They span domains of administration, rely on many different authentication sources, and manage permissions that can be as complex as the system itself. Worse still, while there are many standards that define authentication mechanisms, the standards that address authorization are less well defined and tend to work only within homogeneous systems. This paper presents XACML, a standard access control language, as one component of a distributed and inter-operable authorization framework. Several emerging systems which incorporate XACML are discussed. These discussions illustrate how authorization can be deployed in distributed, decentralized systems. Finally, some new and future topics are presented to show where this work is heading and how it will help connect the general components of an authorization system.
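As a schematic illustration of the evaluation model XACML standardizes — attribute-bearing requests, rules that return Permit/Deny/NotApplicable, and combining algorithms such as deny-overrides — the following toy sketch uses invented rules and attribute names, not XACML's actual XML policy syntax:

```python
# Toy attribute-based access control in the spirit of XACML: a request
# is a dict of attributes, a rule maps a request to "Permit", "Deny",
# or "NotApplicable", and a policy combines rule results.

def deny_overrides(results):
    """XACML-style combining algorithm: any Deny wins; else Permit if any rule permits."""
    if "Deny" in results:
        return "Deny"
    if "Permit" in results:
        return "Permit"
    return "NotApplicable"

def rule_admin_write(req):
    if req.get("action") != "write":
        return "NotApplicable"
    return "Permit" if req.get("role") == "admin" else "Deny"

def rule_public_read(req):
    if req.get("action") == "read" and req.get("resource") == "public":
        return "Permit"
    return "NotApplicable"

def evaluate(policy, request):
    return deny_overrides([rule(request) for rule in policy])

policy = [rule_admin_write, rule_public_read]
print(evaluate(policy, {"role": "admin", "action": "write"}))  # Permit
print(evaluate(policy, {"role": "guest", "action": "write"}))  # Deny
```

In real XACML the rules, targets, and combining algorithm are expressed declaratively in XML and evaluated by a policy decision point, which is what makes the framework interoperable across domains.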

  15. Efficient Low-Lift Cooling with Radiant Distribution, Thermal Storage and Variable-Speed Chiller Controls Part I: Component and Subsystem Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armstrong, Peter; Jiang, Wei; Winiarski, David W.

    2009-03-31

    This paper develops component and subsystem models used to evaluate the performance of a low-lift cooling system with an air-cooled chiller optimized for variable-speed and low-pressure-ratio operation, a hydronic radiant distribution system, variable-speed transport motor controls, and peak-shifting controls.

  16. DATMAN: A reliability data analysis program using Bayesian updating

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, M.; Feltus, M.A.

    1996-12-31

    Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
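The kind of updating DATMAN automates can be illustrated with the standard conjugate pair for a constant failure rate: a Gamma(α, β) prior on the rate, combined with k observed failures over total exposure time T, gives a Gamma(α + k, β + T) posterior. A minimal sketch with invented numbers (not DATMAN's actual interface):

```python
def update_gamma_prior(alpha, beta, failures, exposure_time):
    """Conjugate Bayesian update for a Poisson failure process:
    Gamma(alpha, beta) prior on the failure rate, observed
    `failures` events over `exposure_time` hours of operation."""
    return alpha + failures, beta + exposure_time

# Prior belief: mean rate alpha/beta = 2/1000 = 0.002 failures/hour
alpha, beta = 2.0, 1000.0
# New field data: 3 failures in 4000 hours of operation
alpha, beta = update_gamma_prior(alpha, beta, 3, 4000.0)

posterior_mean_rate = alpha / beta   # (2 + 3) / (1000 + 4000)
print(posterior_mean_rate)  # 0.001
```

Each new batch of failure data simply shifts the Gamma parameters, which is why this style of updating is well suited to ongoing RCM programs.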

  17. Integration of Decentralized Thermal Storages Within District Heating (DH) Networks

    NASA Astrophysics Data System (ADS)

    Schuchardt, Georg K.

    2016-12-01

    Thermal Storages and Thermal Accumulators are an important component within District Heating (DH) systems, adding flexibility and offering additional business opportunities for these systems. Furthermore, these components have a major impact on the energy and exergy efficiency, as well as the heat losses, of the heat distribution system. In particular, the integration of Thermal Storages within ill-conditioned parts of the overall DH system enhances the efficiency of heat distribution. Using an illustrative, simplified example of a DH system, the interactions between different heat storage concepts (centralized and decentralized) and the heat losses, energy efficiencies, and exergy efficiencies are examined by considering the thermal state of the heat distribution network.

  18. Project W-320 acceptance test report for AY-farm electrical distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bevins, R.R.

    1998-04-02

    This Acceptance Test Procedure (ATP) has been prepared to demonstrate that the AY-Farm Electrical Distribution System functions as required by the design criteria. The test is divided into three parts to support the planned construction schedule: Section 8 tests Mini-Power Panel AY102-PPI and the EES; Section 9 tests the SSS support systems; Section 10 tests the SSS and the Multi-Pak Group Control Panel. This test does not include the operation of end-use components (loads) supplied from the distribution system. Tests of the end-use components (loads) will be performed by other W-320 ATPs.

  19. MIXING IN DISTRIBUTION SYSTEM STORAGE TANKS: ITS EFFECT ON WATER QUALITY

    EPA Science Inventory

    Nearly all distribution systems in the US include storage tanks and reservoirs. They are the most visible components of a water distribution system but are generally the least understood in terms of their impact on water quality. Long residence times in storage tanks can have nega...

  20. A Systematic Classification for HVAC Systems and Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Han; Chen, Yan; Zhang, Jian

    Depending on the application, the complexity of an HVAC system can range from a small fan coil unit to a large centralized air conditioning system with primary and secondary distribution loops and central plant components. Currently, the taxonomy of HVAC systems and components has various aspects, which can get quite complex because of the many components and system configurations. For example, based on the cooling and heating medium delivered to terminal units, systems can be classified as air systems, water systems, or air-water systems. In addition, some system names might be commonly used in a confusing manner, such as "unitary system" vs. "packaged system." Without a systematic classification, these components and system terminology can be confusing to understand or differentiate from each other, and this creates ambiguity in communication, interpretation, and documentation. It is valuable to organize and classify HVAC systems and components so that they can be easily understood and used in a consistent manner. This paper aims to develop a systematic classification of HVAC systems and components. First, we summarize the HVAC component information and definitions based on published literature, such as ASHRAE handbooks, regulations, and rating standards. Then, we identify common HVAC system types and map them to the collected components in a meaningful way. Classification charts are generated and described based on the component information. Six main categories are identified for the HVAC components and equipment, i.e., heating and cooling production, heat extraction and rejection, air handling process, distribution system, terminal use, and stand-alone system. Components for each main category are further analyzed and classified in detail. More than fifty system names are identified and grouped based on their characteristics. The results from this paper will be helpful for education, communication, and systems and component documentation.

  1. Guest Editor's Introduction: Special section on dependable distributed systems

    NASA Astrophysics Data System (ADS)

    Fetzer, Christof

    1999-09-01

    We rely more and more on computers. For example, the Internet reshapes the way we do business. A `computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies. There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system. Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system. Building dependable distributed systems is an extremely difficult task. 
There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures). Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of `shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components. However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. 
Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems. In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible. Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. 
There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system. References [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of ACM 34 (2) 56-78 [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
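In practice, implementations that satisfy (L) with certainty while making violations of (S) unlikely typically rely on heartbeat timeouts: a crashed process stops sending heartbeats and is eventually suspected, while a longer timeout makes false suspicion of a slow-but-alive process rarer. A minimal sketch of this idea (class and process names are invented; this is not the Raynal-Tronel protocol):

```python
import time

class HeartbeatDetector:
    """Suspect a process when no heartbeat arrived within `timeout` seconds.
    (L) holds: a crashed process stops sending and is eventually suspected.
    (S) holds only probabilistically: an alive process whose heartbeat is
    delayed past the timeout is falsely suspected."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = {}

    def heartbeat(self, process, now=None):
        self.last_heartbeat[process] = time.monotonic() if now is None else now

    def suspects(self, now=None):
        now = time.monotonic() if now is None else now
        return {p for p, t in self.last_heartbeat.items()
                if now - t > self.timeout}

detector = HeartbeatDetector(timeout=2.0)
detector.heartbeat("p1", now=0.0)
detector.heartbeat("p2", now=0.0)
detector.heartbeat("p1", now=5.0)   # p1 keeps sending; p2 has gone silent
print(detector.suspects(now=5.0))   # {'p2'}
```

Tuning `timeout` is exactly the performance/fault-tolerance trade-off the editorial describes: a larger value lowers the probability of violating (S) at the cost of slower crash detection.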

  2. 76 FR 74753 - Authority To Manufacture and Distribute Postage Evidencing Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... revision of the rules governing the inventory control processes of Postage Evidencing Systems (PES... destruction or disposal of all Postage Evidencing Systems and their components to enable accurate accounting...) Postage Evidencing System repair process--any physical or electronic access to the internal components of...

  3. Definition, analysis and development of an optical data distribution network for integrated avionics and control systems. Part 2: Component development and system integration

    NASA Technical Reports Server (NTRS)

    Yen, H. W.; Morrison, R. J.

    1984-01-01

    Fiber optic transmission is emerging as an attractive concept in data distribution onboard civil aircraft. Development of an Optical Data Distribution Network for Integrated Avionics and Control Systems for commercial aircraft will provide a data distribution network that gives freedom from EMI-RFI and ground loop problems, eliminates crosstalk and short circuits, provides protection and immunity from lightning-induced transients, and gives a large-bandwidth data transmission capability. In addition, there is the potential to significantly reduce the weight and increase the reliability over conventional data distribution networks. Wavelength Division Multiplexing (WDM) is a candidate method for data communication between the various avionic subsystems. With WDM, all systems could conceptually communicate with each other without time sharing and without requiring complicated coding schemes for each computer and subsystem to recognize a message. However, the state of the art of optical technology limits the application of fiber optics in advanced integrated avionics and control systems. Therefore, it is necessary to address the architecture for a fiber optic data distribution system for integrated avionics and control systems, as well as to develop prototype components and systems.

  4. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for a high degree of distribution, extensibility, and reusability, a framework model of a three-tier distributed network management system based on COM/COM+ and DNA is proposed, which adopts software component technology and the N-tier application software framework design approach. We also give the concrete design plan for each layer of this model. Finally, we discuss the internal running process of each layer in the distributed network management system's framework model.

  5. Intelligent Systems for Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2002-01-01

    The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.

  6. RTDS-Based Design and Simulation of Distributed P-Q Power Resources in Smart Grid

    NASA Astrophysics Data System (ADS)

    Taylor, Zachariah David

    In this thesis, we propose to utilize a battery system, together with its power electronics interfaces and bidirectional charger, as a distributed P-Q resource in power distribution networks. First, we present an optimization-based approach to operating such distributed P-Q resources based on the characteristics of the battery and charger system as well as the features and needs of the power distribution network. Then, we use the RTDS Simulator, an industry-standard simulation tool for power systems, to develop two RTDS-based design approaches. The first design is based on an ideal four-quadrant distributed P-Q power resource. The second design is based on a detailed four-quadrant distributed P-Q power resource developed using power electronics components. The hardware and power electronics circuitry, as well as the control units, are explained for the second design. After that, given the two RTDS designs, we conduct extensive RTDS simulations to assess the performance of the designed distributed P-Q power resource in an IEEE 13-bus test system. We observe that the proposed design can noticeably improve the operational performance of the power distribution grid in at least four key aspects: reducing power loss, active power peak load shaving at the substation, reactive power peak load shaving at the substation, and voltage regulation. We examine these performance measures across three design cases. Case 1: there is no P-Q power resource available on the power distribution network. Case 2: the installed P-Q power resource only supports active power, i.e., it only utilizes its battery component. Case 3: the installed P-Q power resource supports both active and reactive power, i.e., it utilizes both its battery component and its power electronics charger component. Finally, we present interpretations of the simulation results and suggest directions for future work.

  7. WATER DISTRIBUTION SYSTEMS: A SPATIAL AND COST EVALUATION

    EPA Science Inventory

    Problems associated with maintaining and replacing water supply distribution systems are reviewed. Some of these problems are associated with public health, economic and spatial development of the community, and costs of repair and replacement of system components. A repair frequ...

  8. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TM) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoring users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  9. Distributed visualization framework architecture

    NASA Astrophysics Data System (ADS)

    Mishchenko, Oleg; Raman, Sundaresan; Crawfis, Roger

    2010-01-01

    An architecture for distributed and collaborative visualization is presented. The design goals of the system are to create a lightweight, easy-to-use, and extensible framework for research in scientific visualization. The system provides both single-user and collaborative distributed environments. The system architecture employs a client-server model. Visualization projects can be synchronously accessed and modified from different client machines. We present a set of visualization use cases that illustrate the flexibility of our system. The framework provides a rich set of reusable components for creating new applications. These components make heavy use of leading design patterns. All components are based on the functionality of a small set of interfaces. This allows new components to be integrated seamlessly with little to no effort. All user input and higher-level control functionality interface with proxy objects supporting a concrete implementation of these interfaces. These lightweight objects can be easily streamed across the web and even integrated with smart clients running on a user's cell phone. The back-end is supported by concrete implementations wherever needed (for instance, for rendering). A middle tier manages any communication and synchronization with the proxy objects. In addition to the data components, we have developed several first-class GUI components for visualization. These include a layer compositor editor, a programmable shader editor, a material editor, and various drawable editors. These GUI components interact strictly with the interfaces. Access to the various entities in the system is provided by an AssetManager. The asset manager keeps track of all of the registered proxies and responds to queries on the overall system. This allows all user components to be populated automatically.
Hence if a new component is added that supports the IMaterial interface, any instances of this can be used in the various GUI components that work with this interface. One of the main features is an interactive shader designer. This allows rapid prototyping of new visualization renderings that are shader-based and greatly accelerates the development and debug cycle.
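The interface-driven registration and query pattern described above — proxies registered with an AssetManager and retrieved by interface so that GUI components populate automatically — can be sketched as follows; apart from the IMaterial interface named in the abstract, the class names are invented for illustration:

```python
class AssetManager:
    """Tracks registered proxies and answers queries by interface,
    so GUI components can populate their lists automatically."""

    def __init__(self):
        self._proxies = []

    def register(self, proxy):
        self._proxies.append(proxy)

    def query(self, interface):
        return [p for p in self._proxies if isinstance(p, interface)]

class IMaterial:
    """Marker interface for material-like components."""

class PhongMaterial(IMaterial):
    name = "phong"

class ToonMaterial(IMaterial):
    name = "toon"

class LayerCompositor:   # implements a different (non-material) interface
    name = "compositor"

manager = AssetManager()
for proxy in (PhongMaterial(), ToonMaterial(), LayerCompositor()):
    manager.register(proxy)

# A material editor can populate itself without knowing concrete types:
print([p.name for p in manager.query(IMaterial)])  # ['phong', 'toon']
```

Because the editor only depends on the interface, a newly registered `IMaterial` implementation appears in every material-aware GUI component with no further wiring.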

  10. Sample dimensionality: a predictor of order-disorder in component peak distribution in multidimensional separation.

    PubMed

    Giddings, J C

    1995-05-26

    While the use of multiple dimensions in separation systems can create very high peak capacities, the effectiveness of the enhanced peak capacity in resolving large numbers of components depends strongly on whether the distribution of component peaks is ordered or disordered. Peak overlap is common in disordered distributions, even with a very high peak capacity. It is therefore of great importance to understand the origin of peak order/disorder in multidimensional separations and to address the question of whether any control can be exerted over observed levels of order and disorder and thus separation efficacy. It is postulated here that the underlying difference between ordered and disordered distributions of component peaks in separation systems is related to sample complexity as measured by a newly defined parameter, the sample dimensionality s, and by the derivative dimensionality s'. It is concluded that the type and degree of order and disorder is determined by the relationship of s (or s') to the dimensionality n of the separation system employed. Thus for some relatively simple samples (defined as having small s values), increased order and a consequent enhancement of resolution can be realized by increasing n. The resolution enhancement is in addition to the normal gain in resolving power resulting from the increased peak capacity of multidimensional systems. However, for other samples (having even smaller s values), an increase in n provides no additional benefit in enhancing component separability.
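A related quantitative result from statistical-overlap theory (Davis and Giddings), not from this paper, shows how costly a disordered peak distribution is: if m component peaks fall at random on a one-dimensional separation of peak capacity n_c, the expected number of singlet (fully resolved) peaks is about m·exp(−2m/n_c). A quick numerical sketch:

```python
import math

def expected_singlets(m, n_c):
    """Statistical-overlap estimate of the number of single-component
    peaks when m components fall randomly on a separation of peak
    capacity n_c."""
    return m * math.exp(-2.0 * m / n_c)

# Even a generous peak capacity resolves surprisingly few of 100 components:
for n_c in (100, 500, 2500):
    print(n_c, round(expected_singlets(100, n_c), 1))
```

This is the sense in which disorder wastes peak capacity: with n_c equal to the component count, only a small fraction of components appear as clean singlets, which motivates seeking ordered distributions via matching the separation dimensionality n to the sample dimensionality s.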

  11. DETERMINANTS AND OPTIONS FOR WATER DISTRIBUTION SYSTEM MANAGEMENT: A COST EVALUATION

    EPA Science Inventory

    This report deals with the problems associated with maintaining and replacing water supply distribution systems. Some of these problems are associated with public health, economic and spatial development of the community, and costs of repair and replacement of system components. ...

  12. Getting the lead out: understanding risks in the distribution system

    EPA Science Inventory

    This presentation discusses the importance of the water distribution system as a component of the source-to-tap continuum in public health protection. Issues covered include: understanding source water quality changes and their impacts throughout the system; use of mitigation me...

  13. Design of Distributed Engine Control Systems for Stability Under Communication Packet Dropouts

    DTIC Science & Technology

    2009-08-01

    remarks. II. Distributed Engine Control Systems. A. FADEC based on Distributed Engine Control Architecture (DEC). In Distributed Engine...Control, the functions of Full Authority Digital Engine Control (FADEC) are distributed at the component level. Each sensor/actuator is to be replaced...diagnostics and health management functionality. A dual-channel digital serial communication network is used to connect these smart modules with the FADEC. Fig

  14. Star formation history: Modeling of visual binaries

    NASA Astrophysics Data System (ADS)

    Gebrehiwot, Y. M.; Tessema, S. B.; Malkov, O. Yu.; Kovaleva, D. A.; Sytov, A. Yu.; Tutukov, A. V.

    2018-05-01

    Most stars form in binary or multiple systems. Their evolution is defined by masses of components, orbital separation and eccentricity. In order to understand star formation and evolutionary processes, it is vital to find distributions of physical parameters of binaries. We have carried out Monte Carlo simulations in which we simulate different pairing scenarios: random pairing, primary-constrained pairing, split-core pairing, and total and primary pairing in order to get distributions of binaries over physical parameters at birth. Next, for comparison with observations, we account for stellar evolution and selection effects. Brightness, radius, temperature, and other parameters of components are assigned or calculated according to approximate relations for stars in different evolutionary stages (main-sequence stars, red giants, white dwarfs, relativistic objects). Evolutionary stage is defined as a function of system age and component masses. We compare our results with the observed IMF, binarity rate, and binary mass-ratio distributions for field visual binaries to find initial distributions and pairing scenarios that produce observed distributions.
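The random-pairing scenario can be sketched as a minimal Monte Carlo: draw two component masses independently from a power-law IMF and record the mass ratio q of the less to the more massive component. The Salpeter-like slope and mass limits below are generic illustrative values, not the paper's model:

```python
import random

def power_law_mass(rng, alpha=2.35, m_min=0.5, m_max=10.0):
    """Inverse-transform sample from a power-law IMF dN/dm ∝ m^(-alpha)."""
    u = rng.random()
    a = 1.0 - alpha
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

rng = random.Random(1)
ratios = []
for _ in range(100_000):
    # Random pairing: both components drawn independently from the IMF
    m1, m2 = power_law_mass(rng), power_law_mass(rng)
    if m2 > m1:
        m1, m2 = m2, m1          # primary is the more massive component
    ratios.append(m2 / m1)       # mass ratio q in (0, 1]

print(round(sum(ratios) / len(ratios), 2))   # mean mass ratio under random pairing
```

Repeating the draw under a different pairing rule (e.g. sampling q directly, as in primary-constrained pairing) yields a different q distribution, which is what allows comparison against observed binary mass-ratio statistics.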

  15. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  16. Maintaining consistency in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    In systems designed as assemblies of independently developed components, concurrent access to data or data structures normally arises within individual programs, and is controlled using mutual exclusion constructs, such as semaphores and monitors. Where data is persistent and/or sets of operations are related to one another, transactions or linearizability may be more appropriate. Systems that incorporate cooperative styles of distributed execution often replicate or distribute data within groups of components. In these cases, group-oriented consistency properties must be maintained, and tools based on the virtual synchrony execution model greatly simplify the task confronting an application developer. All three styles of distributed computing are likely to be seen in future systems - often, within the same application. This leads us to propose an integrated approach that permits applications that use virtual synchrony with concurrent objects that respect a linearizability constraint, and vice versa. Transactional subsystems are treated as a special case of linearizability.

  17. Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)

    DTIC Science & Technology

    2007-04-01

    weight will be reduced by replacing heavy harness assemblies and FADECs, with distributed processing elements interconnected. This paper reviews...Digital Electronic Controls (FADECs), with distributed processing elements interconnected through a serial bus. Efficient data flow throughout the...because intelligence is embedded in components while overall control is maintained in the FADEC. The need for Distributed Control Systems in

  18. Distributed Electrical Energy Systems: Needs, Concepts, Approaches and Vision (in Chinese)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong

    Intelligent distributed electrical energy systems (IDEES) are characterized by a vast number of system components, diversified component types, and difficulties in operation and management, with the result that the traditional centralized power system management approach no longer fits their operation. Blockchain technology is therefore believed to be one of the important feasible technical paths for building future large-scale distributed electrical energy systems. An IDEES inherently has both social and technical characteristics; as a result, a distributed electrical energy system needs to be divided into multiple layers, and at each layer a blockchain is utilized to model and manage its logical and physical functionalities. The blockchains at different layers coordinate with each other to achieve successful operation of the IDEES. Specifically, the multi-layer blockchains, named the 'blockchain group', consist of a distributed data access and service blockchain, an intelligent property management blockchain, a power system analysis blockchain, an intelligent contract operation blockchain, and an intelligent electricity trading blockchain. It is expected that the blockchain group can self-organize into a complex, autonomous, and distributed IDEES. In this complex system, frequent and in-depth interactions and computing will give rise to intelligence, which is expected to bring stable, reliable, and efficient electrical energy production, transmission, and consumption.

  19. Power management and distribution technology

    NASA Astrophysics Data System (ADS)

    Dickman, John Ellis

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  20. Power management and distribution technology

    NASA Technical Reports Server (NTRS)

    Dickman, John Ellis

    1993-01-01

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  1. Knowledge Management System Model for Learning Organisations

    ERIC Educational Resources Information Center

    Amin, Yousif; Monamad, Roshayu

    2017-01-01

    Based on the literature of knowledge management (KM), this paper reports on the progress of developing a new knowledge management system (KMS) model with a component architecture distributed over the widely-recognised socio-technical system (STS) aspects, to guide developers in selecting the most applicable components to support their KM…

  2. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of surviving the harsh engine operating environment while providing decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to easily assemble a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines. The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink.
The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.

  3. Observing System Simulation Experiment (OSSE) for the HyspIRI Spectrometer Mission

    NASA Technical Reports Server (NTRS)

    Turmon, Michael J.; Block, Gary L.; Green, Robert O.; Hua, Hook; Jacob, Joseph C.; Sobel, Harold R.; Springer, Paul L.; Zhang, Qingyuan

    2010-01-01

    The OSSE software provides an integrated end-to-end environment to simulate an Earth observing system by iteratively running a distributed modeling workflow based on the HyspIRI Mission, including atmospheric radiative transfer, surface albedo effects, detection, and retrieval for agile exploration of the mission design space. The software enables an Observing System Simulation Experiment (OSSE) and can be used for design trade space exploration of science return for proposed instruments by modeling the whole ground truth, sensing, and retrieval chain and to assess retrieval accuracy for a particular instrument and algorithm design. The OSSE infrastructure is extensible to future National Research Council (NRC) Decadal Survey concept missions where integrated modeling can improve the fidelity of coupled science and engineering analyses for systematic analysis and science return studies. This software has a distributed architecture that gives it a distinct advantage over other similar efforts. The workflow modeling components are typically legacy computer programs implemented in a variety of programming languages, including MATLAB, Excel, and FORTRAN. Integration of these diverse components is difficult and time-consuming. In order to hide this complexity, each modeling component is wrapped as a Web Service, and each component is able to pass analysis parameterizations, such as reflectance or radiance spectra, on to the next component downstream in the service workflow chain. In this way, the interface to each modeling component becomes uniform and the entire end-to-end workflow can be run using any existing or custom workflow processing engine. The architecture lets users extend workflows as new modeling components become available, chain together the components using any existing or custom workflow processing engine, and distribute them across any Internet-accessible Web Service endpoints.
The workflow components can be hosted on any Internet-accessible machine. This has the advantages that the computations can be distributed to make best use of the available computing resources, and each workflow component can be hosted and maintained by their respective domain experts.
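    The uniform-interface chaining idea above can be sketched with plain functions standing in for the Web Service wrappers. The stage names and placeholder arithmetic are illustrative assumptions, not the actual HyspIRI models:

```python
# Each modeling component sits behind the same call signature, so the
# workflow engine only has to thread one stage's output into the next.
def radiative_transfer(spectrum):
    return [v * 0.9 for v in spectrum]   # placeholder atmospheric attenuation

def detection(spectrum):
    return [v + 0.01 for v in spectrum]  # placeholder sensor offset

def retrieval(spectrum):
    return sum(spectrum) / len(spectrum)  # placeholder scalar retrieval

def run_workflow(stages, data):
    """Chain components: pass each stage's output downstream."""
    for stage in stages:
        data = stage(data)
    return data

result = run_workflow([radiative_transfer, detection, retrieval], [1.0, 0.8, 0.6])
```

    Because every stage exposes the same interface, swapping in a new modeling component or reordering the chain requires no changes to the engine itself, which is the point of the Web Service wrapping described above.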

  4. A Performance Comparison of Tree and Ring Topologies in Distributed System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Min

    A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together and minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover the unexpected failure of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
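    The message-delay trade-off between the two topologies can be illustrated by comparing average hop counts. A small BFS-based sketch (the node count and structures are illustrative, not the thesis's actual configurations):

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path hop count over all ordered node pairs, via BFS."""
    n = len(adj)
    total = 0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

def ring(n):
    """Ring topology: each node links to its two neighbors."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def binary_tree(n):
    """Tree topology: node i's parent is (i - 1) // 2."""
    adj = {i: [] for i in range(n)}
    for i in range(1, n):
        p = (i - 1) // 2
        adj[i].append(p)
        adj[p].append(i)
    return adj

ring_avg = avg_path_length(ring(31))         # distances grow linearly in n
tree_avg = avg_path_length(binary_tree(31))  # distances grow like log n
```

    For 31 nodes the ring averages 8 hops per message while the tree averages under 5, which is one reason tree structures scale better for management traffic even though rings are simpler to maintain.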

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage with cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) binary decision fusion rules that derive threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
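    The binary decision fusion idea — trading per-sensor hit and false-alarm rates against a fusion threshold — can be sketched with a simple k-out-of-n rule. The rates and threshold below are illustrative, not the paper's derived bounds:

```python
from math import comb

def k_out_of_n(p, n, k):
    """Probability that at least k of n independent detectors fire,
    when each fires with probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Per-sensor hit rate 0.8 and false-alarm rate 0.1; fuse 5 sensors,
# declaring a detection when at least 3 agree.
hit = k_out_of_n(0.8, 5, 3)   # system hit rate rises above any single sensor's
fa = k_out_of_n(0.1, 5, 3)    # system false-alarm rate drops well below 0.1
```

    Sweeping the threshold k traces out the trade-off between the two system-level rates, which is exactly what a derived threshold bound optimizes.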

  6. PRMS-IV, the precipitation-runoff modeling system, version 4

    USGS Publications Warehouse

    Markstrom, Steven L.; Regan, R. Steve; Hay, Lauren E.; Viger, Roland J.; Webb, Richard M.; Payn, Robert A.; LaFontaine, Jacob H.

    2015-01-01

    Computer models that simulate the hydrologic cycle at a watershed scale facilitate assessment of variability in climate, biota, geology, and human activities on water availability and flow. This report describes an updated version of the Precipitation-Runoff Modeling System. The Precipitation-Runoff Modeling System is a deterministic, distributed-parameter, physical-process-based modeling system developed to evaluate the response of various combinations of climate and land use on streamflow and general watershed hydrology. Several new model components were developed, and all existing components were updated, to enhance performance and supportability. This report describes the history, application, concepts, organization, and mathematical formulation of the Precipitation-Runoff Modeling System and its model components. This updated version provides improvements in (1) system flexibility for integrated science, (2) verification of conservation of water during simulation, (3) methods for spatial distribution of climate boundary conditions, and (4) methods for simulation of soil-water flow and storage.

  7. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  8. Design of material management system of mining group based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xia, Zhiyuan; Tan, Zhuoying; Qi, Kuan; Li, Wen

    2018-01-01

    Against the background of a persistent slowdown in the mining market, improving the level of management in a mining group has become key to improving the economic benefit of the mine. Based on the practical material management needs of a mining group, three core components of Hadoop are applied: the distributed file system HDFS, the distributed computing framework Map/Reduce, and the distributed database HBase. A material management system for the mining group is constructed with these three core Hadoop components and SSH framework technology. The system was found to strengthen collaboration between the mining group and its affiliated companies, solving problems such as inefficient management, server pressure, and hardware performance deficiencies that exist in traditional mining material-management systems; it thereby optimizes the group's materials management, reduces management costs, and increases enterprise profit.

  9. ADMS State of the Industry and Gap Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agalgaonkar, Yashodhan P.; Marinovici, Maria C.; Vadari, Subramanian V.

    2016-03-31

    An advanced distribution management system (ADMS) is a platform for optimized distribution system operational management. This platform comprises distribution management system (DMS) applications, supervisory control and data acquisition (SCADA), an outage management system (OMS), and a distributed energy resource management system (DERMS). One of the primary objectives of this work is to study and analyze several ADMS component and auxiliary systems. All the important component and auxiliary systems, SCADA, GISs, DMSs, AMRs/AMIs, OMSs, and DERMS, are discussed in this report. Their current-generation technologies are analyzed, and their integration (or evolution) with ADMS technology is discussed. An ADMS technology state-of-the-art and gap analysis is also presented. Two technical gaps are observed. The integration challenge between the component operational systems is the single largest challenge for ADMS design and deployment. Another significant challenge concerns essential ADMS applications, for instance fault location, isolation, and service restoration (FLISR) and volt-var optimization (VVO): there are relatively few ADMS application developers because the ADMS software platform is not open source. A third critical gap, while not technical in nature when compared to the two above, is still important to consider: the data models currently residing in utility GIS systems are incomplete, inaccurate, or both. This data is essential for planning and operations because it is typically one of the primary sources from which power system models are created. To achieve the full potential of ADMS, the ability to execute accurate power flow solutions is an important prerequisite. These critical gaps are hindering wider utility adoption of ADMS technology.
The development of an open-architecture platform can eliminate many of these barriers and also aid seamless integration of distribution utility legacy systems with an ADMS.

  10. PQScal (Power Quality Score Calculation for Distribution Systems with DER Integration)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Power quality is of great importance for evaluating the "health" of a distribution system, especially as distributed energy resource (DER) penetration becomes more significant. The individual components that make up power quality, such as voltage magnitude and unbalance, can be measured in simulations or in the field; however, a comprehensive method to incorporate all of these values into a single score does not exist. We therefore propose a methodology to quantify power quality health using a single numeric value, named the Power Quality Score (PQS). The PQS is derived from six metrics developed from both components that directly impact power quality and those often referenced in the context of power quality. These six metrics are the System Average Voltage Magnitude Violation Index (SAVMVI), System Average Voltage Fluctuation Index (SAVFI), System Average Voltage Unbalance Index (SAVUI), System Control Device Operation Index (SCDOI), System Reactive Power Demand Index (SRPDI), and System Energy Loss Index (SELI). The software tool PQScal is developed based on this novel PQS methodology. Besides traditional distribution systems, PQScal can also measure the power quality of distribution systems with various DER penetrations. PQScal has been tested on two utility distribution feeders with distinct model characteristics and its effectiveness has been demonstrated. In sum, PQScal can help utilities and other parties measure the power quality of distribution systems with DER integration easily and effectively.
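    The abstract does not give PQScal's aggregation formula, but the idea of collapsing the six indices into one score can be sketched as a weighted penalty. The equal weights, the 0-to-1 metric normalization, and the 0-100 scale below are all assumptions for illustration:

```python
# Hypothetical: each index is normalized so 0 = perfect and 1 = worst.
METRICS = ["SAVMVI", "SAVFI", "SAVUI", "SCDOI", "SRPDI", "SELI"]

def power_quality_score(values, weights=None):
    """Collapse the six per-system indices into a single 0-100 score,
    where 100 means no power-quality penalty at all."""
    weights = weights or {m: 1.0 / len(METRICS) for m in METRICS}
    penalty = sum(weights[m] * values[m] for m in METRICS)
    return round(100.0 * (1.0 - penalty), 2)

# A feeder scoring 0.1 on every index under equal weighting:
score = power_quality_score({m: 0.1 for m in METRICS})
```

    In practice the weights would be tuned to reflect which indices matter most for a given utility, which is where a methodology like PQS earns its keep over reporting six numbers separately.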

  11. A gallium-arsenide digital phase shifter for clock and control signal distribution in high-speed digital systems

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.

    1992-01-01

    The design, implementation, testing, and applications of a gallium-arsenide digital phase shifter and fan-out buffer are described. The integrated circuit provides a method for adjusting the phase of high-speed clock and control signals in digital systems, without the need for pruning cables, multiplexing between cables of different lengths, delay lines, or similar techniques. The phase of signals distributed with the described chip can be dynamically adjusted in eight different steps of approximately 60 ps per step. The IC also serves as a fan-out buffer and provides 12 in-phase outputs. The chip is useful for distributing high-speed clock and control signals in synchronous digital systems, especially if components are distributed over a large physical area or if there is a large number of components.

  12. Research in Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1997-01-01

    This document summarizes the progress we have made on our study of issues concerning the schedulability of real-time systems. Our study has produced several results in the scalability issues of distributed real-time systems. In particular, we have used our techniques to resolve schedulability issues in distributed systems with end-to-end requirements. During the next year (1997-98), we propose to extend the current work to address the modeling and workload characterization issues in distributed real-time systems. In particular, we propose to investigate the effect of different workload models and component models on the design and the subsequent performance of distributed real-time systems.
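    The summary does not spell out which schedulability tests were used, but a classic sufficient condition in this area is the Liu and Layland utilization bound for rate-monotonic scheduling; a minimal sketch with illustrative task utilizations:

```python
def rm_schedulable(utilizations):
    """Liu & Layland sufficient test: n periodic tasks are schedulable
    under rate-monotonic priorities if total utilization <= n(2^(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2 ** (1.0 / n) - 1)

# Three periodic tasks with utilizations C_i / T_i:
ok = rm_schedulable([0.2, 0.25, 0.3])       # total 0.75 vs bound ~0.780
too_loaded = rm_schedulable([0.4, 0.4, 0.4])  # total 1.20, fails the bound
```

    End-to-end requirements in distributed systems complicate this picture considerably, since a task's deadline spans several components; per-node utilization checks like this one are only the starting point.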

  13. Simulation of Tasks Distribution in Horizontally Scalable Management System

    NASA Astrophysics Data System (ADS)

    Kustov, D.; Sherstneva, A.; Botygin, I.

    2016-08-01

    This paper presents an imitational model of the task distribution system for the components of territorially-distributed automated management system with a dynamically changing topology. Each resource of the distributed automated management system is represented with an agent, which allows to set behavior of every resource in the best possible way and ensure their interaction. The agent work load imitation was done via service query imitation formed in a system dynamics style using a stream diagram. The query generation took place in the abstract-represented center - afterwards, they were sent to the drive to be distributed to management system resources according to a ranking table.

  14. Load flow and state estimation algorithms for three-phase unbalanced power distribution systems

    NASA Astrophysics Data System (ADS)

    Madvesh, Chiranjeevi

    Distribution load flow and state estimation are two important functions in distribution energy management systems (DEMS) and advanced distribution automation (ADA) systems. Distribution load flow analysis is a tool which helps to analyze the status of a power distribution system under steady-state operating conditions. In this research, an effective and comprehensive load flow algorithm is developed to extensively incorporate the distribution system components. Distribution system state estimation is a mathematical procedure which aims to estimate the operating states of a power distribution system by utilizing the information collected from available measurement devices in real-time. An efficient and computationally effective state estimation algorithm adapting the weighted-least-squares (WLS) method has been developed in this research. Both the developed algorithms are tested on different IEEE test-feeders and the results obtained are justified.
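    The WLS step at the heart of such an estimator can be sketched for a toy linear measurement model. The two-state network, measurement set, and weights below are illustrative; a real distribution-system estimator iterates over a nonlinear three-phase model:

```python
def wls_estimate(H, z, w):
    """Weighted least squares x = (H^T W H)^(-1) H^T W z for two states,
    solving the 2x2 normal equations directly."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (h1, h2), zi, wi in zip(H, z, w):
        a11 += wi * h1 * h1
        a12 += wi * h1 * h2
        a22 += wi * h2 * h2
        b1 += wi * h1 * zi
        b2 += wi * h2 * zi
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Three redundant measurements: x1, x2, and their sum, with the more
# accurate sum measurement weighted twice as heavily.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
z = [1.02, 1.98, 3.00]
x1, x2 = wls_estimate(H, z, [1.0, 1.0, 2.0])
```

    Redundancy is what makes the estimate better than any single measurement: the weighted fit reconciles the noisy individual readings against the sum constraint.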

  15. Mobility Demonstrator

    DTIC Science & Technology

    2013-08-22

    charging system for increased power density. Compact two-stage turbocharger systems. UNCLASSIFIED: Distribution Statement A. Approved for public release...advanced waste heat recovery, solid state cooling, turbocharging/turbocompounding...Turbocharging...System: develop components capable of handling multiple roles within thermal

  16. A system for measuring the pulse height distribution of ultrafast photomultipliers

    NASA Technical Reports Server (NTRS)

    Abshire, J. B.

    1977-01-01

    A system for measuring the pulse height distribution of gigahertz bandwidth photomultipliers was developed. This system uses a sampling oscilloscope as a sample-hold circuit and has a bandwidth of 12 gigahertz. Test results are given for a static crossed-field photomultiplier tested with a demonstration system. Calculations on system amplitude resolution capabilities are included for currently available system components.

  17. Recent Technology Advances in Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis

    2017-01-01

    This presentation provides an overview of the work performed at NASA Glenn Research Center in distributed engine control technology. This is control system hardware technology that overcomes engine system constraints by modularizing control hardware and integrating the components over communication networks.

  18. The β Pictoris association low-mass members: Membership assessment, rotation period distribution, and dependence on multiplicity

    NASA Astrophysics Data System (ADS)

    Messina, S.; Lanzafame, A. C.; Malo, L.; Desidera, S.; Buccino, A.; Zhang, L.; Artemenko, S.; Millward, M.; Hambsch, F.-J.

    2017-10-01

    Context. Low-mass members of young loose stellar associations and open clusters exhibit a wide spread of rotation periods. Such a spread originates from the distributions of masses and initial rotation periods. However, multiplicity can also play a significant role. Aims: We aim to investigate the role played by physical companions in multiple systems in shortening the primordial disk lifetime, anticipating the rotation spin up with respect to single stars. Methods: We have compiled the most extensive list to date of low-mass bona fide and candidate members of the young 25-Myr β Pictoris association. We have measured from our own photometric time series or from archival time series the rotation periods of almost all members. In a few cases the rotation periods were retrieved from the literature. We used updated UVWXYZ components to assess the membership of the whole stellar sample. Thanks to the known basic properties of most members we built the rotation period distribution distinguishing between bona fide members and candidate members and according to their multiplicity status. Results: We find that single stars and components of multiple systems in wide orbits (>80 AU) have rotation periods that exhibit a well defined sequence arising from mass distribution with some level of spread likely arising from initial rotation period distribution. All components of multiple systems in close orbits (<80 AU) have rotation periods that are significantly shorter than their equal-mass single counterparts. For these close components of multiple systems a linear dependence of rotation rate on separation is only barely detected. A comparison with the younger 13 Myr h Per cluster and with the older 40-Myr open clusters and stellar associations NGC 2547, IC 2391, Argus, and IC 2602 and the 130-Myr Pleiades shows that whereas the evolution of F-G stars is well reproduced by angular momentum evolution models, this is not the case for the slow K and early-M stars. 
Finally, we find that the amplitude of their light curves is correlated neither with rotation nor with mass. Conclusions: Once single stars and wide components of multiple systems are separated from close components of multiple systems, the rotation period distributions exhibit a well defined dependence on mass that allows us to make a meaningful comparison with similar distributions of either younger or older associations and clusters. Such cleaned distributions allow us to use the stellar rotation period meaningfully as an age indicator for F and G type stars. Tables 2 and 3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A3

  19. Distributed optical fiber vibration sensor based on spectrum analysis of Polarization-OTDR system.

    PubMed

    Zhang, Ziyi; Bao, Xiaoyi

    2008-07-07

    A fully distributed optical fiber vibration sensor is demonstrated based on spectrum analysis of a Polarization-OTDR system. Without performing any data averaging, detection of vibration disturbances up to 5 kHz is successfully demonstrated in a 1-km fiber link with 10-m spatial resolution. The FFT is performed at each spatial resolution cell; the relation of the disturbance at each frequency component versus location allows simultaneous detection of multiple events with different or identical frequency components.
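    The per-resolution-cell spectral analysis can be sketched with a naive DFT over simulated traces. The trace model, bin numbers, and detection threshold below are illustrative assumptions, not the paper's signal processing:

```python
import math

def dft_magnitude(samples, k):
    """Magnitude of the k-th DFT bin of a real-valued time series."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(samples))
    im = sum(-s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(samples))
    return math.hypot(re, im)

# Simulate 100 fiber positions sampled over 64 time steps; a vibration at
# DFT bin k = 5 disturbs only positions 40-49.
n_pos, n_t, k_vib = 100, 64, 5
traces = [[math.sin(2 * math.pi * k_vib * t / n_t) if 40 <= p < 50 else 0.0
           for t in range(n_t)] for p in range(n_pos)]

# Spectrum magnitude at the vibration frequency versus fiber position:
spectrum_vs_pos = [dft_magnitude(tr, k_vib) for tr in traces]
located = [p for p, m in enumerate(spectrum_vs_pos) if m > 1.0]
```

    Plotting magnitude against position for each frequency bin is what lets the method separate simultaneous events, including ones sharing the same frequency at different locations.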

  20. Lightwave technology in microwave systems

    NASA Astrophysics Data System (ADS)

    Popa, A. E.; Gee, C. M.; Yen, H. W.

    1986-01-01

    Many advanced microwave system concepts, such as active aperture phased array antennas, use distributed topologies in which lightwave circuits are being proposed to interconnect both the analog and digital modules of the system. Lightwave components designed to implement these interconnects are reviewed and their performance analyzed. The impact of trends in component development is discussed.

  1. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form w_i(t+1) = λ(t)w_i(t) + a w̄(t) − b w_i(t)w̄(t) is studied by computer simulations. The variables w_i, i = 1,...,N, are the individual system components and w̄(t) = (1/N)Σ_i w_i(t) is their average. The parameters a and b are constants, while λ(t) is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects, and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution P(w,t) of the system components w_i turns out to fulfill a Pareto power law P(w,t) ~ w^(−1−α). The time evolution of w̄(t) presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index α, showing an intricate relation between the distribution of the w_i's at a given time and the temporal fluctuations of their average.
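
A minimal simulation of this model is straightforward to write down. The sketch below iterates the update rule with an independent multiplier drawn for each component at every step (the original study uses an asynchronous update, which behaves similarly); the parameter values and multiplier distribution are illustrative, not those of the paper.

```python
import random

def simulate(n=1000, steps=2000, a=0.001, b=0.001, seed=1):
    """Iterate w_i(t+1) = lam*w_i(t) + a*wbar(t) - b*w_i(t)*wbar(t),
    drawing an independent multiplier lam for each component per step."""
    rng = random.Random(seed)
    w = [1.0] * n
    for _ in range(steps):
        wbar = sum(w) / n
        w = [rng.uniform(0.9, 1.1) * wi + a * wbar - b * wi * wbar
             for wi in w]
    return w

w = sorted(simulate(), reverse=True)
# Heavy tail: the largest component far exceeds the typical (median) one.
print(w[0] / w[len(w) // 2])
```

The a-term couples every component to the average (preventing collapse to zero), while the b-term caps the average itself; together with the multiplicative noise this is what produces the power-law spread among the w_i.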

  2. Observing System Evaluations Using GODAE Systems

    DTIC Science & Technology

    2009-09-01

    DISTRIBUTION/AVAILABILITY STATEMENT: Approved for public release, distribution is unlimited. ABSTRACT: Global ocean forecast systems, developed under the Global Ocean Data Assimilation Experiment (GODAE), are a powerful means of assessing the impact of different components of the Global Ocean Observing System (GOOS). Using a range of analysis tools and approaches, GODAE systems are useful for quantifying the

  3. Status of 20 kHz space station power distribution technology

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.

    1988-01-01

    Power distribution on the NASA Space Station will be accomplished by a 20 kHz sinusoidal, 440 VRMS, single-phase system. In order to minimize both system complexity and the total number of power conversion steps required, high-frequency power will be distributed end-to-end in the system. To support the final design of flight power system hardware, advanced development and demonstrations have been performed on key system technologies and components. The current status of this program is discussed.

  4. Space station power management and distribution

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1985-01-01

    The power system architecture is presented by a series of schematics which illustrate the power management and distribution (PMAD) system at the component level, including converters, controllers, switchgear, rotary power transfer devices, power and data cables, remote power controllers, and load converters. Power distribution options, reference power management, and control strategy are also outlined. A summary of advanced development status and plans and an overview of system test plans are presented.

  5. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  6. Mitigating component performance variation

    DOEpatents

    Gara, Alan G.; Sylvester, Steve S.; Eastep, Jonathan M.; Nagappan, Ramkumar; Cantalupo, Christopher M.

    2018-01-09

    Apparatus and methods may provide for characterizing a plurality of similar components of a distributed computing system based on a maximum safe operation level associated with each component, storing the characterization data in a database, and allocating non-uniform power to each similar component, based at least in part on the characterization data in the database, to substantially equalize performance of the components.
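
As a rough illustration of the allocation idea (not the patented method itself), suppose each component's characterization reduces to a performance-per-watt figure; handing each component power inversely proportional to that figure equalizes performance under a fixed total budget. All names and numbers below are hypothetical.

```python
def allocate_power(perf_per_watt, total_power):
    """Split a fixed power budget so every component reaches the same
    performance level: performance_i = eff_i * power_i, so equal
    performance P* implies power_i = P* / eff_i with
    sum(power_i) = total_power."""
    inv = [1.0 / e for e in perf_per_watt]
    p_star = total_power / sum(inv)
    return [p_star / e for e in perf_per_watt]

# Three similar components whose (hypothetical) characterization gave
# different performance-per-watt figures:
print(allocate_power([1.0, 2.0, 4.0], total_power=7.0))   # -> [4.0, 2.0, 1.0]
```

A production allocator would additionally clip each share at the component's maximum safe operation level and redistribute the remainder; that step is omitted here for brevity.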

  7. Multi-KW dc distribution system technology research study

    NASA Technical Reports Server (NTRS)

    Dawson, S. G.

    1978-01-01

    The Multi-KW DC Distribution System Technology Research Study is the third phase of the NASA/MSFC study program. The purpose of this contract was to complete the design of the integrated technology test facility, provide test planning, support test operations, and evaluate test results. The subject of this study is a continuation of this contract. The purpose of this continuation is to study and analyze high-voltage system safety, to determine optimum voltage levels versus power, to identify power distribution system components that require development for higher-voltage systems, and finally to determine what modifications must be made to the Power Distribution System Simulator (PDSS) to demonstrate 300 Vdc distribution capability.

  8. Clustering analysis of water distribution systems: identifying critical components and community impacts.

    PubMed

    Diao, K; Farmani, R; Fu, G; Astaraie-Imani, M; Ward, S; Butler, D

    2014-01-01

    Large water distribution systems (WDSs) are networks with both topological and behavioural complexity. It is therefore usually difficult to identify the key features of the properties of the system, and subsequently all the critical components within the system, for a given purpose of design or control. One way, however, is to visualize the network structure and the interactions between components more explicitly by dividing a WDS into a number of clusters (subsystems). Accordingly, this paper introduces a clustering strategy that decomposes WDSs into clusters with stronger internal connections than external connections. The detected cluster layout is very similar to the community structure of the served urban area. As WDSs may expand along with urban development in a community-by-community manner, the correspondingly formed distribution clusters may reveal some crucial configurations of WDSs. For verification, the method is applied to identify all the critical links during firefighting for the vulnerability analysis of a real-world WDS. Moreover, both the most critical pipes and clusters are addressed, given the consequences of pipe failure. Compared with the enumeration method, the method used in this study identifies the same group of the most critical components and provides similar criticality prioritizations of them in substantially less computation time.
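
The clustering strategy itself is not spelled out in the abstract; a generic community-detection sketch in the same spirit, which groups nodes with stronger internal than external connections, is label propagation. The pipe network below is a toy example, not the real-world WDS from the study.

```python
import random

def label_propagation(edges, max_iter=100, seed=0):
    """Cluster an undirected graph by letting every node repeatedly
    adopt the most common label among its neighbours (ties go to the
    smallest label), until no label changes."""
    rng = random.Random(seed)
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    labels = {n: n for n in adj}          # start: every node is its own cluster
    for _ in range(max_iter):
        nodes = list(adj)
        rng.shuffle(nodes)
        changed = False
        for n in nodes:
            counts = {}
            for nb in adj[n]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            best = min(l for l in counts if counts[l] == max(counts.values()))
            if labels[n] != best:
                labels[n], changed = best, True
        if not changed:
            break
    return labels

# Two 3-node pipe clusters joined by a single weak link (edge 2-3)
print(label_propagation([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]))
```

Nodes sharing a final label form one cluster; on a real WDS the edges would be pipes, and the resulting clusters could then be ranked by the consequences of failures on their internal and boundary links.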

  9. Power Management and Distribution (PMAD) Model Development: Final Report

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    2011-01-01

    Power management and distribution (PMAD) models were developed in the early 1990's to model candidate architectures for various Space Exploration Initiative (SEI) missions. They were used to generate "ballpark" component mass estimates to support conceptual PMAD system design studies. The initial set of models was provided to NASA Lewis Research Center (since renamed Glenn Research Center) in 1992. They were developed to estimate the characteristics of power conditioning components predicted to be available in the 2005 timeframe. Early 90's component and device designs and material technologies were projected forward to the 2005 timeframe, and algorithms reflecting those design and material improvements were incorporated into the models to generate mass, volume, and efficiency estimates for circa 2005 components. The models are about ten years old now and NASA GRC requested a review of them to determine if they should be updated to bring them into agreement with current performance projections or to incorporate unforeseen design or technology advances. This report documents the results of this review and the updated power conditioning models and new transmission line models generated to estimate post 2005 PMAD system masses and sizes. This effort continues the expansion and enhancement of a library of PMAD models developed to allow system designers to assess future power system architectures and distribution techniques quickly and consistently.

  10. 40 CFR 86.1824-08 - Durability demonstration procedures for evaporative emissions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... deterioration rate and emission level that effectively represents a significant majority of the distribution of... stabilize the permeability of all non-metallic fuel and evaporative system components to the mileage... permeability of evaporative and fuel system components. The manufacturer must also provide information...

  11. 40 CFR 86.1824-08 - Durability demonstration procedures for evaporative emissions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... deterioration rate and emission level that effectively represents a significant majority of the distribution of... stabilize the permeability of all non-metallic fuel and evaporative system components to the mileage... permeability of evaporative and fuel system components. The manufacturer must also provide information...

  12. 40 CFR 86.1824-08 - Durability demonstration procedures for evaporative emissions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... deterioration rate and emission level that effectively represents a significant majority of the distribution of... stabilize the permeability of all non-metallic fuel and evaporative system components to the mileage... permeability of evaporative and fuel system components. The manufacturer must also provide information...

  13. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier Information Systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, computing, and telecommunications infrastructures. We introduce a contingency theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process, from the solution of a problem to the dynamic creation of teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory; the task-uncertainty of organizational-information-processing relationships from information processing theory; and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the domain of the organizational sciences are presented. Guidelines that can be utilized in the creation of component-based distributed software are discussed.

  14. Distributed Learning Metadata Standards

    ERIC Educational Resources Information Center

    McClelland, Marilyn

    2004-01-01

    Significant economies can be achieved in distributed learning systems architected with a focus on interoperability and reuse. The key building blocks of an efficient distributed learning architecture are the use of standards and XML technologies. The goal of plug and play capability among various components of a distributed learning system…

  15. A Framework for Seamless Interoperation of Heterogeneous Distributed Software Components

    DTIC Science & Technology

    2005-05-01

    interoperability, b) distributed resource discovery, and c) validation of quality requirements. Principles and prototypical systems were created to demonstrate the successful completion of the research.

  16. Advanced and secure architectural EHR approaches.

    PubMed

    Blobel, Bernd

    2006-01-01

    Electronic Health Records (EHRs) provided as a lifelong patient record advance towards core applications of distributed and co-operating health information systems and health networks. For meeting the challenge of scalable, flexible, portable, secure EHR systems, the underlying EHR architecture must be based on the component paradigm and model driven, separating platform-independent and platform-specific models. To allow manageable models, real systems must be decomposed and simplified. The resulting modelling approach has to follow the ISO Reference Model - Open Distributed Processing (RM-ODP). The ISO RM-ODP describes any system component from different perspectives. Platform-independent perspectives contain the enterprise view (business process, policies, scenarios, use cases), the information view (classes and associations) and the computational view (composition and decomposition), whereas platform-specific perspectives concern the engineering view (physical distribution and realisation) and the technology view (implementation details from protocols up to education and training) on system components. Those views have to be established for components reflecting aspects of all domains involved in healthcare environments, including administrative, legal, medical, technical, etc. Thus, security-related component models reflecting all the views mentioned have to be established for enabling both application and communication security services as an integral part of the system's architecture. Besides decomposition and simplification of systems regarding the different viewpoints on their components, different levels of system granularity can be defined, hiding internals or focusing on properties of basic components to form a more complex structure. The resulting models describe both structure and behaviour of component-based systems. The described approach has been deployed in different projects defining EHR systems and their underlying architectural principles.
In that context, the Australian GEHR project, the openEHR initiative, and the revision of CEN ENV 13606 "Electronic Health Record communication", all based on Archetypes, but also the HL7 version 3 activities, are discussed in some detail. The latter include the HL7 RIM, the HL7 Development Framework, HL7's Clinical Document Architecture (CDA), as well as the set of models from use cases, activity diagrams and sequence diagrams up to Domain Information Models (DMIMs) and their building blocks, Common Message Element Types (CMETs), constraining models to their underlying concepts. The future-proof EHR architecture as an open, user-centric, user-friendly, flexible, scalable, portable core application in health information systems and health networks has to follow advanced architectural paradigms.

  17. A Framework for Evaluating Economic Impacts of Rooftop PV Systems with or without Energy Storage on Thai Distribution Utilities and Ratepayers

    NASA Astrophysics Data System (ADS)

    Chaianong, A.; Bangviwat, A.; Menke, C.

    2017-07-01

    Driven by decreasing PV and energy storage prices, increasing electricity costs, and policy support from the Thai government (a self-consumption era), rooftop PV and energy storage systems are expected to be deployed rapidly in the country, which may disrupt the existing business model structure of Thai distribution utilities through revenue erosion and lost earnings opportunities. The retail rates that directly affect ratepayers (non-solar customers) are expected to increase. This paper focuses on a framework for evaluating the impacts of PV systems with and without energy storage on Thai distribution utilities and ratepayers by using cost-benefit analysis (CBA). Prior to the calculation of cost/benefit components, changes in energy sales need to be addressed. Government policies supporting PV generation will also help accelerate rooftop PV installation. Benefit components include avoided costs due to transmission losses and deferred distribution capacity at an appropriate PV penetration level, while cost components consist of losses in revenue, program costs, integration costs, and unrecovered fixed costs. It is necessary for Thailand to compare the total costs and total benefits of rooftop PV and energy storage systems in order to adopt policy support and mitigation approaches, such as business model innovation and regulatory reform, effectively.
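
At its core, the CBA the paper describes reduces to summing the benefit components and subtracting the cost components. The sketch below does exactly that; every category name and figure is a hypothetical placeholder, not Thai tariff data.

```python
def utility_net_impact(benefits, costs):
    """Net impact on the utility: total benefits minus total costs.
    Inputs are dicts of component name -> annual value (same currency)."""
    return sum(benefits.values()) - sum(costs.values())

# Hypothetical annual figures (millions, illustrative currency units)
benefits = {"avoided_transmission_losses": 120.0,
            "deferred_distribution_capacity": 80.0}
costs = {"revenue_losses": 300.0,
         "program_costs": 40.0,
         "integration_costs": 25.0,
         "unrecovered_fixed_costs": 60.0}
print(utility_net_impact(benefits, costs))   # -> -225.0
```

A negative net impact like this is what would, under the paper's framework, translate into upward pressure on retail rates for non-solar ratepayers unless mitigation measures are adopted.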

  18. GUEST EDITORS' INTRODUCTION: Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Guerraoui, Rachid; Vinoski, Steve

    1997-09-01

    The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client-server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well-defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. 
Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components. This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6 - 10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. 
These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.

  19. Embedded 100 Gbps Photonic Components

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznia, Charlie

    This innovation in fiber optic component technology increases the performance and reduces the size and power consumption of optical communications within dense network systems, such as advanced distributed computing systems and data centers. VCSEL technology enables short-reach (<100 m), >100 Gbps optical interconnections over multi-mode fiber in commercial applications.

  20. The QuakeSim Project: Web Services for Managing Geophysical Data and Applications

    NASA Astrophysics Data System (ADS)

    Pierce, Marlon E.; Fox, Geoffrey C.; Aktas, Mehmet S.; Aydin, Galip; Gadgil, Harshawardhan; Qi, Zhigang; Sayar, Ahmet

    2008-04-01

    We describe our distributed systems research efforts to build the “cyberinfrastructure” components that constitute a geophysical Grid, or more accurately, a Grid of Grids. Service-oriented computing principles are used to build a distributed infrastructure of Web accessible components for accessing data and scientific applications. Our data services fall into two major categories: Archival, database-backed services based around Geographical Information System (GIS) standards from the Open Geospatial Consortium, and streaming services that can be used to filter and route real-time data sources such as Global Positioning System data streams. Execution support services include application execution management services and services for transferring remote files. These data and execution service families are bound together through metadata information and workflow services for service orchestration. Users may access the system through the QuakeSim scientific Web portal, which is built using a portlet component approach.

  1. Linking Health Concepts in the Assessment and Evaluation of Water Distribution Systems

    ERIC Educational Resources Information Center

    Karney, Bryan W.; Filion, Yves R.

    2005-01-01

    The concept of health is not only a specific criterion for evaluation of water quality delivered by a distribution system but also a suitable paradigm for overall functioning of the hydraulic and structural components of the system. This article views health, despite its complexities, as the only criterion with suitable depth and breadth to allow…

  2. An architecture for object-oriented intelligent control of power systems in space

    NASA Technical Reports Server (NTRS)

    Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.

    1993-01-01

    A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first-principles knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. If changes in the health status seem imminent, the component warns the control system of its impending failure. 
The fault isolation process uses the FSM level for its reasoning base.
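
The reactive mode described above (compare measurements against simulated values, and flag a fault when they diverge) can be sketched with a minimal component object. The class, names, and tolerance below are illustrative, not the paper's actual implementation.

```python
class Component:
    """A power-system component that knows its own normal behaviour
    (here reduced to a single simulated nominal value)."""

    def __init__(self, name, expected, tolerance=0.05):
        self.name = name
        self.expected = expected      # value predicted by simulation
        self.tolerance = tolerance    # allowed relative deviation

    def check(self, measured):
        """Reactive mode: True while the measurement agrees with the
        simulated value; False flags a suspected fault."""
        deviation = abs(measured - self.expected) / self.expected
        return deviation <= self.tolerance

# Hypothetical bus-voltage component (names and numbers are made up)
bus = Component("main_bus_voltage", expected=120.0)
print(bus.check(119.0))   # -> True
print(bus.check(100.0))   # -> False
```

In the full architecture each object would carry richer EEM/PCM models; the point of the sketch is only the detection pattern of simulated-versus-measured comparison per component.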

  3. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  4. Getting the lead out: understanding risks in the distribution ...

    EPA Pesticide Factsheets

    With exposure to lead as the context, this presentation discusses the importance of the water distribution system as a component of the source-to-tap continuum in public health protection. Issues covered include: understanding source water quality changes and their impacts throughout the system; use of mitigation measures (such as filters); and holistic approaches and/or strategies that could be used to avoid unintended consequences of decisions from source to tap.

  5. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low-level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server manages connections from internet/intranet (web-based) clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
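
The predefined TCP/IP socket messages mentioned above can be illustrated with a minimal framing scheme. The newline-delimited "TYPE payload" format and the message names below are a hypothetical stand-in, not the MIT system's actual protocol.

```python
import socket

def send_message(sock, msg_type, payload):
    """Frame a message as 'TYPE payload\\n' (illustrative framing)."""
    sock.sendall(f"{msg_type} {payload}\n".encode())

def recv_message(sock):
    """Read one newline-terminated message and split it into its parts."""
    data = b""
    while not data.endswith(b"\n"):
        chunk = sock.recv(1024)
        if not chunk:
            break
        data += chunk
    msg_type, _, payload = data.decode().strip().partition(" ")
    return msg_type, payload

# Simulated cell-controller <-> equipment-controller link
cc, eq = socket.socketpair()
send_message(cc, "SET_PRESSURE", "15.0")
print(recv_message(eq))   # -> ('SET_PRESSURE', '15.0')
cc.close(); eq.close()
```

Because every component speaks the same small message set over plain sockets, controllers can be moved between machines and platforms without changing the protocol, which is the portability property the architecture relies on.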

  6. RICIS research

    NASA Technical Reports Server (NTRS)

    Mckay, Charles W.; Feagin, Terry; Bishop, Peter C.; Hallum, Cecil R.; Freedman, Glenn B.

    1987-01-01

    The principal focus of one of the RICIS (Research Institute for Computing and Information Systems) components is computer systems and software engineering in-the-large of the lifecycle of large, complex, distributed systems which: (1) evolve incrementally over a long time; (2) contain non-stop components; and (3) must simultaneously satisfy a prioritized balance of mission- and safety-critical requirements at run time. This focus is extremely important because of the contribution of the scaling-direction problem to the current software crisis. The Computer Systems and Software Engineering (CSSE) component addresses the lifecycle issues of three environments: host, integration, and target.

  7. 14 CFR 121.313 - Miscellaneous equipment.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... equivalent for each pilot station. (c) A power supply and distribution system that meets the requirements of... external power supply if any one power source or component of the power distribution system fails. The use... on separate engines. (d) A means for indicating the adequacy of the power being supplied to required...

  8. 14 CFR 121.313 - Miscellaneous equipment.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... equivalent for each pilot station. (c) A power supply and distribution system that meets the requirements of... external power supply if any one power source or component of the power distribution system fails. The use... on separate engines. (d) A means for indicating the adequacy of the power being supplied to required...

  9. 14 CFR 121.313 - Miscellaneous equipment.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... equivalent for each pilot station. (c) A power supply and distribution system that meets the requirements of... external power supply if any one power source or component of the power distribution system fails. The use... on separate engines. (d) A means for indicating the adequacy of the power being supplied to required...

  10. Delay-induced wave instabilities in single-species reaction-diffusion systems

    NASA Astrophysics Data System (ADS)

    Otto, Andreas; Wang, Jian; Radons, Günter

    2017-11-01

    The Turing instability is only possible in reaction-diffusion systems with more than one component, and the wave instability only with more than two. Motivated by the fact that a time delay increases the dimension of a system, we investigate the presence of diffusion-driven instabilities in single-species reaction-diffusion systems with delay. The stability of arbitrary one-component systems with a single discrete delay, with distributed delay, or with a variable delay is systematically analyzed. We show that a wave instability can appear from an equilibrium of single-species reaction-diffusion systems with fluctuating or distributed delay, which is not possible in similar systems with constant discrete delay or without delay. More precisely, we show by basic analytic arguments and by numerical simulations that fast asymmetric delay fluctuations or asymmetrically distributed delays can lead to wave instabilities in these systems. Examples of the resulting traveling waves are shown for a Fisher-KPP equation with distributed delay in the reaction term. In addition, we have studied diffusion-induced instabilities from homogeneous periodic orbits in the same systems with variable delay, where the homogeneous periodic orbits are attracting resonant periodic solutions of the system without diffusion, i.e., periodic orbits of the Hutchinson equation with time-varying delay. If diffusion is introduced, standing waves can emerge whose temporal period is equal to the period of the variable delay.
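
    The kind of system studied here can be sketched with an explicit finite-difference simulation of a diffusive Hutchinson equation, u_t = D u_xx + r u (1 - u(t - tau(t))), with a sinusoidally varying delay. All parameter values below are illustrative and are not taken from the paper:

    ```python
    import numpy as np

    # Diffusive Hutchinson equation with time-varying delay tau(t),
    # explicit Euler in time, periodic boundary in space.
    L, N = 50.0, 128                   # domain length, grid points
    D, r = 1.0, 1.6                    # diffusion coefficient, growth rate
    dx = L / N
    dt = 0.2 * dx**2 / D               # step satisfying the explicit stability bound
    tau0, eps, omega = 1.0, 0.5, 2.0   # mean delay, modulation amplitude, frequency
    hist_len = int((tau0 + eps) / dt) + 2

    x = np.linspace(0, L, N, endpoint=False)
    u = 1.0 + 0.01 * np.cos(2 * np.pi * x / L)     # perturbed equilibrium u = 1
    history = [u.copy() for _ in range(hist_len)]  # constant initial history

    for step in range(2000):
        t = step * dt
        tau = tau0 + eps * np.sin(omega * t)       # time-varying delay
        lag = min(int(round(tau / dt)), hist_len - 1)
        u_delayed = history[-1 - lag]
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u + dt * (D * lap + r * u * (1.0 - u_delayed))
        history.append(u.copy())
        history.pop(0)

    print(np.isfinite(u).all())
    ```

    The rolling `history` buffer stores past states so the delayed field u(t - tau(t)) can be read off at a lag that changes each step; detecting the instabilities discussed in the paper would require scanning parameters and analyzing the spatial spectrum.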

  11. Reliability demonstration test for load-sharing systems with exponential and Weibull components

    PubMed Central

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn’t yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics. PMID:29284030
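
    Under a common equal-load-sharing model with exponential components, the MTTF decomposition into mean times between successive component failures can be sketched as follows. The staged failure rates below are illustrative, and the paper's exact modeling assumptions may differ:

    ```python
    import random

    def mttf_load_sharing(rates):
        """MTTF of an n-component load-sharing system that fails when all
        components have failed. rates[i] is the per-component hazard rate
        while i components have already failed (load raises the rate).
        The time between the i-th and (i+1)-th failures is exponential
        with rate (n - i) * rates[i], so the MTTF is the sum of the
        stage means -- the summation form used in the abstract."""
        n = len(rates)
        return sum(1.0 / ((n - i) * rates[i]) for i in range(n))

    def simulate_once(rates, rng):
        """Monte Carlo draw of one system lifetime, stage by stage."""
        n = len(rates)
        t = 0.0
        for i in range(n):
            t += rng.expovariate((n - i) * rates[i])
        return t

    rng = random.Random(1)
    rates = [0.5, 0.8, 1.2]   # illustrative per-stage hazard rates
    analytic = mttf_load_sharing(rates)
    mc = sum(simulate_once(rates, rng) for _ in range(20000)) / 20000
    print(round(analytic, 3))  # 2.125
    ```

    The Monte Carlo estimate `mc` should agree with the analytic sum, illustrating why demonstrating the system MTTF reduces to demonstrating the stage means.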

  12. Reliability demonstration test for load-sharing systems with exponential and Weibull components.

    PubMed

    Xu, Jianyu; Hu, Qingpei; Yu, Dan; Xie, Min

    2017-01-01

    Conducting a Reliability Demonstration Test (RDT) is a crucial step in production. Products are tested under certain schemes to demonstrate whether their reliability indices reach pre-specified thresholds. Test schemes for RDT have been studied in different situations, e.g., lifetime testing, degradation testing and accelerated testing. Systems designed with several structures are also investigated in many RDT plans. Despite the availability of a range of test plans for different systems, RDT planning for load-sharing systems hasn't yet received the attention it deserves. In this paper, we propose a demonstration method for two specific types of load-sharing systems with components subject to two distributions: exponential and Weibull. Based on the assumptions and interpretations made in several previous works on such load-sharing systems, we set the mean time to failure (MTTF) of the total system as the demonstration target. We represent the MTTF as a summation of mean time between successive component failures. Next, we introduce generalized test statistics for both the underlying distributions. Finally, RDT plans for the two types of systems are established on the basis of these test statistics.

  13. Effects of the Shuttle Orbiter fuselage and elevon on the molecular distribution of water vapor from the flash evaporator system

    NASA Technical Reports Server (NTRS)

    Richmond, R. G.; Kelso, R. M.

    1980-01-01

    A concern has arisen regarding the emissive distribution of water molecules from the shuttle orbiter flash evaporator system (FES). The role of the orbiter fuselage and elevon in affecting molecular scattering distributions was unclear. The effects of these components were evaluated. Molecular distributions of the water vapor effluents from the FES were measured. These data were compared with analytically predicted values and the resulting implications were assessed.

  14. Distribution of the Endocannabinoid System in the Central Nervous System.

    PubMed

    Hu, Sherry Shu-Jung; Mackie, Ken

    2015-01-01

    The endocannabinoid system consists of endogenous cannabinoids (endocannabinoids), the enzymes that synthesize and degrade endocannabinoids, and the receptors that transduce the effects of endocannabinoids. Much of what we know about the function of endocannabinoids comes from studies that combine localization of endocannabinoid system components with physiological or behavioral approaches. This review will focus on the localization of the best-known components of the endocannabinoid system for which the strongest anatomical evidence exists.

  15. Aerosol in the Pacific troposphere

    NASA Technical Reports Server (NTRS)

    Clarke, Antony D.

    1989-01-01

    The use of near real-time optical techniques is emphasized for the measurement of mid-tropospheric aerosol over the Central Pacific. The primary focus is on measurement of the aerosol size distribution over the range of particle diameters from 0.15 to 5.0 microns that are essential for modeling CO2 backscatter values in support of the laser atmospheric wind sounder (LAWS) program. The measurement system employs a LAS-X (Laser Aerosol Spectrometer-PMS, Boulder, CO) with a custom 256 channel pulse height analyzer and software for detailed measurement and analysis of aerosol size distributions. A thermal preheater system (Thermo Optic Aerosol Discriminator, TOAD) conditions the aerosol in a manner that allows the discrimination of the size distribution of individual aerosol components such as sulfuric acid, sulfates and refractory species. This allows assessment of the relative contribution of each component to the BCO2 signal. This is necessary since the different components have different sources, exhibit independent variability and provide different BCO2 signals for a given mass and particle size. Field activities involve experiments designed to examine both temporal and spatial variability of these aerosol components from ground based and aircraft platforms.

  16. DC current distribution mapping system of the solar panels using a HTS-SQUID gradiometer

    NASA Astrophysics Data System (ADS)

    Miyazaki, Shingo; Kasuya, Syohei; Mawardi Saari, Mohd; Sakai, Kenji; Kiwa, Toshihiko; Tsukamoto, Akira; Adachi, Seiji; Tanabe, Keiichi; Tsukada, Keiji

    2014-05-01

    Solar panels are expected to play a major role as a source of sustainable energy. In order to evaluate solar panels, non-destructive tests, such as defect inspections and response property evaluations, are necessary. We developed a DC current distribution mapping system of the solar panels using a High Critical Temperature Superconductor Superconducting Quantum Interference Device (HTS-SQUID) gradiometer with ramp edge type Josephson junctions. Two independent gradient components of the magnetic field perpendicular to the panel surface (∂Bz/∂x, ∂Bz/∂y) were detected. The direct current of the solar panel is visualized by combining the two signal components, computing the phase angle, and mapping the DC current vector. The developed system can evaluate the uniformity of DC current distributions precisely and may be applicable for defect detection of solar panels.
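
    Combining the two measured gradient components into a current-vector magnitude and phase angle can be sketched as below. The instrument-specific scale factor relating field gradient to current density is omitted, an assumption of this sketch:

    ```python
    import math

    def current_vector(dbz_dx, dbz_dy):
        """Relative magnitude and phase angle of a DC current element
        from the two orthogonal gradient signals (dBz/dx, dBz/dy).
        Units are arbitrary: the gradiometer's calibration factor
        would be needed to convert to amperes."""
        magnitude = math.hypot(dbz_dx, dbz_dy)
        phase = math.degrees(math.atan2(dbz_dy, dbz_dx))
        return magnitude, phase

    # Illustrative gradient readings at one mapping point.
    mag, ang = current_vector(3.0, 4.0)
    print(mag)  # 5.0
    ```

    Repeating this at every scan point yields the vector map described in the abstract; uniformity of `mag` across the panel would indicate a defect-free current path.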

  17. Dose factor entry and display tool for BNCT radiotherapy

    DOEpatents

    Wessol, Daniel E.; Wheeler, Floyd J.; Cook, Jeremy L.

    1999-01-01

    A system for use in Boron Neutron Capture Therapy (BNCT) radiotherapy planning where a biological distribution is calculated using a combination of conversion factors and a previously calculated physical distribution. Conversion factors are presented in a graphical spreadsheet so that a planner can easily view and modify the conversion factors. For radiotherapy in multi-component modalities, such as Fast-Neutron and BNCT, it is necessary to combine each conversion factor component to form an effective dose which is used in radiotherapy planning and evaluation. The Dose Factor Entry and Display System is designed to facilitate planner entry of appropriate conversion factors in a straightforward manner for each component. The effective isodose is then immediately computed and displayed over the appropriate background (e.g. digitized image).
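
    The combination of dose components into an effective dose can be sketched as a weighted voxel-wise sum. The component names and factor values below are illustrative placeholders, not clinical data:

    ```python
    def effective_dose(components, factors):
        """Weighted voxel-wise sum of physical dose components.
        components: {name: list of per-voxel doses}
        factors:    {name: planner-entered conversion factor}"""
        n = len(next(iter(components.values())))
        eff = [0.0] * n
        for name, dose in components.items():
            w = factors[name]
            for i, d in enumerate(dose):
                eff[i] += w * d
        return eff

    # Illustrative 3-voxel physical distributions and conversion factors.
    physical = {
        "boron":  [2.0, 1.5, 0.5],
        "fast_n": [1.0, 0.8, 0.2],
        "gamma":  [0.5, 0.5, 0.4],
    }
    factors = {"boron": 3.8, "fast_n": 3.2, "gamma": 1.0}
    eff = effective_dose(physical, factors)
    print(eff)
    ```

    In the described tool, editing a factor in the spreadsheet would trigger exactly this recomputation, after which the effective isodose is redisplayed over the digitized image.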

  18. NASA Exhibits

    NASA Technical Reports Server (NTRS)

    Deardorff, Glenn; Djomehri, M. Jahed; Freeman, Ken; Gambrel, Dave; Green, Bryan; Henze, Chris; Hinke, Thomas; Hood, Robert; Kiris, Cetin; Moran, Patrick; hide

    2001-01-01

    A series of NASA presentations for the Supercomputing 2001 conference are summarized. The topics include: (1) Mars Surveyor Landing Sites "Collaboratory"; (2) Parallel and Distributed CFD for Unsteady Flows with Moving Overset Grids; (3) IP Multicast for Seamless Support of Remote Science; (4) Consolidated Supercomputing Management Office; (5) Growler: A Component-Based Framework for Distributed/Collaborative Scientific Visualization and Computational Steering; (6) Data Mining on the Information Power Grid (IPG); (7) Debugging on the IPG; (8) DeBakey Heart Assist Device; (9) Unsteady Turbopump for Reusable Launch Vehicle; (10) Exploratory Computing Environments Component Framework; (11) OVERSET Computational Fluid Dynamics Tools; (12) Control and Observation in Distributed Environments; (13) Multi-Level Parallelism Scaling on NASA's Origin 1024 CPU System; (14) Computing, Information, & Communications Technology; (15) NAS Grid Benchmarks; (16) IPG: A Large-Scale Distributed Computing and Data Management System; and (17) ILab: Parameter Study Creation and Submission on the IPG.

  19. Enhanced interfaces for web-based enterprise-wide image distribution.

    PubMed

    Jost, R Gilbert; Blaine, G James; Fritz, Kevin; Blume, Hartwig; Sadhra, Sarbjit

    2002-01-01

    Modern Web browsers support image distribution with two shortcomings: (1) image grayscale presentation at client workstations is often sub-optimal and generally inconsistent with the presentation state on diagnostic workstations, and (2) an Electronic Patient Record (EPR) application usually cannot directly access images with an integrated viewer. We have modified our EPR and our Web-based image-distribution system to allow access to images from within the EPR. In addition, at the client workstation, a grayscale transformation is performed that consists of two components: a client-display-specific component based on the characteristic display function of the class of display system, and a modality-specific transformation that is downloaded with every image. The described techniques have been implemented in our institution and currently support enterprise-wide clinical image distribution. The effectiveness of the techniques is reviewed.
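
    The two-component grayscale transformation could be sketched as a composition of two lookup tables, a modality-specific one followed by a display-class-specific one. Both transfer functions below are illustrative stand-ins over 8-bit values, not the deployed curves, and a real system would use the image's full bit depth and a measured display characteristic:

    ```python
    def make_lut(fn, size=256):
        """Tabulate a transfer function as a clamped integer LUT."""
        return [min(size - 1, max(0, int(round(fn(v))))) for v in range(size)]

    # Modality-specific shaping (shipped with each image) and
    # display-class-specific correction -- both hypothetical curves.
    modality_lut = make_lut(lambda v: 255 * (v / 255) ** 0.8)
    display_lut = make_lut(lambda v: 255 * (v / 255) ** (1 / 2.2))

    def transform(pixels):
        """Apply the modality LUT, then the display LUT, per pixel."""
        return [display_lut[modality_lut[p]] for p in pixels]

    out = transform([0, 128, 255])
    print(out)
    ```

    Splitting the pipeline this way lets the server attach the modality curve to each image while each client keeps a single correction for its own display class.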

  20. Data Concentrator

    NASA Technical Reports Server (NTRS)

    Willett, Mike

    2015-01-01

    Orbital Research, Inc., developed, built, and tested three high-temperature components for use in the design of a data concentrator module in distributed turbine engine control. The concentrator receives analog and digital signals related to turbine engine control and communicates with a full authority digital engine control (FADEC) or high-level command processor. This data concentrator follows the Distributed Engine Controls Working Group (DECWG) roadmap for turbine engine distributed-controls communication development, targeting operation at temperatures of at least 225 °C. In Phase I, Orbital Research developed detailed specifications for each component needed for the system and defined the total system specifications. This entailed a combination of system design, compiling existing component specifications, laboratory testing, and simulation. The results showed the feasibility of the data concentrator. Phase II of this project focused on three key objectives. The first objective was to update the data concentrator design modifications from DECWG and prime contractors. Secondly, the project defined requirements for the three new high-temperature, application-specific integrated circuits (ASICs): one-time programmable (OTP), transient voltage suppression (TVS), and 3.3V. Finally, the project validated each design by testing over temperature and under load.

  1. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.

  2. Double stars with wide separations in the AGK3 - II. The wide binaries and the multiple systems*

    NASA Astrophysics Data System (ADS)

    Halbwachs, J.-L.; Mayor, M.; Udry, S.

    2017-02-01

    A large observation programme was carried out to measure the radial velocities of the components of a selection of common proper motion (CPM) stars to select the physical binaries. 80 wide binaries (WBs) were detected, and 39 optical pairs were identified. By adding CPM stars with separations close enough to be almost certain that they are physical, a bias-controlled sample of 116 WBs was obtained, and used to derive the distribution of separations from 100 to 30 000 au. The distribution obtained does not match the log-constant distribution, but agrees with the log-normal distribution. The spectroscopic binaries detected among the WB components were used to derive statistical information about the multiple systems. The close binaries in WBs seem to be like those detected in other field stars. As for the WBs, they seem to obey the log-normal distribution of periods. The number of quadruple systems agrees with the no correlation hypothesis; this indicates that an environment conducive to the formation of WBs does not favour the formation of subsystems with periods shorter than 10 yr.

  3. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  4. Distributed user interfaces for clinical ubiquitous computing applications.

    PubMed

    Bång, Magnus; Larsson, Anders; Berglund, Erik; Eriksson, Henrik

    2005-08-01

    Ubiquitous computing with multiple interaction devices requires new interface models that support user-specific modifications to applications and facilitate the fast development of active workspaces. We have developed NOSTOS, a computer-augmented work environment for clinical personnel, to explore new user interface paradigms for ubiquitous computing. NOSTOS uses several devices such as digital pens, an active desk, and walk-up displays that allow the system to track documents and activities in the workplace. We present the distributed user interface (DUI) model that allows standalone applications to distribute their user interface components to several devices dynamically at run-time. This mechanism permits clinicians to develop their own user interfaces and forms for clinical information systems to match their specific needs. We discuss the underlying technical concepts of DUIs and show how service discovery, component distribution, events and layout management are dealt with in the NOSTOS system. Our results suggest that DUIs--and similar network-based user interfaces--will be a prerequisite of future mobile user interfaces and essential to develop clinical multi-device environments.

  5. Research: Detailed and Selective Follow-up of Students for Improvement of Programs/Program Components in Business & Office Education and Marketing & Distributive Education. Final Report.

    ERIC Educational Resources Information Center

    Scott, Gary D.; Chapman, Alberta

    The Kentucky student follow-up system was studied to identify the current status of follow-up activities in business and office education and marketing and distributive education; to identify the impact of follow-up data on these programs; to identify program components for which detailed follow-up can provide information to assist in program…

  6. Space Power Management and Distribution Status and Trends

    NASA Technical Reports Server (NTRS)

    Reppucci, G. M.; Biess, J. J.; Inouye, L.

    1984-01-01

    An overview of space power management and distribution (PMAD) is provided which encompasses historical and current technology trends. The PMAD components discussed include power source control, energy storage control, and load power processing electronic equipment. The status of distribution equipment comprised of rotary joints and power switchgear is evaluated based on power level trends in the public, military, and commercial sectors. Component level technology thrusts, as driven by perceived system level trends, are compared to technology status of piece-parts such as power semiconductors, capacitors, and magnetics to determine critical barriers.

  7. FREQUENCY DISTRIBUTIONS OF 90SR AND 137CS CONCENTRATIONS IN AN ECOSYSTEM OF THE 'RED FOREST' AREA IN THE CHERNOBYL EXCLUSION ZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farfan, E.; Jannik, T.; Caldwell, E.

    2011-10-01

    In the most highly contaminated region of the Chernobyl Exclusion Zone: the 'Red Forest' site, the accumulation of the major dose-affecting radionuclides ({sup 90}Sr and {sup 137}Cs) within the components of an ecological system encompassing 3,000 m{sup 2} were characterized. The sampled components included soils (top 0-10 cm depth), Molina caerulea (blue moor grass), Camponotus vagus (carpenter ants) and Pelobates fuscus (spade-footed toad). In a comparison among the components of this ecosystem, the {sup 90}Sr and {sup 137}Cs concentrations measured in 40 separate grids exhibited significant differences, while the frequency distribution of the values were close to a logarithmically normal leptokurtic distribution with a significant right-side skew. While it is important to identify localized areas of high contamination or 'hot spots,' including these values in the arithmetic mean may overestimate the exposure risk. In component sample sets that exhibited logarithmically normal distribution, the geometrical mean more accurately characterizes a site. Ideally, risk assessment is most confidently achieved when the arithmetic and geometrical means are most similar, meaning the distribution approaches normal. Through bioaccumulation, the highest concentrations of {sup 90}Sr and {sup 137}Cs were measured in the blue moor grass and spade-footed toad. These components also possessed distribution parameters that shifted toward a normal distribution.
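
    The abstract's point about arithmetic versus geometric means for log-normally distributed concentrations can be illustrated numerically. The distribution parameters below are arbitrary, not the Red Forest measurements:

    ```python
    import math
    import random

    # Draw samples from a log-normal distribution: the arithmetic mean
    # is inflated by the right-side skew, while the geometric mean
    # tracks the median (exp(mu) for a log-normal).
    rng = random.Random(0)
    mu, sigma = 2.0, 1.0
    samples = [rng.lognormvariate(mu, sigma) for _ in range(50000)]

    arith = sum(samples) / len(samples)
    geo = math.exp(sum(math.log(s) for s in samples) / len(samples))

    print(geo < arith)  # True
    ```

    For a log-normal distribution the arithmetic mean is exp(mu + sigma^2/2), strictly above the median exp(mu), which is why the geometric mean characterizes such a site more faithfully.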

  8. An Expert System Solution for the Quantitative Condition Assessment of Electrical Distribution Systems in the United States Air Force

    DTIC Science & Technology

    1991-09-01


  9. Solar Newsletter | Solar Research | NREL

    Science.gov Websites

    Newsletter items include NREL and General Electric work to optimize voltage control for utility-scale PV as utilities increasingly add solar power, components that may be used to integrate distributed solar PV onto distribution systems, and an Innovation Award for Grid Reliability for a PV demonstration involving First Solar and the California Independent System Operator.

  10. Centralized Command, Distributed Control, and Decentralized Execution - a Command and Control Solution to US Air Force A2/AD Challenges

    DTIC Science & Technology

    2017-04-28

    The report addresses USAF command and control challenges in A2/AD environments by proposing a shift from the traditional framework of "Centralized Control, Decentralized Execution" to a three-part framework of "Centralized Command, Distributed Control, and Decentralized Execution" (CC-DC-DE), realized through a distributed Theater Air Control System under a regional air component commander.

  11. High-order rogue waves in vector nonlinear Schrödinger equations.

    PubMed

    Ling, Liming; Guo, Boling; Zhao, Li-Chen

    2014-04-01

    We study the dynamics of high-order rogue waves (RWs) in two-component coupled nonlinear Schrödinger equations. We find that four fundamental rogue waves can emerge from second-order vector RWs in the coupled system, in contrast to the high-order ones in single-component systems. The distribution shape can take quadrilateral, triangle, or line structures, obtained by varying the proper initial excitations given by the exact analytical solutions. The distribution pattern for vector RWs is more abundant than that for scalar rogue waves. Possibilities to observe these new patterns for rogue waves are discussed for a nonlinear fiber.

  12. Autonomic Management in a Distributed Storage System

    NASA Astrophysics Data System (ADS)

    Tauber, Markus

    2010-07-01

    This thesis investigates the application of autonomic management to a distributed storage system. Effects on performance and resource consumption were measured in experiments, which were carried out in a local area test-bed. The experiments were conducted with components of one specific distributed storage system, but seek to be applicable to a wide range of such systems, in particular those exposed to varying conditions. The perceived characteristics of distributed storage systems depend on their configuration parameters and on various dynamic conditions. For a given set of conditions, one specific configuration may be better than another with respect to measures such as resource consumption and performance. Here, configuration parameter values were set dynamically and the results compared with a static configuration. It was hypothesised that under non-changing conditions this would allow the system to converge on a configuration that was more suitable than any that could be set a priori. Furthermore, the system could react to a change in conditions by adopting a more appropriate configuration. Autonomic management was applied to the peer-to-peer (P2P) and data retrieval components of ASA, a distributed storage system. The effects were measured experimentally for various workload and churn patterns. The management policies and mechanisms were implemented using a generic autonomic management framework developed during this work. The experimental evaluations of autonomic management show promising results, and suggest several future research topics. The findings of this thesis could be exploited in building other distributed storage systems that focus on harnessing storage on user workstations, since these are particularly likely to be exposed to varying, unpredictable conditions.
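
    The policy idea, dynamically setting a configuration parameter in response to observed conditions instead of fixing it a priori, might be sketched as a minimal feedback loop. The metric (churn rate), parameter (maintenance interval), and thresholds below are hypothetical, not those of ASA:

    ```python
    class AutonomicManager:
        """Minimal observe/adjust loop over one configuration parameter."""

        def __init__(self, interval=10.0, lo=1.0, hi=60.0):
            self.interval = interval   # managed P2P maintenance interval (s)
            self.lo, self.hi = lo, hi  # bounds on the parameter

        def observe(self, churn_rate):
            """Policy: under high churn, maintain the overlay more often;
            under low churn, back off to save resources."""
            if churn_rate > 0.5:
                self.interval = max(self.lo, self.interval / 2)
            else:
                self.interval = min(self.hi, self.interval * 1.2)

    mgr = AutonomicManager()
    for churn in [0.8, 0.9, 0.1, 0.1]:   # a churn spike, then calm
        mgr.observe(churn)
    print(mgr.interval)
    ```

    Under non-changing conditions such a loop converges toward a suitable setting, and under a change of conditions it re-adapts, which is the hypothesis the thesis evaluates experimentally.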

  13. Integrated Micro-Power System (IMPS) Development at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Wilt, David; Hepp, Aloysius; Moran, Matt; Jenkins, Phillip; Scheiman, David; Raffaelle, Ryne

    2003-01-01

    Glenn Research Center (GRC) has a long history of energy related technology developments for large space related power systems, including photovoltaics, thermo-mechanical energy conversion, electrochemical energy storage, mechanical energy storage, power management and distribution, and power system design. Recently, many of these technologies have begun to be adapted for small, distributed power system applications or Integrated Micro-Power Systems (IMPS). This paper will describe the IMPS component and system demonstration efforts to date.

  14. VASA: Interactive Computational Steering of Large Asynchronous Simulation Pipelines for Societal Infrastructure.

    PubMed

    Ko, Sungahn; Zhao, Jieqiong; Xia, Jing; Afzal, Shehzad; Wang, Xiaoyu; Abram, Greg; Elmqvist, Niklas; Kne, Len; Van Riper, David; Gaither, Kelly; Kennedy, Shaun; Tolone, William; Ribarsky, William; Ebert, David S

    2014-12-01

    We present VASA, a visual analytics platform consisting of a desktop application, a component model, and a suite of distributed simulation components for modeling the impact of societal threats such as weather, food contamination, and traffic on critical infrastructure such as supply chains, road networks, and power grids. Each component encapsulates a high-fidelity simulation model that together form an asynchronous simulation pipeline: a system of systems of individual simulations with a common data and parameter exchange format. At the heart of VASA is the Workbench, a visual analytics application providing three distinct features: (1) low-fidelity approximations of the distributed simulation components using local simulation proxies to enable analysts to interactively configure a simulation run; (2) computational steering mechanisms to manage the execution of individual simulation components; and (3) spatiotemporal and interactive methods to explore the combined results of a simulation run. We showcase the utility of the platform using examples involving supply chains during a hurricane as well as food contamination in a fast food restaurant chain.

  15. An atlas of monthly mean distributions of SSMI surface wind speed, ARGOS buoy drift, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1990

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.

    1993-01-01

    The following monthly mean global distributions for 1990 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States (US) Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean value, sampling distribution, and standard deviation values are displayed. Annual mean distributions are displayed.

  16. An atlas of monthly mean distributions of SSMI surface wind speed, ARGOS buoy drift, AVHRR/2 sea surface temperature, and ECMWF surface wind components during 1991

    NASA Technical Reports Server (NTRS)

    Halpern, D.; Knauss, W.; Brown, O.; Wentz, F.

    1993-01-01

    The following monthly mean global distributions for 1991 are presented with a common color scale and geographical map: 10-m height wind speed estimated from the Special Sensor Microwave Imager (SSMI) on a United States Air Force Defense Meteorological Satellite Program (DMSP) spacecraft; sea surface temperature estimated from the advanced very high resolution radiometer (AVHRR/2) on a U.S. National Oceanic and Atmospheric Administration (NOAA) spacecraft; Cartesian components of free-drifting buoys which are tracked by the ARGOS navigation system on NOAA satellites; and Cartesian components of the 10-m height wind vector computed by the European Center for Medium-Range Weather Forecasting (ECMWF). Charts of monthly mean value, sampling distribution, and standard deviation value are displayed. Annual mean distributions are displayed.

  17. A Linguistic Model in Component Oriented Programming

    NASA Astrophysics Data System (ADS)

    Crăciunean, Daniel Cristian; Crăciunean, Vasile

    2016-12-01

    It is a fact that component-oriented programming, when well organized, can bring a large increase in efficiency to the development of large software systems. This paper proposes a model for building software systems by assembling components that can operate independently of each other. The model is based on a computing environment that runs parallel and distributed applications. This paper introduces concepts such as the abstract aggregation scheme and the aggregation application. Basically, an aggregation application is an application that is obtained by combining corresponding components. In our model, an aggregation application is a word in a language.

  18. Main Power Distribution Unit for the Jupiter Icy Moons Orbiter (JIMO)

    NASA Technical Reports Server (NTRS)

    Papa, Melissa R.

    2004-01-01

    Around the year 2011, the Jupiter Icy Moons Orbiter (JIMO) will be launched and on its way to orbit three of Jupiter's planet-sized moons. The mission goals for the JIMO project revolve heavily around gathering scientific data concerning ingredients we, as humans, consider essential: water, energy, and necessary chemical elements. JIMO is an ambitious mission which will employ propulsion from an ion thruster powered by a nuclear fission reactor. Glenn Research Center is responsible for the development of the dynamic power conversion, power management and distribution, heat rejection, and ion thrusters. The first test phase for the JIMO program concerns the High Power AC Power Management and Distribution (PMAD) Test Bed. The goal of this testing is to support electrical performance verification of the power systems. The test bed will incorporate a 2 kW Brayton Rotating Unit (BRU) to simulate the nuclear reactor as well as two ion thrusters. The first module of the PMAD Test Bed to be designed is the Main Power Distribution Unit (MPDU), which relays the power input to the various propulsion systems and scientific instruments. The MPDU involves circuitry design as well as mechanical design to determine the placement of the components. The MPDU consists of fourteen relays of four different variations used to convert the input power into the appropriate power output. The three-phase system uses 400 V(sub L-L) rms at 1000 Hz. The power is relayed through the circuit and distributed to the scientific instruments, the ion thrusters, and other controlled systems. The mechanical design requires the components to be positioned for easy electrical wiring as well as allowing adequate room for the main bus bars, the individual circuit boards connected to each component, and the power supplies. To accomplish a suitable design, AutoCAD was used as a drafting tool. By showing a visual layout of the components, it is easy to see where there is extra room or where components may interfere with one another. By working with the electrical engineer who is designing the circuit, the specific design requirements for the MPDU were determined and used as guidelines. Space is limited due to the size of the mounting plate; therefore each component must be strategically placed. Since the MPDU is being designed to fit into a simulated model of the spacecraft systems on JIMO, components must be positioned where they are easily accessible to be wired to the other onboard systems. Mechanical and electrical requirements provided equally important limits which were combined to produce the best possible design of the MPDU.
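    The three-phase figures quoted above imply the line-to-neutral voltage and per-phase current directly. A minimal sketch of that arithmetic, assuming (purely for illustration) a balanced system with the 2 kW BRU rating as the load at unity power factor:

```python
import math

# Balanced three-phase relations for the figures quoted above.
# Assumptions (not from the source): 2 kW load at unity power factor.
V_LL = 400.0      # line-to-line rms voltage, V
f = 1000.0        # system frequency, Hz (not needed for these relations)
P = 2000.0        # assumed load, W

V_LN = V_LL / math.sqrt(3)            # line-to-neutral rms voltage
I_line = P / (math.sqrt(3) * V_LL)    # line current at unity power factor

print(f"V_LN = {V_LN:.1f} V")      # → V_LN = 230.9 V
print(f"I_line = {I_line:.2f} A")  # → I_line = 2.89 A
```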

  19. Real-Time Aircraft Engine-Life Monitoring

    NASA Technical Reports Server (NTRS)

    Klein, Richard

    2014-01-01

    This project developed an in-service life-monitoring system capable of predicting the remaining component and system life of aircraft engines. The embedded system provides real-time, in-flight monitoring of the engine's thrust, exhaust gas temperature, efficiency, and the speed and time of operation. Based upon these data, the life-estimation algorithm calculates the remaining life of the engine components and uses this information to predict the remaining life of the engine. The calculations are based on the statistical life distribution of the engine components and their relationship to load, speed, temperature, and time.
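    The abstract does not detail the life calculation; a minimal sketch of one common approach, linear damage accumulation over operating segments, is shown below. The life-load law, its exponent, and all numbers are illustrative assumptions, not the project's actual algorithm:

```python
# Hypothetical sketch of a cumulative-damage remaining-life estimate.
# The power-law life-load relation and the linear damage rule are
# illustrative assumptions only.

def segment_life_hours(load_fraction, L0=10_000.0, p=3.0):
    """Characteristic life at a given load (fraction of rated thrust)."""
    return L0 / (load_fraction ** p)

def remaining_life_hours(history):
    """history: list of (hours, load_fraction) operating segments.
    Linear (Miner's-rule-style) damage accumulation."""
    damage = sum(h / segment_life_hours(lf) for h, lf in history)
    if damage >= 1.0:
        return 0.0
    # Remaining hours if operated at rated load (load_fraction = 1.0):
    return (1.0 - damage) * segment_life_hours(1.0)

history = [(2000.0, 0.8), (500.0, 1.0)]   # hours at 80% and 100% load
print(round(remaining_life_hours(history), 1))   # → 8476.0
```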

  20. CPAD: Cyber-Physical Attack Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferragut, Erik M; Laska, Jason A

    The CPAD technology relates to anomaly detection and more specifically to cyber physical attack detection. It infers underlying physical relationships between components by analyzing the sensor measurements of a system. It then uses these measurements to detect signs of a non-physically realizable state, which is indicative of an integrity attack on the system. CPAD can be used on any highly-instrumented cyber-physical system to detect integrity attacks and identify the component or components compromised. It has applications to power transmission and distribution, nuclear and industrial plants, and complex vehicles.

  1. Using an architectural approach to integrate heterogeneous, distributed software components

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Purtilo, James M.

    1995-01-01

    Many computer programs cannot be easily integrated because their components are distributed and heterogeneous, i.e., they are implemented in diverse programming languages, use different data representation formats, or their runtime environments are incompatible. In many cases, programs are integrated by modifying their components or interposing mechanisms that handle communication and conversion tasks. For example, remote procedure call (RPC) helps integrate heterogeneous, distributed programs. When configuring such programs, however, mechanisms like RPC must be used explicitly by software developers in order to integrate collections of diverse components. Each collection may require a unique integration solution. This paper describes improvements to the concepts of software packaging and some of our experiences in constructing complex software systems from a wide variety of components in different execution environments. Software packaging is a process that automatically determines how to integrate a diverse collection of computer programs based on the types of components involved and the capabilities of available translators and adapters in an environment. Software packaging provides a context that relates such mechanisms to software integration processes and reduces the cost of configuring applications whose components are distributed or implemented in different programming languages. Our software packaging tool subsumes traditional integration tools like UNIX make by providing a rule-based approach to software integration that is independent of execution environments.

  2. Creep Life Prediction of Ceramic Components Using the Finite Element Based Integrated Design Program (CARES/Creep)

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1997-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
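    The time-stepped damage sum described above can be sketched as follows. The Norton creep-rate law and the Monkman-Grant constants used here are illustrative assumptions, not CARES/Creep material data:

```python
# Sketch of time-stepped creep damage accumulation. All material
# constants below are invented for illustration.

def creep_rate(stress_mpa, A=1e-20, n=5.0):
    """Norton power-law minimum creep rate (1/h), assumed form."""
    return A * stress_mpa ** n

def rupture_time(stress_mpa, C=0.05, m=0.9):
    """Modified Monkman-Grant: t_r * (minimum creep rate)**m = C."""
    return C / creep_rate(stress_mpa) ** m

def accumulated_damage(stress_history):
    """stress_history: list of (hours, stress_mpa) time steps, with the
    stress held constant within each step (as in the methodology above).
    Failure is predicted when the returned damage D >= 1."""
    return sum(dt / rupture_time(s) for dt, s in stress_history)

# Three 3000-h steps with relaxing stress:
D = accumulated_damage([(3000.0, 120.0), (3000.0, 100.0), (3000.0, 90.0)])
print(D < 1.0)   # → True (component survives under these assumptions)
```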

  3. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing, and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  4. An adaptable product for material processing and life science missions

    NASA Technical Reports Server (NTRS)

    Wassick, Gregory; Dobbs, Michael

    1995-01-01

    The Experiment Control System II (ECS-II) is designed to make available to the microgravity research community the same tools and mode of automated experimentation that their ground-based counterparts have enjoyed for the last two decades. The design goal was accomplished by combining commercial automation tools familiar to the experimenter community with system control components that interface with the on-orbit platform in a distributed architecture. The architecture insulates these tools from the details of managing a payload. By using commercial software and hardware components whenever possible, development costs were greatly reduced when compared to traditional space development projects. Using commercial-off-the-shelf (COTS) components also improved the usability of the system by providing familiar user interfaces and a wealth of readily available documentation, and by reducing the need for training on system-specific details. The modularity of the distributed architecture makes it very amenable to modification for different on-orbit experiments requiring robotics-based automation.

  5. High-Surety Telemedicine in a Distributed, 'Plug-and-Play' Environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, Richard L.; Funkhouser, Donald R.; Gallagher, Linda K.

    1999-05-17

    Commercial telemedicine systems are increasingly functional, incorporating video-conferencing capabilities, diagnostic peripherals, medication reminders, and patient education services. However, these systems (1) rarely utilize information architectures which allow them to be easily integrated with existing health information networks and (2) do not always protect patient confidentiality with adequate security mechanisms. Using object-oriented methods and software wrappers, we illustrate the transformation of an existing stand-alone telemedicine system into 'plug-and-play' components that function in a distributed medical information environment. We show, through the use of open standards and published component interfaces, that commercial telemedicine offerings which were once incompatible with electronic patient record systems can now share relevant data with clinical information repositories while at the same time hiding the proprietary implementations of the respective systems. Additionally, we illustrate how leading-edge technology can secure this distributed telemedicine environment, maintaining patient confidentiality and the integrity of the associated electronic medical data. Information surety technology also encourages the development of telemedicine systems that have both read and write access to electronic medical records containing patient-identifiable information. This win-win approach to telemedicine information system development preserves investments in legacy software and hardware while promoting security and interoperability in a distributed environment.

  6. The use of programmable logic controllers (PLC) for rocket engine component testing

    NASA Technical Reports Server (NTRS)

    Nail, William; Scheuermann, Patrick; Witcher, Kern

    1991-01-01

    Application of PLCs to the rocket engine component testing at a new Stennis Space Center Component Test Facility is suggested as an alternative to dedicated specialized computers. The PLC systems are characterized by rugged design, intuitive software, fault tolerance, flexibility, multiple end device options, networking capability, and built-in diagnostics. A distributed PLC-based system is projected to be used for testing LH2/LOx turbopumps required for the ALS/NLS rocket engines.

  7. A loosely coupled framework for terminology controlled distributed EHR search for patient cohort identification in clinical research.

    PubMed

    Zhao, Lei; Lim Choi Keung, Sarah N; Taweel, Adel; Tyler, Edward; Ogunsina, Ire; Rossiter, James; Delaney, Brendan C; Peterson, Kevin A; Hobbs, F D Richard; Arvanitis, Theodoros N

    2012-01-01

    Heterogeneous data models and coding schemes for electronic health records present challenges for automated search across distributed data sources. This paper describes a loosely coupled software framework based on the terminology controlled approach to enable the interoperation between the search interface and heterogeneous data sources. Software components interoperate via common terminology service and abstract criteria model so as to promote component reuse and incremental system evolution.

  8. A Distributed Approach to System-Level Prognostics

    DTIC Science & Technology

    2012-09-01

    the end of (useful) life (EOL) and/or the remaining useful life (RUL) of components, subsystems, or systems. The prognostics problem itself can be... system state estimate, computes EOL and/or RUL. In this paper, we focus on a model-based prognostics approach (Orchard & Vachtsevanos, 2009; Daigle... been focused on individual components, and determining their EOL and RUL, e.g., (Orchard & Vachtsevanos, 2009; Saha & Goebel, 2009; Daigle & Goebel

  9. Survey of aircraft electrical power systems

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Brandner, J. J.

    1972-01-01

    Areas investigated include: (1) load analysis; (2) power distribution, conversion techniques and generation; (3) design criteria and performance capabilities of hydraulic and pneumatic systems; (4) system control and protection methods; (5) component and heat transfer systems cooling; and (6) electrical system reliability.

  10. Risk Analysis using Corrosion Rate Parameter on Gas Transmission Pipeline

    NASA Astrophysics Data System (ADS)

    Sasikirono, B.; Kim, S. J.; Haryadi, G. D.; Huda, A.

    2017-05-01

    In the oil and gas industry, the pipeline is a major component in the transmission and distribution of oil and gas. The distribution process is sometimes performed through pipelines crossing various types of environmental conditions. Therefore, in the transmission and distribution of oil and gas, a pipeline should operate safely so that it does not harm the surrounding environment. Corrosion is still a major cause of failure in some components of the equipment in a production facility. In pipeline systems, corrosion can cause failures in the wall and damage to the pipeline. Pipeline systems therefore require care and periodic inspections. Every production facility in an industry has a level of risk for damage, determined by the likelihood and consequences of the damage caused. The purpose of this research is to analyze the level of risk of a 20-inch natural gas transmission pipeline using semi-quantitative risk-based inspection per API 581, considering both the likelihood of failure and the consequences of failure of a component of the equipment. The result is then used to determine the next inspection plans. Nine pipeline components were observed, including a straight-pipe inlet, connection tees, and a straight-pipe outlet. The risk assessment levels of the nine pipeline components are presented in a risk matrix; the components are assessed at medium risk levels. The failure mechanism considered in this research is thinning. Based on the calculated corrosion rates, the remaining age of each pipeline component can be obtained, so the remaining lifetimes of the pipeline components are known. The remaining lifetimes obtained vary for each component. The next step is planning the inspection of the pipeline components by external NDT methods.
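    The thinning-mechanism remaining-life arithmetic follows the usual pattern: a corrosion rate from the wall-thickness loss over the service period, then the margin above the minimum required thickness divided by that rate. A sketch with hypothetical thicknesses (not the paper's data):

```python
# Illustrative remaining-life arithmetic for the thinning mechanism.
# Wall thicknesses and service time are hypothetical examples.

def corrosion_rate_mm_per_yr(t_nominal, t_measured, years_in_service):
    """Average wall loss per year since installation."""
    return (t_nominal - t_measured) / years_in_service

def remaining_life_yr(t_measured, t_required, rate):
    """Years until the wall reaches the minimum required thickness."""
    return (t_measured - t_required) / rate

rate = corrosion_rate_mm_per_yr(t_nominal=12.7, t_measured=11.5,
                                years_in_service=10.0)
life = remaining_life_yr(t_measured=11.5, t_required=9.0, rate=rate)
print(f"{rate:.2f} mm/yr, {life:.1f} yr remaining")  # → 0.12 mm/yr, 20.8 yr remaining
```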

  11. 49 CFR 192.197 - Control of the pressure of gas delivered from high-pressure distribution systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Control of the pressure of gas delivered from high-pressure distribution systems. 192.197 Section 192.197 Transportation Other Regulations Relating to... STANDARDS Design of Pipeline Components § 192.197 Control of the pressure of gas delivered from high...

  12. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  13. The Spatially-Distributed Agroecosystem-Watershed (Ages-W) Hydrologic/Water Quality (H/WQ) model for assessment of conservation effects

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...

  14. Retention System and Splinting on Morse Taper Implants in the Posterior Maxilla by 3D Finite Element Analysis.

    PubMed

    Lemos, Cleidiel Aparecido Araujo; Verri, Fellippo Ramos; Santiago, Joel Ferreira; Almeida, Daniel Augusto de Faria; Batista, Victor Eduardo de Souza; Noritomi, Pedro Yoshito; Pellizzer, Duardo Piza

    2018-01-01

    The purpose of this study was to evaluate different retention systems (cement- or screw-retained) and crown designs (non-splinted or splinted) of fixed implant-supported restorations, in terms of stress distributions in implants/components and bone tissue, by 3-dimensional (3D) finite element analysis. Four 3D models were simulated with the InVesalius, Rhinoceros 3D, and SolidWorks programs. Models were made of type III bone from the posterior maxillary area. Models included three 4.0-mm-diameter Morse taper (MT) implants with different lengths, which supported metal-ceramic crowns. Models were processed by the Femap and NeiNastran programs, using an axial force of 400 N and oblique force of 200 N. Results were visualized as the von Mises stress and maximum principal stress (σmax). Under axial loading, there was no difference in the distribution of stress in implants/components between retention systems and splinted crowns; however, under oblique loading, cemented prostheses showed better stress distribution than screwed prostheses, whereas splinted crowns tended to reduce stress in the implant of the first molar. In bone tissue, cemented prostheses showed better stress distribution than screwed prostheses under both axial and oblique loading. The splinted design only had an effect in the screwed prosthesis, with no influence on the cemented prosthesis. Cemented prostheses on MT implants showed more favorable stress distributions in implants/components and bone tissue. Splinting was favorable for stress distribution only for screwed prostheses under oblique loading.

  15. Domain Adaptation with Conditional Transferable Components

    PubMed Central

    Gong, Mingming; Zhang, Kun; Liu, Tongliang; Tao, Dacheng; Glymour, Clark; Schölkopf, Bernhard

    2017-01-01

    Domain adaptation arises in supervised learning when the training (source domain) and test (target domain) data have different distributions. Let X and Y denote the features and target, respectively. Previous work on domain adaptation mainly considers the covariate shift situation where the distribution of the features P(X) changes across domains while the conditional distribution P(Y∣X) stays the same. To reduce domain discrepancy, recent methods try to find invariant components T(X) that have similar P(T(X)) on different domains by explicitly minimizing a distribution discrepancy measure. However, it is not clear if P(Y∣T(X)) in different domains is also similar when P(Y∣X) changes. Furthermore, transferable components do not necessarily have to be invariant. If the change in some components is identifiable, we can make use of such components for prediction in the target domain. In this paper, we focus on the case where P(X∣Y) and P(Y) both change in a causal system in which Y is the cause for X. Under appropriate assumptions, we aim to extract conditional transferable components whose conditional distribution P(T(X)∣Y) is invariant after proper location-scale (LS) transformations, and identify how P(Y) changes between domains simultaneously. We provide theoretical analysis and empirical evaluation on both synthetic and real-world data to show the effectiveness of our method. PMID:28239433

  16. TASK ALLOCATION IN GEO-DISTRIBUTED CYBER-PHYSICAL SYSTEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aggarwal, Rachit; Smidts, Carol

    This paper studies the task allocation algorithm for a distributed test facility (DTF), which aims to assemble geo-distributed cyber (software) and physical (hardware-in-the-loop) components into a prototype cyber-physical system (CPS). This allows low-cost testing on an early conceptual prototype (ECP) of the ultimate CPS (UCPS) to be developed. The DTF provides an instrumentation interface for carrying out reliability experiments remotely, such as fault propagation analysis and in-situ testing of hardware and software components in a simulated environment. Unfortunately, the geo-distribution introduces an overhead that is not inherent to the UCPS, i.e., a significant time delay in communication that threatens the stability of the ECP and is not an appropriate representation of the behavior of the UCPS. This can be mitigated by implementing a task allocation algorithm to find a suitable configuration and assign the software components to appropriate computational locations, dynamically. This would allow the ECP to operate more efficiently with less probability of being unstable due to the delays introduced by geo-distribution. The task allocation algorithm proposed in this work uses a Monte Carlo approach along with dynamic programming to identify the optimal network configuration to keep the time delays to a minimum.
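    A minimal sketch of the Monte Carlo side of such a task-allocation search: randomly assign the software components to candidate sites (the hardware-in-the-loop component stays pinned to its rig) and keep the lowest-delay assignment. The delay matrix and traffic pattern are invented for illustration, and the paper's algorithm additionally uses dynamic programming:

```python
import random

# Monte Carlo task allocation over invented sites, delays, and traffic.
random.seed(0)

software_sites = ["A", "B"]          # candidate compute locations
hw_site = "C"                        # hardware-in-the-loop rig, fixed
delay = {("A","A"): 0, ("A","B"): 40, ("A","C"): 90,
         ("B","A"): 40, ("B","B"): 0, ("B","C"): 60,
         ("C","A"): 90, ("C","B"): 60, ("C","C"): 0}
traffic = [(0, 1, 10), (1, 2, 5), (0, 2, 1)]   # (comp_i, comp_j, msgs/cycle)

def cost(assignment):
    """Total per-cycle communication delay for a site assignment."""
    return sum(m * delay[(assignment[i], assignment[j])] for i, j, m in traffic)

# Component 2 is the pinned hardware; sample sites for components 0 and 1.
best = min(
    (random.choices(software_sites, k=2) + [hw_site] for _ in range(2000)),
    key=cost,
)
print(cost(best))   # → 360 (components 0 and 1 co-located at B, nearest to C)
```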

  17. A Novel Wide-Area Backup Protection Based on Fault Component Current Distribution and Improved Evidence Theory

    PubMed Central

    Zhang, Zhe; Kong, Xiangping; Yin, Xianggen; Yang, Zengli; Wang, Lijun

    2014-01-01

    In order to solve the problems of the existing wide-area backup protection (WABP) algorithms, the paper proposes a novel WABP algorithm based on the distribution characteristics of fault component current and improved Dempster/Shafer (D-S) evidence theory. When a fault occurs, slave substations transmit to master station the amplitudes of fault component currents of transmission lines which are the closest to fault element. Then master substation identifies suspicious faulty lines according to the distribution characteristics of fault component current. After that, the master substation will identify the actual faulty line with improved D-S evidence theory based on the action states of traditional protections and direction components of these suspicious faulty lines. The simulation examples based on IEEE 10-generator-39-bus system show that the proposed WABP algorithm has an excellent performance. The algorithm has low requirement of sampling synchronization, small wide-area communication flow, and high fault tolerance. PMID:25050399
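    For reference, classical Dempster combination of two evidence sources over a simple frame {faulty, healthy, unknown} looks like the sketch below. The mass assignments are invented, and the paper's improved rule differs in how it handles conflict:

```python
# Dempster's rule of combination for two evidence sources over the frame
# {F: line faulty, H: line healthy, FH: unknown}. Masses are invented.

def combine(m1, m2):
    hyps = ["F", "H", "FH"]
    def meet(a, b):
        """Set intersection on this small frame; None means empty set."""
        if a == b or b == "FH":
            return a
        if a == "FH":
            return b
        return None   # conflicting singletons F and H
    raw = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            h = meet(a, b)
            if h is None:
                conflict += m1[a] * m2[b]
            else:
                raw[h] += m1[a] * m2[b]
    k = 1.0 - conflict                     # normalization constant
    return {h: v / k for h, v in raw.items()}

m_current = {"F": 0.7, "H": 0.1, "FH": 0.2}   # fault-component current evidence
m_protect = {"F": 0.6, "H": 0.2, "FH": 0.2}   # protection action-state evidence
print(combine(m_current, m_protect))
```

Combining the two sources sharpens the belief in "faulty" (here to 0.85) while shrinking the unassigned mass, which is the behavior the fusion step above relies on.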

  18. Distribution of kerosene components in rats following dermal exposure.

    PubMed

    Tsujino, Y; Hieda, Y; Kimura, K; Eto, H; Yakabe, T; Takayama, K; Dekio, S

    2002-08-01

    The systemic distribution of kerosene components in blood and tissues was analysed in rats following dermal exposure. Four types of trimethylbenzenes (TMBs) and aliphatic hydrocarbons (AHCs) with carbon numbers 9-16 (C(9)-C(16)) were analysed as major kerosene components by capillary gas chromatography/mass spectrometry (GC/MS). The kerosene components were detected in blood and all tissues after a small piece of cotton soaked with kerosene was applied to the abdominal skin. The amounts of TMBs detected were higher than those of AHCs. Greater increases in TMB levels were found in adipose tissue in an exposure duration-dependent manner. The amounts of TMBs detected were only at trace levels following post-mortem dermal exposure to kerosene. These findings suggest that kerosene components were absorbed percutaneously and distributed to various organs via the blood circulation. Post-mortem or ante-mortem exposure to kerosene could be distinguished when the exposure duration was relatively long. Adipose tissue would seem to be the most useful for estimating the degree of kerosene exposure.

  19. The transport along membrane nanotubes driven by the spontaneous curvature of membrane components.

    PubMed

    Kabaso, Doron; Bobrovska, Nataliya; Góźdź, Wojciech; Gongadze, Ekaterina; Kralj-Iglič, Veronika; Zorec, Robert; Iglič, Aleš

    2012-10-01

    Intercellular membrane nanotubes (ICNs) serve as a very specific transport system between neighboring cells. The underlying mechanisms responsible for the transport of membrane components and vesicular dilations along the ICNs are not clearly understood. The present study investigated the spatial distribution of anisotropic membrane components of tubular shapes and isotropic membrane components of spherical shapes. Experimental results revealed the preferential distribution of CTB (cholera toxin B)-GM1 complexes mainly on the spherical cell membrane, and cholesterol-sphingomyelin at the membrane leading edge and ICNs. In agreement with previous studies, we here propose that the spatial distribution of CTB-GM1 complexes and cholesterol-sphingomyelin rafts were due to their isotropic and anisotropic shapes, respectively. To elucidate the relationship between a membrane component shape and its spatial distribution, a two-component computational model was constructed. The minimization of the membrane bending (free) energy revealed the enrichment of the anisotropic component along the ICN and the isotropic component in the parent cell membrane, which was due to the curvature mismatch between the ICN curvature and the spontaneous curvature of the isotropic component. The equations of motion, derived from the differentiation of the membrane free energy, revealed a curvature-dependent flux of the isotropic component and a curvature-dependent force exerted on a vesicular dilation along the ICN. Finally, the effects of possible changes in the orientational ordering of the anisotropic component attendant to the transport of the vesicular dilation were discussed with connection to the propagation of electrical and chemical signals. Copyright © 2012 Elsevier B.V. All rights reserved.
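    A minimal sketch of the bending energy such two-component models minimize, written here in the standard Helfrich form with a composition-dependent spontaneous curvature. This is an illustrative simplification: the study's actual functional additionally accounts for the anisotropy and orientational ordering of the tubular component.

```latex
F_b \;=\; \int_A \frac{\kappa}{2}\,\bigl(2H - C_0(\phi)\bigr)^2 \,\mathrm{d}A,
\qquad
C_0(\phi) \;=\; \phi\, C_{0}^{\mathrm{aniso}} + (1-\phi)\, C_{0}^{\mathrm{iso}},
```

where $H$ is the local mean curvature, $\phi$ the local area fraction of the anisotropic (tubular) component, and $\kappa$ the bending modulus. A mismatch between $2H$ on the nanotube and the spontaneous curvature of the isotropic component raises $F_b$, driving the enrichment of each component in the region matching its shape, as described above.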

  20. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
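    The communication-interval trade-off can be illustrated with a toy co-simulation of two coupled first-order subsystems that each integrate with a fine internal step but exchange interface variables only every `comm` seconds. The dynamics, gains, and horizon are invented; the paper's subsystems are far larger:

```python
# Toy interval-based co-simulation: stale interface values between exchanges.

def cosim(comm, t_end=1.0, h=0.001):
    x, y = 1.0, 0.0            # subsystem states
    x_seen, y_seen = x, y      # interface values last communicated
    t, next_comm = 0.0, comm
    while t < t_end:
        x += h * (-x + 0.5 * y_seen)   # subsystem 1 sees a stale y
        y += h * (-y + 0.5 * x_seen)   # subsystem 2 sees a stale x
        t += h
        if t >= next_comm:
            x_seen, y_seen = x, y      # exchange at the communication point
            next_comm += comm
    return x

ref = cosim(0.001)                     # exchange every step ≈ tight coupling
err_small = abs(cosim(0.05) - ref)
err_large = abs(cosim(0.5) - ref)
print(err_small < err_large)           # → True: longer interval, larger error
```

This is the selection problem discussed above: the interval must be long enough to gain speed from decoupling, yet short enough that stale interface data does not distort (or destabilize) the coupled solution.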

  1. High Concentration Standard Aerosol Generator.

    DTIC Science & Technology

    1985-07-31

    Noncommercial Components ... A-1; B. Maintenance Instructions and Material Properties of Purchased Components ... B-1 ... tration (if a lower flow or a wider size distribution is acceptable) and 2) precautions and suggestions for use of different aerosol materials. Additional details of the system (including shop drawings, lists of materials, and maintenance of commercially available components) are given in

  2. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost (see figure). Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. The process uses three mathematical models: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case of the binomial distribution approximates the commonly known exponential, or "constant failure rate," distribution; either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft, as with missiles.
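    The three models named in this record can be sketched as simple functions. This is an illustrative sketch of the standard formulas (binomial greater-than-or-equal-to case, series-system reliability, Poisson less-than-or-equal-to case), not the report's actual process; the function names are invented for illustration.

```python
import math

def binomial_at_least(n, k, p):
    # P(at least k successes in n trials), success probability p per trial.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def series_reliability(component_reliabilities):
    # A series system works only if every component works:
    # R_sys = product of the component reliabilities.
    r = 1.0
    for r_i in component_reliabilities:
        r *= r_i
    return r

def poisson_at_most(k, lam):
    # P(at most k failures) when the failure count is Poisson with mean lam.
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

# Zero-fail binomial case: P(no failures in n trials) = (1 - q)**n, which for
# small per-trial failure probability q approximates exp(-n*q), the
# constant-failure-rate (exponential) form mentioned in the abstract.
```

    For example, a series system of three components each with reliability 0.99 has system reliability 0.99**3, about 0.970.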

  3. Distributed Access View Integrated Database (DAVID) system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1991-01-01

    The Distributed Access View Integrated Database (DAVID) System, which was adopted by the Astrophysics Division for their Astrophysics Data System, is a solution to the system heterogeneity problem. The heterogeneous components of the Astrophysics problem are outlined. The Library and Library Consortium levels of the DAVID approach are described, the 'books' and 'kits' level is discussed, and the Universal Object Typer Management System level is described. The relation of the DAVID project to the Small Business Innovative Research (SBIR) program is explained.

  4. PV System Component Fault and Failure Compilation and Analysis.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Geoffrey Taylor; Lavrova, Olga; Gooding, Renee Lynne

    This report describes data collection and analysis of solar photovoltaic (PV) equipment events, which consist of faults and failures that occur during the normal operation of a distributed PV system or PV power plant. We present summary statistics from locations where maintenance data is being collected at various intervals, as well as reliability statistics gathered from that data, consisting of fault/failure distributions and repair distributions for a wide range of PV equipment types.

  5. Data Applicability of Heritage and New Hardware For Launch Vehicle Reliability Models

    NASA Technical Reports Server (NTRS)

    Al Hassan, Mohammad; Novack, Steven

    2015-01-01

    Bayesian reliability requires the development of a prior distribution to represent degree of belief about the value of a parameter (such as a component's failure rate) before system specific data become available from testing or operations. Generic failure data are often provided in reliability databases as point estimates (mean or median). A component's failure rate is considered a random variable where all possible values are represented by a probability distribution. The applicability of the generic data source is a significant source of uncertainty that affects the spread of the distribution. This presentation discusses heuristic guidelines for quantifying uncertainty due to generic data applicability when developing prior distributions mainly from reliability predictions.
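    A common way to encode such a prior (a generic sketch of standard Bayesian reliability practice, not the specific heuristics of this presentation) is a lognormal distribution whose median equals the generic point estimate and whose spread is set by an error factor chosen to reflect data applicability; 1.645 is the standard normal 95th-percentile value.

```python
import math

def lognormal_prior(point_estimate, error_factor):
    # Lognormal prior with median equal to the generic point estimate.
    # error_factor = 95th percentile / median; a larger value encodes
    # greater uncertainty about the applicability of the generic data.
    mu = math.log(point_estimate)            # exp(mu) is the median
    sigma = math.log(error_factor) / 1.645   # 95th pct = median * error_factor
    return mu, sigma
```

    For instance, a generic failure rate of 1e-3 per hour with an error factor of 10 yields a prior whose 5th and 95th percentiles are 1e-4 and 1e-2.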

  6. An overview of the artificial intelligence and expert systems component of RICIS

    NASA Technical Reports Server (NTRS)

    Feagin, Terry

    1987-01-01

    Artificial intelligence and expert systems are an important component of the RICIS (Research Institute for Computing and Information Systems) research program. For space applications, a number of problem areas should be able to make good use of these tools, including: resource allocation and management, control and monitoring, environmental control and life support, power distribution, communications scheduling, orbit and attitude maintenance, redundancy management, intelligent man-machine interfaces, and fault detection, isolation, and recovery.

  7. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images contained in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification.
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.

  8. GEOSS AIP-2 Climate Change and Biodiversity Use Scenarios: Interoperability Infrastructures (Invited)

    NASA Astrophysics Data System (ADS)

    Nativi, S.; Santoro, M.

    2009-12-01

    Currently, one of the major challenges for the scientific community is the study of climate change effects on life on Earth. To achieve this, it is crucial to understand how climate change will impact biodiversity, and, in this context, several application scenarios require modeling the impact of climate change on the distribution of individual species. In the context of GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot, Phase 2), the Climate Change & Biodiversity thematic Working Group developed three significant user scenarios. Two of them make use of a GEOSS-based framework to study the impact of climate change factors on regional species distribution. The presentation introduces and discusses this framework, which provides an interoperability infrastructure to loosely couple standard services and components to discover and access climate and biodiversity data and to run forecast and processing models. The framework comprises the following main components and services: a) GEO Portal: through this component the end user is able to search, find, and access the services needed for scenario execution; b) Graphical User Interface (GUI): this component provides user-interaction functionality and controls the workflow manager to perform the operations required for the scenario implementation; c) Use Scenario Controller: this component acts as a workflow controller implementing the scenario business process, i.e., a typical climate change & biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue which federates several discovery and access components (exposing them through a unique CSW standard interface).
Federated components publish climate, environmental, and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENM) on selected biodiversity and climate datasets; f) Data Access Transaction Server: this component publishes the model outputs. The framework was successfully tested in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG aiming to predict species distribution changes due to climate change factors, with the scientific patronage of the University of Colorado and the University of Alaska. The first scenario dealt with the regional distribution of the pika species in the Great Basin area (North America), while the second concerned the modeling of Arctic food chain species in the North Pole area: the relationships between different environmental parameters and polar bear distribution were analyzed. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop .

  9. Distribution System Upgrade Unit Cost Database

    DOE Data Explorer

    Horowitz, Kelsey

    2017-11-30

    This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to the integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest, in combination with engineering assessments of the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database were collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where the data provided are sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g., year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.

  10. Cloud-Based Computational Tools for Earth Science Applications

    NASA Astrophysics Data System (ADS)

    Arendt, A. A.; Fatland, R.; Howe, B.

    2015-12-01

    Earth scientists are increasingly required to think across disciplines and utilize a wide range of datasets in order to solve complex environmental challenges. Although significant progress has been made in distributing data, researchers must still invest heavily in developing computational tools to accommodate their specific domain. Here we document our development of lightweight computational data systems aimed at enabling rapid data distribution, analytics and problem solving tools for Earth science applications. Our goal is for these systems to be easily deployable, scalable and flexible to accommodate new research directions. As an example we describe "Ice2Ocean", a software system aimed at predicting runoff from snow and ice in the Gulf of Alaska region. Our backend components include relational database software to handle tabular and vector datasets, Python tools (NumPy, pandas and xray) for rapid querying of gridded climate data, and an energy and mass balance hydrological simulation model (SnowModel). These components are hosted in a cloud environment for direct access across research teams, and can also be accessed via API web services using a REST interface. This API is a vital component of our system architecture, as it enables quick integration of our analytical tools across disciplines, and can be accessed by any existing data distribution centers. We will showcase several data integration and visualization examples to illustrate how our system has expanded our ability to conduct cross-disciplinary research.

  11. Investigation of low-speed turbulent separated flow around airfoils

    NASA Technical Reports Server (NTRS)

    Wadcock, Alan J.

    1987-01-01

    Described is a low-speed wind tunnel experiment to measure the flowfield around a two-dimensional airfoil operating close to maximum lift. Boundary layer separation occurs on the upper surface at x/c = 0.85. A three-component laser velocimeter, coupled with a computer-controlled data acquisition system, was used to obtain three orthogonal mean velocity components and three components of the Reynolds stress tensor in both the boundary layer and wake of the airfoil. Pressure distributions on the airfoil, the skin friction distribution on the upper surface of the airfoil, and integral properties of the airfoil boundary layer are also documented. In addition to these near-field flow properties, static pressure distributions, both upstream and downstream from the airfoil and on the walls of the wind tunnel, are also presented.

  12. Progress Toward Efficient Laminar Flow Analysis and Design

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.; Campbell, Matthew L.; Streit, Thomas

    2011-01-01

    A multi-fidelity system of computer codes for the analysis and design of vehicles having extensive areas of laminar flow is under development at the NASA Langley Research Center. The overall approach consists of the loose coupling of a flow solver, a transition prediction method and a design module using shell scripts, along with interface modules to prepare the input for each method. This approach allows the user to select the flow solver and transition prediction module, as well as run mode for each code, based on the fidelity most compatible with the problem and available resources. The design module can be any method that designs to a specified target pressure distribution. In addition to the interface modules, two new components have been developed: 1) an efficient, empirical transition prediction module (MATTC) that provides n-factor growth distributions without requiring boundary layer information; and 2) an automated target pressure generation code (ATPG) that develops a target pressure distribution that meets a variety of flow and geometry constraints. The ATPG code also includes empirical estimates of several drag components to allow the optimization of the target pressure distribution. The current system has been developed for the design of subsonic and transonic airfoils and wings, but may be extendable to other speed ranges and components. Several analysis and design examples are included to demonstrate the current capabilities of the system.

  13. On-Chip Integrated Distributed Amplifier and Antenna Systems in SiGe BiCMOS for Transceivers with Ultra-Large Bandwidth

    NASA Astrophysics Data System (ADS)

    Testa, Paolo Valerio; Klein, Bernhard; Hahnel, Ronny; Plettemeier, Dirk; Carta, Corrado; Ellinger, Frank

    2017-09-01

    This paper presents an overview of the research work currently being performed within the frame of project DAAB and its successor DAAB-TX towards the integration of ultra-wideband transceivers operating at mm-wave frequencies and capable of data rates up to 100 Gbit/s. Two basic system architectures are being considered: integrating a broadband antenna with a distributed amplifier, and integrating antennas centered at adjacent frequencies with broadband active combiners or dividers. The paper discusses in detail the design of such systems and their components, from the distributed amplifiers and combiners to the broadband silicon antennas and their single-chip integration. All components are designed for fabrication in a commercially available SiGe:C BiCMOS technology. The presented results represent the state of the art in their respective areas: 170 GHz is the highest reported bandwidth for distributed amplifiers integrated in silicon; 89 GHz is the widest reported bandwidth for integrated-system antennas; and the simulated bandwidth of the two-antenna integrated receiver spans 105 GHz centered at 148 GHz, which, if confirmed by measurements, would improve the state of the art by a factor in excess of 4 even against III-V implementations.

  14. Reliability measurement for mixed mode failures of 33/11 kilovolt electric power distribution stations.

    PubMed

    Alwan, Faris M; Baharum, Adam; Hassan, Geehan S

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers on the topic exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station, estimate the reliability value of each component, and estimate the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter [Formula: see text] and shape parameters [Formula: see text] and [Formula: see text]. Our analysis reveals that the reliability value decreased by 38.2% over each 30-day period. To our knowledge, this paper is the first to address this issue, and we suggest the practical use of these results in power system maintenance and preventive maintenance models.
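    The reliability quantities in this record follow directly from the Dagum survival function; the sketch below uses the standard three-parameter Dagum CDF with placeholder parameter values, since the fitted values appear in this record only as formula placeholders.

```python
def dagum_reliability(t, scale_b, shape_a, shape_p):
    # Survival function R(t) = 1 - F(t) for the three-parameter Dagum
    # distribution with CDF F(t) = (1 + (t/b)**(-a))**(-p), for t > 0.
    return 1.0 - (1.0 + (t / scale_b) ** (-shape_a)) ** (-shape_p)

def station_reliability(t, components):
    # Series combination: the station works only if every component works,
    # so the station reliability is the product of component reliabilities.
    r = 1.0
    for scale_b, shape_a, shape_p in components:
        r *= dagum_reliability(t, scale_b, shape_a, shape_p)
    return r
```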

  15. Reliability Measurement for Mixed Mode Failures of 33/11 Kilovolt Electric Power Distribution Stations

    PubMed Central

    Alwan, Faris M.; Baharum, Adam; Hassan, Geehan S.

    2013-01-01

    The reliability of the electrical distribution system is a contemporary research field due to the diverse applications of electricity in everyday life and industry; however, few research papers on the topic exist in the literature. This paper proposes a methodology for assessing the reliability of 33/11 kilovolt high-power stations based on the average time between failures. The objective of this paper is to find the optimal fit for the failure data via the time between failures. We determine the parameter estimates for all components of the station, estimate the reliability value of each component, and estimate the reliability value of the system as a whole. The best-fitting distribution for the time between failures is a three-parameter Dagum distribution with a scale parameter and two shape parameters. Our analysis reveals that the reliability value decreased by 38.2% over each 30-day period. To our knowledge, this paper is the first to address this issue, and we suggest the practical use of these results in power system maintenance and preventive maintenance models. PMID:23936346

  16. Profiling study of the major and minor components of kaffir lime oil (Citrus hystrix DC.) in the fractional distillation process.

    PubMed

    Warsito, Warsito; Palungan, Maimunah Hindun; Utomo, Edy Priyo

    2017-01-01

    Essential oils consist of complex mixtures of components, divided into major and minor components. This study therefore examines the distribution of major and minor components in kaffir lime oil using fractional distillation. Fractional distillation and distributional analysis of components within fractions were performed on kaffir lime oil (Citrus hystrix DC.). Fractional distillation was performed using a PiloDist 104-VTU with a column length of 2 m (120 plates); the system pressure was set at 5 and 10 mbar, the reflux ratio was varied over 10/10, 20/10, and 60/10, and the chemical composition was analyzed by GC-MS. The distilled oil, obtained from mixed twigs and leaves, was composed of 20 compounds, with five main components: β-citronellal (46.40%), L-linalool (13.11%), β-citronellol (11.03%), citronellyl acetate (6.76%), and sabinene (5.91%). The optimum conditions for fractional distillation were obtained at 5 mbar pressure with a reflux ratio of 10/10. β-Citronellal and L-linalool were distributed in fraction-1 to fraction-9, hydrocarbon monoterpene components were distributed only in fraction-1 to fraction-4, while oxygenated monoterpene components dominated fraction-5 to fraction-9. The highest levels were 84.86% for β-citronellal (fraction-7), 20.13% for L-linalool (fraction-5), and 19.83% for sabinene (fraction-1); the levels of 4-terpineol, β-citronellol, and citronellyl acetate were 7.16%, 12.27%, and 5.22%, respectively (fraction-9).

  17. Stability of Fiber Optic Networked Decentralized Distributed Engine Control Under Time Delays

    DTIC Science & Technology

    2009-08-01

    Nomenclature: FADEC = Full Authority Digital Engine Control; D2FADEC = Decentralized Distributed Full Authority Digital Engine Control. In the decentralized distributed architecture, functions of the Full Authority Digital Engine Control (FADEC) are distributed at the component level; each sensor/actuator is to be replaced by a smart sensor.

  18. Supersonic propulsion simulation by incorporating component models in the large perturbation inlet (LAPIN) computer code

    NASA Technical Reports Server (NTRS)

    Cole, Gary L.; Richard, Jacques C.

    1991-01-01

    An approach to simulating the internal flows of supersonic propulsion systems is presented. The approach is based on a fairly simple modification of the Large Perturbation Inlet (LAPIN) computer code. LAPIN uses a quasi-one dimensional, inviscid, unsteady formulation of the continuity, momentum, and energy equations. The equations are solved using a shock capturing, finite difference algorithm. The original code, developed for simulating supersonic inlets, includes engineering models of unstart/restart, bleed, bypass, and variable duct geometry, by means of source terms in the equations. The source terms also provide a mechanism for incorporating, with the inlet, propulsion system components such as compressor stages, combustors, and turbine stages. This requires each component to be distributed axially over a number of grid points. Because of the distributed nature of such components, this representation should be more accurate than a lumped parameter model. Components can be modeled by performance map(s), which in turn are used to compute the source terms. The general approach is described. Then, simulation of a compressor/fan stage is discussed to show the approach in detail.

  19. Chloroplast two-component systems: evolution of the link between photosynthesis and gene expression

    PubMed Central

    Puthiyaveetil, Sujith; Allen, John F.

    2009-01-01

    Two-component signal transduction, consisting of sensor kinases and response regulators, is the predominant signalling mechanism in bacteria. This signalling system originated in prokaryotes and has spread throughout the eukaryotic domain of life through endosymbiotic, lateral gene transfer from the bacterial ancestors and early evolutionary precursors of eukaryotic, cytoplasmic, bioenergetic organelles—chloroplasts and mitochondria. Until recently, it was thought that two-component systems inherited from an ancestral cyanobacterial symbiont are no longer present in chloroplasts. Recent research now shows that two-component systems have survived in chloroplasts as products of both chloroplast and nuclear genes. Comparative genomic analysis of photosynthetic eukaryotes shows a lineage-specific distribution of chloroplast two-component systems. The components and the systems they comprise have homologues in extant cyanobacterial lineages, indicating their ancient cyanobacterial origin. Sequence and functional characteristics of chloroplast two-component systems point to their fundamental role in linking photosynthesis with gene expression. We propose that two-component systems provide a coupling between photosynthesis and gene expression that serves to retain genes in chloroplasts, thus providing the basis of cytoplasmic, non-Mendelian inheritance of plastid-associated characters. We discuss the role of this coupling in the chronobiology of cells and in the dialogue between nuclear and cytoplasmic genetic systems. PMID:19324807

  20. Chloroplast two-component systems: evolution of the link between photosynthesis and gene expression.

    PubMed

    Puthiyaveetil, Sujith; Allen, John F

    2009-06-22

    Two-component signal transduction, consisting of sensor kinases and response regulators, is the predominant signalling mechanism in bacteria. This signalling system originated in prokaryotes and has spread throughout the eukaryotic domain of life through endosymbiotic, lateral gene transfer from the bacterial ancestors and early evolutionary precursors of eukaryotic, cytoplasmic, bioenergetic organelles-chloroplasts and mitochondria. Until recently, it was thought that two-component systems inherited from an ancestral cyanobacterial symbiont are no longer present in chloroplasts. Recent research now shows that two-component systems have survived in chloroplasts as products of both chloroplast and nuclear genes. Comparative genomic analysis of photosynthetic eukaryotes shows a lineage-specific distribution of chloroplast two-component systems. The components and the systems they comprise have homologues in extant cyanobacterial lineages, indicating their ancient cyanobacterial origin. Sequence and functional characteristics of chloroplast two-component systems point to their fundamental role in linking photosynthesis with gene expression. We propose that two-component systems provide a coupling between photosynthesis and gene expression that serves to retain genes in chloroplasts, thus providing the basis of cytoplasmic, non-Mendelian inheritance of plastid-associated characters. We discuss the role of this coupling in the chronobiology of cells and in the dialogue between nuclear and cytoplasmic genetic systems.

  1. An experimental study of an adaptive-wall wind tunnel

    NASA Technical Reports Server (NTRS)

    Celik, Zeki; Roberts, Leonard

    1988-01-01

    A series of adaptive-wall ventilated wind tunnel experiments was carried out to demonstrate the feasibility of using the side-wall pressure distribution as the flow variable for the assessment of compatibility with free-air conditions. Iterative and one-step convergence methods were applied using the streamwise velocity component, the side-wall pressure distribution, and the normal velocity component in order to investigate their relative merits. The advantage of using the side-wall pressure as the flow variable is to reduce the data-taking time, which is one of the major contributors to the total testing time. In ventilated adaptive-wall wind tunnel testing, side-wall pressure measurements require simple instrumentation, as opposed to the Laser Doppler Velocimetry used to measure the velocity components, and influence coefficients are required to determine the pressure corrections in the plenum compartment. Experiments were carried out to evaluate the influence coefficients from side-wall pressure distributions, and from streamwise and normal velocity distributions at two control levels. Velocity measurements were made using a two-component Laser Doppler Velocimeter system.

  2. Lunar PMAD technology assessment

    NASA Technical Reports Server (NTRS)

    Metcalf, Kenneth J.

    1992-01-01

    This report documents an initial set of power conditioning models created to generate 'ballpark' power management and distribution (PMAD) component mass and size estimates. It contains converter, rectifier, inverter, transformer, remote bus isolator (RBI), and remote power controller (RPC) models. These models allow certain studies to be performed; however, additional models are required to assess a full range of PMAD alternatives. The intent is to eventually form a library of PMAD models that will allow system designers to evaluate various power system architectures and distribution techniques quickly and consistently. The models in this report are designed primarily for space exploration initiative (SEI) missions requiring continuous power and supporting manned operations. The mass estimates were developed by identifying the stages in a component and obtaining mass breakdowns for these stages from near term electronic hardware elements. Technology advances were then incorporated to generate hardware masses consistent with the 2000 to 2010 time period. The mass of a complete component is computed by algorithms that calculate the masses of the component stages, control and monitoring, enclosure, and thermal management subsystem.
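    The mass rollup described above can be sketched as a simple algorithm: sum the stage masses, then add allocations for control and monitoring, enclosure, and thermal management. The overhead fractions below are placeholders for illustration, not values from the report.

```python
def pmad_component_mass(stage_masses_kg, control_frac=0.10,
                        enclosure_frac=0.15, thermal_frac=0.20):
    # Total component mass = sum of stage masses plus overheads for
    # control/monitoring, enclosure, and thermal management, modeled
    # here as fractions of the combined stage mass (placeholder values).
    stages = sum(stage_masses_kg)
    return stages * (1.0 + control_frac + enclosure_frac + thermal_frac)
```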

  3. THE EPANET PROGRAMMER'S TOOLKIT FOR ANALYSIS OF WATER DISTRIBUTION SYSTEMS

    EPA Science Inventory

    The EPANET Programmer's Toolkit is a collection of functions that helps simplify computer programming of water distribution network analyses. The functions can be used to read in a pipe network description file, modify selected component properties, run multiple hydraulic and wa...

  4. Full Scale Drinking Water System Decontamination at the Water Security Test Bed.

    PubMed

    Szabo, Jeffrey; Hall, John; Reese, Steve; Goodrich, Jim; Panguluri, Sri; Meiners, Greg; Ernst, Hiba

    2018-03-20

    The EPA's Water Security Test Bed (WSTB) facility is a full-scale representation of a drinking water distribution system. In collaboration with the Idaho National Laboratory (INL), EPA designed the WSTB facility to support full-scale evaluations of water infrastructure decontamination, real-time sensors, mobile water treatment systems, and decontamination of premise plumbing and appliances. The EPA research focused on decontamination of 1) Bacillus globigii (BG) spores, a non-pathogenic surrogate for Bacillus anthracis, and 2) Bakken crude oil. Flushing and chlorination effectively removed most BG spores from the bulk water, but BG spores still remained on the pipe wall coupons. Soluble components of the Bakken crude oil were removed by flushing, although oil components persisted in the dishwasher and refrigerator water dispenser. Using this full-scale distribution system allows EPA to 1) test contaminants without any human health or ecological risk and 2) inform water systems on effective methodologies for responding to possible contamination incidents.

  5. Estimating distributions with increasing failure rate in an imperfect repair model.

    PubMed

    Kvam, Paul H; Singh, Harshinder; Whitaker, Lyn R

    2002-03-01

    A failed system is repaired minimally if after failure, it is restored to the working condition of an identical system of the same age. We extend the nonparametric maximum likelihood estimator (MLE) of a system's lifetime distribution function to test units that are known to have an increasing failure rate. Such items comprise a significant portion of working components in industry. The order-restricted MLE is shown to be consistent. Similar results hold for the Brown-Proschan imperfect repair model, which dictates that a failed component is repaired perfectly with some unknown probability, and is otherwise repaired minimally. The estimators derived are motivated and illustrated by failure data in the nuclear industry. Failure times for groups of emergency diesel generators and motor-driven pumps are analyzed using the order-restricted methods. The order-restricted estimators are consistent and show distinct differences from the ordinary MLEs. Simulation results suggest significant improvement in reliability estimation is available in many cases when component failure data exhibit the IFR property.
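    The Brown-Proschan model described above is straightforward to simulate; this sketch (names and parameters are illustrative, not from the paper) assumes Weibull lifetimes, which have an increasing failure rate when the shape parameter exceeds 1, and draws minimal-repair gaps by conditional inversion of the Weibull survival function.

```python
import math
import random

def simulate_brown_proschan(p, shape, scale, horizon, seed=0):
    # Brown-Proschan imperfect repair: after each failure the unit is
    # repaired perfectly with probability p (virtual age resets to 0),
    # otherwise minimally (virtual age is unchanged).
    rng = random.Random(seed)
    failure_times, clock, age = [], 0.0, 0.0
    while True:
        u = 1.0 - rng.random()  # uniform in (0, 1], avoids log(0)
        # Time to next failure given survival to the current virtual age,
        # by conditional inversion of S(t) = exp(-(t/scale)**shape).
        gap = scale * ((age / scale) ** shape - math.log(u)) ** (1.0 / shape) - age
        clock += gap
        if clock >= horizon:
            return failure_times
        failure_times.append(clock)
        age = 0.0 if rng.random() < p else age + gap
```

    With p = 1 every repair is perfect and the gaps are i.i.d. Weibull; with p = 0 the failure times form a nonhomogeneous Poisson process whose intensity is the Weibull hazard.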

  6. A distributed scheduling algorithm for heterogeneous real-time systems

    NASA Technical Reports Server (NTRS)

    Zeineldine, Osman; El-Toweissy, Mohamed; Mukkamala, Ravi

    1991-01-01

    Much of the previous work on load balancing and scheduling in distributed environments was concerned with homogeneous systems and homogeneous loads. Several of the results indicated that random policies are as effective as other, more complex load allocation policies. Here, the effects of heterogeneity on scheduling algorithms for hard real-time systems are examined. A distributed scheduler designed specifically to handle heterogeneities in both nodes and node traffic is proposed. The performance of the algorithm is measured in terms of the percentage of jobs discarded. While random task allocation is very sensitive to heterogeneities, the proposed algorithm is shown to be robust to such non-uniformities in system components and load.
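    The abstract's central claim (random allocation degrades sharply under heterogeneity while a heterogeneity-aware scheduler stays robust) can be sketched with a toy simulation. The node speeds, arrival process, deadlines, and the simple earliest-finish-time policy are all invented for illustration; the paper's actual algorithm is more sophisticated:

```python
import random

def discard_rate(policy, n_jobs=2000, seed=7):
    """Fraction of hard real-time jobs that cannot meet their deadline."""
    rng = random.Random(seed)
    speeds = [1.0, 1.0, 0.25, 0.25]    # heterogeneous node speeds (work/sec)
    free_at = [0.0] * len(speeds)      # time at which each node next becomes idle
    t, discarded = 0.0, 0
    for _ in range(n_jobs):
        t += rng.expovariate(2.0)      # Poisson job arrivals
        work = rng.uniform(0.5, 1.5)   # job size in work units
        deadline = t + 2.0             # hard deadline
        if policy == "random":
            n = rng.randrange(len(speeds))
        else:                          # earliest finish time, speed- and queue-aware
            n = min(range(len(speeds)),
                    key=lambda i: max(free_at[i], t) + work / speeds[i])
        finish = max(free_at[n], t) + work / speeds[n]
        if finish > deadline:
            discarded += 1             # job is discarded, not queued
        else:
            free_at[n] = finish
    return discarded / n_jobs
```

    With these numbers, random placement sends roughly half the jobs to nodes too slow to meet any deadline, while the speed-aware policy discards far fewer.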

  7. Discrete Wavelet Transform for Fault Locations in Underground Distribution System

    NASA Astrophysics Data System (ADS)

    Apisit, C.; Ngaopitakkul, A.

    2010-10-01

    In this paper, a technique for detecting faults in an underground distribution system is presented. The Discrete Wavelet Transform (DWT), applied to traveling waves, is employed to detect the high frequency components and to identify fault locations in the underground distribution system. The first peak time obtained from the faulty bus is employed for calculating the distance of the fault from the sending end. The validity of the proposed technique is tested with various fault inception angles, fault locations and faulty phases. The results show that the proposed technique performs satisfactorily and will be very useful in the development of power system protection schemes.
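    The traveling-wave idea, locating the fault from the delay between the first surge and its reflection as seen in the high-frequency DWT components, can be sketched as follows. This is a toy illustration with a synthetic waveform, a one-level Haar transform standing in for the paper's DWT, and an assumed propagation velocity, not the paper's method or data:

```python
import math

def haar_detail(signal):
    """One-level Haar DWT detail coefficients (scaled pairwise differences)."""
    return [(signal[2*i] - signal[2*i + 1]) / math.sqrt(2)
            for i in range(len(signal) // 2)]

def fault_distance(signal, fs, v):
    """Distance from the measuring end: v * (delay between the first two
    detail-coefficient spikes) / 2."""
    d = haar_detail(signal)
    rms = math.sqrt(sum(x * x for x in d) / len(d))
    spikes = [i for i, x in enumerate(d) if abs(x) > 5 * rms]
    dt = (spikes[1] - spikes[0]) * 2 / fs   # one detail sample spans 2 raw samples
    return v * dt / 2

# synthetic 50 Hz waveform at 1 MHz sampling with two travelling-wave fronts
fs, v = 1_000_000, 2.0e8           # sample rate (Hz) and assumed wave speed (m/s)
sig = [math.sin(2 * math.pi * 50 * n / fs) for n in range(2000)]
for n in range(1001, 2000):        # incident surge reaches the measuring bus
    sig[n] += 0.5
for n in range(1041, 2000):        # reflection from the fault, 40 samples later
    sig[n] -= 0.3
dist = fault_distance(sig, fs, v)  # 2e8 * (40 / 1e6) / 2 = 4000 m
```

    The smooth 50 Hz component produces tiny detail coefficients, so the two wavefront discontinuities stand out clearly above the RMS-based threshold.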

  8. SURE (Science User Resource Expert): A science planning and scheduling assistant for a resource based environment

    NASA Technical Reports Server (NTRS)

    Thalman, Nancy E.; Sparn, Thomas P.

    1990-01-01

    SURE (Science User Resource Expert) is one of three components that make up SURPASS (the Science User Resource Planning and Scheduling System), a planning and scheduling tool which supports distributed planning and scheduling based on resource allocation and optimization. Currently SURE is being used within SURPASS by the UARS (Upper Atmosphere Research Satellite) SOLSTICE instrument to build a daily science plan and activity schedule, and in a prototyping effort with NASA GSFC to demonstrate distributed planning and scheduling for the SOLSTICE II instrument on the EOS platform. For the SOLSTICE application SURE utilizes a rule-based system. Developing a rule-based program in Ada CLIPS, as opposed to conventional programming, captures the science planning and scheduling heuristics in rules and provides flexibility in inserting or removing rules as the scientific objectives and mission constraints change. The SURE system's role as a component of SURPASS, the purpose of the SURE planning and scheduling tool, the SURE knowledge base, and the software architecture of the SURE component are described.

  9. Power components for the Space Station 20-kHz power distribution system

    NASA Technical Reports Server (NTRS)

    Renz, David D.

    1988-01-01

    Since 1984, NASA Lewis Research Center has been developing high-power, high-frequency space power components as part of the Space Station Advanced Development program. The purpose of the Advanced Development program was to accelerate existing component programs to ensure their availability for use on the Space Station. These components include a rotary power transfer device, remote power controllers, remote bus isolators, high-power semiconductors, a high-power semiconductor package, high-frequency high-power cable, high-frequency high-power connectors, and high-frequency high-power transformers. All the components were developed to the prototype level and will be installed in the Lewis Research Center Space Station power system test bed.

  11. Framework for teleoperated microassembly systems

    NASA Astrophysics Data System (ADS)

    Reinhart, Gunther; Anton, Oliver; Ehrenstrasser, Michael; Patron, Christian; Petzold, Bernd

    2002-02-01

    Manual assembly of minute parts is currently done using simple devices such as tweezers or magnifying glasses. The operator therefore requires a great deal of concentration for successful assembly. Teleoperated micro-assembly systems are a promising method for overcoming the scaling barrier. However, most of today's telepresence systems are based on proprietary and one-of-a-kind solutions. Frameworks which supply the basic functions of a telepresence system, e.g. to establish flexible communication links that depend on bandwidth requirements or to synchronize distributed components, are not currently available. Large amounts of time and money have to be invested in order to create task-specific teleoperated micro-assembly systems from scratch. For this reason, an object-oriented framework for telepresence systems that is based on CORBA as a common middleware was developed at the Institute for Machine Tools and Industrial Management (iwb). The framework is based on a distributed architectural concept and is realized in C++. External hardware components such as haptic, video or sensor devices are coupled to the system by means of defined software interfaces. In this case, the special requirements of teleoperation systems have to be considered, e.g. dynamic parameter settings for sensors during operation. Consequently, an architectural concept based on logical sensors has been developed to achieve maximum flexibility and to enable a task-oriented integration of hardware components.

  12. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    USGS Publications Warehouse

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model, with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional.
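    Under the triangular model, the mean, variance, and fractiles listed in the abstract all have closed forms, which is what makes the analytic (non-Monte-Carlo) aggregation fast. A sketch with two invented provinces, using the paper's exceedance convention in which F95 is the value exceeded with probability 0.95:

```python
import math

def tri_stats(a, m, b):
    """Mean and standard deviation of Triangular(min=a, mode=m, max=b)."""
    mean = (a + m + b) / 3
    var = (a*a + m*m + b*b - a*m - a*b - m*b) / 18
    return mean, math.sqrt(var)

def tri_fractile(a, m, b, p_exceed):
    """Value exceeded with probability p_exceed (the F95, F50, ... convention)."""
    p = 1 - p_exceed                     # convert to an ordinary CDF probability
    fm = (m - a) / (b - a)               # CDF value at the mode
    if p <= fm:
        return a + math.sqrt(p * (b - a) * (m - a))
    return b - math.sqrt((1 - p) * (b - a) * (b - m))

# two provinces with (min, mode, max) resource estimates (invented)
provinces = [(0.0, 2.0, 10.0), (1.0, 3.0, 8.0)]
stats = [tri_stats(*p) for p in provinces]

# the aggregate mean is the sum of means in every correlation case
agg_mean = sum(mu for mu, _ in stats)
# perfect positive correlation: standard deviations add
sd_corr = sum(sd for _, sd in stats)
# complete independence: variances add
sd_indep = math.sqrt(sum(sd * sd for _, sd in stats))
```

    Intermediate degrees of dependence interpolate between `sd_indep` and `sd_corr`, which is the third aggregation situation the abstract mentions.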

  13. Dynamical evolution of young binaries and multiple systems

    NASA Astrophysics Data System (ADS)

    Reipurth, B.

    Most stars, and perhaps all, are born in small multiple systems whose components interact, leading to chaotic dynamic behavior. Some components are ejected, either into distant orbits or into outright escapes, while the remaining components form temporary and eventually permanent binary systems. More than half of all such breakups of multiple systems occur during the protostellar phase, leading to the occasional ejection of protostars outside their nascent cloud cores. Such orphaned protostars are observed as wide companions to embedded protostars, and thus allow the direct study of protostellar objects. Dynamic interactions during early stellar evolution explain the shape and enormous width of the separation distribution function of binaries, from close spectroscopic binaries to the widest binaries.

  14. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system covering computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for all the subsystems is essential for developing the required higher-level services (the components that provide decision support and some degree of automated decisions) and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs, and automated management of remote services among a large set of grid facilities.

  15. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities: they can avoid voltage sags and other distribution-system disturbances by supplying electricity to important loads from their own generators. As another example, FRIENDS, a highly reliable distribution system using semiconductor switches and storage devices based on power electronics technology, has been proposed. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT), which provides a means of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components arising from the discontinuity of the current waveform at both the beginning and the end of inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
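    The wavelet-thresholding step, flagging DWT detail coefficients whose magnitude exceeds a noise-scaled threshold, can be sketched with a one-level Haar transform and the standard universal threshold. The waveform, sampling rate, and wavelet choice here are invented; the paper's transformer data and exact thresholding rule differ:

```python
import math

def haar_detail(x):
    """One-level Haar DWT detail coefficients (scaled pairwise differences)."""
    return [(x[2*i] - x[2*i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]

def spike_indices(x):
    """Indices of detail coefficients exceeding the universal threshold
    sigma * sqrt(2 ln N), with sigma estimated from the median absolute
    coefficient (a standard robust noise estimate)."""
    d = haar_detail(x)
    med = sorted(abs(c) for c in d)[len(d) // 2]
    sigma = med / 0.6745
    thresh = sigma * math.sqrt(2 * math.log(len(d)))
    return [i for i, c in enumerate(d) if abs(c) > thresh]

# 50 Hz current waveform sampled at 10 kHz with one step discontinuity,
# standing in for the onset of magnetizing inrush (all values invented)
sig = [math.sin(2 * math.pi * 50 * n / 10_000) for n in range(512)]
for n in range(301, 512):
    sig[n] += 1.0
spikes = spike_indices(sig)   # only the pair (300, 301) straddles the step
```

    The smooth sinusoid yields small detail coefficients everywhere except at the discontinuity, so a single spike index is reported.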

  16. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. 
First, it decomposes a complex system hierarchically; second, it combines the components at the same level into a module and designs common interfaces for all of the components in that module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs), as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decompose a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, the approach balances communication cost against the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. However, in a real system, sub-Bayesian networks acting as nodes could be lost, and the communication network could be shut down due to partial damage in the system.
Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
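    The fully factorized Boyen-Koller idea used for local DBN belief updating keeps only per-variable marginals, reassembles an approximate joint as their product, pushes it through the transition model, and re-marginalizes. A minimal sketch for two coupled binary state variables; the transition tables are invented, and evidence incorporation is omitted:

```python
def bk_step(px, py, tx, ty):
    """One fully factorized Boyen-Koller update.
    px, py: current marginals P(X=1), P(Y=1).
    tx[a][b] = P(X'=1 | X=a, Y=b); ty likewise for Y'."""
    pa = [1.0 - px, px]   # approximate joint = product of the two marginals
    pb = [1.0 - py, py]
    new_px = sum(pa[a] * pb[b] * tx[a][b] for a in (0, 1) for b in (0, 1))
    new_py = sum(pa[a] * pb[b] * ty[a][b] for a in (0, 1) for b in (0, 1))
    return new_px, new_py  # projected back onto the factored form

tx = [[0.1, 0.4], [0.6, 0.9]]  # X' depends on both X and Y (made-up tables)
ty = [[0.2, 0.3], [0.7, 0.8]]
px, py = 1.0, 0.0              # start from a known joint state (X=1, Y=0)
for _ in range(25):            # iterate the approximate filter forward
    px, py = bk_step(px, py, tx, ty)
```

    The projection onto independent marginals introduces an approximation error at every step, but the Boyen-Koller result is that for stochastic dynamics this error stays bounded rather than accumulating, which is why the dissertation can afford the cheap fully factorized form for local updates.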

  17. Optoelectronics in TESLA, LHC, and pi-of-the-sky experiments

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Wrochna, Grzegorz; Simrock, Stefan

    2004-09-01

    Optical and optoelectronic technologies are more and more widely used in the biggest world experiments of high energy and nuclear physics, as well as in astronomy. The paper is a broad digest describing the usage of optoelectronics in such experiments, with information about some of the involved teams. The described experiments include: the TESLA linear accelerator and FEL, the Compact Muon Solenoid at LHC, and the recently started π-of-the-sky experiment observing global gamma ray bursts and their associated optical flashes. Optoelectronics and photonics offer several key features which either extend the technical parameters of existing solutions or add quite new practical application possibilities. Some of these favorable features of photonic systems are: high selectivity of optical sensors, immunity to some kinds of noise processes, extremely broad bandwidth exchangeable for either terabit-rate transmission or ultrashort pulse generation, parallel image processing capability, etc. The following groups of photonic components and systems are described: (1) discrete component applications such as LEDs, PDs, LDs, CCD and CMOS cameras, and active optical crystals and optical fibers, used in radiation dosimetry, astronomical image processing, and the building of more complex photonic systems; (2) optical fiber networks serving as very stable phase distribution, clock signal distribution, distributed dosimeters, and distributed gigabit transmission for control, diagnostics and data acquisition/processing; (3) fast and stable coherent femtosecond laser systems with active optical components for electro-optical sampling and photocathode excitation in the RF electron gun for the linac. The parameters of some of these systems are quoted and discussed. A number of the debated solutions seem competitive against the classical ones. Several future fields seem to emerge involving direct coupling between ultrafast photonic and VLSI FPGA based technologies.

  18. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface; a similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used to design both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that lays out a logical, effective way to develop and integrate the advancing technologies.

  19. Life and reliability models for helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Savage, M.; Knorr, R. J.; Coy, J. J.

    1982-01-01

    Computer models of life and reliability are presented for planetary gear trains with a fixed ring gear, input applied to the sun gear, and output taken from the planet arm. For this transmission the input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. The reliability model is based on the Weibull distributions of the individual reliabilities of the transmission components. The system model is also a Weibull distribution. The load-versus-life model for the system is a power relationship, as are the models for the individual components. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities. The models are used to compare three- and four-planet, 150 kW (200 hp), 5:1-reduction transmissions with 1500 rpm input speed to illustrate their use.
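    The system model described here, a series system whose reliability is the product of Weibull component reliabilities and which is itself Weibull when the components share a common slope, can be sketched as follows. The characteristic lives and slope are invented, not the paper's transmission data:

```python
import math

def weibull_rel(t, eta, beta):
    """Two-parameter Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def system_rel(t, components):
    """Series system: every component must survive, so reliabilities multiply."""
    r = 1.0
    for eta, beta in components:
        r *= weibull_rel(t, eta, beta)
    return r

# hypothetical components (characteristic life in hours, common Weibull slope)
parts = [(9000.0, 2.5), (12000.0, 2.5), (7000.0, 2.5)]

# with a common slope the system is again Weibull, with
# eta_sys = (sum of eta_i^-beta)^(-1/beta)
beta = 2.5
eta_sys = sum(eta ** -beta for eta, _ in parts) ** (-1.0 / beta)
```

    The closed-form `eta_sys` is what lets the system life model keep the same power-law load-life form as its components.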

  20. ATLAS EventIndex monitoring system using the Kibana analytics and visualization platform

    NASA Astrophysics Data System (ADS)

    Barberis, D.; Cárdenas Zárate, S. E.; Favareto, A.; Fernandez Casani, A.; Gallas, E. J.; Garcia Montoro, C.; Gonzalez de la Hoz, S.; Hrivnac, J.; Malon, D.; Prokoshin, F.; Salt, J.; Sanchez, J.; Toebbicke, R.; Yuan, R.; ATLAS Collaboration

    2016-10-01

    The ATLAS EventIndex is a data catalogue system that stores event-related metadata for all (real and simulated) ATLAS events, at all processing stages. As it consists of different components that depend on other applications (such as distributed storage and different sources of information), we need to monitor the conditions of many heterogeneous subsystems to make sure everything is working correctly. This paper describes how we gather information about the EventIndex components and related subsystems: the Producer-Consumer architecture for data collection, health parameters from the servers that run EventIndex components, the EventIndex web interface status, and the Hadoop infrastructure that stores EventIndex data. This information is collected, processed, and then displayed using CERN service monitoring software based on the Kibana analytics and visualization package, provided by the CERN IT Department. EventIndex monitoring is used both by the EventIndex team and the ATLAS Distributed Computing shift crew.

  1. Software For Graphical Representation Of A Network

    NASA Technical Reports Server (NTRS)

    Mcallister, R. William; Mclellan, James P.

    1993-01-01

    The System Visualization Tool (SVT) computer program was developed to provide systems engineers with a means of graphically representing networks. Generates diagrams illustrating structures and states of networks defined by users. Provides systems engineers a powerful tool simplifying analysis of requirements and testing and maintenance of complex software-controlled systems. Employs visual models supporting analysis of chronological sequences of requirements, simulation data, and related software functions. Applied to pneumatic, hydraulic, and propellant-distribution networks. Used to define and view arbitrary configurations of such major hardware components of a system as propellant tanks, valves, propellant lines, and engines. Also graphically displays status of each component. Advantage of SVT: utilizes visual cues to represent configuration of each component within the network. Written in Turbo Pascal(R), version 5.0.

  2. Space shuttle solid rocket booster recovery system definition, volume 1

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The performance requirements, preliminary designs, and development program plans for an airborne recovery system for the space shuttle solid rocket booster are discussed. The analyses performed during the study phase of the program are presented. The basic considerations which established the system configuration are defined. A Monte Carlo statistical technique using random sampling of the probability distributions for the critical water impact parameters was used to determine the failure probability of each solid rocket booster component as a function of impact velocity and component strength capability.
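    The Monte Carlo step described, sampling the impact-parameter distributions and counting how often load exceeds component strength, can be sketched like this. The velocity distribution, the simple dynamic-pressure load model, and the strength values are all invented for illustration, not the study's:

```python
import random

def failure_probability(strength_mean, strength_sd, n=50_000, seed=0):
    """Monte Carlo estimate of P(water-impact load > component strength)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        v = rng.gauss(25.0, 3.0)               # impact velocity, m/s (assumed)
        load = 0.5 * 1000.0 * v * v / 1000.0   # notional dynamic pressure, kPa
        strength = rng.gauss(strength_mean, strength_sd)
        if load > strength:
            failures += 1                      # component fails on this sample
    return failures / n
```

    Repeating the estimate over a grid of velocity distributions and component strengths yields failure probability as a function of those two quantities, which is the form of result the study reports per booster component.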

  3. Profiling study of the major and minor components of kaffir lime oil (Citrus hystrix DC.) in the fractional distillation process

    PubMed Central

    Warsito, Warsito; Palungan, Maimunah Hindun; Utomo, Edy Priyo

    2017-01-01

    Introduction: Essential oils consist of complex mixtures of major and minor components. This study therefore aims to examine the distribution of major and minor components of kaffir lime oil (Citrus hystrix DC.) using fractional distillation; fractional distillation and distributional analysis of the components within fractions were performed on the oil. Methods: Fractional distillation was performed using a PiloDist 104-VTU with a column length of 2 m (120 plates); the system pressure was set at 5 and 10 mbar while the reflux ratio was varied over 10/10, 20/10 and 60/10, and the chemical composition analysis was done by GC-MS. The distilled lime oil, obtained from mixed twigs and leaves, comprised 20 compounds, with five main components: β-citronellal (46.40%), L-linalool (13.11%), β-citronellol (11.03%), citronellyl acetate (6.76%) and sabinene (5.91%). Results: The optimum conditions for fractional distillation were obtained at 5 mbar pressure with a reflux ratio of 10/10. β-citronellal and L-linalool were distributed in fraction-1 to fraction-9, hydrocarbon monoterpene components were distributed only in fraction-1 to fraction-4, while oxygenated monoterpene components dominated fraction-5 to fraction-9. Conclusion: The highest level of β-citronellal was 84.86% (fraction-7), L-linalool 20.13% (fraction-5), and sabinene 19.83% (fraction-1); the levels of 4-terpineol, β-citronellol and citronellyl acetate were 7.16%, 12.27% and 5.22% respectively (fraction-9). PMID:29187951

  4. LEGOS: Object-based software components for mission-critical systems. Final report, June 1, 1995--December 31, 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-08-01

    An estimated 85% of the installed base of software is a custom application with a production quantity of one. In practice, almost 100% of military software systems are custom software. Paradoxically, the marginal costs of producing additional units are near zero. So why hasn't the software market, a market with high design costs and low production costs, evolved like other similar custom widget industries, such as automobiles and hardware chips? The military software industry seems immune to the market pressures that have motivated a multilevel supply chain structure in other widget industries: design cost recovery, improved quality through specialization, and rapid assembly from purchased components. The primary goal of the ComponentWare Consortium (CWC) technology plan was to overcome barriers to building and deploying mission-critical information systems by using verified, reusable software components (ComponentWare). The adoption of the ComponentWare infrastructure is predicated upon a critical mass of the leading platform vendors' inevitable adoption of emerging, object-based, distributed computing frameworks--initially CORBA and COM/OLE. The long-range goal of this work is to build and deploy military systems from verified reusable architectures. The promise of component-based applications is to enable developers to snap together new applications by mixing and matching prefabricated software components. A key result of this effort is the concept of reusable software architectures. A second important contribution is the notion that a software architecture is something that can be captured in a formal language and reused across multiple applications. The formalization and reuse of software architectures provide major cost and schedule improvements. The Unified Modeling Language (UML) is fast becoming the industry standard for object-oriented analysis and design notation for object-based systems.
However, the lack of a standard real-time distributed object operating system, the lack of a standard Computer-Aided Software Environment (CASE) tool notation, and the lack of a standard CASE tool repository have limited the realization of component software. The approach to fulfilling this need is the software component factory innovation. The factory approach takes advantage of emerging standards such as UML, CORBA, Java and the Internet. The key technical innovation of the software component factory is the ability to assemble and test new system configurations, as well as assemble new tools on demand from existing tools and architecture design repositories.

  5. Space industrialization - Education. [via communication satellites

    NASA Technical Reports Server (NTRS)

    Joels, K. M.

    1978-01-01

    The components of an educational system based on, and perhaps enhanced by, space industrialization communications technology are considered. Satellite technology has introduced a synoptic distribution system for various transmittable educational media. The cost of communications satellite distribution for educational programming has been high. It has, therefore, been proposed to utilize Space Shuttle related technology and Large Space Structures (LSS) to construct a system with a quantum advancement in communication capability and a quantum reduction in user cost. LSS for communications purposes have three basic advantages for both developed and emerging nations, including the ability to distribute signals over wide geographic areas, the reduced cost of satellite communications systems versus installation of land based systems, and the ability of a communication satellite system to create instant educational networks.

  6. Final Report: Studies in Structural, Stochastic and Statistical Reliability for Communication Networks and Engineered Systems

    DTIC Science & Technology

    to do so, and (5) three distinct versions of the problem of estimating component reliability from system failure-time data are treated, each resulting in consistent estimators with asymptotically normal distributions.

  7. Adaptive, full-spectrum solar energy system

    DOEpatents

    Muhs, Jeffrey D.; Earl, Dennis D.

    2003-08-05

    An adaptive full spectrum solar energy system having at least one hybrid solar concentrator, at least one hybrid luminaire, at least one hybrid photobioreactor, and a light distribution system operably connected to each hybrid solar concentrator, each hybrid luminaire, and each hybrid photobioreactor. A lighting control system operates each component.

  8. New Generation Power System for Space Applications

    NASA Technical Reports Server (NTRS)

    Jones, Loren; Carr, Greg; Deligiannis, Frank; Lam, Barbara; Nelson, Ron; Pantaleon, Jose; Ruiz, Ian; Treicler, John; Wester, Gene; Sauers, Jim; hide

    2004-01-01

The Deep Space Avionics (DSA) Project is developing a new generation of power system building blocks. Using application specific integrated circuits (ASICs) and power switching modules, a scalable power system can be constructed for use on multiple deep space missions, including future missions to Mars, comets, Jupiter and its moons. The key developments of the DSA power system effort are five power ASICs and a module for power switching. These components enable a modular and scalable design approach, which can result in a wide variety of power system architectures to meet diverse mission requirements and environments. Each component is radiation hardened to one megarad total dose. The power switching module can be used for power distribution to regular spacecraft loads, to propulsion valves, and for actuation of pyrotechnic devices. The number of switching elements per load, pyrotechnic firings, and valve drivers can be scaled depending on mission needs. Telemetry data is available from the switch module via an I2C data bus. The DSA power system components enable power management and distribution for a variety of power buses and power system architectures employing different types of energy storage and power sources. This paper will describe each power ASIC's key performance characteristics as well as recent prototype test results. The power switching module test results will be discussed and will demonstrate its versatility as a multipurpose switch. Finally, the combination of these components will illustrate some of the possible power system architectures achievable, from small single string systems to large fully redundant systems.

  9. Condition monitoring of distributed systems using two-stage Bayesian inference data fusion

    NASA Astrophysics Data System (ADS)

    Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł

    2017-03-01

    In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. 
The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of each individual element, as well as the entire piece of machinery.
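The two-stage fusion scheme described above can be sketched with a toy discrete Bayes model. This is a minimal illustration only: the two-state health model, the sensor likelihood values, and the component names are invented for the example, and the paper's actual models are considerably richer.

```python
def bayes_fuse(prior, likelihoods):
    """Combine a prior with independent evidence likelihoods (naive Bayes)."""
    post = list(prior)
    for lik in likelihoods:
        post = [p * l for p, l in zip(post, lik)]
    total = sum(post)
    return [p / total for p in post]

# Stage 1: local fusion -- each component fuses only its own sensors.
# States: index 0 = healthy, 1 = faulty (illustrative two-state model).
prior = [0.9, 0.1]
motor_post = bayes_fuse(prior, [[0.7, 0.3], [0.6, 0.4]])    # vibration, current
gearbox_post = bayes_fuse(prior, [[0.2, 0.8], [0.3, 0.7]])  # vibration, temperature

# Stage 2: global fusion -- component-level faulty beliefs are combined
# into a machine-level root-cause distribution over (motor, gearbox).
faulty = [motor_post[1], gearbox_post[1]]
root_cause = [f / sum(faulty) for f in faulty]
print([round(p, 3) for p in root_cause])  # gearbox dominates
```

The local stage suppresses single-sensor noise before the global stage attributes the root cause, which mirrors the component-level/global-level split described in the abstract.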

  10. Visualizing Java uncertainty

    NASA Astrophysics Data System (ADS)

    Knight, Claire; Munro, Malcolm

    2001-07-01

    Distributed component based systems seem to be the immediate future for software development. The use of such techniques, object oriented languages, and the combination with ever more powerful higher-level frameworks has led to the rapid creation and deployment of such systems to cater for the demand of internet and service driven business systems. This diversity of solution through both components utilised and the physical/virtual locations of those components can provide powerful resolutions to the new demand. The problem lies in the comprehension and maintenance of such systems because they then have inherent uncertainty. The components combined at any given time for a solution may differ, the messages generated, sent, and/or received may differ, and the physical/virtual locations cannot be guaranteed. Trying to account for this uncertainty and to build in into analysis and comprehension tools is important for both development and maintenance activities.

  11. Guest Editorial Introduction to the Special Issue on 'Advanced Signal Processing Techniques and Telecommunications Network Infrastructures for Smart Grid Analysis, Monitoring, and Management'

    DOE PAGES

    Bracale, Antonio; Barros, Julio; Cacciapuoti, Angela Sara; ...

    2015-06-10

Electrical power systems are undergoing a radical change in structure, components, and operational paradigms, and are progressively approaching the new concept of smart grids (SGs). Future power distribution systems will be characterized by the simultaneous presence of various distributed resources, such as renewable energy systems (e.g., photovoltaic plants and wind farms), storage systems, and controllable/non-controllable loads. Control and optimization architectures will enable network-wide coordination of these grid components in order to improve system efficiency and reliability and to limit greenhouse gas emissions. In this context, energy flows will be bidirectional, from large power plants to end users and vice versa; producers and consumers will continuously interact at different voltage levels to determine the requests of loads in advance and to adapt the production and demand for electricity flexibly and efficiently, also taking into account the presence of storage systems.

  12. How robust are distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1989-01-01

A distributed system is made up of large numbers of components operating asynchronously from one another and hence with incomplete and inaccurate views of one another's state. Load fluctuations are common as new tasks arrive and active tasks terminate. Jointly, these aspects make it nearly impossible to arrive at detailed predictions for a system's behavior. It is important to the successful use of distributed systems, in situations in which humans cannot provide the sort of predictable real-time responsiveness of a computer, that the system be robust. The technology of today can too easily be affected by worm programs or by seemingly trivial mechanisms that, for example, can trigger stock market disasters. Inventors of a technology have an obligation to overcome flaws that can exact a human cost. A set of principles for guiding solutions to distributed computing problems is presented.

  13. Price percolation model

    NASA Astrophysics Data System (ADS)

    Kanai, Yasuhiro; Abe, Keiji; Seki, Yoichi

    2015-06-01

    We propose a price percolation model to reproduce the price distribution of components used in industrial finished goods. The intent is to show, using the price percolation model and a component category as an example, that percolation behaviors, which exist in the matter system, the ecosystem, and human society, also exist in abstract, random phenomena satisfying the power law. First, we discretize the total potential demand for a component category, considering it a random field. Second, we assume that the discretized potential demand corresponding to a function of a finished good turns into actual demand if the difficulty of function realization is less than the maximum difficulty of the realization. The simulations using this model suggest that changes in a component category's price distribution are due to changes in the total potential demand corresponding to the lattice size and the maximum difficulty of realization, which is an occupation probability. The results are verified using electronic components' sales data.
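The demand-discretization step can be sketched as follows. This is a minimal illustration: difficulties are drawn uniformly on [0, 1] so that the maximum difficulty acts directly as the occupation probability, the lattice size is invented, and the model's subsequent cluster/price analysis is omitted.

```python
import random

def percolate_demand(lattice_size, max_difficulty, seed=0):
    """Discretized potential demand as a random field: a site turns into
    actual demand when the difficulty of realizing the corresponding
    function falls below max_difficulty (the occupation probability)."""
    rng = random.Random(seed)
    return [[rng.random() < max_difficulty for _ in range(lattice_size)]
            for _ in range(lattice_size)]

def occupied_fraction(demand):
    total = sum(sum(row) for row in demand)
    return total / (len(demand) * len(demand[0]))

grid = percolate_demand(lattice_size=200, max_difficulty=0.6)
print(round(occupied_fraction(grid), 2))  # close to the occupation probability
```

On a large lattice the occupied fraction concentrates around the occupation probability, which is the control parameter the abstract links to changes in the price distribution.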

  14. A distributed telerobotics construction set

    NASA Technical Reports Server (NTRS)

    Wise, James D.

    1994-01-01

    During the course of our research on distributed telerobotic systems, we have assembled a collection of generic, reusable software modules and an infrastructure for connecting them to form a variety of telerobotic configurations. This paper describes the structure of this 'Telerobotics Construction Set' and lists some of the components which comprise it.

  15. Heterogeneous Integration Technology

    DTIC Science & Technology

    2017-05-19

Distribution A. Approved for public release; distribution unlimited. (APRS-RY-17-0383) Heterogeneous Integration Technology, Dr. Burhan... [Only report front matter survives in this record, including partial list-of-figures captions: Figure 3, "3D integration of similar or diverse technology components follows More Moore and..."; Figure 4, "Many different technologies are used in the implementation of modern microelectronics systems..."]

  16. Power management and distribution system for a More-Electric Aircraft (MADMEL) -- Program status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maldonado, M.A.; Shah, N.M.; Cleek, K.J.

    1995-12-31

A number of technology breakthroughs in recent years have rekindled the concept of a more-electric aircraft. High-power solid-state switching devices, electrohydrostatic actuators (EHAs), electromechanical actuators (EMAs), and high-power generators are just a few examples of component developments that have brought dramatic improvements in properties such as weight, size, power, and cost. However, these components cannot be applied piecemeal. A complete, and somewhat revolutionary, system design approach is needed to exploit the benefits that a more-electric aircraft can provide. A five-phase Power Management and Distribution System for a More-Electric Aircraft (MADMEL) program was awarded by the Air Force to the Northrop Grumman Military Aircraft Division team in September 1991. The objective of the program is to design, develop, and demonstrate an advanced electrical power generation and distribution system for a more-electric aircraft (MEA). The MEA emphasizes the use of electrical power in place of hydraulic, pneumatic, and mechanical power to optimize the performance and life cycle cost of the aircraft. This paper presents an overview of the MADMEL program and a top-level summary of the program results and of the development and testing of major components to date. In Phase 1 and Phase 2 studies, the electrical load requirements were established and the electrical power system architecture was defined for both near-term (NT, year 1996) and far-term (FT, year 2003) MEA applications. The detailed design and specification for the electrical power system (EPS), its interface with the Vehicle Management System, and the test set-up were developed under the recently completed Phase 3. The subsystem-level hardware fabrication and testing will be performed under the ongoing Phase 4 activities. The overall system-level integration and testing will be performed in Phase 5.

  17. Effects of Interaction Imbalance in a Strongly Repulsive One-Dimensional Bose Gas

    NASA Astrophysics Data System (ADS)

    Barfknecht, R. E.; Foerster, A.; Zinner, N. T.

    2018-05-01

    We calculate the spatial distributions and the dynamics of a few-body two-component strongly interacting Bose gas confined to an effectively one-dimensional trapping potential. We describe the densities for each component in the trap for different interaction and population imbalances. We calculate the time evolution of the system and show that, for a certain ratio of interactions, the minority population travels through the system as an effective wave packet.

  18. Transition in Gas Turbine Control System Architecture: Modular, Distributed, and Embedded

    NASA Technical Reports Server (NTRS)

    Culley, Dennis

    2010-01-01

Control systems are an increasingly important component of turbine-engine system technology. However, as engines become more capable, the control system itself becomes ever more constrained by the inherent environmental conditions of the engine; a relationship forced by the continued reliance on commercial electronics technology. A revolutionary change in the architecture of turbine-engine control systems will change this paradigm and result in fully distributed engine control systems. Initially, the revolution will begin with the physical decoupling of the control law processor from the hostile engine environment, using a digital communications network and engine-mounted high-temperature electronics requiring little or no thermal control. The vision for the evolution of distributed control capability from this initial implementation to fully distributed and embedded control is described in a roadmap and implementation plan. The development of this plan is the result of discussions with government and industry stakeholders.

  19. ACARA - AVAILABILITY, COST AND RESOURCE ALLOCATION

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    ACARA is a program for analyzing availability, lifecycle cost, and resource scheduling. It uses a statistical Monte Carlo method to simulate a system's capacity states as well as component failure and repair. Component failures are modelled using a combination of exponential and Weibull probability distributions. ACARA schedules component replacement to achieve optimum system performance. The scheduling will comply with any constraints on component production, resupply vehicle capacity, on-site spares, or crew manpower and equipment. ACARA is capable of many types of analyses and trade studies because of its integrated approach. It characterizes the system performance in terms of both state availability and equivalent availability (a weighted average of state availability). It can determine the probability of exceeding a capacity state to assess reliability and loss of load probability. It can also evaluate the effect of resource constraints on system availability and lifecycle cost. ACARA interprets the results of a simulation and displays tables and charts for: (1) performance, i.e., availability and reliability of capacity states, (2) frequency of failure and repair, (3) lifecycle cost, including hardware, transportation, and maintenance, and (4) usage of available resources, including mass, volume, and maintenance man-hours. ACARA incorporates a user-friendly, menu-driven interface with full screen data entry. It provides a file management system to store and retrieve input and output datasets for system simulation scenarios. ACARA is written in APL2 using the APL2 interpreter for IBM PC compatible systems running MS-DOS. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. A dot matrix printer is required if the user wishes to print a graph from a results table. A sample MS-DOS executable is provided on the distribution medium. 
The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The standard distribution medium for this program is a set of three 5.25 inch 360K MS-DOS format diskettes. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ACARA was developed in 1992.
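The combination of exponential and Weibull distributions that ACARA uses for failure and repair sampling can be illustrated with a single-component Monte Carlo availability estimate. This is a hedged sketch, not ACARA's actual algorithm, and all parameter values are invented.

```python
import random

def simulate_availability(mission_time, weibull_shape, weibull_scale,
                          mean_repair, n_runs=2000, seed=1):
    """Monte Carlo availability of one repairable component: failures are
    Weibull-distributed, repairs exponentially distributed; uptime within
    the mission window is accumulated over many simulated histories."""
    rng = random.Random(seed)
    up_total = 0.0
    for _ in range(n_runs):
        t = 0.0
        while t < mission_time:
            # random.weibullvariate takes (scale, shape)
            ttf = rng.weibullvariate(weibull_scale, weibull_shape)
            up_total += min(ttf, mission_time - t)
            t += ttf
            if t >= mission_time:
                break
            t += rng.expovariate(1.0 / mean_repair)  # repair (downtime)
    return up_total / (n_runs * mission_time)

a = simulate_availability(mission_time=1000.0, weibull_shape=1.5,
                          weibull_scale=100.0, mean_repair=10.0)
print(round(a, 3))
```

For these parameters the estimate lands near the steady-state value MTTF / (MTTF + MTTR), around 0.9; a full tool such as ACARA layers capacity states, spares, and resource constraints on top of this basic sampling loop.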

  20. Dynamic analysis methods for detecting anomalies in asynchronously interacting systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Akshat; Solis, John Hector; Matschke, Benjamin

    2014-01-01

Detecting modifications to digital system designs, whether malicious or benign, is problematic due to the complexity of the systems being analyzed. Moreover, static analysis techniques and tools can only be used during the initial design and implementation phases to verify safety and liveness properties. It is computationally intractable to guarantee that any previously verified properties still hold after a system, or even a single component, has been produced by a third-party manufacturer. In this paper we explore new approaches for creating a robust system design by investigating highly-structured computational models that simplify verification and analysis. Our approach avoids the need to fully reconstruct the implemented system by incorporating a small verification component that dynamically detects deviations from the design specification at run-time. The first approach encodes information extracted from the original system design algebraically into a verification component. During run-time this component randomly queries the implementation for trace information and verifies that no design-level properties have been violated. If any deviation is detected then a pre-specified fail-safe or notification behavior is triggered. Our second approach utilizes a partitioning methodology to view liveness and safety properties as a distributed decision task and the implementation as a proposed protocol that solves this task. Thus the problem of verifying safety and liveness properties is translated to that of verifying that the implementation solves the associated decision task. We build upon results from distributed systems and algebraic topology to construct a learning mechanism for verifying safety and liveness properties from samples of run-time executions.

  1. Self-organisation of random oscillators with Lévy stable distributions

    NASA Astrophysics Data System (ADS)

    Moradi, Sara; Anderson, Johan

    2017-08-01

    A novel possibility of self-organized behaviour of stochastically driven oscillators is presented. It is shown that synchronization by Lévy stable processes is significantly more efficient than that by oscillators with Gaussian statistics. The impact of outlier events from the tail of the distribution function was examined by artificially introducing a few additional oscillators with very strong coupling strengths and it is found that remarkably even one such rare and extreme event may govern the long term behaviour of the coupled system. In addition to the multiplicative noise component, we have investigated the impact of an external additive Lévy distributed noise component on the synchronisation properties of the oscillators.
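A minimal mean-field Kuramoto sketch shows where such a noise component enters the oscillator dynamics. The `stable_noise` generator uses the Chambers-Mallows-Stuck method for symmetric alpha-stable variates (alpha = 2 recovers Gaussian-like noise, alpha < 2 gives heavy-tailed Lévy noise); all parameter values here are invented and this is not the paper's actual simulation.

```python
import math, random

def stable_noise(alpha, rng):
    """Symmetric alpha-stable sample via the Chambers-Mallows-Stuck method
    (alpha = 2 reduces to a Gaussian up to scale)."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * v) / math.cos(v) ** (1 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1 - alpha) / alpha))

def kuramoto(n=50, coupling=2.0, alpha=2.0, noise=0.05,
             steps=400, dt=0.05, seed=2):
    """Mean-field Kuramoto oscillators driven by alpha-stable noise;
    returns the final order parameter r in [0, 1]."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0, 0.1) for _ in range(n)]
    for _ in range(steps):
        rx = sum(math.cos(t) for t in theta) / n
        ry = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(rx, ry), math.atan2(ry, rx)
        theta = [t + dt * (omega[i] + coupling * r * math.sin(psi - t))
                 + noise * math.sqrt(dt) * stable_noise(alpha, rng)
                 for i, t in enumerate(theta)]
    rx = sum(math.cos(t) for t in theta) / n
    ry = sum(math.sin(t) for t in theta) / n
    return math.hypot(rx, ry)

r = kuramoto(alpha=2.0)  # strong coupling drives r toward 1
```

The order parameter r measures phase coherence; comparing runs at alpha = 2 against alpha < 2 is the kind of experiment the abstract's Gaussian-versus-Lévy synchronization comparison performs.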

  2. Economic optimization of the energy transport component of a large distributed solar power plant

    NASA Technical Reports Server (NTRS)

    Turner, R. H.

    1976-01-01

A solar thermal power plant with a field of collectors, each locally heating some transport fluid, requires a pipe network system for eventual delivery of energy to power generation equipment. For a given collector distribution and pipe network geometry, a technique is herein developed which manipulates basic cost information and physical data in order to design an energy transport system with minimized cost, constrained by a calculated technical performance. For a given transport fluid and collector conditions, the method determines the network pipe diameter and pipe thickness distribution, and also the insulation thickness distribution, associated with minimum system cost; these relative distributions are unique. Transport losses, including pump work and heat leak, are calculated operating expenses and impact the total system cost. The minimum cost system is readily selected. The technique is demonstrated on six candidate transport fluids to emphasize which parameters dominate the system cost and to provide basic decision data. Three different power plant output sizes are evaluated in each case to determine the severity of any diseconomy of scale.

  3. High performance VLSI telemetry data systems

    NASA Technical Reports Server (NTRS)

    Chesney, J.; Speciale, N.; Horner, W.; Sabia, S.

    1990-01-01

NASA's deployment of major space complexes such as Space Station Freedom (SSF) and the Earth Observing System (EOS) will demand increased functionality and performance from ground-based telemetry acquisition systems well above current system capabilities. Adaptation of space telemetry data transport and processing standards, such as those specified by the Consultative Committee for Space Data Systems (CCSDS) and those required for commercial ground distribution of telemetry data, will drive these functional and performance requirements. In addition, budget limitations will force the requirement for higher modularity, flexibility, and interchangeability at lower cost in new ground telemetry data system elements. At NASA's Goddard Space Flight Center (GSFC), the design and development of generic ground telemetry data system elements, over the last five years, has resulted in significant solutions to these problems. This solution, referred to as the functional components approach, includes both hardware and software components ready for end user application. The hardware functional components consist of modern data flow architectures utilizing Application Specific Integrated Circuits (ASICs) developed specifically to support NASA's telemetry data system needs and designed to meet a range of data rate requirements up to 300 Mbps. Real-time operating system software components support both embedded local software intelligence and overall system control, status, processing, and interface requirements. These components, hardware and software, form the superstructure upon which project specific elements are added to complete a telemetry ground data system installation. This paper describes the functional components approach, some specific component examples, and a project example of the evolution from VLSI component, to basic board level functional component, to integrated telemetry data system.

  4. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays that have regular index sets with the low-level implementation details being beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow for the arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems that are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed for the support of distributions. Since the performance of data parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance, but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. 
Since distributions have a great effect on the performance of applications, it is important that the distribution strategy is flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
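The single-point-of-access idea behind these distributions can be sketched in a few lines. The following block-distribution mapping is a hypothetical Python illustration of the concept, not Chapel's actual implementation.

```python
class BlockDistribution:
    """Maps a global index range onto num_locales 'processors' in
    contiguous blocks, so callers use one global index while the data
    is physically fragmented across locales."""

    def __init__(self, n, num_locales):
        self.n = n
        self.num_locales = num_locales
        self.block = -(-n // num_locales)  # ceiling division

    def locate(self, i):
        """Global index -> (locale id, local offset)."""
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i // self.block, i % self.block

dist = BlockDistribution(n=10, num_locales=4)  # blocks: [0-2][3-5][6-8][9]
print(dist.locate(7))  # (2, 1)
```

A real framework would add the communication layer behind `locate`, which is exactly the tedious, error-prone detail the text argues should be hidden from the programmer.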

  5. Instrument control software development process for the multi-star AO system ARGOS

    NASA Astrophysics Data System (ADS)

    Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.

    2012-09-01

    The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.

  6. Multi-agent systems and their applications

    DOE PAGES

    Xie, Jing; Liu, Chen-Ching

    2017-07-14

The number of distributed energy components and devices continues to increase globally. As a result, distributed control schemes are desirable for managing and utilizing these devices, together with the large amounts of data they produce. In recent years, agent-based technology has become a powerful tool for engineering applications. As a computational paradigm, multi-agent systems (MASs) provide a good solution for distributed control. In this paper, MASs and their applications are discussed. A state-of-the-art literature survey is conducted on the system architecture, consensus algorithms, and multi-agent platforms, frameworks, and simulators. In addition, a distributed under-frequency load shedding (UFLS) scheme is proposed using the MAS. Simulation results for a case study are presented. The future of MASs is discussed in the conclusion.
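The consensus algorithms surveyed in such MAS work typically reduce to local averaging updates, where each agent uses only its neighbors' values. A minimal sketch follows; the ring topology, step size, and frequency readings are invented for the example.

```python
def consensus_step(values, neighbors, eps=0.25):
    """One round of the standard distributed average-consensus update:
    each agent moves toward its neighbors' values using only local data.
    Stable when eps < 1 / (maximum node degree)."""
    return [x + eps * sum(values[j] - x for j in neighbors[i])
            for i, x in enumerate(values)]

# Ring of four agents, e.g. holding local frequency measurements (Hz).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [50.2, 49.8, 49.9, 50.1]
for _ in range(50):
    values = consensus_step(values, neighbors)
print([round(v, 3) for v in values])  # all agents agree on the average, 50.0
```

Because the update matrix is doubly stochastic on this topology, every agent converges to the network-wide average without any central coordinator, which is the property distributed schemes such as UFLS build on.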

  8. Anomalous behavior of poly(ethylene glycol) p-tert-octylphenyl ether (Triton X-100) in the water-cyclohexane system

    NASA Astrophysics Data System (ADS)

    Chernysheva, M. G.; Tyasto, Z. A.; Badun, G. A.

    2009-02-01

The distribution of the Triton X-100 nonionic surfactant in the water-cyclohexane system was investigated by the scintillating phase method. It was shown that the increase in the distribution coefficient as the volume ratio between the aqueous and organic phases grew is explained by the presence in Triton X-100 of homologues with different numbers of ethoxyethyl groups, whose distribution coefficients between the phases differ many-fold. For the real composition of Triton X-100, the distribution coefficients of the surfactant's components were estimated, and the behavior of the surfactant in the system under consideration was simulated; the results were in close agreement with the experimental data.
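The reported volume-ratio dependence follows directly from a two-phase mass balance over a mixture of homologues with different individual coefficients. The sketch below uses invented amounts and K values, not the measured Triton X-100 composition.

```python
def apparent_K(amounts, Ks, v_aq, v_org):
    """Apparent distribution coefficient of a homologue mixture from a
    two-phase equilibrium mass balance: component i splits so that its
    organic-phase fraction is K_i*v_org / (K_i*v_org + v_aq). The apparent
    K depends on the phase volume ratio whenever the K_i differ."""
    org = sum(m * K * v_org / (K * v_org + v_aq) for m, K in zip(amounts, Ks))
    aq = sum(m * v_aq / (K * v_org + v_aq) for m, K in zip(amounts, Ks))
    return (org / v_org) / (aq / v_aq)

amounts, Ks = [1.0, 1.0], [0.1, 10.0]  # two homologues, K differing 100-fold
low = apparent_K(amounts, Ks, v_aq=1.0, v_org=10.0)
high = apparent_K(amounts, Ks, v_aq=10.0, v_org=1.0)
print(round(low, 2), round(high, 2))  # apparent K shifts with the volume ratio
```

For a single pure component the two calls would return the same value; the shift appears only for the mixture, which is the abstract's explanation of the observed behavior.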

  9. An eco-hydrological approach to predicting regional vegetation and groundwater response to ecological water convergence in dryland riparian ecosystems

    USDA-ARS?s Scientific Manuscript database

    To improve the management strategy of riparian restoration, better understanding of the dynamic of eco-hydrological system and its feedback between hydrological and ecological components are needed. The fully distributed eco-hydrological model coupled with a hydrology component was developed based o...

  10. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during its execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and could be distributed in various locations in the applications environment which complicates the management decisions process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end- points management applications such as debugging and reactive control tools to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its Instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system which is a large-scale distributed system for collaborative distance learning. 
The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
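    The subscription-based filtering idea above can be illustrated with a small sketch: subscribers register predicates, and only matching events are forwarded to end-point tools. The event kinds, component names, and API below are invented for illustration; they are not the IRI system's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str   # component that generated the event
    kind: str     # e.g. "error", "latency", "join"
    payload: dict

@dataclass
class Subscription:
    predicate: callable  # returns True for events of interest
    deliver: callable    # end-point callback (debugger, control tool, ...)

class EventFilter:
    """Drops uninteresting events close to the source to reduce traffic."""
    def __init__(self):
        self.subscriptions = []

    def subscribe(self, predicate, deliver):
        self.subscriptions.append(Subscription(predicate, deliver))

    def publish(self, event):
        # Only matching events are forwarded; the rest never leave the node,
        # which is how filtering lowers monitoring intrusiveness.
        forwarded = 0
        for sub in self.subscriptions:
            if sub.predicate(event):
                sub.deliver(event)
                forwarded += 1
        return forwarded

# Usage: a debugging tool subscribes only to error events.
f = EventFilter()
errors = []
f.subscribe(lambda e: e.kind == "error", errors.append)
f.publish(Event("audio", "latency", {"ms": 40}))  # filtered out
f.publish(Event("audio", "error", {"code": 7}))   # delivered
print(len(errors))  # 1
```

    Dynamic (re)configuration corresponds to calling `subscribe` (or removing subscriptions) at run time, without touching the instrumented components.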

  11. Guest Editors' introduction

    NASA Astrophysics Data System (ADS)

    Magee, Jeff; Moffett, Jonathan

    1996-06-01

    Special Issue on Management This special issue contains seven papers originally presented at an International Workshop on Services for Managing Distributed Systems (SMDS'95), held in September 1995 in Karlsruhe, Germany. The workshop was organized to present the results of two ESPRIT III funded projects, Sysman and IDSM, and more generally to bring together work in the area of distributed systems management. The workshop focused on the tools and techniques necessary for managing future large-scale, multi-organizational distributed systems. The open call for papers attracted a large number of submissions, and the subsequent attendance at the workshop, which was larger than expected, clearly indicated that the topics addressed by the workshop were of considerable interest both to industry and academia. The papers selected for this special issue represent an excellent coverage of the issues addressed by the workshop. A particular focus of the workshop was the need to help managers deal with the size and complexity of modern distributed systems by the provision of automated support. This automation must have two prime characteristics: it must provide a flexible management system which responds rapidly to changing organizational needs, and it must provide both human managers and automated management components with the information that they need, in a form which can be used for decision-making. These two characteristics define the two main themes of this special issue. To satisfy the requirement for a flexible management system, workers in both industry and universities have turned to architectures which support policy directed management. In these architectures policy is explicitly represented and can be readily modified to meet changing requirements. The paper `Towards implementing policy-based systems management' by Meyer, Anstötz and Popien describes an approach whereby policy is enforced by event-triggered rules. 
Krause and Zimmermann in their paper `Implementing configuration management policies for distributed applications' present a system in which the configuration of the system in terms of its constituent components and their interconnections can be controlled by reconfiguration rules. Neumair and Wies in the paper `Case study: applying management policies to manage distributed queuing systems' examine how high-level policies can be transformed into practical and efficient implementations for the case of distributed job queuing systems. Koch and Krämer in `Rules and agents for automated management of distributed systems' describe the results of an experiment in using the software development environment Marvel to provide a rule based implementation of management policy. The paper by Jardin, `Supporting scalability and flexibility in a distributed management platform' reports on the experience of using a policy directed approach in the industrial strength TeMIP management platform. Both human managers and automated management components rely on a comprehensive monitoring system to provide accurate and timely information on which decisions are made to modify the operation of a system. The monitoring service must deal with condensing and summarizing the vast amount of data available to produce the events of interest to the controlling components of the overall management system. The paper `Distributed intelligent monitoring and reporting facilities' by Pavlou, Mykoniatis and Sanchez describes a flexible monitoring system in which the monitoring agents themselves are policy directed. Their monitoring system has been implemented in the context of the OSIMIS management platform. Debski and Janas in `The SysMan monitoring service and its management environment' describe the overall SysMan management system architecture and then concentrate on how event processing and distribution is supported in that architecture. 
The collection of papers gives a good overview of the current state of the art in distributed system management. It has reached a point at which a first generation of systems, based on policy representation within systems and automated monitoring systems, are coming into practical use. The papers also serve to identify many of the issues which are open research questions. In particular, as management systems increase in complexity, how far can we automate the refinement of high-level policies into implementations? How can we detect and resolve conflicts between policies? And how can monitoring services deal efficiently with ever-growing complexity and volume? We wish to acknowledge the many contributors, besides the authors, who have made this issue possible: the anonymous reviewers who have done much to assure the quality of these papers, Morris Sloman and his Programme Committee who convened the Workshop, and Thomas Usländer and his team at the Fraunhofer Institute in Karlsruhe who acted as hosts.
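    The event-triggered, policy-as-rules style running through these papers can be sketched minimally as an event-condition-action (ECA) loop, where policy is data that can be modified without changing the management code. The rules, event names, and actions below are invented for illustration.

```python
# Minimal event-condition-action (ECA) sketch of policy-directed management.
# Policy lives in the `rules` list; changing behavior means editing rules,
# not rewriting the management system.
actions = []  # record of enforcement actions taken

rules = [
    # (event kind, condition on event data, action that enforces the policy)
    ("queue_length", lambda e: e["value"] > 100,
     lambda e: actions.append(f"start worker on {e['node']}")),
    ("node_down", lambda e: True,
     lambda e: actions.append(f"migrate services off {e['node']}")),
]

def on_event(kind, data):
    # Monitoring delivers an event; every matching rule fires its action.
    for rule_kind, condition, action in rules:
        if rule_kind == kind and condition(data):
            action(data)

on_event("queue_length", {"node": "n3", "value": 250})
on_event("queue_length", {"node": "n4", "value": 12})  # below threshold, no action
print(actions)  # ['start worker on n3']
```

    The open questions the editors raise (policy refinement, conflict detection) show up even here: two rules whose conditions overlap may fire contradictory actions, and nothing in the loop detects that.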

  12. Unbiased free energy estimates in fast nonequilibrium transformations using Gaussian mixtures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Procacci, Piero

    2015-04-21

    In this paper, we present an improved method for obtaining unbiased estimates of the free energy difference between two thermodynamic states using the work distribution measured in nonequilibrium driven experiments connecting these states. The method is based on the assumption that any observed work distribution is given by a mixture of Gaussian distributions, whose normal components are identical in either direction of the nonequilibrium process, with weights regulated by the Crooks theorem. Using the prototypical example for the driven unfolding/folding of deca-alanine, we show that the predicted behavior of the forward and reverse work distributions, assuming a combination of only two Gaussian components with Crooks-derived weights, explains surprisingly well the striking asymmetry in the observed distributions at fast pulling speeds. The proposed methodology opens the way for a perfectly parallel implementation of Jarzynski-based free energy calculations in complex systems.
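    In the single-Gaussian limit, the work-fluctuation relations fix the free energy difference in closed form, ΔF = ⟨W⟩ − βσ²/2, which makes a quick numerical sanity check of the Jarzynski estimator possible. The sketch below uses synthetic Gaussian work samples in reduced units (β = 1), not the deca-alanine data.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                    # 1/kT in reduced units
mean_W, sigma = 5.0, 1.0      # dissipative Gaussian work distribution
W = rng.normal(mean_W, sigma, 200_000)  # forward nonequilibrium work samples

# Jarzynski estimator: ΔF = -(1/β) ln < exp(-βW) >
dF_jarzynski = -np.log(np.mean(np.exp(-beta * W))) / beta

# For a purely Gaussian work distribution the estimate should approach
# the closed form ΔF = <W> - β σ² / 2.
dF_gaussian = mean_W - beta * sigma**2 / 2

print(round(dF_jarzynski, 2), round(dF_gaussian, 2))  # both ≈ 4.5
```

    At fast pulling speeds the dissipation (and hence σ) grows, the exponential average becomes dominated by rare low-work samples, and the plain Jarzynski estimate degrades; that is the regime where the Gaussian-mixture treatment above is aimed.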

  13. Training a Constitutional Dynamic Network for Effector Recognition: Storage, Recall, and Erasing of Information.

    PubMed

    Holub, Jan; Vantomme, Ghislaine; Lehn, Jean-Marie

    2016-09-14

    Constitutional dynamic libraries (CDLs) of hydrazones, acylhydrazones, and imines undergo reorganization and adaptation in response to chemical effectors (herein metal cations) via component exchange and selection. Such CDLs can be trained by exposure to given effectors and retain a memory of the information stored through interaction with a specific metal ion. The long-term storage of the acquired information in the set of constituents of the system allows fast recognition on subsequent contacts with the same effector(s). Dynamic networks of constituents were designed to adapt orthogonally to different metal cations by up- and down-regulation of specific constituents in the final distribution. The memory may be erased by component exchange between the constituents so as to regenerate the initial (statistical) distribution. The libraries described represent constitutional dynamic systems capable of acting as information storage molecular devices, in which the presence of components linked by reversible covalent bonds in slow exchange and bearing adequate coordination sites allows for the adaptation to different metal ions by constitutional variation. The system thus performs information storage, recall, and erase processes.

  14. Distribution of electromagnetic field and group velocities in two-dimensional periodic systems with dissipative metallic components

    NASA Astrophysics Data System (ADS)

    Kuzmiak, Vladimir; Maradudin, Alexei A.

    1998-09-01

    We study the distribution of the electromagnetic field of the eigenmodes and corresponding group velocities associated with the photonic band structures of two-dimensional periodic systems consisting of an array of infinitely long parallel metallic rods whose intersections with a perpendicular plane form a simple square lattice. We consider both nondissipative and lossy metallic components characterized by a complex frequency-dependent dielectric function. Our analysis is based on the calculation of the complex photonic band structure obtained by using a modified plane-wave method that transforms the problem of solving Maxwell's equations into the problem of diagonalizing an equivalent non-Hermitian matrix. In order to investigate the nature and the symmetry properties of the eigenvectors, which significantly affect the optical properties of the photonic lattices, we evaluate the associated field distribution at the high symmetry points and along high symmetry directions in the two-dimensional first Brillouin zone of the periodic system. By considering both lossless and lossy metallic rods we study the effect of damping on the spatial distribution of the eigenvectors. Then we use the Hellmann-Feynman theorem and the eigenvectors and eigenfrequencies obtained from a photonic band-structure calculation based on a standard plane-wave approach applied to the nondissipative system to calculate the components of the group velocities associated with individual bands as functions of the wave vector in the first Brillouin zone. From the group velocity of each eigenmode the flow of energy is examined. The results obtained indicate a strong directional dependence of the group velocity, and confirm the experimental observation that a photonic crystal is a potentially efficient tool in controlling photon propagation.
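    The group velocity is the gradient of the dispersion relation, v_g = dω/dk, and can be sketched by finite differences on a model one-dimensional band. The cosine dispersion below is illustrative only; it is not the paper's metallic-lattice band structure, which instead uses the Hellmann-Feynman theorem to obtain v_g from eigenvectors without numerical differentiation.

```python
import numpy as np

# Model dispersion for one band (illustrative, not the paper's 2D metallic
# lattice): a cosine band across the first Brillouin zone.
a = 1.0                               # lattice constant
k = np.linspace(-np.pi / a, np.pi / a, 401)
omega = 1.0 - 0.3 * np.cos(k * a)     # ω(k) for the band

# Group velocity v_g = dω/dk via finite differences.
v_g = np.gradient(omega, k)

# The band is flat at the zone boundary, so the group velocity vanishes
# there; it peaks mid-zone, where |v_g| ≈ 0.3 for this model band.
print(abs(v_g[0]) < 1e-2, abs(v_g[-1]) < 1e-2)  # True True
```

    The directional dependence reported in the paper corresponds, in two dimensions, to v_g = ∇_k ω(k) varying in magnitude and direction across the Brillouin zone.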

  15. A model-based executive for commanding robot teams

    NASA Technical Reports Server (NTRS)

    Barrett, Anthony

    2005-01-01

    The paper presents a way to robustly command a system of systems as a single entity. Instead of modeling each component system in isolation and then manually crafting interaction protocols, this approach starts with a model of the collective population as a single system. By compiling the model into separate elements for each component system and utilizing a teamwork model for coordination, it circumvents the complexities of manually crafting robust interaction protocols. The resulting systems are both globally responsive by virtue of a team oriented interaction model and locally responsive by virtue of a distributed approach to model-based fault detection, isolation, and recovery.

  16. SPS Energy Conversion Power Management Workshop

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Energy technology concerning photovoltaic conversion, solar thermal conversion systems, and electrical power distribution processing is discussed. The manufacturing processes involving solar cells and solar array production are summarized. Resource issues concerning gallium arsenides and silicon alternatives are reported. Collector structures for solar construction are described and estimates in their service life, failure rates, and capabilities are presented. Theories of advanced thermal power cycles are summarized. Power distribution system configurations and processing components are presented.

  17. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional, organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. The EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively and provide coherent services to users.

  18. Heterogeneous ligand-nanoparticle distributions: a major obstacle to scientific understanding and commercial translation.

    PubMed

    Mullen, Douglas G; Banaszak Holl, Mark M

    2011-11-15

    Nanoparticles conjugated with functional ligands are expected to have a major impact in medicine, photonics, sensing, and nanoarchitecture design. One major obstacle to realizing the promise of these materials, however, is the difficulty in controlling the ligand/nanoparticle ratio. This obstacle can be segmented into three key areas: First, many designs of these systems have failed to account for the true heterogeneity of ligand/nanoparticle ratios that compose each material. Second, studies in the field often use the mean ligand/nanoparticle ratio as the accepted level of characterization of these materials. This measure is insufficient because it does not provide information about the distribution of ligand/nanoparticle species within a sample or the number and relative amount of the different species that compose a material. Without these data, researchers do not have the accurate definition of material composition necessary both to understand the material-property relationships and to monitor the consistency of the material. Third, some synthetic approaches now in use may not produce consistent materials because of their sensitivity to reaction kinetics and to the synthetic history of the nanoparticle. In this Account, we describe recent advances that we have made in understanding the material composition of ligand-nanoparticle systems. Our work has been enabled by a model system using poly(amidoamine) dendrimers and two small molecule ligands. Using reverse-phase high-pressure liquid chromatography (HPLC), we have successfully resolved and quantified the relative amounts and ratios of each ligand/dendrimer combination. This type of information is rare within the field of ligand-nanoparticle materials because most analytical techniques have been unable to identify the components in the distribution. Our experimental data indicate that the actual distribution of ligand-nanoparticle components is much more heterogeneous than is commonly assumed. 
The mean ligand/nanoparticle ratio that is typically the only information known about a material is insufficient because the mean does not provide information on the diversity of components in the material and often does not describe the most common component (the mode). Additionally, our experimental data has provided examples of material batches with the same mean ligand/nanoparticle ratio and very different distributions. This discrepancy indicates that the mean cannot be used as the sole metric to assess the reproducibility of a system. We further found that distribution profiles can be highly sensitive to the synthetic history of the starting material as well as slight changes in reaction conditions. We have incorporated the lessons from our experimental data into the design of new ligand-nanoparticle systems to provide improved control over these ratios.
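    The mean-versus-distribution point can be made with a toy example: two hypothetical batches share the same mean ligand/nanoparticle ratio while differing completely in composition. The counts below are invented, not HPLC data.

```python
import statistics

# Counts of ligands per nanoparticle for two hypothetical batches.
batch_a = [5] * 100                        # every particle carries 5 ligands
batch_b = [0] * 40 + [5] * 20 + [10] * 40  # heterogeneous mixture, mean still 5

# Identical means hide completely different distributions:
print(statistics.mean(batch_a), statistics.mean(batch_b))  # 5 5
# ...and the mean does not even describe the most common species in batch_b:
print(statistics.mode(batch_a), statistics.mode(batch_b))  # 5 0
```

    This is why a single mean ratio cannot certify batch-to-batch reproducibility: only the full distribution distinguishes these two materials.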

  19. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large scale structures are not testable in ground based facilities, the decision on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at solving problems of inadequate modelling knowledge prior to operation required to achieve peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter adaptive control is a method of promise that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work. They are: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? Also, how well must the mathematical model used in on-board analytic redundancy be known and what are the reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed component environment?

  20. CompatPM: enabling energy efficient multimedia workloads for distributed mobile platforms

    NASA Astrophysics Data System (ADS)

    Nathuji, Ripal; O'Hara, Keith J.; Schwan, Karsten; Balch, Tucker

    2007-01-01

    The computation and communication abilities of modern platforms are enabling increasingly capable cooperative distributed mobile systems. An example is distributed multimedia processing of sensor data in robots deployed for search and rescue, where a system manager can exploit the application's cooperative nature to optimize the distribution of roles and tasks in order to successfully accomplish the mission. Because of limited battery capacities, a critical task a manager must perform is online energy management. While support for power management has become common for the components that populate mobile platforms, what is lacking is integration and explicit coordination across the different management actions performed in a variety of system layers. This paper develops an integration approach for distributed multimedia applications, where a global manager specifies both a power operating point and a workload for a node to execute. Surprisingly, when jointly considering power and QoS, experimental evaluations show that using a simple deadline-driven approach to assigning frequencies can be non-optimal. These trends are further affected by certain characteristics of underlying power management mechanisms, which, in our research, are identified as groupings that classify component power management as "compatible" (VFC) or "incompatible" (VFI) with voltage and frequency scaling. We build on these findings to develop CompatPM, a vertically integrated control strategy for power management in distributed mobile systems. Experimental evaluations of CompatPM indicate average energy improvements of 8% when platform resources are managed jointly rather than independently, demonstrating that previous attempts to maximize battery life by simply minimizing frequency are inappropriate from a platform-level perspective.
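    The finding that simply minimizing frequency need not minimize energy follows from the platform's fixed static power draw: running slower stretches the time everything stays powered. A toy energy model (constants invented, not CompatPM measurements) shows a mid-range frequency winning.

```python
# Toy platform energy model: dynamic power scales roughly with f^3 (voltage
# tracks frequency), while static power is paid for as long as the workload
# runs. Constants are illustrative, not CompatPM measurements.
def energy(freq_ghz, cycles=2e9, p_static=2.4):
    t = cycles / (freq_ghz * 1e9)    # seconds to finish the workload
    p_dynamic = 1.2 * freq_ghz ** 3  # watts, grows ~ f^3
    return (p_dynamic + p_static) * t  # joules

freqs = (0.6, 1.0, 1.4)
best = min(freqs, key=energy)
print(best)  # 1.0 -- a mid frequency, not the lowest, minimizes energy
```

    With VFI ("incompatible") components contributing to the static term, the minimum-energy operating point shifts away from the minimum frequency, which is the platform-level effect CompatPM coordinates.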

  1. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  2. DAISY-DAMP: A distributed AI system for the dynamic allocation and management of power

    NASA Technical Reports Server (NTRS)

    Hall, Steven B.; Ohler, Peter C.

    1988-01-01

    One of the critical parameters that must be addressed when designing a loosely coupled Distributed AI SYstem (DAISY) is the degree to which authority is centralized or decentralized. The decision to implement the Dynamic Allocation and Management of Power (DAMP) system as a network of cooperating agents mandated this study. The DAISY-DAMP problem is described; the component agents of the system are characterized; and the communication protocols of the system are elucidated. The motivations for and advantages of designing the system with decentralized authority are discussed. Progress in the area of Speech Act theory is proposed as playing a role in constructing decentralized systems.

  3. Baseline Architecture of ITER Control System

    NASA Astrophysics Data System (ADS)

    Wallander, A.; Di Maio, F.; Journeaux, J.-Y.; Klotz, W.-D.; Makijarvi, P.; Yonekawa, I.

    2011-08-01

    The control system of ITER consists of thousands of computers processing hundreds of thousands of signals. The control system, being the primary tool for operating the machine, shall integrate, control and coordinate all these computers and signals and allow a limited number of staff to operate the machine from a central location with minimum human intervention. The primary functions of the ITER control system are plant control, supervision and coordination, both during experimental pulses and 24/7 continuous operation. The former can be split into three phases: preparing the experiment by defining all parameters; executing the experiment, including distributed feedback control; and finally collecting, archiving, analyzing and presenting all data produced by the experiment. We define the control system as a set of hardware and software components with well-defined characteristics. The architecture addresses the organization of these components and their relationship to each other. We distinguish between physical and functional architecture, where the former defines the physical connections and the latter the data flow between components. In this paper, we identify the ITER control system based on the plant breakdown structure. Then, the control system is partitioned into a workable set of bounded subsystems. This partition considers at the same time the completeness and the integration of the subsystems. The components making up subsystems are identified and defined, a naming convention is introduced and the physical networks defined. Special attention is given to timing and real-time communication for distributed control. Finally, we discuss baseline technologies for implementing the proposed architecture based on analysis, market surveys, prototyping and benchmarking carried out during the last year.

  4. Decreasing inventory of a cement factory roller mill parts using reliability centered maintenance method

    NASA Astrophysics Data System (ADS)

    Witantyo; Rindiyah, Anita

    2018-03-01

    According to data from maintenance planning and control, the highest inventory value belongs to non-routine components. Maintenance components are components procured for maintenance activities. The problem arises because there is no synchronization between maintenance activities and the components they require. The Reliability Centered Maintenance method is used to overcome this problem by re-evaluating the components each maintenance activity requires. The roller mill system was chosen as the case because it has the highest record of unscheduled downtime. The components required for each maintenance activity are determined from the component's failure distribution, so the number of components needed can be predicted. Those components can then be reclassified from non-routine to routine components, so that procurement can be carried out regularly. Based on the analysis conducted, the failures addressed by almost every maintenance task are classified into scheduled on-condition tasks, scheduled discard tasks, scheduled restoration tasks and no scheduled maintenance. Of the 87 components used in maintenance activities that were evaluated, 19 were reclassified from non-routine to routine components. The reliability and demand for those components were then calculated for a one-year operation period. Based on these findings, it is suggested to replace all of these components during the overhaul activity to increase the reliability of the roller mill system. In addition, the inventory system should follow the maintenance schedule and the number of components required by each maintenance activity, so that procurement value decreases and system reliability increases.
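    The step from a fitted failure distribution to component demand can be sketched with a Weibull model, the usual choice for wear parts. The shape and scale parameters below are assumptions for illustration, not values from the study.

```python
import math

# Illustrative Weibull failure model for one roller-mill wear part
# (shape/scale values are assumptions, not data from the study).
beta_shape = 1.8     # shape > 1: wear-out failures dominate
eta_scale = 4000.0   # characteristic life in operating hours

def reliability(t):
    """Probability the component survives t hours: R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta_scale) ** beta_shape))

hours_per_year = 8000.0
interval = 2000.0                         # scheduled-discard interval (hours)
replacements = hours_per_year / interval  # planned replacements per year

print(round(reliability(interval), 3))  # ≈ 0.75 survival over one interval
print(int(replacements))                # 4 routine spares to stock per year
```

    Once demand per period is predictable this way, the component can be procured on a routine schedule instead of being held as high-value non-routine stock.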

  5. Tools and Techniques for Adding Fault Tolerance to Distributed and Parallel Programs

    DTIC Science & Technology

    1991-12-07

    The scale of parallel computing systems is rapidly approaching dimensions where fault tolerance can no longer be ignored. No matter how reliable the individual components may be, ... those employed in the Tandem [7] and Stratus [35] systems, is clearly impractical. ... No matter how reliable the individual components are, the sheer ...

  6. Defense strategies for asymmetric networked systems under composite utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first-order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of a distributed cloud computing infrastructure.

  7. A combination strategy for extraction and isolation of multi-component natural products by systematic two-phase solvent extraction-(13)C nuclear magnetic resonance pattern recognition and following conical counter-current chromatography separation: Podophyllotoxins and flavonoids from Dysosma versipellis (Hance) as examples.

    PubMed

    Yang, Zhi; Wu, Youqian; Wu, Shihua

    2016-01-29

    Despite substantial developments in extraction and separation techniques, the isolation of natural products from natural resources is still a challenging task. In this work, an efficient strategy for the extraction and isolation of multi-component natural products has been successfully developed by combining systematic two-phase liquid-liquid extraction with (13)C NMR pattern recognition and subsequent conical counter-current chromatography separation. A small-scale crude sample was first distributed into 9 systematic hexane-ethyl acetate-methanol-water (HEMWat) two-phase solvent systems to determine the optimum extraction solvents and the partition coefficients of the prominent components. Then, the optimized solvent systems were used in succession to enrich the hydrophilic and lipophilic components from the large-scale crude sample. Finally, the enriched component samples were further purified by a new conical counter-current chromatography (CCC) apparatus. Because of the use of (13)C NMR pattern recognition, the kinds and structures of the major components in the solvent extracts could be predicted. The method could therefore simultaneously collect the partition coefficients and the structural information of components in the selected two-phase solvents. As an example, a cytotoxic extract of podophyllotoxins and flavonoids from Dysosma versipellis (Hance) was selected. After the systematic HEMWat solvent extraction and (13)C NMR pattern recognition analyses, the crude extract of D. versipellis was first degreased with the upper phase of the HEMWat system (9:1:9:1, v/v), and then distributed in the two phases of the HEMWat system (2:8:2:8, v/v) to obtain the hydrophilic lower-phase extract and the lipophilic upper-phase extract, respectively. These extracts were further separated by conical CCC with the HEMWat systems (1:9:1:9 and 4:6:4:6, v/v). In total, 17 cytotoxic compounds were isolated and identified. 
Overall, the results suggest that the strategy is efficient for the systematic extraction and isolation of biologically active components from complex biomaterials. Copyright © 2016 Elsevier B.V. All rights reserved.
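    The partition-coefficient screening step can be sketched as follows: each compound's partition coefficient K is the ratio of its concentration in the upper phase to that in the lower phase, and a solvent system with K near 1 suits counter-current chromatography. Peak areas stand in for concentrations, and the numbers are hypothetical rather than measurements from the D. versipellis study.

```python
def partition_coefficient(area_upper, area_lower):
    # K = c_upper / c_lower, using HPLC peak areas as concentration proxies.
    return area_upper / area_lower

# Hypothetical peak areas (upper phase, lower phase) for one compound in
# four HEMWat systems; the study measured these for each prominent component.
areas = {"1:9:1:9": (15, 85), "2:8:2:8": (34, 66),
         "4:6:4:6": (52, 48), "6:4:6:4": (77, 23)}
K = {s: partition_coefficient(u, l) for s, (u, l) in areas.items()}

# K near 1 gives the best counter-current chromatography resolution.
best = min(K, key=lambda s: abs(K[s] - 1.0))
print(best)  # 4:6:4:6
```

    The same K values explain the enrichment step: a very small K marks a compound that concentrates in the lower (hydrophilic) phase, a very large K one that concentrates in the upper (lipophilic) phase.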

  8. A fractal approach to dynamic inference and distribution analysis

    PubMed Central

    van Rooij, Marieke M. J. W.; Nash, Bertha A.; Rajaraman, Srinivasan; Holden, John G.

    2013-01-01

    Event-distributions inform scientists about the variability and dispersion of repeated measurements. This dispersion can be understood from a complex systems perspective, and quantified in terms of fractal geometry. The key premise is that a distribution's shape reveals information about the governing dynamics of the system that gave rise to the distribution. Two categories of characteristic dynamics are distinguished: additive systems governed by component-dominant dynamics and multiplicative or interdependent systems governed by interaction-dominant dynamics. A logic by which systems governed by interaction-dominant dynamics are expected to yield mixtures of lognormal and inverse power-law samples is discussed. These mixtures are described by a so-called cocktail model of response times derived from human cognitive performances. The overarching goals of this article are twofold: First, to offer readers an introduction to this theoretical perspective and second, to offer an overview of the related statistical methods. PMID:23372552
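
    The lognormal/inverse power-law mixture described above can be simulated directly. In the sketch below, all parameter values are illustrative assumptions rather than the authors' fitted values; the point is the two-component "cocktail" structure:

```python
import random

def cocktail_sample(n, p_lognormal=0.6, mu=6.0, sigma=0.3,
                    x_min=400.0, alpha=2.0, seed=1):
    """Draw n response times (ms) from a lognormal / inverse power-law
    mixture: with probability p_lognormal use a lognormal component,
    otherwise a Pareto (inverse power-law) tail starting at x_min."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        if rng.random() < p_lognormal:
            samples.append(rng.lognormvariate(mu, sigma))
        else:
            # Inverse-transform sampling of a Pareto(alpha) tail.
            u = rng.random()
            samples.append(x_min * (1.0 - u) ** (-1.0 / alpha))
    return samples

rts = cocktail_sample(10_000)
# The power-law component produces a heavy right tail: the sample
# maximum far exceeds the bulk of the distribution.
```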

  9. Visualizing request-flow comparison to aid performance diagnosis in distributed systems.

    PubMed

    Sambasivan, Raja R; Shafer, Ilari; Mazurek, Michelle L; Ganger, Gregory R

    2013-12-01

    Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance degrades, the problem might be in any of the system's many components or could be a result of poor interactions among them. Recent research efforts have created tools that automatically localize the problem to a small number of potential culprits, but research is needed to understand what visualization techniques work best for helping distributed systems developers understand and explore their results. This paper compares the relative merits of three well-known visualization approaches (side-by-side, diff, and animation) in the context of presenting the results of one proven automated localization technique called request-flow comparison. Via a 26-person user study, which included real distributed systems developers, we identify the unique benefits that each approach provides for different problem types and usage modes.

  10. A development framework for artificial intelligence based distributed operations support systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1990-01-01

    Advanced automation is required to reduce costly human operations support requirements for complex space-based and ground control systems. Existing knowledge based technologies have been used successfully to automate individual operations tasks. Considerably less progress has been made in integrating and coordinating multiple operations applications for unified intelligent support systems. To fill this gap, SOCIAL, a tool set for developing Distributed Artificial Intelligence (DAI) systems, is being constructed. SOCIAL consists of three primary language-based components defining: models of interprocess communication across heterogeneous platforms; models for interprocess coordination, concurrency control, and fault management; and models for accessing heterogeneous information resources. DAI applications subsystems, either new or existing, will access these distributed services non-intrusively, via high-level message-based protocols. SOCIAL will reduce the complexity of distributed communications, control, and integration, enabling developers to concentrate on the design and functionality of the target DAI system itself.

  11. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way, complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.

  12. Quantitative refractive index distribution of single cell by combining phase-shifting interferometry and AFM imaging.

    PubMed

    Zhang, Qinnan; Zhong, Liyun; Tang, Ping; Yuan, Yingjie; Liu, Shengde; Tian, Jindong; Lu, Xiaoxu

    2017-05-31

    Cell refractive index, an intrinsic optical parameter, is closely correlated with intracellular mass and concentration. By combining optical phase-shifting interferometry (PSI) and atomic force microscope (AFM) imaging, we constructed a label-free, non-invasive, quantitative system for measuring the refractive index of single cells, in which the accurate phase map of a single cell is retrieved with the PSI technique and the cell morphology is captured with nanoscale resolution by AFM imaging. Based on the proposed AFM/PSI system, we obtained quantitative refractive index distributions of a single red blood cell and a Jurkat cell, respectively. Further, the quantitative change of the refractive index distribution during Daunorubicin (DNR)-induced Jurkat cell apoptosis was presented, from which the content changes of intracellular biochemical components were derived. Importantly, these results were consistent with Raman spectral analysis, indicating that the proposed PSI/AFM-based refractive index system is likely to become a useful tool for the analysis of intracellular biochemical components, facilitating its application in revealing cell structure and pathological state from a new perspective.

  13. A Brief Review of the Need for Robust Smart Wireless Sensor Systems for Future Propulsion Systems, Distributed Engine Controls, and Propulsion Health Management

    NASA Technical Reports Server (NTRS)

    Hunter, Gary W.; Behbahani, Alireza

    2012-01-01

    Smart Sensor Systems with wireless capability operational in high temperature, harsh environments are a significant component in enabling future propulsion systems to meet a range of increasingly demanding requirements. These propulsion systems must incorporate technology that will monitor engine component conditions, analyze the incoming data, and modify operating parameters to optimize propulsion system operations. This paper discusses the motivation for the development of high temperature, smart wireless sensor systems that include sensors, electronics, wireless communication, and power. The challenges associated with the use of traditional wired sensor systems will be reviewed and potential advantages of Smart Sensor Systems will be discussed. A brief review of potential applications for wireless smart sensor networks and their potential impact on propulsion system operation, with emphasis on Distributed Engine Control and Propulsion Health Management, will be given. A specific example related to the development of high temperature Smart Sensor Systems based on silicon carbide electronics will be discussed. It is concluded that the development of a range of robust smart wireless sensor systems is a foundation for future development of intelligent propulsion systems with enhanced capabilities.

  14. EOSDIS: Archive and Distribution Systems in the Year 2000

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lake, Alla

    2000-01-01

    Earth Science Enterprise (ESE) is a long-term NASA research mission to study the processes leading to global climate change. The Earth Observing System (EOS) is a NASA campaign of satellite observatories that are a major component of ESE. The EOS Data and Information System (EOSDIS) is another component of ESE that will provide the Earth science community with easy, affordable, and reliable access to Earth science data. EOSDIS is a distributed system, with major facilities at seven Distributed Active Archive Centers (DAACs) located throughout the United States. The EOSDIS software architecture is being designed to receive, process, and archive several terabytes of science data on a daily basis. Thousands of science users and perhaps several hundred thousand non-science users are expected to access the system. The first major set of data to be archived in the EOSDIS is from Landsat-7. Another EOS satellite, Terra, was launched on December 18, 1999. With the Terra launch, the EOSDIS will be required to support approximately one terabyte of data into and out of the archives per day. Since EOS is a multi-mission program, including the launch of more satellites and many other missions, the role of the archive systems becomes larger and more critical. In 1995, at the fourth convening of NASA Mass Storage Systems and Technologies Conference, the development plans for the EOSDIS information system and archive were described. Five years later, many changes have occurred in the effort to field an operational system. It is interesting to reflect on some of the changes driving the archive technology and system development for EOSDIS. This paper principally describes the Data Server subsystem including how the other subsystems access the archive, the nature of the data repository, and the mass-storage I/O management. The paper reviews the system architecture (both hardware and software) of the basic components of the archive. 
It discusses the operations concept, code development, and testing phase of the system. Finally, it describes the future plans for the archive.

  15. Intelligent, Self-Diagnostic Thermal Protection System for Future Spacecraft

    NASA Technical Reports Server (NTRS)

    Hyers, Robert W.; SanSoucie, Michael P.; Pepyne, David; Hanlon, Alaina B.; Deshmukh, Abhijit

    2005-01-01

    The goal of this project is to provide self-diagnostic capabilities to the thermal protection systems (TPS) of future spacecraft. Self-diagnosis is especially important in thermal protection systems (TPS), where large numbers of parts must survive extreme conditions after weeks or years in space. In-service inspections of these systems are difficult or impossible, yet their reliability must be ensured before atmospheric entry. In fact, TPS represents the greatest risk factor after propulsion for any transatmospheric mission. The concepts and much of the technology would be applicable not only to the Crew Exploration Vehicle (CEV), but also to ablative thermal protection for aerocapture and planetary exploration. Monitoring a thermal protection system on a Shuttle-sized vehicle is a daunting task: there are more than 26,000 components whose integrity must be verified with very low rates of both missed faults and false positives. The large number of monitored components precludes conventional approaches based on centralized data collection over separate wires; a distributed approach is necessary to limit the power, mass, and volume of the health monitoring system. Distributed intelligence with self-diagnosis further improves capability, scalability, robustness, and reliability of the monitoring subsystem. A distributed system of intelligent sensors can provide an assurance of the integrity of the system, diagnosis of faults, and condition-based maintenance, all with provable bounds on errors.

  16. Component masses of young, wide, non-magnetic white dwarf binaries in the Sloan Digital Sky Survey Data Release 7

    NASA Astrophysics Data System (ADS)

    Baxter, R. B.; Dobbie, P. D.; Parker, Q. A.; Casewell, S. L.; Lodieu, N.; Burleigh, M. R.; Lawrie, K. A.; Külebi, B.; Koester, D.; Holland, B. R.

    2014-06-01

    We present a spectroscopic component analysis of 18 candidate young, wide, non-magnetic, double-degenerate binaries identified from a search of the Sloan Digital Sky Survey Data Release 7 (DR7). All but two pairings are likely to be physical systems. We show SDSS J084952.47+471247.7 + SDSS J084952.87+471249.4 to be a wide DA + DB binary, only the second identified to date. Combining our measurements for the components of 16 new binaries with results for three similar, previously known systems within the DR7, we have constructed a mass distribution for the largest sample to date (38) of white dwarfs in young, wide, non-magnetic, double-degenerate pairings. This is broadly similar in form to that of the isolated field population with a substantial peak around M ˜ 0.6 M⊙. We identify an excess of ultramassive white dwarfs and attribute this to the primordial separation distribution of their progenitor systems peaking at relatively larger values and the greater expansion of their binary orbits during the final stages of stellar evolution. We exploit this mass distribution to probe the origins of unusual types of degenerates, confirming a mild preference for the progenitor systems of high-field-magnetic white dwarfs, at least within these binaries, to be associated with early-type stars. Additionally, we consider the 19 systems in the context of the stellar initial mass-final mass relation. None appear to be strongly discordant with current understanding of this relationship.

  17. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Powers, L. M.; Jadaan, O. M.; Gyekenyesi, J. P.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural application such as in advanced turbine engine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the Ceramics Analysis and Reliability Evaluation of Structures/CREEP (CARES/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.
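
    The time-stepped damage scheme described in the abstract can be sketched as follows. The power-law rupture model and its constants below are placeholders (the actual methodology uses a modified Monkman-Grant criterion with fitted material constants); only the accumulation loop mirrors the described method:

```python
def creep_rupture_life(stress_history, dt, rupture_time):
    """Accumulate creep damage over discrete time steps.

    stress_history: stress at each step (assumed constant within a step,
                    mirroring the stress-relaxation discretization).
    rupture_time:   sigma -> time-to-rupture under constant stress
                    (a Monkman-Grant-type law in the actual methodology).
    Failure is declared when accumulated damage reaches unity.
    """
    damage = 0.0
    for i, sigma in enumerate(stress_history):
        damage += dt / rupture_time(sigma)       # damage fraction per step
        if damage >= 1.0:
            return (i + 1) * dt                  # creep rupture life
    return None                                  # survives the loading

# Placeholder power-law rupture model: t_r = A * sigma**(-n)
A, n = 1.0e12, 4.0
t_r = lambda sigma: A * sigma ** (-n)

# Stress relaxes from 100 to 80 MPa, then holds (synthetic history).
history = [100.0 - 0.02 * k for k in range(1000)] + [80.0] * 200000
life = creep_rupture_life(history, dt=1.0, rupture_time=t_r)
print(life)
```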

  18. Creep Life of Ceramic Components Using a Finite-Element-Based Integrated Design Program (CARES/CREEP)

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.; Powers, L. M.; Jadaan, O. M.

    1998-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. The long life requirement necessitates subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this paper is to present a design methodology for predicting the lifetimes of structural components subjected to creep rupture conditions. This methodology utilizes commercially available finite element packages and takes into account the time-varying creep strain distributions (stress relaxation). The creep life of a component is discretized into short time steps, during which the stress and strain distributions are assumed constant. The damage is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. Failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity. The corresponding time will be the creep rupture life for that component. Examples are chosen to demonstrate the CARES/CREEP (Ceramics Analysis and Reliability Evaluation of Structures/CREEP) integrated design program, which is written for the ANSYS finite element package. Depending on the component size and loading conditions, it was found that in real structures one of two competing failure modes (creep or slow crack growth) will dominate. Applications to benchmark problems and engine components are included.

  19. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed Central

    Law, V.; Goldberg, H. S.; Jones, P.; Safran, C.

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system. PMID:9929252

  20. A component-based problem list subsystem for the HOLON testbed. Health Object Library Online.

    PubMed

    Law, V; Goldberg, H S; Jones, P; Safran, C

    1998-01-01

    One of the deliverables of the HOLON (Health Object Library Online) project is the specification of a reference architecture for clinical information systems that facilitates the development of a variety of discrete, reusable software components. One of the challenges facing the HOLON consortium is determining what kinds of components can be made available in a library for developers of clinical information systems. To further explore the use of component architectures in the development of reusable clinical subsystems, we have incorporated ongoing work in the development of enterprise terminology services into a Problem List subsystem for the HOLON testbed. We have successfully implemented a set of components using CORBA (Common Object Request Broker Architecture) and Java distributed object technologies that provide a functional problem list application and UMLS-based "Problem Picker." Through this development, we have overcome a variety of obstacles characteristic of rapidly emerging technologies, and have identified architectural issues necessary to scale these components for use and reuse within an enterprise clinical information system.

  1. Defense Strategies for Asymmetric Networked Systems with Discrete Components.

    PubMed

    Rao, Nageswara S V; Ma, Chris Y T; Hausken, Kjell; He, Fei; Yau, David K Y; Zhuang, Jun

    2018-05-03

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models.

  2. Defense Strategies for Asymmetric Networked Systems with Discrete Components

    PubMed Central

    Rao, Nageswara S. V.; Ma, Chris Y. T.; Hausken, Kjell; He, Fei; Yau, David K. Y.

    2018-01-01

    We consider infrastructures consisting of a network of systems, each composed of discrete components. The network provides the vital connectivity between the systems and hence plays a critical, asymmetric role in the infrastructure operations. The individual components of the systems can be attacked by cyber and physical means and can be appropriately reinforced to withstand these attacks. We formulate the problem of ensuring the infrastructure performance as a game between an attacker and a provider, who choose the numbers of the components of the systems and network to attack and reinforce, respectively. The costs and benefits of attacks and reinforcements are characterized using the sum-form, product-form and composite utility functions, each composed of a survival probability term and a component cost term. We present a two-level characterization of the correlations within the infrastructure: (i) the aggregate failure correlation function specifies the infrastructure failure probability given the failure of an individual system or network, and (ii) the survival probabilities of the systems and network satisfy first-order differential conditions that capture the component-level correlations using multiplier functions. We derive Nash equilibrium conditions that provide expressions for individual system survival probabilities and also the expected infrastructure capacity specified by the total number of operational components. We apply these results to derive and analyze defense strategies for distributed cloud computing infrastructures using cyber-physical models. PMID:29751588

  3. Earthquake hazards to domestic water distribution systems in Salt Lake County, Utah

    USGS Publications Warehouse

    Highland, Lynn M.

    1985-01-01

    A magnitude-7.5 earthquake occurring along the central portion of the Wasatch Fault, Utah, may cause significant damage to Salt Lake County's domestic water system. This system is composed of water treatment plants, aqueducts, distribution mains, and other facilities that are vulnerable to ground shaking, liquefaction, fault movement, and slope failures. Recent investigations into surface faulting, landslide potential, and earthquake intensity provide basic data for evaluating the potential earthquake hazards to water-distribution systems in the event of a large earthquake. Water supply system components may be vulnerable to one or more earthquake-related effects, depending on site geology and topography. Case studies of water-system damage by recent large earthquakes in Utah and in other regions of the United States offer valuable insights in evaluating water system vulnerability to earthquakes.

  4. Self-Consistency of the Theory of Elementary Stage Rates of Reversible Processes and the Equilibrium Distribution of Reaction Mixture Components

    NASA Astrophysics Data System (ADS)

    Tovbin, Yu. K.

    2018-06-01

    An analysis is presented of one of the key concepts of the physical chemistry of condensed phases: the self-consistency of the theory describing the rates of elementary stages of reversible processes and the equilibrium distribution of components in a reaction mixture. It posits that by equating the rates of forward and backward reactions, we must obtain the same equation for the equilibrium distribution of reaction mixture components that follows directly from equilibrium theory. Ideal reaction systems always have this property, since the theory is of a one-particle character. Problems arise in considering interparticle interactions responsible for the nonideal behavior of real systems. The Eyring and Temkin approaches to describing nonideal reaction systems are compared. Conditions for the self-consistency of the theory for mono- and bimolecular processes in different types of interparticle potentials, the degree of deviation from the equilibrium state, allowing for the internal motions of molecules in condensed phases, and the electronic polarization of the reagent environment are considered within the lattice gas model. The inapplicability of the concept of an activated complex coefficient for reaching self-consistency is demonstrated. It is also shown that one-particle approximations for considering intermolecular interactions do not provide a theory of self-consistency for condensed phases. We must at a minimum consider short-range order correlations.

  5. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. Then the SAML and XACML standards, as incorporated into the system, are analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  6. Telecommunications Technology in the 1980s.

    ERIC Educational Resources Information Center

    Baer, Walter S.

    This paper describes some of the advances in telecommunications technology that can be anticipated during the 1980's in the areas of computer and component technologies, computer influences on telecommunications systems and services, communications terminals, transmission and switching systems, and local distribution. Specific topics covered…

  7. Neuroanatomical distribution of five semantic components of verbs: evidence from fMRI.

    PubMed

    Kemmerer, David; Castillo, Javier Gonzalez; Talavage, Thomas; Patterson, Stephanie; Wiley, Cynthia

    2008-10-01

    The Simulation Framework, also known as the Embodied Cognition Framework, maintains that conceptual knowledge is grounded in sensorimotor systems. To test several predictions that this theory makes about the neural substrates of verb meanings, we used functional magnetic resonance imaging (fMRI) to scan subjects' brains while they made semantic judgments involving five classes of verbs: Running verbs (e.g., run, jog, walk), Speaking verbs (e.g., shout, mumble, whisper), Hitting verbs (e.g., hit, poke, jab), Cutting verbs (e.g., cut, slice, hack), and Change of State verbs (e.g., shatter, smash, crack). These classes were selected because they vary with respect to the presence or absence of five distinct semantic components: ACTION, MOTION, CONTACT, CHANGE OF STATE, and TOOL USE. Based on the Simulation Framework, we hypothesized that the ACTION component depends on the primary motor and premotor cortices, that the MOTION component depends on the posterolateral temporal cortex, that the CONTACT component depends on the intraparietal sulcus and inferior parietal lobule, that the CHANGE OF STATE component depends on the ventral temporal cortex, and that the TOOL USE component depends on a distributed network of temporal, parietal, and frontal regions. Virtually all of the predictions were confirmed. Taken together, these findings support the Simulation Framework and extend our understanding of the neuroanatomical distribution of different aspects of verb meaning.

  8. State-of-the-art fiber optics for short distance frequency reference distribution

    NASA Astrophysics Data System (ADS)

    Lutes, G. F.; Primas, L. E.

    1989-05-01

    A number of recently developed fiber-optic components that hold the promise of unprecedented stability for passively stabilized frequency distribution links are characterized. These components include a fiber-optic transmitter, an optical isolator, and a new type of fiber-optic cable. A novel laser transmitter exhibits extremely low sensitivity to intensity and polarization changes of reflected light due to cable flexure. This virtually eliminates one of the shortcomings in previous laser transmitters. A high-isolation, low-loss optical isolator has been developed which also virtually eliminates laser sensitivity to changes in intensity and polarization of reflected light. A newly developed fiber has been tested. This fiber has a thermal coefficient of delay of less than 0.5 parts per million per deg C, nearly 20 times lower than the best coaxial hardline cable and 10 times lower than any previous fiber-optic cable. These components are highly suitable for distribution systems with short extent, such as within a Deep Space Communications Complex. Here, these new components are described and the test results presented.
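
    The quoted thermal coefficient of delay (TCD) translates directly into delay stability: the delay change is Δτ = TCD × τ × ΔT, where τ is the link's propagation delay. A small sketch, in which the link length, temperature swing, fiber group index, and the comparison cable's TCD are illustrative assumptions:

```python
# Delay change of a fiber link: dtau = TCD * tau * dT,
# where tau = n * L / c is the propagation delay.
C = 2.99792458e8      # speed of light in vacuum, m/s
N_FIBER = 1.468       # typical group index of silica fiber (assumed)

def delay_change_ps(length_m, tcd_ppm_per_c, delta_t_c):
    tau = N_FIBER * length_m / C                           # seconds
    return tcd_ppm_per_c * 1e-6 * tau * delta_t_c * 1e12   # picoseconds

# 1-km link, 10 degC swing: the low-TCD fiber (0.5 ppm/degC) versus a
# hypothetical conventional cable at 10 ppm/degC, i.e. 20x worse.
print(delay_change_ps(1000, 0.5, 10))
print(delay_change_ps(1000, 10.0, 10))
```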

  9. State-of-the-art fiber optics for short distance frequency reference distribution

    NASA Technical Reports Server (NTRS)

    Lutes, G. F.; Primas, L. E.

    1989-01-01

    A number of recently developed fiber-optic components that hold the promise of unprecedented stability for passively stabilized frequency distribution links are characterized. These components include a fiber-optic transmitter, an optical isolator, and a new type of fiber-optic cable. A novel laser transmitter exhibits extremely low sensitivity to intensity and polarization changes of reflected light due to cable flexure. This virtually eliminates one of the shortcomings in previous laser transmitters. A high-isolation, low-loss optical isolator has been developed which also virtually eliminates laser sensitivity to changes in intensity and polarization of reflected light. A newly developed fiber has been tested. This fiber has a thermal coefficient of delay of less than 0.5 parts per million per deg C, nearly 20 times lower than the best coaxial hardline cable and 10 times lower than any previous fiber-optic cable. These components are highly suitable for distribution systems with short extent, such as within a Deep Space Communications Complex. Here, these new components are described and the test results presented.

  10. WAMS measurements pre-processing for detecting low-frequency oscillations in power systems

    NASA Astrophysics Data System (ADS)

    Kovalenko, P. Y.

    2017-07-01

Processing the data received from measurement systems implies situations in which one or more registered values stand apart from the rest of the sample; such values are referred to as “outliers”. Their presence in the data sample under consideration can significantly influence the processing results. To ensure the accuracy of low-frequency oscillation detection in power systems, an algorithm has been developed for outlier detection and elimination. The algorithm is based on the concept of the irregular component of the measurement signal; this component comprises measurement errors and is assumed to be a Gaussian-distributed random component. Median filtering is employed to detect values lying outside the range of the normally distributed measurement error on the basis of a 3σ criterion. The algorithm has been validated using both simulated signals and WAMS data.
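The outlier-rejection idea above can be sketched in a few lines (an illustrative reconstruction, not Kovalenko's implementation; the window length and threshold are assumed values): a median filter estimates the regular trend, the residual is treated as the Gaussian irregular component, and samples deviating by more than 3σ are flagged and replaced.

```python
import numpy as np

def remove_outliers(signal, window=5, n_sigma=3.0):
    """Flag and replace outliers in a measurement signal.

    The smooth trend is estimated with a median filter; the residual
    (irregular component) is treated as Gaussian measurement noise, so
    samples deviating more than n_sigma standard deviations from the
    trend are flagged as outliers and replaced by the trend value.
    """
    signal = np.asarray(signal, dtype=float)
    half = window // 2
    padded = np.pad(signal, half, mode="edge")
    trend = np.array([np.median(padded[i:i + window])
                      for i in range(len(signal))])
    residual = signal - trend
    sigma = np.std(residual)
    outliers = np.abs(residual) > n_sigma * sigma
    cleaned = np.where(outliers, trend, signal)
    return cleaned, outliers

# Example: a smooth test signal with two spikes injected
t = np.linspace(0, 1, 50)
x = np.sin(2 * np.pi * t)
x[10] += 5.0   # simulated outliers
x[30] -= 4.0
cleaned, mask = remove_outliers(x)
```

The median filter is the key choice here: unlike a mean filter, a single spike inside the window cannot drag the trend estimate toward itself, so the residual at the spike stays large and detectable.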

  11. COTS-based OO-component approach for software inter-operability and reuse (software systems engineering methodology)

    NASA Technical Reports Server (NTRS)

    Yin, J.; Oyaki, A.; Hwang, C.; Hung, C.

    2000-01-01

The purpose of this research and study paper is to provide a summary description and results of rapid development accomplishments at NASA/JPL in the area of advanced distributed computing technology, using a Commercial-Off-The-Shelf (COTS)-based object-oriented component approach to open, inter-operable software development and software reuse.

  12. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions of cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector performs better in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
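The accrual principle behind such detectors can be illustrated with a minimal sketch (shape and scale are fixed assumed values here for brevity; an actual detector, including the one proposed in this record, estimates the distribution from a sliding window of observed heartbeat inter-arrival times): the suspicion level grows continuously with the time elapsed since the last heartbeat.

```python
import math

class WeibullAccrualDetector:
    """Illustrative accrual failure detector with a Weibull model.

    Heartbeat inter-arrival times are assumed Weibull(k, lam); the
    suspicion level is phi(t) = -log10(P(next heartbeat arrives later
    than t)) = -log10(1 - CDF(t)), which rises continuously as the
    silence since the last heartbeat grows.
    """
    def __init__(self, shape=2.0, scale=1.0):
        self.k = shape      # assumed Weibull shape parameter
        self.lam = scale    # assumed Weibull scale parameter

    def suspicion(self, t_since_last):
        # Weibull survival function S(t) = exp(-(t/lam)^k)
        survival = math.exp(-((t_since_last / self.lam) ** self.k))
        return -math.log10(survival)

d = WeibullAccrualDetector(shape=2.0, scale=1.0)
# Suspicion grows monotonically with silence time:
low, high = d.suspicion(0.5), d.suspicion(3.0)
```

Unlike a binary timeout, the caller picks its own threshold on the suspicion value, so one detector can serve applications with different reliability/speed trade-offs.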

  13. Code CUGEL: A code to unfold Ge(Li) spectrometer polyenergetic gamma photon experimental distributions

    NASA Technical Reports Server (NTRS)

    Steyn, J. J.; Born, U.

    1970-01-01

A FORTRAN code was developed for the Univac 1108 digital computer to unfold polyenergetic gamma photon experimental distributions from lithium-drifted germanium [Ge(Li)] semiconductor spectrometers. It was designed to analyze the combined continuous and monoenergetic gamma radiation field of radioisotope volumetric sources. The code generates the detector system response matrix function and applies it to monoenergetic spectral components discretely and to the continuum iteratively. It corrects for system drift, source decay, background, and detection efficiency. Results are presented in digital form for differential and integrated photon number and energy distributions, and for exposure dose.
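The iterative part of such an unfolding can be illustrated with a generic scheme (a Richardson-Lucy-style sketch under assumed response-matrix values, not the CUGEL algorithm itself): the spectrum estimate is multiplicatively corrected until refolding it through the response matrix reproduces the measurement.

```python
import numpy as np

# Hypothetical 3-bin detector response matrix: each column is the
# measured distribution produced by a monoenergetic source in that
# bin (columns sum to 1, i.e. unit detection efficiency, for
# illustration only).
R = np.array([[0.80, 0.20, 0.05],
              [0.15, 0.70, 0.15],
              [0.05, 0.10, 0.80]])

s_true = np.array([5.0, 3.0, 2.0])   # true source spectrum
m = R @ s_true                        # measured (folded) spectrum

# Iterative unfolding: multiplicatively correct the estimate until
# refolding reproduces the measurement (valid because the columns of
# R are normalized, so R.T @ ones = ones).
s_est = np.ones(3)
for _ in range(2000):
    s_est *= R.T @ (m / (R @ s_est))
```

The multiplicative update keeps the estimate non-negative at every step, which is why schemes of this family are popular for unfolding counting spectra.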

  14. Optical system

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Page, N. A.; Shack, R. V.; Shannon, R. R. (Inventor)

    1985-01-01

Disclosed is an optical system used in a spacecraft to observe a remote surface and provide a spatial and spectral image of that surface. The optical system includes aspheric and spherical mirrors aligned to focus an image of the surface at a first focal plane, and a mirror at this first focal plane which reflects light back onto the spherical mirror. This spherical mirror collimates the light and directs it through a prism which disperses it. The dispersed light is then focused on an array of light responsive elements disposed at a second focal plane. The prism is designed such that it disperses light into components of different wavelengths, with the components of shorter wavelengths being dispersed more than the components of longer wavelengths, presenting at the second focal plane a distribution pattern in which preselected groupings of the components are dispersed over essentially equal spacing intervals.

  15. Evidence for the Gompertz curve in the income distribution of Brazil 1978-2005

    NASA Astrophysics Data System (ADS)

    Moura, N. J., Jr.; Ribeiro, M. B.

    2009-01-01

This work presents an empirical study of the evolution of the personal income distribution in Brazil. Yearly samples available from 1978 to 2005 were studied and evidence was found that the complementary cumulative distribution of personal income for 99% of the economically less favorable population is well represented by a Gompertz curve of the form G(x) = exp[exp(A - Bx)], where x is the normalized individual income. The complementary cumulative distribution of the remaining 1% richest part of the population is well represented by a Pareto power law distribution P(x) = βx^(-α). This result means that, similarly to other countries, Brazil’s income distribution is characterized by a well-defined two-class system. The parameters A, B, α, β were determined by a mixture of boundary conditions, normalization and fitting methods for every year in the time span of this study. Since the Gompertz curve is characteristic of growth models, its presence here suggests that these patterns in income distribution could be a consequence of the growth dynamics of the underlying economic system. In addition, we found that the percentage share of both the Gompertzian and Paretian components relative to the total income shows an approximate cycling pattern with periods of about 4 years, whose maximum and minimum peaks in each component alternate about every 2 years. This finding suggests that the growth dynamics of Brazil’s economic system might follow a Goodwin-type class model based on the application of the Lotka-Volterra equation to economic growth and cycles.
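The two-component description can be evaluated directly (the parameter values below are illustrative, not the paper's fitted values): expressing the complementary cumulative share in percent, the boundary condition G(0) = 100 fixes A = ln(ln 100), and the Gompertz share then decays from 100% toward the ~1% level where the Pareto tail takes over.

```python
import math

def gompertz_ccdf(x, A, B):
    """Gompertz complementary cumulative share, in percent."""
    return math.exp(math.exp(A - B * x))

def pareto_ccdf(x, beta, alpha):
    """Pareto power-law complementary cumulative share."""
    return beta * x ** (-alpha)

# Normalization: G(0) = 100 (percent) fixes A = ln(ln(100)).
A = math.log(math.log(100.0))
B = 0.7                     # illustrative decay rate, not a fitted value
beta, alpha = 120.0, 2.7    # illustrative Pareto parameters

assert abs(gompertz_ccdf(0.0, A, B) - 100.0) < 1e-9
```

Note that as x grows, exp(A - Bx) tends to 0, so G(x) tends to exp(0) = 1, i.e. 1% of the population, which is exactly where the Pareto component of the richest 1% is described as taking over.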

  16. Geography and the costs of urban energy infrastructure: The case of electricity and natural gas capital investments

    NASA Astrophysics Data System (ADS)

    Senyel, Muzeyyen Anil

Investments in the urban energy infrastructure for distributing electricity and natural gas are analyzed using (1) property data measuring distribution plant value at the local/tax district level, and (2) system outputs such as sectoral numbers of customers and energy sales, input prices, and company-specific characteristics such as average wages and load factor. Socio-economic and site-specific urban and geographic variables, however, have often been neglected in past studies. The purpose of this research is to incorporate these site-specific characteristics of electricity and natural gas distribution into investment cost model estimations. These local characteristics include (1) socio-economic variables, such as income and wealth; (2) urban-related variables, such as density, land-use, street pattern, housing pattern; (3) geographic and environmental variables, such as soil, topography, and weather; and (4) company-specific characteristics, such as average wages and load factor. The classical output variables include residential and commercial-industrial customers and sales. In contrast to most previous research, only capital investments at the local level are considered. In addition to aggregate cost modeling, the analysis focuses on the investment costs for the system components: overhead conductors, underground conductors, conduits, poles, transformers, services, street lighting, and station equipment for electricity distribution; and mains, services, and regular and industrial measurement and regulation stations for natural gas distribution. The Box-Cox, log-log and additive models are compared to determine the best fitting cost functions. The Box-Cox form turns out to be superior to the other forms at the aggregate level and for network components. However, a linear additive form provides a better fit for end-user related components.
The results show that, in addition to output variables and company-specific variables, various site-specific variables are statistically significant at the aggregate and disaggregate levels. Local electricity and natural gas distribution networks are characterized by a natural monopoly cost structure and economies of scale and density. The results provide evidence for economies of scale and density for the aggregate electricity and natural gas distribution systems. However, distribution components have varying economic characteristics. The backbones of the networks (overhead conductors for electricity, and mains for natural gas) display economies of scale and density, but services in both systems and street lighting display diseconomies of scale and diseconomies of density. Finally, multi-utility network cost analyses are presented for aggregate and disaggregate electricity and natural gas capital investments. Economies of scope analyses investigate whether providing electricity and natural gas jointly is economically advantageous, as compared to providing these products separately. Significant economies of scope are observed for both the total network and the underground capital investments.
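The comparison among the three functional forms is natural because the Box-Cox transform nests the other two as special cases: λ = 1 reproduces a (shifted) linear additive specification, and the λ → 0 limit reproduces the logarithmic one. A minimal sketch with hypothetical data:

```python
import numpy as np

def boxcox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam, continuous in lam with the
    natural log as the lam -> 0 limit."""
    y = np.asarray(y, dtype=float)
    if abs(lam) < 1e-12:
        return np.log(y)
    return (y ** lam - 1.0) / lam

y = np.array([1.0, 2.0, 5.0, 10.0])   # hypothetical cost observations
# lam = 1 recovers a shifted linear (additive) specification ...
assert np.allclose(boxcox(y, 1.0), y - 1.0)
# ... and lam -> 0 approaches the log-log specification.
assert np.allclose(boxcox(y, 1e-8), np.log(y), atol=1e-6)
```

Estimating λ from the data, rather than fixing it at 0 or 1, is what lets the Box-Cox form outperform both restricted forms, as the record reports at the aggregate level.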

  17. A failure management prototype: DR/Rx

    NASA Technical Reports Server (NTRS)

    Hammen, David G.; Baker, Carolyn G.; Kelly, Christine M.; Marsh, Christopher A.

    1991-01-01

    This failure management prototype performs failure diagnosis and recovery management of hierarchical, distributed systems. The prototype, which evolved from a series of previous prototypes following a spiral model for development, focuses on two functions: (1) the diagnostic reasoner (DR) performs integrated failure diagnosis in distributed systems; and (2) the recovery expert (Rx) develops plans to recover from the failure. Issues related to expert system prototype design and the previous history of this prototype are discussed. The architecture of the current prototype is described in terms of the knowledge representation and functionality of its components.

  18. Seismic Retrofit for Electric Power Systems

    DOE PAGES

    Romero, Natalia; Nozick, Linda K.; Dobson, Ian; ...

    2015-05-01

Our paper develops a two-stage stochastic program and solution procedure to optimize the selection of seismic retrofit strategies to increase the resilience of electric power systems against earthquake hazards. The model explicitly considers the range of earthquake events that are possible and, for each, an approximation of the distribution of damage experienced. This is important because electric power systems are spatially distributed, so their performance is driven by the distribution of component damage. We also test this solution procedure against the nonlinear integer solver in LINGO 13 and apply the formulation and solution strategy to the Eastern Interconnection, where seismic hazard stems from the New Madrid seismic zone.

  19. Mission Assurance in a Distributed Environment

    DTIC Science & Technology

    2009-06-01

Business Process Modeling Notation (BPMN) – Graphical representation of business processes in a workflow • Unified Modeling Language (UML) – Use standard UML diagrams to model the system – Component, sequence, activity diagrams

  20. Modular space vehicle boards, control software, reprogramming, and failure recovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judd, Stephen; Dallmann, Nicholas; McCabe, Kevin

    A space vehicle may have a modular board configuration that commonly uses some or all components and a common operating system for at least some of the boards. Each modular board may have its own dedicated processing, and processing loads may be distributed. The space vehicle may be reprogrammable, and may be launched without code that enables all functionality and/or components. Code errors may be detected and the space vehicle may be reset to a working code version to prevent system failure.

  1. Distributed data transmitter

    DOEpatents

    Brown, Kenneth Dewayne [Grain Valley, MO; Dunson, David [Kansas City, MO

    2006-08-08

    A distributed data transmitter (DTXR) which is an adaptive data communication microwave transmitter having a distributable architecture of modular components, and which incorporates both digital and microwave technology to provide substantial improvements in physical and operational flexibility. The DTXR has application in, for example, remote data acquisition involving the transmission of telemetry data across a wireless link, wherein the DTXR is integrated into and utilizes available space within a system (e.g., a flight vehicle). In a preferred embodiment, the DTXR broadly comprises a plurality of input interfaces; a data modulator; a power amplifier; and a power converter, all of which are modularly separate and distinct so as to be substantially independently physically distributable and positionable throughout the system wherever sufficient space is available.

  2. Distributed data transmitter

    DOEpatents

    Brown, Kenneth Dewayne [Grain Valley, MO; Dunson, David [Kansas City, MO

    2008-06-03

    A distributed data transmitter (DTXR) which is an adaptive data communication microwave transmitter having a distributable architecture of modular components, and which incorporates both digital and microwave technology to provide substantial improvements in physical and operational flexibility. The DTXR has application in, for example, remote data acquisition involving the transmission of telemetry data across a wireless link, wherein the DTXR is integrated into and utilizes available space within a system (e.g., a flight vehicle). In a preferred embodiment, the DTXR broadly comprises a plurality of input interfaces; a data modulator; a power amplifier; and a power converter, all of which are modularly separate and distinct so as to be substantially independently physically distributable and positionable throughout the system wherever sufficient space is available.

  3. IEEE 342 Node Low Voltage Networked Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Phanivong, Phillippe K.; Lacroix, Jean-Sebastian

The IEEE Distribution Test Feeders provide a benchmark for new algorithms in the distribution analysis community. The low voltage network test feeder represents a moderate-size urban system that is unbalanced and highly networked. This is the first distribution test feeder developed by the IEEE that contains unbalanced networked components. The 342 node Low Voltage Networked Test System includes many elements that may be found in a networked system: multiple 13.2kV primary feeders, network protectors, a 120/208V grid network, and multiple 277/480V spot networks. This paper presents a brief review of the history of low voltage networks and how they evolved into modern systems. This paper then presents a description of the 342 Node IEEE Low Voltage Network Test System and power flow results.

  4. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  5. Trickling Filters. Student Manual. Biological Treatment Process Control.

    ERIC Educational Resources Information Center

    Richwine, Reynold D.

    The textual material for a unit on trickling filters is presented in this student manual. Topic areas discussed include: (1) trickling filter process components (preliminary treatment, media, underdrain system, distribution system, ventilation, and secondary clarifier); (2) operational modes (standard rate filters, high rate filters, roughing…

  6. High voltage-high power components for large space power distribution systems

    NASA Technical Reports Server (NTRS)

    Renz, D. D.

    1984-01-01

    Space power components including a family of bipolar power switching transistors, fast switching power diodes, heat pipe cooled high frequency transformers and inductors, high frequency conduction cooled transformers, high power-high frequency capacitors, remote power controllers and rotary power transfer devices were developed. Many of these components such as the power switching transistors, power diodes and the high frequency capacitor are commercially available. All the other components were developed to the prototype level. The dc/dc series resonant converters were built to the 25 kW level.

  7. Lithotype characterizations by Nuclear Magnetic Resonance (NMR): A case study on limestone and associated rocks from the eastern Dahomey Basin, Nigeria

    NASA Astrophysics Data System (ADS)

    Olatinsu, O. B.; Olorode, D. O.; Clennell, B.; Esteban, L.; Josh, M.

    2017-05-01

Three representative rock types (limestone, sandstone, and shale) and glauconite samples collected from Ewekoro Quarry, eastern Dahomey Basin in Nigeria, were characterized using low-field 2 MHz and 20 MHz Nuclear Magnetic Resonance (NMR) techniques. NMR T2 relaxation time decay measurements were conducted on disc samples under partial and full water-saturation conditions using the CPMG spin-echo routine. The T2 relaxation decay was converted into a T2 distribution in the time domain to assess and evaluate the pore size distribution of the samples. Good agreement exists between water content from T2 NMR distributions and the water imbibition porosity (WIP) technique. Results show that the most useful characteristics for discriminating the different facies come from the full-saturation 2 MHz NMR pore size distribution (PSD). The shale facies depicts a quasi-unimodal distribution with a greater than 90% contribution from the clay-bound water component (T2s) coupled to a capillary-bound water component (T2i) centred on 2 ms. The other facies, with well-connected pore structures, show either a bimodal or a trimodal T2 distribution composed of a similar clay-bound water component centred on 0.3 ms and a quasi-capillary-bound water component centred on 10 ms. Their difference depends on the movable water T2 component (T2l), which does not exist in the glauconite facies (bimodal distribution) but exists in both the sandstone and limestone facies. The basic difference between the limestone and sandstone facies is related to the longer T2 coupling: the T2i and T2l populations are coupled in sandstone, generating a single population that convolves both (bimodal distribution). The trimodal distribution of limestone attests to the fact that carbonate rocks have a more complex pore system than siliciclastic rocks. The degree of pore connectivity is highest in sandstone, followed by limestone, and lowest in glauconite.
Therefore, a basic/quick NMR log run on samples along a geological formation can provide precise lithofacies characterization with quantitative information on pore size, structure and distribution.

  8. Preliminary measurements of kinetic dust temperature using stereoscopic particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Williams, Jeremiah; Thomas, Edward

    2004-11-01

A dusty (or complex) plasma is a four-component system composed of ions, electrons, neutral particles and charged microparticles. The presence of the microparticle (i.e., dust) component alters the plasma environment, giving rise to a wide variety of new plasma phenomena. Recently, the Auburn Plasma Sciences Laboratory (PSL) has acquired and installed a stereoscopic PIV (stereo-PIV) diagnostic tool for dusty plasma investigations [Thomas et al., Phys. Plasmas 11, L37 (2004)]. This presentation discusses the use of the stereo-PIV technique for determining the velocity space distribution function of the microparticle component of a dc glow discharge dusty plasma. These distribution functions are then used to make preliminary estimates of the kinetic temperature of the dust component. The data are compared to a simple energy balance model that relates the dust temperature to the electric field and neutral pressure.

  9. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Hahn, Inseob (Inventor); Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor)

    2013-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  10. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2014-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  11. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2011-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  12. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H (Inventor); Hahn, Inseob (Inventor)

    2010-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  13. An introduction of component fusion extend Kalman filtering method

    NASA Astrophysics Data System (ADS)

    Geng, Yue; Lei, Xusheng

    2018-05-01

In this paper, the Component Fusion Extended Kalman Filtering (CFEKF) algorithm is proposed. Assuming that each component of the error propagation is independent and Gaussian-distributed, the CFEKF is obtained through maximum-likelihood estimation of the propagation error, which adjusts the state transition matrix and the measurement matrix adaptively. By minimizing the linearization error, the CFEKF can effectively improve the state estimation accuracy for nonlinear systems. The computation of the CFEKF is similar to that of the EKF, which makes it easy to apply.
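For context, the standard EKF predict/update cycle that the CFEKF extends can be sketched as follows (an illustrative toy model, not the authors' CFEKF; the 1-D state and quadratic measurement are assumptions for the example):

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a standard Extended Kalman Filter.

    x, P         : state estimate and covariance
    z            : new measurement
    f, h         : nonlinear transition and measurement functions
    F_jac, H_jac : their Jacobians, evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # Predict: propagate through the nonlinear model, linearize for P
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: Kalman gain from the linearized measurement model
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: a 1-D constant state observed through z = x^2
f = lambda x: x
F_jac = lambda x: np.eye(1)
h = lambda x: x ** 2
H_jac = lambda x: np.array([[2.0 * x[0]]])
x = np.array([1.5])
P = np.eye(1)
Q = np.eye(1) * 1e-4
R = np.eye(1) * 1e-2
for z in [4.1, 3.9, 4.05]:          # noisy observations of x^2 = 4
    x, P = ekf_step(x, P, np.array([z]), f, F_jac, h, H_jac, Q, R)
```

The linearization error the CFEKF targets enters through F_jac and H_jac: the farther the estimate is from the truth, the worse the first-order approximation, which is what an adaptive adjustment of these matrices aims to mitigate.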

  14. Distributed Optimization of Sustainable Power Dispatch and Flexible Consumer Loads for Resilient Power Grid Operations

    NASA Astrophysics Data System (ADS)

    Srikantha, Pirathayini

    Today's electric grid is rapidly evolving to provision for heterogeneous system components (e.g. intermittent generation, electric vehicles, storage devices, etc.) while catering to diverse consumer power demand patterns. In order to accommodate this changing landscape, the widespread integration of cyber communication with physical components can be witnessed in all tenets of the modern power grid. This ubiquitous connectivity provides an elevated level of awareness and decision-making ability to system operators. Moreover, devices that were typically passive in the traditional grid are now `smarter' as these can respond to remote signals, learn about local conditions and even make their own actuation decisions if necessary. These advantages can be leveraged to reap unprecedented long-term benefits that include sustainable, efficient and economical power grid operations. Furthermore, challenges introduced by emerging trends in the grid such as high penetration of distributed energy sources, rising power demands, deregulations and cyber-security concerns due to vulnerabilities in standard communication protocols can be overcome by tapping onto the active nature of modern power grid components. In this thesis, distributed constructs in optimization and game theory are utilized to design the seamless real-time integration of a large number of heterogeneous power components such as distributed energy sources with highly fluctuating generation capacities and flexible power consumers with varying demand patterns to achieve optimal operations across multiple levels of hierarchy in the power grid. Specifically, advanced data acquisition, cloud analytics (such as prediction), control and storage systems are leveraged to promote sustainable and economical grid operations while ensuring that physical network, generation and consumer comfort requirements are met. 
Moreover, privacy and security considerations are incorporated into the core of the proposed designs and these serve to improve the resiliency of the future smart grid. It is demonstrated both theoretically and practically that the techniques proposed in this thesis are highly scalable and robust with superior convergence characteristics. These distributed and decentralized algorithms allow individual actuating nodes to execute self-healing and adaptive actions when exposed to changes in the grid so that the optimal operating state in the grid is maintained consistently.

  15. An adequacy-constrained integrated planning method for effective accommodation of DG and electric vehicles in smart distribution systems

    NASA Astrophysics Data System (ADS)

    Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming

    2018-06-01

This paper presents a new integrated planning framework for effectively accommodating electric vehicles in smart distribution systems (SDS). The proposed method collectively incorporates the various investment options available to the utility, including distributed generation (DG), capacitors and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme, and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.

  16. Software architecture for a distributed real-time system in Ada, with application to telerobotics

    NASA Technical Reports Server (NTRS)

    Olsen, Douglas R.; Messiora, Steve; Leake, Stephen

    1992-01-01

The architecture and software design methodology presented here are described in the context of a telerobotic application in Ada, specifically the Engineering Test Bed (ETB), which was developed to support the Flight Telerobotic Servicer (FTS) Program at GSFC. However, the nature of the architecture is such that it has applications to any multiprocessor distributed real-time system. The ETB architecture, which is a derivation of the NASA/NBS Standard Reference Model (NASREM), defines a hierarchy for representing a telerobot system. Within this hierarchy, a module is a logical entity consisting of the software associated with a set of related hardware components in the robot system. A module is composed of submodules, which are cyclically executing processes that each perform a specific set of functions. The submodules in a module can run on separate processors. The submodules in the system communicate via command/status (C/S) interface channels, which are used to send commands down and relay status back up the system hierarchy. Submodules also communicate via setpoint data links, which are used to transfer control data from one submodule to another. A submodule invokes submodule algorithms (SMA's) to perform algorithmic operations. Data that describe or model a physical component of the system are stored as objects in the World Model (WM). The WM is a system-wide distributed database that is accessible to submodules in all modules of the system for creating, reading, and writing objects.

  17. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  18. Experimental consideration of capillary chromatography based on tube radial distribution of ternary mixture carrier solvents under laminar flow conditions.

    PubMed

    Jinno, Naoya; Hashimoto, Masahiko; Tsukagoshi, Kazuhiko

    2011-01-01

    A capillary chromatography system has been developed based on the tube radial distribution of the carrier solvents in an open capillary tube, using a water-acetonitrile-ethyl acetate mixture as the carrier solution. This tube radial distribution chromatography (TRDC) system works under laminar flow conditions. In this study, a phase diagram for the ternary mixture of water, acetonitrile, and ethyl acetate was constructed. The phase diagram, which included a boundary curve between homogeneous and heterogeneous solutions, was considered together with the component ratios of the solvents in the homogeneous carrier solutions required for the TRDC system. The TRDC system was found to perform well with homogeneous solutions whose solvent component ratios lie near the homogeneous-heterogeneous boundary of the phase diagram. To prepare the water-hydrophilic/hydrophobic organic solvent carrier solutions for the TRDC system, we used, for the first time, methanol, ethanol, 1,4-dioxane, and 1-propanol in place of acetonitrile (the hydrophilic organic solvent), as well as chloroform and 1-butanol in place of ethyl acetate (the hydrophobic organic solvent). The homogeneous ternary mixture carrier solutions were prepared near the homogeneous-heterogeneous solution boundary. Analyte mixtures of 2,6-naphthalenedisulfonic acid and 1-naphthol were separated with the TRDC system using these homogeneous ternary mixture carrier solutions. The pressure change in the capillary tube under laminar flow conditions might alter the carrier solution from homogeneous in the batch vessel to heterogeneous, thus affecting the tube radial distribution of the solvents in the capillary tube.

  19. Development of a Sediment Transport Component for DHSVM

    NASA Astrophysics Data System (ADS)

    Doten, C. O.; Bowling, L. C.; Maurer, E. P.; Voisin, N.; Lettenmaier, D. P.

    2003-12-01

    The effect of forest management and disturbance on aquatic resources is a problem of considerable contemporary scientific and public concern in the West. Sediment generation is one of the factors linking land surface conditions with aquatic systems, with implications for fisheries protection and enhancement. Better predictive techniques that allow assessment of the effects of fire and logging, in particular, on sediment transport could help provide a more scientific basis for the management of forests in the West. We describe the development of a sediment transport component for the Distributed Hydrology Soil Vegetation Model (DHSVM), a spatially distributed hydrologic model that was developed specifically for assessment of the hydrologic consequences of forest management. The sediment transport module extends the hydrologic dynamics of DHSVM to predict sediment generation, in response to dynamic meteorological inputs and hydrologic conditions, via mass wasting and surface erosion from forest roads and hillslopes. The mass wasting component builds on existing stochastic slope stability models by incorporating distributed basin hydrology (from DHSVM) and post-failure, rule-based redistribution of sediment downslope. The stochastic nature of the mass wasting component allows specification of probability distributions that describe the spatial variability of the soil and vegetation characteristics used in the infinite slope model. The forest road and hillslope surface erosion algorithms account for erosion from raindrop impact and overland flow. A simple routing scheme transports the eroded sediment from mass wasting and forest road surface erosion that reaches the channel system to the basin outlet. A sensitivity analysis of the model input parameters and forest cover conditions is described for the Little Wenatchee River basin in the northeastern Washington Cascades.
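
    The infinite slope model mentioned above has a standard textbook factor-of-safety form; the sketch below uses that form only as a simplified stand-in (DHSVM's actual mass-wasting component samples its parameters stochastically and includes additional terms, so the defaults here are illustrative assumptions):

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, z, h,
                     gamma_s=18.0e3, gamma_w=9.81e3):
    """Textbook infinite-slope factor of safety.
    c: total cohesion (Pa); phi_deg: internal friction angle (deg);
    slope_deg: slope angle (deg); z: soil depth (m); h: saturated
    thickness (m); gamma_s/gamma_w: soil and water unit weights (N/m^3)."""
    theta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    # Shear strength (cohesion + effective-stress friction) over shear stress.
    resisting = c + (gamma_s * z - gamma_w * h) * math.cos(theta) ** 2 * math.tan(phi)
    driving = gamma_s * z * math.sin(theta) * math.cos(theta)
    return resisting / driving

# FS < 1 indicates predicted failure; saturating the soil column (h -> z)
# lowers FS, which is how wetter hydrologic states trigger mass wasting.
print(factor_of_safety(c=2000, phi_deg=33, slope_deg=35, z=1.5, h=0.0))  # dry
print(factor_of_safety(c=2000, phi_deg=33, slope_deg=35, z=1.5, h=1.5))  # saturated
```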

  20. Tracking-Data-Conversion Tool

    NASA Technical Reports Server (NTRS)

    Flora-Adams, Dana; Makihara, Jeanne; Benenyan, Zabel; Berner, Jeff; Kwok, Andrew

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.

  1. Software Framework for Peer Data-Management Services

    NASA Technical Reports Server (NTRS)

    Hughes, John; Hardman, Sean; Crichton, Daniel; Hyon, Jason; Kelly, Sean; Tran, Thuy

    2007-01-01

    Object Oriented Data Technology (OODT) is a software framework for creating a Web-based system for exchange of scientific data that are stored in diverse formats on computers at different sites under the management of scientific peers. OODT software consists of a set of cooperating, distributed peer components that provide distributed peer-to-peer (P2P) services that enable one peer to search and retrieve data managed by another peer. In effect, computers running OODT software at different locations become parts of an integrated data-management system.
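
    The P2P search-and-retrieve idea can be sketched as peers that answer queries from a local catalog and forward misses to their neighbors (a toy model under assumed names, not the actual OODT components or API):

```python
class Peer:
    """Toy illustration of P2P metadata search in the spirit of OODT:
    each peer holds a local catalog and forwards unanswered queries to
    its neighbor peers, tracking visited peers to avoid cycles."""

    def __init__(self, name, catalog):
        self.name = name
        self.catalog = catalog   # product name -> archive location
        self.neighbors = []

    def search(self, product, visited=None):
        visited = visited if visited is not None else set()
        visited.add(self.name)
        if product in self.catalog:
            return (self.name, self.catalog[product])  # answered locally
        for peer in self.neighbors:
            if peer.name not in visited:
                hit = peer.search(product, visited)    # forward the query
                if hit:
                    return hit
        return None

jpl = Peer("jpl", {"mars_dem": "/archive/mars_dem.img"})
gsfc = Peer("gsfc", {"modis_l2": "/archive/modis_l2.hdf"})
jpl.neighbors.append(gsfc)
print(jpl.search("modis_l2"))   # ('gsfc', '/archive/modis_l2.hdf')
```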

  2. Fiber-optic technology for transport aircraft

    NASA Astrophysics Data System (ADS)

    1993-07-01

    A development status evaluation is presented for fiber-optic devices that are advantageously applicable to commercial aircraft. Current development efforts at a major U.S. military and commercial aircraft manufacturer encompass installation techniques and data distribution practices, as well as the definition and refinement of an optical propulsion management interface system, environmental sensing systems, and component-qualification criteria. Data distribution is the most readily implementable near-term fiber-optic technology aboard commercial aircraft, in the form of onboard local-area networks for intercomputer connections and passenger entertainment.

  3. Web-Based Distributed Simulation of Aeronautical Propulsion System

    NASA Technical Reports Server (NTRS)

    Zheng, Desheng; Follen, Gregory J.; Pavlik, William R.; Kim, Chan M.; Liu, Xianyou; Blaser, Tammy M.; Lopez, Isaac

    2001-01-01

    An application was developed to allow users to run and view Numerical Propulsion System Simulation (NPSS) engine simulations from web browsers. Simulations were performed on multiple Information Power Grid (IPG) test beds. The Common Object Request Broker Architecture (CORBA) was used for brokering data exchange among machines, and IPG/Globus for job scheduling and remote process invocation. Web server scripting was performed with JavaServer Pages (JSP). This application has proven to be an effective and efficient way to couple heterogeneous distributed components.

  4. Continental-scale simulation of burn probabilities, flame lengths, and fire size distribution for the United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac Grenfell; Karin L. Riley

    2010-01-01

    Components of a quantitative risk assessment were produced by simulation of burn probabilities and fire behavior variation for 134 fire planning units (FPUs) across the continental U.S. The system uses fire growth simulation of ignitions modeled from relationships between large fire occurrence and the fire danger index Energy Release Component (ERC). Simulations of 10,...

  5. A Developmental Shift from Similar to Language-Specific Strategies in Verb Acquisition: A Comparison of English, Spanish, and Japanese

    ERIC Educational Resources Information Center

    Maguire, Mandy J.; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick; Imai, Mutsumi; Haryu, Etsuko; Vanegas, Sandra; Okada, Hiroyuki; Pulverman, Rachel; Sanchez-Davis, Brenda

    2010-01-01

    The world's languages draw on a common set of event components for their verb systems. Yet, these components are differentially distributed across languages. At what age do children begin to use language-specific patterns to narrow possible verb meanings? English-, Japanese-, and Spanish-speaking adults, toddlers, and preschoolers were shown…

  6. [Coupling of brain oscillatory systems with cognitive (experience and valence) and physiological (cardiovascular reactivity) components of emotion].

    PubMed

    Aftanas, L I; Reva, N V; Pavlov, S V; Korenek, V V; Brak, I V

    2014-02-01

    We investigated the coupling of EEG oscillations with the cognitive (experience and valence) and physiological (cardiovascular reactivity) components of emotion. Emotions of anger and joy were evoked in healthy males (n = 49) using a guided imagery method, while multichannel EEG and cardiovascular reactivity (Finometer) were recorded simultaneously. Correlational analysis revealed that spatially distributed EEG oscillations appear to be selectively involved in the cognitive and physiological components of emotional responding. Low theta (4-6 Hz) activity over the medial and lateral frontal cortex of the right hemisphere correlated predominantly with the experience of anger; high alpha (10-12 and 12-14 Hz) and gamma (30-45 Hz) activity over centro-parieto-occipital regions of the left hemisphere correlated with cardiovascular reactivity to anger and joy; and gamma (30-45 Hz) activity over left parietal areas correlated with cardiovascular reactivity to joy. The findings suggest that spatially distributed neuronal networks oscillating at different frequencies may constitute a neurobiological mechanism coordinating the dynamic balance between the cognitive and physiological components of emotion, as well as psycho-neuro-somatic relationships within the mind-brain-body system.

  7. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229
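
    An accrual failure detector outputs a continuously growing suspicion level rather than a binary alive/crashed verdict. A minimal sketch of that idea with a Weibull model of heartbeat inter-arrival times (the paper's exact estimator and parameter-fitting procedure are not reproduced here; `shape` and `scale` are assumed to be supplied):

```python
import math

def weibull_phi(t_since_last, shape, scale):
    """Accrual suspicion level in the style of the phi failure detector,
    using a Weibull survival function for heartbeat inter-arrival times.
    Returns phi = -log10(P(a heartbeat still arrives after t))."""
    p_later = math.exp(-((t_since_last / scale) ** shape))  # Weibull survival
    return -math.log10(max(p_later, 1e-300))  # clamp to avoid log10(0)

# Example: inter-arrival times modeled with shape k=2, scale lambda=1.0 s.
# Suspicion grows smoothly with silence; an application-chosen threshold
# (e.g. phi > 8) converts the accrual value into a crash decision.
for t in (0.5, 1.0, 2.0, 4.0):
    print(t, weibull_phi(t, shape=2.0, scale=1.0))
```

    In a real monitor, `shape` and `scale` would be re-estimated continuously from a sliding window of observed heartbeat gaps, which is what lets the detector track changing cloud network conditions.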

  8. Subtlenoise: sonification of distributed computing operations

    NASA Astrophysics Data System (ADS)

    Love, P. A.

    2015-12-01

    The operation of distributed computing systems requires comprehensive monitoring to ensure reliability and robustness. Most monitoring systems have two components: visually rich time-series graphs, and notification systems that alert operators under certain pre-defined conditions. In this paper the sonification of monitoring messages is explored using an architecture that fits easily within existing infrastructures, based on mature open-source technologies such as ZeroMQ, Logstash, and SuperCollider (a synthesis engine). Message attributes are mapped onto audio attributes based on a broad classification of the message (continuous or discrete metrics), while keeping the audio stream subtle in nature. The benefits of audio rendering are described in the context of distributed computing operations; it may provide a less intrusive way to understand the operational health of these systems.
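
    A toy illustration of the attribute-mapping idea (function and parameter names are assumptions, not Subtlenoise's API): continuous metrics are normalized and scaled across an octave of pitch and a narrow amplitude band, while discrete events map to short fixed pings, keeping the stream subtle:

```python
def metric_to_audio(name, value, lo, hi, kind="continuous"):
    """Map a monitoring metric onto audio attributes. Continuous metrics
    scale pitch within one octave and nudge amplitude; discrete events
    become short fixed-pitch pings."""
    if kind == "discrete":
        return {"metric": name, "freq_hz": 880.0, "amp": 0.2, "dur_s": 0.05}
    frac = min(max((value - lo) / (hi - lo), 0.0), 1.0)  # normalize to [0, 1]
    return {"metric": name,
            "freq_hz": 220.0 * (2.0 ** frac),  # one octave: 220-440 Hz
            "amp": 0.05 + 0.1 * frac,          # slightly louder as it climbs
            "dur_s": 0.2}

print(metric_to_audio("job_queue_depth", 750, lo=0, hi=1000))
```

    In the architecture described above, a message bus consumer would apply a mapping like this and hand the resulting parameters to the synthesis engine.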

  9. The Community Climate System Model Version 4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gent, Peter R.; Danabasoglu, Gokhan; Donner, Leo J.

    The fourth version of the Community Climate System Model (CCSM4) was recently completed and released to the climate community. This paper describes developments in all the CCSM components and documents fully coupled pre-industrial control runs compared to the previous version, CCSM3. Using the standard atmosphere and land resolution of 1°, the sea surface temperature biases in the major upwelling regions are comparable to those of the 1.4°-resolution CCSM3. Two changes to the deep convection scheme in the atmosphere component result in the CCSM4 producing El Niño/Southern Oscillation variability with a much more realistic frequency distribution than the CCSM3, although the amplitude is too large compared to observations. They also improve the representation of the Madden-Julian Oscillation and the frequency distribution of tropical precipitation. A new overflow parameterization in the ocean component leads to an improved simulation of the deep ocean density structure, especially in the North Atlantic. Changes to the CCSM4 land component lead to a much improved annual cycle of water storage, especially in the tropics. The CCSM4 sea ice component uses much more realistic albedos than the CCSM3, and the Arctic sea ice concentration is improved in the CCSM4. An ensemble of 20th-century simulation runs produces an excellent match to the observed September Arctic sea ice extent from 1979 to 2005. The CCSM4 ensemble-mean increase in globally averaged surface temperature between 1850 and 2005 is larger than the observed increase by about 0.4°C. This is consistent with the fact that the CCSM4 does not include a representation of the indirect effects of aerosols, although other factors may come into play. The CCSM4 still has significant biases, such as the mean precipitation distribution in the tropical Pacific Ocean, too much low cloud in the Arctic, and the latitudinal distributions of short-wave and long-wave cloud forcings.

  10. 24 CFR 200.925b - Residential and institutional building code comparison items.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...., materials, allowable stresses, design; (6) Excavation; (e) Materials standards. (f) Construction components...) Plumbing fixtures; (7) Water supply and distribution; (8) Storm drain systems. (j) Electrical. (1) Wiring...

  11. 24 CFR 200.925b - Residential and institutional building code comparison items.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...., materials, allowable stresses, design; (6) Excavation; (e) Materials standards. (f) Construction components...) Plumbing fixtures; (7) Water supply and distribution; (8) Storm drain systems. (j) Electrical. (1) Wiring...

  12. Recovery and purification of ethylene

    DOEpatents

    Reyneke, Rian [Katy, TX; Foral, Michael J [Aurora, IL; Lee, Guang-Chung [Houston, TX; Eng, Wayne W. Y. [League City, TX; Sinclair, Iain [Warrington, GB; Lodgson, Jeffery S [Naperville, IL

    2008-10-21

    A process for the recovery and purification of ethylene and optionally propylene from a stream containing lighter and heavier components that employs an ethylene distributor column and a partially thermally coupled distributed distillation system.

  13. Modeling and Analysis of Mixed Synchronous/Asynchronous Systems

    NASA Technical Reports Server (NTRS)

    Driscoll, Kevin R.; Madl, Gabor; Hall, Brendan

    2012-01-01

    Practical safety-critical distributed systems must integrate safety-critical and non-critical data in a common platform. Safety-critical systems almost always consist of isochronous components that have synchronous or asynchronous interfaces with other components, and many of these systems support a mix of synchronous and asynchronous interfaces. This report presents a study on the modeling and analysis of asynchronous, synchronous, and mixed synchronous/asynchronous systems. We build on the SAE Architecture Analysis and Design Language (AADL) to capture architectures for analysis. We present preliminary work targeted at capturing mixed low- and high-criticality data, as well as real-time properties, in a common Model of Computation (MoC). An abstract, but representative, test specimen system was created as the system to be modeled.

  14. [The endogenous opioid system and drug addiction].

    PubMed

    Maldonado, R

    2010-01-01

    Drug addiction is a chronic brain disorder leading to complex adaptive changes within the brain reward circuits. Several neurotransmitter systems, including the endogenous opioid system, are involved in these changes. The opioid system plays a pivotal role in different aspects of addiction. Opioid receptors and endogenous opioid peptides are widely distributed in the mesolimbic system and modulate dopaminergic activity within the reward circuits. Opioid receptors and peptides are selectively involved in several components of the addictive processes induced by opioids, cannabinoids, psychostimulants, alcohol, and nicotine. This review is focused on the contribution of each component of the endogenous opioid system to the addictive properties of the different drugs of abuse. Copyright 2010 Elsevier Masson SAS. All rights reserved.

  15. A new technology perspective and engineering tools approach for large, complex and distributed mission and safety critical systems components

    NASA Technical Reports Server (NTRS)

    Carrio, Miguel A., Jr.

    1988-01-01

    Rapidly emerging technology and methodologies have outpaced the systems development process's ability to use them effectively, if at all. At the same time, the tools used to build systems are becoming obsolescent themselves as a consequence of the same technology lag that plagues systems development. The net result is that systems development activities have not been able to take advantage of available technology and have become equally dependent on aging and ineffective computer-aided engineering tools. New methods and tools approaches are essential if the demands of non-stop and Mission and Safety Critical (MASC) components are to be met.

  16. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to effectively automate the management of the software development process, so that costly mistakes during the development phase can be eliminated. The focus here is on the design of the components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  17. GEOSS AIP-2 Climate Change and Biodiversity Use Scenarios: Interoperability Infrastructures

    NASA Astrophysics Data System (ADS)

    Nativi, Stefano; Santoro, Mattia

    2010-05-01

    In recent years, the scientific community has devoted great effort to studying the effects of climate change on life on Earth. In this framework, a key role is played by the impact of climate change on biodiversity. Assessing it requires, in several use scenarios, modeling the impact of climatological change on the regional distribution of biodiversity species. Designing and developing interoperability infrastructures that enable scientists to search, discover, access, and use multi-disciplinary resources (i.e., datasets, services, models, etc.) is currently one of the main research fields in Earth and Space Science Informatics. This presentation introduces and discusses an interoperability infrastructure that implements the discovery, access, and chaining of loosely coupled resources in the climatology and biodiversity domains, making it possible to set up and run forecast and processing models. The framework was successfully developed and tested in the context of the GEOSS AIP-2 (Global Earth Observation System of Systems, Architecture Implementation Pilot, Phase 2) Climate Change & Biodiversity thematic Working Group. The interoperability infrastructure comprises the following main components and services: a) GEO Portal: through this component the end user is able to search, find, and access the services needed for the scenario execution; b) Graphical User Interface (GUI): this component provides user interaction functionality and controls the workflow manager to perform the operations required for the scenario implementation; c) Use Scenario Controller: this component acts as a workflow controller implementing the scenario business process, i.e., a typical climate change and biodiversity projection scenario; d) Service Broker implementing Mediation Services: this component realizes a distributed catalogue that federates several discovery and access components, exposing them through a single standard CSW interface; the federated components publish climate, environmental, and biodiversity datasets; e) Ecological Niche Model Server: this component is able to run one or more Ecological Niche Models (ENMs) on selected biodiversity and climate datasets; f) Data Access Transaction Server: this component publishes the model outputs. The framework was assessed in two use scenarios of the GEOSS AIP-2 Climate Change and Biodiversity WG, both concerning the prediction of species distributions driven by climatological change forecasts. The first scenario dealt with the regional distribution of pikas in the Great Basin area (North America), while the second concerned the modeling of Arctic food chain species in the North Pole area, where the relationships between different environmental parameters and the distribution of polar bears were analyzed. Scientific patronage was provided by the University of Colorado and the University of Alaska, respectively. Results are published on the GEOSS AIP-2 web site: http://www.ogcnetwork.net/AIP2develop.

  18. Solid cryogen: a cooling system for future MgB2 MRI magnet.

    PubMed

    Patel, Dipak; Hossain, Md Shahriar Al; Qiu, Wenbin; Jie, Hyunseock; Yamauchi, Yusuke; Maeda, Minoru; Tomsic, Mike; Choi, Seyong; Kim, Jung Ho

    2017-03-02

    An efficient cooling system and the superconducting magnet are essential components of magnetic resonance imaging (MRI) technology. Herein, we report a solid nitrogen (SN2) cooling system as a valuable cryogenic feature, which is targeted for easy usability and stable operation under unreliable power source conditions, in conjunction with a magnesium diboride (MgB2) superconducting magnet. The rationally designed MgB2/SN2 cooling system was first considered by conducting a finite element analysis simulation, and then a demonstrator coil was empirically tested under the same conditions. In the SN2 cooling system design, a wide temperature distribution on the SN2 chamber was observed due to the low thermal conductivity of the stainless steel components. To overcome this temperature distribution, a copper flange was introduced to enhance the temperature uniformity of the SN2 chamber. In the coil testing, an operating current as high as 200 A was applied at 28 K (below the critical current) without any operating or thermal issues. This work was performed to further the development of SN2-cooled MgB2 superconducting coils for MRI applications.

  19. Solid cryogen: a cooling system for future MgB2 MRI magnet

    NASA Astrophysics Data System (ADS)

    Patel, Dipak; Hossain, Md Shahriar Al; Qiu, Wenbin; Jie, Hyunseock; Yamauchi, Yusuke; Maeda, Minoru; Tomsic, Mike; Choi, Seyong; Kim, Jung Ho

    2017-03-01

    An efficient cooling system and the superconducting magnet are essential components of magnetic resonance imaging (MRI) technology. Herein, we report a solid nitrogen (SN2) cooling system as a valuable cryogenic feature, which is targeted for easy usability and stable operation under unreliable power source conditions, in conjunction with a magnesium diboride (MgB2) superconducting magnet. The rationally designed MgB2/SN2 cooling system was first considered by conducting a finite element analysis simulation, and then a demonstrator coil was empirically tested under the same conditions. In the SN2 cooling system design, a wide temperature distribution on the SN2 chamber was observed due to the low thermal conductivity of the stainless steel components. To overcome this temperature distribution, a copper flange was introduced to enhance the temperature uniformity of the SN2 chamber. In the coil testing, an operating current as high as 200 A was applied at 28 K (below the critical current) without any operating or thermal issues. This work was performed to further the development of SN2 cooled MgB2 superconducting coils for MRI applications.

  20. Towards a regional coastal ocean observing system: An initial design for the Southeast Coastal Ocean Observing Regional Association

    NASA Astrophysics Data System (ADS)

    Seim, H. E.; Fletcher, M.; Mooers, C. N. K.; Nelson, J. R.; Weisberg, R. H.

    2009-05-01

    A conceptual design for a southeast United States regional coastal ocean observing system (RCOOS) is built upon a partnership between institutions of the region and among elements of the academic, government and private sectors. This design envisions support of a broad range of applications (e.g., marine operations, natural hazards, and ecosystem-based management) through the routine operation of predictive models that utilize the system observations to ensure their validity. A distributed information management system enables information flow, and a centralized information hub serves to aggregate information regionally and distribute it as needed. A variety of observing assets are needed to satisfy model requirements. An initial distribution of assets is proposed that recognizes the physical structure and forcing in the southeast U.S. coastal ocean. In-situ data collection includes moorings, profilers and gliders to provide 3D, time-dependent sampling, HF radar and surface drifters for synoptic sampling of surface currents, and satellite remote sensing of surface ocean properties. Nested model systems are required to properly represent ocean conditions from the outer edge of the EEZ to the watersheds. An effective RCOOS will depend upon a vital "National Backbone" (federally supported) system of in situ and satellite observations, model products, and data management. This dependence highlights the needs for a clear definition of the National Backbone components and a Concept of Operations (CONOPS) that defines the roles, functions and interactions of regional and federal components of the integrated system. A preliminary CONOPS is offered for the Southeast (SE) RCOOS. Thorough system testing is advocated using a combination of application-specific and process-oriented experiments. Estimates of costs and personnel required as initial components of the SE RCOOS are included. 
Initial thoughts on the Research and Development program required to support the RCOOS are also outlined.

  1. Intelligent Engine Systems: Thermal Management and Advanced Cooling

    NASA Technical Reports Server (NTRS)

    Bergholz, Robert

    2008-01-01

    The objective of the Advanced Turbine Cooling and Thermal Management program is to develop intelligent control and distribution methods for turbine cooling, while achieving a reduction in total cooling flow and assuring acceptable turbine component safety and reliability. The program also will develop embedded sensor technologies and cooling system models for real-time engine diagnostics and health management. Both active and passive control strategies will be investigated that include the capability of intelligent modulation of flow quantities, pressures, and temperatures both within the supply system and at the turbine component level. Thermal management system concepts were studied, with a goal of reducing HPT blade cooling air supply temperature. An assessment will be made of the use of this air by the active clearance control system as well. Turbine component cooling designs incorporating advanced, high-effectiveness cooling features, will be evaluated. Turbine cooling flow control concepts will be studied at the cooling system level and the component level. Specific cooling features or sub-elements of an advanced HPT blade cooling design will be downselected for core fabrication and casting demonstrations.

  2. NASA's Earth Observing Data and Information System - Near-Term Challenges

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Mitchell, Andrew; Ramapriyan, Hampapuram

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) has been a central component of the NASA Earth observation program since the 1990s. EOSDIS manages data covering a wide range of Earth science disciplines, including the cryosphere, land cover change, polar processes, field campaigns, ocean surface, digital elevation, atmosphere dynamics and composition, and interdisciplinary research, among many others. One of the key components of EOSDIS is a set of twelve discipline-based Distributed Active Archive Centers (DAACs) distributed across the United States. Managed by NASA's Earth Science Data and Information System (ESDIS) Project at Goddard Space Flight Center, these DAACs serve over 3 million users globally. The ESDIS Project provides the infrastructure support for EOSDIS, which includes other components such as the Science Investigator-led Processing Systems (SIPS), common metadata and metrics management systems, specialized network systems, standards management, and centralized support for use of commercial cloud capabilities. Given the long-term requirements, the rapid pace of information technology, and the changing expectations of the user community, EOSDIS has evolved continually over the past three decades. However, many challenges remain. Challenges addressed in this paper include: growing volume and variety, achieving consistency across a diverse set of data producers, managing information about a large number of datasets, migration to a cloud computing environment, optimizing data discovery and access, incorporating user feedback from a diverse community, keeping metadata updated as data collections grow and age, and ensuring that all the content needed by future users to understand datasets is identified and preserved.

  3. Combined laser heating and tandem acousto-optical filter for two-dimensional temperature distribution on the surface of the heated microobject

    NASA Astrophysics Data System (ADS)

    Bykov, A. A.; Kutuza, I. B.; Zinin, P. V.; Machikhin, A. S.; Troyan, I. A.; Bulatov, K. M.; Batshev, V. I.; Mantrova, Y. V.; Gaponov, M. I.; Prakapenka, V. B.; Sharma, S. K.

    2018-01-01

    Recently it has been shown that it is possible to measure the two-dimensional distribution of the surface temperature of microscopic specimens. The main component of the system is a tandem imaging acousto-optical tunable filter synchronized with a video camera. In this report, we demonstrate that combining a laser heating system with a tandem imaging acousto-optical tunable filter allows measurement of the temperature distribution of laser-heated platinum plates, as well as visualization of the infrared laser beam that is widely used for laser heating in diamond anvil cells.

  4. Development of Web-based Distributed Cooperative Development Environment of Sign-Language Animation System and its Evaluation

    NASA Astrophysics Data System (ADS)

    Yuizono, Takaya; Hara, Kousuke; Nakayama, Shigeru

    A web-based distributed cooperative development environment for a sign-language animation system has been developed. We extended the previous animation system, which was constructed as a three-tiered system consisting of a sign-language animation interface layer, a sign-language data processing layer, and a sign-language animation database. Two components, a web client using a VRML plug-in and a web servlet, were added to the previous system. The system supports a humanoid-model avatar for interoperability and can use the stored sign-language animation data shared in the database. The evaluation of the system shows that the inverse kinematics function of the web client improves sign-language animation making.

  5. 38 CFR Appendix A to Part 200 - Categorical Exclusions

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of equipment or components in AFRH-controlled facilities without change in location, e.g., HVAC, electrical distribution systems, windows, doors or roof. A.3(e) Disposal or other disposition of claimed or...

  6. 38 CFR Appendix A to Part 200 - Categorical Exclusions

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of equipment or components in AFRH-controlled facilities without change in location, e.g., HVAC, electrical distribution systems, windows, doors or roof. A.3(e) Disposal or other disposition of claimed or...

  7. 38 CFR Appendix A to Part 200 - Categorical Exclusions

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of equipment or components in AFRH-controlled facilities without change in location, e.g., HVAC, electrical distribution systems, windows, doors or roof. A.3(e) Disposal or other disposition of claimed or...

  8. 38 CFR Appendix A to Part 200 - Categorical Exclusions

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of equipment or components in AFRH-controlled facilities without change in location, e.g., HVAC, electrical distribution systems, windows, doors or roof. A.3(e) Disposal or other disposition of claimed or...

  9. Performance of mixed RF/FSO systems in exponentiated Weibull distributed channels

    NASA Astrophysics Data System (ADS)

    Zhao, Jing; Zhao, Shang-Hong; Zhao, Wei-Hu; Liu, Yun; Li, Xuan

    2017-12-01

    This paper presents the performance of an asymmetric mixed radio frequency (RF)/free-space optical (FSO) system with an amplify-and-forward relaying scheme. The RF channel undergoes Nakagami-m fading, and the exponentiated Weibull distribution is adopted for the FSO component. Mathematical formulas for the cumulative distribution function (CDF), probability density function (PDF), and moment generating function (MGF) of the equivalent signal-to-noise ratio (SNR) are derived. From the end-to-end statistical characteristics, new analytical expressions for the outage probability are obtained. Under various modulation techniques, we derive the average bit-error rate (BER) based on the Meijer G-function. Evaluations and simulations of the system performance are provided, and the aperture averaging effect is discussed as well.
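    The end-to-end statistics can also be checked numerically. Below is a minimal Monte Carlo sketch of the outage probability, with entirely illustrative fading parameters (the m, alpha, beta, eta values and average SNRs are placeholders, not values from the paper) and with the amplify-and-forward equivalent SNR approximated by the common min(γ_RF, γ_FSO) upper bound:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# RF hop: Nakagami-m fading -> instantaneous SNR is Gamma-distributed
# with shape m and mean gamma_rf_bar (illustrative values).
m, gamma_rf_bar = 2.0, 10.0
gamma_rf = rng.gamma(m, gamma_rf_bar / m, N)

# FSO hop: exponentiated Weibull irradiance, sampled by inverting the CDF
# F(x) = (1 - exp(-(x/eta)**beta))**alpha  (alpha, beta, eta illustrative).
alpha, beta, eta = 3.0, 1.5, 1.0
u = rng.uniform(size=N)
irradiance = eta * (-np.log1p(-u**(1.0 / alpha)))**(1.0 / beta)
gamma_fso_bar = 10.0
gamma_fso = gamma_fso_bar * irradiance**2   # intensity modulation, square-law detection

# Amplify-and-forward end-to-end SNR, upper-bounded by the weaker hop.
gamma_eq = np.minimum(gamma_rf, gamma_fso)

def outage(gamma_th):
    """Probability that the equivalent SNR falls below the threshold."""
    return np.mean(gamma_eq < gamma_th)

for th in (1.0, 5.0, 10.0):
    print(f"P_out(gamma_th={th:>4}) = {outage(th):.4f}")
```

    Replacing the min-bound with the exact AF relation γ₁γ₂/(γ₁+γ₂+1) changes only the `gamma_eq` line.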

  10. CHARACTERIZATION OF SEVEN ULTRA-WIDE TRANS-NEPTUNIAN BINARIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parker, Alex H.; Kavelaars, J. J.; Petit, Jean-Marc

    2011-12-10

    The low-inclination component of the Classical Kuiper Belt is host to a population of extremely widely separated binaries. These systems are similar to other trans-Neptunian binaries (TNBs) in that the primary and secondary components of each system are of roughly equal size. We have performed an astrometric monitoring campaign of a sample of seven wide-separation, long-period TNBs and present the first-ever well-characterized mutual orbits for each system. The sample contains the most eccentric (2006 CH69, e_m = 0.9) and the most widely separated, weakly bound (2001 QW322, a/R_H ≈ 0.22) binary minor planets known, and also contains the system with the lowest measured mass of any TNB (2000 CF105, M_sys ≈ 1.85 × 10^17 kg). Four systems orbit in a prograde sense, and three in a retrograde sense. They have a different mutual inclination distribution compared to all other TNBs, preferring low mutual-inclination orbits. These systems have geometric r-band albedos in the range of 0.09-0.3, consistent with radiometric albedo estimates for larger solitary low-inclination Classical Kuiper Belt objects, and we limit the plausible distribution of albedos in this region of the Kuiper Belt. We find that gravitational collapse binary formation models produce an orbital distribution similar to that currently observed, which along with a confluence of other factors supports formation of the cold Classical Kuiper Belt in situ through relatively rapid gravitational collapse rather than slow hierarchical accretion. We show that these binary systems are sensitive to disruption via collisions, and their existence suggests that the size distribution of TNOs at small sizes remains relatively shallow.

  11. A Framework System for Intelligent Support in Open Distributed Learning Environments--A Look Back from 16 Years Later

    ERIC Educational Resources Information Center

    Hoppe, H. Ulrich

    2016-01-01

    The 1998 paper by Martin Mühlenbrock, Frank Tewissen, and myself introduced a multi-agent architecture and a component engineering approach for building open distributed learning environments to support group learning in different types of classroom settings. It took up prior work on "multiple student modeling" as a method to configure…

  12. System for Measuring Conditional Amplitude, Phase, or Time Distributions of Pulsating Phenomena

    PubMed Central

    Van Brunt, Richard J.; Cernyar, Eric W.

    1992-01-01

    A detailed description is given of an electronic stochastic analyzer for use with direct “real-time” measurements of the conditional distributions needed for a complete stochastic characterization of pulsating phenomena that can be represented as random point processes. The measurement system described here is designed to reveal and quantify effects of pulse-to-pulse or phase-to-phase memory propagation. The unraveling of memory effects is required so that the physical basis for observed statistical properties of pulsating phenomena can be understood. The individual circuit components that comprise the system, and the combinations of these components used for various measurements, are thoroughly documented. The system has been applied to the measurement of pulsating partial discharges generated by applying alternating or constant voltage to a discharge gap. Examples are shown of data obtained for conditional and unconditional amplitude, time interval, and phase-of-occurrence distributions of partial-discharge pulses. The results unequivocally show the existence of significant memory effects as indicated, for example, by the observations that the most probable amplitudes and phases-of-occurrence of discharge pulses depend on the amplitudes and/or phases of the preceding pulses. Sources of error and fundamental limitations of the present measurement approach are analyzed. Possible extensions of the method are also discussed.
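    As a toy illustration of the memory effect the analyzer is designed to reveal, the sketch below generates a synthetic pulse train with pulse-to-pulse correlation and compares conditional amplitude averages; all parameters are invented for illustration and are unrelated to the measured discharges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic pulse train with pulse-to-pulse memory: each amplitude is
# correlated with its predecessor (AR(1) around a unit baseline).
n, rho = 50_000, 0.7
amps = np.empty(n)
amps[0] = 1.0
noise = rng.normal(0.0, 0.3, n)
for i in range(1, n):
    amps[i] = 1.0 + rho * (amps[i - 1] - 1.0) + noise[i]

prev, curr = amps[:-1], amps[1:]
median_prev = np.median(prev)

# Conditional means of the current amplitude given the preceding one:
# a difference between them is the signature of memory propagation.
mean_after_low  = curr[prev <  median_prev].mean()
mean_after_high = curr[prev >= median_prev].mean()
print(f"E[A_n | A_(n-1) low ] = {mean_after_low:.3f}")
print(f"E[A_n | A_(n-1) high] = {mean_after_high:.3f}")
```

    For a memoryless train the two conditional means coincide; the gap between them is what the conditional distributions quantify.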

  13. An integrated database with system optimization and design features

    NASA Technical Reports Server (NTRS)

    Arabyan, A.; Nikravesh, P. E.; Vincent, T. L.

    1992-01-01

    A customized, mission-specific relational database package was developed to allow researchers working on the Mars oxygen manufacturing plant to enter physical description, engineering, and connectivity data through a uniform, graphical interface and to store the data in formats compatible with other software also developed as part of the project. These latter components include an optimization program to maximize or minimize various criteria as the system evolves into its final design; programs to simulate the behavior of various parts of the plant in Martian conditions; an animation program which, in different modes, provides visual feedback to designers and researchers about the location of and temperature distribution among components as well as heat, mass, and data flow through the plant as it operates in different scenarios; and a control program to investigate the stability and response of the system under different disturbance conditions. All components of the system are interconnected so that changes entered through one component are reflected in the others.

  14. Accuracy improvement in laser stripe extraction for large-scale triangulation scanning measurement system

    NASA Astrophysics Data System (ADS)

    Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan

    2015-10-01

    Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction, based on image evaluation of Gaussian-fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, the Gaussian-fitting structural similarity is evaluated to provide a threshold value for center compensation. Then, using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method for center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
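    The method rests on the stripe's gray distribution being approximately Gaussian. A minimal sub-pixel center extraction can be sketched on a synthetic cross-section as follows; the log-parabola fit (Caruana's method) is a standard shortcut, not the paper's exact compensation scheme:

```python
import numpy as np

# Synthetic cross-section of a laser stripe: Gaussian profile plus noise.
rng = np.random.default_rng(2)
x = np.arange(64, dtype=float)
true_center, sigma = 30.4, 4.0
profile = np.exp(-0.5 * ((x - true_center) / sigma) ** 2) + rng.normal(0, 0.01, x.size)

# Gray-centroid estimate (fast, but biased by asymmetric noise/reflections).
w = np.clip(profile, 0, None)
centroid = (x * w).sum() / w.sum()

# Gaussian fit via a parabola on log-intensity (Caruana's method): for a
# Gaussian, ln I(x) is quadratic in x, so the vertex of the fitted
# parabola gives the sub-pixel stripe center.
mask = profile > 0.1 * profile.max()          # use only well-lit pixels
a, b, c_fit = np.polyfit(x[mask], np.log(profile[mask]), 2)
gauss_center = -b / (2 * a)

print(f"centroid estimate {centroid:.2f}, Gaussian-fit estimate {gauss_center:.2f}")
```

    On a real stripe, reflectivity and transmission effects distort the profile away from a clean Gaussian, which is precisely what the paper's similarity threshold and compensation step address.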

  15. Development of probabilistic internal dosimetry computer code

    NASA Astrophysics Data System (ADS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-02-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios in workplaces holding a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we implemented a probabilistic internal-dose-assessment code in MATLAB to estimate dose distributions from measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, 50th (median), 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose, in order to identify the major contributor to the uncertainty in a bioassay. The results of this study can be applied to various situations.
    In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
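    The Monte Carlo propagation step can be sketched in a few lines. The lognormal uncertainties below for the measurement, biokinetic, and dosimetric components are entirely hypothetical placeholders (the actual code uses MATLAB and validated uncertainty databases):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Hypothetical bioassay scenario: measured urine activity M [Bq],
# intake retention fraction m(t), and dose coefficient e [Sv/Bq],
# each sampled with an assumed lognormal uncertainty.
M  = rng.lognormal(mean=np.log(50.0),   sigma=0.15, size=N)   # measurement error
mt = rng.lognormal(mean=np.log(2e-3),   sigma=0.30, size=N)   # biokinetic model
e  = rng.lognormal(mean=np.log(1.1e-8), sigma=0.20, size=N)   # dosimetric model

intake = M / mt                 # intake activity [Bq]
dose_mSv = intake * e * 1e3     # committed effective dose [mSv]

# Report the same summary statistics as in the study.
pct = np.percentile(dose_mSv, [2.5, 5, 50, 95, 97.5])
for p, v in zip((2.5, 5, 50, 95, 97.5), pct):
    print(f"{p:>5}th percentile: {v:.3f} mSv")
```

    A distribution-based sensitivity analysis then amounts to repeating the propagation with one component's sigma set to zero and comparing the spread of the resulting dose distributions.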

  16. Evaluation and Application of Overvoltage into Communication Equipment Due to Potential Rise at Earthing Terminal of Distribution Line Induced by Lightning Surge

    NASA Astrophysics Data System (ADS)

    Ito, Katsuji; Hirose, Yasuo

    Overvoltages induced by lightning surge currents cause harmful breakdowns of CATV communication equipment installed in and alongside power distribution systems. In this paper, the origin and nature of surge currents, their routes of invasion into the system, and the system components, such as earthing impedances, that affect overvoltages are studied. Transient analyses are then performed using an equivalent circuit to evaluate the overvoltages. Application of the obtained results to field fault data for communication equipment, and possible methods for protecting the equipment, are discussed.

  17. NASA's NPOESS Preparatory Project Science Data Segment: A Framework for Measurement-based Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Schwaller, Mathew R.; Schweiss, Robert J.

    2007-01-01

    The NPOESS Preparatory Project (NPP) Science Data Segment (SDS) provides a framework for the future of NASA's distributed Earth science data systems. The NPP SDS performs research and data product assessment while using a fully distributed architecture. The components of this architecture are organized around key environmental data disciplines: land, ocean, ozone, atmospheric sounding, and atmospheric composition. The SDS thus establishes a set of concepts and working prototypes. This paper describes the framework used by the NPP Project as it enabled Measurement-Based Earth Science Data Systems for the assessment of NPP products.

  18. Vulnerability and cosusceptibility determine the size of network cascades

    DOE PAGES

    Yang, Yang; Nishikawa, Takashi; Motter, Adilson E.

    2017-01-27

    In a network, a local disturbance can propagate and eventually cause a substantial part of the system to fail in cascade events that are easy to conceptualize but extraordinarily difficult to predict. Here, we develop a statistical framework that can predict cascade size distributions by incorporating two ingredients only: the vulnerability of individual components and the cosusceptibility of groups of components (i.e., their tendency to fail together). Using cascades in power grids as a representative example, we show that correlations between component failures define structured and often surprisingly large groups of cosusceptible components. Aside from their implications for blackout studies, these results provide insights and a new modeling framework for understanding cascades in financial systems, food webs, and complex networks in general.
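    The two ingredients can be illustrated with a toy common-shock model, in which each component has an individual vulnerability and groups of cosusceptible components can be knocked out together by a shared shock. All parameters below are invented, and the model is far simpler than the paper's framework:

```python
import numpy as np

rng = np.random.default_rng(4)
n_comp, n_trials = 100, 20_000

# Vulnerability: each component's individual failure probability.
p_ind = rng.uniform(0.0, 0.02, n_comp)

# Cosusceptibility: components are partitioned into groups that fail
# together when a group-wide shock occurs.
n_groups = 10
group = rng.integers(0, n_groups, n_comp)
p_shock = 0.01                      # probability of a group-wide shock

sizes = np.empty(n_trials, dtype=int)
for t in range(n_trials):
    failed = rng.random(n_comp) < p_ind            # independent failures
    shocked = rng.random(n_groups) < p_shock       # group shocks
    failed |= shocked[group]                       # cosusceptible failures
    sizes[t] = failed.sum()

# Empirical cascade-size distribution: correlated failures fatten the
# tail relative to a purely independent (binomial-like) model.
print(f"mean size = {sizes.mean():.2f}, P(size >= 10) = {(sizes >= 10).mean():.4f}")
```

    Setting `p_shock = 0` recovers the independent-failure baseline, against which the tail-fattening effect of cosusceptibility can be measured.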

  19. Overcoming Communication Restrictions in Collectives

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2004-01-01

    Many large distributed systems are characterized by having a large number of components (e.g., agents, neurons) whose actions and interactions determine a world utility which rates the performance of the overall system. Such collectives are often subject to communication restrictions, making it difficult for components, which try to optimize their own private utilities, to take actions that also help optimize the world utility. In this article we address that coordination problem and derive four utility functions which present different compromises between how aligned a component's private utility is with the world utility and how readily that component can determine the actions that optimize its utility. The results show that the utility functions specifically derived to operate under communication restrictions outperform both traditional methods and previous collective-based methods by up to 75%.

  20. Development and Validation of a Lifecycle-based Prognostics Architecture with Test Bed Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hines, J. Wesley; Upadhyaya, Belle; Sharp, Michael

    On-line monitoring and tracking of nuclear plant system and component degradation is being investigated as a method for improving the safety, reliability, and maintainability of aging nuclear power plants. Accurate prediction of the current degradation state of system components and structures is important for accurate estimates of their remaining useful life (RUL). The correct quantification and propagation of both the measurement uncertainty and model uncertainty is necessary for quantifying the uncertainty of the RUL prediction. This research project developed and validated methods to perform RUL estimation throughout the lifecycle of plant components. Prognostic methods should seamlessly operate from beginning of component life (BOL) to end of component life (EOL). We term this "Lifecycle Prognostics." When a component is put into use, the only information available may be past failure times of similar components used in similar conditions, and the predicted failure distribution can be estimated with reliability methods such as Weibull Analysis (Type I Prognostics). As the component operates, it begins to degrade and consume its available life. This life consumption may be a function of system stresses, and the failure distribution should be updated to account for the system operational stress levels (Type II Prognostics). When degradation becomes apparent, this information can be used to again improve the RUL estimate (Type III Prognostics). This research focused on developing prognostics algorithms for the three types of prognostics, developing uncertainty quantification methods for each of the algorithms, and, most importantly, developing a framework using Bayesian methods to transition between prognostic model types and update failure distribution estimates as new information becomes available. The developed methods were then validated on a range of accelerated degradation test beds.
The ultimate goal of prognostics is to provide an accurate assessment for RUL predictions, with as little uncertainty as possible. From a reliability and maintenance standpoint, there would be improved safety by avoiding all failures. Calculated risk would decrease, saving money by avoiding unnecessary maintenance. One major bottleneck for data-driven prognostics is the availability of run-to-failure degradation data. Without enough degradation data leading to failure, prognostic models can yield RUL distributions with large uncertainty or mathematically unsound predictions. To address these issues a "Lifecycle Prognostics" method was developed to create RUL distributions from Beginning of Life (BOL) to End of Life (EOL). This employs established Type I, II, and III prognostic methods, and Bayesian transitioning between each Type. Bayesian methods, as opposed to classical frequency statistics, show how an expected value, a priori, changes with new data to form a posterior distribution. For example, when you purchase a component you have a prior belief, or estimation, of how long it will operate before failing. As you operate it, you may collect information related to its condition that will allow you to update your estimated failure time. Bayesian methods are best used when limited data are available. The use of a prior also means that information is conserved when new data are available. The weightings of the prior belief and information contained in the sampled data are dependent on the variance (uncertainty) of the prior, the variance (uncertainty) of the data, and the amount of measured data (number of samples). If the variance of the prior is small compared to the uncertainty of the data, the prior will be weighed more heavily. However, as more data are collected, the data will be weighted more heavily and will eventually swamp out the prior in calculating the posterior distribution of model parameters. 
    Fundamentally, Bayesian analysis updates a prior belief with new data to get a posterior belief. The general approach to applying the Bayesian method to lifecycle prognostics consisted of identifying the prior, which is the RUL estimate and uncertainty from the previous prognostics type, and combining it with observational data related to the newer prognostics type. The resulting lifecycle prognostics algorithm uses all available information throughout the component lifecycle.
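    The precision-weighted behavior described above can be made concrete with the standard normal-normal conjugate update: the posterior mean is a weighted average of the prior mean and the observed data, with weights given by the respective precisions, so the data progressively swamp out the prior. The numbers below are purely illustrative:

```python
import numpy as np

def update_failure_time(prior_mean, prior_var, data, data_var):
    """Normal-normal conjugate update of a failure-time estimate.

    The posterior precision is the sum of the prior precision and the
    data precision, so the posterior variance always shrinks, and the
    posterior mean is the precision-weighted average of prior and data.
    """
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / data_var)
    post_mean = post_var * (prior_mean / prior_var + np.sum(data) / data_var)
    return post_mean, post_var

# Type I prior from fleet reliability data (illustrative numbers, hours):
prior_mean, prior_var = 1000.0, 200.0**2

# Degradation-based failure-time estimates observed as the unit operates:
observed = np.array([850.0, 870.0, 820.0])   # hours, hypothetical
post_mean, post_var = update_failure_time(prior_mean, prior_var, observed, 100.0**2)

print(f"posterior mean = {post_mean:.1f} h, posterior sd = {post_var**0.5:.1f} h")
```

    With few samples the prior dominates; as more degradation data arrive, the data term's precision grows linearly in n and the posterior converges toward the sample mean, mirroring the Type I to Type III transition.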

  1. Time-of-flight expansion of binary Bose–Einstein condensates at finite temperature

    NASA Astrophysics Data System (ADS)

    Lee, K. L.; Jørgensen, N. B.; Wacker, L. J.; Skou, M. G.; Skalmstang, K. T.; Arlt, J. J.; Proukakis, N. P.

    2018-05-01

    Ultracold quantum gases provide a unique setting for studying and understanding the properties of interacting quantum systems. Here, we investigate a multi-component system of 87Rb–39K Bose–Einstein condensates (BECs) with tunable interactions both theoretically and experimentally. Such multi-component systems can be characterized by their miscibility, where miscible components lead to a mixed ground state and immiscible components form a phase-separated state. We perform the first full simulation of the dynamical expansion of this system including both BECs and thermal clouds, which allows for a detailed comparison with experimental results. In particular we show that striking features emerge in time-of-flight (TOF) for BECs with strong interspecies repulsion, even for systems which were separated in situ by a large gravitational sag. An analysis of the centre-of-mass positions of the BECs after expansion yields qualitative agreement with the homogeneous criterion for phase separation, but reveals no clear transition point between the mixed and the separated phases. Instead one can identify a transition region, for which the presence of a gravitational sag is found to be advantageous. Moreover, we analyse the situation where only one component is condensed and show that the density distribution of the thermal component also shows some distinct features. Our work sheds new light on the analysis of multi-component systems after TOF and will guide future experiments on the detection of miscibility in these systems.
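    For reference, the homogeneous phase-separation criterion mentioned above is conventionally written as follows (this is the standard textbook form for a two-component condensate, not a result specific to this paper):

```latex
% Homogeneous miscibility criterion for a two-component condensate:
% with intraspecies couplings g_{11}, g_{22} and interspecies coupling
% g_{12}, the mixture phase-separates when
\[
  g_{12} > \sqrt{g_{11}\, g_{22}}, \qquad
  g_{ij} = \frac{2\pi\hbar^{2} a_{ij}}{\mu_{ij}},
\]
% where a_{ij} are the s-wave scattering lengths and
% \mu_{ij} = m_i m_j / (m_i + m_j) are the reduced masses.
```

    The paper's observation is that in a trapped, expanding system this sharp inequality softens into a transition region rather than a clean threshold.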

  2. Storage resource manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perelmutov, T.; Bakken, J.; Petravick, D.

    Storage Resource Managers (SRMs) are middleware components whose function is to provide dynamic space allocation and file management on shared storage components on the Grid [1,2]. SRMs support protocol negotiation and a reliable replication mechanism. The SRM standard supports independent SRM implementations, allowing uniform access to heterogeneous storage elements. SRMs allow site-specific policies at each location. Resource reservations made through SRMs have limited lifetimes and allow for automatic collection of unused resources, thus preventing the clogging of storage systems with "orphan" files. At Fermilab, data handling systems use the SRM management interface to the dCache Distributed Disk Cache [5,6] and the Enstore Tape Storage System [15] as key components to satisfy current and future user requests [4]. The SAM project offers the SRM interface for its internal caches as well.

  3. [Development and application of a multi-species water quality model for water distribution systems with EPANET-MSX].

    PubMed

    Sun, Fu; Chen, Ji-ning; Zeng, Si-yu

    2008-12-01

    A conceptual multi-species water quality model for water distribution systems was developed on the basis of the toolkit of the EPANET-MSX software. The model divides the pipe segment into four compartments: pipe wall, biofilm, boundary layer, and bulk liquid. The processes involved are substrate utilization and microbial growth, decay and inactivation of microorganisms, mass transfer of soluble components through the boundary layer, adsorption and desorption of particulate components between the bulk liquid and the biofilm, oxidation and halogenation of organic matter by residual chlorine, and chlorine consumption by the pipe wall. The fifteen simulated variables include the seven variables common to both the biofilm and the bulk liquid, i.e., soluble organic matter, particulate organic matter, ammonia nitrogen, residual chlorine, heterotrophic bacteria, autotrophic bacteria, and inert solids, as well as the biofilm thickness on the pipe wall. The model was validated against data from a series of pilot experiments, and the simulation accuracies for residual chlorine and turbidity were 0.1 mg/L and 0.3 NTU, respectively. A case study showed that the model can reasonably reflect the dynamic variation of residual chlorine and turbidity in the studied water distribution system, while Monte Carlo simulation, taking into account both the variability of the finished water from the waterworks and the uncertainties of the model parameters, can be performed to assess the risk of water quality violations in the distribution system.
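    As a much-simplified illustration of one of the modeled processes, the sketch below propagates first-order bulk and wall chlorine decay along a single pipe. The coefficients are placeholders, and the full EPANET-MSX model couples many more species and compartments:

```python
import numpy as np

def chlorine_profile(c0, k_bulk, k_wall, radius, velocity, length, n=50):
    """Residual chlorine [mg/L] at n points along a pipe.

    First-order bulk decay plus a wall-demand term that scales with
    the pipe's surface-to-volume ratio (2/r for a full circular pipe).
    """
    k = k_bulk + 2.0 * k_wall / radius          # overall decay rate [1/s]
    x = np.linspace(0.0, length, n)             # positions along the pipe [m]
    t = x / velocity                            # travel time to each point [s]
    return x, c0 * np.exp(-k * t)

# Placeholder values: 1.0 mg/L at the inlet, 0.3 m pipe, 0.5 m/s flow, 2 km run.
x, c = chlorine_profile(c0=1.0, k_bulk=1e-5, k_wall=5e-6, radius=0.15,
                        velocity=0.5, length=2000.0)
print(f"inlet {c[0]:.2f} mg/L -> outlet {c[-1]:.2f} mg/L")
```

    In the multi-species setting, this exponential term becomes one reaction among many solved simultaneously with microbial growth and mass transfer across the boundary layer.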

  4. Capture of carbon dioxide by hybrid sorption

    DOEpatents

    Srinivasachar, Srivats

    2014-09-23

    A composition, process and system for capturing carbon dioxide from a combustion gas stream. The composition has a particulate porous support medium that has a high volume of pores, an alkaline component distributed within the pores and on the surface of the support medium, and water adsorbed on the alkaline component, wherein the proportion of water in the composition is between about 5% and about 35% by weight of the composition. The process and system contemplate contacting the sorbent and the flowing gas stream together at a temperature and for a time such that some water remains adsorbed in the alkaline component when the contact of the sorbent with the flowing gas ceases.

  5. Impacts of Outer Continental Shelf (OCS) development on recreation and tourism. Volume 5. Program logic manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    The final report for the project is presented in five volumes. This volume is the Programmer's Manual. It covers: a system overview, attractiveness component of gravity model, trip-distribution component of gravity model, economic-effects model, and the consumer-surplus model. The project sought to determine the impact of Outer Continental Shelf development on recreation and tourism.

  6. Modeling, Simulation, and Analysis of a Decoy State Enabled Quantum Key Distribution System

    DTIC Science & Technology

    2015-03-26

    through the fiber, we assume Alice and Bob have correct basis alignment and timing control for reference frame correction and precise photon detection...optical components (laser, polarization modulator, electronic variable optical attenuator, fixed optical attenuator, fiber channel, beamsplitter...generated by the laser in the CPG propagate through multiple optical components, each with a unique propagation delay before reaching the OPM. Timing

  7. Communication and control in an integrated manufacturing system

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Throne, Robert D.; Muthuswamy, Yogesh K.

    1987-01-01

    Typically, components in a manufacturing system are all centrally controlled. Due to possible communication bottlenecking, unreliability, and inflexibility caused by using a centralized controller, a new concept of system integration called an Integrated Multi-Robot System (IMRS) was developed. The IMRS can be viewed as a distributed real time system. Some of the current research issues being examined to extend the framework of the IMRS to meet its performance goals are presented. These issues include the use of communication coprocessors to enhance performance, the distribution of tasks and the methods of providing fault tolerance in the IMRS. An application example of real time collision detection, as it relates to the IMRS concept, is also presented and discussed.

  8. Availability Estimation for Facilities in Extreme Geographical Locations

    NASA Technical Reports Server (NTRS)

    Fischer, Gerd M.; Omotoso, Oluseun; Chen, Guangming; Evans, John W.

    2012-01-01

    A value-added analysis of the Reliability, Availability and Maintainability of McMurdo Ground Station was developed, which will be a useful tool for system managers in sparing, maintenance planning, and determining vital performance metrics needed for readiness assessment of the upgrades to the McMurdo System. The output of this study can also be used as input and recommendations for the application of Reliability Centered Maintenance (RCM) for the system. ReliaSoft's BlockSim, a commercial reliability analysis software package, has been used to model the availability of the system upgrade to the National Aeronautics and Space Administration (NASA) Near Earth Network (NEN) Ground Station at McMurdo Station in Antarctica. The logistics challenges due to the closure of access to McMurdo Station during the Antarctic winter were modeled using a weighted composite of four Weibull distributions, one of the possible choices for statistical distributions in the software and usually used to account for failure rates of components supplied by different manufacturers. The inaccessibility of the antenna site on a hill outside McMurdo Station throughout the year due to severe weather was modeled with a Weibull distribution for the repair crew availability. This Weibull distribution is based on an analysis of the available weather data for the antenna site for 2007, in combination with the rules for travel restrictions due to severe weather imposed by the administering agency, the National Science Foundation (NSF). The simulations resulted in an upper bound for the system availability and allowed for the identification of components that would improve availability given a higher on-site spare count than initially planned.
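    The weighted-composite construction can be sketched as follows; the weights and Weibull parameters below are placeholders for illustration, since the study's fitted values are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

# Weighted composite of four Weibull distributions, the construction used
# in the study to represent access/repair delays (all parameters here are
# illustrative placeholders, not the study's fitted values).
weights = np.array([0.4, 0.3, 0.2, 0.1])        # mixture weights, sum to 1
shapes  = np.array([1.2, 2.0, 0.8, 3.0])        # Weibull shape k
scales  = np.array([24.0, 72.0, 8.0, 240.0])    # Weibull scale lambda, hours

def sample_repair_delay(n):
    """Draw repair/logistics delays [h] from the Weibull mixture."""
    comp = rng.choice(4, size=n, p=weights)      # pick a mixture component
    return scales[comp] * rng.weibull(shapes[comp], size=n)

delays = sample_repair_delay(100_000)
print(f"mean delay {delays.mean():.1f} h, "
      f"95th percentile {np.percentile(delays, 95):.1f} h")
```

    Feeding such samples into an availability simulation (as BlockSim does internally) yields the distribution of downtime, from which the upper bound on system availability follows.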

  9. Rural telemedicine project in northern New Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zink, S.; Hahn, H.; Rudnick, J.

    A virtual electronic medical record system is being deployed securely over the Internet in northern New Mexico using TeleMed, a multimedia medical records management system that uses CORBA-based client-server technology and distributed database architecture. The goal of the NNM Rural Telemedicine Project is to implement TeleMed in fifteen rural clinics and two hospitals within a 25,000 square mile area of northern New Mexico. Evaluation of the project consists of three components: job task analysis, audit of immunized children, and time motion studies. Preliminary results of the evaluation components are presented.

  10. Evaluation of solution stability for two-component polydisperse systems by small-angle scattering

    NASA Astrophysics Data System (ADS)

    Kryukova, A. E.; Konarev, P. V.; Volkov, V. V.

    2017-12-01

    The article is devoted to the modelling of small-angle scattering data using the program MIXTURE, designed for the study of polydisperse multicomponent mixtures. In this work we present the results of solution stability studies for theoretical small-angle scattering data sets from two-component models. It was demonstrated that the addition of noise to the data influences the stability range of the restored structural parameters. Recommendations are given for optimal minimization schemes that allow the volume size distributions of polydisperse systems to be restored.

  11. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  12. Physical and geometrical parameters of VCBS XIII: HIP 105947

    NASA Astrophysics Data System (ADS)

    Gumaan Masda, Suhail; Al-Wardat, Mashhoor Ahmed; Pathan, Jiyaulla Khan Moula Khan

    2018-06-01

    The best physical and geometrical parameters of the main sequence close visual binary system (CVBS) HIP 105947 are presented. These parameters have been constructed conclusively using Al-Wardat’s complex method for analyzing CVBSs, which constructs a synthetic spectral energy distribution (SED) for the entire binary system from individual SEDs of each component star. The model atmospheres are, in turn, built using the Kurucz (ATLAS9) line-blanketed plane-parallel models. At the same time, the orbital parameters for the system are calculated using Tokovinin’s dynamical method for constructing the best orbits of an interferometric binary system. Moreover, the mass-sum of the components and the Δθ and Δρ residuals for the system are presented. The combination of Al-Wardat’s and Tokovinin’s methods yields the best estimates of the physical and geometrical parameters. The positions of the components in the system on the evolutionary tracks and isochrones are plotted, and the formation and evolution of the system are discussed.

  13. Evaluation of Microcomputer-Based Operation and Maintenance Management Systems for Army Water/Wastewater Treatment Plant Operation.

    DTIC Science & Technology

    1986-07-01

    (Fragmentary excerpt from the report's table of contents) Computer-Aided Operation Management System ... Functions of an Off-Line Computer-Aided Operation Management System ... Applications of the System ... System Comparisons ... Hardware Components ... Basic Functions of a Computer-Aided Operation Management System ... Plant Visits ... Computer-Aided Operation Management Systems Reviewed for Analysis of Basic Functions ... Progress of Software System Installation and ...

  14. Operation of remote mobile sensors for security of drinking water distribution systems.

    PubMed

    Perelman, Lina; Ostfeld, Avi

    2013-09-01

    The deployment of fixed online water quality sensors in water distribution systems has been recognized as one of the key components of contamination warning systems for securing public health. This study proposes to explore how the inclusion of mobile sensors for inline monitoring of various water quality parameters (e.g., residual chlorine, pH) can enhance water distribution system security. Mobile sensors equipped with sampling, sensing, data acquisition, wireless transmission and power generation systems are being designed, fabricated, and tested, and prototypes are expected to be released in the very near future. This study initiates the development of a theoretical framework for modeling mobile sensor movement in water distribution systems and integrating the sensory data collected from stationary and non-stationary sensor nodes to increase system security. The methodology is applied and demonstrated on two benchmark networks. The performance of different sensor network designs is compared for fixed and combined fixed and mobile sensor networks. Results indicate that complementing online sensor networks with inline monitoring can increase detection likelihood and decrease mean time to detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
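
The two headline metrics here, detection likelihood and mean time to detection, can be computed directly from simulated scenario outcomes. A minimal sketch with hypothetical detection times (not the paper's benchmark results), where an undetected scenario is encoded as infinity:

```python
import math

def detection_metrics(detect_times):
    """Detection likelihood and mean time-to-detection over scenarios.
    Undetected scenarios are encoded as math.inf."""
    detected = [t for t in detect_times if math.isfinite(t)]
    likelihood = len(detected) / len(detect_times)
    mttd = sum(detected) / len(detected) if detected else math.inf
    return likelihood, mttd

# Hypothetical detection times (minutes) for the same six contamination
# scenarios under two sensor network designs.
fixed_only   = [35.0, math.inf, 60.0, math.inf, 45.0, 80.0]
fixed_mobile = [35.0, 50.0,     40.0, math.inf, 30.0, 55.0]

lik_f, mttd_f = detection_metrics(fixed_only)
lik_m, mttd_m = detection_metrics(fixed_mobile)
```

With these invented numbers, adding mobile sensors raises the detection likelihood and lowers the mean time to detection, which is the qualitative result the study reports.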

  15. Using Ada to implement the operations management system in a community of experts

    NASA Technical Reports Server (NTRS)

    Frank, M. S.

    1986-01-01

    An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge based systems technology. Part of the foundation of an Ada based distributed expert system was accomplished in the form of a proof of concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.

  16. Document for 270 Voltage Direct Current (270 V dc) System

    NASA Astrophysics Data System (ADS)

    1992-09-01

    The paper presents the technical design and application information established by the SAE Aerospace Recommended Practice concerning the generation, distribution, control, and utilization of aircraft 270 V dc electrical power systems and support equipment. Also presented are references and definitions making it possible to compare various electrical systems and components. A diagram of a generic 270 V dc high-voltage direct-current system is included.

  17. XML Technology Assessment

    DTIC Science & Technology

    2001-01-01

    (Fragmentary excerpt) ... System (GCCS) Track Database Management System (TDBM) ... GCCS Integrated Imagery and Intelligence ... Intelligence Shared Data Server (ISDS) ... The CTH is a powerful model that will allow more than just message systems to exchange information. It could be used for object-oriented databases ... of the Naval Integrated Tactical Environmental System I (NITES I) is used as a case study to demonstrate the utility of this distributed component

  18. A framework for conducting mechanistic based reliability assessments of components operating in complex systems

    NASA Astrophysics Data System (ADS)

    Wallace, Jon Michael

    2003-10-01

    Reliability prediction of components operating in complex systems has historically been conducted in a statistically isolated manner. Current physics-based, i.e. mechanistic, component reliability approaches focus more on component-specific attributes and mathematical algorithms and not enough on the influence of the system. The result is that significant error can be introduced into the component reliability assessment process. The objective of this study is the development of a framework that infuses the needs and influence of the system into the process of conducting mechanistic-based component reliability assessments. The formulated framework consists of six primary steps. The first three steps, identification, decomposition, and synthesis, are primarily qualitative in nature and employ system reliability and safety engineering principles to construct an appropriate starting point for the component reliability assessment. The following two steps are the most unique. They involve a step to efficiently characterize and quantify the system-driven local parameter space and a subsequent step using this information to guide the reduction of the component parameter space. The local statistical space quantification step is accomplished using two proposed multivariate probability models: Multi-Response First Order Second Moment and Taylor-Based Inverse Transformation. Where existing joint probability models require preliminary distribution and correlation information of the responses, these models combine statistical information of the input parameters with an efficient sampling of the response analyses to produce the multi-response joint probability distribution. Parameter space reduction is accomplished using Approximate Canonical Correlation Analysis (ACCA) employed as a multi-response screening technique. 
The novelty of this approach is that each individual local parameter and even subsets of parameters representing entire contributing analyses can now be rank ordered with respect to their contribution to not just one response, but the entire vector of component responses simultaneously. The final step of the framework is the actual probabilistic assessment of the component. Although the same multivariate probability tools employed in the characterization step can be used for the component probability assessment, variations of this final step are given to allow for the utilization of existing probabilistic methods such as response surface Monte Carlo and Fast Probability Integration. The overall framework developed in this study is implemented to assess the finite-element based reliability prediction of a gas turbine airfoil involving several failure responses. Results of this implementation are compared to results generated using the conventional 'isolated' approach as well as a validation approach conducted through large sample Monte Carlo simulations. The framework resulted in a considerable improvement to the accuracy of the part reliability assessment and an improved understanding of the component failure behavior. Considerable statistical complexity in the form of joint non-normal behavior was found and accounted for using the framework. Future applications of the framework elements are discussed.
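
The core idea of a first-order second-moment multi-response model, propagating input means and standard deviations through several response functions at once to obtain their joint covariance, can be sketched as follows. The response functions and numbers are illustrative stand-ins, not the study's airfoil analyses or its exact Multi-Response FOSM formulation:

```python
# Minimal multi-response first-order second-moment (FOSM) sketch: linearize
# each response about the input means with finite differences, then combine
# gradients with input variances (inputs assumed independent).
def fosm(responses, mu, sigma, h=1e-6):
    g0 = [g(mu) for g in responses]
    n, m = len(mu), len(responses)
    grad = [[0.0] * n for _ in range(m)]
    for i in range(n):
        xp = list(mu)
        xp[i] += h
        for j, g in enumerate(responses):
            grad[j][i] = (g(xp) - g0[j]) / h   # dg_j/dx_i
    # Joint second moments: cov[j][k] = sum_i dgj/dxi * dgk/dxi * sigma_i^2
    cov = [[sum(grad[j][i] * grad[k][i] * sigma[i] ** 2 for i in range(n))
            for k in range(m)] for j in range(m)]
    return g0, cov

stress = lambda x: 2.0 * x[0] + x[1] ** 2     # illustrative responses
deflect = lambda x: x[0] * x[1]
means, cov = fosm([stress, deflect], mu=[1.0, 3.0], sigma=[0.1, 0.2])
```

The off-diagonal entries of `cov` are what a single-response treatment discards: they capture how the responses co-vary through shared inputs, which is the information the framework uses to rank parameters against the whole response vector.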

  19. Real-time detection of organic contamination events in water distribution systems by principal components analysis of ultraviolet spectral data.

    PubMed

    Zhang, Jian; Hou, Dibo; Wang, Ke; Huang, Pingjie; Zhang, Guangxin; Loáiciga, Hugo

    2017-05-01

    The detection of organic contaminants in water distribution systems is essential to protect public health from potentially harmful compounds resulting from accidental spills or intentional releases. Existing methods for detecting organic contaminants are based on quantitative analyses such as chemical testing and gas/liquid chromatography, which are time- and reagent-consuming and involve costly maintenance. This study proposes a novel procedure based on discrete wavelet transform and principal component analysis for detecting organic contamination events from ultraviolet spectral data. Firstly, the spectrum of each observation is transformed using a discrete wavelet with a coiflet mother wavelet to capture abrupt changes along the wavelength. Principal component analysis is then employed to approximate the spectra based on captured and fused features. The significance value of Hotelling's T² statistic is calculated and used to detect outliers. An alarm of a contamination event is triggered by sequential Bayesian analysis when outliers appear continuously in several observations. The effectiveness of the proposed procedure is tested on-line using a pilot-scale setup and experimental data.
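
The PCA/Hotelling-T² core of such a detector can be sketched with synthetic spectra. The baseline shape, noise levels, and contamination signature below are invented for illustration; only the mechanics (mean-centering, SVD-based PCA, T² in the retained subspace) follow the general method, and the wavelet and sequential-Bayesian stages are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
n_wave = 50
feature = np.exp(-np.arange(n_wave) / 5.0)        # smooth spectral feature
base_curve = np.linspace(1.0, 0.2, n_wave)        # hypothetical clean baseline

# Training set: baseline spectra with natural variation plus sensor noise.
amps = rng.normal(0.0, 0.02, size=(200, 1))
baseline = base_curve + amps * feature + rng.normal(0.0, 0.005, (200, n_wave))

mean = baseline.mean(axis=0)
U, s, Vt = np.linalg.svd(baseline - mean, full_matrices=False)
k = 3
P = Vt[:k].T                                # retained PCA loadings
lam = s[:k] ** 2 / (len(baseline) - 1)      # variances of retained components

def t2(spectrum):
    """Hotelling's T^2 of one spectrum in the retained PCA subspace."""
    score = (spectrum - mean) @ P
    return float(np.sum(score ** 2 / lam))

clean = baseline[0]
contaminated = clean + 0.15 * feature       # added UV-absorbing contaminant
```

A contaminated spectrum scores a much larger T² than a clean one; in the full procedure this statistic is compared against a control limit, and an alarm is only raised after several consecutive outliers.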

  20. A Component-based Programming Model for Composite, Distributed Applications

    NASA Technical Reports Server (NTRS)

    Eidson, Thomas M.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The nature of scientific programming is evolving to larger, composite applications that are composed of smaller element applications. These composite applications are more frequently being targeted for distributed, heterogeneous networks of computers. They are most likely programmed by a group of developers. Software component technology and computational frameworks are being proposed and developed to meet the programming requirements of these new applications. Historically, programming systems have had a hard time being accepted by the scientific programming community. In this paper, a programming model is outlined that attempts to organize the software component concepts and fundamental programming entities into programming abstractions that will be better understood by the application developers. The programming model is designed to support computational frameworks that manage many of the tedious programming details, but also that allow sufficient programmer control to design an accurate, high-performance application.

  1. The application of high temperature superconductors to space electrical power distribution components

    NASA Technical Reports Server (NTRS)

    Aron, Paul R.; Myers, Ira T.

    1988-01-01

    Some important space based electrical power distribution systems and components are examined to determine what might be achieved with the introduction of high temperature superconductors (HTS). Components that are compared in a before-and-after fashion include transformers, transmission lines, and capacitors. It is concluded that HTS has its greatest effect on the weight associated with transmission lines, where the weight penalty could be reduced by as much as 130 kg/kW/km of cable. Transformers, because 28 percent of their mass is in the conductor, are reduced in weight by the same factor. Capacitors are helped the least with only negligible savings possible. Finally, because HTS can relax the requirement to use alternating current in order to reduce conductor mass, it will be possible to generate significant savings by eliminating most transformers and capacitors.

  3. Quantitative simulation of extraterrestrial engineering devices

    NASA Technical Reports Server (NTRS)

    Arabyan, A.; Nikravesh, P. E.; Vincent, T. L.

    1991-01-01

    This is a multicomponent, multidisciplinary project whose overall objective is to build an integrated database, simulation, visualization, and optimization system for the proposed oxygen manufacturing plant on Mars. Specifically, the system allows users to enter physical description, engineering, and connectivity data through a uniform, user-friendly interface and stores the data in formats compatible with other software also developed as part of this project. These latter components include: (1) programs to simulate the behavior of various parts of the plant in Martian conditions; (2) an animation program which, in different modes, provides visual feedback to designers and researchers about the location of and temperature distribution among components as well as heat, mass, and data flow through the plant as it operates in different scenarios; (3) a control program to investigate the stability and response of the system under different disturbance conditions; and (4) an optimization program to maximize or minimize various criteria as the system evolves into its final design. All components of the system are interconnected so that changes entered through one component are reflected in the others.

  4. Detecting fission from special nuclear material sources

    DOEpatents

    Rowland, Mark S [Alamo, CA; Snyderman, Neal J [Berkeley, CA

    2012-06-05

    A neutron detector system for discriminating fissile material from non-fissile material wherein a digital data acquisition unit collects data at high rate, and in real-time processes large volumes of data directly into information that a first responder can use to discriminate materials. The system comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. The system includes a graphing component that displays the plot of the neutron distribution from the unknown source over a Poisson distribution and a plot of neutrons due to background or environmental sources. The system further includes a known neutron source placed in proximity to the unknown source to actively interrogate the unknown source in order to accentuate differences in neutron emission from the unknown source from Poisson distributions and/or environmental sources.
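
The "excess grouped neutrons" signature can be illustrated with a variance-to-mean (Feynman-Y) check on time-gated counts: a random (Poisson) source gives Y ≈ 0, while a source emitting correlated bursts, as fission chains do, gives Y > 0. The burst model below is a toy illustration, not the patented detector's algorithm:

```python
import random

def gate_counts(event_times, gate, t_end):
    """Histogram neutron arrival times into fixed-width time gates."""
    n_gates = int(t_end / gate)
    counts = [0] * n_gates
    for t in event_times:
        i = int(t / gate)
        if i < n_gates:
            counts[i] += 1
    return counts

def feynman_y(counts):
    """Excess variance-to-mean ratio: ~0 for Poisson (random) arrivals,
    positive when neutrons arrive in correlated groups."""
    m = sum(counts) / len(counts)
    var = sum((c - m) ** 2 for c in counts) / (len(counts) - 1)
    return var / m - 1.0

rng = random.Random(7)
T = 1000.0
# Background-like source: independent, uniformly random arrivals.
poisson_events = [rng.uniform(0, T) for _ in range(5000)]
# Fission-like source: bursts of several neutrons arriving nearly together.
burst_events = []
for _ in range(1000):
    t0 = rng.uniform(0, T)
    for _ in range(5):                      # 5 neutrons per fission burst
        burst_events.append(t0 + rng.uniform(0, 1e-3))

y_poisson = feynman_y(gate_counts(poisson_events, 1.0, T))
y_burst = feynman_y(gate_counts(burst_events, 1.0, T))
```

Both toy sources have the same average count rate; only the grouped-arrival statistic separates them, which is the discrimination principle the patent describes.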

  5. ArgoEcoSystem-watershed (AgES-W) model evaluation for streamflow and nitrogen/sediment dynamics on a midwest agricultural watershed

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...

  6. The distributed agent-based approach in the e-manufacturing environment

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Kost, G.; Dobrzańska-Danikiewicz, A.; Banaś, W.; Foit, K.

    2015-11-01

    A lack of coherent information flow from the production department causes unplanned downtime and failures of machines and their equipment, which in turn means that the production planning process is based on incorrect and out-of-date information. All of these factors entail additional difficulties in decision-making. These concern, among others, the coordination of the components of a distributed system and providing access to the required information, thereby generating unnecessary costs. The use of agent technology significantly speeds up the flow of information within the virtual enterprise. This paper proposes a multi-agent approach to the integration of processes within the virtual enterprise concept. The presented concept was elaborated to investigate possible ways of transmitting information in the production system, taking into account the self-organization of its constituent components; it thus links the concept of a multi-agent system with a production-information management system based on the idea of e-manufacturing. The paper presents the resulting scheme, which should serve as the basis for elaborating an informatics model of the target virtual system. The computer system itself is intended to be developed next.

  7. Microwave Analysis with Monte Carlo Methods for ECH Transmission Lines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaufman, Michael C.; Lau, Cornwall H.; Hanson, Gregory R.

    A new code framework, MORAMC, is presented which models transmission line (TL) systems consisting of overmoded circular waveguide and other components, including miter bends and transmission line gaps. The transmission line is modeled as a set of mode converters in series, where each component is composed of one or more converters. The parametrization of each mode converter can account for the fabrication tolerances of physically realizable components. These tolerances, as well as the precision to which these TL systems can be installed and aligned, give a practical limit to which the uncertainty of the microwave performance of the system can be calculated. Because of this, Monte Carlo methods are a natural fit and are employed to calculate the probability distribution that a given TL can deliver a required power and mode purity. Several examples are given to demonstrate the usefulness of MORAMC in optimizing TL systems.
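
The Monte Carlo idea, drawing each component's mode-conversion loss from its fabrication tolerance and propagating the draws through the series of converters, can be sketched as follows. The component values are hypothetical, and this is in the spirit of the approach rather than MORAMC's actual mode-converter model:

```python
import random

# Each TL component removes a small, tolerance-dependent fraction of power
# from the desired mode; components act in series. Values are hypothetical.
COMPONENTS = [  # (nominal loss fraction, tolerance half-width)
    (0.002, 0.001),   # waveguide section
    (0.010, 0.005),   # miter bend
    (0.010, 0.005),   # miter bend
    (0.004, 0.003),   # transmission line gap
]

def delivered_fraction(rng):
    """One Monte Carlo realization of the delivered power fraction."""
    frac = 1.0
    for nominal, tol in COMPONENTS:
        loss = nominal + rng.uniform(-tol, tol)   # fabrication scatter
        frac *= 1.0 - loss
    return frac

rng = random.Random(42)
samples = [delivered_fraction(rng) for _ in range(20000)]
# Probability the line meets a (hypothetical) 97% delivered-power requirement.
p_meets_spec = sum(f >= 0.97 for f in samples) / len(samples)
```

The output is exactly the kind of quantity the record describes: a probability, not a single number, that a given line delivers the required power.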

  8. Microwave Analysis with Monte Carlo Methods for ECH Transmission Lines

    DOE PAGES

    Kaufman, Michael C.; Lau, Cornwall H.; Hanson, Gregory R.

    2018-03-08

    A new code framework, MORAMC, is presented which models transmission line (TL) systems consisting of overmoded circular waveguide and other components, including miter bends and transmission line gaps. The transmission line is modeled as a set of mode converters in series, where each component is composed of one or more converters. The parametrization of each mode converter can account for the fabrication tolerances of physically realizable components. These tolerances, as well as the precision to which these TL systems can be installed and aligned, give a practical limit to which the uncertainty of the microwave performance of the system can be calculated. Because of this, Monte Carlo methods are a natural fit and are employed to calculate the probability distribution that a given TL can deliver a required power and mode purity. Several examples are given to demonstrate the usefulness of MORAMC in optimizing TL systems.

  9. Microwave Analysis with Monte Carlo Methods for ECH Transmission Lines

    NASA Astrophysics Data System (ADS)

    Kaufman, M. C.; Lau, C.; Hanson, G. R.

    2018-03-01

    A new code framework, MORAMC, is presented which models transmission line (TL) systems consisting of overmoded circular waveguide and other components, including miter bends and transmission line gaps. The transmission line is modeled as a set of mode converters in series, where each component is composed of one or more converters. The parametrization of each mode converter can account for the fabrication tolerances of physically realizable components. These tolerances, as well as the precision to which these TL systems can be installed and aligned, give a practical limit to which the uncertainty of the microwave performance of the system can be calculated. Because of this, Monte Carlo methods are a natural fit and are employed to calculate the probability distribution that a given TL can deliver a required power and mode purity. Several examples are given to demonstrate the usefulness of MORAMC in optimizing TL systems.

  10. Cross-frequency and band-averaged response variance prediction in the hybrid deterministic-statistical energy analysis method

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin P. B.; Langley, Robin S.

    2018-08-01

    The hybrid deterministic-statistical energy analysis method has proven to be a versatile framework for modeling built-up vibro-acoustic systems. The stiff system components are modeled deterministically, e.g., using the finite element method, while the wave fields in the flexible components are modeled as diffuse. In the present paper, the hybrid method is extended such that not only the ensemble mean and variance of the harmonic system response can be computed, but also of the band-averaged system response. This variance represents the uncertainty that is due to the assumption of a diffuse field in the flexible components of the hybrid system. The developments start with a cross-frequency generalization of the reciprocity relationship between the total energy in a diffuse field and the cross spectrum of the blocked reverberant loading at the boundaries of that field. By making extensive use of this generalization in a first-order perturbation analysis, explicit expressions are derived for the cross-frequency and band-averaged variance of the vibrational energies in the diffuse components and for the cross-frequency and band-averaged variance of the cross spectrum of the vibro-acoustic field response of the deterministic components. These expressions are extensively validated against detailed Monte Carlo analyses of coupled plate systems in which diffuse fields are simulated by randomly distributing small point masses across the flexible components, and good agreement is found.

  11. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.
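
Automatic failure injection can be illustrated with a toy load-balanced task model: run the same workload with and without injected node failures and compare completed work and makespan. This is a sketch of the concept only, not DEPEND's interface or its prediction-based heuristic:

```python
# Toy failure-injection experiment (illustrative; not DEPEND's actual API).
def run(n_nodes, n_tasks, failure_times):
    """Schedule unit-time tasks on the least-loaded surviving node.
    Returns (tasks completed, makespan)."""
    finish = [0.0] * n_nodes                      # next free time per node
    completed = 0
    for _ in range(n_tasks):
        alive = [i for i in range(n_nodes) if finish[i] < failure_times[i]]
        if not alive:
            break
        i = min(alive, key=lambda j: finish[j])   # dynamic load balancing
        finish[i] += 1.0
        if finish[i] <= failure_times[i]:         # finished before the failure
            completed += 1
    return completed, max(finish)

ok = run(4, 100, [float("inf")] * 4)              # fault-free baseline
# Inject failures: two of the four nodes die at t = 10.5, losing in-flight work.
faulty = run(4, 100, [10.5, 10.5, float("inf"), float("inf")])
```

Running the two configurations side by side is the essence of the technique: the simulator injects the failures and tallies the statistics, and the designer reads off the performance and reliability impact.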

  12. Report on the Armed Services Technical Information Agency

    DTIC Science & Technology

    1957-06-30

    (Fragmentary excerpt) DISTRIBUTION STATEMENT E: Distribution authorized to DoD components only. ... Forecast of ASTIA Activity ... Proposed DOD Directive re: Cataloging and Abstracting of Reports by Originators ... Statistics on ASTIA ... for resources, and (e) systems and procedures. External considerations of user requirements and user satisfaction were beyond the scope of ...

  13. MODIS Information, Data, and Control System (MIDACS) system specifications and conceptual design

    NASA Technical Reports Server (NTRS)

    Han, D.; Salomonson, V.; Ormsby, J.; Ardanuy, P.; Mckay, A.; Hoyt, D.; Jaffin, S.; Vallette, B.; Sharts, B.; Folta, D.

    1988-01-01

    The MODIS Information, Data, and Control System (MIDACS) Specifications and Conceptual Design Document discusses system level requirements, the overall operating environment in which requirements must be met, and a breakdown of MIDACS into component subsystems, which include the Instrument Support Terminal, the Instrument Control Center, the Team Member Computing Facility, the Central Data Handling Facility, and the Data Archive and Distribution System. The specifications include sizing estimates for the processing and storage capacities of each data system element, as well as traffic analyses of data flows between the elements internally, and also externally across the data system interfaces. The specifications for the data system, as well as for the individual planning and scheduling, control and monitoring, data acquisition and processing, calibration and validation, and data archive and distribution components, do not yet fully specify the data system in the complete manner needed to achieve the scientific objectives of the MODIS instruments and science teams. The teams have not yet been formed; however, it was possible to develop the specifications and conceptual design based on the present concept of EosDIS, the Level-1 and Level-2 Functional Requirements Documents, the Operations Concept, and through interviews and meetings with key members of the scientific community.

  14. What Sets the Radial Locations of Warm Debris Disks?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballering, Nicholas P.; Rieke, George H.; Su, Kate Y. L.

    The architectures of debris disks encode the history of planet formation in these systems. Studies of debris disks via their spectral energy distributions (SEDs) have found infrared excesses arising from cold dust, warm dust, or a combination of the two. The cold outer belts of many systems have been imaged, facilitating their study in great detail. Far less is known about the warm components, including the origin of the dust. The regularity of the disk temperatures indicates an underlying structure that may be linked to the water snow line. If the dust is generated from collisions in an exo-asteroid belt, the dust will likely trace the location of the water snow line in the primordial protoplanetary disk where planetesimal growth was enhanced. If instead the warm dust arises from the inward transport from a reservoir of icy material farther out in the system, the dust location is expected to be set by the current snow line. We analyze the SEDs of a large sample of debris disks with warm components. We find that warm components in single-component systems (those without detectable cold components) follow the primordial snow line rather than the current snow line, so they likely arise from exo-asteroid belts. While the locations of many warm components in two-component systems are also consistent with the primordial snow line, there is more diversity among these systems, suggesting additional effects play a role.
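
The link between a fitted dust temperature and a belt radius can be sketched with the standard blackbody-grain approximation, T ≈ 278.3 K · (L/L⊙)^(1/4) / √(r/AU), inverted for r. This is a textbook estimate, far simpler than the SED fits the study performs, and the 150 K sublimation temperature below is an illustrative choice:

```python
import math

def dust_radius_au(t_dust_k, l_star_lsun):
    """Blackbody-grain orbital radius (AU) for a given dust temperature (K)
    and stellar luminosity (solar units)."""
    return (278.3 / t_dust_k) ** 2 * math.sqrt(l_star_lsun)

# Hypothetical warm component at a ~150 K ice-sublimation temperature:
r_sun = dust_radius_au(150.0, 1.0)       # around a solar-luminosity star
r_bright = dust_radius_au(150.0, 16.0)   # around a 16-Lsun star
```

The scaling with √L is why comparable dust temperatures across stars of different luminosities imply systematically different belt radii, which is what makes the snow-line comparison in the study possible.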

  15. Probabilistic analysis and fatigue damage assessment of offshore mooring system due to non-Gaussian bimodal tension processes

    NASA Astrophysics Data System (ADS)

    Chang, Anteng; Li, Huajun; Wang, Shuqing; Du, Junfeng

    2017-08-01

    Both wave-frequency (WF) and low-frequency (LF) components of mooring tension are in principle non-Gaussian due to nonlinearities in the dynamic system. This paper conducts a comprehensive investigation of applicable probability density functions (PDFs) of mooring tension amplitudes used to assess mooring-line fatigue damage via the spectral method. Short-term statistical characteristics of mooring-line tension responses are firstly investigated, in which the discrepancy arising from Gaussian approximation is revealed by comparing kurtosis and skewness coefficients. Several distribution functions based on present analytical spectral methods are selected to express the statistical distribution of the mooring-line tension amplitudes. Results indicate that the Gamma-type distribution and a linear combination of Dirlik and Tovo-Benasciutti formulas are suitable for separate WF and LF mooring tension components. A novel parametric method based on nonlinear transformations and stochastic optimization is then proposed to increase the effectiveness of mooring-line fatigue assessment due to non-Gaussian bimodal tension responses. Using time domain simulation as a benchmark, its accuracy is further validated using a numerical case study of a moored semi-submersible platform.
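
The kurtosis/skewness comparison used to expose the Gaussian-approximation discrepancy can be sketched on synthetic records. The quadratic term below mimics a drag-type nonlinearity and is illustrative only, not the paper's mooring model:

```python
import random

def skewness_kurtosis(x):
    """Sample skewness and (non-excess) kurtosis; a Gaussian process has
    skewness ~ 0 and kurtosis ~ 3."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / n
    skew = sum((v - m) ** 3 for v in x) / n / s2 ** 1.5
    kurt = sum((v - m) ** 4 for v in x) / n / s2 ** 2
    return skew, kurt

rng = random.Random(11)
gauss = [rng.gauss(0.0, 1.0) for _ in range(20000)]
# Squared-Gaussian admixture: a toy stand-in for the quadratic nonlinearity
# that drives LF mooring tension away from Gaussian behavior.
nonlin = [g + 0.3 * (g * g - 1.0) for g in gauss]

skew_g, kurt_g = skewness_kurtosis(gauss)
skew_n, kurt_n = skewness_kurtosis(nonlin)
```

Coefficients departing from (0, 3), as in the nonlinear record here, are the signal that Gaussian amplitude distributions will misestimate fatigue damage and that the Gamma-type or Dirlik/Tovo-Benasciutti forms discussed above are needed.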

  16. Production of a novel antioxidant furan fatty acid from 7,10-dihydroxy-8(E)-octadecenoic acid

    USDA-ARS?s Scientific Manuscript database

    Furan fatty acids (F-acids) have gained attention since they are known to play important roles in a variety of biological systems. Specifically, F-acids are known to have strong antioxidant activity. Although widely distributed in most biological systems, F-acids are trace components and their biosyn...

  17. 46 CFR 162.161-6 - Tests for approval.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... have been conditioned for 24 hours at 32 °F or at the expected service temperature, if lower than 32 °F... distribution of the extinguishing agent; (3) Salt spray corrosion resistance for marine-type systems; (4) Vibration resistance of installed components for marine-type systems; and (5) Any additional tests contained...

  18. 46 CFR 162.161-6 - Tests for approval.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... have been conditioned for 24 hours at 32 °F or at the expected service temperature, if lower than 32 °F... distribution of the extinguishing agent; (3) Salt spray corrosion resistance for marine-type systems; (4) Vibration resistance of installed components for marine-type systems; and (5) Any additional tests contained...

  19. 46 CFR 162.161-6 - Tests for approval.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... have been conditioned for 24 hours at 32 °F or at the expected service temperature, if lower than 32 °F... distribution of the extinguishing agent; (3) Salt spray corrosion resistance for marine-type systems; (4) Vibration resistance of installed components for marine-type systems; and (5) Any additional tests contained...

  20. Materials handbook for fusion energy systems

    NASA Astrophysics Data System (ADS)

    Davis, J. W.; Marchbanks, M. F.

    A materials data book for use in the design and analysis of components and systems in near term experimental and commercial reactor concepts has been created by the Office of Fusion Energy. The handbook is known as the Materials Handbook for Fusion Energy Systems (MHFES) and is available to all organizations actively involved in fusion related research or system designs. Distribution of the MHFES and its data pages is handled by the Hanford Engineering Development Laboratory (HEDL), while its direction and content are handled by McDonnell Douglas Astronautics Company — St. Louis (MDAC-STL). The MHFES differs from other handbooks in that its format is geared more to the designer and structural analyst than to the materials scientist or materials engineer. The format that is used organizes the handbook by subsystems or components rather than material. Within each subsystem is information pertaining to material selection, specific material properties, and comments or recommendations on treatment of data. Since its inception a little more than a year ago, over 80 copies have been distributed to over 28 organizations consisting of national laboratories, universities, and private industries.

  1. A quality of service negotiation procedure for distributed multimedia presentational applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafid, A.; Bochmann, G.V.; Kerherve, B.

    Most current approaches to designing and implementing distributed multimedia (MM) presentational applications, e.g. news-on-demand, have concentrated on the performance of continuous-media file servers in terms of seek-time overhead and real-time disk scheduling. In particular, the quality of service (QoS) negotiation mechanisms they provide are used in a rather static manner; that is, these mechanisms are restricted to evaluating the capacity of certain system components, e.g. a file server known a priori to support a specific QoS. In contrast to those approaches, we propose a general QoS negotiation framework that supports the dynamic choice of a configuration of system components to support the QoS requirements of the user of a specific application: we consider different possible system configurations and select an optimal one to provide the appropriate QoS support. In this paper we document the design and implementation of a QoS negotiation procedure for distributed MM presentational applications, such as news-on-demand. The negotiation procedure described here is an instantiation of the general framework for QoS negotiation which was developed earlier. Our proposal differs in many respects from the negotiation functions provided by existing approaches: (1) the negotiation process uses an optimization approach to find a configuration of system components which supports the user requirements, (2) the negotiation process supports the negotiation of a MM document and not only a single monomedia object, (3) the QoS negotiation takes into account the cost to the user, and (4) the negotiation process may be used to support automatic adaptation in reaction to QoS degradations, without intervention by the user/application.

  2. Simulation and analysis of conjunctive use with MODFLOW's farm process

    USGS Publications Warehouse

    Hanson, R.T.; Schmid, W.; Faunt, C.C.; Lockwood, B.

    2010-01-01

    The extension of MODFLOW onto the landscape with the Farm Process (MF-FMP) facilitates fully coupled simulation of the use and movement of water from precipitation, streamflow and runoff, groundwater flow, and consumption by natural and agricultural vegetation throughout the hydrologic system at all times. This allows for more complete analysis of conjunctive use water-resource systems than previously possible with MODFLOW by combining relevant aspects of the landscape with the groundwater and surface water components. This analysis is accomplished using distributed cell-by-cell supply-constrained and demand-driven components across the landscape within "water-balance subregions" comprised of one or more model cells that can represent a single farm, a group of farms, or other hydrologic or geopolitical entities. Simulations of micro-agriculture in the Pajaro Valley and macro-agriculture in the Central Valley are used to demonstrate the utility of MF-FMP. For Pajaro Valley, the simulation of an aquifer storage and recovery system and related coastal water distribution system to supplant coastal pumpage was analyzed subject to climate variations and additional supplemental sources such as local runoff. For the Central Valley, analysis of conjunctive use from different hydrologic settings of northern and southern subregions shows how and when precipitation, surface water, and groundwater are important to conjunctive use. The examples show that through MF-FMP's ability to simulate natural and anthropogenic components of the hydrologic cycle, the distribution and dynamics of supply and demand can be analyzed, understood, and managed. This analysis of conjunctive use would be difficult without embedding these components in the simulation, as they are difficult to estimate a priori. Journal compilation © 2010 National Ground Water Association. No claim to original US government works.

  3. RF-based power distribution system for optogenetic experiments

    NASA Astrophysics Data System (ADS)

    Filipek, Tomasz A.; Kasprowicz, Grzegorz H.

    2017-08-01

    In this paper, a wireless power distribution system for optogenetic experiments is demonstrated. The design and analysis of the power transfer system are described in detail. The architecture is outlined in the context of performance requirements that had to be met. We show how to design a wireless power transfer system using resonant coupling circuits which consist of a number of receivers and one transmitter covering the entire cage area with a specific power density. The transmitter design, with a fully automated protection stage, is described with detailed consideration of the specification and the construction of the transmitting loop antenna. In addition, the design of the receiver is described, including simplification of implementation and the minimization of the impact of component tolerances on the performance of the distribution system. The conducted analysis has been confirmed by calculations and measurement results. The presented distribution system was designed to provide 100 mW power supply to each of the ten possible receivers in a limited 490 x 350 mm cage space while using a single transmitter working at the coupling resonant frequency of 27 MHz.
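    The receivers above tune resonant coupling circuits to the 27 MHz carrier. As a minimal numerical sketch (the 1 µH loop inductance is a hypothetical value, not taken from the paper), the matching capacitance follows from the LC resonance condition f0 = 1 / (2π√(LC)):

```python
import math

f0 = 27e6    # target coupling resonance frequency [Hz]
L = 1.0e-6   # hypothetical loop-antenna inductance [H]

# Solve f0 = 1 / (2*pi*sqrt(L*C)) for the tuning capacitance C
C = 1.0 / ((2 * math.pi * f0) ** 2 * L)
print(f"tuning capacitance ≈ {C * 1e12:.1f} pF")

# Sanity check: recompute the resonant frequency from L and C
f_check = 1.0 / (2 * math.pi * math.sqrt(L * C))
print(f"resonance ≈ {f_check / 1e6:.2f} MHz")
```

Component-tolerance sensitivity follows directly: a fractional error in L or C shifts the resonance by roughly half that fraction.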

  4. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is one of estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. The Bayesian analyses are more beneficial than the classical ones in such cases. The Bayesian estimation analyses allow us to combine past knowledge or experience in the form of an a priori distribution with life test data to make inferences of the parameter of interest. In this paper, we have investigated the application of the Bayesian estimation analyses to competing risk systems. The cases are limited to the models with independent causes of failure by using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample size. The simulation data are analyzed by using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. For perfect information on the prior distribution, the estimation methods of the Bayesian analyses are better than those of the maximum likelihood. The sensitivity analyses show some amount of sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range between the true-value and maximum-likelihood-estimate lines.
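    The competing-risk setup with independent Weibull causes can be sketched as follows. The two shape/scale pairs are purely illustrative, and for brevity a single Weibull is fitted by maximum likelihood to the system lifetimes rather than fitting each cause with a Bayesian posterior as the paper does:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000

# Two independent Weibull failure causes with hypothetical parameters
t1 = stats.weibull_min.rvs(c=1.5, scale=100.0, size=n, random_state=rng)
t2 = stats.weibull_min.rvs(c=2.5, scale=150.0, size=n, random_state=rng)

# Competing risks: the system fails at the earliest cause
t_sys = np.minimum(t1, t2)
cause = np.where(t1 <= t2, 1, 2)

# Maximum-likelihood fit of a single Weibull to the system lifetimes
shape, loc, scale = stats.weibull_min.fit(t_sys, floc=0)
print(f"fraction failing from cause 1: {np.mean(cause == 1):.2f}")
print(f"fitted shape ≈ {shape:.2f}, scale ≈ {scale:.1f}")
```

The fitted scale falls below either cause's scale because the system fails at the minimum of the two lifetimes.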

  5. Improved Coast Guard Communications Using Commercial Satellites and WWW Technology

    DOT National Transportation Integrated Search

    1997-06-18

    Information collection and distribution are essential components of most Coast Guard missions. However, information needs have typically outpaced the ability of the installed communications systems to meet those needs. This mismatch leads to reduced ...

  6. Analysis of possible designs of processing units with radial plasma flows

    NASA Astrophysics Data System (ADS)

    Kolesnik, V. V.; Zaitsev, S. V.; Vashilin, V. S.; Limarenko, M. V.; Prochorenkov, D. S.

    2018-03-01

    Analysis of plasma-ion methods of obtaining thin-film coatings shows that their development goes along the path of the increasing use of sputter deposition processes, which allow one to obtain multicomponent coatings with varying percentage of particular components. One of the methods that allow one to form multicomponent coatings with virtually any composition of elementary components is the method of coating deposition using quasi-magnetron sputtering systems [1]. This requires the creation of an axial magnetic field of a defined configuration with the flux density within the range of 0.01-0.1 T [2]. In order to compare and analyze various configurations of processing unit magnetic systems, it is necessary to obtain the following dependencies: the dependency of magnetic core section on the input power to inductors, the distribution of magnetic induction within the equatorial plane in the corresponding sections, the distribution of the magnetic induction value in the area of cathode target location.

  7. Distribution and dynamics of electron transport complexes in cyanobacterial thylakoid membranes☆

    PubMed Central

    Liu, Lu-Ning

    2016-01-01

    The cyanobacterial thylakoid membrane represents a system that can carry out both oxygenic photosynthesis and respiration simultaneously. The organization, interactions and mobility of components of these two electron transport pathways are indispensable to the biosynthesis of thylakoid membrane modules and the optimization of bioenergetic electron flow in response to environmental changes. These are of fundamental importance to the metabolic robustness and plasticity of cyanobacteria. This review summarizes our current knowledge about the distribution and dynamics of electron transport components in cyanobacterial thylakoid membranes. Global understanding of the principles that govern the dynamic regulation of electron transport pathways in nature will provide a framework for the design and synthetic engineering of new bioenergetic machinery to improve photosynthesis and biofuel production. This article is part of a Special Issue entitled: Organization and dynamics of bioenergetic systems in bacteria, edited by Conrad Mullineaux. PMID:26619924

  8. Distribution and dynamics of electron transport complexes in cyanobacterial thylakoid membranes.

    PubMed

    Liu, Lu-Ning

    2016-03-01

    The cyanobacterial thylakoid membrane represents a system that can carry out both oxygenic photosynthesis and respiration simultaneously. The organization, interactions and mobility of components of these two electron transport pathways are indispensable to the biosynthesis of thylakoid membrane modules and the optimization of bioenergetic electron flow in response to environmental changes. These are of fundamental importance to the metabolic robustness and plasticity of cyanobacteria. This review summarizes our current knowledge about the distribution and dynamics of electron transport components in cyanobacterial thylakoid membranes. Global understanding of the principles that govern the dynamic regulation of electron transport pathways in nature will provide a framework for the design and synthetic engineering of new bioenergetic machinery to improve photosynthesis and biofuel production. This article is part of a Special Issue entitled: Organization and dynamics of bioenergetic systems in bacteria, edited by Conrad Mullineaux. Copyright © 2015 The Author. Published by Elsevier B.V. All rights reserved.

  9. Vector wind profile gust model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1981-01-01

    To enable development of a vector wind gust model suitable for orbital flight test operations and trade studies, hypotheses concerning the distributions of gust component variables were verified. Methods are presented for verifying the hypotheses that observed gust variables, including gust component magnitude, gust length, u range, and L range, are gamma distributed. Observed gust modulus has been drawn from a bivariate gamma distribution that can be approximated with a Weibull distribution. Zonal and meridional gust components are bivariate gamma distributed. An analytical method for testing for bivariate gamma distributed variables is presented. Two distributions for gust modulus are described and the results of extensive hypothesis testing of one of the distributions are presented. The validity of the gamma distribution for representation of gust component variables is established.
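    The gamma-distribution verification can be sketched with a synthetic gust-magnitude sample standing in for observations (the shape and scale values below are illustrative only): fit the gamma parameters by maximum likelihood, then test the hypothesis with a Kolmogorov-Smirnov statistic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical gust-magnitude sample drawn from a gamma distribution
true_shape, true_scale = 2.0, 3.0
gusts = stats.gamma.rvs(a=true_shape, scale=true_scale, size=10_000,
                        random_state=rng)

# Fit the gamma hypothesis and test goodness of fit
shape, loc, scale = stats.gamma.fit(gusts, floc=0)
ks = stats.kstest(gusts, "gamma", args=(shape, loc, scale))

print(f"fitted shape ≈ {shape:.2f}, scale ≈ {scale:.2f}")
print(f"KS p-value = {ks.pvalue:.3f}")
```

A large p-value fails to reject the gamma hypothesis; a small one flags the need for an alternative such as the Weibull approximation mentioned for the gust modulus.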

  10. Research study on multi-KW-DC distribution system

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1975-01-01

    A detailed definition of the HVDC test facility and the equipment required to implement the test program are provided. The basic elements of the test facility are illustrated, and consist of: the power source, conventional and digital supervision and control equipment, power distribution harness and simulated loads. The regulated dc power supplies provide steady-state power up to 36 KW at 120 VDC. Power for simulated line faults will be obtained from two banks of 90 ampere-hour lead-acid batteries. The relative merits of conventional and multiplexed power control will be demonstrated by the Supervision and Monitor Unit (SMU) and the Automatically Controlled Electrical Systems (ACES) hardware. The distribution harness is supported by a metal duct which is bonded to all component structures and functions as the system ground plane. The load banks contain passive resistance and reactance loads, solid state power controllers and active pulse width modulated loads. The HVDC test facility is designed to simulate a power distribution system for large aerospace vehicles.

  11. Exploring the functional architecture of person recognition system with event-related potentials in a within- and cross-domain self-priming of faces.

    PubMed

    Jemel, Boutheina; Pisani, Michèle; Rousselle, Laurence; Crommelinck, Marc; Bruyer, Raymond

    2005-01-01

    In this paper, we explored the functional properties of the person recognition system by investigating the onset, magnitude, and scalp distribution of within- and cross-domain self-priming effects on event-related potentials (ERPs). Recognition of degraded pictures of famous people was enhanced by a prior exposure to the same person's face (within-domain self-priming) or name (cross-domain self-priming) as compared to those preceded by neutral or unrelated primes. The ERP results showed first that the amplitude of the N170 component to famous face targets was modulated by within- and cross-domain self-priming, suggesting not only that the N170 component can be affected by top-down influences but also that this top-down effect crosses domains. Second, similar to our behavioral data, later ERPs to famous faces showed larger ERP self-priming effects in the within-domain than in the cross-domain condition. In addition, the present data dissociated two topographically and temporally overlapping priming-sensitive ERP components: the first one, with a strongly posterior distribution arising at an early onset, was modulated more by within-domain priming irrespective of whether the repeated face was familiar or not. The second component, with a relatively uniform scalp distribution, was modulated by within- and cross-domain priming of familiar faces. Moreover, there was no evidence for ERP-induced modulations for unfamiliar face targets in the cross-domain condition. Together, our findings suggest that multiple neurocognitive events that are possibly mediated by distinct brain loci contribute to face priming effects.

  12. Few-mode fiber based distributed curvature sensor through quasi-single-mode Brillouin frequency shift.

    PubMed

    Wu, Hao; Wang, Ruoxu; Liu, Deming; Fu, Songnian; Zhao, Can; Wei, Huifeng; Tong, Weijun; Shum, Perry Ping; Tang, Ming

    2016-04-01

    We proposed and demonstrated a few-mode fiber (FMF) based optical-fiber sensor for distributed curvature measurement through quasi-single-mode Brillouin frequency shift (BFS). By central-alignment splicing FMF and single-mode fiber (SMF) with a fusion taper, a SMF-components-compatible distributed curvature sensor based on FMF is realized using the conventional Brillouin optical time-domain analysis system. The distributed BFS change induced by bending in FMF has been theoretically and experimentally investigated. The precise BFS response to the curvature along the fiber link has been calibrated. A proof-of-concept experiment is implemented to validate its effectiveness in distributed curvature measurement.

  13. Optimization of Borehole Thermal Energy Storage System Design Using Comprehensive Coupled Simulation Models

    NASA Astrophysics Data System (ADS)

    Welsch, Bastian; Rühaak, Wolfram; Schulte, Daniel O.; Formhals, Julian; Bär, Kristian; Sass, Ingo

    2017-04-01

    Large-scale borehole thermal energy storage (BTES) is a promising technology in the development of sustainable, renewable and low-emission district heating concepts. Such systems consist of several components and assemblies like the borehole heat exchangers (BHE), other heat sources (e.g. solar thermal collectors, combined heat and power plants, peak load boilers, heat pumps), distribution networks and heating installations. The complexity of these systems necessitates numerical simulations in the design and planning phase. Generally, the subsurface components are simulated separately from the above-ground components of the district heating system. However, as fluid and heat are exchanged, the subsystems interact with each other and thereby mutually affect their performances. For a proper design of the overall system, it is therefore imperative to take into account the interdependencies of the subsystems. Based on TCP/IP communication we have developed an interface for the coupling of a simulation package for heating installations with a finite element software for the modeling of the heat flow in the subsurface and the underground installations. This allows for a co-simulation of all system components, whereby the interaction of the different subsystems is considered. Furthermore, the concept allows for a mathematical optimization of the components and the operational parameters. Consequently, a finer adjustment of the system can be ensured and a more precise prognosis of the system's performance can be realized.
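    The per-timestep exchange such a TCP/IP coupling performs can be sketched as a toy handshake. Here a local socket pair stands in for the network link, and a one-line ground model stands in for the finite element solver; the message fields, the relaxation factor, and all temperatures are illustrative assumptions, not the actual interface:

```python
import json
import socket

# "plant" = heating-installation simulator, "ground" = subsurface model
plant, ground = socket.socketpair()

def subsurface_step(t_in, flow):
    # Toy BHE model: the ground pulls the fluid 20% toward 12 degC per step
    return t_in + (12.0 - t_in) * 0.2

t_in = 35.0   # initial inlet temperature from the plant side [degC]
for step in range(3):
    # plant sends inlet conditions; ground replies with outlet temperature
    plant.sendall(json.dumps({"t_in": t_in, "flow": 1.5}).encode())
    msg = json.loads(ground.recv(1024).decode())
    t_out = subsurface_step(msg["t_in"], msg["flow"])
    ground.sendall(json.dumps({"t_out": t_out}).encode())
    t_in = json.loads(plant.recv(1024).decode())["t_out"]
    print(f"step {step}: BHE outlet temperature = {t_in:.3f} degC")
plant.close()
ground.close()
```

In the real system each side would advance its own solver between exchanges; the point of the co-simulation is that neither model runs with a frozen boundary condition.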

  14. Electrical/electronics working group summary

    NASA Technical Reports Server (NTRS)

    Schoenfeld, A. D.

    1984-01-01

    The electrical/electronics technology area was considered. It was found that there are no foreseeable circuit or component problems to hinder the implementation of the flywheel energy storage concept. The definition of the major component or technology developments required to permit a technology ready date of 1987 was addressed. Recommendations covered motor/generators, suspension electronics, power transfer, power conditioning and distribution, and modeling. An introduction to the area of system engineering is also included.

  15. Natural Materials and Systems

    DTIC Science & Technology

    2013-03-07

    Approved for public release; distribution is unlimited. Program trends: the BRI effort has the biggest impact. Chromophores/bioluminescence is a Bio-X STT phase 1 focus. Thermal analysis (TGA/DSC) compared bio-thermite complexes of ferritin (with its FeO(OH) core) and nano-aluminum (nAl) against nAl alone. Ferritin, used in a single layer or multi-layer, can stabilize reactive components, interact with nAl, and quickly deliver components to the surface.

  16. How Are the Costs of Care for Medical Falls Distributed? The Costs of Medical Falls by Component of Cost, Timing, and Injury Severity

    ERIC Educational Resources Information Center

    Bohl, Alex A.; Phelan, Elizabeth A.; Fishman, Paul A.; Harris, Jeffrey R.

    2012-01-01

    Purpose of the Study: To examine the components of cost that drive increased total costs after a medical fall over time, stratified by injury severity. Design and Methods: We used 2004-2007 cost and utilization data for persons enrolled in an integrated care delivery system. We used a longitudinal cohort study design, where each individual…

  17. Thermodynamics of rock forming crystalline solutions

    NASA Technical Reports Server (NTRS)

    Saxena, S. K.

    1971-01-01

    Analysis of phase diagrams and cation distributions within crystalline solutions as means of obtaining thermodynamic data on rock forming crystalline solutions is discussed along with some aspects of partitioning of elements in coexisting phases. Crystalline solutions, components in a silicate mineral, and chemical potentials of these components were defined. Examples were given for calculating thermodynamic mixing functions in the CaWO4-SrWO4, olivine-chloride solution, and orthopyroxene systems.

  18. Theory of Ostwald ripening in a two-component system

    NASA Technical Reports Server (NTRS)

    Baird, J. K.; Lee, L. K.; Frazier, D. O.; Naumann, R. J.

    1986-01-01

    When a two-component system is cooled below the minimum temperature for its stability, it separates into two or more immiscible phases. The initial nucleation produces grains (if solid) or droplets (if liquid) of one of the phases dispersed in the other. The dynamics by which these nuclei proceed toward equilibrium is called Ostwald ripening. The dynamics of growth of the droplets depends upon the following factors: (1) The solubility of the droplet depends upon its radius and the interfacial energy between it and the surrounding (continuous) phase. There is a critical radius determined by the supersaturation in the continuous phase. Droplets with radii smaller than critical dissolve, while droplets with radii larger grow. (2) The droplets concentrate one component and reject the other. The rate at which this occurs is assumed to be determined by the interdiffusion of the two components in the continuous phase. (3) The Ostwald ripening is constrained by conservation of mass; e.g., the amount of materials in the droplet phase plus the remaining supersaturation in the continuous phase must equal the supersaturation available at the start. (4) There is a distribution of droplet sizes associated with a mean droplet radius, which grows continuously with time. This distribution function satisfies a continuity equation, which is solved asymptotically by a similarity transformation method.
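    The four factors above combine into the classical Lifshitz-Slyozov-Wagner (LSW) description of ripening. A sketch of the governing relations, in generic notation rather than the paper's own (K is a material-dependent rate constant set by diffusivity, interfacial energy, and solubility):

```latex
% Growth law: droplets at the critical radius R_c neither grow nor shrink
\frac{dR}{dt} = \frac{K}{R}\left(\frac{1}{R_c} - \frac{1}{R}\right),
\qquad R > R_c \;\Longrightarrow\; \frac{dR}{dt} > 0

% Continuity equation for the droplet-size distribution f(R, t)
\frac{\partial f}{\partial t}
  + \frac{\partial}{\partial R}\!\left(f \, \frac{dR}{dt}\right) = 0

% Mass conservation ties R_c(t) to the remaining supersaturation;
% the similarity solution yields the asymptotic coarsening law
\langle R \rangle^{3} - \langle R_0 \rangle^{3} \;\propto\; t
```

The similarity-transformation solution mentioned in the abstract is what reduces the continuity equation to this self-similar distribution with a mean radius growing as t^{1/3}.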

  19. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  20. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  1. Service-oriented architecture for the ARGOS instrument control software

    NASA Astrophysics Data System (ADS)

    Borelli, J.; Barl, L.; Gässler, W.; Kulas, M.; Rabien, Sebastian

    2012-09-01

    The Advanced Rayleigh Guided ground layer Adaptive optic System, ARGOS, equips the Large Binocular Telescope (LBT) with a constellation of six Rayleigh laser guide stars. By correcting atmospheric turbulence near the ground, the system is designed to increase the image quality of the multi-object spectrograph LUCIFER approximately by a factor of 3 over a field of 4 arc minute diameter. The control software has the critical task of orchestrating several devices, instruments, and high level services, including the already existing adaptive optics system and the telescope control software. All these components are widely distributed over the telescope, adding more complexity to the system design. The approach used by the ARGOS engineers is to write loosely coupled and distributed services under the control of different ownership systems, providing a uniform mechanism to offer, discover, interact with, and use these distributed capabilities. The control system includes several finite state machines, vibration and flexure compensation loops, and safety mechanisms, such as interlocks and aircraft and satellite avoidance systems.

  2. DataFed: A Federated Data System for Visualization and Analysis of Spatio-Temporal Air Quality Data

    NASA Astrophysics Data System (ADS)

    Husar, R. B.; Hoijarvi, K.

    2017-12-01

    DataFed is a distributed web-services-based computing environment for accessing, processing, and visualizing atmospheric data in support of air quality science and management. The flexible, adaptive environment facilitates the access and flow of atmospheric data from provider to users by enabling the creation of user-driven data processing/visualization applications. DataFed `wrapper' components non-intrusively wrap heterogeneous, distributed datasets for access by standards-based GIS web services. The mediator components (also web services) map the heterogeneous data into a spatio-temporal data model. Chained web services provide homogeneous data views (e.g., geospatial, time views) using a global multi-dimensional data model. In addition to data access and rendering, the data processing component services can be programmed for filtering, aggregation, and fusion of multidimensional data. Complete applications are written in a custom-made data-flow language. Currently, the federated data pool consists of over 50 datasets originating from globally distributed data providers delivering surface-based air quality measurements, satellite observations, emissions data, as well as regional and global-scale air quality models. The web browser-based user interface allows point-and-click navigation and browsing of the XYZT multi-dimensional data space. The key applications of DataFed are exploring spatial patterns of pollutants; seasonal, weekly, and diurnal cycles; and frequency distributions for exploratory air quality research. Since 2008, DataFed has been used to support EPA in the implementation of the Exceptional Event Rule. The data system is also used at universities in the US, Europe and Asia.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei, E-mail: atokovinin@ctio.noao.edu

    Radial velocity (RV) monitoring of solar-type visual binaries has been conducted at the CTIO/SMARTS 1.5 m telescope to study short-period systems. The data reduction is described, and mean and individual RVs of 163 observed objects are given. New spectroscopic binaries are discovered or suspected in 17 objects, and for some of them the orbital periods could be determined. Subsystems are efficiently detected even in a single observation by double lines and/or by the RV difference between the components of visual binaries. The potential of this detection technique is quantified by simulation and used for statistical assessment of 96 wide binaries within 67 pc. It is found that 43 binaries contain at least one subsystem, and the occurrence of subsystems is equally probable in either primary or secondary components. The frequency of subsystems and their periods matches the simple prescription proposed by the author. The remaining 53 simple wide binaries with a median projected separation of 1300 AU have an RV difference distribution between their components that is not compatible with the thermal eccentricity distribution f(e) = 2e but rather matches the uniform eccentricity distribution.
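    The two eccentricity distributions compared above are easy to contrast by inverse-CDF sampling. For the thermal law f(e) = 2e the CDF is F(e) = e^2, so e = sqrt(u) for uniform u; the sample sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Thermal distribution f(e) = 2e: CDF F(e) = e^2, so e = sqrt(u)
e_thermal = np.sqrt(rng.uniform(size=n))
# Uniform eccentricity distribution f(e) = 1
e_uniform = rng.uniform(size=n)

print(f"thermal <e> ≈ {e_thermal.mean():.3f}  (exact: 2/3)")
print(f"uniform <e> ≈ {e_uniform.mean():.3f}  (exact: 1/2)")
```

The thermal law weights high eccentricities, producing larger typical RV differences between components; the observed distribution's mismatch with that prediction is what favors the uniform law.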

  4. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
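    The unscented-transform step described above can be sketched in a few lines. This is a minimal scalar illustration, not the authors' implementation; the `eol` damage-to-EOL map below is a hypothetical stand-in for the valve simulation:

    ```python
    import math

    def unscented_transform(mean, var, f, kappa=2.0):
        """Approximate the mean and variance of f(x) for x ~ N(mean, var)
        using 2n+1 sigma points (scalar case, n = 1)."""
        n = 1
        spread = math.sqrt((n + kappa) * var)
        sigma_points = [mean, mean + spread, mean - spread]
        weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
        ys = [f(x) for x in sigma_points]  # each sigma point gets one simulation
        y_mean = sum(w * y for w, y in zip(weights, ys))
        y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
        return y_mean, y_var

    # Hypothetical EOL map: remaining life shrinks nonlinearly with damage x.
    eol = lambda x: 100.0 / (1.0 + x) ** 2
    m, v = unscented_transform(0.5, 0.01, eol)
    ```

    Only three simulations are needed here, versus hundreds for a Monte Carlo estimate of the same two moments, which is the computational saving the paper exploits.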

  5. Using WNTR to Model Water Distribution System Resilience ...

    EPA Pesticide Factsheets

    The Water Network Tool for Resilience (WNTR) is a new open-source Python package developed by the U.S. Environmental Protection Agency and Sandia National Laboratories to model and evaluate the resilience of water distribution systems. WNTR can be used to simulate a wide range of disruptive events, including earthquakes, contamination incidents, floods, climate change, and fires. The software includes the EPANET solver as well as a WNTR solver with the ability to model pressure-driven demand hydraulics, pipe breaks, component degradation and failure, changes to supply and demand, and cascading failure. Damage to individual components in the network (e.g., pipes and tanks) can be selected probabilistically using fragility curves. WNTR can also simulate different types of resilience-enhancing actions, including scheduled pipe repair or replacement, water conservation efforts, addition of back-up power, and use of contamination warning systems. The software can be used to estimate potential damage in a network, evaluate preparedness, prioritize repair strategies, and identify worst-case scenarios. As a Python package, WNTR takes advantage of many existing Python capabilities, including parallel processing of scenarios and graphics. This presentation will outline the modeling components in WNTR, demonstrate their use, give the audience information on how to get started using the code, and invite others to participate in this open-source project.
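    Probabilistic damage selection from a fragility curve, as described above, can be sketched in plain Python. This is a generic illustration, not WNTR's API; the lognormal median and beta values are made-up parameters:

    ```python
    import math
    import random

    def lognormal_fragility(demand, median, beta):
        """Probability of damage given a demand (e.g., peak ground
        acceleration), modeled as a lognormal CDF, the usual form of
        a fragility curve."""
        z = (math.log(demand) - math.log(median)) / beta
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    def sample_damaged(pipes, demand, median=0.5, beta=0.4, seed=1):
        """Select damaged pipes probabilistically from the curve."""
        rng = random.Random(seed)  # seeded for a reproducible scenario
        p = lognormal_fragility(demand, median, beta)
        return [name for name in pipes if rng.random() < p]

    damaged = sample_damaged(["P1", "P2", "P3", "P4"], demand=0.6)
    ```

    Each scenario draw produces a different damaged set, which is why resilience studies of this kind run many sampled scenarios in parallel.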

  6. Spatially distributed modal signals of free shallow membrane shell structronic system

    NASA Astrophysics Data System (ADS)

    Yue, H. H.; Deng, Z. Q.; Tzou, H. S.

    2008-11-01

    Based on smart-material and structronics technology, distributed sensing and control of shell structures have developed rapidly over the last 20 years. This emerging technology has been utilized in aerospace, telecommunication, micro-electromechanical systems and other engineering applications. However, distributed monitoring techniques and the resulting global spatially distributed sensing signals of shallow paraboloidal membrane shells are not clearly understood. In this paper, modeling of a free flexible paraboloidal shell with spatially distributed sensors, micro-sensing signal characteristics, and the placement of distributed piezoelectric sensor patches are investigated based on a new set of assumed mode shape functions. Parametric analysis indicates that signal generation depends on modal membrane strains in the meridional and circumferential directions, the latter being more significant than the former, since all bending strains vanish in membrane shells. This study provides a modeling and analysis technique for distributed sensors laminated on lightweight paraboloidal flexible structures and identifies critical components and regions that generate significant signals.

  7. Polarized radiance distribution measurements of skylight. I. System description and characterization.

    PubMed

    Voss, K J; Liu, Y

    1997-08-20

    A new system to measure the natural skylight polarized radiance distribution has been developed. The system is based on a fish-eye lens, a CCD camera system, and a filter changer. With this system, sequences of images can be combined to determine the linear polarization components of the incident light field. Calibration steps to determine the system's polarization characteristics are described. Comparisons of the radiance measurements of this system and a simple pointing radiometer were made in the field and agreed within 10% for measurements at 560 and 670 nm and within 25% at 860 nm. Polarization tests were done in the laboratory. The accuracy of the intensity measurements is estimated to be 10%, while the accuracy of measurements of elements of the Mueller matrix is estimated to be 2%.
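    Combining images taken through a linear polarizer at several orientations reduces to computing the linear Stokes components. A minimal sketch of that standard algebra (generic Stokes relations, not the paper's calibration procedure):

    ```python
    def stokes_linear(i0, i45, i90, i135):
        """Linear Stokes components from radiance measured through a
        polarizer at 0, 45, 90, and 135 degrees."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)  # total radiance
        s1 = i0 - i90                       # 0/90-degree preference
        s2 = i45 - i135                     # 45/135-degree preference
        return s0, s1, s2

    def dolp(s0, s1, s2):
        """Degree of linear polarization."""
        return (s1 ** 2 + s2 ** 2) ** 0.5 / s0

    # Fully polarized light aligned with the 0-degree axis:
    s0, s1, s2 = stokes_linear(1.0, 0.5, 0.0, 0.5)
    ```

    In an imaging system this computation is applied per pixel across the fish-eye field of view.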

  8. Interaction of dissolution, sorption and biodegradation on transport of BTEX in a saturated groundwater system: Numerical modeling and spatial moment analysis

    NASA Astrophysics Data System (ADS)

    Valsala, Renu; Govindarajan, Suresh Kumar

    2018-06-01

    Interactions of various physical, chemical, and biological transport processes play an important role in deciding the fate and migration of contaminants in groundwater systems. In this study, a numerical investigation of the interaction of various transport processes of BTEX in a saturated groundwater system is carried out. In addition, multi-component dissolution from a residual BTEX source under unsteady flow conditions is incorporated in the modeling framework. The model considers benzene, toluene, ethylbenzene, and xylene dissolving from the residual BTEX source zone to undergo sorption and aerobic biodegradation within the groundwater aquifer. Spatial concentration profiles of dissolved BTEX components under various interacting sorption and biodegradation conditions have been studied. Subsequently, a spatial moment analysis is carried out to analyze the effect of these interacting transport processes on the total dissolved mass and the mobility of dissolved BTEX components. Results from the present numerical study suggest that the interaction of dissolution, sorption, and biodegradation significantly influences the spatial distribution of dissolved BTEX components within the saturated groundwater system. The mobility of dissolved BTEX components is also affected by the interaction of these transport processes.

  9. Operation of U.S. Geological Survey unmanned digital magnetic observatories

    USGS Publications Warehouse

    Wilson, L.R.

    1990-01-01

    The precision and continuity of data recorded by unmanned digital magnetic observatories depend on the type of data acquisition equipment used and the operating procedures employed. Three generations of observatory systems used by the U.S. Geological Survey are described. A table listing the frequency of component failures in the current observatory system has been compiled for a 54-month period of operation. Component failures were generally mechanical or caused by lightning. The average data loss per month for 13 observatories operating a combined total of 637 months was 9%. Frequency distributions of data loss intervals show that intervals of less than 1 h occur most often. Installation of the third-generation system will begin in 1988. Its configuration will eliminate most of the mechanical problems, and its components should be less susceptible to lightning. A quasi-absolute coil-proton system will be added to obtain baseline control for component variation data twice daily. Observatory data, diagnostics, and magnetic activity indices will be collected at 12-min intervals via satellite at Golden, Colorado. An improvement in the quality and continuity of data obtained with the new system is expected.

  10. Thermal Signature Identification System (TheSIS)

    NASA Technical Reports Server (NTRS)

    Merritt, Scott; Bean, Brian

    2015-01-01

    We characterize both nonlinear and high order linear responses of fiber-optic and optoelectronic components using spread spectrum temperature cycling methods. This Thermal Signature Identification System (TheSIS) provides much more detail than conventional narrowband or quasi-static temperature profiling methods. This detail allows us to match components more thoroughly, detect subtle reversible shifts in performance, and investigate the cause of instabilities or irreversible changes. In particular, we create parameterized models of athermal fiber Bragg gratings (FBGs), delay line interferometers (DLIs), and distributed feedback (DFB) lasers, then subject the alternative models to selection via the Akaike Information Criterion (AIC). Detailed pairing of components, e.g. FBGs, is accomplished by means of weighted distance metrics or norms, rather than on the basis of a single parameter, such as center wavelength.
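    Model selection via the Akaike Information Criterion, as used in TheSIS, can be sketched generically. The least-squares AIC form and the example fit numbers below are illustrative assumptions, not values from the paper:

    ```python
    import math

    def aic(n, rss, k):
        """Akaike Information Criterion for a least-squares fit:
        n samples, residual sum of squares rss, k free parameters."""
        return n * math.log(rss / n) + 2 * k

    # Hypothetical competition between two candidate response models:
    # the cubic fits slightly better but pays a 2-parameter penalty.
    n = 100
    models = {"linear": aic(n, 4.0, 2), "cubic": aic(n, 3.9, 4)}
    best = min(models, key=models.get)
    ```

    The penalty term 2k is what lets AIC reject an over-parameterized model whose extra terms buy only a marginal reduction in residual error.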

  11. The RS CVn Binary HD 155555: A Comparative Study of the Atmospheres for the Two Component Stars

    NASA Technical Reports Server (NTRS)

    Airapetian, V. S.; Dempsey, R. C.

    1997-01-01

    We present GHRS/HST observations of the RS CVn binary system HD 155555. Several key UV emission lines (Fe XXI, Si IV, O V, C IV) have been analyzed to provide information about the heating rate throughout the atmosphere, from the chromosphere to the corona. We show that both the G and K components reveal features of a chromosphere, transition region, and corona. The emission measure distribution as a function of temperature for both components is derived and compared with the RS CVn system HR 1099 and the Sun. The transition region and coronal lines of both stars show nonthermal broadenings of approx. 20-30 km/s. Possible physical implications for coronal heating mechanisms are discussed.

  12. Microelectromechanical Systems

    NASA Technical Reports Server (NTRS)

    Gabriel, Kaigham J.

    1995-01-01

    Micro-electromechanical systems (MEMS) is an enabling technology that merges computation and communication with sensing and actuation to change the way people and machines interact with the physical world. MEMS is a manufacturing technology that will impact widespread applications including: miniature inertial measurement units for competent munitions and personal navigation; distributed unattended sensors; mass data storage devices; miniature analytical instruments; embedded pressure sensors; non-invasive biomedical sensors; fiber-optic components and networks; distributed aerodynamic control; and on-demand structural strength. The long-term goal of ARPA's MEMS program is to merge information processing with sensing and actuation to realize new systems and strategies for both perceiving and controlling systems, processes, and the environment. The MEMS program has three major thrusts: advanced devices and processes, system design, and infrastructure.

  13. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Mather, Barry A

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of the historical high-speed data sets, the utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
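    One common multi-resolution analysis is the Haar wavelet decomposition, which splits a load trace into a slow trend plus detail (variability) components at successive scales. A minimal sketch under that assumption (the abstract does not specify which wavelet the authors used):

    ```python
    def haar_step(signal):
        """One level of the Haar transform: split a signal into a
        low-frequency approximation and high-frequency detail."""
        approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
        return approx, detail

    def multiresolution(signal, levels):
        """Repeatedly extract detail components; the final approximation
        holds the slow trend, the details hold the variability classes."""
        details = []
        for _ in range(levels):
            signal, d = haar_step(signal)
            details.append(d)
        return signal, details

    # Toy 8-sample load trace, decomposed over two levels.
    trend, details = multiresolution([4, 6, 10, 12, 8, 6, 5, 3], 2)
    ```

    Synthetic traces can then be built by recombining a trend with detail components resampled from the library.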

  14. Description of a 20 kilohertz power distribution system

    NASA Technical Reports Server (NTRS)

    Hansen, I. G.

    1986-01-01

    A single-phase, 440 VRMS, 20 kHz power distribution system with a regulated sinusoidal waveform is discussed. A single-phase power system minimizes the wiring, sensing, and control complexities required in a multi-sourced, redundantly distributed power system. The single phase addresses only the distribution link; techniques for accommodating multiphase, lower-frequency inputs and outputs are described. While the 440 V operating potential was initially selected for aircraft operating below 50,000 ft, this potential also appears suitable for space power systems. This voltage choice recognizes a reasonable upper limit for semiconductor ratings, yet permits direct synthesis of 220 V, 3-phase power. The 20 kHz operating frequency was selected to be above the range of audibility and to minimize the weight of reactive components, yet allow the construction of single power stages of 25 to 30 kW. The regulated sinusoidal distribution system has several advantages. With a regulated voltage, most ac/dc conversions involve rather simple transformer-rectifier applications. A sinusoidal distribution system, when used in conjunction with zero-crossing switching, represents a minimal source of EMI. The present state of 20 kHz power technology includes computer control of voltage and/or frequency, low-inductance cable, current-limiting circuit protection, bi-directional power flow, and motor/generator operation using standard induction machines. A status update and description of each of these items and their significance is presented.

  16. Ad hoc Laser networks component technology for modular spacecraft

    NASA Astrophysics Data System (ADS)

    Huang, Xiujun; Shi, Dele; Ma, Zongfeng; Shen, Jingshi

    2016-03-01

    Distributed reconfigurable satellites are a new kind of spacecraft system based on a flexible platform of modularization and standardization. Based on an analysis of data flow between spacecraft modules, this paper proposes an ad hoc laser network architecture that combines a low-speed control network with a high-speed load network in a microwave-laser communication mode, without a mesh topology, to improve the flexibility of the network. Ad hoc laser network component technology was developed, and related performance tests and experiments were carried out. The results showed that ad hoc laser network components can meet the networking demands between the modules of future spacecraft.

  17. Ad hoc laser networks component technology for modular spacecraft

    NASA Astrophysics Data System (ADS)

    Huang, Xiujun; Shi, Dele; Shen, Jingshi

    2017-10-01

    Distributed reconfigurable satellites are a new kind of spacecraft system based on a flexible platform of modularization and standardization. Based on an analysis of data flow between spacecraft modules, this paper proposes an ad hoc laser network architecture that combines a low-speed control network with a high-speed load network in a microwave-laser communication mode, without a mesh topology, to improve the flexibility of the network. Ad hoc laser network component technology was developed, and related performance tests and experiments were carried out. The results showed that ad hoc laser network components can meet the networking demands between the modules of future spacecraft.

  18. Solid cryogen: a cooling system for future MgB2 MRI magnet

    PubMed Central

    Patel, Dipak; Hossain, Md Shahriar Al; Qiu, Wenbin; Jie, Hyunseock; Yamauchi, Yusuke; Maeda, Minoru; Tomsic, Mike; Choi, Seyong; Kim, Jung Ho

    2017-01-01

    An efficient cooling system and a superconducting magnet are essential components of magnetic resonance imaging (MRI) technology. Herein, we report a solid nitrogen (SN2) cooling system as a valuable cryogenic feature, targeted at easy usability and stable operation under unreliable power source conditions, in conjunction with a magnesium diboride (MgB2) superconducting magnet. The MgB2/SN2 cooling system was first designed with the aid of a finite element analysis simulation, and a demonstrator coil was then tested empirically under the same conditions. In the SN2 cooling system design, a wide temperature distribution across the SN2 chamber was observed due to the low thermal conductivity of the stainless steel components. To overcome this, a copper flange was introduced to enhance the temperature uniformity of the SN2 chamber. In the coil testing, an operating current as high as 200 A was applied at 28 K (below the critical current) without any operating or thermal issues. This work was performed to further the development of SN2-cooled MgB2 superconducting coils for MRI applications. PMID:28251984

  19. VIDAC; A New Technology for Increasing the Effectiveness of Television Distribution Networks: Report on a Feasibility Study of a Central Library "Integrated Media" Satellite Delivery System.

    ERIC Educational Resources Information Center

    Diambra, Henry M.; And Others

    VIDAC (Video Audio Compressed), a new technology based upon non-real-time transmission of audiovisual information via conventional television systems, has been invented by the Westinghouse Electric Corporation. This system permits time compression during storage and transmission of the audio component of a still visual-narrative audio…

  20. School Finance: A Primer. A Practical Guide to the Structural Components of, Alternative Approaches to, and Policy Questions about State School Finance Systems.

    ERIC Educational Resources Information Center

    Augenblick, John; And Others

    Although school funding structures are similar in many ways across the states, no two states have school finance systems that are precisely the same. School finance systems, which are used to achieve multiple objectives, must consider the characteristics of numerous school districts, distribute large amounts of money, and have developed incrementally…

  1. Distributed Application of the Unified Noah LSM with Hydrologic Flow Routing on an Appalachian Headwater Basin

    NASA Astrophysics Data System (ADS)

    Garcia, M.; Kumar, S.; Gochis, D.; Yates, D.; McHenry, J.; Burnet, T.; Coats, C.; Condrey, J.

    2006-05-01

    Collaboration between scientists at UMBC-GEST and NASA-GSFC, the NCAR Research Applications Laboratory (RAL), and Baron Advanced Meteorological Services (BAMS) has produced a modeling framework for the application of traditional land surface models (LSMs) in a distributed hydrologic system that can be used for diagnosis and prediction of routed stream discharge hydrographs. This collaboration is oriented toward near-term system implementation across Romania for flood and flash-flood analysis and forecasting as part of the World Bank-funded Destructive Waters Abatement (DESWAT) program. Meteorological forcing from surface observations, model analyses, and numerical forecasts is employed in the NASA-GSFC Land Information System (LIS) to drive the Unified Noah LSM with Noah-Distributed components, stream network delineation, and routing schemes original to this work. The Unified Noah LSM is the outgrowth of a joint modeling effort between several research partners including NCAR, the NOAA National Centers for Environmental Prediction (NCEP), and the Air Force Weather Agency (AFWA). At NCAR, hydrologically oriented extensions to the Noah LSM have been developed for LSM applications in a distributed domain in order to address the lateral redistribution of soil moisture by surface and subsurface flow processes. These advancements have been integrated into LIS and coupled with an original framework for hydraulic channel network definition and specification, linkages with the Noah-Distributed overland and subsurface flow framework, and distributed cell-to-cell (or link-node) hydraulic routing. This poster presents an overview of the system components and their organization, as well as results of the first U.S. case study performed with this system under various configurations.
The case study simulated precipitation events over a headwater basin in the southern Appalachian Mountains in October 2005 following the landfall of Tropical Storm Tammy in South Carolina. These events followed a long dry period in the region, allowing demonstration of watershed response to strong precipitation forcing under nearly ideal and easily specified initial conditions. The results presented here compare simulated and observed streamflow at various locations in the test watershed using a selection of routing methods.

  2. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  3. Game-Theoretic strategies for systems of components using product-form utilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Ma, Cheng-Yu; Hausken, K.

    Many critical infrastructures are composed of multiple systems of components which are correlated, so that disruptions to one may propagate to others. We consider such infrastructures with correlations characterized in two ways: (i) an aggregate failure correlation function specifies the conditional failure probability of the infrastructure given the failure of an individual system, and (ii) a pairwise correlation function between two systems specifies the failure probability of one system given the failure of the other. We formulate a game for ensuring the resilience of the infrastructure, wherein the utility functions of the provider and attacker are products of an infrastructure survival probability term and a cost term, both expressed in terms of the numbers of system components attacked and reinforced. The survival probabilities of individual systems satisfy first-order differential conditions that lead to simple Nash equilibrium conditions. We then derive sensitivity functions that highlight the dependence of infrastructure resilience on the cost terms, correlation functions, and individual system survival probabilities. We apply these results to simplified models of distributed cloud computing and energy grid infrastructures.
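    The product-form structure (survival probability times a cost term) makes such equilibria easy to explore numerically. A toy sketch with made-up survival and cost functions, not the paper's model: each player chooses how many components to attack or reinforce, and a grid search finds a pure-strategy Nash equilibrium as a mutual best response.

    ```python
    def provider_utility(x, y, p0=0.9, cost=0.05):
        """Hypothetical product-form utility: survival probability of the
        system times a cost term, for y components reinforced against
        x components attacked."""
        survival = p0 * (1.0 + y) / (1.0 + x + y)
        return survival * (1.0 - cost * y)

    def attacker_utility(x, y, p0=0.9, cost=0.05):
        """Attacker gains from failure, discounted by attack cost."""
        survival = p0 * (1.0 + y) / (1.0 + x + y)
        return (1.0 - survival) * (1.0 - cost * x)

    def pure_nash(max_units=10):
        """Grid search for a pure-strategy Nash equilibrium: a pair
        (x, y) where each choice is a best response to the other."""
        for x in range(max_units + 1):
            for y in range(max_units + 1):
                best_x = max(range(max_units + 1),
                             key=lambda a: attacker_utility(a, y))
                best_y = max(range(max_units + 1),
                             key=lambda d: provider_utility(x, d))
                if x == best_x and y == best_y:
                    return x, y
        return None

    eq = pure_nash()
    ```

    The paper derives its equilibrium conditions analytically from first-order conditions; the brute-force search here only illustrates the fixed-point definition.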

  4. A Proposed Information Architecture for Telehealth System Interoperability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Craft, R.L.; Funkhouser, D.R.; Gallagher, L.K.

    1999-04-20

    We propose an object-oriented information architecture for telemedicine systems that promotes secure 'plug-and-play' interaction between system components through standardized interfaces, communication protocols, messaging formats, and data definitions. In this architecture, each component functions as a black box, and components plug together in a 'lego-like' fashion to achieve the desired device or system functionality. Telemedicine systems today rely increasingly on distributed, collaborative information technology during the care delivery process. While these leading-edge systems are bellwethers for highly advanced telemedicine, most are custom-designed and do not interoperate with other commercial offerings. Users are limited to the set of functionality that a single vendor provides and must often pay high prices to obtain this functionality, since vendors in this marketplace must deliver entire systems in order to compete. Besides increasing corporate research and development costs, this inhibits the ability of the user to make intelligent purchasing decisions regarding best-of-breed technologies. This paper proposes a reference architecture for plug-and-play telemedicine systems that addresses these issues.

  5. Capillary Discharge Thruster Experiments and Modeling (Briefing Charts)

    DTIC Science & Technology

    2016-06-01

    R. S. Martin (ERC Inc.), In-Space Propulsion Branch, Air Force Research Laboratory, Edwards Air Force Base, CA, USA, June 2016. Briefing charts on electric propulsion models and experiments: spacecraft-propulsion-relevant plasmas, from Hall thrusters to plumes and fluxes on components, including complex reaction physics. Distribution A: approved for public release; distribution unlimited (PA# 16279).

  6. Two takes on the social brain: a comparison of theory of mind tasks.

    PubMed

    Gobbini, Maria Ida; Koralek, Aaron C; Bryan, Ronald E; Montgomery, Kimberly J; Haxby, James V

    2007-11-01

    We compared two tasks that are widely used in research on mentalizing--false belief stories and animations of rigid geometric shapes that depict social interactions--to investigate whether the neural systems that mediate the representation of others' mental states are consistent across these tasks. Whereas false belief stories activated primarily the anterior paracingulate cortex (APC), the posterior cingulate cortex/precuneus (PCC/PC), and the temporo-parietal junction (TPJ)--components of the distributed neural system for theory of mind (ToM)--the social animations activated an extensive region along nearly the full extent of the superior temporal sulcus, including a locus in the posterior superior temporal sulcus (pSTS), as well as the frontal operculum and inferior parietal lobule (IPL)--components of the distributed neural system for action understanding--and the fusiform gyrus. These results suggest that the representation of covert mental states that may predict behavior and the representation of intentions that are implied by perceived actions involve distinct neural systems. These results show that the TPJ and the pSTS play dissociable roles in mentalizing and are parts of different distributed neural systems. Because the social animations do not depict articulated body movements, these results also highlight that the perception of the kinematics of actions is not necessary to activate the mirror neuron system, suggesting that this system plays a general role in the representation of intentions and goals of actions. Furthermore, these results suggest that the fusiform gyrus plays a general role in the representation of visual stimuli that signify agency, independent of visual form.

  7. Software packager user's guide

    NASA Technical Reports Server (NTRS)

    Callahan, John R.

    1995-01-01

    Software integration is a growing area of concern for many programmers and software managers because the need to build new programs quickly from existing components is greater than ever. This includes building versions of software products for multiple hardware platforms and operating systems, building programs from components written in different languages, and building systems from components that must execute on different machines in a distributed network. The goal of software integration is to make building new programs from existing components more seamless -- programmers should pay minimal attention to the underlying configuration issues involved. Libraries of reusable components and classes are important tools but only partial solutions to software development problems. Even though software components may have compatible interfaces, there may be other reasons, such as differences between execution environments, why they cannot be integrated. Often, components must be adapted or reimplemented to fit into another application because of implementation differences -- they are implemented in different programming languages, dependent on different operating system resources, or must execute on different physical machines. The software packager is a tool that allows programmers to deal with interfaces between software components and ignore complex integration details. The packager takes modular descriptions of the structure of a software system written in the package specification language and produces an integration program in the form of a makefile. If complex integration tools are needed to integrate a set of components, such as remote procedure call stubs, their use is implied by the packager automatically and stub generation tools are invoked in the corresponding makefile. The programmer deals only with the components themselves and not the details of how to build the system on any given platform.
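    The packager idea, turning a declarative description of a system's structure into an integration makefile, can be sketched generically. The dictionary spec below is a toy stand-in, not the actual package specification language:

    ```python
    def makefile_from_spec(spec):
        """Emit makefile text from a toy specification mapping each
        program to the C source files it is built from."""
        lines = []
        for target, sources in spec.items():
            objs = " ".join(s.replace(".c", ".o") for s in sources)
            lines.append(f"{target}: {objs}")            # link rule
            lines.append(f"\t$(CC) -o {target} {objs}")
            for src in sources:                          # compile rules
                obj = src.replace(".c", ".o")
                lines.append(f"{obj}: {src}")
                lines.append(f"\t$(CC) -c {src}")
        return "\n".join(lines) + "\n"

    mk = makefile_from_spec({"app": ["main.c", "net.c"]})
    ```

    A real packager would additionally detect when components need adapters, such as RPC stubs for cross-machine calls, and emit rules that invoke the stub generators.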

  8. Continuous high speed coherent one-way quantum key distribution.

    PubMed

    Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo

    2009-08-03

Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high performance detection; and integration, both at the component level and with standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network, where we obtained average real-time distribution rates of 2.5 bps over 3 hours.

  9. Dynamic approach to description of entrance channel effects in angular distributions of fission fragments

    NASA Astrophysics Data System (ADS)

    Eremenko, D. O.; Drozdov, V. A.; Fotina, O. V.; Platonov, S. Yu.; Yuminov, O. A.

    2016-07-01

Background: It is well known that the anomalous behavior of angular anisotropies of fission fragments at sub- and near-barrier energies is associated with a memory of conditions in the entrance channel of the heavy-ion reactions, particularly, deformations and spins of colliding nuclei that determine the initial distributions for the components of the total angular momentum over the symmetry axis of the fissioning system and the beam axis. Purpose: We develop a new dynamic approach, which allows the description of the memory effects in the fission fragment angular distributions and provides new information on fusion and fission dynamics. Methods: The approach is based on the dynamic model of the fission fragment angular distributions which takes into account stochastic aspects of nuclear fission and thermal fluctuations for the tilting mode that is characterized by the projection of the total angular momentum onto the symmetry axis of the fissioning system. A second basis of our approach is the quantum mechanical method used to calculate the initial distributions over the components of the total angular momentum of the nuclear system immediately following complete fusion. Results: A method is suggested for calculating the initial distributions of the total angular momentum projection onto the symmetry axis for the nuclear systems formed in the reactions of complete fusion of deformed nuclei with spins. The angular distributions of fission fragments for the 16O + 232Th, 12C + 235,236,238U, and 13C + 235U reactions have been analyzed within the dynamic approach over a range of sub- and above-barrier energies. The analysis allowed us to determine the relaxation time for the tilting mode and the fraction of fission events occurring in times not larger than the relaxation time for the tilting mode. Conclusions: It is shown that the memory effects play an important role in the formation of the angular distributions of fission fragments for the reactions induced by heavy ions.
The approach developed for analysis of the effects is a suitable tool to get insight into the complete fusion-fission dynamics, in particular, to investigate the mechanism of the complete fusion and fission time scale.

  10. The dynamics of spin stabilized spacecraft with movable appendages, part 1

    NASA Technical Reports Server (NTRS)

    Bainum, P. M.; Sellappan, R.

    1975-01-01

    The motion and stability of spin stabilized spacecraft with movable external appendages are treated both analytically and numerically. The two basic types of appendages considered are: (1) a telescoping type of varying length and (2) a hinged type of fixed length whose orientation with respect to the main part of the spacecraft can vary. Two classes of telescoping appendages are considered: (a) where an end mass is mounted at the end of an (assumed) massless boom; and (b) where the appendage is assumed to consist of a uniformly distributed homogeneous mass throughout its length. For the telescoping system Eulerian equations of motion are developed. During all deployment sequences it is assumed that the transverse component of angular momentum is much smaller than the component along the major spin axis. Closed form analytical solutions for the time response of the transverse components of angular velocities are obtained when the spacecraft hub has a nearly spherical mass distribution.
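The transverse angular-velocity responses discussed above are solutions of the Eulerian equations of rigid-body motion. As a hedged toy sketch only (constant inertias, torque-free motion, forward-Euler integration; it omits the time-varying appendage terms that are the paper's actual subject), the equations can be integrated numerically:

```python
import math

def euler_free(I, w0, dt, steps):
    # Torque-free Euler equations: I1*w1' = (I2 - I3)*w2*w3, and cyclic.
    I1, I2, I3 = I
    w1, w2, w3 = w0
    for _ in range(steps):
        dw1 = (I2 - I3) / I1 * w2 * w3
        dw2 = (I3 - I1) / I2 * w3 * w1
        dw3 = (I1 - I2) / I3 * w1 * w2
        w1, w2, w3 = w1 + dw1 * dt, w2 + dw2 * dt, w3 + dw3 * dt
    return w1, w2, w3
```

For an axisymmetric body (I1 = I2) the spin-axis rate w3 stays constant and a small transverse component precesses at (I3 - I1)/I1 * w3, which mirrors the closed-form behavior the abstract refers to.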

  11. Modelling diameter distributions of two-cohort forest stands with various proportions of dominant species: a two-component mixture model approach.

    Treesearch

    Rafal Podlaski; Francis Roesch

    2014-01-01

    In recent years finite-mixture models have been employed to approximate and model empirical diameter at breast height (DBH) distributions. We used two-component mixtures of either the Weibull distribution or the gamma distribution for describing the DBH distributions of mixed-species, two-cohort forest stands, to analyse the relationships between the DBH components,...
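A two-component mixture of the kind described evaluates the DBH density as a weighted sum of two Weibull (or gamma) components, one per cohort. A minimal sketch of the Weibull case (the shape/scale values below are illustrative assumptions, not values fitted to the study's stands):

```python
import math

def weibull_pdf(x, k, lam):
    # two-parameter Weibull density (shape k, scale lam), defined for x > 0
    return (k / lam) * (x / lam) ** (k - 1.0) * math.exp(-((x / lam) ** k))

def dbh_mixture_pdf(x, w, p1, p2):
    # weighted sum of two Weibull components; w is the first cohort's weight
    return w * weibull_pdf(x, *p1) + (1.0 - w) * weibull_pdf(x, *p2)
```

Because the weights sum to one and each component is a proper density, the mixture integrates to one regardless of the parameter choices.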

  12. The effects of acid precipitation runoff episodes on reservoir and tapwater quality in an Appalachian Mountain water supply.

    PubMed Central

    Sharpe, W E; DeWalle, D R

    1990-01-01

The aluminum concentration and Ryznar Index increased and the pH decreased in a small Appalachian water supply reservoir following acid precipitation runoff episodes. Concomitant increases in tapwater aluminum and decreases in tapwater pH were also observed at two homes in the water distribution system. Lead concentrations in the tapwater of one home frequently exceeded recommended levels, although spatial and temporal variation in tapwater copper and lead concentrations was considerable. Since source water and reservoir water copper and lead concentrations were much lower, the increased copper and lead concentrations in tapwater were attributed to corrosion of household plumbing. Tapwater copper concentration correlated well with tapwater pH and tapwater temperature. Asbestos fibers were not detected in tapwater. The asbestos-cement pipe in the water distribution system was protected by a spontaneous metallic coating that inhibited fiber release from the pipe. Several simultaneous reactions were hypothesized to be taking place in the distribution system that involved corrosion of metallic components and coating of asbestos-cement pipe components in part with corrosion products and in part by cations of watershed origin. Greater water quality changes might be expected in areas of higher atmospheric deposition. PMID:2088742

  13. Effects of healthy aging and early stage dementia of the Alzheimer's type on components of response time distributions in three attention tasks.

    PubMed

    Tse, Chi-Shing; Balota, David A; Yap, Melvin J; Duchek, Janet M; McCabe, David P

    2010-05-01

The characteristics of response time (RT) distributions beyond measures of central tendency were explored in 3 attention tasks across groups of young adults, healthy older adults, and individuals with very mild dementia of the Alzheimer's type (DAT). Participants were administered computerized Stroop, Simon, and switching tasks, along with psychometric tasks that tap various cognitive abilities and a standard personality inventory (NEO-FFI). Ex-Gaussian (and Vincentile) analyses were used to capture the characteristics of the RT distributions for each participant across the 3 tasks, which afforded 3 components: mu and sigma (mean and standard deviation of the modal portion of the distribution) and tau (the positive tail of the distribution). The results indicated that across all 3 attention tasks, healthy aging produced large changes in the central tendency mu parameter of the distribution along with some change in sigma and tau (mean ηp² = .17, .08, and .04, respectively). In contrast, early stage DAT primarily produced an increase in the tau component (mean ηp² = .06). Tau was also correlated with the psychometric measures of episodic/semantic memory, working memory, and processing speed, and with the personality traits of neuroticism and conscientiousness. Structural equation modeling indicated a unique relation between a latent tau construct (-.90), as opposed to sigma (-.09) and mu constructs (.24), with working memory measures. The results suggest a critical role of attentional control systems in discriminating healthy aging from early stage DAT and the utility of RT distribution analyses to better specify the nature of such change.

  14. Model documentation: Natural gas transmission and distribution model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-02-17

The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. The NGTDM is the model within the NEMS that represents the transmission, distribution, and pricing of natural gas. The model also includes representations of the end-use demand for natural gas, the production of domestic natural gas, and the availability of natural gas traded on the international market based on information received from other NEMS models. The NGTDM determines the flow of natural gas in an aggregate, domestic pipeline network, connecting domestic and foreign supply regions with 12 demand regions. The methodology employed allows the analysis of impacts of regional capacity constraints in the interstate natural gas pipeline network and the identification of pipeline capacity expansion requirements. There is an explicit representation of core and noncore markets for natural gas transmission and distribution services, and the key components of pipeline tariffs are represented in a pricing algorithm. Natural gas pricing and flow patterns are derived by obtaining a market equilibrium across the three main elements of the natural gas market: the supply element, the demand element, and the transmission and distribution network that links them. The NGTDM consists of four modules: the Annual Flow Module, the Capacity Expansion Module, the Pipeline Tariff Module, and the Distributor Tariff Module. A model abstract is provided in Appendix A.

  15. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project

    PubMed Central

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2013-01-01

    Background The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. Objective This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. Methods We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Findings Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax. PMID:23792324

  16. The distribution of cigarette prices under different tax structures: findings from the International Tobacco Control Policy Evaluation (ITC) Project.

    PubMed

    Shang, Ce; Chaloupka, Frank J; Zahra, Nahleen; Fong, Geoffrey T

    2014-03-01

    The distribution of cigarette prices has rarely been studied and compared under different tax structures. Descriptive evidence on price distributions by countries can shed light on opportunities for tax avoidance and brand switching under different tobacco tax structures, which could impact the effectiveness of increased taxation in reducing smoking. This paper aims to describe the distribution of cigarette prices by countries and to compare these distributions based on the tobacco tax structure in these countries. We employed data for 16 countries taken from the International Tobacco Control Policy Evaluation Project to construct survey-derived cigarette prices for each country. Self-reported prices were weighted by cigarette consumption and described using a comprehensive set of statistics. We then compared these statistics for cigarette prices under different tax structures. In particular, countries of similar income levels and countries that impose similar total excise taxes using different tax structures were paired and compared in mean and variance using a two-sample comparison test. Our investigation illustrates that, compared with specific uniform taxation, other tax structures, such as ad valorem uniform taxation, mixed (a tax system using ad valorem and specific taxes) uniform taxation, and tiered tax structures of specific, ad valorem and mixed taxation tend to have price distributions with greater variability. Countries that rely heavily on ad valorem and tiered taxes also tend to have greater price variability around the median. Among mixed taxation systems, countries that rely more heavily on the ad valorem component tend to have greater price variability than countries that rely more heavily on the specific component. In countries with tiered tax systems, cigarette prices are skewed more towards lower prices than are prices under uniform tax systems. 
The analyses presented here demonstrate that more opportunities exist for tax avoidance and brand switching when the tax structure departs from a uniform specific tax.

  17. Quantitative method to assess caries via fluorescence imaging from the perspective of autofluorescence spectral analysis

    NASA Astrophysics Data System (ADS)

    Chen, Q. G.; Zhu, H. H.; Xu, Y.; Lin, B.; Chen, H.

    2015-08-01

    A quantitative method to discriminate caries lesions for a fluorescence imaging system is proposed in this paper. The autofluorescence spectral investigation of 39 teeth samples classified by the International Caries Detection and Assessment System levels was performed at 405 nm excitation. The major differences in the different caries lesions focused on the relative spectral intensity range of 565-750 nm. The spectral parameter, defined as the ratio of wavebands at 565-750 nm to the whole spectral range, was calculated. The image component ratio R/(G + B) of color components was statistically computed by considering the spectral parameters (e.g. autofluorescence, optical filter, and spectral sensitivity) in our fluorescence color imaging system. Results showed that the spectral parameter and image component ratio presented a linear relation. Therefore, the image component ratio was graded as <0.66, 0.66-1.06, 1.06-1.62, and >1.62 to quantitatively classify sound, early decay, established decay, and severe decay tissues, respectively. Finally, the fluorescence images of caries were experimentally obtained, and the corresponding image component ratio distribution was compared with the classification result. A method to determine the numerical grades of caries using a fluorescence imaging system was proposed. This method can be applied to similar imaging systems.
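The grading rule described, thresholding the R/(G + B) component ratio at 0.66, 1.06, and 1.62, can be sketched directly; this is a minimal illustration of the reported cut-points, not the authors' implementation:

```python
def caries_grade(r, g, b):
    # Classify a pixel or region by the R/(G + B) component ratio,
    # using the cut-points reported in the abstract.
    ratio = r / (g + b)
    if ratio < 0.66:
        return "sound"
    elif ratio < 1.06:
        return "early decay"
    elif ratio < 1.62:
        return "established decay"
    return "severe decay"
```

Applied pixelwise to a fluorescence image, the rule yields the quantitative caries map the paper describes.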

  18. Masses of the visual components and black holes in X-ray novae: Effects of proximity of the components

    NASA Astrophysics Data System (ADS)

    Petrov, V. S.; Antokhina, E. A.; Cherepashchuk, A. M.

    2017-05-01

It is shown that the approximation of the complex, tidally distorted shape of a star as a circular disc with local line profiles and a linear limb-darkening law, which is usually applied when deriving equatorial stellar rotation velocities from line profiles, leads to overestimation of the equatorial velocity V_rot sin i and underestimation of the component mass ratio q = M_x/M_v. A formula enabling correction of the effect of these simplifying assumptions on the shape of a star is used to re-determine the mass ratios q and the masses of the black holes M_x and visual components M_v in low-mass X-ray binary systems containing black holes. Taking into account the tidal-rotational distortion of the stellar shape can significantly increase the mass ratios q = M_x/M_v, reducing M_v, while M_x changes only slightly. The resulting distribution of M_v attains its maximum near M_v ≃ 0.35 M_⊙, in disagreement with the results of population synthesis computations realizing standard models for Galactic X-ray novae with black holes. Possible ways to overcome this inconsistency are discussed. The derived distribution of M_x also differs strongly from the mass distribution for massive stars in the Galaxy.

  19. Using a Ternary Diagram to Display a System's Evolving Energy Distribution

    ERIC Educational Resources Information Center

    Brazzle, Bob; Tapp, Anne

    2016-01-01

A ternary diagram is a graphical representation used for systems with three components. Such diagrams are familiar to mineralogists (who typically use them to categorize varieties of solid-solution minerals such as feldspar) but are not yet widely used in the physics community. Last year the lead author began using ternary diagrams in his introductory…

  20. Data Aggregation in Multi-Agent Systems in the Presence of Hybrid Faults

    ERIC Educational Resources Information Center

    Srinivasan, Satish Mahadevan

    2010-01-01

    Data Aggregation (DA) is a set of functions that provide components of a distributed system access to global information for purposes of network management and user services. With the diverse new capabilities that networks can provide, applicability of DA is growing. DA is useful in dealing with multi-value domain information and often requires…

  1. Analytical Solution to the Pneumatic Transient Rod System at ACRR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fehr, Brandon Michael

    2016-01-08

The ACRR pulse is pneumatically driven by nitrogen in a system of pipes, valves and hoses up to the connection of the pneumatic system and mechanical linkages of the transient rod (TR). The main components of the TR pneumatic system are the regulator, accumulator, solenoid valve and piston-cylinder assembly. The purpose of this analysis is to analyze the flow of nitrogen through the TR pneumatic system in order to develop a motion profile of the piston during the pulse and be able to predict the pressure distributions inside both the cylinder and accumulators. The predicted pressure distributions will be validated against pressure transducer data, while the motion profile will be compared to proximity switch data. By predicting the motion of the piston, pulse timing will be determined and provided to the engineers/operators for verification. The motion profile will provide an acceleration distribution to be used in Razorback to more accurately predict reactivity insertion into the system.

  2. Self-Reconfigurable Robots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HENSINGER, DAVID M.; JOHNSTON, GABRIEL A.; HINMAN-SWEENEY, ELAINE M.

    2002-10-01

A distributed reconfigurable micro-robotic system is a collection of unlimited numbers of distributed small, homogeneous robots designed to autonomously organize and reorganize in order to achieve mission-specified geometric shapes and functions. This project investigated the design, control, and planning issues for self-configuring and self-organizing robots. In 2D space, a system consisting of two robots was prototyped and successfully displayed automatic docking/undocking to operate dependently or independently. Additional modules were constructed to display the usefulness of a self-configuring system in various situations. In 3D, a self-reconfiguring robot system of 4 identical modules was built. Each module connects to its neighbors using rotating actuators. An individual component can move in three dimensions on its neighbors. We have also built a self-reconfiguring robot system consisting of a 9-module Crystalline Robot. Each module in this robot is actuated by expansion/contraction. The system is fully distributed, has local communication (to neighbors) capabilities, and it has global sensing capabilities.

  3. Ultrastructural analysis of cell component distribution in the apical cell of Ceratodon protonemata

    NASA Technical Reports Server (NTRS)

    Walker, L. M.; Sack, F. D.

    1995-01-01

A distinctive feature of tip-growing plant cells is that cell components are distributed differentially along the length of the cell, although most ultrastructural analyses have been qualitative. The longitudinal distribution of cell components was studied both qualitatively and quantitatively in the apical cell of dark-grown protonemata of the moss Ceratodon. The first 35 micrometers of the apical cell was analyzed stereologically using transmission electron microscopy. There were four types of distributions along the cell's axis, three of them differential: (1) tubular endoplasmic reticulum was evenly distributed, (2) cisternal endoplasmic reticulum and Golgi vesicles were distributed in a tip-to-base gradient, (3) plastids, vacuoles, and Golgi stacks were enriched in specific areas, although the locations of the enrichments varied, and (4) mitochondria were excluded in the tip-most 5 micrometers and evenly distributed throughout the remaining 30 micrometers. This study provides one of the most comprehensive quantitative, ultrastructural analyses of the distribution of cell components in the apex of any tip-growing plant cell. The finding that almost every component had its own spatial arrangement demonstrates the complexity of the organization and regulation of the distribution of components in tip-growing cells.

  4. Stellar populations, stellar masses and the formation of galaxy bulges and discs at z < 3 in CANDELS

    NASA Astrophysics Data System (ADS)

    Margalef-Bentabol, Berta; Conselice, Christopher J.; Mortlock, Alice; Hartley, Will; Duncan, Kenneth; Kennedy, Rebecca; Kocevski, Dale D.; Hasinger, Guenther

    2018-02-01

    We present a multicomponent structural analysis of the internal structure of 1074 high-redshift massive galaxies at 1 < z < 3 from the CANDELS HST Survey. In particular, we examine galaxies best fitted by two structural components, and thus likely forming discs and bulges. We examine the stellar mass, star formation rates (SFRs) and colours of both the inner 'bulge' and outer 'disc' components for these systems using Spectral Energy Distribution (SED) information from the resolved ACS+WFC3 HST imaging. We find that the majority of both inner and outer components lie in the star-forming region of UVJ space (68 and 90 per cent, respectively). However, the inner portions, or the likely forming bulges, are dominated by dusty star formation. Furthermore, we show that the outer components of these systems have a higher SFR than their inner regions, and the ratio of SFR between 'disc' and 'bulge' increases at lower redshifts. Despite the higher SFR of the outer component, the stellar mass ratio of inner to outer component remains constant through this epoch. This suggests that there is mass transfer from the outer to inner components for typical two-component-forming systems, thus building bulges from discs. Finally, using Chandra data we find that the presence of an active galactic nucleus is more common in both one-component spheroid-like galaxies and two-component systems (13 ± 3 and 11 ± 2 per cent) than in one-component disc-like galaxies (3 ± 1 per cent), demonstrating that the formation of a central inner component likely triggers the formation of central massive black holes in these galaxies.

  5. Architecture, Voltage, and Components for a Turboelectric Distributed Propulsion Electric Grid (AVC-TeDP)

    NASA Technical Reports Server (NTRS)

Gemin, Paul; Kupiszewski, Tom; Radun, Arthur; Pan, Yan; Lai, Rixin; Zhang, Di; Wang, Ruxi; Wu, Xinhui; Jiang, Yan; Galioto, Steve

    2015-01-01

The purpose of this effort was to advance the selection, characterization, and modeling of a propulsion electric grid for a Turboelectric Distributed Propulsion (TeDP) system for transport aircraft. The TeDP aircraft would constitute a miniature electric grid with 50 MW or more of total power, two or more generators, redundant transmission lines, and multiple electric motors driving propulsion fans. The study proposed power system architectures, investigated electromechanical and solid state circuit breakers, estimated the impact of the system voltage on system mass, and recommended a DC bus voltage range. The study assumed an all cryogenic power system. Detailed assumptions within the study include hybrid circuit breakers, a two cryogen system, and supercritical cryogens. A dynamic model was developed to investigate control and parameter selection.

  6. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  7. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as empirical fitting functions that describe well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
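For reference, the steady-state Kappa distribution discussed in these two records reduces to a Maxwellian in the limit of large κ. A hedged one-dimensional sketch of that limit (standard gamma-function normalization; κ must stay modest so math.gamma does not overflow):

```python
import math

def kappa_pdf(v, kappa, theta):
    # Normalized 1-D Kappa velocity distribution; tends to a Maxwellian
    # as kappa -> infinity (keep kappa modest so math.gamma does not overflow).
    norm = math.gamma(kappa + 1.0) / (math.sqrt(math.pi * kappa) * theta * math.gamma(kappa + 0.5))
    return norm * (1.0 + v * v / (kappa * theta * theta)) ** (-(kappa + 1.0))

def maxwellian_pdf(v, theta):
    # 1-D Maxwellian with the same thermal speed theta
    return math.exp(-v * v / (theta * theta)) / (math.sqrt(math.pi) * theta)
```

Small κ gives the heavy power-law tails seen in satellite data; already at κ of order 100 the two curves are nearly indistinguishable.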

  8. Cloud-based distributed control of unmanned systems

    NASA Astrophysics Data System (ADS)

    Nguyen, Kim B.; Powell, Darren N.; Yetman, Charles; August, Michael; Alderson, Susan L.; Raney, Christopher J.

    2015-05-01

    Enabling warfighters to efficiently and safely execute dangerous missions, unmanned systems have been an increasingly valuable component in modern warfare. The evolving use of unmanned systems leads to vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the necessary tools to perform one of the following functions: controlling the unmanned vehicle or analyzing and processing the sensory data from unmanned vehicles. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three year effort is to provide three major capabilities: 1) unmanned vehicle control using an open service oriented architecture; 2) data distribution utilizing cloud technologies; 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and Ozone Widget Framework (OWF).

  9. Dimensions of stabident intraosseous perforators and needles.

    PubMed

    Ramlee, R A; Whitworth, J

    2001-09-01

    Problems can be encountered inserting intraosseous injection needles through perforation sites. This in vitro study examined the variability and size compatibility of Stabident intraosseous injection components. The diameters of 40 needles and perforators from a single Stabident kit were measured in triplicate with a toolmaker's microscope. One-way ANOVA revealed that the mean needle diameter (0.411 mm) was significantly narrower than the mean perforator diameter (0.427 mm) (p < 0.001). A frequency distribution plot revealed that needle diameter followed a normal distribution, indicating tight quality control during manufacture. The diameters of the perforators were haphazardly distributed, with a clustering of 15% at the lower limit of the size range. However, on no occasion was the diameter of a perforator smaller than that of an injection needle. We conclude that the components of the Stabident intraosseous anaesthetic system are size-compatible, but there is greater and more haphazard variability in the diameter of perforators than of injection needles.

  10. Future Concepts for Modular, Intelligent Aerospace Power Systems

    NASA Technical Reports Server (NTRS)

    Button, Robert M.; Soeder, James F.

    2004-01-01

    NASA's recent commitment to Human and Robotic Space Exploration underscores the need for more affordable and sustainable systems and missions. Increased use of modularity and on-board intelligent technologies will enable these lofty goals. To support this new paradigm, an advanced technology program to develop modular, intelligent power management and distribution (PMAD) system technologies is presented. The many benefits of developing and including modular functionality in electrical power components and systems are shown to include lower costs and lower mass for highly reliable systems. The details of several modular technologies being developed by NASA are presented, broken down into hierarchical levels. Modularity at the device level, including the use of power electronic building blocks, is shown to lower the development time and costs of new power electronic components.

  11. Design and implementation of a distributed large-scale spatial database system based on J2EE

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia

    2003-03-01

    With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, theories and practices of distributed spatial databases need further improvement because of the contradictions between large-scale spatial data and limited network bandwidth, and between transitory sessions and long transaction processing. Differences and trends among CORBA, .NET and EJB are discussed in detail; afterwards, the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are provided, comprising a GIS client application, web server, GIS application server and spatial data server. Moreover, the design and implementation of the components are explained: the GIS client application based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans). Besides, experiments on the relation between spatial data volume and response time under different conditions are conducted, which prove that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database based on the Internet is presented.

  12. Coherent optical monolithic phased-array antenna steering system

    DOEpatents

    Hietala, Vincent M.; Kravitz, Stanley H.; Vawter, Gregory A.

    1994-01-01

    An optical-based RF beam steering system for phased-array antennas comprising a photonic integrated circuit (PIC). The system is based on optical heterodyning employed to produce microwave phase shifting by a monolithic PIC constructed entirely of passive components. Microwave power and control signal distribution to the antenna is accomplished by optical fiber, permitting physical separation of the PIC and its control functions from the antenna. The system reduces size, weight, complexity, and cost of phased-array antenna systems.

  13. Intelligent Integrated System Health Management

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2012-01-01

    Intelligent Integrated System Health Management (ISHM) is the management of data, information, and knowledge (DIaK) with the purposeful objective of determining the health of a system (Management: storage, distribution, sharing, maintenance, processing, reasoning, and presentation). Presentation discusses: (1) ISHM Capability Development. (1a) ISHM Knowledge Model. (1b) Standards for ISHM Implementation. (1c) ISHM Domain Models (ISHM-DM's). (1d) Intelligent Sensors and Components. (2) ISHM in Systems Design, Engineering, and Integration. (3) Intelligent Control for ISHM-Enabled Systems

  14. Revealing the microstructure of the giant component in random graph ensembles

    NASA Astrophysics Data System (ADS)

    Tishby, Ido; Biham, Ofer; Katzav, Eytan; Kühn, Reimer

    2018-04-01

    The microstructure of the giant component of the Erdős-Rényi network and other configuration model networks is analyzed using generating function methods. While configuration model networks are uncorrelated, the giant component exhibits a degree distribution which is different from the overall degree distribution of the network and includes degree-degree correlations of all orders. We present exact analytical results for the degree distributions as well as higher-order degree-degree correlations on the giant components of configuration model networks. We show that the degree-degree correlations are essential for the integrity of the giant component, in the sense that the degree distribution alone cannot guarantee that it will consist of a single connected component. To demonstrate the importance and broad applicability of these results, we apply them to the study of the distribution of shortest path lengths on the giant component, percolation on the giant component, and spectra of sparse matrices defined on the giant component. We show that by using the degree distribution on the giant component one obtains high quality results for these properties, which can be further improved by taking the degree-degree correlations into account. This suggests that many existing methods, currently used for the analysis of the whole network, can be adapted in a straightforward fashion to yield results conditioned on the giant component.
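    The headline result - that the giant component's degree distribution differs from the network's overall distribution - is easy to reproduce for the Erdős-Rényi (Poisson) case, where the giant-component fraction S solves S = 1 - exp(-cS) and a degree-k node belongs to the giant component with probability 1 - u^k, with u = 1 - S. A minimal sketch of this standard result (not the paper's generating-function machinery):

```python
import math

def giant_component_fraction(c, iters=200):
    """Fixed-point iteration for the Erdos-Renyi giant-component
    equation S = 1 - exp(-c*S), valid for mean degree c > 1."""
    S = 0.5
    for _ in range(iters):
        S = 1.0 - math.exp(-c * S)
    return S

def degree_dist_on_giant(c, kmax=60):
    """Poisson degree distribution conditioned on the giant component:
    a degree-k node joins it with probability 1 - u**k, u = 1 - S."""
    S = giant_component_fraction(c)
    u = 1.0 - S
    pk = [math.exp(-c) * c ** k / math.factorial(k) for k in range(kmax)]
    pk_gc = [p * (1.0 - u ** k) / S for k, p in enumerate(pk)]
    return pk, pk_gc

pk, pk_gc = degree_dist_on_giant(2.0)
mean_all = sum(k * p for k, p in enumerate(pk))
mean_gc = sum(k * p for k, p in enumerate(pk_gc))
# Low-degree nodes are underrepresented on the giant component
# (degree-0 nodes never belong to it), so its mean degree exceeds c:
print(round(mean_all, 3), round(mean_gc, 3))  # → 2.0 2.406
```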

  15. Component Analysis of Remanent Magnetization Curves: A Revisit with a New Model Distribution

    NASA Astrophysics Data System (ADS)

    Zhao, X.; Suganuma, Y.; Fujii, M.

    2017-12-01

    Geological samples often consist of several magnetic components that have distinct origins. As the magnetic components are often indicative of their underlying geological and environmental processes, it is desirable to identify individual components to extract the associated information. This component analysis can be achieved using the so-called unmixing method, which fits a mixture model of a certain end-member model distribution to the measured remanent magnetization curve. In earlier studies, the lognormal, skew generalized Gaussian, and skewed Gaussian distributions have been used as end-member model distributions, applied to the gradient of the remanent magnetization curve. However, gradient curves are sensitive to measurement noise, as differentiation of the measured curve amplifies noise, which can deteriorate the component analysis. Though smoothing or filtering can be applied to reduce the noise before differentiation, their potential to bias the component analysis has rarely been addressed. In this study, we investigated a new model function that can be applied directly to remanent magnetization curves, thereby avoiding the differentiation. The new model function provides a more flexible shape than the lognormal distribution, a merit when modeling the coercivity distribution of complex magnetic components. We applied the unmixing method both to model and measured data, and compared the results with those obtained using other model distributions to better understand their interchangeability, applicability and limitations. The analyses on model data suggest that unmixing methods are inherently sensitive to noise, especially when the number of components is more than two. It is, therefore, recommended to verify the reliability of component analysis by running multiple analyses with synthetic noise. Marine sediments and seafloor rocks were analyzed with the new model distribution. Given the same number of components, the new model distribution provides closer fits than the lognormal distribution, as evidenced by reduced residuals. Moreover, the new unmixing protocol is automated, freeing users from the labor of providing initial parameter guesses, which also helps reduce the subjectivity of component analysis.

  16. System and Method for Providing a Climate Data Analytic Services Application Programming Interface Distribution Package

    NASA Technical Reports Server (NTRS)

    Tamkin, Glenn S. (Inventor); Duffy, Daniel Q. (Inventor); Schnase, John L. (Inventor)

    2016-01-01

    A system, method and computer-readable storage devices for providing a climate data analytic services application programming interface distribution package. The example system can provide various components. The system provides a climate data analytic services application programming interface library that enables software applications running on a client device to invoke the capabilities of a climate data analytic service. The system provides a command-line interface that provides a means of interacting with a climate data analytic service by issuing commands directly to the system's server interface. The system provides sample programs that call on the capabilities of the application programming interface library and can be used as templates for the construction of new client applications. The system can also provide test utilities, build utilities, service integration utilities, and documentation.

  17. Game-theoretic strategies for asymmetric networked systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S.; Ma, Chris Y. T.; Hausken, Kjell

    We consider an infrastructure consisting of a network of systems, each composed of discrete components that can be reinforced at a certain cost to guard against attacks. The network provides the vital connectivity between systems, and hence plays a critical, asymmetric role in the infrastructure operations. We characterize the system-level correlations using the aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network. The survival probabilities of systems and network satisfy first-order differential conditions that capture the component-level correlations. We formulate the problem of ensuring the infrastructure survival as a game between an attacker and a provider, using the sum-form and product-form utility functions, each composed of a survival probability term and a cost term. We derive Nash Equilibrium conditions which provide expressions for individual system survival probabilities, and also the expected capacity specified by the total number of operational components. These expressions differ only in a single term for the sum-form and product-form utilities, despite their significant differences. We apply these results to simplified models of distributed cloud computing infrastructures.

  18. DEVELOPMENT OF A DATA EVALUATION/DECISION SUPPORT SYSTEM FOR REMEDIATION OF SUBSURFACE CONTAMINATION

    EPA Science Inventory

    Subsurface contamination frequently originates from spatially distributed sources of multi-component nonaqueous phase liquids (NAPLs). Such chemicals are typically persistent sources of ground-water contamination that are difficult to characterize. This work addresses the feasi...

  19. Performance of a distributed simultaneous strain and temperature sensor based on a Fabry-Perot laser diode and a dual-stage FBG optical demultiplexer.

    PubMed

    Kim, Suhwan; Kwon, Hyungwoo; Yang, Injae; Lee, Seungho; Kim, Jeehyun; Kang, Shinwon

    2013-11-12

    A simultaneous strain and temperature measurement method using a Fabry-Perot laser diode (FP-LD) and a dual-stage fiber Bragg grating (FBG) optical demultiplexer was applied to a distributed sensor system based on Brillouin optical time domain reflectometry (BOTDR). By using a Kalman filter, we improved the performance of the FP-LD based OTDR, and we decreased the noise using the dual-stage FBG optical demultiplexer. Applying the two developed components to the BOTDR system and using a temperature-compensating algorithm, we successfully demonstrated the simultaneous measurement of strain and temperature distributions under various experimental conditions. The observed errors in the temperature and strain measured with the developed sensing system were 0.6 °C and 50 με, respectively, and the spatial resolution was 1 m.
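    Temperature compensation in dual-measurand Brillouin systems like this is commonly a 2x2 linear inversion: two measurands (frequency shift and power change) each depend linearly on strain and temperature, so the two unknowns follow from Cramer's rule. A sketch with hypothetical sensitivity coefficients, not the paper's calibration:

```python
def solve_strain_temperature(d_nu, d_p, K):
    """Invert the 2x2 sensitivity matrix K that maps (strain, temp)
    to the two measurands (Brillouin frequency shift d_nu, power
    change d_p). Cramer's rule suffices for a 2x2 system."""
    det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
    if det == 0.0:
        raise ValueError("sensitivity matrix is singular")
    strain = (K[1][1] * d_nu - K[0][1] * d_p) / det
    temp = (-K[1][0] * d_nu + K[0][0] * d_p) / det
    return strain, temp

# Hypothetical sensitivities (NOT the paper's calibration):
# rows = measurands, columns = (strain, temperature).
K = [[0.05, 1.10],
     [-0.001, 0.36]]

# Forward-simulate the measurands for a known state, then recover it:
true_strain, true_temp = 100.0, 20.0
d_nu = K[0][0] * true_strain + K[0][1] * true_temp
d_p = K[1][0] * true_strain + K[1][1] * true_temp
strain, temp = solve_strain_temperature(d_nu, d_p, K)
print(round(strain, 6), round(temp, 6))  # → 100.0 20.0
```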

  20. Virtually-synchronous communication based on a weak failure suspector

    NASA Technical Reports Server (NTRS)

    Schiper, Andre; Ricciardi, Aleta

    1993-01-01

    Failure detectors (or, more accurately Failure Suspectors (FS)) appear to be a fundamental service upon which to build fault-tolerant, distributed applications. This paper shows that a FS with very weak semantics (i.e., that delivers failure and recovery information in no specific order) suffices to implement virtually-synchronous communication (VSC) in an asynchronous system subject to process crash failures and network partitions. The VSC paradigm is particularly useful in asynchronous systems and greatly simplifies building fault-tolerant applications that mask failures by replicating processes. We suggest a three-component architecture to implement virtually-synchronous communication: (1) at the lowest level, the FS component; (2) on top of it, a component (2a) that defines new views; and (3) a component (2b) that reliably multicasts messages within a view. The issues covered in this paper also lead to a better understanding of the various membership service semantics proposed in recent literature.

  1. Wearable Stretch Sensors for Motion Measurement of the Wrist Joint Based on Dielectric Elastomers.

    PubMed

    Huang, Bo; Li, Mingyu; Mei, Tao; McCoul, David; Qin, Shihao; Zhao, Zhanfeng; Zhao, Jianwen

    2017-11-23

    Motion capture of the human body potentially holds great significance for exoskeleton robots, human-computer interaction, sports analysis, rehabilitation research, and many other areas. Dielectric elastomer sensors (DESs) are excellent candidates for wearable human motion capture systems because of their intrinsic characteristics of softness, light weight, and compliance. In this paper, DESs were applied to measure all component motions of the wrist joint. Five sensors were mounted at different positions on the wrist, each dedicated to one component motion. To find the best positions to mount the sensors, the distribution of the muscles was analyzed. Even so, the component motions and the deformations of the sensors are coupled; therefore, a decoupling method was developed. With the decoupling algorithm, all component motions can be measured with a precision of 5°, which meets the requirements of general motion capture systems.

  2. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    A problem appears in a company that produces refined sugar: the production floor has not reached the required level of critical machine availability because the machines often suffer breakdowns. This results in sudden losses of production time and production opportunities. The problem can be addressed with the Reliability Engineering method, in which a statistical analysis of historical failure data reveals the pattern of the underlying distribution. The method yields the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data show a lognormal distribution for the flexible hose component and a Weibull distribution for the Teflon cone lifting component, while tests on the mean time to repair (MTTR) data show an exponential distribution for the flexible hose and a Weibull distribution for the Teflon cone lifting component. On the actual replacement schedule of 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960, while the critical Teflon cone lifting component, on its actual replacement schedule of 1944 hours, has a reliability of 0.4083 and an availability of 0.9927.
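    The two quantities the abstract reports - reliability at a replacement interval and steady-state availability - follow from standard reliability-engineering formulas. A sketch with illustrative Weibull parameters and MTTF/MTTR values, not the paper's fitted numbers:

```python
import math

def weibull_reliability(t, beta, eta):
    """Probability a component survives to time t under a Weibull
    time-to-failure model (beta = shape, eta = scale). The parameters
    used below are illustrative, not the paper's fitted values."""
    return math.exp(-((t / eta) ** beta))

def availability(mttf, mttr):
    # Steady-state availability: the long-run fraction of time the
    # machine is up, mean time to failure over total cycle time.
    return mttf / (mttf + mttr)

r720 = weibull_reliability(720.0, beta=1.4, eta=500.0)
a = availability(mttf=1200.0, mttr=5.0)
print(0.0 < r720 < 1.0, round(a, 4))  # → True 0.9959
```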

  3. Integrated Computer System of Management in Logistics

    NASA Astrophysics Data System (ADS)

    Chwesiuk, Krzysztof

    2011-06-01

    This paper aims at presenting a concept of an integrated computer system of management in logistics, particularly in supply and distribution chains. Consequently, the paper includes the basic idea of the concept of computer-based management in logistics and components of the system, such as CAM and CIM systems in production processes, and management systems for storage, materials flow, and for managing transport, forwarding and logistics companies. The platform which integrates computer-aided management systems is that of electronic data interchange.

  4. The Aerospace Energy Systems Laboratory: A BITBUS networking application

    NASA Technical Reports Server (NTRS)

    Glover, Richard D.; Oneill-Rood, Nora

    1989-01-01

    The NASA Ames-Dryden Flight Research Facility developed a computerized aircraft battery servicing facility called the Aerospace Energy Systems Laboratory (AESL). This system employs distributed processing with communications provided by a 2.4-megabit BITBUS local area network. Customized handlers provide real-time status, remote command, and file transfer protocols between a central system running the iRMX-II operating system and ten slave stations running the iRMX-I operating system. The hardware configuration and software components required to implement this BITBUS application are described.

  5. A Comparative Study of the Proposed Models for the Components of the National Health Information System

    PubMed Central

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-01-01

    Introduction: The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, the National Health Information System can improve the quality of the health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system - necessary for better planning and management of the factors influencing its performance - seems essential, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. Information was gathered through library resources and internet searches, and the analysis was expressed using comparative tables and qualitative data. Results: The findings showed three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input (resources and structure) section, all three models require components of management and leadership, planning and program design, staffing, and software and hardware facilities and equipment. In the "process" section, all three models highlight actions ensuring the quality of the health information system, and in the output section, all but the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. 
    Conclusion: The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components in the process and output sections. Therefore, the Health Metrics Network model appears to present the components of the health system most comprehensively across all three sections: input, process and output. PMID:24825937

  6. A comparative study of the proposed models for the components of the national health information system.

    PubMed

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, the National Health Information System can improve the quality of the health data, information and knowledge used to support decision making at all levels and areas of the health sector. Since full identification of the components of this system - necessary for better planning and management of the factors influencing its performance - seems essential, this study comparatively explores different perspectives on its components. This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. Information was gathered through library resources and internet searches, and the analysis was expressed using comparative tables and qualitative data. The findings showed three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input (resources and structure) section, all three models require components of management and leadership, planning and program design, staffing, and software and hardware facilities and equipment. In the "process" section, all three models highlight actions ensuring the quality of the health information system, and in the output section, all but the Lippeveld model consider information products and the use and distribution of information as components of the national health information system. 
    The results showed that all three models discuss the components of health information only briefly in the input section, and the Lippeveld model overlooks the components in the process and output sections. Therefore, the Health Metrics Network model appears to present the components of the health system most comprehensively across all three sections: input, process and output.

  7. Distributed fiber optic system for oil pipeline leakage detection

    NASA Astrophysics Data System (ADS)

    Paranjape, R.; Liu, N.; Rumple, C.; Hara, Elmer H.

    2003-02-01

    We present a novel approach for the detection of leakage in oil pipelines using fiber optic distributed sensors, a presence-of-oil-based actuator, and optical time domain reflectometry (OTDR). While the basic concepts of our approach are well understood, the integration of the components into a complete system is a real-world engineering design problem. Our focus has been on developing the actuator design and testing it using installed dark fiber. Initial results are promising; however, studies of the long-term effects of environmental exposure are still pending.

  8. Logical optimization for database uniformization

    NASA Technical Reports Server (NTRS)

    Grant, J.

    1984-01-01

    Data base uniformization refers to the building of a common user interface facility to support uniform access to any or all of a collection of distributed heterogeneous data bases. Such a system should enable a user, situated anywhere along a set of distributed data bases, to access all of the information in the data bases without having to learn the various data manipulation languages. Furthermore, such a system should leave intact the component data bases, and in particular, their already existing software. A survey of various aspects of the data bases uniformization problem and a proposed solution are presented.

  9. Simulation Enabled Safeguards Assessment Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert Bean; Trond Bjornard; Thomas Larson

    2007-09-01

    It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.

  10. Radial Velocities of 41 Kepler Eclipsing Binaries

    NASA Astrophysics Data System (ADS)

    Matson, Rachel A.; Gies, Douglas R.; Guo, Zhao; Williams, Stephen J.

    2017-12-01

    Eclipsing binaries are vital for directly determining stellar parameters without reliance on models or scaling relations. Spectroscopically derived parameters of detached and semi-detached binaries allow us to determine component masses that can inform theories of stellar and binary evolution. Here we present moderate resolution ground-based spectra of stars in close binary systems with and without (detected) tertiary companions observed by NASA’s Kepler mission and analyzed for eclipse timing variations. We obtain radial velocities and spectroscopic orbits for five single-lined and 35 double-lined systems, and confirm one false positive eclipsing binary. For the double-lined spectroscopic binaries, we also determine individual component masses and examine the mass ratio {M}2/{M}1 distribution, which is dominated by binaries with like-mass pairs and semi-detached classical Algol systems that have undergone mass transfer. Finally, we constrain the mass of the tertiary component for five double-lined binaries with previously detected companions.
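    For the double-lined systems, the mass ratio and minimum masses follow directly from the radial-velocity semi-amplitudes. A sketch using the standard spectroscopic-binary relations with hypothetical semi-amplitudes and period (the numerical constant is the usual Kepler's-third-law factor for K in km/s and P in days; this is not data from the paper):

```python
def mass_ratio(K1, K2):
    """Double-lined spectroscopic binary: M2/M1 = K1/K2, since the
    more massive component has the smaller velocity semi-amplitude."""
    return K1 / K2

def min_masses(K1, K2, P_days, e=0.0):
    """Minimum masses M*sin^3(i) in solar units (K's in km/s, period
    in days); the constant is the standard Kepler's-third-law factor."""
    f = 1.036149e-7 * (1.0 - e * e) ** 1.5 * (K1 + K2) ** 2 * P_days
    return f * K2, f * K1  # (M1 sin^3 i, M2 sin^3 i)

# Hypothetical semi-amplitudes (km/s) and orbital period (days):
q = mass_ratio(85.0, 92.0)
m1, m2 = min_masses(85.0, 92.0, P_days=3.2)
# The mass ratio from the semi-amplitudes and from the minimum
# masses agree by construction:
print(round(q, 3), round(m2 / m1, 3))
```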

  11. Spatial Signal Characteristics of Shallow Paraboloidal Shell Structronic Systems

    NASA Astrophysics Data System (ADS)

    Yue, H. H.; Deng, Z. Q.; Tzou, H. S.

    Based on smart material and structronics technology, distributed sensing and control of shell structures have developed rapidly over the last twenty years. This emerging technology has been utilized in aerospace, telecommunication, micro-electromechanical systems and other engineering applications. However, the distributed monitoring technique and its resulting globally distributed sensing signals for thin flexible membrane shells are not clearly understood. In this paper, the modeling of a free thin paraboloidal shell with spatially distributed sensors, micro-sensing signal characteristics, and the placement of distributed piezoelectric sensor patches are investigated based on a new set of assumed mode shape functions. Parametric analysis indicates that signal generation depends on the modal membrane strains in the meridional and circumferential directions, the latter being more significant than the former, since all bending strains vanish in membrane shells. This study provides a modeling and analysis technique for distributed sensors laminated on lightweight paraboloidal flexible structures and identifies the critical components and regions that generate significant signals.

  12. High-resolution distributed temperature sensing with the multiphoton-timing technique

    NASA Astrophysics Data System (ADS)

    Höbel, M.; Ricka, J.; Wüthrich, M.; Binkert, Th.

    1995-06-01

    We report on a multiphoton-timing distributed temperature sensor (DTS) based on the concept of distributed anti-Stokes Raman thermometry. The sensor combines the advantage of very high spatial resolution (40 cm) with moderate measurement times. In 5 min it is possible to determine the temperature of as many as 4000 points along an optical fiber with an accuracy of ΔT < 2 °C. The new feature of the DTS system is the combination of a fast single-photon avalanche diode with specially designed real-time signal-processing electronics. We discuss various parameters that affect the operation of analog and photon-timing DTS systems. Particular emphasis is put on the consequences of the nonideal behavior of sensor components and the corresponding correction procedures.
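    The physics behind anti-Stokes Raman thermometry is the temperature dependence of the anti-Stokes/Stokes intensity ratio, which can be inverted for the local fiber temperature. An idealized sketch (the ~13.2 THz Raman shift is typical of silica; the wavelengths are illustrative, and real DTS systems apply the correction procedures the abstract mentions):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def antistokes_stokes_ratio(T, delta_nu=1.32e13, lam_s=1.00e-6, lam_as=0.96e-6):
    """Idealized anti-Stokes/Stokes intensity ratio for Raman DTS:
    R(T) = (lam_s/lam_as)**4 * exp(-h*delta_nu / (k*T))."""
    return (lam_s / lam_as) ** 4 * math.exp(-H * delta_nu / (K_B * T))

def temperature_from_ratio(R, delta_nu=1.32e13, lam_s=1.00e-6, lam_as=0.96e-6):
    # Invert the ratio for the local fiber temperature.
    return H * delta_nu / (K_B * math.log((lam_s / lam_as) ** 4 / R))

# Round trip: compute the ratio at a known temperature, then recover it.
T_true = 300.0
R = antistokes_stokes_ratio(T_true)
print(round(temperature_from_ratio(R), 6))  # → 300.0
```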

  13. The decay of triple systems

    NASA Astrophysics Data System (ADS)

    Martynova, A. I.; Orlov, V. V.

    2014-10-01

    Numerical simulations have been carried out in the general three-body problem with equal masses and zero initial velocities, to investigate the distribution of the decay times T based on a representative sample of initial conditions. The distribution has a power-law character on long time scales, f(T) ∝ T^(-α), with α = 1.74. Over small times T < 30 T_cr (T_cr is the mean crossing time for a component of the triple system), a series of local maxima separated by about 1.0 T_cr is observed in the decay-time distribution. These local peaks correspond to zones of decay after one or a few triple encounters. Figures showing the arrangement of these zones in the domain of the initial conditions are presented.
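    A power-law tail exponent like α = 1.74 is typically estimated from simulated decay times with the continuous maximum-likelihood (Hill) estimator. A self-contained sketch on synthetic data (inverse-transform sampling stands in for the N-body simulations):

```python
import math
import random

def powerlaw_mle_alpha(samples, t_min):
    """Continuous power-law maximum-likelihood (Hill) estimator:
    alpha = 1 + n / sum(ln(T / t_min)) over the tail T >= t_min."""
    tail = [t for t in samples if t >= t_min]
    return 1.0 + len(tail) / sum(math.log(t / t_min) for t in tail)

# Draw synthetic decay times from f(T) ~ T^(-1.74) for T >= t_min by
# inverse-transform sampling, then recover the exponent:
random.seed(1)
alpha_true, t_min = 1.74, 30.0
samples = [t_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(200_000)]
alpha_hat = powerlaw_mle_alpha(samples, t_min)
print(abs(alpha_hat - alpha_true) < 0.02)  # → True
```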

  14. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often rely on known distributions, particularly the normal distribution. Yet when the statistical distribution of the given variable is unknown, a new statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerances, estimating the distribution parameters with the percentile method (PM). The findings indicate that, when the distribution of the component data is unknown, the proposed method can expedite the design of component tolerances. Moreover, in the case of assembled sets, a wider tolerance for each component can be utilized while maintaining the same target performance.
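    The generalized lambda distribution is defined directly by its quantile (percentile) function, which is why percentile-based estimation and tolerance-limit calculation are natural with it. A sketch of the Ramberg-Schmeiser form with hypothetical parameters (not the paper's fitted values):

```python
def gld_quantile(u, lam1, lam2, lam3, lam4):
    """Ramberg-Schmeiser generalized lambda distribution, defined by
    its quantile function:
        Q(u) = lam1 + (u**lam3 - (1 - u)**lam4) / lam2
    Its shape flexibility is what makes it usable when the component
    data's distribution is unknown."""
    return lam1 + (u ** lam3 - (1.0 - u) ** lam4) / lam2

# Tolerance limits as extreme percentiles of an assumed fitted GLD
# (0.135% and 99.865% mirror the usual +/-3-sigma coverage points);
# the parameter values are hypothetical:
params = dict(lam1=10.0, lam2=2.0, lam3=0.2, lam4=0.2)
lower = gld_quantile(0.00135, **params)
upper = gld_quantile(0.99865, **params)
print(lower < 10.0 < upper)  # → True
```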

  15. Probabilistic Prediction of Lifetimes of Ceramic Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel N.; Gyekenyesi, John P.; Jadaan, Osama M.; Palfi, Tamas; Powers, Lynn; Reh, Stefan; Baker, Eric H.

    2006-01-01

    ANSYS/CARES/PDS is a software system that combines the ANSYS Probabilistic Design System (PDS) software with a modified version of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) Version 6.0 software. [A prior version of CARES/Life was reported in Program for Evaluation of Reliability of Ceramic Parts (LEW-16018), NASA Tech Briefs, Vol. 20, No. 3 (March 1996), page 28.] CARES/Life models effects of stochastic strength, slow crack growth, and stress distribution on the overall reliability of a ceramic component. The essence of the enhancement in CARES/Life 6.0 is the capability to predict the probability of failure using results from transient finite-element analysis. ANSYS PDS models the effects of uncertainty in material properties, dimensions, and loading on the stress distribution and deformation. ANSYS/CARES/PDS accounts for the effects of probabilistic strength, probabilistic loads, probabilistic material properties, and probabilistic tolerances on the lifetime and reliability of the component. Even failure probability becomes a stochastic quantity that can be tracked as a response variable. ANSYS/CARES/PDS enables tracking of all stochastic quantities in the design space, thereby enabling more precise probabilistic prediction of lifetimes of ceramic components.

  16. Probability density function characterization for aggregated large-scale wind power based on Weibull mixtures

    DOE PAGES

    Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...

    2016-02-02

    Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. With this aim, the present paper focuses on Weibull mixtures to characterize the probability density function (PDF) of aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable for characterizing aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
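    The AIC/BIC comparison the abstract describes can be sketched on synthetic data. The example below draws a stand-in sample from a two-component Weibull mixture (the parameters are illustrative, not fitted to any real wind farm), fits a one-component Weibull with SciPy, and computes both criteria; it is a minimal sketch of the model-selection bookkeeping, not the paper's estimation procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic stand-in for aggregated wind power data: a mixture of two
# Weibull components with illustrative shape/scale parameters.
data = np.concatenate([
    stats.weibull_min.rvs(1.8, scale=4.0, size=700, random_state=rng),
    stats.weibull_min.rvs(6.0, scale=12.0, size=300, random_state=rng),
])

def aic_bic(log_likelihood, k, n):
    """AIC = 2k - 2 ln L;  BIC = k ln n - 2 ln L."""
    return 2 * k - 2 * log_likelihood, k * np.log(n) - 2 * log_likelihood

# One-component Weibull fit (2 free parameters; location pinned at 0).
c, loc, scale = stats.weibull_min.fit(data, floc=0)
logL = np.sum(stats.weibull_min.logpdf(data, c, loc, scale))
aic, bic = aic_bic(logL, k=2, n=len(data))
print(f"one-Weibull fit: AIC = {aic:.1f}, BIC = {bic:.1f}")
```

    A mixture fit would repeat the log-likelihood/criteria computation with k counting all mixture weights, shapes, and scales; the model with the lowest AIC (or BIC, which penalizes parameters more heavily for n > e²) is preferred.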

  17. Conceptual study of superconducting urban area power systems

    NASA Astrophysics Data System (ADS)

    Noe, Mathias; Bach, Robert; Prusseit, Werner; Willén, Dag; Goldacker, Wilfried; Poelchau, Juri; Linke, Christian

    2010-06-01

    Efficient transmission, distribution and usage of electricity are fundamental requirements for providing citizens, societies and economies with essential energy resources. It will be a major future challenge to integrate more sustainable generation resources, to meet growing electricity demand and to renew electricity networks. Research and development on superconducting equipment and components have an important role to play in addressing these challenges. Up to now, most studies on superconducting applications in power systems have concentrated on specific devices such as cables and current limiters. In contrast, the main focus of our study is to show the consequences of large-scale integration of superconducting power equipment in distribution-level urban power systems. Specific objectives are to summarize the state of the art of superconducting power equipment, including cooling systems, and to compare the superconducting power system with conventional solutions in terms of energy and economic efficiency. Several scenarios were considered, starting from the replacement of an existing distribution-level sub-grid up to a fully superconducting urban-area distribution-level power system. One major result is that a fully superconducting urban-area distribution-level power system could be cost competitive with existing solutions in the future. In addition, superconducting power systems offer higher energy efficiency as well as a number of technical advantages such as lower voltage drops and improved stability.

  18. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    NASA Astrophysics Data System (ADS)

    Cowell, Martin Andrew

    The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. Every aspect of the global economy is integrating distributed computing into its industrial systems, and these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensuring the continued operation of these networks, and this need is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes will increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First we consider the energy storage mechanisms of supercapacitors and batteries, and identify favorable characteristics in both reservoir types. We then discuss experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components.
In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to spatially and temporally varying energy availability in order to understand sensor node reliability. Looking to the future, we see an opportunity for further research to implement machine learning algorithms to control the energy resources of distributed computing networks.

  19. Map Matching and Real World Integrated Sensor Data Warehousing (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burton, E.

    2014-02-01

    The inclusion of interlinked temporal and spatial elements within integrated sensor data enables a tremendous degree of flexibility when analyzing multi-component datasets. The presentation illustrates how to warehouse, process, and analyze high-resolution integrated sensor datasets to support complex system analysis at the entity and system levels. The example cases presented utilize in-vehicle sensor system data to assess vehicle performance, while integrating a map matching algorithm to link vehicle data to roads to demonstrate the enhanced analysis possible via interlinking data elements. Furthermore, in addition to the flexibility provided, the examples presented illustrate concepts of maintaining proprietary operational information (Fleet DNA) and privacy of study participants (Transportation Secure Data Center) while producing widely distributed data products. Should real-time operational data be logged at high resolution across multiple infrastructure types, map matched to their associated infrastructure, and distributed employing a similar approach, dependencies between urban-environment infrastructure components could be better understood. This understanding is especially crucial for the cities of the future, where transportation will rely more on grid infrastructure to support its energy demands.

  20. A Search for Binary Systems among the Nearest L Dwarfs

    NASA Astrophysics Data System (ADS)

    Reid, I. Neill; Lewitus, E.; Allen, P. R.; Cruz, Kelle L.; Burgasser, Adam J.

    2006-08-01

    We have used the Near-Infrared Camera and Multi-Object Spectrometer NIC1 camera on the Hubble Space Telescope to obtain high angular resolution images of 52 ultracool dwarfs in the immediate solar neighborhood. Nine systems are resolved as binary, with component separations from 1.5 to 15 AU. Based on current theoretical models and empirical bolometric corrections, all systems have components with similar luminosities and, consequently, high mass ratios, q > 0.8. Limiting analysis to L dwarfs within 20 pc, the observed binary fraction is 12% (+7/-3). Applying Bayesian analysis to our data set, we derive a mass-ratio distribution that peaks strongly at unity. Modeling the semimajor axis distribution as a logarithmic Gaussian, the best fit is centered at log a_0 = 0.8 (a_0 ≈ 6.3 AU), with a (logarithmic) width of ±0.3. The current data are consistent with an overall binary frequency of ~24%. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.

  1. The Self-Organization of a Spoken Word

    PubMed Central

    Holden, John G.; Rajaraman, Srinivasan

    2012-01-01

    Pronunciation time probability density and hazard functions from large speeded word naming data sets were assessed for empirical patterns consistent with multiplicative and reciprocal feedback dynamics – interaction dominant dynamics. Lognormal and inverse power law distributions are associated with multiplicative and interdependent dynamics in many natural systems. Mixtures of lognormal and inverse power law distributions offered better descriptions of the participants' distributions than the ex-Gaussian or ex-Wald – alternatives corresponding to additive, superposed component processes. The evidence for interaction dominant dynamics suggests fundamental links between the observed coordinative synergies that support speech production and the shapes of pronunciation time distributions. PMID:22783213

  2. The numerical methods for the development of the mixture region in the vapor explosion simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Y.; Ohashi, H.; Akiyama, M.

    An attempt to numerically simulate the process of vapor explosion with a general multi-component, multi-dimensional code is being undertaken. Because of the rapid change of the flow field and the extremely nonuniform distribution of components in a vapor explosion, numerical divergence and diffusion can easily occur. A dispersed component model and a multiregion scheme, by which these difficulties can be effectively overcome, were proposed. Simulations have been performed for the premixing and fragmentation-propagation processes of the vapor explosion.

  3. Data on the no-load performance analysis of a tomato postharvest storage system.

    PubMed

    Ayomide, Orhewere B; Ajayi, Oluseyi O; Banjo, Solomon O; Ajayi, Adesola A

    2017-08-01

    In the present investigation, original and detailed empirical data on heat transfer in a tomato postharvest storage system are presented. No-load tests were performed for a period of 96 h. The heat distribution at different locations, namely the top, middle and bottom of the system, was acquired at 30 min intervals over the test period. The humidity inside the system was taken into consideration: no-load tests with and without the introduction of humidity were carried out, and data showing the effect of a rise in humidity level on temperature distribution were acquired. The temperatures at the external mechanical cooling components were acquired and can be used for performance analysis of the storage system.

  4. Distributed augmented reality with 3-D lung dynamics--a planning tool concept.

    PubMed

    Hamza-Lup, Felix G; Santhanam, Anand P; Imielińska, Celina; Meeks, Sanford L; Rolland, Jannick P

    2007-01-01

    Augmented reality (AR) systems add visual information to the world by using advanced display techniques. The advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide set of fields. We present a potential component of the cyber infrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to lung (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.

  5. The Intellectual Supermarket.

    ERIC Educational Resources Information Center

    Demb, Ada

    2002-01-01

    Discusses how separating undergraduate education into its two primary components--general education and the major--and then applying the perspective of a supermarket analogy leads to startling conclusions about possible transformations of the production and distribution system for higher education at the undergraduate level and for implementing…

  6. Power Management and Distribution Trades Studies for a Deep-Space Mission Scientific Spacecraft

    NASA Technical Reports Server (NTRS)

    Kimnach, Greg L.; Soltis, James V.

    2004-01-01

    As part of NASA's Project Prometheus, the Nuclear Systems Program, NASA GRC performed trade studies on the various Power Management and Distribution (PMAD) options for a deep-space scientific spacecraft, which would have a nominal electrical power requirement of 100 kWe. These options included AC (1000 Hz and 1500 Hz) and DC primary distribution at various voltages. The distribution system efficiency, reliability, mass, thermal, corona, space radiation levels, and technology readiness of devices and components were considered. The final proposed system consisted of two independent power distribution channels, sourced by two 3-phase, 110 kVA alternators nominally operating at half-rated power. Each alternator nominally supplies 50 kWe to one-half of the ion thrusters and science modules, but is capable of supplying the total power requirements in the event of loss of one alternator. This paper is an introduction to the methodology for the trades done to arrive at the proposed PMAD architecture. Any opinions expressed are those of the author(s) and do not necessarily reflect the views of Project Prometheus.

  7. Power Management and Distribution Trades Studies for a Deep-space Mission Scientific Spacecraft

    NASA Astrophysics Data System (ADS)

    Kimnach, Greg L.; Soltis, James V.

    2004-02-01

    As part of NASA's Project Prometheus, the Nuclear Systems Program, NASA GRC performed trade studies on the various Power Management and Distribution (PMAD) options for a deep-space scientific spacecraft, which would have a nominal electrical power requirement of 100 kWe. These options included AC (1000Hz and 1500Hz) and DC primary distribution at various voltages. The distribution system efficiency, reliability, mass, thermal, corona, space radiation levels, and technology readiness of devices and components were considered. The final proposed system consisted of two independent power distribution channels, sourced by two 3-phase, 110 kVA alternators nominally operating at half-rated power. Each alternator nominally supplies 50 kWe to one-half of the ion thrusters and science modules, but is capable of supplying the total power requirements in the event of loss of one alternator. This paper is an introduction to the methodology for the trades done to arrive at the proposed PMAD architecture. Any opinions expressed are those of the author(s) and do not necessarily reflect the views of Project Prometheus.

  8. CUMBIN - CUMULATIVE BINOMIAL PROGRAMS

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating, if the probability that any one component is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
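    The core calculation CUMBIN performs can be sketched in a few lines (this is a modern illustration of the k-out-of-n reliability formula, not the original C code): the reliability of a k-out-of-n system of independent components with common reliability p is the upper tail of a binomial distribution.

```python
from math import comb

def k_out_of_n_reliability(n: int, k: int, p: float) -> float:
    """P(at least k of n independent components operate), each with
    reliability p -- the cumulative binomial upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Example: a 2-out-of-3 system with component reliability 0.9:
# 3 * 0.9**2 * 0.1 + 0.9**3 = 0.243 + 0.729 ≈ 0.972
print(k_out_of_n_reliability(3, 2, 0.9))
```

    The same sum, viewed as a function of p, is the regularized incomplete beta function I_p(k, n-k+1), which is why CUMBIN can also evaluate that distribution for integer arguments.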

  9. 3-dimensional beam scanning system for particle radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leemann, C.; Alonso, J.; Grunder, H.

    1977-03-01

    In radiation therapy, treatment volumes of up to several liters have to be irradiated. Today's charged-particle programs use ridge filters, scattering foils, occluding rings, collimators, and boluses to shape the dose distribution. An alternative approach, scanning of a small-diameter beam, is analyzed and tentative system specifications are derived. Critical components are scheduled for fabrication and testing at LBL.

  10. System for Performing Single Query Searches of Heterogeneous and Dispersed Databases

    NASA Technical Reports Server (NTRS)

    Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)

    2017-01-01

    The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.

  11. 76 FR 60710 - Airworthiness Directives; The Boeing Company Model 737-600, -700, -700C, -800, and -900 Series...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-30

    ... Bulletin 737-28A1201, Revision 1, dated May 28, 2009. Subject (d) Joint Aircraft System Component (JASC... control relays in the P91 and P92 power distribution panels for the fuel boost and override pumps with new... INFORMATION CONTACT: Georgios Roussos, Aerospace Engineer, Systems and Equipment Branch, ANM-130S, FAA...

  12. Integrated Farm System Model Version 4.3 and Dairy Gas Emissions Model Version 3.3 Software development and distribution

    USDA-ARS?s Scientific Manuscript database

    Modeling routines of the Integrated Farm System Model (IFSM version 4.2) and Dairy Gas Emission Model (DairyGEM version 3.2), two whole-farm simulation models developed and maintained by USDA-ARS, were revised with new components for: (1) simulation of ammonia (NH3) and greenhouse gas emissions gene...

  13. A simulation of probabilistic wildfire risk components for the continental United States

    Treesearch

    Mark A. Finney; Charles W. McHugh; Isaac C. Grenfell; Karin L. Riley; Karen C. Short

    2011-01-01

    This simulation research was conducted in order to develop a large-fire risk assessment system for the contiguous land area of the United States. The modeling system was applied to each of 134 Fire Planning Units (FPUs) to estimate burn probabilities and fire size distributions. To obtain stable estimates of these quantities, fire ignition and growth was simulated for...

  14. The Phobos neutral and ionized torus

    NASA Astrophysics Data System (ADS)

    Poppe, A. R.; Curry, S. M.; Fatemi, S.

    2016-05-01

    Charged particle sputtering, micrometeoroid impact vaporization, and photon-stimulated desorption are fundamental processes operating at airless surfaces throughout the solar system. At larger bodies, such as Earth's Moon and several of the outer planet moons, these processes generate tenuous surface-bound exospheres that have been observed by a variety of methods. Phobos and Deimos, in contrast, are too gravitationally weak to keep ejected neutrals bound and, thus, are suspected to generate neutral tori in orbit around Mars. While these tori have not yet been detected, the distribution and density of both the neutral and ionized components are of fundamental interest. We combine a neutral Monte Carlo model and a hybrid plasma model to investigate both the neutral and ionized components of the Phobos torus. We show that the spatial distribution of the neutral torus is highly dependent on each individual species (due to ionization rates that span nearly 4 orders of magnitude) and on the location of Phobos with respect to Mars. Additionally, we present the flux distribution of torus pickup ions throughout the Martian system and estimate typical pickup ion fluxes. We find that the predicted pickup ion fluxes are too low to perturb the ambient plasma, consistent with previous null detections by spacecraft around Mars.

  15. Rocket measurements of electrons in a system of multiple auroral arcs

    NASA Technical Reports Server (NTRS)

    Boyd, J. S.; Davis, T. N.

    1977-01-01

    A Nike-Tomahawk rocket was launched into a system of auroral arcs northward of Poker Flat Research Range, Fairbanks, Alaska. The pitch-angle distribution of electrons was measured at 2.5, 5, and 10 keV and also at 10 keV on a separating forward section of the payload. The auroral activity appeared to be the extension of substorm activity centered to the east. The rocket crossed a westward-propagating fold in the brightest band. The electron spectrum was relatively hard through most of the flight, showing a peak in the range from 2.5 to 10 keV in the weaker aurora and below 5 keV in the brightest arc. The detailed structure of the pitch-angle distribution suggested that, at times, a very selective process was accelerating some electrons in the magnetic field direction, so that a narrow field-aligned component appeared superimposed on a more isotropic distribution. It is concluded that this process could not be a near-ionosphere field-aligned potential drop, although the more isotropic component may have been produced by a parallel electric field extending several thousand kilometers along the field line above the ionosphere.

  16. Aligning Strategic and Information Systems Planning: A Review of Navy Efforts

    DTIC Science & Technology

    1990-03-01

    plans." [Ref. 11:p. 6] The results of the component-level IR planning process are documented in Component Information Management Plans (CIMP). In mid...August, the CIMPs are presented to the IR Planning Committee at the annual IR Planning Conference. Each CIMP is then distributed to all organizations...objectives promulgated in the CIMP or FASP will, if approved for development or acquisition, be further refined in project plans, which in turn, form the

  17. A Framework and Toolkit for the Construction of Multimodal Learning Interfaces

    DTIC Science & Technology

    1998-04-29

    human communication modalities in the context of a broad class of applications, specifically those that support state manipulation via parameterized actions. The multimodal semantic model is also the basis for a flexible, domain independent, incrementally trainable multimodal interpretation algorithm based on a connectionist network. The second major contribution is an application framework consisting of reusable components and a modular, distributed system architecture. Multimodal application developers can assemble the components in the framework into a new application,

  18. AliEn—ALICE environment on the GRID

    NASA Astrophysics Data System (ADS)

    Saiz, P.; Aphecetche, L.; Bunčić, P.; Piskač, R.; Revsbech, J.-E.; Šego, V.; Alice Collaboration

    2003-04-01

    AliEn ( http://alien.cern.ch) (ALICE Environment) is a Grid framework built on top of the latest Internet standards for information exchange and authentication (SOAP, PKI) and common Open Source components. AliEn provides a virtual file catalogue that allows transparent access to distributed datasets and a number of collaborating Web services which implement the authentication, job execution, file transport, performance monitor and event logging. In the paper we will present the architecture and components of the system.

  19. Assessment of the components of the Kalimantan and Sulawesi power development project: Volume 2. Export trade information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-03-31

    This report, prepared by Utility Consulting, was funded by the US Trade and Development Agency. The report concerns a power development project on the islands of Kalimantan and Sulawesi. This is TDA Volume 2, the main text (Report Volume 1), and it includes the following: (1) Introduction; (2) Transmission line and substation investment plan; (3) The distribution component; (4) Telecommunications; (5) PLN information systems; and Appendix: Figures and tables.

  20. MDA-based EHR application security services.

    PubMed

    Blobel, Bernd; Pharow, Peter

    2004-01-01

    Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven design, and knowledge-based systems, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigm. This concerns the deployment of formal models, meta-languages, and reference models such as the ISO RM-ODP, as well as development and implementation tools. The results of the international projects presented here proceed along these lines.
