Sample records for distributed processing systems

  1. Design of distributed systems of hydrolithosphere processes management. A synthesis of distributed management systems

    NASA Astrophysics Data System (ADS)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The paper considers an important problem: designing distributed systems for hydrolithosphere processes management. The control actions on the hydrolithosphere processes under consideration are implemented by a set of extractive wells. The article presents a method for defining the approximation links used to describe the dynamic characteristics of hydrolithosphere processes, and describes the structure of the distributed regulators used in the management systems for these processes. The paper analyses the results of synthesizing the distributed management system and of modelling the closed-loop control system with respect to the parameters of the hydrolithosphere process.

  2. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing currently receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over a conventional network of computers can face security threats posed by the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. The agents deployed on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers can be increased by connecting new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment, where the number of computers on the network changes dynamically. The developed multi-agent system detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions, and in addition checks and corrects wrong results.
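
    A minimal sketch of the result-verification idea, with message passing reduced to direct calls and all names assumed rather than taken from the paper: each subtask is dispatched to two worker nodes, and a third node arbitrates when their answers disagree.

    ```python
    import random

    def run_on_node(node, subtask):
        """Stand-in for dispatching a subtask to a worker agent on a network PC."""
        result = sum(subtask)      # the real system would execute arbitrary work
        if node["compromised"]:
            result += 1            # a falsifying node returns a corrupted value
        return result

    def verified_result(nodes, subtask):
        """Run the subtask on two random nodes; arbitrate with a third on mismatch."""
        a, b = random.sample(nodes, 2)
        ra, rb = run_on_node(a, subtask), run_on_node(b, subtask)
        if ra == rb:
            return ra
        arbiter = random.choice([n for n in nodes if n is not a and n is not b])
        rc = run_on_node(arbiter, subtask)
        return ra if rc == ra else rb

    nodes = [{"id": i, "compromised": i == 2} for i in range(5)]
    print(verified_result(nodes, [1, 2, 3]))  # 6, despite one corrupt node
    ```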

  3. Space vehicle electrical power processing distribution and control study. Volume 1: Summary

    NASA Technical Reports Server (NTRS)

    Krausz, A.

    1972-01-01

    A concept for the processing, distribution, and control of electric power for manned space vehicles and future aircraft is presented. Emphasis is placed on the requirements of the space station and space shuttle configurations. The systems involved are referred to as the processing distribution and control system (PDCS), electrical power system (EPS), and electric power generation system (EPGS).

  4. Land transportation model for supply chain manufacturing industries

    NASA Astrophysics Data System (ADS)

    Kurniawan, Fajar

    2017-12-01

    A supply chain is a system that integrates production, inventory, distribution, and information processes to increase productivity and minimize costs. Transportation is an important part of the supply chain system, especially in supporting the distribution of materials, work-in-process products, and final products. Jakarta serves as the distribution center for the manufacturing industries of the surrounding industrial area, so its transportation system has a large influence on the efficiency of the supply chain process. The main problem Jakarta faces is traffic congestion, which lengthens distribution times. Based on a system dynamics model, several scenarios can provide solutions that minimize distribution time, and hence cost, such as constructing ports near the industrial areas in addition to Tanjung Priok, widening roads, developing the railway system, and developing distribution centers.

  5. Distributed Processing System for Restoration of Electric Power Distribution Network Using Two-Layered Contract Net Protocol

    NASA Astrophysics Data System (ADS)

    Kodama, Yu; Hamagami, Tomoki

    A distributed processing system for restoration of an electric power distribution network using a two-layered contract net protocol (CNP) is proposed. The goal of this study is to develop a restoration system suited to future power networks with distributed generators; its novelty is that the two-layered CNP is applied to a distributed computing environment in practical use. The two-layered CNP has two classes of agents in the network, field agents and operating agents. To avoid conflicts between tasks, an operating agent controls the privilege that allows managers to send task announcement messages in the CNP. This technique coordinates agents that work asynchronously and in parallel with one another. The system is implemented on a de facto standard multi-agent framework, JADE (Java Agent DEvelopment framework). Simulation experiments on power distribution network restoration comparing the proposed system with a previous system confirm the effectiveness of the proposed approach.
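
    The coordination mechanism lends itself to a short sketch. Below is a hypothetical rendering in Python rather than JADE's Java (the real system exchanges asynchronous agent messages): an operating agent serializes the announcement privilege so only one manager runs a contract-net round at a time, and the task goes to the best-bidding field agent.

    ```python
    class FieldAgent:
        def __init__(self, name, capacity):
            self.name, self.capacity = name, capacity

        def bid(self, task_load):
            # A bid is spare capacity; negative means "cannot take the task".
            return self.capacity - task_load

    class OperatingAgent:
        """Upper layer: grants the announcement privilege to one manager at a time."""
        def __init__(self):
            self.token_holder = None

        def grant(self, manager):
            if self.token_holder is None:
                self.token_holder = manager
                return True
            return False            # another manager is announcing; retry later

        def release(self, manager):
            if self.token_holder == manager:
                self.token_holder = None

    def announce(operating_agent, manager, agents, task_load):
        if not operating_agent.grant(manager):
            return None
        bids = {a.name: a.bid(task_load) for a in agents}
        operating_agent.release(manager)
        winner = max(bids, key=bids.get)
        return winner if bids[winner] >= 0 else None

    agents = [FieldAgent("f1", 3), FieldAgent("f2", 8)]
    print(announce(OperatingAgent(), "m1", agents, 5))  # -> "f2"
    ```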

  6. Supporting users through integrated retrieval, processing, and distribution systems at the land processes distributed active archive center

    USGS Publications Warehouse

    Kalvelage, T.; Willems, Jennifer

    2003-01-01

    The design of the EOS Data and Information System (EOSDIS) to acquire, archive, manage, and distribute Earth observation data to the broadest possible user community is discussed. Several integrated retrieval, processing, and distribution capabilities are explained, the value of these functions to users is described, and potential future improvements are laid out. Users want retrieval, processing, and archiving systems to be integrated so that they can get the data they want in the format and through the delivery mechanism of their choice.

  7. The Raid distributed database system

    NASA Technical Reports Server (NTRS)

    Bhargava, Bharat; Riedl, John

    1989-01-01

    Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.

  8. The application of artificial intelligence techniques to large distributed networks

    NASA Technical Reports Server (NTRS)

    Dubyah, R.; Smith, T. R.; Star, J. L.

    1985-01-01

    Data accessibility and information transfer efforts, including the land resources information system pilot, are structured as large computer information networks. These pilot efforts aim to make data easier to find and use, to reduce processing costs, and to minimize incompatibility between data sources. Artificial intelligence (AI) techniques have been suggested as a means to achieve these goals. The applicability of certain AI techniques is explored in the context of distributed problem-solving systems and the pilot land data system (PLDS). The topics discussed include: PLDS and its data processing requirements, expert systems and PLDS, distributed problem-solving systems, AI problem-solving paradigms, query processing, and distributed databases.

  9. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  10. Methods, media and systems for managing a distributed application running in a plurality of digital processing devices

    DOEpatents

    Laadan, Oren; Nieh, Jason; Phung, Dan

    2012-10-02

    Methods, media and systems for managing a distributed application running in a plurality of digital processing devices are provided. In some embodiments, a method includes running one or more processes associated with the distributed application in virtualized operating system environments on a plurality of digital processing devices, suspending the one or more processes, and saving network state information relating to network connections among the one or more processes. The method further includes storing process information relating to the one or more processes, recreating the network connections using the saved network state information, and restarting the one or more processes using the stored process information.
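
    A toy rendering of the claimed checkpoint/restart cycle, with Python dictionaries standing in for the patent's virtualized operating system environments; the file path and serialization choice are assumptions for illustration only.

    ```python
    import pickle

    def checkpoint(processes, connections, path):
        for p in processes:
            p["suspended"] = True                  # 1. suspend the processes
        state = {"processes": processes,           # 2. store process information
                 "connections": connections}       # 3. save network state
        with open(path, "wb") as f:
            pickle.dump(state, f)

    def restart(path):
        with open(path, "rb") as f:
            state = pickle.load(f)
        for c in state["connections"]:             # 4. recreate network connections
            c["reconnected"] = True
        for p in state["processes"]:               # 5. restart the processes
            p["suspended"] = False
        return state

    checkpoint([{"pid": 1}], [{"src": 1, "dst": 2}], "app.ckpt")
    print(restart("app.ckpt"))
    ```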

  11. Design of distributed systems of hydrolithosphere processes management. Selection of optimal number of extracting wells

    NASA Astrophysics Data System (ADS)

    Pershin, I. M.; Pervukhin, D. A.; Ilyushin, Y. V.; Afanaseva, O. V.

    2017-10-01

    The article considers the important issue of designing distributed systems for hydrolithosphere processes management. Control actions on the hydrolithosphere processes are implemented by a set of extractive wells. The article shows how to determine the optimal number of extractive wells that provide a distributed control action on the managed object.

  12. Planning Systems for Distributed Operations

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.

    2002-01-01

    This viewgraph representation presents an overview of the mission planning process involving distributed operations (such as the International Space Station (ISS)) and the computer hardware and software systems needed to support such an effort. Topics considered include: evolution of distributed planning systems, ISS distributed planning, the Payload Planning System (PPS), future developments in distributed planning systems, Request Oriented Scheduling Engine (ROSE) and Next Generation distributed planning systems.

  13. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
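
    As a concrete instance of load-balancing allocation, the sketch below implements a classic greedy heuristic (longest task first onto the least-loaded processor). It illustrates the general goal of equalizing load, not the specific techniques surveyed in the paper.

    ```python
    import heapq

    def allocate(task_costs, n_processors):
        """Statically assign tasks to processors, equalizing total load."""
        loads = [(0.0, p) for p in range(n_processors)]    # (load, processor id)
        heapq.heapify(loads)
        assignment = {}
        for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
            load, p = heapq.heappop(loads)                 # least-loaded processor
            assignment[task] = p
            heapq.heappush(loads, (load + cost, p))
        return assignment

    print(allocate({"t1": 4, "t2": 3, "t3": 3, "t4": 2}, 2))
    # {'t1': 0, 't2': 1, 't3': 1, 't4': 0} -> both processors carry a load of 6
    ```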

  14. Distributed medical intelligence. A systems approach for developing an integrative health care information distribution infrastructure.

    PubMed

    Warner, D; Sale, J; Viirre, E

    1996-01-01

    Recent trends in healthcare informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. Distributed Medical Intelligence promotes the development of an integrative medical communication system which addresses the process of providing expert medical knowledge to the point of need.

  15. Distributed Processing Tools Definition. Volume 1. Hardware and Software Technologies for Tightly-Coupled Distributed Systems.

    DTIC Science & Technology

    1983-06-01

    [OCR-garbled excerpt from the report's front matter and table of contents; the recoverable entries are "Distributed Processing System for the Operation/Direction Center(s)" and "Distribution of Processing Control for the Operation/Direction Center(s)".]

  16. Adaptive Distributed Intelligent Control Architecture for Future Propulsion Systems (Preprint)

    DTIC Science & Technology

    2007-04-01

    Weight will be reduced by replacing heavy harness assemblies and Full Authority Digital Electronic Controls (FADECs) with distributed processing elements interconnected through a serial bus. This paper reviews ... intelligence is embedded in components while overall control is maintained in the FADEC. The need for Distributed Control Systems in ...

  17. Distributed parallel computing in stochastic modeling of groundwater systems.

    PubMed

    Dong, Yanhui; Li, Guomin; Xu, Haizhen

    2013-03-01

    Stochastic modeling is a rapidly evolving, popular approach to the study of the uncertainty and heterogeneity of groundwater systems. However, the use of Monte Carlo-type simulations to solve practical groundwater problems often encounters computational bottlenecks that hinder the acquisition of meaningful results. To improve the computational efficiency, a system that combines stochastic model generation with MODFLOW-related programs and distributed parallel processing is investigated. The distributed computing framework, called the Java Parallel Processing Framework, is integrated into the system to allow the batch processing of stochastic models in distributed and parallel systems. As an example, the system is applied to the stochastic delineation of well capture zones in the Pinggu Basin in Beijing. Through the use of 50 processing threads on a cluster with 10 multicore nodes, the execution times of 500 realizations are reduced to 3% compared with those of a serial execution. Through this application, the system demonstrates its potential in solving difficult computational problems in practical stochastic modeling. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
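
    A generic analogue of the batch-processing scheme in Python (the paper drives MODFLOW runs through the Java Parallel Processing Framework, so the worker function below is a stand-in): independent realizations are farmed out to a process pool and collected.

    ```python
    from concurrent.futures import ProcessPoolExecutor
    import random

    def run_realization(seed):
        """Stand-in for generating one stochastic model and running the solver."""
        rng = random.Random(seed)
        return sum(rng.gauss(0, 1) for _ in range(10_000))

    if __name__ == "__main__":
        # 500 realizations spread over a pool of workers, as in the case study
        with ProcessPoolExecutor(max_workers=10) as pool:
            results = list(pool.map(run_realization, range(500)))
        print(len(results), "realizations completed")
    ```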

  18. Resource depletion promotes automatic processing: implications for distribution of practice.

    PubMed

    Scheel, Matthew H

    2010-12-01

    Recent models of cognition include two processing systems: an automatic system that relies on associative learning, intuition, and heuristics, and a controlled system that relies on deliberate consideration. Automatic processing requires fewer resources and is more likely when resources are depleted. This study showed that prolonged practice on a resource-depleting mental arithmetic task promoted automatic processing on a subsequent problem-solving task, as evidenced by faster responding and more errors. Distribution-of-practice effects (0, 60, 120, or 180 sec between problems) on rigidity also disappeared when groups had equal time on resource-depleting tasks. These results suggest that the distribution-of-practice effect is reducible to resource availability. The discussion includes implications for interpreting discrepancies in the traditional distribution-of-practice effect.

  19. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    DTIC Science & Technology

    2002-04-01

    ...Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems. Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and ... Keywords: Distributed, Security. 1 Introduction. Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of ... distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained ...

  20. Production and Distribution of NASA MODIS Remote Sensing Products

    NASA Technical Reports Server (NTRS)

    Wolfe, Robert

    2007-01-01

    The two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on board NASA's Earth Observing System (EOS) Terra and Aqua satellites make key measurements for understanding the Earth's terrestrial ecosystems. Global time series of terrestrial geophysical parameters have been produced from MODIS/Terra for over 7 years and from MODIS/Aqua for more than 4 1/2 years. These well-calibrated instruments, a team of scientists, and large data production, archive, and distribution systems have allowed the development of a new suite of high-quality product variables at spatial resolutions as fine as 250 m in support of global change research and natural resource applications. This talk describes the MODIS Science Team's products, with a focus on the terrestrial (land) products, the data processing approach, and the process for monitoring and improving product quality. The original MODIS science team was formed in 1989. The team's primary role is the development and implementation of the geophysical algorithms; in addition, the team provided feedback on the design and pre-launch testing of the instrument and helped guide the development of the data processing system. The key challenges the science team dealt with before launch were developing algorithms for a new instrument and guiding the development of the large and complex multi-discipline processing system. The Land, Ocean, and Atmosphere discipline teams drove the processing system requirements, particularly the processing loads and volumes needed to produce daily geophysical maps of the Earth at resolutions as fine as 250 m. The processing system had to handle a large number of data products, large data volumes and processing loads, and complex processing requirements. Prior to MODIS, daily global maps from heritage instruments, such as the Advanced Very High Resolution Radiometer (AVHRR), were not produced at resolutions finer than 5 km. The processing solution evolved into a combination of processing the lower-level (Level 1) products and the higher-level discipline-specific Land and Atmosphere products in the MODIS Science Investigator-led Processing System (SIPS), the MODIS Adaptive Processing System (MODAPS), with archive and distribution of the Land products to the user community by two of NASA's EOS Distributed Active Archive Centers (DAACs). Recently, a part of MODAPS, the Level 1 and Atmosphere Archive and Distribution System (LAADS), took over the role of archiving and distributing the Level 1 and Atmosphere products to the user community.

  1. The Brain as a Distributed Intelligent Processing System: An EEG Study

    PubMed Central

    da Rocha, Armando Freitas; Rocha, Fábio Theoto; Massad, Eduardo

    2011-01-01

    Background: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligent Processing System, was used to investigate the correlations between IQ, evaluated with the WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion: The present results support these claims and the neural efficiency hypothesis. PMID:21423657

  2. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next-generation military aircraft place high performance demands on the airborne management system. General modules, data integration, high-speed data buses, and so on are needed to share and manage the information of the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system, and so on. The unattached or mixed architecture is replaced by an integrated architecture, meaning the whole airborne system is managed as one system: the physical devices are distributed, but the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration), and the sensors and signal-processing functions are shared, which in turn lays a foundation for shared power. A distributed vehicle management system using a 1553B bus and distributed processors provides a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.

  3. US GEOLOGICAL SURVEY'S NATIONAL SYSTEM FOR PROCESSING AND DISTRIBUTION OF NEAR REAL-TIME HYDROLOGICAL DATA.

    USGS Publications Warehouse

    Shope, William G.; ,

    1987-01-01

    The US Geological Survey is utilizing a national network of more than 1000 satellite data-collection stations, four satellite-relay direct-readout ground stations, and more than 50 computers linked together in a private telecommunications network to acquire, process, and distribute hydrological data in near real-time. The four Survey offices operating a satellite direct-readout ground station provide near real-time hydrological data to computers located in other Survey offices through the Survey's Distributed Information System. The computerized distribution system permits automated data processing and distribution to be carried out in a timely manner under the control and operation of the Survey office responsible for the data-collection stations and for the dissemination of hydrological information to the water-data users.

  4. Dynamic measurement of fluorescent proteins spectral distribution on virus infected cells

    NASA Astrophysics Data System (ADS)

    Lee, Ja-Yun; Wu, Ming-Xiu; Kao, Chia-Yun; Wu, Tzong-Yuan; Hsu, I.-Jen

    2006-09-01

    We constructed a dynamic spectroscopy system that can simultaneously measure the intensity and spectral distributions of samples with multiple fluorophores in a single scan. The system was used to monitor the fluorescence distribution of cells infected by a virus constructed from a recombinant baculovirus, vAcD-Rhir-E, containing red and green fluorescent protein genes that simultaneously produce dual fluorescence in recombinant virus-infected Spodoptera frugiperda 21 (Sf21) cells under the control of a polyhedrin promoter. The system was composed of an excitation light source, a scanning system, and a spectrometer. We also developed an algorithm and fitting process to analyze the pattern of fluorescence distribution of the dual fluorescence produced in the recombinant virus-infected cells. All algorithms and calculations run automatically in a visual scanning program, which can monitor a specific region of the sample by calculating its intensity distribution. The spectral measurement of each pixel was performed in the millisecond range, and the two-dimensional distribution of the full spectrum was recorded within several seconds. With this dynamic spectroscopy system we monitored the process of virus infection of cells; the distributions of the dual fluorescence were simultaneously measured at micrometer resolution.

  5. Methods and tools for profiling and control of distributed systems

    NASA Astrophysics Data System (ADS)

    Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.

    2018-02-01

    This article is devoted to profiling and control of distributed systems. Distributed systems have a complex architecture: applications are spread among various computing nodes, and many network operations are performed. It is therefore important to develop methods and tools for profiling distributed systems. The article analyzes and standardizes profiling methods that focus on simulation, conducting experiments and building a graph model of the system. The theory of queueing networks is used for simulation modelling of distributed systems receiving and processing user requests. To automate this profiling method, a software application was developed with a modular structure similar to a SCADA system.

  6. Distributed systems status and control

    NASA Technical Reports Server (NTRS)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  7. A distributed computing approach to mission operations support. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Larsen, R. L.

    1975-01-01

    Computing mission operation support includes orbit determination, attitude processing, maneuver computation, resource scheduling, etc. The large-scale third-generation distributed computer network discussed is capable of fulfilling these dynamic requirements. It is shown that distribution of resources and control leads to increased reliability, and exhibits potential for incremental growth. Through functional specialization, a distributed system may be tuned to very specific operational requirements. Fundamental to the approach is the notion of process-to-process communication, which is effected through a high-bandwidth communications network. Both resource-sharing and load-sharing may be realized in the system.

  8. Post-processing procedure for industrial quantum key distribution systems

    NASA Astrophysics Data System (ADS)

    Kiktenko, Evgeny; Trushechkin, Anton; Kurochkin, Yury; Fedorov, Aleksey

    2016-08-01

    We present algorithmic solutions for the post-processing procedure of industrial quantum key distribution systems with hardware sifting. The main steps of the procedure are error correction, parameter estimation, and privacy amplification. Authentication of the classical public communication channel is also considered.
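
    A skeleton of two of the named stages, parameter estimation and privacy amplification, under toy assumptions (reconciliation already done, SHA-256 as the compressing hash, a conventional BB84-style abort threshold); production systems use proven 2-universal hash families and information-theoretically derived output lengths.

    ```python
    import hashlib
    import random

    def estimate_qber(alice, bob, sample_idx):
        """Parameter estimation: compare a publicly disclosed sample of sifted bits."""
        errors = sum(alice[i] != bob[i] for i in sample_idx)
        return errors / len(sample_idx)

    def privacy_amplification(key_bits, out_len):
        """Compress the reconciled key with a hash to wipe out leaked information."""
        digest = hashlib.sha256(bytes(key_bits)).digest()
        bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
        return bits[:out_len]

    alice = [random.randint(0, 1) for _ in range(256)]
    bob = list(alice)                    # pretend error correction already succeeded
    sample = random.sample(range(256), 32)
    if estimate_qber(alice, bob, sample) < 0.11:   # abort above ~11% QBER
        print(privacy_amplification(alice, 128))
    ```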

  9. Summary of the First Network-Centric Sensing Community Workshop, ’Netted Sensors: A Government, Industry and Academia Dialogue’

    DTIC Science & Technology

    2006-04-01

    ... and Scalability, (2) Sensors and Platforms, (3) Distributed Computing and Processing, (4) Information Management, (5) Fusion and Resource Management ... use of the deployed system. 3.3 Distributed Computing and Processing Session. The Distributed Computing and Processing Session consisted of three ...

  10. Distributed Data Processing in a United States Naval Shipyard.

    DTIC Science & Technology

    1979-12-01

    [OCR excerpt from the report's table of contents and body; recoverable headings include "Evolution", "Motivations for Distributed Processing", "Extensibility", and "Form and Structure". Recoverable body text:] ... motivations for, and the characteristics of, distributed processing as they apply to management information systems. 1. Evolution. Prior to the advent of ...

  11. Research and Implementation of Key Technologies in Multi-Agent System to Support Distributed Workflow

    NASA Astrophysics Data System (ADS)

    Pan, Tianheng

    2018-01-01

    In recent years, the combination of workflow management systems and multi-agent technology has been a hot research field. The lack of flexibility in workflow management systems can be remedied by introducing multi-agent collaborative management. The workflow management system adopts a distributed structure, which solves the fragility of the traditional centralized workflow structure. In this paper, the agents of a distributed workflow management system are divided according to function, the execution process of each type of agent is analyzed, and key technologies such as process execution and resource management are discussed.

  12. Contingency theoretic methodology for agent-based web-oriented manufacturing systems

    NASA Astrophysics Data System (ADS)

    Durrett, John R.; Burnell, Lisa J.; Priest, John W.

    2000-12-01

    The development of distributed, agent-based, web-oriented, N-tier information systems (IS) must be supported by a design methodology capable of responding to the convergence of shifts in business process design, organizational structure, and computing and telecommunications infrastructures. We introduce a contingency-theoretic model for the use of open, ubiquitous software infrastructure in the design of flexible organizational IS. Our basic premise is that developers should change the way they view the software design process: from solving a problem to dynamically creating teams of software components. We postulate that developing effective, efficient, flexible, component-based distributed software requires reconceptualizing the current development model. The basic concepts of distributed software design are merged with the environment-causes-structure relationship from contingency theory, the task-uncertainty/organizational-information-processing relationship from information processing theory, and the concept of inter-process dependencies from coordination theory. Software processes are considered as employees, groups of processes as software teams, and distributed systems as software organizations. Design techniques already used in the design of flexible business processes and well researched in the organizational sciences are presented, along with guidelines that can be utilized in the creation of component-based distributed software.

  13. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm, and each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling is carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
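
    The decomposition step can be sketched as follows, assuming scikit-learn's AffinityPropagation and absolute correlation between controlled variables as the similarity measure; the input-selection and online-modelling stages are omitted.

    ```python
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(0)
    y = rng.normal(size=(200, 6))        # 200 samples of 6 controlled variables
    y[:, 1] += 0.9 * y[:, 0]             # variables 0 and 1 move together
    y[:, 4] += 0.9 * y[:, 3]             # variables 3 and 4 move together

    similarity = np.abs(np.corrcoef(y.T))            # 6 x 6 affinity matrix
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(similarity)
    print(labels)   # variables sharing a label form one candidate subsystem
    ```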

  14. Design and implementation of an Internet based effective controlling and monitoring system with wireless fieldbus communications technologies for process automation--an experimental study.

    PubMed

    Cetinceviz, Yucel; Bayindir, Ramazan

    2012-05-01

    The network requirements of control systems in industrial applications increase day by day. Internet-based control systems and various fieldbus systems have been designed to meet these requirements. This paper describes an Internet-based control system with wireless fieldbus communication designed for distributed processes, implemented as an experimental setup in a laboratory. In industrial facilities, fieldbus networks provide the process control layer and the remote connection of the distributed control devices at the lowest levels of the industrial production environment. In this paper, an Internet-based control system able to meet the system requirements with a new-generation communication structure, a so-called wired/wireless hybrid system, has been designed at the field level and implemented to cover all sectors of distributed automation, from process control to distributed input/output (I/O). The system comprises a hardware structure with a programmable logic controller (PLC), a communication processor (CP) module, two industrial wireless modules, a distributed I/O module, and a Motor Protection Package (MPP), and a software structure with the WinCC flexible program used for the SCADA (Supervisory Control And Data Acquisition) screens and the SIMATIC MANAGER package program ("STEP7") used for hardware and network configuration and for downloading the control program to the PLC. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  15. CHANGES IN BACTERIAL COMPOSITION OF BIOFILM IN A METROPOLITAN DRINKING WATER DISTRIBUTION SYSTEM

    EPA Science Inventory

    This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source water (i.e., groundwater, GW and surface water, SW) and undergoes different treatment processes in separate facilities. The b...

  16. EOS MLS Science Data Processing System: A Description of Architecture and Capabilities

    NASA Technical Reports Server (NTRS)

    Cuddy, David T.; Echeverri, Mark D.; Wagner, Paul A.; Hanzel, Audrey T.; Fuller, Ryan A.

    2006-01-01

    This paper describes the architecture and capabilities of the Science Data Processing System (SDPS) for the EOS MLS. The SDPS consists of two major components--the Science Computing Facility and the Science Investigator-led Processing System. The Science Computing Facility provides the facilities for the EOS MLS Science Team to perform the functions of scientific algorithm development, processing software development, quality control of data products, and scientific analyses. The Science Investigator-led Processing System processes and reprocesses the science data for the entire mission and delivers the data products to the Science Computing Facility and to the Goddard Space Flight Center Earth Science Distributed Active Archive Center, which archives and distributes the standard science products.

  17. Distributed Architecture for the Object-Oriented Method for Interoperability

    DTIC Science & Technology

    2003-03-01

    [Snippet mixing figure-list residue and body text; recoverable figure titles include "Distributed OOMI and the Collaboration Centric Paradigm" (Figure V-2). Recoverable body text:] ... of systems are formed into a system federation to resolve differences in modeling. An OOMI Integrated Development Environment (OOMI IDE) lends ... space for the creation of possible distributed systems is partitioned into User Centric systems, Processing/Storage Centric systems, Implementation ...

  18. Toward a process-level view of distributed healthcare tasks: Medication management as a case study.

    PubMed

    Werner, Nicole E; Malkana, Seema; Gurses, Ayse P; Leff, Bruce; Arbaje, Alicia I

    2017-11-01

    We aim to highlight the importance of using a process-level view in analyzing distributed healthcare tasks through a case study analysis of medication management (MM). MM during older adults' hospital-to-skilled-home-healthcare (SHHC) transitions is a healthcare process with tasks distributed across people, organizations, and time. MM has typically been studied at the task level, but a process-level is needed to fully understand and improve MM during transitions. A process-level view allows for a broader investigation of how tasks are distributed throughout the work system through an investigation of interactions and the resultant emergent properties. We studied MM during older adults' hospital-to-SHHC transitions through interviews and observations with 60 older adults, their 33 caregivers, and 79 SHHC providers at 5 sites associated with 3 SHHC agencies. Study findings identified key cross-system characteristics not observable at the task-level: (1) identification of emergent properties (e.g., role ambiguity, loosely-coupled teams performing MM) and associated barriers; and (2) examination of barrier propagation across system boundaries. Findings highlight the importance of a process-level view of healthcare delivery occurring across system boundaries. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Distributed Processing and Cortical Specialization for Speech and Environmental Sounds in Human Temporal Cortex

    ERIC Educational Resources Information Center

    Leech, Robert; Saygin, Ayse Pinar

    2011-01-01

    Using functional MRI, we investigated whether auditory processing of both speech and meaningful non-linguistic environmental sounds in superior and middle temporal cortex relies on a complex and spatially distributed neural system. We found that evidence for spatially distributed processing of speech and environmental sounds in a substantial…

  20. Task allocation model for minimization of completion time in distributed computer systems

    NASA Astrophysics Data System (ADS)

    Wang, Jai-Ping; Steidley, Carl W.

    1993-08-01

    A task in a distributed computing system consists of a set of related modules. Each of the modules will execute on one of the processors of the system and communicate with some other modules. In addition, precedence relationships may exist among the modules. Task allocation is an essential activity in distributed-software design. This activity is of importance to all phases of the development of a distributed system. This paper establishes task completion-time models and task allocation models for minimizing task completion time. Current work in this area is either at the experimental level or without the consideration of precedence relationships among modules. The development of mathematical models for the computation of task completion time and task allocation will benefit many real-time computer applications such as radar systems, navigation systems, industrial process control systems, image processing systems, and artificial intelligence oriented systems.
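
    To make the completion-time modelling concrete, here is a hedged sketch that computes the earliest completion time of a module graph for a given allocation, charging communication delay only between modules placed on different processors; the names and the cost model are illustrative assumptions, not the paper's formulation.

    ```python
    def topo_order(preds):
        """Return the modules in an order compatible with the precedence relation."""
        order, seen = [], set()
        def visit(m):
            if m not in seen:
                seen.add(m)
                for p in preds[m]:
                    visit(p)
                order.append(m)
        for m in preds:
            visit(m)
        return order

    def completion_time(exec_cost, preds, comm_cost, placement):
        finish = {}
        for m in topo_order(preds):
            ready = 0.0
            for p in preds[m]:
                delay = 0.0 if placement[p] == placement[m] else comm_cost[(p, m)]
                ready = max(ready, finish[p] + delay)
            finish[m] = ready + exec_cost[m]
        return max(finish.values())

    preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}
    cost = {"a": 2, "b": 3, "c": 4, "d": 1}
    comm = {("a", "b"): 1, ("a", "c"): 1, ("b", "d"): 2, ("c", "d"): 2}
    place = {"a": 0, "b": 0, "c": 1, "d": 1}
    print(completion_time(cost, preds, comm, place))  # -> 8.0
    ```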

  1. Standard services for the capture, processing, and distribution of packetized telemetry data

    NASA Technical Reports Server (NTRS)

    Stallings, William H.

    1989-01-01

    Standard functional services for the capture, processing, and distribution of packetized data are discussed with particular reference to the future implementation of packet processing systems, such as those for the Space Station Freedom. The major functions are listed under the following major categories: input processing, packet processing, and output processing. A functional block diagram of a packet data processing facility is presented, showing the distribution of the various processing functions as well as the primary data flow through the facility.

  2. Okayama optical polarimetry and spectroscopy system (OOPS) II. Network-transparent control software.

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Kurakami, T.; Shimizu, Y.; Yutani, M.

    The control system of the OOPS (Okayama Optical Polarimetry and Spectroscopy system) is designed to integrate several instruments whose controllers are distributed over a network: the OOPS instrument, a CCD camera and data acquisition unit, the 91 cm telescope, an autoguider, a weather monitor, and the image display tool SAOimage. With the help of message-based communication, the control processes cooperate with related processes to perform an astronomical observation under the supervisory control of a scheduler process. A logger process collects status data from all the instruments and distributes them to related processes upon request. The software structure of each process is described.

  3. The ATLAS PanDA Pilot in Operation

    NASA Astrophysics Data System (ADS)

    Nilsson, P.; Caballero, J.; De, K.; Maeno, T.; Stradling, A.; Wenaus, T.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis system (PanDA) [1-2] was designed to meet ATLAS [3] requirements for a data-driven workload management system capable of operating at LHC data processing scale. Submitted jobs are executed on worker nodes by pilot jobs sent to the grid sites by pilot factories. This paper provides an overview of the PanDA pilot [4] system and presents major features added in light of recent operational experience, including multi-job processing, advanced job recovery for jobs with output storage failures, gLExec [5-6] based identity switching from the generic pilot to the actual user, and other security measures. The PanDA system serves all ATLAS distributed processing and is the primary system for distributed analysis; it is currently used at over 100 sites worldwide. We analyze the performance of the pilot system in processing real LHC data on the OSG [7], EGI [8] and Nordugrid [9-10] infrastructures used by ATLAS, and describe plans for its evolution.

  4. Supporting users through integrated retrieval, processing, and distribution systems at the Land Processes Distributed Active Archive Center

    USGS Publications Warehouse

    Kalvelage, Thomas A.; Willems, Jennifer

    2005-01-01

    The US Geological Survey's EROS Data Center (EDC) hosts the Land Processes Distributed Active Archive Center (LP DAAC). The LP DAAC supports NASA's Earth Observing System (EOS), a series of polar-orbiting and low-inclination satellites for long-term global observations of the land surface, biosphere, solid Earth, atmosphere, and oceans. The EOS Data and Information System (EOSDIS) was designed to acquire, archive, manage, and distribute Earth observation data to the broadest possible user community. The LP DAAC is one of four DAACs that utilize the EOSDIS Core System (ECS) to manage and archive their data. Since the ECS was originally designed, significant changes have taken place in technology, user expectations, and user requirements. The LP DAAC has therefore implemented additional systems to meet the evolving needs of scientific users, tailored to an integrated working environment. These systems provide a wide variety of services to improve data access and to enhance data usability through subsampling, reformatting, and reprojection, and they support the wide breadth of products that are handled by the LP DAAC. The LP DAAC is the primary archive for Landsat 7 Enhanced Thematic Mapper Plus (ETM+) data; it is the only facility in the United States that archives, processes, and distributes data from the Advanced Spaceborne Thermal Emission/Reflection Radiometer (ASTER) on NASA's Terra spacecraft; and it is responsible for the archive and distribution of "land products" generated from data acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra and Aqua satellites.

  5. Distributed Processing with a Mainframe-Based Hospital Information System: A Generalized Solution

    PubMed Central

    Kirby, J. David; Pickett, Michael P.; Boyarsky, M. William; Stead, William W.

    1987-01-01

    Over the last two years the Medical Center Information Systems Department at Duke University Medical Center has been developing a systematic approach to distributing the processing and data involved in computerized applications at DUMC. The resulting system has been named MAPS, the Micro-ADS Processing System. A key characteristic of MAPS is that it makes it easy to execute any existing mainframe ADS application with a request from a PC. This extends the functionality of the mainframe application set to the PC without compromising the maintainability of the PC or mainframe systems.

  6. Log-normal distribution from a process that is not multiplicative but is additive.

    PubMed

    Mouri, Hideaki

    2013-10-01

    The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
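
    The claim is easy to probe numerically. The toy experiment below (an illustration, not the paper's analysis) sums 20 positive, skewed summands and checks that the sum is still clearly skewed while its logarithm is nearly symmetric, i.e., the sum is closer to log-normal than to Gaussian at this size.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 20, 100_000
    sums = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, n)).sum(axis=1)

    # A Gaussian has zero skewness; a log-normal variable is positively skewed,
    # and its logarithm is Gaussian (zero skewness).
    z = (sums - sums.mean()) / sums.std()
    logz = (np.log(sums) - np.log(sums).mean()) / np.log(sums).std()
    print(f"skewness of the sum:  {(z ** 3).mean():+.2f}")     # clearly positive
    print(f"skewness of log(sum): {(logz ** 3).mean():+.2f}")  # near zero
    ```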

  7. Research and Design of the Three-tier Distributed Network Management System Based on COM / COM + and DNA

    NASA Astrophysics Data System (ADS)

    Liang, Likai; Bi, Yushen

    Considering the distributed network management system's demands for distribution, extensibility, and reusability, a framework model for a three-tier distributed network management system based on COM/COM+ and DNA is proposed, adopting software component technology and the N-tier application framework design idea. The concrete design of each layer of this model is given. Finally, the internal running process of each layer in the framework model is discussed.

  8. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  9. Work Distribution in a Fully Distributed Processing System.

    DTIC Science & Technology

    1982-01-01

    [Snippet from the report's front matter and body; the front matter carries the standard disclaimer that the views, opinions, and findings are those of the author and not an official Department of the Navy position. Recoverable body text:] Distributed data processing systems are currently being studied by researchers and prospective users because of their potential for improvements ...

  10. Incorporating client-server database architecture and graphical user interface into outpatient medical records.

    PubMed Central

    Fiacco, P. A.; Rice, W. H.

    1991-01-01

    Computerized medical record systems require structured database architectures for information processing. However, the data must be able to be transferred across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732

  11. Virtual time and time warp on the JPL hypercube. [operating system implementation for distributed simulation

    NASA Technical Reports Server (NTRS)

    Jefferson, David; Beckman, Brian

    1986-01-01

    This paper describes the concept of virtual time and its implementation in the Time Warp Operating System at the Jet Propulsion Laboratory. Virtual time is a distributed synchronization paradigm that is appropriate for distributed simulation, database concurrency control, real time systems, and coordination of replicated processes. The Time Warp Operating System is targeted toward the distributed simulation application and runs on a 32-node JPL Mark II Hypercube.
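
    A toy version of the rollback mechanism at the heart of Time Warp, under stated assumptions (a single process, integer state, events that add a value); real Time Warp also cancels already-sent messages with anti-messages, which is omitted here.

    ```python
    class TimeWarpProcess:
        def __init__(self):
            self.executed = []    # (timestamp, delta) in execution order
            self.states = [0]     # states[i] = state before executed[i]

        def receive(self, ts, delta):
            # Roll back any optimistically executed events that logically
            # follow the incoming (possibly straggler) event, then replay.
            keep = sum(1 for t, _ in self.executed if t <= ts)
            redo = self.executed[keep:]
            self.executed = self.executed[:keep]
            self.states = self.states[:keep + 1]
            for event in sorted(redo + [(ts, delta)]):
                self._execute(*event)

        def _execute(self, ts, delta):
            self.executed.append((ts, delta))
            self.states.append(self.states[-1] + delta)

    p = TimeWarpProcess()
    for event in [(1, 10), (3, 30), (2, 20)]:        # (2, 20) arrives late
        p.receive(*event)
    print(p.states[-1], [t for t, _ in p.executed])  # 60 [1, 2, 3]
    ```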

  12. Intelligent Systems for Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2002-01-01

    The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.

  13. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    ... real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes ... adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the ... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real ...

  14. Managed traffic evacuation using distributed sensor processing

    NASA Astrophysics Data System (ADS)

    Ramuhalli, Pradeep; Biswas, Subir

    2005-05-01

    This paper presents an integrated sensor network and distributed event processing architecture for managed in-building traffic evacuation during natural and human-caused disasters, including earthquakes, fire and biological/chemical terrorist attacks. The proposed wireless sensor network protocols and distributed event processing mechanisms offer a new distributed paradigm for improving reliability in building evacuation and disaster management. The networking component of the system is constructed using distributed wireless sensors for measuring environmental parameters such as temperature, humidity, and detecting unusual events such as smoke, structural failures, vibration, biological/chemical or nuclear agents. Distributed event processing algorithms will be executed by these sensor nodes to detect the propagation pattern of the disaster and to measure the concentration and activity of human traffic in different parts of the building. Based on this information, dynamic evacuation decisions are taken for maximizing the evacuation speed and minimizing unwanted incidents such as human exposure to harmful agents and stampedes near exits. A set of audio-visual indicators and actuators are used for aiding the automated evacuation process. In this paper we develop integrated protocols, algorithms and their simulation models for the proposed sensor networking and the distributed event processing framework. Also, efficient harnessing of the individually low, but collectively massive, processing abilities of the sensor nodes is a powerful concept behind our proposed distributed event processing algorithms. Results obtained through simulation in this paper are used for a detailed characterization of the proposed evacuation management system and its associated algorithmic components.

  15. A distributed data base management system. [for Deep Space Network

    NASA Technical Reports Server (NTRS)

    Bryan, A. I.

    1975-01-01

    Major system design features of a distributed data management system for the NASA Deep Space Network (DSN) designed for continuous two-way deep space communications are described. The reasons for which the distributed data base utilizing third-generation minicomputers is selected as the optimum approach for the DSN are threefold: (1) with a distributed master data base, valid data is available in real-time to support DSN management activities at each location; (2) data base integrity is the responsibility of local management; and (3) the data acquisition/distribution and processing power of a third-generation computer enables the computer to function successfully as a data handler or as an on-line process controller. The concept of the distributed data base is discussed along with the software, data base integrity, and hardware used. The data analysis/update constraint is examined.

  16. Seasat low-rate data system

    NASA Technical Reports Server (NTRS)

    Brown, J. W.; Cleven, G. C.; Klose, J. C.; Lame, D. B.; Yamarone, C. A.

    1979-01-01

    The Seasat low-rate data system, an end-to-end data-processing and data-distribution system for the four low-rate sensors (radar altimeter, Seasat-A scatterometer system, scanning multichannel microwave radiometer, and visible and infrared radiometer) carried aboard the satellite, is discussed. The function of the distributed, nonreal-time, magnetic-tape system is to apply necessary calibrations, corrections, and conversions to yield geophysically meaningful products from raw telemetry data. The algorithms developed for processing data from the different sensors are described, together with the data catalogs compiled.

  17. Electronic holography using binary phase modulation

    NASA Astrophysics Data System (ADS)

    Matoba, Osamu

    2014-06-01

    A 3D display system using a phase-only distribution is presented. In particular, a binary phase distribution is used to reconstruct a 3D object with a wide viewing-zone angle. To obtain the phase distribution to be displayed on a phase-mode spatial light modulator, both experimental and numerical processes are available; in this paper, we present a numerical process using computer graphics data. A random phase distribution is attached to all polygons of the input 3D object so that the 3D object is reconstructed well from the binary phase distribution. Numerical and experimental results are presented to show the effectiveness of the proposed system.
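
    A minimal numerical sketch of that pipeline with assumed parameters: random initial phases are attached to object points (polygons in the paper), the field is propagated to the hologram plane as spherical waves, and the resulting phase is binarized for a two-level spatial light modulator.

    ```python
    import numpy as np

    wavelength, z, pitch, n = 532e-9, 0.1, 8e-6, 256   # assumed SLM geometry
    k = 2 * np.pi / wavelength
    ys, xs = np.mgrid[:n, :n] * pitch

    # Toy "object": two point sources, each carrying a random initial phase.
    rng = np.random.default_rng(0)
    field = np.zeros((n, n), dtype=complex)
    for x0, y0 in [(0.4e-3, 0.6e-3), (1.2e-3, 0.9e-3)]:
        r = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2 + z ** 2)
        field += np.exp(1j * (k * r + rng.uniform(0, 2 * np.pi))) / r

    # Quantize the hologram phase to two levels for a binary phase-mode SLM.
    binary_phase = np.where(np.angle(field) >= 0, np.pi, 0.0)
    print(binary_phase.shape, np.unique(binary_phase))  # (256, 256) [0. pi]
    ```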

  18. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  19. AVIRIS and TIMS data processing and distribution at the land processes distributed active archive center

    NASA Technical Reports Server (NTRS)

    Mah, G. R.; Myers, J.

    1993-01-01

    The U.S. Government has initiated the Global Change Research Program, a systematic study of the Earth as a complete system. NASA's contribution to the Global Change Research Program is the Earth Observing System (EOS), a series of orbital sensor platforms and an associated data processing and distribution system. The EOS Data and Information System (EOSDIS) is the archiving, production, and distribution system for data collected by the EOS space segment and uses a multilayer architecture for processing, archiving, and distributing EOS data. The first layer consists of the spacecraft ground stations and processing facilities that receive the raw data from the orbiting platforms and then separate the data by individual sensors. The second layer consists of Distributed Active Archive Centers (DAACs) that process, distribute, and archive the sensor data. The third layer consists of a user science processing network. The EOSDIS is being developed in a phased implementation. The initial phase, Version 0, is a prototype of the operational system. Version 0 activities are based upon existing systems and are designed to provide an EOSDIS-like capability for information management and distribution. An important science support task is the creation of simulated data sets for EOS instruments from precursor aircraft or satellite data. The Land Processes DAAC, at the EROS Data Center (EDC), is responsible for archiving and processing EOS precursor data from airborne instruments such as the Thermal Infrared Multispectral Scanner (TIMS), the Thematic Mapper Simulator (TMS), and the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS). AVIRIS, TIMS, and TMS are flown by the NASA-Ames Research Center (ARC) on an ER-2. The ER-2 flies at 65,000 feet and can carry up to three sensors simultaneously. Most jointly collected data sets are somewhat boresighted and roughly registered. The instrument data are being used to construct data sets that simulate the spectral and spatial characteristics of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument scheduled to be flown on the first EOS-AM spacecraft. The ASTER is designed to acquire 14 channels of land science data in the visible and near-IR (VNIR), shortwave-IR (SWIR), and thermal-IR (TIR) regions from 0.52 micron to 11.65 micron at high spatial resolutions of 15 m to 90 m. Stereo data will also be acquired in the VNIR region in a single band. The AVIRIS and TMS cover the ASTER VNIR and SWIR bands, and the TIMS covers the TIR bands. Simulated ASTER data sets have been generated over Death Valley, California; Cuprite, Nevada; and the Drum Mountains, Utah, using a combination of AVIRIS, TIMS, and TMS data, and existing digital elevation models (DEMs) for the topographic information.

  20. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes, combined with a shorter "time-to-market", leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information & Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can comprise software systems, hardware systems, or communication networks. An appropriate IT infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating experts' knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation, the CAD system CATIA is used, coupled with the FEM simulation system INDEED for the simulation of sheet-metal forming processes and with the problem solving environment OpTiX for distributed optimization.

  1. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  2. Conversion and Validation of Distribution System Model from a QSTS-Based Tool to a Real-Time Dynamic Phasor Simulator: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  3. Exploiting virtual synchrony in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, Thomas A.

    1987-01-01

    Applications of a virtually synchronous environment are described for distributed programming, which underlies a collection of distributed programming tools in the ISIS2 system. A virtually synchronous environment allows processes to be structured into process groups, and makes events like broadcasts to the group as an entity, group membership changes, and even migration of an activity from one place to another appear to occur instantaneously, in other words, synchronously. A major advantage to this approach is that many aspects of a distributed application can be treated independently without compromising correctness. Moreover, user code that is designed as if the system were synchronous can often be executed concurrently. It is argued that this approach to building distributed and fault tolerant software is more straightforward, more flexible, and more likely to yield correct solutions than alternative approaches.

  4. NATIONAL WATER INFORMATION SYSTEM OF THE U. S. GEOLOGICAL SURVEY.

    USGS Publications Warehouse

    Edwards, Melvin D.

    1985-01-01

    The National Water Information System (NWIS) has been designed as an interactive, distributed data system. It will integrate the existing, diverse data-processing systems into a common system, and will provide easier, more flexible use as well as more convenient access and expanded computing, dissemination, and data-analysis capabilities. The NWIS is being implemented as part of a Distributed Information System (DIS) being developed by the Survey's Water Resources Division. The NWIS will be implemented on each node of the distributed network for the local processing, storage, and dissemination of hydrologic data collected within the node's area of responsibility. The processor at each node will also be used to perform hydrologic modeling, statistical data analysis, text editing, and some administrative work.

  5. Design of special purpose database for credit cooperation bank business processing network system

    NASA Astrophysics Data System (ADS)

    Yu, Yongling; Zong, Sisheng; Shi, Jinfa

    2011-12-01

    With the popularization of e-finance in cities, its construction is now extending to the vast rural market and developing rapidly in depth. Developing a business processing network system suitable for rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special purpose distributed database in a credit cooperative bank system, give the corresponding distributed database system structure, and design the special purpose database and its interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.

  6. A Disk-Based System for Producing and Distributing Science Products from MODIS

    NASA Technical Reports Server (NTRS)

    Masuoka, Edward; Wolfe, Robert; Sinno, Scott; Ye Gang; Teague, Michael

    2007-01-01

    Since beginning operations in 1999, the MODIS Adaptive Processing System (MODAPS) has evolved to take advantage of trends in information technology, such as the falling cost of computing cycles and disk storage and the availability of high quality open-source software (Linux, Apache and Perl), to achieve substantial gains in processing and distribution capacity and throughput while driving down the cost of system operations.

  7. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Thomason, Cindy; Anderson, Paul M.; Martin, James A.

    1990-01-01

    Automated power-distribution system monitors and controls electrical power to modules in network. Handles both 208-V, 20-kHz single-phase alternating current and 120- to 150-V direct current. Power distributed to load modules from power-distribution control units (PDCU's) via subsystem distributors. Ring busses carry power to PDCU's from power source. Needs minimal attention. Detects faults and also protects against them. Potential applications include autonomous land vehicles and automated industrial process systems.

  8. Distributed Information System Development: Review of Some Management Issues

    NASA Astrophysics Data System (ADS)

    Mishra, Deepti; Mishra, Alok

    Due to the proliferation of the Internet and globalization, distributed information system development is becoming popular. In this paper we review some significant management issues, such as process management, project management, requirements management and knowledge management, which have received much attention from the distributed development perspective. In this literature review we found that areas like quality and risk management have received only scant attention in distributed information system development.

  9. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby reduce the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application for obtaining debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work represents a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event filtering mechanisms and outline how our architecture will improve key aspects of event filtering.
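
    As a minimal illustration of the subscription-based filtering idea described in this record (our own sketch; the class, event fields and thresholds are invented and are not the paper's actual interfaces):

        # Toy event filter of the kind such an architecture composes:
        # subscribers register predicates, and only matching events are
        # forwarded off the monitored node, reducing monitoring traffic.

        class EventFilter:
            def __init__(self):
                self.subscriptions = []            # (predicate, callback) pairs

            def subscribe(self, predicate, callback):
                self.subscriptions.append((predicate, callback))

            def publish(self, event):
                for predicate, callback in self.subscriptions:
                    if predicate(event):           # non-matching events die here
                        callback(event)

        f = EventFilter()
        f.subscribe(lambda e: e["severity"] >= 3, lambda e: print("forward:", e))
        f.publish({"type": "cpu_load", "severity": 1})   # filtered out locally
        f.publish({"type": "node_down", "severity": 5})  # forwarded to manager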

  10. Compiling software for a hierarchical distributed processing system

    DOEpatents

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-12-31

    Compiling software for a hierarchical distributed processing system including providing to one or more compiling nodes software to be compiled, wherein at least a portion of the software to be compiled is to be executed by one or more nodes; compiling, by the compiling node, the software; maintaining, by the compiling node, any compiled software to be executed on the compiling node; selecting, by the compiling node, one or more nodes in a next tier of the hierarchy of the distributed processing system in dependence upon whether any compiled software is for the selected node or the selected node's descendents; sending to the selected node only the compiled software to be executed by the selected node or selected node's descendent.
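
    As a rough sketch of the tier-by-tier rule claimed here (assumed names and data structures, not the patented implementation), each compiling node keeps its own binaries and forwards to a next-tier node only the artifacts addressed to that node's subtree:

        # Hypothetical illustration: distribute compiled artifacts down a
        # node hierarchy, pruning anything not needed in a child's subtree.

        class Node:
            def __init__(self, name, children=None):
                self.name = name
                self.children = children or []
                self.binaries = []                 # compiled software kept here

            def descendants(self):
                for child in self.children:
                    yield child.name
                    yield from child.descendants()

        def distribute(node, compiled):
            """compiled maps a target node name to its compiled artifact."""
            if node.name in compiled:              # keep local software
                node.binaries.append(compiled[node.name])
            for child in node.children:
                subtree = {child.name} | set(child.descendants())
                # send only the software for this child or its descendants
                relevant = {n: b for n, b in compiled.items() if n in subtree}
                if relevant:
                    distribute(child, relevant)

        root = Node("root", [Node("a", [Node("a1")]), Node("b")])
        distribute(root, {"root": "r.bin", "a1": "a1.bin", "b": "b.bin"})
        print(root.binaries)                       # ['r.bin']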

  11. Method and Tool for Design Process Navigation and Automatic Generation of Simulation Models for Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Nakano, Masaru; Kubota, Fumiko; Inamori, Yutaka; Mitsuyuki, Keiji

    Manufacturing system designers should concentrate on designing and planning manufacturing systems instead of spending their efforts on creating the simulation models to verify the design. This paper proposes a method and its tool to navigate the designers through the engineering process and generate the simulation model automatically from the design results. The design agent also supports collaborative design projects among different companies or divisions with distributed engineering and distributed simulation techniques. The idea was implemented and applied to a factory planning process.

  12. SPS Energy Conversion Power Management Workshop

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Energy technology concerning photovoltaic conversion, solar thermal conversion systems, and electrical power distribution processing is discussed. The manufacturing processes involving solar cells and solar array production are summarized. Resource issues concerning gallium arsenides and silicon alternatives are reported. Collector structures for solar construction are described and estimates in their service life, failure rates, and capabilities are presented. Theories of advanced thermal power cycles are summarized. Power distribution system configurations and processing components are presented.

  13. Design alternatives for process group membership and multicast

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is 'transparent' to other processes in the system.

  14. A distributed approach for optimizing cascaded classifier topologies in real-time stream mining systems.

    PubMed

    Foo, Brian; van der Schaar, Mihaela

    2010-11-01

    In this paper, we discuss distributed optimization techniques for configuring classifiers in a real-time, informationally-distributed stream mining system. Due to the large volume of streaming data, stream mining systems must often cope with overload, which can lead to poor performance and intolerable processing delay for real-time applications. Furthermore, optimizing over an entire system of classifiers is a difficult task since changing the filtering process at one classifier can impact both the feature values of data arriving at classifiers further downstream and thus, the classification performance achieved by an ensemble of classifiers, as well as the end-to-end processing delay. To address this problem, this paper makes three main contributions: 1) Based on classification and queuing theoretic models, we propose a utility metric that captures both the performance and the delay of a binary filtering classifier system. 2) We introduce a low-complexity framework for estimating the system utility by observing, estimating, and/or exchanging parameters between the inter-related classifiers deployed across the system. 3) We provide distributed algorithms to reconfigure the system, and analyze the algorithms based on their convergence properties, optimality, information exchange overhead, and rate of adaptation to non-stationary data sources. We provide results using different video classifier systems.
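
    As a toy rendering of contribution 1 (a utility that couples ensemble accuracy with end-to-end delay), the sketch below multiplies per-stage detection rates and charges an M/M/1 queuing delay at each classifier; the functional form, the M/M/1 assumption and all numbers are illustrative assumptions, not the authors' exact model:

        # Utility of a binary filtering classifier chain: reward end-to-end
        # detection performance, penalize expected queuing delay.

        def mm1_delay(arrival_rate, service_rate):
            """Expected sojourn time in an M/M/1 queue; infinite if overloaded."""
            if arrival_rate >= service_rate:
                return float("inf")
            return 1.0 / (service_rate - arrival_rate)

        def chain_utility(stages, arrival_rate, delay_weight=1.0):
            """stages: list of (true_positive_rate, forward_fraction, service_rate)."""
            accuracy, delay, rate = 1.0, 0.0, arrival_rate
            for tpr, forward, mu in stages:
                accuracy *= tpr                # a detection must survive every filter
                delay += mm1_delay(rate, mu)   # queuing at this classifier
                rate *= forward                # filtering thins downstream traffic
            return accuracy - delay_weight * delay

        print(chain_utility([(0.95, 0.4, 10.0), (0.9, 0.2, 5.0)], arrival_rate=6.0))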

  15. A Visual Insight into the Degradation of Metals Used in Drinking Water Distribution Systems Using AFM

    EPA Science Inventory

    Evaluating the fundamental corrosion and passivation of metallic copper used in drinking water distribution materials is important in understanding the overall mechanism of the corrosion process. Copper pipes are widely used for drinking water distribution systems and although it...

  16. Transforming for Distribution Based Logistics

    DTIC Science & Technology

    2005-05-26

    distribution process, and extracts elements of distribution and distribution management. Finally, characteristics of an effective Army distribution... eventually evolve into a Distribution Management Element. Each organization is examined based on its ability to provide centralized command, with an... distribution and distribution management that together form the distribution system. Clearly all of the physical distribution activities including

  17. Distributed software framework and continuous integration in hydroinformatics systems

    NASA Astrophysics Data System (ADS)

    Zhou, Jianzhong; Zhang, Wei; Xie, Mengfei; Lu, Chengwei; Chen, Xiao

    2017-08-01

    When encountering multiple and complicated models, multisource structured and unstructured data, and complex requirements analysis, the platform design and integration of hydroinformatics systems become a challenge. To properly solve these problems, we describe a distributed software framework and its continuous integration process for hydroinformatics systems. This distributed framework mainly consists of a server cluster for models, a distributed database, GIS (Geographic Information System) servers, a master node and clients. Based on it, a GIS-based decision support system for the joint regulation of water quantity and water quality of a group of lakes in Wuhan, China is established.

  18. Simulating fail-stop in asynchronous distributed systems

    NASA Technical Reports Server (NTRS)

    Sabel, Laura; Marzullo, Keith

    1994-01-01

    The fail-stop failure model appears frequently in the distributed systems literature. However, in an asynchronous distributed system, the fail-stop model cannot be implemented. In particular, it is impossible to reliably detect crash failures in an asynchronous system. In this paper, we show that it is possible to specify and implement a failure model that is indistinguishable from the fail-stop model from the point of view of any process within an asynchronous system. We give necessary conditions for a failure model to be indistinguishable from the fail-stop model, and derive lower bounds on the amount of process replication needed to implement such a failure model. We present a simple one-round protocol for implementing one such failure model, which we call simulated fail-stop.

  19. A Model of the Base Civil Engineering Work Request/Work Order Processing System.

    DTIC Science & Technology

    1979-09-01

    changes to the work order processing system. This research identifies the variables that significantly affect the accomplishment time and proposes a... order processing system and its behavior with respect to work order processing time. A conceptual model was developed to describe the work request/work order processing system as a stochastic queueing system in which the processing times and the various distributions are treated as random variables

  20. PILOT: An intelligent distributed operations support system

    NASA Technical Reports Server (NTRS)

    Rasmussen, Arthur N.

    1993-01-01

    The Real-Time Data System (RTDS) project is exploring the application of advanced technologies to the real-time flight operations environment of the Mission Control Centers at NASA's Johnson Space Center. The system, based on a network of engineering workstations, provides services such as delivery of real time telemetry data to flight control applications. To automate the operation of this complex distributed environment, a facility called PILOT (Process Integrity Level and Operation Tracker) is being developed. PILOT comprises a set of distributed agents cooperating with a rule-based expert system; together they monitor process operation and data flows throughout the RTDS network. The goal of PILOT is to provide unattended management and automated operation under user control.

  1. Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture

    NASA Astrophysics Data System (ADS)

    Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel

    2003-11-01

    Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.

  2. Flexible distributed architecture for semiconductor process control and experimentation

    NASA Astrophysics Data System (ADS)

    Gower, Aaron E.; Boning, Duane S.; McIlrath, Michael B.

    1997-01-01

    Semiconductor fabrication requires an increasingly expensive and integrated set of tightly controlled processes, driving the need for a fabrication facility with fully computerized, networked processing equipment. We describe an integrated, open system architecture enabling distributed experimentation and process control for plasma etching. The system was developed at MIT's Microsystems Technology Laboratories and employs in-situ CCD interferometry-based analysis in the sensor-feedback control of an Applied Materials Precision 5000 Plasma Etcher (AME5000). Our system supports accelerated, advanced research involving feedback control algorithms, and includes a distributed interface that utilizes the internet to make these fabrication capabilities available to remote users. The system architecture is both distributed and modular: specific implementation of any one task does not restrict the implementation of another. The low level architectural components include a host controller that communicates with the AME5000 equipment via SECS-II, and a host controller for the acquisition and analysis of the CCD sensor images. A cell controller (CC) manages communications between these equipment and sensor controllers. The CC is also responsible for process control decisions; algorithmic controllers may be integrated locally or via remote communications. Finally, a system server accepts connections from internet/intranet (web) based clients and uses a direct link with the CC to access the system. Each component communicates via a predefined set of TCP/IP socket based messages. This flexible architecture makes integration easier and more robust, and enables separate software components to run on the same or different computers independent of hardware or software platform.
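
    The "predefined set of TCP/IP socket based messages" suggests a simple framed request/reply exchange; the sketch below shows one plausible realization with length-prefixed JSON frames (message names and fields are hypothetical, not the MIT system's actual protocol):

        # Length-prefixed JSON messaging between a cell controller (client)
        # and an equipment controller (server stand-in for the AME5000 host).

        import json, socket, struct, threading

        def send_msg(sock, payload):
            data = json.dumps(payload).encode()
            sock.sendall(struct.pack("!I", len(data)) + data)  # 4-byte length prefix

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("socket closed")
                buf += chunk
            return buf

        def recv_msg(sock):
            (length,) = struct.unpack("!I", recv_exact(sock, 4))
            return json.loads(recv_exact(sock, length))

        def equipment_controller(server):
            conn, _ = server.accept()
            request = recv_msg(conn)
            send_msg(conn, {"ack": request["type"], "status": "ok"})
            conn.close()

        server = socket.socket()
        server.bind(("127.0.0.1", 0))
        server.listen(1)
        threading.Thread(target=equipment_controller, args=(server,)).start()

        cc = socket.create_connection(server.getsockname())    # cell controller
        send_msg(cc, {"type": "START_ETCH", "recipe": "demo"})
        print(recv_msg(cc))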

  3. Exploiting Virtual Synchrony in Distributed Systems

    DTIC Science & Technology

    1987-02-01

    for distributed systems yield the best performance relative to the level of synchronization guaranteed by the primitive. A programmer could then... synchronization facility. Semaphores: replicated binary and general semaphores. Monitors: monitor lock, condition variables and signals. Deadlock detection: ... We describe applications of a new software abstraction called the virtually synchronous process group. Such a group consists of a set of processes

  4. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  5. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks in simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.

  6. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures for manufacturing systems. Much research has been carried out on distributed architectures for the planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of the human operators, the resources and the jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select the manufacturing processes they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility-value-based method and the dispatching-rule-based method proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.
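
    The three-step round is easy to mimic in code. In the sketch below (our illustration; the preference values and scores are invented, and the paper's utility-value and dispatching-rule methods are approximated by a greedy highest-score matching):

        # Step 1: operators pick processes by preference; steps 2 and 3 pair
        # machines/jobs and AGVs/jobs with a greedy one-to-one matching.

        def greedy_match(scores):
            """scores: dict (left, right) -> value; returns one-to-one pairs."""
            pairs, used_l, used_r = [], set(), set()
            for (l, r), _ in sorted(scores.items(), key=lambda kv: -kv[1]):
                if l not in used_l and r not in used_r:
                    pairs.append((l, r))
                    used_l.add(l)
                    used_r.add(r)
            return pairs

        operator_prefs = {"op1": {"weld": 0.9, "inspect": 0.4},
                          "op2": {"weld": 0.3, "inspect": 0.8}}
        step1 = {op: max(p, key=p.get) for op, p in operator_prefs.items()}

        step2 = greedy_match({("mill", "jobA"): 0.7, ("mill", "jobB"): 0.5,
                              ("lathe", "jobB"): 0.9})
        step3 = greedy_match({("agv1", "jobA"): 0.6, ("agv2", "jobB"): 0.8})
        print(step1, step2, step3)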

  7. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

    History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ~ x^(-λ), where the exponent directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks and to aging processes such as fragmentation. SSR processes provide a new alternative to understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
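
    The SSR mechanism is simple enough to check numerically. The sketch below (our illustration, not the authors' code) runs a noisy SSR process with mixing ratio lam and counts state visits, which for lam = 1 should decay like x^(-1), the Zipf case:

        import random
        from collections import Counter

        def ssr_run(n_states=1000, lam=1.0, n_restarts=20000):
            visits = Counter()
            for _ in range(n_restarts):
                x = n_states
                while x > 1:
                    if random.random() < lam:
                        x = random.randint(1, x - 1)     # SSR step: space shrinks
                    else:
                        x = random.randint(1, n_states)  # noise step: full space
                    visits[x] += 1
            return visits

        v = ssr_run(lam=1.0)
        for x in (1, 10, 100):                  # expect roughly 20000, 2000, 200
            print(x, v[x])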

  8. A two-stage predictive model to simultaneous control of trihalomethanes in water treatment plants and distribution systems: adaptability to treatment processes.

    PubMed

    Domínguez-Tello, Antonio; Arias-Borrego, Ana; García-Barrera, Tamara; Gómez-Ariza, José Luis

    2017-10-01

    Trihalomethanes (TTHMs) and other disinfection by-products (DBPs) are formed in drinking water by the reaction of chlorine with organic precursors contained in the source water, in two consecutive and linked stages: the first at the treatment plant, and the second along the distribution system (DS), by reaction of residual chlorine with organic precursors not removed earlier. Following this approach, this study aimed at developing a two-stage empirical model for predicting the formation of TTHMs in the water treatment plant and subsequently their evolution along the water distribution system (WDS). The aim of the two-stage model was to improve the predictive capability for a wide range of water treatment and distribution scenarios. The two-stage model was developed using multiple regression analysis from a database (January 2007 to July 2012) covering three different treatment processes (conventional and advanced) in the water supply system of the Aljaraque area (southwest Spain). The new model was then validated using a recent database from the same water supply system (January 2011 to May 2015). The validation results indicated no significant difference between the predicted and observed values of TTHM (R² = 0.874, analytical variance < 17%). The new model was applied to three different supply systems with different treatment processes and different characteristics. Acceptable predictions were obtained in the three distribution systems studied, proving the adaptability of the new model to the boundary conditions. Finally, the predictive capability of the new model was compared with 17 other models selected from the literature, showing satisfactory prediction results and excellent adaptability to treatment processes.
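
    A minimal sketch of the two-stage structure, on synthetic data (the predictors, coefficients and noise below are invented for illustration and are not the paper's fitted model): stage 1 regresses TTHM leaving the plant on raw-water quality, and stage 2 propagates that estimate along the distribution system using residual chlorine and residence time:

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        n = 200
        toc = rng.uniform(1, 5, n)                      # total organic carbon
        cl_dose = rng.uniform(0.5, 2, n)                # chlorine dose
        temp = rng.uniform(5, 25, n)                    # water temperature
        tthm_plant = 8 * toc + 12 * cl_dose + 0.9 * temp + rng.normal(0, 3, n)

        x1 = np.column_stack([toc, cl_dose, temp])
        stage1 = LinearRegression().fit(x1, tthm_plant)  # formation at the plant

        residual_cl = rng.uniform(0.1, 1, n)             # residual chlorine in DS
        hours = rng.uniform(1, 48, n)                    # residence time
        tthm_ds = tthm_plant + 6 * residual_cl * np.log1p(hours) + rng.normal(0, 3, n)

        x2 = np.column_stack([stage1.predict(x1), residual_cl, hours])
        stage2 = LinearRegression().fit(x2, tthm_ds)     # evolution along the DS
        print(round(stage2.score(x2, tthm_ds), 3))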

  9. A Distributed Operating System for BMD Applications.

    DTIC Science & Technology

    1982-01-01

    Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions...make the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms

  10. The Land Processes Distributed Active Archive Center (LP DAAC)

    USGS Publications Warehouse

    Golon, Danielle K.

    2016-10-03

    The Land Processes Distributed Active Archive Center (LP DAAC) operates as a partnership with the U.S. Geological Survey and is 1 of 12 DAACs within the National Aeronautics and Space Administration (NASA) Earth Observing System Data and Information System (EOSDIS). The LP DAAC ingests, archives, processes, and distributes NASA Earth science remote sensing data. These data are provided to the public at no charge. Data distributed by the LP DAAC provide information about Earth’s surface from daily to yearly intervals and at 15 to 5,600 meter spatial resolution. Data provided by the LP DAAC can be used to study changes in agriculture, vegetation, ecosystems, elevation, and much more. The LP DAAC provides several ways to access, process, and interact with these data. In addition, the LP DAAC is actively archiving new datasets to provide users with a variety of data to study the Earth.

  11. A Distributed Architecture for Tsunami Early Warning and Collaborative Decision-support in Crises

    NASA Astrophysics Data System (ADS)

    Moßgraber, J.; Middleton, S.; Hammitzsch, M.; Poslad, S.

    2012-04-01

    The presentation will describe work on the system architecture that is being developed in the EU FP7 project TRIDEC on "Collaborative, Complex and Critical Decision-Support in Evolving Crises". The challenges for a Tsunami Early Warning System (TEWS) are manifold, and the success of a system depends crucially on its architecture. A modern warning system following a system-of-systems approach has to integrate various components and sub-systems such as different information sources, services and simulation systems. Furthermore, it has to take into account the distributed and collaborative nature of warning systems. In order to create an architecture that supports the whole spectrum of a modern, distributed and collaborative warning system, one must deal with multiple challenges. Obviously, one cannot expect to tackle these challenges adequately with a monolithic system or with a single technology. Therefore, a system architecture providing the blueprints to implement the system-of-systems approach has to combine multiple technologies and architectural styles. At the bottom layer it has to reliably integrate a large set of conventional sensors, such as seismic sensors and sensor networks, buoys and tide gauges, and also innovative and unconventional sensors, such as streams of messages from social media services. At the top layer it has to support collaboration on high-level decision processes and facilitate information sharing between organizations. In between, the system has to process all data and integrate information on a semantic level in a timely manner. This complex communication follows an event-driven mechanism allowing events to be published, detected and consumed by various applications within the architecture. Therefore, at the upper layer the event-driven architecture (EDA) aspects are combined with principles of service-oriented architectures (SOA) using standards for communication and data exchange. The most prominent challenges at this layer include providing a framework for information integration on a syntactic and semantic level, leveraging distributed processing resources for a scalable data processing platform, and automating data processing and decision support workflows.

  12. Forensic Analysis of Window’s(Registered) Virtual Memory Incorporating the System’s Page-File

    DTIC Science & Technology

    2008-12-01

    ...data in a meaningful way. One reason for this is how memory is managed by the operating system. Data belonging to one process can be distributed arbitrarily across

  13. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  14. Conference on Real-Time Computer Applications in Nuclear, Particle and Plasma Physics, 6th, Williamsburg, VA, May 15-19, 1989, Proceedings

    NASA Technical Reports Server (NTRS)

    Pordes, Ruth (Editor)

    1989-01-01

    Papers on real-time computer applications in nuclear, particle, and plasma physics are presented, covering topics such as expert systems tactics in testing FASTBUS segment interconnect modules, trigger control in a high energy physics experiment, the FASTBUS read-out system for the Aleph time projection chamber, multiprocessor data acquisition systems, DAQ software architecture for Aleph, a VME multiprocessor system for plasma control at the JT-60 upgrade, and a multitasking, multisinked, multiprocessor data acquisition front end. Other topics include real-time data reduction using a microVAX processor, a transputer based coprocessor for VEDAS, simulation of a macropipelined multi-CPU event processor for use in FASTBUS, a distributed VME control system for the LISA superconducting Linac, and a distributed system for laboratory process automation. Additional topics include a structure macro assembler for the event handler, a data acquisition and control system for Thomson scattering on ATF, remote procedure execution software for distributed systems, and a PC-based graphic display of real-time particle beam uniformity.

  15. Distributed intrusion detection system based on grid security model

    NASA Astrophysics Data System (ADS)

    Su, Jie; Liu, Yahui

    2008-03-01

    Grid computing has developed rapidly with the development of network technology, and it can solve the problem of large-scale complex computing by sharing large-scale computing resources. In a grid environment, a distributed and load-balanced intrusion detection system can be realized. This paper first discusses the security mechanism in grid computing and the function of PKI/CA in the grid security system, then presents the application of grid computing characteristics in a distributed intrusion detection system (IDS) based on an Artificial Immune System. Finally, it gives a distributed intrusion detection system based on the grid security model that can reduce processing delay and assure detection rates.

  16. Volcanic Ash Data Assimilation System for Atmospheric Transport Model

    NASA Astrophysics Data System (ADS)

    Ishii, K.; Shimbori, T.; Sato, E.; Tokumoto, T.; Hayashi, Y.; Hashimoto, A.

    2017-12-01

    The Japan Meteorological Agency (JMA) has two operations for volcanic ash forecasts: Volcanic Ash Fall Forecast (VAFF) and Volcanic Ash Advisory (VAA). In these operations, the forecasts are calculated by atmospheric transport models including the advection process, the turbulent diffusion process, the gravitational fall process and the (wet/dry) deposition process. The initial distribution of volcanic ash in the models is the most important but most uncertain factor. In operations, the model of Suzuki (1983), with many empirical assumptions, is adopted for the initial distribution. This adversely affects the reconstruction of actual eruption plumes. We are developing a volcanic ash data assimilation system using weather radars and meteorological satellite observations, in order to improve the initial distribution in the atmospheric transport models. Our data assimilation system is based on the three-dimensional variational data assimilation method (3D-Var); the standard form of its cost function is given below. The analysis variables are ash concentration and size distribution parameters, which are mutually independent. The radar observation is expected to provide three-dimensional parameters such as ash concentration and parameters of the ash particle size distribution. On the other hand, the satellite observation is anticipated to provide two-dimensional parameters of ash clouds such as mass loading, top height and particle effective radius. In this study, we estimate the thickness of ash clouds using the vertical wind shear from JMA numerical weather prediction, and apply it in the volcanic ash data assimilation system.
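
    For reference, a 3D-Var analysis minimizes the standard textbook cost function (the general formulation; JMA's operational covariances and control variables are not specified here):

        J(\mathbf{x}) = \tfrac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                      + \tfrac{1}{2}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathsf{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr)

    where x is the analysis state (here, ash concentration and size-distribution parameters), x_b the background (first guess), y the radar and satellite observations, H the observation operator mapping the state to observation space, and B and R the background and observation error covariance matrices.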

  17. Research on distributed optical fiber sensing data processing method based on LabVIEW

    NASA Astrophysics Data System (ADS)

    Li, Zhonghu; Yang, Meifang; Wang, Luling; Wang, Jinming; Yan, Junhong; Zuo, Jing

    2018-01-01

    The pipeline leak detection and leak location problems have received extensive attention in industry. In this paper, a distributed optical fiber sensing system is designed for a heat supply pipeline. The data processing method for distributed optical fiber sensing based on LabVIEW is studied emphatically. The hardware system includes a laser, sensing optical fiber, wavelength division multiplexer, photoelectric detector, data acquisition card and computer. The software system is developed using LabVIEW. The software system adopts a wavelet denoising method to deal with the temperature information, which improves the SNR. By extracting the characteristic values of the fiber temperature information, the system can realize temperature measurement, leak location, and measurement signal storage and inquiry. Compared with the traditional negative pressure wave method or acoustic signal method, the distributed optical fiber temperature measuring system can measure several temperatures in one measurement and locate the leak point accurately. It has broad application prospects.
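
    The wavelet-denoising step can be sketched outside LabVIEW; the example below uses PyWavelets with a soft universal threshold (the wavelet choice, threshold rule and simulated trace are our assumptions, not the paper's implementation):

        import numpy as np
        import pywt

        def wavelet_denoise(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale estimate
            thr = sigma * np.sqrt(2 * np.log(len(signal)))   # universal threshold
            coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        x = np.linspace(0, 1000, 4096)                  # metres along the fibre
        temp = 20 + 8 * np.exp(-((x - 600) ** 2) / 50)  # localized hot spot (leak)
        noisy = temp + np.random.normal(0, 1.5, x.size)
        clean = wavelet_denoise(noisy)[: x.size]
        print(float(x[np.argmax(clean)]))               # estimated leak location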

  18. Scheduling based on a dynamic resource connection

    NASA Astrophysics Data System (ADS)

    Nagiyev, A. E.; Botygin, I. A.; Shersntneva, A. I.; Konyaev, P. A.

    2017-02-01

    The practical use of distributed computing systems is associated with many problems, including the organization of effective interaction between the agents located at the nodes of the system, the specific configuration of each node to perform a certain task, the effective distribution of the available information and computational resources of the system, and the control of the multithreading that implements the logic of solving research problems. This article describes a method of computing load balancing in distributed automatic systems, focused on multi-agent and multi-threaded data processing. A scheme for controlling the processing of requests from terminal devices is offered, providing effective dynamic scaling of computing power under peak load. The results of model experiments on the developed load scheduling algorithm are set out. These results show the effectiveness of the algorithm even with a significant expansion in the number of connected nodes and scaling of the architecture of the distributed computing system.
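
    A toy version of the scheduling idea (all names and costs invented): requests from terminal devices go to the least-loaded node, and new nodes can be attached at runtime, which is the dynamic scaling under peak load described above:

        import heapq

        class Scheduler:
            def __init__(self):
                self.nodes = []                        # heap of (load, node_name)

            def attach(self, name):
                heapq.heappush(self.nodes, (0.0, name))   # dynamic connection

            def dispatch(self, cost):
                load, name = heapq.heappop(self.nodes)    # least-loaded node
                heapq.heappush(self.nodes, (load + cost, name))
                return name

        s = Scheduler()
        for n in ("node1", "node2"):
            s.attach(n)
        print([s.dispatch(1.0) for _ in range(3)])
        s.attach("node3")                              # peak load: add capacity
        print([s.dispatch(1.0) for _ in range(3)])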

  19. The WorkPlace distributed processing environment

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Henderson, Scott

    1993-01-01

    Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.

  20. Using process groups to implement failure detection in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1991-01-01

    Agreement on the membership of a group of processes in a distributed system is a basic problem that arises in a wide range of applications. Such groups occur when a set of processes cooperate to perform some task, share memory, monitor one another, subdivide a computation, and so forth. The group membership problem is discussed as it relates to failure detection in asynchronous, distributed systems. A rigorous, formal specification for group membership is presented under this interpretation. A solution is then presented for this problem.

  1. South Carolina | Midmarket Solar Policies in the United States | Solar

    Science.gov Websites

    South Carolina has a voluntary renewable energy goal of 2% distributed energy in 2021, with a carve-out of 0.25% of total generation for distributed generation by 2021 (a goal rather than a binding energy portfolio standard). The Distributed Energy Resource Fast Track Process/Study Process system size limit is not specified; South Carolina Public Service...

  2. Analyzing Distributed Functions in an Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Massie, Michael J.

    2010-01-01

    Large scale integration of today's aerospace systems is achievable through the use of distributed systems. Validating the safety of distributed systems is significantly more difficult as compared to centralized systems because of the complexity of the interactions between simultaneously active components. Integrated hazard analysis (IHA), a process used to identify unacceptable risks and to provide a means of controlling them, can be applied to either centralized or distributed systems. IHA, though, must be tailored to fit the particular system being analyzed. Distributed systems, for instance, must be analyzed for hazards in terms of the functions that rely on them. This paper will describe systems-oriented IHA techniques (as opposed to traditional failure-event or reliability techniques) that should be employed for distributed systems in aerospace environments. Special considerations will be addressed when dealing with specific distributed systems such as active thermal control, electrical power, command and data handling, and software systems (including the interaction with fault management systems). Because of the significance of second-order effects in large scale distributed systems, the paper will also describe how to analyze secondary functions to secondary functions through the use of channelization.

  3. Bibliography On Multiprocessors And Distributed Processing

    NASA Technical Reports Server (NTRS)

    Miya, Eugene N.

    1988-01-01

    The Multiprocessor and Distributed Processing Bibliography package consists of a large machine-readable bibliographic data base which, in addition to usual keyword searches, is used for producing citations, indexes, and cross-references. The data base contains UNIX(R) "refer"-formatted ASCII data and is implemented on any computer running under the UNIX(R) operating system. It is easily convertible to other operating systems and requires approximately one megabyte of secondary storage. The bibliography was compiled in 1985.

  4. Methods and apparatuses for information analysis on shared and distributed computing systems

    DOEpatents

    Bohn, Shawn J [Richland, WA; Krishnan, Manoj Kumar [Richland, WA; Cowley, Wendy E [Richland, WA; Nieplocha, Jarek [Richland, WA

    2011-02-22

    Apparatuses and computer-implemented methods for analyzing, on shared and distributed computing systems, information comprising one or more documents are disclosed according to some aspects. In one embodiment, information analysis can comprise distributing one or more distinct sets of documents among each of a plurality of processes, wherein each process performs operations on a distinct set of documents substantially in parallel with other processes. Operations by each process can further comprise computing term statistics for terms contained in each distinct set of documents, thereby generating a local set of term statistics for each distinct set of documents. Still further, operations by each process can comprise contributing the local sets of term statistics to a global set of term statistics, and participating in generating a major term set from an assigned portion of a global vocabulary.
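
    A hedged sketch of the claimed flow (our illustration of the idea, not the patented code): each process computes term statistics for its own distinct document set in parallel, the local counts are merged into a global set, and a major term set is drawn from it:

        from collections import Counter
        from multiprocessing import Pool

        def local_term_stats(docs):
            """Per-process step: term frequencies for one distinct document set."""
            stats = Counter()
            for doc in docs:
                stats.update(doc.lower().split())
            return stats

        if __name__ == "__main__":
            partitions = [["distributed systems scale", "systems fail"],
                          ["distributed computing", "computing at scale"]]
            with Pool(processes=len(partitions)) as pool:
                local_sets = pool.map(local_term_stats, partitions)
            global_stats = sum(local_sets, Counter())  # contribute to global set
            major_terms = [t for t, c in global_stats.items() if c >= 2]
            print(sorted(major_terms))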

  5. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as the ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. Moreover, we demonstrate the approach for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  6. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...

    2015-01-15

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as the ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. Moreover, we demonstrate the approach for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.
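
    The parameter estimation step can be pictured with a hedged toy example: if the voltage drop along a line is approximately R times the current inferred from metered power, the effective impedance can be recovered by least squares from AMI samples. The single-line model, the synthetic data, and all parameter values below are assumptions for illustration; the paper's method is richer.

      import numpy as np

      rng = np.random.default_rng(0)
      v_source = 240.0                      # feeder-head voltage (V), assumed known
      r_true = 0.08                         # effective line resistance (ohm), to recover

      p_load = rng.uniform(500, 5000, 200)  # metered real power (W) per interval
      i_load = p_load / v_source            # crude current estimate from power
      v_meter = v_source - r_true * i_load + rng.normal(0, 0.05, 200)  # noisy AMI voltage

      # Least-squares fit of the voltage drop equation: V_source - V_meter = R * I
      drop = v_source - v_meter
      r_est, *_ = np.linalg.lstsq(i_load[:, None], drop, rcond=None)
      print(f"estimated R = {r_est[0]:.4f} ohm (true {r_true})")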

  7. ICESat Science Investigator led Processing System (I-SIPS)

    NASA Astrophysics Data System (ADS)

    Bhardwaj, S.; Bay, J.; Brenner, A.; Dimarzio, J.; Hancock, D.; Sherman, M.

    2003-12-01

    The ICESat Science Investigator-led Processing System (I-SIPS) generates the GLAS standard data products. It consists of two main parts: the Scheduling and Data Management System (SDMS) and the Geoscience Laser Altimeter System (GLAS) Science Algorithm Software. The system has been operational since the successful launch of ICESat. It ingests data from the GLAS instrument, generates GLAS data products, and distributes them to the GLAS Science Computing Facility (SCF), the Instrument Support Facility (ISF) and the National Snow and Ice Data Center (NSIDC) ECS DAAC. The SDMS is the planning, scheduling and data management system that runs the GLAS Science Algorithm Software (GSAS). GSAS is based on the Algorithm Theoretical Basis Documents provided by the Science Team and is developed independently of SDMS. The SDMS provides the processing environment to plan jobs based on existing data, control job flow, data distribution, and archiving. The SDMS design is based on a mission-independent architecture that imposes few constraints on the science code, thereby facilitating I-SIPS integration. I-SIPS currently works in an autonomous manner to ingest GLAS instrument data, distribute these data to the ISF, run the science processing algorithms to produce the GLAS standard products, reprocess data when new versions of science algorithms are released, and distribute the products to the SCF, ISF, and NSIDC. I-SIPS has a proven performance record, having delivered data to the SCF within hours after the initial instrument activation. The I-SIPS design philosophy gives this system a high potential for reuse in other science missions.

  8. An Investigation of Data Overload in Team-Based Distributed Cognition Systems

    ERIC Educational Resources Information Center

    Hellar, David Benjamin

    2009-01-01

    The modern military command center is a hybrid system of computer automated surveillance and human oriented decision making. In these distributed cognition systems, data overload refers simultaneously to the glut of raw data processed by information technology systems and the dearth of actionable knowledge useful to human decision makers.…

  9. a New Initiative for Tiling, Stitching and Processing Geospatial Big Data in Distributed Computing Environments

    NASA Astrophysics Data System (ADS)

    Olasz, A.; Nguyen Thai, B.; Kristóf, D.

    2016-06-01

    Within recent years, several new approaches and solutions for Big Data processing have been developed. The geospatial world still faces the lack of well-established distributed processing solutions tailored to the amount and heterogeneity of geodata, especially when fast data processing is a must. The goal of such systems is to improve processing time by distributing data transparently across processing (and/or storage) nodes. These types of methodology are based on the concept of divide and conquer. Nevertheless, in the context of geospatial processing, most of the distributed computing frameworks have important limitations regarding both data distribution and data partitioning methods. Moreover, flexibility and extensibility for handling various data types (often in binary formats) are also strongly required. This paper presents a concept for tiling, stitching and processing of big geospatial data. The system is based on the IQLib concept (https://github.com/posseidon/IQLib/) developed in the frame of the IQmulus EU FP7 research and development project (http://www.iqmulus.eu). The data distribution framework has no limitations on programming language environment and can execute scripts (and workflows) written in different development frameworks (e.g. Python, R or C#). It is capable of processing raster, vector and point cloud data. The prototype is presented through a case study dealing with country-wide processing of raster imagery. Further investigations on algorithmic and implementation details are the focus of near-future work.
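
    The divide-and-conquer idea can be sketched as follows, with a NumPy array standing in for a raster. This is an illustrative sketch only, not the IQLib API; the tile size and the per-tile computation are assumptions.

      import numpy as np
      from multiprocessing import Pool

      def tiles(raster, size):
          """Partition a 2-D raster into ((row, col), tile) pieces."""
          h, w = raster.shape
          for i in range(0, h, size):
              for j in range(0, w, size):
                  yield (i, j), raster[i:i + size, j:j + size]

      def process_tile(item):
          (i, j), tile = item
          return (i, j), tile * 2.0          # placeholder per-tile computation

      def stitch(pieces, shape):
          """Reassemble processed tiles into a full raster."""
          out = np.empty(shape)
          for (i, j), tile in pieces:
              out[i:i + tile.shape[0], j:j + tile.shape[1]] = tile
          return out

      if __name__ == "__main__":
          raster = np.arange(64, dtype=float).reshape(8, 8)
          with Pool(2) as pool:                # tiles processed in parallel
              pieces = pool.map(process_tile, list(tiles(raster, 4)))
          assert np.allclose(stitch(pieces, raster.shape), raster * 2.0)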

  10. Red mud flocculation process in alumina production

    NASA Astrophysics Data System (ADS)

    Fedorova, E. R.; Firsov, A. Yu

    2018-05-01

    The process of thickening and washing red mud is a bottleneck of alumina production. Existing automated control systems for the thickening process stabilize the parameters of the primary technological circuits of the thickener. A current direction of scientific research is the creation and improvement of models and model-based control systems for the thickening process. But the known models do not fully consider the presence of perturbing effects, in particular the particle size distribution in the feed, or the distribution of floccules by size after the aggregation process in the feed barrel. The article is devoted to the basic concepts and terms used in writing the population balance algorithm. The population balance model is implemented in the MatLab environment. The result of the simulation is the particle size distribution after the flocculation process. This model makes it possible to predict the distribution of floccules after the aggregation of red mud in the feed barrel. Red mud from Jamaican bauxite served as the industrial sample; a Cytec Industries HX-3000 series flocculant at a concentration of 0.5% was used. The simulation used model constants obtained in a tubular tank in the laboratories of CSIRO (Australia).
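
    For readers unfamiliar with population balance models, a toy discretized Smoluchowski coagulation model conveys the idea: each size class gains aggregates from collisions of smaller ones and loses them to further aggregation. The constant kernel and all parameters below are assumptions for illustration; the paper's flocculation kernels and CSIRO-fitted constants are not reproduced here.

      import numpy as np

      K = 1e-3            # constant aggregation kernel (assumed)
      nbins = 20          # size classes: class k holds aggregates of k primary particles
      dt, steps = 0.1, 500

      n = np.zeros(nbins + 1)
      n[1] = 1000.0       # start with primary particles only (class 1)

      for _ in range(steps):
          dn = np.zeros_like(n)
          for k in range(1, nbins + 1):
              # birth: collisions of classes i and k-i; death: collisions with anything
              birth = 0.5 * sum(K * n[i] * n[k - i] for i in range(1, k))
              death = n[k] * K * n[1:].sum()
              dn[k] = birth - death
          n += dt * dn    # explicit Euler step

      print("floccule size distribution (class: count):")
      print({k: round(float(n[k]), 2) for k in range(1, 6)})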

  11. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing.

    PubMed

    Ölçer, İbrahim; Öncü, Ahmet

    2017-06-05

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that SNR values of more than 10 dB can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems.

  12. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing

    PubMed Central

    Ölçer, İbrahim; Öncü, Ahmet

    2017-01-01

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that SNR values of more than 10 dB can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems. PMID:28587240
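
    The SNR-maximization idea behind matched filtering can be illustrated generically: in white noise, correlating the record against the known pulse shape is the linear operation that maximizes output SNR. The sketch below is not the paper's adaptive ϕ-OTDR processing; the signal shape, noise level, and event location are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      template = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 100))  # known pulse shape
      trace = rng.normal(0.0, 1.0, 2000)                         # white-noise record
      trace[700:800] += 0.8 * template                           # weak buried event

      # Correlating against the template maximizes output SNR in white noise.
      matched = np.correlate(trace, template, mode="same")
      print("raw-peak index     :", int(np.argmax(np.abs(trace))))
      print("filtered-peak index:", int(np.argmax(np.abs(matched))))  # likely near 750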

  13. An RFID-Based Manufacturing Control Framework for Loosely Coupled Distributed Manufacturing System Supporting Mass Customization

    NASA Astrophysics Data System (ADS)

    Chen, Ruey-Shun; Tsai, Yung-Shun; Tu, Arthur

    In this study we propose a manufacturing control framework based on radio-frequency identification (RFID) technology and a distributed information system to construct a mass-customization production process in a loosely coupled shop-floor control environment. On the basis of this framework, we developed RFID middleware and an integrated information system for tracking and controlling the manufacturing process flow. A bicycle manufacturer was used to demonstrate the prototype system. The findings of this study were that the proposed framework can improve the visibility and traceability of the manufacturing process as well as enhance process quality control and real-time production pedigree access. Using this framework, an enterprise can easily integrate an RFID-based system into its manufacturing environment to facilitate mass customization and a just-in-time production model.

  14. Microelectromechanical Systems

    NASA Technical Reports Server (NTRS)

    Gabriel, Kaigham J.

    1995-01-01

    Micro-electromechanical systems (MEMS) is an enabling technology that merges computation and communication with sensing and actuation to change the way people and machines interact with the physical world. MEMS is a manufacturing technology that will impact widespread applications including: miniature inertial measurement units for competent munitions and personal navigation; distributed unattended sensors; mass data storage devices; miniature analytical instruments; embedded pressure sensors; non-invasive biomedical sensors; fiber-optics components and networks; distributed aerodynamic control; and on-demand structural strength. The long term goal of ARPA's MEMS program is to merge information processing with sensing and actuation to realize new systems and strategies for both perceiving and controlling systems, processes, and the environment. The MEMS program has three major thrusts: advanced devices and processes, system design, and infrastructure.

  15. Constraints and System Primitives in Achieving Multilevel Security in Real Time Distributed System Environment

    DTIC Science & Technology

    1994-04-18

    because they represent a microkernel and a monolithic kernel approach to MLS operating system issues. TMACH is based on MACH, a distributed operating ... the operating system is based on a microkernel design or a monolithic kernel design. This distinction requires some caution since monolithic operating ... are provided by 3 user-level processes, in contrast to standard UNIX, which has a large monolithic kernel that pro ... Distributed Operating

  16. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  17. Automated Power-Distribution System

    NASA Technical Reports Server (NTRS)

    Ashworth, Barry; Riedesel, Joel; Myers, Chris; Miller, William; Jones, Ellen F.; Freeman, Kenneth; Walsh, Richard; Walls, Bryan K.; Weeks, David J.; Bechtel, Robert T.

    1992-01-01

    Autonomous power-distribution system includes power-control equipment and automation equipment. System automatically schedules connection of power to loads and reconfigures itself when it detects fault. Potential terrestrial applications include optimization of consumption of power in homes, power supplies for autonomous land vehicles and vessels, and power supplies for automated industrial processes.
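
    A hedged sketch of the scheduling idea: connect loads in priority order within the available power budget, and reconfigure after a fault by re-running the same routine against the reduced budget. Load names, priorities, and budgets are illustrative assumptions, not details of the NASA system.

      def schedule(loads, budget):
          """loads: (name, watts, priority); returns the names to energize."""
          connected, used = [], 0.0
          for name, watts, _ in sorted(loads, key=lambda l: l[2]):  # priority order
              if used + watts <= budget:
                  connected.append(name)
                  used += watts
          return connected

      loads = [("life_support", 800, 0), ("comms", 300, 1), ("experiment", 600, 2)]
      print(schedule(loads, budget=1500))   # ['life_support', 'comms']
      print(schedule(loads, budget=900))    # reconfigured after a fault cuts the budget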

  18. A mobile ferromagnetic shape detection sensor using a Hall sensor array and magnetic imaging.

    PubMed

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a mobile Hall sensor array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the mobile Hall sensor array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D array of the Hall sensor array setup. The magnetic imaging of the magnetic flux distribution is done by a signal processing unit before it displays the real-time images on a netbook. Signal processing application software is developed for the 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are later used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens, such as square, round and triangular shapes, are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also illustrated to prove the functionality of the mobile Hall sensor array system for actual shape detection. The results prove that the mobile Hall sensor array system is able to perform magnetic imaging in identifying various ferromagnetic materials.

  19. A Mobile Ferromagnetic Shape Detection Sensor Using a Hall Sensor Array and Magnetic Imaging

    PubMed Central

    Misron, Norhisam; Shin, Ng Wei; Shafie, Suhaidi; Marhaban, Mohd Hamiruce; Mailah, Nashiren Farzilah

    2011-01-01

    This paper presents a Mobile Hall Sensor Array system for the shape detection of ferromagnetic materials that are embedded in walls or floors. The operation of the Mobile Hall Sensor Array system is based on the principle of magnetic flux leakage to describe the shape of the ferromagnetic material. Two permanent magnets are used to generate the magnetic flux flow. The distribution of magnetic flux is perturbed as the ferromagnetic material is brought near the permanent magnets, and the changes in magnetic flux distribution are detected by the 1-D array of the Hall sensor array setup. The magnetic imaging of the magnetic flux distribution is done by a signal processing unit before it displays the real-time images on a netbook. Signal processing application software is developed for the 1-D Hall sensor array signal acquisition and processing to construct a 2-D array matrix. The processed 1-D Hall sensor array signals are later used to construct the magnetic image of the ferromagnetic material based on the voltage signal and the magnetic flux distribution. The experimental results illustrate how the shapes of specimens, such as square, round and triangular shapes, are determined through magnetic images based on the voltage signal and magnetic flux distribution of the specimen. In addition, the magnetic images of actual ferromagnetic objects are also illustrated to prove the functionality of the Mobile Hall Sensor Array system for actual shape detection. The results prove that the Mobile Hall Sensor Array system is able to perform magnetic imaging in identifying various ferromagnetic materials. PMID:22346653

  20. Cardea: Dynamic Access Control in Distributed Systems

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2004-01-01

    Modern authorization systems span domains of administration, rely on many different authentication sources, and manage complex attributes as part of the authorization process. This paper presents Cardea, a distributed system that facilitates dynamic access control, as a valuable piece of an inter-operable authorization framework. First, the authorization model employed in Cardea and its functionality goals are examined. Next, critical features of the system architecture and its handling of the authorization process are examined. The SAML and XACML standards, as incorporated into the system, are then analyzed. Finally, the future directions of this project are outlined and connection points with general components of an authorization system are highlighted.

  1. A Process for Comparing Dynamics of Distributed Space Systems Simulations

    NASA Technical Reports Server (NTRS)

    Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.

    2009-01-01

    The paper describes a process that was developed for comparing the primary orbital dynamics behavior between distributed space systems simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.

  2. Using Markov Decision Processes with Heterogeneous Queueing Systems to Examine Military MEDEVAC Dispatching Policies

    DTIC Science & Technology

    2017-03-23

    Thesis, Department of Operational Sciences, Graduate School of Engineering and Management, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio, 2017-03-23 (AFIT Scholar Theses and Dissertations). Distribution Statement A: approved for public release; distribution unlimited.

  3. Information distribution in distributed microprocessor based flight control systems

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1977-01-01

    This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.

  4. Multimission Telemetry Visualization (MTV) system: A mission applications project from JPL's Multimedia Communications Laboratory

    NASA Technical Reports Server (NTRS)

    Koeberlein, Ernest, III; Pender, Shaw Exum

    1994-01-01

    This paper describes the Multimission Telemetry Visualization (MTV) data acquisition/distribution system. MTV was developed by JPL's Multimedia Communications Laboratory (MCL) and designed to process and display digital, real-time, science and engineering data from JPL's Mission Control Center. The MTV system can be accessed using UNIX workstations and PCs over common datacom and telecom networks from worldwide locations. It is designed to lower data distribution costs while increasing data analysis functionality by integrating low-cost, off-the-shelf desktop hardware and software. MTV is expected to significantly lower the cost of real-time data display, processing, and distribution, and to allow for greater spacecraft safety and mission data access.

  5. The embedded operating system project

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.

    1985-01-01

    The design and construction of embedded operating systems for real-time advanced aerospace applications were investigated. The applications require reliable operating system support that must accommodate computer networks. Problems that arise in the construction of such operating systems, reconfiguration, consistency and recovery in a distributed system, and the issues of real-time processing are reported. A thesis that provides theoretical foundations for the use of atomic actions to support fault tolerance and data consistency in real-time object-based systems is included. The following items are addressed: (1) atomic actions and fault-tolerance issues; (2) operating system structure; (3) program development; (4) a reliable compiler for Path Pascal; and (5) mediators, a mechanism for scheduling distributed system processes.

  6. Distributed semantic networks and CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Rodriguez, Tony

    1991-01-01

    Semantic networks of frames are commonly used as a method of reasoning in many problems. In most of these applications the semantic network exists as a single entity in a single process environment. Advances in workstation hardware provide support for more sophisticated applications involving multiple processes, interacting in a distributed environment. In these applications the semantic network may well be distributed over several concurrently executing tasks. This paper describes the design and implementation of a frame-based, distributed semantic network in which frames are accessed both through C Language Integrated Production System (CLIPS) expert systems and procedural C++ language programs. The application area is a knowledge-based, cooperative decision-making model utilizing both rule-based and procedural experts.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Prabakar, Kumaraguru; Palmintier, Bryan

    A software process is developed to convert distribution network models from a quasi-static time-series tool (OpenDSS) to a real-time dynamic phasor simulator (ePHASORSIM). The description of this process in this paper would be helpful for researchers who intend to perform similar conversions. The converter could be utilized directly by users of real-time simulators who intend to perform software-in-the-loop or hardware-in-the-loop tests on large distribution test feeders for a range of use cases, including testing functions of advanced distribution management systems against a simulated distribution system. In the future, the developers intend to release the conversion tool as open source to enable use by others.

  8. Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.

    PubMed

    Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing

    2017-10-11

    In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states by using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy but with a significant reduction in the overall computation load.

  9. Operating tool for a distributed data and information management system

    NASA Astrophysics Data System (ADS)

    Reck, C.; Mikusch, E.; Kiemle, S.; Wolfmüller, M.; Böttcher, M.

    2002-07-01

    The German Remote Sensing Data Center has developed the Data Information and Management System DIMS which provides multi-mission ground system services for earth observation product processing, archiving, ordering and delivery. DIMS successfully uses newest technologies within its services. This paper presents the solution taken to simplify operation tasks for this large and distributed system.

  10. Power laws, discontinuities and regional city size distributions

    USGS Publications Warehouse

    Garmestani, A.S.; Allen, Craig R.; Gallagher, C.M.

    2008-01-01

    Urban systems are manifestations of human adaptation to the natural environment. City size distributions are the expression of hierarchical processes acting upon urban systems. In this paper, we test the entire city size distributions for the southeastern and southwestern United States (1990), as well as the size classes in these regions for power law behavior. We interpret the differences in the size of the regional city size distributions as the manifestation of variable growth dynamics dependent upon city size. Size classes in the city size distributions are snapshots of stable states within urban systems in flux.

  11. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  12. Simplified Distributed Computing

    NASA Astrophysics Data System (ADS)

    Li, G. G.

    2006-05-01

    Distributed computing ranges from high performance parallel computing and GRID computing to environments where idle CPU cycles and storage space of numerous networked systems are harnessed to work together through the Internet. In this work we focus on building an easy and affordable solution for computationally intensive problems in scientific applications based on existing technology and hardware resources. This system consists of a series of controllers. When a job request is detected by a monitor or initialized by an end user, the job manager launches the specific job handler for this job. The job handler pre-processes the job, partitions the job into relatively independent tasks, and distributes the tasks into the processing queue. The task handler picks up the related tasks, processes the tasks, and puts the results back into the processing queue. The job handler also monitors and examines the tasks and the results, and assembles the task results into the overall solution for the job request when all tasks are finished for each job. A resource manager configures and monitors all participating nodes. A distributed agent is deployed on all participating nodes to manage the software download and report the status. The processing queue is the key to the success of this distributed system. We use BEA's Weblogic JMS queue in our implementation. It guarantees message delivery and has message priority and retry features so that tasks never get lost. The entire system is built on J2EE technology and can be deployed on heterogeneous platforms. It can handle algorithms and applications developed in any language on any platform. J2EE adaptors are provided to manage and communicate with existing applications so that applications and algorithms running on Unix, Linux and Windows can all work together. This system is easy and fast to develop based on the industry's well-adopted technology. It is highly scalable and heterogeneous. It is an open system: any number and type of machines can join to provide computational power. This asynchronous message-based system can achieve response times on the order of seconds. For efficiency, communications between distributed tasks are often done at the start and end of the tasks, but intermediate status of the tasks can also be provided.
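
    The job/task/queue structure described above can be sketched with an in-process queue and a thread pool standing in for the WebLogic JMS queue and J2EE components the authors actually used; the names and the placeholder computation are assumptions.

      import queue, threading

      task_q, result_q = queue.Queue(), queue.Queue()

      def task_handler():
          while True:
              task = task_q.get()
              if task is None:                       # shutdown sentinel
                  break
              job_id, payload = task
              result_q.put((job_id, payload ** 2))   # placeholder computation
              task_q.task_done()

      def run_job(job_id, data, workers=4):
          threads = [threading.Thread(target=task_handler) for _ in range(workers)]
          for t in threads:
              t.start()
          for item in data:                          # job handler partitions the job
              task_q.put((job_id, item))
          task_q.join()                              # wait until every task finishes
          for _ in threads:
              task_q.put(None)
          for t in threads:
              t.join()
          results = [result_q.get() for _ in data]   # assemble the overall solution
          return sorted(r for _, r in results)

      print(run_job("job-1", list(range(8))))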

  13. AIRSAR Automated Web-based Data Processing and Distribution System

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; vanZyl, Jakob; Kim, Yunjin; Lou, Yunling; Imel, David; Tung, Wayne; Chapman, Bruce; Durden, Stephen

    2005-01-01

    In this paper, we present an integrated, end-to-end synthetic aperture radar (SAR) processing system that accepts data processing requests, submits processing jobs, performs quality analysis, delivers and archives processed data. This fully automated SAR processing system utilizes database and internet/intranet web technologies to allow external users to browse and submit data processing requests and receive processed data. It is a cost-effective way to manage a robust SAR processing and archival system. The integration of these functions has reduced operator errors and increased processor throughput dramatically.

  14. Multiple Meanings Are Not Necessarily a Disadvantage in Semantic Processing: Evidence from Homophone Effects in Semantic Categorisation

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Sears, Christopher R.; Owen, William J.

    2007-01-01

    The ambiguity disadvantage (slower processing of ambiguous words relative to unambiguous words) has been taken as evidence for a distributed semantic representational system like that embodied in parallel distributed processing (PDP) models. In the present study, we investigated whether semantic ambiguity slows meaning activation, as PDP models…

  15. Defining and Enabling Resiliency of Electric Distribution Systems With Multiple Microgrids

    DOE PAGES

    Chanda, Sayonsom; Srivastava, Anurag K.

    2016-05-02

    This paper presents a method for quantifying and enabling the resiliency of a power distribution system (PDS) using the analytic hierarchy process and percolation theory. Using this metric, quantitative analysis can be done to analyze the impact of possible control decisions to pro-actively enable the resilient operation of a distribution system with multiple microgrids and other resources. The developed resiliency metric can also be used in short term distribution system planning. The ability to quantify resiliency can help distribution system planning engineers and operators to justify control actions, compare different reconfiguration algorithms, and develop proactive control actions to avert power system outages due to impending catastrophic weather or other adverse events. Validation of the proposed method is done using modified CERTS microgrids and a modified industrial distribution system. Furthermore, simulation results show a topological and composite metric that considers power system characteristics to quantify the resiliency of a distribution system with the proposed methodology, and improvements in resiliency using a two-stage reconfiguration algorithm and multiple microgrids.

  16. Exploiting replication in distributed systems

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Joseph, T. A.

    1989-01-01

    Techniques are examined for replicating data and execution in directly distributed systems: systems in which multiple processes interact directly with one another while continuously respecting constraints on their joint behavior. Directly distributed systems are often required to solve difficult problems, ranging from management of replicated data to dynamic reconfiguration in response to failures. It is shown that these problems reduce to more primitive, order-based consistency problems, which can be solved using primitives such as the reliable broadcast protocols. Moreover, given a system that implements reliable broadcast primitives, a flexible set of high-level tools can be provided for building a wide variety of directly distributed application programs.

  17. Main Difference with Formed Process of the Moon and Earth Minerals and Fluids

    NASA Astrophysics Data System (ADS)

    Kato, T.; Miura, Y.

    2018-04-01

    Minerals show a large, global distribution in the Earth system but small, localized formation on the Moon. Fluid water is formed with the same size and distribution on the Earth and the Moon, based on their respective body systems.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goriely, S.; Chamel, N.; Pearson, J. M.

    The rapid neutron-capture process, or r-process, is known to be of fundamental importance for explaining the origin of approximately half of the A > 60 stable nuclei observed in nature. In recent years nuclear astrophysicists have developed more and more sophisticated r-process models, eagerly trying to add new astrophysical or nuclear physics ingredients to explain the solar system composition in a satisfactory way. We show here that the decompression of neutron star matter may provide suitable conditions for a robust r-processing. After decompression, the inner crust material gives rise to an abundance distribution for A > 130 nuclei similar to the one observed in the solar system. Similarly, the outer crust, if heated to a temperature of about 8 × 10^9 K before decompression, is made of exotic neutron-rich nuclei with a mass distribution close to the 80 ≤ A ≤ 130 solar one. During the decompression, the free neutrons (initially liberated by the high temperatures) are re-captured, leading to a final pattern similar to the solar system distribution.

  19. Speech Perception as a Cognitive Process: The Interactive Activation Model.

    ERIC Educational Resources Information Center

    Elman, Jeffrey L.; McClelland, James L.

    Research efforts to model speech perception in terms of a processing system in which knowledge and processing are distributed over large numbers of highly interactive, but computationally primitive, elements are described in this report. After discussing the properties of speech that demand a parallel interactive processing system, the report…

  20. Programming with process groups: Group and multicast semantics

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.; Cooper, Robert; Gleeson, Barry

    1991-01-01

    Process groups are a natural tool for distributed programming and are increasingly important in distributed computing environments. Discussed here is a new architecture that arose from an effort to simplify Isis process group semantics. The findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.

  1. A stochastic diffusion process for Lochner's generalized Dirichlet distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-10-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner's generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.

  2. Cardea: Providing Support for Dynamic Resource Access in a Distributed Computing Environment

    NASA Technical Reports Server (NTRS)

    Lepro, Rebekah

    2003-01-01

    The environment framing the modern authorization process spans domains of administration, relies on many different authentication sources, and manages complex attributes as part of the authorization process. Cardea facilitates dynamic access control within this environment as a central function of an inter-operable authorization framework. The system departs from the traditional authorization model by separating the authentication and authorization processes, distributing the responsibility for authorization data and allowing collaborating domains to retain control over their implementation mechanisms. Critical features of the system architecture and its handling of the authorization process differentiate the system from existing authorization components by addressing common needs not adequately addressed by existing systems. Continuing system research seeks to enhance the implementation of the current authorization model employed in Cardea, increase the robustness of current features, further the framework for establishing trust and promote interoperability with existing security mechanisms.

  3. Radial transport processes as a precursor to particle deposition in drinking water distribution systems.

    PubMed

    van Thienen, P; Vreeburg, J H G; Blokker, E J M

    2011-02-01

    Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.

  4. 76 FR 74753 - Authority To Manufacture and Distribute Postage Evidencing Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-01

    ... revision of the rules governing the inventory control processes of Postage Evidencing Systems (PES... destruction or disposal of all Postage Evidencing Systems and their components to enable accurate accounting...) Postage Evidencing System repair process--any physical or electronic access to the internal components of...

  5. Strategy Guideline. Compact Air Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burdick, Arlan

    2013-06-01

    This guideline discusses the benefits and challenges of using a compact air distribution system to handle the reduced loads and reduced air volume needed to condition the space within an energy efficient home. The decision criteria for a compact air distribution system must be determined early in the whole-house design process, considering both supply and return air design. However, careful installation of a compact air distribution system can result in lower material costs from smaller equipment, shorter duct runs, and fewer outlets; increased installation efficiencies, including ease of fitting the system into conditioned space; lower loads on a better balanced HVAC system; and overall improved energy efficiency of the home.

  6. Heterogeneous distributed query processing: The DAVID system

    NASA Technical Reports Server (NTRS)

    Jacobs, Barry E.

    1985-01-01

    The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access data in other database and file systems without having to learn their data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.

  7. Distributions of Autocorrelated First-Order Kinetic Outcomes: Illness Severity

    PubMed Central

    Englehardt, James D.

    2015-01-01

    Many complex systems produce outcomes having recurring, power law-like distributions over wide ranges. However, the form necessarily breaks down at extremes, whereas the Weibull distribution has been demonstrated over the full observed range. Here the Weibull distribution is derived as the asymptotic distribution of generalized first-order kinetic processes, with convergence driven by autocorrelation, and entropy maximization subject to finite positive mean, of the incremental compounding rates. Process increments represent multiplicative causes. In particular, illness severities are modeled as such, occurring in proportion to products of, e.g., chronic toxicant fractions passed by organs along a pathway, or rates of interacting oncogenic mutations. The Weibull form is also argued theoretically and by simulation to be robust to the onset of saturation kinetics. The Weibull exponential parameter is shown to indicate the number and widths of the first-order compounding increments, the extent of rate autocorrelation, and the degree to which process increments are distributed exponential. In contrast with the Gaussian result in linear independent systems, the form is driven not by independence and multiplicity of process increments, but by increment autocorrelation and entropy. In some physical systems the form may be attracting, due to multiplicative evolution of outcome magnitudes towards extreme values potentially much larger and smaller than control mechanisms can contain. The Weibull distribution is demonstrated in preference to the lognormal and Pareto I for illness severities versus (a) toxicokinetic models, (b) biologically-based network models, (c) scholastic and psychological test score data for children with prenatal mercury exposure, and (d) time-to-tumor data of the ED01 study. PMID:26061263
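
    A toy Monte Carlo in the spirit of the derivation: outcomes are generated as products of positive, autocorrelated increments and then compared against a fitted Weibull. The AR(1) increment model and all parameters are assumptions for illustration, not the paper's kinetic model.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      n_outcomes, n_increments, rho = 20000, 8, 0.5

      # Autocorrelated standard-normal variates via an AR(1) recursion ...
      z = rng.standard_normal((n_outcomes, n_increments))
      for k in range(1, n_increments):
          z[:, k] = rho * z[:, k - 1] + np.sqrt(1 - rho**2) * z[:, k]

      # ... mapped to positive, correlated multiplicative rates (Gaussian copula).
      rates = stats.expon.ppf(stats.norm.cdf(z))
      severity = np.prod(rates, axis=1)        # outcome = product of increments

      shape, loc, scale = stats.weibull_min.fit(severity, floc=0)
      ks = stats.kstest(severity, "weibull_min", (shape, loc, scale)).statistic
      print(f"fitted Weibull shape={shape:.3f} scale={scale:.3f}  KS distance={ks:.4f}")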

  8. On the degree distribution of horizontal visibility graphs associated with Markov processes and dynamical systems: diagrammatic and variational approaches

    NASA Astrophysics Data System (ADS)

    Lacasa, Lucas

    2014-09-01

    Dynamical processes can be transformed into graphs through a family of mappings called visibility algorithms, enabling the possibility of (i) making empirical time series analysis and signal processing and (ii) characterizing classes of dynamical systems and stochastic processes using the tools of graph theory. Recent works show that the degree distribution of these graphs encapsulates much information on the signals' variability, and therefore constitutes a fundamental feature for statistical learning purposes. However, exact solutions for the degree distributions are only known in a few cases, such as for uncorrelated random processes. Here we analytically explore these distributions in a list of situations. We present a diagrammatic formalism which computes for all degrees their corresponding probability as a series expansion in a coupling constant which is the number of hidden variables. We offer a constructive solution for general Markovian stochastic processes and deterministic maps. As case tests we focus on Ornstein-Uhlenbeck processes, fully chaotic and quasiperiodic maps. Whereas only for certain degree probabilities can all diagrams be summed exactly, in the general case we show that the perturbation theory converges. In a second part, we make use of a variational technique to predict the complete degree distribution for special classes of Markovian dynamics with fast-decaying correlations. In every case we compare the theory with numerical experiments.
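
    The object under study is easy to construct directly. The sketch below builds the horizontal visibility graph of a series (an edge joins two points when every intermediate point lies below both) and compares the empirical degree distribution of an uncorrelated random series with the known exact result P(k) = (1/3)(2/3)^(k-2); the series length and seed are arbitrary choices.

      import numpy as np
      from collections import Counter

      def hvg_degrees(x):
          """Degrees of the horizontal visibility graph of series x."""
          n = len(x)
          deg = [0] * n
          for i in range(n):
              blocker = float("-inf")            # running max of intermediate points
              for j in range(i + 1, n):
                  if blocker < min(x[i], x[j]):  # all intermediates lie below both
                      deg[i] += 1
                      deg[j] += 1
                  blocker = max(blocker, x[j])
                  if blocker >= x[i]:            # nothing further right can see i
                      break
          return deg

      x = np.random.default_rng(3).random(2000)  # uncorrelated random series
      counts = Counter(hvg_degrees(x))
      total = sum(counts.values())
      print(" k  empirical  theory")
      for k in range(2, 7):
          print(f"{k:2d}  {counts[k] / total:9.3f}  {(1/3) * (2/3) ** (k - 2):6.3f}")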

  9. RCTS: A flexible environment for sensor integration and control of robot systems; the distributed processing approach

    NASA Technical Reports Server (NTRS)

    Allard, R.; Mack, B.; Bayoumi, M. M.

    1989-01-01

    Most robot systems lack a suitable hardware and software environment for the efficient research of new control and sensing schemes. Typically, engineers and researchers need to be experts in control, sensing, programming, communication and robotics in order to implement, integrate and test new ideas in a robot system. To reduce this burden, the Robot Controller Test Station (RCTS) has been developed. It uses a modular hardware and software architecture allowing easy physical and functional reconfiguration of a robot. This is accomplished by emphasizing four major design goals: flexibility, portability, ease of use, and ease of modification. An enhanced distributed processing version of RCTS is described. It features an expanded and more flexible communication system design. Distributed processing results in the availability of more local computing power and retains the low cost of microprocessors. A large number of possible communication, control and sensing schemes can therefore be easily introduced and tested, using the same basic software structure.

  10. Computer-Controlled System for Plasma Ion Energy Auto-Analyzer

    NASA Astrophysics Data System (ADS)

    Wu, Xian-qiu; Chen, Jun-fang; Jiang, Zhen-mei; Zhong, Qing-hua; Xiong, Yu-ying; Wu, Kai-hua

    2003-02-01

    A computer-controlled system for a plasma ion energy auto-analyzer was technically studied for rapid, online measurement of the plasma ion energy distribution. The system intelligently controls all the equipment via an RS-232 port, a printer port and a home-built circuit. The software, designed in the LabVIEW G language, automatically fulfils all of the tasks such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing, and graphic export. Using the system, the whole ion energy distribution is acquired within a few minutes, rapidly providing important parameters of plasma process techniques for semiconductor devices and microelectronics.

  11. Suboptimal distributed control and estimation: application to a four coupled tanks system

    NASA Astrophysics Data System (ADS)

    Orihuela, Luis; Millán, Pablo; Vivas, Carlos; Rubio, Francisco R.

    2016-06-01

    The paper proposes an innovative estimation and control scheme that enables the distributed monitoring and control of large-scale processes. The proposed approach considers a discrete linear time-invariant process controlled by a network of agents that may both collect information about the evolution of the plant and apply control actions to drive its behaviour. The problem makes full sense when local observability/controllability is not assumed and the communication between agents can be exploited to reach system-wide goals. Additionally, to reduce agents' bandwidth requirements and power consumption, an event-based communication policy is studied. The design procedure guarantees system stability, allowing the designer to trade off performance, control effort and communication requirements. The obtained controllers and observers are implemented in a fully distributed fashion. To illustrate the performance of the proposed technique, experimental results on a quadruple-tank process are provided.
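
    The event-based communication policy mentioned above can be illustrated with a minimal send-on-delta rule: an agent transmits its local measurement only when it deviates from the last transmitted value by more than a threshold, trading estimation accuracy for bandwidth. The simulated tank level and the threshold are illustrative assumptions, not the paper's trigger design.

      import numpy as np

      rng = np.random.default_rng(4)
      level = 10 + np.cumsum(rng.normal(0, 0.05, 500))   # simulated tank level

      threshold, last_sent = 0.2, None
      transmissions = 0
      for y in level:
          if last_sent is None or abs(y - last_sent) > threshold:
              last_sent = y                              # event: send to neighbors
              transmissions += 1
          # otherwise neighbors keep using last_sent (no communication this step)

      print(f"{transmissions} transmissions instead of {len(level)} samples")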

  12. AIRSAR Web-Based Data Processing

    NASA Technical Reports Server (NTRS)

    Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne

    2007-01-01

    The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes in 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image from an entire 90-GB SAR raw data 32-MB/s tape overnight without operator intervention. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates the products according to each data processing request stored in the database via a queue management system. Users obtain automatically generated, co-registered multi-frequency images as the software performs polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.

  13. Where Might We Be Headed? Signposts from Other States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiter, Emerson

    2017-04-07

    Presentation on the state of distributed energy resources interconnection in Wisconsin from the Wisconsin Distributed Resources Collaborative (WIDRC) Interconnection Forum for Distributed Generation. It addresses concerns over application submission and processing, lack of visibility into the distribution system, and uncertainty in upgrade costs.

  14. Power System Information Delivering System Based on Distributed Object

    NASA Astrophysics Data System (ADS)

    Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji

    In recent years, there has been remarkable progress in computer performance and in computer network and distributed information processing technologies. Moreover, deregulation is starting and will spread in the electric power industry in Japan. Consequently, power suppliers are required to supply low cost power with high quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture. This paper describes the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using Internet and distributed object technology. This system is a new type of SCADA system that monitors failures of the power transmission and distribution systems in a way that integrates geographic information.

  15. An approach for heterogeneous and loosely coupled geospatial data distributed computing

    NASA Astrophysics Data System (ADS)

    Chen, Bin; Huang, Fengru; Fang, Yu; Huang, Zhou; Lin, Hui

    2010-07-01

    Most GIS (Geographic Information System) applications tend to have heterogeneous and autonomous geospatial information resources, and the availability of these local resources is unpredictable and dynamic under a distributed computing environment. In order to make use of these local resources together to solve larger geospatial information processing problems that are related to an overall situation, in this paper, with the support of peer-to-peer computing technologies, we propose a geospatial data distributed computing mechanism that involves loosely coupled geospatial resource directories and a concept termed the Equivalent Distributed Program of global geospatial queries to solve geospatial distributed computing problems under heterogeneous GIS environments. First, we present a geospatial query process schema for distributed computing, together with a method for equivalent transformation from a global geospatial query to distributed local queries at the SQL (Structured Query Language) level, to solve the coordination problem among heterogeneous resources. Second, peer-to-peer technologies are used to maintain a loosely coupled network environment that consists of autonomous geospatial information resources, to achieve decentralized and consistent synchronization among global geospatial resource directories, and to carry out distributed transaction management of local queries. Finally, based on the developed prototype system, example applications of simple and complex geospatial data distributed queries are presented to illustrate the procedure of global geospatial information processing.
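
    The SQL-level equivalent transformation can be made concrete with a minimal sketch (table, column, and peer names are hypothetical, not the paper's code): the coordinator pushes the same predicate to every autonomous peer and merges the partial results, so the union is equivalent to a single global query.

      # Sketch: rewrite a global geospatial query as identical local SQL
      # queries against autonomous peers and merge the partial results.
      import sqlite3

      PEERS = {}  # peer name -> in-memory DB standing in for a remote GIS node
      for name, rows in {"peerA": [("road_1", 12.5)], "peerB": [("road_2", 7.0)]}.items():
          db = sqlite3.connect(":memory:")
          db.execute("CREATE TABLE roads (id TEXT, length_km REAL)")
          db.executemany("INSERT INTO roads VALUES (?, ?)", rows)
          PEERS[name] = db

      def global_query(min_length: float):
          # Equivalent Distributed Program: one global predicate becomes a
          # set of local queries whose merged result equals the global one.
          local_sql = "SELECT id, length_km FROM roads WHERE length_km >= ?"
          results = []
          for db in PEERS.values():
              results.extend(db.execute(local_sql, (min_length,)).fetchall())
          return results

      print(global_query(5.0))  # merged, globally equivalent result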

  16. The Web Measurement Environment (WebME): A Tool for Combining and Modeling Distributed Data

    NASA Technical Reports Server (NTRS)

    Tesoriero, Roseanne; Zelkowitz, Marvin

    1997-01-01

    Many organizations have incorporated data collection into their software processes for the purpose of process improvement. However, in order to improve, interpreting the data is just as important as the collection of data. With the increased presence of the Internet and the ubiquity of the World Wide Web, the potential for software processes being distributed among several physically separated locations has also grown. Because project data may be stored in multiple locations and in differing formats, obtaining and interpreting data from this type of environment becomes even more complicated. The Web Measurement Environment (WebME), a Web-based data visualization tool, is being developed to facilitate the understanding of collected data in a distributed environment. The WebME system will permit the analysis of development data in distributed, heterogeneous environments. This paper provides an overview of the system and its capabilities.

  17. Distributed computing feasibility in a non-dedicated homogeneous distributed system

    NASA Technical Reports Server (NTRS)

    Leutenegger, Scott T.; Sun, Xian-He

    1993-01-01

    The low cost and availability of clusters of workstations have led researchers to re-explore distributed computing using independent workstations. This approach may provide better cost/performance than tightly coupled multiprocessors. In practice, this approach often utilizes wasted cycles to run parallel jobs. The feasibility of such a non-dedicated parallel processing environment, assuming workstation processes have preemptive priority over parallel tasks, is addressed. An analytical model is developed to predict parallel job response times. Our model provides insight into how significantly workstation owner interference degrades parallel program performance. A new term, task ratio, which relates the parallel task demand to the mean service demand of nonparallel workstation processes, is introduced. It was proposed that the task ratio is a useful metric for determining how large the demand of a parallel application must be in order to make efficient use of a non-dedicated distributed system.
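
    A heavily hedged sketch of the metric: the task ratio compares the parallel task demand with the mean service demand of the owner's local processes. The numbers and the decision threshold below are illustrative only; the paper's analytical response-time model is not reproduced here.

      # Task ratio as a rule of thumb for non-dedicated clusters (illustrative).
      def task_ratio(parallel_task_demand: float, mean_local_demand: float) -> float:
          return parallel_task_demand / mean_local_demand

      r = task_ratio(parallel_task_demand=50.0, mean_local_demand=2.0)
      verdict = "worth distributing" if r > 10 else "owner interference likely dominates"
      print(f"task ratio = {r:.1f} -> {verdict}")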

  18. 39 CFR 501.19 - Intellectual property.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Postal Service UNITED STATES POSTAL SERVICE POSTAGE PROGRAMS AUTHORIZATION TO MANUFACTURE AND DISTRIBUTE POSTAGE EVIDENCING SYSTEMS § 501.19 Intellectual property. Providers submitting Postage Evidencing Systems... that may be required to distribute their product in commerce and to allow the Postal Service to process...

  19. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
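
    To make the contrast with point-to-point communication concrete, here is a minimal collective operation in Python with mpi4py (a library chosen for illustration; nothing in the record ties the work to it). Every process contributes a value, and every process receives the reduced result.

      # Run with: mpiexec -n 4 python allreduce_demo.py
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()

      local_value = rank + 1                            # each process contributes
      total = comm.allreduce(local_value, op=MPI.SUM)   # all ranks receive the sum
      print(f"rank {rank}: global sum = {total}")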

  20. Forced guidance and distribution of practice in sequential information processing.

    NASA Technical Reports Server (NTRS)

    Decker, L. R.; Rogers, C. A., Jr.

    1973-01-01

    Distribution of practice and forced guidance were used in a sequential information-processing task in an attempt to increase the capacity of human information-processing mechanisms. A reaction time index of the psychological refractory period was used as the response measure. Massing of practice lengthened response times, while forced guidance shortened them. Interpretation was in terms of load reduction upon the response-selection stage of the information-processing system.

  1. A comparison of queueing, cluster and distributed computing systems

    NASA Technical Reports Server (NTRS)

    Kaplan, Joseph A.; Nelson, Michael L.

    1993-01-01

    Using workstation clusters for distributed computing has become popular with the proliferation of inexpensive, powerful workstations. Workstation clusters offer both a cost-effective alternative to batch processing and an easy entry into parallel computing. However, a number of workstations on a network does not constitute a cluster. Cluster management software is necessary to harness the collective computing power. A variety of cluster management and queuing systems are compared: Distributed Queueing Systems (DQS), Condor, Load Leveler, Load Balancer, Load Sharing Facility (LSF - formerly Utopia), Distributed Job Manager (DJM), Computing in Distributed Networked Environments (CODINE), and NQS/Exec. The systems differ in their design philosophy and implementation. Based on published reports on the different systems and conversations with the systems' developers and vendors, a comparison of the systems is made on the integral issues of clustered computing.

  2. The Development of Design Guides for the Implementation of Multiprocessing Element Systems.

    DTIC Science & Technology

    1985-09-01

    Fragments recovered from the report scan (table-of-contents entries and text): Conclusions; Implementation of CHILL signal communication primitives on a distributed system; Architecture of a distributed system; Algorithm for the SEND signal operation. A surviving text fragment reads: "...elements operating concurrently. Such Multi Processing-element Systems are clearly going to be complex and it is important that the designers of such..."

  3. Dynamic Reconfiguration of a RGBD Sensor Based on QoS and QoC Requirements in Distributed Systems.

    PubMed

    Munera, Eduardo; Poza-Lujan, Jose-Luis; Posadas-Yagüe, Juan-Luis; Simó-Ten, José-Enrique; Noguera, Juan Fco Blanes

    2015-07-24

    The inclusion of embedded sensors into a networked system provides useful information for many applications. A Distributed Control System (DCS) is one of the clearest examples where processing and communications are constrained by the client's requirements and the capacity of the system. An embedded sensor with advanced processing and communications capabilities supplies high-level information, abstracting away the data acquisition process and object-recognition mechanisms. The implementation of an embedded sensor/actuator as a Smart Resource permits clients to access sensor information through distributed network services. Smart resources can offer sensor services as well as computing, communications and peripheral access by implementing a self-awareness-based adaptation mechanism which adapts the execution profile to the context. On the other hand, information integrity must be ensured when computing processes are dynamically adapted. Therefore, the processing must be adapted to perform tasks within a certain lapse of time while always ensuring a minimum process quality. In the same way, communications must try to reduce the data traffic without excluding relevant information. The main objective of the paper is to present a dynamic configuration mechanism to adapt the sensor processing and communication to the client's requirements in the DCS. This paper describes an implementation of a smart resource based on a Red, Green, Blue, and Depth (RGBD) sensor in order to test the dynamic configuration mechanism presented.
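
    A minimal sketch of the profile-selection idea (all names and numbers are hypothetical, not the paper's mechanism): choose the richest execution profile whose estimated per-frame cost meets the client's deadline while keeping at least a minimum quality level.

      # Pick the best feasible RGBD execution profile under QoS/QoC constraints.
      PROFILES = [
          # (label, est. processing time per frame in ms, quality score 0..1)
          ("640x480@30", 45.0, 1.00),
          ("320x240@30", 18.0, 0.70),
          ("320x240@15", 9.0, 0.50),
      ]

      def select_profile(deadline_ms: float, min_quality: float):
          feasible = [p for p in PROFILES if p[1] <= deadline_ms and p[2] >= min_quality]
          if not feasible:
              raise RuntimeError("no profile satisfies the QoS/QoC constraints")
          return max(feasible, key=lambda p: p[2])  # best quality among feasible

      print(select_profile(deadline_ms=20.0, min_quality=0.6))  # -> 320x240@30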

  4. Particle dispersing system and method for testing semiconductor manufacturing equipment

    DOEpatents

    Chandrachood, Madhavi; Ghanayem, Steve G.; Cantwell, Nancy; Rader, Daniel J.; Geller, Anthony S.

    1998-01-01

    The system and method prepare a gas stream comprising particles at a known concentration using a particle disperser for moving particles from a reservoir of particles into a stream of flowing carrier gas. The electrostatic charges on the particles entrained in the carrier gas are then neutralized or otherwise altered, and the resulting particle-laden gas stream is then diluted to provide an acceptable particle concentration. The diluted gas stream is then split into a calibration stream and the desired output stream. The particles in the calibration stream are detected to provide an indication of the actual size distribution and concentration of particles in the output stream that is supplied to a process chamber being analyzed. Particles flowing out of the process chamber within a vacuum pumping system are detected, and the output particle size distribution and concentration are compared with the particle size distribution and concentration of the calibration stream in order to determine the particle transport characteristics of a process chamber, or to determine the number of particles lodged in the process chamber as a function of manufacturing process parameters such as pressure, flowrate, temperature, process chamber geometry, particle size, particle charge, and gas composition.

  5. A cost-effective line-based light-balancing technique using adaptive processing.

    PubMed

    Hsia, Shih-Chang; Chen, Ming-Huei; Chen, Yu-Min

    2006-09-01

    Camera imaging systems are widely used; however, the displayed image can exhibit an unequal light distribution. This paper presents novel light-balancing techniques to compensate for uneven illumination based on adaptive signal processing. For text image processing, we first estimate the background level and then process each pixel with nonuniform gain. This algorithm can balance the light distribution while keeping a high contrast in the image. For graph image processing, adaptive section control using piecewise nonlinear gain is proposed to equalize the histogram. Simulations show that the performance of light balancing is better than that of other methods. Moreover, we employ line-based processing to efficiently reduce the memory requirement and the computational cost, making the approach applicable in real-time systems.
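
    The background-estimate-plus-nonuniform-gain idea can be sketched in a few lines of NumPy under stated assumptions (a moving-average background and a fixed target level; the paper's actual algorithm differs in detail):

      import numpy as np

      def balance_lines(img: np.ndarray, target: float = 128.0, win: int = 31) -> np.ndarray:
          out = np.empty_like(img, dtype=np.float64)
          kernel = np.ones(win) / win
          for i, line in enumerate(img.astype(np.float64)):
              background = np.convolve(line, kernel, mode="same")  # per-line background
              gain = target / np.maximum(background, 1.0)          # nonuniform gain
              out[i] = line * gain
          return np.clip(out, 0, 255).astype(np.uint8)

      # Synthetic image whose illumination ramps from dark to bright:
      img = (np.linspace(40, 200, 64)[None, :] * np.ones((48, 1))).astype(np.uint8)
      # Column means away from the window edges come out roughly equalized:
      print(balance_lines(img).mean(axis=0)[[10, 32, 54]])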

  6. Distributed Adaptive Control: Beyond Single-Instant, Discrete Variables

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Bieniawski, Stefan

    2005-01-01

    In extensive form noncooperative game theory, at each instant t, each agent i sets its state x(sub i) independently of the other agents, by sampling an associated distribution, q(sub i)(x(sub i)). The coupling between the agents arises in the joint evolution of those distributions. Distributed control problems can be cast the same way. In those problems the system designer sets aspects of the joint evolution of the distributions to try to optimize the goal for the overall system. Now information theory tells us what the separate q(sub i) of the agents are most likely to be if the system were to have a particular expected value of the objective function G(x(sub 1),x(sub 2), ...). So one can view the job of the system designer as speeding an iterative process. Each step of that process starts with a specified value of E(G), and the convergence of the q(sub i) to the most likely set of distributions consistent with that value. After this the target value for E(sub q)(G) is lowered, and then the process repeats. Previous work has elaborated many schemes for implementing this process when the underlying variables x(sub i) all have a finite number of possible values and G does not extend to multiple instants in time. That work is also based on a fixed mapping from agents to control devices, so that the statistical independence of the agents' moves means independence of the device states. This paper extends that work to relax all of these restrictions. This extends the applicability of that work to include continuous spaces and Reinforcement Learning. This paper also elaborates how some of that earlier work can be viewed as a first-principles justification of evolution-based search algorithms.
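
    The maximum-entropy form alluded to above can be sketched in standard probability-collectives notation (a reconstruction under stated assumptions, not the paper's own formulas): among product distributions with a fixed expected objective, the most likely factors are Boltzmann-like,

      q(x) = \prod_i q_i(x_i), \qquad
      q_i(x_i) \;\propto\; \exp\!\left( -\beta \, \mathbb{E}_{q_{-i}}\!\left[ G \mid x_i \right] \right),

    where \beta is chosen so that E(sub q)(G) matches the current target value; lowering the target and re-converging the q(sub i) implements the iterative process described above.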

  7. Cross-coherent vector sensor processing for spatially distributed glider networks.

    PubMed

    Nichols, Brendan; Sabra, Karim G

    2015-09-01

    Autonomous underwater gliders fitted with vector sensors can be used as a spatially distributed sensor array to passively locate underwater sources. However, to date, the positional accuracy required for robust array processing (especially coherent processing) is not achievable using dead-reckoning while the gliders remain submerged. To obtain such accuracy, the gliders can be temporarily surfaced to allow for global positioning system contact, but the acoustically active sea surface introduces locally additional sensor noise. This letter demonstrates that cross-coherent array processing, which inherently mitigates the effects of local noise, outperforms traditional incoherent processing source localization methods for this spatially distributed vector sensor network.

  8. Relationships between digital signal processing and control and estimation theory

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1978-01-01

    Research areas associated with digital signal processing and control and estimation theory are identified. Particular attention is given to image processing, system identification problems (parameter identification, linear prediction, least squares, Kalman filtering), stability analyses (the use of the Liapunov theory, frequency domain criteria, passivity), and multiparameter systems, distributed processes, and random fields.

  9. System and Method for Monitoring Distributed Asset Data

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system and monitoring method implemented in computer software for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical. The proposed monitoring method accounts for asset-to-asset variability; it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in cases where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.
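
    A minimal sketch of the regression idea (illustrative only, not the patented method): fit each asset's performance against its working conditions, then watch the residuals for drifts that signal a change in asset condition rather than a change in operating conditions.

      import numpy as np

      rng = np.random.default_rng(0)
      conditions = rng.uniform(0.0, 1.0, size=(200, 2))   # e.g. load, ambient temp
      performance = 2.0 * conditions[:, 0] - conditions[:, 1] + rng.normal(0, 0.05, 200)

      # Least-squares regression model characterizing this asset's performance.
      X = np.column_stack([conditions, np.ones(len(conditions))])
      coef, *_ = np.linalg.lstsq(X, performance, rcond=None)

      residuals = performance - X @ coef
      # Flag a drift if the recent mean residual is large vs. its standard error.
      drift = abs(residuals[-20:].mean()) > 3 * residuals.std(ddof=1) / np.sqrt(20)
      print("recent drift detected:", bool(drift))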

  10. Near-surface wind speed statistical distribution: comparison between ECMWF System 4 and ERA-Interim

    NASA Astrophysics Data System (ADS)

    Marcos, Raül; Gonzalez-Reviriego, Nube; Torralba, Verónica; Cortesi, Nicola; Young, Doo; Doblas-Reyes, Francisco J.

    2017-04-01

    In the framework of seasonal forecast verification, knowing whether the characteristics of the climatological wind speed distribution, simulated by the forecasting systems, are similar to the observed ones is essential to guide the subsequent process of bias adjustment. To shed some light on this topic, this work assesses the properties of the statistical distributions of 10m wind speed from both the ERA-Interim reanalysis and seasonal forecasts of ECMWF System 4. The 10m wind speed distribution has been characterized in terms of the four main moments of the probability distribution (mean, standard deviation, skewness and kurtosis) together with the coefficient of variation and the Shapiro-Wilk goodness-of-fit test, allowing the identification of regions with higher wind variability and non-Gaussian behaviour at monthly time-scales. Also, the comparison of the predicted and observed 10m wind speed distributions has been performed considering both inter-annual and intra-seasonal variability. Such a comparison is important in both the climate research and climate services communities because it provides useful climate information for decision-making processes and wind industry applications.
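
    The distributional characterization described (four moments, coefficient of variation, and a normality test) takes only a few lines with SciPy; synthetic Weibull-like data stand in here for the ERA-Interim and System 4 samples.

      import numpy as np
      from scipy import stats

      wind = np.random.default_rng(1).weibull(2.0, size=500) * 6.0  # synthetic 10m wind

      mean, std = wind.mean(), wind.std(ddof=1)
      print("mean:", round(mean, 2), " std:", round(std, 2))
      print("skewness:", round(stats.skew(wind), 2))
      print("kurtosis:", round(stats.kurtosis(wind), 2))   # excess kurtosis
      print("coef. of variation:", round(std / mean, 2))
      w_stat, p_value = stats.shapiro(wind)
      print("Shapiro-Wilk p =", round(p_value, 4), "(small p -> non-Gaussian)")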

  11. Study of data I/O performance on distributed disk system in mask data preparation

    NASA Astrophysics Data System (ADS)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. The MDP flow typically introduces a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs were increased, the throughput might saturate because hard disk I/O and network speeds could be bottlenecks. So, MDP requires heavy investment not only in hundreds of CPUs but also in storage and network devices to make the throughput faster. NCS introduces a new distributed processing system called "NDE". NDE is a distributed disk system which makes the throughput faster without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  12. 39 CFR 501.14 - Postage Evidencing System inventory control processes.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Postage Evidencing System inventory control... control processes. (a) Each authorized provider of Postage Evidencing Systems must permanently hold title... sufficient facilities for and records of the distribution, control, storage, maintenance, repair, replacement...

  13. Modernizing Distribution System Restoration to Achieve Grid Resiliency Against Extreme Weather Events: An Integrated Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Chen; Wang, Jianhui; Ton, Dan

    Recent severe power outages caused by extreme weather hazards have highlighted the importance and urgency of improving the resilience of the electric power grid. As the distribution grids still remain vulnerable to natural disasters, the power industry has focused on methods of restoring distribution systems after disasters in an effective and quick manner. The current distribution system restoration practice for utilities is mainly based on predetermined priorities and tends to be inefficient and suboptimal, and the lack of situational awareness after the hazard significantly delays the restoration process. As a result, customers may experience an extended blackout, which causes large economic loss. On the other hand, the emerging advanced devices and technologies enabled through grid modernization efforts have the potential to improve the distribution system restoration strategy. However, utilizing these resources to aid the utilities in better distribution system restoration decision-making in response to extreme weather events is a challenging task. Therefore, this paper proposes an integrated solution: a distribution system restoration decision support tool designed by leveraging resources developed for grid modernization. We first review the current distribution restoration practice and discuss why it is inadequate in response to extreme weather events. Then we describe how the grid modernization efforts could benefit distribution system restoration, and we propose an integrated solution in the form of a decision support tool to achieve the goal. The advantages of the solution include improving situational awareness of the system damage status and facilitating survivability for customers. The paper provides a comprehensive review of how the existing methodologies in the literature could be leveraged to achieve the key advantages. The benefits of the developed system restoration decision support tool include the optimal and efficient allocation of repair crews and resources, the expediting of the restoration process, and the reduction of outage durations for customers, in response to severe blackouts due to extreme weather hazards.

  14. When the mean is not enough: Calculating fixation time distributions in birth-death processes.

    PubMed

    Ashcroft, Peter; Traulsen, Arne; Galla, Tobias

    2015-10-01

    Studies of fixation dynamics in Markov processes predominantly focus on the mean time to absorption. This may be inadequate if the distribution is broad and skewed. We compute the distribution of fixation times in one-step birth-death processes with two absorbing states. These are expressed in terms of the spectrum of the process, and we provide different representations as forward-only processes in eigenspace. These allow efficient sampling of fixation time distributions. As an application we study evolutionary game dynamics, where invading mutants can reach fixation or go extinct. We also highlight the median fixation time as a possible analog of mixing times in systems with small mutation rates and no absorbing states, whereas the mean fixation time has no such interpretation.
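
    Fixation-time distributions can also be sampled by direct simulation, as in the following sketch of a neutral Moran-type birth-death chain with absorbing states 0 and N (an illustrative alternative to the spectral representations used in the paper; the rates are the standard neutral Moran choice).

      import numpy as np

      def absorption_time(n0: int, N: int, rng) -> float:
          # One-step birth-death process; returns the time to hit 0 or N.
          n, t = n0, 0.0
          while 0 < n < N:
              rate_up = n * (N - n) / N**2   # mutant birth
              rate_down = rate_up            # neutral: equal death rate
              total = rate_up + rate_down
              t += rng.exponential(1.0 / total)        # waiting time to next event
              n += 1 if rng.random() < rate_up / total else -1
          return t

      rng = np.random.default_rng(2)
      times = [absorption_time(n0=1, N=50, rng=rng) for _ in range(2000)]
      print("mean:", round(np.mean(times), 1), " median:", round(np.median(times), 1))
      # A wide gap between mean and median reflects the broad, skewed
      # distributions for which the mean alone is a poor summary.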

  15. A new taxonomy for distributed computer systems based upon operating system structure

    NASA Technical Reports Server (NTRS)

    Foudriat, E. C.

    1985-01-01

    Characteristics of the resource structure found in the operating system are considered as a mechanism for classifying distributed computer systems. Since the operating system resources, themselves, are too diversified to provide a consistent classification, the structure upon which resources are built and shared are examined. The location and control character of this indivisibility provides the taxonomy for separating uniprocessors, computer networks, network computers (fully distributed processing systems or decentralized computers) and algorithm and/or data control multiprocessors. The taxonomy is important because it divides machines into a classification that is relevant or important to the client and not the hardware architect. It also defines the character of the kernel O/S structure needed for future computer systems. What constitutes an operating system for a fully distributed processor is discussed in detail.

  16. Data Integration in Computer Distributed Systems

    NASA Astrophysics Data System (ADS)

    Kwiecień, Błażej

    In this article the author analyzes the problem of data integration in distributed computer systems. Exchange of information between different levels of the integrated pyramid of enterprise processes is fundamental to efficient enterprise operation. Communication and data exchange between levels are not always the same because of the need for different network protocols, communication media, system response times, etc.

  17. Research on Comprehensive Evaluation Method for Heating Project Based on Analytic Hierarchy Processing

    NASA Astrophysics Data System (ADS)

    Han, Shenchao; Yang, Yanchun; Liu, Yude; Zhang, Peng; Li, Siwei

    2018-01-01

    Changing the distributed heat supply system is an effective way to reduce haze in winter. Thus, studies on a comprehensive index system and a scientific evaluation method for distributed heat supply projects are essential. First, the influencing factors of heating modes were studied, and a multi-dimensional index system covering economic, environmental, risk, and flexibility aspects was built, with all indexes quantified. Second, a comprehensive evaluation method based on the Analytic Hierarchy Process (AHP) was put forward to analyze the proposed comprehensive index system. Lastly, the case study suggested that supplying heat with electricity has great advantages and promotional value. The comprehensive index system and evaluation method in this paper can evaluate distributed heat supply projects effectively and provide scientific support for choosing among them.
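
    A minimal sketch of the AHP weighting step (the pairwise comparison values are illustrative assumptions): criterion weights are the normalized principal eigenvector of the comparison matrix, and a consistency ratio checks the judgments.

      import numpy as np

      # economic, environmental, risk, flexibility (Saaty 1-9 scale, reciprocal)
      A = np.array([
          [1.0, 3.0, 5.0, 4.0],
          [1/3, 1.0, 3.0, 2.0],
          [1/5, 1/3, 1.0, 1/2],
          [1/4, 1/2, 2.0, 1.0],
      ])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()                 # normalized criterion weights

      n = A.shape[0]
      ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
      cr = ci / 0.90                           # random index for n = 4 is 0.90
      print("weights:", np.round(weights, 3), " CR =", round(cr, 3))  # CR < 0.1 is OK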

  18. A Stochastic Diffusion Process for the Dirichlet Distribution

    DOE PAGES

    Bakosi, J.; Ristorcelli, J. R.

    2013-03-01

    The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N coupled stochastic variables with the Dirichlet distribution as its asymptotic solution. To ensure a bounded sample space, a coupled nonlinear diffusion process is required: the Wiener processes in the equivalent system of stochastic differential equations are multiplicative with coefficients dependent on all the stochastic variables. Individual samples of a discrete ensemble, obtained from the stochastic process, satisfy a unit-sum constraint at all times. The process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Similar to the multivariate Wright-Fisher process, whose invariant is also Dirichlet, the univariate case yields a process whose invariant is the beta distribution. As a test of the results, Monte Carlo simulations are used to evolve numerical ensembles toward the invariant Dirichlet distribution.
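
    For reference, the invariant (asymptotic) Dirichlet density can be written in standard notation (generic symbols, not necessarily the paper's):

      f(Y_1, \dots, Y_{N-1}) \;\propto\;
      \left( \prod_{i=1}^{N-1} Y_i^{\,\omega_i - 1} \right)
      \left( 1 - \sum_{i=1}^{N-1} Y_i \right)^{\omega_N - 1},
      \qquad Y_i \ge 0, \quad \sum_{i=1}^{N-1} Y_i \le 1,

    with Y_N = 1 - \sum_{i<N} Y_i, so that the unit-sum constraint mentioned above holds at all times.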

  19. Where is the Battle-Line for Supply Contractors?

    DTIC Science & Technology

    1999-04-01

    Text fragments recovered from the report: "...military supply distribution system initiates, at the Theater Distribution Management Center (TMC)."; "There are three components which comprise the idea of distribution and distribution management. They...throughout the distribution pipeline. Visibility is the most essential component of distribution management. History is full of examples that prove..."

  20. Cloud-based distributed control of unmanned systems

    NASA Astrophysics Data System (ADS)

    Nguyen, Kim B.; Powell, Darren N.; Yetman, Charles; August, Michael; Alderson, Susan L.; Raney, Christopher J.

    2015-05-01

    Enabling warfighters to efficiently and safely execute dangerous missions, unmanned systems have been an increasingly valuable component in modern warfare. The evolving use of unmanned systems leads to vast amounts of data collected from sensors placed on the remote vehicles. As a result, many command and control (C2) systems have been developed to provide the necessary tools to perform one of the following functions: controlling the unmanned vehicle or analyzing and processing the sensory data from unmanned vehicles. These C2 systems are often disparate from one another, limiting the ability to optimally distribute data among different users. The Space and Naval Warfare Systems Center Pacific (SSC Pacific) seeks to address this technology gap through the UxV to the Cloud via Widgets project. The overarching intent of this three year effort is to provide three major capabilities: 1) unmanned vehicle control using an open service oriented architecture; 2) data distribution utilizing cloud technologies; 3) a collection of web-based tools enabling analysts to better view and process data. This paper focuses on how the UxV to the Cloud via Widgets system is designed and implemented by leveraging the following technologies: Data Distribution Service (DDS), Accumulo, Hadoop, and Ozone Widget Framework (OWF).

  1. A distributed computing system for magnetic resonance imaging: Java-based processing and binding of XML.

    PubMed

    de Beer, R; Graveron-Demilly, D; Nastase, S; van Ormondt, D

    2004-03-01

    Recently we have developed a Java-based heterogeneous distributed computing system for the field of magnetic resonance imaging (MRI). It is a software system for embedding the various image reconstruction algorithms that we have created for handling MRI data sets with sparse sampling distributions. Since these data sets may result from multi-dimensional MRI measurements, our system has to control the storage and manipulation of large amounts of data. In this paper we describe how we have employed the extensible markup language (XML) to realize this data handling in a highly structured way. To that end we have used Java packages, recently released by Sun Microsystems, to process XML documents and to compile pieces of XML code into Java classes. We have effectuated a flexible storage and manipulation approach for all kinds of data within the MRI system, such as data describing and containing multi-dimensional MRI measurements, data configuring image reconstruction methods and data representing and visualizing the various services of the system. We have found that the object-oriented approach, possible with the Java programming environment, combined with the XML technology is a convenient way of describing and handling various data streams in heterogeneous distributed computing systems.

  2. Marionette

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, M.; Anderson, D.P.

    1988-01-01

    Marionette is a system for distributed parallel programming in an environment of networked heterogeneous computer systems. It is based on a master/slave model. The master process can invoke worker operations (asynchronous remote procedure calls to single slaves) and context operations (updates to the state of all slaves). The master and slaves also interact through shared data structures that can be modified only by the master. The master and slave processes are programmed in a sequential language. The Marionette runtime system manages slave process creation, propagates shared data structures to slaves as needed, queues and dispatches worker and context operations, and manages recovery from slave processor failures. The Marionette system also includes tools for automated compilation of program binaries for multiple architectures, and for distributing binaries to remote file systems. A UNIX-based implementation of Marionette is described.
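
    The master/slave pattern can be suggested with Python's multiprocessing (an analogy only, not the Marionette API): worker operations are asynchronous calls handled by single slaves, while a context operation installs state that all slaves can read.

      from multiprocessing import Pool

      CONTEXT = {}

      def set_context(ctx):   # context operation: install shared state on a slave
          CONTEXT.update(ctx)

      def worker_op(x):       # worker operation: async call handled by one slave
          return x * CONTEXT["scale"]

      if __name__ == "__main__":
          with Pool(4, initializer=set_context, initargs=({"scale": 10},)) as pool:
              async_result = pool.map_async(worker_op, range(8))  # async dispatch
              print(async_result.get())  # master collects: [0, 10, ..., 70]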

  3. Resource Management for Distributed Parallel Systems

    NASA Technical Reports Server (NTRS)

    Neuman, B. Clifford; Rao, Santosh

    1993-01-01

    Multiprocessor systems should exist in the larger context of distributed systems, allowing multiprocessor resources to be shared by those that need them. Unfortunately, typical multiprocessor resource management techniques do not scale to large networks. The Prospero Resource Manager (PRM) is a scalable resource allocation system that supports the allocation of processing resources in large networks and multiprocessor systems. To manage resources in such distributed parallel systems, PRM employs three types of managers: system managers, job managers, and node managers. There exist multiple independent instances of each type of manager, reducing bottlenecks. The complexity of each manager is further reduced because each is designed to utilize information at an appropriate level of abstraction.

  4. Space station data management system - A common GSE test interface for systems testing and verification

    NASA Technical Reports Server (NTRS)

    Martinez, Pedro A.; Dunn, Kevin W.

    1987-01-01

    This paper examines the fundamental problems and goals associated with test, verification, and flight-certification of man-rated distributed data systems. First, a summary of the characteristics of modern computer systems that affect the testing process is provided. Then, verification requirements are expressed in terms of an overall test philosophy for distributed computer systems. This test philosophy stems from previous experience that was gained with centralized systems (Apollo and the Space Shuttle), and deals directly with the new problems that verification of distributed systems may present. Finally, a description of potential hardware and software tools to help solve these problems is provided.

  5. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have actively been developed based on the achievement of advanced high-speed communication networks, computer processing technologies, and digital contents-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of 'Retrieval manager' which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval, and retrieval can effectively be performed by cooperative processing among multiple domains. Communication languages and protocols are also defined in the system; these are used in every action for communications in the system. A language interpreter in each machine translates a communication language into the internal language used in that machine. Using the language interpreter, such internal modules as the DBMS and user interface modules can be selected freely. A concept of 'content-set' is also introduced. A content-set is defined as a package of contents; contents in a content-set are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents, referring to data indicating the relations of the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the objective contents under the control of a number of distributed domains. The results also indicate that the system can work effectively even if the system becomes large.

  6. Factors Influencing Bacterial Diversity and Community Composition in Municipal Drinking Waters in the Ohio River Basin, USA

    PubMed Central

    Stanish, Lee F.; Hull, Natalie M.; Robertson, Charles E.; Harris, J. Kirk; Stevens, Mark J.; Spear, John R.; Pace, Norman R.

    2016-01-01

    The composition and metabolic activities of microbes in drinking water distribution systems can affect water quality and distribution system integrity. In order to understand regional variations in drinking water microbiology in the upper Ohio River watershed, the chemical and microbiological constituents of 17 municipal distribution systems were assessed. While sporadic variations were observed, the microbial diversity was generally dominated by fewer than 10 taxa, and was driven by the amount of disinfectant residual in the water. Overall, Mycobacterium spp. (Actinobacteria), MLE1-12 (phylum Cyanobacteria), Methylobacterium spp., and sphingomonads were the dominant taxa. Shifts in community composition from Alphaproteobacteria and Betaproteobacteria to Firmicutes and Gammaproteobacteria were associated with higher residual chlorine. Alpha- and beta-diversity were higher in systems with higher chlorine loads, which may reflect changes in the ecological processes structuring the communities under different levels of oxidative stress. These results expand the assessment of microbial diversity in municipal distribution systems and demonstrate the value of considering ecological theory to understand the processes controlling microbial makeup. Such understanding may inform the management of municipal drinking water resources. PMID:27362708

  7. Factors Influencing Bacterial Diversity and Community Composition in Municipal Drinking Waters in the Ohio River Basin, USA.

    PubMed

    Stanish, Lee F; Hull, Natalie M; Robertson, Charles E; Harris, J Kirk; Stevens, Mark J; Spear, John R; Pace, Norman R

    2016-01-01

    The composition and metabolic activities of microbes in drinking water distribution systems can affect water quality and distribution system integrity. In order to understand regional variations in drinking water microbiology in the upper Ohio River watershed, the chemical and microbiological constituents of 17 municipal distribution systems were assessed. While sporadic variations were observed, the microbial diversity was generally dominated by fewer than 10 taxa, and was driven by the amount of disinfectant residual in the water. Overall, Mycobacterium spp. (Actinobacteria), MLE1-12 (phylum Cyanobacteria), Methylobacterium spp., and sphingomonads were the dominant taxa. Shifts in community composition from Alphaproteobacteria and Betaproteobacteria to Firmicutes and Gammaproteobacteria were associated with higher residual chlorine. Alpha- and beta-diversity were higher in systems with higher chlorine loads, which may reflect changes in the ecological processes structuring the communities under different levels of oxidative stress. These results expand the assessment of microbial diversity in municipal distribution systems and demonstrate the value of considering ecological theory to understand the processes controlling microbial makeup. Such understanding may inform the management of municipal drinking water resources.

  8. A Comprehensive Investigation of Copper Pitting Corrosion in a Drinking Water Distribution System

    EPA Science Inventory

    Copper pipe pitting is a complicated corrosion process for which exact causes and solutions are uncertain. This paper presents the findings of a comprehensive investigation of a cold water copper pitting corrosion problem in a drinking water distribution system, including a refi...

  9. A Family of Poisson Processes for Use in Stochastic Models of Precipitation

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2013-12-01

    Both modified Poisson processes and compound Poisson processes can be relevant to stochastic parameterization of precipitation. This presentation compares the dynamical properties of these systems and discusses the physical situations in which each might be appropriate. If the parameters describing either class of systems originate in hydrodynamics, then proper consideration of stochastic calculus is required during numerical implementation of the parameterization. It is shown here that an improper numerical treatment can have severe implications for estimating rainfall distributions, particularly in the tails of the distributions and, thus, on the frequency of extreme events.
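
    As a reference point for the distinction drawn above, a compound Poisson process accumulates i.i.d. jump sizes at Poisson-distributed event times (standard definitions, not the presentation's own notation):

      N(t) \sim \mathrm{Poisson}(\lambda t), \qquad
      S(t) \;=\; \sum_{i=1}^{N(t)} X_i,

    where the X_i (e.g., per-event rain amounts) are independent of N(t). A modified Poisson process instead alters the event-rate structure itself, for example by making \lambda time- or state-dependent.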

  10. GANGA: A tool for computational-task management and easy access to Grid resources

    NASA Astrophysics Data System (ADS)

    Mościcki, J. T.; Brochu, F.; Ebke, J.; Egede, U.; Elmsheuser, J.; Harrison, K.; Jones, R. W. L.; Lee, H. C.; Liko, D.; Maier, A.; Muraru, A.; Patrick, G. N.; Pajchel, K.; Reece, W.; Samset, B. H.; Slater, M. W.; Soroko, A.; Tan, C. L.; van der Ster, D. C.; Williams, M.

    2009-11-01

    In this paper, we present the computational task-management tool GANGA, which allows for the specification, submission, bookkeeping and post-processing of computational tasks on a wide set of distributed resources. GANGA has been developed to solve a problem increasingly common in scientific projects, which is that researchers must regularly switch between different processing systems, each with its own command set, to complete their computational tasks. GANGA provides a homogeneous environment for processing data on heterogeneous resources. We give examples from High Energy Physics, demonstrating how an analysis can be developed on a local system and then transparently moved to a Grid system for processing of all available data. GANGA has an API that can be used via an interactive interface, in scripts, or through a GUI. Specific knowledge about types of tasks or computational resources is provided at run-time through a plugin system, making new developments easy to integrate. We give an overview of the GANGA architecture, give examples of current use, and demonstrate how GANGA can be used in many different areas of science.
    Catalogue identifier: AEEN_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEN_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GPL
    No. of lines in distributed program, including test data, etc.: 224 590
    No. of bytes in distributed program, including test data, etc.: 14 365 315
    Distribution format: tar.gz
    Programming language: Python
    Computer: personal computers, laptops
    Operating system: Linux/Unix
    RAM: 1 MB
    Classification: 6.2, 6.5
    Nature of problem: Management of computational tasks for scientific applications on heterogeneous distributed systems, including local, batch farms, opportunistic clusters and Grids.
    Solution method: High-level job management interface, including command line, scripting and GUI components.
    Restrictions: Access to the distributed resources depends on the installed third-party software, such as batch system clients or Grid user interfaces.

  11. Simulating the decentralized processes of the human immune system in a virtual anatomy model.

    PubMed

    Sarpe, Vladimir; Jacob, Christian

    2013-01-01

    Many physiological processes within the human body can be perceived and modeled as large systems of interacting particles or swarming agents. The complex processes of the human immune system prove to be challenging to capture and illustrate without proper reference to the spatial distribution of immune-related organs and systems. Our work focuses on physical aspects of immune system processes, which we implement through swarms of agents. This is our first prototype for integrating different immune processes into one comprehensive virtual physiology simulation. Using agent-based methodology and a 3-dimensional modeling and visualization environment (LINDSAY Composer), we present an agent-based simulation of the decentralized processes in the human immune system. The agents in our model - such as immune cells, viruses and cytokines - interact through simulated physics in two different, compartmentalized and decentralized 3-dimensional environments namely, (1) within the tissue and (2) inside a lymph node. While the two environments are separated and perform their computations asynchronously, an abstract form of communication is allowed in order to replicate the exchange, transportation and interaction of immune system agents between these sites. The distribution of simulated processes, that can communicate across multiple, local CPUs or through a network of machines, provides a starting point to build decentralized systems that replicate larger-scale processes within the human body, thus creating integrated simulations with other physiological systems, such as the circulatory, endocrine, or nervous system. Ultimately, this system integration across scales is our goal for the LINDSAY Virtual Human project. Our current immune system simulations extend our previous work on agent-based simulations by introducing advanced visualizations within the context of a virtual human anatomy model. We also demonstrate how to distribute a collection of connected simulations over a network of computers. As a future endeavour, we plan to use parameter tuning techniques on our model to further enhance its biological credibility. We consider these in silico experiments and their associated modeling and optimization techniques as essential components in further enhancing our capabilities of simulating a whole-body, decentralized immune system, to be used both for medical education and research as well as for virtual studies in immunoinformatics.

  12. PD2P: PanDA Dynamic Data Placement for ATLAS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maeno, T.; De, K.; Panitkin, S.

    2012-12-13

    The PanDA (Production and Distributed Analysis) system plays a key role in the ATLAS distributed computing infrastructure. PanDA is the ATLAS workload management system for processing all Monte-Carlo (MC) simulation and data reprocessing jobs in addition to user and group analysis jobs. The PanDA Dynamic Data Placement (PD2P) system has been developed to cope with difficulties of data placement for ATLAS. We will describe the design of the new system, its performance during the past year of data taking, dramatic improvements it has brought about in the efficient use of storage and processing resources, and plans for the future.

  13. A Methodology for Quantifying Certain Design Requirements During the Design Phase

    NASA Technical Reports Server (NTRS)

    Adams, Timothy; Rhodes, Russel

    2005-01-01

    A methodology for developing and balancing quantitative design requirements for safety, reliability, and maintainability has been proposed. Conceived as the basis of a more rational approach to the design of spacecraft, the methodology would also be applicable to the design of automobiles, washing machines, television receivers, or almost any other commercial product. Heretofore, it has been common practice to start by determining the requirements for reliability of elements of a spacecraft or other system to ensure a given design life for the system. Next, safety requirements are determined by assessing the total reliability of the system and adding redundant components and subsystems necessary to attain safety goals. As thus described, common practice leaves the maintainability burden to fall to chance; therefore, there is no control of recurring costs or of the responsiveness of the system. The means that have been used in assessing maintainability have been oriented toward determining the logistical sparing of components so that the components are available when needed. The process established for developing and balancing quantitative requirements for safety (S), reliability (R), and maintainability (M) derives and integrates NASA's top-level safety requirements and the controls needed to obtain program key objectives for safety and recurring cost. Being quantitative, the process conveniently uses common mathematical models. Even though the process is shown as being worked from the top down, it can also be worked from the bottom up. This process uses three math models, sketched below: (1) the binomial distribution (greater-than-or-equal-to case), (2) reliability for a series system, and (3) the Poisson distribution (less-than-or-equal-to case). The zero-fail case for the binomial distribution approximates the commonly known exponential distribution or "constant failure rate" distribution. Either model can be used. The binomial distribution was selected for modeling flexibility because it conveniently addresses both the zero-fail and failure cases. The failure case is typically used for unmanned spacecraft as with missiles.
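
    The three models can be written in standard notation (generic symbols, not the paper's): the binomial greater-than-or-equal-to case for k or more successes in n trials each succeeding with probability R, the reliability of a series system of m elements, and the Poisson less-than-or-equal-to case for at most k events with mean \lambda:

      P(X \ge k) \;=\; \sum_{i=k}^{n} \binom{n}{i} R^{\,i} (1-R)^{\,n-i},
      \qquad
      R_{\text{series}} \;=\; \prod_{j=1}^{m} R_j,
      \qquad
      P(X \le k) \;=\; \sum_{i=0}^{k} e^{-\lambda} \frac{\lambda^{i}}{i!}.

    In the zero-fail binomial case, P(no failures) = R^n; writing R = e^{-\lambda} gives e^{-n\lambda}, the constant-failure-rate (exponential) form noted above.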

  14. Optimization and quantization in gradient symbol systems: a framework for integrating the continuous and the discrete in cognition.

    PubMed

    Smolensky, Paul; Goldrick, Matthew; Mathis, Donald

    2014-08-01

    Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland, Rumelhart, & The PDP Research Group, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to phonological production. Simulations of the resulting model suggest that Gradient Symbol Processing offers a way to unify accounts of grammatical competence with both discrete and continuous patterns in language performance. Copyright © 2013 Cognitive Science Society, Inc.

  15. NASA's Earth Science Data Systems

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.

    2015-01-01

    NASA's Earth Science Data Systems (ESDS) Program has evolved over the last two decades, and currently has several core and community components. Core components provide the basic operational capabilities to process, archive, manage and distribute data from NASA missions. Community components provide a path for peer-reviewed research in Earth Science Informatics to feed into the evolution of the core components. The Earth Observing System Data and Information System (EOSDIS) is a core component consisting of twelve Distributed Active Archive Centers (DAACs) and eight Science Investigator-led Processing Systems spread across the U.S. The presentation covers how the ESDS Program continues to evolve and benefits from as well as contributes to advances in Earth Science Informatics.

  16. A Distributed Operating System Design and Dictionary/Directory for the Stock Point Logistics Integrated Communications Environment.

    DTIC Science & Technology

    1983-11-01

    Text fragments recovered from the report: "...transmission, FM(R) will only have to hold one message."; "3. Program Control Block (PCB): The PCB [Deitel 82] will be maintained by the Executive in..."; table-of-contents entries including "Use of Kernel to Process Interrupts", "Layered Operating System Design", "Program Control Block Table", and "Ready List Data Structure"; "...examples of fully distributed systems in operation. An objective of the NPS research program for SPLICE is to advance our knowledge of distributed..."

  17. Study on parallel and distributed management of RS data based on spatial database

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin

    2009-10-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or omitted at storage time. Facing the above two problems, the paper puts forward a framework for parallel and distributed management and storage of RS image data. This system aims at an RS data information system based on a parallel background server and a distributed data management system. Aiming at the above two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. In data storage, RS data are not divided into binary large objects stored in a current relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, the paper sets up a framework based on a parallel server of several common computers, under which the background process is divided into two parts: the common WEB process and the parallel process.
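
    A hypothetical sketch of the "Pyramid, Block, Layer, Epoch" solid index (names invented for illustration, not the paper's code): a composite key addresses one tile of multi-sensor RS imagery by resolution level, spatial block, spectral band, and acquisition period.

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class SolidIndex:
          pyramid: int    # resolution level (0 = full resolution)
          block: tuple    # (row, col) of the spatial tile at this level
          layer: str      # spectral band, e.g. "B4"
          epoch: str      # acquisition period, e.g. "2009-06"

      TILES = {SolidIndex(2, (10, 14), "B4", "2009-06"): b"...tile bytes..."}

      def fetch(idx: SolidIndex) -> bytes:
          # In the described architecture this lookup would be served by one
          # of the parallel background servers holding the tile.
          return TILES[idx]

      print(len(fetch(SolidIndex(2, (10, 14), "B4", "2009-06"))))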

  18. Study on parallel and distributed management of RS data based on spatial data base

    NASA Astrophysics Data System (ADS)

    Chen, Yingbiao; Qian, Qinglan; Liu, Shijin

    2006-12-01

    With the rapid development of current earth-observing technology, RS image data storage, management and information publication have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, the background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment; a heavy burden is placed on the background server. Second, there is no unique, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or omitted at storage time. Facing the above two problems, the paper puts forward a framework for parallel and distributed management and storage of RS image data. This system aims at an RS data information system based on a parallel background server and a distributed data management system. Aiming at the above two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. In data storage, RS data are not divided into binary large objects stored in a current relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, the paper sets up a framework based on a parallel server of several common computers, under which the background process is divided into two parts: the common WEB process and the parallel process.

  19. Informing future NRT satellite distribution capabilities: Lessons learned from NASA's Land Atmosphere NRT capability for EOS (LANCE)

    NASA Astrophysics Data System (ADS)

    Davies, D.; Murphy, K. J.; Michael, K.

    2013-12-01

    NASA's Land Atmosphere Near real-time Capability for EOS (Earth Observing System) (LANCE) provides data and imagery from Terra, Aqua and Aura satellites in less than 3 hours from satellite observation, to meet the needs of the near real-time (NRT) applications community. This article describes the architecture of the LANCE and outlines the modifications made to achieve the 3-hour latency requirement with a view to informing future NRT satellite distribution capabilities. It also describes how latency is determined. LANCE is a distributed system that builds on the existing EOS Data and Information System (EOSDIS) capabilities. To achieve the NRT latency requirement, many components of the EOS satellite operations, ground and science processing systems have been made more efficient without compromising the quality of science data processing. The EOS Data and Operations System (EDOS) processes the NRT stream with higher priority than the science data stream in order to minimize latency. In addition to expediting transfer times, the key difference between the NRT Level 0 products and those for standard science processing is the data used to determine the precise location and tilt of the satellite. Standard products use definitive geo-location (attitude and ephemeris) data provided daily, whereas NRT products use predicted geo-location provided by the instrument Global Positioning System (GPS) or approximation of navigational data (depending on platform). Level 0 data are processed into higher-level products at designated Science Investigator-led Processing Systems (SIPS). The processes used by LANCE have been streamlined and adapted to work with datasets as soon as they are downlinked from satellites or transmitted from ground stations. Level 2 products that require ancillary data have modified production rules to relax the requirements for ancillary data, thus reducing processing times. Looking to the future, experience gained from LANCE can provide valuable lessons on satellite and ground system architectures and on how the delivery of NRT products from other NASA missions might be achieved.

  20. Needs assessment under the Maternal and Child Health Services Block Grant: Massachusetts.

    PubMed

    Guyer, B; Schor, L; Messenger, K P; Prenney, B; Evans, F

    1984-09-01

    The Massachusetts maternal and child health (MCH) agency has developed a needs assessment process which includes four components: a statistical measure of need based on indirect, proxy health and social indicators; clinical standards for services to be provided; an advisory process which guides decision making and involves constituency groups; and a management system for implementing funds distribution, namely open competitive bidding in response to a Request for Proposals. In Fiscal Years 1982 and 1983, the process was applied statewide in the distribution of primary prenatal (MIC) and pediatric (C&Y) care services and lead poisoning prevention projects. Both processes resulted in clearer definitions of services to be provided under contract to the state as well as redistribution of funds to serve localities that had previously received no resources. Although the needs assessment process does not provide a direct measure of unmet need in a complex system of private and public services, it can be used to advocate for increased MCH funding and guide the distribution of new MCH service dollars.

  1. Systems Engineering Processes Applied to Ground Vehicle Integration at US Army Tank Automotive Research, Development, and Engineering Center (TARDEC)

    DTIC Science & Technology

    2010-08-19

    Abstract unavailable; the record consists of report documentation page residue. Recoverable details: presented at NDIA's Ground Vehicle Systems Engineering and Technology Symposium (GVSETS); approved for public release, distribution unlimited.

  2. Transition in the waiting-time distribution of price-change events in a global socioeconomic system

    NASA Astrophysics Data System (ADS)

    Zhao, Guannan; McDonald, Mark; Fenn, Dan; Williams, Stacy; Johnson, Nicholas; Johnson, Neil F.

    2013-12-01

    The goal of developing a firmer theoretical understanding of inhomogeneous temporal processes (in particular, the waiting times in some collective dynamical system) is attracting significant interest among physicists. Quantifying the deviations between the waiting-time distribution and the distribution generated by a random process may help unravel the feedback mechanisms that drive the underlying dynamics. We analyze the waiting-time distributions of high-frequency foreign exchange data for the best executable bid-ask prices across all major currencies. We find that the lognormal distribution yields a good overall fit for the waiting-time distribution between currency rate changes if both short and long waiting times are included. If we restrict our study to long waiting times, each currency pair's distribution is consistent with a power-law tail with exponent near 3.5. However, for short waiting times, the overall distribution resembles one generated by an archetypal complex systems model in which boundedly rational agents compete for limited resources. Our findings suggest that a gradual transition arises in trading behavior between a fast regime in which traders act in a boundedly rational way and a slower one in which traders' decisions are driven by generic feedback mechanisms across multiple timescales and hence produce similar power-law tails irrespective of currency type.
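
    The fitting procedure described above can be sketched in a few lines. The following is a hedged illustration on synthetic data, using a standard lognormal fit and a Hill-type tail estimator rather than the authors' actual pipeline.

```python
# Illustrative sketch (not the paper's code): fit a lognormal to synthetic
# waiting times and estimate a power-law tail exponent with a Hill estimator.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
waits = rng.lognormal(mean=-1.0, sigma=1.2, size=50_000)   # stand-in data

# Overall fit: lognormal with the location parameter fixed at zero.
shape, _, scale = stats.lognorm.fit(waits, floc=0)
print(f"lognormal fit: sigma={shape:.3f}, median={scale:.3f}")

# Tail fit: Hill estimator over the largest 1% of waiting times.
tail = np.sort(waits)[-len(waits) // 100:]
alpha = 1.0 + len(tail) / np.sum(np.log(tail / tail[0]))
print(f"tail exponent estimate: alpha = {alpha:.2f}")   # ~3.5 reported for FX data
```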

  3. Changes in bacterial composition of biofilm in a metropolitan drinking water distribution system.

    PubMed

    Revetta, R P; Gomez-Alvarez, V; Gerke, T L; Santo Domingo, J W; Ashbolt, N J

    2016-07-01

    This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source waters (i.e. groundwater, GW, and surface water, SW) and undergoes different treatment processes in separate facilities. The biofilm community was characterized using 16S rRNA gene clone libraries and functional potential analysis, generated from total DNA extracted from coupons in biofilm annular reactors fed with onsite drinking water for up to 18 months. Differences in the bacterial community structure were observed between GW and SW. Representatives that explained the dissimilarity were associated with the classes Betaproteobacteria, Alphaproteobacteria, Actinobacteria, Gammaproteobacteria and Firmicutes. After 9 months the biofilm bacterial communities from both GW and SW were dominated by Mycobacterium species. The distribution of the dominant operational taxonomic unit (OTU) (Mycobacterium) positively correlated with the drinking water distribution system (DWDS) temperature. In this study, the biofilm community structures observed for GW and SW were dissimilar, while communities from different locations receiving SW did not show significant differences. The results suggest that source water and/or the water quality shaped by the respective treatment processes may play an important role in shaping the bacterial communities in the distribution system. In addition, several bacterial groups were present in all samples, suggesting that they are an integral part of the core microbiota of this DWDS. These results provide an ecological insight into biofilm bacterial structure in chlorine-treated drinking water influenced by different water sources and their respective treatment processes.

  4. Research into software executives for space operations support

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.

    1990-01-01

    Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.

  5. Automated Power Systems Management (APSM)

    NASA Technical Reports Server (NTRS)

    Bridgeforth, A. O.

    1981-01-01

    A breadboard power system incorporating autonomous functions of monitoring, fault detection and recovery, command and control was developed, tested and evaluated to demonstrate technology feasibility. Autonomous functions including switching of redundant power processing elements, individual load fault removal, and battery charge/discharge control were implemented by means of a distributed microcomputer system within the power subsystem. Three local microcomputers provide the monitoring, control and command function interfaces between the central power subsystem microcomputer and the power sources, power processing and power distribution elements. The central microcomputer is the interface between the local microcomputers and the spacecraft central computer or ground test equipment.

  6. An Autonomous Distributed Fault-Tolerant Local Positioning System

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2017-01-01

    We describe a fault-tolerant, GPS-independent (Global Positioning System) distributed autonomous positioning system for static/mobile objects and present solutions for providing highly accurate geo-location data for those objects in dynamic environments. The reliability and accuracy of a positioning system fundamentally depend on two factors: its timeliness in broadcasting signals, and knowledge of its geometry, i.e., the locations of and distances between the beacons. Existing distributed positioning systems either synchronize to a common external source like GPS or establish their own time synchrony using a master-slave scheme, designating a particular beacon as the master to which the other beacons synchronize, resulting in a single point of failure. Another drawback of existing positioning systems is their lack of attention to various fault manifestations, in particular communication link failures, which, as in wireless networks, increasingly dominate over process failures and are typically transient and mobile, in the sense that they affect different messages to/from different processes over time.

  7. Review on Water Distribution of Cooling Tower in Power Station

    NASA Astrophysics Data System (ADS)

    Huichao, Zhang; Lei, Fang; Hao, Guang; Ying, Niu

    2018-04-01

    As the energy situation becomes more and more severe, the importance of energy conservation and emissions reduction grows clearer. Since optimizing the water distribution system of a power station cooling tower can save a great amount of energy, research on water distribution systems is receiving more attention nowadays. This paper summarizes the development of the counter-flow natural draft wet cooling tower and its water distribution system, and introduces the related domestic and international research. Considering the current situation, we draw conclusions about the advantages and disadvantages of the several major water distribution modes, analyze the problems of existing water distribution approaches in engineering applications, and propose a direction for the development of water distribution modes based on knowledge of cooling tower water distribution. Because a water system can hardly be optimized again once it is built, choosing an appropriate water distribution mode according to actual conditions is all the more significant.

  8. Identification and verification of critical performance dimensions. Phase 1 of the systematic process redesign of drug distribution.

    PubMed

    Colen, Hadewig B; Neef, Cees; Schuring, Roel W

    2003-06-01

    Worldwide patient safety has become a major social policy problem for healthcare organisations. As in other organisations, the patients in our hospital also suffer from an inadequate distribution process, as becomes clear from incident reports involving medication errors. Medisch Spectrum Twente is a top primary-care, clinical, teaching hospital. The hospital pharmacy takes care of 1070 internal beds and 1120 beds in an affiliated psychiatric hospital and nursing homes. In the beginning of 1999, our pharmacy group started a large interdisciplinary research project to develop a safe, effective and efficient drug distribution system by using systematic process redesign. The process redesign includes both organisational and technological components. This article describes the identification and verification of critical performance dimensions for the design of drug distribution processes in hospitals (phase 1 of the systematic process redesign of drug distribution). Based on reported errors and related causes, we suggested six generic performance domains. To assess the role of the performance dimensions, we used three approaches: flowcharts, interviews with stakeholders and review of the existing performance using time studies and medication error studies. We were able to set targets for costs, quality of information, responsiveness, employee satisfaction, and degree of innovation. We still have to establish what drug distribution system, in respect of quality and cost-effectiveness, represents the best and most cost-effective way of preventing medication errors. We intend to develop an evaluation model, using the critical performance dimensions as a starting point. This model can be used as a simulation template to compare different drug distribution concepts in order to define the differences in quality and cost-effectiveness.

  9. Aircraft adaptive learning control

    NASA Technical Reports Server (NTRS)

    Lee, P. S. T.; Vanlandingham, H. F.

    1979-01-01

    The optimal control theory of stochastic linear systems is discussed in terms of the advantages of distributed-control systems and the control of randomly-sampled systems. An optimal solution to longitudinal control is derived and applied to the F-8 DFBW aircraft. A randomly-sampled linear process model with additive process and measurement noise is developed.

  10. An Expert System Solution for the Quantitative Condition Assessment of Electrical Distribution Systems in the United States Air Force

    DTIC Science & Technology

    1991-09-01

    Abstract unavailable; the record consists of fragments of the report's front matter (table of contents and list of tables) and body text. Recoverable details: a figure on the architecture of an expert system; a prototype component model; use of an expert system to properly process work requests in civil engineering; mention of the Electric Power Research Institute (EPRI), a private organization; and a note that the level of training shop technicians receive, and the resulting proficiency, are important in all organizations.

  11. Performance Monitoring of Distributed Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ojha, Anand K.

    2000-01-01

    Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resource and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining such systems, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (the National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.
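
    The intrusiveness comparison can be illustrated with a toy software probe. The sketch below measures probe overhead on a synthetic workload; it is a generic illustration, not the TCMS or NIST instrumentation itself.

```python
# A minimal sketch of measuring the intrusiveness of a software monitoring
# probe; the workload and numbers are illustrative assumptions.
import time

events = []

def probe(tag: str) -> None:
    # Software probe: record a tagged, timestamped event in memory.
    events.append((tag, time.perf_counter_ns()))

def workload(n: int, instrumented: bool) -> float:
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        if instrumented:
            probe("iter")
        acc += i * i
    return time.perf_counter() - start

base = workload(1_000_000, instrumented=False)
probed = workload(1_000_000, instrumented=True)
print(f"probe overhead: {(probed - base) / base:.1%} of baseline run time")
```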

  12. Process evaluation distributed system

    NASA Technical Reports Server (NTRS)

    Moffatt, Christopher L. (Inventor)

    2006-01-01

    The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.

  13. Real-time physiological monitoring with distributed networks of sensors and object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.

    1998-05-01

    Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge, often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model, and relay processed data, trends and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first is a process that accounts for variability in vital parameter values for each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to baseline a 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability of the system orders the information for each user, including the subject, local company officers, medical personnel and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links while incorporating the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.

  14. Distributed fault detection over sensor networks with Markovian switching topologies

    NASA Astrophysics Data System (ADS)

    Ge, Xiaohua; Han, Qing-Long

    2014-05-01

    This paper deals with the distributed fault detection for discrete-time Markov jump linear systems over sensor networks with Markovian switching topologies. The sensors are scatteredly deployed in the sensor field and the fault detectors are physically distributed via a communication network. The system dynamics changes and sensing topology variations are modeled by a discrete-time Markov chain with incomplete mode transition probabilities. Each of these sensor nodes first collects measurement outputs from all of its underlying neighboring nodes, processes these data in accordance with the Markovian switching topologies, and then transmits the processed data to the remote fault detector node. Network-induced delays and accumulated data packet dropouts are incorporated in the data transmission between the sensor nodes and the distributed fault detector nodes through the communication network. To generate localized residual signals, mode-independent distributed fault detection filters are proposed. By means of the stochastic Lyapunov functional approach, the residual system performance analysis is carried out such that the overall residual system is stochastically stable and the error between each residual signal and the fault signal is made as small as possible. Furthermore, a sufficient condition on the existence of the mode-independent distributed fault detection filters is derived in the simultaneous presence of incomplete mode transition probabilities, Markovian switching topologies, network-induced delays, and accumulated data packet dropouts. Finally, a stirred-tank reactor system is given to show the effectiveness of the developed theoretical results.

  15. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.

  16. Facility Monitoring: A Qualitative Theory for Sensor Fusion

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando

    2001-01-01

    Data fusion and sensor management approaches have largely been implemented with centralized and hierarchical architectures. Numerical and statistical methods are the most common data fusion methods found in these systems. Given the proliferation and low cost of processing power, there is now an emphasis on designing distributed and decentralized systems. These systems use analytical/quantitative techniques or qualitative reasoning methods for data fusion. Based on other work by the author, a sensor may be treated as a highly autonomous (decentralized) unit. Each highly autonomous sensor (HAS) is capable of extracting qualitative behaviours from its data. For example, it detects spikes, disturbances, noise levels, off-limit excursions, step changes, drift, and other typical measured trends. In this context, this paper describes a distributed sensor fusion paradigm and theory where each sensor in the system is a HAS. Building on the rich qualitative information from each HAS, a paradigm and formal definitions are given so that sensors and processes can reason and make decisions at the qualitative level. This approach to sensor fusion makes possible the implementation of intuitive (effective) methods to monitor, diagnose, and compensate processes/systems and their sensors. This paradigm facilitates a balanced distribution of intelligence (code and/or hardware) to the sensor level, the process/system level, and a higher controller level. The primary application of interest is in intelligent health management of rocket engine test stands.

  17. Traceability System for Agricultural Products Based on RFID and Mobile Technology

    NASA Astrophysics Data System (ADS)

    Sugahara, Koji

    In agriculture, food traceability systems and risk management systems must be established and integrated in order to improve food safety across the entire food chain. An integrated traceability system for agricultural products was developed, based on the innovative technologies of RFID and mobile computing. In order to identify individual products efficiently during the distribution process, small RFID tags with unique IDs and handy RFID readers were applied. During distribution, the RFID tags are checked using the readers, and transit records of the products are stored in the database via wireless LAN. Regarding agricultural production, recent incidents of pesticide misuse have affected consumer confidence in food safety. The Navigation System for Appropriate Pesticide Use (Nouyaku-navi) was developed, which is accessible in the field via Internet-enabled cell phones. Based on it, agricultural risk management systems have been developed. These systems collaborate with traceability systems and can be applied to process control and risk management in agriculture.

  18. Advanced Manufacturing Systems in Food Processing and Packaging Industry

    NASA Astrophysics Data System (ADS)

    Shafie Sani, Mohd; Aziz, Faieza Abdul

    2013-06-01

    In this paper, several advanced manufacturing systems in the food processing and packaging industry are reviewed, including biodegradable smart packaging and nanocomposites, and advanced automation control systems consisting of fieldbus technology, distributed control systems and food safety inspection features. The main purpose of current technology in the food processing and packaging industry is discussed in light of the major concerns of plant process efficiency, productivity, quality, and safety. These applications were chosen because they are robust, flexible, reconfigurable, and efficient, and they preserve the quality of the food.

  19. New security infrastructure model for distributed computing systems

    NASA Astrophysics Data System (ADS)

    Dubenskaya, J.; Kryukov, A.; Demichev, A.; Prikhodko, N.

    2016-02-01

    In this paper we propose a new approach to setting up a user-friendly and yet secure authentication and authorization procedure in a distributed computing system. The security concept of most heterogeneous distributed computing systems is based on a public key infrastructure along with proxy certificates, which are used for rights delegation. In practice, the contradiction between the limited lifetime of proxy certificates and the unpredictable duration of request processing is a big issue for the end users of the system. We propose to use hashes that are unlimited in time and individual for each request instead of proxy certificates. Our approach avoids the use of proxy certificates altogether, making the security infrastructure of a distributed computing system easier to develop, support and use.

  20. High voltage systems (tube-type microwave)/low voltage system (solid-state microwave) power distribution

    NASA Technical Reports Server (NTRS)

    Nussberger, A. A.; Woodcock, G. R.

    1980-01-01

    SPS satellite power distribution systems are described. The reference Satellite Power System (SPS) concept utilizes high-voltage klystrons to convert the onboard satellite power from dc to RF for transmission to the ground receiving station. The solar array generates this required high voltage, and the power is delivered to the klystrons through a power distribution subsystem. Array switching of solar cell submodules is used to maintain bus voltage regulation. Individual klystron dc voltage conversion is performed by centralized converters. The onboard data processing system performs the necessary switching of submodules to maintain voltage regulation. Electrical power output from the solar panels is fed via switchgear into feeder buses and then into main distribution buses to the antenna. Power also is distributed to batteries so that critical functions can be provided through solar eclipses.

  1. Concreteness effects in semantic processing: ERP evidence supporting dual-coding theory.

    PubMed

    Kounios, J; Holcomb, P J

    1994-07-01

    Dual-coding theory argues that processing advantages for concrete over abstract (verbal) stimuli result from the operation of 2 systems (i.e., imaginal and verbal) for concrete stimuli, rather than just 1 (for abstract stimuli). These verbal and imaginal systems have been linked with the left and right hemispheres of the brain, respectively. Context-availability theory argues that concreteness effects result from processing differences in a single system. The merits of these theories were investigated by examining the topographic distribution of event-related brain potentials in 2 experiments (lexical decision and concrete-abstract classification). The results were most consistent with dual-coding theory. In particular, different scalp distributions of an N400-like negativity were elicited by concrete and abstract words.

  2. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, when trying to project these data with reasonable speed and accuracy. Current single-threaded methods can suffer from two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods and distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at a low cost and provide access to supercomputer-class technology. Combining these techniques may result in the ability to use large amounts of geographic data in time-critical situations.
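
    The combination of tiling and distributed processing suggested here can be sketched as follows. The "projection" is a stand-in affine transform, since a real system would call a coordinate-transform library such as GDAL or pyproj; tile sizes and the transform are illustrative assumptions.

```python
# Sketch: split a raster into tiles and "reproject" them in parallel worker
# processes; the affine map stands in for a real coordinate transformation.
import numpy as np
from multiprocessing import Pool

def project_tile(args):
    tile_id, tile = args
    a, b = 0.5, 20.0                     # toy affine transform coefficients
    return tile_id, a * tile + b         # placeholder for true reprojection

def main():
    raster = np.arange(1024 * 1024, dtype=np.float64).reshape(1024, 1024)
    tiles = [(i, raster[r:r + 256, :]) for i, r in enumerate(range(0, 1024, 256))]
    with Pool(processes=4) as pool:
        projected = dict(pool.map(project_tile, tiles))
    out = np.vstack([projected[i] for i in sorted(projected)])
    print(out.shape)

if __name__ == "__main__":   # guard required by multiprocessing on some platforms
    main()
```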

  3. Intercommunications in Real Time, Redundant, Distributed Computer System

    NASA Technical Reports Server (NTRS)

    Zanger, H.

    1980-01-01

    An investigation into the applicability of fiber optic communication techniques to real-time avionic control systems, in particular the total automatic flight control system used for VSTOL aircraft, is presented. The system consists of spatially distributed microprocessors. The overall control function is partitioned to yield a unidirectional data flow between the processing elements (PEs). System reliability is enhanced by the use of triple redundancy. Some general overall system specifications are listed here to provide the necessary background for the requirements of the communications system.

  4. An online detection system for aggregate sizes and shapes based on digital image processing

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, labor-intensive, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and while lying flat. Using these data, the particle size and shape distribution can be obtained in real time. Here, we calibrate this method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which have good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, they performed well for single-graded aggregates, and did not require a dispersion device. Our system is low-cost and easy to install. It can successfully achieve online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.
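
    A generic version of the image-processing step (threshold, label connected particles, derive a size distribution) might look like the following. This is an assumed pipeline on synthetic data, not the authors' implementation.

```python
# Illustrative sketch: threshold a grayscale frame, label connected
# particles, and compute equivalent circular diameters; all values invented.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
frame = rng.normal(40.0, 5.0, size=(480, 640))   # synthetic dark background
frame[100:160, 200:280] += 120.0                 # first bright "aggregate"
frame[300:340, 400:430] += 120.0                 # second bright "aggregate"

binary = frame > 100.0                           # global intensity threshold
labels, n = ndimage.label(binary)                # connected-component labelling
areas = ndimage.sum(binary, labels, index=list(range(1, n + 1)))
diameters = 2.0 * np.sqrt(np.asarray(areas) / np.pi)   # equivalent circle diameter
print(n, "particles; equivalent diameters (px):", np.round(diameters, 1))
```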

  5. CHANGES IN BACTERIAL COMPOSITION OF BIOFILM IN A ...

    EPA Pesticide Factsheets

    This study examined the development of bacterial biofilms within a metropolitan distribution system. The distribution system is fed with different source waters (i.e., groundwater, GW and surface water, SW) and undergoes different treatment processes in separate facilities. The biofilm community was characterized using 16S rRNA gene clone libraries and functional potential analysis, generated from total DNA extracted from coupons in biofilm annular reactors fed with onsite drinking water for up to eighteen months. Significant differences in the bacterial community structure were observed between GW and SW. Representatives that explained the dissimilarity between service areas were associated with Betaproteobacteria, Alphaproteobacteria, Actinobacteria, Gammaproteobacteria, and Firmicutes. After nine months the biofilm bacterial community from both areas was dominated by Mycobacterium species. The distribution of the dominant OTU (Mycobacterium) positively correlated with the drinking water distribution system (DWDS) temperature, but no clear relationship was seen with free chlorine residual, pH, turbidity or total organic carbon (TOC). The results suggest that biofilm microbial communities harbor distinct and diverse bacterial communities, and that source water, treatment processes and environmental conditions may play an important role in shaping the bacterial community in the distribution system. On the other hand, several bacterial groups were present in all samples, suggesting that they are an integral part of the core microbiota of this DWDS.

  6. Study of a Compression-Molding Process for Ultraviolet Light-Emitting Diode Exposure Systems via Finite-Element Analysis

    PubMed Central

    Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang

    2017-01-01

    Although wafer-level camera lenses are a very promising technology, problems such as warpage with time and non-uniform thickness of products still exist. In this study, finite element simulation was performed to simulate the compression molding process for acquiring the pressure distribution on the product on completion of the process and predicting the deformation with respect to the pressure distribution. Results show that the single-gate compression molding process significantly increases the pressure at the center of the product, whereas the multi-gate compressing molding process can effectively distribute the pressure. This study evaluated the non-uniform thickness of product and changes in the process parameters through computer simulations, which could help to improve the compression molding process. PMID:28617315

  7. Influence of particle size distribution on nanopowder cold compaction processes

    NASA Astrophysics Data System (ADS)

    Boltachev, G.; Volkov, N.; Lukyashin, K.; Markov, V.; Chingina, E.

    2017-06-01

    Nanopowder uniform and uniaxial cold compaction processes are simulated by a 2D granular dynamics method. The interaction of particles, in addition to well-known contact laws, involves dispersive attraction forces and the possibility of interparticle solid bridge formation, which are of great importance for nanopowders. Different model systems are investigated: monosized systems with particle diameters of 10, 20 and 30 nm; bidisperse systems with different contents of small (10 nm diameter) and large (30 nm) particles; and polydisperse systems corresponding to a log-normal size distribution law of varying width. A non-monotone dependence of compact density on powder content is revealed in the bidisperse systems. The deviations of compact density in polydisperse systems from the density of the corresponding monosized system are found to be minor, less than 1 per cent.
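
    Constructing the three classes of model systems is straightforward to sketch; the parameter values below are illustrative only, not the paper's simulation inputs.

```python
# Sketch of the model systems described above: monosized, bidisperse, and
# log-normal particle diameter samples (nm); parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

mono = np.full(n, 20.0)                                   # monosized, 20 nm

frac_small = 0.3                                          # bidisperse mixture
n_small = int(frac_small * n)
bidisperse = np.concatenate([np.full(n_small, 10.0),
                             np.full(n - n_small, 30.0)])

# Log-normal law: median 20 nm, tunable width sigma.
sigma = 0.25
lognormal = rng.lognormal(mean=np.log(20.0), sigma=sigma, size=n)

for name, d in [("mono", mono), ("bidisperse", bidisperse), ("lognormal", lognormal)]:
    print(f"{name:10s} mean={d.mean():6.2f} nm  std={d.std():5.2f} nm")
```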

  8. Review of cost versus scale: water and wastewater treatment and reuse processes.

    PubMed

    Guo, Tianjiao; Englehardt, James; Wu, Tingting

    2014-01-01

    The US National Research Council recently recommended direct potable water reuse (DPR), or potable water reuse without environmental buffer, for consideration to address US water demand. However, conveyance of wastewater and water to and from centralized treatment plants consumes on average four times the energy of treatment in the USA, and centralized DPR would further require upgradient distribution of treated water. Therefore, information on the cost of unit treatment processes potentially useful for DPR versus system capacity was reviewed, converted to constant 2012 US dollars, and synthesized in this work. A logarithmic variant of the Williams Law cost function was found applicable over orders of magnitude of system capacity, for the subject processes: activated sludge, membrane bioreactor, coagulation/flocculation, reverse osmosis, ultrafiltration, peroxone and granular activated carbon. Results are demonstrated versus 10 DPR case studies. Because economies of scale found for capital equipment are counterbalanced by distribution/collection network costs, further study of the optimal scale of distributed DPR systems is suggested.
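
    A Williams-type cost law has the form C = C0 (Q/Q0)^m, and a logarithmic variant amounts to fitting log C linearly in log Q. The sketch below fits invented data points to recover a scale exponent; it is not drawn from the review's tables.

```python
# Hedged sketch of an economy-of-scale cost fit: regress log(unit cost)
# on log(capacity) to recover the scale exponent m. Data points invented.
import numpy as np

capacity = np.array([0.1, 1.0, 10.0, 100.0])        # m^3/day (toy values)
unit_cost = np.array([8.0, 5.1, 3.2, 2.0])           # $/m^3 (toy values)

m, log_c0 = np.polyfit(np.log(capacity), np.log(unit_cost), 1)
print(f"scale exponent m = {m:.2f}, C0 = {np.exp(log_c0):.2f}")
# m < 0 for unit costs here: larger plants treat each m^3 more cheaply,
# which is the economy of scale that network costs counterbalance.
```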

  9. Guest Editor's Introduction: Special section on dependable distributed systems

    NASA Astrophysics Data System (ADS)

    Fetzer, Christof

    1999-09-01

    We rely more and more on computers. For example, the Internet reshapes the way we do business. A 'computer outage' can cost a company a substantial amount of money. Not only with respect to the business lost during an outage, but also with respect to the negative publicity the company receives. This is especially true for Internet companies. After recent computer outages of Internet companies, we have seen a drastic fall of the shares of the affected companies.

    There are multiple causes for computer outages. Although computer hardware becomes more reliable, hardware related outages remain an important issue. For example, some of the recent computer outages of companies were caused by failed memory and system boards, and even by crashed disks - a failure type which can easily be masked using disk mirroring. Transient hardware failures might also look like software failures and, hence, might be incorrectly classified as such. However, many outages are software related. Faulty system software, middleware, and application software can crash a system.

    Dependable computing systems are systems we can rely on. Dependable systems are, by definition, reliable, available, safe and secure [3]. This special section focuses on issues related to dependable distributed systems. Distributed systems have the potential to be more dependable than a single computer because the probability that all computers in a distributed system fail is smaller than the probability that a single computer fails. However, if a distributed system is not built well, it is potentially less dependable than a single computer since the probability that at least one computer in a distributed system fails is higher than the probability that one computer fails. For example, if the crash of any computer in a distributed system can bring the complete system to a halt, the system is less dependable than a single-computer system.

    Building dependable distributed systems is an extremely difficult task. There is no silver bullet solution. Instead one has to apply a variety of engineering techniques [2]: fault-avoidance (minimize the occurrence of faults, e.g. by using a proper design process), fault-removal (remove faults before they occur, e.g. by testing), fault-evasion (predict faults by monitoring and reconfigure the system before failures occur), and fault-tolerance (mask and/or contain failures).

    Building a system from scratch is an expensive and time consuming effort. To reduce the cost of building dependable distributed systems, one would choose to use commercial off-the-shelf (COTS) components whenever possible. The usage of COTS components has several potential advantages beyond minimizing costs. For example, through the widespread usage of a COTS component, design failures might be detected and fixed before the component is used in a dependable system. Custom-designed components have to mature without the widespread in-field testing of COTS components. COTS components have various potential disadvantages when used in dependable systems. For example, minimizing the time to market might lead to the release of components with inherent design faults (e.g. use of 'shortcuts' that only work most of the time). In addition, the components might be more complex than needed and, hence, potentially have more design faults than simpler components.
    However, given economic constraints and the ability to cope with some of the problems using fault-evasion and fault-tolerance, only for a small percentage of systems can one justify not using COTS components. Distributed systems built from current COTS components are asynchronous systems in the sense that there exists no a priori known bound on the transmission delay of messages or the execution time of processes. When designing a distributed algorithm, one would like to make sure (e.g. by testing or verification) that it is correct, i.e. satisfies its specification. Many distributed algorithms make use of consensus (eventually all non-crashed processes have to agree on a value), leader election (a crashed leader is eventually replaced by a new leader, but at any time there is at most one leader) or a group membership detection service (a crashed process is eventually suspected to have crashed but only crashed processes are suspected). From a theoretical point of view, the service specifications given for such services are not implementable in asynchronous systems. In particular, for each implementation one can derive a counter example in which the service violates its specification. From a practical point of view, the consensus, the leader election, and the membership detection problem are solvable in asynchronous distributed systems.

    In this special section, Raynal and Tronel show how to bridge this difference by showing how to implement the group membership detection problem with a negligible probability [1] to fail in an asynchronous system. The group membership detection problem is specified by a liveness condition (L) and a safety property (S): (L) if a process p crashes, then eventually every non-crashed process q has to suspect that p has crashed; and (S) if a process q suspects p, then p has indeed crashed. One can show that either (L) or (S) is implementable, but one cannot implement both (L) and (S) at the same time in an asynchronous system. In practice, one only needs to implement (L) and (S) such that the probability that (L) or (S) is violated becomes negligible. Raynal and Tronel propose and analyse a protocol that implements (L) with certainty and that can be tuned such that the probability that (S) is violated becomes negligible.

    Designing and implementing distributed fault-tolerant protocols for asynchronous systems is a difficult but not an impossible task. A fault-tolerant protocol has to detect and mask certain failure classes, e.g. crash failures and message omission failures. There is a trade-off between the performance of a fault-tolerant protocol and the failure classes the protocol can tolerate. One wants to tolerate as many failure classes as needed to satisfy the stochastic requirements of the protocol [1] while still maintaining a sufficient performance. Since clients of a protocol have different requirements with respect to the performance/fault-tolerance trade-off, one would like to be able to customize protocols such that one can select an appropriate performance/fault-tolerance trade-off. In this special section Hiltunen et al describe how one can compose protocols from micro-protocols in their Cactus system. They show how a group RPC system can be tailored to the needs of a client. In particular, they show how considering additional failure classes affects the performance of a group RPC system.
    References
    [1] Cristian F 1991 Understanding fault-tolerant distributed systems Communications of the ACM 34(2) 56-78
    [2] Heimerdinger W L and Weinstock C B 1992 A conceptual framework for system fault tolerance Technical Report 92-TR-33, CMU/SEI
    [3] Laprie J C (ed) 1992 Dependability: Basic Concepts and Terminology (Vienna: Springer)
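
    The (L)/(S) tuning idea in the Raynal-Tronel discussion can be illustrated with a toy Monte Carlo experiment: a live process sends heartbeats, and the detector falsely suspects it whenever no heartbeat arrives within a timeout. The heartbeat period, delay model, and numbers below are invented for illustration.

```python
# Toy sketch of tuning a failure detector so that violating safety (S),
# i.e. falsely suspecting a live process, becomes negligibly likely.
import random

def false_suspicion_rate(timeout: float, period: float = 1.0,
                         mean_delay: float = 0.1, trials: int = 100_000) -> float:
    violations = 0
    for _ in range(trials):
        # Gap between consecutive heartbeat arrivals at the detector:
        # sending period plus the difference of two random network delays.
        gap = (period + random.expovariate(1 / mean_delay)
               - random.expovariate(1 / mean_delay))
        if gap > timeout:
            violations += 1
    return violations / trials

for t in (1.2, 1.5, 2.0, 3.0):
    print(f"timeout={t:.1f}s  P(false suspicion per heartbeat) ~ "
          f"{false_suspicion_rate(t):.5f}")
```

    Liveness (L) still holds with certainty: a crashed process stops sending heartbeats, so the timeout eventually fires; lengthening the timeout only trades detection latency for a smaller chance of violating (S).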

  10. Level crossings and excess times due to a superposition of uncorrelated exponential pulses

    NASA Astrophysics Data System (ADS)

    Theodorsen, A.; Garcia, O. E.

    2018-01-01

    A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
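
    The excess-time statistics can be checked numerically against the Ornstein-Uhlenbeck comparison mentioned above. The following is an illustrative simulation with assumed parameters; note that for a non-smooth process the measured crossing count depends on the sampling step.

```python
# Numerical sketch: rate of up-crossings of a threshold and average time
# spent above it, for a simulated Ornstein-Uhlenbeck process (unit variance).
import numpy as np

rng = np.random.default_rng(7)
dt, n, tau = 1e-3, 500_000, 1.0
x = np.empty(n)
x[0] = 0.0
steps = rng.normal(0.0, np.sqrt(2 * dt / tau), size=n - 1)
for i in range(1, n):
    # Euler-Maruyama for dx = -(x/tau) dt + sqrt(2/tau) dW
    x[i] = x[i - 1] - (x[i - 1] / tau) * dt + steps[i - 1]

threshold = 1.0
above = x > threshold
up_crossings = int(np.count_nonzero(~above[:-1] & above[1:]))
total_time = n * dt
print(f"rate of up-crossings ~ {up_crossings / total_time:.3f} per unit time")
if up_crossings:
    print(f"average time above threshold ~ {above.mean() * total_time / up_crossings:.4f}")
```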

  11. Self-referenced processing, neurodevelopment and joint attention in autism.

    PubMed

    Mundy, Peter; Gwaltney, Mary; Henderson, Heather

    2010-09-01

    This article describes a parallel and distributed processing model (PDPM) of joint attention, self-referenced processing and autism. According to this model, autism involves early impairments in the capacity for rapid, integrated processing of self-referenced (proprioceptive and interoceptive) and other-referenced (exteroceptive) information. Measures of joint attention have proven useful in research on autism because they are sensitive to the early development of the 'parallel' and integrated processing of self- and other-referenced stimuli. Moreover, joint attention behaviors are a consequence, but also an organizer of the functional development of a distal distributed cortical system involving anterior networks including the prefrontal and insula cortices, as well as posterior neural networks including the temporal and parietal cortices. Measures of joint attention provide early behavioral indicators of atypical development in this parallel and distributed processing system in autism. In addition it is proposed that an early, chronic disturbance in the capacity for integrating self- and other-referenced information may have cascading effects on the development of self awareness in autism. The assumptions, empirical support and future research implications of this model are discussed.

  12. Development of a web service for analysis in a distributed network.

    PubMed

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
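
    The aggregation idea can be sketched as distributed Newton-Raphson in which sites share only their local gradient and Hessian contributions, never patient-level rows. This is a compact illustration in the spirit of GLORE, not the published implementation; site sizes and coefficients are invented.

```python
# Sketch of fitting one logistic model across sites by summing per-site
# Newton-Raphson pieces; only aggregates leave each site, not patient rows.
import numpy as np

rng = np.random.default_rng(3)
beta_true = np.array([0.5, -1.0, 2.0])

def make_site(n):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))
    return X, y

sites = [make_site(n) for n in (200, 350, 500)]   # e.g. three hospitals

beta = np.zeros(3)
for _ in range(25):                               # Newton-Raphson iterations
    grad = np.zeros(3)
    hess = np.zeros((3, 3))
    for X, y in sites:                            # each site shares only these sums
        p = 1 / (1 + np.exp(-X @ beta))
        grad += X.T @ (y - p)
        hess += X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hess, grad)
print("estimated coefficients:", np.round(beta, 3))
```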

  13. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586

  14. Bacterial community structure in the drinking water microbiome is governed by filtration processes.

    PubMed

    Pinto, Ameet J; Xi, Chuanwu; Raskin, Lutgarde

    2012-08-21

    The bacterial community structure of a drinking water microbiome was characterized over three seasons using 16S rRNA gene based pyrosequencing of samples obtained from source water (a mix of a groundwater and a surface water), different points in a drinking water plant operated to treat this source water, and in the associated drinking water distribution system. Even though the source water was shown to seed the drinking water microbiome, treatment process operations limit the source water's influence on the distribution system bacterial community. Rather, in this plant, filtration by dual media rapid sand filters played a primary role in shaping the distribution system bacterial community over seasonal time scales as the filters harbored a stable bacterial community that seeded the water treatment processes past filtration. Bacterial taxa that colonized the filter and sloughed off in the filter effluent were able to persist in the distribution system despite disinfection of finished water by chloramination and filter backwashing with chloraminated backwash water. Thus, filter colonization presents a possible ecological survival strategy for bacterial communities in drinking water systems, which presents an opportunity to control the drinking water microbiome by manipulating the filter microbial community. Grouping bacterial taxa based on their association with the filter helped to elucidate relationships between the abundance of bacterial groups and water quality parameters and showed that pH was the strongest regulator of the bacterial community in the sampled drinking water system.

  15. A Photo Storm Report Mobile Application, Processing/Distribution System, and AWIPS-II Display Concept

    NASA Astrophysics Data System (ADS)

    Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.

    2014-12-01

    The increasing use of mobile phones equipped with digital cameras, and the ability to post images and information to the Internet in real time, have significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs from a mobile phone application to NWS weather forecast office social media pages has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) processing and distribution software and hardware, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering on individual PSRs would reveal photo thumbnails, and clicking on them would display the full-resolution photograph. Here, we present initial NWS forecaster feedback received from social-media-posted PSRs motivating the possible advantages of PSRs within AWIPS-II, the details of developing and implementing a PSR system, and possible future applications beyond severe weather reports and AWIPS-II.

  16. A mathematical model for generating bipartite graphs and its application to protein networks

    NASA Astrophysics Data System (ADS)

    Nacher, J. C.; Ochiai, T.; Hayashida, M.; Akutsu, T.

    2009-12-01

    Complex systems arise in many different contexts from large communication systems and transportation infrastructures to molecular biology. Most of these systems can be organized into networks composed of nodes and interacting edges. Here, we present a theoretical model that constructs bipartite networks with the particular feature that the degree distribution can be tuned depending on the probability rate of fundamental processes. We then use this model to investigate protein-domain networks. A protein can be composed of up to hundreds of domains. Each domain represents a conserved sequence segment with specific functional tasks. We analyze the distribution of domains in Homo sapiens and Arabidopsis thaliana organisms and the statistical analysis shows that while (a) the number of domain types shared by k proteins exhibits a power-law distribution, (b) the number of proteins composed of k types of domains decays as an exponential distribution. The proposed mathematical model generates bipartite graphs and predicts the emergence of this mixing of (a) power-law and (b) exponential distributions. Our theoretical and computational results show that this model requires (1) growth process and (2) copy mechanism.
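
    A schematic growth-plus-copy generator shows how the two degree distributions can behave differently. The rules and parameters below are invented for illustration and are not the authors' exact model.

```python
# Schematic bipartite protein-domain generator: growth adds new domain
# types, copying reuses existing domain sets; parameters are illustrative.
import random
from collections import Counter

random.seed(11)
proteins = [{0}]          # each protein is a set of domain-type ids
next_domain = 1
P_COPY = 0.7              # probability of the copy mechanism vs. pure growth

for _ in range(20_000):
    if random.random() < P_COPY:
        base = set(random.choice(proteins))      # copy an existing protein
        if random.random() < 0.3:                # occasionally acquire one more
            base.add(random.choice(list(random.choice(proteins))))
        proteins.append(base)
    else:
        proteins.append({next_domain})           # growth: brand-new domain type
        next_domain += 1

domain_degree = Counter()                        # proteins carrying each domain
for doms in proteins:
    for d in doms:
        domain_degree[d] += 1

shared_by_k = Counter(domain_degree.values())    # (a) heavy-tailed in k
set_sizes = Counter(len(doms) for doms in proteins)  # (b) fast-decaying sizes
print("domains shared by k proteins:", sorted(shared_by_k.items())[:8])
print("protein domain-set sizes:", sorted(set_sizes.items()))
```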

  17. Vascular system modeling in parallel environment - distributed and shared memory approaches

    PubMed Central

    Jurczuk, Krzysztof; Kretowski, Marek; Bezy-Wendling, Johanne

    2011-01-01

    The paper presents two approaches to parallel modeling of vascular system development in internal organs. In the first approach, new parts of tissue are distributed among processors and each processor is responsible for perfusing its assigned parts of tissue to all vascular trees. Communication between processors is accomplished by message passing, and therefore this algorithm is perfectly suited for distributed-memory architectures. The second approach is designed for shared-memory machines. It parallelizes the perfusion process, during which individual processing units perform calculations concerning different vascular trees. Experiments performed on a computing cluster and on multi-core machines show that both algorithms provide a significant speedup. PMID:21550891
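
    The distributed-memory idea (tissue parts farmed out to workers) can be illustrated with a toy sketch like the one below; the perfusion computation is a placeholder stand-in, not the authors' vascular-growth algorithm.

    ```python
    # Toy illustration: tissue parts are distributed among worker processes,
    # each computing a stand-in "perfusion" cost for its share of the parts.
    from multiprocessing import Pool

    TREE_ROOTS = [(0.0, 0.0), (1.0, 1.0)]  # assumed positions of two vascular trees

    def perfuse(part):
        # placeholder: distance from the tissue part to the nearest tree root
        x, y = part
        return (part, min((x - cx) ** 2 + (y - cy) ** 2 for cx, cy in TREE_ROOTS))

    if __name__ == "__main__":
        parts = [(i * 0.01, (i % 7) * 0.1) for i in range(100)]  # new tissue parts
        with Pool(4) as pool:            # four workers share the perfusion load
            results = pool.map(perfuse, parts)
        print(results[:3])
    ```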

  18. Advanced optical sensing and processing technologies for the distributed control of large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Williams, G. M.; Fraser, J. C.

    1991-01-01

    The objective was to examine state-of-the-art optical sensing and processing technology applied to controlling the motion of flexible spacecraft. Proposed large flexible space systems, such as optical telescopes and antennas, will require control over vast surfaces. Most likely, distributed control will be necessary, involving many sensors to accurately measure the surface. A similarly large number of actuators must act upon the system. The technical approach included reviewing proposed NASA missions to assess system needs and requirements. A candidate mission was chosen as a baseline study spacecraft for comparison of conventional and optical control components. Control system requirements of the baseline system were used for designing both a control system containing current off-the-shelf components and a system utilizing electro-optical devices for sensing and processing. State-of-the-art surveys of conventional sensor, actuator, and processor technologies were performed. A technology development plan is presented that outlines a logical, effective way to develop and integrate advancing technologies.

  19. Energy efficient wireless sensor network for structural health monitoring using distributed embedded piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Li, Peng; Olmi, Claudio; Song, Gangbing

    2010-04-01

    Piezoceramic-based transducers are widely researched and used for structural health monitoring (SHM) systems due to the piezoceramic material's inherent advantage of dual sensing and actuation. Wireless sensor network (WSN) technology benefits piezoceramic-based structural health monitoring systems, allowing easy and flexible installation, low system cost, and increased robustness over wired systems. However, piezoceramic wireless SHM systems still face some drawbacks. One of these is that piezoceramic-based SHM systems require relatively high computational capabilities to calculate damage information, whereas battery-powered WSN sensor nodes have strict power consumption limits and hence limited computational power. On the other hand, commonly used centralized processing networks require wireless sensors to transmit all data back to the network coordinator for analysis. This signal processing procedure can be problematic for piezoceramic-based SHM applications as it is neither energy efficient nor robust. In this paper, we aim to solve these problems with a distributed wireless sensor network for piezoceramic-based structural health monitoring systems. Three important issues (the power system, waking from sleep on impact detection, and local data processing) are addressed to reach optimized energy efficiency. Instead of the sweep sine excitation used in earlier research, several sine frequencies were used in sequence to excite the concrete structure. The wireless sensors record the sine excitations and compute the time-domain energy for each sine frequency locally to detect the energy change. By comparing the data of the damaged concrete frame with the healthy data, we are able to extract the damage information of the concrete frame. A relatively powerful wireless microcontroller was used to carry out the sampling and distributed data processing in real time. The distributed wireless network dramatically reduced the data transmission between the wireless sensors and the wireless coordinator, which in turn reduced the power consumption of the overall system.
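
    The local processing step described here (computing a per-frequency time-domain energy on the node and comparing it to a healthy baseline) might look roughly like the sketch below; the signal model and threshold are illustrative assumptions.

    ```python
    # Sketch of node-local damage detection from per-frequency signal energy.
    import math

    def signal_energy(samples):
        """Time-domain energy of one recorded excitation window."""
        return sum(s * s for s in samples)

    def detect_damage(recordings, baseline, threshold=0.2):
        """Flag excitation frequencies whose energy deviates from the baseline."""
        return [f for f, samples in recordings.items()
                if abs(signal_energy(samples) - baseline[f]) / baseline[f] > threshold]

    # synthetic example: two excitation frequencies; "damage" attenuates 400 Hz
    fs, n = 10_000, 1024
    make = lambda f, a: [a * math.sin(2 * math.pi * f * t / fs) for t in range(n)]
    baseline = {200: signal_energy(make(200, 1.0)), 400: signal_energy(make(400, 1.0))}
    recordings = {200: make(200, 1.0), 400: make(400, 0.6)}
    print(detect_damage(recordings, baseline))   # -> [400]
    ```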

  20. Multicriterion problem of allocation of resources in the heterogeneous distributed information processing systems

    NASA Astrophysics Data System (ADS)

    Antamoshkin, O. A.; Kilochitskaya, T. R.; Ontuzheva, G. A.; Stupina, A. A.; Tynchenko, V. S.

    2018-05-01

    This study reviews the problem of allocation of resources in heterogeneous distributed information processing systems, which may be formalized as a multicriterion multi-index problem with linear constraints of the transport type. The algorithms for solving this problem involve a search for the entire set of Pareto-optimal solutions. For some classes of hierarchical systems, it is possible to significantly speed up the procedure of verifying a system of linear algebraic inequalities for consistency, owing to their reducibility to stream models or to the application of other solution schemes (for strongly connected structures) that take into account the specifics of the hierarchies under consideration.
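
    For readers unfamiliar with the Pareto-optimality requirement, the naive filter below shows what "the entire set of Pareto-optimal solutions" means for a finite candidate set; it is a brute-force illustration (all criteria minimized), not the paper's specialized algorithms.

    ```python
    # Naive Pareto filter over criterion vectors, assuming minimization.
    def pareto_front(points):
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    # hypothetical criterion vectors per allocation, e.g. (load imbalance, comm cost)
    candidates = [(3, 5), (4, 4), (5, 3), (4, 6), (6, 6)]
    print(pareto_front(candidates))   # -> [(3, 5), (4, 4), (5, 3)]
    ```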

  1. Mesocell study area snow distributions for the Cold Land Processes Experiment (CLPX)

    Treesearch

    Glen E. Liston; Christopher A. Hiemstra; Kelly Elder; Donald W. Cline

    2008-01-01

    The Cold Land Processes Experiment (CLPX) had a goal of describing snow-related features over a wide range of spatial and temporal scales. This required linking disparate snow tools and datasets into one coherent, integrated package. Simulating realistic high-resolution snow distributions and features requires a snow-evolution modeling system (SnowModel) that can...

  2. Optimization research on the concentration field of NO in selective catalytic reduction flue gas denitration system

    NASA Astrophysics Data System (ADS)

    Zheng, Qingyu; Zhang, Guoqiang; Che, Kai; Shao, Shikuan; Li, Yanfei

    2017-08-01

    Taking a 660 MW generator unit denitration system as the study object, an optimization and adjustment method is designed to control ammonia slip: the ammonia injection system is adjusted based on the NO concentration distribution at the inlet and outlet of the denitration system so that the injected ammonia is distributed evenly. The results show that this method can effectively improve the NO concentration distribution at the outlet of the denitration system and decrease the ammonia injection amount and ammonia slip concentration, reducing the adverse impact of the SCR denitration process on the air preheater and ensuring that NO discharge meets the standard for safe production.

  3. System and method for secure group transactions

    DOEpatents

    Goldsmith, Steven Y [Rochester, MN

    2006-04-25

    A method and a secure system, processing on one or more computers, provide a way to control a group transaction. The invention uses group consensus access control and multiple distributed secure agents in a network environment. Each secure agent can organize with the other secure agents to form a secure distributed agent collective.

  4. Reducing acquisition risk through integrated systems of systems engineering

    NASA Astrophysics Data System (ADS)

    Gross, Andrew; Hobson, Brian; Bouwens, Christina

    2016-05-01

    In the fall of 2015, the Joint Staff J7 (JS J7) sponsored the Bold Quest (BQ) 15.2 event and conducted planning and coordination to combine this event into a joint event with the Army Warfighting Assessment (AWA) 16.1 sponsored by the U.S. Army. This multipurpose event combined a Joint/Coalition exercise (JS J7) with components of testing, training, and experimentation required by the Army. In support of Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ASA(ALT)) System of Systems Engineering and Integration (SoSE&I), Always On-On Demand (AO-OD) used a system of systems (SoS) engineering approach to develop a live, virtual, constructive distributed environment (LVC-DE) to support risk mitigation utilizing this complex and challenging exercise environment for a system preparing to enter limited user test (LUT). AO-OD executed a requirements-based SoS engineering process starting with user needs and objectives from Army Integrated Air and Missile Defense (AIAMD), Patriot units, Coalition Intelligence, Surveillance and Reconnaissance (CISR), Focused End State 4 (FES4) Mission Command (MC) Interoperability with Unified Action Partners (UAP), and Mission Partner Environment (MPE) Integration and Training, Tactics and Procedures (TTP) assessment. The SoS engineering process decomposed the common operational, analytical, and technical requirements, while utilizing the Institute of Electrical and Electronics Engineers (IEEE) Distributed Simulation Engineering and Execution Process (DSEEP) to provide structured accountability for the integration and execution of the AO-OD LVC-DE. As a result of this process implementation, AO-OD successfully planned for, prepared, and executed a distributed simulation support environment that responsively satisfied user needs and objectives, demonstrating the viability of an LVC-DE environment to support multiple user objectives and support risk mitigation activities for systems in the acquisition process.

  5. An Investigation of Distributed Communications Systems and Their Potential Applications to Command and Control Structure of the Marine Corps.

    DTIC Science & Technology

    1979-12-01

    The Marine Corps Tactical Command and Control System (MTACCS) is expected to provide increased decision-making speed and power through automated ... processing and display of data which previously was processed manually. The Landing Force Organizational Systems Study (LFOSS) has challenged Marines to

  6. Iron and copper release in drinking-water distribution systems.

    PubMed

    Shi, Baoyou; Taylor, James S

    2007-09-01

    A large-scale pilot study was carried out to evaluate the impacts of changes in water source and treatment process on iron and copper release in water distribution systems. Finished surface waters, groundwaters, and desalinated waters were produced with seven different treatment systems and supplied to 18 pipe distribution systems (PDSs). The major water treatment processes included lime softening, ferric sulfate coagulation, reverse osmosis, nanofiltration, and integrated membrane systems. PDSs were constructed from PVC, lined cast iron, unlined cast iron, and galvanized pipes. Copper pipe loops were set up for corrosion monitoring. Results showed that surface water after ferric sulfate coagulation had low alkalinity and high sulfates, and consequently caused the highest iron release. Finished groundwater treated by a conventional method produced the lowest iron release but the highest copper release. The iron release of desalinated water was relatively high because of the water's high chloride level and low alkalinity. Both iron and copper release behaviors were influenced by temperature.

  7. Supervisory control and diagnostics system for the mirror fusion test facility: overview and status 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGoldrick, P.R.

    1981-01-01

    The Mirror Fusion Test Facility (MFTF) is a complex facility requiring a highly-computerized Supervisory Control and Diagnostics System (SCDS) to monitor and provide control over ten subsystems, three of which require true process control. SCDS will provide physicists with a method of studying machine and plasma behavior by acquiring and processing up to four megabytes of plasma diagnostic information every five minutes. A high degree of availability and throughput is provided by a distributed computer system (nine 32-bit minicomputers on shared memory). Data, distributed across SCDS, is managed by a high-bandwidth Distributed Database Management System. The MFTF operators' control room consoles use color television monitors with touch-sensitive screens; this is a totally new approach. The method of handling deviations to normal machine operation and how the operator should be notified and assisted in the resolution of problems has been studied and a system designed.

  8. Digital Image Processing in Private Industry.

    ERIC Educational Resources Information Center

    Moore, Connie

    1986-01-01

    Examines various types of private industry optical disk installations in terms of business requirements for digital image systems in five areas: records management; transaction processing; engineering/manufacturing; information distribution; and office automation. Approaches for implementing image systems are addressed as well as key success…

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Hang, E-mail: hangchen@mit.edu; Thill, Peter; Cao, Jianshu

    In biochemical systems, intrinsic noise may drive the system to switch from one stable state to another. We investigate how kinetic switching between stable states in a bistable network is influenced by dynamic disorder, i.e., fluctuations in the rate coefficients. Using the geometric minimum action method, we first investigate the optimal transition paths and the corresponding minimum actions based on a genetic toggle switch model in which reaction coefficients draw from a discrete probability distribution. For the continuous probability distribution of the rate coefficient, we then consider two models of dynamic disorder in which reaction coefficients undergo different stochastic processes with the same stationary distribution. In one, the kinetic parameters follow a discrete Markov process and in the other they follow continuous Langevin dynamics. We find that regulation of the parameters modulating the dynamic disorder, as has been demonstrated to occur through allosteric control in bistable networks in the immune system, can be crucial in shaping the statistics of optimal transition paths, transition probabilities, and the stationary probability distribution of the network.

  10. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  11. A system of {sup 99m}Tc production based on distributed electron accelerators and thermal separation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, R.G.; Christian, J.D.; Petti, D.A.

    1999-04-01

    A system has been developed for the production of {sup 99m}Tc based on distributed electron accelerators and thermal separation. The radioactive decay parent of {sup 99m}Tc, {sup 99}Mo, is produced from {sup 100}Mo by a photoneutron reaction. Two alternative thermal separation processes have been developed to extract {sup 99m}Tc. Experiments have been performed to verify the technical feasibility of the production and assess the efficiency of the extraction processes. A system based on this technology enables the economical supply of {sup 99m}Tc for a large nuclear pharmacy. Twenty such production centers distributed near major metropolitan areas could produce the entire US supply of {sup 99m}Tc at a cost less than the current subsidized price.

  12. Hierarchical Process Composition: Dynamic Maintenance of Structure in a Distributed Environment

    DTIC Science & Technology

    1988-01-01

    One prominent line of research stresses the independence of address space and thread of control, and the resulting efficiencies due to shared memory...cooperating processes. StarOS focuses on ease of use and a general capability mechanism, while Medusa stresses the effect of distributed hardware on system...process structure and the asynchrony among agents and between agents and sources of failure. By stressing dynamic structure, we are led to adopt an

  13. EOS: A project to investigate the design and construction of real-time distributed embedded operating systems

    NASA Technical Reports Server (NTRS)

    Campbell, R. H.; Essick, R. B.; Grass, J.; Johnston, G.; Kenny, K.; Russo, V.

    1986-01-01

    The EOS project is investigating the design and construction of a family of real-time distributed embedded operating systems for reliable, distributed aerospace applications. Using the real-time programming techniques developed in co-operation with NASA in earlier research, the project staff is building a kernel for a multiple-processor networked system. The first six months of the grant included a study of scheduling in an object-oriented system, the design philosophy of the kernel, and the architectural overview of the operating system. In this report, the operating system and kernel concepts are described. An environment for the experiments has been built and several of the key concepts of the system have been prototyped. The kernel and operating system are intended to support future experimental studies in multiprocessing, load-balancing, routing, software fault-tolerance, distributed data base design, and real-time processing.

  14. FALCON: A distributed scheduler for MIMD architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimshaw, A.S.; Vivas, V.E. Jr.

    1991-01-01

    This paper describes FALCON (Fully Automatic Load COordinator for Networks), the scheduler for the Mentat parallel processing system. FALCON has a modular structure and is designed for systems that use a task scheduling mechanism. FALCON is distributed, stable, supports system heterogeneities, and employs a sender-initiated adaptive load sharing policy with static task assignment. FALCON is parameterizable and is implemented in Mentat, a working distributed system. We present the design and implementation of FALCON as well as a brief introduction to those features of the Mentat run-time system that influence FALCON. Performance measures under different scheduler configurations are also presented and analyzed with respect to the system parameters. 36 refs., 8 figs.

  15. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  16. State estimation for distributed systems with sensing delay

    NASA Astrophysics Data System (ADS)

    Alexander, Harold L.

    1991-08-01

    Control of complex systems such as remote robotic vehicles requires combining data from many sensors where the data may often be delayed by sensory processing requirements. The number and variety of sensors make it desirable to distribute the computational burden of sensing and estimation among multiple processors. Classic Kalman filters do not lend themselves to distributed implementations or delayed measurement data. The alternative Kalman filter designs presented in this paper are adapted for delays in sensor data generation and for distribution of computation for sensing and estimation over a set of networked processors.
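
    One common way to adapt a Kalman filter to delayed data (keep a short state history and re-run the filter from the measurement's timestamp) is sketched below for a scalar random-walk model; this is a conceptual illustration of the delay problem, not the specific filter designs of the paper.

    ```python
    # Delay-tolerant scalar Kalman filter: roll back to the measurement time,
    # apply the late update, then redo the predictions up to the present.
    class DelayTolerantKF:
        def __init__(self, x0, p0, q, r):
            self.q, self.r = q, r        # process / measurement noise variances
            self.history = [(x0, p0)]    # (state, variance) at each time step

        def predict(self):
            x, p = self.history[-1]
            self.history.append((x, p + self.q))   # random-walk prediction

        def update_at(self, k, z):
            """Apply measurement z taken at past step k, then re-propagate."""
            x, p = self.history[k]
            gain = p / (p + self.r)
            self.history[k] = (x + gain * (z - x), (1 - gain) * p)
            for i in range(k + 1, len(self.history)):   # redo later predictions
                x, p = self.history[i - 1]
                self.history[i] = (x, p + self.q)

    kf = DelayTolerantKF(x0=0.0, p0=1.0, q=0.1, r=0.5)
    for _ in range(5):
        kf.predict()
    kf.update_at(2, z=1.2)   # a measurement from step 2 arrives at step 5
    print(kf.history[-1])    # current estimate now reflects the late data
    ```

    In a complete design, any measurements already applied after step k would also be re-applied during the re-propagation; the sketch omits that bookkeeping.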

  17. Power management and distribution considerations for a lunar base

    NASA Technical Reports Server (NTRS)

    Kenny, Barbara H.; Coleman, Anthony S.

    1991-01-01

    Design philosophies and technology needs for the power management and distribution (PMAD) portion of a lunar base power system are discussed. A process is described whereby mission planners may proceed from a knowledge of the PMAD functions and mission performance requirements to a definition of design options and technology needs. Current research efforts at the NASA LRC to meet the PMAD system needs for a lunar base are described. Based on the requirements, the lunar base PMAD is seen as best being accomplished by a utility-like system, although with some additional demands including autonomous operation and scheduling and accurate, predictive modeling during the design process.

  18. Electric power processing, distribution and control for advanced aerospace vehicles.

    NASA Technical Reports Server (NTRS)

    Krausz, A.; Felch, J. L.

    1972-01-01

    The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.

  19. Automated video-microscopic imaging and data acquisition system for colloid deposition measurements

    DOEpatents

    Abdel-Fattah, Amr I.; Reimus, Paul W.

    2004-12-28

    A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.

  20. Dynamics of assembly production flow

    NASA Astrophysics Data System (ADS)

    Ezaki, Takahiro; Yanagisawa, Daichi; Nishinari, Katsuhiro

    2015-06-01

    Despite recent developments in management theory, maintaining a manufacturing schedule remains difficult because of production delays and fluctuations in demand and supply of materials. The dynamic response of manufacturing systems to such disruptions has rarely been studied. To capture these responses, we investigate a process that models the assembly of parts into end products. The complete assembly process is represented by a directed tree, where the smallest parts are injected at leaves and the end products are removed at the root. A discrete assembly process, represented by a node on the network, integrates parts, which are then sent to the next downstream node as a single part. The model exhibits some intriguing phenomena, including overstock cascade, phase transition in terms of demand and supply fluctuations, nonmonotonic distribution of stockout in the network, and the formation of a stockout path and stockout chains. Surprisingly, these rich phenomena result solely from the nature of distributed assembly processes. From a physical perspective, these phenomena provide insight into delay dynamics and inventory distributions in large-scale manufacturing systems.
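
    A toy discrete-time version of such an assembly tree (two mid nodes feeding a root, with stochastic part supply and product demand) can be simulated in a few lines; the tree shape and probabilities below are illustrative assumptions, not the paper's parameter choices.

    ```python
    # Toy two-level assembly tree: nodes assemble when all input buffers are filled.
    import random

    def simulate(steps=10_000, p_supply=0.7, p_demand=0.6, seed=1):
        rng = random.Random(seed)
        leaf_buf = [0, 0, 0, 0]   # parts waiting at the four leaves
        mid_buf = [0, 0]          # sub-assemblies waiting at the two mid nodes
        root_stock = 0            # finished products at the root
        stockouts = 0
        for _ in range(steps):
            for i in range(4):                      # parts injected at the leaves
                leaf_buf[i] += rng.random() < p_supply
            for m in range(2):                      # mid nodes join two leaf parts
                if leaf_buf[2 * m] and leaf_buf[2 * m + 1]:
                    leaf_buf[2 * m] -= 1
                    leaf_buf[2 * m + 1] -= 1
                    mid_buf[m] += 1
            if mid_buf[0] and mid_buf[1]:           # root joins two sub-assemblies
                mid_buf[0] -= 1
                mid_buf[1] -= 1
                root_stock += 1
            if rng.random() < p_demand:             # demand arrives at the root
                if root_stock:
                    root_stock -= 1
                else:
                    stockouts += 1                  # unmet demand: a stockout
        return stockouts / steps

    print(f"stockout rate: {simulate():.3f}")
    ```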

  1. Trickling Filters. Student Manual. Biological Treatment Process Control.

    ERIC Educational Resources Information Center

    Richwine, Reynold D.

    The textual material for a unit on trickling filters is presented in this student manual. Topic areas discussed include: (1) trickling filter process components (preliminary treatment, media, underdrain system, distribution system, ventilation, and secondary clarifier); (2) operational modes (standard rate filters, high rate filters, roughing…

  2. RUSHMAPS: Real-Time Uploadable Spherical Harmonic Moment Analysis for Particle Spectrometers

    NASA Technical Reports Server (NTRS)

    Figueroa-Vinas, Adolfo

    2013-01-01

    RUSHMAPS is a new onboard data reduction scheme that gives real-time access to key science parameters (e.g. moments) for a class of heliophysics science and/or solar system exploration investigations that includes plasma particle spectrometers (PPS), but requires moments reporting (density, bulk velocity, temperature, pressure, etc.) of higher-level quality, and tolerates a lowpass (variable quality) spectral representation of the corresponding particle velocity distributions, such that telemetry use is minimized. The proposed methodology trades access to the full-resolution velocity distribution data, saving on telemetry, for real-time access to both the moments and an adjustable-quality (increasing quality increases volume) spectral representation of distribution functions. Traditional onboard data storage and downlink bandwidth constraints severely limit PPS system functionality and drive cost, which, as a consequence, limits data collection and lowers angular, energy, and time resolution. This prototypical system, exploiting high-performance processing technology at GSFC (Goddard Space Flight Center), uses a SpaceCube and/or Maestro-type platform for processing. These processing platforms are currently being used on the International Space Station as a technology demonstration, and work is currently ongoing in a new onboard computation system for Earth Science missions, but they have never been implemented in heliospheric science or solar system exploration missions. Preliminary analysis confirms that the targeted processor platforms possess the processing resources required for real-time application of these algorithms to the spectrometer data. SpaceCube platforms demonstrate that the target architecture possesses the sort of compact, low-mass/power, radiation-tolerant characteristics needed for flight. These high-performing hybrid systems embed unprecedented amounts of onboard processing power in the CPU (central processing unit), FPGA (field programmable gate array), and DSP (digital signal processing) elements. The fundamental computational algorithm deconstructs 3D velocity distributions in terms of spherical harmonic spectral coefficients (which are analogous to a Fourier sine-cosine decomposition), but instead uses spherical harmonic Legendre polynomial orthogonal functions as a basis for the expansion, portraying each 2D angular distribution at every energy or, geometrically, each spherical speed-shell swept by the particle spectrometer. Optionally, these spherical harmonic spectral coefficients may be telemetered to the ground. These provide a smoothed description of the velocity distribution function whose quality depends on the number of coefficients determined. Successfully implemented on the GSFC-developed processor, the proposed methodology is demonstrated to integrate with both heritage and anticipated future plasma particle spectrometer designs (with sufficiently detailed design analysis to advance TRL) and to show specific science relevancy to future HSD (Heliophysics Science Division) solar-interplanetary and planetary missions, sounding rockets, and/or CubeSat missions.
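
    The core decomposition can be illustrated with standard library routines: expand one angular distribution (one speed shell) in spherical harmonics and keep a truncated coefficient set. The grid sizes, truncation order, and test distribution below are illustrative assumptions, not the flight algorithm.

    ```python
    # Truncated spherical-harmonic expansion of one angular (speed-shell) slice.
    import numpy as np
    from scipy.special import sph_harm

    n_th, n_ph, l_max = 64, 32, 4
    theta = np.linspace(0, 2 * np.pi, n_th, endpoint=False)  # azimuth
    phi = np.linspace(0, np.pi, n_ph)                        # polar angle
    TH, PH = np.meshgrid(theta, phi)

    f = np.exp(-(PH / 0.5) ** 2)          # toy angular flux: beam along +z

    # quadrature weights for the solid-angle integral
    dOmega = (2 * np.pi / n_th) * (np.pi / (n_ph - 1)) * np.sin(PH)
    coeffs = {(l, m): np.sum(f * np.conj(sph_harm(m, l, TH, PH)) * dOmega)
              for l in range(l_max + 1) for m in range(-l, l + 1)}

    # smoothed reconstruction from the truncated coefficient set
    f_rec = sum(c * sph_harm(m, l, TH, PH) for (l, m), c in coeffs.items()).real
    print("max reconstruction error:", float(np.abs(f - f_rec).max()))
    ```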

  3. Challenges of Using CSCL in Open Distributed Learning.

    ERIC Educational Resources Information Center

    Nilsen, Anders Grov; Instefjord, Elen J.

    As a compulsory part of the study in Pedagogical Information Science at the University of Bergen and Stord/Haugesund College (Norway) during the spring term of 1999, students participated in a distributed group activity that provided experience on distributed collaboration and use of online groupware systems. The group collaboration process was…

  4. Design and implementation of a status at a glance user interface for a power distribution expert system

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Manner, David B.; Dolce, James L.; Mellor, Pamela A.

    1993-01-01

    Expert systems are widely used in health monitoring and fault detection applications. One of the key features of an expert system is that it possesses a large body of knowledge about the application for which it was designed. When the user consults this knowledge base, it is essential that the expert system's reasoning process and its conclusions be as concise as possible. If, in addition, an expert system is part of a process monitoring system, the expert system's conclusions must be combined with current events of the process. Under these circumstances, it is difficult for a user to absorb and respond to all the available information. For example, a user can become distracted and confused if two or more unrelated devices in different parts of the system require attention. A human interface designed to integrate expert system diagnoses with process data and to focus the user's attention to the important matters provides a solution to the 'information overload' problem. This paper will discuss a user interface to the power distribution expert system for Space Station Freedom. The importance of features which simplify assessing system status and which minimize navigating through layers of information will be discussed. Design rationale and implementation choices will also be presented.

  5. DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data.

    PubMed

    Putri, Fadhilah Kurnia; Song, Giltae; Kwon, Joonho; Rao, Praveen

    2017-09-25

    One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ), which efficiently identifies profitable areas by exploiting the Apache Software Foundation's Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data.
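
    The Z-order encoding at the heart of the Z-Skyline step can be shown compactly: interleaving coordinate bits yields a one-dimensional key that roughly preserves spatial locality. The bit width below is an arbitrary assumption; this is the generic Morton encoding, not DISPAQ's full skyline logic.

    ```python
    # Generic Morton (Z-order) encoding of 2-D grid coordinates.
    def morton_encode(x: int, y: int, bits: int = 16) -> int:
        """Interleave the bits of x and y into a single Z-order key."""
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (2 * i)        # x bits at even positions
            key |= ((y >> i) & 1) << (2 * i + 1)    # y bits at odd positions
        return key

    # nearby grid cells map to nearby Z-order keys
    for cell in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 0)]:
        print(cell, morton_encode(*cell))
    ```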

  6. DISPAQ: Distributed Profitable-Area Query from Big Taxi Trip Data †

    PubMed Central

    Putri, Fadhilah Kurnia; Song, Giltae; Rao, Praveen

    2017-01-01

    One of the crucial problems for taxi drivers is to efficiently locate passengers in order to increase profits. The rapid advancement and ubiquitous penetration of Internet of Things (IoT) technology into transportation industries enables us to provide taxi drivers with locations that have more potential passengers (more profitable areas) by analyzing and querying taxi trip data. In this paper, we propose a query processing system, called Distributed Profitable-Area Query (DISPAQ), which efficiently identifies profitable areas by exploiting the Apache Software Foundation’s Spark framework and a MongoDB database. DISPAQ first maintains a profitable-area query index (PQ-index) by extracting area summaries and route summaries from raw taxi trip data. It then identifies candidate profitable areas by searching the PQ-index during query processing. Then, it exploits a Z-Skyline algorithm, which is an extension of skyline processing with a Z-order space filling curve, to quickly refine the candidate profitable areas. To improve the performance of distributed query processing, we also propose local Z-Skyline optimization, which reduces the number of dominant tests by distributing killer profitable areas to each cluster node. Through extensive evaluation with real datasets, we demonstrate that our DISPAQ system provides a scalable and efficient solution for processing profitable-area queries from huge amounts of big taxi trip data. PMID:28946679

  7. Macroergonomic study of food sector company distribution centres.

    PubMed

    García Acosta, Gabriel; Lange Morales, Karen

    2008-07-01

    This study focussed on the design of the work system to be used by a Colombian food sector company for distributing products. It considered the concept of participative ergonomics, where people from the commercial, logistics, operations, and occupational health areas worked in conjunction with the industrial designers and ergonomists who methodologically led the project. As a whole, the project was conceived as having five phases: outline, diagnosis, modelling the process, scalability, and instrumentation. The results of the project translate into procedures for selecting and planning a new distribution centre, the operational process model, a description of ergonomic systems that will enable specific work stations to be designed, and a procedure for adapting existing warehouses. Strategically, this work helped optimise the company's processes and ensure that knowledge would be transferred within it. In turn, it became a primary prevention strategy in the field of health, aimed at reducing occupational risks and improving the quality of life at work.

  8. Measuring of temperatures of phase objects using a point-diffraction interferometer plate made with the thermocavitation process

    NASA Astrophysics Data System (ADS)

    Aguilar, Juan C.; Berriel-Valdos, L. R.; Aguilar, J. Felix; Mejia-Romero, S.

    An optical system formed by four point-diffraction interferometers is used for measuring the refractive index distribution of a phase object. The phase of the object is assumed smooth enough to be computed in terms of the Radon transform, and it is processed with an iterative tomographic algorithm. Then, the associated refractive index distribution is calculated. To recover the phase from the interferograms we use the Kreis method, which is useful for interferograms having only a few fringes. As an application of our technique, the temperature distribution of a candle flame is retrieved; this was done with the aid of the Gladstone-Dale equation. We also describe the process of manufacturing the point-diffraction interferometer (PDI) plates, which were made by means of the thermocavitation process. The obtained three-dimensional distribution of temperature is presented.
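
    The final conversion step mentioned above (from recovered refractive index to temperature via the Gladstone-Dale relation n - 1 = K*rho combined with the ideal-gas law) reduces to a one-line formula; the constants below are typical textbook values for air, not the paper's calibration.

    ```python
    # Temperature from refractive index via Gladstone-Dale plus the ideal-gas law:
    # n - 1 = K * rho and rho = P / (R_air * T)  =>  T = K * P / (R_air * (n - 1)).
    def temperature_from_index(n, pressure=101_325.0, k_gd=2.26e-4, r_air=287.05):
        return k_gd * pressure / (r_air * (n - 1.0))

    print(f"{temperature_from_index(1.00027):.1f} K")   # ambient-air sanity check
    ```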

  9. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but often comes at the substantial cost of forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecasting System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the MizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.

  10. Lessons Learned In Developing Multiple Distributed Planning Systems for the International Space Station

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.; McNair, Ann R. (Technical Monitor)

    2002-01-01

    The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The "pieces" are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems will also be addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.

  11. Lessons Learned in Developing Multiple Distributed Planning Systems for the International Space Station

    NASA Technical Reports Server (NTRS)

    Maxwell, Theresa G.

    2002-01-01

    The planning processes for the International Space Station (ISS) Program are quite complex. Detailed mission planning for ISS on-orbit operations is a distributed function. Pieces of the on-orbit plan are developed by multiple planning organizations, located around the world, based on their respective expertise and responsibilities. The pieces are then integrated to yield the final detailed plan that will be executed onboard the ISS. Previous space programs have not distributed the planning and scheduling functions to this extent. Major ISS planning organizations are currently located in the United States (at both the NASA Johnson Space Center (JSC) and NASA Marshall Space Flight Center (MSFC)), in Russia, in Europe, and in Japan. Software systems have been developed by each of these planning organizations to support their assigned planning and scheduling functions. Although there is some cooperative development and sharing of key software components, each planning system has been tailored to meet the unique requirements and operational environment of the facility in which it operates. However, all the systems must operate in a coordinated fashion in order to effectively and efficiently produce a single integrated plan of ISS operations, in accordance with the established planning processes. This paper addresses lessons learned during the development of these multiple distributed planning systems, from the perspective of the developer of one of the software systems. The lessons focus on the coordination required to allow the multiple systems to operate together, rather than on the problems associated with the development of any particular system. Included in the paper is a discussion of typical problems faced during the development and coordination process, such as incompatible development schedules, difficulties in defining system interfaces, technical coordination and funding for shared tools, continually evolving planning concepts/requirements, programmatic and budget issues, and external influences. Techniques that mitigated some of these problems will also be addressed, along with recommendations for any future programs involving the development of multiple planning and scheduling systems. Many of these lessons learned are not unique to the area of planning and scheduling systems, so may be applied to other distributed ground systems that must operate in concert to successfully support space mission operations.

  12. Network-Capable Application Process and Wireless Intelligent Sensors for ISHM

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Morris, Jon; Turowski, Mark; Wang, Ray

    2011-01-01

    Intelligent sensor technology and systems are increasingly becoming attractive means to serve as frameworks for intelligent rocket test facilities with embedded intelligent sensor elements, distributed data acquisition elements, and onboard data acquisition elements. Networked intelligent processors enable users and systems integrators to automatically configure their measurement automation systems for analog sensors. NASA and leading sensor vendors are working together to apply the IEEE 1451 standard for adding plug-and-play capabilities for wireless analog transducers through the use of a Transducer Electronic Data Sheet (TEDS) in order to simplify sensor setup, use, and maintenance, to automatically obtain calibration data, and to eliminate manual data entry and error. A TEDS contains the critical information needed by an instrument or measurement system to identify, characterize, interface, and properly use the signal from an analog sensor. A TEDS is deployed for a sensor in one of two ways. First, the TEDS can reside in embedded, nonvolatile memory (typically flash memory) within the intelligent processor. Second, a virtual TEDS can exist as a separate file, downloadable from the Internet. This concept of virtual TEDS extends the benefits of the standardized TEDS to legacy sensors and applications where the embedded memory is not available. An HTML-based user interface provides a visual tool to interface with those distributed sensors that a TEDS is associated with, to automate the sensor management process. Implementing and deploying the IEEE 1451.1-based Network-Capable Application Process (NCAP) can achieve support for intelligent process in Integrated Systems Health Management (ISHM) for the purpose of monitoring, detection of anomalies, diagnosis of causes of anomalies, prediction of future anomalies, mitigation to maintain operability, and integrated awareness of system health by the operator. It can also support local data collection and storage. This invention enables wide-area sensing and employs numerous globally distributed sensing devices that observe the physical world through the existing sensor network. This innovation enables distributed storage, distributed processing, distributed intelligence, and the availability of DiaK (Data, Information, and Knowledge) to any element as needed. It also enables the simultaneous execution of multiple processes, and represents models that contribute to the determination of the condition and health of each element in the system. The NCAP (intelligent process) can configure data-collection and filtering processes in reaction to sensed data, allowing it to decide when and how to adapt collection and processing with regard to sophisticated analysis of data derived from multiple sensors. The user will be able to view the sensing device network as a single unit that supports a high-level query language. Each query would be able to operate over data collected from across the global sensor network just as a search query encompasses millions of Web pages. The sensor web can preserve ubiquitous information access between the querier and the queried data. Pervasive monitoring of the physical world raises significant data and privacy concerns. This innovation enables different authorities to control portions of the sensing infrastructure, and sensor service authors may wish to compose services across authority boundaries.
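
    The role a TEDS plays (a self-describing record that lets the system identify a sensor and convert its raw output without manual entry) can be caricatured as below; the fields and linear calibration are simplified stand-ins, not the IEEE 1451 binary TEDS layout.

    ```python
    # Simplified TEDS-like record: identification plus calibration metadata.
    from dataclasses import dataclass

    @dataclass
    class MiniTEDS:
        manufacturer_id: int
        model_number: int
        serial_number: int
        units: str         # engineering units of the calibrated output
        cal_slope: float   # linear calibration: value = slope * raw + offset
        cal_offset: float

        def calibrate(self, raw: float) -> float:
            return self.cal_slope * raw + self.cal_offset

    # plug-and-play style setup: read the "data sheet" instead of typing it in
    teds = MiniTEDS(0x17, 4021, 998877, "kPa", cal_slope=0.25, cal_offset=-12.0)
    print(teds.units, teds.calibrate(raw=3200.0))   # -> kPa 788.0
    ```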

  13. Inferring the parameters of a Markov process from snapshots of the steady state

    NASA Astrophysics Data System (ADS)

    Dettmer, Simon L.; Berg, Johannes

    2018-02-01

    We seek to infer the parameters of an ergodic Markov process from samples taken independently from the steady state. Our focus is on non-equilibrium processes, where the steady state is not described by the Boltzmann measure, but is generally unknown and hard to compute, which prevents the application of established equilibrium inference methods. We propose a quantity we call propagator likelihood, which takes on the role of the likelihood in equilibrium processes. This propagator likelihood is based on fictitious transitions between those configurations of the system which occur in the samples. The propagator likelihood can be derived by minimising the relative entropy between the empirical distribution and a distribution generated by propagating the empirical distribution forward in time. Maximising the propagator likelihood leads to an efficient reconstruction of the parameters of the underlying model in different systems, both with discrete configurations and with continuous configurations. We apply the method to non-equilibrium models from statistical physics and theoretical biology, including the asymmetric simple exclusion process (ASEP), the kinetic Ising model, and replicator dynamics.
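
    A numerical caricature of the propagator-likelihood idea, for a three-state birth-death chain rather than the paper's applications, is given below: the empirical snapshot distribution is propagated one fictitious step with candidate parameters, and the samples are scored against the result. The chain, sample size, and grid search are illustrative assumptions.

    ```python
    # Propagator-likelihood inference for a 3-state birth-death chain.
    import numpy as np

    def transition_matrix(b, d):
        # column-stochastic one-step propagator on states {0, 1, 2}
        return np.array([[1 - b, d,         0.0  ],
                         [b,     1 - b - d, d    ],
                         [0.0,   b,         1 - d]])

    rng = np.random.default_rng(0)
    b_true, d = 0.3, 0.2
    pi = np.array([(b_true / d) ** k for k in range(3)])
    pi /= pi.sum()                                   # true stationary distribution
    samples = rng.choice(3, size=5000, p=pi)         # independent snapshots
    p_emp = np.bincount(samples, minlength=3) / len(samples)

    def propagator_likelihood(b):
        # score samples against the empirical distribution propagated one step
        return p_emp @ np.log(transition_matrix(b, d) @ p_emp)

    grid = np.linspace(0.05, 0.6, 111)
    b_hat = grid[np.argmax([propagator_likelihood(b) for b in grid])]
    print(f"true b = {b_true}, inferred b = {b_hat:.3f}")
    ```

    With exact steady-state input, the likelihood is maximized at the true parameter, since propagating the stationary distribution leaves it unchanged; with finite samples the estimate is approximate.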

  14. A comparison of producer gas, biochar, and activated carbon from two distributed scale thermochemical conversion systems used to process forest biomass

    Treesearch

    Nathaniel Anderson; J. Greg Jones; Deborah Page-Dumroese; Daniel McCollum; Stephen Baker; Daniel Loeffler; Woodam Chung

    2013-01-01

    Thermochemical biomass conversion systems have the potential to produce heat, power, fuels and other products from forest biomass at distributed scales that meet the needs of some forest industry facilities. However, many of these systems have not been deployed in this sector and the products they produce from forest biomass have not been adequately described or...

  15. Unit Under Test Simulator Feasibility Study.

    DTIC Science & Technology

    1980-06-01

    interlocking connectors to conceptual differences such as octopus types of cables. The validity of the IA description to the UUT simulator. Although...Research Institute, January 1978. 146. Ring, S. J. "Automatic Testing Via a Distributed Intelligence Processing System." Autotestcon 77, 2-4 November 1977...pp. 89-98. 147. Ring, S. J. "A Distributed Intelligence Automatic Test System for PATRIOT." IEEE Trans. Aerosp. and Electron. Systems, 1977, Vol. AES

  16. Globular cluster systems - Comparative evolution of Galactic halos

    NASA Astrophysics Data System (ADS)

    Harris, William E.

    Space distributions, metallicity/age distributions, and kinematics are considered for the Milky Way halo system. Comparisons are made with other systems, and time scales for dynamical evolution are considered. It is noted that the globular cluster subsystems of halos resemble each other more closely than their parent galaxies do; this forms a reasonable basis for supposing that they represent a kind of underlying unity in the protogalaxy formation process.

  17. Software techniques for a distributed real-time processing system. [for spacecraft

    NASA Technical Reports Server (NTRS)

    Lesh, F.; Lecoq, P.

    1976-01-01

    The paper describes software techniques developed for the Unified Data System (UDS), a distributed processor network for control and data handling onboard a planetary spacecraft. These techniques include a structured language for specifying the programs contained in each module, and a small executive program in each module which performs scheduling and implements the module task.

  18. INVESTIGATION OF AQUEOUS BIPHASIC SYSTEMS FOR THE SEPARATIONS OF LIGNINS FROM CELLULOSE IN THE PAPER PULPING PROCESS. (R826732)

    EPA Science Inventory

    In efforts to apply a polymer-based aqueous biphasic system (ABS) extraction to the paper pulping process, the study of the distribution of various lignin and cellulosic fractions in ABS and the effects of temperature on system composition and solute partitioning have been inv...

  19. Optoelectronics in TESLA, LHC, and pi-of-the-sky experiments

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.; Pozniak, Krzysztof T.; Wrochna, Grzegorz; Simrock, Stefan

    2004-09-01

    Optical and optoelectronic technologies are more and more widely used in the biggest world experiments of high energy and nuclear physics, as well as in astronomy. The paper is a broad digest describing the usage of optoelectronics in such experiments and giving information about some of the involved teams. The described experiments include: the TESLA linear accelerator and FEL, the Compact Muon Solenoid at LHC, and the recently started π-of-the-sky experiment for global observation of gamma-ray bursts (with associated optical flashes). Optoelectronics and photonics offer several key features which either extend the technical parameters of existing solutions or add quite new practical application possibilities. Some of these favorable features of photonic systems are: high selectivity of optical sensors, immunity to some kinds of noise processes, extremely broad bandwidth exchangeable for either terabit-rate transmission or ultrashort pulse generation, parallel image processing capability, etc. The following groups of photonic components and systems were described: (1) discrete component applications such as LEDs, PDs, LDs, CCD and CMOS cameras, active optical crystals, and optical fibers in radiation dosimetry, astronomical image processing, and the building of more complex photonic systems; (2) optical fiber networks serving as very stable phase distribution, clock signal distribution, distributed dosimeters, and distributed gigabit transmission for control, diagnostics, and data acquisition/processing; (3) fast and stable coherent femtosecond laser systems with active optical components for electro-optical sampling and photocathode excitation in the RF electron gun for the linac. The parameters of some of these systems were quoted and discussed. A number of the discussed solutions seem to be competitive against the classical ones. Several future fields seem to emerge involving direct coupling between ultrafast photonic and VLSI FPGA-based technologies.

  20. Theory of a general class of dissipative processes.

    NASA Technical Reports Server (NTRS)

    Hale, J. K.; Lasalle, J. P.; Slemrod, M.

    1972-01-01

    Development of a theory of periodic processes that is sufficiently general to be applied to systems defined by partial differential equations (distributed parameter systems) and functional differential equations of the retarded and neutral type (hereditary systems), as well as to systems arising in the theory of elasticity. In particular, an attempt is made to develop a meaningful general theory of dissipative periodic systems with a wide range of applications.

  1. System of and method for transparent management of data objects in containers across distributed heterogenous resources

    DOEpatents

    Moore, Reagan W.; Rajasekar, Arcot; Wan, Michael Y.

    2004-01-13

    A system of and method for maintaining data objects in containers across a network of distributed heterogeneous resources in a manner which is transparent to a client. A client request pertaining to containers is resolved by querying meta data for the container, processing the request through one or more copies of the container maintained on the system, updating the meta data for the container to reflect any changes made to the container as a result of processing the request, and, if a copy of the container has changed, changing the status of the copy to indicate dirty status or synchronizing the copy to one or more other copies that may be present on the system.

  2. System of and method for transparent management of data objects in containers across distributed heterogenous resources

    DOEpatents

    Moore, Reagan W [San Diego, CA; Rajasekar, Arcot [Del Mar, CA; Wan, Michael Y [San Diego, CA

    2007-09-11

    A system of and method for maintaining data objects in containers across a network of distributed heterogeneous resources in a manner which is transparent to a client. A client request pertaining to containers is resolved by querying meta data for the container, processing the request through one or more copies of the container maintained on the system, updating the meta data for the container to reflect any changes made to the container as a result of processing the request, and, if a copy of the container has changed, changing the status of the copy to indicate dirty status or synchronizing the copy to one or more other copies that may be present on the system.

  3. System of and method for transparent management of data objects in containers across distributed heterogenous resources

    DOEpatents

    Moore, Reagan W.; Rajasekar, Arcot; Wan, Michael Y.

    2010-09-21

    A system of and method for maintaining data objects in containers across a network of distributed heterogeneous resources in a manner which is transparent to a client. A client request pertaining to containers is resolved by querying meta data for the container, processing the request through one or more copies of the container maintained on the system, updating the meta data for the container to reflect any changes made to the container as a result of processing the request, and, if a copy of the container has changed, changing the status of the copy to indicate dirty status or synchronizing the copy to one or more other copies that may be present on the system.

  4. Bringing the CMS distributed computing system into scalable operations

    NASA Astrophysics Data System (ADS)

    Belforte, S.; Fanfani, A.; Fisk, I.; Flix, J.; Hernández, J. M.; Kress, T.; Letts, J.; Magini, N.; Miccio, V.; Sciabà, A.

    2010-04-01

    Establishing efficient and scalable operations of the CMS distributed computing system critically relies on the proper integration, commissioning and scale testing of the data and workload management tools, the various computing workflows and the underlying computing infrastructure, located at more than 50 computing centres worldwide and interconnected by the Worldwide LHC Computing Grid. Computing challenges periodically undertaken by CMS in the past years with increasing scale and complexity have revealed the need for a sustained effort on computing integration and commissioning activities. The Processing and Data Access (PADA) Task Force was established at the beginning of 2008 within the CMS Computing Program with the mandate of validating the infrastructure for organized processing and user analysis including the sites and the workload and data management tools, validating the distributed production system by performing functionality, reliability and scale tests, helping sites to commission, configure and optimize the networking and storage through scale testing data transfers and data processing, and improving the efficiency of accessing data across the CMS computing system from global transfers to local access. This contribution reports on the tools and procedures developed by CMS for computing commissioning and scale testing as well as the improvements accomplished towards efficient, reliable and scalable computing operations. The activities include the development and operation of load generators for job submission and data transfers with the aim of stressing the experiment and Grid data management and workload management systems, site commissioning procedures and tools to monitor and improve site availability and reliability, as well as activities targeted to the commissioning of the distributed production, user analysis and monitoring systems.

  5. Multiple Scattering in Random Mechanical Systems and Diffusion Approximation

    NASA Astrophysics Data System (ADS)

    Feres, Renato; Ng, Jasmine; Zhang, Hong-Kun

    2013-10-01

    This paper is concerned with stochastic processes that model multiple (or iterated) scattering in classical mechanical systems of billiard type, defined below. From a given (deterministic) system of billiard type, a random process with transition probabilities operator P is introduced by assuming that some of the dynamical variables are random with prescribed probability distributions. Of particular interest are systems with weak scattering, which are associated with parametric families of operators P_h, depending on a geometric or mechanical parameter h, that approach the identity as h goes to 0. It is shown that (P_h - I)/h converges for small h to a second-order elliptic differential operator L on compactly supported functions, and that the Markov chain process associated with P_h converges to a diffusion with infinitesimal generator L. Both P_h and L are self-adjoint and (densely) defined on the space of square-integrable functions over the (lower) half-space, with respect to a stationary measure η. This measure's density is either the (post-collision) Maxwell-Boltzmann distribution or the Knudsen cosine law, and the random processes with infinitesimal generator L correspond, respectively, to what we call MB diffusion and (generalized) Legendre diffusion. Concrete examples of simple mechanical systems are given and illustrated by numerically simulating the random processes.

  6. Application of a distributed systems architecture for increased speed in image processing on an autonomous ground vehicle

    NASA Astrophysics Data System (ADS)

    Wright, Adam A.; Momin, Orko; Shin, Young Ho; Shakya, Rahul; Nepal, Kumud; Ahlgren, David J.

    2010-01-01

    This paper presents the application of a distributed systems architecture to an autonomous ground vehicle, Q, that participates in both the autonomous and navigation challenges of the Intelligent Ground Vehicle Competition. In the autonomous challenge the vehicle is required to follow a course while avoiding obstacles and staying within the course boundaries, which are marked by white lines. For the navigation challenge, the vehicle is required to reach a set of target destinations, known as way points, with given GPS coordinates, and to avoid obstacles that it encounters in the process. Previously the vehicle utilized a single laptop to execute all processing activities, including image processing, sensor interfacing and data processing, path planning and navigation algorithms, and motor control. National Instruments' (NI) LabVIEW served as the programming language for software implementation. As an upgrade to last year's design, an NI compact Reconfigurable Input/Output system (cRIO) was incorporated into the system architecture. The cRIO is NI's solution for rapid prototyping and is equipped with a real-time processor, an FPGA, and modular input/output. Under the current system, the real-time processor handles the path planning and navigation algorithms, while the FPGA gathers and processes sensor data. This setup leaves the laptop free to focus on running the image processing algorithm. Image processing, as previously presented by Nepal et al., is a multi-step line extraction algorithm and constitutes the largest processor load. This distributed approach results in a faster image processing algorithm, which was previously Q's bottleneck. Additionally, the path planning and navigation algorithms are executed more reliably on the real-time processor due to its deterministic operation. The implementation of this architecture required exploration of various inter-system communication techniques. Data transfer between the laptop and the real-time processor using UDP packets was established as the most reliable protocol after testing various options. Improvement can be made to the system by migrating more algorithms to the hardware-based FPGA to further speed up the operation of the vehicle.
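
    The paper does not give the actual message format used on the UDP link; purely as an illustration, a minimal sketch of laptop-to-cRIO way-point transfer (hypothetical IP address, port, and payload layout) might look like this:

```python
# Minimal UDP link sketch (hypothetical addresses and payload format;
# the paper does not specify the actual protocol details).
import socket
import struct

CRIO_ADDR = ("192.168.1.10", 5005)  # assumed cRIO IP address and port

def send_waypoint(lat: float, lon: float) -> None:
    """Laptop side: send one GPS way point to the real-time processor."""
    payload = struct.pack("!dd", lat, lon)  # two big-endian doubles
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, CRIO_ADDR)

def receive_waypoint(port: int = 5005) -> tuple:
    """Real-time side: blocking receive; returns (lat, lon)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _ = sock.recvfrom(16)
        return struct.unpack("!dd", data)
```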

  7. Evaluation of scattered light distributions of cw-transillumination for functional diagnostic of rheumatic disorders in interphalangeal joints

    NASA Astrophysics Data System (ADS)

    Prapavat, Viravuth; Schuetz, Rijk; Runge, Wolfram; Beuthan, Juergen; Mueller, Gerhard J.

    1995-12-01

    This paper presents in-vitro studies using the scattered intensity distribution obtained by cw-transillumination to examine the condition of rheumatic disorders of interphalangeal joints. Inflammation of joints due to rheumatic diseases leads to changes in the synovial membrane, synovia composition and content, and anatomic geometric variations. Measurements have shown that these rheumatism-induced inflammation processes result in a variation of the optical properties of joint systems. With a scanning system the interphalangeal joint is transilluminated with diode lasers (670 nm, 905 nm) perpendicular to the joint cavity. The entire distribution of the transmitted radiation intensity was detected with a CCD camera. As a function of the structure and optical properties of the transilluminated volume, we obtained distributions of scattered radiation which show characteristic variations in intensity and shape. Using signal and image processing procedures, we evaluated the measured scattered distributions regarding their information weight, shape, and scale features. Mathematical methods were used to find classification criteria to determine variations of the joint condition.

  8. Data collection system: Earth Resources Technology Satellite-1

    NASA Technical Reports Server (NTRS)

    Cooper, S. (Editor); Ryan, P. T. (Editor)

    1975-01-01

    Subjects covered at the meeting concerned results on the overall data collection system including sensors, interface hardware, power supplies, environmental enclosures, data transmission, processing and distribution, maintenance and integration in resources management systems.

  9. Development, primacy, and systems of cities.

    PubMed

    El-shakhs, S

    1972-10-01

    The relationship between the evolutionary changes in the city size distribution of nationally defined urban systems and the process of socioeconomic development is examined. Attention is directed to the problems of defining and measuring changes in city size distributions, using the results to test empirically the relationship of such changes to the development process. Existing theoretical structures and empirical generalizations which have tried to explain or to describe, respectively, the hierarchical relationships of cities are represented by central place theory and rank-size relationships. The problem is not that deviations exist but that an adequate definition is lacking of urban systems on the one hand, and of a universal measure of city size distribution, which could be applied to any system irrespective of its level of development, on the other. The problem of measuring changes in city size distributions is further compounded by the lack of sufficient reliable information about different systems of cities for the purposes of empirical comparative analysis. Changes in city size distributions have thus far been viewed largely within the framework of classic equilibrium theory. A more differentiated continuum of the development process should replace the bipolar continuum of underdeveloped versus developed countries in relating changes in city size distribution with development. Implicit in this distinction is the view that processes which influence spatial organization during the early formative stages of development are inherently different from those operating during the more advanced stages. 2 approaches were used to examine the relationship between national levels of development and primacy: a comparative analysis of a large number of countries at a given point in time; and a historical analysis of a limited sample of 2 advanced countries, the US and Great Britain. The 75 countries included in this study cover a wide range of characteristics. The study found a significant association between the degree of primacy of distributions of cities and their socioeconomic level of development; the form of the primacy curve (or its evolution with development) seemed to follow a consistent pattern in which the peak of primacy is obtained during the stages of socioeconomic transition, with countries being less primate in either direction from that peak. This pattern is the result of 2 reverse influences of the development process on the spatial structure of countries--centralization and concentration beginning with the rise of cities, and a decentralization and spread effect accompanying the increasing influence and importance of the periphery and structural changes in the pattern of authority.
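
    As a point of reference for the rank-size relationships mentioned above, one common formalization (not necessarily the exact measures used in this study) is the generalized rank-size rule together with a simple primacy index:

```latex
% Generalized rank-size rule and a simple primacy index
% (illustrative forms; the paper's own measures may differ):
\[
  P_r = \frac{P_1}{r^{\,q}},
  \qquad
  \text{primacy} = \frac{P_1}{P_2 + P_3 + P_4},
\]
% where P_r is the population of the r-th largest city and q is the
% slope of the rank-size distribution (q = 1 recovers Zipf's law).
```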

  10. The use of discontinuities and functional groups to assess relative resilience in complex systems

    USGS Publications Warehouse

    Allen, Craig R.; Gunderson, Lance; Johnson, A.R.

    2005-01-01

    It is evident when the resilience of a system has been exceeded and the system qualitatively changed. However, it is not clear how to measure resilience in a system prior to the demonstration that the capacity for resilient response has been exceeded. We argue that self-organizing human and natural systems are structured by a relatively small set of processes operating across scales in time and space. These structuring processes should generate a discontinuous distribution of structures and frequencies, where discontinuities mark the transition from one scale to another. Resilience is not driven by the identity of elements of a system, but rather by the functions those elements provide, and their distribution within and across scales. A self-organizing system that is resilient should maintain patterns of function within and across scales despite the turnover of specific elements (for example, species, cities). However, the loss of functions, or a decrease in functional representation at certain scales will decrease system resilience. It follows that some distributions of function should be more resilient than others. We propose that the determination of discontinuities, and the quantification of function both within and across scales, produce relative measures of resilience in ecological and other systems. We describe a set of methods to assess the relative resilience of a system based upon the determination of discontinuities and the quantification of the distribution of functions in relation to those discontinuities. © 2005 Springer Science+Business Media, Inc.

  11. On the Path to SunShot - Emerging Issues and Challenges with Integrating High Levels of Solar into the Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Broderick, Robert; Mather, Barry

    2016-05-01

    Wide use of advanced inverters could double the electricity-distribution system’s hosting capacity for distributed PV at low costs—from about 170 GW to 350 GW (see Palmintier et al. 2016). At the distribution system level, increased variable generation due to high penetrations of distributed PV (typically rooftop and smaller ground-mounted systems) could challenge the management of distribution voltage, potentially increase wear and tear on electromechanical utility equipment, and complicate the configuration of circuit-breakers and other protection systems—all of which could increase costs, limit further PV deployment, or both. However, improved analysis of distribution system hosting capacity—the amount of distributed PV that can be interconnected without changing the existing infrastructure or prematurely wearing out equipment—has overturned previous rule-of-thumb assumptions such as the idea that distributed PV penetrations higher than 15% require detailed impact studies. For example, new analysis suggests that the hosting capacity for distributed PV could rise from approximately 170 GW using traditional inverters to about 350 GW with the use of advanced inverters for voltage management, and it could be even higher using accessible and low-cost strategies such as careful siting of PV systems within a distribution feeder and additional minor changes in distribution operations. Also critical to facilitating distributed PV deployment is the improvement of interconnection processes, associated standards and codes, and compensation mechanisms so they embrace PV’s contributions to system-wide operations. Ultimately SunShot-level PV deployment will require unprecedented coordination of the historically separate distribution and transmission systems, along with incorporation of energy storage and “virtual storage,” which exploits improved management of electric vehicle charging, building energy systems, and other large loads. Additional analysis and innovation are needed.

  12. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  13. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  14. 45 CFR 310.10 - What are the functional requirements for the Model Tribal IV-D System?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Tribal financial management and expenditure information; (d) Distribute current support and arrearage..., process and monitor accounts receivable on all amounts owed, collected, and distributed with regard to: (1...

  15. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
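
    The criticality criterion used above follows from standard branching-process theory: cascades die out on average when the spectral radius of the offspring mean matrix is below one. A small sketch of that check (with a made-up matrix, since the paper's EM estimates are not given in the abstract):

```python
# Criticality check for a multi-type branching process: the cascade is
# subcritical iff the spectral radius of the offspring mean matrix is < 1.
import numpy as np

# M[i, j] = expected number of type-j offspring per type-i parent
# (types might be line outages, load shed, isolated buses); values here
# are illustrative only, not the paper's estimates.
M = np.array([
    [0.6, 0.2, 0.1],
    [0.1, 0.5, 0.2],
    [0.0, 0.1, 0.4],
])

rho = max(abs(np.linalg.eigvals(M)))  # spectral radius
print(f"largest eigenvalue = {rho:.3f}")
print("subcritical" if rho < 1 else "critical or supercritical")
```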

  16. Architecture of distributed picture archiving and communication systems for storing and processing high resolution medical images

    NASA Astrophysics Data System (ADS)

    Tokareva, Victoria

    2018-04-01

    New-generation medicine demands a better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in today's medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and as well adapted to the needs of end users as possible.
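
    The paper's Hadoop job is not spelled out in the abstract; the following toy sketch only illustrates the MapReduce shape of such a reconstruction task (the tiling rule and the reconstruction/merge kernels are hypothetical placeholders):

```python
# Toy MapReduce-style image reconstruction (hypothetical partitioning;
# reconstruct_partial/merge stand in for the real reconstruction kernels).
from collections import defaultdict

def reconstruct_partial(data):
    return sum(data) / len(data)      # placeholder kernel

def merge(parts):
    return sum(parts) / len(parts)    # placeholder combiner

def map_phase(raw_slices):
    """Map: emit (tile_id, partial_result) for each raw data slice."""
    for slice_id, data in raw_slices:
        yield slice_id % 4, reconstruct_partial(data)  # assumed tiling rule

def reduce_phase(pairs):
    """Reduce: merge all partial results that belong to the same tile."""
    tiles = defaultdict(list)
    for tile_id, partial in pairs:
        tiles[tile_id].append(partial)
    return {t: merge(parts) for t, parts in tiles.items()}

result = reduce_phase(map_phase([(i, [i, i + 1]) for i in range(8)]))
print(result)
```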

  17. A Decision Model for Supporting Task Allocation Processes in Global Software Development

    NASA Astrophysics Data System (ADS)

    Lamersdorf, Ansgar; Münch, Jürgen; Rombach, Dieter

    Today, software-intensive systems are increasingly being developed in a globally distributed way. However, besides its benefits, global development also bears a set of risks and problems. One critical factor for successful project management of distributed software development is the allocation of tasks to sites, as this is assumed to have a major influence on the benefits and risks. We introduce a model that aims at improving management processes in globally distributed projects by giving decision support for task allocation that systematically regards multiple criteria. The criteria and causal relationships were identified in a literature study and refined in a qualitative interview study. The model uses existing approaches from distributed systems and statistical modeling. The article gives an overview of the problem and related work, introduces the empirical and theoretical foundations of the model, and shows the use of the model in an example scenario.

  18. The decay process of rotating unstable systems through the passage time distribution

    NASA Astrophysics Data System (ADS)

    Jiménez-Aquino, J. I.; Cortés, Emilio; Aquino, N.

    2001-05-01

    In this work we propose a general scheme to characterize, through the passage time distribution, the decay process of rotationally unstable systems in the presence of external forces of large amplitude. The formalism starts with a matrix Langevin-type equation formulated in the context of two dynamical representations given, respectively, by the vectors x and y, both related by a time-dependent rotation matrix. The transformation preserves the norm of the vector and decouples the set of dynamical equations in the transformed space y. We study the dynamical characterization of two-variable systems and show that the statistical properties of the passage time distribution are essentially equivalent in both dynamics. The theory is applied to the laser system studied by Dellunde et al. (Opt. Commun. 102 (1993) 277), where the effect of large injected signals on the transient dynamics of the laser was studied in terms of the complex electric field. The analytical results are compared with numerical simulation.

  19. The Global Distribution of Precipitation and Clouds. Chapter 2.4

    NASA Technical Reports Server (NTRS)

    Shepherd, J. Marshall; Adler, Robert; Huffman, George; Rossow, William; Ritter, Michael; Curtis, Scott

    2004-01-01

    The water cycle is the key circuit moving water through the Earth's system. This large system, powered by energy from the sun, is a continuous exchange of moisture between the oceans, the atmosphere, and the land. Precipitation (including rain, snow, sleet, freezing rain, and hail) is the primary mechanism for transporting water from the atmosphere back to the Earth's surface and is the key physical process that links aspects of climate, weather, and the global water cycle. Global precipitation and associated cloud processes are critical for understanding the water cycle balance on a global scale and interactions with the Earth's climate system. However, unlike measurement of less dynamic and more homogeneous meteorological fields such as pressure or even temperature, accurate assessment of global precipitation is particularly challenging due to its highly stochastic and rapidly changing nature. It is not uncommon to observe a broad spectrum of precipitation rates and distributions over very localized time scales. Furthermore, precipitating systems generally exhibit nonhomogeneous spatial distributions of rain rates over local to global domains.

  20. Harvested rainwater quality before and after treatment in six ...

    EPA Pesticide Factsheets

    Rainwater harvesting (RWH) is an alternative method of providing water for indoor domestic use, but the water quality after treatment and distribution at individual residences is not well documented. In this study, water quality parameters were measured at the cistern and indoor cold-water taps of six residential RWH systems that use various treatment processes. Potential human pathogens (Mycobacterium avium, Mycobacterium intracellulare, Aspergillus flavus, Aspergillus fumigatus, and Aspergillus niger) were found frequently in cisterns and in treated rainwater delivered at the tap; Legionella pneumophila was not detected as frequently, but it persisted in a system after its first detection. The observed decreases in bacterial concentrations from the cistern to the tap after filtration/ultraviolet (UV) treatment and distribution were less than expected; this suggests deficiencies in the effectiveness of the filtration/UV processes employed and/or degradation in water quality in the distribution system due to the absence of a disinfectant residual. The study determines the disinfection efficiency occurring in home treatment processes, performs molecular analysis of rainwater before and after treatment, and is the first study to include the monitoring of opportunistic fungal pathogens.

  1. Bluetooth-based distributed measurement system

    NASA Astrophysics Data System (ADS)

    Tang, Baoping; Chen, Zhuo; Wei, Yuguo; Qin, Xiaofeng

    2007-07-01

    A novel distributed wireless measurement system, consisting of a base station, wireless intelligent sensors, relay nodes, etc., is established by combining Bluetooth-based wireless transmission, virtual instruments, intelligent sensors, and networking. The intelligent sensors mounted on the equipment to be measured acquire various parameters, and the Bluetooth relay nodes modulate the acquired data and send them to the base station, where data analysis and processing are done so that the operational condition of the equipment can be evaluated. The establishment of the distributed measurement system is discussed with a measurement flow chart for the distributed measurement system based on Bluetooth technology, and the advantages and disadvantages of the system are analyzed at the end of the paper. The measurement system has been used successfully in the Daqing oilfield, China, for the measurement of parameters such as temperature, flow rate, and oil pressure at an electromotor-pump unit.

  2. Integrating CLIPS applications into heterogeneous distributed systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1991-01-01

    SOCIAL is an advanced, object-oriented development tool for integrating intelligent and conventional applications across heterogeneous hardware and software platforms. SOCIAL defines a family of 'wrapper' objects called agents, which incorporate predefined capabilities for distributed communication and control. Developers embed applications within agents and establish interactions between distributed agents via non-intrusive message-based interfaces. This paper describes a predefined SOCIAL agent that is specialized for integrating C Language Integrated Production System (CLIPS)-based applications. The agent's high-level Application Programming Interface supports bidirectional flow of data, knowledge, and commands to other agents, enabling CLIPS applications to initiate interactions autonomously, and respond to requests and results from heterogeneous remote systems. The design and operation of CLIPS agents are illustrated with two distributed applications that integrate CLIPS-based expert systems with other intelligent systems for isolating and mapping problems in the Space Shuttle Launch Processing System at the NASA Kennedy Space Center.

  3. Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability

    NASA Astrophysics Data System (ADS)

    Kar, Soummya; Moura, José M. F.

    2011-04-01

    The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering in networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying the associated switched (random) Riccati equation as a random dynamical system, the switching being dictated by a non-stationary Markov chain on the network graph.
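
    As a rough illustration of the two ingredients of the GIKF (local Kalman updates plus random pairwise exchange of filtering states), here is a toy scalar sketch; it is not the paper's exact protocol, and the system constants are made up:

```python
# Toy gossip-plus-Kalman sketch (scalar state; illustrative constants).
import random

a, q, r = 1.2, 0.1, 0.5  # state gain, process noise, measurement noise

def kalman_step(x_est, p, y):
    """Standard scalar Kalman prediction + measurement update."""
    x_pred, p_pred = a * x_est, a * a * p + q
    k = p_pred / (p_pred + r)
    return x_pred + k * (y - x_pred), (1.0 - k) * p_pred

def gossip_round(states, neighbors):
    """A random sensor swaps its filtering state with a random neighbor."""
    i = random.randrange(len(states))
    j = random.choice(neighbors[i])
    states[i], states[j] = states[j], states[i]

states = [(0.0, 1.0)] * 4                     # (estimate, covariance) per sensor
neighbors = [[1, 3], [0, 2], [1, 3], [0, 2]]  # ring network
states[0] = kalman_step(*states[0], 0.3)      # one local measurement update
gossip_round(states, neighbors)
```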

  4. A distributed Clips implementation: dClips

    NASA Technical Reports Server (NTRS)

    Li, Y. Philip

    1993-01-01

    A distributed version of the Clips language, dClips, was implemented on top of two existing generic distributed messaging systems to show that: (1) it is easy to create a coarse-grained parallel programming environment out of an existing language if a high level messaging system is used; and (2) the computing model of a parallel programming environment can be changed easily if we change the underlying messaging system. dClips processes were first connected with a simple master-slave model. A client-server model with intercommunicating agents was later implemented. The concept of service broker is being investigated.

  5. Extending the Community Multiscale Air Quality (CMAQ) Modeling System to Hemispheric Scales: Overview of Process Considerations and Initial Applications

    EPA Science Inventory

    The Community Multiscale Air Quality (CMAQ) modeling system is extended to simulate ozone, particulate matter, and related precursor distributions throughout the Northern Hemisphere. Modeled processes were examined and enhanced to suitably represent the extended space and timescales.

  6. Power Distribution Analysis For Electrical Usage In Province Area Using Olap (Online Analytical Processing)

    NASA Astrophysics Data System (ADS)

    Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi

    2018-02-01

    The distribution network is the part of the power grid closest to the customers of electric service providers such as PT PLN. The dispatching center of a power grid company is also the data center of the power grid, where a great amount of operating information is gathered. The valuable information contained in these data means a lot for power grid operating management. The data warehousing technique of online analytical processing (OLAP) has been used to manage and analyze this great volume of data. Specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reporting. The information in the form of chart reporting consists of load distribution charts over time, load distribution charts by area, substation region charts, and electric load usage charts. The results of the OLAP process show the development of the electric load distribution, as well as analysis of information on electric power consumption, and become an alternative in presenting information related to peak load.
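
    The abstract does not show the warehouse schema; as an illustration only, the kind of roll-up behind the load-distribution charts could be expressed with a pandas pivot over a hypothetical fact table:

```python
# OLAP-style roll-up with pandas (hypothetical schema and numbers; the
# paper's actual warehouse dimensions are not specified).
import pandas as pd

loads = pd.DataFrame({
    "substation": ["A", "A", "B", "B"],
    "area":       ["north", "north", "south", "south"],
    "month":      ["2017-01", "2017-02", "2017-01", "2017-02"],
    "load_mw":    [12.3, 13.1, 8.7, 9.4],
})

# Total load per area per month: the aggregate behind the load
# distribution and substation-region charts described above.
cube = loads.pivot_table(index="area", columns="month",
                         values="load_mw", aggfunc="sum")
print(cube)
```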

  7. Optimum random and age replacement policies for customer-demand multi-state system reliability under imperfect maintenance

    NASA Astrophysics Data System (ADS)

    Chen, Yen-Luan; Chang, Chin-Chih; Sheu, Dwan-Fang

    2016-04-01

    This paper proposes the generalised random and age replacement policies for a multi-state system composed of multi-state elements. The degradation of the multi-state element is assumed to follow the non-homogeneous continuous time Markov process which is a continuous time and discrete state process. A recursive approach is presented to efficiently compute the time-dependent state probability distribution of the multi-state element. The state and performance distribution of the entire multi-state system is evaluated via the combination of the stochastic process and the Lz-transform method. The concept of customer-centred reliability measure is developed based on the system performance and the customer demand. We develop the random and age replacement policies for an aging multi-state system subject to imperfect maintenance in a failure (or unacceptable) state. For each policy, the optimum replacement schedule which minimises the mean cost rate is derived analytically and discussed numerically.
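
    For reference, the Lz-transform mentioned above is commonly written as follows (stated in its standard form; the paper's exact notation is not reproduced in the abstract):

```latex
% Lz-transform of a discrete-state continuous-time Markov process G(t)
% with performance levels g_i and state probabilities p_i(t):
\[
  L_Z\{G(t)\} \;=\; \sum_{i=1}^{K} p_i(t)\, z^{\,g_i},
\]
% the transform of the whole multi-state system is then obtained by
% composing the element transforms with structure-dependent operators.
```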

  8. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the worldwide LHC computing grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and to provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in the development of a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  9. The application of terahertz pulsed imaging in characterising density distribution of roll-compacted ribbons.

    PubMed

    Zhang, Jianyi; Pei, Chunlei; Schiano, Serena; Heaps, David; Wu, Chuan-Yu

    2016-09-01

    Roll compaction is a commonly used dry granulation process in the pharmaceutical, fine chemical, and agrochemical industries for materials sensitive to heat or moisture. The ribbon density distribution plays an important role in controlling the properties of granules (e.g. granule size distribution, porosity, and strength). Accurate characterisation of the ribbon density distribution is critical for process control and quality assurance. The terahertz imaging system has great application potential for achieving this, as terahertz radiation is able to penetrate most pharmaceutical excipients and the refractive index reflects variations in density and chemical composition. The aim of this study is to explore whether terahertz pulsed imaging is a feasible technique for quantifying ribbon density distribution. Ribbons were made of two grades of microcrystalline cellulose (MCC), Avicel PH102 and DG, using a roll compactor at various process conditions, and the ribbon density variation was investigated using terahertz imaging and section methods. The density variations obtained from both methods were compared to explore the reliability and accuracy of the terahertz imaging system. An average refractive index is calculated from the refractive index values in the frequency range between 0.5 and 1.5 THz. It is shown that the refractive index gradually decreases from the middle of the ribbon towards the edges. Variations of the density distribution across the width of the ribbons are also obtained using both the section method and the terahertz imaging system. It is found that the terahertz imaging results are in excellent agreement with those obtained using the section method, demonstrating that terahertz imaging is a feasible and rapid tool to characterise ribbon density distributions. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Measurement and analysis of operating system fault tolerance

    NASA Technical Reports Server (NTRS)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  11. Hyperswitch communication network

    NASA Technical Reports Server (NTRS)

    Peterson, J.; Pniel, M.; Upchurch, E.

    1991-01-01

    The Hyperswitch Communication Network (HCN) is a large-scale parallel computer prototype being developed at JPL. Commercial versions of the HCN computer are planned. The HCN computer being designed is a message-passing multiple instruction multiple data (MIMD) computer and offers many advantages in price-performance ratio, reliability and availability, and manufacturing over traditional uniprocessors and bus-based multiprocessors. The design of the HCN operating system provides a uniquely flexible environment that combines both parallel processing and distributed processing. This programming paradigm can achieve a balance among the following competing factors: performance in processing and communications, user friendliness, and fault tolerance. The prototype is being designed to accommodate a maximum of 64 state-of-the-art microprocessors. The HCN is classified as a distributed supercomputer. The HCN system is described, and the performance/cost analysis and other competing factors within the system design are reviewed.

  12. Selecting an Architecture for a Safety-Critical Distributed Computer System with Power, Weight and Cost Considerations

    NASA Technical Reports Server (NTRS)

    Torres-Pomales, Wilfredo

    2014-01-01

    This report presents an example of the application of multi-criteria decision analysis to the selection of an architecture for a safety-critical distributed computer system. The design problem includes constraints on minimum system availability and integrity, and the decision is based on the optimal balance of power, weight, and cost. The analysis process includes the generation of alternative architectures, evaluation of individual decision criteria, and the selection of an alternative based on overall value. In the example presented here, iterative application of the quantitative evaluation process made it possible to deliberately generate an alternative architecture that is superior to all others regardless of the relative importance of cost.
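
    The report's criteria weights are not given in this summary; the following toy weighted-sum scoring (made-up numbers) only illustrates how an "overall value" comparison of candidate architectures can be computed:

```python
# Toy weighted-sum evaluation of candidate architectures (illustrative
# scores and weights only; not the report's actual data).
candidates = {
    "arch_A": {"power": 0.7, "weight": 0.6, "cost": 0.9},
    "arch_B": {"power": 0.9, "weight": 0.8, "cost": 0.5},
}
weights = {"power": 0.4, "weight": 0.3, "cost": 0.3}  # assumed importance

def overall_value(scores):
    """Weighted sum of the normalized criterion scores."""
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(candidates, key=lambda a: overall_value(candidates[a]),
                reverse=True)
print(ranked[0], {a: round(overall_value(s), 2)
                  for a, s in candidates.items()})
```

    An architecture that is superior "regardless of the relative importance of cost", as in the report, would rank first under every admissible choice of such weights, not only the one assumed here.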

  13. Autonomous Robot Navigation in Human-Centered Environments Based on 3D Data Fusion

    NASA Astrophysics Data System (ADS)

    Steinhaus, Peter; Strand, Marcus; Dillmann, Rüdiger

    2007-12-01

    Efficient navigation of mobile platforms in dynamic human-centered environments is still an open research topic. We have already proposed an architecture (MEPHISTO) for a navigation system that is able to fulfill the main requirements of efficient navigation: fast and reliable sensor processing, extensive global world modeling, and distributed path planning. Our architecture uses a distributed system of sensor processing, world modeling, and path planning units. In this article, we present implemented methods in the context of data fusion algorithms for 3D world modeling and real-time path planning. We also show results of the prototypic application of the system at the museum ZKM (Center for Art and Media) in Karlsruhe.

  14. Remote-Sensing Data Distribution and Processing in the Cloud at the ASF DAAC

    NASA Astrophysics Data System (ADS)

    Stoner, C.; Arko, S. A.; Nicoll, J. B.; Labelle-Hamer, A. L.

    2016-12-01

    The Alaska Satellite Facility (ASF) Distributed Active Archive Center (DAAC) has been tasked to archive and distribute data from both SENTINEL-1 satellites and from the NASA-ISRO Synthetic Aperture Radar (NISAR) satellite in a cost-effective manner. In order to best support processing and distribution of these large data sets for users, the ASF DAAC enhanced its data system in a number of ways that will be detailed in this presentation. The SENTINEL-1 mission comprises a constellation of two polar-orbiting satellites, operating day and night performing C-band Synthetic Aperture Radar (SAR) imaging, enabling them to acquire imagery regardless of the weather. SENTINEL-1A was launched by the European Space Agency (ESA) in April 2014. SENTINEL-1B is scheduled to launch in April 2016. The NISAR satellite is designed to observe and take measurements of some of the planet's most complex processes, including ecosystem disturbances, ice-sheet collapse, and natural hazards such as earthquakes, tsunamis, volcanoes, and landslides. NISAR will employ radar imaging, polarimetry, and interferometry techniques using the SweepSAR technology employed for full-resolution wide-swath imaging. NISAR data files are large, making storage and processing a challenge for conventional store-and-download systems. To effectively process, store, and distribute petabytes of data in a high-performance computing environment, ASF took a long view with regard to technology choices and picked a path of most flexibility and software re-use. To that end, this software tools and services presentation will cover Web Object Storage (WOS) and the ability to seamlessly move from local sunk-cost hardware to a public cloud such as Amazon Web Services (AWS). A prototype of the SENTINEL-1A system in AWS, as well as a local hardware solution, will be examined to explain the pros and cons of each. In preparation for NISAR files, which will be even larger than SENTINEL-1A's, ASF has embarked on a number of cloud initiatives, including processing in the cloud at scale, processing data on-demand, and processing end-user computations on DAAC data in the cloud.

  15. An adaptive grid algorithm for 3-D GIS landform optimization based on improved ant algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Chenhan; Meng, Lingkui; Deng, Shijun

    2005-07-01

    The key technique of 3-D GIS is realizing fast, high-quality 3-D visualization, in which a 3-D roaming system based on landform plays an important role. However, how to increase the efficiency of the 3-D roaming engine and process a large amount of landform data is a key problem in 3-D landform roaming systems, and handling it improperly results in tremendous consumption of system resources. The key issue in 3-D roaming system design has therefore become how to realize high-speed processing of distributed landform DEM (Digital Elevation Model) data and high-speed distributed modulation of various 3-D landform data resources. In this paper we improve the basic ant algorithm and design a modulation strategy for 3-D GIS landform resources based on the improved ant algorithm. By introducing initial hypothetical road weights σ_i, the pheromone (information factor) update of the original algorithm is transformed from Δτ_j to Δτ_j + σ_i, where the weights are decided by the 3-D computing capacity of the various nodes in the network environment. Thus, during the initial phase of task assignment, increasing the resource information factors of nodes with a high task-accomplishing rate and decreasing those of nodes with a low rate makes the load-accomplishing rates approach the same value as quickly as possible; in the later stages of task assignment, the load-balancing ability of the system is then further improved. Experimental results show that, by improving the ant algorithm, our system not only removes many disadvantages of the traditional ant algorithm but also, like ants searching for food, effectively distributes the complicated landform computation across many computers for cooperative processing and obtains satisfying search results.
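
    A minimal sketch of the capacity-weighted pheromone update described above (notation assumed from the abstract: delta_tau is the ordinary deposit, sigma[i] a per-node weight reflecting its 3-D computing capacity):

```python
# Capacity-weighted pheromone update sketch (assumed reading of the
# abstract's Delta-tau_j + sigma_i rule; the evaporation rate rho is
# standard in ant algorithms but not quoted in the abstract).
def update_pheromone(tau, delta_tau, sigma, rho=0.5):
    """One round: evaporation plus deposit plus capacity weight."""
    return {
        node: (1.0 - rho) * tau[node] + delta_tau.get(node, 0.0) + sigma[node]
        for node in tau
    }

tau = {"n1": 1.0, "n2": 1.0}
sigma = {"n1": 0.3, "n2": 0.1}  # n1 advertises more 3-D compute capacity
tau = update_pheromone(tau, {"n1": 0.2}, sigma)
print(tau)
```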

  16. Critical thresholds for eventual extinction in randomly disturbed population growth models.

    PubMed

    Peckham, Scott D; Waymire, Edward C; De Leenheer, Patrick

    2018-02-16

    This paper considers several single species growth models featuring a carrying capacity, which are subject to random disturbances that lead to instantaneous population reduction at the disturbance times. This is motivated in part by growing concerns about the impacts of climate change. Our main goal is to understand whether or not the species can persist in the long run. We consider the discrete-time stochastic process obtained by sampling the system immediately after the disturbances, and find various thresholds for several modes of convergence of this discrete process, including thresholds for the absence or existence of a positively supported invariant distribution. These thresholds are given explicitly in terms of the intensity and frequency of the disturbances on the one hand, and the population's growth characteristics on the other. We also perform a similar threshold analysis for the original continuous-time stochastic process, and obtain a formula that allows us to express the invariant distribution for this continuous-time process in terms of the invariant distribution of the discrete-time process, and vice versa. Examples illustrate that these distributions can differ, and this sends a cautionary message to practitioners who wish to parameterize these and related models using field data. Our analysis relies heavily on a particular feature shared by all the deterministic growth models considered here, namely that their solutions exhibit an exponentially weighted averaging property between a function of the initial condition, and the same function applied to the carrying capacity. This property is due to the fact that these systems can be transformed into affine systems.
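
    The averaging property referred to in the last sentence can be made concrete with the logistic model: for dx/dt = r x (1 - x/K) and f(u) = 1/u, the solution is an exponentially weighted average of f at the initial condition and at the carrying capacity:

```latex
% Exponentially weighted averaging property for logistic growth
% dx/dt = r x (1 - x/K), with f(u) = 1/u:
\[
  f\bigl(x(t)\bigr) \;=\; e^{-rt}\, f(x_0) \;+\; \bigl(1 - e^{-rt}\bigr) f(K),
  \qquad f(u) = \frac{1}{u}.
\]
```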

  17. Future electro-optical sensors and processing in urban operations

    NASA Astrophysics Data System (ADS)

    Grönwall, Christina; Schwering, Piet B.; Rantakokko, Jouni; Benoist, Koen W.; Kemp, Rob A. W.; Steinvall, Ove; Letalick, Dietmar; Björkert, Stefan

    2013-10-01

    In the electro-optical sensors and processing in urban operations (ESUO) study we pave the way for the European Defence Agency (EDA) group of Electro-Optics experts (IAP03) towards a common understanding of the optimal distribution of processing functions between the different platforms. Combinations of local, distributed, and centralized processing are proposed. In this way one can match processing functionality to the required power and the available communication systems' data rates, to obtain the desired reaction times. In the study, three priority scenarios were defined. For these scenarios, present-day and future sensors and signal processing technologies were studied. The priority scenarios were camp protection, patrol, and house search. A method for analyzing information quality in single- and multi-sensor systems has been applied. A method for estimating reaction times for transmission of data through the chain of command has been proposed and used. These methods are documented and can be used to modify the scenarios or be applied to other scenarios. Present-day data processing is organized mainly locally. Very limited exchange of information with other platforms is present; it is performed mainly at a high information level. The main issues that arose from the analysis of present-day systems and methodology are the slow reaction times due to the limited field of view of present-day sensors and the lack of robust automated processing. Efficient handover schemes between wide and narrow field-of-view sensors may, however, reduce the delay times. The main effort in the study was the forecasting of the signal processing of EO sensors in the next ten to twenty years. Distributed processing is proposed between hand-held and vehicle-based sensors. This can be accompanied by cloud processing on board several vehicles. Additionally, to perform sensor fusion on sensor data originating from different platforms, and to make full use of UAV imagery, a combination of distributed and centralized processing is essential. There is a central role for fusion of heterogeneous sensors in future processing. The changes that these new technologies will bring to future urban operations are improved quality of information, shorter reaction times, and lower operator load.

  18. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With the continuously increasing requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations may represent the intrinsic characteristics of such systems more precisely. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is transformed into the optimisation of each small-scale subsystem, each regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
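
    For reference, the Oustaloup method mentioned above approximates the fractional operator by a band-limited rational filter; its standard form (the paper's particular band and order are not given here) is:

```latex
% Standard Oustaloup recursive approximation of s^alpha on [w_b, w_h]:
\[
  s^{\alpha} \;\approx\; K \prod_{k=-N}^{N} \frac{s + \omega'_k}{s + \omega_k},
  \qquad
  \omega'_k = \omega_b \Bigl(\tfrac{\omega_h}{\omega_b}\Bigr)^{\frac{k+N+(1-\alpha)/2}{2N+1}},
  \quad
  \omega_k  = \omega_b \Bigl(\tfrac{\omega_h}{\omega_b}\Bigr)^{\frac{k+N+(1+\alpha)/2}{2N+1}},
  \quad
  K = \omega_h^{\alpha}.
\]
```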

  19. A model for the distributed storage and processing of large arrays

    NASA Technical Reports Server (NTRS)

    Mehrota, P.; Pratt, T. W.

    1983-01-01

    A conceptual model for parallel computations on large arrays is developed. The model provides a set of language concepts appropriate for processing arrays which are generally too large to fit in the primary memories of a multiprocessor system. The semantic model is used to represent arrays on a concurrent architecture in such a way that the performance realities inherent in the distributed storage and processing can be adequately represented. An implementation of the large array concept as an Ada package is also described.

  20. Loss Reduction on Adoption of High Voltage LT Less Distribution

    NASA Astrophysics Data System (ADS)

    Tiwari, Deepika; Adhikari, Nikhileshwar Prasad; Gupta, Amit; Bajpai, Santosh Kumar

    2016-06-01

    In India there is a need to improve the quality of the electricity distribution process, demand for which has increased from year to year. In distribution networks, the limiting factor for load-carrying capacity is generally the voltage drop. A high voltage distribution system (HVDS) is one way to reduce line losses in an electrical distribution network. It helps to reduce the length of low tension (LT) lines and makes power available close to the users. A high voltage power distribution system also reduces the probability of power theft by hooking. HVDS implies an increase in the installation of small-capacity single-phase transformers in the network, which again saves considerable energy. This paper compares the existing conventional low tension distribution network with HVDS and gives a clear picture of the reduction in distribution losses upon adoption of an HVDS system. Loss reductions on an 11 kV feeder in Nuniya (India) with the adoption of HVDS have been worked out and quantified, and the resulting benefits in generating capacity are discussed.
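
    The loss mechanism behind HVDS is simple to state: for a feeder delivering power P at voltage V and power factor cos φ through conductor resistance R,

```latex
% Why raising the distribution voltage cuts line losses:
\[
  I = \frac{P}{V \cos\varphi},
  \qquad
  P_{\mathrm{loss}} = I^2 R = \frac{P^2 R}{V^2 \cos^2\varphi},
\]
% so carrying the same load at 11 kV instead of low tension reduces the
% I^2 R loss roughly with the square of the voltage ratio.
```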

  1. Support for User Interfaces for Distributed Systems

    NASA Technical Reports Server (NTRS)

    Eychaner, Glenn; Niessner, Albert

    2005-01-01

    An extensible Java(TradeMark) software framework supports the construction and operation of graphical user interfaces (GUIs) for distributed computing systems typified by ground control systems that send commands to, and receive telemetric data from, spacecraft. Heretofore, such GUIs have been custom built for each new system at considerable expense. In contrast, the present framework affords generic capabilities that can be shared by different distributed systems. Dynamic class loading, reflection, and other run-time capabilities of the Java language and JavaBeans component architecture enable the creation of a GUI for each new distributed computing system with a minimum of custom effort. By use of this framework, GUI components in control panels and menus can send commands to a particular distributed system with a minimum of system-specific code. The framework receives, decodes, processes, and displays telemetry data; custom telemetry data handling can be added for a particular system. The framework supports saving and later restoration of users' configurations of control panels and telemetry displays with a minimum of effort in writing system-specific code. GUIs constructed within this framework can be deployed in any operating system with a Java run-time environment, without recompilation or code changes.

  2. Valuation of Electric Power System Services and Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kintner-Meyer, Michael C. W.; Homer, Juliet S.; Balducci, Patrick J.

    Accurate valuation of existing and new technologies and grid services has been recognized to be important to stimulate investment in grid modernization. Clear, transparent, and accepted methods for estimating the total value (i.e., total benefits minus cost) of grid technologies and services are necessary for decision makers to make informed decisions. This applies to home owners interested in distributed energy technologies, as well as to service providers offering new demand response services and utility executives evaluating the best investment strategies to meet their service obligations. However, current valuation methods lack consistency, methodological rigor, and often the capabilities to identify and quantify multiple benefits of grid assets or new and innovative services. Distributed grid assets often have multiple benefits that are difficult to quantify because of the locational context in which they operate. The value is temporally, operationally, and spatially specific. It varies widely by distribution system, transmission network topology, and the composition of the generation mix. The Electric Power Research Institute (EPRI) recently established a benefit-cost framework that proposes a process for estimating the multiple benefits of distributed energy resources (DERs) and the associated cost. This document proposes an extension of this endeavor that offers a generalizable valuation framework quantifying the broad set of values for a wide range of technologies (including energy efficiency options, distributed resources, transmission, and generation) as well as policy options that affect all aspects of the entire generation and delivery system of the electricity infrastructure. The extension includes a comprehensive valuation framework of monetizable and non-monetizable benefits of new technologies and services beyond the traditional reliability objectives. The benefits are characterized into the following categories: sustainability, affordability, security, flexibility, and resilience. This document defines the elements of a generic valuation framework and process as well as system properties and metrics by which value streams can be derived. The valuation process can be applied to determine the value on the margin of incremental system changes. This process is typically performed when estimating the value of a particular project (e.g., the value of a merchant generator or a distributed photovoltaic (PV) rooftop installation). Alternatively, the framework can be used when a widespread change in grid operation, generation mix, or transmission topology is to be valued. In that case a comprehensive system analysis is required.

  3. The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process

    NASA Astrophysics Data System (ADS)

    Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko

    2012-06-01

    A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
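    The abstract does not reproduce the governing equation, but the model family it describes is commonly written as a coupled set of intention equations: each person's purchase intention is driven by the advertisement input plus direct and indirect word-of-mouth terms. A hedged sketch of that structure follows; the coefficient names c_i, d_ij, h_ijk are illustrative placeholders, not the paper's notation.

```latex
% Sketch of the model structure only; coefficients and notation are
% illustrative, not reproduced from the paper. I_i(t) is the purchase
% intention of person i and A(t) the advertisement budget in time.
\frac{\mathrm{d}I_i(t)}{\mathrm{d}t}
  = c_i\,A(t)                                   % advertisement input
  + \sum_{j} d_{ij}\, I_j(t)                    % direct word-of-mouth
  + \sum_{j}\sum_{k} h_{ijk}\, I_j(t)\, I_k(t)  % indirect (rumor) word-of-mouth
```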

  4. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.
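    The competing targets named here (lower the peak flux, keep thermal output high) can be illustrated with a deliberately simple greedy allocator that always sends the next heliostat to the currently coolest receiver panel, spreading the flux and thereby lowering the peak. This is an illustrative sketch of the tradeoff only, not the authors' model-based optimizer; the flux values and panel count are hypothetical.

```java
import java.util.Arrays;

public class AimPointSketch {
    // Assigns each heliostat's flux contribution to the receiver panel
    // whose accumulated flux is currently lowest (greedy flux spreading).
    public static int[] assign(double[] heliostatFlux, int nPanels) {
        double[] panelFlux = new double[nPanels];
        int[] aim = new int[heliostatFlux.length];
        for (int h = 0; h < heliostatFlux.length; h++) {
            int coolest = 0;
            for (int p = 1; p < nPanels; p++)
                if (panelFlux[p] < panelFlux[coolest]) coolest = p;
            aim[h] = coolest;
            panelFlux[coolest] += heliostatFlux[h];
        }
        return aim;
    }

    public static void main(String[] args) {
        double[] flux = {5.0, 3.5, 4.2, 6.1, 2.8}; // hypothetical kW/m^2 contributions
        System.out.println(Arrays.toString(assign(flux, 3)));
    }
}
```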

  5. Energy and Cost Optimized Technology Options to Meet Energy Needs of Food Processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makhmalbaf, Atefe; Srivastava, Viraj; Hoffman, Michael G.

    ABSTRACT Combined cooling, heating and electric power (CCHP) distributed generation (DG) systems can provide electricity, heat, and cooling power to buildings and industrial processes directly onsite, while significantly increasing energy efficiency, security of energy supply, and grid independence. Fruit, vegetable, dairy and meat processing industries, with simultaneous requirements for heat, steam, chilling and electricity, are well suited for the use of such systems to supply base-load electrical demand or as peak reducing generators with heat recovery in the forms of hot water, steam and/or chilled water. This paper documents results and analysis from a pilot project to evaluate opportunities for energy, emission, and cost for CCHP-DG and energy storage systems installed onsite at food processing facilities. It was found that a dairy processing plant purchasing 15,000 MWh of electricity would need to purchase only 450 MWh with the integration of a 1.1 MW CCHP system. Here, the natural gas to be purchased increased from 190,000 MMBtu to 255,000 MMBtu given the fuel requirements of the CCHP system. CCHP systems generally lower emissions; however, in the Pacific Northwest the high percentage of hydro-power meant that CO2 emissions from CCHP were higher than those attributed to the electric utility/regional energy mix. The value of this paper is in promoting and educating financial decision makers to seriously consider CCHP systems when building or upgrading facilities. The distributed generation aspect can reduce utility costs for industrial facilities and show non-wires solution benefits to delay or eliminate the need for upgrades to local electric transmission and distribution systems.

  6. Enabling the Continuous EOS-SNPP Satellite Data Record thru EOSDIS Services

    NASA Astrophysics Data System (ADS)

    Hall, A.; Behnke, J.; Ho, E. L.

    2015-12-01

    Following the Suomi National Polar-orbiting Partnership (SNPP) launch of October 2011, the role of the NASA Science Data Segment (SDS) focused primarily on evaluation of the sensor data records (SDRs) and environmental data records (EDRs) produced by the Joint Polar Satellite System (JPSS), a National Oceanic and Atmospheric Administration (NOAA) Program, as to their suitability for Earth system science. The evaluation has been completed for the Visible Infrared Imaging Radiometer Suite (VIIRS), Advanced Technology Microwave Sounder (ATMS), Cross-track Infrared Sounder (CrIS), and Ozone Mapper/Profiler Suite (OMPS) Nadir instruments. Since launch, the SDS has also been processing, archiving and distributing data from the Clouds and the Earth's Radiant Energy System (CERES) and Ozone Mapper/Profiler Suite (OMPS) Limb instruments, and this work is planned to continue through the life of the mission. As NASA transitions to the production of standard, Earth Observing System (EOS)-like science products for all instruments aboard Suomi NPP, the Suomi NPP Science Team (ST) will need data processing and production facilities to produce the new science products they develop. The five Science Investigator-led Processing Systems (SIPS): Land, Ocean, Atmosphere, Ozone, and Sounder will produce the NASA SNPP standard Level 1, Level 2, and global Level 3 products and provide the products to NASA's Distributed Active Archive Centers (DAACs) for distribution to the user community. The SIPS will ingest EOS-compatible Level 0 data from the EOS Data Operations System (EDOS) for their data processing. A key feature is the use of Earth Observing System Data and Information System (EOSDIS) services for the continuous EOS-SNPP satellite data record. This allows users to use the same tools and interfaces on SNPP as they would on the entire NASA Earth Science data collection in EOSDIS.

  7. Stability and performance analysis of a jump linear control system subject to digital upsets

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Sun, Hui; Ma, Zhen-Yang

    2015-04-01

    This paper presents a methodology for analyzing the stability and the corresponding tracking performance of a closed-loop digital jump linear control system with a stochastic switching signal. The method is applied to a flight control system. A distributed recoverable platform is implemented on the flight control system and subjected to independent digital upsets. The upset processes are used to simulate the effects of electromagnetic environments. Specifically, the paper presents scenarios in which the upset process is directly injected into the distributed flight control system, modeled by independent Markov upset processes and by independent and identically distributed (IID) processes. A theoretical performance analysis and simulation modelling are both presented in detail for a more complete independent digital upset injection. Specific examples are proposed to verify the methodology of tracking performance analysis. General analyses for different configurations are also proposed. Comparisons among different configurations are conducted to demonstrate the availability and the characteristics of the design. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61403395), the Natural Science Foundation of Tianjin, China (Grant No. 13JCYBJC39000), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry, China, the Tianjin Key Laboratory of Civil Aircraft Airworthiness and Maintenance in Civil Aviation of China (Grant No. 104003020106), and the Fund for Scholars of Civil Aviation University of China (Grant No. 2012QD21x).
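    For readers unfamiliar with the terminology, a discrete-time jump linear system has the standard form below, where the mode process θ_k is the stochastic switching signal (a finite-state Markov chain in the Markov case, an IID sequence in the IID case). This is the textbook definition of the system class; the paper's specific system matrices are not given in the abstract.

```latex
% Standard discrete-time jump linear system; \theta_k is the stochastic
% switching signal taking values in a finite mode set.
x_{k+1} = A_{\theta_k} x_k + B_{\theta_k} u_k, \qquad
y_k     = C_{\theta_k} x_k, \qquad
\theta_k \in \{1, \dots, N\}
```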

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage with cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
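    The fusion subtask (iv) can be illustrated with the standard k-out-of-n binary decision fusion rule, in which the fused system declares a detection when at least k of n sensors do. The sketch below computes the fused hit and false-alarm probabilities for each threshold k from assumed per-sensor probabilities; it shows the general technique, not the paper's exact rule or bounds.

```java
public class FusionSketch {
    // Probability that at least k of n independent sensors fire,
    // each firing with probability p (binomial tail sum).
    static double atLeastK(int n, int k, double p) {
        double sum = 0.0;
        for (int i = k; i <= n; i++)
            sum += binom(n, i) * Math.pow(p, i) * Math.pow(1 - p, n - i);
        return sum;
    }

    // Binomial coefficient C(n, i) computed incrementally as a double.
    static double binom(int n, int i) {
        double c = 1.0;
        for (int j = 1; j <= i; j++) c = c * (n - j + 1) / j;
        return c;
    }

    public static void main(String[] args) {
        // Hypothetical per-sensor hit (0.9) and false-alarm (0.05) rates.
        int n = 10;
        for (int k = 1; k <= n; k++)
            System.out.printf("k=%2d  hit=%.4f  falseAlarm=%.6f%n",
                    k, atLeastK(n, k, 0.9), atLeastK(n, k, 0.05));
    }
}
```

    Scanning the printed table shows the tradeoff the abstract alludes to: raising the threshold k suppresses false alarms much faster than it erodes the hit rate, which is why useful threshold bounds exist.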

  9. Architecture for distributed design and fabrication

    NASA Astrophysics Data System (ADS)

    McIlrath, Michael B.; Boning, Duane S.; Troxel, Donald E.

    1997-01-01

    We describe a flexible, distributed system architecture capable of supporting collaborative design and fabrication of semiconductor devices and integrated circuits. Such capabilities are of particular importance in the development of new technologies, where both equipment and expertise are limited. Distributed fabrication enables direct, remote, physical experimentation in the development of leading edge technology, where the necessary manufacturing resources are new, expensive, and scarce. Computational resources, software, processing equipment, and people may all be widely distributed; their effective integration is essential in order to achieve the realization of new technologies for specific product requirements. Our architecture leverages current vendor and consortia developments to define software interfaces and infrastructure based on existing and emerging networking, CIM, and CAD standards. Process engineers and product designers access processing and simulation results through a common interface and collaborate across the distributed manufacturing environment.

  10. Development of a Heterogenic Distributed Environment for Spatial Data Processing Using Cloud Technologies

    NASA Astrophysics Data System (ADS)

    Garov, A. S.; Karachevtseva, I. P.; Matveev, E. V.; Zubarev, A. E.; Florinsky, I. V.

    2016-06-01

    We are developing a unified distributed communication environment for processing of spatial data which integrates web, desktop and mobile platforms and combines a volunteer computing model with public cloud capabilities. The main idea is to create a flexible working environment for research groups, which may be scaled according to the required data volume and computing power, while keeping infrastructure costs at a minimum. It is based upon the "single window" principle, which combines data access via geoportal functionality, processing possibilities and communication between researchers. Using this innovative software environment, the recently developed planetary information system (http://cartsrv.mexlab.ru/geoportal) will be updated. The new system will provide spatial data processing, analysis and 3D-visualization and will be tested on freely available Earth remote sensing data as well as Solar system planetary images from various missions. Based on this approach it will be possible to organize research and the representation of results on a new technological level, which provides more possibilities for immediate and direct reuse of research materials, including data, algorithms, methodology, and components. The new software environment is targeted at remote scientific teams, and will provide access to existing spatial distributed information, for which we suggest implementation of a user interface as an advanced front-end, e.g., for a virtual globe system.

  11. Tank waste remediation system privatization infrastructure program requirements and document management process guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    ROOT, R.W.

    1999-05-18

    This guide provides the Tank Waste Remediation System Privatization Infrastructure Program management with processes and requirements to appropriately control information and documents in accordance with the Tank Waste Remediation System Configuration Management Plan (Vann 1998b). This includes documents and information created by the program, as well as non-program generated materials submitted to the project. It provides appropriate approval/control, distribution and filing systems.

  12. Distribution of p-process 174Hf in early solar system materials and the origin of nucleosynthetic Hf and W isotope anomalies in Ca-Al rich inclusions

    NASA Astrophysics Data System (ADS)

    Peters, Stefan T. M.; Münker, Carsten; Pfeifer, Markus; Elfers, Bo-Magnus; Sprung, Peter

    2017-02-01

    Some nuclides that were produced in supernovae are heterogeneously distributed between different meteoritic materials. In some cases these heterogeneities have been interpreted as the result of interaction between ejecta from a nearby supernova and the nascent solar system. Particularly in the case of the oldest objects that formed in the solar system - Ca-Al rich inclusions (CAIs) - this view has been taken to confirm the hypothesis that a nearby supernova event facilitated or even triggered solar system formation. We present Hf isotope data for bulk meteorites, terrestrial materials and CAIs, for the first time including the low-abundance isotope 174Hf (∼0.16%). This rare isotope was likely produced during explosive O/Ne shell burning in massive stars (i.e., the classical "p-process"), and therefore its abundance potentially provides a sensitive tracer for putative heterogeneities within the solar system that were introduced by supernova ejecta. For CAIs and one LL chondrite, complementary W isotope data are also reported for the same sample cuts. Once corrected for small neutron capture effects, different chondrite groups, eucrites, a silicate inclusion of a IAB iron meteorite, and terrestrial materials display homogeneous Hf isotope compositions, including 174Hf. Hafnium-174 was thus uniformly distributed in the inner solar system at the <50 ppm level when planetesimals formed. This finding is in good agreement with the evidently homogeneous distributions of the p-process isotopes 180W, 184Os and possibly 190Pt between different iron meteorite groups. In contrast to bulk meteorite samples, CAIs show variable depletions in p-process 174Hf with respect to the inner solar system composition, and also variable r-process (or s-process) Hf and W contributions. Based on combined Hf and W isotope compositions, we show that CAIs sampled at least one component in which the proportion of r- and s-process derived Hf and W deviates from that of supernova ejecta. The Hf and W isotope anomalies in CAIs are therefore best explained by selective processing of presolar carrier phases prior to CAI formation, and not by a late injection of supernova materials. Likewise, other isotope anomalies in additional elements in CAIs relative to the bulk solar system may reflect the same process. The isotopic heterogeneities between the first refractory condensates may have been eradicated partially during CAI formation, because W isotope anomalies in CAIs appear to decrease with increasing W concentrations as inferred from time-integrated 182W/184W. Importantly, the 176Lu-176Hf and 182Hf-182W chronometers are not significantly affected by nucleosynthetic heterogeneity of Hf isotopes in bulk meteorites, but may be affected in CAIs.

  13. Foundational Report Series. Advanced Distribution management Systems for Grid Modernization (Importance of DMS for Distribution Grid Modernization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui

    2015-09-01

    Grid modernization is transforming the operation and management of electric distribution systems from manual, paper-driven business processes to electronic, computer-assisted decision-making. At the center of this business transformation is the distribution management system (DMS), which provides a foundation from which optimal levels of performance can be achieved in an increasingly complex business and operating environment. Electric distribution utilities are facing many new challenges that are dramatically increasing the complexity of operating and managing the electric distribution system: growing customer expectations for service reliability and power quality, pressure to achieve better efficiency and utilization of existing distribution system assets, and reduction of greenhouse gas emissions by accommodating high penetration levels of distributed generating resources powered by renewable energy sources (wind, solar, etc.). Recent “storm of the century” events in the northeastern United States and the lengthy power outages and customer hardships that followed have greatly elevated the need to make power delivery systems more resilient to major storm events and to provide a more effective electric utility response during such regional power grid emergencies. Despite these newly emerging challenges for electric distribution system operators, only a small percentage of electric utilities have actually implemented a DMS. This paper discusses reasons why a DMS is needed and why the DMS may emerge as a mission-critical system that will soon be considered essential as electric utilities roll out their grid modernization strategies.

  14. Dynamic Resource Allocation to Improve Service Performance in Order Fulfillment Systems

    DTIC Science & Technology

    2009-01-01

    efficient system uses economies of scale at two points: orders are batched before processing, which reduces processing costs, and processed orders ...the effects of batching on order picking processes is well-researched and well-understood (van den Berg and Gademann, 1999). Because orders are...a final sojourn time distribution. Our work builds on existing research in matrix-geometric methods by Neuts (1981), Asmussen and Møller (2001

  15. Middleware for big data processing: test results

    NASA Astrophysics Data System (ADS)

    Gankevich, I.; Gaiduchok, V.; Korkhov, V.; Degtyarev, A.; Bogdanov, A.

    2017-12-01

    Dealing with large volumes of data is resource-consuming work which is more and more often delegated not only to a single computer but also to a whole distributed computing system at once. As the number of computers in a distributed system increases, the amount of effort put into effective management of the system grows. When the system reaches some critical size, much effort should be put into improving its fault tolerance. It is difficult to estimate when some particular distributed system needs such facilities for a given workload, so instead they should be implemented in a middleware which works efficiently with a distributed system of any size. It is also difficult to estimate whether a volume of data is large or not, so the middleware should also work with data of any volume. In other words, the purpose of the middleware is to provide facilities that adapt a distributed computing system to a given workload. In this paper we introduce such a middleware appliance. Tests show that this middleware is well-suited for typical HPC and big data workloads and that its performance is comparable with well-known alternatives.

  16. Systems Suitable for Information Professionals.

    ERIC Educational Resources Information Center

    Blair, John C., Jr.

    1983-01-01

    Describes computer operating systems applicable to microcomputers, noting hardware components, advantages and disadvantages of each system, local area networks, distributed processing, and a fully configured system. Lists of hardware components (disk drives, solid state disk emulators, input/output and memory components, and processors) and…

  17. Assessing the origin of bacteria in tap water and distribution system in an unchlorinated drinking water system by SourceTracker using microbial community fingerprints.

    PubMed

    Liu, Gang; Zhang, Ya; van der Mark, Ed; Magic-Knezev, Aleksandra; Pinto, Ameet; van den Bogert, Bartholomeus; Liu, Wentso; van der Meer, Walter; Medema, Gertjan

    2018-07-01

    The general consensus is that the abundance of tap water bacteria is greatly influenced by water purification and distribution. Those bacteria that are released from biofilm in the distribution system are especially considered a major potential risk for drinking water bio-safety. For the first time, this full-scale study has captured and identified the proportional contributions of the source water, treated water, and distribution system in shaping the tap water bacterial community, based on their microbial community fingerprints, using the Bayesian "SourceTracker" method. The bacterial community profiles and diversity analyses illustrated that the water purification process shaped the community of planktonic and suspended particle-associated bacteria in treated water. The bacterial communities associated with suspended particles, loose deposits, and biofilm were similar to each other, while the community of tap water planktonic bacteria varied across different locations in the distribution system. The microbial source tracking results showed that there was no detectable contribution of source water to the bacterial community in the tap water and distribution system. The planktonic bacteria in the treated water were the major contributor to planktonic bacteria in the tap water (17.7-54.1%). The particle-associated bacterial community in the treated water seeded the bacterial community associated with loose deposits (24.9-32.7%) and biofilm (37.8-43.8%) in the distribution system. In turn, the loose deposits and biofilm showed a significant influence on tap water planktonic and particle-associated bacteria, which was location dependent and influenced by hydraulic changes. This was revealed by the increased contribution of loose deposits to tap water planktonic bacteria (from 2.5% to 38.0%) and an increased contribution of biofilm to tap water particle-associated bacteria (from 5.9% to 19.7%) caused by possible hydraulic disturbance from proximal to distal regions. Therefore, our findings indicate that the tap water bacteria could possibly be managed by selecting and operating the purification process properly and cleaning the distribution system effectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems.

    PubMed

    Douterelo, I; Husband, S; Loza, V; Boxall, J

    2016-07-15

    The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. Copyright © 2016 Douterelo et al.

  19. Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems

    PubMed Central

    Husband, S.; Loza, V.; Boxall, J.

    2016-01-01

    ABSTRACT The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. IMPORTANCE This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. PMID:27208119

  20. Real-time flight test data distribution and display

    NASA Technical Reports Server (NTRS)

    Nesel, Michael C.; Hammons, Kevin R.

    1988-01-01

    Enhancements to the real-time processing and display systems of the NASA Western Aeronautical Test Range are described. Display processing has been moved out of the telemetry and radar acquisition processing systems super-minicomputers into user/client interactive graphic workstations. Real-time data is provided to the workstations by way of Ethernet. Future enhancement plans include use of fiber optic cable to replace the Ethernet.
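    To illustrate the kind of Ethernet data distribution described here, the sketch below sends a single telemetry parameter as a UDP datagram to a display workstation. It is illustrative only; the packet layout (a parameter id followed by a value), the port, and the address are hypothetical, not the Western Aeronautical Test Range's actual protocol.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

public class TelemetrySender {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            ByteBuffer buf = ByteBuffer.allocate(12);
            buf.putInt(42);          // hypothetical parameter id (e.g., airspeed)
            buf.putDouble(128.7);    // current value for that parameter
            DatagramPacket pkt = new DatagramPacket(buf.array(), buf.position(),
                    InetAddress.getByName("127.0.0.1"), 5005); // workstation address/port
            socket.send(pkt);
        }
    }
}
```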

  1. Distributed Visualization Project

    NASA Technical Reports Server (NTRS)

    Craig, Douglas; Conroy, Michael; Kickbusch, Tracey; Mazone, Rebecca

    2016-01-01

    Distributed Visualization allows anyone, anywhere to see any simulation at any time. Development focuses on algorithms, software, data formats, data systems and processes to enable sharing simulation-based information across temporal and spatial boundaries without requiring stakeholders to possess highly-specialized and very expensive display systems. It also introduces abstraction between the native and shared data, which allows teams to share results without giving away proprietary or sensitive data. The initial implementation of this capability is the Distributed Observer Network (DON) version 3.1. DON 3.1 is available for public release in the NASA Software Store (https://software.nasa.gov/software/KSC-13775) and works with version 3.0 of the Model Process Control specification (an XML Simulation Data Representation and Communication Language) to display complex graphical information and associated Meta-Data.

  2. Low-Energy, Low-Cost Production of Ethylene by Low- Temperature Oxidative Coupling of Methane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radaelli, Guido; Chachra, Gaurav; Jonnavittula, Divya

    In this project, we develop a catalytic process technology for distributed small-scale production of ethylene by oxidative coupling of methane at low temperatures using an advanced catalyst. The Low Temperature Oxidative Coupling of Methane (LT-OCM) catalyst system is enabled by a novel chemical catalyst and process pioneered by Siluria, at private expense, over the last six years. Herein, we develop the LT-OCM catalyst system for distributed small-scale production of ethylene by identifying and addressing necessary process schemes, unit operations and process parameters that limit the economic viability and mass penetration of this technology to manufacture ethylene at small scales. The output of this program is process concepts for small-scale LT-OCM catalyst based ethylene production, lab-scale verification of the novel unit operations adopted in the proposed concept, and an analysis to validate the feasibility of the proposed concepts.

  3. Software fault tolerance in computer operating systems

    NASA Technical Reports Server (NTRS)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  4. Seasonal and spatial evolution of trihalomethanes in a drinking water distribution system according to the treatment process.

    PubMed

    Domínguez-Tello, A; Arias-Borrego, A; García-Barrera, Tamara; Gómez-Ariza, J L

    2015-11-01

    This paper comparatively shows the influence of four water treatment processes on the formation of trihalomethanes (THMs) in a water distribution system. The study was performed from February 2005 to January 2012 with analytical data of 600 samples taken in Aljaraque water treatment plant (WTP) and 16 locations along the water distribution system (WDS) in the region of Andévalo and the coast of Huelva (southwest Spain), a region with significant seasonal and population changes. The comparison of results in the four different processes studied indicated a clear link of the treatment process with the formation of THM along the WDS. The most effective treatment process is preozonation and activated carbon filtration (P3), which is also the most stable under summer temperatures. Experiments also show low levels of THMs with the conventional process of preoxidation with potassium permanganate (P4), delaying the chlorination to the end of the WTP; however, this simple and economical treatment process is less effective and less stable than P3. In this study, strong seasonal variations were obtained (increase of THM from winter to summer of 1.17 to 1.85 times) and a strong spatial variation (1.1 to 1.7 times from WTP to end points of WDS) which largely depends on the treatment process applied. There was also a strong correlation between THM levels and water temperature, contact time and pH. On the other hand, it was found that THM formation is not proportional to the applied chlorine dose in the treatment process, but there is a direct relationship with the accumulated dose of chlorine. Finally, predictive models based on multiple linear regressions are proposed for each treatment process.
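    The paper's closing point, predictive models based on multiple linear regression, can be sketched as follows. The predictor form (THM as a linear function of water temperature, contact time, pH, and accumulated chlorine dose) follows the variables the abstract names, but the coefficients below are hypothetical placeholders, not the study's fitted values; one coefficient set would be fitted per treatment process.

```java
public class ThmModelSketch {
    // Hypothetical regression coefficients (NOT the study's fitted values).
    static final double B0 = -40.0;    // intercept, ug/L
    static final double B_TEMP = 2.1;  // per degree C
    static final double B_TIME = 0.8;  // per hour of contact time
    static final double B_PH = 6.5;    // per pH unit
    static final double B_CL = 9.3;    // per mg/L accumulated chlorine

    static double predictThmUgPerL(double tempC, double contactTimeH,
                                   double pH, double accumulatedClMgPerL) {
        return B0 + B_TEMP * tempC + B_TIME * contactTimeH
                  + B_PH * pH + B_CL * accumulatedClMgPerL;
    }

    public static void main(String[] args) {
        // Example summer conditions at a distal point in the network.
        System.out.printf("Predicted THM: %.1f ug/L%n",
                predictThmUgPerL(24.0, 36.0, 7.8, 2.5));
    }
}
```

    Note that the accumulated chlorine term, rather than the single applied dose, carries the dose dependence, matching the abstract's observation that THM formation tracks the cumulative dose.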

  5. Research and design of intelligent distributed traffic signal light control system based on CAN bus

    NASA Astrophysics Data System (ADS)

    Chen, Yu

    2007-12-01

    An intelligent distributed traffic signal light control system was designed based on infrared sensing, CAN bus, single chip microprocessor (SCM), and related technologies. The traffic flow signal is processed by an AT89C51 SCM at the core of the system. At the same time, the SCM controls the CAN bus controller SJA1000 and transceiver PCA82C250 to build a CAN bus communication system to transmit data. Moreover, the host PC connects and communicates with the SCM through the USB interface chip PDIUSBD12. The distributed traffic signal light control system supports three control modes: vehicle-flux-based, remote, and PC control. This paper introduces the system composition method and parts of the hardware/software design in detail.
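    As an illustration of the data framing such a system might use, the sketch below packs an intersection phase command into a CAN 2.0A data payload (at most 8 bytes). The field layout (intersection id, phase code, green time) is entirely hypothetical; on the real system this framing would be done in firmware on the AT89C51 driving the SJA1000, and Java is used here only to keep this document's examples in one language.

```java
import java.util.Arrays;

public class PhaseMessage {
    // Packs a hypothetical phase command into an 8-byte CAN payload.
    static byte[] encode(int intersectionId, int phase, int greenSeconds) {
        byte[] data = new byte[8];               // CAN data frame payload limit
        data[0] = (byte) (intersectionId >> 8);  // intersection id, high byte
        data[1] = (byte) intersectionId;         // intersection id, low byte
        data[2] = (byte) phase;                  // e.g. 0 = NS green, 1 = EW green
        data[3] = (byte) greenSeconds;           // green duration in seconds
        // bytes 4..7 reserved in this sketch
        return data;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(encode(0x0102, 1, 45)));
    }
}
```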

  6. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.

    1993-01-01

    The key elements in the 1992-93 period of the project are the following: (1) extensive use of the simulator to implement and test concurrency control algorithms, an interactive user interface, and replica control algorithms; and (2) investigations into the applicability of data and process replication in real-time systems. In the 1993-94 period of the project, we intend to accomplish the following: (1) concentrate on efforts to investigate the effects of data and process replication on hard and soft real-time systems - in particular, we will concentrate on the impact of semantic-based consistency control schemes on a distributed real-time system in terms of improved reliability, improved availability, better resource utilization, and reduced missed task deadlines; and (2) use the prototype to verify the theoretically predicted performance of locking protocols, etc.

  7. Design considerations, architecture, and use of the Mini-Sentinel distributed data system.

    PubMed

    Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S

    2012-01-01

    We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
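    The distributed-querying pattern described here, in which each data partner runs the same code against its local copy of the common data model and returns only summary results, can be sketched as below. The interfaces and the summary-count payload are hypothetical illustrations of the design, not the actual Mini-Sentinel software.

```java
import java.util.List;
import java.util.Map;

// A data partner executes a query locally and returns only aggregates,
// so person-level data never leave the partner organization.
interface DataPartner {
    Map<String, Long> runSummaryQuery(String drugCode);
}

// The coordinating center sums the partners' aggregate counts.
class OperationsCenter {
    long totalExposed(List<DataPartner> partners, String drugCode) {
        return partners.stream()
                .mapToLong(p -> p.runSummaryQuery(drugCode).getOrDefault("exposed", 0L))
                .sum();
    }
}

public class MiniSentinelSketch {
    public static void main(String[] args) {
        // Two hypothetical partners, each returning only summary counts.
        DataPartner a = code -> Map.of("exposed", 1200L, "events", 17L);
        DataPartner b = code -> Map.of("exposed", 800L, "events", 9L);
        System.out.println(new OperationsCenter()
                .totalExposed(List.of(a, b), "drug-code-123"));
    }
}
```

    The design choice this illustrates is the one the abstract emphasizes: organizations retain possession of their data, and only distributed code and summary results cross organizational boundaries.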

  8. Development and Operation of the Americas ALOS Data Node

    NASA Astrophysics Data System (ADS)

    Arko, S. A.; Marlin, R. H.; La Belle-Hamer, A. L.

    2004-12-01

    In the spring of 2005, the Japanese Aerospace Exploration Agency (JAXA) will launch the next generation in advanced, remote sensing satellites. The Advanced Land Observing Satellite (ALOS) includes three sensors: two visible imagers and one L-band polarimetric SAR, providing high-quality remote sensing data to the scientific and commercial communities throughout the world. Focusing on remote sensing and scientific pursuits, ALOS will image nearly the entire Earth using all three instruments during its expected three-year lifetime. These data sets offer the potential for data continuation of older satellite missions as well as new products for the growing user community. One of the unique features of the ALOS mission is the data distribution approach. JAXA has created a worldwide cooperative data distribution network. The data nodes are NOAA/ASF representing the Americas ALOS Data Node (AADN), ESA representing the ALOS European and African Node (ADEN), Geoscience Australia representing Oceania and JAXA representing the Asian continent. The AADN is the sole agency responsible for archival, processing and distribution of L0 and L1 products to users in both North and South America. In support of this mission, AADN is currently developing a processing and distribution infrastructure to provide easy access to these data sets. Utilizing a custom, grid-based process controller and media generation system, the overall infrastructure has been designed to provide maximum throughput while requiring a minimum of operator input and maintenance. This paper will present an overview of the ALOS system, details of each sensor's capabilities and of the processing and distribution system being developed by AADN to provide these valuable data sets to users throughout North and South America.

  9. Distant Comets in the Early Solar System

    NASA Technical Reports Server (NTRS)

    Meech, Karen J.

    2000-01-01

    The main goal of this project is to physically characterize the small outer solar system bodies. An understanding of the dynamics and physical properties of the outer solar system small bodies is currently one of planetary science's highest priorities. The measurement of the size distributions of these bodies will help constrain the early mass of the outer solar system as well as lead to an understanding of the collisional and accretional processes. A study of the physical properties of the small outer solar system bodies in comparison with comets in the inner solar system and in the Kuiper Belt will give us information about the nebular volatile distribution and small body surface processing. We will increase the database of comet nucleus sizes making it statistically meaningful (for both Short-Period and Centaur comets) to compare with those of the Trans-Neptunian Objects. In addition, we are proposing to do active ground-based observations in preparation for several upcoming space missions.

  10. Earth Observatory Satellite system definition study. Report no. 3: Design/cost tradeoff studies. Appendix E: EOS program supporting system trade data. Part 2: System trade studies no. 9 - 19

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The relative merits of several international data acquisition (IDA) alternatives for the Earth Observatory Satellite (EOS) are established and rated on a cost effectiveness basis. The primary alternatives under consideration are: (1) direct transmission to foreign ground stations, (2) a wideband video tape recorder system for collection of foreign data and processing and distribution from the United States, and (3) a tracking and data relay satellite (TDRS) system for the relay of foreign data to the United States for processing and distribution. A requirements model is established for the analysis on the basis of the heaviest concentration of agricultural areas around the world. The model, the orbit path and the constraints of EOS and data volume summaries are presented. Alternative system descriptions and costs are given in addition to cost-performance summaries.

  11. The revolution in data gathering systems. [mini and microcomputers in NASA wind tunnels

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    This paper gives a review of the data-acquisition systems used in NASA's wind tunnels from the 1950's to the present as a basis for assessing the impact of minicomputers and microcomputers on data acquisition and processing. The operation and disadvantages of wind-tunnel data systems are summarized for the period before 1950, the early 1950's, the early and late 1960's, and the early 1970's. Some significant advances discussed include the use or development of solid-state components, minicomputer systems, large central computers, on-line data processing, autoranging DC amplifiers, MOS-FET multiplexers, MSI and LSI logic, computer-controlled programmable amplifiers, solid-state remote multiplexing, integrated circuits, and microprocessors. The distributed system currently in use with the 40-ft by 80-ft wind tunnel at Ames Research Center is described in detail. The expected employment of distributed systems and microprocessors in the next decade is noted.

  12. The Researches on Food Traceability System of University takeout

    NASA Astrophysics Data System (ADS)

    lu, Jia xin; zhao, Ce; li, Zhuang zhuang; shao, Zi rong; pi, Kun yi

    2018-06-01

    In recent years, campus takeaway services have developed rapidly, and many online ordering platforms are in operation. Solving the on-campus distribution problem not only saves merchants time and cost but also supports effective management by the school, which benefits the construction of a standard health system for takeaway food. However, distribution under the existing mode carries certain safety and health risks. Establishing a university takeaway food traceability system can solve this problem. This paper first analyzes the sharing mode and distribution process of campus takeaway, then designs an intelligent tracing system for campus takeaway, covering the construction of a food distribution information platform and the environmentally friendly recycling of meal boxes. Finally, the intelligent tracing system for campus takeaway is analyzed using braised chicken as an example.

  13. A Distributed Processing Approach to Payroll Time Reporting for a Large School District.

    ERIC Educational Resources Information Center

    Freeman, Raoul J.

    1983-01-01

    Describes a system for payroll reporting from geographically disparate locations in which data is entered, edited, and verified locally on minicomputers and then uploaded to a central computer for the standard payroll process. Communications and hardware, time-reporting software, data input techniques, system implementation, and its advantages are…

  14. Interconnecting PV on New York City's Secondary Network Distribution System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, K; Coddington, M; Burman, K

    2009-11-01

    The U.S. Department of Energy (DOE) has teamed with cities across the country through the Solar America Cities (SAC) partnership program to help reduce barriers and accelerate implementation of solar energy. The New York City SAC team is a partnership between the City University of New York (CUNY), the New York City Mayor's Office of Long-term Planning and Sustainability, and the New York City Economic Development Corporation (NYCEDC). The New York City SAC team is working with DOE's National Renewable Energy Laboratory (NREL) and Con Edison, the local utility, to develop a roadmap for photovoltaic (PV) installations in the five boroughs. The city set a goal to increase its installed PV capacity from 1.1 MW in 2005 to 8.1 MW by 2015. A key barrier to reaching this goal, however, is the complexity of the interconnection process with the local utility. Unique challenges are associated with connecting distributed PV systems to secondary network distribution systems (simplified to "networks" in this report). Although most areas of the country use simpler radial distribution systems to distribute electricity, larger metropolitan areas like New York City typically use networks to increase reliability in large load centers. Unlike the radial distribution system, where each customer receives power through a single line, a network uses a grid of interconnected lines to deliver power to each customer through several parallel circuits and sources. This redundancy improves reliability, but it also requires more complicated coordination and protection schemes that can be disrupted by energy exported from distributed PV systems. Currently, Con Edison studies each potential PV system in New York City to evaluate the system's impact on the network, but this is time consuming for utility engineers and may delay the customer's project or add cost for larger installations. City leaders would like to streamline this process to facilitate faster, simpler, and less expensive distributed PV system interconnections. To assess ways to improve the interconnection process, NREL conducted a four-part study with support from DOE. The NREL team then compiled the final reports from each study into this report. Section 1, PV Deployment Analysis for New York City, analyzes the technical potential for rooftop PV systems in the city. This analysis evaluates potential PV power production in ten Con Edison networks of various locations and building densities (ranging from high density apartments to lower density single family homes). Next, we compare the potential power production to network loads to determine where and when PV generation is most likely to exceed network load and disrupt network protection schemes. The results of this analysis may assist Con Edison in evaluating future PV interconnection applications and in planning future network protection system upgrades. This analysis may also assist other utilities interconnecting PV systems to networks by defining a method for assessing the technical potential of PV in the network and its impact on network loads. Section 2, A Briefing for Policy Makers on Connecting PV to a Network Grid, presents an overview intended for nontechnical stakeholders. This section describes the issues associated with interconnecting PV systems to networks, along with possible solutions. Section 3, Technical Review of Concerns and Solutions to PV Interconnection in New York City, summarizes common concerns of utility engineers and network experts about interconnecting PV systems to secondary networks. This section also contains detailed descriptions of nine solutions, including advantages and disadvantages, potential impacts, and road maps for deployment. Section 4, Utility Application Process Review, looks at utility interconnection application processes across the country and identifies administrative best practices for efficient PV interconnection.
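    The screening step described in Section 1, comparing potential PV production to network load, reduces to counting the hours in which PV generation would exceed load (the condition that can disrupt network protection). A minimal sketch of that comparison follows; the hourly profiles are hypothetical.

```java
public class NetworkPvScreen {
    // Counts hours in which hourly PV output exceeds the network load,
    // i.e., potential backfeed hours that concern protection engineers.
    static int hoursOfExcess(double[] pvMw, double[] loadMw) {
        int hours = 0;
        for (int h = 0; h < pvMw.length; h++)
            if (pvMw[h] > loadMw[h]) hours++;
        return hours;
    }

    public static void main(String[] args) {
        // Hypothetical six-hour midday profiles for one network (MW).
        double[] pv   = {0.0, 0.4, 1.0, 1.3, 0.9, 0.1};
        double[] load = {0.8, 0.9, 1.1, 1.2, 1.4, 1.0};
        System.out.println(hoursOfExcess(pv, load)
                + " hour(s) of potential backfeed");
    }
}
```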

  15. Extrusion-spheronization: process variables and characterization.

    PubMed

    Sinha, V R; Agrawal, M K; Agarwal, A; Singh, G; Ghai, D

    2009-01-01

    Multiparticulate systems have undergone great development in the past decade fueled by the better understanding of their multiple roles as a suitable delivery system. With the passage of time, significant advances have been made in the process of pelletization due to the incorporation of specialized techniques for their development. Extrusion-spheronization seems to be the most promising process for the optimum delivery of many potent drugs having high systemic toxicity. It also offers immense pharmaceutical applicability due to the benefits of high loading capacity of active ingredient(s), narrow size distribution, and cost-effectiveness. On application of a specific coat, these systems can also aid in site-specific delivery, thereby enhancing the bioavailability of many drugs. The current review focuses on the process of extrusion-spheronization and the operational (extruder types, screen pressure, screw speed, temperature, moisture content, spheronization load, speed and time) and formulation (excipients and drugs) variables, which may affect the quality of the final pellets. Various methods for the evaluation of the quality of the pellets with regard to the size distribution, shape, friability, granule strength, density, porosity, flow properties, and surface texture are discussed.

  16. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing

    PubMed Central

    da Rocha, Armando Freitas; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. H(e_i) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results clearly show that the distributed character of language processing is clearly evidenced by combining available EEG technologies. PMID:26713089

  17. Combining Different Tools for EEG Analysis to Study the Distributed Character of Language Processing.

    PubMed

    Rocha, Armando Freitas da; Foz, Flávia Benevides; Pereira, Alfredo

    2015-01-01

    Recent studies on language processing indicate that language cognition is better understood if assumed to be supported by a distributed intelligent processing system enrolling neurons located all over the cortex, in contrast to reductionism that proposes to localize cognitive functions to specific cortical structures. Here, brain activity was recorded using electroencephalogram while volunteers were listening or reading small texts and had to select pictures that translate meaning of these texts. Several techniques for EEG analysis were used to show this distributed character of neuronal enrollment associated with the comprehension of oral and written descriptive texts. Low Resolution Tomography identified the many different sets (s_i) of neurons activated in several distinct cortical areas by text understanding. Linear correlation was used to calculate the information H(e_i) provided by each electrode of the 10/20 system about the identified s_i. H(e_i) Principal Component Analysis (PCA) was used to study the temporal and spatial activation of these sources s_i. This analysis evidenced 4 different patterns of H(e_i) covariation that are generated by neurons located at different cortical locations. These results clearly show that the distributed character of language processing is clearly evidenced by combining available EEG technologies.

  18. Distributed Peer-to-Peer Target Tracking in Wireless Sensor Networks

    PubMed Central

    Wang, Xue; Wang, Sheng; Bi, Dao-Wei; Ma, Jun-Jie

    2007-01-01

    Target tracking is usually a challenging application for wireless sensor networks (WSNs) because it is always computation-intensive and requires real-time processing. This paper proposes a practical target tracking system based on the auto regressive moving average (ARMA) model in a distributed peer-to-peer (P2P) signal processing framework. In the proposed framework, wireless sensor nodes act as peers that perform target detection, feature extraction, classification and tracking, whereas target localization requires the collaboration between wireless sensor nodes for improving the accuracy and robustness. For carrying out target tracking under the constraints imposed by the limited capabilities of the wireless sensor nodes, some practically feasible algorithms, such as the ARMA model and the 2-D integer lifting wavelet transform, are adopted in single wireless sensor nodes due to their outstanding performance and light computational burden. Furthermore, a progressive multi-view localization algorithm is proposed in distributed P2P signal processing framework considering the tradeoff between the accuracy and energy consumption. Finally, a real world target tracking experiment is illustrated. Results from experimental implementations have demonstrated that the proposed target tracking system based on a distributed P2P signal processing framework can make efficient use of scarce energy and communication resources and achieve target tracking successfully.
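    The progressive multi-view localization idea, fusing sensor views one at a time and stopping once additional views no longer change the estimate much, directly expresses the accuracy/energy tradeoff the abstract describes. The sketch below is illustrative, not the paper's exact algorithm; the per-sensor position estimates and the stopping tolerance are hypothetical.

```java
public class ProgressiveLocalization {
    public static void main(String[] args) {
        // Hypothetical (x, y) estimates of the target from four sensor views.
        double[][] views = {{10.2, 4.9}, {9.7, 5.3}, {10.1, 5.0}, {9.9, 5.1}};
        double tol = 0.15;   // stop when one more view shifts the estimate less than this
        double x = 0, y = 0;
        int used = 0;
        for (double[] v : views) {
            // Running mean of the views fused so far.
            double nx = (x * used + v[0]) / (used + 1);
            double ny = (y * used + v[1]) / (used + 1);
            double shift = Math.hypot(nx - x, ny - y);
            x = nx; y = ny; used++;
            // Accurate enough: skip remaining views and save their energy cost.
            if (used > 1 && shift < tol) break;
        }
        System.out.printf("(%.2f, %.2f) from %d of %d views%n",
                x, y, used, views.length);
    }
}
```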

  19. Managing Sustainable Data Infrastructures: The Gestalt of EOSDIS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Lowe, Dawn; Lindsay, Francis; Lynnes, Chris; Mitchell, Andrew

    2016-01-01

    EOSDIS epitomizes a System of Systems, whose many varied and distributed parts are integrated into a single, highly functional organized science data system. A distributed architecture was adopted to ensure discipline-specific support for the science data, while also leveraging standards and establishing policies and tools to enable interdisciplinary research and analysis across multiple scientific instruments. The EOSDIS is composed of system elements such as geographically distributed archive centers used to manage the stewardship of data. The infrastructure consists of the underlying capabilities and connections that enable the primary system elements to function together. For example, one key infrastructure component is the common metadata repository, which enables discovery of all data within the EOSDIS system. EOSDIS employs processes and standards to ensure partners can work together effectively, and provide coherent services to users.

  20. Space station automation of common module power management and distribution

    NASA Technical Reports Server (NTRS)

    Miller, W.; Jones, E.; Ashworth, B.; Riedesel, J.; Myers, C.; Freeman, K.; Steele, D.; Palmer, R.; Walsh, R.; Gohring, J.

    1989-01-01

    The purpose is to automate a breadboard-level Power Management and Distribution (PMAD) system which possesses many functional characteristics of a specified Space Station power system. The automation system was built upon a 20 kHz ac source with redundant power buses. There are two power distribution control units which furnish power to six load centers, which in turn enable load circuits based upon a system-generated schedule. The progress in building this specified autonomous system is described. Automation of Space Station Module PMAD was accomplished by segmenting the complete task into the following four independent tasks: (1) develop a detailed approach for PMAD automation; (2) define the software and hardware elements of automation; (3) develop the automation system for the PMAD breadboard; and (4) select an appropriate host processing environment.

  1. NASA SNPP SIPS - Following in the Path of EOS

    NASA Technical Reports Server (NTRS)

    Behnke, Jeanne; Hall, Alfreda; Ho, Evelyn

    2016-01-01

    NASA's Earth Science Data and Information System (ESDIS) Project has been operating NASA's Suomi National Polar-orbiting Partnership (SNPP) Science Data Segment (SDS) since the launch in October 2011. At launch, the SDS focused primarily on the evaluation of Sensor Data Records (SDRs) and Environmental Data Records (EDRs) produced by the Joint Polar Satellite System (JPSS), a National Oceanic and Atmospheric Administration (NOAA) program, as to their suitability for Earth system science. During the summer of 2014, NASA transitioned to the production of standard Earth Observing System (EOS)-like science products for all instruments aboard Suomi NPP. The five Science Investigator-led Processing Systems (SIPS), covering Land, Ocean, Atmosphere, Ozone, and Sounder, were established to produce the NASA SNPP standard Level 1, Level 2, and global Level 3 products developed by the SNPP Science Teams and to provide the products to NASA's Distributed Active Archive Centers (DAACs) for archive and distribution to the user community. The processing, archiving and distribution of data from NASA's Clouds and the Earth's Radiant Energy System (CERES) and Ozone Mapper/Profiler Suite (OMPS) Limb instruments will continue. With the implementation of the JPSS Block 2 architecture and the launch of JPSS-1, the SDS will receive SNPP data in near real-time via the JPSS Stored Mission Data Hub (JSH), as well as JPSS-1 and future JPSS-2 data. The SNPP SIPS will ingest EOS-compatible Level 0 data from the EOS Data Operations System (EDOS) element for their data processing, enabling a continuous EOS-SNPP-JPSS satellite data record.

  2. Army Logistician. Volume 39, Issue 2, March-April 2007

    DTIC Science & Technology

    2007-04-01

    Army Reduces Tactical Supply System Footprint, by Thomas H. Ament, Jr. Centralizing all of the Army's Corps/Theater Automated Data Processing... Middleware, which comprises both hardware and software, revises data in the Standard Army Retail Supply System (SARSS), thereby extending the use of the... Logistics: Supply Based or Distribution Based? The Changing Face of Fuel Management; Combat Logistics Patrol Methodology; Distribution-Based...

  3. Viirs Land Science Investigator-Led Processing System

    NASA Astrophysics Data System (ADS)

    Devadiga, S.; Mauoka, E.; Roman, M. O.; Wolfe, R. E.; Kalb, V.; Davidson, C. C.; Ye, G.

    2015-12-01

    The objective of NASA's Suomi National Polar-orbiting Partnership (S-NPP) Land Science Investigator-led Processing System (Land SIPS), housed at the NASA Goddard Space Flight Center (GSFC), is to produce high-quality land products from the Visible Infrared Imaging Radiometer Suite (VIIRS) to extend the Earth System Data Records (ESDRs) developed from NASA's heritage Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra and Aqua satellites. In this paper we will present the functional description and capabilities of the S-NPP Land SIPS, including system development phases and production schedules, the timeline for processing, and the delivery of land science products based on coordination with the S-NPP Land science team members. The Land SIPS processing stream is expected to be operational by December 2016, generating land products using either the NASA science team-delivered algorithms or the "best-of" science algorithms currently in operation at NASA's Land Product Evaluation and Algorithm Testing Element (PEATE). In addition to generating the standard land science products through processing of NASA's VIIRS Level 0 data record, the Land SIPS processing system is also used to produce a suite of near-real-time products for NASA's application community. Land SIPS will also deliver the standard products, ancillary data sets, software and supporting documentation (ATBDs) to the assigned Distributed Active Archive Centers (DAACs) for archival and distribution. Quality assessment and validation will be an integral part of the Land SIPS processing system: the former performed at the Land Data Operational Product Evaluation (LDOPE) facility, and the latter conducted under the auspices of the CEOS Working Group on Calibration & Validation (WGCV) Land Product Validation (LPV) Subgroup, adopting the best practices and tools used to assess the quality of heritage EOS-MODIS products generated at the MODIS Adaptive Processing System (MODAPS).

  4. Optimal regulation in systems with stochastic time sampling

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.; Lee, P. S.

    1980-01-01

    An optimal control theory that accounts for stochastic variable time sampling in a distributed, microprocessor-based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics, and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process, where the control effectors know only the past information update intervals and the Markov transition mechanism, is almost identical to that obtained with a known and uniform information update interval.
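    A minimal Monte Carlo sketch of the setting described above: a scalar plant under a fixed feedback gain, with the update interval drawn from a two-state Markov chain. All plant, gain, and transition numbers are invented for illustration; the paper's optimal control law is not reproduced here.

    ```python
    # Sketch: Monte Carlo estimate of the expected quadratic cost when the
    # information update interval is random (two-state Markov chain).
    import numpy as np

    rng = np.random.default_rng(2)
    a, b = -0.5, 1.0              # scalar plant dx/dt = a*x + b*u
    K = 0.8                       # fixed gain, u = -K*x, held between updates
    dts = np.array([0.05, 0.2])   # possible update intervals
    P = np.array([[0.9, 0.1],     # Markov transitions between intervals
                  [0.3, 0.7]])

    def run(n_steps=2000):
        x, s, cost = 1.0, 0, 0.0
        for _ in range(n_steps):
            dt = dts[s]
            u = -K * x
            ad = np.exp(a * dt)            # exact discretization over dt
            bd = (ad - 1.0) / a * b
            cost += (x * x + u * u) * dt   # quadratic running cost
            x = ad * x + bd * u
            s = rng.choice(2, p=P[s])      # next interval from the Markov chain
        return cost

    print(np.mean([run() for _ in range(100)]))   # expected cost estimate
    ```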

  5. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  6. Filling the gaps: Predicting the distribution of temperate reef biota using high resolution biological and acoustic data

    NASA Astrophysics Data System (ADS)

    Hill, Nicole A.; Lucieer, Vanessa; Barrett, Neville S.; Anderson, Tara J.; Williams, Stefan B.

    2014-06-01

    Management of the marine environment is often hampered by a lack of comprehensive spatial information on the distribution of diversity and the bio-physical processes structuring regional ecosystems. This is particularly true in temperate reef systems beyond depths easily accessible to divers. Yet these systems harbor a diversity of sessile life that provides essential ecosystem services and sustains fisheries and, as with shallower ecosystems, they are increasingly vulnerable to anthropogenic impacts and environmental change. Here we use cutting-edge tools (Autonomous Underwater Vehicles and ship-borne acoustics) and analytical approaches (predictive modelling) to quantify and map these highly productive ecosystems. We find that the occurrence of key temperate-reef biota can be explained and predicted using standard (depth) and novel (texture) surrogates derived from multibeam acoustic data, together with geographic surrogates. This suggests that combinations of fine-scale processes, such as light limitation and habitat complexity, and broad-scale processes, such as regional currents and exposure regimes, are important in structuring these diverse deep-reef communities. While some dominant habitat-forming biota, including canopy algae, were widely distributed, others, including gorgonians and sea whips, exhibited patchy and restricted distributions across the reef system. In addition to providing the first quantitative and full-coverage maps of reef diversity for this area, our modelling revealed that offshore reefs represent a regional diversity hotspot of high ecological and conservation value. Regional reef systems should not, therefore, be considered homogeneous units in conservation planning and management. Full-coverage maps of the predicted distribution of biota (and associated uncertainty) are likely to be increasingly valuable, not only for conservation planning, but in the ongoing management and monitoring of these less-accessible ecosystems.

  7. Time-Fractional Advection-Dispersion Equation (tFADE) to Quantify Aqueous Phase Contaminant Elution from a Trichloroethene (TCE) NAPL Source Zone in Sand Columns

    NASA Astrophysics Data System (ADS)

    Tick, G. R.; Wei, S.; Sun, H.; Zhang, Y.

    2016-12-01

    Pore-scale heterogeneity, NAPL distribution, and sorption/desorption processes can significantly affect aqueous phase elution and mass flux in porous media systems. A scale-independent fractional derivative model (tFADE) was applied to simulate elution curves for a series of columns (5 cm, 7 cm, 15 cm, 25 cm, and 80 cm) homogeneously packed with 20/30-mesh sand and distributed with uniform saturations (7-24%) of NAPL-phase trichloroethene (TCE). An additional set of columns (7 cm and 25 cm) was packed with a heterogeneous distribution of quartz sand upon which TCE was emplaced by imbibing the immiscible liquid, under stable displacement conditions, to simulate a spill-type process. The tFADE model was able to better represent experimental elution behavior for systems that exhibited extensive long-term concentration tailing, requiring far fewer parameters than typical multi-rate mass transfer (MRMT) models. However, the tFADE model was not able to effectively simulate the entire elution curve for systems with short concentration tailing periods, since it assumes a power-law distribution for the TCE dissolution rate. Such limitations may be overcome using the tempered fractional derivative model, which can capture the single-rate mass transfer process and therefore the short elution concentration tailing behavior. The numerical solution of the tempered fractional-derivative model in bounded domains, however, remains a challenge and requires further study. Nevertheless, the tFADE model shows excellent promise for understanding impacts on concentration elution behavior in systems where physical heterogeneity, non-uniform NAPL distribution, and pronounced sorption-desorption effects dominate or are present.

  8. On measures of association among genetic variables

    PubMed Central

    Gianola, Daniel; Manfredi, Eduardo; Simianer, Henner

    2012-01-01

    Summary Systems involving many variables are important in population and quantitative genetics, for example, in multi-trait prediction of breeding values and in exploration of multi-locus associations. We studied departures of the joint distribution of sets of genetic variables from independence. New measures of association based on notions of statistical distance between distributions are presented. These are more general than correlations, which are pairwise measures, and lack a clear interpretation beyond the bivariate normal distribution. Our measures are based on logarithmic (Kullback-Leibler) and on relative ‘distances’ between distributions. Indexes of association are developed and illustrated for quantitative genetics settings in which the joint distribution of the variables is either multivariate normal or multivariate-t, and we show how the indexes can be used to study linkage disequilibrium in a two-locus system with multiple alleles and present applications to systems of correlated beta distributions. Two multivariate beta and multivariate beta-binomial processes are examined, and new distributions are introduced: the GMS-Sarmanov multivariate beta and its beta-binomial counterpart. PMID:22742500

  9. Frequency distributions from birth, death, and creation processes.

    PubMed

    Bartley, David L; Ogden, Trevor; Song, Ruiguang

    2002-01-01

    The time-dependent frequency distribution of groups of individuals versus group size was investigated within a continuum approximation, assuming a simplified individual growth, death and creation model. The analogy of the system to a physical fluid exhibiting both convection and diffusion was exploited in obtaining various solutions to the distribution equation. A general solution was approximated through the application of a Green's function. More specific exact solutions were also found to be useful. The solutions were continually checked against the continuum approximation through extensive simulation of the discrete system. Over limited ranges of group size, the frequency distributions were shown to closely exhibit a power-law dependence on group size, as found in many realizations of this type of system, ranging from colonies of mutated bacteria to the distribution of surnames in a given population. As an example, the modeled distributions were successfully fit to the distribution of surnames in several countries by adjusting the parameters specifying growth, death and creation rates.
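    A minimal simulation sketch of such a growth/death/creation process is given below; the rates are invented, and the discrete update is a crude stand-in for the continuum model, but the resulting group-size frequencies show the qualitative heavy-tailed behavior discussed above.

    ```python
    # Sketch: discrete growth/death/creation dynamics; rates are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)
    birth, death, creation = 0.02, 0.01, 0.3   # per-individual and per-step rates
    groups = [1]

    for _ in range(5000):
        sizes = np.array(groups)
        up = rng.random(sizes.size) < np.minimum(birth * sizes, 1.0)
        down = rng.random(sizes.size) < np.minimum(death * sizes, 1.0)
        sizes = sizes + up.astype(int) - down.astype(int)
        groups = sizes[sizes > 0].tolist()     # extinct groups disappear
        if rng.random() < creation:
            groups.append(1)                   # a new group of size 1 is created

    sizes, counts = np.unique(groups, return_counts=True)
    print(list(zip(sizes, counts))[:10])       # frequency vs. group size
    ```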

  10. Checkpoint-based forward recovery using lookahead execution and rollback validation in parallel and distributed systems. Ph.D. Thesis, 1992

    NASA Technical Reports Server (NTRS)

    Long, Junsheng

    1994-01-01

    This thesis studies a forward recovery strategy using checkpointing and optimistic execution in parallel and distributed systems. The approach uses replicated tasks executing on different processors for forward recovery and checkpoint comparison for error detection. To reduce overall redundancy, this approach employs a lower static redundancy in the common error-free situation to detect errors than the standard N-modular redundancy (NMR) scheme uses to mask errors. For the rare occurrence of an error, this approach uses some extra redundancy for recovery. To reduce the run-time recovery overhead, look-ahead processes are used to advance computation speculatively, and a rollback process is used to produce a diagnosis of the correct look-ahead processes without rolling back the whole system. Both analytical and experimental evaluations have shown that this strategy can provide nearly error-free execution time, even under faults, with a lower average redundancy than NMR.

  11. WAMS measurements pre-processing for detecting low-frequency oscillations in power systems

    NASA Astrophysics Data System (ADS)

    Kovalenko, P. Y.

    2017-07-01

    Processing the data received from measurement systems implies dealing with situations when one or more registered values stand apart from the rest of the sample. These values are referred to as "outliers". The processing results may be significantly influenced by their presence in the data sample under consideration. In order to ensure the accuracy of low-frequency oscillation detection in power systems, a corresponding algorithm has been developed for outlier detection and elimination. The algorithm is based on the concept of the irregular component of the measurement signal. This component comprises measurement errors and is assumed to be a Gauss-distributed random variable. Median filtering is employed to detect the values lying outside the range of the normally distributed measurement error on the basis of a 3σ criterion. The algorithm has been validated using both simulated signals and WAMS data.
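    A minimal sketch of this outlier screen, assuming a synthetic test signal: a median filter estimates the regular trend, and residuals beyond 3σ of the irregular component are flagged and replaced. The kernel size and noise levels are illustrative.

    ```python
    # Sketch: median-filter trend removal plus a 3-sigma outlier screen.
    import numpy as np
    from scipy.signal import medfilt

    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 1000)
    signal = np.sin(2 * np.pi * 0.5 * t) + rng.normal(scale=0.05, size=t.size)
    signal[[100, 400, 750]] += 3.0              # inject artificial outliers

    trend = medfilt(signal, kernel_size=11)     # median filtering
    residual = signal - trend                   # irregular (assumed Gaussian) part
    outliers = np.flatnonzero(np.abs(residual) > 3 * residual.std())
    signal[outliers] = trend[outliers]          # replace flagged samples
    print(outliers)
    ```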

  12. Using fuzzy rule-based knowledge model for optimum plating conditions search

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Arzamastsev, A. A.; Glazkov, V. P.; L’vov, A. A.

    2018-03-01

    The paper discusses existing approaches to modeling the plating process with the aim of reducing the unevenness of coating thickness across the plated surface. However, these approaches do not take into account the experience, knowledge, and intuition of decision-makers when searching for the optimal conditions of the electroplating process. An original approach to the search for optimal conditions for applying electroplated coatings is proposed; it uses a rule-based knowledge model and allows one to reduce the unevenness of product thickness distribution. Block diagrams of a conventional control system for a galvanic process, as well as of a system based on the production model of knowledge, are considered. It is shown that a fuzzy production knowledge model in the control system makes it possible to obtain galvanic coatings of a given thickness unevenness with a high degree of adequacy to the experimental data. The described experimental results confirm the theoretical conclusions.

  13. 77 FR 23618 - Authority To Manufacture and Distribute Postage Evidencing Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-20

    ... revision of the rules concerning inventory controls for Postage Evidencing Systems (PES). These changes are... System inventory control processes. (a) Each authorized provider of Postage Evidencing Systems must... custody and control of Postage Evidencing Systems and must specifically authorize in writing the proposed...

  14. A Framework for Distributed Problem Solving

    NASA Astrophysics Data System (ADS)

    Leone, Joseph; Shin, Don G.

    1989-03-01

    This work explores a distributed problem solving (DPS) approach, namely the AM/AG model, to cooperative memory recall. The AM/AG model is a hierarchic social system metaphor for DPS based on Mintzberg's model of organizations. At the core of the model are information flow mechanisms named amplification and aggregation. Amplification is a process of expanding a given task, called an agenda, into a set of subtasks with a magnified degree of specificity and distributing them to multiple processing units downward in the hierarchy. Aggregation is a process of combining the results reported from multiple processing units into a unified view, called a resolution, and promoting the conclusion upward in the hierarchy. The combination of amplification and aggregation can account for a memory recall process that primarily relies on the ability to make associations between vast amounts of related concepts, sort out the combined results, and promote the most plausible ones. The amplification process is discussed in detail, an implementation of the amplification process is presented, and the process is illustrated by an example.
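    A toy sketch of the amplification/aggregation flow follows; the task decomposition rule, the depth, and the leaf scoring function are placeholders, since the paper's actual agenda expansion is domain-specific.

    ```python
    # Toy sketch: amplify an agenda down a hierarchy, aggregate results upward.
    from typing import Callable, List

    def amplify(agenda: str) -> List[str]:
        # placeholder decomposition: two more-specific subtasks per agenda
        return [f"{agenda}.{i}" for i in range(2)]

    def solve(agenda: str, depth: int, leaf: Callable[[str], float]) -> float:
        if depth == 0:
            return leaf(agenda)                    # a processing unit responds
        results = [solve(s, depth - 1, leaf) for s in amplify(agenda)]
        return max(results)                        # promote the most plausible

    # the leaf score is a stand-in for association strength in memory recall
    print(solve("recall", depth=3, leaf=lambda a: hash(a) % 100 / 100))
    ```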

  15. Multiplicative processes in visual cognition

    NASA Astrophysics Data System (ADS)

    Credidio, H. F.; Teixeira, E. N.; Reis, S. D. S.; Moreira, A. A.; Andrade, J. S.

    2014-03-01

    The Central Limit Theorem (CLT) is certainly one of the most important results in the field of statistics. The simple fact that the addition of many random variables can generate the same probability curve elucidated the underlying process for a broad spectrum of natural systems, ranging from the statistical distribution of human heights to the distribution of measurement errors, to mention a few. An extension of the CLT can be applied to multiplicative processes, where a given measure is the result of the product of many random variables. The statistical signature of these processes is rather ubiquitous, appearing in a diverse range of natural phenomena, including the distributions of incomes, body weights, rainfall, and fragment sizes in a rock-crushing process. Here we corroborate results from previous studies which indicate the presence of multiplicative processes in a particular type of visual cognition task, namely, the visual search for hidden objects. Precisely, our results from eye-tracking experiments show that the distribution of fixation times during visual search obeys a log-normal pattern, while the fixational radii of gyration follow a power-law behavior.
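    The multiplicative analogue of the CLT mentioned above is easy to demonstrate numerically: the product of many positive random factors has an approximately log-normal distribution, so its logarithm looks Gaussian. The factor distribution below is an arbitrary choice.

    ```python
    # Sketch: products of positive random factors are approximately log-normal.
    import numpy as np

    rng = np.random.default_rng(5)
    factors = rng.uniform(0.5, 1.5, size=(100_000, 50))
    products = factors.prod(axis=1)

    logs = np.log(products)              # approximately Gaussian by the CLT
    print(logs.mean(), logs.std())       # parameters of the fitted log-normal
    ```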

  16. Distributed architecture and distributed processing mode in urban sewage treatment

    NASA Astrophysics Data System (ADS)

    Zhou, Ruipeng; Yang, Yuanming

    2017-05-01

    Decentralized rural sewage treatment facilities are spread over broad areas, which makes their operation and management difficult. Based on an analysis of rural sewage treatment models and in response to these challenges, we describe the principle, structure and function of a distributed remote monitoring system built around networking technology and network communication technology, and use case analysis to explore the features of the remote monitoring system in the daily operation and management of decentralized rural sewage treatment facilities. Practice shows that the remote monitoring system provides technical support for the long-term operation and effective supervision of the facilities, and reduces operating, maintenance and supervision costs.

  17. Experimental investigation of static ice refrigeration air conditioning system driven by distributed photovoltaic energy system

    NASA Astrophysics Data System (ADS)

    Xu, Y. F.; Li, M.; Luo, X.; Wang, Y. F.; Yu, Q. F.; Hassanien, R. H. E.

    2016-08-01

    The static ice refrigeration air conditioning system (SIRACS) driven by a distributed photovoltaic energy system (DPES) was proposed, and test experiments were carried out in this paper. Results revealed that the system energy utilization efficiency is low because energy losses were high in the ice-making process of the ice slide maker. Therefore, an immersed evaporator and a co-integrated exchanger were suggested in the system structure optimization analysis, and the system COP was improved by nearly 40%. At the same time, we investigated how the ice thickness and ice super-cooling temperature changed with time, and obtained the relationship between system COP and ice thickness.

  18. Modular modeling system for building distributed hydrologic models with a user-friendly software package

    NASA Astrophysics Data System (ADS)

    Wi, S.; Ray, P. A.; Brown, C.

    2015-12-01

    A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of the hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the task of developing, calibrating, validating, and using hydrologic models through the inclusion of intelligent automation to minimize user effort and reduce opportunities for error. Processes so far automated include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.

  19. Flight deck benefits of integrated data link communication

    NASA Technical Reports Server (NTRS)

    Waller, Marvin C.

    1992-01-01

    A fixed-base, piloted simulation study was conducted to determine the operational benefits that result when air traffic control (ATC) instructions are transmitted to the deck of a transport aircraft over a digital data link. The ATC instructions include altitude, airspeed, heading, radio frequency, and route assignment data. The interface between the flight deck and the data link was integrated with other subsystems of the airplane to facilitate data management. Data from the ATC instructions were distributed to the flight guidance and control system, the navigation system, and an automatically tuned communication radio. The co-pilot initiated the automation-assisted data distribution process. Digital communications and automated data distribution were compared with conventional voice radio communication and manual input of data into other subsystems of the simulated aircraft. Less time was required in the combined communication and data management process when data link ATC communication was integrated with the other subsystems. The test subjects, commercial airline pilots, provided favorable evaluations of both the digital communication and data management processes.

  20. A Scientific Workflow System for Satellite Data Processing with Real-Time Monitoring

    NASA Astrophysics Data System (ADS)

    Nguyen, Minh Duc

    2018-02-01

    This paper provides a case study on satellite data processing, storage, and distribution in the space weather domain by introducing the Satellite Data Downloading System (SDDS). The approach proposed in this paper was evaluated through real-world scenarios and addresses the challenges related to the specific field. Although SDDS is used for satellite data processing, it can potentially be adapted to a wide range of data processing scenarios in other fields of physics.

  1. Effectiveness of a web-based automated cell distribution system.

    PubMed

    Niland, Joyce C; Stiller, Tracey; Cravens, James; Sowinski, Janice; Kaddis, John; Qian, Dajun

    2010-01-01

    In recent years, industries have turned to the field of operations research to help improve the efficiency of production and distribution processes. Largely absent is the application of this methodology to biological materials, such as the complex and costly procedure of human pancreas procurement and islet isolation. Pancreatic islets are used for basic science research and in a promising form of cell replacement therapy for a subset of patients afflicted with severe type 1 diabetes mellitus. Having an accurate and reliable system for cell distribution is therefore crucial. The Islet Cell Resource Center Consortium was formed in 2001 as the first and largest cooperative group of islet production and distribution facilities in the world. We previously reported on the development of a Matching Algorithm for Islet Distribution (MAID), an automated web-based tool used to optimize the distribution of human pancreatic islets by matching investigator requests to islet characteristics. This article presents an assessment of that algorithm and compares it to the manual distribution process used prior to MAID. A comparison was done using an investigator's ratio of the number of islets received divided by the number requested pre- and post-MAID. Although the supply of islets increased between the pre- versus post-MAID period, the median received-to-requested ratio remained around 60% due to an increase in demand post-MAID. A significantly smaller variation in the received-to-requested ratio was achieved in the post- versus pre-MAID period. In particular, the undesirable outcome of providing users with more islets than requested, ranging up to four times their request, was greatly reduced through the algorithm. In conclusion, this analysis demonstrates, for the first time, the effectiveness of using an automated web-based cell distribution system to facilitate efficient and consistent delivery of human pancreatic islets by enhancing the islet matching process.

  2. Impact of distributed power electronics on the lifetime and reliability of PV systems: Impact of distributed power electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olalla, Carlos; Maksimovic, Dragan; Deline, Chris

    Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.

  3. Impact of distributed power electronics on the lifetime and reliability of PV systems: Impact of distributed power electronics

    DOE PAGES

    Olalla, Carlos; Maksimovic, Dragan; Deline, Chris; ...

    2017-04-26

    Here, this paper quantifies the impact of distributed power electronics in photovoltaic (PV) systems in terms of end-of-life energy-capture performance and reliability. The analysis is based on simulations of PV installations over system lifetime at various degradation rates. It is shown how module-level or submodule-level power converters can mitigate variations in cell degradation over time, effectively increasing the system lifespan by 5-10 years compared with the nominal 25-year lifetime. An important aspect typically overlooked when characterizing such improvements is the reliability of distributed power electronics, as power converter failures may not only diminish energy yield improvements but also adversely affect the overall system operation. Failure models are developed, and power electronics reliability is taken into account in this work, in order to provide a more comprehensive view of the opportunities and limitations offered by distributed power electronics in PV systems. Lastly, it is shown how a differential power-processing approach achieves the best mismatch mitigation performance and the least susceptibility to converter faults.

  4. Development of a Dynamically Configurable,Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure that enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  5. Enhanced High Performance Power Compensation Methodology by IPFC Using PIGBT-IDVR

    PubMed Central

    Arumugom, Subramanian; Rajaram, Marimuthu

    2015-01-01

    Currently, power systems are controlled without high-speed control devices and must frequently be re-initiated, resulting in a slow process when compared with static electronic devices. Among the various power interruptions in power supply systems, voltage dips play a central role in causing disruption. The dynamic voltage restorer (DVR) is a voltage-control-based process that compensates for line transients in the distributed system. To overcome these issues and to achieve higher speed, a new methodology called the Parallel IGBT-Based Interline Dynamic Voltage Restorer (PIGBT-IDVR) is proposed, which mainly focuses on the dynamic processing of energy reloads in common dc-linked energy storage with less adaptive transition. The interline power flow controller (IPFC) scheme is employed to manage power transmission between the lines, and the restorer method is used for controlling the reactive power in the individual lines. By employing the proposed methodology, failure of the distributed system is avoided, and better performance is obtained than with existing methodologies.

  6. Efficient bit sifting scheme of post-processing in quantum key distribution

    NASA Astrophysics Data System (ADS)

    Li, Qiong; Le, Dan; Wu, Xianyan; Niu, Xiamu; Guo, Hong

    2015-10-01

    Bit sifting is an important step in the post-processing of quantum key distribution (QKD). Its function is to sift out the undetected original keys. The communication traffic of bit sifting has an essential impact on the net secure key rate of a practical QKD system. In this paper, an efficient bit sifting scheme is presented, whose core is a lossless source coding algorithm. Both theoretical analysis and experimental results demonstrate that the performance of the scheme approaches the Shannon limit. The proposed scheme can greatly decrease the communication traffic of the post-processing of a QKD system, which means it can decrease the secure key consumption for classical-channel authentication and increase the net secure key rate of the QKD system, as demonstrated by analyzing the improvement in the net secure key rate. Meanwhile, some recommendations on the application of the proposed scheme to representative practical QKD systems are also provided.
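    As a toy illustration of why the detection pattern is compressible, the sketch below compares the Shannon entropy of a sparse detection bitmap with a crude run-length estimate; the detection rate is invented, and the paper's actual source coding algorithm is not reproduced.

    ```python
    # Toy sketch: entropy bound vs. a crude run-length estimate for the
    # sparse detection bitmap communicated during bit sifting.
    import numpy as np

    rng = np.random.default_rng(6)
    p = 0.05                                     # invented detection probability
    detected = rng.random(100_000) < p

    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    print(f"Shannon limit: {h:.4f} bits/pulse")

    gaps = np.diff(np.flatnonzero(detected))     # send gaps between detections
    bits = 2 * np.ceil(np.log2(gaps + 1)).sum()  # crude self-delimiting gap code
    print(f"run-length estimate: {bits / detected.size:.4f} bits/pulse")
    ```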

  7. Distributed heterogeneous inspecting system and its middleware-based solution.

    PubMed

    Huang, Li-can; Wu, Zhao-hui; Pan, Yun-he

    2003-01-01

    There are many cases when an organization needs to monitor the data and operations of its supervised departments, especially departments that are not owned by the organization and are managed by their own information systems. A Distributed Heterogeneous Inspecting System (DHIS) is the system an organization uses to monitor its supervised departments by inspecting their information systems. In DHIS, the inspected systems are generally distributed, heterogeneous, and constructed by different companies. DHIS has three key processes: abstracting core data sets and core operation sets, collecting these sets, and inspecting the collected sets. In this paper, we present the concept and mathematical definition of DHIS, a metadata method for solving the interoperability problem, a security strategy for data transfer, and a middleware-based solution for DHIS. We also describe an example of the inspecting system at Wenzhou Customs.

  8. The Paperless Solution

    NASA Technical Reports Server (NTRS)

    2001-01-01

    REI Systems, Inc. developed a software solution that uses the Internet to eliminate the paperwork typically required to document and manage complex business processes. The data management solution, called Electronic Handbooks (EHBs), is presently used for all SBIR program processes at NASA. The EHB-based system is ideal for programs and projects whose users are geographically distributed and are involved in complex management processes and procedures. EHBs provide flexible access control and increased communications while maintaining security for systems of all sizes. Through Internet Protocol-based access, user authentication and user-based access restrictions, role-based access control, and encryption/decryption, EHBs provide the level of security required for confidential data transfer. EHBs contain electronic forms and menus, which can be used in real time to execute the described processes. EHBs use standard word processors that generate ASCII HTML code to set up electronic forms that are viewed within a web browser. EHBs require no end-user software distribution, significantly reducing operating costs. Each interactive handbook simulates a hard-copy version containing chapters with descriptions of participants' roles in the online process.

  9. Immunocytochemical distribution of glial fibrillary acidic protein in the central nervous system of the Japanese quail (Coturnix coturnix japonica).

    PubMed

    Cameron-Curry, P; Aste, N; Viglietti-Panzica, C; Panzica, G C

    1991-01-01

    In the present study we detailed the distribution of GFAP-immunopositive structures within the central nervous system of the Japanese quail. Different fixation and embedding procedures were applied. The best results were obtained on frozen cryostatic sections from freshly dissected brains subsequently fixed by a short immersion in cold acetone. Immunopositive structures were observed both with immunofluorescence, and with immunoperoxidase methods. Immunoreactive cell bodies and processes were observed within the whole central nervous system, and different cell types can be identified on the basis of their topographical location and morphology. A first class of astrocytes is composed of intensely stained unipolar cells lining the inner surface of the pia mater and the large blood vessels. A second type is represented by multipolar astrocytes of variable size, provided with an irregular cell body. The last type is represented by similar elements, showing an immunonegative cell body, that can be identified only by the presence of converging processes. These three types of cells, and several isolated processes, show a differential distribution within the quail central nervous system, both in the grey and in the white matter. Present results suggest that GFAP may represent a good marker for at least part of the astroglial population in quail.

  10. Species, functional groups, and thresholds in ecological resilience

    USGS Publications Warehouse

    Sundstrom, Shana M.; Allen, Craig R.; Barichievy, Chris

    2012-01-01

    The cross-scale resilience model states that ecological resilience is generated in part from the distribution of functions within and across scales in a system. Resilience is a measure of a system's ability to remain organized around a particular set of mutually reinforcing processes and structures, known as a regime. We define scale as the geographic extent over which a process operates and the frequency with which a process occurs. Species can be categorized into functional groups that are a link between ecosystem processes and structures and ecological resilience. We applied the cross-scale resilience model to avian species in a grassland ecosystem. A species’ morphology is shaped in part by its interaction with ecological structure and pattern, so animal body mass reflects the spatial and temporal distribution of resources. We used the log-transformed rank-ordered body masses of breeding birds associated with grasslands to identify aggregations and discontinuities in the distribution of those body masses. We assessed cross-scale resilience on the basis of 3 metrics: overall number of functional groups, number of functional groups within an aggregation, and the redundancy of functional groups across aggregations. We assessed how the loss of threatened species would affect cross-scale resilience by removing threatened species from the data set and recalculating values of the 3 metrics. We also determined whether more function was retained than expected after the loss of threatened species by comparing observed loss with simulated random loss in a Monte Carlo process. The observed distribution of function compared with the random simulated loss of function indicated that more functionality in the observed data set was retained than expected. On the basis of our results, we believe an ecosystem with a full complement of species can sustain considerable species losses without affecting the distribution of functions within and across aggregations, although ecological resilience is reduced. We propose that the mechanisms responsible for shaping discontinuous distributions of body mass and the nonrandom distribution of functions may also shape species losses such that local extinctions will be nonrandom with respect to the retention and distribution of functions and that the distribution of function within and across aggregations will be conserved despite extinctions.
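    A compact sketch of the two analysis steps described above, on synthetic data: gaps in log-transformed rank-ordered body masses define aggregations, and a Monte Carlo permutation compares functional-group retention after the loss of "threatened" species with random losses. Masses, group labels, threat status, and the gap threshold are all invented.

    ```python
    # Sketch: body-mass aggregations from gaps in log-mass, then a Monte Carlo
    # test of functional retention under species loss (all data synthetic).
    import numpy as np

    rng = np.random.default_rng(7)
    masses = np.sort(rng.lognormal(mean=3, sigma=1, size=60))
    log_m = np.log10(masses)

    gaps = np.diff(log_m)
    cuts = np.flatnonzero(gaps > gaps.mean() + 2 * gaps.std())  # discontinuities
    aggregation = np.searchsorted(cuts, np.arange(log_m.size))  # aggregation ids

    groups = rng.integers(0, 5, size=60)         # functional group per species
    threatened = rng.random(60) < 0.2

    def functions_retained(lost):
        keep = ~lost
        return len({(a, g) for a, g in zip(aggregation[keep], groups[keep])})

    observed = functions_retained(threatened)
    random_mean = np.mean([functions_retained(rng.permutation(threatened))
                           for _ in range(1000)])
    print(observed, random_mean)                 # observed vs. expected retention
    ```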

  11. The Joint Distribution Process Analysis Center (JDPAC): Background and Current Capability

    DTIC Science & Technology

    2007-06-12

    Systems Integration and Data Management JDDE Analysis/Global Distribution Performance Assessment Futures/Transformation Analysis Balancing Operational Art ... Science JDPAC “101” USTRANSCOM Future Operations Center SDDC – TEA Army SES (Dual Hat) • Transportability Engineering • Other Title 10

  12. 242A Distributed Control System Year 2000 Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEATS, M.C.

    1999-08-31

    This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. It documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure documents the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic such that it may be performed on any D/3 (trademark of GSE Process Solutions, Inc.) distributed control system that runs with the VMS (trademark of Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test. These outages are expected to last about 10 minutes each.

  13. Distributed computer taxonomy based on O/S structure

    NASA Technical Reports Server (NTRS)

    Foudriat, Edwin C.

    1985-01-01

    The taxonomy considers the resource structure at the operating system level. It compares a communication based taxonomy with the new taxonomy to illustrate how the latter does a better job when related to the client's view of the distributed computer. The results illustrate the fundamental features and what is required to construct fully distributed processing systems. The problem of using network computers on the space station is addressed. A detailed discussion of the taxonomy is not given here. Information is given in the form of charts and diagrams that were used to illustrate a talk.

  14. Fission meter and neutron detection using poisson distribution comparison

    DOEpatents

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The method comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the source. The observed neutron count distribution is compared with a Poisson distribution to distinguish fissile material from non-fissile material.
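    The simplest version of such a Poisson comparison is the variance-to-mean ratio of counts in fixed time gates: a Poisson source gives a ratio near 1, while correlated fission chains push it above 1. The burst model below is a crude stand-in for fission multiplicity, not the patented method.

    ```python
    # Sketch: variance-to-mean ratio of gate counts; bursts mimic fission chains.
    import numpy as np

    rng = np.random.default_rng(8)
    gates = 20_000

    poisson_counts = rng.poisson(3.0, size=gates)   # uncorrelated source
    bursts = rng.poisson(1.0, size=gates)           # fission events per gate
    burst_counts = rng.poisson(3.0 * bursts)        # neutrons arrive in bursts

    for name, counts in [("Poisson-like", poisson_counts),
                         ("burst-like", burst_counts)]:
        print(f"{name}: variance/mean = {counts.var() / counts.mean():.2f}")
    ```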

  15. Architectures Toward Reusable Science Data Systems

    NASA Astrophysics Data System (ADS)

    Moses, J. F.

    2014-12-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building ground systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research, NOAA's weather satellites and USGS's Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience the goal is to recognize architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  16. Architectures Toward Reusable Science Data Systems

    NASA Technical Reports Server (NTRS)

    Moses, John

    2015-01-01

    Science Data Systems (SDS) comprise an important class of data processing systems that support product generation from remote sensors and in-situ observations. These systems enable research into new science data products, replication of experiments and verification of results. NASA has been building systems for satellite data processing since the first Earth observing satellites launched and is continuing development of systems to support NASA science research and NOAAs Earth observing satellite operations. The basic data processing workflows and scenarios continue to be valid for remote sensor observations research as well as for the complex multi-instrument operational satellite data systems being built today. System functions such as ingest, product generation and distribution need to be configured and performed in a consistent and repeatable way with an emphasis on scalability. This paper will examine the key architectural elements of several NASA satellite data processing systems currently in operation and under development that make them suitable for scaling and reuse. Examples of architectural elements that have become attractive include virtual machine environments, standard data product formats, metadata content and file naming, workflow and job management frameworks, data acquisition, search, and distribution protocols. By highlighting key elements and implementation experience we expect to find architectures that will outlast their original application and be readily adaptable for new applications. Concepts and principles are explored that lead to sound guidance for SDS developers and strategists.

  17. Rapid Processing of Radio Interferometer Data for Transient Surveys

    NASA Astrophysics Data System (ADS)

    Bourke, S.; Mooley, K.; Hallinan, G.

    2014-05-01

    We report on a software infrastructure and pipeline developed to process large radio interferometer datasets. The pipeline is implemented using a radical redesign of the AIPS processing model. An infrastructure we have named AIPSlite is used to spawn, at runtime, minimal AIPS environments across a cluster. The pipeline then distributes and processes its data in parallel. The system is entirely free of the traditional AIPS distribution and is self-configuring at runtime. This software has so far been used to process an EVLA Stripe 82 transient survey and the data for the JVLA-COSMOS project, and has been used to process most of the EVLA L-band data archive, imaging each integration to search for short-duration transients.

  18. Mission operations update for the restructured Earth Observing System (EOS) mission

    NASA Technical Reports Server (NTRS)

    Kelly, Angelita Castro; Chang, Edward S.

    1993-01-01

    The National Aeronautics and Space Administration's (NASA) Earth Observing System (EOS) will provide a comprehensive long term set of observations of the Earth to the Earth science research community. The data will aid in determining global changes caused both naturally and through human interaction. Understanding man's impact on the global environment will allow sound policy decisions to be made to protect our future. EOS is a major component of the Mission to Planet Earth program, which is NASA's contribution to the U.S. Global Change Research Program. EOS consists of numerous instruments on multiple spacecraft and a distributed ground system. The EOS Data and Information System (EOSDIS) is the major ground system developed to support EOS. The EOSDIS will provide EOS spacecraft command and control, data processing, product generation, and data archival and distribution services for EOS spacecraft. Data from EOS instruments on other Earth science missions (e.g., Tropical Rainfall Measuring Mission (TRMM)) will also be processed, distributed, and archived in EOSDIS. The U.S. and various International Partners (IP) (e.g., the European Space Agency (ESA), the Ministry of International Trade and Industry (MITI) of Japan, and the Canadian Space Agency (CSA)) participate in and contribute to the international EOS program. The EOSDIS will also archive processed data from other designated NASA Earth science missions (e.g., UARS) that are under the broad umbrella of Mission to Planet Earth.

  19. A convergent model for distributed processing of Big Sensor Data in urban engineering networks

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Finogeev, A. G.; Kamaev, V. A.; Finogeev, A. A.; Gnedkova, E. P.; Tyukov, A. P.

    2017-01-01

    The problems of developing and studying a convergent model of grid, cloud, fog and mobile computing for analytical Big Sensor Data processing are reviewed. The model is meant for creating monitoring systems of spatially distributed objects of urban engineering networks and processes. The proposed approach is a convergence model for organizing distributed data processing. The fog computing model is used for processing and aggregating sensor data at network nodes and/or industrial controllers. Program agents are loaded to perform the computing tasks of primary processing and data aggregation. The grid and cloud computing models are used for mining and accumulating integral indicators. The computing cluster has a three-tier architecture: the main server at the first level, a cluster of SCADA system servers at the second level, and a set of GPU video cards supporting the Compute Unified Device Architecture at the third level. The mobile computing model is applied to visualize the results of the intellectual analysis with elements of augmented reality and geo-information technologies. The integral indicators are transferred to the data center for accumulation in a multidimensional storage for the purpose of data mining and knowledge gaining.

  20. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.

  1. Distributed Offline Data Reconstruction in BaBar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pulliam, Teela M

    The BaBar experiment at SLAC is in its fourth year of running. The data processing system has been continuously evolving to meet the challenges of higher-luminosity running and the increasing bulk of data to re-process each year. To meet these goals a two-pass processing architecture has been adopted, where 'rolling calibrations' are quickly calculated on a small fraction of the events in the first pass and the bulk data reconstruction is done in the second. This allows for quick detector feedback in the first pass and for parallelization of the second pass over two or more separate farms. This two-pass system also allows for distribution of processing farms off-site. The first such site has been set up at INFN Padova. The challenges met here were many. The software was ported to a full Linux-based, commodity hardware system. The raw dataset, 90 TB, was imported from SLAC utilizing a 155 Mbps network link. A system for quality control and export of the processed data back to SLAC was developed. Between SLAC and Padova we are currently running three pass-one farms, with 32 CPUs each, and nine pass-two farms with 64 to 80 CPUs each. The pass-two farms can process between 2 and 4 million events per day. Details about the implementation and performance of the system will be presented.

  2. Comprehensive evaluation of impacts of distributed generation integration in distribution network

    NASA Astrophysics Data System (ADS)

    Peng, Sujiang; Zhou, Erbiao; Ji, Fengkun; Cao, Xinhui; Liu, Lingshuang; Liu, Zifa; Wang, Xuyang; Cai, Xiaoyu

    2018-04-01

    Distributed generation (DG), as a supplement to centralized renewable energy utilization, is becoming the focus of renewable energy development. With the increasing proportion of DG in distribution networks, the network power structure, power flow distribution, operation plans and protection are affected to some extent. According to the main impacts of DG, a comprehensive evaluation model of a distribution network with DG is proposed in this paper. A comprehensive evaluation index system covering 7 aspects, along with the corresponding index calculation methods, is established for quantitative analysis. The indices under different DG access capacities in the distribution network are calculated based on the IEEE RBTS-Bus 6 system, and the overall evaluation result is obtained by the analytic hierarchy process (AHP). The proposed model and method are shown to be effective and valid through a case study.
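
    Since AHP is the aggregation step here, a brief sketch of its core computation may help: criterion weights are taken from the principal eigenvector of a pairwise comparison matrix, and a consistency ratio checks the judgments. The 3x3 judgment matrix below is an invented example, not data from the paper.

        import numpy as np

        # Pairwise comparisons for three hypothetical criteria
        # (e.g., power quality vs. reliability vs. economy).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)              # principal eigenvalue lambda_max
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                             # normalized criterion weights

        n = A.shape[0]
        CI = (eigvals.real[k] - n) / (n - 1)     # consistency index
        RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
        print("weights:", np.round(w, 3), "CR =", round(CI / RI, 3))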

  3. Environmental Control and Life Support Systems

    NASA Technical Reports Server (NTRS)

    Engel, Joshua Allen

    2017-01-01

    The Environmental Control System provides a controlled air purge to Orion and SLS. The ECS performs this function by processing 100% ambient air while simultaneously controlling temperature, pressure, humidity, cleanliness and purge distribution.

  4. Communication Interface for a System of Distributed Processes.

    DTIC Science & Technology

    1981-03-03

    "An Interprocess Communication Service Applied in a Distributed Data Base System" (submitted to ECI, München, Oct. 1981) and "The Transport Service of an Interprocess Communication System" (submitted to the 7th Data Communications Symposium, Mexico City, Oct. 1981). References: (4) Böhme, K.: An Interprocess Communication Service Applied in a Distributed Data Base System. Submitted to ECI, München, Oct. 1981. (5) Böhme, K.: The Transport Service of an Interprocess Communication System. Submitted to the 7th Data Communications Symposium, Mexico City, Oct. 1981.

  5. Design and development of a medical big data processing system based on Hadoop.

    PubMed

    Yao, Qin; Tian, Yu; Li, Peng-Fei; Tian, Li-Li; Qian, Yang-Ming; Li, Jing-Song

    2015-03-01

    Secondary use of medical big data is increasingly popular in healthcare services and clinical research. Understanding the logic behind medical big data reveals tendencies in hospital information technology and is of great significance for hospital information systems that are designing and expanding their services. Big data has four characteristics--Volume, Variety, Velocity and Value (the 4 Vs)--that make traditional standalone systems incapable of processing these data. Apache Hadoop MapReduce is a promising software framework for developing applications that process vast amounts of data in parallel with large clusters of commodity hardware in a reliable, fault-tolerant manner. With the Hadoop framework and MapReduce application program interface (API), we can more easily develop our own MapReduce applications to run on a Hadoop framework that can scale up from a single node to thousands of machines. This paper investigates a practical case of a Hadoop-based medical big data processing system. We developed this system to intelligently process medical big data and uncover some features of hospital information system user behaviors. This paper studies user behaviors regarding various data produced by different hospital information systems for daily work. In this paper, we also built a five-node Hadoop cluster to execute distributed MapReduce algorithms. Our distributed algorithms show promise in facilitating efficient data processing with medical big data in healthcare services and clinical research compared with single nodes. Additionally, with medical big data analytics, we can design our hospital information systems to be much more intelligent and easier to use by making personalized recommendations.
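
    As a concrete illustration of the MapReduce pattern used here, the following Hadoop Streaming pair (a minimal sketch, not the authors' code) counts user actions in hypothetical hospital-information-system access logs; the tab-separated "timestamp, user, action" record format is an assumption.

        #!/usr/bin/env python3
        # mapper.py -- emit (action, 1) for every well-formed log record.
        import sys

        for line in sys.stdin:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 3:            # timestamp, user_id, action
                print(f"{fields[2]}\t1")

        #!/usr/bin/env python3
        # reducer.py -- sum the counts per action; Hadoop Streaming delivers
        # mapper output grouped and sorted by key.
        import sys

        current, total = None, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t")
            if key != current and current is not None:
                print(f"{current}\t{total}")
                total = 0
            current = key
            total += int(value)
        if current is not None:
            print(f"{current}\t{total}")

    On a cluster this pair would typically be launched with the Hadoop Streaming jar, e.g. hadoop jar hadoop-streaming.jar -input his_logs/ -output action_counts/ -mapper mapper.py -reducer reducer.py (paths are illustrative), and it scales from a single node to a cluster such as the five-node one described above without code changes.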

  6. Performance of the Landsat-Data Collection System in a Total System Context

    NASA Technical Reports Server (NTRS)

    Paulson, R. W. (Principal Investigator); Merk, C. F.

    1975-01-01

    The author has identified the following significant results. This experiment was, and continues to be, an integration of the LANDSAT-DCS with the data collection and processing system of the Geological Survey. Although an experimental demonstration, it was a successful integration of a satellite relay system capable of continental data collection with an existing nationwide governmental operational data processing and distribution network. The Survey's data processing system uses a large general-purpose computer with insufficient redundancy for 24-hour-a-day, 7-day-a-week operation. This is a significant, but soluble, obstacle to converting the experimental integration of the system to an operational integration.

  7. Advanced Information Processing System (AIPS)

    NASA Technical Reports Server (NTRS)

    Pitts, Felix L.

    1993-01-01

    Advanced Information Processing System (AIPS) is a computer systems philosophy, a set of validated hardware building blocks, and a set of validated services as embodied in system software. The goal of AIPS is to provide the knowledge base that will allow achievement of validated fault-tolerant distributed computer system architectures, suitable for a broad range of applications, having failure probability requirements of 10^-9 at 10 hours. A background and description are given, followed by program accomplishments, the current focus, applications, technology transfer, FY92 accomplishments, and funding.

  8. Launch Processing System. [for Space Shuttle

    NASA Technical Reports Server (NTRS)

    Byrne, F.; Doolittle, G. V.; Hockenberger, R. W.

    1976-01-01

    This paper presents a functional description of the Launch Processing System, which provides automatic ground checkout and control of the Space Shuttle launch site and airborne systems, with emphasis placed on the Checkout, Control, and Monitor Subsystem. Hardware and software modular design concepts for the distributed computer system are reviewed relative to performing system tests, launch operations control, and status monitoring during ground operations. The communication network design, which uses a Common Data Buffer interface to all computers to allow computer-to-computer communication, is discussed in detail.

  9. Intelligent Control of Micro Grid: A Big Data-Based Control Center

    NASA Astrophysics Data System (ADS)

    Liu, Lu; Wang, Yanping; Liu, Li; Wang, Zhiseng

    2018-01-01

    In this paper, a structure of a micro grid system with a big data-based control center is introduced. Energy data from distributed generation, storage and load are analyzed through the control center, and from the results new trends are predicted and applied as feedback to optimize the control. Therefore, each step performed in the micro grid can be adjusted and organized in a form of comprehensive management. A framework of real-time data collection, data processing and data analysis is proposed by employing big data technology. Consequently, an integrated distributed generation process and an optimized energy storage and transmission process can be implemented in the micro grid system.

  10. Telemedicine and distributed medical intelligence.

    PubMed

    Warner, D; Tichenor, J M; Balch, D C

    1996-01-01

    Recent trends in health care informatics and telemedicine indicate that systems are being developed with a primary focus on technology and business, not on the process of medicine itself. The authors present a new model of health care information, distributed medical intelligence, which promotes the development of an integrative medical communication system addressing the process of providing expert medical knowledge to the point of need. The model incorporates audio, video, high-resolution still images, and virtual reality applications into an integrated medical communications network. Three components of the model (care portals, Docking Station, and the bridge) are described. The implementation of this model at the East Carolina University School of Medicine is also outlined.

  11. Monitoring copper release in drinking water distribution systems.

    PubMed

    d'Antonio, L; Fabbricino, M; Panico, A

    2008-01-01

    A new procedure, recently proposed for on-line monitoring of copper released from the metal pipes of household drinking water plumbing systems during the development of corrosion processes, is tested experimentally. Experiments were carried out under controlled laboratory conditions, using synthetic water and varying the water alkalinity. The possibility of using the corrosion potential as a surrogate measure of copper concentration in stagnating water is shown, verifying, at the same time, the effect of alkalinity on the development of passivation phenomena, which tend to protect the pipe from corrosion processes. Experimental data are discussed, highlighting the potential of the procedure and recognizing its limitations. Copyright IWA Publishing 2008.

  12. The design and implementation of multi-source application middleware based on service bus

    NASA Astrophysics Data System (ADS)

    Li, Yichun; Jiang, Ningkang

    2017-06-01

    With the rapid development of the Internet of Things (IoT), real-time monitoring data are growing in both variety and volume. To take full advantage of these data, we designed and implemented an application middleware that not only supports the three-layer architecture of IoT information systems but also enables flexible configuration of access to multiple resources and other additional modules. The middleware platform is lightweight, secure, AOP-based (aspect-oriented programming), distributed and real-time, letting application developers construct information processing systems for related areas in a short period. Its functions include, but are not limited to: pre-processing of data formats, definition of data entities, calling and handling of distributed services, and massive data processing. Experimental results show that the middleware outperforms some message queue architectures to a degree, and that its throughput scales as the number of distributed nodes increases while the code remains simple. Currently, the middleware is applied in the system of the Shanghai Pudong environmental protection agency and has achieved considerable success.

  13. Spitzer Telemetry Processing System

    NASA Technical Reports Server (NTRS)

    Stanboli, Alice; Martinez, Elmain M.; McAuley, James M.

    2013-01-01

    The Spitzer Telemetry Processing System (SirtfTlmProc) was designed to address objectives of JPL's Multi-mission Image Processing Lab (MIPL) in processing spacecraft telemetry and distributing the resulting data to the science community. To minimize costs and maximize operability, the software design focused on automated error recovery, performance, and information management. The system processes telemetry from the Spitzer spacecraft and delivers Level 0 products to the Spitzer Science Center. SirtfTlmProc is a unique system with automated error notification and recovery, with a real-time continuous service that can go quiescent after periods of inactivity. The software can process 2 GB of telemetry and deliver Level 0 science products to the end user in four hours. It provides analysis tools so the operator can manage the system and troubleshoot problems. It automates telemetry processing in order to reduce staffing costs.

  14. Distributed MIMO chaotic radar based on wavelength-division multiplexing technology.

    PubMed

    Yao, Tingfeng; Zhu, Dan; Ben, De; Pan, Shilong

    2015-04-15

    A distributed multiple-input multiple-output chaotic radar based on wavelength-division multiplexing (WDM) technology is proposed and demonstrated. The wideband quasi-orthogonal chaotic signals generated by different optoelectronic oscillators (OEOs) are emitted by separated antennas to gain spatial diversity against fluctuations of a target's radar cross section and to enhance detection capability. The received signals collected by the receive antennas and the reference signals from the OEOs are delivered to the central station for joint processing by exploiting WDM technology. The centralized signal processing avoids precise time synchronization of the distributed system and greatly simplifies the remote units, which improves the localization accuracy of the entire system. A proof-of-concept experiment for two-dimensional localization of a metal target is demonstrated. The maximum position error is less than 6.5 cm.
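
    Detection with quasi-orthogonal chaotic waveforms ultimately rests on cross-correlating each received signal with its stored reference; the numpy sketch below recovers a round-trip delay this way. The sampling rate, the noise model and the use of white noise as a stand-in for a true chaotic waveform are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 1.0e9                       # assumed sampling rate: 1 GS/s
        n = 4096
        ref = rng.standard_normal(n)     # stand-in for the wideband chaotic waveform
        true_delay = 137                 # round-trip delay, in samples
        echo = 0.3 * np.roll(ref, true_delay) + 0.05 * rng.standard_normal(n)

        # The peak of the cross-correlation gives the delay estimate.
        xc = np.correlate(echo, ref, mode="full")
        lag = int(np.argmax(np.abs(xc))) - (n - 1)
        print(lag, "samples ->", lag / fs * 3e8 / 2, "m one-way range")
        # With one such range per transmit/receive pair, the target position
        # follows by multilateration at the central station.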

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katipamula, Srinivas; Gowri, Krishnan; Hernandez, George

    This paper describes one such reference process that can be deployed to provide continuous, automated, condition-based maintenance management for buildings that have a building information model (BIM), a building automation system (BAS) and a computerized maintenance management system (CMMS). The process can be deployed using VOLTTRON™, an open source transactional network platform designed for distributed sensing and controls that supports both energy efficiency and grid services.

  16. Approaching Error-Free Customer Satisfaction through Process Change and Feedback Systems

    ERIC Educational Resources Information Center

    Berglund, Kristin M.; Ludwig, Timothy D.

    2009-01-01

    Employee-based errors result in quality defects that can often impact customer satisfaction. This study examined the effects of a process change and feedback system intervention on error rates of 3 teams of retail furniture distribution warehouse workers. Archival records of error codes were analyzed and aggregated as the measure of quality. The…

  17. Shape Modification and Size Classification of Microcrystalline Graphite Powder as Anode Material for Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Gai, Guosheng; Yang, Yufen

    2018-03-01

    Natural microcrystalline graphite (MCG) composed of many crystallites is a promising new anode material for lithium-ion batteries (LiBs) and has received considerable attention from researchers. MCG with narrow particle size distribution and high sphericity exhibits excellent electrochemical performance. A non-addition process to prepare natural MCG as a high-performance LiB anode material is described. First, raw MCG was broken into smaller particles using a pulverization system. Then, the particles were modified into near-spherical shape using a particle shape modification system. Finally, the particle size distribution was narrowed using a centrifugal rotor classification system. The products with uniform hemispherical shape and narrow size distribution had mean particle sizes of approximately 9 μm, 10 μm, 15 μm, and 20 μm. Additionally, the innovative pilot experimental process increased the product yield of the raw material. The electrochemical performance of the prepared MCG was then tested, revealing high reversible capacity and good cyclability.

  18. Wave scheduling - Decentralized scheduling of task forces in multicomputers

    NASA Technical Reports Server (NTRS)

    Van Tilborg, A. M.; Wittie, L. D.

    1984-01-01

    Decentralized operating systems that control large multicomputers need techniques to schedule competing parallel programs called task forces. Wave scheduling is a probabilistic technique that uses a hierarchical distributed virtual machine to schedule task forces by recursively subdividing and issuing wavefront-like commands to processing elements capable of executing individual tasks. Wave scheduling is highly resistant to processing element failures because it uses many distributed schedulers that dynamically assign scheduling responsibilities among themselves. The scheduling technique is trivially extensible as more processing elements join the host multicomputer. A simple model of scheduling cost is used by every scheduler node to distribute scheduling activity and minimize wasted processing capacity by using perceived workload to vary decentralized scheduling rules. At low to moderate levels of network activity, wave scheduling is only slightly less efficient than a central scheduler in its ability to direct processing elements to accomplish useful work.
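
    The recursive subdivision at the heart of the technique can be sketched in a few lines; the halving rule and the dictionary result below are illustrative assumptions rather than the paper's actual command protocol.

        def wave_schedule(tasks, pes):
            """Recursively subdivide a task force among processing elements,
            mimicking the wavefront-like propagation of scheduling commands."""
            if not tasks or not pes:
                return {}
            if len(tasks) == 1:
                return {tasks[0]: pes[0]}
            mid_t = len(tasks) // 2
            mid_p = max(1, len(pes) // 2)
            # Each half is delegated to a subordinate scheduler node, which
            # repeats the subdivision until single tasks meet single PEs.
            assignment = wave_schedule(tasks[:mid_t], pes[:mid_p])
            assignment.update(wave_schedule(tasks[mid_t:], pes[mid_p:]))
            return assignment

        print(wave_schedule(["t1", "t2", "t3"], ["pe1", "pe2", "pe3", "pe4"]))
        # {'t1': 'pe1', 't2': 'pe3', 't3': 'pe4'}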

  19. A Methodology for Distributing the Corporate Database.

    ERIC Educational Resources Information Center

    McFadden, Fred R.

    The trend to distributed processing is being fueled by numerous forces, including advances in technology, corporate downsizing, increasing user sophistication, and acquisitions and mergers. Increasingly, the trend in corporate information systems (IS) departments is toward sharing resources over a network of multiple types of processors, operating…

  20. a Hadoop-Based Distributed Framework for Efficient Managing and Processing Big Remote Sensing Images

    NASA Astrophysics Data System (ADS)

    Wang, C.; Hu, F.; Hu, X.; Zhao, S.; Wen, W.; Yang, C.

    2015-07-01

    Various sensors on airborne and satellite platforms are producing large volumes of remote sensing images for mapping, environmental monitoring, disaster management, military intelligence, and other applications. However, it is challenging to efficiently store, query and process such big data due to its data- and computing-intensive nature. In this paper, a Hadoop-based framework is proposed to manage and process big remote sensing data in a distributed and parallel manner. In particular, remote sensing data can be directly fetched from other data platforms into the Hadoop Distributed File System (HDFS). The Orfeo toolbox, a ready-to-use tool for large image processing, is integrated into MapReduce to provide a rich set of image processing operations. With the integration of HDFS, the Orfeo toolbox and MapReduce, these remote sensing images can be processed in parallel in a scalable computing environment. The experimental results show that the proposed framework can efficiently manage and process such big remote sensing data.
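
    A streaming-style mapper in this spirit can iterate over HDFS image paths and shell out to an Orfeo Toolbox application for each one; the output directory and the choice of the Smoothing application here are assumptions for illustration, not the paper's actual pipeline.

        #!/usr/bin/env python3
        # Each input line is an HDFS path to one remote sensing image tile.
        import os
        import subprocess
        import sys

        for line in sys.stdin:
            hdfs_path = line.strip()
            if not hdfs_path:
                continue
            name = os.path.basename(hdfs_path)
            out = "smoothed_" + name
            # Stage the tile onto the local worker disk.
            subprocess.run(["hdfs", "dfs", "-get", hdfs_path, name], check=True)
            # One example Orfeo Toolbox operation; any OTB pipeline could go here.
            subprocess.run(["otbcli_Smoothing", "-in", name, "-out", out], check=True)
            # Publish the processed tile back into HDFS (output dir is hypothetical).
            subprocess.run(["hdfs", "dfs", "-put", "-f", out, "/results/" + out], check=True)
            print(f"{hdfs_path}\tdone")    # key/value for an optional reducer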

  1. Satellite and earth science data management activities at the U.S. geological survey's EROS data center

    USGS Publications Warehouse

    Carneggie, David M.; Metz, Gary G.; Draeger, William C.; Thompson, Ralph J.

    1991-01-01

    The U.S. Geological Survey's Earth Resources Observation Systems (EROS) Data Center, the national archive for Landsat data, has 20 years of experience in acquiring, archiving, processing, and distributing Landsat and earth science data. The Center is expanding its satellite and earth science data management activities to support the U.S. Global Change Research Program and the National Aeronautics and Space Administration (NASA) Earth Observing System Program. The Center's current and future data management activities focus on land data and include: satellite and earth science data set acquisition, development and archiving; data set preservation, maintenance and conversion to more durable and accessible archive medium; development of an advanced Land Data Information System; development of enhanced data packaging and distribution mechanisms; and data processing, reprocessing, and product generation systems.

  2. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  3. Single Plant Root System Modeling under Soil Moisture Variation

    NASA Astrophysics Data System (ADS)

    Yabusaki, S.; Fang, Y.; Chen, X.; Scheibe, T. D.

    2016-12-01

    A prognostic Virtual Plant-Atmosphere-Soil System (vPASS) model is being developed that integrates comprehensively detailed mechanistic single plant modeling with microbial, atmospheric, and soil system processes in its immediate environment. Three broad areas of process module development are targeted: (1) incorporating models for root growth and function, rhizosphere interactions with bacteria and other organisms, and litter decomposition and soil respiration into established porous media flow and reactive transport models; (2) incorporating root/shoot transport, growth, photosynthesis and carbon allocation process models into an integrated plant physiology model; and (3) incorporating transpiration, Volatile Organic Compounds (VOC) emission, particulate deposition and local atmospheric processes into a coupled plant/atmosphere model. The integrated plant ecosystem simulation capability is being developed as open source process modules and associated interfaces under a modeling framework. The initial focus addresses the coupling of root growth, vascular transport system, and soil under drought scenarios. Two types of root water uptake modeling approaches are tested: continuous root distribution and constitutive root system architecture. The continuous root distribution models are based on spatially averaged root development process parameters, which are relatively straightforward to accommodate in the continuum soil flow and reactive transport module. Conversely, the constitutive root system architecture models use root growth rates, root growth direction, and root branching to evolve explicit root geometries. The branching topologies require more complex data structures and additional input parameters. Preliminary results are presented for root model development and the vascular response to temporal and spatial variations in soil conditions.

  4. On the problem of modeling for parameter identification in distributed structures

    NASA Technical Reports Server (NTRS)

    Norris, Mark A.; Meirovitch, Leonard

    1988-01-01

    Structures are often characterized by parameters, such as mass and stiffness, that are spatially distributed. Parameter identification of distributed structures is subject to many of the difficulties involved in the modeling problem, and the choice of the model can greatly affect the results of the parameter identification process. Analogously to control spillover in the control of distributed-parameter systems, identification spillover is shown to exist as well and its effect is to degrade the parameter estimates. Moreover, as in modeling by the Rayleigh-Ritz method, it is shown that, for a Rayleigh-Ritz type identification algorithm, an inclusion principle exists in the identification of distributed-parameter systems as well, so that the identified natural frequencies approach the actual natural frequencies monotonically from above.

  5. Implementing an SIG based platform of application and service for city spatial information in Shanghai

    NASA Astrophysics Data System (ADS)

    Yu, Bailang; Wu, Jianping

    2006-10-01

    Spatial Information Grid (SIG) is an infrastructure that provides spatial information services according to users' needs by collecting, sharing, organizing and processing massive distributed spatial information resources. This paper presents the architecture, technologies and implementation of the Shanghai City Spatial Information Application and Service System, a SIG-based platform that serves the administration, planning, construction and development of the city. The System covers ten categories of spatial information resources: city planning, land use, real estate, river system, transportation, municipal facility construction, environment protection, sanitation, urban afforestation and basic geographic information data. In addition, spatial information processing services are offered as GIS Web Services. The resources and services are distributed across different web-based nodes. A single database is created to store the metadata of all the spatial information. A portal site is published as the main user interface of the System, with three main functions. First, users can search the metadata and then acquire the distributed data from the search results. Second, spatial processing web applications developed with GIS Web Services, such as file format conversion, spatial coordinate transformation, cartographic generalization and spatial analysis, are offered for use. Third, the GIS Web Services currently available in the System can be searched and new ones can be registered. The System has been operating efficiently on the Shanghai Government Network since 2005.

  6. Multi-kw dc power distribution system study program

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1974-01-01

    The first phase of the Multi-kw dc Power Distribution Technology Program is reported; it involves the test and evaluation of a technology breadboard in a specifically designed test facility, according to design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high-voltage distribution systems were studied in order to gain a technology base for the use of high-voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented, including data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.

  7. A data and information system for processing, archival, and distribution of data for global change research

    NASA Technical Reports Server (NTRS)

    Graves, Sara J.

    1994-01-01

    Work on this project was focused on information management techniques for Marshall Space Flight Center's EOSDIS Version 0 Distributed Active Archive Center (DAAC). The centerpiece of this effort has been participation in EOSDIS catalog interoperability research, the result of which is a distributed Information Management System (IMS) allowing the user to query the inventories of all the DAAC's from a single user interface. UAH has provided the MSFC DAAC database server for the distributed IMS, and has contributed to definition and development of the browse image display capabilities in the system's user interface. Another important area of research has been in generating value-based metadata through data mining. In addition, information management applications for local inventory and archive management, and for tracking data orders were provided.

  8. Garbage Collection in a Distributed Object-Oriented System

    NASA Technical Reports Server (NTRS)

    Gupta, Aloke; Fuchs, W. Kent

    1993-01-01

    An algorithm is described in this paper for garbage collection in distributed systems with object sharing across processor boundaries. The algorithm allows local garbage collection at each node in the system to proceed independently of local collection at the other nodes. It requires no global synchronization or knowledge of the global state of the system and exhibits the capability of graceful degradation. The concept of a specialized dump node is proposed to facilitate the collection of inaccessible circular structures. An experimental evaluation of the algorithm is also described. The algorithm is compared with a corresponding scheme that requires global synchronization. The results show that the algorithm works well in distributed processing environments even when the locality of object references is low.

  9. Design and Implementation of Distributed Crawler System Based on Scrapy

    NASA Astrophysics Data System (ADS)

    Fan, Yuhao

    2018-01-01

    At present, some large-scale search engines at home and abroad provide users only with non-customized search services, and a single-machine web crawler cannot handle such demanding crawling tasks. In this paper, through study of the original Scrapy framework, that framework is improved by combining Scrapy and Redis: a distributed crawler system based on the Scrapy framework is designed and implemented, and the Bloom Filter algorithm is applied in the dupefilter module to reduce memory consumption. The movie information crawled from douban is stored in MongoDB, so that the data can be processed and analyzed. The results show that the distributed crawler system based on the Scrapy framework is more efficient and stable than a single-machine web crawler system.
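
    The memory saving comes from the Bloom filter's fixed-size bit array: membership tests may rarely give false positives (a URL wrongly reported as already seen) but never false negatives. A minimal pure-Python version of such a dupefilter is sketched below; the sizes, hash count and example URL are assumptions, and in a Scrapy-Redis deployment the bit array would typically live in a shared Redis bitmap (via SETBIT/GETBIT) rather than in local memory.

        import hashlib

        class BloomFilter:
            """Fixed-memory set for URL de-duplication (illustrative sketch)."""

            def __init__(self, size_bits=1 << 24, num_hashes=7):
                self.size, self.k = size_bits, num_hashes
                self.bits = bytearray(size_bits // 8)

            def _positions(self, url):
                # Derive k bit positions from salted SHA-256 digests.
                for i in range(self.k):
                    d = hashlib.sha256(f"{i}:{url}".encode()).digest()
                    yield int.from_bytes(d[:8], "big") % self.size

            def add(self, url):
                for p in self._positions(url):
                    self.bits[p // 8] |= 1 << (p % 8)

            def __contains__(self, url):
                return all(self.bits[p // 8] & (1 << (p % 8))
                           for p in self._positions(url))

        seen = BloomFilter()
        url = "https://movie.douban.com/subject/1292052/"   # hypothetical request
        if url not in seen:
            seen.add(url)        # schedule the request, then remember it
        print(url in seen)       # True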

  10. A wireless data acquisition system for acoustic emission testing

    NASA Astrophysics Data System (ADS)

    Zimmerman, A. T.; Lynch, J. P.

    2013-01-01

    As structural health monitoring (SHM) systems have seen increased demand due to lower costs and greater capabilities, wireless technologies have emerged that enable the dense distribution of transducers and the distributed processing of sensor data. In parallel, ultrasonic techniques such as acoustic emission (AE) testing have become increasingly popular in the non-destructive evaluation of materials and structures. These techniques, which involve the analysis of frequency content between 1 kHz and 1 MHz, have proven effective in detecting the onset of cracking and other early-stage failure in active structures such as airplanes in flight. However, these techniques typically involve the use of expensive and bulky monitoring equipment capable of accurately sensing AE signals at sampling rates greater than 1 million samples per second. In this paper, a wireless data acquisition system is presented that is capable of collecting, storing, and processing AE data at rates of up to 20 MHz. Processed results can then be wirelessly transmitted in real-time, creating a system that enables the use of ultrasonic techniques in large-scale SHM systems.

  11. Super and parallel computers and their impact on civil engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamat, M.P.

    1986-01-01

    This book presents the papers given at a conference on the use of supercomputers in civil engineering. Topics considered at the conference included solving nonlinear equations on a hypercube, a custom architectured parallel processing system, distributed data processing, algorithms, computer architecture, parallel processing, vector processing, computerized simulation, and cost benefit analysis.

  12. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California at the Space Telescope Test Control Center, it is used in the preflight testing of the vehicle; and (2) in Greenbelt, Maryland at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
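
    The three cooperating processes map naturally onto a message-passing sketch; the queue-based Python version below mirrors the division of labor (data management, inference, I/O) with an invented telemetry stream and a single threshold standing in for the rule base.

        import multiprocessing as mp

        def data_management(out_q):
            # Acquire and route incoming sensor data (here: a canned stream).
            for reading in (98.6, 99.1, 104.2, 97.8):
                out_q.put(reading)
            out_q.put(None)                        # end-of-stream sentinel

        def inference(in_q, out_q):
            # Analyze state/health; one threshold stands in for the rule base.
            while (reading := in_q.get()) is not None:
                out_q.put((reading, "ALARM" if reading > 100.0 else "OK"))
            out_q.put(None)

        def io_display(in_q):
            # Update the operator display as results arrive.
            while (item := in_q.get()) is not None:
                print("telemetry=%.1f status=%s" % item)

        if __name__ == "__main__":
            q1, q2 = mp.Queue(), mp.Queue()
            procs = [mp.Process(target=data_management, args=(q1,)),
                     mp.Process(target=inference, args=(q1, q2)),
                     mp.Process(target=io_display, args=(q2,))]
            for p in procs:
                p.start()
            for p in procs:
                p.join()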

  13. Computational Unification: a Vision for Connecting Researchers

    NASA Astrophysics Data System (ADS)

    Troy, R. M.; Kingrey, O. J.

    2002-12-01

    Computational Unification of science, once only a vision, is becoming a reality. This technology is based upon a scientifically defensible, general solution for Earth Science data management and processing. The computational unification of science offers a real opportunity to foster inter and intra-discipline cooperation, and the end of 're-inventing the wheel'. As we move forward using computers as tools, it is past time to move from computationally isolating, "one-off" or discipline-specific solutions into a unified framework where research can be more easily shared, especially with researchers in other disciplines. The author will discuss how distributed meta-data, distributed processing and distributed data objects are structured to constitute a working interdisciplinary system, including how these resources lead to scientific defensibility through known lineage of all data products. Illustration of how scientific processes are encapsulated and executed illuminates how previously written processes and functions are integrated into the system efficiently and with minimal effort. Meta-data basics will illustrate how intricate relationships may easily be represented and used to good advantage. Retrieval techniques will be discussed including trade-offs of using meta-data versus embedded data, how the two may be integrated, and how simplifying assumptions may or may not help. This system is based upon the experience of the Sequoia 2000 and BigSur research projects at the University of California, Berkeley, whose goals were to find an alternative to the Hughes EOS-DIS system and is presently offered by Science Tools corporation, of which the author is a principal.

  14. Effects of Spatial Gradients on Electron Runaway Acceleration

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Ljepojevic, N. N.

    1996-01-01

    The runaway process is known to accelerate electrons in many laboratory plasmas and has been suggested as an acceleration mechanism in some astrophysical plasmas, including solar flares. Current calculations of the electron velocity distributions resulting from the runaway process are greatly restricted because they impose spatial homogeneity on the distribution. We have computed runaway distributions which include consistent development of spatial gradients in the energetic tail. Our solution for the electron velocity distribution is presented as a function of distance along a finite length acceleration region, and is compared with the equivalent distribution for the infinitely long homogenous system (i.e., no spatial gradients), as considered in the existing literature. All these results are for the weak field regime. We also discuss the severe restrictiveness of this weak field assumption.

  15. Remote voice training: A case study on space shuttle applications, appendix C

    NASA Technical Reports Server (NTRS)

    Mollakarimi, Cindy; Hamid, Tamin

    1990-01-01

    The Tile Automation System includes applications of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. An integrated set of rapid prototyping testbeds was developed which include speech recognition and synthesis, laser imaging systems, distributed Ada programming environments, distributed relational data base architectures, distributed computer network architectures, multi-media workbenches, and human factors considerations. Remote voice training in the Tile Automation System is discussed. The user is prompted over a headset by synthesized speech for the training sequences. The voice recognition units and the voice output units are remote from the user and are connected by Ethernet to the main computer system. A supervisory channel is used to monitor the training sequences. Discussions include the training approaches as well as the human factors problems and solutions for this system utilizing remote training techniques.

  16. Systems of Multiple Planets

    NASA Astrophysics Data System (ADS)

    Marcy, G. W.; Fischer, D. A.; Butler, R. P.; Vogt, S. S.

    To date, 10 stars are known which harbor two or three planets. These systems reveal secular and mean-motion resonances in some cases and consist of widely separated, eccentric orbits in others. Both of the triple-planet systems, namely Upsilon And and 55 Cancri, exhibit evidence of resonances. The two planets orbiting GJ 876 exhibit both mean-motion and secular resonances and perturb each other so strongly that the evolution of the orbits is revealed in the Doppler measurements. The common occurrence of resonances suggests that delicate dynamical processes often shape the architecture of planetary systems. Likely processes include planet migration in a viscous disk, eccentricity pumping by the planet-disk interaction, and resonance capture of two planets. We find a class of "hierarchical" double-planet systems characterized by two planets in widely separated orbits, defined to have orbital period ratios greater than 5 to 1. In such systems, resonant interactions are weak, leaving high-order interactions and Kozai resonances plausibly important. We compare the planets that are single with those in multiple systems. We find that neither the two mass distributions nor the two eccentricity distributions are significantly different. This similarity in single and multiple systems suggests that similar dynamical processes may operate in both. The origin of eccentricities may stem from a multi-planet past or from interactions between planets and disk. Multiple planets in resonances can pump their eccentricities, resulting in one planet being ejected from the system or sent into the star and leaving a (more massive) single planet in an eccentric orbit. The distribution of semimajor axes of all known extrasolar planets rises toward larger orbits, portending a population of gas-giant planets that reside beyond 3 AU, arguably in less perturbed, more circular orbits.

  17. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 RISC processor, VHSIC ASICs for high-speed, reliable inter-node communications, and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada-implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network supports all ground processing applications, has merit for a space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  18. Foundational Report Series: Advanced Distribution Management Systems for Grid Modernization, High-Level Use Cases for DMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui; Lu, Xiaonan; Martino, Sal

    Many distribution management systems (DMS) projects have achieved limited success because the electric utility did not sufficiently plan for actual use of the DMS functions in the control room environment. As a result, end users were not clear on how to use the new application software in actual production environments with existing, well-established business processes. An important first step in the DMS implementation process is development and refinement of the “to be” business processes. Development of use cases for the required DMS application functions is a key activity that leads to the formulation of the “to be” requirements. It is also an important activity that is needed to develop specifications that are used to procure a new DMS.

  19. Can airborne ultrasound monitor bubble size in chocolate?

    NASA Astrophysics Data System (ADS)

    Watson, N.; Hazlehurst, T.; Povey, M.; Vieira, J.; Sundara, R.; Sandoz, J.-P.

    2014-04-01

    Aerated chocolate products consist of solid chocolate with the inclusion of bubbles and are a popular consumer product in many countries. The volume fraction and size distribution of the bubbles has an effect on their sensory properties and manufacturing cost. For these reasons it is important to have an online real time process monitoring system capable of measuring their bubble size distribution. As these products are eaten by consumers it is desirable that the monitoring system is non contact to avoid food contaminations. In this work we assess the feasibility of using an airborne ultrasound system to monitor the bubble size distribution in aerated chocolate bars. The experimental results from the airborne acoustic experiments were compared with theoretical results for known bubble size distributions using COMSOL Multiphysics. This combined experimental and theoretical approach is used to develop a greater understanding of how ultrasound propagates through aerated chocolate and to assess the feasibility of using airborne ultrasound to monitor bubble size distribution in these systems. The results indicated that a smaller bubble size distribution would result in an increase in attenuation through the product.

  20. Electronic Timekeeping: North Dakota State University Improves Payroll Processing.

    ERIC Educational Resources Information Center

    Vetter, Ronald J.; And Others

    1993-01-01

    North Dakota State University has adopted automated timekeeping to improve the efficiency and effectiveness of payroll processing. The microcomputer-based system accurately records and computes employee time, tracks labor distribution, accommodates complex labor policies and company pay practices, provides automatic data processing and reporting,…

  1. Hadoop-based implementation of processing medical diagnostic records for visual patient system

    NASA Astrophysics Data System (ADS)

    Yang, Yuanyuan; Shi, Liehang; Xie, Zhe; Zhang, Jianguo

    2018-03-01

    We innovatively introduced the Visual Patient (VP) concept and method for visually representing and indexing patient imaging diagnostic records (IDR) at last year's SPIE Medical Imaging conference (SPIE MI 2017); VP enables a doctor to review a large amount of a patient's IDR in a limited appointment time slot. In this presentation, we present a new approach to designing the data processing architecture of the VP system (VPS) to acquire, process and store various kinds of IDR and build a VP instance for each patient in a hospital environment, based on a Hadoop distributed processing structure. We designed this system architecture, called the Medical Information Processing System (MIPS), as a combination of the Hadoop batch processing architecture and the Storm stream processing architecture. MIPS implements efficient parallel processing of various kinds of clinical data coming from disparate hospital information systems such as PACS, RIS, LIS and HIS.

  2. NASA Remote Sensing Data in Earth Sciences: Processing, Archiving, Distribution, Applications at the GES DISC

    NASA Technical Reports Server (NTRS)

    Leptoukh, Gregory G.

    2005-01-01

    The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) is one of the major Distributed Active Archive Centers (DAACs) archiving and distributing remote sensing data from NASA's Earth Observing System. In addition to providing data, the GES DISC/DAAC has developed various value-adding processing services. A particularly useful service is data processing at the DISC (i.e., close to the input data) with the users' algorithms. This can take a number of different forms: as a configuration-managed algorithm within the main processing stream; as a stand-alone program next to the on-line data storage; as build-it-yourself code within the Near-Archive Data Mining (NADM) system; or as an on-the-fly analysis with simple algorithms embedded into the web-based tools (to avoid unnecessary downloading of all the data). The existing data management infrastructure at the GES DISC supports a wide spectrum of options, from subsetting data spatially and/or by parameter to sophisticated on-line analysis tools, producing economies of scale and rapid time-to-deploy. Shifting the processing and data management burden from users to the GES DISC allows scientists to concentrate on science, while the GES DISC handles data management and processing at a lower cost. Several examples of successful partnerships with scientists in the area of data processing and mining are presented.

  3. On buffer overflow duration in a finite-capacity queueing system with multiple vacation policy

    NASA Astrophysics Data System (ADS)

    Kempa, Wojciech M.

    2017-12-01

    A finite-buffer queueing system with Poisson arrivals and generally distributed processing times, operating under a multiple vacation policy, is considered. Each time the system empties, the service station takes successive independent and identically distributed vacation periods until, at the completion epoch of one of them, at least one job waiting for service is detected in the buffer. Applying an analytical approach based on the idea of an embedded Markov chain, integral equations and linear algebra, a compact-form representation for the cumulative distribution function (CDF) of the duration of the first buffer overflow period is found. Hence, the formula for the CDF of subsequent such periods is obtained. Moreover, probability distributions of the number of job losses in successive buffer overflow periods are found. The considered queueing system can be efficiently applied in modelling energy saving mechanisms in wireless network communication.
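
    For intuition, the sketch below estimates the same CDF by discrete-event simulation rather than by the paper's analytical route. The service and vacation laws, the capacity, and the convention that an overflow period runs from the instant the buffer saturates until the next departure frees a slot are all assumptions made for illustration.

        import heapq
        import random

        random.seed(1)
        LAM, K = 1.5, 8                                      # arrival rate; capacity
        service = lambda: random.lognormvariate(-0.5, 0.7)   # assumed service law
        vacation = lambda: random.expovariate(1.0)           # assumed vacation law

        def first_overflow_duration():
            q, full_since = 0, None          # jobs in system; overflow start time
            events = [(random.expovariate(LAM), "arr"), (vacation(), "vac")]
            heapq.heapify(events)
            while True:
                t, kind = heapq.heappop(events)
                if kind == "arr":
                    heapq.heappush(events, (t + random.expovariate(LAM), "arr"))
                    if q < K:
                        q += 1
                    elif full_since is None:
                        full_since = t       # buffer saturated: period opens
                elif kind == "vac":
                    if q > 0:                # vacation ends with work waiting
                        heapq.heappush(events, (t + service(), "dep"))
                    else:                    # still empty: take another vacation
                        heapq.heappush(events, (t + vacation(), "vac"))
                else:                        # departure
                    q -= 1
                    if full_since is not None:
                        return t - full_since  # a slot frees up: period closes
                    if q > 0:
                        heapq.heappush(events, (t + service(), "dep"))
                    else:
                        heapq.heappush(events, (t + vacation(), "vac"))

        samples = sorted(first_overflow_duration() for _ in range(1000))
        for x in (0.25, 0.5, 1.0, 2.0):      # empirical CDF of the duration
            print(f"P(T <= {x}) ~ {sum(s <= x for s in samples) / len(samples):.3f}")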

  4. The BaBar Data Reconstruction Control System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceseracciu, A

    2005-04-20

    The BaBar experiment is characterized by extremely high luminosity and a very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a Control System has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of OO design. The infrastructure is well isolated from the processing layer; it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful Finite State Machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in 9 farms.

  5. The BaBar Data Reconstruction Control System

    NASA Astrophysics Data System (ADS)

    Ceseracciu, A.; Piemontese, M.; Tehrani, F. S.; Pulliam, T. M.; Galeazzi, F.

    2005-08-01

    The BaBar experiment is characterized by extremely high luminosity and very large volume of data produced and stored, with increasing computing requirements each year. To fulfill these requirements a control system has been designed and developed for the offline distributed data reconstruction system. The control system described in this paper provides the performance and flexibility needed to manage a large number of small computing farms, and takes full benefit of object oriented (OO) design. The infrastructure is well isolated from the processing layer, it is generic and flexible, based on a light framework providing message passing and cooperative multitasking. The system is distributed in a hierarchical way: the top-level system is organized in farms, farms in services, and services in subservices or code modules. It provides a powerful finite state machine framework to describe custom processing models in a simple regular language. This paper describes the design and evolution of this control system, currently in use at SLAC and Padova on ~450 CPUs organized in nine farms.
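
    A finite state machine framework of this kind can be tiny at its core: states, events, and a transition table with attached actions. The sketch below is an illustrative reduction, not the BaBar framework's actual language; the farm states and actions are invented.

        class StateMachine:
            """Minimal FSM: transitions maps (state, event) -> (next_state, action)."""

            def __init__(self, initial, transitions):
                self.state, self.transitions = initial, transitions

            def fire(self, event):
                next_state, action = self.transitions[(self.state, event)]
                if action:
                    action()
                self.state = next_state

        farm = StateMachine("idle", {
            ("idle", "start"):      ("processing", lambda: print("spawn workers")),
            ("processing", "done"): ("merging",    lambda: print("merge output")),
            ("processing", "fail"): ("idle",       lambda: print("clean up")),
            ("merging", "done"):    ("idle",       lambda: print("run complete")),
        })
        for e in ("start", "done", "done"):
            farm.fire(e)             # idle -> processing -> merging -> idle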

  6. Advanced Distributed Measurements and Data Processing at the Vibro-Acoustic Test Facility, GRC Space Power Facility, Sandusky, Ohio - an Architecture and an Example

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.; Evans, Richard K.

    2009-01-01

    A large-scale, distributed, high-speed data acquisition system (HSDAS) is currently being installed at the Space Power Facility (SPF) at NASA Glenn Research Center's Plum Brook Station in Sandusky, OH. This installation is being done as part of a facility construction project to add Vibro-acoustic Test Capabilities (VTC) to the current thermal-vacuum testing capability of SPF in support of the Orion Project's requirement for Space Environments Testing (SET). The HSDAS architecture is a modular design, which utilizes fully-remotely managed components, enables the system to support multiple test locations with a wide range of measurement types and a very large system channel count. The architecture of the system is presented along with details on system scalability and measurement verification. In addition, the ability of the system to automate many of its processes such as measurement verification and measurement system analysis is also discussed.

  7. Hadoop distributed batch processing for Gaia: a success story

    NASA Astrophysics Data System (ADS)

    Riello, Marco

    2015-12-01

    The DPAC Cambridge Data Processing Centre (DPCI) is responsible for the photometric calibration of the Gaia data, including the low-resolution spectra. The large data volume produced by Gaia (~26 billion transits/year), the complexity of its data stream and the self-calibrating approach pose unique challenges for the scalability, reliability and robustness of both the software pipelines and the operations infrastructure. DPCI was the first in DPAC to realise the potential of Hadoop and Map/Reduce and to adopt them as the core technologies for its infrastructure. This has proven a winning choice, giving DPCI unmatched processing throughput and reliability within DPAC, to the point that other DPCs have started following in our footsteps. In this talk we will present the software infrastructure developed to build the distributed and scalable batch data processing system that is currently used in production at DPCI, and the excellent performance of the system.

  8. Distributed Cognition and Process Management Enabling Individualized Translational Research: The NIH Undiagnosed Diseases Program Experience

    PubMed Central

    Links, Amanda E.; Draper, David; Lee, Elizabeth; Guzman, Jessica; Valivullah, Zaheer; Maduro, Valerie; Lebedev, Vlad; Didenko, Maxim; Tomlin, Garrick; Brudno, Michael; Girdea, Marta; Dumitriu, Sergiu; Haendel, Melissa A.; Mungall, Christopher J.; Smedley, Damian; Hochheiser, Harry; Arnold, Andrew M.; Coessens, Bert; Verhoeven, Steven; Bone, William; Adams, David; Boerkoel, Cornelius F.; Gahl, William A.; Sincan, Murat

    2016-01-01

    The National Institutes of Health Undiagnosed Diseases Program (NIH UDP) applies translational research systematically to diagnose patients with undiagnosed diseases. The challenge is to implement an information system enabling scalable translational research. The authors hypothesized that similar complex problems are resolvable through process management and the distributed cognition of communities. The team, therefore, built the NIH UDP integrated collaboration system (UDPICS) to form virtual collaborative multidisciplinary research networks or communities. UDPICS supports these communities through integrated process management, ontology-based phenotyping, biospecimen management, cloud-based genomic analysis, and an electronic laboratory notebook. UDPICS provided a mechanism for efficient, transparent, and scalable translational research and thereby addressed many of the complex and diverse research and logistical problems of the NIH UDP. Full definition of the strengths and deficiencies of UDPICS will require formal qualitative and quantitative usability and process improvement measurement. PMID:27785453

  9. Spatial analysis of extension fracture systems: A process modeling approach

    USGS Publications Warehouse

    Ferguson, C.C.

    1985-01-01

    Little consensus exists on how best to analyze natural fracture spacings and their sequences. Field measurements and analyses published in the geotechnical literature imply fracture processes radically different from those assumed by theoretical structural geologists. The approach adopted in this paper recognizes that disruption of rock layers by layer-parallel extension results in two spacing distributions, one representing layer-fragment lengths and another separation distances between fragments. These two distributions and their sequences reflect the mechanics and history of fracture and separation. Such distributions and sequences, represented by a 2 × n matrix of lengths L, can be analyzed using a method that is history sensitive and which also yields a scalar estimate of bulk extension, e(L). The method is illustrated by a series of Monte Carlo experiments representing a variety of fracture-and-separation processes, each with distinct implications for extension history. Resulting distributions of e(L) are process-specific, suggesting that the inverse problem of deducing fracture-and-separation history from final structure may be tractable. © 1985 Plenum Publishing Corporation.
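
    The flavor of such a Monte Carlo experiment is easy to convey: draw fragment lengths and separations, assemble the 2 × n matrix, and take the gap-to-fragment length ratio as the bulk extension estimate. The distributions below are illustrative assumptions, not those of the paper, and e(L) is computed here as total separation divided by total fragment length.

        import numpy as np

        rng = np.random.default_rng(42)

        def fracture_experiment(n_fragments=50):
            fragments = rng.lognormal(mean=0.0, sigma=0.4, size=n_fragments)
            gaps = rng.exponential(scale=0.15, size=n_fragments - 1)
            # 2 x n matrix: row 0 = fragment lengths, row 1 = trailing separations.
            L = np.vstack([fragments, np.append(gaps, 0.0)])
            return L, gaps.sum() / fragments.sum()    # structure, e(L)

        extensions = [fracture_experiment()[1] for _ in range(2000)]
        print(f"e(L): mean {np.mean(extensions):.3f}, "
              f"std {np.std(extensions):.3f}")        # process-specific spread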

  10. An autonomous fault detection, isolation, and recovery system for a 20-kHz electric power distribution test bed

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    Future space explorations will require long term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitor and control of the space environment subsystems by expert system software, which emulate human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.

  11. Smart-DS: Synthetic Models for Advanced, Realistic Testing: Distribution Systems and Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnan, Venkat K; Palmintier, Bryan S; Hodge, Brian S

    The National Renewable Energy Laboratory (NREL) in collaboration with Massachusetts Institute of Technology (MIT), Universidad Pontificia Comillas (Comillas-IIT, Spain) and GE Grid Solutions, is working on an ARPA-E GRID DATA project, titled Smart-DS, to create: 1) High-quality, realistic, synthetic distribution network models, and 2) Advanced tools for automated scenario generation based on high-resolution weather data and generation growth projections. Through these advancements, the Smart-DS project is envisioned to accelerate the development, testing, and adoption of advanced algorithms, approaches, and technologies for sustainable and resilient electric power systems, especially in the realm of U.S. distribution systems. This talk will present the goals and overall approach of the Smart-DS project, including the process of creating the synthetic distribution datasets using reference network model (RNM) and the comprehensive validation process to ensure network realism, feasibility, and applicability to advanced use cases. The talk will provide demonstrations of early versions of synthetic models, along with the lessons learnt from expert engagements to enhance future iterations. Finally, the scenario generation framework, its development plans, and co-ordination with GRID DATA repository teams to house these datasets for public access will also be discussed.

  12. Distributional behavior of diffusion coefficients obtained by single trajectories in annealed transit time model

    NASA Astrophysics Data System (ADS)

    Akimoto, Takuma; Yamamoto, Eiji

    2016-12-01

    Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of the distributional behavior of time-averaged diffusion coefficients in disordered systems.
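
    A toy simulation in the spirit of the model (the sojourn-time coupling tau ~ 1/D and all parameters below are assumptions made here, not the authors' exact annealed transit time model) shows how time-averaged diffusion coefficients scatter across single trajectories even for long measurements:

    import numpy as np

    rng = np.random.default_rng(1)

    def attm_trajectory(T, dt=1.0):
        """Toy annealed-transit-time-style trajectory (illustrative only):
        the local diffusion coefficient D is redrawn at random transit times,
        with longer sojourns in low-D regions (tau ~ 1/D, an assumption)."""
        x = np.zeros(T)
        t = 0
        while t < T - 1:
            D = rng.uniform(0.05, 1.0)      # local diffusion coefficient
            tau = max(1, int(1.0 / D))      # assumed coupling: slow = sticky
            steps = min(tau, T - 1 - t)
            x[t+1:t+1+steps] = x[t] + np.cumsum(
                np.sqrt(2 * D * dt) * rng.standard_normal(steps))
            t += steps
        return x

    def time_averaged_msd(x, lag):
        disp = x[lag:] - x[:-lag]
        return np.mean(disp**2)

    # Time-averaged diffusion coefficients scatter across trajectories
    # even for long measurements, illustrating irreproducibility.
    lag = 10
    D_ta = [time_averaged_msd(attm_trajectory(20000), lag) / (2 * lag)
            for _ in range(20)]
    print("time-averaged D across 20 trajectories:",
          np.round(np.sort(D_ta), 3))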

  13. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, where information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in itself. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.

  14. Occurrence of Mycobacteria in Water Treatment Lines and in Water Distribution Systems

    PubMed Central

    Le Dantec, Corinne; Duguet, Jean-Pierre; Montiel, Antoine; Dumoutier, Nadine; Dubrou, Sylvie; Vincent, Véronique

    2002-01-01

    The frequency of recovery of atypical mycobacteria was estimated in two treatment plants providing drinking water to Paris, France, at some intermediate stages of treatment. The two plants use two different filtration processes, rapid and slow sand filtration. Our results suggest that slow sand filtration is more efficient for removing mycobacteria than rapid sand filtration. In addition, our results show that mycobacteria can colonize and grow on granular activated carbon and are able to enter distribution systems. We also investigated the frequency of recovery of mycobacteria in the water distribution system of Paris (outside buildings). The mycobacterial species isolated from the Paris drinking water distribution system are different from those isolated from the water leaving the treatment plants. Saprophytic mycobacteria (present in 41.3% of positive samples), potentially pathogenic mycobacteria (16.3%), and unidentifiable mycobacteria (54.8%) were isolated from 12 sites within the Paris water distribution system. Mycobacterium gordonae was preferentially recovered from treated surface water, whereas Mycobacterium nonchromogenicum was preferentially recovered from groundwater. No significant correlations were found among the presence of mycobacteria, the origin of water, and water temperature. PMID:12406720

  15. Copilot: Monitoring Embedded Systems

    NASA Technical Reports Server (NTRS)

    Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn

    2012-01-01

    Runtime verification (RV) is a natural fit for ultra-critical systems, where correctness is imperative. In ultra-critical systems, even if the software is fault-free, because of the inherent unreliability of commodity hardware and the adversity of operational environments, processing units (and their hosted software) are replicated, and fault-tolerant algorithms are used to compare the outputs. We investigate both software monitoring in distributed fault-tolerant systems and the implementation of fault-tolerance mechanisms using RV techniques. We describe the Copilot language and compiler, specifically designed for generating monitors for distributed, hard real-time systems. We also describe two case studies in which we generated Copilot monitors in avionics systems.
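
    Copilot itself is a Haskell-embedded stream language; as a language-neutral illustration of the fault-tolerant output comparison such monitors target, the following sketch (sample values and messages are invented, not from the Copilot case studies) votes over replicated unit outputs and flags disagreements:

    from collections import Counter

    def majority_vote(outputs):
        """Majority output of replicated units, or None when no strict
        majority exists (a detected, unmaskable fault)."""
        value, count = Counter(outputs).most_common(1)[0]
        return value if count > len(outputs) // 2 else None

    def monitor(samples):
        """Flag every unit whose output disagrees with the voted majority."""
        for i, outputs in enumerate(samples):
            voted = majority_vote(outputs)
            if voted is None:
                print(f"sample {i}: no majority - unmaskable fault")
                continue
            for unit, out in enumerate(outputs):
                if out != voted:
                    print(f"sample {i}: unit {unit} disagrees ({out!r} vs {voted!r})")

    # Three redundant units; unit 2 glitches on the second sample.
    monitor([(10, 10, 10), (11, 11, 13), (12, 12, 12)])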

  16. Information processing in the primate visual system - An integrated systems perspective

    NASA Technical Reports Server (NTRS)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  17. Inter-computer communication architecture for a mixed redundancy distributed system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Adams, Stuart J.

    1987-01-01

    The triply redundant intercomputer network for the Advanced Information Processing System (AIPS), an architecture developed to serve as the core avionics system for a broad range of aerospace vehicles, is discussed. The AIPS intercomputer network provides a high-speed, Byzantine-fault-resilient communication service between processing sites, even in the presence of arbitrary failures of simplex and duplex processing sites on the IC network. The IC network contention poll has evolved from the Laning Poll. An analysis of the failure modes and effects and a simulation of the AIPS contention poll demonstrate the robustness of the system.

  18. Classification of cognitive systems dedicated to data sharing

    NASA Astrophysics Data System (ADS)

    Ogiela, Lidia; Ogiela, Marek R.

    2017-08-01

    This paper presents a classification of new cognitive information systems dedicated to cryptographic data splitting and sharing processes. Cognitive processes of semantic data analysis and interpretation are used to describe new classes of intelligent information and vision systems. In addition, cryptographic data splitting algorithms and cryptographic threshold schemes are used to improve the secure and efficient management of information with such cognitive systems. The utility of the proposed cognitive sharing procedures and distributed data sharing algorithms is also presented, and a few possible applications of cognitive approaches to visual information management and encryption are described.

  19. Distributed Cognition and Distributed Morality: Agency, Artifacts and Systems.

    PubMed

    Heersmink, Richard

    2017-04-01

    There are various philosophical approaches and theories describing the intimate relation people have to artifacts. In this paper, I explore the relation between two such theories, namely distributed cognition and distributed morality theory. I point out a number of similarities and differences in these views regarding the ontological status they attribute to artifacts and the larger systems they are part of. Having evaluated and compared these views, I continue by focussing on the way cognitive artifacts are used in moral practice. I specifically conceptualise how such artifacts (a) scaffold and extend moral reasoning and decision-making processes, (b) have a certain moral status which is contingent on their cognitive status, and (c) raise the question of whether responsibility can be attributed to distributed systems. This paper is primarily written for those interested in the intersection of cognitive and moral theory as it relates to artifacts, but also for those independently interested in philosophical debates in extended and distributed cognition and ethics of (cognitive) technology.

  20. GOES-R User Data Types and Structure

    NASA Astrophysics Data System (ADS)

    Royle, A. W.

    2012-12-01

    GOES-R meteorological data is provided to the operational and science user community through four main distribution mechanisms. The GOES-R Ground Segment (GS) generates a set of Level 1b (L1b) data from each of the six primary satellite instruments and formats the data into a direct broadcast stream known as GOES Rebroadcast (GRB). Terrestrially, cloud and moisture imagery data is provided to forecasters at the National Weather Service (NWS) through a direct interface to the Advanced Weather Interactive Processing System (AWIPS). A secondary pathway for the user community to receive data terrestrially is via NOAA's Environmental Satellite Processing and Distribution System (ESPDS) Product Distribution and Access (PDA) system. The ESPDS PDA will service the NWS and other meteorological users through a data portal, which provides both a subscription service and an ad hoc query capability. Finally, GOES-R data is made available to NOAA's Comprehensive Large Array-Data Stewardship System (CLASS) for long-term archive. CLASS data includes the L1b and L2+ products sent to PDA, along with the Level 0 data used to create these products, and other data used for product generation and processing. This session will provide a summary description of the data types and formats associated with each of the four primary distribution pathways for user data from GOES-R. It will discuss the resources that are being developed by GOES-R to document the data structures and formats. It will also provide a brief introduction to the types of metadata associated with each of the primary data flows.

  1. ENDOCRINE DISRUPTORS IN THE ENVIRONMENT

    EPA Science Inventory

    The endocrine system produces hormones which are powerful natural chemicals that regulate important life processes. Endocrine disruptors are human-made chemicals distributed globally which have the potential to interfere with the endocrine system and produce serious biological e...

  2. Low Density Materials

    DTIC Science & Technology

    2012-03-09


  3. On Maximal Hard-Core Thinnings of Stationary Particle Processes

    NASA Astrophysics Data System (ADS)

    Hirsch, Christian; Last, Günter

    2018-02-01

    The present paper studies the existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role in establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.
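
    For a concrete baseline, the sketch below implements Matérn type-II thinning, a standard (non-maximal) hard-core thinning of a stationary Poisson process; it is not the maximal construction studied in the paper, merely the kind of object the maximality criteria improve upon, with intensity and radius chosen here for illustration:

    import numpy as np

    rng = np.random.default_rng(2)

    # Matern type-II thinning: each point gets a random mark and is kept
    # only if no point within the hard-core radius carries a smaller mark.
    def matern_II(intensity, radius, size=10.0):
        n = rng.poisson(intensity * size * size)
        pts = rng.uniform(0, size, (n, 2))
        marks = rng.uniform(0, 1, n)
        keep = np.ones(n, dtype=bool)
        for i in range(n):
            d = np.linalg.norm(pts - pts[i], axis=1)
            conflict = (d < radius) & (d > 0) & (marks < marks[i])
            if conflict.any():
                keep[i] = False
        return pts[keep]

    thinned = matern_II(intensity=5.0, radius=0.5)
    # Disjoint grains of radius 0.25 around each retained point.
    print(f"{len(thinned)} points retained; "
          f"volume fraction ~ {len(thinned) * np.pi * 0.25**2 / 100:.3f}")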

  4. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large-scale level in a bioreactor system. Hence, it is important to set operating conditions (e.g. initial cell seeding distribution, culture medium flow rate) to an optimal level. Often, the initial cell seeding distribution factor is neglected and/or overlooked in the design of a bioreactor using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time/cost. The proposed method utilizes two computational models; the first model represents cell growth patterns whereas the second model determines optimal initial cell seeding positions for adherent cell expansions. Cell growth simulation from the first model demonstrates that the model can represent various cell types with known probabilities. The second model involves a combination of combinatorial optimization, Monte Carlo methods, and concepts of the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results have shown that the recommended input configurations obtained from the proposed optimization method are indeed optimal, and have also illustrated the effectiveness of the proposed optimization method. The potential of the proposed seeding distribution method as a useful tool to optimize the cell expansion process in modern bioreactor system applications is highlighted. Copyright © 2017 Elsevier Ltd. All rights reserved.
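
    A toy Monte Carlo comparison in the spirit of the paper's two models (the growth rule, grid size, and seeding layouts below are invented for illustration, not the authors' models) shows how the seeding layout alone changes expected scaffold coverage:

    import numpy as np

    rng = np.random.default_rng(3)

    def grow(seeds, shape=(40, 40), steps=15, p=0.5):
        """Seed a boolean scaffold grid, then let occupied cells spread to
        4-neighbours with probability p each step; return coverage."""
        grid = np.zeros(shape, dtype=bool)
        for r, c in seeds:
            grid[r, c] = True
        for _ in range(steps):
            nb = np.zeros(shape, dtype=bool)
            nb[1:, :] |= grid[:-1, :]
            nb[:-1, :] |= grid[1:, :]
            nb[:, 1:] |= grid[:, :-1]
            nb[:, :-1] |= grid[:, 1:]
            grid |= nb & (rng.random(shape) < p)
        return grid.mean()

    spread = [(r, c) for r in range(5, 40, 10) for c in range(5, 40, 10)]
    clumped = [(20 + dr, 20 + dc) for dr in range(4) for dc in range(4)]
    print("spread seeding coverage :", round(np.mean([grow(spread) for _ in range(20)]), 3))
    print("clumped seeding coverage:", round(np.mean([grow(clumped) for _ in range(20)]), 3))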

  5. Telescience - Optimizing aerospace science return through geographically distributed operations

    NASA Technical Reports Server (NTRS)

    Rasmussen, Daryl N.; Mian, Arshad M.

    1990-01-01

    The paper examines the objectives and requirements of telescience, defined as the means and process for scientists, NASA operations personnel, and astronauts to conduct payload operations as if they were colocated. This process is described in terms of Space Station era platforms. Some of the enabling technologies are discussed, including open architecture workstations, distributed computing, transaction management, expert systems, and high-speed networks. Recent testbedding experiments are surveyed to highlight some of the human factors requirements.

  6. The Development of Two Science Investigator-led Processing Systems (SIPS) for NASA's Earth Observation System (EOS)

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt

    2004-01-01

    In 2001, NASA Goddard Space Flight Center's Laboratory for Terrestrial Physics started the construction of a Science Investigator-led Processing System (SIPS) for processing data from the Ozone Monitoring Instrument (OMI), which will launch on the Aura platform in mid 2004. The Ozone Monitoring Instrument (OMI) is a contribution of the Netherlands Agency for Aerospace Programs (NIVR) in collaboration with the Finnish Meteorological Institute (FMI) to the Earth Observing System (EOS) Aura mission. It will continue the Total Ozone Mapping Spectrometer (TOMS) record for total ozone and other atmospheric parameters related to ozone chemistry and climate. OMI measurements will be highly synergistic with the other instruments on the EOS Aura platform. The LTP previously developed the Moderate Resolution Imaging Spectroradiometer (MODIS) Data Processing System (MODAPS), which has been in full operations since the launches of the Terra and Aqua spacecraft in December 1999 and May 2002, respectively. During that time, it has continually evolved to better support the needs of the MODIS team. We now run multiple instances of the system managing faster-than-real-time reprocessings of the data as well as continuing forward processing. The new OMI Data Processing System (OMIDAPS) was adapted from the MODAPS. It will ingest raw data from the satellite ground station and process it to produce calibrated, geolocated higher-level data products. These data products will be transmitted to the Goddard Distributed Active Archive Center (GDAAC) instance of the Earth Observing System (EOS) Data and Information System (EOSDIS) for long-term archive and distribution to the public. The OMIDAPS will also provide data distribution to the OMI Science Team for quality assessment, algorithm improvement, calibration, etc. We have taken advantage of lessons learned from the MODIS experience and software already developed for MODIS. We made some changes in the hardware system organization, database, and software to adapt the system for OMI. We replaced the fundamental database system, Sybase, with an Open Source RDBMS called PostgreSQL, and based the entire OMIDAPS on a cluster of Linux-based commodity computers rather than the large SGI servers that MODAPS uses. Rather than relying on a central I/O server host, the new system distributes its data archive among multiple server hosts in the cluster. OMI is also customizing the graphical user interfaces and reporting structure to more closely meet the needs of the OMI Science Team. Prior to 2003, simulated OMI data and the science algorithms were not ready for production testing. We initially constructed a prototype system and tested it using a 25-year dataset of Total Ozone Mapping Spectrometer (TOMS) and Solar Backscatter Ultraviolet Instrument (SBUV) data. This prototype system provided a platform to support the adaptation of the algorithms for OMI, and provided reprocessing of the historical data, aiding in its analysis. In a recent reanalysis of the TOMS data, the OMIDAPS processed 108,000 full orbits of data through 4 processing steps per orbit, producing about 800,000 files (400 GiB) of level 2 and greater data files. More recently we have installed two instances of the OMIDAPS for integration and testing of OMI science processes as they are delivered from the Science Team. A test instance of the OMIDAPS has also supported a series of "Interface Confidence Tests" (ICTs) and end-to-end ground system tests to ensure the launch readiness of the system. This paper will discuss the high-level hardware, software, and database organization of the OMIDAPS and how it builds on the MODAPS heritage system. It will also provide an overview of the testing and implementation of the production OMIDAPS.

  7. Visualizing Chemistry with Infrared Imaging

    ERIC Educational Resources Information Center

    Xie, Charles

    2011-01-01

    Almost all chemical processes release or absorb heat. The heat flow in a chemical system reflects the process it is undergoing. By showing the temperature distribution dynamically, infrared (IR) imaging provides a salient visualization of the process. This paper presents a set of simple experiments based on IR imaging to demonstrate its enormous…

  8. Characterizing Feedbacks Between Environmental Forcing and Sediment Characteristics in Fluvial and Coastal Systems

    NASA Astrophysics Data System (ADS)

    Feehan, S.; Ruggiero, P.; Hempel, L. A.; Anderson, D. L.; Cohn, N.

    2016-12-01

    Linking transport processes and sediment characteristics within different environments along the source-to-sink continuum provides critical insight into the dominant feedbacks between grain size distributions and morphological evolution. This research is focused on evaluating differences in sediment size distributions across both fluvial and coastal environments in the U.S. Pacific Northwest. The Cascades' high relief is characterized by diverse flow regimes, with high peak/flashy flows and sub-threshold flows occurring in relative proximity, and by one of the most energetic wave climates in the world. Combining analyses of both fluvial and coastal environments provides a broader understanding of the dominant forces driving differences between each system's grain size distributions, sediment transport processes, and resultant evolution. We consider sediment samples taken during a large-scale flume experiment that simulated floods representative of both high/flashy peak flows, analogous to runoff-dominated rivers, and sub-threshold flows, analogous to spring-fed rivers. High-discharge flows resulted in narrower grain size distributions, while those from low flows were less skewed. Relative sediment size showed clear dependence on distance from source and the environment's dominant fluid motion. Grain size distributions and sediment transport rates were also quantified in both the wave-dominated nearshore and the aeolian-dominated backshore portions of Long Beach Peninsula, Washington, during SEDEX2, the Sandbar-aEolian-Dune EXchange Experiment of summer 2016. The distributions showed spatial patterns in mean grain size, skewness, and kurtosis dependent on the dominant sediment transport process. The feedback between these grain size distributions and the predominant driver of sediment transport controls the potential for geomorphic change on societally relevant time scales in multiple settings.

  9. Hipe, Hipe, Hooray

    NASA Astrophysics Data System (ADS)

    Ott, Stephan; Herschel Science Ground Segment Consortium

    2010-05-01

    The Herschel Space Observatory, the fourth cornerstone mission in the ESA science program, was launched 14th of May 2009. With a 3.5 m telescope, it is the largest space telescope ever launched. Herschel's three instruments (HIFI, PACS, and SPIRE) perform photometry and spectroscopy in the 55 - 672 micron range and will deliver exciting science for the astronomical community during at least three years of routine observations. Since 2nd of December 2009 Herschel has been performing and processing observations in routine science mode. The development of the Herschel Data Processing System started eight years ago to support the data analysis for Instrument Level Tests. To fulfil the expectations of the astronomical community, additional resources were made available to implement a freely distributable Data Processing System capable of interactively and automatically reducing Herschel data at different processing levels. The system combines data retrieval, pipeline execution and scientific analysis in one single environment. The Herschel Interactive Processing Environment (HIPE) is the user-friendly face of Herschel Data Processing. The software is coded in Java and Jython to be platform independent and to avoid the need for commercial licenses. It is distributed under the GNU Lesser General Public License (LGPL), permitting everyone to access and to re-use its code. We will summarise the current capabilities of the Herschel Data Processing System and give an overview about future development milestones and plans, and how the astronomical community can contribute to HIPE. The Herschel Data Processing System is a joint development by the Herschel Science Ground Segment Consortium, consisting of ESA, the NASA Herschel Science Center, and the HIFI, PACS and SPIRE consortium members.

  10. Authentication techniques for smart cards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nelson, R.A.

    1994-02-01

    Smart card systems are most cost efficient when implemented as a distributed system, which is a system without central host interaction or a local database of card numbers for verifying transaction approval. A distributed system, as such, presents special card and user authentication problems. Fortunately, smart cards offer processing capabilities that provide solutions to authentication problems, provided the system is designed with proper data integrity measures. Smart card systems maintain data integrity through a security design that controls data sources and limits data changes. A good security design is usually a result of a system analysis that provides a thorough understanding of the application needs. Once designers understand the application, they may specify authentication techniques that mitigate the risk of system compromise or failure. Current authentication techniques include cryptography, passwords, challenge/response protocols, and biometrics. The security design includes these techniques to help prevent counterfeit cards, unauthorized use, or information compromise. This paper discusses card authentication and user identity techniques that enhance security for microprocessor card systems. It also describes the analysis process used for determining proper authentication techniques for a system.
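
    As one concrete instance of the challenge/response protocols mentioned, the sketch below shows a symmetric HMAC-based exchange between a terminal and a card; key handling and message framing are deliberately simplified here, and real deployments add transaction counters, per-card key diversification, and tamper-resistant key storage:

    import hmac, hashlib, secrets

    CARD_KEY = secrets.token_bytes(16)   # shared secret personalized into the card

    def card_respond(key, challenge):
        # Computed inside the card's secure processor.
        return hmac.new(key, challenge, hashlib.sha256).digest()

    def terminal_verify(key, challenge, response):
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

    challenge = secrets.token_bytes(12)          # fresh nonce per transaction
    response = card_respond(CARD_KEY, challenge)
    print("card authentic:", terminal_verify(CARD_KEY, challenge, response))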

  11. A PDA-based system for online recording and analysis of concurrent events in complex behavioral processes.

    PubMed

    Held, Jürgen; Manser, Tanja

    2005-02-01

    This article outlines how a Palm- or Newton-based PDA (personal digital assistant) system for online event recording was used to record and analyze concurrent events. We describe the features of this PDA-based system, called the FIT-System (flexible interface technique), and its application to the analysis of concurrent events in complex behavioral processes--in this case, anesthesia work processes. The patented FIT-System has a unique user interface design allowing the user to design an interface template with a pencil and paper or using a transparency film. The template usually consists of a drawing or sketch that includes icons or symbols that depict the observer's representation of the situation to be observed. In this study, the FIT-System allowed us to create a design for fast, intuitive online recording of concurrent events using a set of 41 observation codes. An analysis of concurrent events leads to a description of action density, and our results revealed a characteristic distribution of action density during the administration of anesthesia in the operating room. This distribution indicated the central role of the overlapping operations in the action sequences of medical professionals as they deal with the varying requirements of this complex task. We believe that the FIT-System for online recording of concurrent events in complex behavioral processes has the potential to be useful across a broad spectrum of research areas.
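
    The notion of action density can be made concrete with a small computation over timestamped observation codes (the events and code names below are invented; the FIT-System's own recording format differs): the density at time t is simply the number of concurrently open actions.

    # (start_s, end_s, code) - invented example events
    events = [
        (0, 30, "prepare_drugs"), (10, 25, "check_monitor"),
        (20, 60, "intubation"), (22, 35, "communicate"), (50, 70, "document"),
    ]

    def action_density(events, t_max, dt=1):
        """Count concurrently open actions at each time step."""
        density = []
        for t in range(0, t_max, dt):
            density.append(sum(start <= t < end for start, end, _ in events))
        return density

    d = action_density(events, t_max=70)
    print("peak concurrency:", max(d), "at t =", d.index(max(d)), "s")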

  12. Improved Seismic Acquisition System and Data Processing for the Italian National Seismic Network

    NASA Astrophysics Data System (ADS)

    Badiali, L.; Marcocci, C.; Mele, F.; Piscini, A.

    2001-12-01

    A new system for acquiring and processing digital signals has been developed in the last few years at the Istituto Nazionale di Geofisica e Vulcanologia (INGV). The system makes extensive use of internet communication protocol standards such as TCP and UDP, which serve as the transport highway inside the Italian network, and possibly in the near future outside it, to share or redirect data among processes. The Italian National Seismic Network has been working for about 18 years equipped with vertical short-period seismometers, transmitting through analog lines to the computer center in Rome. We are now concentrating our efforts on speeding the migration towards a fully digital network based on about 150 stations equipped with either broadband or 5-second sensors, connected to the data center partly through wired digital communication and partly through satellite digital communication. The overall process is layered through intranet and/or internet. Every layer gathers data in a simple format and provides data in a processed format, ready to be distributed to the next layer. The lowest level acquires seismic data (raw waveforms) coming from the remote stations; it handshakes, checks, and sends data over the LAN or WAN according to a distribution list of the machines whose programs are waiting for them. At the next level are the picking procedures, or "pickers", operating on a per-instrument basis and looking for phases. A picker spreads phases, again through the LAN or WAN and according to a distribution list, to one or more waiting locating machines tuned to generate a seismic event. The event locating procedure itself, the highest level in this stack, can exchange information with other similar procedures. Such a layered and distributed structure with nearby targets allows other seismic networks to join the processing and data collection of the same ongoing event, creating a virtual network larger than the original one. At present we plan to cooperate with other Italian regional and local networks, and with the VBB Mediterranean Network (MedNet), to share waveforms and events detected in real time. The seismic acquisition system at INGV uses a relational database built on standard SQL for every activity involving the seismic network.

  13. Accumulation of Contaminants in the Distribution System.

    EPA Science Inventory

    Removal of arsenic from water using iron-related processes including coagulation with iron salts, iron removal with oxidation/filtration, and specific iron resins is established. These processes are effective because iron solids including minerals and chemical floc have strong ad...

  14. Mathematical Modelling of Continuous Biotechnological Processes

    ERIC Educational Resources Information Center

    Pencheva, T.; Hristozov, I.; Shannon, A. G.

    2003-01-01

    Biotechnological processes (BTP) are characterized by a complicated structure of organization and interdependent characteristics. Partial differential equations or systems of partial differential equations are used for their behavioural description as objects with distributed parameters. Modelling of substrate without regard to dispersion…

  15. Signal processing for distributed sensor concept: DISCO

    NASA Astrophysics Data System (ADS)

    Rafailov, Michael K.

    2007-04-01

    The Distributed Sensor Concept (DISCO) is proposed to multiply individual sensor capabilities through cooperative target engagement. DISCO relies on the ability of signal processing software to format, process, transmit, and receive sensor data and to exploit those data in the signal synthesis process. Each sensor's data are synchronized, formatted, signal-to-noise ratio (SNR) enhanced, and distributed inside the sensor network. The signal processing technique for DISCO is Recursive Adaptive Frame Integration of Limited data (RAFIL), a technique initially proposed [1] as a way to improve the SNR, reduce the data rate, and mitigate FPA correlated noise in an individual sensor's digital video-signal processing. In DISCO, the RAFIL technique is used in a segmented way, with its constituents spatially and/or temporally separated between transmitters and receivers. These constituents include, though are not limited to, two thresholds - one tuned for optimum probability of detection, the other managing the required false alarm rate - with limited frame integration placed between the thresholds, as well as formatters, conventional integrators, and more. RAFIL allows a non-linear integration that, along with SNR gain, provides system designers more capability where cost, weight, or power considerations limit system data rate, processing, or memory capability [2]. The DISCO architecture allows flexible optimization of SNR gain, data rates, and noise suppression on the sensor side, and limited integration, re-formatting, and the final threshold on the node side. DISCO with Recursive Adaptive Frame Integration of Limited data may have a flexible architecture that allows segmenting the hardware and software to best suit specific DISCO applications and sensing needs - whether air or space platforms, ground terminals, or the integration of sensor networks.
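
    A toy rendition of the two-threshold chain described above might look like the following; the abstract does not give the exact algorithm, so the thresholds, the clipping limit, and the integration rule below are assumptions made purely for illustration:

    import numpy as np

    rng = np.random.default_rng(4)

    def rafil_detect(frames, t1=1.5, t2=8.0, limit=3.0):
        """frames: (n_frames, n_pixels). A first threshold passes likely
        target energy; exceedances are clipped to `limit` and integrated
        over frames; the integrated image is cut by a second threshold
        that manages the false alarm rate."""
        passed = np.where(frames > t1, np.minimum(frames, limit), 0.0)
        integrated = passed.sum(axis=0)
        return integrated > t2

    n_frames, n_pixels = 8, 1000
    frames = rng.normal(0, 1, (n_frames, n_pixels))
    frames[:, 123] += 2.0                  # weak target in pixel 123
    print("detections at pixels:", np.flatnonzero(rafil_detect(frames)))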

  16. Real-time simulation clock

    NASA Technical Reports Server (NTRS)

    Bennington, Donald R. (Inventor); Crawford, Daniel J. (Inventor)

    1990-01-01

    The invention is a clock for synchronizing operations within a high-speed, distributed data processing network. The clock is actually a distributed system comprising a central clock and multiple site clock interface units (SCIUs) which are connected by means of a fiber optic star network and which operate under control of separate clock software. The presently preferred embodiment is a part of the flight simulation system now in current use at the NASA Langley Research Center.

  17. The distributed agent-based approach in the e-manufacturing environment

    NASA Astrophysics Data System (ADS)

    Sękala, A.; Kost, G.; Dobrzańska-Danikiewicz, A.; Banaś, W.; Foit, K.

    2015-11-01

    The lack of a coherent flow of information from the production department causes unplanned downtime and failures of machines and their equipment, which in turn results in a production planning process based on incorrect and out-of-date information. All of these factors entail additional difficulties in decision-making, concerning, among others, the coordination of the components of a distributed system and access to the required information, thereby generating unnecessary costs. The use of agent technology significantly speeds up the flow of information within the virtual enterprise. This paper proposes a multi-agent approach for the integration of processes within the virtual enterprise concept. The concept was elaborated to investigate possible ways of transmitting information in the production system, taking into account the self-organization of its constituent components. It thus links the multi-agent system concept with a production information management system based on the idea of e-manufacturing. The paper presents the resulting scheme, which should be the basis for elaborating an informatics model of the target virtual system. The computer system itself is intended to be developed next.

  18. Software system for data management and distributed processing of multichannel biomedical signals.

    PubMed

    Franaszczuk, P J; Jouny, C C

    2004-01-01

    The presented software is designed for efficient utilization of a cluster of PC computers for signal analysis of multichannel physiological data. The system consists of three main components: 1) a library of input and output procedures, 2) a database storing additional information about data locations in the storage system, and 3) a user interface for selecting data for analysis, choosing programs for analysis, and distributing computing and output data on cluster nodes. The system allows for processing multichannel time series data in multiple binary formats. The descriptions of data format, channels, and time of recording are included in separate text files. Definition and selection of multiple channel montages is possible. Epochs for analysis can be selected both manually and automatically. Implementation of new signal processing procedures is possible with minimal programming overhead for the input/output processing and user interface. The number of cluster nodes used for computations and the amount of storage can be changed with no major modification to the software. Current implementations include the time-frequency analysis of multiday, multichannel recordings of intracranial EEG of epileptic patients as well as evoked response analyses of repeated cognitive tasks.

  19. Technology Insertion-Engineering Services Process Characterization. Task Order No. 1. Book 1 of 3. Database Documentation Book. OO-ALC MANPGP (Overview Layouts)

    DTIC Science & Technology

    1989-12-15

    The brake assembly subunit is responsible for the assembly of brakes. Brakes enter...

  20. The Network Configuration of an Object Relational Database Management System

    NASA Technical Reports Server (NTRS)

    Diaz, Philip; Harris, W. C.

    2000-01-01

    The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.

  1. Modeling of electric and heat processes in spot resistance welding of cross-wire steel bars

    NASA Astrophysics Data System (ADS)

    Iatcheva, Ilona; Darzhanova, Denitsa; Manilova, Marina

    2018-03-01

    The aim of this work is the modeling of coupled electric and heat processes in a system for spot resistance welding of cross-wire reinforced steel bars. The real system geometry, dependences of material properties on the temperature, and changes of contact resistance and released power during the welding process have been taken into account in the study. The 3D analysis of the coupled AC electric and transient thermal field distributions is carried out using the finite element method. The novel feature is that the processes are modeled for several successive time stages, corresponding to the change of contact area, related contact resistance, and reduction of the released power, occurring simultaneously with the creation of contact between the workpieces. The values of contact resistance and power changes have been determined on the basis of preliminary experimental and theoretical investigations. The obtained results present the electric and temperature field distributions in the system. Special attention has been paid to the temperature evolution at specified observation points and lines in the contact area. The obtained information could be useful for clarification of the complicated nature of interrelated electric, thermal, mechanical, and physicochemical welding processes. Adequate modeling is also an opportunity for proper control and improvement of the system.

  2. Collision Based Blood Cell Distribution of the Blood Flow

    NASA Astrophysics Data System (ADS)

    Cinar, Yildirim

    2003-11-01

    Introduction: The goal of the study is to determine the energy-transfer process between colliding masses and to apply the results to the distribution of cells, velocity, and kinetic energy in arterial blood flow. Methods: Mathematical methods and models were used to explain the collision between two moving systems and the distribution of linear momentum, rectilinear velocity, and kinetic energy in a collision. Results: As the mass of the second system decreases, the velocity and momentum of the constant mass of the first system decrease, and the linearly decreasing mass of the second system captures a larger share of the kinetic energy and rectilinear velocity of the collision system on a logarithmic scale. Discussion: The concentration of blood cells at the center of blood flow in an artery is not explained by the Bernoulli principle alone; the kinetic energy and velocity distribution due to collisions between the large mass of the arterial wall and the small mass of blood cells must be considered as well.
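
    The energy-transfer bookkeeping the abstract describes follows from the 1D elastic-collision conservation laws; the worked example below uses illustrative masses only (wall mass much greater than cell mass) and recovers the familiar limit that the small mass leaves at roughly twice the large mass's velocity:

    def elastic_1d(m1, v1, m2, v2):
        """Post-collision velocities from momentum and kinetic energy
        conservation in one dimension."""
        u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
        u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
        return u1, u2

    m_wall, m_cell = 1.0, 1e-6          # illustrative: wall mass >> cell mass
    u_wall, u_cell = elastic_1d(m_wall, 0.2, m_cell, 0.0)
    print(f"cell velocity after collision : {u_cell:.4f} (about twice the wall's)")
    print(f"cell kinetic energy gained    : {0.5 * m_cell * u_cell**2:.2e}")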

  3. Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2015-06-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. For all the reactions, the main component of the distributions corresponds to an asymmetrical mass division typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method has been proposed, based on the driving potential of the system and a time-dependent mass drift. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  4. Soil temperature variability in complex terrain measured using fiber-optic distributed temperature sensing

    USDA-ARS?s Scientific Manuscript database

    Soil temperature (Ts) exerts critical controls on hydrologic and biogeochemical processes but magnitude and nature of Ts variability in a landscape setting are rarely documented. Fiber optic distributed temperature sensing systems (FO-DTS) potentially measure Ts at high density over a large extent. ...

  5. Proceedings of the First Semiannual Distributed Receiver Program Review

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Point focus and line focus distributed receiver solar thermal technology for the production of electric power and of industrial process heat is addressed. Concentrator, receiver, and power conversion development are covered along with hardware tests and evaluation. Mass production costing, parabolic dish applications, and trough and bowl systems are included.

  6. COMPARISON OF PARTICLE SIZE DISTRIBUTIONS AND ELEMENTAL PARTITIONING FROM THE COMBUSTION OF PULVERIZED COAL AND RESIDUAL FUEL OIL

    EPA Science Inventory

    The paper gives results of experimental efforts in which three coals and a residual fuel oil were combusted in three different systems simulating process and utility boilers. Particle size distributions (PSDs) were determined using atmospheric and low-pressure impaction, electr...

  7. Superconcurrency: A Form of Distributed Heterogeneous Supercomputing

    DTIC Science & Technology

    1991-05-01


  8. Time evolution of pore system in lime - Pozzolana composites

    NASA Astrophysics Data System (ADS)

    Doleželová, Magdaléna; Čáchová, Monika; Scheinherrová, Lenka; Keppert, Martin

    2017-11-01

    Lime-pozzolana mortars and plasters are used in restoration work on built cultural heritage, but these materials also follow the trend toward energy-efficient solutions in civil engineering. Porosity and pore size distribution are among the crucial parameters influencing the engineering properties of porous materials. The pore size distribution of lime-based systems changes over time due to chemical processes occurring in the material. The present paper describes the time evolution of the pore system in lime-pozzolana composites; the results are useful for predicting the performance of lime-pozzolana systems in building structures.

  9. Control of Groundwater Remediation Process as Distributed Parameter System

    NASA Astrophysics Data System (ADS)

    Mendel, M.; Kovács, T.; Hulkó, G.

    2014-12-01

    Pollution of groundwater requires the implementation of appropriate solutions, which may need to remain deployed for several years. Local groundwater contamination and its subsequent spread may result in contamination of drinking water sources or other disasters. This publication aims to design and demonstrate control of pumping wells for a model task of groundwater remediation. The task consists of an appropriately discretized soil domain with input parameters, pumping wells, and a control system. The model of the controlled system is built in the program MODFLOW using the finite-difference method, treated as a distributed parameter system. The control problem is solved with the DPS Blockset for MATLAB & Simulink.

  10. A Robust Distributed Multipoint Fiber Optic Gas Sensor System Based on AGC Amplifier Structure.

    PubMed

    Zhu, Cunguang; Wang, Rende; Tao, Xuechen; Wang, Guangwei; Wang, Pengpeng

    2016-07-28

    A harsh-environment-oriented distributed multipoint fiber optic gas sensor system realized with automatic gain control (AGC) technology is proposed. To improve photoelectric signal reliability, the electronic variable gain can be modified in real time by an AGC closed-loop feedback structure to compensate for optical transmission loss caused by fiber bend loss or other factors. The deviation of the AGC-based system stays below 4.02% when the photoelectric signal decays due to fiber bending loss at a bending radius of 5 mm, 20 times lower than that of an ordinary differential system. In addition, the AGC circuit with the same electrical parameters can keep the baseline intensity of signals in different channels of the distributed multipoint sensor system at the same level. This avoids repetitive calibrations and streamlines the installation process.
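
    The closed-loop gain correction can be sketched in a few lines; the set-point, step size, and loss profile below are invented, and the real system acts on an electronic variable-gain amplifier rather than on digital samples:

    def agc(samples, setpoint=1.0, mu=0.05, gain=1.0):
        """Adjust gain in closed loop so the output tracks the set-point
        despite a slowly decaying input signal."""
        out = []
        for s in samples:
            y = gain * s
            gain += mu * (setpoint - y) * s   # raise gain when y sags, and vice versa
            out.append(y)
        return out, gain

    # Optical signal slowly decaying to 40% (e.g. from fiber bending loss).
    signal = [1.0 - 0.6 * i / 499 for i in range(500)]
    out, final_gain = agc(signal)
    print(f"first output {out[0]:.2f}, last output {out[-1]:.2f}, "
          f"final gain {final_gain:.2f}")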

  11. Combustion distribution control using the extremum seeking algorithm

    NASA Astrophysics Data System (ADS)

    Marjanovic, A.; Krstic, M.; Djurovic, Z.; Kvascev, G.; Papic, V.

    2014-12-01

    Quality regulation of the combustion process inside the furnace is fundamental to meeting the high demands for increased robustness, safety, and efficiency of thermal power plants. The paper considers the possibility of controlling the spatial temperature distribution inside the boiler through corrections to the distribution of coal over the mills. Such a control system maintains the flame focus away from the walls of the boiler, and thus preserves the equipment and reduces the possibility of ash slagging. At the same time, uniform heat dissipation over the mills enhances the energy efficiency of the boiler while reducing the pollution of the system. A constrained multivariable extremum seeking algorithm is proposed as a tool for combustion process optimization, with the main objective of centralizing the flame in the furnace. Simulations are conducted on a model corresponding to the 350 MW boiler of the Nikola Tesla Power Plant in Obrenovac, Serbia.
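
    For reference, a minimal single-input perturbation-based extremum seeking loop, the generic algorithm the abstract names, is sketched below; the quadratic objective is a stand-in for the boiler model, which the abstract does not reproduce, and all gains are illustrative:

    import numpy as np

    def extremum_seeking(J, theta0, a=0.2, omega=1.0, k=0.5, dt=0.01, steps=50000):
        """Dither the input, demodulate the measured objective, and
        integrate the resulting gradient estimate to climb toward the optimum."""
        theta_hat = theta0
        for n in range(steps):
            t = n * dt
            theta = theta_hat + a * np.sin(omega * t)   # dithered input
            grad_est = J(theta) * np.sin(omega * t)     # demodulated gradient estimate
            theta_hat += dt * k * grad_est              # integrator drives the climb
        return theta_hat

    # Toy objective with its maximum at theta = 3 (stand-in for a flame-centering score).
    J = lambda theta: 5.0 - (theta - 3.0) ** 2
    print("converged input:", round(extremum_seeking(J, theta0=0.0), 2))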

  12. Generic emergence of power law distributions and Lévy-Stable intermittent fluctuations in discrete logistic systems

    NASA Astrophysics Data System (ADS)

    Biham, Ofer; Malcai, Ofer; Levy, Moshe; Solomon, Sorin

    1998-08-01

    The dynamics of generic stochastic Lotka-Volterra (discrete logistic) systems of the form $w_i(t+1) = \lambda(t) w_i(t) + a \bar{w}(t) - b w_i(t) \bar{w}(t)$ is studied by computer simulations. The variables $w_i$, $i = 1, \ldots, N$, are the individual system components and $\bar{w}(t) = (1/N) \sum_i w_i(t)$ is their average. The parameters $a$ and $b$ are constants, while $\lambda(t)$ is randomly chosen at each time step from a given distribution. Models of this type describe the temporal evolution of a large variety of systems such as stock markets and city populations. These systems are characterized by a large number of interacting objects and the dynamics is dominated by multiplicative processes. The instantaneous probability distribution $P(w,t)$ of the system components $w_i$ turns out to fulfill a Pareto power law $P(w,t) \sim w^{-1-\alpha}$. The time evolution of $\bar{w}(t)$ presents intermittent fluctuations parametrized by a Lévy-stable distribution with the same index $\alpha$, showing an intricate relation between the distribution of the $w_i$'s at a given time and the temporal fluctuations of their average.
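
    The quoted update rule is easy to simulate directly; in the sketch below the uniform $\lambda$ distribution and the parameter values are choices made here for illustration, not the paper's, and the Pareto tail index $\alpha$ is estimated from the log-log rank plot of the largest components:

    import numpy as np

    rng = np.random.default_rng(5)

    N, T = 10000, 2000
    a, b = 0.0002, 0.0001
    w = np.ones(N)
    for _ in range(T):
        lam = rng.uniform(0.9, 1.1, N)   # random multiplicative factor lambda(t)
        wbar = w.mean()
        w = lam * w + a * wbar - b * w * wbar

    # For P(w) ~ w^(-1-alpha), log(rank) vs log(w) has slope -alpha.
    tail = np.sort(w)[-1000:]
    ranks = np.arange(len(tail), 0, -1)
    slope = np.polyfit(np.log(tail), np.log(ranks), 1)[0]
    print("estimated Pareto tail index alpha ~", round(-slope, 2))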

  13. Coherent Frequency Reference System for the NASA Deep Space Network

    NASA Technical Reports Server (NTRS)

    Tucker, Blake C.; Lauf, John E.; Hamell, Robert L.; Gonzaler, Jorge, Jr.; Diener, William A.; Tjoelker, Robert L.

    2010-01-01

    The NASA Deep Space Network (DSN) requires state-of-the-art frequency references that are derived and distributed from very stable atomic frequency standards. A new Frequency Reference System (FRS) and Frequency Reference Distribution System (FRD) have been developed, which together replace the previous Coherent Reference Generator System (CRG). The FRS and FRD each provide new capabilities that significantly improve operability and reliability. The FRS allows for selection and switching between frequency standards, a flywheel capability (to avoid interruptions when switching frequency standards), and a frequency synthesis system (to generate standardized 5-, 10-, and 100-MHz reference signals). The FRS is powered by redundant, specially filtered, and sustainable power systems and includes a monitor and control capability for station operations to interact and control the frequency-standard selection process. The FRD receives the standardized 5-, 10-, and 100-MHz reference signals and distributes signals to distribution amplifiers in a fan out fashion to dozens of DSN users that require the highly stable reference signals. The FRD is also powered by redundant, specially filtered, and sustainable power systems. The new DSN Frequency Distribution System, which consists of the FRS and FRD systems described here, is central to all operational activities of the NASA DSN. The frequency generation and distribution system provides ultra-stable, coherent, and very low phase-noise references at 5, l0, and 100 MHz to between 60 and 100 separate users at each Deep Space Communications Complex.

  14. Advanced end-to-end fiber optic sensing systems for demanding environments

    NASA Astrophysics Data System (ADS)

    Black, Richard J.; Moslehi, Behzad

    2010-09-01

    Optical fibers are small in diameter, light in weight, immune to electromagnetic interference, electrically passive, chemically inert, flexible, embeddable into different materials, enabling of distributed sensing, and can be temperature and radiation tolerant. With appropriate processing and/or packaging, they can be very robust and well suited to demanding environments. In this paper, we review a range of complete end-to-end fiber optic sensor systems that IFOS has developed, comprising not only (1) packaged sensors and mechanisms for integration with demanding environments, but also (2) ruggedized sensor interrogators and (3) intelligent decision-aid algorithm software systems. We examine two examples. First, Fiber Bragg Grating (FBG) optical sensor systems supporting arrays of environmentally conditioned multiplexed FBG point sensors on single or multiple optical fibers: in conjunction with advanced signal processing, decision-aid algorithms, and reasoners, FBG-sensor-based structural health monitoring (SHM) systems are expected to play an increasing role in extending the life and reducing the costs of new generations of aerospace systems. Further, FBG-based structural state sensing systems have the potential to considerably enhance the performance of dynamic structures interacting with their environment (including jet aircraft, unmanned aerial vehicles (UAVs), and medical or extravehicular space robots). Second, Raman-based distributed temperature sensing systems: the complete length of optical fiber acts as a very long distributed sensor, which may be placed down an oil well or wrapped around a cryogenic tank.

  15. Waste-to-Energy Technology Brief

    EPA Science Inventory

    ETV's Greenhouse Gas Technology (GHG) Center, operated by Southern Research Institute under a cooperative agreement with US EPA, verified two biogas processing systems and four distributed generation (DG) energy systems in collaboration with the Colorado Governors Office or the N...

  16. Storing files in a parallel computing system based on user-specified parser function

    DOEpatents

    Faibish, Sorin; Bent, John M; Tzelnic, Percy; Grider, Gary; Manzanares, Adam; Torres, Aaron

    2014-10-21

    Techniques are provided for storing files in a parallel computing system based on a user-specified parser function. A plurality of files generated by a distributed application in a parallel computing system are stored by obtaining a parser from the distributed application for processing the plurality of files prior to storage; and storing one or more of the plurality of files in one or more storage nodes of the parallel computing system based on the processing by the parser. The plurality of files comprise one or more of a plurality of complete files and a plurality of sub-files. The parser can optionally store only those files that satisfy one or more semantic requirements of the parser. The parser can also extract metadata from one or more of the files and the extracted metadata can be stored with one or more of the plurality of files and used for searching for files.
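
    The idea can be paraphrased in plain Python; the directory layout, the hash-based node assignment, and the example parser below are all invented for illustration, since the patent describes the mechanism rather than this code. The application-supplied parser screens files and extracts metadata before placement on storage nodes:

    import json, hashlib
    from pathlib import Path

    STORAGE_NODES = [Path(f"/tmp/node{i}") for i in range(4)]

    def my_parser(path: Path):
        """App-supplied parser (invented example): keep only files holding a
        non-empty JSON object and return searchable metadata, else None."""
        try:
            record = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            return None
        if not (isinstance(record, dict) and record):
            return None
        return {"name": path.name, "keys": sorted(record)}

    def store(files, parser):
        for path in files:
            meta = parser(path)
            if meta is None:
                continue  # file fails the parser's semantic requirements
            # Hash-based placement across storage nodes (an assumption here).
            idx = int(hashlib.md5(path.name.encode()).hexdigest(), 16) % len(STORAGE_NODES)
            node = STORAGE_NODES[idx]
            node.mkdir(parents=True, exist_ok=True)
            (node / path.name).write_bytes(path.read_bytes())
            (node / (path.name + ".meta")).write_text(json.dumps(meta))

    store(Path(".").glob("*.json"), my_parser)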

  17. GEARS: An Enterprise Architecture Based On Common Ground Services

    NASA Astrophysics Data System (ADS)

    Petersen, S.

    2014-12-01

    Earth observation satellites collect a broad variety of data used in applications that range from weather forecasting to climate monitoring. Within NOAA the National Environmental Satellite Data and Information Service (NESDIS) supports these applications by operating satellites in both geosynchronous and polar orbits. Traditionally NESDIS has acquired and operated its satellites as stand-alone systems with their own command and control, mission management, processing, and distribution systems. As the volume, velocity, veracity, and variety of sensor data and products produced by these systems continues to increase, NESDIS is migrating to a new concept of operation in which it will operate and sustain the ground infrastructure as an integrated Enterprise. Based on a series of common ground services, the Ground Enterprise Architecture System (GEARS) approach promises greater agility, flexibility, and efficiency at reduced cost. This talk describes the new architecture and associated development activities, and presents the results of initial efforts to improve product processing and distribution.

  18. Integration science and distributed networks

    NASA Astrophysics Data System (ADS)

    Landauer, Christopher; Bellman, Kirstie L.

    2002-07-01

    Our work on the integration of data and knowledge sources is based on a common theoretical treatment of 'Integration Science', which leads to systematic processes for combining formal logical and mathematical systems, computational and physical systems, and human systems and organizations. The theory is based on the processing of explicit meta-knowledge about the roles played by the different knowledge sources, the methods of analysis and semantic implications of the different data values, and the context in which and purpose for which they are being combined. The research treatment is primarily mathematical, and though this kind of integration mathematics is still under development, some applicable common threads have already emerged. Because the mathematical investigations are not yet crystallized enough for formalisms, instead of describing their current state we describe our applications of the approach in several different areas, including our focus area of 'Constructed Complex Systems': complex heterogeneous systems managed or mediated by computing systems. In this context, it is important to remember that all systems are embedded, all systems are autonomous, and all systems are distributed networks.
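
    The central mechanism here, explicit meta-knowledge carried alongside each knowledge source, can be sketched as a small data structure plus a compatibility check. The field names below are our own illustration, not the authors' formalism.

      from dataclasses import dataclass

      @dataclass
      class MetaKnowledge:
          """Explicit meta-knowledge about a knowledge source."""
          role: str        # role the source plays in the combination
          semantics: str   # how its data values are to be interpreted
          context: str     # context in which it is being combined
          purpose: str     # purpose for which it is being combined

      @dataclass
      class KnowledgeSource:
          name: str
          meta: MetaKnowledge

      def combinable(a: KnowledgeSource, b: KnowledgeSource) -> bool:
          """Combine sources only when their meta-knowledge agrees on the
          context and purpose of the combination."""
          return (a.meta.context == b.meta.context
                  and a.meta.purpose == b.meta.purpose)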

  19. Automation of Shuttle Tile Inspection - Engineering methodology for Space Station

    NASA Technical Reports Server (NTRS)

    Wiskerchen, M. J.; Mollakarimi, C.

    1987-01-01

    The Space Systems Integration and Operations Research Applications (SIORA) Program was initiated in late 1986 as a cooperative applications research effort between Stanford University, NASA Kennedy Space Center, and Lockheed Space Operations Company. One of the major initial SIORA tasks was the application of automation and robotics technology to all aspects of the Shuttle tile processing and inspection system. This effort adopted a systems engineering approach consisting of an integrated set of rapid prototyping testbeds in which a government/university/industry team of users, technologists, and engineers tests and evaluates new concepts and technologies within the operational world of the Shuttle. These integrated testbeds include speech recognition and synthesis, laser imaging inspection systems, distributed Ada programming environments, distributed relational database architectures, distributed computer network architectures, multimedia workbenches, and human factors considerations.

  20. A framework for building real-time expert systems

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1991-01-01

    Space Station Freedom is an example of a complex system that requires both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada be used for all new software development projects, and the station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for long periods, during which ground-based expert systems cannot provide any assistance to a crisis situation on the station. This is even more critical for other NASA projects with longer transmission delays (e.g., a lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX), a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.
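
    As a hedged sketch of the monitoring-and-diagnosis style that APEX represents (not its actual implementation, which targeted the station's Ada environment), the local rule-based agent below applies simple threshold rules to power-bus telemetry, so diagnosis can continue on board even when the ground link is down. The thresholds and field names are invented for illustration.

      from dataclasses import dataclass

      @dataclass
      class BusTelemetry:
          bus_id: str
          voltage: float   # volts
          current: float   # amperes

      def diagnose(sample: BusTelemetry) -> list[str]:
          """Local rule-based diagnosis; because it runs on board, it keeps
          working during a transmission-system outage."""
          findings = []
          if sample.voltage < 110.0:
              findings.append(f"{sample.bus_id}: undervoltage ({sample.voltage:.1f} V)")
          if sample.current > 50.0:
              findings.append(f"{sample.bus_id}: overcurrent ({sample.current:.1f} A)")
          return findings

      # A monitoring agent would poll telemetry, act locally on findings, and
      # forward them to other agents or the ground when the link is available.
      print(diagnose(BusTelemetry("bus-A", voltage=104.2, current=12.0)))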
