Sample records for high-performance distributed systems

  1. An XML-Based Protocol for Distributed Event Services

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A recent trend in distributed computing is the construction of high-performance distributed systems called computational grids. One difficulty we have encountered is that there is no standard format for the representation of performance information and no standard protocol for transmitting this information. This limits the types of performance analysis that can be undertaken in complex distributed systems. To address this problem, we present an XML-based protocol for transmitting performance events in distributed systems and evaluate the performance of this protocol.
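
    To make the idea concrete, here is a minimal sketch of what an XML-encoded performance event and its round-trip serialization might look like in Python; the element and attribute names are illustrative assumptions, not the schema the paper defines.

    ```python
    # Hedged sketch of an XML-encoded performance event; the element and
    # attribute names are illustrative, not the paper's actual schema.
    import time
    import xml.etree.ElementTree as ET

    def encode_event(source: str, name: str, value: float) -> bytes:
        """Serialize one performance event as a small XML document."""
        event = ET.Element("event", {
            "source": source,                  # producing host or component
            "timestamp": repr(time.time()),
        })
        metric = ET.SubElement(event, "metric", {"name": name})
        metric.text = repr(value)
        return ET.tostring(event)

    def decode_event(data: bytes) -> dict:
        """Parse an event document back into a dictionary."""
        event = ET.fromstring(data)
        metric = event.find("metric")
        return {
            "source": event.get("source"),
            "timestamp": float(event.get("timestamp")),
            "name": metric.get("name"),
            "value": float(metric.text),
        }

    msg = encode_event("node17", "io.read.bytes", 4096.0)
    print(decode_event(msg))
    ```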

  2. The WorkPlace distributed processing environment

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Henderson, Scott

    1993-01-01

    Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
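
    As an illustration of the failure-detection facility named above, the sketch below shows a heartbeat-based detector in Python; this is an assumed mechanism for illustration, not WorkPlace's actual implementation.

    ```python
    # Hedged sketch of heartbeat-based failure detection, one plausible way
    # a network layer like WorkPlace could detect failed peers (not its code).
    import time

    class FailureDetector:
        def __init__(self, timeout_s: float = 3.0):
            self.timeout_s = timeout_s
            self.last_seen = {}            # peer name -> last heartbeat time

        def heartbeat(self, peer: str) -> None:
            """Record that a heartbeat message arrived from `peer`."""
            self.last_seen[peer] = time.monotonic()

        def suspected(self) -> list:
            """Return peers whose heartbeats are overdue."""
            now = time.monotonic()
            return [p for p, t in self.last_seen.items()
                    if now - t > self.timeout_s]

    detector = FailureDetector(timeout_s=3.0)
    detector.heartbeat("node-a")
    detector.heartbeat("node-b")
    # After a silent period, suspected() would list the quiet peers,
    # triggering the kind of reconfiguration the abstract describes.
    print(detector.suspected())
    ```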

  3. High voltage systems (tube-type microwave)/low voltage system (solid-state microwave) power distribution

    NASA Technical Reports Server (NTRS)

    Nussberger, A. A.; Woodcock, G. R.

    1980-01-01

    SPS satellite power distribution systems are described. The reference Satellite Power System (SPS) concept utilizes high-voltage klystrons to convert the onboard satellite power from dc to RF for transmission to the ground receiving station. The solar array generates this required high voltage, and the power is delivered to the klystrons through a power distribution subsystem. Array switching of solar cell submodules is used to maintain bus voltage regulation. Individual klystron dc voltage conversion is performed by centralized converters. The onboard data processing system performs the necessary switching of submodules to maintain voltage regulation. Electrical power output from the solar panels is fed via switchgear into feeder buses and then into main distribution buses to the antenna. Power is also distributed to batteries so that critical functions can be maintained during solar eclipses.

  4. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed to support distributions. Since the performance of data-parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data-parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy be flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercier, C.W.

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  6. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by the system components during their execution or interaction with external objects (e.g., users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning, and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable, high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and to disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event-filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning. The filtering mechanism is an intrinsic component of the monitoring architecture that reduces the volume of event traffic flow in the system and thereby reduces the intrusiveness of the monitoring process. We are developing an event-filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance-learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes two major contributions: (1) surveying and evaluating existing event-filtering mechanisms for monitoring LSD systems, and (2) devising an integrated, scalable, high-performance event-filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance, and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event-filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss limitations of existing event-filtering mechanisms and outline how our architecture improves key aspects of event filtering.
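
    The core of such event filtering can be sketched as a subscription table of predicates evaluated close to the event source, so only matching events are forwarded; the Python below is an illustrative toy, not the paper's architecture, and all names are assumptions.

    ```python
    # Hedged sketch of subscription-based event filtering: consumers register
    # predicates, and only matching events are delivered, reducing traffic.
    from typing import Callable

    class EventFilter:
        def __init__(self):
            self.subscriptions = []        # (predicate, handler) pairs

        def subscribe(self, predicate: Callable[[dict], bool],
                      handler: Callable[[dict], None]) -> None:
            self.subscriptions.append((predicate, handler))

        def publish(self, event: dict) -> int:
            """Deliver the event to matching subscribers; return match count."""
            matched = 0
            for predicate, handler in self.subscriptions:
                if predicate(event):
                    handler(event)
                    matched += 1
            return matched

    f = EventFilter()
    f.subscribe(lambda e: e["type"] == "error" and e["node"] == "n3",
                lambda e: print("alert:", e))
    f.publish({"type": "error", "node": "n3", "msg": "timeout"})   # forwarded
    f.publish({"type": "trace", "node": "n1", "msg": "tick"})      # filtered out
    ```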

  7. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area networks (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed, ATM network and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed, WAN's for enabling the routine, location independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
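
    A network striped disk array can be sketched as a placement function from logical blocks to servers; the Python below illustrates round-robin striping with an application-chosen block size, which is an assumption for illustration rather than the DPSS layout policy.

    ```python
    # Hedged sketch of network striping as in a DPSS-style system: a logical
    # object is split into fixed-size blocks spread round-robin over servers.
    BLOCK_SIZE = 64 * 1024

    def stripe(data: bytes, servers: list) -> dict:
        """Assign each block of `data` to a server, round-robin."""
        placement = {}
        for i in range(0, len(data), BLOCK_SIZE):
            block_id = i // BLOCK_SIZE
            placement[block_id] = (servers[block_id % len(servers)],
                                   data[i:i + BLOCK_SIZE])
        return placement

    layout = stripe(b"x" * (3 * BLOCK_SIZE + 100), ["s1", "s2", "s3"])
    for block_id, (server, block) in layout.items():
        print(block_id, server, len(block))
    ```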

  8. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSD) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited to situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDD), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem's I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in a PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular, we will discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.

  9. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.
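
    The collection side of such a monitoring service can be sketched as a fixed-interval sampler; the Python below reads a few Linux /proc fields as stand-in metrics and is an illustrative assumption, not LDMS code.

    ```python
    # Hedged sketch of periodic lightweight metric sampling in the spirit of
    # LDMS (fixed fidelity and timescale); the /proc fields read here are
    # just a convenient Linux example, not LDMS's actual samplers.
    import time

    def sample() -> dict:
        """Collect a few node-level metrics from /proc (Linux)."""
        metrics = {"time": time.time()}
        with open("/proc/loadavg") as f:
            metrics["load1"] = float(f.read().split()[0])
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    metrics["mem_avail_kb"] = int(line.split()[1])
                    break
        return metrics

    def run(interval_s: float, count: int) -> None:
        """Sample on a fixed interval, as a collector daemon would."""
        for _ in range(count):
            print(sample())        # a real system would transport/store this
            time.sleep(interval_s)

    run(interval_s=1.0, count=3)
    ```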

  10. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  11. High performance and highly reliable Raman-based distributed temperature sensors based on correlation-coded OTDR and multimode graded-index fibers

    NASA Astrophysics Data System (ADS)

    Soto, M. A.; Sahu, P. K.; Faralli, S.; Sacchi, G.; Bolognini, G.; Di Pasquale, F.; Nebendahl, B.; Rueck, C.

    2007-07-01

    The performance of distributed temperature sensor systems based on spontaneous Raman scattering and coded OTDR is investigated. The evaluated DTS system, which is based on correlation coding, uses graded-index multimode fibers and operates over short-to-medium distances (up to 8 km) with high spatial and temperature resolution (better than 1 m and 0.3 K at 4 km distance with 10 min measuring time) and high repeatability throughout a wide temperature range.

  12. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
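
    The advantage of a collective over point-to-point messaging can be sketched with a tree-based reduction, which finishes in a logarithmic number of combining rounds; the Python below simulates that pattern in-process under stated assumptions rather than over a real interconnect.

    ```python
    # Hedged sketch of a tree-based reduction, the kind of collective the
    # abstract contrasts with pairwise point-to-point communication.
    def tree_reduce(values: list) -> float:
        """Pairwise reduction: ~log2(n) rounds instead of n-1 serial adds."""
        vals = list(values)
        while len(vals) > 1:
            paired = []
            for i in range(0, len(vals) - 1, 2):
                paired.append(vals[i] + vals[i + 1])   # partners combine
            if len(vals) % 2:
                paired.append(vals[-1])                # odd rank waits a round
            vals = paired
        return vals[0]

    print(tree_reduce([1.0, 2.0, 3.0, 4.0, 5.0]))      # 15.0 in ~log2(5) rounds
    ```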

  13. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  14. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

    The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe the fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere bring great challenges to the data storage of NVST. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance, and development cost. In this paper, we build a distributed data storage system (DDSS) for NVST and evaluate in depth the availability of real-time data storage in a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving the performance of data access in a distributed environment. Based on these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under conditions of simultaneous multi-host writes and reads. Real applications of the DDSS prove that the system is capable of meeting the requirements of NVST real-time, high-performance observational data storage. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will serve as a reference for future astronomical data storage.
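
    One strategy consistent with the two factors identified above is to batch many small files into larger sequential writes; the Python sketch below illustrates that policy with assumed thresholds and hypothetical names, and is not the DDSS implementation.

    ```python
    # Hedged sketch of batching small image files into larger sequential
    # writes, since file size and concurrency dominate distributed-storage
    # performance. Thresholds and names are illustrative assumptions.
    class BatchingWriter:
        def __init__(self, flush_bytes: int = 64 * 1024 * 1024):
            self.flush_bytes = flush_bytes
            self.pending = []              # (name, payload) awaiting flush
            self.pending_bytes = 0

        def write(self, name: str, payload: bytes) -> None:
            self.pending.append((name, payload))
            self.pending_bytes += len(payload)
            if self.pending_bytes >= self.flush_bytes:
                self.flush()

        def flush(self) -> None:
            """One large sequential write instead of many small ones."""
            print(f"flushing {len(self.pending)} files, "
                  f"{self.pending_bytes} bytes")
            self.pending, self.pending_bytes = [], 0

    w = BatchingWriter(flush_bytes=1024)
    for i in range(5):
        w.write(f"frame_{i:04d}.fits", b"\0" * 300)   # short-exposure frames
    w.flush()
    ```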

  15. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been increasing interest in object-oriented distributed computing, since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed, and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.
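
    A round-trip latency measurement of the kind used in such comparisons can be sketched over raw BSD sockets, the lowest-level system in the study; the Python below is a minimal benchmark harness under that assumption, with CORBA and PVM out of scope.

    ```python
    # Hedged sketch of a socket round-trip latency benchmark, the kind of
    # measurement used to compare messaging layers (raw sockets only here).
    import socket
    import threading
    import time

    def echo_server(sock: socket.socket) -> None:
        conn, _ = sock.accept()
        while data := conn.recv(4096):
            conn.sendall(data)                 # echo everything back

    server = socket.socket()
    server.bind(("127.0.0.1", 0))              # OS-assigned port
    server.listen(1)
    threading.Thread(target=echo_server, args=(server,), daemon=True).start()

    client = socket.socket()
    client.connect(server.getsockname())
    payload = b"x" * 1024
    rounds = 1000
    start = time.perf_counter()
    for _ in range(rounds):
        client.sendall(payload)
        received = 0
        while received < len(payload):
            received += len(client.recv(4096))
    elapsed = time.perf_counter() - start
    print(f"avg round-trip: {elapsed / rounds * 1e6:.1f} us")
    ```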

  16. Building America Case Study: Standard- Versus High-Velocity Air Distribution in High-Performance Townhomes, Denver, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Poerschke, R. Beach, T. Begg

    IBACOS investigated the performance of a small-diameter, high-velocity heat pump system compared to a conventional system in a new-construction triplex townhouse. A ductless heat pump system was also to be installed for comparison, but the homebuyer backed out because of aesthetic concerns about that system. In total, two buildings with identical solar orientation, comprising six townhomes, were monitored for comfort and energy performance.

  17. INTELLIGENT MONITORING SYSTEM WITH HIGH TEMPERATURE DISTRIBUTED FIBEROPTIC SENSOR FOR POWER PLANT COMBUSTION PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwang Y. Lee; Stuart S. Yin; Andre Boheman

    2004-12-26

    The objective of the proposed work is to develop an intelligent distributed fiber-optic sensor system for real-time monitoring of high temperatures in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which will be essential in assessing and controlling the mechanisms that form and remove pollutants, such as NOx, at the source. The basic approach in developing the proposed sensor system is threefold: (1) development of a high-temperature distributed fiber-optic sensor capable of measuring temperatures greater than 2000 C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution for the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, improvements were made to the performance of in-fiber gratings fabricated in single-crystal sapphire fibers, the grating performance of single-crystal sapphire fiber produced with new fabrication methods was tested, and the fabricated grating was applied to a high-temperature sensor. Under Task 2, models obtained from 3D modeling of the Demonstration Boiler were used to study relationships between temperature and NOx, as the multi-dimensionality of such systems is most comparable with real-life boiler systems. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we investigated a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. The 3D temperature data were furnished by the Penn State Energy Institute using FLUENT. Given a set of empirical data with no analytic expression, we first develop an analytic description and then extend that model along a single axis.

  18. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls among the elderly has been increasing as society has aged. The main factor is a deteriorating sense of balance due to declining physical performance. Another major factor is that the elderly tend to walk bowlegged, and the center of gravity of the body tends to swing from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system to treat the gap in the center of balance. We also designed High-Performance Shoes that show the status of a person's balance while walking. We also produced a walking-assistance insole whose stiffness, matched to the pressure distribution of the human sole, can be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole-distribution patterns and corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.

  19. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  20. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    ...a real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes... adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the... real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real...

  1. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    Considering a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems which is capable of operating under low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical implementation of this method with low complexity is presented and its performance is analysed. By adjusting the length of the synchronization frame, this method can work well over a large range of SNR values, which paves the way for longer-distance CVQKD.
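
    Correlating a known synchronization sequence against the received samples and taking the magnitude of the complex correlation, so that a common phase rotation does not matter, is one standard way to realize such phase-insensitive frame alignment; the Python below sketches that general idea, not the paper's specific method.

    ```python
    # Hedged sketch of correlation-based frame synchronization: slide a known
    # sync sequence over the received samples and pick the offset with the
    # highest |correlation|, which is insensitive to a common phase rotation.
    import cmath

    def find_frame_start(received: list, sync: list) -> int:
        best_offset, best_mag = 0, -1.0
        for offset in range(len(received) - len(sync) + 1):
            corr = sum(received[offset + i] * sync[i].conjugate()
                       for i in range(len(sync)))
            if abs(corr) > best_mag:
                best_offset, best_mag = offset, abs(corr)
        return best_offset

    sync = [1, 1, -1, 1, -1, -1, 1, 1]
    phase = cmath.exp(1j * 0.8)                  # unknown channel rotation
    received = [0.1, -0.2, 0.05] + [phase * s for s in sync] + [0.0, 0.1]
    print(find_frame_start(received, sync))      # -> 3, despite the rotation
    ```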

  2. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.
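
    The core decision described above, whether a better distribution for an upcoming program phase justifies the one-time redistribution cost, can be sketched greedily as below; the cost numbers are abstract units and the logic is an illustrative assumption, not PARADIGM's algorithm.

    ```python
    # Hedged sketch of choosing distributions per program phase while
    # accounting for redistribution overhead (abstract cost units).
    def plan(phases: list, redistribution_cost: float) -> list:
        """phases: list of {distribution: estimated phase cost} dicts.
        Greedy choice per phase; a real compiler would optimize globally."""
        current, schedule = None, []
        for costs in phases:
            best = min(costs, key=costs.get)
            stay_cost = costs.get(current, float("inf"))
            if current is None or costs[best] + redistribution_cost < stay_cost:
                if current is not None and best != current:
                    schedule.append(("redistribute", best))
                current = best
            schedule.append(("run", current, costs[current]))
        return schedule

    phases = [{"block": 10.0, "cyclic": 25.0},
              {"block": 30.0, "cyclic": 12.0},   # cyclic wins even after paying
              {"block": 11.0, "cyclic": 12.0}]   # not worth switching back
    for step in plan(phases, redistribution_cost=5.0):
        print(step)
    ```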

  3. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the most widely available platforms to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability and issues including data distribution, software heterogeneity, and ad hoc hardware availability commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  4. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O)-intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data. Also, even with standard high-performance peripherals and storage devices, the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system developed by Loral AeroSys' Independent Research and Development (IR&D) engineers can offload I/O-related functions from a RISC workstation and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capability to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as: SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network. The topology of the network is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on magnetic disk for fast retrieval. The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the UniTree software package, which is based on the IEEE mass storage reference model. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys will significantly boost system I/O performance and reduce the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with signal and image processing requirements, long-term data archiving and distribution, and image analysis and enhancement).
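
    The hierarchical migration policy described above can be sketched as a least-recently-used sweep that frees disk space once usage crosses a threshold; the Python below illustrates the policy only and is not the UniTree implementation.

    ```python
    # Hedged sketch of hierarchical-storage migration: keep hot files on
    # magnetic disk, migrate the least recently used ones to archive media
    # when disk usage exceeds capacity. Policy illustration only.
    def migrate(files: dict, disk_capacity: int) -> list:
        """files: name -> (size_bytes, last_access). Returns files to archive."""
        used = sum(size for size, _ in files.values())
        to_archive = []
        # Oldest access time first, i.e. least recently used.
        for name, (size, _) in sorted(files.items(), key=lambda kv: kv[1][1]):
            if used <= disk_capacity:
                break
            to_archive.append(name)
            used -= size
        return to_archive

    files = {"a.dat": (400, 100), "b.dat": (300, 900), "c.dat": (500, 50)}
    print(migrate(files, disk_capacity=800))   # archives c.dat, the coldest
    ```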

  5. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High-performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project addresses distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  6. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1988-01-01

    The use and implementation of Ada were investigated for distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with which processors they execute on, and that failures may occur in the software or underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  7. Continuous high speed coherent one-way quantum key distribution.

    PubMed

    Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo

    2009-08-03

    Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real-world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high-performance detection; and integration, both at the component level and for standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real-world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network, where we obtained average real-time distribution rates of 2.5 bps over 3 hours.

  8. Final Report for DOE Award ER25756

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesselman, Carl

    2014-11-17

    The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.

  9. IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.

    This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.

  10. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng; Zhang, Kai; Xiong, Jian

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage, high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT). However, distributed MPPT under module mismatch might polarize the distribution of ac output voltages as well as the dc-link voltages among the modules, distort grid currents, and even cause system instability. For better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree that is defined to consider both active power distribution and maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in a loss-reducing mode or a range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and balance of dc voltages can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. The effectiveness of the proposed modulation strategy is proved with experiments.

  11. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE PAGES

    Wang, Cheng; Zhang, Kai; Xiong, Jian; ...

    2017-09-26

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage, high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT). However, distributed MPPT under module mismatch might polarize the distribution of ac output voltages as well as the dc-link voltages among the modules, distort grid currents, and even cause system instability. For better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree that is defined to consider both active power distribution and maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in a loss-reducing mode or a range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and balance of dc voltages can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. The effectiveness of the proposed modulation strategy is proved with experiments.

  12. NRL Fact Book 2010

    DTIC Science & Technology

    2010-01-01

    ...service) High assurance software; Distributed network-based battle management; High performance computing supporting uniform and nonuniform memory... VNIR, MWIR, and LWIR high-resolution systems; Wideband SAR systems; RF and laser data links; High-speed, high-power photodetector characterization... Antimonide (InSb) imaging system; Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services

  13. Approach Considerations in Aircraft with High-Lift Propeller Systems

    NASA Technical Reports Server (NTRS)

    Patterson, Michael D.; Borer, Nicholas K.

    2017-01-01

    NASA's research into distributed electric propulsion (DEP) includes the design and development of the X-57 Maxwell aircraft. This aircraft has two distinct types of DEP: wingtip propellers and high-lift propellers. This paper focuses on the unique opportunities and challenges that the high-lift propellers--i.e., the small-diameter propellers distributed upstream of the wing leading edge to augment lift at low speeds--bring to the aircraft performance in approach conditions. Recent changes to the regulations related to certifying small aircraft (14 CFR Part 23) and the implications of these new regulations for the certification of aircraft with high-lift propellers are discussed. Recommendations about control systems for high-lift propeller systems are made, and performance estimates for the X-57 aircraft with high-lift propellers operating are presented.

  14. Shape Modification and Size Classification of Microcrystalline Graphite Powder as Anode Material for Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Gai, Guosheng; Yang, Yufen

    2018-03-01

    Natural microcrystalline graphite (MCG), composed of many crystallites, is a promising new anode material for lithium-ion batteries (LiBs) and has received considerable attention from researchers. MCG with a narrow particle size distribution and high sphericity exhibits excellent electrochemical performance. A nonaddition process to prepare natural MCG as a high-performance LiB anode material is described. First, raw MCG was broken into smaller particles using a pulverization system. Then, the particles were modified into a near-spherical shape using a particle shape modification system. Finally, the particle size distribution was narrowed using a centrifugal rotor classification system. The products, with uniform hemispherical shape and narrow size distribution, had mean particle sizes of approximately 9 μm, 10 μm, 15 μm, and 20 μm. Additionally, the innovative pilot experimental process increased the product yield of the raw material. Finally, the electrochemical performance of the prepared MCG was tested, revealing high reversible capacity and good cyclability.

  15. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    PubMed

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate for oscillator frequency instability is investigated in this paper. Using mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.
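
    Oscillator frequency instability of the kind this paper models is commonly quantified with the two-sample Allan variance; the Python below computes that standard statistic as a worked sketch, not the authors' model.

    ```python
    # Hedged sketch of quantifying oscillator frequency instability with the
    # two-sample Allan variance, a standard statistic for oscillator noise:
    # AVAR(tau) = 1/(2(M-1)) * sum over k of (y[k+1] - y[k])^2.
    def allan_variance(freqs: list) -> float:
        """freqs: fractional frequency samples at a fixed interval tau."""
        diffs = [(freqs[k + 1] - freqs[k]) ** 2 for k in range(len(freqs) - 1)]
        return sum(diffs) / (2 * len(diffs))

    # A drifting oscillator shows a larger AVAR than a stable one.
    stable = [1e-11, 1.2e-11, 0.9e-11, 1.1e-11]
    drifting = [1e-11, 5e-11, 9e-11, 1.3e-10]
    print(allan_variance(stable), allan_variance(drifting))
    ```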

  16. Power management and distribution technology

    NASA Astrophysics Data System (ADS)

    Dickman, John Ellis

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  17. Power management and distribution technology

    NASA Technical Reports Server (NTRS)

    Dickman, John Ellis

    1993-01-01

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  18. Naval Research Laboratory Fact Book 2012

    DTIC Science & Technology

    2012-11-01

    ...Distributed network-based battle management; High performance computing supporting uniform and nonuniform memory access with single and multithreaded... hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; Wideband SAR systems; RF and laser data links; High-speed, high-power... hyperspectral imaging system; Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; Research and Development Services Division

  19. The NATO III 5 MHz Distribution System

    NASA Technical Reports Server (NTRS)

    Vulcan, A.; Bloch, M.

    1981-01-01

    A high-performance 5 MHz distribution system is described which has extremely low phase noise and jitter characteristics and provides multiple buffered outputs. The system is completely redundant, with automatic switchover, and is self-testing. Since the 5 MHz reference signals distributed by the NATO III distribution system are used for up-conversion and multiplicative functions, a high degree of phase stability and isolation between outputs is necessary. Unique circuit design and packaging concepts ensure that the isolation between outputs is sufficient to guarantee a phase perturbation of less than 0.0016 deg when other outputs are open-circuited, short-circuited, or terminated in 50 ohms. Circuit design techniques include high-isolation cascode amplifiers. Negative feedback stabilizes system gain and minimizes circuit phase noise contributions. Balanced lines, in lieu of single-ended coaxial transmission media, minimize pickup.

  20. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  1. Reusable and Extensible High Level Data Distributions

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Chamberlain, Bradford; James, Mark L.; Zima, Hans P.

    2005-01-01

    This paper presents a reusable design of a data distribution framework for data-parallel high-performance applications. We are implementing the design in the context of the Chapel high-productivity programming language. Distributions in Chapel are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy can be chosen by the user. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization. We propose an approach to distributions that enables the user to refine a language-provided distribution type and adjust it to optimize the performance of the application. Additionally, we conceal low-level communication and synchronization details from the user to increase productivity. To emphasize the generality of our distribution machinery, we present its abstract design in the form of a design pattern, which is independent of a concrete implementation. To illustrate the applicability of our distribution framework design, we outline the implementation of data distributions in terms of the Chapel language.
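
    The distribution abstraction described above can be sketched as an object that maps a global index to an owning locale, so user code stays placement-oblivious while the strategy remains swappable; the Python below shows assumed block and cyclic mappings for illustration (Chapel's actual interface is richer).

    ```python
    # Hedged sketch of a swappable distribution abstraction: each class maps
    # a global index to the locale that owns it. Names are illustrative.
    class BlockDistribution:
        def __init__(self, n_indices: int, n_locales: int):
            self.chunk = -(-n_indices // n_locales)    # ceiling division

        def locale_of(self, index: int) -> int:
            return index // self.chunk                 # contiguous chunks

    class CyclicDistribution:
        def __init__(self, n_indices: int, n_locales: int):
            self.n_locales = n_locales

        def locale_of(self, index: int) -> int:
            return index % self.n_locales              # round-robin

    for dist in (BlockDistribution(100, 4), CyclicDistribution(100, 4)):
        print(type(dist).__name__,
              [dist.locale_of(i) for i in (0, 25, 50, 99)])
    ```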

  2. Experimental study of low-cost fiber optic distributed temperature sensor system performance

    NASA Astrophysics Data System (ADS)

    Dashkov, Michael V.; Zharkov, Alexander D.

    2016-03-01

    The distributed measurement of temperature is a relevant task for various applications such as oil and gas fields, high-voltage power lines, fire alarm systems, etc. The most promising devices are optical fiber distributed temperature sensors (DTS). They have advantages in accuracy, resolution, and range, but are costly. Nevertheless, for some applications the accuracy of measurement and localization is less important than cost. The results of an experimental study of a low-cost Raman-based DTS built on a standard OTDR are presented.

  3. Efficient abstract data type components for distributed and parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastani, F.; Hilal, W.; Iyengar, S.S.

    1987-10-01

    One way of improving a software system's comprehensibility and maintainability is to decompose it into several components, each of which encapsulates some information about the system. These components can be classified into four categories: abstract data type, functional, interface, and control components. Such a classification underscores the need for different specification, implementation, and performance-improvement methods for different types of components. This article focuses on the development of high-performance abstract data type components for distributed and parallel environments.

  4. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU- and memory-intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing them when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  5. Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems

    PubMed Central

    Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate for oscillator frequency instability is investigated in this paper. Using mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828

  6. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported, including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. The number can be increased to 8192 by using dual pump sources.
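
    Hybrid TDM/DWDM addressing amounts to identifying each sensor by a (wavelength channel, time slot) pair; the Python below sketches that mapping, where the 64 x 64 split of the abstract's 4096-sensor figure is an assumption for illustration.

    ```python
    # Hedged sketch of hybrid TDM/DWDM addressing: W wavelengths x T time
    # slots address W*T sensors on one fibre pair. The 64 x 64 split is an
    # illustrative assumption, not the paper's stated configuration.
    N_WAVELENGTHS = 64
    N_TIMESLOTS = 64                  # 64 * 64 = 4096 sensors

    def address(sensor_id: int) -> tuple:
        """Map a flat sensor id to its (wavelength channel, time slot)."""
        return divmod(sensor_id, N_TIMESLOTS)

    print(address(0))       # -> (0, 0)
    print(address(4095))    # -> (63, 63)
    ```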

  7. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high-computing-density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and the Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) as solutions for scientific computing applications. We report our experience with software porting, performance, and energy efficiency, and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  8. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. BurstFS presents a shared file system space across the burst buffers, so that applications that use shared files can access the highly scalable burst buffers without modification.

  9. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scalability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  10. High sensitivity optical molecular imaging system

    NASA Astrophysics Data System (ADS)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost, and ease of use. By labeling regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics, and other biological studies. In preclinical and clinical applications, imaging depth, resolution, and sensitivity are the key factors for researchers using OMI. In this paper, we report a high sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm while maintaining high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, combined with a high performance electron-multiplying charge coupled device (EMCCD) camera, a precisely designed light-path system, and highly efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence, and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  11. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  12. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.
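
    One way to picture the tuning step is to take per-inverter voltages from a quasi-static time-series fault simulation and measure the fraction of PV capacity that trips; the fixed undervoltage threshold below is an assumed stand-in for the PVD1 partial-trip behaviour, not the paper's actual procedure, and all data are hypothetical.

      # Estimate partial PV loss from per-inverter fault voltages.
      v_trip = 0.5   # per-unit undervoltage trip threshold (assumption)
      inverters = [  # (rated kW, per-unit voltage seen during the fault)
          (5.0, 0.42), (3.0, 0.55), (10.0, 0.48), (7.5, 0.61), (4.0, 0.39),
      ]
      total = sum(kw for kw, _ in inverters)
      lost = sum(kw for kw, v in inverters if v < v_trip)
      print(f"partial PV loss: {lost / total:.1%} of {total:.0f} kW")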

  13. A UNIX SVR4-OS 9 distributed data acquisition for high energy physics

    NASA Astrophysics Data System (ADS)

    Drouhin, F.; Schwaller, B.; Fontaine, J. C.; Charles, F.; Pallares, A.; Huss, D.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The UNIX system manages a list of OS9 front-end systems with a synchronisation protocol running over a TCP/IP layer.

  14. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  15. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    Efficiency of high-performance computing is in high demand, with requirements on both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is a technology that offers high speed and zero static power dissipation. RQL uses an AC power supply as input rather than a DC input, and it has three basic gate types. Series of reciprocal transmission lines are placed between the gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: lacking a direct power supply, the circuits require splitters to distribute power, and these occupy a large area. Distributed arithmetic computes a vector-vector multiplication in which one operand is constant and the other is a signed variable; each word is treated as a binary number whose bits are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
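
    The distributed-arithmetic idea in the closing sentences can be made concrete with a small bit-serial sketch: the constant vector is folded into a lookup table indexed by one bit-slice of the variable operand per step. The word length and data values are illustrative assumptions.

      def da_inner_product(a, x, bits=8):
          """Bit-serial distributed-arithmetic inner product of a constant
          vector a with signed two's-complement words x (illustrative)."""
          n = len(a)
          # LUT: partial sum of a[k] for every possible bit-slice of x.
          lut = [sum(a[k] for k in range(n) if (pattern >> k) & 1)
                 for pattern in range(1 << n)]
          acc = 0
          for b in range(bits):
              pattern = 0
              for k in range(n):
                  if (x[k] >> b) & 1:
                      pattern |= 1 << k
              weight = -(1 << b) if b == bits - 1 else (1 << b)  # sign bit
              acc += weight * lut[pattern]
          return acc

      print(da_inner_product([2, 3], [1, -1], bits=4))   # -1 == 2*1 + 3*(-1)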

  16. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
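
    The accrual mechanism can be sketched compactly: fit a Weibull model to recent inter-heartbeat intervals and report the suspicion level phi as -log10 of the probability that the next heartbeat is still pending. The moment-style scale estimate and fixed shape below are simplifications for illustration, not the paper's estimation procedure.

      import math

      class WeibullAccrualDetector:
          """Accrual failure detector with a Weibull lifetime model (sketch)."""
          def __init__(self, shape=1.5, window=100):
              self.k = shape            # Weibull shape, held fixed (assumption)
              self.window = window
              self.intervals = []
              self.last = None

          def heartbeat(self, now):
              if self.last is not None:
                  self.intervals.append(now - self.last)
                  self.intervals = self.intervals[-self.window:]
              self.last = now

          def phi(self, now):
              if not self.intervals:
                  return 0.0
              mean = sum(self.intervals) / len(self.intervals)
              lam = mean / math.gamma(1.0 + 1.0 / self.k)   # moment estimate
              t = now - self.last
              # survival S(t) = exp(-(t/lam)**k), so phi = (t/lam)**k / ln 10
              return (t / lam) ** self.k / math.log(10)

    A process would be suspected once phi crosses an application-chosen threshold, exactly as in other accrual detectors.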

  18. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  19. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications string matching requires a combination of (sometimes all) the following characteristics: high and/or predictable performance, support for large data sets and flexibility of integration and customization. Many software based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores) and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
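
    For reference, the core of Aho-Corasick fits in a short dict-based sketch: build a trie, add failure links breadth-first, then scan the text in a single pass. This is the textbook serial algorithm, not any of the paper's shared- or distributed-memory variants.

      from collections import deque

      def build_aho_corasick(patterns):
          # goto: trie edges per state; fail: failure links; out: matched patterns
          goto, fail, out = [{}], [0], [set()]
          for pat in patterns:
              s = 0
              for ch in pat:
                  if ch not in goto[s]:
                      goto.append({}); fail.append(0); out.append(set())
                      goto[s][ch] = len(goto) - 1
                  s = goto[s][ch]
              out[s].add(pat)
          queue = deque(goto[0].values())        # depth-1 states keep fail = 0
          while queue:
              s = queue.popleft()
              for ch, t in goto[s].items():
                  queue.append(t)
                  f = fail[s]
                  while f and ch not in goto[f]:
                      f = fail[f]
                  fail[t] = goto[f][ch] if ch in goto[f] else 0
                  out[t] |= out[fail[t]]
          return goto, fail, out

      def search(text, tables):
          goto, fail, out = tables
          s, hits = 0, []
          for i, ch in enumerate(text):
              while s and ch not in goto[s]:
                  s = fail[s]
              s = goto[s].get(ch, 0)
              for pat in sorted(out[s]):         # sorted for a stable demo
                  hits.append((i - len(pat) + 1, pat))
          return hits

      tables = build_aho_corasick(["he", "she", "his", "hers"])
      print(search("ushers", tables))   # [(2, 'he'), (1, 'she'), (2, 'hers')]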

  20. Automated aberration compensation in high numerical aperture systems for arbitrary laser modes (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hering, Julian; Waller, Erik H.; von Freymann, Georg

    2017-02-01

    Since a large number of optical systems and devices are based on differently shaped focal intensity distributions (point-spread functions, PSF), the PSF's quality is crucial for the application's performance. E.g., optical tweezers, optical potentials for trapping of ultracold atoms as well as stimulated-emission-depletion (STED) based microscopy and lithography rely on precisely controlled intensity distributions. However, especially in high numerical aperture (NA) systems, such complex laser modes are easily distorted by aberrations leading to performance losses. Although different approaches addressing phase retrieval algorithms have been recently presented [1-3], fast and automated aberration compensation for a broad variety of complex shaped PSFs in high NA systems is still missing. Here, we report on a Gerchberg-Saxton [4] based algorithm (GSA) for automated aberration correction of arbitrary PSFs, especially for high NA systems. Deviations between the desired target intensity distribution and the three-dimensionally (3D) scanned experimental focal intensity distribution are used to calculate a correction phase pattern. The target phase distribution plus the correction pattern are displayed on a phase-only spatial light modulator (SLM). Focused by a high NA objective, experimental 3D scans of several intensity distributions allow for characterization of the algorithm's performance: aberrations are reliably identified and compensated within less than 10 iterations. References: 1. B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase-retrieved pupil functions in wide-field fluorescence microscopy," J. of Microscopy 216(1), 32-48 (2004). 2. A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and M. Ritsch-Marte, "Wavefront correction of spatial light modulators using an optical vortex image," Opt. Express 15(9), 5801-5808 (2007). 3. A. Jesacher and M. J. Booth, "Parallel direct laser writing in three dimensions with spatially dependent aberration correction," Opt. Express 18(20), 21090-21099 (2010). 4. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35(2), 237-246 (1972).
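
    The underlying iteration is the classic two-plane Gerchberg-Saxton loop, sketched below with a scalar FFT propagation model; the paper's high-NA vectorial focusing and experimental 3D feedback are not modelled, and the array names are illustrative.

      import numpy as np

      def gerchberg_saxton(pupil_amp, target_amp, iters=10, seed=0):
          """Classic two-plane Gerchberg-Saxton loop (scalar FFT model)."""
          rng = np.random.default_rng(seed)
          phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)
          for _ in range(iters):
              focal = np.fft.fft2(pupil_amp * np.exp(1j * phase))
              # keep the focal-plane phase, impose the target amplitude
              focal = target_amp * np.exp(1j * np.angle(focal))
              back = np.fft.ifft2(focal)
              # keep the pupil-plane phase, re-impose the pupil amplitude
              phase = np.angle(back)
          return phase        # candidate SLM shaping/correction pattern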

  1. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high performance computational and data storage resources together into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the Computational Grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  2. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    NASA Astrophysics Data System (ADS)

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy to use carbonaceous reactant species and operate at intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation for conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model including current density, fuel and oxidant reactant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.

  3. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, the near future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed — meaning cooling by appropriate technology — with a tightly interconnected, low latency and high performance network and equipped with a distributed storage architecture. Each of these features — dense packing, distributed storage and high performance interconnect — represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  4. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of distributed operating systems, the distributed file system has become an important part of the operating system. P2P is a reliable approach to file sharing in distributed operating systems. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other related tasks. A P2P network can also be a temporary connection, such as a group of computers connected through USB (Universal Serial Bus) ports to transfer files or enable disk sharing. Currently, P2P requires a special network designed in a P2P way. Nowadays, browsers have a large influence on our lives. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we try to find performance bottlenecks, with the aim of improving the performance and scalability of file sharing in distributed file systems. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  5. NRL Fact Book

    DTIC Science & Technology

    2008-01-01

    Distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access with single and multithreaded ... pallet; airborne EO/IR and radar sensors; VNIR through SWIR hyperspectral systems; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems ...; meteorological sensors; hyperspectral sensor systems (PHILLS); mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system; long-wave infrared (LWIR ...

  6. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained by the load data to forecast the future load. For better performance of SVR, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameters searching area from a global to local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
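
    A condensed sketch of the two-step search is shown below: a coarse log-space grid pass narrows the (C, gamma) region, then a small particle swarm refines it. The grid spacing, swarm size, and PSO coefficients are assumptions; the paper's grid traverse algorithm is more systematic than this plain grid.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      def cv_error(params, X, y):
          C, gamma = params
          model = SVR(C=C, gamma=gamma)
          return -cross_val_score(model, X, y, cv=3,
                                  scoring="neg_mean_absolute_error").mean()

      def grid_then_pso(X, y):
          # Step 1: coarse grid traverse to find a promising local region.
          grid = [(C, g) for C in 10.0 ** np.arange(-1, 4)
                         for g in 10.0 ** np.arange(-3, 1)]
          best = min(grid, key=lambda p: cv_error(p, X, y))
          # Step 2: small PSO around the grid winner, in log10 space.
          centre = np.log10(np.array(best))
          pos = centre + np.random.uniform(-0.5, 0.5, size=(8, 2))
          vel = np.zeros_like(pos)
          pbest = pos.copy()
          pbest_err = np.array([cv_error(10 ** p, X, y) for p in pos])
          for _ in range(15):
              g = pbest[pbest_err.argmin()]
              r1, r2 = np.random.rand(2)
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
              pos += vel
              err = np.array([cv_error(10 ** p, X, y) for p in pos])
              improved = err < pbest_err
              pbest[improved], pbest_err[improved] = pos[improved], err[improved]
          return 10 ** pbest[pbest_err.argmin()]   # best (C, gamma) found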

  7. A Unix SVR-4-OS9 distributed data acquisition for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouhin, F.; Schwaller, B.; Fontaine, J.C.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The Unix system manages a list of OS9 front-end systems with a synchronization protocol running over a TCP/IP layer.

  8. Evaluation of the Performance of the Distributed Phased-MIMO Sonar.

    PubMed

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-11

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments.

  9. Evaluation of the Performance of the Distributed Phased-MIMO Sonar

    PubMed Central

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-01

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. Transmitters are widely separated to illuminate the different aspects of an extended target of interest. The beamforming technique is utilized at the reception ends for enhancement of weak target echoes. A MIMO detector is designed with the estimated target position parameters within the general likelihood rate test (GLRT) framework. For the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in the numerical simulations and the tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated in localization of a target in at-lake experiments. PMID:28085071

  10. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared based on public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  11. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  12. Multi-kw dc power distribution system study program

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1974-01-01

    The first phase of the Multi-kW dc Power Distribution Technology Program is reported; it involves the test and evaluation of a technology breadboard in a specially designed test facility, following design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high-voltage distribution systems were studied in order to gain a technology base for the use of high voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented and include data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.

  13. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  14. Design distributed simulation platform for vehicle management system

    NASA Astrophysics Data System (ADS)

    Wen, Zhaodong; Wang, Zhanlin; Qiu, Lihua

    2006-11-01

    Next generation military aircraft require high performance from the airborne management system. General-purpose modules, data integration, and a high speed data bus are needed to share and manage the information of the subsystems efficiently. The subsystems include the flight control system, propulsion system, hydraulic power system, environmental control system, fuel management system, electrical power system, and so on. The architecture is changing from unattached or mixed to integrated: the whole airborne system is managed as one system, so the physical devices are distributed while the system information is integrated and shared. The processing functions of each subsystem are integrated (including general processing modules and dynamic reconfiguration); furthermore, the sensors and the signal processing functions are shared, which also lays a foundation for power sharing. We establish a distributed vehicle management system using a 1553B bus and distributed processors, which provides a validation platform for research on integrated management of airborne systems. This paper establishes the Vehicle Management System (VMS) simulation platform, discusses the software and hardware configuration, and analyzes the communication and fault-tolerance methods.

  15. The Case for Distributed Engine Control in Turbo-Shaft Engine Systems

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Paluszewski, Paul J.; Storey, William; Smith, Bert J.

    2009-01-01

    The turbo-shaft engine is an important propulsion system used to power vehicles on land, sea, and in the air. As the power plant for many high performance helicopters, the characteristics of the engine and control are critical to proper vehicle operation as well as being the main determinant of overall vehicle performance. When applied to vertical flight, important distinctions exist in the turbo-shaft engine control system due to the high degree of dynamic coupling between the engine and airframe and the effect on vehicle handling characteristics. In this study, the impact of engine control system architecture is explored relative to engine performance, weight, reliability, safety, and overall cost. Comparison of the impact of architecture on these metrics is investigated as the control system is modified from a legacy centralized structure to a more distributed configuration. A composite strawman system which is typical of turbo-shaft engines in the 1000 to 2000 hp class is described and used for comparison. The overall benefits of these changes to control system architecture are assessed. The availability of supporting technologies to achieve this evolution is also discussed.

  16. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  17. Development and in vivo evaluation of self-microemulsion as delivery system for α-mangostin.

    PubMed

    Xu, Wen-Ke; Jiang, Hui; Yang, Kui; Wang, Ya-Qin; Zhang, Qian; Zuo, Jian

    2017-03-01

    α-Mangostin (MG) is a versatile bioactive compound isolated from mangosteen, but it suffers from significant pharmacokinetic shortcomings. To augment its potential clinical efficacy, MG-loaded self-microemulsion (MG-SME) was designed and prepared in this study, and its potential as a drug-loading system was evaluated based on its pharmacokinetic performance and tissue distribution. The formula of MG-SME was optimized by an orthogonal test under the guidance of a ternary phase diagram, and the prepared MG-SME was characterized by encapsulation efficiency, size distribution, and morphology. An optimized high performance liquid chromatography method was employed to determine concentrations of MG and characterize the pharmacokinetic and tissue distribution features of MG in rodents. It was found that diluted MG-SME formed spherical particles with a mean diameter of 24.6 nm and an encapsulation efficiency of 87.26%. The delivery system enhanced the area under the curve of MG by 4.75 times and increased its distribution in lymphatic organs. These findings suggest that SME, as a nano-sized delivery system, efficiently promoted the digestive-tract absorption of MG and modified its distribution in tissues. The targeting feature and high oral bioavailability of MG-SME promise good clinical efficacy, especially for immune diseases. Copyright © 2017. Published by Elsevier Taiwan.

  18. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    NASA Technical Reports Server (NTRS)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  19. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. The two features of flexibility and size pose new problems in control system design. Since large-scale structures are not testable in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modelling knowledge available prior to operation is inadequate for achieving peak performance. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter-adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed-component environment?

  20. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    With the continuous requirements for product quality and safety operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations. However, fractional differential equations may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method based on fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation for multivariable processes is transformed into the optimisation of each small-scale subsystem that is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem and the fractional-order PID-type dynamic matrix controller is designed based on the Nash optimisation strategy. The information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task of the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.
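
    The Oustaloup step admits a compact sketch: s**alpha is approximated over a frequency band by a rational zero-pole filter. The band edges and order below are assumed values, and the formula is the standard recursive approximation rather than anything specific to this paper.

      import numpy as np
      from scipy import signal

      def oustaloup(alpha, wb=0.01, wh=100.0, N=4):
          """Oustaloup recursive approximation of s**alpha on [wb, wh] rad/s."""
          k = np.arange(-N, N + 1)
          w_z = wb * (wh / wb) ** ((k + N + 0.5 * (1 - alpha)) / (2 * N + 1))
          w_p = wb * (wh / wb) ** ((k + N + 0.5 * (1 + alpha)) / (2 * N + 1))
          return signal.ZerosPolesGain(-w_z, -w_p, wh ** alpha)

      # e.g. a half-order differentiator as a rational LTI filter:
      # sys = oustaloup(0.5); w, mag, phase = signal.bode(sys)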

  1. Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Minh

    2002-03-31

    This report summarizes the work performed by Honeywell during the January 2002 to March 2002 reporting period under Cooperative Agreement DE-FC26-01NT40779 for the U.S. Department of Energy, National Energy Technology Laboratory (DOE/NETL), entitled ''Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation''. The main objective of this project is to develop and demonstrate the feasibility of a highly efficient hybrid system integrating a planar Solid Oxide Fuel Cell (SOFC) and a turbogenerator. For this reporting period the following activities were carried out: conceptual system design trade studies were performed; a system-level performance model was created; dynamic control models are being developed; mechanical properties of candidate heat exchanger materials were investigated; and SOFC performance mapping as a function of flow rate and pressure was completed.

  2. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs using low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept imposters holding authorized cards. Therefore face recognition attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image-quality problems we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features trained by Real AdaBoost with their prior probability distributions. This enables the system to evaluate and utilize only reliable features among the trained ones during each authentication, and to achieve high recognition performance rates. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a constantly high recognition performance rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance rates because of its general optimization over all the data. Compared with PCA, Real AdaBoost without any prior distribution performs twice as well under good image conditions, but degrades to a performance comparable to PCA under poor image conditions.

  3. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, interleaver, and puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail. Different puncturing patterns are compared for each high rate. For most of the high rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
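
    Puncturing itself is mechanical, as the sketch below shows: the encoder's parallel output streams are read against a repeating 0/1 pattern and only the marked bits are transmitted. The pattern and the rate-1/2 mother code here are illustrative, not taken from the report.

      def puncture(streams, pattern):
          """Keep only the bits marked 1 in the repeating puncturing pattern;
          streams holds the encoder's parallel output bit streams."""
          out, cols = [], len(pattern[0])
          for t in range(len(streams[0])):
              for row, stream in zip(pattern, streams):
                  if row[t % cols]:
                      out.append(stream[t])
          return out

      # Rate 1/2 mother code -> rate 3/4: keep 4 of every 6 coded bits.
      pattern = [[1, 1, 0],   # first output stream
                 [1, 0, 1]]   # second output stream
      print(puncture([[1, 0, 1, 1, 0, 0], [0, 1, 1, 0, 1, 1]], pattern))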

  4. Sensitivity of fenestration solar gain to source spectrum and angle of incidence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCluney, W.R.

    1996-12-31

    The solar heat gain coefficient (SHGC) is the fraction of solar radiant flux incident on a fenestration system entering a building as heat gain. In general it depends on both the angle of incidence and the spectral distribution of the incident solar radiation. In attempts to improve energy performance and user acceptance of high-performance glazing systems, manufacturers are producing glazing systems with increasing spectral selectivity. This poses potential difficulties for calculations of solar heat gain through windows based upon the use of a single solar spectral weighting function. The sensitivity of modern high-performance glazing systems to both the angle of incidence and the shape of the incident solar spectrum is examined using a glazing performance simulation program. It is found that as the spectral selectivity of the glazing system increases, the SHGC can vary as the incident spectral distribution varies. The variations can be as great as 50% when using several different representative direct-beam spectra. These include spectra having low and high air masses and a standard spectrum having an air mass of 1.5. The variations can be even greater if clear blue diffuse skylight is considered. It is recommended that the current broad-band shading coefficient method of calculating solar gain be replaced by one that is spectrally based.
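
    The single-weighting-function issue can be stated in one line: the effective SHGC is the solar-spectrum-weighted average of the spectral gain, so it shifts when the incident spectrum shifts. A minimal sketch, assuming a uniform wavelength grid (the arrays would come from tabulated spectra and glazing data):

      import numpy as np

      def spectral_shgc(solar_spectrum, shgc_spectral):
          """Effective SHGC = sum(S * g) / sum(S) on a uniform wavelength grid."""
          s = np.asarray(solar_spectrum, dtype=float)
          g = np.asarray(shgc_spectral, dtype=float)
          return float(np.sum(s * g) / np.sum(s))

      # Evaluating this with an air-mass-1.5 spectrum versus a redder low-sun
      # spectrum exposes the spectral sensitivity the paper reports.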

  5. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training, specifically low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  6. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  7. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  8. Experiment and application of soft x-ray grazing incidence optical scattering phenomena

    NASA Astrophysics Data System (ADS)

    Chen, Shuyan; Li, Cheng; Zhang, Yang; Su, Liping; Geng, Tao; Li, Kun

    2017-08-01

    For short-wavelength imaging systems, surface scattering is one of the important factors degrading imaging performance. The study of non-intuitive surface scatter effects resulting from practical optical fabrication tolerances is necessary for evaluating the optical performance of high-resolution short-wavelength imaging systems. In this paper, soft X-ray optical scattering distributions are measured with a soft X-ray reflectometer installed in our laboratory, for different sample mirrors, wavelengths, and grazing angles. Then, aiming at a space solar telescope, these scattered-light distributions are combined with a numerical surface-scattering model of the grazing-incidence imaging system, and the PSF and encircled energy of the optical system of the space solar telescope are computed. Through this analysis and computation we conclude that surface scattering severely degrades the imaging performance of grazing-incidence systems.
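
    Encircled energy is a straightforward reduction of a sampled PSF; the sketch below sorts pixels by radius and accumulates their energy (a generic post-processing step, not the paper's scatter model, with illustrative names).

      import numpy as np

      def encircled_energy(psf, dx=1.0):
          """Return (radii, cumulative energy fraction) for a sampled 2D PSF."""
          ny, nx = psf.shape
          y, x = np.indices(psf.shape)
          r = np.hypot(y - (ny - 1) / 2.0, x - (nx - 1) / 2.0) * dx
          order = np.argsort(r.ravel())
          energy = np.cumsum(psf.ravel()[order])
          return r.ravel()[order], energy / energy[-1]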

  9. A Performance Comparison of Tree and Ring Topologies in Distributed System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Min

    A distributed system is a collection of computers that are connected via a communication network. Distributed systems have become commonplace due to the wide availability of low-cost, high performance computers and network devices. However, the management infrastructure often does not scale well when distributed systems get very large. Some of the considerations in building a distributed system are the choice of the network topology and the method used to construct the distributed system so as to optimize the scalability and reliability of the system, lower the cost of linking nodes together and minimize the message delay in transmission, and simplify system resource management. We have developed a new distributed management system that is able to handle the dynamic increase of system size, detect and recover from the unexpected failure of system services, and manage system resources. The topologies used in the system are the tree-structured network and the ring-structured network. This thesis presents the research background, system components, design, implementation, experiment results and the conclusions of our work. The thesis is organized as follows: the research background is presented in chapter 1. Chapter 2 describes the system components, including the different node types and different connection types used in the system. In chapter 3, we describe the message types and message formats in the system. We discuss the system design and implementation in chapter 4. In chapter 5, we present the test environment and results. Finally, we conclude with a summary and describe our future work in chapter 6.
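
    A first-order comparison of the two topologies needs only hop counts: a balanced k-ary tree reaches any node in O(log n) hops, while a bidirectional ring needs up to n/2, which is the basic trade against the tree's dependence on interior nodes. The fanout value below is an assumption.

      def tree_hops(n, fanout=2):
          # Depth of a balanced k-ary tree holding n nodes.
          d = 0
          while fanout ** d < n:
              d += 1
          return d

      def ring_hops(n):
          # Farthest node around a bidirectional ring.
          return n // 2

      for n in (16, 256, 4096):
          print(n, tree_hops(n), ring_hops(n))   # 4 vs 8, 8 vs 128, 12 vs 2048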

  10. A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care

    NASA Astrophysics Data System (ADS)

    Tapia, Dante I.; RodríGuez, Sara; Corchado, Juan M.

    This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing the assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, mobility, etc., all of which are concepts defined by AmI. ALZ-MAS makes use of a services oriented multi-agent architecture, called flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that a SOA approach is adequate to build distributed and highly dynamic AmI-based multi-agent systems.

  11. Efficient Use of Distributed Systems for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency by up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes, with numbers of elements ranging from 30,269 for the Barth5 mesh to 11,451 for the Barth4 mesh. Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the original document, entails an integration of finite element and fluid dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom; this results from the complexity of the various components of the airfoils, which requires fine-grained meshing for accuracy. Additional information is contained in the original.
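
    The heterogeneity-aware annealing idea can be sketched in a few lines: assign elements to processors, and accept random moves that lower a cost mixing edge cut with speed-weighted load imbalance. This toy serial version (cost weights and cooling schedule are assumptions) only illustrates the objective, not PART's parallel algorithm.

      import math, random

      def sa_partition(graph, speeds, iters=20000, t0=1.0, seed=0):
          """graph: {node: set(neighbours)}; speeds: relative processor speeds."""
          rnd = random.Random(seed)
          nodes = list(graph)
          part = {v: rnd.randrange(len(speeds)) for v in nodes}

          def cost(p):
              cut = sum(p[u] != p[v] for u in graph for v in graph[u]) / 2
              load = [0.0] * len(speeds)
              for v in nodes:
                  load[p[v]] += 1.0 / speeds[p[v]]   # slower CPU => heavier load
              return cut + len(nodes) * max(load)

          c = cost(part)
          for i in range(iters):
              t = t0 * (1.0 - i / iters) + 1e-9
              v = rnd.choice(nodes)
              old = part[v]
              part[v] = rnd.randrange(len(speeds))
              nc = cost(part)
              if nc > c and rnd.random() >= math.exp((c - nc) / t):
                  part[v] = old          # reject uphill move
              else:
                  c = nc                 # accept
          return part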

  12. Performance evaluation of a novel high performance pinhole array detector module using NEMA NU-4 image quality phantom for four head SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2015-03-01

    Radiolabeled tracer distribution imaging of gamma rays using pinhole collimation is considered promising for small animal imaging. The recent availability of various radiolabeled tracers has advanced diagnostic studies and is simultaneously creating demand for high-resolution imaging devices. This paper presents analyses of the optimized parameters of a high performance pinhole array detector module using two phantoms with different characteristics. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were executed to assess the performance of a four-head SPECT system incorporating pinhole array collimators. The system is based on a pixelated array of NaI(Tl) crystals coupled to an array of position-sensitive photomultiplier tubes (PSPMTs). The detector module was simulated with a 48 mm by 48 mm active area and different pinhole apertures on a tungsten plate. The performance of this system has been evaluated using a uniform cylindrical water phantom along with the NEMA NU-4 image quality (IQ) phantom, both filled with 99mTc-labeled radiotracers. SPECT images were reconstructed in which the activity distribution is expected to be well visualized. This system offers a combination of excellent intrinsic spatial resolution, good sensitivity and signal-to-noise ratio, and high detection efficiency over an energy range of 20-160 keV. Increasing the number of heads in a stationary system configuration offers increased sensitivity at a spatial resolution similar to that obtained with the current four-head SPECT system design.

  13. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  14. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  15. Revolutionary Aeropropulsion Concept for Sustainable Aviation: Turboelectric Distributed Propulsion

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Felder, James L.; Tong, Michael T.; Armstrong, Michael

    2013-01-01

    In response to growing aviation demands and concerns about the environment and energy usage, a team at NASA proposed and examined a revolutionary aeropropulsion concept, a turboelectric distributed propulsion system, which employs multiple electric motor-driven propulsors distributed on a large transport vehicle. The power to drive these electric propulsors is generated by separately located gas-turbine-driven electric generators on the airframe. This arrangement enables the use of many small distributed propulsors, allowing a very high effective bypass ratio while retaining the superior efficiency of large core engines, which are physically separated from but connected to the propulsors through electric power lines. Because of the physical separation of propulsors from power-generating devices, a new class of vehicles with unprecedented performance becomes possible in vehicle design. One such vehicle currently being investigated by NASA is called the "N3-X", which uses a hybrid-wing-body airframe and superconducting generators, motors, and transmission lines for its propulsion system. On the N3-X, these new degrees of design freedom are used (1) to place two large turboshaft engines driving generators in freestream conditions to minimize total pressure losses and (2) to embed a broad continuous array of 14 motor-driven fans on the upper surface of the aircraft near the trailing edge of the hybrid-wing-body airframe to maximize propulsive efficiency by ingesting thick airframe boundary layer flow. Through system analysis of the engine cycle and weight estimation, it was determined that the N3-X would be able to achieve a reduction of 70% or 72% (depending on the cooling system) in energy usage relative to the reference aircraft, a Boeing 777-200LR. Since a high-power electric system is used for propulsion, a study of the electric power distribution system was performed to identify critical dynamic and safety issues. This paper presents some of the features and issues associated with the turboelectric distributed propulsion system and summarizes the recent study results, including the high-power electric distribution, in the analysis of the N3-X vehicle.

  16. Power System Information Delivering System Based on Distributed Object

    NASA Astrophysics Data System (ADS)

    Tanaka, Tatsuji; Tsuchiya, Takehiko; Tamura, Setsuo; Seki, Tomomichi; Kubota, Kenji

    In recent years there have been remarkable improvements in computer performance and developments in computer networking and distributed information processing technology. Moreover, deregulation is starting and will spread in the electric power industry in Japan. Consequently, power suppliers are required to supply low-cost power with high-quality services to customers. Corresponding to these movements, the authors have proposed the SCOPE (System Configuration Of PowEr control system) architecture for distributed EMS/SCADA (Energy Management Systems / Supervisory Control and Data Acquisition) systems based on distributed object technology, which offers the flexibility and expandability to adapt to those movements. In this paper, the authors introduce a prototype of the power system information delivering system, which was developed based on the SCOPE architecture, and describe the architecture and the evaluation results of this prototype system. The power system information delivering system supplies useful power system information, such as electric power failures, to customers using the Internet and distributed object technology. This system is a new type of SCADA system that monitors failures of the power transmission and distribution systems in a geographically integrated way.

  17. Distributed metadata in a high performance computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and, upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
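
    The lookup pattern the patent describes (deciding which burst buffer holds a given block's metadata, then resolving the key-value there) can be sketched roughly as follows; the hash-based placement and the record layout are assumptions made for illustration:

        import hashlib

        class BurstBuffer:
            """Stand-in for one burst buffer holding a slice of the key-value store."""
            def __init__(self):
                self.kv = {}
            def put(self, key, value):
                self.kv[key] = value
            def get(self, key):
                return self.kv.get(key)

        def owner(buffers, key):
            # Determine which burst buffer stores the metadata for this key.
            h = int(hashlib.sha1(key.encode()).hexdigest(), 16)
            return buffers[h % len(buffers)]

        buffers = [BurstBuffer() for _ in range(4)]
        owner(buffers, "block:42").put("block:42", {"offset": 0, "length": 1 << 20})
        print(owner(buffers, "block:42").get("block:42"))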

  18. Joint Sensing/Sampling Optimization for Surface Drifting Mine Detection with High-Resolution Drift Model

    DTIC Science & Technology

    2012-09-01

    ...as potential tools for large-area detection coverage while being moderately inexpensive (Wettergren, Performance of Search via Track-Before-Detect for Distributed Sensor Networks, 2008). These statements highlight three specific needs to further sensor network research... Bay hydrography. Journal of Marine Systems, 12, 221-236. Wettergren, T. A. (2008). Performance of search via track-before-detect for distributed...

  19. Study of data I/O performance on distributed disk system in mask data preparation

    NASA Astrophysics Data System (ADS)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

    Data volume is growing every day in Mask Data Preparation (MDP), while faster data handling is always required. MDP flows typically introduce a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, throughput may saturate because hard disk I/O and network speeds can become bottlenecks. MDP therefore needs to invest heavily not only in hundreds of CPUs but also in the storage and network devices that make the throughput faster. NCS introduces a new distributed processing system called "NDE". NDE is a distributed disk system that improves throughput without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE, which contributes to realizing high throughput.

  20. FPGA cluster for high-performance AO real-time control system

    NASA Astrophysics Data System (ADS)

    Geng, Deli; Goodsell, Stephen J.; Basden, Alastair G.; Dipper, Nigel A.; Myers, Richard M.; Saunter, Chris D.

    2006-06-01

    Whilst the high throughput and low latency requirements for the next generation of AO real-time control systems have posed a significant challenge to von Neumann architecture processor systems, the Field Programmable Gate Array (FPGA) has emerged as a long-term solution with high throughput performance and excellent latency predictability. Moreover, FPGA devices have highly capable programmable interfacing, which leads to a more highly integrated system. Nevertheless, a single FPGA is still not enough: multiple FPGA devices need to be clustered to perform the required subaperture processing and reconstruction computation. In an AO real-time control system, memory bandwidth is often the bottleneck of the system, simply because a vast amount of supporting data, e.g. pixel calibration maps and the reconstruction matrix, needs to be accessed within a short period. The cluster, as a general computing architecture, has excellent scalability in processing throughput, memory bandwidth, memory capacity, and communication bandwidth. Problems such as task distribution, node communication, and system verification are discussed.

  1. Introducing high performance distributed logging service for ACS

    NASA Astrophysics Data System (ADS)

    Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca

    2010-07-01

    The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. However, in a distributed system, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service, and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat the logs as event data that gets distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. On the other hand, the Data Distribution Service (DDS) provides an alternative standard for publisher-subscriber communication for real-time systems, offering better performance and featuring decentralized message processing. The current document describes how the new high performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed. A benchmark is presented comparing the differences between the implementations.
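
    The filtering behavior described above, in which every log carries a priority and timestamp and different parts of the system subscribe at different thresholds, can be illustrated with a small in-process stand-in for the DDS transport; the class names and thresholds are invented for this sketch:

        import time

        class LogSubscriber:
            def __init__(self, name, min_priority):
                self.name, self.min_priority = name, min_priority
            def deliver(self, record):
                # Each subscriber filters on its own priority threshold.
                if record["priority"] >= self.min_priority:
                    print(f"[{self.name}] {record['timestamp']:.3f} "
                          f"p{record['priority']}: {record['message']}")

        class LogPublisher:
            def __init__(self):
                self.subscribers = []
            def subscribe(self, sub):
                self.subscribers.append(sub)
            def publish(self, priority, message):
                record = {"priority": priority, "timestamp": time.time(),
                          "message": message}
                for sub in self.subscribers:   # fan out, publisher-subscriber style
                    sub.deliver(record)

        bus = LogPublisher()
        bus.subscribe(LogSubscriber("console", min_priority=3))  # client display
        bus.subscribe(LogSubscriber("archive", min_priority=1))  # central store
        bus.publish(5, "antenna 12: phase lock lost")
        bus.publish(2, "heartbeat ok")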

  2. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  3. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluating computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: SGI Origin2000, IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which results in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied for performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms and is useful for tuning these applications.

  4. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Earth observation satellites from China has increased dramatically in recent years, and those satellites are acquiring a large volume of imagery daily. As the main portal for image processing and distribution from those Chinese satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics during the last three years to solve two issues in this regard: processing the large volume of data (about 1,500 scenes or 1 TB per day) in a timely manner and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The high performance system has a service-oriented architecture and can be deployed to a cluster of computers that may be configured with high-end computing power. The high performance is gained through, first, parallelizing the image processing algorithms on high-performance graphics processing unit (GPU) cards and multiple cores from multiple CPUs and, second, distributing processing tasks to a cluster of computing nodes. While achieving up to thirty (and even more) times faster performance compared with the traditional practice, a particular methodology was developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various resources and application of those points to improve the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has been generating a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. Future work, such as development of more performance-optimized algorithms, robust image matching methods, and application workflows, is identified to improve the system in the coming years.
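
    The second performance lever, farming scenes out across computing nodes, can be sketched with a simple worker pool; this illustrates only the task-distribution idea, not CRESDA's delivered system, and the scene names and per-scene steps are placeholders:

        from multiprocessing import Pool

        def process_scene(scene_id):
            # Placeholder for per-scene work (radiometric correction, automatic
            # GCP matching, orthorectification), possibly GPU-accelerated.
            return scene_id, "orthorectified"

        if __name__ == "__main__":
            scenes = [f"SCENE_{i:04d}" for i in range(1500)]  # ~1,500 scenes/day
            with Pool(processes=16) as pool:
                for scene_id, status in pool.imap_unordered(process_scene, scenes):
                    pass  # write product, update catalog, etc.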

  5. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
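
    The three-stage model lends itself to a simple back-of-the-envelope estimator for resource selection; the sketch below is a rough reading of the modeling idea, with invented resource parameters rather than the paper's fitted models:

        def workflow_time(data_gb, bandwidth_gbps, queue_s, work_tflop, tflops):
            transfer = data_gb * 8 / bandwidth_gbps   # (i) data transfer, seconds
            compute = work_tflop / tflops             # (iii) reconstruction tasks
            return transfer + queue_s + compute       # (ii) queue wait added in

        resources = {
            "cluster_A": dict(bandwidth_gbps=8.0, queue_s=600.0, tflops=50.0),
            "cluster_B": dict(bandwidth_gbps=1.0, queue_s=30.0, tflops=20.0),
        }
        job = dict(data_gb=500.0, work_tflop=40000.0)
        best = min(resources,
                   key=lambda r: workflow_time(job["data_gb"], **resources[r],
                                               work_tflop=job["work_tflop"]))
        print(best)  # the resource with the smallest estimated turnaround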

  6. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  7. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  8. Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a Particle Model

    DTIC Science & Technology

    2012-03-01

    Empirical Analysis of Optical Attenuator Performance in Quantum Key Distribution Systems Using a Particle Model (AFIT/GCS/ENG/12-01; approved for public release, distribution unlimited). ...challenging as the complexity of actual implementation specifics is considered. Two components common to most quantum key distribution...

  9. Effect of vane twist on the performance of dome swirlers for gas turbine airblast atomizers

    NASA Technical Reports Server (NTRS)

    Micklow, Gerald J.; Dogra, Anju S.; Nguyen, H. Lee

    1990-01-01

    For advanced gas turbine engines, two combustor systems, the lean premixed/prevaporized (LPP) and the rich burn/quick quench/lean burn (RQL), offer great potential for reducing NO(x) emissions. An important consideration for either concept is the development of an advanced fuel injection system that will provide a stable, efficient, and very uniform combustion system over a wide operating range. High-shear airblast fuel injectors for gas turbine combustors have exhibited superior atomization and mixing compared with pressure-atomizing fuel injectors. This improved mixing has lowered NO(x) emissions and the pattern factor, and has enabled combustors to alternate fuels while maintaining a stable, efficient combustion system. The performance of high-shear airblast fuel injectors is highly dependent on the design of the dome swirl vanes. The swirl vanes most widely used in gas turbine combustors are usually flat for ease of manufacture, but vanes with curvature will, in general, give superior aerodynamic performance. The design and performance of high-turning, low-loss curved dome swirl vanes with twist along the span are investigated. The twist induces a secondary vortex flow pattern which will improve the atomization of the fuel, thereby producing a more uniform fuel-air distribution. This uniform distribution will increase combustion efficiency while lowering NO(x) emissions. A systematic swirl vane design system is presented, based on one-, two-, and three-dimensional flowfield calculations, with variations in vane-turning angle, rate of turning, vane solidity, and vane twist as design parameters.

  10. Effect of vane twist on the performance of dome swirlers for gas turbine airblast atomizers

    NASA Astrophysics Data System (ADS)

    Micklow, Gerald J.; Dogra, Anju S.; Nguyen, H. Lee

    1990-07-01

    For advanced gas turbine engines, two combustor systems, the lean premixed/prevaporized (LPP) and the rich burn/quick quench/lean burn (RQL), offer great potential for reducing NO(x) emissions. An important consideration for either concept is the development of an advanced fuel injection system that will provide a stable, efficient, and very uniform combustion system over a wide operating range. High-shear airblast fuel injectors for gas turbine combustors have exhibited superior atomization and mixing compared with pressure-atomizing fuel injectors. This improved mixing has lowered NO(x) emissions and the pattern factor, and has enabled combustors to alternate fuels while maintaining a stable, efficient combustion system. The performance of high-shear airblast fuel injectors is highly dependent on the design of the dome swirl vanes. The swirl vanes most widely used in gas turbine combustors are usually flat for ease of manufacture, but vanes with curvature will, in general, give superior aerodynamic performance. The design and performance of high-turning, low-loss curved dome swirl vanes with twist along the span are investigated. The twist induces a secondary vortex flow pattern which will improve the atomization of the fuel, thereby producing a more uniform fuel-air distribution. This uniform distribution will increase combustion efficiency while lowering NO(x) emissions. A systematic swirl vane design system is presented, based on one-, two-, and three-dimensional flowfield calculations, with variations in vane-turning angle, rate of turning, vane solidity, and vane twist as design parameters.

  11. Effect of vane twist on the performance of dome swirlers for gas turbine airblast atomizers

    NASA Astrophysics Data System (ADS)

    Micklow, Gerald J.; Dogra, Anju S.; Nguyen, H. Lee

    1990-06-01

    For advanced gas turbine engines, two combustor systems, the lean premixed/prevaporized (LPP) and the rich burn/quick quench/lean burn (RQL), offer great potential for reducing NO(x) emissions. An important consideration for either concept is the development of an advanced fuel injection system that will provide a stable, efficient, and very uniform combustion system over a wide operating range. High-shear airblast fuel injectors for gas turbine combustors have exhibited superior atomization and mixing compared with pressure-atomizing fuel injectors. This improved mixing has lowered NO(x) emissions and the pattern factor, and has enabled combustors to alternate fuels while maintaining a stable, efficient combustion system. The performance of high-shear airblast fuel injectors is highly dependent on the design of the dome swirl vanes. The swirl vanes most widely used in gas turbine combustors are usually flat for ease of manufacture, but vanes with curvature will, in general, give superior aerodynamic performance. The design and performance of high-turning, low-loss curved dome swirl vanes with twist along the span are investigated. The twist induces a secondary vortex flow pattern which will improve the atomization of the fuel, thereby producing a more uniform fuel-air distribution. This uniform distribution will increase combustion efficiency while lowering NO(x) emissions. A systematic swirl vane design system is presented, based on one-, two-, and three-dimensional flowfield calculations, with variations in vane-turning angle, rate of turning, vane solidity, and vane twist as design parameters.

  12. Performance related issues in distributed database systems

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC toward building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.

  13. A distributed microcomputer-controlled system for data acquisition and power spectral analysis of EEG.

    PubMed

    Vo, T D; Dwyer, G; Szeto, H H

    1986-04-01

    A relatively powerful and inexpensive microcomputer-based system for spectral analysis of the EEG is presented. High resolution and speed are achieved through recently available large-scale integrated circuit technology with enhanced functionality (the Intel 8087 math coprocessor), which can perform transcendental functions rapidly. The versatility of the system comes from a hardware organization with distributed data-acquisition capability, performed by a microprocessor-based analog-to-digital converter with large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly-language subroutines perform, on-line or off-line, the fast Fourier transform and spectral analysis of the EEG, which is stored as soft as well as hard copy. Some results obtained from test application of the entire system in animal studies are presented.

  14. High Performance Data Distribution for Scientific Community

    NASA Astrophysics Data System (ADS)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA, or JAXA must find solutions to distribute data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that solves this problem, aiming to reduce user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy that helps the end user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide the features above. HIDDRA has been highlighted by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the Journal of Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
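
    The core of such a download engine, splitting one file into byte ranges fetched concurrently from several mirrors, can be sketched as below; this follows the general idea rather than HIDDRA's code, the mirror URLs are placeholders, and the servers are assumed to honor HTTP Range requests:

        import concurrent.futures, urllib.request

        def fetch_range(url, start, end):
            req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
            with urllib.request.urlopen(req) as resp:
                return start, resp.read()

        def parallel_download(mirrors, total_size, chunk=1 << 20):
            ranges = [(off, min(off + chunk, total_size) - 1)
                      for off in range(0, total_size, chunk)]
            buf = bytearray(total_size)
            with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
                futures = [pool.submit(fetch_range, mirrors[i % len(mirrors)], s, e)
                           for i, (s, e) in enumerate(ranges)]  # round-robin mirrors
                for f in concurrent.futures.as_completed(futures):
                    start, data = f.result()
                    buf[start:start + len(data)] = data
            return bytes(buf)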

  15. A Theoretical Solid Oxide Fuel Cell Model for Systems Controls and Stability Design

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Brinson, Thomas; Credle, Sydni

    2008-01-01

    As the aviation industry moves toward higher-efficiency electrical power generation, all-electric aircraft, or zero-emission and quieter aircraft, fuel cells are sought as the technology that can deliver on these high expectations. The hybrid solid oxide fuel cell system combines the fuel cell with a micro-turbine to obtain up to 70% cycle efficiency and then distributes the electrical power to the loads via a power distribution system. The challenge is to understand the dynamics of this complex multidiscipline system and to design distributed controls that take the system through its operating conditions in a stable and safe manner while maintaining system performance. This particular system is a power generation and distribution system, and the fuel cell and micro-turbine model fidelity should be compatible with the dynamics of the power distribution system in order to allow proper stability and distributed controls design. The novelty of this paper is that, first, the case is made for why a high-fidelity fuel cell model is needed for systems control and stability designs. Second, a novel modeling approach is proposed for the fuel cell that allows the fuel cell and the power system to be integrated and designed for stability, distributed controls, and other interface specifications. This investigation shows that, for the fuel cell, not only should the voltage characteristic be modeled, but conservation equation dynamics, ion diffusion, charge transfer kinetics, and the inherent impedance to electron flow should also be included.

  16. Power Hardware-in-the-Loop-Based Anti-Islanding Evaluation and Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoder, Karl; Langston, James; Hauer, John

    2015-10-01

    The National Renewable Energy Laboratory (NREL) teamed with Southern California Edison (SCE), Clean Power Research (CPR), Quanta Technology (QT), and Electrical Distribution Design (EDD) to conduct a U.S. Department of Energy (DOE) and California Public Utility Commission (CPUC) California Solar Initiative (CSI)-funded research project investigating the impacts of integrating high penetration levels of photovoltaics (PV) onto the California distribution grid. One topic researched in the context of high-penetration PV integration onto the distribution system is the ability of PV inverters to (1) detect islanding conditions (i.e., when the distribution system to which the PV inverter is connected becomes disconnected from the utility power connection) and (2) disconnect from the islanded system within the time specified in the performance specifications outlined in IEEE Standard 1547. This condition may cause damage to other connected equipment due to insufficient power quality (e.g., over- and under-voltages) and may also be a safety hazard to personnel who may be working on feeder sections to restore service. NREL teamed with the Florida State University (FSU) Center for Advanced Power Systems (CAPS) to investigate a new way of testing PV inverters for IEEE Standard 1547 unintentional islanding performance specifications using power hardware-in-the-loop (PHIL) laboratory testing techniques.

  17. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, that can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.
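
    The paper's analysis method is not reproduced in the record; as a stand-in for the scheduling-theory flavor of the approach, the classical Liu-Layland rate-monotonic test below checks whether a hypothetical set of periodic control tasks can meet hard deadlines on one processor:

        def rm_schedulable(tasks):
            """tasks: list of (period_ms, worst_case_exec_ms) pairs."""
            n = len(tasks)
            utilization = sum(c / p for p, c in tasks)
            bound = n * (2 ** (1 / n) - 1)   # Liu & Layland utilization bound
            return utilization, bound, utilization <= bound

        tasks = [(10, 2), (40, 8), (100, 20)]  # e.g. servo loop, vision, telemetry
        u, b, ok = rm_schedulable(tasks)
        print(f"U = {u:.3f}, bound = {b:.3f}, schedulable: {ok}")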

  18. Hydrogen storage container

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Feng, Zhili; Zhang, Wei

    An apparatus and system are described for storing high-pressure fluids such as hydrogen. An inner tank and a pre-stressed concrete pressure vessel share the structural and/or pressure load on the inner tank. The system and apparatus provide a high-performance, low-cost container while mitigating hydrogen embrittlement of the metal tank. The system is useful for distributing hydrogen to a power grid or to a vehicle refueling station.

  19. Technology Solutions Case Study: Long-Term Monitoring of Mini-Split Ductless Heat Pumps in the Northeast, Devens and Easthampton, Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Transformations, Inc., has extensive experience building high-performance homes, production and custom, in a variety of Massachusetts locations and uses mini-split heat pumps (MSHPs) for space conditioning in most of its homes. The use of MSHPs for simplified space-conditioning distribution provides significant first-cost savings, which offsets the increased investment in the building enclosure. In this project, the U.S. Department of Energy Building America team Building Science Corporation evaluated the long-term performance of MSHPs in 8 homes during a period of 3 years. The work examined electrical use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions.

  20. Long-Term Monitoring of Mini-Split Ductless Heat Pumps in the Northeast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueno, K.; Loomis, H.

    Transformations, Inc. has extensive experience building its high performance housing at a variety of Massachusetts locations, in both production and custom home settings. The majority of its construction uses mini-split heat pumps (MSHPs) for space conditioning. This research covered the long-term performance of MSHPs in Zone 5A; it is the culmination of up to 3 years' worth of monitoring in a set of eight houses. The research examined electricity use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions. The use of simplified space conditioning distribution (through use of MSHPs) provides significant first cost savings, which are used to offset the increased investment in the building enclosure.

  1. Technical and Economic Assessment of the Implementation of Measures for Reducing Energy Losses in Distribution Systems

    NASA Astrophysics Data System (ADS)

    Aguila, Alexander; Wilson, Jorge

    2017-07-01

    This paper develops a methodology to assess a group of electrical improvement measures in distribution systems, combining technical and economic criteria. To address the problem of energy losses in distribution systems, a technical and economic analysis was performed based on a mathematical model that establishes a direct relationship between the energy saved through minimized losses and the costs of implementing the proposed measures. The paper analyses the feasibility of reducing energy losses in distribution systems by replacing existing network conductors with larger cross-section conductors and by raising the distribution voltage to higher levels. The methodology provides a highly efficient mathematical tool for analysing the feasibility of improvement projects based on their costs, which is very useful for distribution companies and will serve as a starting point for the analysis of this type of project in distribution systems.
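
    The cost-benefit core of such a model reduces to comparing I^2R losses before and after a measure with its implementation cost; the sketch below uses made-up feeder data and a simple payback criterion, not the paper's model:

        def annual_loss_kwh(current_a, r_ohm_per_km, length_km,
                            loss_factor=0.5, hours=8760):
            p_loss_kw = 3 * current_a**2 * r_ohm_per_km * length_km / 1000.0
            return p_loss_kw * loss_factor * hours    # three-phase I^2R losses

        def simple_payback(old_r, new_r, current_a, length_km, cost_usd, tariff=0.10):
            saved_kwh = (annual_loss_kwh(current_a, old_r, length_km)
                         - annual_loss_kwh(current_a, new_r, length_km))
            return cost_usd / (saved_kwh * tariff)    # years to recover investment

        # Replacing a 0.60 ohm/km feeder conductor with 0.30 ohm/km over 12 km:
        print(f"{simple_payback(0.60, 0.30, 180, 12, 250_000):.1f} years")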

  2. Real-time high speed generator system emulation with hardware-in-the-loop application

    NASA Astrophysics Data System (ADS)

    Stroupe, Nicholas

    The emerging emphasis on and benefits of distributed generation in smaller-scale networks have prompted much attention and research in this field, which in turn has stimulated the development of simulation software and techniques. Testing and verification of distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop (HIL) become important, allowing an actual hardware unit to be interfaced with a software-simulated environment to verify proper functionality. This thesis takes the technique one step further by using HIL to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing; the purpose is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. The work uses the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system and is used to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry. The test bed can also support other distributed power system research topics and serves as a flexible hardware unit for a variety of tests; here it is used to perform and validate the newly developed method of generator system emulation. The dynamics of a high-speed permanent magnet generator directly coupled with a micro turbine are simulated on an FPGA in real time. The calculated output stator voltage then serves as a reference for a controllable three-phase inverter at the input of the test bed, which reproduces these voltages on real hardware. The output of the inverter is connected to the rest of the test bed, which can assume a variety of distributed system topologies for many testing scenarios. In this way, the distributed power system under test in hardware can incorporate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and enable much more detailed system studies without the drawbacks of needing physical generator units, including improved safety, reduced costs, and the ability to scale while preserving the appropriate system dynamics. The thesis introduces the ideas behind generator emulation, explains the process and steps required to achieve it, demonstrates real results with verification of numerical values in real time, and shows that the approach is obtainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
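
    A drastically simplified flavor of the real-time model, three-phase back-EMF references computed from shaft speed at each time step, is sketched below; the constants are placeholders, and the actual thesis model (FPGA-based, with full machine dynamics) is far more detailed:

        import math

        K_E = 0.05          # back-EMF constant, V*s/rad (assumed)
        POLE_PAIRS = 4      # assumed machine geometry

        def stator_voltages(omega_mech, theta_elec):
            amp = K_E * omega_mech   # EMF amplitude scales with shaft speed
            return [amp * math.sin(theta_elec - k * 2 * math.pi / 3) for k in range(3)]

        dt, theta = 1e-4, 0.0
        omega = 2 * math.pi * 1000 / 60          # 1000 rpm mechanical
        for step in range(5):
            theta += POLE_PAIRS * omega * dt     # electrical angle advances
            va, vb, vc = stator_voltages(omega, theta)
            # These phase voltages would be handed to the inverter as references.
            print(f"{step}: {va:+.2f} {vb:+.2f} {vc:+.2f}")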

  3. Communication Simulations for Power System Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, Jason C.; Ciraci, Selim; Daily, Jeffrey A.

    2013-05-29

    New smart grid technologies and concepts, such as dynamic pricing, demand response, dynamic state estimation, and wide area monitoring, protection, and control, are expected to require considerable communication resources. As the cost of retrofit can be high, future power grids will require the integration of high-speed, secure connections with legacy communication systems, while still providing adequate system control and security. While considerable work has been performed to create co-simulators for the power domain with load models and market operations, limited work has been performed in integrating communications directly into a power domain solver. The simulation of communication and power systems will become more important as the two systems become more inter-related. This paper discusses ongoing work at Pacific Northwest National Laboratory to create a flexible, high-speed power and communication system co-simulator for smart grid applications. The framework for the software is described, including architecture considerations for modular, high performance computing and large-scale scalability (serialization, load balancing, partitioning, cross-platform support, etc.). The current simulator supports the ns-3 (telecommunications) and GridLAB-D (distribution systems) simulators. Ongoing and future work is described, including planned expansions for a traditional transmission solver. A test case using the co-simulator, utilizing a transactive demand response system created for the Olympic Peninsula and AEP gridSMART demonstrations and requiring two-way communication between distributed and centralized market devices, is used to demonstrate the value and intended purpose of the co-simulation environment.
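
    The coupling pattern, a time-stepped exchange in which control messages experience network delay before reaching the power solver, can be illustrated with both simulators stubbed out; this is a conceptual sketch with invented dynamics, not the GridLAB-D/ns-3 integration itself:

        def power_step(state, commands):
            # Stub distribution solver: load relaxes toward 1 MW plus commands.
            state["load_kw"] += sum(commands) - 0.1 * (state["load_kw"] - 1000.0)
            return {"price_signal": 0.05 + state["load_kw"] / 1e5}

        def comms_step(in_flight, latency_steps=2):
            # Stub network simulator: messages emerge after a fixed delay.
            in_flight.append([])
            return in_flight.pop(0) if len(in_flight) > latency_steps else []

        state, in_flight = {"load_kw": 1000.0}, []
        for step in range(10):
            delivered = comms_step(in_flight)
            measurements = power_step(state, delivered)
            # Market devices respond to the price signal; responses travel back
            # through the delayed communication network.
            response = -5.0 if measurements["price_signal"] > 0.06 else 2.0
            in_flight[-1].append(response)
            print(step, round(state["load_kw"], 1))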

  4. Fabrication and Testing of High-Speed Single-Rotor and Compound-Rotor Systems

    DTIC Science & Technology

    2016-05-04

    ...pitch link loads, hub loads, rotor wakes, and performance of high-speed single-rotor and compound-rotor systems... Final Report: Fabrication and Testing of High-Speed Single-Rotor and Compound-Rotor Systems (approved for public release, distribution unlimited; reporting period 14-Jul-2014 to 13-Jan-2016). The Alfred Gessow Rotorcraft Center has...

  5. Fabrication and Testing of High-Speed Single-Rotor and Compound-Rotor Systems

    DTIC Science & Technology

    2016-04-05

    ...pitch link loads, hub loads, rotor wakes, and performance of high-speed single-rotor and compound-rotor systems... Final Report: Fabrication and Testing of High-Speed Single-Rotor and Compound-Rotor Systems (approved for public release, distribution unlimited; reporting period 14-Jul-2014 to 13-Jan-2016). The Alfred Gessow Rotorcraft Center has...

  6. Distributed Optimal Consensus Over Resource Allocation Network and Its Application to Dynamical Economic Dispatch.

    PubMed

    Li, Chaojie; Yu, Xinghuo; Huang, Tingwen; He, Xing

    2018-06-01

    The resource allocation problem is studied and reformulated by a distributed interior point method via a logarithmic barrier. Facilitated by the graph Laplacian, a fully distributed continuous-time multiagent system is developed for solving the problem. Specifically, to avoid the high singularity of the logarithmic barrier at the boundary, an adaptive parameter switching strategy is introduced into this dynamical multiagent system. The convergence rate of the distributed algorithm is obtained. Moreover, a novel distributed primal-dual dynamical multiagent system is designed in a smart grid scenario to seek the saddle point of dynamical economic dispatch, which coincides with the optimal solution. The dual decomposition technique is applied to transform the optimization problem into easily solvable resource allocation subproblems with local inequality constraints. The good performance of the new dynamical systems is verified, respectively, by a numerical example and by simulations based on the IEEE six-bus test system.
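
    A toy, centralized rendering of the logarithmic-barrier idea (not the paper's distributed multiagent dynamics) is sketched below: quadratic generation costs are minimized subject to a total-demand constraint, with the barrier keeping allocations strictly positive:

        import numpy as np

        def allocate(a, b, demand, mu0=1.0, steps=200, lr=0.01):
            n = len(a)
            x = np.full(n, demand / n)         # feasible start on the hyperplane
            mu = mu0
            for _ in range(steps):
                grad = 2 * a * x + b - mu / x  # cost gradient plus barrier term
                grad -= grad.mean()            # project onto sum(x) = demand
                x = np.clip(x - lr * grad, 1e-6, None)
                x *= demand / x.sum()          # restore the balance constraint
                mu *= 0.97                     # shrink the barrier weight
            return x

        a = np.array([0.10, 0.05, 0.20])       # quadratic cost coefficients
        b = np.array([2.0, 3.0, 1.0])          # linear cost coefficients
        print(allocate(a, b, demand=30.0))     # cheaper units receive more load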

  7. Distributed Ship Navigation Control System Based on Dual Network

    NASA Astrophysics Data System (ADS)

    Yao, Ying; Lv, Wu

    2017-10-01

    The navigation system is critical to a ship's normal operation, and it contains many devices and sensors that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, as the related devices and sensors have developed, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network comprising a CAN bus and industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy based on synchronizing status by multicasting UDP messages that contain operation timestamps, making the system more efficient and reliable.
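
    The multicast status exchange is straightforward to sketch with standard sockets; the group address, port, and message fields below are arbitrary placeholders, not values from the paper:

        import json, socket, struct, time

        GROUP, PORT = "224.1.1.7", 5007

        def send_status(terminal_id, status):
            msg = json.dumps({"id": terminal_id, "status": status,
                              "timestamp": time.time()}).encode()
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
            sock.sendto(msg, (GROUP, PORT))

        def listen():
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            sock.bind(("", PORT))
            mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
            sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
            while True:   # each terminal applies the newest timestamp it has seen
                msg = json.loads(sock.recv(4096))
                print(msg["id"], msg["status"], msg["timestamp"])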

  8. Fission meter and neutron detection using poisson distribution comparison

    DOEpatents

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The method comprises counting neutrons from the unknown source and detecting excess grouped neutrons to identify fission in the unknown source. Comparison of the observed neutron count distribution with a Poisson distribution distinguishes fissile material from non-fissile material.
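
    The record does not spell out the statistic used; a standard way to detect "excess grouped neutrons" is the variance-to-mean (Feynman-Y style) test sketched below, where an ideal Poisson source gives a ratio of 1 and correlated fission multiplets push it higher (the threshold and data are illustrative):

        from statistics import mean, pvariance

        def excess_variance(gate_counts):
            m = mean(gate_counts)
            return pvariance(gate_counts, m) / m - 1.0  # 0 for ideal Poisson

        def classify(gate_counts, threshold=0.2):
            y = excess_variance(gate_counts)
            label = ("fissile (correlated multiplets)" if y > threshold
                     else "non-fissile (Poisson-like)")
            return label, y

        print(classify([3, 2, 4, 3, 2, 3, 4, 3]))    # near-Poisson gate counts
        print(classify([0, 9, 1, 8, 0, 7, 1, 10]))   # bursty, grouped counts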

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
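
    The scalability estimate mentioned at the end follows directly from Amdahl's law; the sketch below reproduces that arithmetic, with the parallel fraction as an assumption since the paper's fitted value is not given in this record:

        def amdahl_speedup(parallel_fraction, cores):
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

        # Project speedups for an almost fully parallel code on larger chips.
        for cores in (12, 24, 48, 96):
            print(cores, round(amdahl_speedup(0.98, cores), 1))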

  10. Research on and Application of the BH-HTC High Density Cementing Slurry System in the Tarim Region

    NASA Astrophysics Data System (ADS)

    Yuanhong, Song; Fei, Gao; Jianyong, He; Qixiang, Yang; Jiang, Yang; Xia, Liu

    2017-08-01

    A large salt-bed section is contained in the piedmont of the Tarim region, creating complex geological conditions. To address the cementing difficulties of high-pressure gas wells in this region, a high-density cement slurry system has been developed through a reasonable particle-size distribution and secondary weighting. The results of laboratory tests and field applications show that the high-density cementing slurry system is suitable for cementing in the Tarim region, as it performs well in slurry stability, gas-breakthrough control, fluidity, water loss, and strength.

  11. Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique for solving sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications, ordering significantly improves overall performance on both distributed and distributed shared-memory systems; that cache reuse may be more important than reducing communication; that it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and that a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
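    For reference, a minimal preconditioned CG in Python is sketched below, with a Jacobi preconditioner standing in for the paper's ILU(0) and none of its parallel orderings; the test matrix is a toy SPD system.

      # Generic textbook PCG (Jacobi-preconditioned), for orientation only.
      import numpy as np

      def pcg(A, b, tol=1e-8, max_iter=1000):
          x = np.zeros_like(b)
          M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
          r = b - A @ x
          z = M_inv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = M_inv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # Toy SPD tridiagonal system; diagonal dominance gives fast convergence.
      n = 100
      A = (np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1)
           + np.diag(np.full(n - 1, -1.0), -1))
      b = np.ones(n)
      print(np.linalg.norm(A @ pcg(A, b) - b))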

  12. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    [Indexed text consists of reference-list fragments only, including DDDAS-LS papers presented at ICCS 2015 (Reykjavík, Iceland) and "The Mahali project," IEEE Communications Magazine, vol. 52, pp. 111-133, Aug. 2014. DISTRIBUTION A: Distribution approved for public release.]

  13. Clinical experience with a high-performance ATM-connected DICOM archive for cardiology

    NASA Astrophysics Data System (ADS)

    Solomon, Harry P.

    1997-05-01

    A system to archive large image sets, such as cardiac cine runs, with near-real-time response must address several functional and performance issues, including efficient use of a high-performance network connection with standard protocols, an architecture which effectively integrates both short- and long-term mass storage devices, and a flexible data management policy which allows optimization of image distribution and retrieval strategies based on modality and site-specific operational use. Clinical experience with such an archive has allowed evaluation of these system issues and refinement of a traffic model for cardiac angiography.

  14. FPGA-based distributed computing microarchitecture for complex physical dynamics investigation.

    PubMed

    Borgese, Gianluca; Pace, Calogero; Pantano, Pietro; Bilotta, Eleonora

    2013-09-01

    In this paper, we present a distributed computing system, called DCMARK, aimed at solving the partial differential equations that underlie many fields of investigation, such as solid state physics, nuclear physics, and plasma physics. This distributed architecture is based on the cellular neural network (CNN) paradigm, which allows us to divide the solution of the differential equation system into many parallel integration operations executed by a custom multiprocessor system. We push the number of processors to the limit of one processor per equation. To test this idea, we implement DCMARK on a single FPGA, designing the single processor to minimize its hardware requirements and to obtain a large number of easily interconnected processors. This approach is particularly suited to studying the properties of 1-, 2-, and 3-D locally interconnected dynamical systems. To test the computing platform, we implement a 200-cell Korteweg-de Vries (KdV) equation solver and compare simulations conducted on a high-performance PC and on our system. Since our distributed architecture takes a constant computing time to solve the equation system, independently of the number of dynamical elements (cells) of the CNN array, it reduces computation time more than other similar systems in the literature. To ensure a high level of reconfigurability, we design a compact system-on-programmable-chip managed by a softcore processor, which controls the fast data/control communication between our system and a host PC. An intuitive graphical user interface allows us to change the calculation parameters and plot the results.
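    As a point of reference for the per-cell computation that DCMARK parallelizes, below is a generic method-of-lines KdV solver in Python with 200 cells; the discretization and parameters are illustrative and do not reproduce the paper's CNN templates.

      # Generic KdV solver, u_t = -6 u u_x - u_xxx, periodic central differences.
      import numpy as np

      N, L = 200, 50.0                 # 200 cells, matching the paper's test case
      dx = L / N
      x = np.arange(N) * dx

      def kdv_rhs(u):
          ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
          uxxx = (np.roll(u, -2) - 2 * np.roll(u, -1)
                  + 2 * np.roll(u, 1) - np.roll(u, 2)) / (2 * dx**3)
          return -6.0 * u * ux - uxxx

      def rk4_step(u, dt):
          k1 = kdv_rhs(u)
          k2 = kdv_rhs(u + 0.5 * dt * k1)
          k3 = kdv_rhs(u + 0.5 * dt * k2)
          k4 = kdv_rhs(u + dt * k3)
          return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      # One-soliton initial condition; small dt for the stiff third derivative.
      c = 1.0
      u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - L / 2))**2
      for _ in range(10_000):
          u = rk4_step(u, dt=1e-4)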

  15. Distributed fiber optic sensor-enhanced detection and prediction of shrinkage-induced delamination of ultra-high-performance concrete overlay

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda

    2017-08-01

    This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden increases, and their extension, in the spatial distribution of shrinkage-induced strains measured by the sensor, based on pulse pre-pump Brillouin optical time domain analysis. The distributed sensor is demonstrated to be effective in detecting delamination openings from microns to hundreds of microns. A three-dimensional finite element model with experimental material properties is proposed to understand the complete delamination process measured by the distributed sensor. The model is validated using the distributed sensor data. The finite element model, with cohesive elements for the overlay-substrate interface, can predict the complete delamination process.

  16. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System.

    PubMed

    Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. First, we derive the high-frequency transient mathematical model of the VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller, aiming to improve the system's dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct-current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method from the perspective of energy distribution in the time-frequency plane.

  17. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System

    PubMed Central

    Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. First, we derive the high-frequency transient mathematical model of the VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller, aiming to improve the system's dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct-current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method from the perspective of energy distribution in the time-frequency plane. PMID:26098556

  18. Concepts for Distributed Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Thomas, Randy; Saus, Joseph

    2007-01-01

    Gas turbine engines for aero-propulsion systems are found to be highly optimized machines after over 70 years of development. Still, additional performance improvements are sought while reduction in the overall cost is increasingly a driving factor. Control systems play a vitally important part in these metrics but are severely constrained by the operating environment and the consequences of system failure. The considerable challenges facing future engine control system design have been investigated. A preliminary analysis has been conducted of the potential benefits of distributed control architecture when applied to aero-engines. In particular, reductions in size, weight, and cost of the control system are possible. NASA is conducting research to further explore these benefits, with emphasis on the particular benefits enabled by high temperature electronics and an open-systems approach to standardized communications interfaces.

  19. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644

  20. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, B. M.

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, the new models may introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of the controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool to address and better understand the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  1. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi; Foudriat, E. C.

    1991-01-01

    A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is considerably small, its effects on performance could be significant.

  2. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-01-07

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage, and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering the equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability in the system.
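    A toy look-ahead dispatch in the spirit of this formulation can be written as a linear program; the costs, horizon, forecasts, and storage model below are assumptions, and an actual MPC would re-solve the problem at every step, applying only the first move.

      # Toy look-ahead dispatch (assumed linear form, not the paper's model).
      import numpy as np
      from scipy.optimize import linprog

      T = 6                                               # horizon steps (assumed)
      load = np.array([5.0, 6.0, 7.0, 6.5, 5.5, 5.0])     # MW, hypothetical
      wind = np.array([2.0, 4.0, 1.0, 3.0, 4.5, 2.5])     # MW forecast, hypothetical
      fuel_cost, p_max, e_max = 10.0, 8.0, 4.0            # diesel $/MWh, MW, MWh

      # Variables: diesel power g_t and storage discharge d_t (negative = charge).
      # Balance each step: g_t + d_t = load_t - wind_t; pay only for fuel.
      c = np.concatenate([np.full(T, fuel_cost), np.zeros(T)])
      A_eq = np.hstack([np.eye(T), np.eye(T)])
      b_eq = load - wind
      # Cumulative-discharge bounds keep the state of charge in [0, e_max],
      # starting half full.
      cum = np.tril(np.ones((T, T)))
      A_ub = np.hstack([np.zeros((2 * T, T)), np.vstack([cum, -cum])])
      b_ub = np.full(2 * T, e_max / 2)
      bounds = [(0, p_max)] * T + [(-e_max, e_max)] * T
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
      print(res.x[:T].round(2))   # diesel schedule; MPC applies only step one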

  3. Model Predictive Control-based Optimal Coordination of Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming

    2013-04-03

    Distributed energy resources, such as renewable energy resources (wind, solar), energy storage, and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering the equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties and variability in the system.

  4. Distributed Control of Turbofan Engines

    DTIC Science & Technology

    2009-08-01

    performance of the engine. Thus the Full Authority Digital Engine Controller (FADEC) still remains the central arbiter of the engine's dynamic behavior...instance, if the control laws are not distributed the dependence on the FADEC remains high, and system reliability can only be insured through many...if distributed computing is used at the local level and only coordinated by the FADEC. Such an architecture must be studied in the context of noisy

  5. Implementation of system intelligence in a 3-tier telemedicine/PACS hierarchical storage management system

    NASA Astrophysics Data System (ADS)

    Chao, Woodrew; Ho, Bruce K. T.; Chao, John T.; Sadri, Reza M.; Huang, Lu J.; Taira, Ricky K.

    1995-05-01

    Our tele-medicine/PACS archive system is based on a three-tier distributed hierarchical architecture, including magnetic disk farms, an optical jukebox, and tape jukebox sub-systems. The hierarchical storage management (HSM) architecture, built around a low-cost, high-performance platform [personal computers (PCs) and Microsoft Windows NT], presents a very scalable and distributed solution ideal for meeting the needs of client/server environments such as tele-medicine, tele-radiology, and PACS. These image-based systems typically require storage capacities mirroring those of film-based technology (multi-terabyte, with 10+ years of storage) and patient data retrieval times at near-on-line performance, as demanded by radiologists. With the scalable architecture, storage requirements can be easily configured to meet the needs of a small clinic (multi-gigabyte) or a major hospital (multi-terabyte). The patient data retrieval performance requirement was achieved by employing system intelligence to manage migration and caching of archived data. Relevant information from HIS/RIS triggers prefetching of data whenever possible, based on simple rules. System intelligence embedded in the migration manager allows the clustering of patient data onto a single tape during data migration from optical to tape media. Clustering of patient data on the same tape eliminates multiple tape loadings and the associated seek time during patient data retrieval. Optimal tape performance can then be achieved by utilizing the tape drives' high-performance data-streaming capabilities, thereby reducing the data retrieval delays typically associated with streaming tape devices.

  6. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring

    PubMed Central

    Gharavi, Hamid; Hu, Bin

    2018-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation, we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network. PMID:29503505

  7. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    PubMed

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation, we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.

  8. Open-source framework for power system transmission and distribution dynamics co-simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Fan, Rui; Daily, Jeff

    The promise of the smart grid entails more interactions between the transmission and distribution networks, and there is an immediate need for tools to provide the comprehensive modelling and simulation required to integrate operations at both transmission and distribution levels. Existing electromagnetic transient simulators can perform simulations with integration of transmission and distribution systems, but the computational burden is high for large-scale system analysis. For transient stability analysis, currently there are only separate tools for simulating transient dynamics of the transmission and distribution systems. In this paper, we introduce an open-source co-simulation framework, the "Framework for Network Co-Simulation" (FNCS), together with a decoupled simulation approach that links existing transmission and distribution dynamic simulators through FNCS. FNCS is a middleware interface and framework that manages the interaction and synchronization of the transmission and distribution simulators. Preliminary testing results show the validity and capability of the proposed open-source co-simulation framework and the decoupled co-simulation methodology.
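    Conceptually, the decoupled exchange alternates the two solvers each time step; the Python sketch below is illustrative structure only, with toy stand-in models, and none of the actual FNCS API appears in it.

      # Illustrative decoupled T&D exchange (not the FNCS API).
      class Feeder:
          """Stand-in for a distribution simulator instance."""
          def __init__(self, bus, load):
              self.bus, self.load = bus, load
          def solve(self, t, voltage):
              return self.load * (2.0 - voltage)   # toy voltage-dependent load

      class Transmission:
          """Stand-in for the transmission dynamics simulator."""
          def solve(self, t, bus_loads):
              return {bus: 1.05 - 0.01 * p for bus, p in bus_loads.items()}

      def cosimulate(transmission, feeders, steps, dt):
          for k in range(steps):
              t = k * dt
              # Transmission solves with the feeders' last aggregate loads...
              voltages = transmission.solve(t, {f.bus: f.load for f in feeders})
              # ...then each feeder re-solves under its boundary-bus voltage.
              for f in feeders:
                  f.load = f.solve(t, voltages[f.bus])

      cosimulate(Transmission(), [Feeder("b1", 3.0), Feeder("b2", 5.0)], 10, 0.1)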

  9. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
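    To fix ideas, a condensed PySpark sketch of the sorted-merge pattern follows; the file paths, sample names, and simplified parsing are assumptions, and the paper's optimized schemas add header handling, load balancing, and multiple output formats not shown here.

      # Condensed sorted-merge sketch on Spark (assumed schema, simplified parsing).
      from pyspark import SparkContext

      CHROM_ORDER = {str(i): i for i in range(1, 23)}
      CHROM_ORDER.update({"X": 23, "Y": 24})

      def parse(line, sample):
          # VCF body line -> ((chrom_rank, pos), (sample, fields))
          f = line.split("\t")
          return ((CHROM_ORDER[f[0].lstrip("chr")], int(f[1])), (sample, f))

      sc = SparkContext(appName="vcf-sorted-merge")
      rdds = [
          sc.textFile(path)
            .filter(lambda l: not l.startswith("#"))        # drop VCF headers
            .map(lambda l, s=sample: parse(l, s))
          for sample, path in [("s1", "s1.vcf"), ("s2", "s2.vcf")]  # hypothetical
      ]
      merged = (sc.union(rdds)
                  .groupByKey()      # gather all samples at each genomic locus
                  .sortByKey())      # global sort by (chromosome, position)
      merged.saveAsTextFile("merged_out")   # hypothetical output path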

  10. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  11. Voltage-Load Sensitivity Matrix Based Demand Response for Voltage Control in High Solar Penetration Distribution Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiangqi; Wang, Jiyu; Mulcahy, David

    This paper presents a voltage-load sensitivity matrix (VLSM) based voltage control method that deploys demand response resources to control voltage in high-solar-penetration distribution feeders. The IEEE 123-bus system in OpenDSS is used to test the performance of the preliminary VLSM-based voltage control approach. A load disaggregation process is applied to disaggregate the total load profile at the feeder head to each load node along the feeder, so that loads are modeled at the residential house level. Measured solar generation profiles are used in the simulation to model the impact of solar power on distribution feeder voltage profiles. Different case studies involving various PV penetration levels and installation locations have been performed. Simulation results show that the VLSM algorithm meets the voltage control requirements and is an effective voltage control strategy.
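    The core VLSM computation can be sketched in a few lines of Python; the two-node sensitivity values below are hypothetical stand-ins for entries that would be derived from the IEEE 123-bus power-flow model.

      # VLSM sketch: S[i, j] is the voltage drop at node i per kW of load
      # added at node j (values hypothetical, obtained in practice by
      # perturbing loads in a power-flow solver).
      import numpy as np

      S = np.array([[0.004, 0.002],
                    [0.002, 0.005]])          # p.u. volts per kW, toy 2-node case
      v_now = np.array([1.06, 1.07])          # overvoltage under high solar output
      v_target = np.array([1.05, 1.05])

      # Demand-response load increase dP that pulls voltages down to target:
      # solve S @ dP = v_now - v_target in the least-squares sense.
      dP, *_ = np.linalg.lstsq(S, v_now - v_target, rcond=None)
      print(dP)   # kW of demand response to deploy at each node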

  12. Feasibility of introducing ferromagnetic materials to onboard bulk high-Tc superconductors to enhance the performance of present maglev systems

    NASA Astrophysics Data System (ADS)

    Deng, Zigang; Wang, Jiasu; Zheng, Jun; Zhang, Ya; Wang, Suyu

    2013-02-01

    Performance improvement is a long-term research task in promoting the practical application of promising high-temperature superconducting (HTS) magnetic levitation (maglev) vehicle technologies. We studied the feasibility of enhancing the performance of present HTS maglev systems by introducing ferromagnetic materials to onboard bulk superconductors. The principle is to exploit the high magnetic permeability of ferromagnetic materials to alter the flux distribution of the permanent magnet guideway and thereby enhance the magnetic field density at the position of the bulk superconductors. Ferromagnetic iron plates were added to the upper surface of bulk superconductors, and their geometric and positioning effects on the maglev performance were investigated experimentally. Results show that the guidance performance (stability) was greatly enhanced for a particular setup compared to the present maglev system, which is helpful in applications where large guidance forces are needed, such as maglev tracks with sharp curves.

  13. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D.G

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  14. Enhanced High Performance Power Compensation Methodology by IPFC Using PIGBT-IDVR

    PubMed Central

    Arumugom, Subramanian; Rajaram, Marimuthu

    2015-01-01

    Currently, power systems are controlled without high-speed control and with frequent switching operations, resulting in slow response compared with static electronic devices. Among the various power interruptions in power supply systems, voltage dips play a central role in causing disruption. The dynamic voltage restorer (DVR) is a voltage-control-based process that compensates for line transients in the distributed system. To overcome these issues and achieve higher speed, a new methodology called the Parallel IGBT-Based Interline Dynamic Voltage Restorer (PIGBT-IDVR) is proposed, which focuses on the dynamic processing of energy reloads in common dc-linked energy storage with less adaptive transition. The interline power flow controller (IPFC) scheme is employed to manage power transmission between the lines, and the restorer method controls the reactive power in the individual lines. The proposed methodology avoids failures of the distributed system and provides better performance than existing methodologies. PMID:26613101

  15. Measurements over distributed high performance computing and storage systems

    NASA Technical Reports Server (NTRS)

    Williams, Elizabeth; Myers, Tom

    1993-01-01

    A strawman proposal is given for a framework for presenting a common set of metrics for supercomputers, workstations, file servers, mass storage systems, and the networks that interconnect them. Production control and database systems are also included. Though other applications and third-party software systems are not addressed, it is important to measure them as well.

  16. Event parallelism: Distributed memory parallel computing for high energy physics experiments

    NASA Astrophysics Data System (ADS)

    Nash, Thomas

    1989-12-01

    This paper describes the present and expected future development of distributed memory parallel computers for high energy physics experiments. It covers the use of event-parallel microprocessor farms, particularly at Fermilab, including both ACP multiprocessors and farms of MicroVAXes. These systems have proven very cost effective in the past. A case is made for moving to the more open environment of UNIX and RISC processors. The 2nd Generation ACP Multiprocessor System, which is based on powerful RISC systems, is described. Given the promise of still more extraordinary increases in processor performance, a new emphasis on point-to-point, rather than bussed, communication will be required. Developments in this direction are described.

  17. Design of Instantaneous High Power Supply System with power distribution management for portable military devices

    NASA Astrophysics Data System (ADS)

    Kwak, Kiho; Kwak, Dongmin; Yoon, Joohong

    2015-08-01

    A design for an Instantaneous High Power Supply System (IHPSS) with power distribution management (PDM) for portable military devices is presented. The system includes a power board and a hybrid battery that can not only supply instantaneous high power but also maintain stable operation at a critically low temperature (-30 °C). Power leakage and battery overcharge are effectively prevented by the optimal PDM. The performance of the proposed system, under the required pulse loads and the operating conditions of a Korean Advanced Combat Rifle employed in the battlefield, is modeled with simulations and verified experimentally. The system with the IHPSS charged the fuse setter to a 1.7-times-higher voltage (8.6 V) than the system without it (5.4 V) under the pulse discharging rate (1 A at 0.5 duty, 1 ms) for 500 ms.

  18. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1987-01-01

    Performance analysis was begun on the Ada implementations. The goal is to supply the system designer with tools that will allow a rational decision to be made, early in the design cycle, about whether a particular implementation can support a given application. Primary activities were: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; preparation and presentation of a paper at the 1987 Washington, DC, Ada Symposium; development of a refined approach to recovery that is presently being applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  19. A model-based analysis of extinction ratio effects on phase-OTDR distributed acoustic sensing system performance

    NASA Astrophysics Data System (ADS)

    Aktas, Metin; Maral, Hakan; Akgun, Toygar

    2018-02-01

    Extinction ratio is an inherent limiting factor that has a direct effect on the detection performance of phase-OTDR-based distributed acoustic sensing systems. In this work we present a model-based analysis of Rayleigh scattering to simulate the effects of extinction ratio on the received signal under varying signal acquisition scenarios and system parameters. These signal acquisition scenarios are constructed to represent typically observed cases, such as multiple vibration sources cluttered around the target vibration source to be detected, continuous-wave light sources with center frequency drift, varying fiber optic cable lengths, and varying ADC bit resolutions. Results show that an insufficient extinction ratio can produce a high optical noise floor and effectively hide the effects of elaborate system improvement efforts.

  20. Distributed PACS using distributed file system with hierarchical meta data servers.

    PubMed

    Hiroyasu, Tomoyuki; Minamitani, Yoshiyuki; Miki, Mitsunori; Yokouchi, Hisatake; Yoshimi, Masato

    2012-01-01

    In this research, we propose a new distributed PACS (Picture Archiving and Communication System) that can integrate the several PACSs existing at individual medical institutions. A conventional PACS stores DICOM files in a single database. In the proposed system, by contrast, each DICOM file is separated into meta data and image data, which are stored individually. With this mechanism, since operations need not always access the entire file, tasks such as finding files and changing titles can be performed at high speed. At the same time, because a distributed file system is utilized, image file access also achieves high speed and high fault tolerance. A further significant point of the introduced system is the simplicity of integrating several PACSs: only the meta data servers need to be integrated to construct the combined system. The system also scales file access with the number and size of files. On the other hand, because the meta data server is centralized, it is the weak point of the system. To address this defect, hierarchical meta data servers are introduced, which not only increases fault tolerance but also improves the scalability of file access. To evaluate the proposed design, a prototype system using Gfarm was implemented, and file search operation times under Gfarm and NFS were compared.

  1. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  2. The high-speed after-pulse measurement system for PMT

    NASA Astrophysics Data System (ADS)

    Cheng, Y.; Qian, S.; Ning, Z.; Xia, J.; Wang, Z.

    2018-05-01

    A system employing a desktop FADC has been developed to investigate the features of 8-inch Hamamatsu PMT R5912. The system stands out for its high-speed and informative results as a consequence of adopting fast waveform sampling technology. Recording the full waveforms allows us to perform pulse shape analysis. High-precision after-pulse time and charge distribution results are presented in this manuscript. Other characteristics of the photomultiplier tube, such as the gain of charge, dark rate and transit time spread, can be also obtained by this system.

  3. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.

  4. Spectral and spatial characterization of perfluorinated graded-index polymer optical fibers for the distribution of optical wireless communication cells.

    PubMed

    Hajjar, Hani Al; Montero, David S; Lallana, Pedro C; Vázquez, Carmen; Fracasso, Bruno

    2015-02-10

    In this paper, the characterization of a perfluorinated graded-index polymer optical fiber (PF-GIPOF) for a high-bitrate indoor optical wireless system is reported. PF-GIPOF is used here to interconnect different optical wireless access points that distribute optical free-space high-bitrate wireless communication cells. The PF-GIPOF channel is first studied in terms of transmission attenuation and frequency response and, in a second step, the spatial power profile distribution at the fiber output is analyzed. Both characterizations are performed under varying restricted mode launch conditions, enabling us to assess the transmission channel performance subject to potential connectorization errors within an environment where the end users may intervene by themselves on the home network infrastructure.

  5. Distributed Saturation

    NASA Technical Reports Server (NTRS)

    Chung, Ming-Ying; Ciardo, Gianfranco; Siminiceanu, Radu I.

    2007-01-01

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting memory consumption during symbolic state-space generation is the ability to perform garbage collection to free up the memory occupied by dead nodes. However, garbage collection over a NOW requires nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems using the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.

  6. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures

  7. Dataflow computing approach in high-speed digital simulation

    NASA Technical Reports Server (NTRS)

    Ercegovac, M. D.; Karplus, W. J.

    1984-01-01

    New computational tools and methodologies for the digital simulation of continuous systems were explored, and programmability and cost-effective performance in multiprocessor organizations for real-time simulation were investigated. The approach is based on functional-style languages and dataflow computing principles, which allow the natural representation of parallelism in algorithms and provide a suitable basis for the design of cost-effective, high-performance distributed systems. The objectives of this research are to: (1) perform a comparative evaluation of several existing dataflow languages and develop an experimental dataflow language suitable for real-time simulation using multiprocessor systems; (2) investigate the main issues that arise in the architecture and organization of dataflow multiprocessors for real-time simulation; and (3) develop and apply performance evaluation models in typical applications.

  8. Single-user MIMO versus multi-user MIMO in distributed antenna systems with limited feedback

    NASA Astrophysics Data System (ADS)

    Schwarz, Stefan; Heath, Robert W.; Rupp, Markus

    2013-12-01

    This article investigates the performance of cellular networks employing distributed antennas in addition to the central antennas of the base station. Distributed antennas are likely to be implemented using remote radio units, which is enabled by a low-latency, high-bandwidth dedicated link to the base station. This facilitates coherent transmission from potentially all available antennas at the same time. Such a distributed antenna system (DAS) is an effective way to deal with path loss and large-scale fading in cellular systems. DAS can apply precoding across multiple transmission points to implement single-user MIMO (SU-MIMO) and multi-user MIMO (MU-MIMO) transmission. The throughput performance of various SU-MIMO and MU-MIMO transmission strategies is investigated in this article, employing a Long-Term Evolution (LTE) standard-compliant simulation framework. The previously theoretically established cell-capacity improvement of MU-MIMO in comparison to SU-MIMO in DASs is confirmed under the practical constraints imposed by the LTE standard, even under the assumption of imperfect channel state information (CSI) at the base station. Because practical systems will use quantized feedback, the performance of different CSI feedback algorithms for DASs is investigated. It is shown that significant gains in the CSI quantization accuracy and in the throughput of especially MU-MIMO systems can be achieved with relatively simple quantization codebook constructions that exploit the available temporal correlation and channel gain differences.

  9. Application of high performance asynchronous socket communication in power distribution automation

    NASA Astrophysics Data System (ADS)

    Wang, Ziyu

    2017-05-01

    With the development of information technology and Internet technology, and the growing demand for electricity, the stable and reliable operation of the power system has been the goal of power grid workers. With the advent of the era of big data, power data will gradually become an important means of guaranteeing the safe and reliable operation of the power grid. A key pursuit in the electric power industry is therefore to receive the data transmitted by data acquisition devices efficiently and robustly, so that the power distribution automation system can execute sound decisions quickly. In this paper, some existing problems in power system communication are analysed, and, with the help of network technology, a set of solutions called Asynchronous Socket Technology is proposed for network communication requiring high concurrency and high throughput. The paper also looks ahead to the development direction of power distribution automation in the era of big data and artificial intelligence.
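    As one concrete shape such a solution might take, a minimal asyncio server in Python is sketched below; it is illustrative only (the port, framing, and handler are assumptions), showing how a single event loop can serve many concurrent acquisition devices.

      # Minimal high-concurrency acquisition server sketch (illustrative only).
      import asyncio

      def process_measurement(peer, data: bytes) -> None:
          print(f"{peer}: {len(data)} bytes")        # placeholder for real parsing

      async def handle_device(reader, writer):
          peer = writer.get_extra_info("peername")
          try:
              while data := await reader.read(4096): # non-blocking per-connection I/O
                  process_measurement(peer, data)    # hand off to decision logic
          finally:
              writer.close()
              await writer.wait_closed()

      async def main():
          server = await asyncio.start_server(handle_device, "0.0.0.0", 9000)
          async with server:
              await server.serve_forever()           # many devices, one thread

      asyncio.run(main())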

  10. Framework and Method for Controlling a Robotic System Using a Distributed Computer Network

    NASA Technical Reports Server (NTRS)

    Sanders, Adam M. (Inventor); Strawser, Philip A. (Inventor); Barajas, Leandro G. (Inventor); Permenter, Frank Noble (Inventor)

    2015-01-01

    A robotic system for performing an autonomous task includes a humanoid robot having a plurality of compliant robotic joints, actuators, and other integrated system devices that are controllable in response to control data from various control points, and having sensors for measuring feedback data at the control points. The system includes a multi-level distributed control framework (DCF) for controlling the integrated system components over multiple high-speed communication networks. The DCF has a plurality of first controllers each embedded in a respective one of the integrated system components, e.g., the robotic joints, a second controller coordinating the components via the first controllers, and a third controller for transmitting a signal commanding performance of the autonomous task to the second controller. The DCF virtually centralizes all of the control data and the feedback data in a single location to facilitate control of the robot across the multiple communication networks.

  11. A distributed infrastructure for publishing VO services: an implementation

    NASA Astrophysics Data System (ADS)

    Cepparo, Francesco; Scagnetto, Ivan; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    This contribution describes both the design and the implementation details of a new solution for publishing VO services, highlighting its maintainable, distributed, modular, and scalable architecture. Indeed, the new publisher is multithreaded and multiprocess. Multiple instances of the modules can run on different machines to ensure high performance and high availability, and this holds both for the interface modules of the services and for the back-end data-access ones. The system uses message passing to let its components communicate through an AMQP message broker, which can itself be distributed to provide better scalability and availability.
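    A minimal sketch of module communication through an AMQP broker follows, using Python and the pika client; the queue name and message body are hypothetical, and the publisher's real modules and exchange topology are not reproduced.

      # Interface module -> broker -> back-end worker (queue name hypothetical).
      import pika

      conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
      ch = conn.channel()
      ch.queue_declare(queue="vo.dataaccess", durable=True)

      # An interface module forwards a service request; whichever back-end
      # data-access instance consumes next handles it (competing consumers).
      ch.basic_publish(exchange="",
                       routing_key="vo.dataaccess",
                       body=b'{"service": "conesearch", "ra": 10.5, "dec": -2.1}')

      def on_request(channel, method, properties, body):
          print("back end got:", body)
          channel.basic_ack(delivery_tag=method.delivery_tag)

      ch.basic_consume(queue="vo.dataaccess", on_message_callback=on_request)
      ch.start_consuming()   # a back-end worker would run this loop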

  12. Experiences Integrating Transmission and Distribution Simulations for DERs with the Integrated Grid Modeling System (IGMS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hodge, Bri-Mathias

    2016-08-11

    This paper discusses the development of, approaches for, experiences with, and some results from a large-scale, high-performance-computer-based (HPC-based) co-simulation of electric power transmission and distribution systems using the Integrated Grid Modeling System (IGMS). IGMS was developed at the National Renewable Energy Laboratory (NREL) as a novel Independent System Operator (ISO)-to-appliance scale electric power system modeling platform that combines off-the-shelf tools to simultaneously model hundreds to thousands of distribution systems in co-simulation with detailed ISO markets, transmission power flows, and AGC-level reserve deployment. Lessons learned from the co-simulation architecture development are shared, along with a case study that explores the reactive power impacts of PV inverter voltage support on the bulk power system.

  13. Replication Strategy for Spatiotemporal Data Based on Distributed Caching System

    PubMed Central

    Xiong, Lian; Tao, Yang; Xu, Juan; Zhao, Lun

    2018-01-01

    The replica strategy in a distributed cache can effectively reduce user access delay and improve system performance. However, developing a replica strategy suitable for varied application scenarios is still quite challenging, owing to differences in user access behavior and preferences. In this paper, a replication strategy for spatiotemporal data (RSSD) based on a distributed caching system is proposed. By taking advantage of the spatiotemporal locality and correlation of user access, RSSD mines high-popularity and associated files from historical user access information, then generates replicas and selects appropriate cache nodes for placement. Experimental results show that the RSSD algorithm is simple and efficient, and succeeds in significantly reducing user access delay. PMID:29342897
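    The popularity-and-association mining step can be illustrated with a toy Python sketch; the access log and thresholds are hypothetical, and RSSD's actual spatiotemporal correlation measures and cache-node selection are not shown.

      # Toy popularity/association mining over an access history.
      from collections import Counter
      from itertools import combinations

      access_log = ["a", "b", "a", "c", "a", "b"]           # hypothetical history
      popularity = Counter(access_log)
      hot_files = [f for f, n in popularity.most_common() if n >= 2]

      # Files often requested in the same session get replicated together.
      sessions = [["a", "b"], ["a", "c"], ["a", "b"]]
      pairs = Counter(p for s in sessions for p in combinations(sorted(s), 2))
      associated = [pair for pair, n in pairs.items() if n >= 2]
      print(hot_files, associated)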

  14. Electric power processing, distribution and control for advanced aerospace vehicles.

    NASA Technical Reports Server (NTRS)

    Krausz, A.; Felch, J. L.

    1972-01-01

    The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.

  15. Status of a Power Processor for the Prometheus-1 Electric Propulsion System

    NASA Technical Reports Server (NTRS)

    Pinero, Luis R.; Hill, Gerald M.; Aulisio, Michael; Gerber, Scott; Griebeler, Elmer; Hewitt, Frank; Scina, Joseph

    2006-01-01

    NASA is developing technologies for nuclear electric propulsion for proposed deep space missions in support of the Exploration initiative under Project Prometheus. Electrical power produced by the combination of a fission-based power source and a Brayton power conversion and distribution system is used by a high-specific-impulse ion propulsion system to propel the spacecraft. The ion propulsion system includes the thruster, power processor, and propellant feed system. A power processor technology development effort was initiated under Project Prometheus to develop high-performance and lightweight power-processing technologies suitable for the application. This effort faces multiple challenges, including developing radiation-hardened power modules and converters with very high power capability and efficiency to minimize the impact on the power conversion and distribution system as well as the heat rejection system. This paper documents the design and test results of the first version of the beam supply, the design of a second version of the beam supply, and the design and test results of the ancillary supplies.

  16. Phase sensitive distributed vibration sensing based on ultraweak fiber Bragg grating array using double-pulse

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Wang, Feng; Zhang, Xuping; Zhang, Lin; Yuan, Quan; Liu, Yu; Yan, Zhijun

    2017-08-01

    A distributed vibration sensing technique using a double optical pulse, based on phase-sensitive optical time-domain reflectometry (ϕ-OTDR) and an ultraweak fiber Bragg grating (UWFBG) array, is proposed for the first time. The single-mode sensing fiber is integrated with the UWFBG array, which has a uniform spatial interval and ultraweak reflectivity. The relatively high reflectivity of the UWFBGs, compared with Rayleigh scattering, yields a high signal-to-noise ratio for the signal, which allows the system to achieve the maximum detectable frequency limited by the round-trip time of the probe pulse in the fiber. A corresponding experimental ϕ-OTDR system with a 4.5 km sensing fiber integrated with the UWFBG array was set up to evaluate the system performance. Distributed vibration sensing is successfully realized with a spatial resolution of 50 m. The sensing range of the vibration frequency covers from 3 Hz to 9 kHz.

  17. A Theoretical Solid Oxide Fuel Cell Model for System Controls and Stability Design

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Brinson, Thomas; Credle, Sydni; Xu, Ming

    2006-01-01

    As the aviation industry moves towards higher efficiency electrical power generation, all electric aircraft, or zero emissions and more quiet aircraft, fuel cells are sought as the technology that can deliver on these high expectations. The Hybrid Solid Oxide Fuel Cell system combines the fuel cell with a microturbine to obtain up to 70 percent cycle efficiency, and then distributes the electrical power to the loads via a power distribution system. The challenge is to understand the dynamics of this complex multi-discipline system, and design distributed controls that take the system through its operating conditions in a stable and safe manner while maintaining the system performance. This particular system is a power generation and distribution system and the fuel cell and microturbine model fidelity should be compatible with the dynamics of the power distribution system in order to allow proper stability and distributed controls design. A novel modeling approach is proposed for the fuel cell that will allow the fuel cell and the power system to be integrated and designed for stability, distributed controls, and other interface specifications. This investigation shows that for the fuel cell, the voltage characteristic should be modeled, but in addition, conservation equation dynamics, ion diffusion, charge transfer kinetics, and the electron flow inherent impedance should also be included.

  18. Power management and distribution system for a More-Electric Aircraft (MADMEL) -- Program status

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maldonado, M.A.; Shah, N.M.; Cleek, K.J.

    1995-12-31

    A number of technology breakthroughs in recent years have rekindled the concept of a more-electric aircraft. High-power solid-state switching devices, electrohydrostatic actuators (EHAs), electromechanical actuators (EMAs), and high-power generators are just a few examples of component developments that have made dramatic improvements in properties such as weight, size, power, and cost. However, these components cannot be applied piecemeal. A complete, and somewhat revolutionary, system design approach is needed to exploit the benefits that a more-electric aircraft can provide. A five-phase Power Management and Distribution System for a More-Electric Aircraft (MADMEL) program was awarded by the Air Force to the Northrop/Grumman Military Aircraft Division team in September 1991. The objective of the program is to design, develop, and demonstrate an advanced electrical power generation and distribution system for a more-electric aircraft (MEA). The MEA emphasizes the use of electrical power in place of hydraulic, pneumatic, and mechanical power to optimize the performance and life cycle cost of the aircraft. This paper presents an overview of the MADMEL program, a top-level summary of the program results, and the development and testing of major components to date. In the Phase 1 and Phase 2 studies, the electrical load requirements were established and the electrical power system architecture was defined for both near-term (NT, year 1996) and far-term (FT, year 2003) MEA applications. The detailed design and specification for the electrical power system (EPS), its interface with the Vehicle Management System, and the test set-up were developed under the recently completed Phase 3. The subsystem-level hardware fabrication and testing will be performed under the on-going Phase 4 activities. The overall system-level integration and testing will be performed in Phase 5.

  19. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data revealed to it. The aim of the multiagent system, on the other hand, is to train the SLFN at each agent to perform as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data.
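
    A toy sketch (hypothetical; Python/NumPy) of the core mechanism: agents share a fixed random ELM hidden layer, mix their output weights with neighbors (consensus), and take a local subgradient step on their own sample. The ring topology, step size, and target function are all invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        n_agents, n_hidden, dim = 4, 20, 3

        # Shared random hidden layer (standard ELM: drawn once, then fixed)
        W_in = rng.normal(size=(dim, n_hidden))
        b = rng.normal(size=n_hidden)
        hidden = lambda x: np.tanh(x @ W_in + b)

        # Ring communication graph with uniform mixing weights
        A = np.zeros((n_agents, n_agents))
        for i in range(n_agents):
            for j in ((i - 1) % n_agents, i, (i + 1) % n_agents):
                A[i, j] = 1.0 / 3.0

        target = lambda x: np.sin(x).sum(axis=-1)   # toy function to learn
        w = np.zeros((n_agents, n_hidden))          # output weights per agent
        mu = 0.05                                   # step size

        for step in range(2000):
            w = A @ w                               # consensus: mix neighbors
            for i in range(n_agents):               # each agent sees one sample
                x = rng.normal(size=(1, dim))
                h = hidden(x)
                err = (h @ w[i] - target(x))[0]
                w[i] -= mu * err * h[0]             # subgradient of squared loss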

  20. A Short-Term and High-Resolution System Load Forecasting Approach Using Support Vector Regression with Hybrid Parameters Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang

    This work proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution, utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameters-optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search area from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system.
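
    A compact sketch (hypothetical; Python with scikit-learn) of the two-step search: a coarse grid traverse narrows the search space, then a small hand-rolled PSO refines inside the narrowed box. Only (C, gamma) are tuned here, and the grid spacing, swarm settings, and synthetic data are illustrative, not the paper's values.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.datasets import make_regression
        from sklearn.model_selection import cross_val_score

        def cost(p, X, y):                      # p = (log10 C, log10 gamma)
            C, gamma = 10.0 ** p
            m = SVR(C=C, gamma=gamma)
            return -cross_val_score(m, X, y, cv=3,
                                    scoring='neg_mean_absolute_error').mean()

        def gta_then_pso(X, y, lo=(-2, -4), hi=(4, 1), n_part=10, iters=20):
            # Step 1 (GTA): coarse grid narrows the global space to a local box
            grid = [np.array([c, g]) for c in np.linspace(lo[0], hi[0], 7)
                                     for g in np.linspace(lo[1], hi[1], 6)]
            center = min(grid, key=lambda p: cost(p, X, y))
            lo2, hi2 = center - 0.5, center + 0.5

            # Step 2 (PSO): refine inside the narrowed box
            rng = np.random.default_rng(1)
            pos = rng.uniform(lo2, hi2, (n_part, 2))
            vel = np.zeros_like(pos)
            pbest = pos.copy()
            pcost = np.array([cost(p, X, y) for p in pos])
            gbest = pbest[pcost.argmin()].copy()
            for _ in range(iters):
                r1, r2 = rng.random((2, n_part, 1))
                vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
                pos = np.clip(pos + vel, lo2, hi2)
                c = np.array([cost(p, X, y) for p in pos])
                mask = c < pcost
                pbest[mask], pcost[mask] = pos[mask], c[mask]
                gbest = pbest[pcost.argmin()].copy()
            return 10.0 ** gbest                # best (C, gamma) found

        X, y = make_regression(n_samples=200, n_features=4, noise=5.0,
                               random_state=0)
        print(gta_then_pso(X, y))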

  1. Reprint of “Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS”

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2013-01-01

    The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) system creates a new option for transmitting power with multiple collection and distribution points for long-distance, bulk power transmission. It offers significant advantages over HVAC and conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources into existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on VSC-HVDC transmission technologies is planned for the Jeju power system, Korea. Before applying this system to the real power system on Jeju Island, a system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC converter connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  2. Performance analysis of a model-sized superconducting DC transmission system based VSC-HVDC transmission technologies using RTDS

    NASA Astrophysics Data System (ADS)

    Dinh, Minh-Chau; Ju, Chang-Hyeon; Kim, Sung-Kyu; Kim, Jin-Geun; Park, Minwon; Yu, In-Keun

    2012-08-01

    The combination of a high temperature superconducting DC power cable and a voltage source converter based HVDC (VSC-HVDC) system creates a new option for transmitting power with multiple collection and distribution points for long-distance, bulk power transmission. It offers significant advantages over HVAC and conventional HVDC transmission systems, and it is well suited for the grid integration of renewable energy sources into existing distribution or transmission systems. For this reason, a superconducting DC transmission system based on VSC-HVDC transmission technologies is planned for the Jeju power system, Korea. Before applying this system to the real power system on Jeju Island, a system analysis should be performed through a real-time test. In this paper, a model-sized superconducting VSC-HVDC system, which consists of a small model-sized VSC-HVDC converter connected to a 2 m YBCO HTS DC model cable, is implemented. The authors performed a real-time simulation that incorporates the model-sized superconducting VSC-HVDC system into the simulated Jeju power system using a Real Time Digital Simulator (RTDS). The performance of the superconducting VSC-HVDC system was verified on the proposed test platform, and the results are discussed in detail.

  3. High Voltage Distribution System (HVDS) as a better system compared to Low Voltage Distribution System (LVDS) applied at Medan city power network

    NASA Astrophysics Data System (ADS)

    Dinzi, R.; Hamonangan, TS; Fahmi, F.

    2018-02-01

    In the current distribution system, a large-capacity distribution transformer supplies loads at remote locations over a 220/380 V network, which is nowadays less common than a 20 kV network. This results in losses caused by non-optimal transformer placement that neglects load location, by poor consumer voltage profiles, and by large power losses along the feeder. This paper discusses how a high voltage distribution system (HVDS) can serve distribution networks better than the currently used low voltage distribution system (LVDS). The proposed reconfiguration replaces the single large-capacity distribution transformer with several smaller-capacity distribution transformers installed as close as possible to the loads. The use of a high voltage distribution system results in better voltage profiles and lower power losses. On the non-technical side, annual savings and shorter payback periods are further advantages of high voltage distribution systems.
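
    A back-of-the-envelope sketch (Python; all numbers illustrative, not from the paper) of why carrying power at medium voltage up to the load cuts losses: for a fixed delivered power the line current scales as 1/V, so I²R losses scale as 1/V².

        import math

        def feeder_loss_kw(p_kw, v_line, r_ohm_per_km, length_km, pf=0.9):
            # Three-phase feeder: per-phase current, then total I^2 R loss
            i = p_kw * 1e3 / (math.sqrt(3) * v_line * pf)
            return 3 * i**2 * (r_ohm_per_km * length_km) / 1e3

        load, r, length = 250.0, 0.5, 2.0              # kW, ohm/km, km
        print(feeder_loss_kw(load, 20e3, r, length))   # 20 kV: ~0.2 kW lost
        print(feeder_loss_kw(load, 400.0, r, length))  # 400 V: ~482 kW "lost"

        # The absurd low-voltage figure is the point: an LVDS run must be
        # kept short and thick, whereas HVDS can bring the 20 kV line to a
        # small transformer placed next to the load.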

  4. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experience of parallelizing the sequential implementation of the NAS benchmarks using compiler directives on an SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the user. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the performance is comparable with that of hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  5. System analysis for the Huntsville Operational Support Center distributed computer system

    NASA Technical Reports Server (NTRS)

    Ingels, F. M.; Mauldin, J.

    1984-01-01

    The Huntsville Operations Support Center (HOSC) is a distributed computer system used to provide real-time data acquisition, analysis, and display during NASA space missions and to perform simulation and study activities during non-mission times. The primary purpose is to provide a HOSC system simulation model that is used to investigate the effects of various HOSC system configurations. Such a model would be valuable in planning the future growth of HOSC and in ascertaining the effects of data rate variations, update table broadcasting, and smart display terminal data requirements on the HOSC HYPERchannel network system. A simulation model was developed in PASCAL, and results of the simulation model for various system configurations were obtained. A tutorial of the model is presented and the results of simulation runs are presented. Some very high data rate situations were simulated to observe the effects of the HYPERchannel switch-over from contention to priority mode under high channel loading.

  6. Data-Driven Residential Load Modeling and Validation in GridLAB-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gotseff, Peter; Lundstrom, Blake

    Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls, and hence impacts on the distribution system over a given time period. Unfortunately, the high time resolution DER source and load data required for model inputs is often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition, as well as realistic occupancy schedules. House model calibration and validation was performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
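
    A sketch (hypothetical; Python/SciPy) of the paper's four evaluation metrics applied to two 1-second load series. The sine-plus-noise "measured" and "modeled" profiles below are synthetic stand-ins for the transformer data.

        import numpy as np
        from scipy import signal

        def load_model_metrics(measured, modeled, dt_s=1.0):
            per_day = int(86400 / dt_s)
            daily_kwh = lambda x: x.reshape(-1, per_day).sum(axis=1) * dt_s / 3600.0
            _, p_meas = signal.welch(measured, fs=1.0 / dt_s)
            _, p_mod = signal.welch(modeled, fs=1.0 / dt_s)
            return {
                'daily_energy_kwh': (daily_kwh(measured).mean(),
                                     daily_kwh(modeled).mean()),
                'power_mean_std': ((measured.mean(), measured.std()),
                                   (modeled.mean(), modeled.std())),
                'psd_rms_gap': float(np.sqrt(np.mean((p_meas - p_mod) ** 2))),
                'load_shape_corr': float(np.corrcoef(measured, modeled)[0, 1]),
            }

        rng = np.random.default_rng(0)
        t = np.arange(2 * 86400)                      # two days at 1 s
        measured = 2 + np.sin(2 * np.pi * t / 86400) + 0.1 * rng.normal(size=t.size)
        modeled = 2 + np.sin(2 * np.pi * t / 86400 + 0.05) + 0.1 * rng.normal(size=t.size)
        print(load_model_metrics(measured, modeled))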

  7. Performance evaluation of FSO system using wavelength and time diversity over malaga turbulence channel with pointing errors

    NASA Astrophysics Data System (ADS)

    Balaji, K. A.; Prabu, K.

    2018-03-01

    There is an immense demand for high-bandwidth, high-data-rate systems, a demand met by wireless optical communication, or free space optics (FSO). FSO has therefore gained a pivotal role in research, with the added advantages of being cost-effective and offering licence-free, huge bandwidth. Unfortunately, the optical signal in free space suffers from irradiance and phase fluctuations due to atmospheric turbulence and pointing errors, which deteriorate the signal and degrade the performance of the communication system over longer distances, which is undesirable. In this paper, we consider a polarization shift keying (POLSK) system combined with wavelength and time diversity techniques over the Malaga (M) distribution to mitigate turbulence-induced fading. We derive closed-form mathematical expressions for estimating the system's outage probability and average bit error rate (BER). Ultimately, from the results we can infer that the wavelength and time diversity schemes enhance the system's performance.

  8. Developing an Integration Infrastructure for Distributed Engine Control Technologies

    NASA Technical Reports Server (NTRS)

    Culley, Dennis; Zinnecker, Alicia; Aretskin-Hariton, Eliot; Kratz, Jonathan

    2014-01-01

    Turbine engine control technology is poised to make the first revolutionary leap forward since the advent of full authority digital engine control in the mid-1980s. This change aims squarely at overcoming the physical constraints that have historically limited control system hardware on aero-engines to a federated architecture. Distributed control architecture allows complex analog interfaces existing between system elements and the control unit to be replaced by standardized digital interfaces. Embedded processing, enabled by high temperature electronics, provides for digitization of signals at the source and network communications resulting in a modular system at the hardware level. While this scheme simplifies the physical integration of the system, its complexity appears in other ways. In fact, integration now becomes a shared responsibility among suppliers and system integrators. While these are the most obvious changes, there are additional concerns about performance, reliability, and failure modes due to distributed architecture that warrant detailed study. This paper describes the development of a new facility intended to address the many challenges of the underlying technologies of distributed control. The facility is capable of performing both simulation and hardware studies ranging from component to system level complexity. Its modular and hierarchical structure allows the user to focus their interaction on specific areas of interest.

  9. Joint Force Pre-Deployment Training: An Initial Analysis and Product Definition (Strategic Mobility 21: IT Planning Document for APS Demonstration Document (Task 3.7)

    DTIC Science & Technology

    2010-04-13

    Office of Naval Research. Distribution Statement A: approved for public release; distribution is unlimited.

  10. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide-area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.

  11. SPS phase control system performance via analytical simulation

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.; Kantak, A. V.; Chie, C. M.; Booth, R. W. D.

    1979-01-01

    A solar power satellite transmission system which incorporates automatic beam forming, steering, and phase control is discussed. The phase control concept centers around the notion of an active retrodirective phased array as a means of pointing the beam at the appropriate spot on Earth. The transmitting antenna (spacetenna) directs the high power beam so that it focuses on the ground-based receiving antenna (rectenna). A combination of analysis and computerized simulation was conducted to determine the far-field performance of the reference distribution system and of the beam forming and microwave power generating systems.

  12. Towards building high performance medical image management system for clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Fusheng; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel

    2011-03-01

    Medical image based biomarkers are being established for therapeutic cancer clinical trials, where image assessment is among the essential tasks. Large-scale image assessment is often performed by a large group of experts who retrieve images from a centralized image repository to workstations to mark up and annotate the images. In such an environment, it is critical to provide a high performance image management system that supports efficient concurrent image retrievals in a distributed environment. There are several major challenges: high throughput of large-scale image data over the Internet from the server for multiple concurrent client users, efficient communication protocols for transporting data, and effective management of versioning of data for audit trails. We study the major bottlenecks of such a system, and propose and evaluate a solution that uses a hybrid image store with solid state drives and hard disk drives, RESTful Web Services based protocols for exchanging image data, and a database-based versioning scheme for efficient archiving of image revision history. Our experiments show promising results for our methods, and our work provides a guideline for building enterprise-level high performance medical image management systems.

  13. Experimental investigation on pressurization performance of cryogenic tank during high-temperature helium pressurization process

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Yanzhong, Li; Yonghua, Jin; Yuan, Ma

    2015-03-01

    Sufficient knowledge of the thermal performance and pressurization behavior of cryogenic tanks during the rocket launching period is important to the design and optimization of a pressurization system. In this paper, ground experiments were performed with liquid oxygen (LO2) as the cryogenic propellant, high-temperature helium exceeding 600 K as the pressurant gas, and a radial diffuser and an anti-cone diffuser, respectively, at the tank inlet. The pressurant gas requirements, axial and radial temperature distributions, and energy distributions inside the propellant tank were obtained and analyzed to evaluate the comprehensive performance of the pressurization system. It was found that the pressurization system with high-temperature helium as the pressurant gas worked well: the tank pressure was controlled within a specified range and a stable liquid discharge rate was achieved. For the radial diffuser case, the injected gas had a direct impact on the tank inner wall. The severe gas-wall heat transfer resulted in about 59% of the total input energy being absorbed by the tank wall. For the pressurization case with the anti-cone diffuser, the direct impact of high-temperature gas flowing toward the liquid surface resulted in a great deal of energy being transferred to the liquid propellant, with the percentage reaching up to 38%. Moreover, both cases showed that the proportion of energy left in the ullage to the total input energy was quite small, only about 22-24%. This may indicate that a more efficient diffuser should be developed to improve the pressurization effect. Generally, the present experimental results are beneficial to the design and optimization of pressurization systems that use high-temperature gas.

  14. Application of new type of distributed multimedia databases to networked electronic museum

    NASA Astrophysics Data System (ADS)

    Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki

    1999-01-01

    Recently, various kinds of multimedia application systems have been actively developed, based on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and executes a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval. Retrieval can be performed effectively by cooperative processing among multiple domains. Communication languages and protocols are also defined in the system; these are used in every communication action in the system. A language interpreter in each machine translates the communication language into the internal language used in that machine. Using the language interpreter, such internal modules as the DBMS and user interface modules can be selected freely. A concept of 'content-set' is also introduced: a content-set is defined as a package of mutually related contents, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations of the contents in the content-set. In order to verify the functions of the proposed system, a networked electronic museum was built experimentally. The results of this experiment indicate that the proposed system can effectively retrieve the target contents under the control of a number of distributed domains, and that the system works effectively even when it becomes large.

  15. High performance reconciliation for continuous-variable quantum key distribution with LDPC code

    NASA Astrophysics Data System (ADS)

    Lin, Dakai; Huang, Duan; Huang, Peng; Peng, Jinye; Zeng, Guihua

    2015-03-01

    Reconciliation is a significant procedure in a continuous-variable quantum key distribution (CV-QKD) system. It is employed to extract a secure secret key from the string that results from transmission over the quantum channel between two users. However, the efficiency and speed of previous reconciliation algorithms are low. These problems limit the secure communication distance and the secure key rate of CV-QKD systems. In this paper, we propose a high-speed reconciliation algorithm that employs a well-structured decoding scheme based on low density parity-check (LDPC) codes. The complexity of the proposed algorithm is significantly reduced. By using a graphics processing unit (GPU), our method can reach a reconciliation speed of 25 Mb/s for a CV-QKD system, which is currently the highest level and paves the way to high-speed CV-QKD.
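
    A schematic sketch (Python/NumPy) of the shape of syndrome-based reconciliation, using a hard-decision bit-flipping decoder and the tiny (7,4) Hamming parity-check matrix for illustration; real CV-QKD reconciliation uses large LDPC codes with soft-decision, GPU-accelerated decoding, so this is only the protocol outline.

        import numpy as np

        def syndrome_decode(H, y, s_ref, max_iter=20):
            # Adjust y until H @ y (mod 2) matches the reference syndrome,
            # flipping the single most-implicated bit per round.
            y = y.copy()
            for _ in range(max_iter):
                d = (H @ y + s_ref) % 2          # which parity checks disagree
                if not d.any():
                    return y                     # syndromes now match
                y[np.argmax(H.T @ d)] ^= 1       # flip the worst offender
            return y

        # Bob publishes the syndrome of his string x; Alice corrects her
        # noisy copy y so both ends hold the same key material.
        H = np.array([[0, 0, 0, 1, 1, 1, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [1, 0, 1, 0, 1, 0, 1]])
        x = np.array([1, 0, 1, 1, 0, 0, 1])
        s = (H @ x) % 2                          # sent over the public channel
        y = x.copy(); y[4] ^= 1                  # one discrepancy at Alice
        print((syndrome_decode(H, y, s) == x).all())   # -> True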

  16. General formula for the incidence factor of a solar heliostat receiver system.

    PubMed

    Wei, L Y

    1980-09-15

    A general formula is derived for the effective incidence factor of an array of heliostat mirrors for solar power collection. The formula can be greatly simplified for arrays of high symmetry and offers quick computation of the performance of the array. It shows clearly how the mirror distribution and locations affect the overall performance and thus provides useful guidance for the design of a solar heliostat receiver system.
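
    A small sketch (hypothetical; Python/NumPy) of the geometric core of an incidence-factor computation: each mirror's normal must bisect the sun direction and the mirror-to-receiver direction, so its effective aperture scales with cos θ_i = n·s. The field layout, sun vector, and tower position are invented.

        import numpy as np

        def cosine_factor(mirror_pos, sun_dir, receiver_pos):
            s = sun_dir / np.linalg.norm(sun_dir)       # unit vector to sun
            t = receiver_pos - mirror_pos               # toward receiver
            t = t / np.linalg.norm(t)
            n = (s + t) / np.linalg.norm(s + t)         # bisecting mirror normal
            return float(n @ s)                         # cos of incidence angle

        # Field-average incidence factor: mean over mirror locations
        field = np.array([[x, y, 0.0] for x in range(-50, 51, 10)
                                      for y in range(20, 121, 10)])
        sun = np.array([0.0, -0.3, 1.0])
        tower = np.array([0.0, 0.0, 80.0])
        print(np.mean([cosine_factor(m, sun, tower) for m in field]))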

  17. Stability and performance of propulsion control systems with distributed control architectures and failures

    NASA Astrophysics Data System (ADS)

    Belapurkar, Rohit K.

    Future aircraft engine control systems will be based on a distributed architecture, in which, the sensors and actuators will be connected to the Full Authority Digital Engine Control (FADEC) through an engine area network. Distributed engine control architecture will allow the implementation of advanced, active control techniques along with achieving weight reduction, improvement in performance and lower life cycle cost. The performance of a distributed engine control system is predominantly dependent on the performance of the communication network. Due to the serial data transmission policy, network-induced time delays and sampling jitter are introduced between the sensor/actuator nodes and the distributed FADEC. Communication network faults and transient node failures may result in data dropouts, which may not only degrade the control system performance but may even destabilize the engine control system. Three different architectures for a turbine engine control system based on a distributed framework are presented. A partially distributed control system for a turbo-shaft engine is designed based on ARINC 825 communication protocol. Stability conditions and control design methodology are developed for the proposed partially distributed turbo-shaft engine control system to guarantee the desired performance under the presence of network-induced time delay and random data loss due to transient sensor/actuator failures. A fault tolerant control design methodology is proposed to benefit from the availability of an additional system bandwidth and from the broadcast feature of the data network. It is shown that a reconfigurable fault tolerant control design can help to reduce the performance degradation in presence of node failures. A T-700 turbo-shaft engine model is used to validate the proposed control methodology based on both single input and multiple-input multiple-output control design techniques.

  18. Current state and future direction of computer systems at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  19. Building America Case Study: Standard- Versus High-Velocity Air Distribution in High-Performance Townhomes, Denver, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    IBACOS investigated the performance of a small-diameter high velocity heat pump system compared to a conventional system in a new-construction triplex townhouse. A ductless heat pump system also was to be installed for comparison, but the homebuyer backed out because of aesthetic concerns about that system. In total, two buildings, having identical solar orientation and comprising six townhomes, were monitored for comfort and energy performance. Results show that the small-diameter system provides more uniform temperatures from floor to floor in the three-story townhome. No clear energy consumption benefit was observed from either system. The builder is continuing to explore the small-diameter system as its new standard system to provide better comfort and indoor air quality. The homebuilder also explored the possibility of shifting its townhome product to meet the U.S. Department of Energy Challenge Home National Program Requirements. Ultimately, the builder decided that adoption of these practices would be too disruptive midstream in the construction cycle. However, the townhomes met the ENERGY STAR Version 3.0 program requirements.

  20. Performance optimization of apodized FBG-based temperature sensors in single and quasi-distributed DWDM systems with new and different apodization profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Nazmi A.; Ali, Taha A., E-mail: Taha25@gmail.com; Aly, Moustafa H.

    2013-12-15

    In this work, different FBG temperature sensors are designed and evaluated with various apodization profiles. Evaluation is done under a wide range of controlling design parameters like sensor length and refractive index modulation amplitude, targeting a remarkable temperature sensing performance. New judgment techniques are introduced such as apodization window roll-off rate, asymptotic sidelobe (SL) decay level, number of SLs, and average SL level (SLav). Evaluation techniques like reflectivity, Full Width at Half Maximum (FWHM), and Sidelobe Suppression Ratio (SLSR) are also used. A “New” apodization function is proposed, which achieves better performance like asymptotic decay of 18.4 dB/nm, high SLSR of 60 dB, high channel isolation of 57.9 dB, and narrow FWHM less than 0.15 nm. For a single accurate temperature sensor measurement in extensive noisy environment, optimum results are obtained by the Nuttall apodization profile and the new apodization function, which have remarkable SLSR. For a quasi-distributed FBG temperature sensor the Barthann and the new apodization profiles obtain optimum results. Barthann achieves a high asymptotic decay of 40 dB/nm, a narrow FWHM (less than 25 GHZ), a very low SLav of −45.3 dB, high isolation of 44.6 dB, and a high SLSR of 35 dB. The new apodization function achieves narrow FWHM of 0.177 nm, very low SL of −60.1, very low SLav of −63.6 dB, and very high SLSR of −57.7 dB. A study is performed on including an unapodized sensor among apodized sensors in a quasi-distributed sensing system. Finally, an isolation examination is performed on all the discussed apodizations and a linear relation between temperature and the Bragg wavelength shift is observed experimentally and matched with the simulated results.

  1. Performance optimization of apodized FBG-based temperature sensors in single and quasi-distributed DWDM systems with new and different apodization profiles

    NASA Astrophysics Data System (ADS)

    Mohammed, Nazmi A.; Ali, Taha A.; Aly, Moustafa H.

    2013-12-01

    In this work, different FBG temperature sensors are designed and evaluated with various apodization profiles. Evaluation is done under a wide range of controlling design parameters like sensor length and refractive index modulation amplitude, targeting a remarkable temperature sensing performance. New judgment techniques are introduced such as apodization window roll-off rate, asymptotic sidelobe (SL) decay level, number of SLs, and average SL level (SLav). Evaluation techniques like reflectivity, Full width at Half Maximum (FWHM), and Sidelobe Suppression Ratio (SLSR) are also used. A "New" apodization function is proposed, which achieves better performance like asymptotic decay of 18.4 dB/nm, high SLSR of 60 dB, high channel isolation of 57.9 dB, and narrow FWHM less than 0.15 nm. For a single accurate temperature sensor measurement in extensive noisy environment, optimum results are obtained by the Nuttall apodization profile and the new apodization function, which have remarkable SLSR. For a quasi-distributed FBG temperature sensor the Barthann and the new apodization profiles obtain optimum results. Barthann achieves a high asymptotic decay of 40 dB/nm, a narrow FWHM (less than 25 GHZ), a very low SLav of -45.3 dB, high isolation of 44.6 dB, and a high SLSR of 35 dB. The new apodization function achieves narrow FWHM of 0.177 nm, very low SL of -60.1, very low SLav of -63.6 dB, and very high SLSR of -57.7 dB. A study is performed on including an unapodized sensor among apodized sensors in a quasi-distributed sensing system. Finally, an isolation examination is performed on all the discussed apodizations and a linear relation between temperature and the Bragg wavelength shift is observed experimentally and matched with the simulated results.

  2. Understanding I/O workload characteristics of a Peta-scale storage system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul

    2015-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the I/O workloads of scientific applications on one of the world's fastest high performance computing (HPC) storage clusters, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). OLCF's flagship petascale simulation platform, Titan, and other large HPC clusters, with over 250 thousand compute cores in total, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, storage space utilization, and the distribution of read requests to write requests for this peta-scale storage system. From this study, we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution. We also study I/O load imbalance problems using I/O performance data collected from the Spider storage system.
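
    A short sketch (Python/SciPy) of fitting a Pareto law to request inter-arrival times; the synthetic sample below stands in for the measured Spider traces, and all parameter values are illustrative.

        import numpy as np
        from scipy import stats

        # Synthetic inter-arrival times (seconds) standing in for trace data
        interarrival = stats.pareto.rvs(b=1.8, scale=0.01, size=10_000,
                                        random_state=0)

        # Fit shape and scale with the location pinned at zero
        b, loc, scale = stats.pareto.fit(interarrival, floc=0)
        print(f"Pareto shape={b:.2f}, scale={scale:.4f}")

        # Goodness of fit: Kolmogorov-Smirnov test against the fitted law
        print(stats.kstest(interarrival, 'pareto', args=(b, loc, scale)))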

  3. High altitude airship configuration and power technology and method for operation of same

    NASA Technical Reports Server (NTRS)

    Choi, Sang H. (Inventor); Elliott, Jr., James R. (Inventor); King, Glen C. (Inventor); Park, Yeonjoon (Inventor); Kim, Jae-Woo (Inventor); Chu, Sang-Hyon (Inventor)

    2011-01-01

    A new High Altitude Airship (HAA) capable of various extended applications and mission scenarios utilizing inventive onboard energy harvesting and power distribution systems. The power technology comprises an advanced thermoelectric (ATE) thermal energy conversion system. The high efficiency of multiple stages of ATE materials in a tandem mode, each suited for best performance within a particular temperature range, permits the ATE system to generate a high quantity of harvested energy for the extended mission scenarios. When a figure of merit of 5 is considered, the cascaded efficiency of the three-stage ATE system approaches an efficiency greater than 60 percent.
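
    A minimal sketch (Python; the stage efficiencies are invented for illustration) of how cascaded stage efficiencies combine: heat not converted by one stage feeds the next, so the stack converts 1 - Π(1 - η_i) of the input heat.

        def cascaded_efficiency(stage_etas):
            # Fraction of input heat converted by a thermoelectric cascade
            unconverted = 1.0
            for eta in stage_etas:
                unconverted *= (1.0 - eta)
            return 1.0 - unconverted

        print(cascaded_efficiency([0.32, 0.27, 0.22]))   # ~0.61 overall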

  4. Research into software executives for space operations support

    NASA Technical Reports Server (NTRS)

    Collier, Mark D.

    1990-01-01

    Research concepts pertaining to a software (workstation) executive which will support a distributed processing command and control system characterized by high-performance graphics workstations used as computing nodes are presented. Although a workstation-based distributed processing environment offers many advantages, it also introduces a number of new concerns. In order to solve these problems, allow the environment to function as an integrated system, and present a functional development environment to application programmers, it is necessary to develop an additional layer of software. This 'executive' software integrates the system, provides real-time capabilities, and provides the tools necessary to support the application requirements.

  5. Computer-Assisted Monitoring Of A Complex System

    NASA Technical Reports Server (NTRS)

    Beil, Bob J.; Mickelson, Eric M.; Sterritt, John M.; Costantino, Rob W.; Houvener, Bob C.; Super, Mike A.

    1995-01-01

    Propulsion System Advisor (PSA) computer-based system assists engineers and technicians in analyzing masses of sensory data indicative of operating conditions of space shuttle propulsion system during pre-launch and launch activities. Designed solely for monitoring; does not perform any control functions. Although PSA developed for highly specialized application, serves as prototype of noncontrolling, computer-based subsystems for monitoring other complex systems like electric-power-distribution networks and factories.

  6. Distributed rendering for multiview parallax displays

    NASA Astrophysics Data System (ADS)

    Annen, T.; Matusik, W.; Pfister, H.; Seidel, H.-P.; Zwicker, M.

    2006-02-01

    3D display technology holds great promise for the future of television, virtual reality, entertainment, and visualization. Multiview parallax displays deliver stereoscopic views without glasses to arbitrary positions within the viewing zone. These systems must include a high-performance and scalable 3D rendering subsystem in order to generate multiple views at real-time frame rates. This paper describes a distributed rendering system for large-scale multiview parallax displays built with a network of PCs, commodity graphics accelerators, multiple projectors, and multiview screens. The main challenge is to render various perspective views of the scene and assign rendering tasks effectively. In this paper we investigate two different approaches: Optical multiplexing for lenticular screens and software multiplexing for parallax-barrier displays. We describe the construction of large-scale multi-projector 3D display systems using lenticular and parallax-barrier technology. We have developed different distributed rendering algorithms using the Chromium stream-processing framework and evaluate the trade-offs and performance bottlenecks. Our results show that Chromium is well suited for interactive rendering on multiview parallax displays.

  7. Aesthetic coatings for concrete bridge components

    NASA Astrophysics Data System (ADS)

    Kriha, Brent R.

    This thesis evaluated the durability and aesthetic performance of coating systems for utilization in concrete bridge applications. The principal objectives of this thesis were: 1) Identify aesthetic coating systems appropriate for concrete bridge applications; 2) Evaluate the performance of the selected systems through a laboratory testing regimen; 3) Develop guidelines for coating selection, surface preparation, and application. A series of site visits to various bridges throughout the State of Wisconsin provided insight into the performance of common coating systems and allowed problematic structural details to be identified. To aid in the selection of appropriate coating systems, questionnaires were distributed to coating manufacturers, bridge contractors, and various DOT offices to identify high-performing coating systems and best practices for surface preparation and application. These efforts supplemented a literature review investigating recent publications related to formulation, selection, surface preparation, application, and performance evaluation of coating materials.

  8. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2 without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beamshaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.

  9. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in cladding, there is a need for multi-physics coupling that provides accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes, with a sub-channel code as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code, with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  10. Towards a Scalable and Adaptive Application Support Platform for Large-Scale Distributed E-Sciences in High-Performance Network Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chase Qishi; Zhu, Michelle Mengxia

    The advent of large-scale collaborative scientific applications has demonstrated the potential for broad scientific communities to pool globally distributed resources to produce unprecedented data acquisition, movement, and analysis. System resources including supercomputers, data repositories, computing facilities, network infrastructures, storage systems, and display devices have been increasingly deployed at national laboratories and academic institutes. These resources are typically shared by large communities of users over the Internet or dedicated networks and hence exhibit an inherent dynamic nature in their availability, accessibility, capacity, and stability. Scientific applications using either experimental facilities or computation-based simulations with various physical, chemical, climatic, and biological models feature diverse scientific workflows, as simple as linear pipelines or as complex as directed acyclic graphs, which must be executed and supported over wide-area networks with massively distributed resources. Application users oftentimes need to manually configure their computing tasks over networks in an ad hoc manner, hence significantly limiting the productivity of scientists and constraining the utilization of resources. The success of these large-scale distributed applications requires a highly adaptive and massively scalable workflow platform that provides automated and optimized computing and networking services. This project is to design and develop a generic Scientific Workflow Automation and Management Platform (SWAMP), which contains a web-based user interface specially tailored for a target application, a set of user libraries, and several easy-to-use computing and networking toolkits for application scientists to conveniently assemble, execute, monitor, and control complex computing workflows in heterogeneous high-performance network environments. SWAMP will enable the automation and management of the entire process of scientific workflows with the convenience of a few mouse clicks while hiding the implementation and technical details from end users. Particularly, we will consider two types of applications with distinct performance requirements: data-centric and service-centric applications. For data-centric applications, the main workflow task involves large-volume data generation, cataloging, storage, and movement, typically from supercomputers or experimental facilities to a team of geographically distributed users; while for service-centric applications, the main focus of the workflow is on data archiving, preprocessing, filtering, synthesis, visualization, and other application-specific analysis. We will conduct a comprehensive comparison of existing workflow systems and choose the best-suited one with open-source code, a flexible system structure, and a large user base as the starting point for our development. Based on the chosen system, we will develop and integrate new components including a black-box design of computing modules, performance monitoring and prediction, and workflow optimization and reconfiguration, which are missing from existing workflow systems. A modular design for separating specification, execution, and monitoring aspects will be adopted to establish a common generic infrastructure suited for a wide spectrum of science applications.
We will further design and develop efficient workflow mapping and scheduling algorithms to optimize workflow performance in terms of minimum end-to-end delay, maximum frame rate, and highest reliability. We will develop and demonstrate the SWAMP system in a local environment, on the grid network, and on the 100 Gbps Advanced Network Initiative (ANI) testbed. The demonstration will target scientific applications in climate modeling and high energy physics, and the functions to be demonstrated include workflow deployment, execution, steering, and reconfiguration. Throughout the project period, we will work closely with the science communities in the fields of climate modeling and high energy physics, including the Spallation Neutron Source (SNS) and Large Hadron Collider (LHC) projects, to mature the system for production use.
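
    A toy sketch (Python; the task graph, costs, and node count are invented) of the kind of workflow mapping such a platform performs: tasks are taken in topological order and each is placed on the node that gives it the earliest finish time, shortening the end-to-end delay.

        def schedule(dag, cost, n_nodes):
            # dag: {task: [predecessor tasks]}, cost: {task: seconds}
            order, seen = [], set()
            def visit(t):                        # topological order via DFS
                if t in seen:
                    return
                seen.add(t)
                for p in dag[t]:
                    visit(p)
                order.append(t)
            for t in dag:
                visit(t)

            node_free = [0.0] * n_nodes          # when each node goes idle
            finish, place = {}, {}
            for t in order:
                ready = max((finish[p] for p in dag[t]), default=0.0)
                n = min(range(n_nodes), key=lambda i: max(node_free[i], ready))
                start = max(node_free[n], ready)
                finish[t], place[t] = start + cost[t], n
                node_free[n] = finish[t]
            return place, max(finish.values())   # mapping and end-to-end delay

        dag = {'gen': [], 'filter': ['gen'], 'viz': ['filter'], 'archive': ['gen']}
        cost = {'gen': 5.0, 'filter': 3.0, 'viz': 2.0, 'archive': 4.0}
        print(schedule(dag, cost, n_nodes=2))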

  11. Distributed energy storage systems on the basis of electric-vehicle fleets

    NASA Astrophysics Data System (ADS)

    Zhuk, A. Z.; Buzoverov, E. A.; Sheindlin, A. E.

    2015-01-01

    Several power technologies aimed at covering nonuniform loads in power systems have been developed at the Joint Institute for High Temperatures, Russian Academy of Sciences (JIHT RAS). One direction of investigation is the use of the storage batteries of electric vehicles to compensate load peaks in the power system (V2G, vehicle-to-grid technology). In this article, the efficiency of energy storage systems based on electric vehicles is compared with that of traditional energy-storage technologies by computation, using the minimum-cost criterion for peak energy supply to the system. The computations show that distributed storage systems based on fleets of electric cars are economically efficient when used for up to 1 h/day. In contrast to traditional methods, the prime cost of load regulation in a power system based on V2G technology is independent of the duration of the load compensation period (the duration of the consumption peak).

  12. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    The high sensitivity of a distributed optical-fiber vibration sensing (DOVS) system based on phase-sensitive optical time domain reflectometry (Φ-OTDR) technology also brings high nuisance alarm rates (NARs) in real applications. In this paper, the feature extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are comparatively studied for three typical field-testing signals, and an artificial neural network (ANN) is built for event identification. The comparison proves that WPD performs somewhat better than WD for DOVS signal analysis and identification in oil pipeline safety monitoring. The identification rate can be improved up to 94.4%, and the nuisance alarm rate can be effectively controlled as low as 5.6% for the identification network with the wavelet packet energy distribution features.
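
    A sketch (hypothetical; Python with PyWavelets and scikit-learn) of the WPD-plus-ANN pipeline: the relative energy of each terminal wavelet-packet node forms the feature vector for a small neural network. The tone-plus-noise traces below stand in for real DOVS field signals.

        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wpd_energy_features(sig, wavelet='db4', level=3):
            # Relative energy in each terminal wavelet-packet node: the
            # sub-band energy distribution is the feature vector.
            wp = pywt.WaveletPacket(data=sig, wavelet=wavelet,
                                    mode='symmetric', maxlevel=level)
            e = np.array([np.sum(np.asarray(n.data) ** 2)
                          for n in wp.get_level(level, order='freq')])
            return e / e.sum()

        # Stand-in signals for three event classes: different tones plus noise
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 1.0, 1024)
        def trace(f):
            return np.sin(2 * np.pi * f * t) + 0.3 * rng.normal(size=t.size)

        X = np.array([wpd_energy_features(trace(f))
                      for f in (5, 60, 240) for _ in range(20)])
        y = np.repeat([0, 1, 2], 20)

        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                            random_state=0)
        clf.fit(X, y)
        print(clf.score(X, y))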

  13. Networked buffering: a basic mechanism for distributed robustness in complex adaptive systems.

    PubMed

    Whitacre, James M; Bender, Axel

    2010-06-15

    A generic mechanism--networked buffering--is proposed for the generation of robust traits in complex systems. It requires two basic conditions to be satisfied: 1) agents are versatile enough to perform more than one single functional role within a system and 2) agents are degenerate, i.e. there exists partial overlap in the functional capabilities of agents. Given these prerequisites, degenerate systems can readily produce a distributed systemic response to local perturbations. Reciprocally, excess resources related to a single function can indirectly support multiple unrelated functions within a degenerate system. In models of genome:proteome mappings for which localized decision-making and modularity of genetic functions are assumed, we verify that such distributed compensatory effects cause enhanced robustness of system traits. The conditions needed for networked buffering to occur are neither demanding nor rare, supporting the conjecture that degeneracy may fundamentally underpin distributed robustness within several biotic and abiotic systems. For instance, networked buffering offers new insights into systems engineering and planning activities that occur under high uncertainty. It may also help explain recent developments in understanding the origins of resilience within complex ecosystems.

  14. A distributed automatic target recognition system using multiple low resolution sensors

    NASA Astrophysics Data System (ADS)

    Yue, Zhanfeng; Lakshmi Narasimha, Pramod; Topiwala, Pankaj

    2008-04-01

    In this paper, we propose a multi-agent system which uses swarming techniques to perform high accuracy Automatic Target Recognition (ATR) in a distributed manner. The proposed system can cooperatively share the information from low-resolution images of different looks and use this information to perform high accuracy ATR. An advanced, multiple-agent Unmanned Aerial Vehicle (UAV) systems-based approach is proposed which integrates processing capabilities, combines detection reporting with live video exchange, and adds swarm behavior modalities that dramatically surpass individual sensor system performance levels. We employ a real-time block-based motion analysis and compensation scheme for efficient estimation and correction of camera jitter, global motion of the camera/scene, and the effects of atmospheric turbulence. Our optimized Partition Weighted Sum (PWS) approach requires only bit shifts and additions, yet achieves a 16X pixel resolution enhancement and is moreover parallelizable. We develop advanced, adaptive particle-filtering based algorithms to robustly track multiple mobile targets by adaptively changing the appearance model of the selected targets. The collaborative ATR system utilizes the homographies between the sensors induced by the ground plane to overlap the local observation with the received images from other UAVs. The motion of the UAVs distorts the estimated homography from frame to frame. A robust dynamic homography estimation algorithm is proposed to address this, using homography decomposition and ground-plane surface estimation.

  15. Distributed state-space generation of discrete-state stochastic models

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco; Gluckman, Joshua; Nicol, David

    1995-01-01

    High-level formalisms such as stochastic Petri nets can be used to model complex systems. Analysis of the logical and numerical properties of these models often requires the generation and storage of the entire underlying state space. This imposes practical limitations on the types of systems which can be modeled. Because of the vast amount of memory consumed, we investigate distributed algorithms for the generation of state space graphs. The distributed construction allows us to take advantage of the combined memory readily available on a network of workstations. The key technical problem is to find effective methods for on-the-fly partitioning, so that the state space is evenly distributed among processors. In this paper we report on the implementation of a distributed state-space generator that may be linked to a number of existing system modeling tools. We discuss partitioning strategies in the context of Petri net models, and report on performance observed on a network of workstations as well as on a distributed-memory multi-computer.
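
    A toy sketch (Python) of the hash-based on-the-fly partitioning idea: a state's owner processor is fixed by a hash, so any processor that generates a state knows where to send it without a global lookup table. The state graph below is invented.

        from collections import deque

        def owner(state, n_procs):
            # A hash of the state picks the processor that stores and expands it
            return hash(state) % n_procs

        def distributed_generation(initial, successors, n_procs):
            # Simulated distributed exploration: each "processor" keeps its own
            # visited set and work queue; cross-partition states are "sent".
            visited = [set() for _ in range(n_procs)]
            queues = [deque() for _ in range(n_procs)]
            queues[owner(initial, n_procs)].append(initial)
            while any(queues):
                for p in range(n_procs):
                    if not queues[p]:
                        continue
                    s = queues[p].popleft()
                    if s in visited[p]:
                        continue
                    visited[p].add(s)
                    for t in successors(s):
                        queues[owner(t, n_procs)].append(t)   # message if remote
            return visited

        # Toy model: integer states with a small synthetic successor relation
        succ = lambda s: [(3 * s + 1) % 50, (s + 7) % 50]
        parts = distributed_generation(0, succ, n_procs=4)
        print([len(v) for v in parts])   # how evenly the states spread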

  16. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation that showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
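
    The co-location idea behind the scheduling algorithm can be sketched as a greedy placement: given the block-to-replica map that a distributed file system such as HDFS exposes, each reader task is placed on the least-loaded node that holds a replica of its block (all names below are hypothetical):

        # Sketch of locality-aware placement of visualization reader tasks.
        from collections import defaultdict

        block_locations = {                  # hypothetical block -> replica nodes
            "b0": ["n1", "n2"], "b1": ["n2", "n3"],
            "b2": ["n1", "n3"], "b3": ["n3", "n4"],
        }
        all_nodes = ["n1", "n2", "n3", "n4"]
        load = defaultdict(int)

        def place(block):
            """Prefer a replica holder; break ties by current load."""
            candidates = block_locations.get(block) or all_nodes
            node = min(candidates, key=lambda n: load[n])
            load[node] += 1
            return node

        for b in sorted(block_locations):
            print(b, "->", place(b))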

  17. An experimental investigation of the flow physics of high-lift systems

    NASA Technical Reports Server (NTRS)

    Thomas, Flint O.; Nelson, R. C.

    1995-01-01

    This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C(sub L) cases; forming a high quality, detailed archival database for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing an LDV study of an optimum C(sub L) case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C(sub L) cases; obtaining skin friction distributions for both optimum and non-optimum C(sub L) cases for scaling purposes; data analysis and inner and outer variable scaling; setting up and performing relaminarization experiments; and a final report establishing the role of leading edge confluent boundary layer flow physics on high-lift performance.

  18. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, and system issues. (No individual items are abstracted in this volume.)

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Zhu, Mengxia; Rao, Nageswara S

    We propose an intelligent decision support system based on sensor and computer networks that incorporates various component techniques for sensor deployment, data routing, distributed computing, and information fusion. The integrated system is deployed in a distributed environment composed of both wireless sensor networks for data collection and wired computer networks for data processing in support of homeland security defense. We present the system framework, formulate the analytical problems, and develop approximate or exact solutions for the subtasks: (i) a sensor deployment strategy based on a two-dimensional genetic algorithm to achieve maximum coverage with cost constraints; (ii) a data routing scheme to achieve maximum signal strength with minimum path loss, high energy efficiency, and effective fault tolerance; (iii) a network mapping method to assign computing modules to network nodes for high-performance distributed data processing; and (iv) a binary decision fusion rule that derives threshold bounds to improve the system hit rate and false-alarm rate. These component solutions are implemented and evaluated through either experiments or simulations in various application scenarios. The extensive results demonstrate that these component solutions imbue the integrated system with the desirable and useful quality of intelligence in decision making.
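
    For the fusion subtask (iv), the standard k-out-of-n formulation makes the threshold trade-off concrete; the counts and probabilities below are illustrative only:

        # Sketch of binary decision fusion: declare a detection when at least
        # k of n sensors report 1. System hit and false-alarm rates follow the
        # binomial distribution, which is what threshold bounds derive from.
        from math import comb

        def fused_rate(p, n, k):
            """P(at least k of n sensors fire), each firing with probability p."""
            return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

        n, k = 10, 6                      # hypothetical sensor count and threshold
        print("system hit rate:        ", round(fused_rate(0.8, n, k), 4))
        print("system false-alarm rate:", round(fused_rate(0.1, n, k), 6))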

  20. Implementation of High Speed Distributed Data Acquisition System

    NASA Astrophysics Data System (ADS)

    Raju, Anju P.; Sekhar, Ambika

    2012-09-01

    This paper introduces a high speed distributed data acquisition system based on a field programmable gate array (FPGA). The aim is to develop a "distributed" data acquisition interface. The development of instruments such as personal computers and engineering workstations based on "standard" platforms is the motivation behind this effort. Using standard platforms as the controlling unit allows independence in hardware from a particular vendor and hardware platform. The distributed approach also has advantages from a functional point of view: acquisition resources become available to multiple instruments, and the acquisition front-end can be physically remote from the rest of the instrument. The high speed data acquisition system transmits data to a remote computer through an Ethernet interface. The data is acquired through 16 analog input channels. The inputs are multiplexed and digitized, and the data is then stored in a 1K buffer for each input channel. The main control unit in this design is a 16 bit processor implemented in the FPGA. This processor is used to set up and initialize the data source and the Ethernet controller, as well as control the flow of data from the memory element to the NIC. Using this processor, the different configuration registers in the Ethernet controller can be initialized and controlled easily. The data packets are then sent to the remote PC through the Ethernet interface. The main advantages of using an FPGA as the standard platform are its flexibility, low power consumption, short design duration, fast time to market, programmability and high density. The main advantages of the AX88796 Ethernet controller over others are its non-PCI interface, the embedded SRAM in which the transmit and receive buffers are located, and its high-performance SRAM-like interface. The paper introduces the implementation of the distributed data acquisition using an FPGA, in VHDL. The main advantages of this system are high accuracy, high speed, and real time monitoring.

  1. Advanced sensors and instrumentation

    NASA Technical Reports Server (NTRS)

    Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty

    1990-01-01

    NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.

  2. Design and comparison of laser windows for high-power lasers

    NASA Astrophysics Data System (ADS)

    Niu, Yanxiong; Liu, Wenwen; Liu, Haixia; Wang, Caili; Niu, Haisha; Man, Da

    2014-11-01

    High-power laser systems are becoming more widely used in industry and military applications. It is necessary to develop a high-power laser system which can operate over long periods of time without appreciable degradation in performance. When a high-energy laser beam transmits through a laser window, permanent damage may be caused to the window by energy absorption in the window material. Therefore, when designing a high-power laser system, a suitable laser window material must be selected and the laser damage threshold of the window must be known. In this paper, a thermal analysis model of a high-power laser window is established, and the relationship between the laser intensity and the thermal-stress field distribution is studied by deriving the governing formulas with the integral-transform method. The influence of window radius, thickness and laser intensity on the temperature and stress field distributions is analyzed. Then, the performance of K9 glass and fused silica glass is compared, and the laser-induced damage mechanism is analyzed. Finally, the damage thresholds of laser windows are calculated. The results show that, compared with K9 glass, fused silica glass has a higher damage threshold due to its good thermodynamic properties. The presented theoretical analysis and simulation results are helpful for the design and selection of high-power laser windows.
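
    The kind of model involved can be indicated with a generic axisymmetric heat-conduction equation for a window heated by a Gaussian beam (the form and symbols below are a textbook-style sketch, not the paper's exact formulation):

        \rho c \, \frac{\partial T}{\partial t} = k \nabla^2 T + \beta I(r),
        \qquad I(r) = I_0 \exp\!\left(-\frac{2 r^2}{w^2}\right)

    where \rho, c and k are the density, specific heat and thermal conductivity of the window material, \beta its absorption coefficient, and w the beam radius; the resulting temperature field drives the thermal-stress distribution.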

  3. Application Characterization at Scale: Lessons learned from developing a distributed Open Community Runtime system for High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landwehr, Joshua B.; Suetterlein, Joshua D.; Marquez, Andres

    2016-05-16

    Since 2012, the U.S. Department of Energy’s X-Stack program has been developing solutions including runtime systems, programming models, languages, compilers, and tools for the Exascale system software to address crucial performance and power requirements. Fine grain programming models and runtime systems show a great potential to efficiently utilize the underlying hardware. Thus, they are essential to many X-Stack efforts. An abundance of small tasks can better utilize the vast parallelism available on current and future machines. Moreover, finer tasks can recover faster and adapt better, due to a decrease in state and control. Nevertheless, current applications have been written to exploit old paradigms (such as Communicating Sequential Processes and Bulk Synchronous Parallel processing). To fully utilize the advantages of these new systems, applications need to be adapted to these new paradigms. As part of the applications’ porting process, in-depth characterization studies, focused on both application characteristics and runtime features, need to take place to fully understand the application performance bottlenecks and how to resolve them. This paper presents a characterization study for a novel high performance runtime system, called the Open Community Runtime, using key HPC kernels as its vehicle. This study has the following contributions: one of the first high performance, fine grain, distributed memory runtime systems implementing the OCR standard (version 0.99a); and a characterization study of key HPC kernels in terms of runtime primitives running in both intra and inter node environments. Running on a general purpose cluster, we have found up to a 1635x relative speed-up for a parallel tiled Cholesky kernel on 128 nodes with 16 cores each and a 1864x relative speed-up for a parallel tiled Smith-Waterman kernel on 128 nodes with 30 cores.

  4. M-OTDR sensing system based on 3D encoded microstructures

    PubMed Central

    Sun, Qizhen; Ai, Fan; Liu, Deming; Cheng, Jianwei; Luo, Hongbo; Peng, Kuan; Luo, Yiyang; Yan, Zhijun; Shum, Perry Ping

    2017-01-01

    In this work, a quasi-distributed sensing scheme named microstructured OTDR (M-OTDR), based on introducing ultra-weak microstructures along the fiber, is proposed. Owing to the relatively higher reflectivity of the microstructures compared with the fiber backscattering coefficient, and to their three-dimensional (3D), i.e., wavelength/frequency/time, encoding, the M-OTDR system exhibits the advantages of a high signal to noise ratio (SNR), millimeter-level spatial resolution, and a multiplexing capacity of theoretically several tens of thousands. A proof-of-concept system consisting of 64 sensing units is constructed to demonstrate the feasibility and sensing performance. With the help of a demodulation method based on 3D analysis and spectrum reconstruction of the signal light, quasi-distributed temperature sensing with a spatial resolution of 20 cm as well as a measurement resolution of 0.1 °C is realized. PMID:28106132

  5. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in the space, but also addresses the spatial and temporal regularizations to improve the prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on the electrocardiogram (ECG) data from the distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms other regularization models that are widely used in current practice such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
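
    The general shape of such a model is a least-squares inverse with added spatial and temporal penalties; a generic form (the operators and weights are illustrative, not the paper's exact formulation) is

        \hat{x} = \arg\min_{x} \; \|y - Hx\|_2^2
                  + \lambda_s \|L_s x\|_2^2
                  + \lambda_t \|L_t x\|_2^2

    where y holds the body-surface ECG measurements, H is the body-to-heart transfer matrix, L_s is a spatial smoothing operator (e.g., a surface Laplacian) and L_t a temporal difference operator; setting \lambda_t = 0 and L_s = I recovers the zero-order Tikhonov baseline.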

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadika, Zacharia; Dede, Elif; Govindaraju, Madhusudhan

    MapReduce is increasingly becoming a popular framework, and a potent programming model. The most popular open source implementation of MapReduce, Hadoop, is based on the Hadoop Distributed File System (HDFS). However, as HDFS is not POSIX compliant, it cannot be fully leveraged by applications running on a majority of existing HPC environments such as Teragrid and NERSC. These HPC environments typically support globally shared file systems such as NFS and GPFS. On such resourceful HPC infrastructures, the use of Hadoop not only creates compatibility issues, but also affects overall performance due to the added overhead of the HDFS. This paper not only presents a MapReduce implementation directly suitable for HPC environments, but also exposes the design choices for better performance gains in those settings. By leveraging the functions inherent in distributed file systems, and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only allows for the use of the model in an expanding number of HPC environments, but also allows for better performance in such settings. This paper shows the applicability and high performance of the MapReduce paradigm through MARIANE, an implementation designed for clustered and shared-disk file systems and as such not dedicated to a specific MapReduce solution. The paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach in distributed environments over Apache Hadoop in a data intensive setting, on the Magellan testbed at the National Energy Research Scientific Computing Center (NERSC).
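
    The core design point, mappers reading byte-range splits of a shared POSIX file directly rather than through HDFS, can be sketched as follows (function names and the split policy are hypothetical; a real runtime would spread the map calls across cluster nodes, and this sketch miscounts words that straddle split boundaries):

        # Sketch: MapReduce word count over a globally shared POSIX file system.
        import os
        from collections import Counter
        from concurrent.futures import ProcessPoolExecutor

        def map_split(path, start, length):
            """Map task: count words in one byte-range split of the input."""
            with open(path, "rb") as f:
                f.seek(start)
                text = f.read(length).decode("utf-8", errors="ignore")
            return Counter(text.split())

        def run(path, workers=4):
            size = os.path.getsize(path)
            step = size // workers + 1
            splits = [(path, off, step) for off in range(0, size, step)]
            with ProcessPoolExecutor(workers) as ex:
                parts = ex.map(map_split, *zip(*splits))
            total = Counter()
            for part in parts:       # reduce: merge the partial counts
                total.update(part)
            return total

        if __name__ == "__main__":
            print(run("input.txt").most_common(3))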

  7. Comprehensive comparison of the levitation performance of bulk YBaCuO arrays above two different types of magnetic guideways

    NASA Astrophysics Data System (ADS)

    Deng, Zigang; Qian, Nan; Che, Tong; Jin, Liwei; Si, Shuaishuai; Zhang, Ya; Zheng, Jun

    2016-12-01

    The permanent magnet guideway (PMG) is an important part of high temperature superconducting (HTS) maglev systems. So far, two types of PMG, the normal PMG and the Halbach-type PMG, are widely applied in present maglev transportation systems. In this paper, the levitation performance of high temperature superconductor bulks above the two PMGs was synthetically compared. Both static levitation performance and dynamic response characteristics were investigated. Benefiting from its favorable magnetic field distribution, the Halbach-type PMG gives the bulk superconductor levitation unit a larger levitation force, a greater levitation force decay during the same relaxation time, and a bigger resonance frequency and dynamic stiffness compared with the normal PMG. Another finding is that the levitation performance of the bulk levitation unit above the Halbach-type PMG is not sensitive to the choice of bulk array. These results are helpful for the practical application of HTS maglev systems.

  8. The SKA1 LOW telescope: system architecture and design performance

    NASA Astrophysics Data System (ADS)

    Waterson, Mark F.; Labate, Maria Grazia; Schnetler, Hermine; Wagg, Jeff; Turner, Wallace; Dewdney, Peter

    2016-07-01

    The SKA1-LOW radio telescope will be a low-frequency (50-350 MHz) aperture array located in Western Australia. Its scientific objectives will prioritize studies of the Epoch of Reionization and pulsar physics. Development of the telescope has been allocated to consortia responsible for the aperture array front end, timing distribution, signal and data transport, correlation and beamforming signal processors, infrastructure, monitor and control systems, and science data processing. This paper will describe the system architectural design and key performance parameters of the telescope and summarize the high-level sub-system designs of the consortia.

  9. Informatic analysis for hidden pulse attack exploiting spectral characteristics of optics in plug-and-play quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Ko, Heasin; Lim, Kyongchun; Oh, Junsang; Rhee, June-Koo Kevin

    2016-10-01

    Quantum channel loopholes due to imperfect implementations of practical devices expose quantum key distribution (QKD) systems to potential eavesdropping attacks. Even though QKD systems are implemented with optical devices that are highly selective in their spectral characteristics, an information theory-based analysis of a pertinent attack strategy built within a reasonable framework exploiting this selectivity has never been clarified. This paper proposes a new type of trojan horse attack, called the hidden pulse attack, that can be applied in a plug-and-play QKD system, using general and optimal attack strategies that can extract quantum information from the phase-disturbed quantum states of the eavesdropper's hidden pulses. It exploits the spectral characteristics of a photodiode used in a plug-and-play QKD system in order to probe the modulation states of photon qubits. We analyze the security performance of the decoy-state BB84 QKD system under the optimal hidden pulse attack model, which shows enormous performance degradation in terms of both secret key rate and transmission distance.

  10. Foundational Report Series. Advanced Distribution management Systems for Grid Modernization (Importance of DMS for Distribution Grid Modernization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jianhui

    2015-09-01

    Grid modernization is transforming the operation and management of electric distribution systems from manual, paper-driven business processes to electronic, computer-assisted decision-making. At the center of this business transformation is the distribution management system (DMS), which provides a foundation from which optimal levels of performance can be achieved in an increasingly complex business and operating environment. Electric distribution utilities are facing many new challenges that are dramatically increasing the complexity of operating and managing the electric distribution system: growing customer expectations for service reliability and power quality, pressure to achieve better efficiency and utilization of existing distribution system assets, and reduction of greenhouse gas emissions by accommodating high penetration levels of distributed generating resources powered by renewable energy sources (wind, solar, etc.). Recent “storm of the century” events in the northeastern United States and the lengthy power outages and customer hardships that followed have greatly elevated the need to make power delivery systems more resilient to major storm events and to provide a more effective electric utility response during such regional power grid emergencies. Despite these newly emerging challenges for electric distribution system operators, only a small percentage of electric utilities have actually implemented a DMS. This paper discusses reasons why a DMS is needed and why the DMS may emerge as a mission-critical system that will soon be considered essential as electric utilities roll out their grid modernization strategies.

  11. Gateway: Volume 3 Number 4

    DTIC Science & Technology

    1992-01-01

    Applications are described, including an automobile system that presents symbols on the windshield for the driver. This issue contains articles from the Human Systems IAC, WPAFB OH. Approved for public release; distribution is unlimited; available free to the public by contacting the Human Systems IAC.

  12. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 3, Issue 1

    DTIC Science & Technology

    2011-01-01

    Approved for public release; distribution is unlimited. Topics include multiscale modeling of materials; the phased-array antenna systems (beamformers) that are replacing the rotating reflector antennas associated with airport traffic control systems; batteries and phased-array antennas; and power and efficiency studies evaluating on-board HPC systems and advanced image processing applications.

  13. Continued Development of Expert System Tools for NPSS Engine Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewandowski, Henry

    1996-01-01

    The objectives of this grant were to work with previously developed NPSS (Numerical Propulsion System Simulation) tools and enhance their functionality; explore similar AI systems; and work with the High Performance Computing and Communications (HPCC) K-12 program. Activities for this reporting period are briefly summarized, and a paper addressing implementation, monitoring and zooming in a distributed jet engine simulation is included as an attachment.

  14. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  15. Studying the Impact of Distributed Solar PV on Power Systems using Integrated Transmission and Distribution Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Himanshu; Palmintier, Bryan S; Krad, Ibrahim

    This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.

  16. Evaluating effective swath width and droplet distribution of aerial spraying systems on M-18B and Thrush 510G airplanes

    USDA-ARS?s Scientific Manuscript database

    Aerial spraying plays an important role in promoting agricultural production and protecting the biological environment due to its flexibility, high effectiveness, and large operational area per unit of time. In order to evaluate the performance parameters of the spraying systems on two fixed wing ai...

  17. A high performance parallel algorithm for 1-D FFT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agarwal, R.C.; Gustavson, F.G.; Zubair, M.

    1994-12-31

    In this paper the authors propose a parallel high performance FFT algorithm based on a multi-dimensional formulation. They use this to solve a commonly encountered FFT-based kernel on a distributed memory parallel machine, the IBM scalable parallel system, SP1. The kernel requires a forward FFT computation of an input sequence, multiplication of the transformed data by a coefficient array, and finally an inverse FFT computation of the resultant data. They show that the multi-dimensional formulation helps in reducing the communication costs and also improves the single node performance by effectively utilizing the memory system of the node. They implemented this kernel on the IBM SP1 and observed a performance of 1.25 GFLOPS on a 64-node machine.
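
    The kernel itself is compact; a serial numpy rendering of it (standing in for the distributed SP1 implementation) is

        # Forward FFT, pointwise multiply by a coefficient array, inverse FFT.
        # A length n1*n2 1-D FFT can be computed as a 2-D FFT plus twiddle
        # factors, which is the multi-dimensional formulation that cuts
        # communication on a distributed-memory machine.
        import numpy as np

        n = 1 << 10
        x = np.random.rand(n) + 1j * np.random.rand(n)
        coeff = np.random.rand(n)               # hypothetical coefficient array

        y = np.fft.ifft(np.fft.fft(x) * coeff)  # the three-step kernel
        print(y.shape, y.dtype)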

  18. Exhaust emission reduction for intermittent combustion aircraft engines

    NASA Technical Reports Server (NTRS)

    Moffett, R. N.

    1979-01-01

    Three concepts for optimizing the performance, increasing the fuel economy, and reducing the exhaust emissions of the piston aircraft engine were investigated. High-energy multiple-spark discharge with increased spark plug tip penetration, ultrasonic fuel vaporization, and variable valve timing were evaluated individually. Ultrasonic fuel vaporization did not demonstrate sufficient improvement in distribution to offset the performance loss caused by the additional manifold restriction. High energy ignition and the revised spark plug tip location provided no change in performance or emissions. Variable valve timing provided some performance benefit; however, even greater performance improvement was obtained through induction system tuning, which could be accomplished with far less complexity.

  19. A new communication protocol family for a distributed spacecraft control system

    NASA Technical Reports Server (NTRS)

    Baldi, Andrea; Pace, Marco

    1994-01-01

    In this paper we describe the concepts behind and architecture of a communication protocol family, which was designed to fulfill the communication requirements of ESOC's new distributed spacecraft control system SCOS 2. A distributed spacecraft control system needs a data delivery subsystem to be used for telemetry (TLM) distribution, telecommand (TLC) dispatch and inter-application communication, characterized by the following properties: reliability, so that any operational workstation is guaranteed to receive the data it needs to accomplish its role; efficiency, so that the telemetry distribution, even for missions with high telemetry rates, does not cause a degradation of the overall control system performance; scalability, so that the network is not the bottleneck in terms of either bandwidth or reconfiguration; flexibility, so that it can be efficiently used in many different situations. The new protocol family which satisfies the above requirements is built on top of widely used communication protocols (UDP and TCP), provides reliable point-to-point and broadcast communication (UDP+) and is implemented in C++. Reliability is achieved using a retransmission mechanism based on a sequence numbering scheme. Such a scheme is cost-effective compared with traditional protocols because retransmission is triggered only by applications which explicitly need reliability. This flexibility enables applications with different profiles to take advantage of the available protocols, so that the best trade-off between speed and reliability can be achieved case by case.
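
    The receiver side of such a sequence-numbering scheme can be sketched as follows (a simplification, not the SCOS 2 code: a gap in sequence numbers triggers a retransmission request, and held-back datagrams are delivered in order once the gap closes):

        # Sketch of selective retransmission driven by sequence numbers.
        class ReliableReceiver:
            def __init__(self, send_nack):
                self.expected = 0
                self.buffer = {}          # out-of-order datagrams held back
                self.send_nack = send_nack

            def on_datagram(self, seq, payload):
                if seq > self.expected:                 # gap detected
                    for missing in range(self.expected, seq):
                        self.send_nack(missing)         # request retransmission
                self.buffer[seq] = payload
                delivered = []
                while self.expected in self.buffer:     # deliver in order
                    delivered.append(self.buffer.pop(self.expected))
                    self.expected += 1
                return delivered

        rx = ReliableReceiver(send_nack=lambda s: print("NACK", s))
        print(rx.on_datagram(0, "tlm-0"))   # ['tlm-0']
        print(rx.on_datagram(2, "tlm-2"))   # prints NACK 1, returns []
        print(rx.on_datagram(1, "tlm-1"))   # ['tlm-1', 'tlm-2']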

  20. Intelligent distributed medical image management

    NASA Astrophysics Data System (ADS)

    Garcia, Hong-Mei C.; Yun, David Y.

    1995-05-01

    The rapid advancements in high performance global communication have accelerated cooperative image-based medical services to a new frontier. Traditional image-based medical services such as radiology and diagnostic consultation can now fully utilize multimedia technologies in order to provide novel services, including remote cooperative medical triage, distributed virtual simulation of operations, as well as cross-country collaborative medical research and training. Fast (efficient) and easy (flexible) retrieval of relevant images remains a critical requirement for the provision of remote medical services. This paper describes the database system requirements, identifies technological building blocks for meeting the requirements, and presents a system architecture for our target image database system, MISSION-DBS, which has been designed to fulfill the goals of Project MISSION (medical imaging support via satellite integrated optical network) -- an experimental high performance gigabit satellite communication network with access to remote supercomputing power, medical image databases, and 3D visualization capabilities in addition to medical expertise anywhere and anytime around the country. The MISSION-DBS design employs a synergistic fusion of techniques in distributed databases (DDB) and artificial intelligence (AI) for storing, migrating, accessing, and exploring images. The efficient storage and retrieval of voluminous image information is achieved by integrating DDB modeling and AI techniques for image processing, while the flexible retrieval mechanisms are accomplished by combining attribute-based and content-based retrieval.

  1. A Comparison of Brayton and Stirling Space Nuclear Power Systems for Power Levels from 1 Kilowatt to 10 Megawatts

    NASA Technical Reports Server (NTRS)

    Mason, Lee S.

    2000-01-01

    An analytical study was conducted to assess the performance and mass of Brayton and Stirling nuclear power systems for a wide range of future NASA space exploration missions. The power levels and design concepts were based on three different mission classes. Isotope systems, with power levels from 1 to 10 kW, were considered for planetary surface rovers and robotic science. Reactor power systems for planetary surface outposts and bases were evaluated from 10 to 500 kW. Finally, reactor power systems in the range from 100 kW to 10 MW were assessed for advanced propulsion applications. The analysis also examined the effect of advanced component technology on system performance. The advanced technologies included high temperature materials, lightweight radiators, and high voltage power management and distribution.

  2. Distributed Engine Control Empirical/Analytical Verification Tools

    NASA Technical Reports Server (NTRS)

    DeCastro, Jonathan; Hettler, Eric; Yedavalli, Rama; Mitra, Sayan

    2013-01-01

    NASA's vision for an intelligent engine will be realized with the development of a truly distributed control system featuring highly reliable, modular, and dependable components capable of both surviving the harsh engine operating environment and decentralized functionality. A set of control system verification tools was developed and applied to a C-MAPSS40K engine model, and metrics were established to assess the stability and performance of these control systems on the same platform. A software tool was developed that allows designers to assemble easily a distributed control system in software and immediately assess the overall impacts of the system on the target (simulated) platform, allowing control system designers to converge rapidly on acceptable architectures with consideration to all required hardware elements. The software developed in this program will be installed on a distributed hardware-in-the-loop (DHIL) simulation tool to assist NASA and the Distributed Engine Control Working Group (DECWG) in integrating DCS (distributed engine control systems) components onto existing and next-generation engines.The distributed engine control simulator blockset for MATLAB/Simulink and hardware simulator provides the capability to simulate virtual subcomponents, as well as swap actual subcomponents for hardware-in-the-loop (HIL) analysis. Subcomponents can be the communication network, smart sensor or actuator nodes, or a centralized control system. The distributed engine control blockset for MATLAB/Simulink is a software development tool. The software includes an engine simulation, a communication network simulation, control algorithms, and analysis algorithms set up in a modular environment for rapid simulation of different network architectures; the hardware consists of an embedded device running parts of the CMAPSS engine simulator and controlled through Simulink. The distributed engine control simulation, evaluation, and analysis technology provides unique capabilities to study the effects of a given change to the control system in the context of the distributed paradigm. The simulation tool can support treatment of all components within the control system, both virtual and real; these include communication data network, smart sensor and actuator nodes, centralized control system (FADEC full authority digital engine control), and the aircraft engine itself. The DECsim tool can allow simulation-based prototyping of control laws, control architectures, and decentralization strategies before hardware is integrated into the system. With the configuration specified, the simulator allows a variety of key factors to be systematically assessed. Such factors include control system performance, reliability, weight, and bandwidth utilization.

  3. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant.

    PubMed

    Moreno-Garcia, Isabel M; Palacios-Garcia, Emilio J; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J; Varo-Martinez, Marta; Real-Calvo, Rafael J

    2016-05-26

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performances were analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid.

  4. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased, so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware, is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
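
    One of the simpler MPI mechanisms for a spike-exchange step is an allgather over every rank's local spike list; the sketch below uses mpi4py with hypothetical spike tuples (a tuned simulator would use fixed-size buffers and Allgatherv rather than pickled objects):

        # Run under mpiexec, e.g.: mpiexec -n 4 python spikes.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()

        # Spikes fired locally this timestep: (neuron_id, time) pairs.
        local_spikes = [(rank * 100 + 7, 0.1), (rank * 100 + 42, 0.3)]

        # Every rank receives every other rank's spike list.
        all_spikes = comm.allgather(local_spikes)

        incoming = [s for r, spikes in enumerate(all_spikes)
                    if r != rank for s in spikes]
        print(f"rank {rank} received {len(incoming)} remote spikes")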

  5. Effects of pupil filter patterns in line-scan focal modulation microscopy

    NASA Astrophysics Data System (ADS)

    Shen, Shuhao; Pant, Shilpa; Chen, Rui; Chen, Nanguang

    2018-03-01

    Line-scan focal modulation microscopy (LSFMM) is an emerging imaging technique that affords high imaging speed and good optical sectioning at the same time. We present a systematic investigation into optimal design of the pupil filter for LSFMM in an attempt to achieve the best performance in terms of spatial resolutions, optical sectioning, and modulation depth. Scalar diffraction theory was used to compute light propagation and distribution in the system and theoretical predictions on system performance, which were then compared with experimental results.

  6. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale access of distributed power can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed power can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the access of typical distributed power was analyzed, and an improved particle swarm optimization algorithm (IPSO), built on improvements to the learning factors and the inertia weight, was proposed to solve distributed generation planning for the distribution network and to improve the local and global search performance of the algorithm. Results show that the proposed method can well reduce the system network loss and improve the economic performance of system operation with distributed generation.
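
    The two IPSO ingredients named above, a decreasing inertia weight and time-varying learning factors, can be sketched directly in the velocity update (the schedules, bounds, and stand-in fitness function are illustrative; a DG-planning study would score network loss and voltage profile from a load-flow run):

        # Sketch of an improved PSO with linearly varying w, c1, c2.
        import numpy as np

        def ipso(fitness, dim, n=30, iters=200, lo=-5.0, hi=5.0):
            rng = np.random.default_rng(0)
            x = rng.uniform(lo, hi, (n, dim))
            v = np.zeros((n, dim))
            pbest, pval = x.copy(), np.array([fitness(p) for p in x])
            g = pbest[pval.argmin()].copy()
            for t in range(iters):
                w  = 0.9 - 0.5 * t / iters     # inertia weight: 0.9 -> 0.4
                c1 = 2.5 - 2.0 * t / iters     # cognitive factor decreases
                c2 = 0.5 + 2.0 * t / iters     # social factor increases
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.array([fitness(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[pval.argmin()].copy()
            return g, pval.min()

        best, cost = ipso(lambda p: float(np.sum(p ** 2)), dim=4)
        print(best, cost)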

  7. Monitoring performance of a highly distributed and complex computing infrastructure in LHCb

    NASA Astrophysics Data System (ADS)

    Mathe, Z.; Haen, C.; Stagni, F.

    2017-10-01

    In order to ensure an optimal performance of LHCb Distributed Computing, based on LHCbDIRAC, it is necessary to be able to inspect the behavior over time of many components: firstly the agents and services on which the infrastructure is built, but also all the computing tasks and data transfers that are managed by this infrastructure. This consists of recording and then analyzing time series of a large number of observables, for which the usage of SQL relational databases is far from optimal. Therefore within DIRAC we have been studying novel possibilities based on NoSQL databases (ElasticSearch, OpenTSDB and InfluxDB); as a result of this study we developed a new monitoring system based on ElasticSearch. It has been deployed on the LHCb Distributed Computing infrastructure, for which it collects data from all the components (agents, services, jobs) and allows creating reports through Kibana and a web user interface, which is based on the DIRAC web framework. In this paper we describe this new implementation of the DIRAC monitoring system. We give details on the ElasticSearch implementation within the DIRAC general framework, as well as an overview of the advantages of the pipeline aggregation used for creating a dynamic bucketing of the time series. We present the advantages of using the ElasticSearch DSL high-level library for creating and running queries. Finally, we present the performance of the system.
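
    A flavor of the DSL style (sketched against the elasticsearch-dsl Python library; the index, field, and host names are hypothetical) for one time-bucketed report:

        # Hourly average of a job-count observable over the last 24 hours.
        from elasticsearch import Elasticsearch
        from elasticsearch_dsl import Search

        client = Elasticsearch(["http://localhost:9200"])

        s = (Search(using=client, index="lhcb-monitoring")
             .filter("term", component="WorkloadManagement")
             .filter("range", timestamp={"gte": "now-24h"}))
        s.aggs.bucket("per_hour", "date_histogram",
                      field="timestamp", fixed_interval="1h") \
              .metric("jobs", "avg", field="running_jobs")

        response = s.execute()
        for bucket in response.aggregations.per_hour.buckets:
            print(bucket.key_as_string, bucket.jobs.value)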

  8. TROPIX Power System Architecture

    NASA Technical Reports Server (NTRS)

    Manner, David B.; Hickman, J. Mark

    1995-01-01

    This document contains results obtained in the process of performing a power system definition study of the TROPIX power management and distribution system (PMAD). Requirements derived from the PMAD's interaction with other spacecraft systems are discussed first. Since the design is dependent on the performance of the photovoltaics, there is a comprehensive discussion of the appropriate models for cells and arrays. A trade study of the array operating voltage and its effect on array bus mass is also presented. A system architecture is developed which makes use of a combination of high efficiency switching power convertors and analog regulators. Mass and volume estimates are presented for all subsystems.

  9. Design of a highly parallel board-level-interconnection with 320 Gbps capacity

    NASA Astrophysics Data System (ADS)

    Lohmann, U.; Jahns, J.; Limmer, S.; Fey, D.; Bauer, H.

    2012-01-01

    A parallel board-level interconnection design is presented consisting of 32 channels, each operating at 10 Gbps. The hardware uses available optoelectronic components (VCSEL, TIA, pin-diodes) and a combination of planar-integrated free-space optics, fiber-bundles and available MEMS components, like the DMD™ from Texas Instruments. As a specific feature, we present a new modular inter-board interconnect, realized by 3D fiber-matrix connectors. The performance of the interconnect is evaluated with regard to optical properties and power consumption. Finally, we discuss the application of the interconnect to strongly distributed system architectures, as found, for example, in high performance embedded computing systems and data centers.

  10. Horizon: The Portable, Scalable, and Reusable Framework for Developing Automated Data Management and Product Generation Systems

    NASA Astrophysics Data System (ADS)

    Huang, T.; Alarcon, C.; Quach, N. T.

    2014-12-01

    Capture, curation, and analysis are the typical activities performed at any given Earth Science data center. Modern data management systems must be adaptable to heterogeneous science data formats, scalable to meet the mission's quality of service requirements, and able to manage the life-cycle of any given science data product. Designing a scalable data management system doesn't happen overnight. It takes countless hours of refining, refactoring, retesting, and re-architecting. The Horizon data management and workflow framework, developed at the Jet Propulsion Laboratory, is a portable, scalable, and reusable framework for developing high-performance data management and product generation workflow systems to automate data capturing, data curation, and data analysis activities. NASA's Physical Oceanography Distributed Active Archive Center (PO.DAAC)'s Data Management and Archive System (DMAS) is its core data infrastructure, which handles capturing and distribution of hundreds of thousands of satellite observations each day around the clock. DMAS is an application of the Horizon framework. The NASA Global Imagery Browse Services (GIBS) is NASA's Earth Observing System Data and Information System (EOSDIS) solution for making high-resolution global imagery available to the science communities. The Imagery Exchange (TIE), an application of the Horizon framework, is a core subsystem of GIBS responsible for data capturing and imagery generation automation in support of EOSDIS's 12 distributed active archive centers and 17 Science Investigator-led Processing Systems (SIPS). This presentation discusses our ongoing effort in refining, refactoring, retesting, and re-architecting the Horizon framework to enable data-intensive science and its applications.

  11. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability for an arbitrary number of processors, up to 222 processors; (2) flexible data transfer among processors provided by a crossbar interconnection network; (3) vector processing capability on each processor; and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS performance with 100 processors on the LINPACK Highly Parallel Computing benchmark.
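
    The structure of a blocked LU factorization is easy to show in miniature (numpy, no pivoting, serial; purely illustrative of the blocking that lets a distributed solver trade many small messages for a few large ones and keep vector units busy):

        import numpy as np

        def blocked_lu(A, nb=64):
            """Right-looking blocked LU without pivoting; L and U packed in place."""
            A = A.copy()
            n = A.shape[0]
            for k in range(0, n, nb):
                e = min(k + nb, n)
                for j in range(k, e - 1):          # unblocked LU of diagonal block
                    A[j+1:e, j] /= A[j, j]
                    A[j+1:e, j+1:e] -= np.outer(A[j+1:e, j], A[j, j+1:e])
                if e < n:
                    L = np.tril(A[k:e, k:e], -1) + np.eye(e - k)
                    U = np.triu(A[k:e, k:e])
                    A[k:e, e:] = np.linalg.solve(L, A[k:e, e:])        # block row
                    A[e:, k:e] = np.linalg.solve(U.T, A[e:, k:e].T).T  # block column
                    A[e:, e:] -= A[e:, k:e] @ A[k:e, e:]  # trailing update (GEMM)
            return A

        A = np.random.rand(256, 256) + 256 * np.eye(256)   # diagonally dominant
        LU = blocked_lu(A)
        L = np.tril(LU, -1) + np.eye(256); U = np.triu(LU)
        print(np.allclose(L @ U, A))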

  12. Material handling robot system for flow-through storage applications

    NASA Astrophysics Data System (ADS)

    Dill, James F.; Candiloro, Brian; Downer, James; Wiesman, Richard; Fallin, Larry; Smith, Ron

    1999-01-01

    This paper describes the design, development and planned implementation of a system of mobile robots for use in flow through storage applications. The robots are being designed with on-board embedded controls so that they can perform their tasks as semi-autonomous workers distributed within a centrally controlled network. On the storage input side, boxes will be identified by bar-codes and placed into preassigned flow through bins. On the shipping side, orders will be forwarded to the robots from a central order processing station and boxes will be picked from designated storage bins following proper sequencing to permit direct loading into trucks for shipping. Because of the need to maintain high system availability, a distributed control strategy has been selected. When completed, the system will permit robots to be dynamically reassigned responsibilities if an individual unit fails. On-board health diagnostics and condition monitoring will be used to maintain high reliability of the units.

  13. Fuzzy-driven energy storage system for mitigating voltage unbalance factor on distribution network with photovoltaic system

    NASA Astrophysics Data System (ADS)

    Wong, Jianhui; Lim, Yun Seng; Morris, Stella; Morris, Ezra; Chua, Kein Huat

    2017-04-01

    The number of small-scale renewable energy sources on low-voltage distribution networks is anticipated to increase for the improvement of energy efficiency and the reduction of greenhouse gas emissions. The growth of PV systems on low-voltage distribution networks can create voltage unbalance, voltage rise, and reverse power flow. Usually these issues involve little fluctuation; however, they tend to fluctuate severely in Malaysia, which is a region with a low clear-sky index. A large amount of cloud often passes over the country, making the solar irradiance highly scattered, so the PV power output fluctuates substantially. These issues can lead to the malfunction of electronic equipment, reduction in network efficiency, and improper operation of the power protection system. Under current practice, the amount of PV capacity installed on the distribution network is constrained by the utility company; as a result, this can limit the reduction of the carbon footprint. Therefore, an energy storage system is proposed as a solution to these power quality issues. To ensure effective operation of the distribution network with a PV system, a fuzzy control system is developed and implemented to govern the operation of the energy storage system. The fuzzy-driven energy storage system is able to mitigate the fluctuating voltage rise and voltage unbalance on the electrical grid by actively manipulating the flow of real power between the grid and the batteries. To verify the effectiveness of the proposed fuzzy-driven energy storage system, an experimental network integrated with a 7.2 kWp PV system was set up. Several case studies were performed to evaluate the response of the proposed solution in mitigating voltage rise and voltage unbalance and reducing the amount of reverse power flow under highly intermittent PV power output.
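
    The rule structure of such a controller can be sketched with triangular memberships mapping per-unit voltage to a battery real-power command (shapes, limits and sign convention below are hypothetical, not the paper's tuned design):

        # Fuzzy sketch: HIGH voltage -> charge (+), LOW voltage -> discharge (-).
        def tri(x, a, b, c):
            """Triangular membership on [a, c] peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def battery_power_kw(v_pu, p_max=3.0):
            low  = tri(v_pu, 0.90, 0.94, 1.00)
            ok   = tri(v_pu, 0.96, 1.00, 1.04)
            high = tri(v_pu, 1.00, 1.06, 1.10)
            num = low * (-p_max) + ok * 0.0 + high * (+p_max)
            den = low + ok + high
            return num / den if den else 0.0   # centroid-like defuzzification

        for v in (0.93, 1.00, 1.07):
            print(v, "->", round(battery_power_kw(v), 2), "kW")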

  14. 49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...

  15. 49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...

  16. 49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...

  17. 49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...

  18. 49 CFR 192.621 - Maximum allowable operating pressure: High-pressure distribution systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... STANDARDS Operations § 192.621 Maximum allowable operating pressure: High-pressure distribution systems. (a) No person may operate a segment of a high pressure distribution system at a pressure that exceeds the... segment of a distribution system otherwise designed to operate at over 60 p.s.i. (414 kPa) gage, unless...

  19. Context-aware distributed cloud computing using CloudScheduler

    NASA Astrophysics Data System (ADS)

    Seuster, R.; Leavett-Brown, CR; Casteels, K.; Driemel, C.; Paterson, M.; Ring, D.; Sobie, RJ; Taylor, RP; Weldon, J.

    2017-10-01

    The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs that are currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest location of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud. This will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O application on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide us access to existing storage elements used by the HEP experiments.

  20. Distributed Evaluation Functions for Fault Tolerant Multi-Rover Systems

    NASA Technical Reports Server (NTRS)

    Agogino, Adrian; Turner, Kagan

    2005-01-01

    The ability to evolve fault tolerant control strategies for large collections of agents is critical to the successful application of evolutionary strategies to domains where failures are common. Furthermore, while evolutionary algorithms have been highly successful in discovering single-agent control strategies, extending such algorithms to multiagent domains has proven to be difficult. In this paper we present a method for shaping evaluation functions for agents that provide control strategies that both are tolerant to different types of failures and lead to coordinated behavior in a multi-agent setting. This method relies neither on a centralized strategy (susceptible to a single point of failure) nor on a distributed strategy in which each agent uses a system-wide evaluation function (a severe credit-assignment problem). In a multi-rover problem, we show that agents using our agent-specific evaluation perform up to 500% better than agents using the system evaluation. In addition, we show that agents are still able to maintain a high level of performance when up to 60% of the agents fail due to actuator, communication or controller faults.
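
    A common agent-specific shaping of this kind (stated as a sketch; the paper's exact form may differ) is the difference evaluation

        D_i(z) = G(z) - G(z_{-i})

    where G is the system evaluation over the joint state z of all rovers and z_{-i} is the same state with agent i removed or replaced by a fixed counterfactual; D_i remains aligned with G while isolating agent i's own contribution, which eases the credit-assignment problem.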

  1. High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge

    NASA Astrophysics Data System (ADS)

    Ponomarev, A. V.; Pedos, M. S.; Scherbinin, S. V.; Mamontov, Y. I.; Ponomarev, S. V.

    2015-11-01

    In this study, the shape and material of the high-voltage electrode of an atmospheric pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode, which has a diameter of 100 mm. In order to generate low-temperature plasma with a volume of roughly 1 cubic decimetre, a pulsed volume discharge initiated by a corona discharge was used. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode (the collector) was a disc 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of the results obtained, a computer program was developed to visualise them. The program provides a 3D image of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained, an optimum shape for the high-voltage electrode was determined. The uniformity of the distribution of discharge current density in relation to the distance between electrodes was studied, and it was shown that the level of non-uniformity of the current density distribution depends on the size of the gap between the electrodes. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface.

  2. Secure Large-Scale Airport Simulations Using Distributed Computational Resources

    NASA Technical Reports Server (NTRS)

    McDermott, William J.; Maluf, David A.; Gawdiak, Yuri; Tran, Peter; Clancy, Dan (Technical Monitor)

    2001-01-01

    To fully conduct research that will support the far-term concepts, technologies and methods required to improve the safety of Air Transportation, a simulation environment of the requisite degree of fidelity must first be in place. The Virtual National Airspace Simulation (VNAS) will provide the underlying infrastructure necessary for such a simulation system. Aerospace-specific knowledge management services, such as intelligent data-integration middleware, will support the management of information associated with this complex and critically important operational environment. This simulation environment, in conjunction with a distributed network of supercomputers and high-speed network connections to aircraft and to Federal Aviation Administration (FAA), airline, and other data sources, will provide the capability to continuously monitor and measure operational performance against expected performance. The VNAS will also provide the tools to use this performance baseline to obtain a perspective of what is happening today and of the potential impact of proposed changes before they are introduced into the system.

  3. Configurable e-commerce-oriented distributed seckill system with high availability

    NASA Astrophysics Data System (ADS)

    Zhu, Liye

    2018-04-01

    The rapid development of e-commerce prompted the birth of the seckill (flash-sale) activity. Seckill activity greatly stimulates public shopping desire because of its significant attraction to customers. In a seckill activity, a limited number of products are sold at varying degrees of discount, which is a huge temptation for customers. The discounted products are usually sold out in seconds, which can be a huge challenge for e-commerce systems. In this context, a seckill system with high concurrency and high availability has very practical significance. This research cooperates with Huijin Department Store to design and implement a seckill system for an e-commerce platform. The seckill system supports high-concurrency network conditions and remains highly available in unexpected situations. In addition, due to the short life cycle of a seckill activity, the system is configurable and scalable, meaning that system resources can be added or removed on demand. Finally, functional and performance tests of the whole system were carried out. The test results show that the system meets the functional and performance requirements of suppliers, administrators and users.
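    The core correctness requirement of such a system, never overselling the discounted stock under concurrent requests, can be illustrated compactly. The sketch below uses an in-process lock purely for illustration; production seckill systems typically enforce this invariant in Redis or a message queue, and the class and numbers here are assumptions, not the paper's design.

    ```python
    import threading

    class SeckillStock:
        def __init__(self, quantity):
            self._stock = quantity
            self._lock = threading.Lock()

        def try_buy(self):
            """Atomically claim one unit; False once the discount stock is gone."""
            with self._lock:
                if self._stock > 0:
                    self._stock -= 1
                    return True
                return False

    stock = SeckillStock(100)
    results = []
    threads = [threading.Thread(target=lambda: results.append(stock.try_buy()))
               for _ in range(1000)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(sum(results))  # exactly 100 successes regardless of interleaving
    ```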

  4. A steering law for a roof-type configuration for a single-gimbal control moment gyro system

    NASA Technical Reports Server (NTRS)

    Yoshikawa, T.

    1974-01-01

    Single-Gimbal Control Moment Gyro (SGCMG) systems have been investigated for attitude control of the Large Space Telescope (LST) and the High Energy Astronomy Observatory (HEAO). However, various proposed steering laws for the SGCMG systems thus far have some defects because of singular states of the system. In this report, a steering law for a roof-type SGCMG system is proposed which is based on a new momentum distribution scheme that makes all the singular states unstable. This momentum distribution scheme is formulated by a treatment of the system as a sampled-data system. From analytical considerations, it is shown that this steering law gives control performance which is satisfactory for practical applications. Results of the preliminary computer simulation entirely support this premise.

  5. Computational Analysis of a Wing Designed for the X-57 Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Jeffrey K.; Viken, Sally A.; Carter, Melissa B.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of the wing for the distributed electric propulsion X-57 Maxwell airplane configuration at cruise and takeoff/landing conditions was completed. Two unstructured-mesh, Navier-Stokes computational fluid dynamics methods, FUN3D and USM3D, were used to predict the wing performance. The goal of the X-57 wing and distributed electric propulsion system design was to meet or exceed the required lift coefficient of 3.95 for a stall speed of 58 knots, with a cruise speed of 150 knots at an altitude of 8,000 ft. The X-57 Maxwell airplane was designed with a small, high-aspect-ratio cruise wing targeting a high cruise lift coefficient (0.75) at an angle of attack of 0 deg. The cruise propulsors at the wingtip rotate counter to the wingtip vortex and reduce induced drag by 7.5 percent at an angle of attack of 0.6 deg. The unblown maximum lift coefficient of the high-lift wing (with the 30 deg flap setting) is 2.439. The stall speed goal performance metric was confirmed with a blown-wing computed effective lift coefficient of 4.202. The lift augmentation from the high-lift, distributed electric propulsion system is 1.7. The predicted cruise wing drag coefficient of 0.02191 is 0.00076 above the drag allotted for the wing in the original estimate. However, the predicted drag overage for the wing would use only 10.1 percent of the original estimated drag margin, which is 0.00749.
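    The headline ratios follow directly from the quoted figures, as the short check below shows (numbers taken from the abstract).

    ```python
    # Blown vs unblown maximum lift, and fraction of drag margin consumed.
    cl_unblown, cl_blown = 2.439, 4.202
    print(round(cl_blown / cl_unblown, 2))             # ~1.72, the ~1.7 augmentation

    drag_overage, drag_margin = 0.00076, 0.00749
    print(round(100 * drag_overage / drag_margin, 1))  # ~10.1 percent of margin
    ```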

  6. Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems.

    PubMed

    Douterelo, I; Husband, S; Loza, V; Boxall, J

    2016-07-15

    The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. Copyright © 2016 Douterelo et al.

  7. Dynamics of Biofilm Regrowth in Drinking Water Distribution Systems

    PubMed Central

    Husband, S.; Loza, V.; Boxall, J.

    2016-01-01

    ABSTRACT The majority of biomass within water distribution systems is in the form of attached biofilm. This is known to be central to drinking water quality degradation following treatment, yet little understanding of the dynamics of these highly heterogeneous communities exists. This paper presents original information on such dynamics, with findings demonstrating patterns of material accumulation, seasonality, and influential factors. Rigorous flushing operations repeated over a 1-year period on an operational chlorinated system in the United Kingdom are presented here. Intensive monitoring and sampling were undertaken, including time-series turbidity and detailed microbial analysis using 16S rRNA Illumina MiSeq sequencing. The results show that bacterial dynamics were influenced by differences in the supplied water and by the material remaining attached to the pipe wall following flushing. Turbidity, metals, and phosphate were the main factors correlated with the distribution of bacteria in the samples. Coupled with the lack of inhibition of biofilm development due to residual chlorine, this suggests that limiting inorganic nutrients, rather than organic carbon, might be a viable component in treatment strategies to manage biofilms. The research also showed that repeat flushing exerted beneficial selective pressure, giving another reason for flushing being a viable advantageous biofilm management option. This work advances our understanding of microbiological processes in drinking water distribution systems and helps inform strategies to optimize asset performance. IMPORTANCE This research provides novel information regarding the dynamics of biofilm formation in real drinking water distribution systems made of different materials. This new knowledge on microbiological process in water supply systems can be used to optimize the performance of the distribution network and to guarantee safe and good-quality drinking water to consumers. PMID:27208119

  8. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of an MRI magnet.

    PubMed

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-02-01

    Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality that reconstructs the electrical conductivity of biological tissue based on acoustic measurements of Lorentz-force-induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm for performing a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object, which is then used to estimate the object's conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Through numerical simulations, the full width at half maximum of the imaging point spread function is calculated to estimate the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with ∼1.5 mm spatial resolution at the imaging system frequency of 500 kHz. In addition, the experimental results suggest that MAT-MI in a high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and that imaging under a high static magnetic field provides improved contrast for biological tissue conductivity reconstruction.
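    The resolution estimate rests on the FWHM of the point spread function. A generic numerical FWHM estimate is sketched below; the Gaussian test profile with a 1.5 mm FWHM is an assumption for illustration, not the system's actual PSF.

    ```python
    import numpy as np

    def fwhm(x, psf):
        """Width between the outermost samples at or above half maximum."""
        half = psf.max() / 2.0
        above = np.where(psf >= half)[0]
        return x[above[-1]] - x[above[0]]

    x = np.linspace(-5.0, 5.0, 2001)                   # mm
    sigma = 1.5 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # Gaussian with 1.5 mm FWHM
    print(fwhm(x, np.exp(-x**2 / (2 * sigma**2))))     # ~1.5
    ```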

  9. A comparison of Boolean-based retrieval to the WAIS system for retrieval of aeronautical information

    NASA Technical Reports Server (NTRS)

    Marchionini, Gary; Barlow, Diane

    1994-01-01

    An evaluation was conducted comparing an information retrieval system using a Boolean-based retrieval engine with an inverted-file architecture against WAIS, which uses a vector-based engine. Four research questions in aeronautical engineering were used to retrieve sets of citations from the NASA Aerospace Database, which was mounted on a WAIS server and available through Dialog File 108, the latter serving as the Boolean-based system (BBS). High-recall and high-precision searches were done in the BBS, and terse and verbose queries were used in the WAIS condition. Precision values for the WAIS searches were consistently above the precision values for high-recall BBS searches and consistently below the precision values for high-precision BBS searches. Terse WAIS queries gave somewhat better precision performance than verbose WAIS queries. In every case, a small number of relevant documents retrieved by one system were not retrieved by the other, indicating the incomplete nature of the results from either retrieval system. Relevant documents in the WAIS searches were found to be randomly distributed in the retrieved sets rather than distributed by rank. Advantages and limitations of both types of systems are discussed.
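    The comparison rests on standard precision and recall. A minimal sketch, with document identifiers as hypothetical stand-ins:

    ```python
    def precision_recall(retrieved, relevant):
        """Precision: fraction of retrieved that are relevant; recall: fraction
        of relevant that were retrieved."""
        hits = len(set(retrieved) & set(relevant))
        return hits / len(retrieved), hits / len(relevant)

    wais = ["d1", "d3", "d7", "d9"]
    boolean_hr = ["d1", "d2", "d3", "d4", "d8"]   # a high-recall Boolean search
    truth = ["d1", "d2", "d3"]
    print(precision_recall(wais, truth))          # (0.5, 0.667)
    print(precision_recall(boolean_hr, truth))    # (0.6, 1.0)
    ```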

  10. Active Damping Using Distributed Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Quinones, Juan D.; Wier, Nathan C.

    2010-01-01

    A helicopter structure experiences substantial high-frequency mechanical excitation from powertrain components such as gearboxes and drive shafts. The resulting structure-borne vibration excites the windows which then radiate sound into the passenger cabin. In many cases the radiated sound power can be reduced by adding damping. This can be accomplished using passive or active approaches. Passive treatments such as constrained layer damping tend to reduce window transparency. Therefore this paper focuses on an active approach utilizing compact decentralized control units distributed around the perimeter of the window. Each control unit consists of a triangularly shaped piezoelectric actuator, a miniature accelerometer, and analog electronics. Earlier work has shown that this type of system can increase damping up to approximately 1 kHz. However at higher frequencies the mismatch between the distributed actuator and the point sensor caused control spillover. This paper describes new anisotropic actuators that can be used to improve the bandwidth of the control system. The anisotropic actuators are composed of piezoelectric material sandwiched between interdigitated electrodes, which enables the application of the electric field in a preferred in-plane direction. When shaped correctly the anisotropic actuators outperform traditional isotropic actuators by reducing the mismatch between the distributed actuator and point sensor at high frequencies. Testing performed on a Plexiglas panel, representative of a helicopter window, shows that the control units can increase damping at low frequencies. However high frequency performance was still limited due to the flexible boundary conditions present on the test structure.

  11. Building a highly available and intrusion tolerant Database Security and Protection System (DSPS).

    PubMed

    Cai, Liang; Yang, Xiao-Hu; Dong, Jin-Xiang

    2003-01-01

    The Database Security and Protection System (DSPS) is a security platform for defending against malicious DBMS attacks. Security and performance are critical to DSPS. The authors suggest a key management scheme that combines a server-group structure, to improve availability, with the key distribution structure needed for proactive security. This paper details the implementation of proactive security in DSPS. After a thorough performance analysis, the authors conclude that the performance difference between the replicated mechanism and the proactive mechanism becomes smaller and smaller as the number of concurrent connections increases, and that proactive security is useful and practical for large, critical applications.

  12. Aerodynamics of High-Lift Configuration Civil Aircraft Model in JAXA

    NASA Astrophysics Data System (ADS)

    Yokokawa, Yuzuru; Murayama, Mitsuhiro; Ito, Takeshi; Yamamoto, Kazuomi

    This paper presents the basic aerodynamics and stall characteristics of the high-lift configuration aircraft model JSM (JAXA Standard Model). During the process of developing a high-lift system design method, wind tunnel testing in the JAXA 6.5 m by 5.5 m low-speed wind tunnel and Navier-Stokes computation on an unstructured hybrid mesh were performed for a realistic configuration aircraft model equipped with high-lift devices, fuselage, nacelle-pylon, slat tracks and Flap Track Fairings (FTF), which was assumed to be a 100-passenger-class modern commercial transport aircraft. The testing and the computation aimed to build understanding of the flow physics and thereby obtain guidelines for designing a high-performance high-lift system. The testing revealed Reynolds number effects within both the linear region and the stall region. Analysis of static pressure distributions and flow visualization provided the knowledge needed to understand the aerodynamic performance. CFD could capture the whole character of the basic aerodynamics and clarify the flow mechanisms that govern stall characteristics, even for complicated geometry and its flow field. This collaborative work between wind tunnel testing and CFD has proven advantageous for improving aerodynamic performance.

  13. High-Performance Computing Data Center Cooling System Energy Efficiency

    Science.gov Websites

    … approaches involve a cooling distribution unit (CDU), which interfaces with the facility cooling loop and with the energy recovery water (ERW) loop, a closed-loop system. There are three heat rejection options for this IT load; when possible, heat energy from the energy recovery loop is transferred …

  14. Performance Enhancement of Radial Distributed System with Distributed Generators by Reconfiguration Using Binary Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Rajalakshmi, N.; Padma Subramanian, D.; Thamizhavel, K.

    2015-03-01

    The extent of real power loss and voltage deviation associated with overloaded feeders in a radial distribution system can be reduced by reconfiguration. Reconfiguration is normally achieved by changing the open/closed states of tie/sectionalizing switches. Finding the optimal switch combination is a complicated problem because many switching combinations are possible in a distribution system. Hence, optimization techniques are gaining importance for reducing the complexity of the reconfiguration problem. This paper presents the application of the firefly algorithm (FA) for optimal reconfiguration of a radial distribution system with distributed generators (DG). The algorithm is tested on the IEEE 33-bus system installed with DGs, and the results are compared with a binary genetic algorithm. It is found that binary FA is more effective than the binary genetic algorithm in achieving real power loss reduction and improving the voltage profile, and hence in enhancing the performance of the radial distribution system. Results are found to be optimal when DGs are added to the test system, demonstrating the impact of DGs on the distribution system.
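    For readers unfamiliar with the method, the sketch below shows the core binary firefly move applied to switch-state vectors. The sigmoid transfer function and parameter values follow common binary-FA practice and are assumptions, and the power_loss objective is a placeholder for a real load-flow evaluation, not the paper's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    beta0, gamma, alpha = 1.0, 1.0, 0.2

    def power_loss(switches):            # hypothetical objective (lower = better)
        return float(np.sum(switches))   # replace with a load-flow solver

    def move_towards(xi, xj):
        """Move dimmer firefly xi towards brighter xj, then binarize."""
        r2 = np.sum((xi - xj) ** 2)
        step = (beta0 * np.exp(-gamma * r2) * (xj - xi)
                + alpha * rng.normal(size=xi.size))
        prob = 1.0 / (1.0 + np.exp(-(xi + step)))    # sigmoid transfer function
        return (rng.random(xi.size) < prob).astype(float)

    pop = rng.integers(0, 2, size=(6, 10)).astype(float)  # 6 fireflies, 10 switches
    for _ in range(50):
        cost = [power_loss(x) for x in pop]
        for i in range(len(pop)):
            for j in range(len(pop)):
                if cost[j] < cost[i]:                     # j is brighter
                    pop[i] = move_towards(pop[i], pop[j])
    print(min(power_loss(x) for x in pop))
    ```

    In a real reconfiguration study each candidate vector would also be checked for radiality and connectivity before being evaluated.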

  15. Task allocation in a distributed computing system

    NASA Technical Reports Server (NTRS)

    Seward, Walter D.

    1987-01-01

    A conceptual framework is examined for task allocation in distributed systems. Application and computing system parameters critical to task allocation decision processes are discussed. Task allocation techniques are addressed which focus on achieving a balance in the load distribution among the system's processors. Equalization of computing load among the processing elements is the goal. Examples of system performance are presented for specific applications. Both static and dynamic allocation of tasks are considered and system performance is evaluated using different task allocation methodologies.
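    A minimal sketch of the load-equalization idea, assuming a static greedy policy (longest task first onto the least-loaded processor); the task costs and processor count are illustrative, not the report's data.

    ```python
    import heapq

    def allocate(task_costs, n_processors):
        """Greedy static allocation: largest task to the least-loaded processor."""
        heap = [(0.0, p, []) for p in range(n_processors)]   # (load, id, tasks)
        heapq.heapify(heap)
        for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
            load, p, tasks = heapq.heappop(heap)
            tasks.append(task)
            heapq.heappush(heap, (load + cost, p, tasks))
        return heap

    assignment = allocate({"t1": 4.0, "t2": 3.0, "t3": 3.0, "t4": 2.0}, 2)
    for load, p, tasks in sorted(assignment, key=lambda e: e[1]):
        print(f"processor {p}: load={load} tasks={tasks}")
    ```

    Dynamic allocation, also considered in the report, would instead re-run a decision like this as tasks arrive and loads change.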

  16. About Distributed Simulation-based Optimization of Forming Processes using a Grid Architecture

    NASA Astrophysics Data System (ADS)

    Grauer, Manfred; Barth, Thomas

    2004-06-01

    Permanently increasing complexity of products and their manufacturing processes combined with a shorter "time-to-market" leads to more and more use of simulation and optimization software systems for product design. Finding a "good" design of a product implies the solution of computationally expensive optimization problems based on the results of simulation. Due to the computational load caused by the solution of these problems, the requirements on the Information&Telecommunication (IT) infrastructure of an enterprise or research facility are shifting from stand-alone resources towards the integration of software and hardware resources in a distributed environment for high-performance computing. Resources can either comprise software systems, hardware systems, or communication networks. An appropriate IT-infrastructure must provide the means to integrate all these resources and enable their use even across a network to cope with requirements from geographically distributed scenarios, e.g. in computational engineering and/or collaborative engineering. Integrating expert's knowledge into the optimization process is inevitable in order to reduce the complexity caused by the number of design variables and the high dimensionality of the design space. Hence, utilization of knowledge-based systems must be supported by providing data management facilities as a basis for knowledge extraction from product data. In this paper, the focus is put on a distributed problem solving environment (PSE) capable of providing access to a variety of necessary resources and services. A distributed approach integrating simulation and optimization on a network of workstations and cluster systems is presented. For geometry generation the CAD-system CATIA is used which is coupled with the FEM-simulation system INDEED for simulation of sheet-metal forming processes and the problem solving environment OpTiX for distributed optimization.

  17. Assessing performance of Botswana’s public hospital system: the use of the World Health Organization Health System Performance Assessment Framework

    PubMed Central

    Seitio-Kgokgwe, Onalenna; Gauld, Robin DC; Hill, Philip C; Barnett, Pauline

    2014-01-01

    Background: Very few studies have assessed the performance of Botswana's public hospitals. We draw on a large research study assessing the performance of the Botswana Ministry of Health (MoH) to evaluate the performance of the public hospital system using the World Health Organization Health Systems Performance Assessment Framework (WHO HSPAF). We aimed to evaluate the performance of the Botswana public hospital system; relate the findings of the assessment to the potential for improvements in hospital performance; and determine the usefulness of the WHO HSPAF in assessing the performance of hospital systems in a developing country. Methods: This article is based on data collected from document analysis, 54 key informants comprising senior managers and staff of the MoH (N = 40) and senior officers from stakeholder organizations (N = 14), and surveys of 42 hospital managers and 389 health workers. Data from documents and transcripts were analyzed using content and thematic analysis, while survey data analysis was descriptive, determining proportions and percentages. Results: The organizational structure of Botswana's public hospital system is highly centralized, as are authority and decision-making. Overall physical access to health services is high. However, challenges in the distribution of facilities and inpatient beds create inequities and inefficiencies. The capacity of the hospitals to deliver services is limited by inadequate resources. There are significant challenges with the quality of care. Conclusion: While Botswana invested considerably in building hospitals around the country, resulting in high physical access to services, the organization and governance of the hospital system and inadequate resources limit service delivery. The ongoing efforts to decentralize management of hospitals to district-level entities should be expedited. The WHO HSPAF enabled us to conduct a comprehensive assessment of the public hospital system. Though relatively new, this approach proved useful in this study. PMID:25279380

  18. High throughput computing: a solution for scientific analysis

    USGS Publications Warehouse

    O'Donnell, M.

    2011-01-01

    … handle job failures due to hardware, software, or network interruptions (obviating the need to manually resubmit the job after each stoppage); be affordable; and, most importantly, allow us to complete very large, complex analyses that otherwise would not even be possible. In short, we envisioned a job-management system that would take advantage of unused FORT CPUs within a local area network (LAN) to effectively distribute and run highly complex analytical processes. What we found was a solution that uses High Throughput Computing (HTC) and High Performance Computing (HPC) systems to do exactly that (Figure 1).

  19. Distributed Turboelectric Propulsion for Hybrid Wing Body Aircraft

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Brown, Gerald V.; Felder, James L.

    2008-01-01

    Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.

  20. Security of subcarrier wave quantum key distribution against the collective beam-splitting attack.

    PubMed

    Miroshnichenko, G P; Kozubov, A V; Gaidash, A A; Gleim, A V; Horoshko, D B

    2018-04-30

    We consider a subcarrier wave quantum key distribution (QKD) system, where quantum encoding is carried out at weak sidebands generated around a coherent optical beam as a result of electro-optical phase modulation. We study the security of two protocols, B92 and BB84, against one of the most powerful attacks for this class of systems, the collective beam-splitting attack. Our analysis includes the case of high modulation index, where the sidebands are essentially multimode. We demonstrate numerically and experimentally that a subcarrier wave QKD system with realistic parameters is capable of distributing cryptographic keys over large distances in the presence of collective attacks. We also show that a BB84 protocol modification with discrimination of only one state in each basis performs no worse than the original BB84 protocol in this class of QKD systems, thus significantly simplifying the development of cryptographic networks using the considered QKD technique.

  1. Controlled motion in an elastic world. Research project: Manipulation strategies for massive space payloads

    NASA Technical Reports Server (NTRS)

    Book, Wayne J.

    1992-01-01

    The flexibility of the drives and structures of controlled motion systems is presented as an obstacle to be overcome in the design of high-performance motion systems, particularly manipulator arms. The task and the measure of performance to be applied determine the technology appropriate to overcome this obstacle. The technologies proposed include control algorithms (feedback and feedforward), passive damping enhancement, operational strategies, and structural design. Modeling of the distributed, nonlinear system is difficult, and alternative approaches are discussed. The author presents personal perspectives on the history, status, and future directions in this area.

  2. Alternative Architectures for Distributed Cooperative Problem-Solving in the National Airspace System

    NASA Technical Reports Server (NTRS)

    Smith, Phillip J.; Billings, Charles; McCoy, C. Elaine; Orasanu, Judith

    1999-01-01

    The air traffic management system in the United States is an example of a distributed problem solving system. It has elements of both cooperative and competitive problem-solving. This system includes complex organizations such as Airline Operations Centers (AOCs), the FAA Air Traffic Control Systems Command Center (ATCSCC), and traffic management units (TMUs) at enroute centers and TRACONs, all of which have a major focus on strategic decision-making. It also includes individuals concerned more with tactical decisions (such as air traffic controllers and pilots). The architecture for this system has evolved over time to rely heavily on the distribution of tasks and control authority in order to keep cognitive complexity manageable for any one individual operator, and to provide redundancy (both human and technological) to serve as a safety net to catch the slips or mistakes that any one person or entity might make. Currently, major changes are being considered for this architecture, especially with respect to the locus of control, in an effort to improve efficiency and safety. This paper uses a series of case studies to help evaluate some of these changes from the perspective of system complexity, and to point out possible alternative approaches that might be taken to improve system performance. The paper illustrates the need to maintain a clear understanding of what is required to assure a high level of performance when alternative system architectures and decompositions are developed.

  3. Design and performance investigation of a highly accurate apodized fiber Bragg grating-based strain sensor in single and quasi-distributed systems.

    PubMed

    Ali, Taha A; Shehata, Mohamed I; Mohamed, Nazmi A

    2015-06-01

    In this work, fiber Bragg grating (FBG) strain sensors in single and quasi-distributed systems are investigated, seeking high-accuracy measurement. Since FBG-based strain sensors of small length are preferred in medical applications, which causes the full width at half-maximum (FWHM) to be larger, a new apodization profile is introduced for the first time, to the best of our knowledge, with a remarkable FWHM at small sensor lengths compared to the Gaussian and Nuttall profiles, in addition to a higher mainlobe slope at these lengths. A careful selection of apodization profiles with detailed investigation is performed using sidelobe analysis and the FWHM, which are primary judgment factors, especially in a quasi-distributed configuration. A comparison between an elite selection of apodization profiles (extracted from the related literature) and the proposed new profile is carried out, covering the reflectivity peak, FWHM, and sidelobe analysis. The optimization process concludes that the proposed new profile with a chosen small length (L) of 10 mm and Δn_ac of 1.4×10^-4 is the optimum choice for single-stage and quasi-distributed strain-sensor networks, performing even better than the Gaussian profile at small sensor lengths. The proposed profile achieves the smallest FWHM of 15 GHz (suitable for UDWDM) and the highest mainlobe slope of 130 dB/nm. For the quasi-distributed scenario, a noteworthy high isolation of 6.953 dB is achieved while applying a high strain value of 1500 μstrain (με) for a five-stage strain-sensing network. Further investigation showed that consistency in choosing the apodization profile in the quasi-distributed network is mandatory; this was demonstrated by including a uniformly apodized sensor among sensors apodized with the proposed profile in an FBG strain-sensor network.
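    As a rough illustration of the FWHM-versus-sidelobe trade being optimised, the sketch below compares FFT spectra of window functions as stand-ins for apodization profiles; the paper's actual figures come from FBG simulation, not from this shortcut, and the window parameters are assumptions.

    ```python
    import numpy as np
    from scipy.signal import windows

    def spectrum_metrics(w, pad=64):
        """Return (FWHM in FFT bins, peak sidelobe level in dB) of a window."""
        mag = np.abs(np.fft.rfft(w, len(w) * pad))
        mag /= mag.max()
        half = np.where(mag >= 0.5)[0]            # contiguous mainlobe bins
        fwhm_bins = 2.0 * half[-1] / pad          # spectrum is symmetric about 0
        first_min = np.argmax(np.diff(mag) > 0)   # index just past first minimum
        psl_db = 20.0 * np.log10(mag[first_min:].max())
        return fwhm_bins, psl_db

    n = 256
    for name, w in [("gaussian", windows.gaussian(n, std=n / 6)),
                    ("nuttall", windows.nuttall(n))]:
        print(name, spectrum_metrics(w))
    ```

    The same tension appears here as in the paper: profiles that suppress sidelobes more aggressively tend to widen the mainlobe, which is why a joint optimisation is needed.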

  4. Intermediate photovoltaic system application experiment operational performance report. Volume 6: Beverly High School, Beverly, Mass.

    NASA Astrophysics Data System (ADS)

    1982-03-01

    Performance data are given for the month of February, 1982 for a photovoltaic power supply at a Massachusetts high school. Data given include: monthly and daily electrical energy yield; monthly and daily insolation; monthly and daily array efficiency; energy production as a function of power level, voltage, cell temperature, and hour of day; insolation as a function of hour of the day; input, output and efficiency for each of two power conditioning units and for the total power conditioning system; energy supplied to the load by the photovoltaic system and by the grid; photovoltaic system efficiency; dollar value of the energy supplied by the photovoltaic system; capacity factor; daily photovoltaic energy to load; daily system availability and hours of daylight; heating and cooling degree days; hourly cell temperature, ambient temperature, wind speed, and insolation; average monthly wind speed; wind direction distribution; and daily data acquisition mode and recording interval plot.

  5. Design and Development of a 200-kW Turbo-Electric Distributed Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Papathakis, Kurt V.; Kloesel, Kurt J.; Lin, Yohan; Clarke, Sean; Ediger, Jacob J.; Ginn, Starr

    2016-01-01

    The National Aeronautics and Space Administration (NASA) Armstrong Flight Research Center (AFRC) (Edwards, California) is developing a Hybrid-Electric Integrated Systems Testbed (HEIST) as part of the HEIST Project, to study power management and transition complexities, modular architectures, and flight control laws for turbo-electric distributed propulsion technologies using representative hardware and piloted simulations. Capabilities are being developed to assess the flight readiness of hybrid-electric and distributed-electric vehicle architectures. Additionally, NASA will leverage experience gained and assets developed from HEIST to assist in flight-test proposal development, flight-test vehicle design, and evaluation of hybrid-electric and distributed-electric concept vehicles for flight safety. The HEIST test equipment will include three trailers supporting a distributed electric propulsion wing, a battery system and turbogenerator, dynamometers, and supporting power and communication infrastructure, all connected to the AFRC Core simulation. Plans call for 18 high-performance electric motors that will be powered by batteries and the turbogenerator, and commanded by a piloted simulation. Flight control algorithms will be developed on the turbo-electric distributed propulsion system.

  6. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time-consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.
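    The locking model being analysed rests on the standard shared/exclusive compatibility rule, sketched minimally below; the identifiers are illustrative, not the paper's notation.

    ```python
    # Shared (S) locks coexist; exclusive (X) locks conflict with everything.
    COMPATIBLE = {("S", "S"): True, ("S", "X"): False,
                  ("X", "S"): False, ("X", "X"): False}

    def can_grant(requested, held_modes):
        """A lock is granted only if compatible with every lock already held."""
        return all(COMPATIBLE[(held, requested)] for held in held_modes)

    print(can_grant("S", ["S", "S"]))   # True: readers share
    print(can_grant("X", ["S"]))        # False: writer must wait for readers
    ```

    Waiting induced by the False branch is precisely what creates the deadlock cycles whose performance impact the paper evaluates.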

  7. Modeling Operator Performance in Low Task Load Supervisory Domains

    DTIC Science & Technology

    2011-06-01

    PDF: Probability Distribution Function; SAFE: System for Aircrew Fatigue Evaluation; SAFTE: Sleep, Activity, Fatigue, and Task Effectiveness; SCT … attentional capacity due to high mental workload. In low task load settings, fatigue is mainly caused by lack of sleep and boredom experienced by … performance decrements. Also, psychological fatigue is strongly correlated with lack of sleep. Not surprisingly, operators of the morning shift reported the …

  8. GLOBECOM '87 - Global Telecommunications Conference, Tokyo, Japan, Nov. 15-18, 1987, Conference Record. Volumes 1, 2, & 3

    NASA Astrophysics Data System (ADS)

    The present conference on global telecommunications discusses topics in the fields of Integrated Services Digital Network (ISDN) technology field trial planning and results to date, motion video coding, ISDN networking, future network communications security, flexible and intelligent voice/data networks, Asian and Pacific lightwave and radio systems, subscriber radio systems, the performance of distributed systems, signal processing theory, satellite communications modulation and coding, and terminals for the handicapped. Also discussed are knowledge-based technologies for communications systems, future satellite transmissions, high quality image services, novel digital signal processors, broadband network access interface, traffic engineering for ISDN design and planning, telecommunications software, coherent optical communications, multimedia terminal systems, advanced speed coding, portable and mobile radio communications, multi-Gbit/second lightwave transmission systems, enhanced capability digital terminals, communications network reliability, advanced antimultipath fading techniques, undersea lightwave transmission, image coding, modulation and synchronization, adaptive signal processing, integrated optical devices, VLSI technologies for ISDN, field performance of packet switching, CSMA protocols, optical transport system architectures for broadband ISDN, mobile satellite communications, indoor wireless communication, echo cancellation in communications, and distributed network algorithms.

  9. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Mather, Barry A

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of the historical high-speed data sets, the use of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components and eliminate the unnecessary components from the historical high-speed raw data to create the library of classes, which are then utilized to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be utilized in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
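    A minimal sketch of the multi-resolution step, assuming a wavelet decomposition via the PyWavelets package; the db4 wavelet, decomposition depth, choice of bands to zero, and the synthetic trace are all illustrative assumptions rather than the report's configuration.

    ```python
    import numpy as np
    import pywt

    t = np.linspace(0, 1, 4096)
    load = 50 + 10 * np.sin(2 * np.pi * 4 * t) + np.random.normal(0, 2, t.size)

    # Decompose into one approximation band and five detail bands.
    coeffs = pywt.wavedec(load, "db4", level=5)
    # Keep the slow trend (approximation) and the fine fluctuations, but zero
    # the two coarsest detail bands to strip intermediate-scale variability;
    # which bands define a "variability class" is the modeller's choice.
    filtered = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:3]] + coeffs[3:]
    synthetic_basis = pywt.waverec(filtered, "db4")
    print(load.std(), synthetic_basis.std())
    ```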

  10. Fuzzy comprehensive evaluation for grid-connected performance of integrated distributed PV-ES systems

    NASA Astrophysics Data System (ADS)

    Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.

    2016-08-01

    Based on a discussion of the topology of integrated distributed photovoltaic (PV) power generation systems with single- or mixed-type energy storage (ES), this paper focuses on analyzing the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems and proposes a comprehensive evaluation index system. A multi-level fuzzy comprehensive evaluation method based on grey correlation degree is then proposed, and the calculations for the weight matrix and the fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as an example, and some suggestions are made based on the evaluation results.
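    The final composition step of a fuzzy comprehensive evaluation is a weighted combination of membership grades. A minimal sketch, with assumed weights (in the paper these derive from the grey correlation degree) and illustrative index names and membership values:

    ```python
    import numpy as np

    # Membership of 3 grid-connection indices in 4 grades (excellent..poor);
    # the index names in the comments are hypothetical examples.
    R = np.array([[0.6, 0.3, 0.1, 0.0],    # e.g. power quality
                  [0.2, 0.5, 0.2, 0.1],    # e.g. ramp-rate behaviour
                  [0.4, 0.4, 0.1, 0.1]])   # e.g. storage response
    W = np.array([0.5, 0.3, 0.2])          # index weights, summing to 1

    B = W @ R                              # fuzzy composition (weighted average)
    grades = ["excellent", "good", "fair", "poor"]
    print(B, "->", grades[int(B.argmax())])
    ```

    A multi-level scheme simply repeats this composition: grade vectors of sub-indices become the rows of R at the next level up.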

  11. Development of Metal Oxide Nanostructure-based Optical Sensors for Fossil Fuel Derived Gases Measurement at High Temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Kevin P.

    2015-02-13

    This final technical report details research performed under a Department of Energy grant (DE-FE0003859), awarded under the University Coal Research Program administered by the National Energy Technology Laboratory. The program studied high-temperature fiber sensors for harsh-environment applications. It developed two fiber optical sensor platform technologies: regenerative fiber Bragg grating sensors and distributed fiber optical sensing based on Rayleigh-backscattering optical frequency domain reflectometry. Through studies of chemical and thermal regenerative techniques for fiber Bragg grating (FBG) fabrication, high-temperature-stable FBG sensors were successfully developed and fabricated in air-hole microstructured fibers, high-attenuation fibers, rare-earth-doped fibers, and standard telecommunication fibers. By optimizing the laser processing and thermal annealing procedures, fiber grating sensors with stable performance up to 1100°C were developed. Using these temperature-stable FBGs as a sensor platform, fiber optical flow, temperature, pressure, and chemical sensors were developed to operate at temperatures up to 800°C. Through the integration of on-fiber functional coatings, the use of application-specific air-hole microstructural fiber, and the application of an active fiber sensing scheme, distributed fiber sensing of temperature, pressure, flow, liquid level, and chemical concentration was demonstrated with high spatial resolution (1 cm or better) over wide temperature ranges. Demonstrations include liquid-level sensing from 77 K to room temperature, pressure/temperature sensing from room temperature to 800°C and from 15 psi to 2000 psi, and hydrogen concentration measurement from 0.2% to 10% at temperatures from room temperature to 700°C. Optical sensors developed by this program broke several technical records, including flow sensors with the highest operating temperature (up to 750°C), the first distributed chemical measurements at record temperatures (up to 700°C), the first distributed pressure measurements at record temperatures (up to 800°C), and fiber laser sensors with a record operating temperature of up to 700°C. The research performed by this program dramatically expands the functionality, adaptability, and applicability of distributed fiber optical sensors, with potential applications in a number of high-temperature energy systems such as fossil-fuel power generation, high-temperature fuel cells, and nuclear energy systems.

  12. Building a generalized distributed system model

    NASA Technical Reports Server (NTRS)

    Mukkamala, Ravi

    1991-01-01

    A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.

  13. Fractional System Identification: An Approach Using Continuous Order-Distributions

    NASA Technical Reports Server (NTRS)

    Hartley, Tom T.; Lorenzo, Carl F.

    1999-01-01

    This paper discusses the identification of fractional- and integer-order systems using the concept of continuous order-distribution. Based on the ability to define systems using continuous order-distributions, it is shown that frequency domain system identification can be performed using least squares techniques after discretizing the order-distribution.
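    A minimal sketch of that idea: discretize the order-distribution onto a fixed grid of orders and solve for the weights by linear least squares. The test system, the order grid, and the choice to fit 1/H (which makes this particular example exactly linear in the unknowns) are assumptions for illustration, not the paper's derivation.

    ```python
    import numpy as np

    w = np.logspace(-2, 2, 200)
    jw = 1j * w
    H = 1.0 / (jw ** 0.5 + 2.0)            # "measured" fractional system response

    orders = np.arange(0.0, 2.01, 0.25)    # discretized order grid
    A = np.column_stack([jw ** a for a in orders])
    # 1/H = sum_k a_k (jw)^alpha_k is linear in the weights a_k, so ordinary
    # least squares recovers the order-distribution on the grid.
    coef, *_ = np.linalg.lstsq(A, 1.0 / H, rcond=None)
    print(np.round(coef.real, 3))          # expect ~2 at order 0, ~1 at order 0.5
    ```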

  14. Evolutionary Telemetry and Command Processor (TCP) architecture

    NASA Technical Reports Server (NTRS)

    Schneider, John R.

    1992-01-01

    A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 94 SDIO SPAS 3 mission. The architecture enhances rapid tailoring of functions thereby reducing costs and schedules developed for individual spacecraft missions.

  15. Communication Needs Assessment for Distributed Turbine Engine Control

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Behbahani, Alireza R.

    2008-01-01

    Control system architecture is a major contributor to future propulsion engine performance enhancement and life cycle cost reduction. The control system architecture can be a means to effect net weight reduction in future engine systems, provide a streamlined approach to system design and implementation, and enable new opportunities for performance optimization and increased awareness about system health. The transition from a centralized, point-to-point analog control topology to a modular, networked, distributed system is paramount to extracting these system improvements. However, distributed engine control systems are only possible through the successful design and implementation of a suitable communication system. In a networked system, understanding the data flow between control elements is a fundamental requirement for specifying the communication architecture which, itself, is dependent on the functional capability of electronics in the engine environment. This paper presents an assessment of the communication needs for distributed control using strawman designs and relates how system design decisions relate to overall goals as we progress from the baseline centralized architecture, through partially distributed and fully distributed control systems.

  16. Prescribed performance distributed consensus control for nonlinear multi-agent systems with unknown dead-zone input

    NASA Astrophysics Data System (ADS)

    Cui, Guozeng; Xu, Shengyuan; Ma, Qian; Li, Yongmin; Zhang, Zhengqiang

    2018-05-01

    In this paper, the problem of prescribed performance distributed output consensus for higher-order non-affine nonlinear multi-agent systems with unknown dead-zone input is investigated. Fuzzy logic systems are utilised to identify the unknown nonlinearities. By introducing prescribed performance, the transient and steady-state performance of the synchronisation errors is guaranteed. Based on Lyapunov stability theory and the dynamic surface control technique, a new distributed consensus algorithm for non-affine nonlinear multi-agent systems is proposed, which ensures cooperative uniform ultimate boundedness of all signals in the closed-loop systems and enables the output of each follower to synchronise with the leader within a predefined bounded error. Finally, simulation examples are provided to demonstrate the effectiveness of the proposed control scheme.
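    The prescribed-performance mechanism constrains each synchronisation error to an exponentially shrinking envelope. The sketch below uses the performance function common in this literature; the paper's exact function and parameters are not given here, so the values are assumptions.

    ```python
    import numpy as np

    rho0, rho_inf, decay = 2.0, 0.1, 1.5

    def rho(t):
        """Performance funnel: transient bound rho0 decaying to steady bound."""
        return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

    t = np.linspace(0, 5, 6)
    e = 1.5 * np.exp(-2.0 * t)    # a hypothetical synchronisation error trace
    print(all(abs(e) < rho(t)))   # True: the error stays inside the funnel
    ```

    Keeping e(t) strictly inside ±rho(t) is what fixes both the transient overshoot and the steady-state error bound in advance, before any controller gains are tuned.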

  17. High Capacity Single Table Performance Design Using Partitioning in Oracle or PostgreSQL

    DTIC Science & Technology

    2012-03-01

    … Figure 7: Time to seek and return one record. … 4. Additional Key Performance Indicators (KPIs): In addition to pure response time, there are other … ASM: Automatic Storage Management; CPU: central processing unit; I/O: input/output; KPIs: key performance indicators; OS: operating system.

  18. Got political skill? The impact of justice on the importance of political skill for job performance.

    PubMed

    Andrews, Martha C; Kacmar, K Michele; Harris, Kenneth J

    2009-11-01

    The present study examined the moderating effects of procedural and distributive justice on the relationships between political skill and task performance and organizational citizenship behavior (OCB) among 175 supervisor-subordinate dyads of a government organization. Using Mischel's (1968) situationist perspective, high justice conditions were considered "strong situations," whereas low justice conditions were construed as "weak situations." We found that when both procedural and distributive justice were low, political skill was positively related to performance. Under conditions of both high procedural and high distributive justice, political skill was negatively related to performance. Finally, under conditions of low distributive justice, political skill was positively related to OCB, whereas under conditions of high distributive justice, political skill had little effect on OCB. These results highlight the importance of possessing political skill in weak but not strong situations.

  19. High-Throughput and Low-Latency Network Communication with NetIO

    NASA Astrophysics Data System (ADS)

    Schumacher, Jörn; Plessl, Christian; Vandelli, Wainer

    2017-10-01

    HPC network technologies like Infiniband, TrueScale or OmniPath provide low-latency and high-throughput communication between hosts, which makes them attractive options for data-acquisition systems in large-scale high-energy physics experiments. Like HPC networks, DAQ networks are local and include a well-specified number of systems. Unfortunately, traditional network communication APIs for HPC clusters like MPI or PGAS exclusively target the HPC community and are not well suited for DAQ applications. It is possible to build distributed DAQ applications using low-level system APIs like Infiniband Verbs, but this requires non-negligible effort and expert knowledge. At the same time, message services like ZeroMQ have gained popularity in the HEP community. They make it possible to build distributed applications with a high-level approach and provide good performance. Unfortunately, their usage usually limits developers to TCP/IP-based networks. While it is possible to operate a TCP/IP stack on top of Infiniband and OmniPath, this approach may not be very efficient compared to direct use of the native APIs. NetIO is a simple, novel asynchronous message service that can operate on Ethernet, Infiniband and similar network fabrics. In this paper the design and implementation of NetIO is presented, and its use is evaluated in comparison to other approaches. NetIO supports different high-level programming models and typical workloads of HEP applications. The ATLAS FELIX project [1] successfully uses NetIO as its central communication platform. The architecture of NetIO is described in this paper, including the user-level API and the internal data-flow design. The paper includes a performance evaluation of NetIO, with throughput and latency measurements compared against the state-of-the-art ZeroMQ message service. Performance measurements are performed in a lab environment with Ethernet and FDR Infiniband networks.
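    For context, a minimal round-trip latency probe against the ZeroMQ baseline mentioned in the evaluation might look as follows (requires pyzmq; the endpoint, message size, and in-process echo are illustrative assumptions, not the paper's benchmark setup).

    ```python
    import time
    import zmq

    ctx = zmq.Context()
    server = ctx.socket(zmq.REP); server.bind("tcp://127.0.0.1:5555")
    client = ctx.socket(zmq.REQ); client.connect("tcp://127.0.0.1:5555")

    payload = b"x" * 1024
    samples = []
    for _ in range(1000):
        start = time.perf_counter()
        client.send(payload)
        server.recv(); server.send(payload)   # echo in-process for illustration
        client.recv()
        samples.append(time.perf_counter() - start)

    print(f"median round trip: {sorted(samples)[len(samples)//2] * 1e6:.1f} us")
    ctx.destroy()
    ```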

  20. Chemically modified graphene/polyimide composite films based on utilization of covalent bonding and oriented distribution.

    PubMed

    Huang, Ting; Lu, Renguo; Su, Chao; Wang, Hongna; Guo, Zheng; Liu, Pei; Huang, Zhongyuan; Chen, Haiming; Li, Tongsheng

    2012-05-01

    Herein, we have developed a rather simple composite fabrication approach to achieving molecular-level dispersion and planar orientation of chemically modified graphene (CMG) in the thermosetting polyimide (PI) matrix as well as realizing strong adhesion at the interfacial regions between reinforcing filler and matrix. The covalent adhesion of CMG to PI matrix and oriented distribution of CMG were carefully confirmed and analyzed by detailed investigations. Combination of covalent bonding and oriented distribution could enlarge the effectiveness of CMG in the matrix. Efficient stress transfer was found at the CMG/PI interfaces. Significant improvements in the mechanical performances, thermal stability, electrical conductivity, and hydrophobic behavior were achieved by addition of only a small amount of CMG. Furthermore, it is noteworthy that the hydrophilic-to-hydrophobic transition and the electrical percolation were observed at only 0.2 wt % CMG in this composite system. This facile methodology is believed to afford broad application potential in graphene-based polymer nanocomposites, especially other types of high-performance thermosetting systems.

  1. State of Technology for Rehabilitation of Water Distribution Systems

    EPA Science Inventory

    The impact that the lack of investment in water infrastructure will have on the performance of aging underground infrastructure over time is well documented, and needed funding estimates range as high as $325 billion over the next 20 years. With the current annual replacement...

  2. Fault tolerant computer control for a Maglev transportation system

    NASA Technical Reports Server (NTRS)

    Lala, Jaynarayan H.; Nagle, Gail A.; Anagnostopoulos, George

    1994-01-01

    Magnetically levitated (Maglev) vehicles operating on dedicated guideways at speeds of 500 km/hr are an emerging transportation alternative to short-haul air and high-speed rail. They have the potential to offer a service significantly more dependable than air and with less operating cost than both air and high-speed rail. Maglev transportation derives these benefits by using magnetic forces to suspend a vehicle 8 to 200 mm above the guideway. Magnetic forces are also used for propulsion and guidance. The combination of high speed, short headways, stringent ride quality requirements, and a distributed offboard propulsion system necessitates high levels of automation for the Maglev control and operation. Very high levels of safety and availability will be required for the Maglev control system. This paper describes the mission scenario, functional requirements, and dependability and performance requirements of the Maglev command, control, and communications system. A distributed hierarchical architecture consisting of vehicle on-board computers, wayside zone computers, a central computer facility, and communication links between these entities was synthesized to meet the functional and dependability requirements on the maglev. Two variations of the basic architecture are described: the Smart Vehicle Architecture (SVA) and the Zone Control Architecture (ZCA). Preliminary dependability modeling results are also presented.

  3. An access control model with high security for distributed workflow and real-time application

    NASA Astrophysics Data System (ADS)

    Han, Ruo-Fei; Wang, Hou-Xiang

    2007-11-01

    The traditional mandatory access control policy (MAC) is regarded as a policy with strict regulation and poor flexibility. The security policy of MAC is so restrictive that few information systems would adopt it at the cost of convenience, except in particular cases with high security requirements such as military or government applications. However, with the increasing requirement for flexibility, even some access control systems in military applications have switched to role-based access control (RBAC), which is well known to be flexible. Although RBAC meets the demand for flexibility, it is weak in dynamic authorization and consequently does not fit well into workflow management systems. Task-role-based access control (T-RBAC) was introduced to solve this problem; it combines the advantages of RBAC and task-based access control (TBAC), which uses tasks to manage permissions dynamically. To satisfy the requirements of systems that are distributed, built around well-defined workflow processes, and critical with respect to timing accuracy, this paper analyzes the spirit of MAC and introduces it into an improved model based on T-RBAC. Finally, a conceptual task-role-based access control model with high security for distributed workflow and real-time applications (A_T&RBAC) is built, and its performance is briefly analyzed.
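
    As a rough illustration of how a MAC-style level check can be layered onto role- and task-based authorization, the sketch below combines a clearance test with permissions drawn from static roles and currently active tasks. All names, levels, and permission sets are hypothetical; this is a conceptual sketch, not the A_T&RBAC model itself.

      from dataclasses import dataclass, field

      ROLE_PERMS = {"clerk": {"read_order"}, "manager": {"read_order", "approve_order"}}
      TASK_PERMS = {"audit_2024": {"read_ledger"}}   # granted only while the task is active
      OBJECT_LEVEL = {"ledger": 3, "order": 1}       # MAC-style sensitivity levels

      @dataclass
      class Session:
          user: str
          clearance: int                             # MAC-style clearance level
          roles: set = field(default_factory=set)
          active_tasks: set = field(default_factory=set)

      def check_access(s, obj, perm):
          # MAC spirit: the subject's clearance must dominate the object's level
          if s.clearance < OBJECT_LEVEL.get(obj, 0):
              return False
          # RBAC: permissions from static roles; T-RBAC: plus active-task permissions
          granted = set()
          for r in s.roles:
              granted |= ROLE_PERMS.get(r, set())
          for t in s.active_tasks:
              granted |= TASK_PERMS.get(t, set())
          return perm in granted

      s = Session("alice", clearance=3, roles={"clerk"}, active_tasks={"audit_2024"})
      print(check_access(s, "ledger", "read_ledger"))   # True only while the task is active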

  4. A prototype Infrastructure for Cloud-based distributed services in High Availability over WAN

    NASA Astrophysics Data System (ADS)

    Bulfon, C.; Carlino, G.; De Salvo, A.; Doria, A.; Graziosi, C.; Pardi, S.; Sanchez, A.; Carboni, M.; Bolletta, P.; Puccio, L.; Capone, V.; Merola, L.

    2015-12-01

    In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Romal and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by the new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, is using a set of distributed Hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites with a single framework is the effect of the latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system on the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes of the network topology, thus creating a National Network of Cloud-based distributed services, in HA over WAN.

  5. Workload Characterization of a Leadership Class Storage Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Youngjae; Gunasekaran, Raghul; Shipman, Galen M

    2010-01-01

    Understanding workload characteristics is critical for optimizing and improving the performance of current systems and software, and for architecting new storage systems based on observed workload patterns. In this paper, we characterize the scientific workloads of the world's fastest HPC (High Performance Computing) storage cluster, Spider, at the Oak Ridge Leadership Computing Facility (OLCF). Spider provides an aggregate bandwidth of over 240 GB/s with over 10 petabytes of RAID 6 formatted capacity. OLCF's flagship petascale simulation platform, Jaguar, and other large HPC clusters, totaling more than 250,000 compute cores, depend on Spider for their I/O needs. We characterize the system utilization, the demands of reads and writes, idle time, and the distribution of read requests to write requests for the storage system observed over a period of 6 months. From this study we develop synthesized workloads, and we show that the read and write I/O bandwidth usage as well as the inter-arrival time of requests can be modeled as a Pareto distribution.
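
    The Pareto modeling step is easy to reproduce in outline: the snippet below draws a synthetic inter-arrival trace from a Pareto distribution and recovers the shape parameter by maximum likelihood. The shape and scale values are illustrative, not fitted Spider parameters.

      import numpy as np

      alpha, xm = 1.5, 0.01            # hypothetical shape and scale (seconds)
      rng = np.random.default_rng(0)
      # classical Pareto samples via numpy's Lomax generator: xm * (1 + Lomax)
      inter_arrivals = xm * (1.0 + rng.pareto(alpha, 100_000))

      # maximum-likelihood estimate of the shape parameter from the trace
      xm_hat = inter_arrivals.min()
      alpha_hat = len(inter_arrivals) / np.log(inter_arrivals / xm_hat).sum()
      print(f"fitted shape ~ {alpha_hat:.2f}")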

  6. Optimal placement and sizing of wind / solar based DG sources in distribution system

    NASA Astrophysics Data System (ADS)

    Guan, Wanlin; Guo, Niao; Yu, Chunlai; Chen, Xiaoguang; Yu, Haiyang; Liu, Zhipeng; Cui, Jiapeng

    2017-06-01

    Proper placement and sizing of Distributed Generation (DG) in a distribution system can obtain maximum potential benefits. This paper proposes a quantum-behaved particle swarm optimization (QPSO) based approach for placing and sizing wind turbine generation units (WTGU) and photovoltaic (PV) arrays to reduce real power loss and improve voltage stability in distribution systems. Performance models of the wind and solar generation systems are described and classified into PQ, PQ(V), and PI type models in power flow. Because WTGU and PV based DGs in a distribution system are geographically restricted, the optimal area and the DG capacity limits of each bus in the setting area must be set before optimization; an area optimization method is therefore proposed. The method has been tested on the IEEE 33-bus radial distribution system to demonstrate its performance and effectiveness.
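
    A minimal sketch of the QPSO position update (a local attractor plus a contraction-expansion term around the mean-best position) applied to a stand-in objective. A real study would evaluate candidate DG sizes through a power flow on the IEEE 33-bus feeder; all bounds, coefficients, and the loss function here are invented placeholders.

      import numpy as np

      def power_loss(sizes):
          # Placeholder objective; real use would run a power flow on the
          # IEEE 33-bus feeder and return total real power loss for DG sizes
          return float(np.sum((sizes - 0.4) ** 2))

      rng = np.random.default_rng(1)
      n, dim, iters = 20, 2, 100
      x = rng.uniform(0.0, 1.0, (n, dim))             # DG sizes in p.u., bounded
      pbest = x.copy()
      pbest_f = np.array([power_loss(row) for row in x])

      for it in range(iters):
          gbest = pbest[pbest_f.argmin()]
          mbest = pbest.mean(axis=0)                  # mean-best position
          beta = 1.0 - 0.5 * it / iters               # contraction-expansion factor
          phi = rng.uniform(size=(n, dim))
          attractor = phi * pbest + (1 - phi) * gbest
          u = rng.uniform(1e-12, 1.0, (n, dim))
          sign = np.where(rng.uniform(size=(n, dim)) < 0.5, -1.0, 1.0)
          x = np.clip(attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u),
                      0.0, 1.0)
          f = np.array([power_loss(row) for row in x])
          better = f < pbest_f
          pbest[better], pbest_f[better] = x[better], f[better]

      print("best DG sizing (p.u.):", pbest[pbest_f.argmin()])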

  7. Comparative Cooling Season Performance of Air Distribution Systems in Multistory Townhomes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Poerschke; Beach, R.; Beggs, T.

    2016-08-26

    IBACOS investigated the performance of a small-diameter high-velocity heat pump system compared to a conventional system in a new-construction triplex townhouse. A ductless heat pump system also was installed for comparison, but the homebuyer backed out because of aesthetic concerns about that system. In total, two buildings, having identical solar orientation and comprising six townhomes, were monitored for comfort and energy performance. Results show that the small-diameter system provides more uniform temperatures from floor to floor in the three-story townhome. No clear energy consumption benefit was observed from either system. The builder is continuing to explore the small-diameter system as its new standard system to provide better comfort and indoor air quality. The homebuilder also explored the possibility of shifting its townhome product to meet the U.S. Department of Energy Challenge Home National Program Requirements.

  8. Performance analysis of static locking in replicated distributed database systems

    NASA Technical Reports Server (NTRS)

    Kuang, Yinghong; Mukkamala, Ravi

    1991-01-01

    Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and to evaluate the performance of replicated distributed databases with both shared and exclusive locks.

  9. Job Management Requirements for NAS Parallel Systems and Clusters

    NASA Technical Reports Server (NTRS)

    Saphir, William; Tanner, Leigh Ann; Traversat, Bernard

    1995-01-01

    A job management system is a critical component of a production supercomputing environment, permitting oversubscribed resources to be shared fairly and efficiently. Job management systems that were originally designed for traditional vector supercomputers are not appropriate for the distributed-memory parallel supercomputers that are becoming increasingly important in the high performance computing industry. Newer job management systems offer new functionality but do not solve fundamental problems. We address some of the main issues in resource allocation and job scheduling we have encountered on two parallel computers - a 160-node IBM SP2 and a cluster of 20 high performance workstations located at the Numerical Aerodynamic Simulation facility. We describe the requirements for resource allocation and job management that are necessary to provide a production supercomputing environment on these machines, prioritizing according to difficulty and importance, and advocating a return to fundamental issues.

  10. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20 year old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high speed (400 fps) 16 mm cameras containing good resolution film data were recovered.

  11. Thinking Systemically: Steps for States to Improve Equity in the Distribution of Teachers-- An Action-Planning Workbook to Help Guide Regional Comprehensive Center and State Education Agency Conversation to Address the Inequitable Distribution of Teachers

    ERIC Educational Resources Information Center

    National Comprehensive Center for Teacher Quality, 2009

    2009-01-01

    The National Comprehensive Center for Teacher Quality (TQ Center) is a resource to which the regional comprehensive centers, states, and other education stakeholders turn for strengthening the quality of teaching--especially in high-poverty, low-performing, and hard-to-staff schools--and for finding guidance in addressing specific needs, thereby…

  12. Ultra-short FBG based distributed sensing using shifted optical Gaussian filters and microwave-network analysis.

    PubMed

    Cheng, Rui; Xia, Li; Sima, Chaotan; Ran, Yanli; Rohollahnejad, Jalal; Zhou, Jiaao; Wen, Yongqiang; Yu, Can

    2016-02-08

    Ultrashort fiber Bragg gratings (US-FBGs) have significant potential as weak grating sensors for distributed sensing, but their exploitation has been limited by their inherently broad spectra, which are undesirable for most traditional wavelength measurements. To address this, we recently introduced a new interrogation concept using shifted optical Gaussian filters (SOGF) that is well suited to US-FBG measurement. Here, we apply it to demonstrate, for the first time, a US-FBG-based self-referencing distributed optical sensing technique, with the advantages of adjustable sensitivity and range, high-speed and wide-range (potentially >14000 με) intensity-based detection, and resistance to disturbance by nonuniform parameter distribution. The entire system is essentially based on a microwave network, which incorporates the SOGF with a fiber delay line between the two arms. Differential detections of the cascaded US-FBGs are performed individually in the network time-domain response, which can be obtained by analyzing its complex frequency response. Experimental results are presented and discussed using eight cascaded US-FBGs. A comprehensive numerical analysis is also conducted to assess the system performance, which shows that the use of US-FBGs instead of conventional weak FBGs could significantly improve the power budget and capacity of the distributed sensing system while maintaining the crosstalk level and intensity decay rate, providing a promising route for future sensing applications.
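
    The core interrogation idea, recovering the network's time-domain response from its measured complex frequency response, can be sketched with an inverse FFT; the grating reflectivities and delays below are invented for illustration, and the real system adds the SOGF differential-detection step on top of this.

      import numpy as np

      # Hypothetical complex frequency response of the microwave network,
      # swept over 2 GHz; two gratings appear as two delayed reflections
      f = np.linspace(0.0, 2.0e9, 2001)                 # Hz
      tau1, tau2 = 50e-9, 62e-9                         # delays of two gratings (s)
      s21 = 0.8 * np.exp(-2j * np.pi * f * tau1) + 0.5 * np.exp(-2j * np.pi * f * tau2)

      # Time-domain response: each cascaded US-FBG shows up as a separate
      # peak, so the gratings can be interrogated individually
      h = np.fft.ifft(s21)
      t = np.arange(len(f)) / (f[-1] - f[0])            # time axis, ~0.5 ns bins
      peaks = np.sort(t[np.argsort(np.abs(h))[-2:]])
      print("recovered delays (ns):", peaks * 1e9)      # ~50 and ~62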

  13. High Performance, Mission Critical Applications for the War-fighter: Solutions to Network Challenges and Today’s Fluid Combat Environment

    DTIC Science & Technology

    2010-04-01

    [Only fragments of this DTIC record survive extraction. Recoverable details: the performing organization is GemStone Systems, 1260 NW Waterhouse Ave., Suite 200, Beaverton, OR 97006; the abstract excerpt states that such software, a "Data Fabric," is available today from GemStone Systems and can move data at varying rates as needed in massively distributed environments.]

  14. Perceived Annoyance to Noise Produced by a Distributed Electric Propulsion High Lift System

    NASA Technical Reports Server (NTRS)

    Palumbo, Dan; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem

    2016-01-01

    Results of a psychoacoustic test performed to understand the relative annoyance of noise produced by several configurations of a distributed electric propulsion high lift system are given. It is found that the number of propellers in the system is a major factor in annoyance perception. This is an intuitive result, as annoyance generally increases with frequency, and the blade passage frequency of the propellers increases with the number of propellers. Additionally, the data indicate that having some variation in the blade passage frequency from propeller to propeller is beneficial, as it reduces the high tonality generated when all the propellers spin in synchrony at the same speed. The propellers can be set to spin at different speeds, but it was found that allowing the motor controllers to drift within 1% of nominal settings produced the best results (lowest overall annoyance). The methodology employed has been demonstrated to be effective in providing timely feedback to designers in the early stages of design development.
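
    The frequency reasoning is simple arithmetic: a propeller's blade passage frequency (BPF) is its blade count times its rotation rate in revolutions per second. The values below are invented for illustration; the 1% spread mirrors the controller drift the study found beneficial.

      # Blade passage frequency of one propeller; values are illustrative only
      n_blades, rpm = 5, 4800
      bpf_hz = n_blades * rpm / 60.0               # 400.0 Hz
      # Letting each motor controller drift within 1% of nominal speed spreads
      # the tones slightly, reducing the tonality of perfectly synchronous props
      detuned = [round(bpf_hz * (1 + d), 1) for d in (-0.01, 0.0, 0.01)]
      print(bpf_hz, detuned)                       # 400.0 [396.0, 400.0, 404.0]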

  15. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operations and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and by relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics that are used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions are discussed. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case, where the targeted output power is 140 W, while the input pumping power is 400 W. The 95% output coupler reflectivity showed good agreement with the slope efficiency, which is approximately 35%; this assures the robustness of the proposed model to accurately predict the design parameters of practical, high-power DPSSLs.

  16. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE PAGES

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...

    2016-02-26

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  17. Model-centric distribution automation: Capacity, reliability, and efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onen, Ahmet; Jung, Jaesung; Dilek, Murat

    A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.

  18. Design and evaluation of cellular power converter architectures

    NASA Astrophysics Data System (ADS)

    Perreault, David John

    Power electronic technology plays an important role in many energy conversion and storage applications, including machine drives, power supplies, frequency changers and UPS systems. Increases in performance and reductions in cost have been achieved through the development of higher performance power semiconductor devices and integrated control devices with increased functionality. Manufacturing techniques, however, have changed little. High power is typically achieved by paralleling multiple die in a single package, producing the physical equivalent of a single large device. Consequently, both the device package and the converter in which the device is used continue to require large, complex mechanical structures, and relatively sophisticated heat transfer systems. An alternative to this approach is the use of a cellular power converter architecture, which is based upon the parallel connection of a large number of quasi-autonomous converters, called cells, each of which is designed for a fraction of the system rating. The cell rating is chosen such that single-die devices in inexpensive packages can be used, and the cell fabricated with an automated assembly process. The use of quasi-autonomous cells means that system performance is not compromised by the failure of a cell. This thesis explores the design of cellular converter architectures with the objective of achieving improvements in performance, reliability, and cost over conventional converter designs. New approaches are developed and experimentally verified for highly distributed control of cellular converters, including methods for ripple cancellation and current-sharing control. The performance of these techniques is quantified, and their dynamics are analyzed. Cell topologies suitable to the cellular architecture are investigated, and their use for systems in the 5-500 kVA range is explored. The design, construction, and experimental evaluation of a 6 kW cellular switched-mode rectifier is also addressed. This cellular system implements entirely distributed control, and achieves performance levels unattainable with an equivalent single converter. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)

  19. Real-Time Monitoring System for a Utility-Scale Photovoltaic Power Plant

    PubMed Central

    Moreno-Garcia, Isabel M.; Palacios-Garcia, Emilio J.; Pallares-Lopez, Victor; Santiago, Isabel; Gonzalez-Redondo, Miguel J.; Varo-Martinez, Marta; Real-Calvo, Rafael J.

    2016-01-01

    There is, at present, considerable interest in the storage and dispatchability of photovoltaic (PV) energy, together with the need to manage power flows in real-time. This paper presents a new system, PV-on time, which has been developed to supervise the operating mode of a Grid-Connected Utility-Scale PV Power Plant in order to ensure the reliability and continuity of its supply. This system presents an architecture of acquisition devices, including wireless sensors distributed around the plant, which measure the required information. It is also equipped with a high-precision protocol for synchronizing all data acquisition equipment, something that is necessary for correctly establishing relationships among events in the plant. Moreover, a system for monitoring and supervising all of the distributed devices, as well as for the real-time treatment of all the registered information, is presented. Performance was analyzed in a 400 kW transformation center belonging to a 6.1 MW Utility-Scale PV Power Plant. In addition to monitoring the performance of all of the PV plant's components and detecting any failures or deviations in production, this system enables users to control the power quality of the signal injected and the influence of the installation on the distribution grid. PMID:27240365

  20. Role of the ATLAS Grid Information System (AGIS) in Distributed Data Analysis and Simulation

    NASA Astrophysics Data System (ADS)

    Anisenkov, A. V.

    2018-03-01

    In modern high-energy physics experiments, particular attention is paid to the global integration of information and computing resources into a unified system for efficient storage and processing of experimental data. Annually, the ATLAS experiment performed at the Large Hadron Collider at the European Organization for Nuclear Research (CERN) produces tens of petabytes of raw data from the recording electronics and several petabytes of data from the simulation system. For processing and storage of such super-large volumes of data, the computing model of the ATLAS experiment is based on a heterogeneous, geographically distributed computing environment, which includes the Worldwide LHC Computing Grid (WLCG) infrastructure and is able to meet the requirements of the experiment for processing huge data sets and provide a high degree of their accessibility (hundreds of petabytes). The paper considers the ATLAS grid information system (AGIS) used by the ATLAS collaboration to describe the topology and resources of the computing infrastructure, to configure and connect the high-level software systems of computer centers, and to describe and store all possible parameters, control, configuration, and other auxiliary information required for the effective operation of the ATLAS distributed computing applications and services. The role of the AGIS system in developing a unified description of the computing resources provided by grid sites, supercomputer centers, and cloud computing into a consistent information model for the ATLAS experiment is outlined. This approach has allowed the collaboration to extend the computing capabilities of the WLCG project and integrate supercomputers and cloud computing platforms into the software components of the production and distributed analysis workload management system (PanDA, ATLAS).

  1. Ion mobility mass spectrometry of proteins in a modified commercial mass spectrometer

    NASA Astrophysics Data System (ADS)

    Thalassinos, K.; Slade, S. E.; Jennings, K. R.; Scrivens, J. H.; Giles, K.; Wildgoose, J.; Hoyes, J.; Bateman, R. H.; Bowers, M. T.

    2004-08-01

    Ion mobility has emerged as an important technique for determining biopolymer conformations in solvent-free environments. These experiments have been performed nearly exclusively on home-built systems. In this paper we describe modifications to a commercial high performance mass spectrometer, the Waters UK "Ultima" Q-Tof, which allow high-sensitivity measurement of peptide and protein cross sections. Arrival time distributions are obtained for a series of peptides (bradykinin, LHRH, substance P, bombesin) and proteins (bovine and equine cytochrome c, myoglobin, α-lactalbumin), with good agreement found with literature cross sections where available. In complex ATDs, mass spectra can be obtained for each feature, confirming assignments. The increased sensitivity of the commercial instrument is retained along with the convenience of the data system, crucial features for the analysis of protein misfolding systems.

  2. Pump RIN-induced impairments in unrepeatered transmission systems using distributed Raman amplifier.

    PubMed

    Cheng, Jingchi; Tang, Ming; Lau, Alan Pak Tao; Lu, Chao; Wang, Liang; Dong, Zhenhua; Bilal, Syed Muhammad; Fu, Songnian; Shum, Perry Ping; Liu, Deming

    2015-05-04

    High spectral efficiency modulation format based unrepeatered transmission systems using distributed Raman amplifier (DRA) have attracted much attention recently. To enhance the reach and optimize system performance, careful design of DRA is required based on the analysis of various types of impairments and their balance. In this paper, we study various pump RIN induced distortions on high spectral efficiency modulation formats. The vector theory of both 1st and higher-order stimulated Raman scattering (SRS) effect using Jones-matrix formalism is presented. The pump RIN will induce three types of distortion on high spectral efficiency signals: intensity noise stemming from SRS, phase noise stemming from cross phase modulation (XPM), and polarization crosstalk stemming from cross polarization modulation (XPolM). An analytical model for the statistical property of relative phase noise (RPN) in higher order DRA without dealing with complex vector theory is derived. The impact of pump RIN induced impairments are analyzed in polarization-multiplexed (PM)-QPSK and PM-16QAM-based unrepeatered systems simulations using 1st, 2nd and 3rd-order forward pumped Raman amplifier. It is shown that at realistic RIN levels, negligible impairments will be induced to PM-QPSK signals in 1st and 2nd order DRA, while non-negligible impairments will occur in 3rd order case. PM-16QAM signals suffer more penalties compared to PM-QPSK with the same on-off gain where both 2nd and 3rd order DRA will cause non-negligible performance degradations. We also investigate the performance of digital signal processing (DSP) algorithms to mitigate such impairments.

  3. Software/hardware distributed processing network supporting the Ada environment

    NASA Astrophysics Data System (ADS)

    Wood, Richard J.; Pryk, Zen

    1993-09-01

    A high-performance, fault-tolerant, distributed network has been developed, tested, and demonstrated. The network is based on the MIPS Computer Systems, Inc. R3000 Risc for processing, VHSIC ASICs for high speed, reliable, inter-node communications and compatible commercial memory and I/O boards. The network is an evolution of the Advanced Onboard Signal Processor (AOSP) architecture. It supports Ada application software with an Ada- implemented operating system. A six-node implementation (capable of expansion up to 256 nodes) of the RISC multiprocessor architecture provides 120 MIPS of scalar throughput, 96 Mbytes of RAM and 24 Mbytes of non-volatile memory. The network provides for all ground processing applications, has merit for space-qualified RISC-based network, and interfaces to advanced Computer Aided Software Engineering (CASE) tools for application software development.

  4. Development of a high precision dosimetry system for the measurement of surface dose rate distribution for eye applicators.

    PubMed

    Eichmann, Marion; Flühs, Dirk; Spaan, Bernhard

    2009-10-01

    The therapeutic outcome of the therapy with ophthalmic applicators is highly dependent on the application of a sufficient dose to the tumor, whereas the dose applied to the surrounding tissue needs to be minimized. The goal for the newly developed apparatus described in this work is the determination of the individual applicator surface dose rate distribution with a high spatial resolution and a high precision in dose rate with respect to time and budget constraints especially important for clinical procedures. Inhomogeneities of the dose rate distribution can be detected and taken into consideration for the treatment planning. In order to achieve this, a dose rate profile as well as a surface profile of the applicator are measured and correlated with each other. An instrumental setup has been developed consisting of a plastic scintillator detector system and a newly designed apparatus for guiding the detector across the applicator surface at a constant small distance. It performs an angular movement of detector and applicator with high precision. The measurements of surface dose rate distributions discussed in this work demonstrate the successful operation of the measuring setup. Measuring the surface dose rate distribution with a small distance between applicator and detector and with a high density of measuring points results in a complete and gapless coverage of the applicator surface, being capable of distinguishing small sized spots with high activities. The dosimetrical accuracy of the measurements and its analysis is sufficient (uncertainty in the dose rate in terms of absorbed dose to water is <7%), especially when taking the surgical techniques in positioning of the applicator on the eyeball into account. The method developed so far allows a fully automated quality assurance of eye applicators even under clinical conditions. These measurements provide the basis for future calculation of a full 3D dose rate distribution, which then can be used as input for a refined clinical treatment planning system. The improved dose rate measurements will facilitate a clinical study, which could correlate the therapeutic outcome of a brachytherapy treatment with an applicator and its individual dose rate distribution.

  5. Development of a high precision dosimetry system for the measurement of surface dose rate distribution for eye applicators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eichmann, Marion; Fluehs, Dirk; Spaan, Bernhard

    2009-10-15

    Purpose: The therapeutic outcome of the therapy with ophthalmic applicators is highly dependent on the application of a sufficient dose to the tumor, whereas the dose applied to the surrounding tissue needs to be minimized. The goal for the newly developed apparatus described in this work is the determination of the individual applicator surface dose rate distribution with a high spatial resolution and a high precision in dose rate with respect to time and budget constraints especially important for clinical procedures. Inhomogeneities of the dose rate distribution can be detected and taken into consideration for the treatment planning. Methods: In order to achieve this, a dose rate profile as well as a surface profile of the applicator are measured and correlated with each other. An instrumental setup has been developed consisting of a plastic scintillator detector system and a newly designed apparatus for guiding the detector across the applicator surface at a constant small distance. It performs an angular movement of detector and applicator with high precision. Results: The measurements of surface dose rate distributions discussed in this work demonstrate the successful operation of the measuring setup. Measuring the surface dose rate distribution with a small distance between applicator and detector and with a high density of measuring points results in a complete and gapless coverage of the applicator surface, being capable of distinguishing small sized spots with high activities. The dosimetrical accuracy of the measurements and its analysis is sufficient (uncertainty in the dose rate in terms of absorbed dose to water is <7%), especially when taking the surgical techniques in positioning of the applicator on the eyeball into account. Conclusions: The method developed so far allows a fully automated quality assurance of eye applicators even under clinical conditions. These measurements provide the basis for future calculation of a full 3D dose rate distribution, which then can be used as input for a refined clinical treatment planning system. The improved dose rate measurements will facilitate a clinical study, which could correlate the therapeutic outcome of a brachytherapy treatment with an applicator and its individual dose rate distribution.

  6. The Design of a High Performance Earth Imagery and Raster Data Management and Processing Platform

    NASA Astrophysics Data System (ADS)

    Xie, Qingyun

    2016-06-01

    This paper summarizes the general requirements and specific characteristics of both geospatial raster database management system and raster data processing platform from a domain-specific perspective as well as from a computing point of view. It also discusses the need of tight integration between the database system and the processing system. These requirements resulted in Oracle Spatial GeoRaster, a global scale and high performance earth imagery and raster data management and processing platform. The rationale, design, implementation, and benefits of Oracle Spatial GeoRaster are described. Basically, as a database management system, GeoRaster defines an integrated raster data model, supports image compression, data manipulation, general and spatial indices, content and context based queries and updates, versioning, concurrency, security, replication, standby, backup and recovery, multitenancy, and ETL. It provides high scalability using computer and storage clustering. As a raster data processing platform, GeoRaster provides basic operations, image processing, raster analytics, and data distribution featuring high performance computing (HPC). Specifically, HPC features include locality computing, concurrent processing, parallel processing, and in-memory computing. In addition, the APIs and the plug-in architecture are discussed.

  7. 48 CFR 237.7002 - Area of performance and distribution of contracts.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Area of performance and distribution of contracts. 237.7002 Section 237.7002 Federal Acquisition Regulations System DEFENSE ACQUISITION REGULATIONS SYSTEM, DEPARTMENT OF DEFENSE SPECIAL CATEGORIES OF CONTRACTING SERVICE CONTRACTING Mortuary...

  8. Performance analysis of a microfluidic mixer based on high gradient magnetic separation principles

    NASA Astrophysics Data System (ADS)

    Liu, Mengyu; Han, Xiaotao; Cao, Quanliang; Li, Liang

    2017-09-01

    To achieve rapid mixing between a water-based ferrofluid and DI water in a microfluidic environment, a magnetically actuated mixing system based on high gradient magnetic separation principles is proposed in this work. The microfluidic system consists of a T-shaped microchannel and an array of integrated soft-magnetic elements at the sidewall of the channel. With the aid of an external magnetic bias field, these elements are magnetized to produce a magnetic volume force acting on the fluids containing magnetic nanoparticles, and then to induce additional flows for improving the mixing performance. The mixing process is numerically investigated through analyzing the concentration distribution of magnetic nanoparticles using a coupled particle-fluid transport model, and mixing performances under different parametrical conditions are investigated in detail. Numerical results show that a high mixing efficiency around 97.5% can be achieved within 2 s under an inlet flow rate of 1 mm/s and a relatively low magnetic bias field of 50 mT. Meanwhile, it has been found that there is an optimum number of magnetic elements used for obtaining the best mixing performance. These results show the potential of the proposed mixing method in lab-on-a-chip systems and could be helpful in designing and optimizing system performance.
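
    Mixing efficiency figures like the 97.5% reported here are conventionally computed from the concentration field as one minus a normalized standard deviation at the outlet; the sketch below shows one common form of that index with invented sample profiles (the paper's exact definition may differ).

      import numpy as np

      def mixing_efficiency(c, c_inf=0.5):
          # c: sampled concentration values across the outlet, in [0, 1];
          # 1 - (std dev / max std dev) is a common mixing index
          sigma = np.sqrt(np.mean((c - c_inf) ** 2))
          sigma_max = np.sqrt(c_inf * (1 - c_inf))   # fully segregated streams
          return 1.0 - sigma / sigma_max

      print(mixing_efficiency(np.array([0.0, 0.0, 1.0, 1.0])))   # segregated -> 0.0
      print(mixing_efficiency(np.full(4, 0.51)))                 # nearly mixed -> ~0.98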

  9. High-density fiber-optic DNA random microsphere array.

    PubMed

    Ferguson, J A; Steemers, F J; Walt, D R

    2000-11-15

    A high-density fiber-optic DNA microarray sensor was developed to monitor multiple DNA sequences in parallel. Microarrays were prepared by randomly distributing DNA probe-functionalized 3.1-microm-diameter microspheres in an array of wells etched in a 500-microm-diameter optical imaging fiber. Registration of the microspheres was performed using an optical encoding scheme and a custom-built imaging system. Hybridization was visualized using fluorescent-labeled DNA targets with a detection limit of 10 fM. Hybridization times of seconds are required for nanomolar target concentrations, and analysis is performed in minutes.

  10. A practical three-dimensional dosimetry system for radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo Pengyi; Adamovics, John; Oldham, Mark

    2006-10-15

    There is a pressing need for a practical three-dimensional (3D) dosimetry system, convenient for clinical use, and with the accuracy and resolution to enable comprehensive verification of the complex dose distributions typical of modern radiation therapy. Here we introduce a dosimetry system that can achieve this challenge, consisting of a radiochromic dosimeter (PRESAGE™) and a commercial optical computed tomography (CT) scanning system (OCTOPUS™). PRESAGE™ is a transparent material with compelling properties for dosimetry, including insensitivity of the dose response to atmospheric exposure, a solid texture negating the need for an external container (reducing edge effects), and amenability to accurate optical CT scanning due to radiochromic optical contrast as opposed to light-scattering contrast. An evaluation of the performance and viability of the PRESAGE™/OCTOPUS combination for routine clinical 3D dosimetry is presented. The performance of the two components (scanner and dosimeter) was investigated separately prior to the full system test. The optical CT scanner has a spatial resolution of ≤1 mm, geometric accuracy within 1 mm, and high reconstruction linearity (with an R² value of 0.9979 and a standard error of estimation of ≈1%) relative to independent measurement. The overall performance of the PRESAGE™/OCTOPUS system was evaluated with respect to a simple known 3D dose distribution, by comparison with GAFCHROMIC® EBT film and the calculated dose from a commissioned planning system. The 'measured' dose distribution in a cylindrical PRESAGE™ dosimeter (16 cm diameter and 11 cm height) was determined by optical CT, using a filtered backprojection reconstruction algorithm. A three-way Gamma map comparison (4% dose difference and 4 mm distance to agreement), between the PRESAGE™, EBT and calculated dose distributions, showed full agreement in the measurable region of the PRESAGE™ dosimeter (≈90% of radius). The EBT and PRESAGE™ distributions agreed more closely with each other than with the calculated plan, consistent with penumbral blurring in the planning data, which was acquired with an ion chamber. In summary, our results support the conclusion that the PRESAGE™/optical-CT combination represents a significant step forward in 3D dosimetry, and provides a robust, clinically effective and viable high-resolution relative 3D dosimetry system for radiation therapy.

  11. A Rich Metadata Filesystem for Scientific Data

    ERIC Educational Resources Information Center

    Bui, Hoang

    2012-01-01

    As scientific research becomes more data intensive, there is an increasing need for scalable, reliable, and high performance storage systems. Such data repositories must provide both data archival services and rich metadata, and cleanly integrate with large scale computing resources. ROARS is a hybrid approach to distributed storage that provides…

  12. Human resource management in post-conflict health systems: review of research and knowledge gaps.

    PubMed

    Roome, Edward; Raven, Joanna; Martineau, Tim

    2014-01-01

    In post-conflict settings, severe disruption to health systems invariably leaves populations at high risk of disease and in greater need of health provision than more stable resource-poor countries. The health workforce is often a direct victim of conflict. Effective human resource management (HRM) strategies and policies are critical to addressing the systemic effects of conflict on the health workforce, such as flight of human capital, mismatches between skills and service needs, breakdown of pre-service training, and lack of human resource data. This paper reviews the published literature across three functional areas of HRM in post-conflict settings: workforce supply, workforce distribution, and workforce performance. We searched the published literature for articles written in English between 2003 and 2013. The search used context-specific keywords (e.g. post-conflict, reconstruction) in combination with topic-related keywords based on an analytical framework containing the three functional areas of HRM (supply, distribution, and performance) and several corresponding HRM topic areas under these. In addition, the framework includes a number of cross-cutting topics such as leadership and governance, finance, and gender. The literature is growing but still limited. Many publications have focused on health workforce supply issues, including pre-service education and training, pay, and recruitment. Less is known about workforce distribution, especially governance and administrative systems for deployment and incentive policies to redress geographical workforce imbalances. Apart from in-service training, workforce performance is particularly under-researched in the areas of performance-based incentives, management and supervision, work organisation and job design, and performance appraisal. Research is largely on HRM in the early post-conflict period and has relied on secondary data. More primary research is needed across the areas of workforce supply, workforce distribution, and workforce performance. However, this should apply a longer-term focus throughout the different post-conflict phases, while paying attention to key cross-cutting themes such as leadership and governance, gender equity, and task shifting. The research gaps identified should enable future studies to examine how HRM could be used to meet both short- and long-term objectives for rebuilding health workforces and thereby contribute to achieving more equitable and sustainable health system outcomes after conflict.

  13. Acousto-optic Imaging System for In-situ Measurement of the High Temperature Distribution in Micron-size Specimens

    NASA Astrophysics Data System (ADS)

    Machikhin, Alexander S.; Zinin, Pavel V.; Shurygin, Alexander V.

    We developed a unique acousto-optic imaging system for in-situ measurement of high temperature distribution on micron-size specimens. The system was designed to measure temperature distribution inside minerals and functional material phases subjected to high pressure and high temperatures in a diamond anvil cell (DAC) heated by a high powered laser.

  14. Overview of ATLAS PanDA Workload Management

    NASA Astrophysics Data System (ADS)

    Maeno, T.; De, K.; Wenaus, T.; Nilsson, P.; Stewart, G. A.; Walker, R.; Stradling, A.; Caballero, J.; Potekhin, M.; Smith, D.; ATLAS Collaboration

    2011-12-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early datataking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  15. Overview of ATLAS PanDA Workload Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maeno T.; De K.; Wenaus T.

    2011-01-01

    The Production and Distributed Analysis System (PanDA) plays a key role in the ATLAS distributed computing infrastructure. All ATLAS Monte-Carlo simulation and data reprocessing jobs pass through the PanDA system. We will describe how PanDA manages job execution on the grid using dynamic resource estimation and data replication together with intelligent brokerage in order to meet the scaling and automation requirements of ATLAS distributed computing. PanDA is also the primary ATLAS system for processing user and group analysis jobs, bringing further requirements for quick, flexible adaptation to the rapidly evolving analysis use cases of the early datataking phase, in addition to the high reliability, robustness and usability needed to provide efficient and transparent utilization of the grid for analysis users. We will describe how PanDA meets ATLAS requirements, the evolution of the system in light of operational experience, how the system has performed during the first LHC data-taking phase and plans for the future.

  16. Examining System-Wide Impacts of Solar PV Control Systems with a Power Hardware-in-the-Loop Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Fuller, Jason C.; Schneider, Kevin P.

    2014-06-08

    High penetration levels of distributed solar PV power generation can lead to adverse power quality impacts, such as excessive voltage rise, voltage flicker, and reactive power values that result in unacceptable voltage levels. Advanced inverter control schemes have been developed that have the potential to mitigate many power quality concerns. However, local closed-loop control may lead to unintended behavior in deployed systems as complex interactions can occur between numerous operating devices. To enable the study of the performance of advanced control schemes in a detailed distribution system environment, a test platform has been developed that integrates Power Hardware-in-the-Loop (PHIL) with concurrent time-series electric distribution system simulation. In the test platform, GridLAB-D, a distribution system simulation tool, runs a detailed simulation of a distribution feeder in real-time mode at the Pacific Northwest National Laboratory (PNNL) and supplies power system parameters at a point of common coupling. At the National Renewable Energy Laboratory (NREL), a hardware inverter interacts with grid and PV simulators emulating an operational distribution system. Power output from the inverters is measured and sent to PNNL to update the real-time distribution system simulation. The platform is described and initial test cases are presented. The platform is used to study the system-wide impacts and the interactions of inverter control modes (constant power factor and active Volt/VAr control) when integrated into a simulated IEEE 8500-node test feeder. We demonstrate that this platform is well-suited to the study of advanced inverter controls and their impacts on the power quality of a distribution feeder. Additionally, results are used to validate GridLAB-D simulations of advanced inverter controls.
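
    One of the two control modes studied, active Volt/VAr control, is commonly implemented as a piecewise-linear droop on locally measured voltage. The curve below is a generic sketch with illustrative deadband endpoints and reactive limits, not the settings used on the IEEE 8500-node feeder.

      def volt_var_setpoint(v_pu, v_low=0.95, v_high=1.05, q_max=0.44):
          # Piecewise-linear Volt/VAr droop: inject reactive power (positive q)
          # when voltage sags, absorb (negative q) when it rises; parameters
          # are illustrative, in per-unit of inverter rating
          if v_pu <= v_low:
              return q_max
          if v_pu >= v_high:
              return -q_max
          # linear ramp crossing zero at nominal voltage
          return -q_max * (2.0 * (v_pu - v_low) / (v_high - v_low) - 1.0)

      for v in (0.93, 0.98, 1.00, 1.02, 1.07):
          print(v, round(volt_var_setpoint(v), 3))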

  17. 77 FR 26583 - Notice Pursuant to the National Cooperative Research and Production Act of 1993-Cooperative...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-04

    ... Production Act of 1993--Cooperative Research Group on Evaluation of Distributed Leak Detection Systems... Institute-- Cooperative Research Group on Evaluation of Distributed Leak Detection Systems--Performance... detection systems for offshore pipelines. Laboratory testing of distributed temperature and distributed...

  18. Design of Distributed Engine Control Systems with Uncertain Delay.

    PubMed

    Liu, Xiaofeng; Li, Yanxi; Sun, Xu

    Future gas turbine engine control systems will be based on a distributed architecture, in which the sensors and actuators are connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine distributed controllers are designed based on the NCCS framework with H∞ output feedback under network-induced time delays and uncertain disturbances. The sufficient conditions for robust stability are derived via Lyapunov stability theory and a linear matrix inequality approach. Both numerical and hardware-in-the-loop simulations illustrate the effectiveness of the presented method.
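
    The paper's robust-stability conditions come from Lyapunov theory and LMIs, which handle the uncertain-delay case. As a simpler companion check for a fixed delay, the sketch below augments a discrete-time plant with its delayed inputs and tests the spectral radius of the closed loop; the plant, gain, and delay are invented for illustration.

      import numpy as np

      # Plant x[k+1] = A x[k] + B u[k-d] with a fixed network delay d,
      # checked by augmenting the state with the d stored inputs
      A = np.array([[0.9, 0.1], [0.0, 0.8]])
      B = np.array([[0.0], [0.1]])
      K = np.array([[1.2, 0.8]])       # illustrative state-feedback gain
      d = 2                            # delay in samples

      n, m = A.shape[0], B.shape[1]
      # augmented state z = [x; u[k-d]; ...; u[k-1]] (oldest input first)
      Az = np.zeros((n + d * m, n + d * m))
      Az[:n, :n] = A
      Az[:n, n:n + m] = B              # the oldest stored input drives the plant
      for i in range(d - 1):           # shift register: each slot takes the next newer input
          Az[n + i * m:n + (i + 1) * m, n + (i + 1) * m:n + (i + 2) * m] = np.eye(m)
      Az[n + (d - 1) * m:, :n] = -K    # newest slot receives u[k] = -K x[k]

      rho = np.max(np.abs(np.linalg.eigvals(Az)))
      print("spectral radius:", round(rho, 3), "stable:", rho < 1.0)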

  19. Design of Distributed Engine Control Systems with Uncertain Delay

    PubMed Central

    Li, Yanxi; Sun, Xu

    2016-01-01

    Future gas turbine engine control systems will be based on distributed architecture, in which, the sensors and actuators will be connected to the controllers via a communication network. The performance of the distributed engine control (DEC) is dependent on the network performance. This study introduces a distributed control system architecture based on a networked cascade control system (NCCS). Typical turboshaft engine-distributed controllers are designed based on the NCCS framework with a H∞ output feedback under network-induced time delays and uncertain disturbances. The sufficient conditions for robust stability are derived via the Lyapunov stability theory and linear matrix inequality approach. Both numerical and hardware-in-loop simulations illustrate the effectiveness of the presented method. PMID:27669005

  20. Program Predicts Nonlinear Inverter Performance

    NASA Technical Reports Server (NTRS)

    Al-Ayoubi, R. R.; Oepomo, T. S.

    1985-01-01

    Program developed for ac power distribution system on Shuttle orbiter predicts total load on inverters and node voltages at each of line replaceable units (LRU's). Mathematical model simulates inverter performance at each change of state in power distribution system.

  1. Fault tolerant features and experiments of ANTS distributed real-time system

    NASA Astrophysics Data System (ADS)

    Dominic-Savio, Patrick; Lo, Jien-Chung; Tufts, Donald W.

    1995-01-01

    The ANTS project at the University of Rhode Island introduces the concept of Active Nodal Task Seeking (ANTS) as a way to efficiently design and implement dependable, high-performance, distributed computing. This paper presents the fault tolerant design features that have been incorporated in the ANTS experimental system implementation. The results of performance evaluations and fault injection experiments are reported. The fault-tolerant version of ANTS categorizes all computing nodes into three groups. They are: the up-and-running green group, the self-diagnosing yellow group and the failed red group. Each available computing node will be placed in the yellow group periodically for a routine diagnosis. In addition, for long-life missions, ANTS uses a monitoring scheme to identify faulty computing nodes. In this monitoring scheme, the communication pattern of each computing node is monitored by two other nodes.
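
    The monitoring scheme in which each node is watched by two other nodes can be sketched as a heartbeat table; the topology, timeout, and green/yellow classification below are a conceptual sketch under assumed values, not the ANTS implementation.

      import time

      HEARTBEAT_TIMEOUT = 3.0          # seconds; illustrative value

      nodes = ["n0", "n1", "n2", "n3"]
      # node i is watched by nodes (i+1) and (i+2) mod N, so a single
      # faulty monitor cannot silently miss a failure
      monitors = {n: [nodes[(i + 1) % len(nodes)], nodes[(i + 2) % len(nodes)]]
                  for i, n in enumerate(nodes)}
      last_seen = {n: time.time() for n in nodes}

      def record_heartbeat(node):
          last_seen[node] = time.time()

      def classify(node, now=None):
          if now is None:
              now = time.time()
          if now - last_seen[node] > HEARTBEAT_TIMEOUT:
              return "yellow"          # sent for self-diagnosis; "red" if it fails
          return "green"

      record_heartbeat("n1")
      print(monitors["n0"], classify("n0"))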

  2. High-Throughput Computing on High-Performance Platforms: A Case Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleynik, D; Panitkin, S; Matteo, Turilli

    The computing systems used by LHC experiments has historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size resource. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility in conjunction with traditional distributed high- throughput computing to reach sustained production scales of approximately 52M core-hours a years. The three main contributions of this paper are: (i)more » a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.« less

  3. Development of a hemispherical rotational modulation collimator system for imaging spatial distribution of radiation sources

    NASA Astrophysics Data System (ADS)

    Na, M.; Lee, S.; Kim, G.; Kim, H. S.; Rho, J.; Ok, J. G.

    2017-12-01

    Detecting and mapping the spatial distribution of radioactive materials is of great importance for environmental and security issues. We design and present a novel hemispherical rotational modulation collimator (H-RMC) system which can visualize the location of the radiation source by collecting signals from incident rays that go through collimator masks. The H-RMC system comprises a servo motor-controlled rotating module and a hollow heavy-metallic hemisphere with slits/slats equally spaced with the same angle subtended from the main axis. In addition, we also designed an auxiliary instrument to test the imaging performance of the H-RMC system, comprising a high-precision x- and y-axis staging station on which one can mount radiation sources of various shapes. We fabricated the H-RMC system which can be operated in a fully-automated fashion through the computer-based controller, and verify the accuracy and reproducibility of the system by measuring the rotational and linear positions with respect to the programmed values. Our H-RMC system may provide a pivotal tool for spatial radiation imaging with high reliability and accuracy.

  4. The effect of entrapped nonaqueous phase liquids on tracer transport in heterogeneous porous media: Laboratory experiments at the intermediate scale

    USGS Publications Warehouse

    Barth, Gilbert R.; Illangasekare, T.H.; Rajaram, H.

    2003-01-01

    This work considers the applicability of conservative tracers for detecting high-saturation nonaqueous-phase liquid (NAPL) entrapment in heterogeneous systems. For this purpose, a series of experiments and simulations was performed using a two-dimensional heterogeneous system (10 × 1.2 m), which represents an intermediate scale between laboratory and field scales. Tracer tests performed prior to injecting the NAPL provide the baseline response of the heterogeneous porous medium. Two NAPL spill experiments were performed and the entrapped-NAPL saturation distribution measured in detail using a gamma-ray attenuation system. Tracer tests following each of the NAPL spills produced breakthrough curves (BTCs) reflecting the impact of entrapped NAPL on conservative transport. To evaluate significance, the impact of NAPL entrapment on the conservative-tracer breakthrough curves was compared to simulated breakthrough curve variability for different realizations of the heterogeneous distribution. Analysis of the results reveals that the NAPL entrapment has a significant impact on the temporal moments of conservative-tracer breakthrough curves. © 2003 Elsevier B.V. All rights reserved.

  5. Simulation study of a high performance brain PET system with dodecahedral geometry.

    PubMed

    Tao, Weijie; Chen, Gaoyu; Weng, Fenghua; Zan, Yunlong; Zhao, Zhixiang; Peng, Qiyu; Xu, Jianfeng; Huang, Qiu

    2018-05-25

    In brain imaging, a spherical PET system achieves the highest sensitivity as far as solid-angle coverage is concerned; however, such a system is not practical to build. In this work we designed an alternative sphere-like scanner, the dodecahedral scanner, which combines high imaging sensitivity with high feasibility of manufacture. We simulated this system and compared its performance with a few other dedicated brain PET systems. Monte Carlo simulations were conducted to generate data for the dedicated brain PET system with the dodecahedral geometry (11 regular pentagon detectors). The data were then reconstructed using in-house developed software with a fully three-dimensional maximum-likelihood expectation maximization (3D-MLEM) algorithm. Results show that the proposed system has a high sensitivity distribution over the whole field of view (FOV). With a depth-of-interaction (DOI) resolution of around 6.67 mm, the proposed system achieves a spatial resolution of 1.98 mm. Our simulation study also shows that the proposed system improves image contrast and reduces noise compared with a few other dedicated brain PET systems. Finally, simulations with the Hoffman phantom show the potential of the proposed system in clinical applications. In conclusion, the proposed dodecahedral PET system has potential for widespread application in high-sensitivity, high-resolution PET imaging, allowing the injected dose to be lowered. This article is protected by copyright. All rights reserved.

  6. Probabilistic performance assessment of complex energy process systems - The case of a self-sustained sanitation system.

    PubMed

    Kolios, Athanasios; Jiang, Ying; Somorin, Tosin; Sowale, Ayodeji; Anastasopoulou, Aikaterini; Anthony, Edward J; Fidalgo, Beatriz; Parker, Alison; McAdam, Ewan; Williams, Leon; Collins, Matt; Tyrrel, Sean

    2018-05-01

    A probabilistic modelling approach was developed and applied to investigate the energy and environmental performance of an innovative sanitation system, the "Nano-membrane Toilet" (NMT). The system treats human excreta via an advanced energy and water recovery island with the aim of addressing current and future sanitation demands. Due to the complex design and inherent characteristics of the system's input material, there are a number of stochastic variables which may significantly affect the system's performance. The non-intrusive probabilistic approach adopted in this study combines a finite number of deterministic thermodynamic process simulations with an artificial neural network (ANN) approximation model and Monte Carlo simulations (MCS) to assess the effect of system uncertainties on the predicted performance of the NMT system. The joint probability distributions of the process performance indicators suggest a Stirling engine (SE) power output in the range of 61.5-73 W at a 95% confidence interval (CI). In addition, there is a high probability (at 95% CI) that the NMT system can achieve a positive net power output between 15.8 and 35 W. A sensitivity study reveals that the system's power performance is most affected by the SE heater temperature. Investigation into the environmental performance of the NMT design, including water recovery and CO2/NOx emissions, suggests significant environmental benefits compared to conventional systems. Results of the probabilistic analysis can better inform future improvements to the system design and operational strategy, and this probabilistic assessment framework can also be applied to similar complex engineering systems.
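
    The non-intrusive workflow the abstract describes, a trained surrogate standing in for expensive process simulations and Monte Carlo sampling propagating input uncertainty, can be sketched as below. The surrogate function and the input distributions are invented placeholders, not the NMT model or its ANN.

        import random
        import statistics

        def surrogate_power(heater_temp_C, feed_rate_kg_h):
            # Stand-in for the trained ANN: maps uncertain inputs to SE power (W).
            return 0.12 * heater_temp_C + 8.0 * feed_rate_kg_h - 20.0

        samples = []
        for _ in range(100_000):
            heater_temp = random.gauss(550.0, 25.0)   # assumed uncertain heater temperature
            feed_rate = random.gauss(1.5, 0.2)        # assumed uncertain feed rate
            samples.append(surrogate_power(heater_temp, feed_rate))

        samples.sort()
        mean = statistics.fmean(samples)
        lo, hi = samples[int(0.025 * len(samples))], samples[int(0.975 * len(samples))]
        print(f"mean = {mean:.1f} W, 95% interval = [{lo:.1f}, {hi:.1f}] W")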

  7. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Developing methods for distributed computing has recently received much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats originating from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing-network nodes, with ordinary networked PCs serving as the computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks. The agents deployed on a computer network can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the machines on the network. The number of computers connected can be increased by joining new machines to the system, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system also detects cases of falsification of results in the distributed system, which could otherwise lead to wrong decisions, and it checks and corrects erroneous results.
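
    One plausible mechanism for the result-falsification check mentioned above is redundant execution with arbitration, sketched here in Python. The node model, the dishonest node, and the voting rule are assumptions for illustration, not the authors' algorithm.

        import random

        def run_on_node(node_id, task):
            # Pretend to execute a task on a remote node; node 2 is dishonest here.
            result = sum(task)
            if node_id == 2:
                result += random.randint(1, 10)   # falsified result
            return result

        def checked_execution(task, nodes):
            # Run the task on two nodes; on disagreement, arbitrate with a third.
            first, second = random.sample(nodes, 2)
            r1, r2 = run_on_node(first, task), run_on_node(second, task)
            if r1 == r2:
                return r1
            arbiter = random.choice([n for n in nodes if n not in (first, second)])
            r3 = run_on_node(arbiter, task)
            return r3 if r3 in (r1, r2) else None  # majority wins; otherwise reject

        print(checked_execution(list(range(100)), nodes=[0, 1, 2, 3]))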

  8. Network issues for large mass storage requirements

    NASA Technical Reports Server (NTRS)

    Perdue, James

    1992-01-01

    File servers and supercomputing environments need high-performance networks to balance the I/O requirements seen in today's demanding computing scenarios. UltraNet is one solution which permits both high aggregate transfer rates and high task-to-task transfer rates, as demonstrated in actual tests. UltraNet provides this capability as both a server-to-server and server-to-client access network, giving the supercomputing center the following advantages: highest-performance transport-level connections (up to 40 MBytes/sec effective rates); throughput that matches the emerging high-performance disk technologies, such as RAID, parallel head transfer devices, and software striping; support for standard network and file system applications using SOCKET-based application program interfaces such as FTP, rcp, rdump, etc.; access to the Network File System (NFS) and large aggregate bandwidth for heavy NFS usage; access to a distributed, hierarchical data server capability using the DISCOS UniTree product; and support for file server solutions available from multiple vendors, including Cray, Convex, Alliant, FPS, IBM, and others.

  9. FPGA-Based High-Performance Embedded Systems for Adaptive Edge Computing in Cyber-Physical Systems: The ARTICo³ Framework.

    PubMed

    Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo

    2018-06-08

    Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while keeping energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fail in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.

  10. Distributed analysis in ATLAS

    NASA Astrophysics Data System (ADS)

    Dewhurst, A.; Legger, F.

    2015-12-01

    The ATLAS experiment accumulated more than 140 PB of data during the first run of the Large Hadron Collider (LHC) at CERN. The analysis of such an amount of data is a challenging task for the distributed physics community. The Distributed Analysis (DA) system of the ATLAS experiment is an established and stable component of the ATLAS distributed computing operations. About half a million user jobs are running daily on DA resources, submitted by more than 1500 ATLAS physicists. The reliability of the DA system during the first run of the LHC and the following shutdown period has been high thanks to the continuous automatic validation of the distributed analysis sites and the user support provided by a dedicated team of expert shifters. During the LHC shutdown, the ATLAS computing model has undergone several changes to improve the analysis workflows, including the re-design of the production system, a new analysis data format and event model, and the development of common reduction and analysis frameworks. We report on the impact such changes have on the DA infrastructure, describe the new DA components, and include recent performance measurements.

  11. Automation in the Space Station module power management and distribution Breadboard

    NASA Technical Reports Server (NTRS)

    Walls, Bryan; Lollar, Louis F.

    1990-01-01

    The Space Station Module Power Management and Distribution (SSM/PMAD) Breadboard, located at NASA's Marshall Space Flight Center (MSFC) in Huntsville, Alabama, models the power distribution within a Space Station Freedom Habitation or Laboratory module. Originally designed for 20 kHz ac power, the system is now being converted to high voltage dc power with power levels on a par with those expected for a space station module. In addition to the power distribution hardware, the system includes computer control through a hierarchy of processes. The lowest level process consists of fast, simple (from a computing standpoint) switchgear, capable of quickly safing the system. The next level consists of local load center processors called Lowest Level Processors (LLP's). These LLP's execute load scheduling, perform redundant switching, and shed loads which use more than scheduled power. The level above the LLP's contains a Communication and Algorithmic Controller (CAC) which coordinates communications with the highest level. Finally, at this highest level, three cooperating Artificial Intelligence (AI) systems manage load prioritization, load scheduling, load shedding, and fault recovery and management. The system provides an excellent venue for developing and examining advanced automation techniques. The current system and the plans for its future are examined.
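
    A toy version of the LLP load-shedding rule described above might look like the following Python sketch, in which loads drawing more than their scheduled power are shed in reverse priority order until the bus is back within its allocation. The load records and thresholds are hypothetical.

        loads = [
            # (name, priority: 1 = most critical, scheduled W, measured W)
            ("life-support", 1, 400.0, 395.0),
            ("experiment-A", 3, 250.0, 310.0),
            ("lighting",     2, 150.0, 150.0),
        ]
        bus_allocation_w = 800.0

        total = sum(measured for _, _, _, measured in loads)
        shed = []
        # Shed the lowest-priority overconsumers first.
        for name, prio, sched, meas in sorted(loads, key=lambda l: -l[1]):
            if total <= bus_allocation_w:
                break
            if meas > sched:              # load exceeds its scheduled power
                shed.append(name)
                total -= meas
        print("shed loads:", shed, "remaining draw:", total, "W")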

  12. Research Electrical Distribution Bus | Energy Systems Integration Facility

    Science.gov Websites

    The research electrical distribution bus (REDB) is the heart of the Energy Systems Integration Facility electrical system, extending throughout the laboratories.

  13. Performance Monitoring of Distributed Data Processing Systems

    NASA Technical Reports Server (NTRS)

    Ojha, Anand K.

    2000-01-01

    Test and checkout systems are essential components in ensuring safety and reliability of aircraft and related systems for space missions. A variety of systems, developed over several years, are in use at the NASA/KSC. Many of these systems are configured as distributed data processing systems with the functionality spread over several multiprocessor nodes interconnected through networks. To be cost-effective, a system should take the least amount of resources and perform a given testing task in the least amount of time. There are two aspects of performance evaluation: monitoring and benchmarking. While monitoring is valuable to system administrators in operating and maintaining a system, benchmarking is important in designing and upgrading computer-based systems. These two aspects of performance evaluation are the foci of this project. This paper first discusses various issues related to software, hardware, and hybrid performance monitoring as applicable to distributed systems, and specifically to the TCMS (Test Control and Monitoring System). Next, a comparison of several probing instructions is made to show that the hybrid monitoring technique developed by NIST (the National Institute of Standards and Technology) is the least intrusive and takes only one-fourth of the time taken by software monitoring probes. In the rest of the paper, issues related to benchmarking a distributed system are discussed, and finally a prescription for developing a micro-benchmark for the TCMS is provided.
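
    For context on why software probes are intrusive, the sketch below implements a minimal software monitoring probe in Python that counts calls and accumulates latency. It illustrates the general technique only and is unrelated to the NIST hybrid monitor's actual design.

        import functools
        import time

        def probe(stats):
            # Decorator that records call counts and cumulative latency per function.
            def wrap(fn):
                @functools.wraps(fn)
                def inner(*args, **kwargs):
                    start = time.perf_counter()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        calls, total = stats.get(fn.__name__, (0, 0.0))
                        stats[fn.__name__] = (calls + 1,
                                              total + time.perf_counter() - start)
                return inner
            return wrap

        stats = {}

        @probe(stats)
        def process_message(payload):
            return sum(payload)

        for _ in range(1000):
            process_message(range(100))
        print(stats)  # {'process_message': (1000, <total seconds>)}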

  14. Sludge accumulation and distribution impact the hydraulic performance in waste stabilisation ponds.

    PubMed

    Coggins, Liah X; Ghisalberti, Marco; Ghadouani, Anas

    2017-03-01

    Waste stabilisation ponds (WSPs) are used worldwide for wastewater treatment, and throughout their operation require periodic sludge surveys. Sludge accumulation in WSPs can impact performance by reducing the effective volume of the pond, and altering the pond hydraulics and wastewater treatment efficiency. Traditionally, sludge heights, and thus sludge volume, have been measured using low-resolution and labour intensive methods such as 'sludge judge' and the 'white towel test'. A sonar device, a readily available technology, fitted to a remotely operated vehicle (ROV) was shown to improve the spatial resolution and accuracy of sludge height measurements, as well as reduce labour and safety requirements. Coupled with a dedicated software package, the profiling of several WSPs has shown that the ROV with autonomous sonar device is capable of providing sludge bathymetry with greatly increased spatial resolution in a greatly reduced profiling time, leading to a better understanding of the role played by sludge accumulation in hydraulic performance of WSPs. The high-resolution bathymetry collected was used to support a much more detailed hydrodynamic assessment of systems with low, medium and high accumulations of sludge. The results of the modelling show that hydraulic performance is not only influenced by the sludge accumulation, but also that the spatial distribution of sludge plays a critical role in reducing the treatment capacity of these systems. In a range of ponds modelled, the reduction in residence time ranged from 33% in a pond with a uniform sludge distribution to a reduction of up to 60% in a pond with highly channelized flow. The combination of high-resolution measurement of sludge accumulation and hydrodynamic modelling will help in the development of frameworks for wastewater sludge management, including the development of more reliable computer models, and could potentially have wider application in the monitoring of other small to medium water bodies, such as channels, recreational water bodies, and commercial ports. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Report of the Defense Science Board 1981 Summer Study Panel on Operational Readiness with High Performance Systems

    DTIC Science & Technology

    1982-04-01

    Approved for public release; distribution unlimited. Office of the Secretary of Defense, Washington, D.C., April 1982.

  16. Improved mine blast algorithm for optimal cost design of water distribution systems

    NASA Astrophysics Data System (ADS)

    Sadollah, Ali; Guen Yoo, Do; Kim, Joong Hoon

    2015-12-01

    The design of water distribution systems is a large class of combinatorial, nonlinear optimization problems with complex constraints such as conservation of mass and energy equations. Since feasible solutions are often extremely complex, traditional optimization techniques are insufficient. Recently, metaheuristic algorithms have been applied to this class of problems because they are highly efficient. In this article, a recently developed optimizer called the mine blast algorithm (MBA) is considered. The MBA is improved and coupled with the hydraulic simulator EPANET to find the optimal cost design for water distribution systems. The performance of the improved mine blast algorithm (IMBA) is demonstrated using the well-known Hanoi, New York tunnels and Balerma benchmark networks. Optimization results obtained using IMBA are compared to those using MBA and other optimizers in terms of their minimum construction costs and convergence rates. For the complex Balerma network, IMBA offers the cheapest network design compared to other optimization algorithms.
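
    The coupling pattern described above, in which each candidate design is priced, simulated hydraulically, and penalized for constraint violations, is sketched below with random sampling standing in for the mine blast search and a stub replacing the EPANET solver. All constants and the pressure model are invented for illustration.

        import random

        DIAMETERS = [304.8, 406.4, 508.0, 609.6]                           # candidate sizes (mm)
        UNIT_COST = {304.8: 50.0, 406.4: 70.0, 508.0: 95.0, 609.6: 130.0}  # $/m

        def simulate_min_pressure(design):
            # Stub standing in for an EPANET run: bigger pipes -> higher
            # worst-node pressure. A real coupling would call the solver here.
            return 20.0 + 0.025 * sum(design) / len(design)

        def cost(design, lengths, p_min=30.0, penalty=1e6):
            pipe_cost = sum(UNIT_COST[d] * L for d, L in zip(design, lengths))
            shortfall = max(0.0, p_min - simulate_min_pressure(design))
            return pipe_cost + penalty * shortfall   # price infeasible designs out

        lengths = [100.0] * 6
        random.seed(1)
        candidates = [tuple(random.choice(DIAMETERS) for _ in lengths)
                      for _ in range(500)]
        best = min(candidates, key=lambda d: cost(d, lengths))
        print("best sampled design:", best, "cost:", cost(best, lengths))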

  17. High-frequency ac power distribution in Space Station

    NASA Technical Reports Server (NTRS)

    Tsai, Fu-Sheng; Lee, Fred C. Y.

    1990-01-01

    A utility-type 20-kHz ac power distribution system for the Space Station, employing resonant power-conversion techniques, is presented. The system converts raw dc voltage from photovoltaic cells or three-phase LF ac voltage from a solar dynamic generator into a regulated 20-kHz ac voltage for distribution among various loads. The results of EASY5 computer simulations of the local and global performance show that the system has fast response and good transient behavior. The ac bus voltage is effectively regulated using the phase-control scheme, which is demonstrated with both line and load variations. The feasibility of paralleling the driver-module outputs is illustrated with the driver modules synchronized and sharing a common feedback loop. An HF sinusoidal ac voltage is generated in the three-phase ac input case, when the driver modules are phased 120 deg away from one another and their outputs are connected in series.

  18. Versioned distributed arrays for resilience in scientific applications: Global view resilience

    DOE PAGES

    Chien, A.; Balaji, P.; Beckman, P.; ...

    2015-06-01

    Exascale studies project reliability challenges for future high-performance computing (HPC) systems. We propose the Global View Resilience (GVR) system, a library that enables applications to add resilience in a portable, application-controlled fashion using versioned distributed arrays. We describe GVR's interfaces to distributed arrays, versioning, and cross-layer error recovery. Using several large applications (OpenMC, the preconditioned conjugate gradient solver PCG, ddcMD, and Chombo), we evaluate the programmer effort to add resilience. The required changes are small (<2% LOC), localized, and machine-independent, requiring no software architecture changes. We also measure the overhead of adding GVR versioning and show that overheads <2% are generally achieved. We conclude that GVR's interfaces and implementation are flexible and portable and create a gentle-slope path to tolerate growing error rates in future systems.
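
    The versioned-array idea can be illustrated with a small Python sketch: an array wrapper that commits snapshots and rolls back to the last good version after an error. This is a conceptual toy, not the GVR API.

        import copy

        class VersionedArray:
            def __init__(self, data):
                self.data = list(data)
                self.versions = []

            def commit(self):
                # Snapshot the current contents; returns the new version number.
                self.versions.append(copy.deepcopy(self.data))
                return len(self.versions) - 1

            def restore(self, version=-1):
                # Roll back to a committed version (default: the most recent).
                self.data = copy.deepcopy(self.versions[version])

        a = VersionedArray([0.0] * 8)
        a.commit()                      # version 0: known-good state
        a.data[3] = float("nan")        # a fault corrupts the data
        a.restore(0)                    # application-controlled recovery
        print(a.data)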

  19. Remote measurement of microwave distribution based on optical detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Zhong; Ding, Wenzheng; Yang, Sihua

    2016-01-04

    In this letter, we present the development of a remote microwave measurement system. This method employs an arc discharge lamp that serves as an energy converter from microwave to visible light, which can propagate without a transmission medium. Observed with a charge-coupled device, a quantitative microwave power distribution can be obtained while the operators and electronic instruments remain at a distance from the high-power region, reducing the potential risk. We performed the experiments using pulsed microwaves, and the results show that the system response is dependent on the microwave intensity over a certain range. Most importantly, the microwave distribution can be monitored in real time by optical observation of the response of a one-dimensional lamp array. The characteristics of low cost, a wide detection bandwidth, remote measurement, and room-temperature operation make the system a preferred detector for microwave applications.

  20. Dust-concentration measurement based on Mie scattering of a laser beam

    PubMed Central

    Yu, Xiaoyu; Shi, Yunbo; Wang, Tian; Sun, Xu

    2017-01-01

    To realize automatic measurement of the concentration of dust particles in the air, a theory for dust concentration measurement was developed, and a system was designed to implement the measurement method based on laser scattering. In this study, the principle of dust concentration detection using laser scattering is examined, and Mie scattering theory is established as the detection basis. Through simulation, the influences of the incident laser wavelength, dust particle diameter, and refractive index of dust particles on the scattered light intensity distribution are obtained, determining the scattered light intensity curves of single suspended dust particles under different characteristic parameters. A genetic algorithm was used to study the inverse particle size distribution, and the reliability of the measurement system design was proven theoretically. The dust concentration detection system, which includes a laser system, computer circuitry, an air flow system, and a control system, was then implemented according to the parameters obtained from the theoretical analysis. The performance of the designed system was evaluated. Experimental results show that the system performance was stable and reliable, resulting in high-precision automatic dust concentration measurement with strong anti-interference ability. PMID:28767662

  1. Methods and tools for profiling and control of distributed systems

    NASA Astrophysics Data System (ADS)

    Sukharev, R.; Lukyanchikov, O.; Nikulchev, E.; Biryukov, D.; Ryadchikov, I.

    2018-02-01

    This article is devoted to the profiling and control of distributed systems. Distributed systems have a complex architecture: applications are distributed among various computing nodes, and many network operations are performed. It is therefore important to develop methods and tools for profiling distributed systems. The article analyzes and standardizes profiling methods that rely on simulation to conduct experiments and build a graph model of the system. The theory of queueing networks is used for simulation modeling of distributed systems receiving and processing user requests. To automate this profiling method, a software application with a modular structure, similar to a SCADA system, was developed.
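
    As a minimal example of the queueing-network building blocks such simulation models use, the sketch below simulates an M/M/1 node and reports the mean response time, which can be checked against the analytic value 1/(mu - lambda). The parameters are arbitrary.

        import random

        def mm1_mean_response(lam, mu, n_jobs=200_000, seed=1):
            rng = random.Random(seed)
            arrival = depart_free = 0.0   # next arrival time; time the server frees up
            total_response = 0.0
            for _ in range(n_jobs):
                arrival += rng.expovariate(lam)            # Poisson arrivals
                start = max(arrival, depart_free)          # wait if the server is busy
                depart_free = start + rng.expovariate(mu)  # exponential service time
                total_response += depart_free - arrival
            return total_response / n_jobs

        print(mm1_mean_response(lam=0.8, mu=1.0))  # analytic value: 1/(1.0-0.8) = 5.0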

  2. Discriminating nutritional quality of foods using the 5-Color nutrition label in the French food market: consistency with nutritional recommendations.

    PubMed

    Julia, Chantal; Ducrot, Pauline; Péneau, Sandrine; Deschamps, Valérie; Méjean, Caroline; Fézeu, Léopold; Touvier, Mathilde; Hercberg, Serge; Kesse-Guyot, Emmanuelle

    2015-09-28

    Our objectives were to assess the performance of the 5-Colour nutrition label (5-CNL), a front-of-pack nutrition label based on the Food Standards Agency nutrient profiling system, in discriminating the nutritional quality of foods currently on the market in France, and its consistency with French nutritional recommendations. The nutritional composition of 7777 foods available on the French market, collected from the web-based collaborative project Open Food Facts, was retrieved. The distribution of products across the 5-CNL categories according to food groups, as arranged on supermarket shelves, was assessed. The distribution of similar products from different brands across the 5-CNL categories was also assessed. Discriminating performance was measured as the number of color categories present in each food group. In the case of discrepancies between the category allocation and French nutritional recommendations, adaptations of the original score were proposed. Overall, the distribution of foodstuffs in the 5-CNL categories was consistent with French recommendations: 95.4% of 'Fruits and vegetables' and 72.5% of 'Cereals and potatoes' were classified as 'Green' or 'Yellow', whereas 86.0% of 'Sugary snacks' were classified as 'Pink' or 'Red'. Adaptations to the original FSA score computation model were necessary for beverages, added fats and cheese in order to be consistent with French official nutritional recommendations. The 5-CNL label displays a high performance in discriminating the nutritional quality of foods across food groups, within a food group, and for similar products from different brands. Adaptations from the original model were necessary to maintain consistency with French recommendations and the high performance of the system.

  3. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    PubMed

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems, reads to parallel disk arrays and writes to solid-state storage, to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.

  4. Directions in parallel programming: HPF, shared virtual memory and object parallelism in pC++

    NASA Technical Reports Server (NTRS)

    Bodin, Francois; Priol, Thierry; Mehrotra, Piyush; Gannon, Dennis

    1994-01-01

    Fortran and C++ are the dominant programming languages used in scientific computation. Consequently, extensions to these languages are the most popular for programming massively parallel computers. We discuss two such approaches to parallel Fortran and one approach to C++. The High Performance Fortran Forum has designed HPF with the intent of supporting data parallelism on Fortran 90 applications. HPF works by asking the user to help the compiler distribute and align the data structures with the distributed memory modules in the system. Fortran-S takes a different approach in which the data distribution is managed by the operating system and the user provides annotations to indicate parallel control regions. In the case of C++, we look at pC++ which is based on a concurrent aggregate parallel model.
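
    The bookkeeping behind an HPF-style BLOCK distribution can be made concrete with a short Python sketch that maps a global array index to its owning processor and local index, and back. HPF itself expresses this through compiler directives on Fortran arrays, so this is an illustration of the concept only.

        def block_owner(i, n, p):
            # Owner of global index i for n elements block-distributed over p procs.
            block = -(-n // p)                  # ceil(n / p): elements per processor
            return i // block, i % block        # (processor rank, local index)

        def to_global(rank, local, n, p):
            # Inverse mapping: reconstruct the global index from (rank, local).
            block = -(-n // p)
            return rank * block + local

        n, p = 10, 3                            # 10 elements over 3 processors
        print([block_owner(i, n, p) for i in range(n)])
        # blocks of 4: ranks 0,0,0,0,1,1,1,1,2,2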

  5. Seasonal distribution of aliphatic hydrocarbons in the Vaza Barris Estuarine System, Sergipe, Brazil.

    PubMed

    Barbosa, José Carlos S; Santos, Lukas G G V; Sant'Anna, Mércia V S; Souza, Michel R R; Damasceno, Flaviana C; Alexandre, Marcelo R

    2016-03-15

    The seasonal assessment of anthropogenic activities in the Vaza Barris estuarine river system, located in Sergipe state, northeastern Brazil, was performed using the aliphatic hydrocarbon distribution. The aliphatic hydrocarbon and isoprenoid (pristane and phytane) concentrations ranged between 0.19 μg g⁻¹ and 8.5 μg g⁻¹ of dry weight. Data were analyzed using the Kruskal-Wallis test, with the significance level set at p<0.05, and no seasonal change in the distribution was observed. The Carbon Preference Index (CPI), associated with the n-alkanes/n-C16 ratio, the Low Molecular Weight/High Molecular Weight ratio (LMW/HMW) and the Terrigenous to Aquatic Ratio (TAR), suggested biogenic input of aliphatic hydrocarbons for most samples, with significant contribution from higher plants. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Full State Feedback Control for Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.

  7. Evaluation of a grid based molecular dynamics approach for polypeptide simulations.

    PubMed

    Merelli, Ivan; Morra, Giulia; Milanesi, Luciano

    2007-09-01

    Molecular dynamics is very important for biomedical research because it makes it possible to simulate the behavior of a biological macromolecule in silico. However, molecular dynamics is computationally rather expensive: the simulation of a few nanoseconds of dynamics for a large macromolecule such as a protein takes a very long time, due to the high number of operations needed to solve Newton's equations for a system of thousands of atoms. In order to obtain biologically significant data, it is desirable to use high-performance computation resources to perform these simulations. Recently, a distributed computing approach based on replacing a single long simulation with many independent short trajectories has been introduced, which in many cases provides valuable results. This study concerns the development of an infrastructure to run molecular dynamics simulations on a grid platform in a distributed way. The implemented software allows the parallel submission of different simulations that are individually short but together yield important biological information. Moreover, each simulation is divided into a chain of jobs to avoid data loss in case of system failure and to limit the size of each data transfer from the grid. The results confirm that the distributed approach on grid computing is particularly suitable for molecular dynamics simulations thanks to its elevated scalability.

  8. Enabling High-performance Interactive Geoscience Data Analysis Through Data Placement and Movement Optimization

    NASA Astrophysics Data System (ADS)

    Zhu, F.; Yu, H.; Rilee, M. L.; Kuo, K. S.; Yu, L.; Pan, Y.; Jiang, H.

    2017-12-01

    Since the establishment of data archive centers and the standardization of file formats, scientists have been required to search metadata catalogs for the data they need and download the data files to their local machines to carry out analysis. This approach has facilitated data discovery and access for decades, but it inevitably leads to data transfer from archive centers to scientists' computers through low-bandwidth Internet connections. Data transfer becomes a major performance bottleneck in such an approach. Combined with generally constrained local compute/storage resources, this limits the extent of scientists' studies and deprives them of timely outcomes. Thus, the conventional approach is not scalable with respect to both the volume and variety of geoscience data. A much more viable solution is to couple analysis and storage systems to minimize data transfer. In our study, we compare loosely coupled approaches (exemplified by Spark and Hadoop) and tightly coupled approaches (exemplified by parallel distributed database management systems, e.g., SciDB). In particular, we investigate the optimization of data placement and movement to effectively tackle the variety challenge, and boost the popularization of parallelization to address the volume challenge. Our goal is to enable high-performance interactive analysis for a good portion of geoscience data analysis exercises. We show that tightly coupled approaches can concentrate data traffic between local storage systems and compute units, thereby optimizing bandwidth utilization to achieve better throughput. Based on our observations, we develop a geoscience data analysis system that tightly couples analysis engines with storage and has direct access to a detailed map of data partition locations. Through an innovative data partitioning and distribution scheme, our system has demonstrated scalable and interactive performance in real-world geoscience data analysis applications.

  9. Performance improvement of eight-state continuous-variable quantum key distribution with an optical amplifier

    NASA Astrophysics Data System (ADS)

    Guo, Ying; Li, Renjie; Liao, Qin; Zhou, Jian; Huang, Duan

    2018-02-01

    Discrete modulation is proven to be beneficial to improving the performance of continuous-variable quantum key distribution (CVQKD) in long-distance transmission. In this paper, we suggest a construct to improve the maximal generated secret key rate of discretely modulated eight-state CVQKD using an optical amplifier (OA) at a slight cost in transmission distance. In the proposed scheme, an optical amplifier is exploited to compensate for imperfections in Bob's apparatus, so that the generated secret key rate of the eight-state protocol is enhanced. Specifically, we investigate two types of optical amplifiers, the phase-insensitive amplifier (PIA) and the phase-sensitive amplifier (PSA), and thereby obtain approximately equivalent performance improvements for the eight-state CVQKD system when applying these two different amplifiers. Numerical simulation shows that the proposed scheme can well improve the generated secret key rate of eight-state CVQKD in both the asymptotic limit and the finite-size regime. We also show that the proposed scheme can achieve relatively high-rate transmission in long-distance communication systems.

  10. Design and Analyses of High Aspect Ratio Nozzles for Distributed Propulsion Acoustic Measurements

    NASA Technical Reports Server (NTRS)

    Dippold, Vance F., III

    2016-01-01

    A series of three convergent round-to-rectangular high-aspect ratio nozzles were designed for acoustics measurements. The nozzles have exit area aspect ratios of 8:1, 12:1, and 16:1. With septa inserts, these nozzles will mimic an array of distributed propulsion system nozzles, as found on hybrid wing-body aircraft concepts. Analyses were performed for the three nozzle designs and showed that the flow through the nozzles was free of separated flow and shocks. The exit flow was mostly uniform with the exception of a pair of vortices at each span-wise end of the nozzle.

  11. Very High Reflectivity Supermirrors And Their Applications

    NASA Astrophysics Data System (ADS)

    Mezei, F.

    1989-01-01

    Very high reflectivity (some 95% or better) supermirrors, with cut-off angles up to 2 times the critical angle of Ni-coated simple total reflection neutron mirrors, can be produced using well-established conventional deposition techniques. This performance makes applications involving multiple reflections and transmission geometries feasible, which in turn allows us to use more sophisticated neutron optical systems in order to optimize performance and minimize the amount of scarce supermirror material required. A key feature of several of these novel systems is the distribution of tasks among the several optical components, achieving the desired performance by multiple action. The design and characteristics of a series of novel applications, such as polarizing cavities, collimators and guides, non-polarizing guides, beam compressors, deflectors and splitters (most of them tested or being implemented) are the main subjects of the present paper.

  12. Characterizing output bottlenecks in a supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Bing; Chase, Jeffrey; Dillow, David A

    2012-01-01

    Supercomputer I/O loads are often dominated by writes. HPC (High Performance Computing) file systems are designed to absorb these bursty outputs at high bandwidth through massive parallelism. However, the delivered write bandwidth often falls well below the peak. This paper characterizes the data absorption behavior of a center-wide shared Lustre parallel file system on the Jaguar supercomputer. We use a statistical methodology to address the challenges of accurately measuring a shared machine under production load and to obtain the distribution of bandwidth across samples of compute nodes, storage targets, and time intervals. We observe and quantify limitations from competing traffic, contention on storage servers and I/O routers, concurrency limitations in the client compute node operating systems, and the impact of variance (stragglers) on coupled output such as striping. We then examine the implications of our results for application performance and the design of I/O middleware systems on shared supercomputers.

  13. Faster than Real-Time Dynamic Simulation for Large-Size Power System with Detailed Dynamic Models using High-Performance Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Renke; Jin, Shuangshuang; Chen, Yousu

    This paper presents a faster-than-real-time dynamic simulation software package designed for large-size power system dynamic simulation. It was developed on the GridPACK™ high-performance computing (HPC) framework. The key features of the developed software package include (1) faster-than-real-time dynamic simulation for a WECC system (17,000 buses) with different types of detailed generator, controller, and relay dynamic models; (2) a decoupled parallel dynamic simulation algorithm with an optimized computation architecture to better leverage HPC resources and technologies; (3) options for HPC-based linear and iterative solvers; (4) hidden HPC details, such as data communication and distribution, to enable development centered on mathematical models and algorithms rather than on computational details for power system researchers; and (5) easy integration of new dynamic models and related algorithms into the software package.

  14. Fabrication of High Performing PEMFC Catalyst-Coated Membranes with a Low Cost Air-Assisted Cylindrical Liquid Jets Spraying System

    DOE PAGES

    Peng, Xiong; Omasta, Travis; Rigdon, William; ...

    2016-11-15

    In this paper, a low cost air-assisted cylindrical liquid jets spraying (ACLJS) system was developed to prepare high-performance catalyst-coated membranes (CCMs) for proton exchange membrane fuel cells (PEMFCs). The catalyst ink was flowed from a cylindrical orifice, atomized by an air stream fed from a coaxial slit, and sprayed directly onto the membrane, which was suctioned to a heated aluminum vacuum plate. The CCM pore architecture, including size, distribution and volume, can be controlled using various flow parameters, and the impact of spraying conditions on electrode structure and PEMFC performance was investigated. CCMs fabricated in the fiber-type break-up regime by ACLJS achieved very high performance during PEMFC testing, with the top-performing cells having a current density greater than 1900 mA/cm2 at 0.7 V under H2/O2 flows and 700 mA/cm2 under H2/air at 1.5 bar (absolute) pressure, 60% gas RH, and 80°C cell temperature.

  16. Distribution of rain height over subtropical region: Durban, South Africa for satellite communication systems

    NASA Astrophysics Data System (ADS)

    Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.

    2018-03-01

    Rain height is one of the significant parameters for the prediction of rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations using parameters estimated by the maximum likelihood method. The performance of the distribution was evaluated using statistical goodness-of-fit measures. The three-parameter Dagum distribution proves an appropriate model for rain height over Durban, with a root mean square error of 0.26. The shape and scale parameters of the distribution show wide variation, and the 0.01% time-exceedance value indicates a high probability of rain attenuation at higher frequencies.
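
    A hedged sketch of the estimation route the study uses, maximum-likelihood fitting of the three-parameter Dagum distribution, is shown below on synthetic data standing in for rain-height measurements. The density follows the standard Dagum form with CDF F(x) = (1 + (x/b)^-a)^-p; the starting values and true parameters are arbitrary.

        import math
        import random
        from scipy.optimize import minimize

        def dagum_logpdf(x, a, b, p):
            # log of f(x) = (a*p/x) * (x/b)**(a*p) / (1 + (x/b)**a)**(p + 1)
            return (math.log(a * p / x)
                    + a * p * (math.log(x) - math.log(b))
                    - (p + 1) * math.log1p((x / b) ** a))

        def neg_loglik(theta, data):
            a, b, p = theta
            if min(a, b, p) <= 0:
                return 1e12                     # keep the search in the valid region
            try:
                return -sum(dagum_logpdf(x, a, b, p) for x in data)
            except OverflowError:
                return 1e12

        # Synthetic "rain heights" drawn by inverse-CDF sampling from a known Dagum law.
        rng = random.Random(0)
        a0, b0, p0 = 4.0, 4.5, 0.8              # true (shape, scale, shape) parameters
        data = [b0 * (max(rng.random(), 1e-12) ** (-1 / p0) - 1) ** (-1 / a0)
                for _ in range(2000)]

        start = [2.0, sorted(data)[len(data) // 2], 1.0]   # rough initial guess
        fit = minimize(neg_loglik, x0=start, args=(data,), method="Nelder-Mead")
        print("estimated (a, b, p):", fit.x)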

  17. LIQUID AND GASEOUS FUEL DISTRIBUTION SYSTEM

    EPA Science Inventory

    The report describes the national liquid and gaseous fuel distribution system. The study leading to the report was performed as part of an effort to better understand emissions of volatile organic compounds from the fuel distribution system. The primary, secondary, and tertiary seg...

  18. PCI bus content-addressable-memory (CAM) implementation on FPGA for pattern recognition/image retrieval in a distributed environment

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Yan, Yin; Tanmay, Parikh; Khoury, Jed; Woods, C. L.

    2004-11-01

    Recently, surveillance and Automatic Target Recognition (ATR) applications have been increasing as the cost of the computing power needed to process the massive amount of information continues to fall. This computing power has been made possible partly by the latest advances in FPGAs and SOPCs. In particular, to design and implement state-of-the-art electro-optical imaging systems that provide advanced surveillance capabilities, there is a need to integrate several technologies (e.g. telescopes, precise optics, cameras, and image/computer vision algorithms, which can be geographically distributed or share distributed resources) into programmable and DSP systems. Additionally, pattern recognition techniques and fast information retrieval are often important components of intelligent systems. The aim of this work is to use an embedded FPGA as a fast, configurable and synthesizable search engine for fast image pattern recognition/retrieval in a distributed hardware/software co-design environment. In particular, we propose and demonstrate a low cost Content Addressable Memory (CAM)-based distributed embedded FPGA hardware architecture with real-time recognition capabilities for pattern look-up, pattern recognition, and image retrieval. We show how the distributed CAM-based architecture offers an order-of-magnitude performance advantage over RAM-based (Random Access Memory) search for implementing high-speed pattern recognition for image retrieval. The methods of designing, implementing, and analyzing the proposed CAM-based embedded architecture are described here, and other SOPC solutions/design issues are covered. Finally, experimental results, hardware verification, and performance evaluations using both the Xilinx Virtex-II and the Altera Apex20k are provided to show the potential and power of the proposed method for low cost reconfigurable fast image pattern recognition/retrieval at the hardware/software co-design level.
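
    The order-of-magnitude claim is easy to appreciate with a software analogy: an associative (CAM-style) lookup answers in roughly constant time per query, while a RAM-style search scans memory linearly. The Python sketch below uses a dict as a stand-in for CAM hardware; the sizes and data are arbitrary.

        import random
        import time

        random.seed(0)
        patterns = [random.getrandbits(64) for _ in range(50_000)]
        cam = {p: i for i, p in enumerate(patterns)}     # software stand-in for a CAM
        queries = random.sample(patterns, 200)

        t0 = time.perf_counter()
        hits_ram = [patterns.index(q) for q in queries]  # linear, RAM-style scan
        t1 = time.perf_counter()
        hits_cam = [cam[q] for q in queries]             # associative, CAM-style match
        t2 = time.perf_counter()

        assert hits_ram == hits_cam
        print(f"linear scan: {t1 - t0:.4f} s, associative lookup: {t2 - t1:.6f} s")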

  19. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for the generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms are addressed.

  20. Visualizing request-flow comparison to aid performance diagnosis in distributed systems.

    PubMed

    Sambasivan, Raja R; Shafer, Ilari; Mazurek, Michelle L; Ganger, Gregory R

    2013-12-01

    Distributed systems are complex to develop and administer, and performance problem diagnosis is particularly challenging. When performance degrades, the problem might be in any of the system's many components or could be a result of poor interactions among them. Recent research efforts have created tools that automatically localize the problem to a small number of potential culprits, but research is needed to understand what visualization techniques work best for helping distributed systems developers understand and explore their results. This paper compares the relative merits of three well-known visualization approaches (side-by-side, diff, and animation) in the context of presenting the results of one proven automated localization technique called request-flow comparison. Via a 26-person user study, which included real distributed systems developers, we identify the unique benefits that each approach provides for different problem types and usage modes.

  1. DEPEND - A design environment for prediction and evaluation of system dependability

    NASA Technical Reports Server (NTRS)

    Goswami, Kumar K.; Iyer, Ravishankar K.

    1990-01-01

    The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible.

  2. Recent GRC Aerospace Technologies Applicable to Terrestrial Energy Systems

    NASA Technical Reports Server (NTRS)

    Kankam, David; Lyons, Valerie J.; Hoberecht, Mark A.; Tacina, Robert R.; Hepp, Aloysius F.

    2000-01-01

    This paper is an overview of a wide range of recent aerospace technologies under development at the NASA Glenn Research Center, in collaboration with other NASA centers, government agencies, industry and academia. The focused areas are space solar power, advanced power management and distribution systems, Stirling cycle conversion systems, fuel cells, advanced thin film photovoltaics and batteries, and combustion technologies. The aerospace-related objectives of the technologies are generation of space power, development of cost-effective and reliable, high performance power systems, cryogenic applications, energy storage, and reduction in gas-turbine emissions, with attendant clean jet engines. The terrestrial energy applications of the technologies include augmentation of bulk power in ground power distribution systems, and generation of residential, commercial and remote power, as well as promotion of pollution-free environment via reduction in combustion emissions.

  3. Nonlinear effects of unbalance in the rotor-floating ring bearing system of turbochargers

    NASA Astrophysics Data System (ADS)

    Tian, L.; Wang, W. J.; Peng, Z. J.

    2013-01-01

    The turbocharger (TC) rotor-floating ring bearing (FRB) system is characterised by high speed as well as strong nonlinearity. Using a run-up and run-down simulation method, this paper systematically investigates the influence of unbalance on the rotordynamic characteristics of a real TC-FRB system over the speed range from 0 Hz to 3500 Hz. The rotor is discretized by the finite element method, and the oil film forces required at each simulation step are calculated by an efficient analytical method. The imposed unbalance amount and its distribution are the variables considered in the non-stationary simulations performed. The newly obtained results clearly show the distinct phenomena brought about by variations of the unbalance offset, confirming that the unbalance level is a critical parameter for the system response. At the same time, variations in the unbalance distribution, i.e. out-of-phase versus in-phase unbalance, can lead to entirely different simulation results as well, which proves that the distribution of unbalance is not negligible in the dynamic analysis of a rotor-FRB system. Additionally, considerable effort has been placed on the description and discussion of a unique phenomenon termed Critical Limit Cycle Oscillation (CLC Oscillation), which is of great importance and interest for TC research and development.

  4. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job), and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Relatively speaking, conditions data tends to be small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place and are soon available on demand throughout the grid, cached locally at each site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.

  6. Distribution System Reliability Analysis for Smart Grid Applications

    NASA Astrophysics Data System (ADS)

    Aljohani, Tawfiq Masad

    Reliability of power systems is a key aspect of modern power system planning, design, and operation. The ascendance of the smart grid concept has raised hopes of developing an intelligent, self-healing network capable of overcoming the interruption problems that face utilities and cost them tens of millions in repairs and losses. To address these reliability concerns, power utilities and interested parties have spent extensive time and effort analyzing and studying the reliability of the generation and transmission sectors of the power grid. Only recently has attention shifted to improving the reliability of the distribution network, the connection point between power providers and consumers where most electricity problems occur. In this work, we examine the effect of smart grid applications on the reliability of power distribution networks. The test system used in this thesis is the IEEE 34-node test feeder, released in 2003 by the Distribution System Analysis Subcommittee of the IEEE Power Engineering Society. The objective is to analyze the feeder for the optimal placement of automatic switching devices and to quantify the effect of their proper installation on the performance of the distribution system. The measures are the changes in the system reliability indices, including SAIDI, SAIFI, and EUE. A further goal is to design and simulate the installation of Distributed Generators (DGs) on the utility's distribution system and measure the potential improvement in its reliability. The software used in this work is DISREL, intelligent power-distribution software developed by General Reliability Co.
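
    For orientation, the reliability indices named above are simple ratios over the customer base. The following minimal sketch (not from the thesis; the outage records and feeder size are hypothetical) shows how SAIFI, SAIDI, and EUE are typically computed:

        # Minimal sketch (not from the thesis): computing SAIFI, SAIDI, and EUE
        # from a list of hypothetical outage records on a feeder.
        from dataclasses import dataclass

        @dataclass
        class Outage:
            customers_interrupted: int   # customers affected by this event
            duration_hours: float        # restoration time for those customers
            unserved_kw: float           # average load unserved during the event

        TOTAL_CUSTOMERS = 1200           # customers served by the feeder (assumed)

        outages = [
            Outage(300, 2.0, 450.0),
            Outage(80, 0.5, 95.0),
            Outage(1200, 1.5, 1800.0),
        ]

        # SAIFI: average number of interruptions per customer served
        saifi = sum(o.customers_interrupted for o in outages) / TOTAL_CUSTOMERS
        # SAIDI: average interruption duration (hours) per customer served
        saidi = sum(o.customers_interrupted * o.duration_hours for o in outages) / TOTAL_CUSTOMERS
        # EUE: expected unserved energy (kWh) over the study period
        eue = sum(o.unserved_kw * o.duration_hours for o in outages)

        print(f"SAIFI = {saifi:.2f} interruptions/customer")
        print(f"SAIDI = {saidi:.2f} h/customer")
        print(f"EUE   = {eue:.1f} kWh")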

  7. Aerosols and polar stratospheric clouds measurements during the EASOE campaign

    NASA Technical Reports Server (NTRS)

    Haner, D.; Godin, S.; Megie, G.; David, C.; Mitev, V.

    1992-01-01

    Preliminary results are presented from observations performed using two different lidar systems during EASOE (the European Arctic Stratospheric Ozone Experiment), which took place in the winter of 1991-1992 in the high-latitude regions of the northern hemisphere. The first system is a ground-based multiwavelength lidar intended to measure the ozone vertical distribution in the 5 km to 40 km altitude range. It was located in Sodankyla (67 degrees N, 27 degrees E) as part of the ELSA experiment. The objectives of the ELSA cooperative project are to study the relation between polar stratospheric cloud events and ozone depletion with high vertical resolution and temporal continuity, and the evolution of the ozone distribution in relation to the position of the polar vortex. The second system is an airborne backscatter lidar (Leandre) which allows for the study of the 3-D structure and the optical properties of polar stratospheric clouds. The Leandre instrument is a dual-polarization lidar system, emitting at 532 nm, which allows the type of clouds observed to be determined according to the usual classification of polar stratospheric clouds. More than 60 hours of flight were performed in Dec. 1991 and Jan. and Feb. 1992 from Kiruna, Sweden. The operation of the Leandre instrument has led to the observation of the short-scale variability of the Pinatubo volcanic cloud in the high-latitude regions and of several episodes of polar stratospheric clouds. A preliminary analysis of the data is presented.

  8. Early Performance Results from the GOES-R Product Generation System

    NASA Astrophysics Data System (ADS)

    Marley, S.; Weiner, A.; Kalluri, S. N.; Hansen, D.; Dittberner, G.

    2013-12-01

    Enhancements to remote sensing capabilities for the next generation of Geostationary Operational Environmental Satellites (GOES R-series), scheduled to be launched in 2015, require high performance computing capabilities to output meteorological observations and products at low latency compared to the legacy processing systems. The GOES R-series (GOES-R, -S, -T, and -U) represents a generational change in both spacecraft and instrument capability, and the GOES Re-Broadcast (GRB) data, which contains calibrated and navigated radiances from all the instruments, will arrive at a data rate of 31 Mb/sec compared to the current 2.11 Mb/sec from existing GOES satellites. To keep up with the data processing rates, the Product Generation (PG) system in the ground segment is designed on a Service Based Architecture (SBA). Each algorithm is executed as a service and subscribes to the data it needs to create higher-level products via an enterprise service bus. Various levels of product data are published to and retrieved from a data fabric. Together, the SBA and the data fabric provide a flexible, scalable, high performance architecture that meets the needs of product processing now and can grow to accommodate new algorithms in the future. The algorithms are linked together in a precedence chain starting from Level 0 to Level 1b and higher-order Level 2 products that are distributed to data distribution nodes for external users. Qualification testing for more than half of the product algorithms has so far been completed on the PG system.

  9. 242A Distributed Control System Year 2000 Acceptance Test Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    TEATS, M.C.

    1999-08-31

    This report documents acceptance test results for the 242-A Evaporator distributed control system upgrade to D/3 version 9.0-2 for year 2000 compliance. It documents the test results obtained by acceptance testing as directed by procedure HNF-2695. This verification procedure will document the initial testing and evaluation of potential 242-A Distributed Control System (DCS) operating difficulties across the year 2000 boundary and the calendar adjustments needed for the leap year. Baseline system performance data will be recorded using the current, as-is operating system software. Data will also be collected for operating system software that has been modified to correct year 2000 problems. This verification procedure is intended to be generic, such that it may be performed on any D/3™ (GSE Process Solutions, Inc.) distributed control system that runs with the VMS™ (Digital Equipment Corporation) operating system. This test may be run on simulation or production systems depending upon facility status. On production systems, DCS outages will occur nine times throughout performance of the test; these outages are expected to last about 10 minutes each.

  10. Comprehensive analysis of the T-cell receptor beta chain gene in rhesus monkey by high throughput sequencing

    PubMed Central

    Li, Zhoufang; Liu, Guangjie; Tong, Yin; Zhang, Meng; Xu, Ying; Qin, Li; Wang, Zhanhui; Chen, Xiaoping; He, Jiankui

    2015-01-01

    Profiling immune repertoires by high throughput sequencing enhances our understanding of immune system complexity and immune-related diseases in humans. Previously, cloning and Sanger sequencing identified limited numbers of T cell receptor (TCR) nucleotide sequences in rhesus monkeys, thus their full immune repertoire is unknown. We applied multiplex PCR and Illumina high throughput sequencing to study the TCRβ of rhesus monkeys. We identified 1.26 million TCRβ sequences corresponding to 643,570 unique TCRβ sequences and 270,557 unique complementarity-determining region 3 (CDR3) gene sequences. Precise measurements of CDR3 length distribution, CDR3 amino acid distribution, length distribution of N nucleotide of junctional region, and TCRV and TCRJ gene usage preferences were performed. A comprehensive profile of rhesus monkey immune repertoire might aid human infectious disease studies using rhesus monkeys. PMID:25961410
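
    As an aside, the CDR3 length distribution reported above is conceptually a simple tabulation over unique sequences. A toy sketch (not the authors' pipeline; the sequences below are hypothetical placeholders):

        # Illustrative sketch (not the authors' pipeline): tabulating the CDR3
        # length distribution from a set of unique CDR3 nucleotide sequences.
        from collections import Counter

        cdr3_sequences = [  # hypothetical CDR3 nucleotide strings
            "TGTGCCAGCAGC", "TGTGCCAGCAGCTTA", "TGTGCCAGC",
        ]

        length_counts = Counter(len(seq) for seq in cdr3_sequences)
        total = sum(length_counts.values())
        for length in sorted(length_counts):
            frac = length_counts[length] / total
            print(f"CDR3 length {length:3d} nt: {frac:.3%}")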

  11. Systems Measures of Water Distribution System Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klise, Katherine A.; Murray, Regan; Walker, La Tonya Nicole

    2015-01-01

    Resilience is a concept that is being used increasingly to refer to the capacity of infrastructure systems to be prepared for, and able to respond effectively and rapidly to, hazardous events. In Section 2 of this report, drinking water hazards, resilience literature, and available resilience tools are presented. Broader definitions, attributes, and methods for measuring resilience are presented in Section 3. In Section 4, quantitative systems performance measures for water distribution systems are presented. Finally, in Section 5, the performance measures and their relevance to measuring the resilience of water systems to hazards are discussed, along with needed improvements to water distribution system modeling tools.

  12. Improvement of Galilean refractive beam shaping system for accurately generating near-diffraction-limited flattop beam with arbitrary beam size.

    PubMed

    Ma, Haotong; Liu, Zejin; Jiang, Pengzhi; Xu, Xiaojun; Du, Shaojun

    2011-07-04

    We propose and demonstrate an improvement of the conventional Galilean refractive beam shaping system for accurately generating a near-diffraction-limited flattop beam with arbitrary beam size. Based on a detailed study of the refractive beam shaping system, we found that the conventional Galilean beam shaper only works well for magnifying beam shaping. Taking the transformation of an input beam with Gaussian irradiance distribution into a target beam with a high-order Fermi-Dirac flattop profile as an example, the shaper only works well when the sizes of the input and target beams satisfy R0 ≥ 1.3 w0. For the improvement, the shaper is regarded as the combination of a magnifying and a demagnifying beam shaping system. The surface and phase distributions of the improved Galilean beam shaping system are derived based on geometric and Fourier optics. Using the improved Galilean beam shaper, the accurate transformation of an input beam with Gaussian irradiance distribution into a target beam with flattop irradiance distribution is realized: the irradiance distribution of the output beam coincides with that of the target beam, the corresponding phase distribution is maintained, and the propagation performance of the output beam is greatly improved. Studies of the influence of beam size and beam order on the improved Galilean beam shaping system show that the restriction on beam size is greatly reduced. The improvement can also be used to redistribute an input beam with a complicated irradiance distribution into an output beam with a complicated irradiance distribution.
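
    For readers unfamiliar with refractive beam shaping, the standard energy-conservation ray mapping behind such designs can be sketched numerically. The following is illustrative only, with assumed parameters w0 (input Gaussian waist), R0 (target flattop radius), and beta (edge steepness); it is not the paper's surface derivation:

        # Sketch of the standard energy-conservation ray mapping used in
        # refractive beam shaping (illustrative; w0, R0, beta are assumed).
        # Each input radius r maps to the output radius R that encloses the
        # same fraction of the total power.
        import numpy as np

        w0, R0, beta = 1.0, 1.3, 16.0          # waist, target radius, steepness

        r = np.linspace(0.0, 4.0 * w0, 2000)   # input radial grid
        R = np.linspace(0.0, 2.0 * R0, 2000)   # output radial grid

        I_in = np.exp(-2.0 * (r / w0) ** 2)                  # Gaussian irradiance
        I_out = 1.0 / (1.0 + np.exp(beta * (R / R0 - 1.0)))  # Fermi-Dirac flattop

        # Cumulative encircled power (normalized) on each grid
        P_in = np.cumsum(I_in * r); P_in /= P_in[-1]
        P_out = np.cumsum(I_out * R); P_out /= P_out[-1]

        # Invert P_out to find the mapping r -> R(r) with equal enclosed power
        R_of_r = np.interp(P_in, P_out, R)
        print(R_of_r[::400])   # sampled mapping values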

  13. High temperature antenna development for space shuttle, volume 1

    NASA Technical Reports Server (NTRS)

    Kuhlman, E. A.

    1973-01-01

    Design concepts for high temperature flush mounted Space Shuttle Orbiter antenna systems are discussed. The design concepts include antenna systems for VHF, L-band, S-band, C-band, and Ku-band frequencies. The S-band antenna system design was completed and test hardware fabricated. It was then subjected to electrical and thermal testing to establish design requirements and determine reuse capabilities. The thermal tests consisted of applying ten high temperature cycles simulating the Orbiter entry heating environment in an arc tunnel plasma facility and observing the temperature distributions. Radiation pattern and impedance measurements before and after high temperature exposure were used to evaluate the antenna systems' performance. Alternate window design concepts are considered. Layout drawings, supported by thermal and strength analyses, are given for each of the antenna system designs. The results of the electrical and thermal testing of the S-band antenna system are given.

  14. FAWKES Information Management for Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Spetka, S.; Ramseyer, G.; Tucker, S.

    2010-09-01

    Current space situational awareness assets can be fully utilized by managing their inputs and outputs in real time. Ideally, sensors are tasked to perform specific functions to maximize their effectiveness. Many sensors are capable of collecting more data than is needed for a particular purpose, leading to the potential to enhance a sensor’s utilization by allowing it to be re-tasked in real time when it is determined that sufficient data has been acquired to meet the first task’s requirements. In addition, understanding a situation involving fast-traveling objects in space may require inputs from more than one sensor, leading to a need for information sharing in real time. Observations that are not processed in real time may be archived to support forensic analysis for accidents and for long-term studies. Space Situational Awareness (SSA) requires an extremely robust distributed software platform to appropriately manage the collection and distribution for both real-time decision-making as well as for analysis. FAWKES is being developed as a Joint Space Operations Center (JSPOC) Mission System (JMS) compliant implementation of the AFRL Phoenix information management architecture. It implements a pub/sub/archive/query (PSAQ) approach to communications designed for high performance applications. FAWKES provides an easy to use, reliable interface for structuring parallel processing, and is particularly well suited to the requirements of SSA. In addition to supporting point-to-point communications, it offers an elegant and robust implementation of collective communications, to scatter, gather and reduce values. A query capability is also supported that enhances reliability. Archived messages can be queried to re-create a computation or to selectively retrieve previous publications. PSAQ processes express their role in a computation by subscribing to their inputs and by publishing their results. Sensors on the edge can subscribe to inputs by appropriately authorized users, allowing dynamic tasking capabilities. Previously, the publication of sensor data collected by mobile systems was demonstrated. Thumbnails of infrared imagery that were imaged in real time by an aircraft [1] were published over a grid. This airborne system subscribed to requests for and then published the requested detailed images. In another experiment a system employing video subscriptions [2] drove the analysis of live video streams, resulting in a published stream of processed video output. We are currently implementing an SSA system that uses FAWKES to deliver imagery from telescopes through a pipeline of processing steps that are performed on high performance computers. PSAQ facilitates the decomposition of a problem into components that can be distributed across processing assets from the smallest sensors in space to the largest high performance computing (HPC) centers, as well as the integration and distribution of the results, all in real time. FAWKES supports the real-time latency requirements demanded by all of these applications. It also enhances reliability by easily supporting redundant computation. This study shows how FAWKES/PSAQ is utilized in SSA applications, and presents performance results for latency and throughput that meet these needs.

  15. Performance Optimization Design for a High-Speed Weak FBG Interrogation System Based on DFB Laser.

    PubMed

    Yao, Yiqiang; Li, Zhengying; Wang, Yiming; Liu, Siqi; Dai, Yutang; Gong, Jianmin; Wang, Lixin

    2017-06-22

    A performance optimization design for a high-speed fiber Bragg grating (FBG) interrogation system based on a high-speed distributed feedback (DFB) swept laser is proposed. A time-division-multiplexing sensor network with identical weak FBGs is constituted to realize high-capacity sensing. In order to further improve the multiplexing capacity, a waveform repairing algorithm is designed to extend the dynamic demodulation range of the FBG sensors. It is based on the fact that the spectrum of an FBG remains stable over a long period of time. By comparison with the pre-collected spectra, the distorted spectral waveforms are identified and repaired. Experimental results show that all the identical weak FBGs are distinguished and demodulated at a speed of 100 kHz with a linearity above 0.99, and the range of dynamic demodulation is extended by 40%.
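
    The repairing step rests on comparing each measured spectrum against a stable pre-collected reference. A toy sketch of that idea (the correlation threshold and function names are assumptions, not the authors' exact algorithm):

        # Toy sketch of the idea behind the waveform-repairing step (assumed:
        # one pre-collected reference spectrum per FBG and a correlation
        # threshold of 0.9; not the authors' exact algorithm).
        import numpy as np

        def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return float(np.mean(a * b))

        def repair(measured: np.ndarray, reference: np.ndarray,
                   threshold: float = 0.9) -> np.ndarray:
            """Replace a distorted spectrum with the stable pre-collected one."""
            if normalized_correlation(measured, reference) < threshold:
                return reference.copy()   # spectrum judged distorted: repair it
            return measured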

  16. Performance Optimization Design for a High-Speed Weak FBG Interrogation System Based on DFB Laser

    PubMed Central

    Yao, Yiqiang; Li, Zhengying; Wang, Yiming; Liu, Siqi; Dai, Yutang; Gong, Jianmin; Wang, Lixin

    2017-01-01

    A performance optimization design for a high-speed fiber Bragg grating (FBG) interrogation system based on a high-speed distributed feedback (DFB) swept laser is proposed. A time-division-multiplexing sensor network with identical weak FBGs is constituted to realize high-capacity sensing. In order to further improve the multiplexing capacity, a waveform repairing algorithm is designed to extend the dynamic demodulation range of the FBG sensors. It is based on the fact that the spectrum of an FBG remains stable over a long period of time. By comparison with the pre-collected spectra, the distorted spectral waveforms are identified and repaired. Experimental results show that all the identical weak FBGs are distinguished and demodulated at a speed of 100 kHz with a linearity above 0.99, and the range of dynamic demodulation is extended by 40%. PMID:28640187

  17. Shuttle: Reaction control system. Cryogenic liquid distribution system: Study

    NASA Technical Reports Server (NTRS)

    Akkerman, J. W.

    1972-01-01

    A cryogenic liquid distribution system suitable for the reaction control system on space shuttles is described. The system thermodynamics, operation, performance and weight analysis are discussed along with the design, maintenance and integration concepts.

  18. Validation of the multiplex ligation-dependent probe amplification assay and its application on the distribution study of the major alleles of 17 blood group systems in Chinese donors from Guangzhou.

    PubMed

    Ji, Yanli; Wen, Jizhi; Veldhuisen, Barbera; Haer-Wigman, Lonneke; Wang, Zhen; Lodén-van Straaten, Martin; Wei, Ling; Luo, Guangping; Fu, Yongshui; van der Schoot, C Ellen

    2017-02-01

    Genotyping platforms for common red blood cell (RBC) antigens have been successfully applied in Caucasian and black populations but not in Chinese populations. In this study, a genotyping assay based on multiplex ligation-dependent probe amplification (MLPA) technology was applied in a Chinese population to validate the MLPA probes. Subsequently, the comprehensive distribution of 17 blood group systems also was obtained. DNA samples from 200 Chinese donors were extracted and genotyped using the blood-MLPA assay. To confirm the MLPA results, a second independent genotyping assay (ID Core+) was conducted in 40 donors, and serological typing of 14 blood-group antigens was performed in 91 donors. In donors who had abnormal copy numbers of an allele (DI and GYPB) determined by MLPA, additional experiments were performed (polymerase chain reaction, sequencing, and flow cytometry analysis). The genotyping results obtained using the blood-MLPA and ID Core+ assays were consistent. Serological data were consistent with the genotyping results except for one donor who had a Lu(a-b-) phenotype. Of the 17 blood group systems, the distribution of the MNS, Duffy, Kidd, Diego, Yt, and Dombrock systems was polymorphic. The Mur and Sta antigens of the MNS system were distributed with a frequency of 9% (18 of 200) and 2% (4 of 200), respectively. One donor with chimerism and one who carried a novel DI*02(A845V) allele, which predicts the depression of Dib antigen expression, were identified. The blood-MLPA assay could easily identify the common blood-group alleles and correctly predicted phenotype in the Chinese population. The Mur and Sta antigens were distributed with high frequency in a Southern Chinese Han population. © 2016 AABB.

  19. Development of a High Performance Acousto-ultrasonic Scan System

    NASA Technical Reports Server (NTRS)

    Roth, D. J.; Martin, R. E.; Harmon, L. M.; Gyekenyesi, A. L.; Kautz, H. E.

    2002-01-01

    Acousto-ultrasonic (AU) interrogation is a single-sided nondestructive evaluation (NDE) technique employing separated sending and receiving transducers. It is used for assessing the microstructural condition/distributed damage state of the material between the transducers. AU is complementary to more traditional NDE methods such as ultrasonic c-scan, x-ray radiography, and thermographic inspection that tend to be used primarily for discrete flaw detection. Through its history, AU has been used to inspect polymer matrix composite, metal matrix composite, ceramic matrix composite, and even monolithic metallic materials. The development of a high-performance automated AU scan system for characterizing within-sample microstructural and property homogeneity is currently in a prototype stage at NASA. In this paper, a review of essential AU technology is given. Additionally, the basic hardware and software configuration, and preliminary results with the system, are described.

  20. Distributed Leadership and Teacher Job Satisfaction in Singapore

    ERIC Educational Resources Information Center

    García Torres, Darlene

    2018-01-01

    Purpose: Singapore is a country with low teacher attrition rates and high performance on international assessments (TIMSS 2011/2015 and PISA 2012/2015). Consequently, its education system is often considered as a model for other nations. The purpose of this paper is to extend research on teacher job satisfaction in Singapore and provide…

  1. Running R Statistical Computing Environment Software on the Peregrine

    Science.gov Websites

    R is a collaborative project that supports the development of new statistical methodologies and enjoys a large user base; it provides natural language support while running in an English locale, and multiple programming paradigms can be used to better leverage modern HPC systems. Consult the CRAN task view for High Performance Computing for distribution details.

  2. Superlinear threshold detectors in quantum cryptography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lydersen, Lars; Maroey, Oystein; Skaar, Johannes

    2011-09-15

    We introduce the concept of a superlinear threshold detector, a detector that has a higher probability to detect multiple photons if it receives them simultaneously rather than at separate times. Highly superlinear threshold detectors in quantum key distribution systems allow eavesdropping of the full secret key without being revealed. Here, we generalize the detector control attack, and analyze how it performs against quantum key distribution systems with moderately superlinear detectors. We quantify the superlinearity in superconducting single-photon detectors based on earlier published data, and in gated avalanche photodiode detectors based on our own measurements. The analysis shows that quantum key distribution systems using detectors of either type can be vulnerable to eavesdropping. The avalanche photodiode detector becomes superlinear toward the end of the gate. For systems expecting substantial loss, or for systems not monitoring loss, this would allow eavesdropping using trigger pulses containing less than 120 photons per pulse. Such an attack would be virtually impossible to catch with an optical power meter at the receiver entrance.

  3. THE LIQUID AND GASEOUS FUEL DISTRIBUTION SYSTEM

    EPA Science Inventory

    The report describes the national liquid and gaseous fuel distribution system. The study leading to the report was performed as part of an effort to better understand emissions of volatile organic compounds from the fuel distribution system. The primary, secondary, and tertiary seg...

  4. Derivation of hydrous pyrolysis kinetic parameters from open-system pyrolysis

    NASA Astrophysics Data System (ADS)

    Tseng, Yu-Hsin; Huang, Wuu-Liang

    2010-05-01

    Kinetic information is essential to predict the temperature, timing, or depth of hydrocarbon generation within a hydrocarbon system. Kinetic parameters are most commonly derived from open-system pyrolysis experiments. However, it has been shown that the conditions of open-system pyrolysis deviate from nature in their near-ambient pressure and high temperatures, and the extrapolation of open-system heating rates to geological conditions may be questionable. A recent study by Lewan and Ruble shows that hydrous-pyrolysis conditions simulate natural conditions better, and its applications are supported by two case studies with natural thermal-burial histories. Nevertheless, performing hydrous pyrolysis experiments is tedious and requires large amounts of sample, while open-system pyrolysis is convenient and efficient. Therefore, the present study aims to derive convincing distributed hydrous-pyrolysis activation energies (Ea) from only routine open-system Rock-Eval data. Our results reveal a good correlation between the open-system Rock-Eval parameter Tmax and the Ea derived from hydrous pyrolysis. The single hydrous-pyrolysis Ea can be predicted from Tmax based on this correlation, while the frequency factor (A0) is estimated from the linear relationship between the single Ea and log A0. Because a distributed Ea is more rational than a single Ea, we convert the predicted single hydrous-pyrolysis Ea into a distributed Ea by shifting the open-system Ea distribution until its weighted mean equals the single hydrous-pyrolysis Ea. Moreover, the shape of the Ea distribution closely resembles the shape of the Tmax curve; thus, in the absence of an open-system Ea distribution, the shape of the Tmax curve may be used to obtain the distributed hydrous-pyrolysis Ea. The study offers a simple new approach for obtaining distributed hydrous-pyrolysis Ea from only routine open-system Rock-Eval data, which will allow better estimates of hydrocarbon generation.
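
    The central shift operation is straightforward to state in code. A sketch with illustrative values (the bin energies, weights, and predicted single Ea below are hypothetical):

        # Sketch of the described modification (illustrative values): shift the
        # open-system activation-energy (Ea) distribution rigidly until its
        # weighted mean equals the single Ea predicted from Tmax.
        import numpy as np

        ea_open = np.array([48.0, 50.0, 52.0, 54.0, 56.0])   # kcal/mol bins
        weights = np.array([0.10, 0.25, 0.40, 0.20, 0.05])   # fractional potential

        ea_single_hp = 53.2   # single hydrous-pyrolysis Ea from Tmax (assumed)

        shift = ea_single_hp - float(np.dot(ea_open, weights))
        ea_hp = ea_open + shift   # distributed hydrous-pyrolysis Ea estimate

        assert abs(np.dot(ea_hp, weights) - ea_single_hp) < 1e-9
        print(ea_hp)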

  5. Low noise buffer amplifiers and buffered phase comparators for precise time and frequency measurement and distribution

    NASA Technical Reports Server (NTRS)

    Eichinger, R. A.; Dachel, P.; Miller, W. H.; Ingold, J. S.

    1982-01-01

    Extremely low noise, high performance, wideband buffer amplifiers and buffered phase comparators were developed. These buffer amplifiers are designed to distribute reference frequencies from 30 KHz to 45 MHz from a hydrogen maser without degrading the hydrogen maser's performance. The buffered phase comparators are designed to intercompare the phase of state of the art hydrogen masers without adding any significant measurement system noise. These devices have a 27 femtosecond phase stability floor and are stable to better than one picosecond for long periods of time. Their temperature coefficient is less than one picosecond per degree C, and they have shown virtually no voltage coefficients.

  6. Impact of Utility-Scale Distributed Wind on Transmission-Level System Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brancucci Martinez-Anido, C.; Hodge, B. M.

    2014-09-01

    This report presents a new renewable integration study that aims to assess the potential for adding distributed wind to the current power system with minimal or no upgrades to the distribution or transmission electricity systems. It investigates the impacts of integrating large amounts of utility-scale distributed wind power on bulk system operations by performing a case study on the power system of the Independent System Operator-New England (ISO-NE).

  7. A Hybrid liquid nitrogen system to cool a large detector

    NASA Astrophysics Data System (ADS)

    Lizon à L'Allemand, J. L.

    2017-12-01

    OmegaCAM is a wide field camera housing a mosaic of 32 CCD detectors. For the optimal trade-off between dark current, sensitivity, and cosmetics, these detectors need to be operated at a temperature of about 155 K. The detector mosaic, with a total area of 630 cm2 directly facing the Dewar entrance window, is exposed to a considerable radiation heat load. The required temperature can only be achieved with a high-performing cooling system, which in addition has to operate at the moving focal plane of a telescope. The paper describes the cooling system, which is built to make the most efficient use of the cooling power of the liquid nitrogen by forcing the nitrogen through a series of well-designed and strategically distributed heat exchangers. Results and performance of the system recorded during laboratory testing are reported as well. In addition to the cryogenic performance, the document also reports on the overall performance of the instrument, including long-term vacuum behavior.

  8. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing.

    PubMed

    Ölçer, İbrahim; Öncü, Ahmet

    2017-06-05

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems.
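
    For white noise, output SNR is maximized by correlating the trace with the (estimated) event template, which is the core of any matched-filter scheme. A bare-bones sketch (illustrative; the paper's method is adaptive and more elaborate):

        # Bare-bones matched filter for a noisy trace (illustrative; the paper's
        # adaptive method is more elaborate). Correlating with the time-reversed
        # template maximizes output SNR under white noise.
        import numpy as np

        def matched_filter(trace: np.ndarray, template: np.ndarray) -> np.ndarray:
            h = template[::-1] / (np.linalg.norm(template) + 1e-12)
            return np.convolve(trace, h, mode="same")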

  9. Adaptive Temporal Matched Filtering for Noise Suppression in Fiber Optic Distributed Acoustic Sensing

    PubMed Central

    Ölçer, İbrahim; Öncü, Ahmet

    2017-01-01

    Distributed vibration sensing based on phase-sensitive optical time domain reflectometry (ϕ-OTDR) is being widely used in several applications. However, one of the main challenges in coherent detection-based ϕ-OTDR systems is the fading noise, which impacts the detection performance. In addition, typical signal averaging and differentiating techniques are not suitable for detecting high frequency events. This paper presents a new approach for reducing the effect of fading noise in fiber optic distributed acoustic vibration sensing systems without any impact on the frequency response of the detection system. The method is based on temporal adaptive processing of ϕ-OTDR signals. The fundamental theory underlying the algorithm, which is based on signal-to-noise ratio (SNR) maximization, is presented, and the efficacy of our algorithm is demonstrated with laboratory experiments and field tests. With the proposed digital processing technique, the results show that more than 10 dB of SNR values can be achieved without any reduction in the system bandwidth and without using additional optical amplifier stages in the hardware. We believe that our proposed adaptive processing approach can be effectively used to develop fiber optic-based distributed acoustic vibration sensing systems. PMID:28587240

  10. A real-time diagnostic and performance monitor for UNIX. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dong, Hongchao

    1992-01-01

    There are now over one million UNIX sites, and the pace at which new installations are added is steadily increasing. Along with this increase comes a need to develop simple, efficient, effective, and adaptable ways of simultaneously collecting real-time diagnostic and performance data. This need exists because distributed systems can give rise to complex failure situations that are often unidentifiable with single-machine diagnostic software. The simultaneous collection of error and performance data is also important for research in failure prediction and error/performance studies. This paper introduces a portable method to concurrently collect real-time diagnostic and performance data on a distributed UNIX system. The combined diagnostic/performance data collection is implemented on a distributed multi-computer system using SUN4s as servers. The approach uses existing UNIX system facilities to gather system dependability information such as error and crash reports. In addition, performance data such as CPU utilization, disk usage, I/O transfer rate, and network contention is also collected. In the future, the collected data will be used to identify dependability bottlenecks and to analyze the impact of failures on system performance.
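
    A modern minimal analogue of such combined collection can be written with the third-party psutil package (illustrative only; the thesis used native UNIX facilities on SUN4 servers):

        # Minimal modern analogue of combined performance-data collection
        # (illustrative; not the thesis's tooling). Requires psutil.
        import time
        import psutil

        def sample() -> dict:
            disk = psutil.disk_io_counters()
            net = psutil.net_io_counters()
            return {
                "t": time.time(),
                "cpu_percent": psutil.cpu_percent(interval=1.0),
                "disk_read_bytes": disk.read_bytes,
                "disk_write_bytes": disk.write_bytes,
                "net_bytes_sent": net.bytes_sent,
                "net_bytes_recv": net.bytes_recv,
            }

        print(sample())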

  11. Resident database interfaces to the DAVID system, a heterogeneous distributed database management system

    NASA Technical Reports Server (NTRS)

    Moroh, Marsha

    1988-01-01

    A methodology was developed for building interfaces from resident database management systems to the DAVID system, a heterogeneous distributed database management system under development at NASA. The feasibility of the methodology was demonstrated by constructing the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented, and the work performed and the results are summarized.

  12. Research on the performance of low-lift diving tubular pumping system by CFD and Test

    NASA Astrophysics Data System (ADS)

    Xia, Chenzhi; Cheng, Li; Liu, Chao; Zhou, Jiren; Tang, Fangping; Jin, Yan

    2016-11-01

    The post-diving tubular pump, whose impeller and motor share the same shaft, is widely used in large-discharge, low-head irrigation and storm-drainage pumping stations. Owing to its excellent hydraulic performance, compact structure, good noise resistance, and low operating cost, the diving tubular pump system is used in Chinese pumping stations. To study the hydraulic performance and pressure fluctuation of the inlet and outlet passages in the diving tubular pump system, both steady and unsteady full flow fields are numerically simulated at three flow rate conditions using commercial CFD software. The asymmetry of the longitudinal structure of the inlet passage affects the flow pattern at its outlet; especially at the small flow rate condition, the structural asymmetry results in an uneven velocity distribution at the outlet of the inlet passage. The axial velocity distribution uniformity at the inlet of the inlet passage increases as the flow rate increases, and the hydraulic loss in the inlet passage is positively correlated with the square of the flow rate. The axial velocity distribution uniformity at the outlet of the inlet passage is 90% at the design flow rate condition. The predicted result shows the same trend as the test result, and the range of the high-efficiency area is almost identical between prediction and test. At the design condition, the dominant frequency of pressure pulsation in the inlet passage is a low frequency; at the small and large flow rate conditions it is a high frequency. At the large flow rate condition, the flow pattern in the inlet passage is significantly affected by the rotation of the impeller. At off-design conditions, the pressure pulsation at the outlet passage is strong. At the design condition, the dominant frequency is 35.57 Hz, which is twice the rotation frequency.
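
    The axial velocity distribution uniformity cited above is commonly defined as one minus the normalized RMS deviation of the axial velocity samples. A sketch under that assumption (the paper does not spell out its exact formula, and the sample velocities are hypothetical):

        # Common definition of axial-velocity distribution uniformity used in
        # pumping-station CFD studies (sketch; this standard form is an
        # assumption, and the sample values are made up).
        import numpy as np

        def axial_uniformity(v_axial: np.ndarray) -> float:
            """Uniformity in percent: 100% means perfectly uniform velocity."""
            v_mean = v_axial.mean()
            return (1.0 - np.sqrt(np.mean((v_axial - v_mean) ** 2)) / v_mean) * 100.0

        v = np.array([2.1, 2.0, 1.9, 2.2, 1.8])   # sampled axial velocities, m/s
        print(f"uniformity = {axial_uniformity(v):.1f}%")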

  13. Brillouin distributed temperature sensing system for monitoring of submarine export cables of off-shore wind farms

    NASA Astrophysics Data System (ADS)

    Marx, Benjamin; Rath, Alexander; Kolm, Frederick; Schröder, Andreas; Buntebarth, Christian; Dreß, Albrecht; Hill, Wieland

    2016-05-01

    For high-voltage cables, the maximum temperature of the insulation must never be exceeded at any location and at any load condition. The local temperatures depend not only on the cable design and load history, but also on the local thermal environment of the cable. Therefore, distributed temperature monitoring of high-voltage cables is essential to ensure the integrity of the cable at high load. Especially, the load of the export cables of wind farms varies strongly in dependence on weather conditions. In this field study, we demonstrate the measurement performance of a new, robust Brillouin distributed temperature sensing system (Brillouin-DTS). The system is based on spontaneous Brillouin scattering and does not require a fibre loop. This is essential for long submarine high-voltage cables, where normally no loop can be formed in the seabed. It is completely passively cooled and does not contain any moving or wearing parts. The instrument is dedicated for use in industrial and other rough environments. With a measuring time below 10 min, the temperature resolution is better than 1 °C for distances up to 50 km. In the field study, the submarine export cable of an off-shore wind farm has been monitored. The temperature profile of the export cable shows several hot spots, mostly located at cable joints, and also several cold spots.

  14. Telerobotic system performance measurement - Motivation and methods

    NASA Technical Reports Server (NTRS)

    Kondraske, George V.; Khoury, George J.

    1992-01-01

    A systems performance-based strategy for modeling and conducting experiments relevant to the design and performance characterization of telerobotic systems is described. A developmental testbed consisting of a distributed telerobotics network and initial efforts to implement the strategy described is presented. Consideration is given to the general systems performance theory (GSPT) to tackle human performance problems as a basis for: measurement of overall telerobotic system (TRS) performance; task decomposition; development of a generic TRS model; and the characterization of performance of subsystems comprising the generic model. GSPT employs a resource construct to model performance and resource economic principles to govern the interface of systems to tasks. It provides a comprehensive modeling/measurement strategy applicable to complex systems including both human and artificial components. Application is presented within the framework of a distributed telerobotics network as a testbed. Insight into the design of test protocols which elicit application-independent data is described.

  15. Implementation of Parallel Dynamic Simulation on Shared-Memory vs. Distributed-Memory Environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Chen, Yousu; Wu, Di

    2015-12-09

    Power system dynamic simulation computes the system response to a sequence of large disturbances, such as sudden changes in generation or load, or a network short circuit followed by protective branch switching operations. It consists of a large set of differential and algebraic equations, which is computationally intensive and challenging to solve with single-processor dynamic simulation solutions. High-performance computing (HPC) based parallel computing is a very promising technology to speed up the computation and facilitate the simulation process. This paper presents two different parallel implementations of power grid dynamic simulation, using Open Multi-Processing (OpenMP) on a shared-memory platform and Message Passing Interface (MPI) on distributed-memory clusters, respectively. The differences between the parallel simulation algorithms and architectures of the two HPC technologies are illustrated, and their performance for running parallel dynamic simulation is compared and demonstrated.
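
    A toy mpi4py sketch of the distributed-memory pattern described, in which each rank owns a block of the state and exchanges summary values per step (illustrative; this is not the paper's simulator, and the dynamics are placeholders):

        # Toy mpi4py sketch of distributed-memory partitioning (illustrative).
        # Run with, e.g.: mpiexec -n 4 python demo.py   (demo.py is hypothetical)
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_total = 1000                       # total state variables (assumed)
        local_n = n_total // size
        x = np.zeros(local_n)                # this rank's partition of the state

        dt = 0.01
        for _ in range(100):
            x += dt * (-x + rank)            # placeholder local dynamics
            # gather a summary value from every rank at each step
            norms = comm.allgather(float(np.linalg.norm(x)))

        if rank == 0:
            print("per-rank state norms:", norms)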

  16. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high-dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all of the reactions of large and huge-scale networks, on any number of threads or nodes.
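
    The underlying optimization in any FBA implementation is a linear program: maximize an objective c'v subject to the steady-state constraint Sv = 0 and flux bounds. A minimal Python sketch with scipy (illustrative; DistributedFBA.jl itself is a Julia package, and the toy network below is assumed):

        # Minimal flux balance analysis sketch (illustrative; not the package's
        # own code). FBA maximizes c'v subject to S v = 0 and flux bounds.
        import numpy as np
        from scipy.optimize import linprog

        # Tiny toy network: 2 metabolites x 3 reactions (assumed values)
        S = np.array([[1.0, -1.0, 0.0],
                      [0.0, 1.0, -1.0]])
        bounds = [(0, 10), (0, 10), (0, 10)]   # lower/upper flux bounds
        c = np.array([0.0, 0.0, 1.0])          # objective: maximize flux v3

        # linprog minimizes, so negate the objective to maximize it
        res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal fluxes:", res.x, "objective:", -res.fun)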

  17. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high-dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all of the reactions of large and huge-scale networks, on any number of threads or nodes.

  18. Use of model calibration to achieve high accuracy in analysis of computer networks

    DOEpatents

    Frogner, Bjorn; Guarro, Sergio; Scharf, Guy

    2004-05-11

    A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
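
    One concrete form such probabilistic representations can take is a parametric fit to measured delays, whose quantiles then feed the prediction model. A hedged sketch (the lognormal choice and the sample values are assumptions, not the patent's method):

        # Hedged sketch of calibrating a delay model from measured load data:
        # fit a lognormal to observed round-trip delays and use its quantiles
        # for prediction (illustrative; the patent's method is more general).
        import numpy as np
        from scipy import stats

        delays_ms = np.array([12.1, 15.3, 11.8, 30.2, 14.9, 13.4, 22.7, 12.6])

        shape, loc, scale = stats.lognorm.fit(delays_ms, floc=0.0)
        p95 = stats.lognorm.ppf(0.95, shape, loc=loc, scale=scale)
        print(f"predicted 95th-percentile delay: {p95:.1f} ms")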

  19. Modeling the Delivery Physiology of Distributed Learning Systems.

    ERIC Educational Resources Information Center

    Paquette, Gilbert; Rosca, Ioan

    2003-01-01

    Discusses instructional delivery models and their physiology in distributed learning systems. Highlights include building delivery models; types of delivery models, including distributed classroom, self-training on the Web, online training, communities of practice, and performance support systems; and actors (users) involved, including experts,…

  20. Magnetoacoustic tomography with magnetic induction for high-resolution bioimpedance imaging through vector source reconstruction under the static field of an MRI magnet

    PubMed Central

    Mariappan, Leo; Hu, Gang; He, Bin

    2014-01-01

    Purpose: Magnetoacoustic tomography with magnetic induction (MAT-MI) is an imaging modality to reconstruct the electrical conductivity of biological tissue based on the acoustic measurements of Lorentz force induced tissue vibration. This study presents the feasibility of the authors' new MAT-MI system and vector source imaging algorithm to perform a complete reconstruction of the conductivity distribution of real biological tissues with ultrasound spatial resolution. Methods: In the present study, using ultrasound beamformation, imaging point spread functions are designed to reconstruct the induced vector source in the object which is used to estimate the object conductivity distribution. Both numerical studies and phantom experiments are performed to demonstrate the merits of the proposed method. Also, through the numerical simulations, the full width half maximum of the imaging point spread function is calculated to estimate of the spatial resolution. The tissue phantom experiments are performed with a MAT-MI imaging system in the static field of a 9.4 T magnetic resonance imaging magnet. Results: The image reconstruction through vector beamformation in the numerical and experimental studies gives a reliable estimate of the conductivity distribution in the object with a ∼1.5 mm spatial resolution corresponding to the imaging system frequency of 500 kHz ultrasound. In addition, the experiment results suggest that MAT-MI under high static magnetic field environment is able to reconstruct images of tissue-mimicking gel phantoms and real tissue samples with reliable conductivity contrast. Conclusions: The results demonstrate that MAT-MI is able to image the electrical conductivity properties of biological tissues with better than 2 mm spatial resolution at 500 kHz, and the imaging with MAT-MI under a high static magnetic field environment is able to provide improved imaging contrast for biological tissue conductivity reconstruction. PMID:24506649

  1. High-rate dead-time corrections in a general purpose digital pulse processing system

    PubMed Central

    Abbene, Leonardo; Gerardi, Gaetano

    2015-01-01

    Dead-time losses are well recognized and studied drawbacks in counting and spectroscopic systems. In this work, the dead-time correction capabilities of a real-time digital pulse processing (DPP) system for high-rate, high-resolution radiation measurements are presented. The DPP system, through a fast and slow analysis of the output waveform from radiation detectors, is able to perform multi-parameter analysis (arrival time, pulse width, pulse height, pulse shape, etc.) at high input counting rates (ICRs), allowing accurate counting loss corrections even for variable or transient radiations. The fast analysis is used to obtain both the ICR and energy spectra with high throughput, while the slow analysis is used to obtain high-resolution energy spectra. A complete characterization of the counting capabilities, through both theoretical and experimental approaches, was performed. The dead-time modeling, the throughput curves, the experimental time-interval distributions (TIDs), and the counting uncertainty of the recorded events of both the fast and the slow channels, measured with a planar CdTe (cadmium telluride) detector, will be presented. The throughput formula of a series of two types of dead-times is also derived. The results of dead-time corrections, performed through different methods, will be reported and discussed, pointing out the error in ICR estimation and the simplicity of the procedure. Accurate ICR estimations (nonlinearity < 0.5%) were performed by using the time widths and the TIDs (using 10 ns time bin width) of the detected pulses up to 2.2 Mcps. The digital system allows, after a simple parameter setting, different and sophisticated procedures for dead-time correction, traditionally implemented in complex/dedicated systems and time-consuming set-ups. PMID:26289270
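
    For context, the two classical dead-time models relate the recorded rate m to the true input counting rate n and dead time tau. A sketch of both (the series-combination formula derived in the paper is not reproduced here; tau is an assumed value):

        # The two classical dead-time models (sketch; the paper's derived
        # series-combination throughput formula is not reproduced here).
        # n = true input counting rate (ICR), tau = dead time, m = recorded rate.
        import numpy as np

        def nonparalyzable(n: np.ndarray, tau: float) -> np.ndarray:
            return n / (1.0 + n * tau)

        def paralyzable(n: np.ndarray, tau: float) -> np.ndarray:
            return n * np.exp(-n * tau)

        n = np.logspace(4, 7, 4)          # ICR from 10 kcps to 10 Mcps
        tau = 100e-9                      # 100 ns dead time (assumed)
        print(nonparalyzable(n, tau))
        print(paralyzable(n, tau))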

  2. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies

    NASA Astrophysics Data System (ADS)

    Arunkumar, R.; Hogancamp, Kristina U.; Parsons, Michael S.; Rogers, Donna M.; Norton, Olin P.; Nagel, Brian A.; Alderman, Steven L.; Waggoner, Charles A.

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30×30×29 cm3 nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m3/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150°C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m3/min, high mass concentrations (~25 mg/m3) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

  3. High-efficiency particulate air filter test stand and aerosol generator for particle loading studies.

    PubMed

    Arunkumar, R; Hogancamp, Kristina U; Parsons, Michael S; Rogers, Donna M; Norton, Olin P; Nagel, Brian A; Alderman, Steven L; Waggoner, Charles A

    2007-08-01

    This manuscript describes the design, characterization, and operational range of a test stand and high-output aerosol generator developed to evaluate the performance of 30 x 30 x 29 cm(3) nuclear grade high-efficiency particulate air (HEPA) filters under variable, highly controlled conditions. The test stand system is operable at volumetric flow rates ranging from 1.5 to 12 standard m(3)/min. Relative humidity levels are controllable from 5%-90% and the temperature of the aerosol stream is variable from ambient to 150 degrees C. Test aerosols are produced through spray drying source material solutions that are introduced into a heated stainless steel evaporation chamber through an air-atomizing nozzle. Regulation of the particle size distribution of the aerosol challenge is achieved by varying source solution concentrations and through the use of a postgeneration cyclone. The aerosol generation system is unique in that it facilitates the testing of standard HEPA filters at and beyond rated media velocities by consistently providing, into a nominal flow of 7 standard m(3)/min, high mass concentrations (approximately 25 mg/m(3)) of dry aerosol streams having count mean diameters centered near the most penetrating particle size for HEPA filters (120-160 nm). Aerosol streams that have been generated and characterized include those derived from various concentrations of KCl, NaCl, and sucrose solutions. Additionally, a water insoluble aerosol stream in which the solid component is predominantly iron (III) has been produced. Multiple ports are available on the test stand for making simultaneous aerosol measurements upstream and downstream of the test filter. Types of filter performance related studies that can be performed using this test stand system include filter lifetime studies, filtering efficiency testing, media velocity testing, evaluations under high mass loading and high humidity conditions, and determination of the downstream particle size distributions.

  4. Performance Evaluation of Solar Blind NLOS Ultraviolet Communication Systems

    DTIC Science & Technology

    2008-12-01

    noise and signal count statistical distributions. Then we further link key system parameters such as path loss and communication bit error rate (BER... quantum noise limited photon-counting detection. These benefits can now begin to be realized based on technological advances in both miniaturized... multiplication gain of 10^5~10^7, high responsivity of 62 A/W, large detection area of a few cm2, reasonable quantum efficiency of 15%, and low dark current

  5. Volatile decision dynamics: experiments, stochastic description, intermittency control and traffic optimization

    NASA Astrophysics Data System (ADS)

    Helbing, Dirk; Schönhof, Martin; Kern, Daniel

    2002-06-01

    The coordinated and efficient distribution of limited resources by individual decisions is a fundamental, unsolved problem. When individuals compete for road capacities, time, space, money, goods, etc, they normally make decisions based on aggregate rather than complete information, such as TV news or stock market indices. In related experiments, we have observed a volatile decision dynamics and far-from-optimal payoff distributions. We have also identified methods of information presentation that can considerably improve the overall performance of the system. In order to determine optimal strategies of decision guidance by means of user-specific recommendations, a stochastic behavioural description is developed. These strategies manage to increase the adaptability to changing conditions and to reduce the deviation from the time-dependent user equilibrium, thereby enhancing the average and individual payoffs. Hence, our guidance strategies can increase the performance of all users by reducing overreaction and stabilizing the decision dynamics. These results are highly significant for predicting decision behaviour, for reaching optimal behavioural distributions by decision support systems and for information service providers. One of the promising fields of application is traffic optimization.

  6. Flexible architecture of data acquisition firmware based on multi-behaviors finite state machine

    NASA Astrophysics Data System (ADS)

    Arpaia, Pasquale; Cimmino, Pasquale

    2016-11-01

    A flexible firmware architecture for different kinds of data acquisition systems, ranging from high-precision bench instruments to low-cost wireless transducers networks, is presented. The key component is a multi-behaviors finite state machine, easily configurable to both low- and high-performance requirements, to diverse operating systems, as well as to on-line and batch measurement algorithms. The proposed solution was validated experimentally on three case studies with data acquisition architectures: (i) concentrated, in a high-precision instrument for magnetic measurements at CERN, (ii) decentralized, for telemedicine remote monitoring of patients at home, and (iii) distributed, for remote monitoring of building's energy loss.
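
    The core of such an architecture is a state machine whose transition table, rather than its code, is what gets configured. A minimal sketch (states, events, and actions are illustrative, not the firmware's actual tables):

        # Minimal sketch of a configurable finite state machine of the kind the
        # architecture describes (states, events, and actions are illustrative).
        class FSM:
            def __init__(self, initial, transitions):
                # transitions: {(state, event): (next_state, action)}
                self.state = initial
                self.transitions = transitions

            def handle(self, event):
                next_state, action = self.transitions[(self.state, event)]
                action()
                self.state = next_state

        fsm = FSM("idle", {
            ("idle", "start"): ("acquiring", lambda: print("begin acquisition")),
            ("acquiring", "stop"): ("idle", lambda: print("flush buffers")),
        })
        fsm.handle("start")
        fsm.handle("stop")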

  7. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

    As a new counter-personnel non-lethal weapon, the impulse anti-riot water cannon is difficult to evaluate: its non-steady behavior and large water-mist field complicate measurement of the droplet size distribution, which is the most important index for examining its tactical and technical performance. A method based on particle scattering, sheet-laser imaging, and high-speed processing was proposed, and a universal droplet-size measuring algorithm was designed and verified. Using this method, the droplet size distribution was measured. The measured size distributions at the same position with different timescales, at the same axial distance with different radial distances, and at the same radial distance with different axial distances were analyzed qualitatively, and plausible explanations were presented. The droplet-size measuring method proposed in this article provides a scientific and effective experimental means to ascertain the technical and tactical performance and to optimize the relevant system performance.
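
    Droplet-size measurements of this kind are usually summarized by moment-ratio diameters such as the Sauter mean diameter D32. A sketch with hypothetical measured diameters (the paper's algorithm details are not reproduced):

        # Sketch of standard droplet-size statistics from imaging measurements
        # (illustrative; the diameters below are made-up sample values).
        import numpy as np

        diameters_um = np.array([120.0, 250.0, 90.0, 310.0, 180.0])

        d32 = np.sum(diameters_um ** 3) / np.sum(diameters_um ** 2)  # Sauter mean
        d10 = diameters_um.mean()                                    # arithmetic mean
        print(f"D10 = {d10:.1f} um, D32 = {d32:.1f} um")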

  8. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    ISIS and META are two distributed systems projects at Cornell University. The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. This approach is directly supported by the ISIS Toolkit, a programming system that has been distributed to over 300 academic and industrial sites. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project is about distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor and performing load balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are presented. This approach to distributed computing, a philosophy that is believed to significantly distinguish the work from that of others in the field, is explained.

  9. Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.; Anderson, Thomas E.; Ousterhout, John K.; Patterson, David A.

    1991-01-01

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.

  10. A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data

    NASA Astrophysics Data System (ADS)

    Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.

    2015-04-01

    Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy, and significantly detailed surface information of terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data must be classified by segmentation algorithms to distinguish terrain points from other points, followed by interpolation of the selected points to turn them into DEM data. Because of the high data density, the whole procedure takes a long time and large computing resources, a problem addressed by a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for DEM generation algorithms to improve efficiency. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner were used as the original data to generate a DEM with a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then compared. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is large enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
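
    The Map/Reduce decomposition such an algorithm relies on can be sketched as follows; the cell size and the mean-elevation reduction rule are illustrative assumptions, not the paper's interpolation method:

    # Map each LiDAR point to the DEM cell containing it, then reduce each
    # cell to a single elevation (here the mean).
    from collections import defaultdict

    CELL = 1.0  # DEM grid spacing in metres (assumed)

    def map_phase(points):
        for x, y, z in points:                  # one LiDAR return per record
            yield (int(x // CELL), int(y // CELL)), z

    def reduce_phase(pairs):
        cells = defaultdict(list)
        for key, z in pairs:
            cells[key].append(z)
        return {key: sum(zs) / len(zs) for key, zs in cells.items()}

    points = [(0.2, 0.3, 31.0), (0.8, 0.1, 30.6), (1.4, 0.2, 29.9)]
    print(reduce_phase(map_phase(points)))      # {(0, 0): 30.8, (1, 0): 29.9}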

  11. Data error and highly parameterized groundwater models

    USGS Publications Warehouse

    Hill, M.C.

    2008-01-01

    Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.

  12. Dynamic Performance of High Bypass Ratio Turbine Engines With Water Ingestion

    NASA Technical Reports Server (NTRS)

    Murthy, S. N. B.

    1996-01-01

    The research on dynamic performance of high bypass turbofan engines includes studies on inlets, turbomachinery, and the total engine system operating with air-water mixtures; the water may be in vapor, droplet, or film form, and their combinations. Prediction codes (WISGS, WINCOF, WINCOF-1, WINCLR, and Transient Engine Performance Code) for performance changes, as well as changes in blade-casing clearance, have been established and demonstrated in application to actual, generic engines. In view of the continuous changes in water distribution in turbomachinery, the performance of both components and the total engine system must be determined in a time-dependent mode; hence, the determination of clearance changes also requires a time-dependent approach. In general, the performance and clearance changes cannot be scaled either with respect to operating or ingestion conditions. Removal of water prior to phase change is the most effective means of avoiding ingestion effects. Sufficient background has been established to perform definitive, full scale tests on a set of components and a complete engine to establish engine control and operability with various air-water vapor-water mixtures.

  13. Development of a Temperature Sensor for Jet Engine and Space Mission Applications

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik; Culley, Dennis

    2008-01-01

    Electronics for distributed turbine engine control and space exploration missions are expected to encounter extreme temperatures and wide thermal swings. In particular, circuits deployed in a jet engine compartment are likely to be exposed to temperatures well exceeding 150 °C. To meet this requirement, efforts exist at the NASA Glenn Research Center (GRC), in support of the Fundamental Aeronautics Program/Subsonic Fixed Wing Project, to develop temperature sensors geared for use in high temperature environments. The sensor and associated circuitry need to be located in the engine compartment under a distributed control architecture to simplify system design, improve reliability, and ease signal multiplexing. Several circuits were designed using commercial-off-the-shelf as well as newly-developed components to perform temperature sensing at high temperatures. The temperature-sensing circuits are described along with results pertaining to their performance under extreme temperatures.

  14. Multilevel photonic modules for millimeter-wave phased-array antennas

    NASA Astrophysics Data System (ADS)

    Paolella, Arthur C.; Joshi, Abhay M.; Wright, James G.; Coryell, Louis A.

    1998-11-01

    Optical signal distribution for phased array antennas in communication systems is advantageous to designers. By distributing the microwave and millimeter-wave signals through optical fiber there is the potential for improved performance and lower weight. In addition, when applied to communication satellites, this weight saving translates into substantially reduced launch costs. The goal of the Phase I Small Business Innovation Research (SBIR) Program is the development of multi-level photonic modules for phased array antennas. The proposed module will ultimately comprise a monolithic InGaAs/InP p-i-n photodetector/p-HEMT power amplifier opto-electronic integrated circuit, with 44 GHz bandwidth and 50 mW output power, integrated with a planar antenna. The photodetector will have high quantum efficiency and will be front-illuminated, thereby improving optical performance. Under Phase I a module was developed using standard MIC technology with a high-frequency coaxial feed interconnect.

  15. A Simple XML Producer-Consumer Protocol

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    There are many different projects from government, academia, and industry that provide services for delivering events in distributed environments. The problem with these event services is that they are not general enough to support all uses and they speak different protocols so that they cannot interoperate. We require such interoperability when we, for example, wish to analyze the performance of an application in a distributed environment. Such an analysis might require performance information from the application, computer systems, networks, and scientific instruments. In this work we propose and evaluate a standard XML-based protocol for the transmission of events in distributed systems. One recent trend in government and academic research is the development and deployment of computational grids. Computational grids are large-scale distributed systems that typically consist of high-performance compute, storage, and networking resources. Examples of such computational grids are the DOE Science Grid, the NASA Information Power Grid (IPG), and the NSF Partnerships for Advanced Computational Infrastructure (PACIs). The major effort to deploy these grids is in the area of developing the software services to allow users to execute applications on these large and diverse sets of resources. These services include security, execution of remote applications, managing remote data, access to information about resources and services, and so on. There are several toolkits for providing these services, such as Globus, Legion, and Condor. As part of these efforts to develop computational grids, the Global Grid Forum is working to standardize the protocols and APIs used by various grid services. This standardization will allow interoperability between the client and server software of the toolkits that are providing the grid services. The goal of the Performance Working Group of the Grid Forum is to standardize protocols and representations related to the storage and distribution of performance data. These standard protocols and representations must support tasks such as profiling parallel applications, monitoring the status of computers and networks, and monitoring the performance of services provided by a computational grid. This paper describes a proposed protocol and data representation for the exchange of events in a distributed system. The protocol exchanges messages formatted in XML and it can be layered atop any low-level communication protocol such as TCP or UDP. Further, we describe Java and C++ implementations of this protocol and discuss their performance. The next section will provide some further background information. Section 3 describes the main communication patterns of our protocol. Section 4 describes how we represent events and related information using XML. Section 5 describes our protocol and Section 6 discusses the performance of two implementations of the protocol. Finally, an appendix provides the XML Schema definition of our protocol and event information.
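
    For a feel of the approach, the hedged sketch below builds one XML-encoded performance event with Python's standard library; the element and attribute names are invented for illustration and are not the schema defined in the paper's appendix:

    # Illustrative XML performance event (invented names, not the paper's schema).
    import xml.etree.ElementTree as ET

    event = ET.Element("event", name="cpu.load", source="node42.example.org")
    ET.SubElement(event, "timestamp").text = "2001-06-01T12:00:00Z"
    ET.SubElement(event, "value", units="percent").text = "87.5"
    print(ET.tostring(event, encoding="unicode"))
    # A producer would write such a message over TCP or UDP; a consumer parses
    # it back with ET.fromstring() and routes it to subscribers.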

  16. Generalized two-dimensional (2D) linear system analysis metrics (GMTF, GDQE) for digital radiography systems including the effect of focal spot, magnification, scatter, and detector characteristics.

    PubMed

    Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen

    2010-03-01

    The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
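
    For reference, one commonly used cascade approximation for such a generalized MTF, written here as a sketch (the paper's exact 2D formulation may differ), combines the detector MTF, the focal-spot MTF scaled by the geometric magnification m, and a scatter term with scatter fraction ρ:

    \[
    \mathrm{GMTF}(u,v) \;\approx\; \mathrm{MTF}_{\mathrm{det}}(u,v)\,
    \mathrm{MTF}_{\mathrm{focal}}\!\left(\tfrac{m-1}{m}\,u,\ \tfrac{m-1}{m}\,v\right)
    \left[(1-\rho) + \rho\,\mathrm{MTF}_{\mathrm{scatter}}(u,v)\right]
    \]

    with spatial frequencies (u, v) referred to the detector plane; the GDQE then follows by inserting the GMTF and the generalized noise power spectrum into the usual DQE expression.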

  17. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  18. The Distributed Wind Cost Taxonomy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Forsyth, Trudy; Jimenez, Tony; Preus, Robert

    To date, there has been no standard method or tool to analyze the installed and operational costs for distributed wind turbine systems. This report describes the development of a classification system, or taxonomy, for distributed wind turbine project costs. The taxonomy establishes a framework to help collect, sort, and compare distributed wind cost data that mirrors how the industry categorizes information. The taxonomy organizes costs so they can be aggregated from installers, developers, vendors, and other sources without losing cost details. Developing a peer-reviewed taxonomy is valuable to industry stakeholders because a common understanding of the details of distributed wind turbine costs and balance-of-station costs is a first step toward identifying potential high-value cost reduction opportunities. Addressing cost reduction potential can help increase distributed wind's competitiveness and propel the U.S. distributed wind industry forward. The taxonomy can also be used to perform cost comparisons between technologies and track trends for distributed wind industry costs in the future. As an initial application and piloting of the taxonomy, preliminary cost data were collected for projects of different sizes and from different regions across the contiguous United States. Following the methods described in this report, these data are placed into the established cost categories.

  19. Arctic Boreal Vulnerability Experiment (ABoVE) Science Cloud

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Schnase, J. L.; McInerney, M.; Webster, W. P.; Sinno, S.; Thompson, J. H.; Griffith, P. C.; Hoy, E.; Carroll, M.

    2014-12-01

    The effects of climate change are being revealed at alarming rates in the Arctic and Boreal regions of the planet. NASA's Terrestrial Ecology Program has launched a major field campaign to study these effects over the next 5 to 8 years. The Arctic Boreal Vulnerability Experiment (ABoVE) will challenge scientists to take measurements in the field, study remote observations, and even run models to better understand the impacts of a rapidly changing climate for areas of Alaska and western Canada. The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center (GSFC) has partnered with the Terrestrial Ecology Program to create a science cloud designed for this field campaign - the ABoVE Science Cloud. The cloud combines traditional high performance computing with emerging technologies to create an environment specifically designed for large-scale climate analytics. The ABoVE Science Cloud utilizes (1) virtualized high-speed InfiniBand networks, (2) a combination of high-performance file systems and object storage, and (3) virtual system environments tailored for data intensive, science applications. At the center of the architecture is a large object storage environment, much like a traditional high-performance file system, that supports data proximal processing using technologies like MapReduce on a Hadoop Distributed File System (HDFS). Surrounding the storage is a cloud of high performance compute resources with many processing cores and large memory coupled to the storage through an InfiniBand network. Virtual systems can be tailored to a specific scientist and provisioned on the compute resources with extremely high-speed network connectivity to the storage and to other virtual systems. In this talk, we will present the architectural components of the science cloud and examples of how it is being used to meet the needs of the ABoVE campaign. In our experience, the science cloud approach significantly lowers the barriers and risks to organizations that require high performance computing solutions and provides the NCCS with the agility required to meet our customers' rapidly increasing and evolving requirements.

  20. Microwave system performance for a solar power satellite during startup/shutdown operations

    NASA Technical Reports Server (NTRS)

    Arndt, G. D.; Berlin, L. A.

    1979-01-01

    The paper investigates the system performance and antenna characteristics under startup/shutdown conditions for the high power beam from a solar power satellite. Attention is given to the present microwave system reference configuration together with the dc power distribution system in the solar array and in the antenna. The pattern characteristics for the main beam, sidelobes, and grating lobes are examined for eight types of energizing configurations which include: random sequences, two types of concentric circles, and three types of line strips. In conclusion, it is noted that a proper choice of sequences should not cause environmental problems due to increased microwave radiation levels during the short time periods of energizing and de-energizing the antenna.

  1. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct detection camera system enabling single-electron detection at sub-second exposure times. We developed a method to perform the interference experiment using an asymmetric double slit fabricated with a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, the pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high dose for calculation of the electron probability distribution and low dose for the distribution of individual electrons. Finally, we present the distribution of single electrons as a composite image, color-coded according to the above three types of experiments.

  2. A CO trace gas detection system based on continuous wave DFB-QCL

    NASA Astrophysics Data System (ADS)

    Dang, Jingmin; Yu, Haiye; Sun, Yujing; Wang, Yiding

    2017-05-01

    A compact and mobile system was demonstrated for the detection of carbon monoxide (CO) at trace level. This system adopted a high-power, continuous wave (CW), distributed feedback quantum cascade laser (DFB-QCL) operating at ∼22 °C as the excitation source. Wavelength modulation spectroscopy (WMS) with second harmonic detection was used to isolate the complex, overlapping spectral absorption features typical of ambient pressures and to achieve excellent specificity and high detection sensitivity. For the selected P(11) absorption line of the CO molecule, located at 2099.083 cm-1, a limit of detection (LoD) of 26 ppb by volume (ppbv) at atmospheric pressure was achieved with a 1 s acquisition time. Allan deviation analysis was performed to investigate the long-term performance of the CO detection system, and a measurement precision of 3.4 ppbv was observed at an optimal integration time of approximately 114 s, which verified the reliable and robust operation of the developed system.
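
    The Allan-deviation analysis used to locate the optimal integration time can be sketched as follows, with synthetic white noise standing in for the measured CO time series:

    # Minimal Allan-deviation sketch; data and noise level are synthetic.
    import numpy as np

    def allan_deviation(y, tau0, m_values):
        """y: samples at spacing tau0 [s]; m_values: averaging block sizes."""
        out = []
        for m in m_values:
            n = len(y) // m
            means = y[: n * m].reshape(n, m).mean(axis=1)  # tau = m*tau0 means
            avar = 0.5 * np.mean(np.diff(means) ** 2)      # Allan variance
            out.append((m * tau0, np.sqrt(avar)))
        return out

    rng = np.random.default_rng(1)
    series = 0.4 + rng.normal(0, 26e-3, 20000)   # ppm-scale noise, 1 s samples
    for tau, adev in allan_deviation(series, 1.0, [1, 10, 100, 1000]):
        print(f"tau = {tau:6.0f} s   sigma = {adev:.4f} ppm")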

  3. The Mass Distribution of Stellar-mass Black Holes

    NASA Astrophysics Data System (ADS)

    Farr, Will M.; Sravan, Niharika; Cantrell, Andrew; Kreidberg, Laura; Bailyn, Charles D.; Mandel, Ilya; Kalogera, Vicky

    2011-11-01

    We perform a Bayesian analysis of the mass distribution of stellar-mass black holes using the observed masses of 15 low-mass X-ray binary systems undergoing Roche lobe overflow and 5 high-mass, wind-fed X-ray binary systems. Using Markov Chain Monte Carlo calculations, we model the mass distribution both parametrically—as a power law, exponential, Gaussian, combination of two Gaussians, or log-normal distribution—and non-parametrically—as histograms with varying numbers of bins. We provide confidence bounds on the shape of the mass distribution in the context of each model and compare the models with each other by calculating their relative Bayesian evidence as supported by the measurements, taking into account the number of degrees of freedom of each model. The mass distribution of the low-mass systems is best fit by a power law, while the distribution of the combined sample is best fit by the exponential model. This difference indicates that the low-mass subsample is not consistent with being drawn from the distribution of the combined population. We examine the existence of a "gap" between the most massive neutron stars and the least massive black holes by considering the value, M 1%, of the 1% quantile from each black hole mass distribution as the lower bound of black hole masses. Our analysis generates posterior distributions for M 1%; the best model (the power law) fitted to the low-mass systems has a distribution of lower bounds with M 1%>4.3 M sun with 90% confidence, while the best model (the exponential) fitted to all 20 systems has M 1%>4.5 M sun with 90% confidence. We conclude that our sample of black hole masses provides strong evidence of a gap between the maximum neutron star mass and the lower bound on black hole masses. Our results on the low-mass sample are in qualitative agreement with those of Ozel et al., although our broad model selection analysis more reliably reveals the best-fit quantitative description of the underlying mass distribution. The results on the combined sample of low- and high-mass systems are in qualitative agreement with Fryer & Kalogera, although the presence of a mass gap remains theoretically unexplained.
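
    A toy version of the parametric approach, shown below under simplifying assumptions (synthetic masses, a single power-law slope with a flat prior, hard mass bounds), illustrates the Metropolis sampling behind such posterior estimates:

    # Toy Metropolis sampler for p(M) proportional to M^(-alpha) on [MMIN, MMAX].
    # Synthetic data; this is an illustration, not the paper's analysis code.
    import numpy as np

    rng = np.random.default_rng(2)
    masses = rng.uniform(5.0, 12.0, 15)      # stand-in black-hole masses (Msun)
    MMIN, MMAX = 4.0, 40.0

    def log_like(alpha):
        if alpha <= 0:
            return -np.inf
        # normalisation of the power law on [MMIN, MMAX] (alpha != 1 assumed)
        norm = (MMAX**(1 - alpha) - MMIN**(1 - alpha)) / (1 - alpha)
        return np.sum(-alpha * np.log(masses)) - len(masses) * np.log(norm)

    alpha, chain = 2.0, []
    for _ in range(20000):
        prop = alpha + rng.normal(0, 0.2)
        if np.log(rng.random()) < log_like(prop) - log_like(alpha):
            alpha = prop
        chain.append(alpha)
    print("posterior mean slope:", np.mean(chain[5000:]))  # after burn-in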

  4. An evaluation to design high performance pinhole array detector module for four head SPECT: a simulation study

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2014-09-01

    The purpose of this study is to derive optimized parameters for a detector module employing an off-the-shelf X-ray camera and a pinhole array collimator applicable for a range of different SPECT systems. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were performed to estimate the performance of the pinhole array collimators and were compared to that of low energy high resolution (LEHR) parallel-hole collimator in a four head SPECT system. A detector module was simulated to have 48 mm by 48 mm active area along with 1mm, 1.6mm and 2 mm pinhole aperture sizes at 0.48 mm pitch on a tungsten plate. Perpendicular lead septa were employed to verify overlapping and non-overlapping projections against a proper acceptance angle without lead septa. A uniform shape cylindrical water phantom was used to evaluate the performance of the proposed four head SPECT system of the pinhole array detector module. For each head, 100 pinhole configurations were evaluated based on sensitivity and detection efficiency for 140 keV γ-rays, and compared to LEHR parallel-hole collimator. SPECT images were reconstructed based on filtered back projection (FBP) algorithm where neither scatter nor attenuation corrections were performed. A better reconstruction algorithm development for this specific system is in progress. Nevertheless, activity distribution was well visualized using the backprojection algorithm. In this study, we have evaluated several quantitative and comparative analyses for a pinhole array imaging system providing high detection efficiency and better system sensitivity over a large FOV, comparing to the conventional four head SPECT system. The proposed detector module is expected to provide improved performance in various SPECT imaging.

  5. NREL and Panasonic | Energy Systems Integration Facility | NREL

    Science.gov Websites

    with distribution system modeling for the first time. The tool combines NREL's building energy system distribution system models, and Panasonic will perform cost-benefit analyses. Along with the creation of the

  6. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE PAGES

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.; ...

    2017-08-17

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.
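
    The core of a linear state estimator can be sketched in a few lines: with PMU-style measurements that are linear in the state, z = Hx + e, the weighted least-squares estimate has a closed form. The small 3-bus measurement matrix and weights below are invented for illustration; a real DLSE would assemble H from the network topology of each area:

    # Weighted least-squares solution of a linear measurement model.
    import numpy as np

    H = np.array([[1.0,  0.0,  0.0],    # direct measurement at bus 1
                  [1.0, -1.0,  0.0],    # branch quantity between buses 1-2
                  [0.0,  1.0, -1.0],    # branch quantity between buses 2-3
                  [0.0,  0.0,  1.0]])   # direct measurement at bus 3
    W = np.diag([100.0, 50.0, 50.0, 100.0])  # inverse measurement variances
    z = np.array([1.02, 0.03, 0.02, 0.97])   # measurement vector

    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
    print("estimated state:", x_hat)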

  7. Graph Partitioning for Parallel Applications in Heterogeneous Grid Environments

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Kumar, Shailendra; Das, Sajal K.; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The problem of partitioning irregular graphs and meshes for parallel computations on homogeneous systems has been extensively studied. However, these partitioning schemes fail when the target system architecture exhibits heterogeneity in resource characteristics. With the emergence of technologies such as the Grid, it is imperative to study the partitioning problem taking into consideration the differing capabilities of such distributed heterogeneous systems. In our model, the heterogeneous system consists of processors with varying processing power and an underlying non-uniform communication network. We present in this paper a novel multilevel partitioning scheme for irregular graphs and meshes that takes into account issues pertinent to Grid computing environments. Our partitioning algorithm, called MiniMax, generates and maps partitions onto a heterogeneous system with the objective of minimizing the maximum execution time of the parallel distributed application. For an experimental performance study, we have considered both a realistic mesh problem from NASA as well as synthetic workloads. Simulation results demonstrate that MiniMax generates high quality partitions for various classes of applications targeted for parallel execution in a distributed heterogeneous environment.
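
    The mapping objective can be illustrated with a greedy sketch: assign partitions (with given compute weights) to heterogeneous processors so that the maximum estimated finish time stays small. The actual MiniMax algorithm is multilevel and also models non-uniform communication; this sketch captures only the load-mapping idea:

    # Greedy minimisation of the maximum estimated execution time.
    import heapq

    def minimax_map(part_weights, proc_speeds):
        # heap of (estimated finish time, processor id)
        heap = [(0.0, p) for p in range(len(proc_speeds))]
        heapq.heapify(heap)
        assignment = {}
        for part, w in sorted(enumerate(part_weights), key=lambda t: -t[1]):
            t, p = heapq.heappop(heap)          # least-loaded processor first
            assignment[part] = p
            heapq.heappush(heap, (t + w / proc_speeds[p], p))
        return assignment, max(t for t, _ in heap)

    assign, makespan = minimax_map([8, 6, 5, 3, 2], proc_speeds=[2.0, 1.0, 0.5])
    print(assign, "max time ~=", round(makespan, 2))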

  8. Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Ren; Srivastava, Anurag K.; Bakken, David E.

    Intermittency of wind energy poses a great challenge for power system operation and control. Wind curtailment might be necessary at certain operating conditions to keep line flows within limits. A Remedial Action Scheme (RAS) offers a quick control action mechanism to maintain the reliability and security of power system operation with high wind energy integration. In this paper, a new RAS is developed to maximize wind energy integration without compromising the security and reliability of the power system, based on specific utility requirements. A new Distributed Linear State Estimation (DLSE) is also developed to provide fast and accurate input data for the proposed RAS. A distributed computational architecture is designed to guarantee the robustness of the cyber system supporting the RAS and DLSE implementation. The proposed RAS and DLSE are validated using the modified IEEE 118-bus system. Simulation results demonstrate the satisfactory performance of the DLSE and the effectiveness of the RAS. A real-time cyber-physical testbed has been utilized to validate the cyber-resiliency of the developed RAS against computational node failure.

  9. Finite-dimensional approximation for optimal fixed-order compensation of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Rosen, I. G.

    1988-01-01

    In controlling distributed parameter systems it is often desirable to obtain low-order, finite-dimensional controllers in order to minimize real-time computational requirements. Standard approaches to this problem employ model/controller reduction techniques in conjunction with LQG theory. In this paper we consider the finite-dimensional approximation of the infinite-dimensional Bernstein/Hyland optimal projection theory. This approach yields fixed-finite-order controllers which are optimal with respect to high-order, approximating, finite-dimensional plant models. The technique is illustrated by computing a sequence of first-order controllers for one-dimensional, single-input/single-output, parabolic (heat/diffusion) and hereditary systems using spline-based, Ritz-Galerkin, finite element approximation. Numerical studies indicate convergence of the feedback gains with less than 2 percent performance degradation over full-order LQG controllers for the parabolic system and 10 percent degradation for the hereditary system.

  10. Space station electrical power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.; Twombly, Mark A.

    1988-01-01

    ARINC Research Corporation performed a preliminary reliability, availability, and maintainability (RAM) analysis of the NASA space station Electrical Power System (EPS). The analysis was performed using the ARINC Research developed UNIRAM RAM assessment methodology and software program. The analysis was performed in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs, indicating which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.

  11. Assessment of distributed photovoltaic electric-power systems

    NASA Astrophysics Data System (ADS)

    Neal, R. W.; Deduck, P. F.; Marshall, R. N.

    1982-10-01

    A methodology was developed to assess the potential impacts of distributed photovoltaic (PV) systems on electric utility systems, including subtransmission and distribution networks, and was applied to several illustrative examples. The investigations focused upon five specific utilities. Impacts upon utility system operations and generation mix were assessed using accepted utility planning methods in combination with models that simulate PV system performance and life-cycle economics. Impacts on the utility subtransmission and distribution systems were also investigated. The economic potential of distributed PV systems was investigated for ownership by the utility as well as by the individual utility customer.

  12. Modeling of luminance distribution in CAVE-type virtual reality systems

    NASA Astrophysics Data System (ADS)

    Meironke, Michał; Mazikowski, Adam

    2017-08-01

    At present, among the most advanced virtual reality systems are CAVE-type (Cave Automatic Virtual Environment) installations. Such systems usually consist of four, five, or six projection screens; in the case of six screens, they are arranged in the form of a cube. Providing the user with a high level of immersion in such systems depends largely on the optical properties of the system. The modeling of physical phenomena nowadays plays a major role in most fields of science and technology, allowing the operation of a device to be simulated without any changes to its physical construction. In this paper the distribution of luminance in CAVE-type virtual reality systems was modelled. Calculations were performed for a model of a 6-walled CAVE-type installation, based on the Immersive 3D Visualization Laboratory, situated at the Faculty of Electronics, Telecommunications and Informatics at the Gdańsk University of Technology. Tests were carried out for two different scattering distributions of the screen material in order to check how these characteristics influence the luminance distribution of the whole CAVE. The basic assumptions and simplifications of the modeled CAVE-type installation are presented together with the results, followed by a brief discussion of the results and the usefulness of the developed model.

  13. Development of on-line monitoring system for Nuclear Power Plant (NPP) using neuro-expert, noise analysis, and modified neural networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Subekti, M.; Ohno, T.

    2006-07-01

    A neuro-expert system was utilized in previous research on monitoring systems for a Pressurized Water Reactor (PWR). This research improved the monitoring system by utilizing the neuro-expert approach, conventional noise analysis, and modified neural networks for capability extension. The parallel application of these methods required a distributed computer-network architecture to perform real-time tasks. The research aimed to improve the previous monitoring system, which could detect sensor degradation, and to perform a monitoring demonstration in the High Temperature Engineering Test Reactor (HTTR). The monitoring system under development, based on methods that have been tested using data from an online PWR simulator as well as RSG-GAS (a 30 MW research reactor in Indonesia), will be applied in HTTR for more complex monitoring. (authors)

  14. A multiprocessing architecture for real-time monitoring

    NASA Technical Reports Server (NTRS)

    Schmidt, James L.; Kao, Simon M.; Read, Jackson Y.; Weitzenkamp, Scott M.; Laffey, Thomas J.

    1988-01-01

    A multitasking architecture for performing real-time monitoring and analysis using knowledge-based problem solving techniques is described. To handle asynchronous inputs and perform in real time, the system consists of three or more distributed processes which run concurrently and communicate via a message passing scheme. The Data Management Process acquires, compresses, and routes the incoming sensor data to other processes. The Inference Process consists of a high performance inference engine that performs a real-time analysis on the state and health of the physical system. The I/O Process receives sensor data from the Data Management Process and status messages and recommendations from the Inference Process, updates its graphical displays in real time, and acts as the interface to the console operator. The distributed architecture has been interfaced to an actual spacecraft (NASA's Hubble Space Telescope) and is able to process the incoming telemetry in real time (i.e., several hundred data changes per second). The system is being used in two locations for different purposes: (1) in Sunnyvale, California, at the Space Telescope Test Control Center, it is used in preflight testing of the vehicle; and (2) in Greenbelt, Maryland, at NASA/Goddard, it is being used on an experimental basis in flight operations for health and safety monitoring.
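
    A toy version of this three-process pipeline, using Python queues as a stand-in for the message-passing scheme and a trivial threshold rule in place of the inference engine, might look like this:

    # Three cooperating processes: data management -> inference -> I/O console.
    from multiprocessing import Process, Queue

    def data_manager(out_q):
        for sample in [12.1, 12.2, 99.9]:          # stand-in telemetry values
            out_q.put(("sensor", sample))
        out_q.put(("eof", None))

    def inference(in_q, out_q):
        while True:
            kind, value = in_q.get()
            if kind == "eof":
                out_q.put(("eof", None)); break
            status = "ALARM" if value > 50 else "OK"   # trivial rule base
            out_q.put(("status", (value, status)))

    def io_console(in_q):
        while True:
            kind, msg = in_q.get()
            if kind == "eof":
                break
            print("display:", msg)

    if __name__ == "__main__":
        q1, q2 = Queue(), Queue()
        procs = [Process(target=data_manager, args=(q1,)),
                 Process(target=inference, args=(q1, q2)),
                 Process(target=io_console, args=(q2,))]
        for p in procs: p.start()
        for p in procs: p.join()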

  15. Methods to Determine Recommended Feeder-Wide Advanced Inverter Settings for Improving Distribution System Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rylander, Matthew; Reno, Matthew J.; Quiroz, Jimmy E.

    This paper describes methods that a distribution engineer could use to determine advanced inverter settings to improve distribution system performance. These settings are for fixed power factor, volt-var, and volt-watt functionality. Depending on the level of detail desired, different methods are proposed to determine either single settings applicable to all advanced inverters on a feeder or unique settings for each individual inverter. Seven distinctly different utility distribution feeders are analyzed to simulate the potential benefit, in terms of hosting capacity, system losses, and reactive power, attained with each method of determining the advanced inverter settings.
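
    As an example of one of these functions, a piecewise-linear volt-var characteristic of the kind such methods would parameterize can be sketched as below; the breakpoints are illustrative values, not recommended settings from the paper:

    # Piecewise-linear volt-var curve (illustrative breakpoints).
    def volt_var(v_pu, v1=0.95, v2=0.98, v3=1.02, v4=1.05, q_max=0.44):
        """Reactive power command in per-unit of rated VA (+ = injection)."""
        if v_pu <= v1:
            return q_max
        if v_pu < v2:                  # ramp from full injection down to zero
            return q_max * (v2 - v_pu) / (v2 - v1)
        if v_pu <= v3:                 # deadband
            return 0.0
        if v_pu < v4:                  # ramp from zero to full absorption
            return -q_max * (v_pu - v3) / (v4 - v3)
        return -q_max

    for v in (0.94, 0.97, 1.00, 1.04, 1.06):
        print(v, round(volt_var(v), 3))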

  16. Towards multifocal ultrasonic neural stimulation: pattern generation algorithms

    NASA Astrophysics Data System (ADS)

    Hertzberg, Yoni; Naor, Omer; Volovick, Alexander; Shoham, Shy

    2010-10-01

    Focused ultrasound (FUS) waves directed onto neural structures have been shown to dynamically modulate neural activity and excitability, opening up a range of possible systems and applications where the non-invasiveness, safety, mm-range resolution and other characteristics of FUS are advantageous. As in other neuro-stimulation and modulation modalities, the highly distributed and parallel nature of neural systems and neural information processing call for the development of appropriately patterned stimulation strategies which could simultaneously address multiple sites in flexible patterns. Here, we study the generation of sparse multi-focal ultrasonic distributions using phase-only modulation in ultrasonic phased arrays. We analyse the relative performance of an existing algorithm for generating multifocal ultrasonic distributions and new algorithms that we adapt from the field of optical digital holography, and find that generally the weighted Gerchberg-Saxton algorithm leads to overall superior efficiency and uniformity in the focal spots, without significantly increasing the computational burden. By combining phased-array FUS and magnetic-resonance thermometry we experimentally demonstrate the simultaneous generation of tightly focused multifocal distributions in a tissue phantom, a first step towards patterned FUS neuro-modulation systems and devices.
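
    A compact sketch of the weighted Gerchberg-Saxton iteration is shown below for a phase-only array, using a plain FFT as the propagation model, which is a simplification of the ultrasonic forward model used in the paper:

    # Weighted Gerchberg-Saxton (GSW) for a phase-only element distribution.
    import numpy as np

    def gsw(target, iters=50):
        weights = target.astype(float).copy()
        field = np.exp(2j * np.pi * np.random.default_rng(3).random(target.shape))
        mask = target > 0
        for _ in range(iters):
            focal = np.fft.fft2(field)           # propagate to focal plane
            amp = np.abs(focal)
            # re-weight each focus by its shortfall to equalise spot power
            weights[mask] *= amp[mask].mean() / np.maximum(amp[mask], 1e-12)
            focal = weights * np.exp(1j * np.angle(focal))
            field = np.exp(1j * np.angle(np.fft.ifft2(focal)))  # phase-only
        return np.angle(field)

    target = np.zeros((64, 64)); target[16, 16] = target[40, 48] = 1.0
    phases = gsw(target)
    print("phase mask shape:", phases.shape)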

  17. DG Planning with Amalgamation of Operational and Reliability Considerations

    NASA Astrophysics Data System (ADS)

    Battu, Neelakanteshwar Rao; Abhyankar, A. R.; Senroy, Nilanjan

    2016-04-01

    Distributed generation has been playing a vital role in dealing with issues related to distribution systems. This paper presents an approach which provides the policy maker with a set of solutions for DG placement to optimize reliability and real power loss of the system. The optimal location of a distributed generator is evaluated based on performance indices derived for a reliability index and real power loss. The proposed approach is applied to a 15-bus radial distribution system and an 18-bus radial distribution system, with conventional and wind distributed generators individually.

  18. Program to develop a performance and heat load prediction system for multistage turbines

    NASA Technical Reports Server (NTRS)

    Sharma, OM

    1994-01-01

    Flows in low-aspect-ratio turbines, such as the SSME fuel turbine, are three dimensional and highly unsteady due to the relative motion of adjacent airfoil rows and the circumferential and spanwise gradients in total pressure and temperature. The systems used to design these machines, however, are based on the assumption that the flow is steady. The codes utilized in these design systems are calibrated against turbine rig and engine data through the use of empirical correlations and experience factors. For high-aspect-ratio turbines, these codes yield reasonably accurate estimates of flow and temperature distributions. However, future design trends will see lower aspect ratios (reduced number of parts) and higher inlet temperatures, which will result in increased three-dimensionality and flow unsteadiness in turbines. Analysis of recently acquired data indicates that temperature streaks and secondary flows generated in combustors and upstream airfoils can have a large impact on the time-averaged temperature and angle distributions in downstream airfoil rows.

  19. Linac cryogenic distribution system maintenance and upgrades at JLab

    NASA Astrophysics Data System (ADS)

    Dixon, K.; Wright, M.; Ganni, V.

    2014-01-01

    The Central Helium Liquefier (CHL) distribution system to the CEBAF and FEL linacs at Jefferson Lab (JLab) experienced a planned warm up during the late summer and fall of 2012 for the first time after its commissioning in 1991. Various maintenance and modifications were performed to support high beam availability to the experimental users, meet 10 CFR 851 requirements for pressure systems, address operational issues, and prepare the cryogenic interfaces for the high-gradient cryomodules needed for the 12 GeV upgrade. Cryogenic maintenance and installation work had to be coordinated with other activities in the linacs and compete for manpower from other department installation activities. With less than a quarter of the gas storage capacity available to handle the boil-off from the more than 40 cryomodules, 35,000 Nm3 of helium was re-liquefied and shipped to a vendor via a liquid tanker trailer. Nearly 200 u-tubes had to be removed and stored while seals were replaced on related equipment such as vacuum pump outs, bayonet isolation and process valves.

  20. Nonimaging optical designs for maximum-power-density remote irradiation.

    PubMed

    Feuermann, D; Gordon, J M; Ries, H

    1998-04-01

    Designs for flexible, high-power-density, remote irradiation systems are presented. Applications include industrial infrared heating such as in semiconductor processing, alternatives to laser light for certain medical procedures, and general remote high-brightness lighting. The high power densities inherent to the small active radiating regions of conventional metal-halide, halogen, xenon, microwave-sulfur, and related lamps can be restored with nonimaging concentrators with little loss of power. These high flux levels can then be transported at high transmissivity with light channels such as optical fibers or lightpipes, and reshaped into luminaires that can deliver prescribed angular and spatial flux distributions onto desired targets. Details for nominally two- and three-dimensional systems are developed, along with estimates of optical performance.

  1. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis.

    PubMed

    Lee, Ki-Sun; Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan; Lee, Jeong-Yol

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of the high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core materials. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems.

  2. Biomechanical Evaluation of a Tooth Restored with High Performance Polymer PEKK Post-Core System: A 3D Finite Element Analysis

    PubMed Central

    Shin, Joo-Hee; Kim, Jong-Eun; Kim, Jee-Hwan; Lee, Won-Chang; Shin, Sang-Wan

    2017-01-01

    The aim of this study was to evaluate the biomechanical behavior and long-term safety of the high performance polymer PEKK as an intraradicular dental post-core material through comparative finite element analysis (FEA) with other conventional post-core materials. A 3D FEA model of a maxillary central incisor was constructed. A cyclic loading force of 50 N was applied at an angle of 45° to the longitudinal axis of the tooth at the palatal surface of the crown. For comparison with traditionally used post-core materials, three materials (gold, fiberglass, and PEKK) were simulated to determine their post-core properties. PEKK, with a lower elastic modulus than root dentin, showed comparably high failure resistance and a more favorable stress distribution than conventional post-core materials. However, the PEKK post-core system showed a higher probability of debonding and crown failure under long-term cyclic loading than the metal or fiberglass post-core systems. PMID:28386547

  3. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    PubMed Central

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992
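
    Partitioning by a spatial index can be sketched with a Morton (Z-order) key that interleaves cuboid coordinates, so spatially adjacent data tend to land on the same node; the bit widths and node count below are arbitrary choices, not the system's actual parameters:

    # Morton (Z-order) keys for sharding spatial data across cluster nodes.
    def morton3d(x, y, z, bits=21):
        key = 0
        for i in range(bits):
            key |= ((x >> i) & 1) << (3 * i)
            key |= ((y >> i) & 1) << (3 * i + 1)
            key |= ((z >> i) & 1) << (3 * i + 2)
        return key

    def node_for(x, y, z, n_nodes=8, shard_bits=12):
        # neighbouring cuboids share high-order key bits, so shard on those
        # to keep spatially adjacent data on the same node
        return (morton3d(x, y, z) >> shard_bits) % n_nodes

    print(node_for(1024, 2048, 64), node_for(1025, 2048, 64))  # same node here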

  4. A new instantaneous torque control of PM synchronous motor for high-performance direct-drive applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, S.K.; Kim, H.S.; Kim, C.G.

    1998-05-01

    A new instantaneous torque-control strategy is presented for high-performance control of a permanent magnet (PM) synchronous motor. In order to deal with the torque pulsating problem of a PM synchronous motor in a low-speed region, new torque estimation and control techniques are proposed. The linkage flux of a PM synchronous motor is estimated using a model reference adaptive system technique, and the developed torque is instantaneously controlled by the proposed torque controller, combining a variable structure control (VSC) with space-vector pulse-width modulation (PWM). The proposed control provides the advantage of reducing the torque pulsation caused by the nonsinusoidal flux distribution. This control strategy is applied to a high-torque PM synchronous motor drive system for direct-drive applications and implemented in software on a TMS320C30 digital signal processor (DSP). Simulations and experiments were carried out for this system, and the results demonstrate the effectiveness of the proposed control.

  5. An environmental testing facility for Space Station Freedom power management and distribution hardware

    NASA Technical Reports Server (NTRS)

    Jackola, Arthur S.; Hartjen, Gary L.

    1992-01-01

    The plans for a new test facility, including new environmental test systems presently under construction, and the major environmental Test Support Equipment (TSE) used therein are addressed. This all-new Rocketdyne facility will perform space simulation environmental tests on Power Management and Distribution (PMAD) hardware for Space Station Freedom (SSF) at the Engineering Model, Qualification Model, and Flight Model levels of fidelity. Testing will include random vibration in three axes, thermal vacuum, thermal cycling, and thermal burn-in, as well as numerous electrical functional tests. The facility is designed to support a relatively high throughput of hardware under test, while maintaining the high standards required for a man-rated space program.

  6. Annoyance to Noise Produced by a Distributed Electric Propulsion High-Lift System

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Palumbo, Daniel L.; Rathsam, Jonathan; Christian, Andrew; Rafaelof, Menachem

    2017-01-01

    A psychoacoustic test was performed using simulated sounds from a distributed electric propulsion aircraft concept to help understand factors associated with human annoyance. A design space spanning the number of high-lift leading edge propellers and their relative operating speeds, inclusive of time varying effects associated with motor controller error and atmospheric turbulence, was considered. It was found that the mean annoyance response varies in a statistically significant manner with the number of propellers and with the inclusion of time varying effects, but does not differ significantly with the relative RPM between propellers. An annoyance model was developed, inclusive of confidence intervals, using the noise metrics of loudness, roughness, and tonality as predictors.
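
    Fitting such a model from psychoacoustic predictors can be sketched with ordinary least squares; the data below are synthetic, and neither the model form nor the coefficients are taken from the study:

    # Linear annoyance model: annoyance ~ loudness + roughness + tonality.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 40
    loudness = rng.uniform(10, 30, n)      # sone
    roughness = rng.uniform(0.1, 1.0, n)   # asper
    tonality = rng.uniform(0.0, 0.4, n)    # tonality units
    annoyance = (0.2 * loudness + 2.0 * roughness + 3.0 * tonality
                 + rng.normal(0, 0.5, n))  # synthetic subjective ratings

    X = np.column_stack([np.ones(n), loudness, roughness, tonality])
    beta, *_ = np.linalg.lstsq(X, annoyance, rcond=None)
    resid = annoyance - X @ beta
    se = np.sqrt(np.sum(resid**2) / (n - 4) * np.diag(np.linalg.inv(X.T @ X)))
    for name, b, s in zip(["const", "loudness", "roughness", "tonality"], beta, se):
        print(f"{name:9s} {b:6.3f} +/- {1.96 * s:.3f}")   # rough 95% interval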

  7. Do low-cost ceramic water filters improve water security in rural South Africa?

    NASA Astrophysics Data System (ADS)

    Lange, Jens; Materne, Tineke; Grüner, Jörg

    2016-10-01

    This study examined the performance of a low-cost ceramic candle filter system (CCFS) for point-of-use (POU) drinking water treatment in the village of Hobeni, Eastern Cape Province, South Africa. CCFSs were distributed in Hobeni and a survey was carried out among their users. The performance of 51 CCFSs was evaluated by dip slides and related to human factors. After only two-thirds of their specified lifetime, none of the distributed CCFSs produced water without distinct contamination, and more than one-third had even deteriorated in hygienic water quality. Besides the water source (springs were preferable to river or rain water), high water throughput was the dominant reason for poor CCFS performance. A stepwise laboratory test documented the negative effects of repeated loading and ambient field temperatures. These findings suggest that not every CCFS type per se guarantees improved drinking water security and that the efficiency of low-cost systems should be monitored continuously. For this purpose, dip slides were found to be a cost-efficient alternative to standard laboratory tests. They consistently underestimated microbial counts but can be used by laypersons, and hence by the users themselves, to assess critical contamination of their filter systems.

  8. Simulating the Daylight Performance of Complex Fenestration Systems Using Bidirectional Scattering Distribution Functions within Radiance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor

    2011-01-21

    We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.

  9. GPR-Based Water Leak Models in Water Distribution Systems

    PubMed Central

    Ayala-Cabrera, David; Herrera, Manuel; Izquierdo, Joaquín; Ocaña-Levario, Silvia J.; Pérez-García, Rafael

    2013-01-01

    This paper addresses the problem of leakage in water distribution systems through the use of ground penetrating radar (GPR) as a nondestructive method. Laboratory tests are performed to extract features of water leakage from the obtained GPR images. Moreover, a test in a real-world urban system under real conditions is performed. Feature extraction is performed by interpreting GPR images with the support of a pre-processing methodology based on an appropriate combination of statistical methods and multi-agent systems. The results of these tests are presented, interpreted, analyzed and discussed in this paper.
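
    The abstract's pre-processing methodology is not spelled out here, but a common first step on GPR B-scans is mean-trace background removal, which suppresses the horizontal banding that otherwise hides hyperbolic leak signatures. A minimal sketch, with a purely synthetic B-scan array:

        # Mean-trace background removal for a GPR B-scan (synthetic data).
        import numpy as np

        def remove_background(bscan):
            """Subtract the mean trace from a B-scan of shape
            (n_time_samples, n_traces) to suppress horizontal banding."""
            return bscan - bscan.mean(axis=1, keepdims=True)

        demo = np.random.default_rng(1).normal(size=(256, 120))
        print(remove_background(demo).shape)    # (256, 120)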

  10. Examining System-Wide Impacts of Solar PV Control Systems with a Power Hardware-in-the-Loop Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Tess L.; Fuller, Jason C.; Schneider, Kevin P.

    2014-10-11

    High penetration levels of distributed solar PV power generation can lead to adverse power quality impacts such as excessive voltage rise, voltage flicker, and reactive power values that result in unacceptable voltage levels. Advanced inverter control schemes have been proposed that have the potential to mitigate many power quality concerns. However, closed-loop control may lead to unintended behavior in deployed systems as complex interactions can occur between numerous operating devices. In order to enable the study of the performance of advanced control schemes in a detailed distribution system environment, a Hardware-in-the-Loop (HIL) platform has been developed. In the HIL system, GridLAB-D, a distribution system simulation tool, runs in real-time mode at the Pacific Northwest National Laboratory (PNNL) and supplies power system parameters at a point of common coupling to hardware located at the National Renewable Energy Laboratory (NREL). Hardware inverters interact with grid and PV simulators emulating an operational distribution system, and power output from the inverters is measured and sent to PNNL to update the real-time distribution system simulation. The platform is described and initial test cases are presented. The platform is used to study the system-wide impacts and the interactions of controls applied to inverters that are integrated into a simulation of the IEEE 8500-node test feeder, with inverters in either constant power factor control or active volt/VAR control. We demonstrate that this HIL platform is well-suited to the study of advanced inverter controls and their impacts on the power quality of a distribution feeder. Additionally, the results from HIL are used to validate GridLAB-D simulations of advanced inverter controls.
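
    The essence of the platform is a timed exchange loop between the real-time simulation and the hardware. The sketch below shows only that round-trip structure; the function names, message contents, and time step are hypothetical stand-ins, not the actual GridLAB-D/NREL interface.

        # Hypothetical co-simulation exchange loop (structure only).
        import time

        def apply_to_grid_simulator(v_pcc):
            """Drive the hardware grid simulator with the PCC voltage."""
            ...

        def measure_inverter_output():
            """Return measured inverter injections (placeholder values)."""
            return 4.2e3, -0.8e3                # P [W], Q [var]

        def co_simulation_step(sim_state, dt=0.1):
            apply_to_grid_simulator(sim_state["v_pcc"])   # simulation -> hardware
            p, q = measure_inverter_output()              # hardware -> simulation
            sim_state["p_inj"], sim_state["q_inj"] = p, q
            time.sleep(dt)                                # hold real-time pacing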

  11. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  12. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  13. The Improved Dual-view Field Goniometer System FIGOS

    PubMed Central

    Schopfer, Jürg; Dangel, Stefan; Kneubühler, Mathias; Itten, Klaus I.

    2008-01-01

    In spectrodirectional Remote Sensing (RS) the Earth's surface reflectance characteristics are studied by means of their angular dimensions. Almost all natural surfaces exhibit an individual anisotropic reflectance behaviour due to the contrast between the optical properties of surface elements and background and the geometric surface properties of the observed scene. The underlying concept, which describes the reflectance characteristic of a specific surface area, is called the bidirectional reflectance distribution function (BRDF). BRDF knowledge is essential for both correction of directional effects in RS data and quantitative retrieval of surface parameters. Ground-based spectrodirectional measurements are usually performed with goniometer systems. An accurate retrieval of the bidirectional reflectance factors (BRF) from field goniometer measurements requires hyperspectral knowledge of the angular distribution of the reflected and the incident radiation. However, prior to the study at hand, no operational goniometer system was able to fulfill this requirement. This study presents the first dual-view field goniometer system, which is able to simultaneously collect both the reflected and the incident radiation at high angular and spectral resolution, thus providing the necessary spectrodirectional datasets to accurately retrieve the surface-specific BRF. Furthermore, the angular distribution of the incoming diffuse radiation is characterized for various atmospheric conditions and the BRF retrieval is performed for an artificial target and compared to laboratory spectrodirectional measurement results obtained with the same goniometer system. Suggestions for further improving goniometer systems are given and the need for intercalibration of various goniometers as well as for standardizing spectrodirectional measurements is expressed. PMID:27873805
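
    The quantities named in this record have compact standard definitions (following Nicodemus), reproduced here for reference. The BRDF relates reflected radiance to incident irradiance, and the BRF rescales it against a perfect Lambertian reflector:

        f_r(\theta_i,\phi_i;\theta_r,\phi_r)
            = \frac{\mathrm{d}L_r(\theta_r,\phi_r)}{\mathrm{d}E_i(\theta_i,\phi_i)}
            \quad [\mathrm{sr}^{-1}],
        \qquad
        R(\theta_i,\phi_i;\theta_r,\phi_r) = \pi\, f_r(\theta_i,\phi_i;\theta_r,\phi_r)

    Retrieving R from field measurements therefore requires characterizing the incident radiation as well as the reflected radiation, which is exactly what the dual-view design adds.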

  14. The Improved Dual-view Field Goniometer System FIGOS.

    PubMed

    Schopfer, Jürg; Dangel, Stefan; Kneubühler, Mathias; Itten, Klaus I

    2008-08-28

    In spectrodirectional Remote Sensing (RS) the Earth's surface reflectance characteristics are studied by means of their angular dimensions. Almost all natural surfaces exhibit an individual anisotropic reflectance behaviour due to the contrast between the optical properties of surface elements and background and the geometric surface properties of the observed scene. The underlying concept, which describes the reflectance characteristic of a specific surface area, is called the bidirectional reflectance distribution function (BRDF). BRDF knowledge is essential for both correction of directional effects in RS data and quantitative retrieval of surface parameters. Ground-based spectrodirectional measurements are usually performed with goniometer systems. An accurate retrieval of the bidirectional reflectance factors (BRF) from field goniometer measurements requires hyperspectral knowledge of the angular distribution of the reflected and the incident radiation. However, prior to the study at hand, no operational goniometer system was able to fulfill this requirement. This study presents the first dual-view field goniometer system, which is able to simultaneously collect both the reflected and the incident radiation at high angular and spectral resolution, thus providing the necessary spectrodirectional datasets to accurately retrieve the surface-specific BRF. Furthermore, the angular distribution of the incoming diffuse radiation is characterized for various atmospheric conditions and the BRF retrieval is performed for an artificial target and compared to laboratory spectrodirectional measurement results obtained with the same goniometer system. Suggestions for further improving goniometer systems are given and the need for intercalibration of various goniometers as well as for standardizing spectrodirectional measurements is expressed.

  15. Computer hardware and software for robotic control

    NASA Technical Reports Server (NTRS)

    Davis, Virgil Leon

    1987-01-01

    The KSC has implemented an integrated system that coordinates state-of-the-art robotic subsystems. It is a sensor-based, real-time robotic control system performing operations beyond the capability of an off-the-shelf robot. The integrated system provides real-time closed-loop adaptive path control of position and orientation of all six axes of a large robot; enables the implementation of a highly configurable, expandable testbed for sensor system development; and makes several smart distributed control subsystems (robot arm controller, process controller, graphics display, and vision tracking) appear as intelligent peripherals to a supervisory computer coordinating the overall systems.

  16. Human resource management in post-conflict health systems: review of research and knowledge gaps

    PubMed Central

    2014-01-01

    In post-conflict settings, severe disruption to health systems invariably leaves populations at high risk of disease and in greater need of health provision than more stable resource-poor countries. The health workforce is often a direct victim of conflict. Effective human resource management (HRM) strategies and policies are critical to addressing the systemic effects of conflict on the health workforce, such as flight of human capital, mismatches between skills and service needs, breakdown of pre-service training, and lack of human resource data. This paper reviews the published literature across three functional areas of HRM in post-conflict settings: workforce supply, workforce distribution, and workforce performance. We searched the published literature for articles published in English between 2003 and 2013. The search used context-specific keywords (e.g. post-conflict, reconstruction) in combination with topic-related keywords based on an analytical framework containing the three functional areas of HRM (supply, distribution, and performance) and several corresponding HRM topic areas under these. In addition, the framework includes a number of cross-cutting topics such as leadership and governance, finance, and gender. The literature is growing but still limited. Many publications have focused on health workforce supply issues, including pre-service education and training, pay, and recruitment. Less is known about workforce distribution, especially governance and administrative systems for deployment and incentive policies to redress geographical workforce imbalances. Apart from in-service training, workforce performance is particularly under-researched in the areas of performance-based incentives, management and supervision, work organisation and job design, and performance appraisal. Research is largely on HRM in the early post-conflict period and has relied on secondary data. More primary research is needed across the areas of workforce supply, workforce distribution, and workforce performance. However, this should apply a longer-term focus throughout the different post-conflict phases, while paying attention to key cross-cutting themes such as leadership and governance, gender equity, and task shifting. The research gaps identified should enable future studies to examine how HRM could be used to meet both short- and long-term objectives for rebuilding health workforces and thereby contribute to achieving more equitable and sustainable health systems outcomes after conflict. PMID:25295071

  17. Performance evaluation of distributed wavelength assignment in WDM optical networks

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori

    2004-04-01

    In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between the source-destination node pair. A distributed approach to connection setup can achieve very high speed, while improving the reliability and reducing the implementation cost of the networks. However, along with many advantages, the distributed scheme poses major challenges in how the management and allocation of wavelengths can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority-based wavelength assignment (PWA), originally proposed for use in burst-switched optical networks, to the problem of reserving wavelengths in path reservation protocols for distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on information about wavelength utilization history, thus preventing unnecessary future contention. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.
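
    The core selection idea admits a very small sketch: among the wavelengths currently free on the route, prefer the one whose local utilization history suggests the least contention. The data structures and tie-breaking below are hypothetical; the actual PWA priority update rule is in the cited work.

        # Hypothetical "safest wavelength" choice in the spirit of PWA.
        def pick_wavelength(free_wavelengths, usage_history):
            """free_wavelengths: iterable of available wavelength indices;
            usage_history: dict wavelength -> observed utilization count
            (higher = more likely to be contended elsewhere)."""
            return min(free_wavelengths, key=lambda w: usage_history.get(w, 0))

        history = {0: 17, 1: 3, 2: 9}
        print(pick_wavelength([0, 1, 2], history))   # -> 1, the least-used one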

  18. A simple-architecture fibered transmission system for dissemination of high stability 100 MHz signals

    NASA Astrophysics Data System (ADS)

    Bakir, A.; Rocher, C.; Maréchal, B.; Bigler, E.; Boudot, R.; Kersalé, Y.; Millo, J.

    2018-05-01

    We report on the development of a simple-architecture fiber-based frequency distribution system used to transfer high-frequency-stability 100 MHz signals. This work focuses on the emitter and receiver performance that allows the transmission of the radio-frequency signal over an optical fiber. The system exhibits a residual fractional frequency stability of 1 × 10^-14 at 1 s integration time and in the low 10^-16 range after 100 s. This performance is suitable for transferring the signal of frequency references such as those of a state-of-the-art hydrogen maser without any phase noise compensation scheme. As an application, we demonstrate the dissemination of such a signal through a 100 m long optical fiber without any degradation. The proposed setup could be easily extended to operating frequencies in the 10 MHz-1 GHz range.
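
    Stability figures such as "1 × 10^-14 at 1 s" are conventionally Allan deviations of fractional frequency data. A minimal sketch of the basic (non-overlapping, tau = tau0) estimator, applied to synthetic white-frequency noise:

        # Allan deviation at the basic sampling interval (synthetic data).
        import numpy as np

        def allan_deviation(y):
            """sigma_y(tau0) = sqrt( mean( (y[i+1] - y[i])^2 ) / 2 )
            for fractional frequency samples y."""
            return np.sqrt(np.mean(np.diff(y) ** 2) / 2.0)

        y = np.random.default_rng(2).normal(0.0, 1e-14, 10_000)
        print(f"sigma_y(1 s) ~ {allan_deviation(y):.2e}")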

  19. Probabilistic cosmological mass mapping from weak lensing shear

    DOE PAGES

    Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...

    2017-04-10

    Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  20. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of the two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction, implemented via non-Gaussian post-selection, not only enhances the entanglement of the two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In the two-way protocol, virtual photon subtraction can be applied to the two sources independently. Numerical simulations show that the optimal performance of the renovated two-way protocol is obtained with photon subtraction used only by Alice. The transmission distance and tolerable excess noise are improved by using virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value as the distance increases, so that the robustness of the two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.

  1. A thermodynamic analysis of a novel bidirectional district heating and cooling network

    DOE PAGES

    Zarin Pass, R.; Wetter, M.; Piette, M. A.

    2017-11-29

    In this study, we evaluate an ambient, bidirectional thermal network, which uses a single circuit for both district heating and cooling. When in net more cooling is needed than heating, the system circulates from a central plant in one direction. When more heating is needed, the system circulates in the opposite direction. A large benefit of this design is that buildings can recover waste heat from each other directly. We analyze the thermodynamic performance of the bidirectional system. Because the bidirectional system represents the state-of-the-art in design for district systems, its peak energy efficiency represents an upper bound on the thermal performance of any district heating and cooling system. However, because any network has mechanical and thermal distribution losses, we develop a diversity criterion to understand when the bidirectional system may be a more energy-efficient alternative to modern individual-building systems. We show that a simple model of a low-density, high-distribution-loss network is more efficient than aggregated individual buildings if there is at least 1 unit of cooling energy per 5.7 units of simultaneous heating energy (or vice versa). We apply this criterion to reference building profiles in three cities to look for promising clusters.
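
    The quoted diversity criterion reduces to a one-line check: the smaller of the two simultaneous loads must be at least 1/5.7 of the larger. A sketch, with the 5.7 threshold taken directly from the abstract and everything else hypothetical:

        # Diversity criterion from the abstract (1 unit per 5.7 units).
        def bidirectional_preferred(heating_kw, cooling_kw, ratio=5.7):
            """True if the minority load is at least 1/ratio of the
            simultaneous majority load."""
            low, high = sorted([heating_kw, cooling_kw])
            return high > 0 and low >= high / ratio

        print(bidirectional_preferred(570.0, 100.0))  # True: exactly 1 per 5.7
        print(bidirectional_preferred(570.0, 50.0))   # False: too little diversity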

  2. A thermodynamic analysis of a novel bidirectional district heating and cooling network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zarin Pass, R.; Wetter, M.; Piette, M. A.

    In this study, we evaluate an ambient, bidirectional thermal network, which uses a single circuit for both district heating and cooling. When in net more cooling is needed than heating, the system circulates from a central plant in one direction. When more heating is needed, the system circulates in the opposite direction. A large benefit of this design is that buildings can recover waste heat from each other directly. We analyze the thermodynamic performance of the bidirectional system. Because the bidirectional system represents the state-of-the-art in design for district systems, its peak energy efficiency represents an upper bound on the thermal performance of any district heating and cooling system. However, because any network has mechanical and thermal distribution losses, we develop a diversity criterion to understand when the bidirectional system may be a more energy-efficient alternative to modern individual-building systems. We show that a simple model of a low-density, high-distribution-loss network is more efficient than aggregated individual buildings if there is at least 1 unit of cooling energy per 5.7 units of simultaneous heating energy (or vice versa). We apply this criterion to reference building profiles in three cities to look for promising clusters.

  3. Design of a mixer for the thrust-vectoring system on the high-alpha research vehicle

    NASA Technical Reports Server (NTRS)

    Pahle, Joseph W.; Bundick, W. Thomas; Yeager, Jessie C.; Beissner, Fred L., Jr.

    1996-01-01

    One of the advanced control concepts being investigated on the High-Alpha Research Vehicle (HARV) is multi-axis thrust vectoring using an experimental thrust-vectoring (TV) system consisting of three hydraulically actuated vanes per engine. A mixer is used to translate the pitch-, roll-, and yaw-TV commands into the appropriate TV-vane commands for distribution to the vane actuators. A computer-aided optimization process was developed to perform the inversion of the thrust-vectoring effectiveness data for use by the mixer in performing this command translation. Using this process, a new mixer was designed for the HARV and evaluated in simulation and flight. An important element of the mixer is the priority logic, which determines priority among the pitch-, roll-, and yaw-TV commands.
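
    The inversion step can be pictured with a control-allocation sketch: given a matrix of per-vane moment effectiveness, a pseudoinverse maps the three TV commands to six vane commands. The matrix values below are invented, and the actual HARV mixer used a computer-aided optimization plus priority logic rather than a plain pseudoinverse.

        # Illustrative pseudoinverse allocation (invented effectiveness data).
        import numpy as np

        # Rows: pitch, roll, yaw moment per unit vane deflection; columns: 6 vanes.
        B = np.array([[ 0.8,  0.8,  0.2,  0.8,  0.8,  0.2],
                      [ 0.5, -0.5,  0.0, -0.5,  0.5,  0.0],
                      [ 0.1,  0.1,  0.9, -0.1, -0.1, -0.9]])

        cmd = np.array([1.0, 0.2, -0.4])      # desired pitch/roll/yaw TV commands
        vanes = np.linalg.pinv(B) @ cmd       # minimum-norm vane commands
        print(np.round(vanes, 3))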

  4. Network protocols for real-time applications

    NASA Technical Reports Server (NTRS)

    Johnson, Marjory J.

    1987-01-01

    The Fiber Distributed Data Interface (FDDI) and the SAE AE-9B High Speed Ring Bus (HSRB) are emerging standards for high-performance token ring local area networks. FDDI was designed to be a general-purpose high-performance network. HSRB was designed specifically for military real-time applications. A workshop was conducted at NASA Ames Research Center in January, 1987 to compare and contrast these protocols with respect to their ability to support real-time applications. This report summarizes workshop presentations and includes an independent comparison of the two protocols. A conclusion reached at the workshop was that current protocols for the upper layers of the Open Systems Interconnection (OSI) network model are inadequate for real-time applications.

  5. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications

    PubMed Central

    2017-01-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution, and spectral and sensing range. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within an acceptable frequency range for dynamic vibration and temperature detection. There are many applications, especially in geophysical and mining engineering, where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications. PMID:29104259

  6. A Review of Hybrid Fiber-Optic Distributed Simultaneous Vibration and Temperature Sensing Technology and Its Geophysical Applications.

    PubMed

    Miah, Khalid; Potter, David K

    2017-11-01

    Distributed sensing systems can transform an optical fiber cable into an array of sensors, allowing users to detect and monitor multiple physical parameters such as temperature, vibration and strain with fine spatial and temporal resolution over a long distance. Fiber-optic distributed acoustic sensing (DAS) and distributed temperature sensing (DTS) systems have been developed for various applications with varied spatial resolution, and spectral and sensing range. Rayleigh scattering-based phase optical time domain reflectometry (OTDR) for vibration and Raman/Brillouin scattering-based OTDR for temperature and strain measurements have been developed over the past two decades. The key challenge has been to find a methodology that would enable the physical parameters to be determined at any point along the sensing fiber with high sensitivity and spatial resolution, yet within an acceptable frequency range for dynamic vibration and temperature detection. There are many applications, especially in geophysical and mining engineering, where simultaneous measurements of vibration and temperature are essential. In this article, recent developments of different hybrid systems for simultaneous vibration, temperature and strain measurements are analyzed based on their operation principles and performance. Then, challenges and limitations of the systems are highlighted for geophysical applications.

  7. Efficient Probabilistic Forecasting for High-Resolution Models through Clustered-State Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hamidi, A.; Grossberg, M.; Khanbilvardi, R.

    2016-12-01

    Flood response in an urban area is the product of interactions between spatially and temporally varying rainfall and infrastructure. In urban areas, however, the complex sub-surface networks of tunnels and waste- and stormwater drainage systems are often inaccessible and pose challenges for modeling and predicting drainage infrastructure performance. The increased availability of open data in cities is an emerging information asset for a better understanding of the dynamics of urban water drainage infrastructure. This includes crowd-sourced data and community reporting. A well-known source of this type of data is the non-emergency hotline "311", which is available in many US cities and may contain information pertaining to the performance of physical facilities, the condition of the environment, or residents' experience, comfort, and well-being. In this study, seven years of New York City 311 (NYC311) call records from 2010-2016 are employed as an alternative approach for identifying the areas of the city most prone to sewer backup flooding. These zones are compared with the hydrologic analysis of runoff flooding zones to provide a predictive model for the City. The proposed methodology is an example of urban system phenomenology using crowd-sourced, open data. A novel algorithm for calculating the spatial distribution of flooding complaints across NYC's five boroughs is presented in this study. In this approach, the features that represent reporting bias are separated from those that relate to actual infrastructure system performance. The sewer backup results are assessed against the spatial distribution of runoff in NYC during 2010-2016. With advances in radar technologies, a high spatial-temporal resolution precipitation data set is available for most of the United States that can be implemented in hydrologic analysis of dense urban environments. High resolution gridded Stage IV radar rainfall data along with high resolution spatially distributed land cover data are employed to investigate urban pluvial flooding. The monthly results of excess runoff are compared with the sewer backup complaints in NYC to build a predictive model of flood zones according to the 311 phone calls.
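
    The spatial-aggregation step can be illustrated with a toy gridding of complaint locations into a density map; the cell size and synthetic coordinates below are hypothetical, and the study's separation of reporting bias from system performance is not reproduced.

        # Toy complaint-density grid (synthetic coordinates).
        import numpy as np

        def complaint_density(lats, lons, cell_deg=0.01):
            lats, lons = np.asarray(lats), np.asarray(lons)
            lat_edges = np.arange(lats.min(), lats.max() + cell_deg, cell_deg)
            lon_edges = np.arange(lons.min(), lons.max() + cell_deg, cell_deg)
            grid, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
            return grid

        rng = np.random.default_rng(3)
        grid = complaint_density(40.5 + 0.4 * rng.random(500),
                                 -74.2 + 0.5 * rng.random(500))
        print(grid.shape, int(grid.sum()))    # all 500 complaints binned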

  8. Advanced technologies in the ASI MLRO towards a new generation laser ranging system

    NASA Technical Reports Server (NTRS)

    Varghese, Thomas; Bianco, Giuseppe

    1994-01-01

    Matera Laser Ranging Observatory (MLRO) is a high performance, highly automated optical and astronomical observatory currently under design and development by AlliedSignal for the Italian Space Agency (ASI). It is projected to become operational at the Centro Geodesia Spaziale in Matera, Italy, in 1997. MLRO, based on a 1.5-meter astronomical quality telescope, will perform ranging to spacecraft in earthbound orbits, lunar reflectors, and specially equipped deep space missions. The primary emphasis during design is to incorporate state-of-the-art technologies to produce an intelligent, automated, high accuracy ranging system that will mimic the characteristic features of a fifth generation laser ranging system. The telescope has multiple ports and foci to support future experiments in the areas of laser communications, lidar, astrometry, etc. The key features providing state-of-the-art ranging performance include: a diode-pumped picosecond (50 ps) laser, high speed (3-5 GHz) optoelectronic detection and signal processing, and a high accuracy (6 ps) high resolution (less than 2 ps) time measurement capability. The above combination of technologies is expected to yield millimeter laser ranging precision and accuracy on targets up to 300,000 km, surpassing the best operational instrument performance to date by a factor of five or more. Distributed processing and control using a state-of-the-art computing environment provides the framework for efficient operation, system optimization, and diagnostics. A computationally intelligent environment permits optimal planning, scheduling, tracking, and data processing. It also supports remote access, monitor, and control for joint experiments with other observatories.

  9. Field trial of differential-phase-shift quantum key distribution using polarization independent frequency up-conversion detectors.

    PubMed

    Honjo, T; Yamamoto, S; Yamamoto, T; Kamada, H; Nishida, Y; Tadanaga, O; Asobe, M; Inoue, K

    2007-11-26

    We report a field trial of differential phase shift quantum key distribution (QKD) using polarization independent frequency up-conversion detectors. A frequency up-conversion detector is a promising device for achieving a high key generation rate when combined with a high clock rate QKD system. However, its polarization dependence prevents it from being applied to practical QKD systems. In this paper, we employ a modified polarization diversity configuration to eliminate the polarization dependence. Applying this method, we performed a long-term stability test using a 17.6-km installed fiber. We successfully demonstrated stable operation for 6 hours and achieved a sifted key generation rate of 120 kbps and an average quantum bit error rate of 3.14 %. The sifted key generation rate was not the estimated value but the effective value, which means that the sifted key was continuously generated at a rate of 120 kbps for 6 hours.

  10. An Integrated Framework for Model-Based Distributed Diagnosis and Prognosis

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew J.; Roychoudhury, Indranil

    2012-01-01

    Diagnosis and prognosis are necessary tasks for system reconfiguration and fault-adaptive control in complex systems. Diagnosis consists of detection, isolation and identification of faults, while prognosis consists of prediction of the remaining useful life of systems. This paper presents a novel integrated framework for model-based distributed diagnosis and prognosis, where system decomposition is used to enable the diagnosis and prognosis tasks to be performed in a distributed way. We show how different submodels can be automatically constructed to solve the local diagnosis and prognosis problems. We illustrate our approach using a simulated four-wheeled rover for different fault scenarios. Our experiments show that our approach correctly performs distributed fault diagnosis and prognosis in an efficient and robust manner.

  11. Testing Small CPAS Parachutes Using HIVAS

    NASA Technical Reports Server (NTRS)

    Ray, Eric S.; Hennings, Elsa; Bernatovich, Michael A.

    2013-01-01

    The High Velocity Airflow System (HIVAS) facility at the Naval Air Warfare Center (NAWC) at China Lake was successfully used as an alternative to flight test to determine parachute drag performance of two small Capsule Parachute Assembly System (CPAS) canopies. A similar parachute with known performance was also tested as a control. Real-time computations of drag coefficient were unrealistically low. This is because HIVAS produces a non-uniform flow which rapidly decays from a high central core flow. Additional calibration runs were performed to characterize this flow, assuming radial symmetry from the centerline. The flow field was used to post-process effective flow velocities at each throttle setting and parachute diameter using the definition of the momentum flux factor. Because one parachute had significant oscillations, additional calculations were required to estimate the projected flow at off-axis angles. The resulting drag data from HIVAS compared favorably to previously estimated parachute performance based on scaled data from analogous CPAS parachutes. The data will improve drag area distributions in the next version of the CPAS Model Memo.
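
    The momentum-flux idea is simple to state numerically: for a radially symmetric profile v(r), the effective uniform velocity over the canopy's projected area satisfies v_eff^2 * A = integral of v(r)^2 over A. A sketch with an invented decaying core-flow profile:

        # Momentum-flux-equivalent velocity for a radial profile (invented data).
        import numpy as np

        def effective_velocity(r, v):
            """v_eff = sqrt( integral( v(r)^2 * 2*pi*r dr ) / (pi * R^2) )."""
            momentum = np.trapz(v**2 * 2.0 * np.pi * r, r)
            return np.sqrt(momentum / (np.pi * r[-1] ** 2))

        r = np.linspace(0.0, 3.0, 300)            # radius out to 3 m
        v = 60.0 * np.exp(-(r / 1.2) ** 2)        # fast core, rapid decay (m/s)
        print(f"v_eff = {effective_velocity(r, v):.1f} m/s")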

  12. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed (i.i.d.) jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than what is currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.
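
    For context, the standard Kronecker-product stability test for this class of systems is the spectral-radius condition below; the symmetric Kronecker product exploits the symmetry of second-moment matrices to shrink the test matrix from dimension n^2 to n(n+1)/2, which is the dimension reduction the abstract refers to:

        x_{k+1} = A_k x_k,\quad \{A_k\}\ \text{i.i.d.}
        \qquad\Longrightarrow\qquad
        \text{mean-square stable} \iff
        \rho\big(\mathbb{E}[A_k \otimes A_k]\big) < 1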

  13. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.
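
    The unified-view idea is easy to demonstrate in miniature: map a storage object into memory and manipulate it with ordinary memory operations instead of explicit read/write calls. The sketch below is plain single-process memory mapping; the paper's typed objects and distributed coherence machinery are far beyond it.

        # File-backed object accessed through the memory interface.
        import mmap, os

        path = "shared_object.bin"
        with open(path, "wb") as f:
            f.write(b"\x00" * 4096)              # one page of backing store

        with open(path, "r+b") as f:
            view = mmap.mmap(f.fileno(), 4096)   # map the object into memory
            view[0:5] = b"hello"                 # plain memory semantics
            view.flush()                         # persist to the storage object
            view.close()

        with open(path, "rb") as f:
            print(f.read(5))                     # b'hello'
        os.remove(path)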

  14. Development of ultrasonic electrostatic microjets for distributed propulsion and microflight

    NASA Astrophysics Data System (ADS)

    Amirparviz, Babak

    This dissertation details the first attempt to design and fabricate a distributed micro propulsion system based on acoustic streaming. A novel micro propulsion method is suggested by combining Helmholtz resonance, acoustic streaming, and flow entrainment and thrust augmentation. In this method, oscillatory motion of an electrostatically actuated diaphragm creates a high frequency acoustic field inside the cavity of a Helmholtz resonator. The initial fluid motion velocity is amplified by the Helmholtz resonator structure and creates a jet flow at the exit nozzle. Acoustic streaming is the phenomenon responsible for primary jet stream creation. Primary jets produced by a few resonators can be combined in an ejector configuration to induce flow entrainment and thrust augmentation. Basic governing equations for the electrostatic actuator, deformation of the diaphragm, and the fluid flow inside the resonator are derived. These equations are linearized and used to derive an equivalent electrical circuit model for the operation of the device. Numerical solution of the governing equations and simulation of the circuit model are used to predict the performance of the experimental systems. Thrust values as high as 30.3 μN are expected per resonator. A micromachined electrostatically driven high frequency Helmholtz resonator prototype is designed and fabricated. A new microfabrication technique is developed for bulk micromachining and, in particular, fabrication of the resonator. Geometric stops for wet anisotropic etching of silicon are introduced for the first time for structure formation. Arrays of high frequency (>60 kHz) micro Helmholtz resonators are fabricated. In one sample more than 1000 resonators cover the surface of a four-inch silicon wafer and in effect convert it to a distributed propulsion system. A high-yield (>85%) microfabrication process is presented for realization of this propulsion system, taking advantage of newly developed methods for deep glass micromachining and lithography on thin (15 μm) silicon. Extensive testing and characterization are performed on the micro jets using current frequency component analysis, laser interferometry, acoustic measurements, hot-wire anemometers, video particle imaging, and load cells. The occurrence of acoustic streaming at jet nozzles is verified and flow velocities exceeding 1 m/s are measured at the 15 μm × 330 μm jet exit nozzle.

  15. Intelligent Systems for Power Management and Distribution

    NASA Technical Reports Server (NTRS)

    Button, Robert M.

    2002-01-01

    The motivation behind an advanced technology program to develop intelligent power management and distribution (PMAD) systems is described. The program concentrates on developing digital control and distributed processing algorithms for PMAD components and systems to improve their size, weight, efficiency, and reliability. Specific areas of research in developing intelligent DC-DC converters and distributed switchgear are described. Results from recent development efforts are presented along with expected future benefits to the overall PMAD system performance.

  16. [Treatment of Urban Runoff Pollutants by a Multilayer Biofiltration System].

    PubMed

    Wang, Xiao-lu; Zuo, Jian-e; Gan, Li-li; Xing, Wei; Miao, Heng-feng; Ruan, Wen-quan

    2015-07-01

    In order to effectively control non-point source pollution from road runoff in Wuxi City, a multilayer biofiltration system was designed to remove a variety of pollutants according to the characteristics of road runoff in Wuxi, and experiments were carried out to study its purification of rainwater pollution. The results show that the system performs well in removing suspended solids (SS), organic pollutants (COD), nitrogen, and phosphorus: all types of multilayer biofiltration systems achieved a high removal rate for SS, reaching 90%. The system with granular activated carbon (GAC) achieved higher removal rates for COD and phosphorus, while the system with zeolite (ZFM) showed relatively better removal of nitrogen. The addition of wood chips significantly improved nitrogen removal. Between the layered and distributed wood-chip configurations, the distributed configuration achieved higher removal efficiencies for COD, phosphorus, and nitrogen, since it reduces the release of dissolved matter from the wood chips.

  17. Establishment of key grid-connected performance index system for integrated PV-ES system

    NASA Astrophysics Data System (ADS)

    Li, Q.; Yuan, X. D.; Qi, Q.; Liu, H. M.

    2016-08-01

    In order to further promote integrated, optimized operation of distributed new energy, energy storage, and active loads, this paper studies an integrated photovoltaic-energy storage (PV-ES) system connected to the distribution network and analyzes typical structures and configuration selection for integrated PV-ES generation systems. Combining practical grid-connected characteristic requirements with the technology standards for photovoltaic generation systems, and taking full account of the energy storage system, the paper proposes several new grid-connected performance indexes such as paralleled current-sharing characteristics, parallel response consistency, adjustment characteristics, virtual moment-of-inertia characteristics, and on-grid/off-grid switching characteristics. A comprehensive and feasible grid-connected performance index system is then established to support grid-connected performance testing of integrated PV-ES systems.

  18. Compact SAR and Small Satellite Solutions for Earth Observation

    NASA Astrophysics Data System (ADS)

    LaRosa, M.; L'Abbate, M.

    2016-12-01

    Requirements for near- and short-term mission applications (Observation and Reconnaissance, SIGINT, Early Warning, Meteorology, ...) are increasingly calling for spacecraft operational responsiveness, flexible configuration, and lower-cost satellite constellations and flying formations, to improve both the temporal performance of observation systems (revisit, response time) and the remote sensing techniques (distributed sensors, arrays, cooperative sensors). In answer to these users' needs, leading actors in space systems for EO are involved in the development of small and microsatellite solutions. Thales Alenia Space (TAS) has started the "COMPACT-SAR" project to develop a SAR satellite characterized by low cost and reduced mass while providing, at the same time, high image quality in terms of resolution, swath size, and radiometric performance. Compact SAR will embark an X-band SAR based on a deployable reflector antenna fed by an active phased-array feed. This concept allows high performance, providing electronic beam steering in both the azimuth and elevation planes and improving operational performance over a purely mechanically steered SAR system. The instrument provides both STRIPMAP and SPOTLIGHT modes and, thanks to a very high gain antenna, can also provide a real maritime surveillance mode based on a patented low-PRF radar mode. Further developments are in progress considering missions based on microsatellite technology, which can provide effective solutions for different user needs, such as operational responsiveness, low-cost constellations, distributed observation concepts, and flying formations, and can be conceived for applications in the fields of observation, atmosphere sensing, intelligence, surveillance, reconnaissance (ISR), and signal intelligence. To satisfy these requirements, the flexibility of small platforms is a key driver, and especially new miniaturization technologies able to optimize performance. An overview of new microsatellite (based on the NIMBUS platform) and mission concepts is provided, including passive SAR for multi-static and tandem imaging; a medium-swath/medium-resolution, dual-polarization MICROSAR in L-, C-, or X-band for multiple applications in maritime surveillance and land monitoring; and applications for space debris monitoring, precision farming, and atmosphere sensing.

  19. Heterogeneous Distributed Computing for Computational Aerosciences

    NASA Technical Reports Server (NTRS)

    Sunderam, Vaidy S.

    1998-01-01

    The research supported under this award focuses on heterogeneous distributed computing for high-performance applications, with particular emphasis on computational aerosciences. The overall goal of this project was to investigate issues in, and develop solutions to, the efficient execution of computational aeroscience codes in heterogeneous concurrent computing environments. In particular, we worked in the context of the PVM[1] system and, subsequent to detailed conversion efforts and performance benchmarking, devised novel techniques to increase the efficacy of heterogeneous networked environments for computational aerosciences. Our work has been based upon the NAS Parallel Benchmark suite, but has also recently expanded in scope to include the NAS I/O benchmarks as specified in the NHT-1 document. In this report we summarize our research accomplishments under the auspices of the grant.

  20. Integration of Propulsion-Airframe-Aeroacoustic Technologies and Design Concepts for a Quiet Blended-Wing-Body Transport

    NASA Technical Reports Server (NTRS)

    Hill, G. A.; Brown, S. A.; Geiselhart, K. A.

    2004-01-01

    This paper summarizes the results of studies undertaken to investigate revolutionary propulsion-airframe configurations that have the potential to achieve significant noise reductions over present-day commercial transport aircraft. Using a 300 passenger Blended-Wing-Body (BWB) as a baseline, several alternative low-noise propulsion-airframe-aeroacoustic (PAA) technologies and design concepts were investigated both for their potential to reduce the overall BWB noise levels, and for their impact on the weight, performance, and cost of the vehicle. Two evaluation frameworks were implemented for the assessments. The first was a Multi-Attribute Decision Making (MADM) process that used a Pugh Evaluation Matrix coupled with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). This process provided a qualitative evaluation of the PAA technologies and design concepts and ranked them based on how well they satisfied chosen design requirements. From the results of the evaluation, it was observed that almost all of the PAA concepts gave the BWB a noise benefit, but degraded its performance. The second evaluation framework involved both deterministic and probabilistic systems analyses that were performed on a down-selected number of BWB propulsion configurations incorporating the PAA technologies and design concepts. These configurations included embedded engines with Boundary Layer Ingesting Inlets, Distributed Exhaust Nozzles installed on podded engines, a High Aspect Ratio Rectangular Nozzle, Distributed Propulsion, and a fixed and retractable aft airframe extension. The systems analyses focused on the BWB performance impacts of each concept using the mission range as a measure of merit. Noise effects were also investigated when enough information was available for a tractable analysis. Some tentative conclusions were drawn from the results. One was that the Boundary Layer Ingesting Inlets provided improvements to the BWB's mission range, by increasing the propulsive efficiency at cruise, and therefore offered a means to offset performance penalties imposed by some of the advanced PAA configurations. It was also found that the podded Distributed Exhaust Nozzle configuration imposed high penalties on the mission range and the need for substantial synergistic performance enhancements from an advanced integration scheme was identified. The High Aspect Ratio Nozzle showed inconclusive noise results and posed significant integration difficulties. Distributed Propulsion, in general, imposed performance penalties but may offer some promise for noise reduction from jet-to-jet shielding effects. Finally, a retractable aft airframe extension provided excellent noise reduction for a modest decrease in range.
