Sample records for distributed high performance

  1. Got political skill? The impact of justice on the importance of political skill for job performance.

    PubMed

    Andrews, Martha C; Kacmar, K Michele; Harris, Kenneth J

    2009-11-01

    The present study examined the moderating effects of procedural and distributive justice on the relationships between political skill and task performance and organizational citizenship behavior (OCB) among 175 supervisor-subordinate dyads of a government organization. Using Mischel's (1968) situationist perspective, high justice conditions were considered "strong situations," whereas low justice conditions were construed as "weak situations." We found that when both procedural and distributive justice were low, political skill was positively related to performance. Under conditions of both high procedural and high distributive justice, political skill was negatively related to performance. Finally, under conditions of low distributive justice, political skill was positively related to OCB, whereas under conditions of high distributive justice, political skill had little effect on OCB. These results highlight the importance of possessing political skill in weak but not strong situations.

  2. Reusable and Extensible High Level Data Distributions

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Chamberlain, Bradford; James, Mark L.; Zima, Hans P.

    2005-01-01

    This paper presents a reusable design of a data distribution framework for data parallel high performance applications. We are implementing the design in the context of the Chapel high productivity programming language. Distributions in Chapel are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy can be chosen by a user. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization. We propose an approach to distributions that enables the user to refine a language-provided distribution type and adjust it to optimize the performance of the application. Additionally, we conceal from the user low-level communication and synchronization details to increase productivity. To emphasize the generality of our distribution machinery, we present its abstract design in the form of a design pattern, which is independent of a concrete implementation. To illustrate the applicability of our distribution framework design, we outline the implementation of data distributions in terms of the Chapel language.
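    The notion of a user-refinable, language-provided distribution type can be sketched outside Chapel. The following Python analogue (class and method names invented for illustration; Chapel's real distribution interface differs) shows a base distribution that a user refines by overriding the index-to-locale mapping:

    ```python
    # Illustrative analogue of a refinable distribution type
    # (hypothetical names; not Chapel's actual API).

    class Distribution:
        """Maps a global index onto one of `num_locales` locales."""
        def __init__(self, size, num_locales):
            self.size = size
            self.num_locales = num_locales

        def locale_of(self, i):
            raise NotImplementedError

    class BlockDistribution(Distribution):
        """Contiguous blocks of roughly equal size per locale."""
        def locale_of(self, i):
            block = -(-self.size // self.num_locales)  # ceiling division
            return i // block

    class CyclicDistribution(Distribution):
        """Round-robin assignment; a user-supplied refinement."""
        def locale_of(self, i):
            return i % self.num_locales

    d = BlockDistribution(size=8, num_locales=4)
    print([d.locale_of(i) for i in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
    c = CyclicDistribution(size=8, num_locales=4)
    print([c.locale_of(i) for i in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
    ```

    The point of the design is that communication and synchronization stay hidden behind `locale_of`, so refining a distribution never exposes those details to the user.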

  3. An XML-Based Protocol for Distributed Event Services

    NASA Technical Reports Server (NTRS)

    Smith, Warren; Gunter, Dan; Quesnel, Darcy; Biegel, Bryan (Technical Monitor)

    2001-01-01

    A recent trend in distributed computing is the construction of high-performance distributed systems called computational grids. One difficulty we have encountered is that there is no standard format for the representation of performance information and no standard protocol for transmitting this information. This limits the types of performance analysis that can be undertaken in complex distributed systems. To address this problem, we present an XML-based protocol for transmitting performance events in distributed systems and evaluate the performance of this protocol.
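    The abstract does not reproduce the protocol itself, so the sketch below only illustrates the general idea of XML-encoded performance events; the element and attribute names are invented:

    ```python
    # Minimal sketch of an XML-encoded performance event, loosely in the
    # spirit of the protocol described above (names are invented).
    import xml.etree.ElementTree as ET

    def encode_event(name, timestamp, host, value):
        ev = ET.Element("event", name=name, timestamp=str(timestamp))
        ET.SubElement(ev, "host").text = host
        ET.SubElement(ev, "value").text = str(value)
        return ET.tostring(ev, encoding="unicode")

    def decode_event(xml_text):
        ev = ET.fromstring(xml_text)
        return {
            "name": ev.get("name"),
            "timestamp": float(ev.get("timestamp")),
            "host": ev.find("host").text,
            "value": float(ev.find("value").text),
        }

    wire = encode_event("task.start", 1234.5, "node07", 0.0)
    print(decode_event(wire)["name"])  # task.start
    ```

    A shared schema like this is what gives heterogeneous monitors a common wire format, which is exactly the gap the paper identifies.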

  4. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.

  5. DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia

    DOE PAGES

    Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.

    2017-01-16

    Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
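    DistributedFBA.jl itself is written in Julia; as a language-neutral illustration of the scheduling idea, one independent flux balance analysis per reaction farmed out across workers, here is a Python sketch in which a stub stands in for the actual LP solve:

    ```python
    # Scheduling sketch only: solve_fba is a stub standing in for a real
    # linear-programming solve (maximize flux through `reaction_id`
    # subject to S·v = 0 and flux bounds).
    from concurrent.futures import ThreadPoolExecutor

    def solve_fba(reaction_id):
        # Placeholder objective value; a real solver would go here.
        return reaction_id, float(reaction_id) * 0.5

    def distributed_fba(reaction_ids, workers=4):
        # Each FBA is independent, so the problem is embarrassingly
        # parallel across threads, processes, or nodes.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return dict(pool.map(solve_fba, reaction_ids))

    results = distributed_fba(range(8))
    print(results[6])  # 3.0
    ```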

  6. Final Report for DOE Award ER25756

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kesselman, Carl

    2014-11-17

    The SciDAC-funded Center for Enabling Distributed Petascale Science (CEDPS) was established to address technical challenges that arise due to the frequent geographic distribution of data producers (in particular, supercomputers and scientific instruments) and data consumers (people and computers) within the DOE laboratory system. Its goal is to produce technical innovations that meet DOE end-user needs for (a) rapid and dependable placement of large quantities of data within a distributed high-performance environment, and (b) the convenient construction of scalable science services that provide for the reliable and high-performance processing of computation and data analysis requests from many remote clients. The Center is also addressing (c) the important problem of troubleshooting these and other related ultra-high-performance distributed activities from the perspective of both performance and functionality.

  7. Implementing Access to Data Distributed on Many Processors

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A reference architecture is defined for an object-oriented implementation of domains, arrays, and distributions written in the programming language Chapel. This technology primarily addresses domains that contain arrays with regular index sets; the low-level implementation details are beyond the scope of this discussion. What is defined is a complete set of object-oriented operators that allows one to perform data distributions for domain arrays involving regular arithmetic index sets. What is unique is that these operators allow arbitrary regions of the arrays to be fragmented and distributed across multiple processors with a single point of access, giving the programmer the illusion that all the elements are collocated on a single processor. Today's massively parallel High Productivity Computing Systems (HPCS) are characterized by a modular structure, with a large number of processing and memory units connected by a high-speed network. Locality of access as well as load balancing are primary concerns in these systems, which are typically used for high-performance scientific computation. Data distributions address these issues by providing a range of methods for spreading large data sets across the components of a system. Over the past two decades, many languages, systems, tools, and libraries have been developed for the support of distributions. Since the performance of data parallel applications is directly influenced by the distribution strategy, users often resort to low-level programming models that allow fine-tuning of the distribution aspects affecting performance but, at the same time, are tedious and error-prone. This technology presents a reusable design of a data-distribution framework for data parallel high-performance applications. Distributions are a means to express locality in systems composed of large numbers of processor and memory components connected by a network. Since distributions have a great effect on the performance of applications, it is important that the distribution strategy is flexible, so its behavior can change depending on the needs of the application. At the same time, high productivity concerns require that the user be shielded from error-prone, tedious details such as communication and synchronization.
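    The "single point of access" idea can be illustrated with a toy Python class (an invented API, far simpler than the Chapel machinery): the data are fragmented into per-processor blocks, yet indexing looks like one flat array.

    ```python
    # Toy single-point-of-access view over block-fragmented data
    # (hypothetical API for illustration only).
    class DistributedArray:
        def __init__(self, data, num_procs):
            block = -(-len(data) // num_procs)  # ceiling division
            # Fragment the data into per-processor blocks.
            self.frags = [data[p * block:(p + 1) * block]
                          for p in range(num_procs)]
            self.block = block

        def __getitem__(self, i):
            # The caller sees one flat array; locality is resolved here.
            return self.frags[i // self.block][i % self.block]

    a = DistributedArray(list(range(10, 22)), num_procs=3)
    print(a[7])  # 17
    ```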

  8. The WorkPlace distributed processing environment

    NASA Technical Reports Server (NTRS)

    Ames, Troy; Henderson, Scott

    1993-01-01

    Real time control problems require robust, high performance solutions. Distributed computing can offer high performance through parallelism and robustness through redundancy. Unfortunately, implementing distributed systems with these characteristics places a significant burden on the applications programmers. Goddard Code 522 has developed WorkPlace to alleviate this burden. WorkPlace is a small, portable, embeddable network interface which automates message routing, failure detection, and re-configuration in response to failures in distributed systems. This paper describes the design and use of WorkPlace, and its application in the construction of a distributed blackboard system.
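    Of the services WorkPlace automates, failure detection is the easiest to sketch. A heartbeat-timeout check in Python (names and threshold are illustrative, not WorkPlace's actual interface):

    ```python
    # Heartbeat-style failure detection of the kind WorkPlace automates
    # (node names and timeout value are illustrative).
    def detect_failures(last_heartbeat, now, timeout=3.0):
        """Return the set of nodes whose last heartbeat is too old."""
        return {node for node, t in last_heartbeat.items()
                if now - t > timeout}

    beats = {"nodeA": 10.0, "nodeB": 6.5, "nodeC": 9.8}
    print(sorted(detect_failures(beats, now=10.0)))  # ['nodeB']
    ```

    On detection, a layer like WorkPlace would then reroute messages away from the suspect node, which is the reconfiguration behavior the abstract describes.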

  9. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    NASA Technical Reports Server (NTRS)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area network (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but it is unusual in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed ATM network, and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high-speed WANs to enable the routine, location-independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
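    The abstract stresses that DPSS applications choose their own data layout. A minimal Python sketch of that idea, with a round-robin stripe function the application could swap for its own (function names are hypothetical):

    ```python
    # Application-chosen striping in the DPSS spirit (invented names).
    def round_robin_layout(block_id, num_servers):
        return block_id % num_servers

    def placement(num_blocks, num_servers, layout=round_robin_layout):
        """Map each server to the list of data blocks it stores."""
        table = {}
        for b in range(num_blocks):
            table.setdefault(layout(b, num_servers), []).append(b)
        return table

    print(placement(8, 3))  # {0: [0, 3, 6], 1: [1, 4, 7], 2: [2, 5]}
    ```

    Passing a different `layout` function is the knob the DPSS design leaves to the application: sequential blocks can be spread for parallel reads or clustered for locality, without changing the storage system itself.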

  10. Efficient High Performance Collective Communication for Distributed Memory Environments

    ERIC Educational Resources Information Center

    Ali, Qasim

    2009-01-01

    Collective communication allows efficient communication and synchronization among a collection of processes, unlike point-to-point communication that only involves a pair of communicating processes. Achieving high performance for both kernels and full-scale applications running on a distributed memory system requires an efficient implementation of…
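    The abstract is truncated, but one classic way collectives achieve efficiency is recursive doubling, which completes an allreduce in O(log p) exchange rounds. A small simulation of the data movement (illustrative only, not tied to this dissertation's implementation):

    ```python
    # Simulation of recursive-doubling allreduce across p ranks
    # (p must be a power of two in this minimal version).
    def allreduce_sum(values):
        p = len(values)
        vals = list(values)
        step = 1
        while step < p:
            nxt = list(vals)
            for rank in range(p):
                partner = rank ^ step  # exchange at distance `step`
                nxt[rank] = vals[rank] + vals[partner]
            vals, step = nxt, step * 2
        return vals  # every rank now holds the global sum

    print(allreduce_sum([1, 2, 3, 4]))  # [10, 10, 10, 10]
    ```

    Four ranks finish in two rounds instead of the three a naive ring would need; the gap widens logarithmically with the process count.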

  11. 30 CFR 75.511 - Low-, medium-, or high-voltage distribution circuits and equipment; repair.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    § 75.511 Low-, medium-, or high-voltage distribution circuits and equipment; repair. [Statutory Provision] No electrical work shall be performed on low-, medium-, or high-voltage...

  12. 30 CFR 75.511 - Low-, medium-, or high-voltage distribution circuits and equipment; repair.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 75.511 Low-, medium-, or high-voltage distribution circuits and equipment; repair. [Statutory Provision] No electrical work shall be performed on low-, medium-, or high-voltage...

  13. 30 CFR 75.511 - Low-, medium-, or high-voltage distribution circuits and equipment; repair.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    § 75.511 Low-, medium-, or high-voltage distribution circuits and equipment; repair. [Statutory Provision] No electrical work shall be performed on low-, medium-, or high-voltage...

  14. 30 CFR 75.511 - Low-, medium-, or high-voltage distribution circuits and equipment; repair.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 75.511 Low-, medium-, or high-voltage distribution circuits and equipment; repair. [Statutory Provision] No electrical work shall be performed on low-, medium-, or high-voltage...

  15. 30 CFR 75.511 - Low-, medium-, or high-voltage distribution circuits and equipment; repair.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 75.511 Low-, medium-, or high-voltage distribution circuits and equipment; repair. [Statutory Provision] No electrical work shall be performed on low-, medium-, or high-voltage...

  16. Demonstration of Cost-Effective, High-Performance Computing at Performance and Reliability Levels Equivalent to a 1994 Vector Supercomputer

    NASA Technical Reports Server (NTRS)

    Babrauckas, Theresa

    2000-01-01

    The Affordable High Performance Computing (AHPC) project demonstrated that high-performance computing based on a distributed network of computer workstations is a cost-effective alternative to vector supercomputers for running CPU and memory intensive design and analysis tools. The AHPC project created an integrated system called a Network Supercomputer. By connecting computer workstations through a network and utilizing the workstations when they are idle, the resulting distributed-workstation environment has the same performance and reliability levels as the Cray C90 vector supercomputer at less than 25 percent of the C90 cost. In fact, the cost comparison between a Cray C90 supercomputer and Sun workstations showed that the number of distributed networked workstations equivalent to a C90 costs approximately 8 percent of the C90.

  17. Mathematical modelling of Bit-Level Architecture using Reciprocal Quantum Logic

    NASA Astrophysics Data System (ADS)

    Narendran, S.; Selvakumar, J.

    2018-04-01

    High-performance computing is in high demand for both speed and energy efficiency. Reciprocal Quantum Logic (RQL) is one of the technologies expected to deliver high speed with zero static power dissipation. RQL uses an AC rather than a DC power supply as input, and provides three sets of basic gates. Series of reciprocal transmission lines are placed between gates to avoid power loss and to achieve high speed. An analytical model of a bit-level architecture is developed using RQL. The major drawback of RQL is area: achieving a proper power supply requires splitters, which occupy a large area. Distributed arithmetic performs vector-vector multiplication in which one vector is constant and the other is a signed variable; each word acts as a binary number, and the words are rearranged and combined to form the distributed system. Distributed arithmetic is widely used in convolution and in high-performance computational devices.
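    The distributed-arithmetic scheme mentioned at the end of the abstract can be made concrete: a dot product with one constant vector is computed bit-serially through a precomputed lookup table. A Python sketch for unsigned inputs (the RQL hardware mapping is beyond this illustration):

    ```python
    # Distributed arithmetic: dot product with constant coefficients,
    # computed bit-serially via a lookup table over input-bit patterns
    # (unsigned inputs only, for simplicity).
    def da_dot(coeffs, xs, nbits=8):
        n = len(coeffs)
        # LUT entry m holds the sum of coefficients selected by the
        # bit pattern m (little-endian over the n inputs).
        lut = [sum(c for c, bit in
                   zip(coeffs, format(m, f"0{n}b")[::-1]) if bit == "1")
               for m in range(1 << n)]
        acc = 0
        for b in range(nbits):  # one LUT access per bit position
            addr = sum(((x >> b) & 1) << k for k, x in enumerate(xs))
            acc += lut[addr] << b
        return acc

    print(da_dot([3, 5, 7], [2, 4, 6]))  # 3*2 + 5*4 + 7*6 = 68
    ```

    The multiplier-free structure, one table lookup and one shift-add per bit, is why distributed arithmetic suits convolution hardware.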

  18. High performance and highly reliable Raman-based distributed temperature sensors based on correlation-coded OTDR and multimode graded-index fibers

    NASA Astrophysics Data System (ADS)

    Soto, M. A.; Sahu, P. K.; Faralli, S.; Sacchi, G.; Bolognini, G.; Di Pasquale, F.; Nebendahl, B.; Rueck, C.

    2007-07-01

    The performance of distributed temperature sensor systems based on spontaneous Raman scattering and coded OTDR are investigated. The evaluated DTS system, which is based on correlation coding, uses graded-index multimode fibers, operates over short-to-medium distances (up to 8 km) with high spatial and temperature resolutions (better than 1 m and 0.3 K at 4 km distance with 10 min measuring time) and high repeatability even throughout a wide temperature range.

  19. High voltage systems (tube-type microwave)/low voltage system (solid-state microwave) power distribution

    NASA Technical Reports Server (NTRS)

    Nussberger, A. A.; Woodcock, G. R.

    1980-01-01

    SPS satellite power distribution systems are described. The reference Satellite Power System (SPS) concept utilizes high-voltage klystrons to convert the onboard satellite power from dc to RF for transmission to the ground receiving station. The solar array generates this required high voltage and the power is delivered to the klystrons through a power distribution subsystem. An array switching of solar cell submodules is used to maintain bus voltage regulation. Individual klystron dc voltage conversion is performed by centralized converters. The on-board data processing system performs the necessary switching of submodules to maintain voltage regulation. Electrical power output from the solar panels is fed via switch gears into feeder buses and then into main distribution buses to the antenna. Power also is distributed to batteries so that critical functions can be provided through solar eclipses.

  20. Particle simulation on heterogeneous distributed supercomputers

    NASA Technical Reports Server (NTRS)

    Becker, Jeffrey C.; Dagum, Leonardo

    1993-01-01

    We describe the implementation and performance of a three dimensional particle simulation distributed between a Thinking Machines CM-2 and a Cray Y-MP. These are connected by a combination of two high-speed networks: a high-performance parallel interface (HIPPI) and an optical network (UltraNet). This is the first application to use this configuration at NASA Ames Research Center. We describe our experience implementing and using the application and report the results of several timing measurements. We show that the distribution of applications across disparate supercomputing platforms is feasible and has reasonable performance. In addition, several practical aspects of the computing environment are discussed.

  1. Implementing a High Performance Work Place in the Distribution and Logistics Industry: Recommendations for Leadership & Team Member Development

    ERIC Educational Resources Information Center

    McCann, Laura Harding

    2012-01-01

    Leadership development and employee engagement are two elements critical to the success of organizations. In response to growth opportunities, our Distribution and Logistics company set on a course to implement High Performance Work Place to meet the leadership and employee engagement needs, and to find methods for improving work processes. This…

  2. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    PubMed Central

    Dondo Gazzano, Julio; Sanchez Molina, Francisco; Rincon, Fernando; López, Juan Carlos

    2015-01-01

    FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors they are able to achieve, their reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, some difficulties in using reconfigurable platforms as accelerators still need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for deploying computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications while simplifying the development process. PMID:25874241

  3. OpenCluster: A Flexible Distributed Computing Framework for Astronomical Data Processing

    NASA Astrophysics Data System (ADS)

    Wei, Shoulin; Wang, Feng; Deng, Hui; Liu, Cuiyin; Dai, Wei; Liang, Bo; Mei, Ying; Shi, Congming; Liu, Yingbo; Wu, Jingping

    2017-02-01

    The volume of data generated by modern astronomical telescopes is extremely large and rapidly growing. However, current high-performance data processing architectures/frameworks are not well suited for astronomers because of their limitations and programming difficulties. In this paper, we therefore present OpenCluster, an open-source distributed computing framework to support rapidly developing high-performance processing pipelines of astronomical big data. We first detail the OpenCluster design principles and implementations and present the APIs facilitated by the framework. We then demonstrate a case in which OpenCluster is used to resolve complex data processing problems for developing a pipeline for the Mingantu Ultrawide Spectral Radioheliograph. Finally, we present our OpenCluster performance evaluation. Overall, OpenCluster provides not only high fault tolerance and simple programming interfaces, but also a flexible means of scaling up the number of interacting entities. OpenCluster thereby provides an easily integrated distributed computing framework for quickly developing a high-performance data processing system of astronomical telescopes and for significantly reducing software development expenses.

  4. Study of Solid State Drives performance in PROOF distributed analysis system

    NASA Astrophysics Data System (ADS)

    Panitkin, S. Y.; Ernst, M.; Petkus, R.; Rind, O.; Wenaus, T.

    2010-04-01

    Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. They also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility (PROOF) is a distributed analysis system which makes it possible to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O intensive analysis scenarios. In particular we will discuss PROOF system performance scaling with the number of simultaneously running analysis jobs.

  5. Cooperative high-performance storage in the accelerated strategic computing initiative

    NASA Technical Reports Server (NTRS)

    Gary, Mark; Howard, Barry; Louis, Steve; Minuzzo, Kim; Seager, Mark

    1996-01-01

    The use and acceptance of new high-performance, parallel computing platforms will be impeded by the absence of an infrastructure capable of supporting orders-of-magnitude improvement in hierarchical storage and high-speed I/O (Input/Output). The distribution of these high-performance platforms and supporting infrastructures across a wide-area network further compounds this problem. We describe an architectural design and phased implementation plan for a distributed, Cooperative Storage Environment (CSE) to achieve the necessary performance, user transparency, site autonomy, communication, and security features needed to support the Accelerated Strategic Computing Initiative (ASCI). ASCI is a Department of Energy (DOE) program attempting to apply terascale platforms and Problem-Solving Environments (PSEs) toward real-world computational modeling and simulation problems. The ASCI mission must be carried out through a unified, multilaboratory effort, and will require highly secure, efficient access to vast amounts of data. The CSE provides a logically simple, geographically distributed, storage infrastructure of semi-autonomous cooperating sites to meet the strategic ASCI PSE goal of high-performance data storage and access at the user desktop.

  6. Effect of outboard vertical-fin position and orientation on the low-speed aerodynamic performance of highly swept wings. [supersonic cruise aircraft research

    NASA Technical Reports Server (NTRS)

    Johnson, V. S.; Coe, P. L., Jr.

    1979-01-01

    A theoretical study was conducted to determine the potential low-speed performance improvements which can be achieved by altering the position and orientation of the outboard vertical fins of low-aspect-ratio highly swept wings. Results show that the magnitude of the performance improvements is solely a function of the span-load distribution. Both the vertical-fin-chordwise position and toe angle provided effective means for adjusting the overall span-load distribution.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercier, C.W.

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  8. Automatic selection of dynamic data partitioning schemes for distributed memory multicomputers

    NASA Technical Reports Server (NTRS)

    Palermo, Daniel J.; Banerjee, Prithviraj

    1995-01-01

    For distributed memory multicomputers such as the Intel Paragon, the IBM SP-2, the NCUBE/2, and the Thinking Machines CM-5, the quality of the data partitioning for a given application is crucial to obtaining high performance. This task has traditionally been the user's responsibility, but in recent years much effort has been directed to automating the selection of data partitioning schemes. Several researchers have proposed systems that are able to produce data distributions that remain in effect for the entire execution of an application. For complex programs, however, such static data distributions may be insufficient to obtain acceptable performance. The selection of distributions that dynamically change over the course of a program's execution adds another dimension to the data partitioning problem. In this paper, we present a technique that can be used to automatically determine which partitionings are most beneficial over specific sections of a program while taking into account the added overhead of performing redistribution. This system is being built as part of the PARADIGM (PARAllelizing compiler for DIstributed memory General-purpose Multicomputers) project at the University of Illinois. The complete system will provide a fully automated means to parallelize programs written in a serial programming model obtaining high performance on a wide range of distributed-memory multicomputers.
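    The selection problem PARADIGM addresses, choosing a partitioning per program phase while charging for redistribution between phases, reduces to a small dynamic program. A toy Python sketch with made-up costs:

    ```python
    # Toy phase-by-phase partitioning selection with redistribution
    # overhead (scheme names and costs are invented for illustration).
    def best_schedule(phase_costs, redist_cost):
        """phase_costs: one {scheme: execution_cost} dict per phase.
        A flat redist_cost is charged whenever the scheme changes."""
        best = dict(phase_costs[0])  # cheapest total ending in each scheme
        for costs in phase_costs[1:]:
            best = {s: c + min(best[p] + (0 if p == s else redist_cost)
                               for p in best)
                    for s, c in costs.items()}
        return min(best.values())

    phases = [{"block": 4, "cyclic": 9},
              {"block": 9, "cyclic": 2},
              {"block": 3, "cyclic": 8}]
    print(best_schedule(phases, redist_cost=2))  # 13
    ```

    Here staying with "block" throughout costs 16, while switching to "cyclic" for the middle phase costs 4 + (2+2) + (2+3) = 13, so the dynamic redistribution pays off despite its overhead, which is exactly the trade-off the paper's system evaluates automatically.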

  9. Jungle Computing: Distributed Supercomputing Beyond Clusters, Grids, and Clouds

    NASA Astrophysics Data System (ADS)

    Seinstra, Frank J.; Maassen, Jason; van Nieuwpoort, Rob V.; Drost, Niels; van Kessel, Timo; van Werkhoven, Ben; Urbani, Jacopo; Jacobs, Ceriel; Kielmann, Thilo; Bal, Henri E.

    In recent years, the application of high-performance and distributed computing in scientific practice has become increasingly widespread. Among the platforms most widely available to scientists are clusters, grids, and cloud systems. Such infrastructures are currently undergoing revolutionary change due to the integration of many-core technologies, providing orders-of-magnitude speed improvements for selected compute kernels. With high-performance and distributed computing systems thus becoming more heterogeneous and hierarchical, programming complexity is vastly increased. Further complexities arise because the urgent desire for scalability, and issues including data distribution, software heterogeneity, and ad hoc hardware availability, commonly force scientists into simultaneous use of multiple platforms (e.g., clusters, grids, and clouds used concurrently). A true computing jungle.

  10. High-Speed, Low-Cost Workstation for Computation-Intensive Statistics. Phase 1

    DTIC Science & Technology

    1990-06-20

    …routine implementation and performance. The two compiled versions given in the table were coded in an attempt to obtain an optimized compiled version…level statistics and linear algebra routines (BSAS and BLAS) that have been prototyped in this study. For each routine, both the C code (Turbo C)… High-performance and low-cost…

  11. Study on Walking Training System using High-Performance Shoes constructed with Rubber Elements

    NASA Astrophysics Data System (ADS)

    Hayakawa, Y.; Kawanaka, S.; Kanezaki, K.; Doi, S.

    2016-09-01

    The number of accidental falls among the elderly has been increasing as society has aged. The main factor is a deteriorating sense of balance due to declining physical performance; another major factor is that the elderly tend to walk bowlegged, with the body's center of gravity swinging from side to side during walking. To find ways to counteract falls among the elderly, we developed a walking training system that treats this gap in balance. We designed High-Performance Shoes that show the status of a person's balance while walking, and produced walking assistance from an insole whose stiffness, matched to the pressure distribution of the human sole, can be changed to correct the person's walking status. We constructed our High-Performance Shoes to detect pressure distribution during walking. Comparing normal sole distribution patterns with corrected ones, we confirmed that our assistance system helped change the user's posture, thereby reducing falls among the elderly.

  12. ISIS and META projects

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth; Cooper, Robert; Marzullo, Keith

    1990-01-01

    The ISIS project has developed a new methodology, virtual synchrony, for writing robust distributed software. High performance multicast, large scale applications, and wide area networks are the focus of interest. Several interesting applications that exploit the strengths of ISIS, including an NFS-compatible replicated file system, are being developed. The META project concerns distributed control in a soft real-time environment incorporating feedback. This domain encompasses examples as diverse as monitoring inventory and consumption on a factory floor, and performing load-balancing on a distributed computing system. One of the first uses of META is for distributed application management: the tasks of configuring a distributed program, dynamically adapting to failures, and monitoring its performance. Recent progress and current plans are reported.

  13. Wheelchair Mobility Performance enhancement by Changing Wheelchair Properties; What is the Effect of Grip, Seat Height and Mass?

    PubMed

    van der Slikke, Rienk M A; de Witte, Annemarie M H; Berger, Monique A M; Bregman, Daan J J; Veeger, Dirk Jan H E J

    2018-02-12

    The purpose of this study was to provide insight into the effect of wheelchair settings on wheelchair mobility performance. Twenty elite wheelchair basketball athletes of low (n=10) and high (n=10) classification were tested in a wheelchair-basketball-directed field test. Athletes performed the test in their own wheelchair, which was modified for five additional conditions regarding seat height (high - low), mass (central - distributed) and grip. The previously developed, inertial sensor based wheelchair mobility performance monitor was used to extract wheelchair kinematics in all conditions. Adding mass showed the most effect on wheelchair mobility performance, with a reduced average acceleration across all activities. Once distributed, additional mass also reduced maximal rotational speed and rotational acceleration. Elevating the seat height affected several performance aspects in sprinting and turning, whereas lowering the seat height influenced performance minimally. Increased rim grip did not alter performance. No differences in response were evident between low and high classified athletes. The wheelchair mobility performance monitor proved sensitive enough to detect performance differences due to the small changes in wheelchair configuration made. Distributed additional mass had the most effect on wheelchair mobility performance, whereas additional grip had the least effect of the conditions tested. Performance effects appear similar for both low and high classified athletes. Athletes, coaches and wheelchair experts are thus provided with insight into the performance effects of key wheelchair settings, and are offered a proven sensitive method to apply in sports practice in their search for the best wheelchair-athlete combination.

  14. Wall shear stress distributions on stented patent ductus arteriosus

    NASA Astrophysics Data System (ADS)

    Kori, Mohamad Ikhwan; Jamalruhanordin, Fara Lyana; Taib, Ishkrizat; Mohammed, Akmal Nizam; Abdullah, Mohammad Kamil; Ariffin, Ahmad Mubarak Tajul; Osman, Kahar

    2017-04-01

    The formation of thrombosis due to hemodynamic conditions after stent implantation in a patent ductus arteriosus (PDA) can drive the development of restenosis. Thrombus formation is significantly related to the distribution of wall shear stress (WSS) on the arterial wall. Thus, the aim of this study is to investigate the distribution of WSS on the arterial wall after stent insertion. Three-dimensional models of a patent ductus arteriosus fitted with different types of commercial stents were constructed, and computational modelling was used to calculate the WSS distributions on the stented PDA. Hemodynamic parameters such as high WSS and low WSS (WSSlow) are considered in this study. The results show that the PDA with the Type III stent has better hemodynamic performance than the other stents: it has the smallest distribution of WSSlow as well as of WSS above 20 dyne/cm2. The PDA with the Type II stent showed the largest distribution of WSS above 20 dyne/cm2, indicating a high possibility of atherosclerosis developing; its largest distribution of WSSlow likewise indicates a high possibility of thrombus formation. In conclusion, the stented PDA model with the smallest distributions of WSSlow and of WSS above 20 dyne/cm2 is considered to perform best hemodynamically compared with the other stents.

  15. R&D100: Lightweight Distributed Metric Service

    ScienceCinema

    Gentile, Ann; Brandt, Jim; Tucker, Tom; Showerman, Mike

    2018-06-12

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  16. R&D100: Lightweight Distributed Metric Service

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gentile, Ann; Brandt, Jim; Tucker, Tom

    2015-11-19

    On today's High Performance Computing platforms, the complexity of applications and configurations makes efficient use of resources difficult. The Lightweight Distributed Metric Service (LDMS) is monitoring software developed by Sandia National Laboratories to provide detailed metrics of system performance. LDMS provides collection, transport, and storage of data from extreme-scale systems at fidelities and timescales to provide understanding of application and system performance with no statistically significant impact on application performance.

  17. Method of making a high performance ultracapacitor

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    2000-07-26

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.
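    The quoted ratings imply a minimum full-power discharge time. A quick sanity check, assuming the energy and power ratings apply simultaneously:

```python
def full_power_discharge_time_s(specific_energy_wh_per_kg, specific_power_w_per_kg):
    # time (h) = energy / power; convert hours to seconds
    return specific_energy_wh_per_kg / specific_power_w_per_kg * 3600.0

t = full_power_discharge_time_s(5.0, 600.0)
print(f"{t:.0f} s")  # 5 Wh/kg at 600 W/kg sustains rated power for 30 s
```

    That is, 5 Wh/kg divided by 600 W/kg is 1/120 h, so the capacitor can deliver its rated power for about 30 seconds, squarely in the ultracapacitor (rather than battery) regime.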

  18. Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

    PubMed

    Choi, Hyungsuk; Choi, Woohyuk; Quan, Tran Minh; Hildebrand, David G C; Pfister, Hanspeter; Jeong, Won-Ki

    2014-12-01

    As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.

  19. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, which may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth-of-anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the observer's assessment of alertness/sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity, 59% (chance sensitivity: 17%), was found for ARMA(2,1) models, while Higuchi fractal dimension achieved 52%; however, no statistically significant difference was observed. For the same ARMA case, there was no statistical difference when medians were used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term-average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.
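    The classification scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it fits plain AR(2) models (dropping the MA term of the paper's ARMA(2,1) models) by least squares over a sliding window, and compares parameter clouds by the distance between their means rather than a full distributional distance.

```python
import numpy as np

def ar_params(x, p=2):
    # Least-squares AR(p) fit; the MA term of the paper's ARMA(2,1)
    # models is omitted to keep this sketch short.
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

def param_cloud(x, win=100, step=50, p=2):
    # Distribution of model parameters over a sliding window.
    return np.array([ar_params(x[i:i + win], p)
                     for i in range(0, len(x) - win + 1, step)])

def classify(cloud, refs):
    # Crude distributional comparison: distance between cloud means
    # (a stand-in for a proper divergence between distributions).
    d = {k: np.linalg.norm(cloud.mean(0) - r.mean(0)) for k, r in refs.items()}
    return min(d, key=d.get)
```

    With reference clouds built from labeled segments for each OAA/S level, a new EEG segment is assigned the label of the nearest reference distribution.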

  20. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    ERIC Educational Resources Information Center

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  1. Generalist genes and high cognitive abilities.

    PubMed

    Haworth, Claire M A; Dale, Philip S; Plomin, Robert

    2009-07-01

    The concept of generalist genes operating across diverse domains of cognitive abilities is now widely accepted. Much less is known about the etiology of the high extreme of performance. Is there more specialization at the high extreme? Using a representative sample of 4,000 12-year-old twin pairs from the UK Twins Early Development Study, we investigated the genetic and environmental overlap between web-based tests of general cognitive ability, reading, mathematics and language performance for the top 15% of the distribution using DF extremes analysis. Generalist genes are just as evident at the high extremes of performance as they are for the entire distribution of abilities and for cognitive disabilities. However, a smaller proportion of the phenotypic intercorrelations appears to be explained by genetic influences for high abilities.

  2. Generalist genes and high cognitive abilities

    PubMed Central

    Haworth, Claire M.A.; Dale, Philip S.; Plomin, Robert

    2014-01-01

    The concept of generalist genes operating across diverse domains of cognitive abilities is now widely accepted. Much less is known about the etiology of the high extreme of performance. Is there more specialization at the high extreme? Using a representative sample of 4000 12-year-old twin pairs from the UK Twins Early Development Study, we investigated the genetic and environmental overlap between web-based tests of general cognitive ability, reading, mathematics and language performance for the top 15% of the distribution using DF extremes analysis. Generalist genes are just as evident at the high extremes of performance as they are for the entire distribution of abilities and for cognitive disabilities. However, a smaller proportion of the phenotypic intercorrelations appears to be explained by genetic influences for high abilities. PMID:19377870

  3. Effect of substrate morphology slope distributions on light scattering, nc-Si:H film growth, and solar cell performance.

    PubMed

    Kim, Do Yun; Santbergen, Rudi; Jäger, Klaus; Sever, Martin; Krč, Janez; Topič, Marko; Hänni, Simon; Zhang, Chao; Heidt, Anna; Meier, Matthias; van Swaaij, René A C M M; Zeman, Miro

    2014-12-24

    Thin-film silicon solar cells are often deposited on textured ZnO substrates. Solar-cell performance is strongly correlated with the substrate morphology, as this morphology determines light scattering, defective-region formation, and crystalline growth of hydrogenated nanocrystalline silicon (nc-Si:H). Our objective is to gain deeper insight into these correlations using the slope distribution, rms roughness (σ(rms)) and correlation length (lc) of textured substrates. A wide range of surface morphologies was obtained by Ar plasma treatment and wet etching of textured and flat as-deposited ZnO substrates. The σ(rms), lc and slope distribution were deduced from AFM scans. In particular, the slope distribution of the substrates was represented in an efficient way such that the light scattering and the film growth direction can be estimated more directly at the same time. We observed that, besides a high σ(rms), a high slope angle is beneficial for obtaining high haze and scattering of light at larger angles, resulting in a higher short-circuit current density of nc-Si:H solar cells. However, a high slope angle can also promote the creation of defective regions in nc-Si:H films grown on the substrate. It is also found that the crystalline fraction of nc-Si:H solar cells has a stronger correlation with the slope distributions than with the σ(rms) of the substrates. In this study, we successfully correlate all these observations with the solar-cell performance by using the slope distribution of the substrates.
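    Both morphology descriptors can be computed directly from an AFM height map. A minimal sketch, assuming a uniform grid spacing and central differences for the surface gradients (the paper does not specify its numerical procedure):

```python
import numpy as np

def roughness_and_slopes(h, dx=1.0):
    """h: 2-D height map (same length units as dx). Returns the rms
    roughness and the local slope angle (degrees) at every grid point,
    from which a slope-angle histogram can be built."""
    sigma_rms = np.std(h)
    gy, gx = np.gradient(h, dx)                    # surface gradients
    slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))
    return sigma_rms, slope_deg
```

    As a check, a plane tilted by 30 degrees yields a slope angle of 30 degrees at every point, so the slope distribution collapses to a single bin.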

  4. Global Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnamoorthy, Sriram; Daily, Jeffrey A.; Vishnu, Abhinav

    2015-11-01

    Global Arrays (GA) is a distributed-memory programming model that allows for shared-memory-style programming combined with one-sided communication, creating a set of tools that combine high performance with ease of use. GA exposes a relatively straightforward programming abstraction while supporting fully distributed data structures, locality of reference, and high-performance communication. GA was originally formulated in the early 1990s to provide a communication layer for the Northwest Chemistry (NWChem) suite of chemistry modeling codes that was being developed concurrently.
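    The programming model GA exposes, a globally indexed array whose blocks live on different processes and are accessed with one-sided put/get, can be caricatured in a few lines. This toy class is purely illustrative of the model and is not the real GA API:

```python
import numpy as np

class ToyGlobalArray:
    """Toy model of the GA idea: a 1-D global index space, block-
    partitioned over 'ranks', with one-sided put/get by global index.
    A real implementation would do remote memory access; here every
    block lives in one Python process."""
    def __init__(self, n, nranks):
        self.block = -(-n // nranks)               # ceil(n / nranks)
        self.local = [np.zeros(self.block) for _ in range(nranks)]
    def owner(self, i):
        # Locality of reference: which rank holds global index i?
        return divmod(i, self.block)               # (rank, local offset)
    def put(self, i, v):                           # one-sided write
        r, off = self.owner(i)
        self.local[r][off] = v
    def get(self, i):                              # one-sided read
        r, off = self.owner(i)
        return self.local[r][off]
```

    The point of the model is that any process can put/get any global index without the owner's participation, while `owner()` lets performance-conscious code keep work near its data.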

  5. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process for observing and improving the reliability and performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events is generated by system components during their execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing the status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and can be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding end-point management applications, such as debugging and reactive control tools, to improve application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and to minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and performance of the Interactive Remote Instruction (IRI) system, a large-scale distributed system for collaborative distance learning.
The filtering mechanism is an intrinsic component integrated with the monitoring architecture to reduce the volume of event traffic flow in the system, and thereby the intrusiveness of the monitoring process. We are developing an event filtering architecture to efficiently process the large volume of event traffic generated by LSD systems (such as distributed interactive applications). This filtering architecture is used to monitor a collaborative distance learning application to obtain debugging and feedback information. Our architecture supports the dynamic (re)configuration and optimization of event filters in large-scale distributed systems. Our work makes a major contribution by (1) surveying and evaluating existing event filtering mechanisms for monitoring LSD systems and (2) devising an integrated, scalable, high-performance event filtering architecture that spans several key application domains, presenting techniques to improve functionality, performance and scalability. This paper describes the primary characteristics and challenges of developing high-performance event filtering for monitoring LSD systems. We survey existing event filtering mechanisms and explain the key characteristics of each technique. In addition, we discuss the limitations of existing event filtering mechanisms and outline how our architecture improves key aspects of event filtering.
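    The subscription-based filtering idea can be sketched as a tiny publish/subscribe monitor in which only events matching a subscriber's predicate are forwarded; the class and method names below are illustrative, not the paper's actual architecture:

```python
class FilteringMonitor:
    """Sketch of subscription-based event filtering: subscribers
    register predicates, and only matching events are delivered,
    reducing the monitoring traffic that leaves the system."""
    def __init__(self):
        self.subs = []                    # (predicate, callback) pairs
        self.seen = self.delivered = 0    # traffic counters
    def subscribe(self, predicate, callback):
        self.subs.append((predicate, callback))
    def publish(self, event):
        self.seen += 1
        for pred, cb in self.subs:
            if pred(event):               # filter applied at the source
                self.delivered += 1
                cb(event)
```

    The ratio of `delivered` to `seen` is a crude measure of how much event traffic the filter suppresses, which is exactly the intrusiveness reduction the architecture aims for.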

  6. Investigation of properties of high-performance fiber-reinforced concrete : very early strength, toughness, permeability, and fiber distribution : final report.

    DOT National Transportation Integrated Search

    2017-01-01

    Concrete cracking, high permeability, and leaking joints allow for intrusion of harmful solutions, resulting in concrete deterioration and corrosion of reinforcement in structures. The development of durable, high-performance concretes with limited c...

  7. Ultrascale collaborative visualization using a display-rich global cyberinfrastructure.

    PubMed

    Jeong, Byungil; Leigh, Jason; Johnson, Andrew; Renambot, Luc; Brown, Maxine; Jagodic, Ratko; Nam, Sungwon; Hur, Hyejung

    2010-01-01

    The scalable adaptive graphics environment (SAGE) is high-performance graphics middleware for ultrascale collaborative visualization using a display-rich global cyberinfrastructure. Dozens of sites worldwide use this cyberinfrastructure middleware, which connects high-performance-computing resources over high-speed networks to distributed ultraresolution displays.

  8. Low-cost high performance distributed data storage for multi-channel observations

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Deng, Hui; Ji, Kai-fan; Dai, Wei; Wei, Shou-lin; Liang, Bo; Zhang, Xiao-li

    2015-10-01

    The New Vacuum Solar Telescope (NVST) is a 1-m solar telescope that aims to observe fine structures in both the photosphere and the chromosphere of the Sun. The observational data acquired simultaneously from one channel for the chromosphere and two channels for the photosphere pose great challenges to NVST data storage. The multi-channel instruments of NVST, including scientific cameras and multi-band spectrometers, generate at least 3 terabytes of data per day and require high access performance while storing massive numbers of short-exposure images. It is worth studying and implementing a storage system for NVST that balances data availability, access performance and development cost. In this paper, we build a distributed data storage system (DDSS) for NVST and evaluate in depth the availability of real-time data storage in a distributed computing environment. The experimental results show that two factors, the number of concurrent reads/writes and the file size, are critically important for improving data access performance in a distributed environment. Based on these two factors, three strategies for storing FITS files are presented and implemented to ensure the access performance of the DDSS under concurrent multi-host writes and reads. Production use of the DDSS proves that the system is capable of meeting NVST's requirements for real-time, high-performance storage of observational data. Our study of the DDSS is the first attempt for modern astronomical telescope systems to store real-time observational data on a low-cost distributed system. The research results and corresponding techniques of the DDSS provide a new option for designing real-time massive astronomical data storage systems and will serve as a reference for future astronomical data storage.
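    One plausible reading of the file-size strategy (this is an assumption for illustration, not the paper's actual on-disk format) is to aggregate many small exposure files into one larger bundle, so that a single large sequential write replaces many small ones:

```python
import io
import struct

def bundle(records):
    """Pack many small byte records into one blob using a simple
    length-prefixed layout. Hypothetical sketch: the real DDSS
    strategies for FITS files are not specified in this abstract."""
    buf = io.BytesIO()
    for rec in records:
        buf.write(struct.pack("<I", len(rec)))  # 4-byte little-endian length
        buf.write(rec)
    return buf.getvalue()

def unbundle(blob):
    """Recover the original records from a bundled blob."""
    out, off = [], 0
    while off < len(blob):
        (n,) = struct.unpack_from("<I", blob, off)
        off += 4
        out.append(blob[off:off + n])
        off += n
    return out
```

    Writing one multi-megabyte bundle amortizes per-file metadata and seek overhead, which is why file size shows up as a first-order factor in distributed storage throughput.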

  9. Performance Evaluation of Communication Software Systems for Distributed Computing

    NASA Technical Reports Server (NTRS)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: the BSD socket programming interface; IONA's Orbix, an implementation of the CORBA specification; and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  10. Livermore Big Artificial Neural Network Toolkit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essen, Brian Van; Jacobs, Sam; Kim, Hyojin

    2016-07-01

    LBANN is a toolkit designed to train artificial neural networks efficiently on high performance computing architectures. It is optimized to take advantage of key High Performance Computing features to accelerate neural network training; specifically, it is optimized for low-latency, high-bandwidth interconnects, node-local NVRAM, node-local GPU accelerators, and high-bandwidth parallel file systems. It is built on top of the open-source Elemental distributed-memory dense and sparse-direct linear algebra and optimization library, which is released under the BSD license. The algorithms contained within LBANN are drawn from the academic literature and implemented to work within a distributed-memory framework.

  11. Multilinear Computing and Multilinear Algebraic Geometry

    DTIC Science & Technology

    2016-08-10

    SUBJECT TERMS: tensors, multilinearity, algebraic geometry, numerical computations, computational tractability, high performance. DISTRIBUTION A: Distribution approved for public release.

  12. Design and Fabrication of High-Performance LWIR Photodetectors Based on Type-II Superlattices

    DTIC Science & Technology

    2017-08-11

    Sponsor/monitor's report number: AFRL-RV-PS-TR-2017-0090 (Kirtland AFB, NM 87117-5776). DISTRIBUTION A: Approved for public release; distribution is unlimited.

  13. Pressure distribution data from tests of 2.29-meter (7.5-ft.) span EET high-lift research model in Langley 4- by 7-meter tunnel

    NASA Technical Reports Server (NTRS)

    Morgan, H. L., Jr.

    1982-01-01

    A 2.29 m (7.5 ft.) span high-lift research model equipped with a full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Langley 4- by 7-Meter Tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program, which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data are presented without analysis or discussion.

  14. Pressure distribution data from tests of 2.29 M (7.5 feet) span EET high-lift transport aircraft model in the Ames 12-foot pressure tunnel

    NASA Technical Reports Server (NTRS)

    Kjelgaard, S. O.; Morgan, H. L., Jr.

    1983-01-01

    A high-lift transport aircraft model equipped with full-span leading-edge slat and part-span double-slotted trailing-edge flap was tested in the Ames 12-ft pressure tunnel to determine the low-speed performance characteristics of a representative high-aspect-ratio supercritical wing. These tests were performed in support of the Energy Efficient Transport (EET) program which is one element of the Aircraft Energy Efficiency (ACEE) project. Static longitudinal forces and moments and chordwise pressure distributions at three spanwise stations were measured for cruise, climb, two take-off flap, and two landing flap wing configurations. The tabulated and plotted pressure distribution data is presented without analysis or discussion.

  15. Distributed fiber optic sensor-enhanced detection and prediction of shrinkage-induced delamination of ultra-high-performance concrete overlay

    NASA Astrophysics Data System (ADS)

    Bao, Yi; Valipour, Mahdi; Meng, Weina; Khayat, Kamal H.; Chen, Genda

    2017-08-01

    This study develops a delamination detection system for smart ultra-high-performance concrete (UHPC) overlays using a fully distributed fiber optic sensor. Three 450 mm (length) × 200 mm (width) × 25 mm (thickness) UHPC overlays were cast over an existing 200 mm thick concrete substrate. The initiation and propagation of delamination due to early-age shrinkage of the UHPC overlay were detected as sudden increases, and their spatial extension, in the distribution of shrinkage-induced strains measured by the sensor, which is based on pulse pre-pump Brillouin optical time domain analysis. The distributed sensor is demonstrated to be effective in detecting delamination openings ranging from microns to hundreds of microns. A three-dimensional finite element model with experimentally determined material properties is proposed to understand the complete delamination process measured by the distributed sensor, and is validated against the distributed sensor data. The finite element model, with cohesive elements for the overlay-substrate interface, can predict the complete delamination process.
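    Flagging delamination as a sudden increase in the spatial strain profile can be sketched as a threshold on the difference between adjacent sensing points. The function below is a minimal stand-in for the paper's detection procedure, with an assumed threshold parameter:

```python
import numpy as np

def find_strain_jumps(strain, positions, threshold):
    """Return the positions where the distributed strain profile jumps
    by more than `threshold` between adjacent sensing points -- a crude
    proxy for delamination onset. `strain` and `positions` are 1-D
    arrays of equal length along the fiber."""
    jumps = np.abs(np.diff(strain)) > threshold   # point-to-point change
    return positions[1:][jumps]                   # downstream edge of each jump
```

    A real system would additionally track how these flagged locations extend over time, since the paper identifies delamination growth with the spatial spread of the strain anomaly.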

  16. Effect of through-plane polytetrafluoroethylene distribution in gas diffusion layers on performance of proton exchange membrane fuel cells

    NASA Astrophysics Data System (ADS)

    Ito, Hiroshi; Iwamura, Takuya; Someya, Satoshi; Munakata, Tetsuo; Nakano, Akihiro; Heo, Yun; Ishida, Masayoshi; Nakajima, Hironori; Kitahara, Tatsumi

    2016-02-01

    This experimental study identifies the effect of the through-plane polytetrafluoroethylene (PTFE) distribution in the gas diffusion backing (GDB) on the performance of proton exchange membrane fuel cells (PEMFCs). PTFE drying under vacuum created a relatively uniform PTFE distribution in the GDB compared to drying under atmospheric pressure. Carbon paper samples with different PTFE distributions, due to the difference in drying conditions, were prepared and used for the cathode gas diffusion layer (GDL) of PEMFCs. The effect of applying a microporous layer (MPL) to these samples was also investigated. The current density (i) - voltage (V) characteristics of these PEMFCs, measured under high relative humidity conditions, clearly showed that, with or without an MPL, the cell using the GDL with PTFE dried under vacuum performed better than the one dried under atmospheric conditions. It is suggested that this improved performance is caused by the efficient transport of liquid water through the GDB due to the uniform distribution of PTFE.

  17. Study on the flow nonuniformity in a high capacity Stirling pulse tube cryocooler

    NASA Astrophysics Data System (ADS)

    You, X.; Zhi, X.; Duan, C.; Jiang, X.; Qiu, L.; Li, J.

    2017-12-01

    High-capacity Stirling-type pulse tube cryocoolers (SPTCs) have promising applications in high-temperature superconducting motors and gas liquefaction. However, as cooling capacity increases, their performance deviates from well-accepted one-dimensional model simulations, such as Sage and Regen, mainly due to strong flow-field nonuniformity. In this study, several flow straighteners placed at both ends of the pulse tube are investigated to improve the flow distribution. A two-dimensional model of the pulse tube based on computational fluid dynamics (CFD) has been built to study the flow distribution in the pulse tube with different flow straighteners, including copper screens, copper slots, a taper transition and a tapered stainless slot. An SPTC setup with more than one hundred watts of cooling power at 80 K has been built and tested, and the flow straighteners mentioned above have been applied and evaluated. The results show that with the best flow straightener the cooling performance of the SPTC can be significantly improved. Both CFD simulation and experiment show that the straighteners affect the flow distribution and the performance of the high-capacity SPTC.

  18. The effect of microstructure on the performance of Li-ion porous electrodes

    NASA Astrophysics Data System (ADS)

    Chung, Ding-Wen

    By combining X-ray tomography data and computer-generated porous electrodes, the impact of microstructure on the energy and power density of lithium-ion batteries is analyzed. Specifically, for commercial LiMn2O4 electrodes, results indicate that a broad particle size distribution of active material delivers up to two times higher energy density than monodisperse-sized particles at low discharge rates, while a monodisperse particle size distribution delivers the highest energy and power density at high discharge rates. The limits of traditionally used microstructural properties such as tortuosity, reactive area density, particle surface roughness, and morphological anisotropy were tested against the degree of particle size polydispersity, thus enabling the identification of improved porous architectures. The effects of critical battery processing parameters, such as layer compaction and carbon black, were also rationalized in the context of electrode performance. While a monodisperse particle size distribution exhibits the lowest possible tortuosity and three times higher surface area per unit volume with respect to an electrode composed of a polydisperse particle size distribution, comparable performance can be achieved by polydisperse particle size distributions with degrees of polydispersity of less than 0.2 of the particle size standard deviation. The use of non-spherical particles raises the tortuosity by as much as three hundred percent, which considerably lowers the power performance. However, favorably aligned particles can maximize power performance, particularly for high-discharge-rate applications.
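    Taking the degree of polydispersity as the coefficient of variation of the particle-size distribution (an assumed reading of the "0.2 of particle size standard deviation" criterion, since the thesis abstract does not define it precisely), the check is a one-liner:

```python
import numpy as np

def polydispersity(diameters):
    # Degree of polydispersity taken here as the coefficient of
    # variation (std / mean) of the particle-size distribution.
    # This definition is an assumption for illustration.
    d = np.asarray(diameters, dtype=float)
    return d.std() / d.mean()
```

    Under this reading, a size distribution with `polydispersity(...) < 0.2` would fall in the regime the thesis finds comparable in performance to a monodisperse electrode.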

  19. Shape Modification and Size Classification of Microcrystalline Graphite Powder as Anode Material for Lithium-Ion Batteries

    NASA Astrophysics Data System (ADS)

    Wang, Cong; Gai, Guosheng; Yang, Yufen

    2018-03-01

    Natural microcrystalline graphite (MCG) composed of many crystallites is a promising new anode material for lithium-ion batteries (LiBs) and has received considerable attention from researchers. MCG with a narrow particle size distribution and high sphericity exhibits excellent electrochemical performance. A non-addition process to prepare natural MCG as a high-performance LiB anode material is described. First, raw MCG was broken into smaller particles using a pulverization system. Then, the particles were modified into a near-spherical shape using a particle shape modification system. Finally, the particle size distribution was narrowed using a centrifugal rotor classification system. The products, with uniform hemispherical shape and narrow size distribution, had mean particle sizes of approximately 9 μm, 10 μm, 15 μm, and 20 μm. Additionally, the innovative pilot experimental process increased the product yield of the raw material. Lastly, the electrochemical performance of the prepared MCG was tested, revealing high reversible capacity and good cyclability.

  20. Technology Solutions Case Study: Long-Term Monitoring of Mini-Split Ductless Heat Pumps in the Northeast, Devens and Easthampton, Massachusetts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Transformations, Inc., has extensive experience building high-performance homes - production and custom - in a variety of Massachusetts locations and uses mini-split heat pumps (MSHPs) for space conditioning in most of its homes. The use of MSHPs for simplified space-conditioning distribution provides significant first-cost savings, which offsets the increased investment in the building enclosure. In this project, the U.S. Department of Energy Building America team Building Science Corporation evaluated the long-term performance of MSHPs in 8 homes during a period of 3 years. The work examined electrical use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions.

  1. Long-Term Monitoring of Mini-Split Ductless Heat Pumps in the Northeast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueno, K.; Loomis, H.

    Transformations, Inc. has extensive experience building their high performance housing at a variety of Massachusetts locations, in both a production and custom home setting. The majority of their construction uses mini-split heat pumps (MSHPs) for space conditioning. This research covered the long-term performance of MSHPs in Zone 5A; it is the culmination of up to 3 years' worth of monitoring in a set of eight houses. This research examined electricity use of MSHPs, distributions of interior temperatures and humidity when using simplified (two-point) heating systems in high-performance housing, and the impact of open-door/closed-door status on temperature distributions. The use of simplified space conditioning distribution (through use of MSHPs) provides significant first cost savings, which are used to offset the increased investment in the building enclosure.

  2. Comparison of Communication Architectures and Network Topologies for Distributed Propulsion Controls (Preprint)

    DTIC Science & Technology

    2013-05-01

    …logic to perform control function computations and are connected to the full authority digital engine control (FADEC) via a high-speed data… Digital Engine Control (FADEC) via a high-speed data communication bus. The short-term distributed engine control configurations will be core… concentrator; and high-temperature electronics, a high-speed communication bus between the data concentrator and the control law processor master FADEC

  3. INTELLIGENT MONITORING SYSTEM WITH HIGH TEMPERATURE DISTRIBUTED FIBEROPTIC SENSOR FOR POWER PLANT COMBUSTION PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kwang Y. Lee; Stuart S. Yin; Andre Boheman

    2004-12-26

    The objective of the proposed work is to develop an intelligent distributed fiber optic sensor system for real-time monitoring of high temperatures in a boiler furnace in power plants. Of particular interest is the estimation of spatial and temporal distributions of high temperatures within a boiler furnace, which is essential for assessing and controlling the mechanisms that form and remove pollutants at the source, such as NOx. The basic approach in developing the proposed sensor system is threefold: (1) development of a high-temperature distributed fiber optic sensor capable of measuring temperatures greater than 2000 °C with spatial resolution of less than 1 cm; (2) development of distributed parameter system (DPS) models to map the three-dimensional (3D) temperature distribution of the furnace; and (3) development of an intelligent monitoring system for real-time monitoring of the 3D boiler temperature distribution. Under Task 1, improvements were made to the performance of in-fiber gratings fabricated in single-crystal sapphire fibers, the grating performance of single-crystal sapphire fiber was tested with new fabrication methods, and the fabricated grating was applied to high-temperature sensing. Under Task 2, models obtained from 3D modeling of the Demonstration Boiler were used to study relationships between temperature and NOx, as the multi-dimensionality of such systems is most comparable with real-life boiler systems. Studies show that in boiler systems with no swirl, the distributed temperature sensor may provide information sufficient to predict trends of NOx at the boiler exit. Under Task 3, we investigate a mathematical approach to extrapolation of the temperature distribution within a power plant boiler facility, using a combination of a modified neural network architecture and semigroup theory. The 3D temperature data is furnished by the Penn State Energy Institute using FLUENT. Given a set of empirical data with no analytic expression, we first develop an analytic description and then extend that model along a single axis.

  4. Aluminum-carbon composite electrode

    DOEpatents

    Farahmandi, C. Joseph; Dispennette, John M.

    1998-07-07

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg.

  5. Aluminum-carbon composite electrode

    DOEpatents

    Farahmandi, C.J.; Dispennette, J.M.

    1998-07-07

    A high performance double layer capacitor having an electric double layer formed in the interface between activated carbon and an electrolyte is disclosed. The high performance double layer capacitor includes a pair of aluminum impregnated carbon composite electrodes having an evenly distributed and continuous path of aluminum impregnated within an activated carbon fiber preform saturated with a high performance electrolytic solution. The high performance double layer capacitor is capable of delivering at least 5 Wh/kg of useful energy at power ratings of at least 600 W/kg. 3 figs.

  6. Behavior of high-performance concrete in structural applications.

    DOT National Transportation Integrated Search

    2007-10-01

    High Performance Concrete (HPC) with improved properties has been developed by obtaining the maximum density of the matrix. Mathematical models developed by J.E. Funk and D.R. Dinger are used to determine the particle size distribution to achieve th...

  7. Effect of milling methods on performance of Ni-Y2O3-stabilized ZrO2 anode for solid oxide fuel cell

    NASA Astrophysics Data System (ADS)

    Cho, Hyoup Je; Choi, Gyeong Man

    A Ni-YSZ (Y2O3-stabilized ZrO2) composite is commonly used as a solid oxide fuel cell anode. The composite powders are usually synthesized by mixing NiO and YSZ powders. The particle size and distribution of the two phases generally determine the performance of the anode. Two different milling methods are used to prepare the composite anode powders, namely, high-energy milling and ball-milling, both of which reduce the particle size. The particle size and the Ni distribution of the two composite powders are examined. The effects of milling on the performance are evaluated by using both an electrolyte-supported, symmetric Ni-YSZ/YSZ/Ni-YSZ cell and an anode-supported, asymmetric cell. The performance is examined at 800 °C by impedance analysis and current-voltage measurements. Pellets made by using high-energy milled NiO-YSZ powders have much smaller particle sizes and a more uniform distribution of Ni particles than pellets made from ball-milled powder, and thus the polarization resistance of the electrode is also smaller. The maximum power density of the anode-supported cell prepared by using the high-energy milled powder is ∼850 mW cm⁻² at 800 °C compared with ∼500 mW cm⁻² for the cell with ball-milled powder. Thus, high-energy milling is found to be more effective in reducing particle size and obtaining a uniform distribution of Ni particles.

  8. High performance frame synchronization for continuous variable quantum key distribution systems.

    PubMed

    Lin, Dakai; Huang, Peng; Huang, Duan; Wang, Chao; Peng, Jinye; Zeng, Guihua

    2015-08-24

    In a practical continuous-variable quantum key distribution (CVQKD) system, synchronization is of significant importance, as it is hardly possible to extract secret keys from unsynchronized strings. In this paper, we propose a high-performance frame synchronization method for CVQKD systems that is capable of operating at low signal-to-noise ratios (SNRs) and is compatible with the random phase shift induced by the quantum channel. A practical low-complexity implementation of this method is presented and its performance is analysed. By adjusting the length of the synchronization frame, the method works well over a large range of SNR values, which paves the way for longer-distance CVQKD.
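
    The abstract does not detail the algorithm, but a frame synchronizer that tolerates an unknown channel phase can be sketched generically: slide the known synchronization frame over the received complex samples and pick the offset maximizing the cross-correlation magnitude, which is invariant to a common phase rotation (a hypothetical illustration, not the authors' exact method):

```python
import cmath

def find_frame_offset(received, sync):
    """Return the offset maximizing |cross-correlation| between the
    received complex samples and a known sync frame; the magnitude
    is unchanged by a common phase rotation of the channel."""
    best_off, best_score = 0, -1.0
    for off in range(len(received) - len(sync) + 1):
        c = sum(received[off + i] * sync[i].conjugate()
                for i in range(len(sync)))
        if abs(c) > best_score:
            best_off, best_score = off, abs(c)
    return best_off

sync = [1 + 0j, -1 + 0j, 1 + 0j, 1 + 0j]
rot = cmath.exp(0.7j)                      # unknown channel phase
rx = [0j, 0j] + [rot * s for s in sync] + [0j, 0j]
```

    Longer frames trade synchronization overhead for reliability at low SNR, mirroring the frame-length adjustment described above.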

  9. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    NASA Astrophysics Data System (ADS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  10. Power management and distribution technology

    NASA Astrophysics Data System (ADS)

    Dickman, John Ellis

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  11. Power management and distribution technology

    NASA Technical Reports Server (NTRS)

    Dickman, John Ellis

    1993-01-01

    Power management and distribution (PMAD) technology is discussed in the context of developing working systems for a piloted Mars nuclear electric propulsion (NEP) vehicle. The discussion is presented in vugraph form. The following topics are covered: applications and systems definitions; high performance components; the Civilian Space Technology Initiative (CSTI) high capacity power program; fiber optic sensors for power diagnostics; high temperature power electronics; 200 °C baseplate electronics; high temperature component characterization; a high temperature coaxial transformer; and a silicon carbide MOSFET.

  12. Determination of pharmacological levels of harmane, harmine and harmaline in mammalian brain tissue, cerebrospinal fluid and plasma by high-performance liquid chromatography with fluorimetric detection.

    PubMed

    Moncrieff, J

    1989-11-24

    Increased blood aldehyde levels, as occur in alcohol intoxication, could lead to the formation of beta-carbolines such as harmane by condensation with indoleamines. Endogenous beta-carbolines, therefore, should occur in specific brain areas where indoleamine concentrations are high, whilst exogenous beta-carbolines should exhibit an even distribution. The author presents direct and sensitive methods for assaying the beta-carbolines harmane, harmine and harmaline in brain tissue, cerebrospinal fluid and plasma at picogram sample concentrations using reversed-phase high-performance liquid chromatography with fluorimetric detection and minimal sample preparation. Using these assay methods, it was found that the distribution of beta-carbolines from a source exogenous to the brain results in a relatively even distribution within the brain tissue.

  13. Expression of lactate transporters MCT1, MCT2 and CD147 in the red blood cells of three horse breeds: Finnhorse, Standardbred and Thoroughbred.

    PubMed

    Mykkänen, A K; Pösö, A R; McGowan, C M; McKane, S A

    2010-11-01

    In exercising horses, up to 50% of blood lactate is taken up into red blood cells (RBCs). Lactate transporter proteins MCT1, MCT2 and CD147 (an ancillary protein for MCT1) are expressed in the equine RBC membrane. In Standardbreds (SB), lactate transport activity is bimodally distributed and correlates with the amount of MCT1 and CD147. About 75% of SBs studied have high lactate transport activity in RBCs. In other breeds, the distribution of lactate transport activity is unknown. The objectives were to study whether a similar bimodal distribution of MCT1 and CD147 is present in the racing Finnhorse (FH) and Thoroughbred (TB) as in the SB, to study the distribution of MCT2 in all three breeds, and to determine whether there is a connection between MCT expression and performance markers in TB racehorses. Venous blood samples were taken from 118 FHs, 98 TBs and 44 SBs. Red blood cell membranes were purified and MCT1, MCT2 and CD147 measured by western blot. The amount of transporters was compared with TB performance markers. In TBs, the distribution of MCT1 was bimodal, and in all breeds the distribution of MCT2 was unimodal. The amount of CD147 was clearly bimodal in FH and SB, with 85% and 82%, respectively, expressing high amounts of CD147. In TBs, 88% had high expression of CD147 and 11% low expression, but one horse showed intermediate expression not apparent in FH or SB. Performance markers did not correlate with the amount of MCT1, MCT2 or CD147. High lactate transport activity was present in all three racing breeds, with the greatest proportion in the TB, followed by the racing FH, then the SB. No statistically significant correlation was found between lactate transporters in the RBC membrane and markers of racing performance in the TB. © 2010 EVJ Ltd.

  14. Approach Considerations in Aircraft with High-Lift Propeller Systems

    NASA Technical Reports Server (NTRS)

    Patterson, Michael D.; Borer, Nicholas K.

    2017-01-01

    NASA's research into distributed electric propulsion (DEP) includes the design and development of the X-57 Maxwell aircraft. This aircraft has two distinct types of DEP: wingtip propellers and high-lift propellers. This paper focuses on the unique opportunities and challenges that the high-lift propellers--i.e., the small diameter propellers distributed upstream of the wing leading edge to augment lift at low speeds--bring to the aircraft performance in approach conditions. Recent changes to the regulations related to certifying small aircraft (14 CFR §23) and these new regulations' implications on the certification of aircraft with high-lift propellers are discussed. Recommendations about control systems for high-lift propeller systems are made, and performance estimates for the X-57 aircraft with high-lift propellers operating are presented.

  15. Voltage-Load Sensitivity Matrix Based Demand Response for Voltage Control in High Solar Penetration Distribution Feeders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xiangqi; Wang, Jiyu; Mulcahy, David

    This paper presents a voltage-load sensitivity matrix (VLSM) based voltage control method to deploy demand response resources for controlling voltage in high solar penetration distribution feeders. The IEEE 123-bus system in OpenDSS is used for testing the performance of the preliminary VLSM-based voltage control approach. A load disaggregation process is applied to disaggregate the total load profile at the feeder head to each load node along the feeder so that loads are modeled at the residential house level. Measured solar generation profiles are used in the simulation to model the impact of solar power on distribution feeder voltage profiles. Different case studies involving various PV penetration levels and installation locations have been performed. Simulation results show that the VLSM algorithm performance meets the voltage control requirements and is an effective voltage control strategy.
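
    The core of a VLSM-based scheme is the linearized relation Δv ≈ S·Δp between load changes and bus voltages; given the sensitivity matrix, the demand-response adjustment follows by solving the linear system. A minimal two-node sketch (the matrix values are hypothetical, not from the IEEE 123-bus study):

```python
def demand_response(vlsm, dv_target):
    """Solve vlsm @ dp = dv_target for the load adjustment dp
    (2x2 case via the explicit inverse); vlsm[i][j] is the voltage
    sensitivity of bus i to a load change at node j."""
    (a, b), (c, d) = vlsm
    det = a * d - b * c
    x, y = dv_target
    return [(d * x - b * y) / det, (a * y - c * x) / det]

# Hypothetical sensitivities (pu volt per kW shed); raise both bus
# voltages by 0.02 pu by shedding load at two responsive nodes.
vlsm = [[0.004, 0.001],
        [0.001, 0.005]]
dp = demand_response(vlsm, [0.02, 0.02])
```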

  16. Parametric Study of Pulse-Combustor-Driven Ejectors at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2015-01-01

    Pulse-combustor configurations developed in recent studies have demonstrated performance levels at high-pressure operating conditions comparable to those observed at atmospheric conditions. However, problems related to the way fuel was being distributed within the pulse combustor were still limiting performance. In the first part of this study, new configurations are investigated computationally aimed at improving the fuel distribution and performance of the pulse-combustor. Subsequent sections investigate the performance of various pulse-combustor driven ejector configurations operating at high pressure conditions, focusing on the effects of fuel equivalence ratio and ejector throat area. The goal is to design pulse-combustor-ejector configurations that maximize pressure gain while achieving a thermal environment acceptable to a turbine, and at the same time maintain acceptable levels of NO(x) emissions and flow non-uniformities. The computations presented here have demonstrated pressure gains of up to 2.8.

  17. Advanced air distribution: improving health and comfort while reducing energy use.

    PubMed

    Melikov, A K

    2016-02-01

    The indoor environment affects the health, comfort, and performance of building occupants. The energy used for heating, cooling, ventilating, and air conditioning buildings is substantial. Ventilation based on total-volume air distribution in spaces is not always an efficient way to provide a high-quality indoor environment together with low energy consumption. Advanced air distribution, designed to supply clean air where, when, and in the amount needed, makes it possible to efficiently achieve thermal comfort, control exposure to contaminants, provide high-quality air for breathing, and minimize the risk of airborne cross-infection while reducing energy use. This study justifies the need for improving present air distribution design in occupied spaces and, in general, for a paradigm shift from the design of collective environments to the design of individually controlled environments. The focus is on advanced air distribution in spaces, its guiding principles, and its advantages and disadvantages. Examples of advanced air distribution solutions in spaces for different uses, such as offices, hospital rooms, and vehicle compartments, are presented. The potential of advanced air distribution, and of the individually controlled macro-environment in general, for achieving shared values, that is, improved health, comfort, and performance, energy saving, reduction of healthcare costs, and improved well-being, is demonstrated. Performance criteria are defined and further research in the field is outlined. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Interphase Thermomechanical Reliability and Optimization for High-Performance Ti Metal Laminates

    DTIC Science & Technology

    2011-12-19

    Grant FA9550-08-1-0015; Program Manager: Dr. Joycelyn Harrison; report …OSR-VA-TR-2012-0202; Distribution Statement A. Hybrid laminated composites such as titanium-graphite (TiGr) laminates are an emerging class of structural materials with the potential to enable a new generation of efficient, high-performance

  19. Strategies That Challenge: Exploring the Use of Differentiated Assessment to Challenge High-Achieving Students in Large Enrolment Undergraduate Cohorts

    ERIC Educational Resources Information Center

    Varsavsky, Cristina; Rayner, Gerry

    2013-01-01

    Academics teaching large and highly diverse classes are familiar with the inevitable effect this has on promulgating teaching and assessment practices to the "middle of the distribution," thus ignoring the distribution extremes. Although the literature documents a wide range of strategies for supporting poor-performing students in large…

  20. Heterogeneous high throughput scientific computing with APM X-Gene and Intel Xeon Phi

    DOE PAGES

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; ...

    2015-05-22

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. As a result, we report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  1. High Performance Data Distribution for Scientific Community

    NASA Astrophysics Data System (ADS)

    Tirado, Juan M.; Higuero, Daniel; Carretero, Jesus

    2010-05-01

    Institutions such as NASA, ESA, and JAXA need solutions for distributing data from their missions to the scientific community and to their long-term archives. This is a complex problem, as it involves a vast amount of data, several geographically distributed archives, heterogeneous architectures with heterogeneous networks, and users spread around the world. We propose a novel architecture (HIDDRA) that addresses this problem by reducing user intervention in data acquisition and processing. HIDDRA is a modular system that provides a highly efficient parallel multiprotocol download engine, using a publish/subscribe policy that helps the final user obtain data of interest transparently. Our system can deal simultaneously with multiple protocols (HTTP, HTTPS, FTP, and GridFTP, among others) to obtain the maximum bandwidth, reducing the workload on the data server and increasing flexibility. It can also provide high reliability and fault tolerance, as several sources of data can be used to perform a single file download. The HIDDRA architecture can be arranged into a data distribution network deployed on several sites that cooperate to provide these features. HIDDRA was highlighted by the 2009 e-IRG Report on Data Management as a promising initiative for data interoperability. Our first prototype has been evaluated in collaboration with the ESAC centre in Villafranca del Castillo (Spain), showing high scalability and performance and opening a wide spectrum of opportunities. Some preliminary results have been published in the journal Astrophysics and Space Science [1]. [1] D. Higuero, J.M. Tirado, J. Carretero, F. Félix, and A. de La Fuente. HIDDRA: a highly independent data distribution and retrieval architecture for space observation missions. Astrophysics and Space Science, 321(3):169-175, 2009.
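
    The multi-source download idea behind an engine like HIDDRA can be sketched as partitioning a file into byte ranges served by different mirrors (a simplified illustration; HIDDRA's actual scheduling is not described in the abstract):

```python
def plan_ranges(file_size, sources):
    """Split a file of file_size bytes into contiguous byte ranges,
    one per source, so each mirror serves a share of one download."""
    n = len(sources)
    chunk = file_size // n
    plan, start = [], 0
    for i, src in enumerate(sources):
        end = file_size if i == n - 1 else start + chunk
        plan.append((src, start, end - 1))   # inclusive, HTTP Range style
        start = end
    return plan
```

    Each (source, first-byte, last-byte) triple maps directly onto an HTTP Range request, and a failed source's range can be reassigned to another mirror, giving the fault tolerance noted above.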

  2. Distributed Leadership in Action: Leading High-Performing Leadership Teams in English Schools

    ERIC Educational Resources Information Center

    Bush, Tony; Glover, Derek

    2012-01-01

    Heroic models of leadership based on the role of the principal have been supplemented by an emerging recognition of the value of "distributed leadership". The work of effective senior leadership teams (SLTs) is an important manifestation of distributed leadership, but there has been only limited research addressing the relationship…

  3. Continuous high speed coherent one-way quantum key distribution.

    PubMed

    Stucki, Damien; Barreiro, Claudio; Fasel, Sylvain; Gautier, Jean-Daniel; Gay, Olivier; Gisin, Nicolas; Thew, Rob; Thoma, Yann; Trinkler, Patrick; Vannel, Fabien; Zbinden, Hugo

    2009-08-03

    Quantum key distribution (QKD) is the first commercial quantum technology operating at the level of single quanta and is a leading light for quantum-enabled photonic technologies. However, controlling these quantum optical systems in real-world environments presents significant challenges. For the first time, we have brought together three key concepts for future QKD systems: a simple high-speed protocol; high-performance detection; and integration, both at the component level and for standard fibre network connectivity. The QKD system is capable of continuous and autonomous operation, generating secret keys in real time. Laboratory and field tests were performed, and comparisons made with robust InGaAs avalanche photodiodes and superconducting detectors. We report the first real-world implementation of a fully functional QKD system over a 43 dB-loss (150 km) transmission line in the Swisscom fibre optic network, where we obtained average real-time distribution rates of 2.5 bps over 3 hours.

  4. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
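
    The locality mapping such a distribution encapsulates can be illustrated with a block distribution that sends a global index to a (locale, local index) pair; a sketch of the idea in Python, not Chapel's actual implementation:

```python
def block_map(i, n, num_locales):
    """Map global index i of an n-element array to (locale, local index)
    under a block distribution; the first n % num_locales locales
    receive one extra element."""
    base, extra = divmod(n, num_locales)
    boundary = extra * (base + 1)   # elements held by the larger blocks
    if i < boundary:
        return i // (base + 1), i % (base + 1)
    j = i - boundary
    return extra + j // base, j % base
```

    A user-refined distribution would replace this mapping (and the matching inverse) while the language keeps communication and synchronization hidden, as described above.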

  5. Reduced Toxicity High Performance Monopropellant

    DTIC Science & Technology

    2011-09-01

    M315E. Distribution A: Approved for public release; distribution unlimited. AF-M315E desirable properties… AF-M315E exceeds SOTA monopropellant (45%) and bipropellant (8%); next generation exceeds SOTA monopropellant (66%) and bipropellant (23%)… inert mass fraction. Toxicity assessment of AF-M315E: toxicity testing results.

  6. Experimental study of low-cost fiber optic distributed temperature sensor system performance

    NASA Astrophysics Data System (ADS)

    Dashkov, Michael V.; Zharkov, Alexander D.

    2016-03-01

    Distributed temperature monitoring is an important task for various applications such as oil and gas fields, high-voltage power lines, and fire alarm systems. The most promising devices are optical fiber distributed temperature sensors (DTS). They have advantages in accuracy, resolution, and range, but at a high cost. Nevertheless, for some applications the accuracy of measurement and localization is less important than cost. The results of an experimental study of a low-cost Raman-based DTS built around a standard OTDR are presented.
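
    Raman-based DTS instruments infer temperature from the ratio of anti-Stokes to Stokes backscatter, which follows a Boltzmann factor in the Raman shift; a simplified sketch (the wavelength-ratio prefactor is omitted, and the ~13.2 THz shift typical of silica fibre is an assumed default):

```python
import math

H = 6.626e-34    # Planck constant, J s
K_B = 1.381e-23  # Boltzmann constant, J/K

def raman_ratio(temp_k, delta_nu=1.32e13):
    """Anti-Stokes/Stokes intensity ratio (prefactor omitted) for a
    Raman shift delta_nu in Hz at absolute temperature temp_k."""
    return math.exp(-H * delta_nu / (K_B * temp_k))

def temp_from_ratio(ratio, delta_nu=1.32e13):
    """Invert the ratio to recover temperature, the step a DTS applies
    at every point along the fibre to build a temperature profile."""
    return -H * delta_nu / (K_B * math.log(ratio))
```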

  7. Real-Time Embedded High Performance Computing: Communications Scheduling.

    DTIC Science & Technology

    1995-06-01

    …a real-time operating system must explicitly limit the degradation of the timing performance of all processes as the number of processes… adequately supported by a real-time operating system, could compound the development problems encountered in the past. Many experts feel that the… real-time operating system support for an MPP, although they all provide some support for distributed real-time applications. A distributed real…

  8. A Weibull distribution accrual failure detector for cloud computing.

    PubMed

    Liu, Jiaxi; Wu, Zhibo; Wu, Jin; Dong, Jian; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions in cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing.
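
    An accrual failure detector outputs a continuously growing suspicion level φ rather than a binary verdict; with a Weibull model of heartbeat inter-arrival times, φ can be computed as below (a sketch following the standard accrual-detector convention, since the paper's exact formulation is not given in the abstract):

```python
import math

def weibull_cdf(t, shape, scale):
    """CDF of a Weibull distribution fitted to observed heartbeat
    inter-arrival times."""
    return 1.0 - math.exp(-((t / scale) ** shape))

def suspicion(t_since_last, shape, scale):
    """phi = -log10(P(heartbeat arrives later than t_since_last));
    each application compares phi against its own threshold."""
    p_later = 1.0 - weibull_cdf(t_since_last, shape, scale)
    return -math.log10(p_later)
```

    Fitting shape and scale to the measured network lets one detector serve applications with different speed/accuracy trade-offs, the adaptivity targeted above.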

  9. Building America Case Study: Standard- Versus High-Velocity Air Distribution in High-Performance Townhomes, Denver, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Poerschke, R. Beach, T. Begg

    IBACOS investigated the performance of a small-diameter high-velocity heat pump system compared to a conventional system in a new-construction triplex townhouse. A ductless heat pump system also was installed for comparison, but the homebuyer backed out because of aesthetic concerns about that system. In total, two buildings with identical solar orientation, comprising six townhomes, were monitored for comfort and energy performance.

  10. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    NASA Astrophysics Data System (ADS)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. 
The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long-distance distributed computing. Finally, the project is developing 2D and 3D visualization software as part of the international AIPS++ project. This research and development project is being carried out by a team of experts in radio astronomy, algorithm development for massively parallel architectures, high-speed networking, database management, and Thinking Machines Corporation personnel. The development of this complete software, distributed computing, and data archive and library solution to the radio astronomy computing problem will advance our expertise in high performance computing and communications technology and the application of these techniques to astronomical data processing.

  11. Model-based optimization of near-field binary-pixelated beam shapers

    DOE PAGES

    Dorrer, C.; Hassett, J.

    2017-01-23

    The optimization of components that rely on spatially dithered distributions of transparent or opaque pixels and an imaging system with far-field filtering for transmission control is demonstrated. The binary-pixel distribution can be iteratively optimized to lower an error function that takes into account the design transmission and the characteristics of the required far-field filter. Simulations using a design transmission chosen in the context of high-energy lasers show that the beam-fluence modulation at an image plane can be reduced by a factor of 2, leading to performance similar to using a non-optimized spatial-dithering algorithm with pixels of size reduced by a factor of 2, without the additional fabrication complexity or cost. The optimization process preserves the pixel distribution statistical properties. Analysis shows that the optimized pixel distribution starting from a high-noise distribution defined by a random-draw algorithm should be more resilient to fabrication errors than the optimized pixel distributions starting from a low-noise, error-diffusion algorithm, while leading to similar beam-shaping performance. Furthermore, this is confirmed by experimental results obtained with various pixel distributions and induced fabrication errors.
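The error-diffusion starting distributions mentioned above can be illustrated with a generic Floyd-Steinberg dither, which converts a gray-level target transmission map into a binary pixel pattern whose local average approximates the target. The weights below are the classic Floyd-Steinberg ones, assumed for illustration; the paper's dithering scheme may differ.

```python
def error_diffusion(target):
    """Binarize a 2-D target transmission map (values in [0, 1]) by
    Floyd-Steinberg error diffusion. Generic sketch, not the paper's
    specific algorithm."""
    rows, cols = len(target), len(target[0])
    field = [row[:] for row in target]          # working copy
    out = [[0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            old = field[y][x]
            new = 1 if old >= 0.5 else 0        # transparent or opaque pixel
            out[y][x] = new
            err = old - new
            # Push the quantization error onto unprocessed neighbors.
            if x + 1 < cols:
                field[y][x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    field[y + 1][x - 1] += err * 3 / 16
                field[y + 1][x] += err * 5 / 16
                if x + 1 < cols:
                    field[y + 1][x + 1] += err * 1 / 16
    return out
```

Because the quantization error is carried forward rather than discarded, the fraction of transparent pixels in any sufficiently large neighborhood tracks the local design transmission, which is what the far-field filter then averages into a smooth fluence profile.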

  12. Task Assignment Heuristics for Distributed CFD Applications

    NASA Technical Reports Server (NTRS)

    Lopez-Benitez, N.; Djomehri, M. J.; Biswas, R.; Biegel, Bryan (Technical Monitor)

    2001-01-01

    CFD applications require high-performance computational platforms: 1. Complex physics and domain configuration demand strongly coupled solutions; 2. Applications are CPU and memory intensive; and 3. Huge resource requirements can only be satisfied by teraflop-scale machines or distributed computing.

  13. Electric power processing, distribution and control for advanced aerospace vehicles.

    NASA Technical Reports Server (NTRS)

    Krausz, A.; Felch, J. L.

    1972-01-01

    The results of a current study program to develop a rational basis for selection of power processing, distribution, and control configurations for future aerospace vehicles including the Space Station, Space Shuttle, and high-performance aircraft are presented. Within the constraints imposed by the characteristics of power generation subsystems and the load utilization equipment requirements, the power processing, distribution and control subsystem can be optimized by selection of the proper distribution voltage, frequency, and overload/fault protection method. It is shown that, for large space vehicles which rely on static energy conversion to provide electric power, high-voltage dc distribution (above 100 V dc) is preferable to conventional 28 V dc and 115 V ac distribution per MIL-STD-704A. High-voltage dc also has advantages over conventional constant frequency ac systems in many aircraft applications due to the elimination of speed control, wave shaping, and synchronization equipment.
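Part of the case for high-voltage dc distribution is conduction loss: for a fixed load power the feeder current scales as 1/V, so I^2*R loss scales as 1/V^2. A minimal sketch with assumed, purely illustrative load and feeder figures (the study's actual trade numbers are not reproduced here):

```python
def feeder_loss(power_w, voltage_v, resistance_ohm):
    """I^2 R conduction loss in a feeder delivering `power_w` watts
    at `voltage_v` volts through `resistance_ohm` ohms of wiring."""
    current = power_w / voltage_v
    return current ** 2 * resistance_ohm

# Illustrative (assumed) figures: a 10 kW load over a 0.05-ohm feeder.
loss_28v = feeder_loss(10_000, 28, 0.05)     # conventional 28 V dc
loss_270v = feeder_loss(10_000, 270, 0.05)   # a high-voltage dc bus
```

The ratio of the two losses is (270/28)^2, roughly two orders of magnitude, which is why raising the distribution voltage (or, equivalently, shrinking conductor mass for equal loss) dominates the trade for large vehicles.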

  14. Computing distance distributions from dipolar evolution data with overtones: RIDME spectroscopy with Gd(iii)-based spin labels.

    PubMed

    Keller, Katharina; Mertens, Valerie; Qi, Mian; Nalepa, Anna I; Godt, Adelheid; Savitsky, Anton; Jeschke, Gunnar; Yulikov, Maxim

    2017-07-21

    Extraction of distance distributions between high-spin paramagnetic centers from relaxation induced dipolar modulation enhancement (RIDME) data is affected by the presence of overtones of dipolar frequencies. As previously proposed, we account for these overtones by using a modified kernel function in Tikhonov regularization analysis. This paper analyzes the performance of such an approach on a series of model compounds with the Gd(iii)-PyMTA complex serving as paramagnetic high-spin label. We describe the calibration of the overtone coefficients for the RIDME kernel, demonstrate the accuracy of distance distributions obtained with this approach, and show that for our series of Gd-rulers RIDME technique provides more accurate distance distributions than Gd(iii)-Gd(iii) double electron-electron resonance (DEER). The analysis of RIDME data including harmonic overtones can be performed using the MATLAB-based program OvertoneAnalysis, which is available as open-source software from the web page of ETH Zurich. This approach opens a perspective for the routine use of the RIDME technique with high-spin labels in structural biology and structural studies of other soft matter.
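The overtone-corrected analysis described above can be sketched numerically: Tikhonov regularization inverts a kernel that is a weighted sum of harmonics of the dipolar frequency. The kernel form, overtone coefficients, grids, and regularization parameter below are all illustrative assumptions, not the OvertoneAnalysis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distance grid (nm) and time grid (us); choices are illustrative.
r = np.linspace(2.0, 6.0, 80)
t = np.linspace(0.0, 4.0, 200)
omega = 2 * np.pi * 52.04 / r**3        # dipolar frequency, rad/us

# Overtone-weighted kernel: sum_k c_k cos(k * omega(r) * t).
# The coefficients c_k stand in for the experimentally calibrated
# overtone coefficients discussed in the abstract.
coeffs = [0.5, 0.3, 0.2]
K = sum(c * np.cos(k * np.outer(t, omega)) for k, c in enumerate(coeffs, 1))

# Synthetic Gaussian distance distribution and a noisy RIDME-like trace.
p_true = np.exp(-0.5 * ((r - 3.5) / 0.2) ** 2)
p_true /= p_true.sum()
d = K @ p_true + 0.001 * rng.standard_normal(len(t))

# Tikhonov regularization with a second-derivative smoothness penalty:
# p = argmin ||K p - d||^2 + alpha ||L p||^2.
alpha = 1.0
L = np.diff(np.eye(len(r)), n=2, axis=0)
p_est = np.linalg.solve(K.T @ K + alpha * L.T @ L, K.T @ d)
# p_est should peak near the true distance of 3.5 nm.
```

Ignoring the overtones (i.e., using only the k = 1 term in the kernel) biases the recovered distances, which is the motivation for the modified kernel.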

  15. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Cheng; Zhang, Kai; Xiong, Jian

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT), etc. However, distributed MPPT under module mismatch might polarize the distribution of ac output voltages as well as the dc-link voltages among the modules, distort grid currents, and even cause system instability. For the better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree that is defined to consider both active power distribution and maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in loss-reducing mode or range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and balance of dc voltages can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. As a result, effectiveness of the proposed modulation strategy is proved with experiments.

  16. An Efficient Modulation Strategy for Cascaded Photovoltaic Systems Suffering From Module Mismatch

    DOE PAGES

    Wang, Cheng; Zhang, Kai; Xiong, Jian; ...

    2017-09-26

    Modular multilevel cascaded converter (MMCC) is a promising technique for medium/high-voltage high-power photovoltaic systems due to its modularity, scalability, and capability of distributed maximum power point tracking (MPPT), etc. However, distributed MPPT under module mismatch might polarize the distribution of ac output voltages as well as the dc-link voltages among the modules, distort grid currents, and even cause system instability. For the better acceptance in practical applications, such issues need to be well addressed. Based on a mismatch degree that is defined to consider both active power distribution and maximum modulation index, this paper presents an efficient modulation strategy for a cascaded-H-bridge-based MMCC under module mismatch. It can operate in loss-reducing mode or range-extending mode. By properly switching between the two modes, performance indices such as system efficiency, grid current quality, and balance of dc voltages can be well coordinated. In this way, the MMCC system can maintain high performance over a wide range of operating conditions. As a result, effectiveness of the proposed modulation strategy is proved with experiments.

  17. Spectral and spatial characterization of perfluorinated graded-index polymer optical fibers for the distribution of optical wireless communication cells.

    PubMed

    Hajjar, Hani Al; Montero, David S; Lallana, Pedro C; Vázquez, Carmen; Fracasso, Bruno

    2015-02-10

    In this paper, the characterization of a perfluorinated graded-index polymer optical fiber (PF-GIPOF) for a high-bitrate indoor optical wireless system is reported. PF-GIPOF is used here to interconnect different optical wireless access points that distribute optical free-space high-bitrate wireless communication cells. The PF-GIPOF channel is first studied in terms of transmission attenuation and frequency response and, in a second step, the spatial power profile distribution at the fiber output is analyzed. Both characterizations are performed under varying restricted mode launch conditions, enabling us to assess the transmission channel performance subject to potential connectorization errors within an environment where the end users may intervene by themselves on the home network infrastructure.

  18. Reynolds Number Effects on Leading Edge Radius Variations of a Supersonic Transport at Transonic Conditions

    NASA Technical Reports Server (NTRS)

    Rivers, S. M. B.; Wahls, R. A.; Owens, L. R.

    2001-01-01

    A computational study focused on leading-edge radius effects and associated Reynolds number sensitivity for a High Speed Civil Transport configuration at transonic conditions was conducted as part of NASA's High Speed Research Program. The primary purposes were to assess the capabilities of computational fluid dynamics to predict Reynolds number effects for a range of leading-edge radius distributions on a second-generation supersonic transport configuration, and to evaluate the potential performance benefits of each at the transonic cruise condition. Five leading-edge radius distributions are described, and the potential performance benefit including the Reynolds number sensitivity for each is presented. Computational results for two leading-edge radius distributions are compared with experimental results acquired in the National Transonic Facility over a broad Reynolds number range.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, J.; Xue, X.

    A comprehensive 3D CFD model is developed for a bi-electrode supported cell (BSC) SOFC. The model includes complicated transport phenomena of mass/heat transfer, charge (electron and ion) migration, and electrochemical reaction. The uniqueness of the modeling study is that functionally graded porous electrode properties are taken into account, including not only linear but also nonlinear porosity distributions. Extensive numerical analysis is performed to elucidate the effects of both porous microstructure distributions and operating conditions on cell performance. Results indicate that cell performance is strongly dependent on both operating conditions and the porous microstructure distributions of the electrodes. Using the proposed fuel/gas feeding design, a uniform hydrogen distribution within the porous anode is achieved; the oxygen distribution within the cathode depends on porous microstructure distributions as well as pressure loss conditions. Simulation results show that a fairly uniform temperature distribution can be obtained with the proposed fuel/gas feeding design. The modeling results can be employed to guide the experimental design of BSC tests and provide pre-experimental analysis, thereby circumventing the high cost associated with trial-and-error experimental design and setup.

  20. Joint Sensing/Sampling Optimization for Surface Drifting Mine Detection with High-Resolution Drift Model

    DTIC Science & Technology

    2012-09-01

    as potential tools for large area detection coverage while being moderately inexpensive (Wettergren, Performance of Search via Track-Before-Detect for...via Track-Before-Detect for Distributed Sensor Networks, 2008). These statements highlight three specific needs to further sensor network research...Bay hydrography. Journal of Marine Systems, 12, 221–236. Wettergren, T. A. (2008). Performance of search via track-before-detect for distributed

  1. Molecular Dynamics and Morphology of High Performance Elastomers and Fibers by Solid State NMR

    DTIC Science & Technology

    2016-06-30

    Distribution Unlimited. Final Report, 1-Sep-2015 to 31-May-2016: Molecular Dynamics and Morphology of High-Performance Elastomers and Fibers by Solid-State NMR.

  2. Aho-Corasick String Matching on Shared and Distributed Memory Parallel Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Villa, Oreste; Chavarría-Miranda, Daniel

    String matching is at the core of many critical applications, including network intrusion detection systems, search engines, virus scanners, spam filters, DNA and protein sequencing, and data mining. For all of these applications, string matching requires a combination of (sometimes all of) the following characteristics: high and/or predictable performance, support for large data sets, and flexibility of integration and customization. Many software-based implementations targeting conventional cache-based microprocessors fail to achieve high and predictable performance requirements, while Field-Programmable Gate Array (FPGA) implementations and dedicated hardware solutions fail to support large data sets (dictionary sizes) and are difficult to integrate and customize. The advent of multicore, multithreaded, and GPU-based systems is opening the possibility for software-based solutions to reach very high performance at a sustained rate. This paper compares several software-based implementations of the Aho-Corasick string searching algorithm for high performance systems. We discuss the implementation of the algorithm on several types of shared-memory high-performance architectures (Niagara 2, large x86 SMPs, and Cray XMT), distributed memory with homogeneous processing elements (InfiniBand cluster of x86 multicores), and heterogeneous processing elements (InfiniBand cluster of x86 multicores with NVIDIA Tesla C10 GPUs). We describe in detail how each solution achieves the objectives of supporting large dictionaries, sustaining high performance, and enabling customization and flexibility using various data sets.
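The Aho-Corasick algorithm compared in this paper finds all occurrences of every dictionary pattern in a single pass over the text, using a trie augmented with failure links. A minimal single-threaded Python sketch of the classic construction (the paper's implementations are parallel codes for the architectures listed above):

```python
from collections import deque

def build_aho_corasick(patterns):
    """Build goto/fail/output tables for the Aho-Corasick automaton."""
    goto = [{}]              # goto[state][char] -> next state
    fail = [0]               # failure link per state
    output = [set()]         # patterns ending at each state
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({})
                fail.append(0)
                output.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        output[state].add(pat)
    # Breadth-first computation of failure links (shallower states first).
    queue = deque(goto[0].values())
    while queue:
        s = queue.popleft()
        for ch, nxt in goto[s].items():
            queue.append(nxt)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[nxt] = goto[f].get(ch, 0)
            output[nxt] |= output[fail[nxt]]   # inherit suffix matches
    return goto, fail, output

def search(text, patterns):
    """Yield (end_index, pattern) for every match in one pass."""
    goto, fail, output = build_aho_corasick(patterns)
    state = 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in output[state]:
            yield i, pat
```

Because the automaton is built once and the scan never backtracks, throughput is independent of dictionary size, which is why the data structure's memory layout (not the algorithmics) dominates performance on the cache-based, multithreaded, and GPU platforms the paper compares.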

  3. Simulation of the Focal Spot of the Accelerator Bremsstrahlung Radiation

    NASA Astrophysics Data System (ADS)

    Sorokin, V.; Bespalov, V.

    2016-06-01

    Testing of thick-walled objects by bremsstrahlung radiation (BR) is primarily performed via high-energy quanta. The testing parameters are specified by the focal spot size of the high-energy bremsstrahlung radiation. In determining the focal spot size, the high-energy BR portion cannot be experimentally separated from the low-energy BR to use high-energy quanta only. The patterns of BR focal spot formation have been investigated via statistical modeling of the radiation transfer in the target material. The distributions of BR quanta emitted by the target for different energies and emission angles under normal distribution of the accelerated electrons bombarding the target have been obtained, and the ratio of the distribution parameters has been determined.

  4. Windows .NET Network Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST)

    PubMed Central

    Dowd, Scot E; Zaragoza, Joaquin; Rodriguez, Javier R; Oliver, Melvin J; Payton, Paxton R

    2005-01-01

    Background BLAST is one of the most common and useful tools for genetic research. This paper describes a software application we have termed Windows .NET Distributed Basic Local Alignment Search Toolkit (W.ND-BLAST), which enhances the BLAST utility by improving usability, fault recovery, and scalability in a Windows desktop environment. Our goal was to develop an easy-to-use, fault-tolerant, high-throughput BLAST solution that incorporates a comprehensive BLAST result viewer with curation and annotation functionality. Results W.ND-BLAST is a comprehensive Windows-based software toolkit that targets researchers, including those with minimal computer skills, and provides the ability to increase the performance of BLAST by distributing BLAST queries to any number of Windows-based machines across local area networks (LANs). W.ND-BLAST provides intuitive Graphic User Interfaces (GUIs) for BLAST database creation, BLAST execution, BLAST output evaluation, and BLAST result exportation. This software also provides several layers of fault tolerance and fault recovery to prevent loss of data if nodes or master machines fail. This paper lays out the functionality of W.ND-BLAST. W.ND-BLAST displays close to 100% performance efficiency when distributing tasks to 12 remote computers of the same performance class. A high-throughput BLAST job that took 662.68 minutes (11 hours) on one average machine was completed in 44.97 minutes when distributed to 17 nodes, which included lower-performance-class machines. Finally, there are comprehensive high-throughput BLAST Output Viewer (BOV) and Annotation Engine components, which provide comprehensive exportation of BLAST hits to text files, annotated fasta files, tables, or association files. Conclusion W.ND-BLAST provides an interactive tool that allows scientists to easily utilize their available computing resources for high-throughput and comprehensive sequence analyses. 
The install package for W.ND-BLAST is freely downloadable from . With registration the software is free; installation, networking, and usage instructions are provided, as well as a support forum. PMID:15819992

  5. A Weibull distribution accrual failure detector for cloud computing

    PubMed Central

    Wu, Zhibo; Wu, Jin; Zhao, Yao; Wen, Dongxin

    2017-01-01

    Failure detectors are a fundamental component for building high-availability distributed systems. To meet the requirements of complicated large-scale distributed systems, accrual failure detectors that can adapt to multiple applications have been studied extensively. However, several implementations of accrual failure detectors do not adapt well to the cloud service environment. To solve this problem, a new accrual failure detector based on the Weibull distribution, called the Weibull Distribution Failure Detector, has been proposed specifically for cloud computing. It can adapt to the dynamic and unexpected network conditions of cloud computing. The performance of the Weibull Distribution Failure Detector is evaluated and compared using public classical experiment data and cloud computing experiment data. The results show that the Weibull Distribution Failure Detector has better performance in terms of speed and accuracy in unstable scenarios, especially in cloud computing. PMID:28278229

  6. Generating high-accuracy urban distribution map for short-term change monitoring based on convolutional neural network by utilizing SAR imagery

    NASA Astrophysics Data System (ADS)

    Iino, Shota; Ito, Riho; Doi, Kento; Imaizumi, Tomoyuki; Hikosaka, Shuhei

    2017-10-01

    In developing countries, urban areas are expanding rapidly, and with this rapid development, short-term monitoring of urban changes is important. Constant observation and the creation of high-accuracy urban distribution maps free of noise pollution are the key issues for short-term monitoring. SAR satellites are highly suitable for this type of study because they can observe day or night, regardless of atmospheric weather conditions. The current study presents a methodology for generating high-accuracy urban distribution maps from SAR satellite imagery based on a Convolutional Neural Network (CNN), which has shown outstanding results for image classification. Several improvements to SAR polarization combinations and dataset construction were made to increase the accuracy. As additional data, a Digital Surface Model (DSM), which is useful for classifying land cover, was added to improve the accuracy. From the obtained result, a high-accuracy urban distribution map satisfying the quality required for short-term monitoring was generated. For the evaluation, urban changes were extracted by taking the difference of urban distribution maps. The change analysis with a time series of imagery revealed the locations of short-term urban change areas. Comparisons with optical satellites were performed to validate the results. Finally, an analysis of urban changes combining X-band, L-band, and C-band SAR satellites was attempted to increase the opportunity of acquiring satellite imagery. Further analysis will be conducted as future work of the present study.

  7. Differential models of twin correlations in skew for body-mass index (BMI).

    PubMed

    Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric

    2018-01-01

    Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.

  8. Distributed measurement of high electric current by means of polarimetric optical fiber sensor.

    PubMed

    Palmieri, Luca; Sarchi, Davide; Galtarossa, Andrea

    2015-05-04

    A novel distributed optical fiber sensor for spatially resolved monitoring of high direct electric current is proposed and analyzed. The sensor exploits Faraday rotation and is based on the polarization analysis of the Rayleigh backscattered light. Preliminary laboratory tests, performed on a section of electric cable for currents up to 2.5 kA, have confirmed the viability of the method.
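The physics underlying such a sensor is Ampere's law applied to Faraday rotation: light that effectively encircles the conductor N times accumulates a polarization rotation theta = V * mu0 * N * I, where V is the wavelength-dependent Verdet constant of the fiber. A minimal sketch of this relation with an illustrative Verdet value for silica; the distributed, backscatter-based measurement in the paper is considerably more involved:

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability, T*m/A
VERDET = 0.54                 # Verdet constant of silica, rad/(T*m);
                              # illustrative, wavelength-dependent value

def rotation_angle(current_a, n_turns):
    """Faraday rotation for light encircling a conductor n_turns times:
    by Ampere's law, theta = V * mu0 * N * I."""
    return VERDET * MU0 * n_turns * current_a

def current_from_rotation(theta_rad, n_turns):
    """Invert the relation to estimate the conductor current from a
    measured polarization rotation."""
    return theta_rad / (VERDET * MU0 * n_turns)
```

The per-turn rotation is tiny (microradians per ampere in silica), which is why practical sensors integrate over many effective turns or, as here, extract the rotation distributedly from polarization-resolved Rayleigh backscatter.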

  9. A Simple Index for the High-Citation Tail of Citation Distribution to Quantify Research Performance in Countries and Institutions

    PubMed Central

    Rodríguez-Navarro, Alonso

    2011-01-01

    Background Conventional scientometric predictors of research performance such as the number of papers, citations, and papers in the top 1% of highly cited papers cannot be validated in terms of the number of Nobel Prize achievements across countries and institutions. The purpose of this paper is to find a bibliometric indicator that correlates with the number of Nobel Prize achievements. Methodology/Principal Findings This study assumes that the high-citation tail of citation distribution holds most of the information about high scientific performance. Here I propose the x-index, which is calculated from the number of national articles in the top 1% and 0.1% of highly cited papers and has a subtractive term to discount highly cited papers that are not scientific breakthroughs. The x-index, the number of Nobel Prize achievements, and the number of national articles in Nature or Science are highly correlated. The high correlations among these independent parameters demonstrate that they are good measures of high scientific performance because scientific excellence is their only common characteristic. However, the x-index has superior features as compared to the other two parameters. Nobel Prize achievements are low frequency events and their number is an imprecise indicator, which in addition is zero in most institutions; the evaluation of research making use of the number of publications in prestigious journals is not advised. Conclusion The x-index is a simple and precise indicator for high research performance. PMID:21647383

  10. A simple index for the high-citation tail of citation distribution to quantify research performance in countries and institutions.

    PubMed

    Rodríguez-Navarro, Alonso

    2011-01-01

    Conventional scientometric predictors of research performance such as the number of papers, citations, and papers in the top 1% of highly cited papers cannot be validated in terms of the number of Nobel Prize achievements across countries and institutions. The purpose of this paper is to find a bibliometric indicator that correlates with the number of Nobel Prize achievements. This study assumes that the high-citation tail of citation distribution holds most of the information about high scientific performance. Here I propose the x-index, which is calculated from the number of national articles in the top 1% and 0.1% of highly cited papers and has a subtractive term to discount highly cited papers that are not scientific breakthroughs. The x-index, the number of Nobel Prize achievements, and the number of national articles in Nature or Science are highly correlated. The high correlations among these independent parameters demonstrate that they are good measures of high scientific performance because scientific excellence is their only common characteristic. However, the x-index has superior features as compared to the other two parameters. Nobel Prize achievements are low frequency events and their number is an imprecise indicator, which in addition is zero in most institutions; the evaluation of research making use of the number of publications in prestigious journals is not advised. The x-index is a simple and precise indicator for high research performance.

  11. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    NASA Astrophysics Data System (ADS)

    Dykstra, D.; Bockelman, B.; Blomer, J.; Herner, K.; Levshina, T.; Slyz, M.

    2015-12-01

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be relatively small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality of the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker-node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. 
Files are published in a central place and are soon available on demand throughout the grid and cached locally on the site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
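The alien-cache feature described above is enabled through the CVMFS client configuration. A minimal sketch; the mount path is a site-specific assumption, and the parameter names are from the standard CVMFS client configuration:

```shell
# /etc/cvmfs/default.local -- illustrative site configuration.
# Point the client cache at a shared high-bandwidth filesystem
# (e.g. a Lustre or HDFS-FUSE mount) instead of worker-node local disk.
CVMFS_ALIEN_CACHE=/mnt/shared-cache/cvmfs
# An alien cache is managed externally, so client quota handling is off.
CVMFS_QUOTA_LIMIT=-1
# Alien caches are used without the client's shared local cache mode.
CVMFS_SHARED_CACHE=no
```

With this configuration every worker node at the site reads through the same cache directory, so a file fetched once from the Stratum 1 is immediately warm for all other jobs at the site.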

  12. Engineering the CernVM-Filesystem as a High Bandwidth Distributed Filesystem for Auxiliary Physics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dykstra, D.; Bockelman, B.; Blomer, J.

    A common use pattern in the computing models of particle physics experiments is running many distributed applications that read from a shared set of data files. We refer to this data as auxiliary data, to distinguish it from (a) event data from the detector (which tends to be different for every job) and (b) conditions data about the detector (which tends to be the same for each job in a batch of jobs). Conditions data also tends to be small per job, whereas both event data and auxiliary data are larger per job. Unlike event data, auxiliary data comes from a limited working set of shared files. Since there is spatial locality in the auxiliary data access, the use case appears to be identical to that of the CernVM-Filesystem (CVMFS). However, we show that distributing auxiliary data through CVMFS causes the existing CVMFS infrastructure to perform poorly. We utilize a CVMFS client feature called "alien cache" to cache data on existing local high-bandwidth data servers that were engineered for storing event data. This cache is shared between the worker nodes at a site and replaces caching CVMFS files on both the worker node local disks and on the site's local squids. We have tested this alien cache with the dCache NFSv4.1 interface, Lustre, and the Hadoop Distributed File System (HDFS) FUSE interface, and measured performance. In addition, we use high-bandwidth data servers at central sites to perform the CVMFS Stratum 1 function instead of the low-bandwidth web servers deployed for the CVMFS software distribution function. We have tested this using the dCache HTTP interface. As a result, we have a design for an end-to-end high-bandwidth distributed caching read-only filesystem, using existing client software already widely deployed to grid worker nodes and existing file servers already widely installed at grid sites. Files are published in a central place, are soon available on demand throughout the grid, and are cached locally at each site with a convenient POSIX interface. This paper discusses the details of the architecture and reports performance measurements.
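The alien-cache arrangement described above maps onto a handful of CVMFS client settings. A minimal sketch of a site-local configuration, with a hypothetical mount path standing in for the site's shared high-bandwidth data server:

```shell
# /etc/cvmfs/default.local -- hypothetical site configuration sketch
CVMFS_ALIEN_CACHE=/mnt/shared-data/cvmfs-cache   # cache lives on the shared data server, not local disk
CVMFS_SHARED_CACHE=no                            # the regular shared local cache must be disabled
CVMFS_QUOTA_LIMIT=-1                             # cache space is managed by the data server, not the client
```

With these settings, every worker node mounting the repository reads and writes the same cache directory, which is the behavior the paper exploits to replace both node-local disk caches and the site squids.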

  13. Scale effect challenges in urban hydrology highlighted with a Fully Distributed Model and High-resolution rainfall data

    NASA Astrophysics Data System (ADS)

    Ichiba, Abdellah; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel; Bompard, Philippe; Ten Veldhuis, Marie-Claire

    2017-04-01

    Nowadays there is growing interest in small-scale rainfall information, provided by weather radars, for use in urban water management and decision-making. In parallel, increasing attention is devoted to fully distributed, grid-based models, following the growth of computational capabilities and the availability of the high-resolution GIS information such models require. However, the choice of an implementation scale that integrates both the catchment heterogeneity and the full rainfall variability measured by high-resolution radar technologies remains an open issue. This work proposes a two-step investigation of scale effects in urban hydrology and their impact on modeling. In the first step, fractal tools are used to highlight the scale dependency observed within the distributed data used to describe catchment heterogeneity; both the structure of the sewer network and the distribution of impervious areas are analyzed. Then an intensive multi-scale modeling exercise is carried out to understand scaling effects on hydrological model performance. Investigations were conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model was implemented at 17 spatial resolutions ranging from 100 m to 5 m, and modeling investigations were performed using both rain gauge rainfall information and high-resolution X-band radar data in order to assess the sensitivity of the model to small-scale rainfall variability. The results demonstrate the scale-effect challenges in urban hydrological modeling: the fractal analysis highlights the scale dependency of the distributed data used to implement hydrological models, and the patterns of geophysical data change with the observation pixel size.
The multi-scale modeling investigation performed with the Multi-Hydro model at 17 spatial resolutions confirms the effect of scale on hydrological model performance. Results were analyzed at the three ranges of scales identified in the fractal analysis and confirmed in the modeling work. The sensitivity of the model to small-scale rainfall variability is discussed as well.

  14. Design considerations of high-performance InGaAs/InP single-photon avalanche diodes for quantum key distribution.

    PubMed

    Ma, Jian; Bai, Bing; Wang, Liu-Jun; Tong, Cun-Zhu; Jin, Ge; Zhang, Jun; Pan, Jian-Wei

    2016-09-20

    InGaAs/InP single-photon avalanche diodes (SPADs) are widely used in practical applications requiring near-infrared photon counting, such as quantum key distribution (QKD). Photon detection efficiency and dark count rate are the intrinsic parameters of InGaAs/InP SPADs: given the same operating conditions, they cannot be improved by using different quenching electronics. After modeling these parameters and developing a simulation platform for InGaAs/InP SPADs, we investigate the semiconductor structure design and optimization. Photon detection efficiency and dark count rate depend strongly on the absorption layer thickness, multiplication layer thickness, excess bias voltage, and temperature. By evaluating decoy-state QKD performance, the variables for SPAD design and operation can be globally optimized. Such optimization from the perspective of a specific application provides an effective approach to designing high-performance InGaAs/InP SPADs.

  15. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for $W$ and $H$. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low-rank factors $W$ and $H$ within the alternating iterations.
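The alternating update of W and H at the core of NMF can be sketched serially. The paper solves alternating nonnegative least squares subproblems with the data distributed via MPI; the sketch below instead uses Lee-Seung multiplicative updates as a compact single-process stand-in for that alternating iteration, so it shows only the numerics, not the distribution:

```python
import numpy as np

def nmf_multiplicative(A, k, iters=200, seed=0, eps=1e-9):
    """Serial NMF sketch: alternately update H and W so that
    A is approximated by the nonnegative product W @ H."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

On an exactly low-rank nonnegative matrix, the reconstruction error shrinks toward zero; a distributed implementation would partition the rows of W and columns of H across ranks and exchange only the small Gram matrices.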

  16. Estimating the proportion of true null hypotheses when the statistics are discrete.

    PubMed

    Dialsingh, Isaac; Austin, Stefanie R; Altman, Naomi S

    2015-07-15

    In high-dimensional testing problems, π0, the proportion of null hypotheses that are true, is an important parameter. For discrete test statistics, the P values come from a discrete distribution with finite support, and the null distribution may depend on an ancillary statistic, such as a table margin, that varies among the test statistics. Methods for estimating π0 developed for continuous test statistics, which depend on a uniform or identical null distribution of P values, may not perform well when applied to discrete testing problems. This article introduces a number of π0 estimators, the regression and 'T' methods, that perform well with discrete test statistics, and also assesses how well methods developed for or adapted from continuous tests perform with discrete tests. We demonstrate the usefulness of these estimators in the analysis of high-throughput biological RNA-seq and single-nucleotide polymorphism data. The methods are implemented in R. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
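For context, the continuous-statistic baseline the abstract contrasts against can be summarized by Storey's λ-threshold estimator of π0; a minimal sketch (not the paper's regression or 'T' method):

```python
import numpy as np

def storey_pi0(pvalues, lam=0.5):
    """Storey's estimator of pi0, the proportion of true nulls.

    Assumes null P values are Uniform(0, 1), so roughly a fraction
    (1 - lam) * pi0 of all P values exceed lam. For discrete test
    statistics this uniformity fails, which is exactly the bias the
    paper's estimators for discrete tests are designed to address.
    """
    p = np.asarray(pvalues, dtype=float)
    return min(1.0, float(np.mean(p > lam)) / (1.0 - lam))
```

On a synthetic mixture of 80% uniform null P values and 20% tiny alternative P values, the estimate lands near the true π0 = 0.8.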

  17. Distributed Computing Architecture for Image-Based Wavefront Sensing and 2 D FFTs

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey S.; Dean, Bruce H.; Haghani, Shadan

    2006-01-01

    Image-based wavefront sensing (WFS) provides significant advantages over interferometric wavefront sensors, such as optical design simplicity and stability. However, the image-based approach is computationally intensive, and therefore specialized high-performance computing architectures are required in applications that use it. The development and testing of these high-performance computing architectures are essential to such missions as the James Webb Space Telescope (JWST), Terrestrial Planet Finder-Coronagraph (TPF-C and CorSpec), and Spherical Primary Optical Telescope (SPOT). The image-based approach requires numerous two-dimensional Fourier transforms, which necessitate an all-to-all communication when applied on a distributed computational architecture. Several solutions for distributed computing are presented, with an emphasis on a 64-node cluster of DSPs, multiple DSP FPGAs, and an application of low-diameter graph theory. Timing results and performance analysis are presented. The solutions offered could be applied to other all-to-all communication and computationally complex scientific problems.
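The all-to-all communication arises because the row-column method for a 2D FFT needs a global transpose between its two passes of 1D FFTs. A serial sketch in which an in-memory transpose stands in for the all-to-all exchange a distributed machine would perform:

```python
import numpy as np

def fft2_row_column(x):
    """2D FFT via the row-column method.

    On a distributed machine each node holds a block of rows; the
    transpose between the two 1D FFT passes redistributes the data
    so every node touches every other node (the all-to-all step).
    """
    y = np.fft.fft(x, axis=1)   # 1D FFTs along rows (node-local work)
    y = y.T                     # the all-to-all: columns become rows
    y = np.fft.fft(y, axis=1)   # 1D FFTs along the redistributed rows
    return y.T                  # transpose back to the original layout
```

The result matches np.fft.fft2 exactly, which makes the decomposition easy to verify before worrying about the communication pattern.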

  18. Processing Benefits of Resonance Acoustic Mixing on High Performance Propellants and Explosives

    DTIC Science & Technology

    2012-02-01

    Distribution Statement A: Approved for Public Release. [The remaining record text consists of briefing-chart fragments comparing tensile properties (stress, modulus, dewetting) of Resodyn-mixed explosive; no abstract text is available.]

  19. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supports task scheduling for efficient load distribution and balancing, and consists of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
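The scalability analysis mentioned above rests on Amdahl's law; a minimal sketch of the formula (the 0.95 parallel fraction in the usage example is illustrative, not a figure from the paper):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup of a program whose parallelizable
    fraction p runs on n cores while the rest stays serial."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)
```

For example, a hypothetical code that is 95% parallelizable gains only about a 7.7x speedup on 12 cores, which is why measured near-linear scaling (such as the 12-fold result above) implies a very small serial fraction.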

  20. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes; both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which reviews the fundamentals of coding, block coding, and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced, and the performance of turbo codes, especially high-rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver, and the puncturing pattern are examined, and a criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high-rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special-rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
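Puncturing, as examined in the report, deletes coded bits according to a periodic mask to raise the code rate. A sketch of the mechanics (the rate-1/2 to rate-3/4 pattern below is a common textbook choice, not necessarily one from the report):

```python
import numpy as np

def puncture(bits, pattern):
    """Puncture a rate-1/n coded stream.

    bits has shape (n, L), one row per encoder output; pattern is an
    (n, P) 0/1 mask applied periodically along the stream. Surviving
    bits are read out column-wise, i.e. in transmission (time) order.
    """
    n, L = bits.shape
    mask = np.tile(pattern, (1, L // pattern.shape[1] + 1))[:, :L]
    return bits.T[mask.T.astype(bool)]
```

With the mask [[1,1,0],[1,0,1]], 6 information bits produce 12 rate-1/2 coded bits of which 8 survive, giving the overall rate 6/8 = 3/4.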

  1. Universality, correlations, and rankings in the Brazilian universities national admission examinations

    NASA Astrophysics Data System (ADS)

    da Silva, Roberto; Lamb, Luis C.; Barbosa, Marcia C.

    2016-09-01

    We analyze the scores obtained by students who have taken the ENEM examination, the Brazilian high school national examination used in the admission process at Brazilian universities. The average high school scores in different disciplines are compared through the Pearson correlation coefficient. The results show a very large correlation between performance in the different school subjects. Even though the students' ENEM scores follow a Gaussian distribution due to the standardization, we show that the high schools' scores form a bimodal distribution that cannot be used to evaluate and compare student performance over time. We also show that this high school distribution reflects the correlation between school performance and the economic level (based on average family income) of the students. The ENEM scores are compared with a Brazilian non-standardized exam, the entrance examination of the Universidade Federal do Rio Grande do Sul. The analysis of the performance of the same individuals in both tests shows that the two tests not only select different abilities but also lead to the admission of different sets of individuals. Our results indicate that standardized tests might be an interesting tool for comparing the performance of individuals over the years, but not of institutions.
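The inter-subject comparison above relies on the Pearson correlation coefficient; a minimal self-contained sketch (the score vectors used to exercise it are synthetic):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors:
    the covariance of x and y normalized by both standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))
```

A value near +1 indicates the near-linear relationship between subject scores that the paper reports; near 0 would indicate unrelated abilities.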

  2. Mean estimation in highly skewed samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, S P

    The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution. 4 refs., 2 figs., 1 tab.
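The under-coverage of normal-theory intervals for skewed data can be illustrated with a small simulation. The paper simulates from an absolute-value Cauchy distribution, whose mean is not finite, so the sketch below substitutes a lognormal population with a known mean; the sample size and skewness parameter are illustrative:

```python
import numpy as np

def normal_interval_coverage(n=20, sims=2000, sigma=1.5, seed=1):
    """Empirical coverage of the nominal-95% normal-theory interval
    for the mean of a right-skewed lognormal population.

    The true mean of lognormal(0, sigma) is exp(sigma**2 / 2), so we
    can count how often the usual x_bar +/- 1.96 * s / sqrt(n)
    interval actually contains it.
    """
    rng = np.random.default_rng(seed)
    true_mean = np.exp(sigma**2 / 2)
    hits = 0
    for _ in range(sims):
        x = rng.lognormal(mean=0.0, sigma=sigma, size=n)
        half = 1.96 * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / sims
```

The measured coverage falls well below the nominal 95% because the under-sampled right tail drags the sample mean and standard deviation down together, which is the failure mode the abstract describes.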

  3. High-performance mass storage system for workstations

    NASA Technical Reports Server (NTRS)

    Chiang, T.; Tang, Y.; Gupta, L.; Cooperman, S.

    1993-01-01

    Reduced Instruction Set Computer (RISC) workstations and Personal Computers (PCs) are very popular tools for office automation, command and control, scientific analysis, database management, and many other applications. However, when running Input/Output (I/O) intensive applications, RISC workstations and PCs are often overburdened with the tasks of collecting, staging, storing, and distributing data, and even with standard high-performance peripherals and storage devices the I/O function can still be a common bottleneck. Therefore, the high-performance mass storage system, developed by Loral AeroSys' Independent Research and Development (IR&D) engineers, can offload a RISC workstation of I/O related functions and provide high-performance I/O functions and external interfaces. The high-performance mass storage system has the capabilities to ingest high-speed real-time data, perform signal or image processing, and stage, archive, and distribute the data. This mass storage system uses a hierarchical storage structure, thus reducing the total data storage cost while maintaining high I/O performance. The high-performance mass storage system is a network of low-cost parallel processors and storage devices. The nodes in the network have special I/O functions such as SCSI controller, Ethernet controller, gateway controller, RS232 controller, IEEE488 controller, and digital/analog converter. The nodes are interconnected through high-speed direct memory access links to form a network whose topology is easily reconfigurable to maximize system throughput for various applications. This high-performance mass storage system takes advantage of a 'busless' architecture for maximum expandability. The mass storage system consists of magnetic disks, a WORM optical disk jukebox, and an 8mm helical scan tape to form a hierarchical storage structure. Commonly used files are kept on the magnetic disks for fast retrieval. 
The optical disks are used as archive media, and the tapes are used as backup media. The storage system is managed by the IEEE mass storage reference model-based UniTree software package. UniTree keeps track of all files in the system, automatically migrates lesser-used files to archive media, and stages files when needed by the system. The user can access the files without knowledge of their physical location. The high-performance mass storage system developed by Loral AeroSys significantly boosts system I/O performance and reduces the overall data storage cost. This storage system provides a highly flexible and cost-effective architecture for a variety of applications (e.g., real-time data acquisition with a signal and image processing requirement, long-term data archiving and distribution, and image analysis and enhancement).

  4. Field Demonstration of a Centrifugal Ultra High Pressure (UHP) P-19

    DTIC Science & Technology

    2010-03-01

    United States Air Force, Tyndall Air Force Base, FL 32403-5323. Distribution A: Approved for public release; distribution unlimited. [The remaining record text consists of report documentation page fragments; no abstract text is available.]

  5. Conversion of methanol to propylene over hierarchical HZSM-5: the effect of Al spatial distribution.

    PubMed

    Li, Jianwen; Ma, Hongfang; Chen, Yan; Xu, Zhiqiang; Li, Chunzhong; Ying, Weiyong

    2018-06-08

    Different silicon sources produced different Al spatial distributions in HZSM-5, which affected the hierarchical structures and catalytic performance of the desilicated zeolites. After treatment with 0.1 M NaOH, HZSM-5 zeolites synthesized with silica sol exhibited relatively widely distributed mesopores and channels, and possessed highly improved propylene selectivity and activity stability.

  6. Benchmark experiments at ASTRA facility on definition of space distribution of {sup 235}U fission reaction rate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bobrov, A. A.; Boyarinov, V. F.; Glushkov, A. E.

    2012-07-01

    Results of critical experiments performed at five ASTRA facility configurations modeling high-temperature helium-cooled graphite-moderated reactors are presented. Results of experiments on the spatial distribution of the 235U fission reaction rate, performed at four of these five configurations, are presented in more detail. Analysis of the available information showed that all criticality experiments at these five configurations are acceptable for use as critical benchmark experiments, and that all experiments on the spatial distribution of the 235U fission reaction rate are acceptable for use as physical benchmark experiments. (authors)

  7. Groundwater remediation engineering sparging using acetylene--study on the flow distribution of air.

    PubMed

    Zheng, Yan-Mei; Zhang, Ying; Huang, Guo-Qiang; Jiang, Bin; Li, Xin-Gang

    2005-01-01

    Air sparging (AS) is an emerging method for removing VOCs from saturated soils and groundwater. Air sparging performance depends strongly on the air distribution that results in the aquifer. In order to characterize the gas flow, a two-dimensional experimental chamber was designed and installed, and a method using acetylene as a tracer to directly image the gas distribution during the AS process was put forward. Experiments were performed at different injected gas flow rates. The gas flow patterns were found to depend significantly on the injected gas flow rate, and the characterization of gas flow distributions in porous media obtained from the acetylene tracing study was very different. Lower and higher gas flow rates generally yield gas distributions that are more irregular in shape and less effective.

  8. Distributed deep learning networks among institutions for medical imaging.

    PubMed

    Chang, Ken; Balachandar, Niranjan; Lam, Carson; Yi, Darvin; Brown, James; Beers, Andrew; Rosen, Bruce; Rubin, Daniel L; Kalpathy-Cramer, Jayashree

    2018-03-29

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single-institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification on 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in performance comparable to that of centrally hosted patient data, and that the performance of the cyclical weight transfer heuristic improves with a higher frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.
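The cyclical weight transfer heuristic can be sketched with a toy linear model standing in for a deep network; the institutions, data, and hyperparameters below are all synthetic:

```python
import numpy as np

def cyclical_weight_transfer(datasets, cycles=50, lr=0.1):
    """Sketch of cyclical weight transfer: a single weight vector is
    trained briefly at each institution in turn, then passed to the
    next, so raw patient data never leaves its site."""
    d = datasets[0][0].shape[1]
    w = np.zeros(d)
    for _ in range(cycles):
        for X, y in datasets:                    # visit sites in a fixed cycle
            grad = X.T @ (X @ w - y) / len(y)    # local least-squares gradient
            w -= lr * grad                       # short local training phase
    return w
```

When every institution's (noiseless, synthetic) data is generated from the same underlying weights, the traveling model recovers them, mirroring the paper's finding that cycling can match centralized training.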

  9. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803

  10. Effect of mahlep on molecular weight distribution of cookie flour gluten proteins

    USDA-ARS?s Scientific Manuscript database

    Size-exclusion high-performance liquid chromatography (SE-HPLC) has been extensively used in molecular weight distribution analysis of wheat proteins. In this study the protein analysis was conducted on different cookie dough blends with different percentages of some ingredients. The mean chromatography ...

  11. IGMS: An Integrated ISO-to-Appliance Scale Grid Modeling System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palmintier, Bryan; Hale, Elaine; Hansen, Timothy M.

    This paper describes the Integrated Grid Modeling System (IGMS), a novel electric power system modeling platform for integrated transmission-distribution analysis that co-simulates off-the-shelf tools on high performance computing (HPC) platforms to offer unprecedented resolution from ISO markets down to appliances and other end uses. Specifically, the system simultaneously models hundreds or thousands of distribution systems in co-simulation with detailed Independent System Operator (ISO) markets and AGC-level reserve deployment. IGMS uses a new MPI-based hierarchical co-simulation framework to connect existing sub-domain models. Our initial efforts integrate open-source tools for wholesale markets (FESTIV), bulk AC power flow (MATPOWER), and full-featured distribution systems including physics-based end-use and distributed generation models (many instances of GridLAB-D[TM]). The modular IGMS framework enables tool substitution and additions for multi-domain analyses. This paper describes the IGMS tool, characterizes its performance, and demonstrates the impacts of the coupled simulations for analyzing high-penetration solar PV and price-responsive load scenarios.

  12. A data distributed parallel algorithm for ray-traced volume rendering

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Painter, James S.; Hansen, Charles D.; Krogh, Michael F.

    1993-01-01

    This paper presents a divide-and-conquer ray-traced volume rendering algorithm and a parallel image compositing method, along with their implementation and performance on the Connection Machine CM-5, and networked workstations. This algorithm distributes both the data and the computations to individual processing units to achieve fast, high-quality rendering of high-resolution data. The volume data, once distributed, is left intact. The processing nodes perform local ray tracing of their subvolume concurrently. No communication between processing units is needed during this locally ray-tracing process. A subimage is generated by each processing unit and the final image is obtained by compositing subimages in the proper order, which can be determined a priori. Test results on both the CM-5 and a group of networked workstations demonstrate the practicality of our rendering algorithm and compositing method.
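The ordered compositing step above can be sketched as front-to-back "over" blending of premultiplied-alpha RGBA subimages; this serial version omits the parallel exchange of subimages between processing nodes:

```python
import numpy as np

def composite_over(subimages):
    """Front-to-back 'over' compositing of premultiplied-alpha RGBA
    subimages, supplied in the a-priori depth order the algorithm
    determines before rendering."""
    out = np.zeros_like(subimages[0])
    for img in subimages:          # front to back
        alpha = out[..., 3:4]      # opacity accumulated so far
        out = out + (1.0 - alpha) * img
    return out
```

Because the "over" operator is associative, subimages can be combined pairwise in any grouping as long as depth order is preserved, which is what lets the compositing itself be parallelized across nodes.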

  13. Excimer laser annealing for low-voltage power MOSFET

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Okada, Tatsuya; Noguchi, Takashi; Mazzamuto, Fulvio; Huet, Karim

    2016-08-01

    Excimer laser annealing with a lumped beam was performed to form the P-base junction of a high-performance low-voltage power MOSFET. An equivalent shallow-junction structure for the P-base junction with a uniform impurity distribution is realized by adopting excimer laser annealing (ELA). The impurity distribution in the P-base junction can be controlled precisely by the irradiated pulse energy density and the number of excimer laser shots. High impurity activation for the shallow junction has been confirmed in the melted phase. The application of laser annealing technology in the fabrication process of a practical low-voltage trench-gate MOSFET was also examined.

  14. Distributed Accounting on the Grid

    NASA Technical Reports Server (NTRS)

    Thigpen, William; Hacker, Thomas J.; McGinnis, Laura F.; Athey, Brian D.

    2001-01-01

    By the late 1990s, the Internet was adequately equipped to move vast amounts of data between HPC (High Performance Computing) systems, and efforts were initiated to link the national infrastructure of high-performance computational and data storage resources into a general computational utility 'grid', analogous to the national electrical power grid infrastructure. The purpose of the computational grid is to provide dependable, consistent, pervasive, and inexpensive access to computational resources for the computing community in the form of a computing utility. This paper presents a fully distributed view of Grid usage accounting and a methodology for allocating Grid computational resources for use on a Grid computing system.

  15. 3D noninvasive ultrasound Joule heat tomography based on acousto-electric effect using unipolar pulses: a simulation study

    PubMed Central

    Yang, Renhuan; Li, Xu; Song, Aiguo; He, Bin; Yan, Ruqiang

    2012-01-01

    Electrical properties of biological tissues are highly sensitive to their physiological and pathological status, so it is important to image the electrical properties of biological tissues. However, the spatial resolution of conventional electrical impedance tomography (EIT) is generally poor. Recently, hybrid imaging modalities combining electric conductivity contrast and ultrasonic resolution based on the acousto-electric effect have attracted considerable attention. In this study, we propose a novel three-dimensional (3D) noninvasive ultrasound Joule heat tomography (UJHT) approach based on the acousto-electric effect using unipolar ultrasound pulses. As the Joule heat density distribution is highly dependent on the conductivity distribution, an accurate and high-resolution mapping of the Joule heat density distribution is expected to give important information that is closely related to the conductivity contrast. The advantages of the proposed ultrasound Joule heat tomography using unipolar pulses include its simple inverse solution, better performance than UJHT using common bipolar pulses, and its independence of any a priori knowledge of the conductivity distribution of the imaging object. Computer simulation results show that using the proposed method, it is feasible to perform high-spatial-resolution Joule heat imaging in an inhomogeneous conductive medium. Application of this technique to tumor scanning is also investigated by a series of computer simulations. PMID:23123757

  16. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems in which a large virtually-shared address space is mapped onto a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit the performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots, and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has proved to be a promising approach for achieving good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed by comparing three network models that operate at different levels of accuracy. The comparison and model validation are performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.
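The latency-tolerance argument above can be illustrated with a deliberately simple queueing-style sketch (not the paper's network model; all parameters are hypothetical): per-reference latency grows as the network approaches saturation, and hardware multithreading hides it only while enough ready threads are available.

```python
def effective_latency(base_latency, utilization):
    """M/M/1-style contention scaling: observed memory latency
    grows without bound as network utilization approaches 1."""
    assert 0.0 <= utilization < 1.0
    return base_latency / (1.0 - utilization)

def threads_needed(base_latency, utilization, work_per_ref):
    """Hardware threads per processor required to hide latency:
    each thread computes for work_per_ref cycles between memory
    references, so the latency gap must be filled by other threads."""
    latency = effective_latency(base_latency, utilization)
    return int(latency / work_per_ref) + 1

light = effective_latency(100, 0.1)    # close to the uncontended 100 cycles
heavy = effective_latency(100, 0.9)    # an order of magnitude worse
threads = threads_needed(100, 0.9, 10) # thread count needed under contention
```

The sharp growth of `threads_needed` near saturation is exactly why contention-aware network modeling matters when simulating massively multithreaded DSM machines.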

  17. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    PubMed

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, the transmitter and receiver are placed on separate platforms that use independent master oscillators, so there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.
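The growth of uncompensated phase error between independent master oscillators, and the benefit of a periodic synchronization link, can be sketched with a deliberately simplified linear-drift model (the paper uses full statistical phase-noise models; the carrier frequency and offset below are hypothetical):

```python
def phase_error(freq_offset_ppb, t_seconds, carrier_hz):
    """Accumulated carrier phase error (in cycles) between two
    free-running oscillators whose fractional frequency offset is
    freq_offset_ppb parts per billion."""
    return freq_offset_ppb * 1e-9 * carrier_hz * t_seconds

def residual_after_sync(freq_offset_ppb, sync_interval_s, carrier_hz):
    """Worst-case phase error if a dedicated synchronization link
    re-measures and removes the frequency offset every
    sync_interval_s seconds."""
    return phase_error(freq_offset_ppb, sync_interval_s, carrier_hz)

# A 10 ppb offset at a 10 GHz carrier drifts on the order of 100
# cycles per second; millisecond re-synchronization bounds the
# residual near a tenth of a cycle.
drift = phase_error(10, 1.0, 10e9)
residual = residual_after_sync(10, 1e-3, 10e9)
```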

  18. The implementation and use of Ada on distributed systems with high reliability requirements

    NASA Technical Reports Server (NTRS)

    Knight, J. C.

    1988-01-01

    The use and implementation of Ada were investigated in distributed environments in which reliability is the primary concern. In particular, the focus was on the possibility that a distributed system may be programmed entirely in Ada, so that the individual tasks of the system are unconcerned with the processors on which they execute, and that failures may occur in the software and underlying hardware. A secondary interest is in the performance of Ada systems and how that performance can be gauged reliably. Primary activities included: analysis of the original approach to recovery in distributed Ada programs using the Advanced Transport Operating System (ATOPS) example; review and assessment of the original approach, which was found to be capable of improvement; development of a refined approach to recovery that was applied to the ATOPS example; and design and development of a performance assessment scheme for Ada programs based on a flexible user-driven benchmarking system.

  19. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 2; Preliminary Results

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Weston, R. P.; Samareh, J. A.; Mason, B. H.; Green, L. L.; Biedron, R. T.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity finite-element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a high-speed civil transport configuration. The paper describes both the preliminary results from implementing and validating the multidisciplinary analysis and the results from an aerodynamic optimization. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture compliant software product. A companion paper describes the formulation of the multidisciplinary analysis and optimization system.

  20. Robust modeling and performance analysis of high-power diode side-pumped solid-state laser systems.

    PubMed

    Kashef, Tamer; Ghoniemy, Samy; Mokhtar, Ayman

    2015-12-20

    In this paper, we present an enhanced high-power extrinsic diode side-pumped solid-state laser (DPSSL) model to accurately predict the dynamic operation and pump distribution under different practical conditions. We introduce a new implementation technique for the proposed model that provides a compelling incentive for the performance assessment and enhancement of high-power diode side-pumped Nd:YAG lasers using cooperative agents and by relying on the MATLAB, GLAD, and Zemax ray tracing software packages. A large-signal laser model that includes thermal effects and a modified laser gain formulation and incorporates the geometrical pump distribution for three radially arranged arrays of laser diodes is presented. The design of a customized prototype diode side-pumped high-power laser head fabricated for the purpose of testing is discussed. A detailed comparative experimental and simulation study of the dynamic operation and the beam characteristics, used to verify the accuracy of the proposed model for analyzing the performance of high-power DPSSLs under different conditions, is presented. The simulated and measured results of power, pump distribution, beam shape, and slope efficiency are shown under different conditions and for a specific case where the targeted output power is 140 W while the input pumping power is 400 W. The 95% output coupler reflectivity showed good agreement with the slope efficiency, which is approximately 35%; this attests to the robustness of the proposed model in accurately predicting the design parameters of practical high-power DPSSLs.

  1. Parametric Study of Pulse-Combustor-Driven Ejectors at High-Pressure

    NASA Technical Reports Server (NTRS)

    Yungster, Shaye; Paxson, Daniel E.; Perkins, Hugh D.

    2015-01-01

    Pulse-combustor configurations developed in recent studies have demonstrated performance levels at high-pressure operating conditions comparable to those observed at atmospheric conditions. However, problems related to the way fuel was distributed within the pulse combustor were still limiting performance. In the first part of this study, new configurations aimed at improving the fuel distribution and performance of the pulse combustor are investigated computationally. Subsequent sections investigate the performance of various pulse-combustor-driven ejector configurations operating at high-pressure conditions, focusing on the effects of fuel equivalence ratio and ejector throat area. The goal is to design pulse-combustor-ejector configurations that maximize pressure gain while achieving a thermal environment acceptable to a turbine, and at the same time maintain acceptable levels of NOx emissions and flow non-uniformities. The computations presented here have demonstrated pressure gains of up to 2.8%.

  2. Integrating prediction, provenance, and optimization into high energy workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schram, M.; Bansal, V.; Friese, R. D.

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
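Optimization-based assignment of tasks to resources, one of the framework components listed above, can be illustrated with a minimal earliest-finish-time heuristic driven by predicted runtimes (a generic sketch, not the framework's actual scheduler; the task names and runtimes are hypothetical):

```python
import heapq

def greedy_schedule(task_runtimes, n_resources):
    """Assign each task to the resource that becomes free first
    (earliest-finish-time heuristic), processing tasks longest-first.
    task_runtimes maps task name -> predicted runtime, e.g. from a
    performance model."""
    heap = [(0.0, r) for r in range(n_resources)]  # (free_at, resource id)
    heapq.heapify(heap)
    assignment = {}
    for task, runtime in sorted(task_runtimes.items(),
                                key=lambda kv: -kv[1]):
        free_at, rid = heapq.heappop(heap)
        assignment[task] = rid
        heapq.heappush(heap, (free_at + runtime, rid))
    makespan = max(t for t, _ in heap)  # time the last resource finishes
    return assignment, makespan

# Hypothetical workflow stages with predicted runtimes (seconds):
tasks = {"unpack": 2.0, "simulate": 8.0, "reconstruct": 4.0, "merge": 1.0}
assign, makespan = greedy_schedule(tasks, 2)
```

The longest-first ordering is the classic LPT refinement of list scheduling; better runtime predictions (from the performance-modeling component) directly tighten the makespan this heuristic achieves.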

  3. A unified framework for building high performance DVEs

    NASA Astrophysics Data System (ADS)

    Lei, Kaibin; Ma, Zhixia; Xiong, Hua

    2011-10-01

    A unified framework for integrating PC-cluster-based parallel rendering with distributed virtual environments (DVEs) is presented in this paper. While various scene graphs have been proposed for DVEs, it is difficult to make different scene graphs collaborate. This paper proposes a technique that equips non-distributed scene graphs with the capability of object and event distribution. With the increase of graphics data, DVEs require more powerful rendering ability, but general scene graphs are inefficient at parallel rendering. The paper also proposes a technique to connect a DVE with a PC-cluster-based parallel rendering environment. A distributed multi-player video game is developed to show the interaction of different scene graphs and the parallel rendering performance on a large tiled display wall.

  4. Automated aberration compensation in high numerical aperture systems for arbitrary laser modes (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Hering, Julian; Waller, Erik H.; von Freymann, Georg

    2017-02-01

    Since a large number of optical systems and devices are based on differently shaped focal intensity distributions (point-spread functions, PSFs), the PSF's quality is crucial for the application's performance. For example, optical tweezers, optical potentials for trapping ultracold atoms, and stimulated-emission-depletion (STED) based microscopy and lithography all rely on precisely controlled intensity distributions. However, especially in high numerical aperture (NA) systems, such complex laser modes are easily distorted by aberrations, leading to performance losses. Although different approaches based on phase retrieval algorithms have recently been presented [1-3], fast and automated aberration compensation for a broad variety of complex shaped PSFs in high NA systems is still missing. Here, we report on a Gerchberg-Saxton-based algorithm (GSA) [4] for automated aberration correction of arbitrary PSFs, especially for high NA systems. Deviations between the desired target intensity distribution and the three-dimensionally (3D) scanned experimental focal intensity distribution are used to calculate a correction phase pattern. The target phase distribution plus the correction pattern are displayed on a phase-only spatial light modulator (SLM). Focused by a high NA objective, experimental 3D scans of several intensity distributions allow for characterization of the algorithm's performance: aberrations are reliably identified and compensated within fewer than 10 iterations. References: 1. B. M. Hanser, M. G. L. Gustafsson, D. A. Agard, and J. W. Sedat, "Phase-retrieved pupil functions in wide-field fluorescence microscopy," J. of Microscopy 216(1), 32-48 (2004). 2. A. Jesacher, A. Schwaighofer, S. Fürhapter, C. Maurer, S. Bernet, and M. Ritsch-Marte, "Wavefront correction of spatial light modulators using an optical vortex image," Opt. Express 15(9), 5801-5808 (2007). 3. A. Jesacher and M. J. Booth, "Parallel direct laser writing in three dimensions with spatially dependent aberration correction," Opt. Express 18(20), 21090-21099 (2010). 4. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of the phase from image and diffraction plane pictures," Optik 35(2), 237-246 (1972).
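The Gerchberg-Saxton iteration of reference [4], on which the reported GSA builds, can be sketched in its textbook two-plane form (this is the generic algorithm, not the authors' aberration-correcting variant; the array size and Gaussian target are purely illustrative):

```python
import numpy as np

def gerchberg_saxton(source_amp, target_amp, iterations=50):
    """Classic two-plane Gerchberg-Saxton phase retrieval: iterate
    FFTs between the SLM (source) plane and the focal plane,
    enforcing the known amplitude in each plane and keeping only
    the phase. Returns the retrieved SLM phase pattern."""
    phase = np.random.default_rng(0).uniform(0, 2 * np.pi, source_amp.shape)
    for _ in range(iterations):
        field = source_amp * np.exp(1j * phase)       # enforce source amplitude
        focal = np.fft.fft2(field)                    # propagate to focal plane
        focal = target_amp * np.exp(1j * np.angle(focal))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(focal))         # propagate back, keep phase
    return phase

# Uniform illumination shaped into a Gaussian focal spot:
n = 64
source = np.ones((n, n))
y, x = np.mgrid[:n, :n]
target = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / 50.0)
target = np.fft.ifftshift(target)  # match the FFT's DC-at-corner convention
phi = gerchberg_saxton(source, target)
achieved = np.abs(np.fft.fft2(source * np.exp(1j * phi)))
```

The GSA described in the abstract closes this loop experimentally: the "target amplitude" constraint comes from the desired PSF, while the measured 3D focal scan supplies the deviation used to build the correction pattern.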

  5. Cooperative fault-tolerant distributed computing U.S. Department of Energy Grant DE-FG02-02ER25537 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunderam, Vaidy S.

    2007-01-09

    The Harness project has developed novel software frameworks for the execution of high-end simulations in a fault-tolerant manner on distributed resources. The H2O subsystem comprises the kernel of the Harness framework and controls the key functions of resource management across multiple administrative domains, especially issues of access and allocation. It is based on a “pluggable” architecture that enables the aggregated use of distributed heterogeneous resources for high performance computing. The major contributions of the Harness II project significantly enhance the overall computational productivity of high-end scientific applications by enabling robust, failure-resilient computations on cooperatively pooled resource collections.

  6. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, Peer-Timo; Mohr, Bernd; Schulz, Martin

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  7. New Teacher Distribution Methods Hold Promise

    ERIC Educational Resources Information Center

    Sawchuk, Stephen

    2010-01-01

    With effective teaching a top policy priority, certain school districts, the federal government, and nonprofit groups are renewing efforts to pilot and study strategies for pairing effective teachers with students in low-performing, high-poverty schools. The results could offer clues about how to rectify an imbalance in the distribution of the…

  8. A distributed infrastructure for publishing VO services: an implementation

    NASA Astrophysics Data System (ADS)

    Cepparo, Francesco; Scagnetto, Ivan; Molinaro, Marco; Smareglia, Riccardo

    2016-07-01

    This contribution describes both the design and the implementation details of a new solution for publishing VO services, highlighting its maintainable, distributed, modular, and scalable architecture. Indeed, the new publisher is multithreaded and multiprocess. Multiple instances of the modules can run on different machines to ensure high performance and high availability, both for the service interface modules and for the back-end data access modules. The system uses message passing to let its components communicate through an AMQP message broker, which can itself be distributed to provide better scalability and availability.

  9. High performance architecture design for large scale fibre-optic sensor arrays using distributed EDFAs and hybrid TDM/DWDM

    NASA Astrophysics Data System (ADS)

    Liao, Yi; Austin, Ed; Nash, Philip J.; Kingsley, Stuart A.; Richardson, David J.

    2013-09-01

    A distributed amplified dense wavelength division multiplexing (DWDM) array architecture is presented for interferometric fibre-optic sensor array systems. This architecture employs a distributed erbium-doped fibre amplifier (EDFA) scheme to decrease the array insertion loss, and employs time division multiplexing (TDM) at each wavelength to increase the number of sensors that can be supported. The first experimental demonstration of this system is reported, including results which show the potential for multiplexing and interrogating up to 4096 sensors using a single telemetry fibre pair with good system performance. This number can be increased to 8192 by using dual pump sources.
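The sensor count in a hybrid TDM/DWDM architecture is simply the product of the wavelength count and the TDM depth per wavelength; the abstract gives only the product (4096, doubling to 8192 with dual pumps), so the 64 × 64 split below is a hypothetical illustration:

```python
def array_capacity(n_wavelengths, tdm_depth, n_pump_sources=1):
    """Sensors supported by a hybrid TDM/DWDM array: DWDM
    wavelengths x TDM time slots per wavelength, scaled by the
    number of pump sources (the abstract reports a doubling from
    4096 to 8192 with dual pump sources)."""
    return n_wavelengths * tdm_depth * n_pump_sources

single_pump = array_capacity(64, 64)      # one hypothetical split giving 4096
dual_pump = array_capacity(64, 64, 2)     # doubled capacity, 8192
```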

  10. Three-dimensional fuel pin model validation by prediction of hydrogen distribution in cladding and comparison with experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aly, A.; Avramova, Maria; Ivanov, Kostadin

    To correctly describe and predict the hydrogen distribution in cladding, there is a need for multi-physics coupling to provide accurate three-dimensional azimuthal, radial, and axial temperature distributions in the cladding. Coupled high-fidelity reactor-physics codes with a sub-channel code, as well as with a computational fluid dynamics (CFD) tool, have been used to calculate detailed temperature distributions. These high-fidelity coupled neutronics/thermal-hydraulics code systems are coupled further with the fuel-performance BISON code with a kernel (module) for hydrogen. Both hydrogen migration and precipitation/dissolution are included in the model. Results from this multi-physics analysis are validated utilizing calculations of hydrogen distribution using models informed by data from hydrogen experiments and PIE data.

  11. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. This model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also imposes spatial and temporal regularization to improve prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on electrocardiogram (ECG) data from a distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms regularization models that are widely used in current practice, such as the Tikhonov zero-order, Tikhonov first-order, and L1 first-order regularization methods.
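The first baseline mentioned, Tikhonov zero-order regularization, has the closed form x = (AᵀA + λI)⁻¹Aᵀb for the penalized least-squares problem min‖Ax − b‖² + λ‖x‖². A minimal sketch on a toy ill-conditioned system (the matrix and noise level are hypothetical stand-ins for the body-surface-to-heart transfer problem):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov regularization: minimize
    ||A x - b||^2 + lam * ||x||^2, whose closed-form solution is
    x = (A^T A + lam I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-conditioned forward problem with noisy measurements:
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20)) @ np.diag(1.0 / (1.0 + np.arange(20.0)) ** 2)
x_true = rng.standard_normal(20)
b = A @ x_true + 1e-3 * rng.standard_normal(40)

x_ls = tikhonov(A, b, 0.0)    # unregularized least-squares baseline
x_reg = tikhonov(A, b, 1e-4)  # lam > 0 damps ill-conditioned components
```

Increasing λ monotonically shrinks ‖x‖, trading data fit for stability; the STRE method of the abstract extends this idea by penalizing spatial and temporal roughness rather than the plain norm.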

  12. Maydays and Murphies: A Study of the Effect of Organizational Design, Task, and Stress on Organizational Performance.

    ERIC Educational Resources Information Center

    Lin, Zhiang; Carley, Kathleen

    How should organizations of intelligent agents be designed so that they exhibit high performance even during periods of stress? A formal model of organizational performance given a distributed decision-making environment in which agents encounter a radar detection task is presented. Using this model the performance of organizations with various…

  13. In-plane structuring of proton exchange membrane fuel cell cathodes: Effect of ionomer equivalent weight structuring on performance and current density distribution

    NASA Astrophysics Data System (ADS)

    Herden, Susanne; Riewald, Felix; Hirschfeld, Julian A.; Perchthaler, Markus

    2017-07-01

    Within the active area of a fuel cell, inhomogeneous operating conditions occur; state-of-the-art electrodes, however, are homogeneous over the complete active area. This study uses current density distribution measurements to analyze which ionomer equivalent weight (EW) locally yields the highest current densities. With this information, a segmented cathode electrode is manufactured by decal transfer. The segmented electrode shows better performance, especially at high current densities, compared to homogeneous electrodes. Furthermore, this segmented catalyst coated membrane (CCM) performs optimally in both wet and dry conditions, operating regimes that both arise in automotive fuel cell applications. Thus, cathode electrodes with an optimized ionomer EW distribution might have a significant impact on future automotive fuel cell development.

  14. Performance optimization of apodized FBG-based temperature sensors in single and quasi-distributed DWDM systems with new and different apodization profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohammed, Nazmi A.; Ali, Taha A., E-mail: Taha25@gmail.com; Aly, Moustafa H.

    2013-12-15

    In this work, different FBG temperature sensors are designed and evaluated with various apodization profiles. Evaluation is done under a wide range of controlling design parameters like sensor length and refractive index modulation amplitude, targeting a remarkable temperature sensing performance. New judgment techniques are introduced, such as apodization window roll-off rate, asymptotic sidelobe (SL) decay level, number of SLs, and average SL level (SLav). Evaluation techniques like reflectivity, full width at half maximum (FWHM), and sidelobe suppression ratio (SLSR) are also used. A “New” apodization function is proposed, which achieves better performance like asymptotic decay of 18.4 dB/nm, high SLSR of 60 dB, high channel isolation of 57.9 dB, and narrow FWHM less than 0.15 nm. For a single accurate temperature sensor measurement in an extensively noisy environment, optimum results are obtained by the Nuttall apodization profile and the new apodization function, which have remarkable SLSR. For a quasi-distributed FBG temperature sensor, the Barthann and the new apodization profiles obtain optimum results. Barthann achieves a high asymptotic decay of 40 dB/nm, a narrow FWHM (less than 25 GHz), a very low SLav of −45.3 dB, high isolation of 44.6 dB, and a high SLSR of 35 dB. The new apodization function achieves narrow FWHM of 0.177 nm, very low SL of −60.1 dB, very low SLav of −63.6 dB, and very high SLSR of 57.7 dB. A study is performed on including an unapodized sensor among apodized sensors in a quasi-distributed sensing system. Finally, an isolation examination is performed on all the discussed apodizations, and a linear relation between temperature and the Bragg wavelength shift is observed experimentally and matched with the simulated results.

  15. Performance optimization of apodized FBG-based temperature sensors in single and quasi-distributed DWDM systems with new and different apodization profiles

    NASA Astrophysics Data System (ADS)

    Mohammed, Nazmi A.; Ali, Taha A.; Aly, Moustafa H.

    2013-12-01

    In this work, different FBG temperature sensors are designed and evaluated with various apodization profiles. Evaluation is done under a wide range of controlling design parameters like sensor length and refractive index modulation amplitude, targeting a remarkable temperature sensing performance. New judgment techniques are introduced, such as apodization window roll-off rate, asymptotic sidelobe (SL) decay level, number of SLs, and average SL level (SLav). Evaluation techniques like reflectivity, full width at half maximum (FWHM), and sidelobe suppression ratio (SLSR) are also used. A "New" apodization function is proposed, which achieves better performance like asymptotic decay of 18.4 dB/nm, high SLSR of 60 dB, high channel isolation of 57.9 dB, and narrow FWHM less than 0.15 nm. For a single accurate temperature sensor measurement in an extensively noisy environment, optimum results are obtained by the Nuttall apodization profile and the new apodization function, which have remarkable SLSR. For a quasi-distributed FBG temperature sensor, the Barthann and the new apodization profiles obtain optimum results. Barthann achieves a high asymptotic decay of 40 dB/nm, a narrow FWHM (less than 25 GHz), a very low SLav of -45.3 dB, high isolation of 44.6 dB, and a high SLSR of 35 dB. The new apodization function achieves narrow FWHM of 0.177 nm, very low SL of -60.1 dB, very low SLav of -63.6 dB, and very high SLSR of 57.7 dB. A study is performed on including an unapodized sensor among apodized sensors in a quasi-distributed sensing system. Finally, an isolation examination is performed on all the discussed apodizations, and a linear relation between temperature and the Bragg wavelength shift is observed experimentally and matched with the simulated results.
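In the weak-grating (Born) limit, an FBG's reflection spectrum approximates the Fourier transform of its apodization profile, so a window's sidelobe level maps directly onto the SL metrics used above. A sketch comparing an unapodized (uniform) profile with a Bartlett-Hann (Barthann) window (the sample count and zero-padding factor are illustrative, and this is a generic window comparison, not the paper's grating simulation):

```python
import numpy as np

def peak_sidelobe_db(window, pad=16):
    """Peak sidelobe level (dB relative to the main lobe) of a
    window's Fourier transform, found by walking down the main
    lobe to its first null and taking the maximum beyond it."""
    spectrum = np.abs(np.fft.rfft(window, pad * len(window)))
    db = 20.0 * np.log10(spectrum / spectrum.max() + 1e-12)
    i = 1
    while i < len(db) - 1 and db[i + 1] < db[i]:
        i += 1  # descend the main lobe to the first null
    return db[i:].max()

N = 101
x = np.arange(N) / (N - 1)
uniform = np.ones(N)                              # unapodized grating profile
barthann = (0.62 - 0.48 * np.abs(x - 0.5)
            + 0.38 * np.cos(2.0 * np.pi * (x - 0.5)))  # Bartlett-Hann window

unapodized_psl = peak_sidelobe_db(uniform)   # about -13.3 dB (sinc sidelobe)
barthann_psl = peak_sidelobe_db(barthann)    # far below the uniform case
```

The roughly 13 dB first sidelobe of the uniform profile is why the study examines whether an unapodized sensor can coexist with apodized ones in a quasi-distributed system.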

  16. Reduced Toxicity, High Performance Monopropellant at the U.S. Air Force Research Laboratory

    DTIC Science & Technology

    2010-04-27

    Efforts at the U.S. Air Force Research Laboratory (AFRL) to develop reduced toxicity monopropellant formulations to replace spacecraft hydrazine monopropellant are described. T.W. Hawkins. Approved for public release; distribution unlimited.

  17. The NATO III 5 MHz Distribution System

    NASA Technical Reports Server (NTRS)

    Vulcan, A.; Bloch, M.

    1981-01-01

    A high performance 5 MHz distribution system is described which has extremely low phase noise and jitter characteristics and provides multiple buffered outputs. The system is completely redundant, with automatic switchover, and is self-testing. Since the 5 MHz reference signals distributed by the NATO III distribution system are used for up-conversion and multiplicative functions, a high degree of phase stability and isolation between outputs is necessary. Unique circuit design and packaging concepts ensure that the isolation between outputs is sufficient to guarantee a phase perturbation of less than 0.0016 deg when other outputs are open circuited, short circuited, or terminated in 50 ohms. Circuit design techniques include high-isolation cascode amplifiers. Negative feedback stabilizes system gain and minimizes circuit phase noise contributions. Balanced lines, in lieu of single-ended coaxial transmission media, minimize pickup.

  18. NRL Fact Book 2010

    DTIC Science & Technology

    2010-01-01

    High assurance software; distributed network-based battle management; high performance computing supporting uniform and nonuniform memory access; VNIR, MWIR, and LWIR high-resolution systems; wideband SAR systems; RF and laser data links; high-speed, high-power photodetector characterization; indium antimonide (InSb) imaging system; long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system; research and development services.

  19. High-Performance Liquid Chromatography (HPLC) Measurements of Phytoplankton Pigment Distributions of Ocean Waters

    DTIC Science & Technology

    1988-11-01

    Keywords: coccolithophorids. Until the application of high-performance liquid chromatography (HPLC) to… phycocyanin has a maximum absorption peak. The spectra for the chlorophyll degradation products (chlorophyllides, phaeophorbides, and phaeophytins), which are not shown, have absorption maxima similar to those of their associated chlorophylls.

  20. Distributed Control of Turbofan Engines

    DTIC Science & Technology

    2009-08-01

    …performance of the engine. Thus the Full Authority Digital Engine Controller (FADEC) still remains the central arbiter of the engine's dynamic behavior… instance, if the control laws are not distributed, the dependence on the FADEC remains high, and system reliability can only be ensured through many… if distributed computing is used at the local level and only coordinated by the FADEC. Such an architecture must be studied in the context of noisy…

  1. Efficient abstract data type components for distributed and parallel systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bastani, F.; Hilal, W.; Iyengar, S.S.

    1987-10-01

    One way of improving a software system's comprehensibility and maintainability is to decompose it into several components, each of which encapsulates some information concerning the system. These components can be classified into four categories, namely, abstract data type, functional, interface, and control components. Such a classification underscores the need for different specification, implementation, and performance-improvement methods for different types of components. This article focuses on the development of high-performance abstract data type components for distributed and parallel environments.

  2. Incorporating population-level variation in thermal performance into predictions of geographic range shifts.

    PubMed

    Angert, Amy L; Sheth, Seema N; Paul, John R

    2011-11-01

    Determining how species' geographic ranges are governed by current climates and how they will respond to rapid climatic change poses a major biological challenge. Geographic ranges are often spatially fragmented and composed of genetically differentiated populations that are locally adapted to different thermal regimes. Tradeoffs between different aspects of thermal performance, such as between tolerance to high temperature and tolerance to low temperature or between maximal performance and breadth of performance, suggest that the performance of a given population will be a subset of that of the species. Therefore, species-level projections of distribution might overestimate the species' ability to persist at any given location. However, current approaches to modeling distributions often do not consider variation among populations. Here, we estimated genetically-based differences in thermal performance curves for growth among 12 populations of the scarlet monkeyflower, Mimulus cardinalis, a perennial herb of western North America. We inferred the maximum relative growth rate (RGR(max)), temperature optimum (T(opt)), and temperature breadth (T(breadth)) for each population. We used these data to test for tradeoffs in thermal performance, generate mechanistic population-level projections of distribution under current and future climates, and examine how variation in aspects of thermal performance influences forecasts of range shifts. Populations differed significantly in RGR(max) and had variable, but overlapping, estimates of T(opt) and T(breadth). T(opt) declined with latitude and increased with temperature of origin, consistent with tradeoffs between performances at low temperatures versus those at high temperatures. Further, T(breadth) was negatively related to RGR(max), as expected for a specialist-generalist tradeoff. Parameters of the thermal performance curve influenced properties of projected distributions. 
For both current and future climates, T(opt) was negatively related to latitudinal position, while T(breadth) was positively related to projected range size. The magnitude and direction of range shifts also varied with T(opt) and T(breadth), but sometimes in unexpected ways. For example, the fraction of habitat remaining suitable increased with T(opt) but decreased with T(breadth). Northern limits of all populations were projected to shift north, but the magnitude of shift decreased with T(opt) and increased with T(breadth). Median latitude was projected to shift north for populations with high T(breadth) and low T(opt), but south for populations with low T(breadth) and high T(opt). Distributions inferred by integrating population-level projections did not differ from a species-level projection that ignored variation among populations. However, the species-level approach masked the potential array of divergent responses by populations that might lead to genotypic sorting within the species' range. Thermal performance tradeoffs among populations within the species' range had important, but sometimes counterintuitive, effects on projected responses to climatic change. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.
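The three curve parameters estimated per population can be put into a simple Gaussian thermal performance curve and thresholded to decide site suitability (a common parameterization used here as a sketch; the study fit its own curve shapes, and the population values below are hypothetical):

```python
import math

def rgr(T, rgr_max, T_opt, T_breadth):
    """Gaussian thermal performance curve: relative growth rate
    peaks at rgr_max when T == T_opt and falls off with breadth
    T_breadth."""
    return rgr_max * math.exp(-(((T - T_opt) / T_breadth) ** 2))

def suitable(T, rgr_max, T_opt, T_breadth, threshold=0.05):
    """A site is 'suitable' when predicted growth clears a
    threshold: the basic rule behind a mechanistic projection
    of distribution."""
    return rgr(T, rgr_max, T_opt, T_breadth) >= threshold

# Hypothetical warm-adapted generalist vs cool-adapted specialist,
# illustrating the specialist-generalist tradeoff (higher peak,
# narrower breadth):
south = dict(rgr_max=0.10, T_opt=25.0, T_breadth=6.0)
north = dict(rgr_max=0.14, T_opt=18.0, T_breadth=4.0)
warm_site_favors_south = rgr(28.0, **south) > rgr(28.0, **north)
```

Applying `suitable` per population across a climate grid, under current and projected temperatures, is the population-level projection strategy the abstract contrasts with a single species-level curve.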

  3. Enhanced response and sensitivity of self-corrugated graphene sensors with anisotropic charge distribution

    PubMed Central

    Yol Jeong, Seung; Jeong, Sooyeon; Won Lee, Sang; Tae Kim, Sung; Kim, Daeho; Jin Jeong, Hee; Tark Han, Joong; Baeg, Kang-Jun; Yang, Sunhye; Seok Jeong, Mun; Lee, Geon-Woong

    2015-01-01

    We introduce a high-performance molecular sensor based on self-corrugated, chemically modified graphene in a three-dimensional (3D) structure that exhibits anisotropic charge distribution. The sensor is capable of room-temperature operation and, in particular, exhibits high sensitivity and a fast, reversible response with an equilibrium region. The morphology consists of periodic “cratered” arrays that can be formed by condensation and evaporation of a graphene oxide (GO) solution on interdigitated electrodes. After subsequent hydrazine reduction, the corrugated edge areas of the graphene layers have a higher electric potential than flat graphene films. This local accumulation of electrons interacts with a large number of gas molecules. The sensitivity of the 3D-graphene sensors increases significantly in an NO2 atmosphere. These intriguing structures offer several advantages, including straightforward fabrication on patterned substrates and high-performance graphene sensing without a post-annealing process. PMID:26053892

  4. Multidisciplinary High-Fidelity Analysis and Optimization of Aerospace Vehicles. Part 1; Formulation

    NASA Technical Reports Server (NTRS)

    Walsh, J. L.; Townsend, J. C.; Salas, A. O.; Samareh, J. A.; Mukhopadhyay, V.; Barthelemy, J.-F.

    2000-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite element structural analysis and computational fluid dynamics aerodynamic analysis in a distributed, heterogeneous computing environment that includes high performance parallel computing. A software system has been designed and implemented to integrate a set of existing discipline analysis codes, some of them computationally intensive, into a distributed computational environment for the design of a highspeed civil transport configuration. The paper describes the engineering aspects of formulating the optimization by integrating these analysis codes and associated interface codes into the system. The discipline codes are integrated by using the Java programming language and a Common Object Request Broker Architecture (CORBA) compliant software product. A companion paper presents currently available results.

  5. Distributed Neural Processing Predictors of Multi-dimensional Properties of Affect

    PubMed Central

    Bush, Keith A.; Inman, Cory S.; Hamann, Stephan; Kilts, Clinton D.; James, G. Andrew

    2017-01-01

    Recent evidence suggests that emotions have a distributed neural representation, which has significant implications for our understanding of the mechanisms underlying emotion regulation and dysregulation as well as the potential targets available for neuromodulation-based emotion therapeutics. This work adds to this evidence by testing the distribution of neural representations underlying the affective dimensions of valence and arousal using representational models that vary in both the degree and the nature of their distribution. We used multi-voxel pattern classification (MVPC) to identify whole-brain patterns of functional magnetic resonance imaging (fMRI)-derived neural activations that reliably predicted dimensional properties of affect (valence and arousal) for visual stimuli viewed by a normative sample (n = 32) of demographically diverse, healthy adults. Inter-subject leave-one-out cross-validation showed whole-brain MVPC significantly predicted (p < 0.001) binarized normative ratings of valence (positive vs. negative, 59% accuracy) and arousal (high vs. low, 56% accuracy). We also conducted group-level univariate general linear modeling (GLM) analyses to identify brain regions whose response significantly differed for the contrasts of positive versus negative valence or high versus low arousal. Multivoxel pattern classifiers using voxels drawn from all identified regions of interest (all-ROIs) exhibited mixed performance; arousal was predicted significantly better than chance but worse than the whole-brain classifier, whereas valence was not predicted significantly better than chance. Multivoxel classifiers derived using individual ROIs generally performed no better than chance. Although performance of the all-ROI classifier improved with larger ROIs (generated by relaxing the clustering threshold), performance was still poorer than the whole-brain classifier. 
These findings support a highly distributed model of neural processing for the affective dimensions of valence and arousal. Finally, joint error analyses of the MVPC hyperplanes encoding valence and arousal identified regions within the dimensional affect space where multivoxel classifiers exhibited the greatest difficulty encoding brain states – specifically, stimuli of moderate arousal and high or low valence. In conclusion, we highlight new directions for characterizing affective processing for mechanistic and therapeutic applications in affective neuroscience. PMID:28959198
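
    As a toy illustration of the inter-subject leave-one-out scheme described above, the sketch below classifies synthetic "voxel" patterns with a nearest-centroid rule. All data, dimensions, and the classifier are invented stand-ins; the study itself uses whole-brain fMRI multi-voxel pattern classifiers.

```python
import numpy as np

# Synthetic stand-in for fMRI data: each subject contributes trials whose
# "voxel" pattern carries a shared +/- signature for the binary label.
rng = np.random.default_rng(0)
n_subjects, n_trials, n_voxels = 8, 20, 50
signal = rng.normal(0.0, 1.0, n_voxels)          # shared class signature

X, y, subject = [], [], []
for s in range(n_subjects):
    labels = rng.integers(0, 2, n_trials)        # 0 = negative, 1 = positive
    trials = rng.normal(0.0, 2.0, (n_trials, n_voxels))
    trials += np.outer(2 * labels - 1, signal)   # add +/- signature
    X.append(trials)
    y.append(labels)
    subject += [s] * n_trials

X, y, subject = np.vstack(X), np.concatenate(y), np.array(subject)

# Leave one subject out: fit class centroids on the remaining subjects,
# then classify every trial of the held-out subject.
correct = 0
for s in range(n_subjects):
    train, test = subject != s, subject == s
    c0 = X[train & (y == 0)].mean(axis=0)
    c1 = X[train & (y == 1)].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    correct += (pred == y[test]).sum()
accuracy = correct / len(y)
```

    The held-out subject never contributes to the centroids, which is what makes the accuracy an inter-subject generalization estimate rather than a within-subject fit.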

  6. Performance Studies on Distributed Virtual Screening

    PubMed Central

    Krüger, Jens; de la Garza, Luis; Kohlbacher, Oliver; Nagel, Wolfgang E.

    2014-01-01

    Virtual high-throughput screening (vHTS) is an invaluable method in modern drug discovery. It permits screening large datasets or databases of chemical structures for structures that may bind to a drug target. Virtual screening is typically performed by docking codes, which often run sequentially. Processing of huge vHTS datasets can be parallelized by chunking the data, because individual docking runs are independent of each other. The goal of this work is to find an optimal splitting that maximizes the speedup while considering overhead and available cores on Distributed Computing Infrastructures (DCIs). We have conducted thorough performance studies accounting not only for the runtime of the docking itself but also for structure preparation. The performance studies were conducted via the workflow-enabled science gateway MoSGrid (Molecular Simulation Grid). As input we used benchmark datasets for protein kinases. Our performance studies show that docking workflows can be made to scale almost linearly up to 500 concurrent processes distributed even over large DCIs, thus accelerating vHTS campaigns significantly. PMID:25032219
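
    The splitting tradeoff described in the abstract (per-chunk overhead versus available concurrent cores) can be sketched with a toy cost model. The ligand counts, docking times, and overheads below are hypothetical, not MoSGrid measurements.

```python
# Toy makespan model: a campaign of n_ligands dockings is split into
# n_chunks chunks; each chunk pays a fixed scheduling/staging overhead,
# and chunks run in waves over the available concurrent cores.
def campaign_time(n_ligands, t_dock, n_chunks, overhead, cores):
    per_chunk = (n_ligands / n_chunks) * t_dock + overhead
    waves = -(-n_chunks // cores)            # ceil division: execution rounds
    return waves * per_chunk

def best_chunk_count(n_ligands, t_dock, overhead, cores, max_chunks=2000):
    return min(range(1, max_chunks + 1),
               key=lambda n: campaign_time(n_ligands, t_dock, n,
                                           overhead, cores))

# Hypothetical campaign: 100k ligands at 2 s each, 60 s chunk overhead,
# 500 concurrent slots (matching the scaling limit reported above).
serial = 100_000 * 2.0
n = best_chunk_count(100_000, 2.0, overhead=60.0, cores=500)
speedup = serial / campaign_time(100_000, 2.0, n, 60.0, 500)
```

    In this model the optimum lands at one chunk per core: fewer chunks waste cores, more chunks force a second wave and pay the overhead twice.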

  7. An efficient distribution method for nonlinear transport problems in highly heterogeneous stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, Fayadhoi; Meyer, Daniel; Tchelepi, Hamdi

    2016-04-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are crucial for exploring possible scenarios and assessing risks in subsurface problems. In particular, nonlinear two-phase flows in porous media are essential, yet challenging, in reservoir simulation and hydrology. Adding highly heterogeneous and uncertain input, such as the permeability and porosity fields, turns the estimation of the flow response into a difficult stochastic problem for which computationally expensive Monte Carlo (MC) simulations remain the preferred option. We propose an alternative approach to evaluate the probability distribution of the (water) saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the (water) saturation. The distribution method draws inspiration from a Lagrangian approach to the stochastic transport problem and expresses the saturation PDF and CDF essentially in terms of a deterministic mapping and the distribution and statistics of scalar random fields. In a large class of applications these random fields can be estimated at low computational cost (a few MC runs), making the distribution method attractive. Even though the method relies on a key assumption of fixed streamlines, we show that it performs well for high input variances, which is the case of interest. Once the saturation distribution is determined, any one-point statistics thereof can be obtained, especially the saturation average and standard deviation. Moreover, the probability of rare events and saturation quantiles (e.g. P10, P50 and P90) can be efficiently derived from the distribution method. 
These statistics can then be used for risk assessment, as well as data assimilation and uncertainty reduction in the prior knowledge of input distributions. We provide various examples and comparisons with MC simulations to illustrate the performance of the method.
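
    The core idea of a distribution method (push a known input distribution through a deterministic monotone mapping to get the response CDF without sampling, then check against Monte Carlo) can be sketched in miniature. The mapping g below is an invented stand-in, not the Buckley-Leverett solution.

```python
import numpy as np
from math import erf, sqrt, log

# If the saturation at a fixed (x, t) were a monotone deterministic mapping
# s = g(k) of a lognormal permeability k, its CDF would follow directly:
#   F_S(s) = P(g(k) <= s) = F_K(g_inv(s))   for increasing g.
mu, sigma = 0.0, 0.5                      # hypothetical log-permeability params

def g(k):                                 # invented monotone response
    return k / (1.0 + k)

def g_inv(s):
    return s / (1.0 - s)

def F_K(k):                               # lognormal CDF
    return 0.5 * (1.0 + erf((log(k) - mu) / (sigma * sqrt(2.0))))

def F_S(s):                               # distribution-method CDF, no sampling
    return F_K(g_inv(s))

# Monte Carlo estimate of the same CDF value, for comparison
rng = np.random.default_rng(0)
k_samples = rng.lognormal(mu, sigma, 200_000)
mc = np.mean(g(k_samples) <= 0.5)
exact = F_S(0.5)
```

    The mapping-based CDF costs one function evaluation per query point, while the MC estimate needs many samples to reach comparable accuracy; this is the efficiency argument the abstract makes.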

  8. Morphology and Doping Engineering of Sn-Doped Hematite Nanowire Photoanodes.

    PubMed

    Li, Mingyang; Yang, Yi; Ling, Yichuan; Qiu, Weitao; Wang, Fuxin; Liu, Tianyu; Song, Yu; Liu, Xiaoxia; Fang, Pingping; Tong, Yexiang; Li, Yat

    2017-04-12

    High-temperature activation has been commonly used to boost the photoelectrochemical (PEC) performance of hematite nanowires for water oxidation by inducing Sn diffusion from the fluorine-doped tin oxide (FTO) substrate into hematite. Yet hematite nanowires thermally annealed at high temperature suffer from two major drawbacks that negatively affect their performance. First, the structural deformation reduces the light absorption capability of the nanowires. Second, this "passive" doping method leads to a nonuniform distribution of the Sn dopant in the nanowires and limits the Sn doping concentration. Both factors impair the electrochemical properties of the hematite nanowires. Here we demonstrate a silica encapsulation method that is able to simultaneously retain the hematite nanowire morphology, even after high-temperature calcination at 800 °C, and improve the concentration and uniformity of the dopant distribution along the nanowire growth axis. The capability of retaining the nanowire morphology allows tuning the nanowire length for optimal light absorption. Uniform Sn doping enhances the donor density and charge transport of the hematite nanowires. The morphology- and doping-engineered hematite nanowire photoanode, decorated with a cobalt oxide-based oxygen evolution reaction (OER) catalyst, achieves an outstanding photocurrent density of 2.2 mA cm⁻² at 0.23 V vs Ag/AgCl. This work provides important insights into how the morphology and doping uniformity of hematite photoanodes affect their PEC performance.

  9. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    PubMed

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

    Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with spinosum microstructure of random distribution via the combination of an abrasive paper template and reduced graphene oxide. The sensitivity of the graphene pressure sensor with random distribution spinosum (RDS) microstructure is as high as 25.1 kPa -1 in a wide linearity range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and random distribution contribute to the high sensitivity and large linearity range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions of a pushup, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  10. Architecture and Programming Models for High Performance Intensive Computation

    DTIC Science & Technology

    2016-06-29

    Applications Systems and Large-Scale-Big-Data & Large-Scale-Big-Computing (DDDAS-LS). ICCS 2015, June 2015. Reykjavík, Iceland. 2. Bo YT, Wang P, Guo ZL... “The Mahali project,” Communications Magazine, vol. 52, pp. 111–133, Aug 2014. DISTRIBUTION A: Distribution approved for public release.

  11. Tree canopy types constrain plant distributions in ponderosa pine-Gambel oak forests, northern Arizona

    Treesearch

    Scott R. Abella

    2009-01-01

    Trees in many forests affect the soils and plants below their canopies. In current high-density southwestern ponderosa pine (Pinus ponderosa) forests, managers have opportunities to enhance multiple ecosystem values by manipulating tree density, distribution, and canopy cover through tree thinning. I performed a study in northern Arizona ponderosa...

  12. High performance flexible heat pipes

    NASA Technical Reports Server (NTRS)

    Shaubach, R. M.; Gernert, N. J.

    1985-01-01

    A Phase I SBIR NASA program for developing and demonstrating high-performance flexible heat pipes for use in the thermal management of spacecraft is examined. The program combines several technologies, such as flexible screen arteries and high-performance circumferential distribution wicks, within an envelope that is flexible in the adiabatic heat transport zone. The first six months of work, during which the Phase I contract goals were met, are described. Consideration is given to the heat-pipe performance requirements. A preliminary evaluation shows that the power requirement for Phase II of the program is 30.5 kilowatt-meters at an operating temperature from 0 to 100 °C.

  13. Studying the Impact of Distributed Solar PV on Power Systems using Integrated Transmission and Distribution Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Himanshu; Palmintier, Bryan S; Krad, Ibrahim

    This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment, in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.

  14. Integrating security in a group oriented distributed system

    NASA Technical Reports Server (NTRS)

    Reiter, Michael; Birman, Kenneth; Gong, LI

    1992-01-01

    A distributed security architecture is proposed for incorporation into group oriented distributed systems, and in particular, into the Isis distributed programming toolkit. The primary goal of the architecture is to make common group oriented abstractions robust in hostile settings, in order to facilitate the construction of high performance distributed applications that can tolerate both component failures and malicious attacks. These abstractions include process groups and causal group multicast. Moreover, a delegation and access control scheme is proposed for use in group oriented systems. The focus is the security architecture; particular cryptosystems and key exchange protocols are not emphasized.

  15. Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems

    PubMed Central

    Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in the monostatic case. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With the mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions, including oscillator, phase-locked loop, and receiver noise, are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828

  16. A short-term and high-resolution distribution system load forecasting approach using support vector regression with hybrid parameters optimization

    DOE PAGES

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard; ...

    2016-01-01

    This paper proposes an approach for distribution system load forecasting, which aims to provide highly accurate short-term load forecasting with high resolution utilizing a support vector regression (SVR) based forecaster and a two-step hybrid parameter optimization method. Specifically, because the load profiles in distribution systems contain abrupt deviations, a data normalization is designed as the pretreatment for the collected historical load data. Then an SVR model is trained on the load data to forecast the future load. For better SVR performance, a two-step hybrid optimization algorithm is proposed to determine the best parameters. In the first step of the hybrid optimization algorithm, a designed grid traverse algorithm (GTA) is used to narrow the parameter search area from a global to a local space. In the second step, based on the result of the GTA, particle swarm optimization (PSO) is used to determine the best parameters in the local parameter space. After the best parameters are determined, the SVR model is used to forecast the short-term load deviation in the distribution system. The performance of the proposed approach is compared to some classic methods in later sections of the paper.
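
    A minimal sketch of the two-step search, assuming a synthetic objective in place of the actual SVR cross-validation error: a coarse grid first narrows the (C, gamma)-like parameter space, then a small particle swarm refines within the narrowed region. The objective's optimum at (3.0, -2.0) is invented to keep the example self-contained.

```python
import random

def objective(c, g):
    # Stand-in for SVR validation error over two parameters.
    return (c - 3.0) ** 2 + (g + 2.0) ** 2

def grid_traverse(lo, hi, steps=9):
    """Step 1 (GTA-like): coarse grid locates a promising local region."""
    pts = [(lo[0] + i * (hi[0] - lo[0]) / steps,
            lo[1] + j * (hi[1] - lo[1]) / steps)
           for i in range(steps + 1) for j in range(steps + 1)]
    best = min(pts, key=lambda p: objective(*p))
    span = ((hi[0] - lo[0]) / steps, (hi[1] - lo[1]) / steps)
    return ((best[0] - span[0], best[1] - span[1]),
            (best[0] + span[0], best[1] + span[1]))

def pso(lo, hi, n=20, iters=60, seed=1):
    """Step 2: particle swarm refinement inside the narrowed region."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(2)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda p: objective(*p))[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(*pos[i]) < objective(*pbest[i]):
                pbest[i] = pos[i][:]
                if objective(*pos[i]) < objective(*gbest):
                    gbest = pos[i][:]
    return gbest

region_lo, region_hi = grid_traverse((-10.0, -10.0), (10.0, 10.0))
c_best, g_best = pso(region_lo, region_hi)
```

    The grid keeps the PSO from having to explore the whole global space, which is the division of labor the abstract describes.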

  17. Development of Metal Oxide Nanostructure-based Optical Sensors for Fossil Fuel Derived Gases Measurement at High Temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Kevin P.

    2015-02-13

    This final technical report details research performed with the support of a Department of Energy grant (DE-FE0003859), awarded under the University Coal Research Program administered by the National Energy Technology Laboratory. This research program studied high-temperature fiber sensors for harsh-environment applications. It developed two fiber optical sensor platform technologies: regenerative fiber Bragg grating sensors and distributed fiber optical sensing based on Rayleigh-backscattering optical frequency domain reflectometry. Through studies of chemical and thermal regenerative techniques for fiber Bragg grating (FBG) fabrication, high-temperature-stable FBG sensors were successfully developed and fabricated in air-hole microstructured fibers, high-attenuation fibers, rare-earth-doped fibers, and standard telecommunication fibers. By optimizing the laser processing and thermal annealing procedures, fiber grating sensors with stable performance up to 1100°C have been developed. Using these temperature-stable FBG gratings as a sensor platform, fiber optical flow, temperature, pressure, and chemical sensors have been developed to operate at high temperatures up to 800°C. Through the integration of on-fiber functional coatings, the use of application-specific air-hole microstructural fiber, and the application of an active fiber sensing scheme, distributed fiber sensors for temperature, pressure, flow, liquid level, and chemical sensing have been demonstrated with high spatial resolution (1 cm or better) over wide temperature ranges. These include demonstrations of liquid-level sensing from 77 K to room temperature, pressure/temperature sensing from room temperature to 800°C and from 15 psi to 2000 psi, and hydrogen concentration measurement from 0.2% to 10% at temperatures from room temperature to 700°C. 
Optical sensors developed by this program have broken several technical records, including flow sensors with the highest operating temperature (up to 750°C), the first distributed chemical measurements at a record high temperature (up to 700°C), the first distributed pressure measurement at a record high temperature (up to 800°C), and fiber laser sensors with a record high operating temperature (up to 700°C). The research performed by this program dramatically expands the functionality, adaptability, and applicability of distributed fiber optical sensors, with potential applications in a number of high-temperature energy systems such as fossil-fuel power generation, high-temperature fuel cells, and nuclear energy systems.

  18. Estimating the mean and standard deviation of environmental data with below detection limit observations: Considering highly skewed data and model misspecification.

    PubMed

    Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin

    2015-11-01

    In environmental studies, concentration measurements frequently fall below detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistic (rROS), and gamma regression on order statistic (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low skewed data, the performance of different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable; particularly, when the sample size is small or censoring percentage is high. In such conditions, MLE under gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Related to model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of data is misspecified. However, the methods of rROS, GROS, and MLE under gamma distribution are generally robust to model misspecifications regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on gamma distribution, rROS and GROS. Copyright © 2015 Elsevier Ltd. All rights reserved.
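
    The left-censored maximum-likelihood idea (detected values contribute the density; below-limit values contribute the CDF evaluated at the detection limit) can be sketched for a lognormal. The simulated data, detection limit, and the coarse-to-fine grid optimizer below are illustrative stand-ins, not the paper's simulation setup.

```python
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_lik(mu, sigma, detected, n_censored, det_limit):
    """Censored lognormal log-likelihood: density for detected values,
    CDF at the detection limit for each censored observation."""
    if sigma <= 0.0:
        return float("-inf")
    p_cens = norm_cdf((math.log(det_limit) - mu) / sigma)
    ll = n_censored * math.log(max(p_cens, 1e-300))
    for x in detected:
        z = (math.log(x) - mu) / sigma
        ll += -math.log(x * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    return ll

# Simulate left-censored lognormal concentrations (invented parameters)
rng = random.Random(42)
true_mu, true_sigma, limit = 1.0, 0.8, 1.5
data = [math.exp(rng.gauss(true_mu, true_sigma)) for _ in range(2000)]
detected = [x for x in data if x >= limit]
n_cens = len(data) - len(detected)

# Coarse-to-fine grid maximization (a stand-in for a proper optimizer)
mu_hat, sig_hat, step = 0.0, 1.0, 0.5
for _ in range(8):
    cands = [(mu_hat + i * step, sig_hat + j * step)
             for i in range(-2, 3) for j in range(-2, 3)]
    mu_hat, sig_hat = max(
        cands, key=lambda p: log_lik(p[0], p[1], detected, n_cens, limit))
    step /= 2.0
```

    Treating censored points this way uses the information that they fell below the limit, rather than substituting an arbitrary value such as half the detection limit.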

  19. Variability Extraction and Synthesis via Multi-Resolution Analysis using Distribution Transformer High-Speed Power Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chamana, Manohar; Mather, Barry A

    A library of load variability classes is created to produce scalable synthetic data sets using historical high-speed raw data. These data are collected from distribution monitoring units connected at the secondary side of a distribution transformer. Because of the irregular patterns and large volume of the historical high-speed data sets, utilization of current load characterization and modeling techniques is challenging. Multi-resolution analysis techniques are applied to extract the necessary components, and eliminate the unnecessary ones, from the historical high-speed raw data to create the library of classes, which is then used to create new synthetic load data sets. A validation is performed to ensure that the synthesized data sets contain the same variability characteristics as the training data sets. The synthesized data sets are intended to be used in quasi-static time-series studies for distribution system planning on a granular scale, such as detailed PV interconnection studies.
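
    A one-level Haar transform can stand in for the multi-resolution analysis described above: it separates a load trace into a smooth approximation and detail (variability) coefficients, and the details can be resampled to synthesize new traces with the same variability content. The trace below is synthetic; the real study works on measured distribution-transformer data.

```python
import random

def haar_decompose(x):
    """One-level Haar MRA: pairwise averages (smooth) and half-differences
    (detail/variability). Assumes an even-length sequence."""
    approx = [(a + b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / 2.0 for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def haar_reconstruct(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])    # exact inverse of the decomposition
    return out

rng = random.Random(7)
load = [100.0 + 0.1 * t + rng.gauss(0.0, 2.0) for t in range(512)]  # kW trace

approx, detail = haar_decompose(load)
exact = haar_reconstruct(approx, detail)     # perfect reconstruction check

# Synthesize: keep the smooth trend, resample the extracted variability
shuffled = detail[:]
rng.shuffle(shuffled)
synthetic = haar_reconstruct(approx, shuffled)
```

    Because the reconstruction is exact, any statistics of the detail coefficients (the "variability class") carry over unchanged into the synthesized trace.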

  20. Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions

    DOE PAGES

    McLerran, Larry; Tribedy, Prithwish

    2015-11-02

    High-multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed, in addition to the sub-nucleonic color charge fluctuations, to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimates. Furthermore, we find that fluctuations as large as O(1) of the average value of the saturation momentum scale can lead to the rare high-multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions, we constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.

  1. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences are identified of incorrectly assuming a particular statistical distribution for stress or strength data used in obtaining the high reliability values. The computation of the reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
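
    The stress-strength computation itself is standard: for independent normal stress and strength, R = P(strength > stress) has the closed form Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2)), which a Monte Carlo estimate should reproduce. The means and standard deviations below are hypothetical, not from the report's data.

```python
import math
import random

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical component: normal strength and normal applied stress
mu_strength, sd_strength = 120.0, 8.0     # e.g. MPa (invented values)
mu_stress, sd_stress = 90.0, 10.0

# Closed form: strength - stress is normal, so R = P(difference > 0)
closed_form = phi((mu_strength - mu_stress) /
                  math.hypot(sd_strength, sd_stress))

# Monte Carlo check of the same reliability
rng = random.Random(0)
n = 200_000
hits = sum(rng.gauss(mu_strength, sd_strength) > rng.gauss(mu_stress, sd_stress)
           for _ in range(n))
mc = hits / n
```

    The abstract's warning applies to the tails: two strength distributions that are nearly indistinguishable in the bulk can put very different mass below the stress range, so the closed form above is only as trustworthy as the normality assumption behind it.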

  2. Diagnosing Expertise: Human Capital, Decision Making, and Performance among Physicians

    PubMed Central

    Currie, Janet; MacLeod, W. Bentley

    2017-01-01

    Expert performance is often evaluated assuming that good experts have good outcomes. We examine expertise in medicine and develop a model that allows for two dimensions of physician performance: decision making and procedural skill. Better procedural skill increases the use of intensive procedures for everyone, while better decision making results in a reallocation of procedures from low-risk to high-risk cases. We show that poor diagnosticians can be identified using administrative data and that improving decision making improves birth outcomes by reducing C-section rates at the bottom of the risk distribution and increasing them at the top of the distribution. PMID:29276336

  3. UHPC for Blast and Ballistic Protection, Explosion Testing and Composition Optimization

    NASA Astrophysics Data System (ADS)

    Bibora, P.; Drdlová, M.; Prachař, V.; Sviták, O.

    2017-10-01

    The realization of high performance concrete resistant to detonation is the aim and expected outcome of the presented project, which is oriented toward the development of construction materials for larger objects such as protective walls and bunkers. The use of high-strength concrete (HSC/HPC, "high strength/performance concrete") and ultra-high-performance fiber-reinforced concrete (UHPC/UHPFC) appears optimal for this purpose. The paper describes the research phase of the project, in which we focused on the selection of specific raw materials and chemical additives, including determining the most suitable type and amount of distributed fiber reinforcement. The composition of the UHPC was optimized during laboratory manufacture of test specimens to obtain the best desired physical-mechanical properties of the developed high performance concretes. In connection with the laboratory testing, explosion field tests of UHPC specimens were performed, and the explosion resistance of laboratory-produced UHPC test boards was investigated.

  4. The Processing and Mechanical Properties of High Temperature/High Performance Composites. Book 5. Processing and Miscellaneous Properties

    DTIC Science & Technology

    1993-04-01

    tensile fiber stress of 150-300 MPa, too little compared to measured fiber strengths of 3-4 GPa. A final possibility is that of nonuniform inelastic...flow of the matrix as a result of a spatially nonuniform distribution of porosity; this leads to a nonuniform distribution of forces along the fiber...the damage with the specific mechanism being fiber bending. The effects due to nonuniform inelastic flow (i.e., fiber bending) can be thought to occur

  5. High sensitivity optical molecular imaging system

    NASA Astrophysics Data System (ADS)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost, and ease of use. By labeling regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics, and other biological studies. In preclinical and clinical application, imaging depth, resolution, and sensitivity are the key factors for researchers using OMI. In this paper, we report a high-sensitivity optical molecular imaging system developed by our group, which improves the imaging depth in phantoms to nearly 5 cm, with high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were implemented. The results show that, combined with a high-performance electron-multiplying charge-coupled device (EMCCD) camera, a precisely designed light-path system, and highly efficient imaging techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence, and other optical imaging modalities, and observe the internal distribution of light-emitting agents quickly and accurately.

  6. Advanced sensors and instrumentation

    NASA Technical Reports Server (NTRS)

    Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty

    1990-01-01

    NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.

  7. Hybrid approach combining multiple characterization techniques and simulations for microstructural analysis of proton exchange membrane fuel cell electrodes

    NASA Astrophysics Data System (ADS)

    Cetinbas, Firat C.; Ahluwalia, Rajesh K.; Kariuki, Nancy; De Andrade, Vincent; Fongalland, Dash; Smith, Linda; Sharman, Jonathan; Ferreira, Paulo; Rasouli, Somaye; Myers, Deborah J.

    2017-03-01

    The cost and performance of proton exchange membrane fuel cells strongly depend on the cathode electrode due to usage of expensive platinum (Pt) group metal catalyst and sluggish reaction kinetics. Development of low Pt content high performance cathodes requires comprehensive understanding of the electrode microstructure. In this study, a new approach is presented to characterize the detailed cathode electrode microstructure from nm to μm length scales by combining information from different experimental techniques. In this context, nano-scale X-ray computed tomography (nano-CT) is performed to extract the secondary pore space of the electrode. Transmission electron microscopy (TEM) is employed to determine primary C particle and Pt particle size distributions. X-ray scattering, with its ability to provide size distributions of orders of magnitude more particles than TEM, is used to confirm the TEM-determined size distributions. The number of primary pores that cannot be resolved by nano-CT is approximated using mercury intrusion porosimetry. An algorithm is developed to incorporate all these experimental data in one geometric representation. Upon validation of pore size distribution against gas adsorption and mercury intrusion porosimetry data, reconstructed ionomer size distribution is reported. In addition, transport related characteristics and effective properties are computed by performing simulations on the hybrid microstructure.

  8. Playable Serious Games for Studying and Programming Computational STEM and Informatics Applications of Distributed and Parallel Computer Architectures

    ERIC Educational Resources Information Center

    Amenyo, John-Thones

    2012-01-01

    Carefully engineered playable games can serve as vehicles for students and practitioners to learn and explore the programming of advanced computer architectures to execute applications, such as high performance computing (HPC) and complex, inter-networked, distributed systems. The article presents families of playable games that are grounded in…

  9. Joint Force Pre-Deployment Training: An Initial Analysis and Product Definition (Strategic Mobility 21: IT Planning Document for APS Demonstration Document (Task 3.7)

    DTIC Science & Technology

    2010-04-13

    Office of Naval Research. DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. a. This statement may be used only on ... documents resulting from contracted fundamental research efforts will normally be assigned Distribution Statement A, except for those rare and exceptional ... circumstances where there is a high likelihood of disclosing performance characteristics of military systems, or of manufacturing technologies that

  10. Autonomous Control Modes and Optimized Path Guidance for Shipboard Landing in High Sea States

    DTIC Science & Technology

    2016-01-29

    Research in Sea-Based Aviation ONR #BAA12-SN-028 CDRL A001 DISTRIBUTION STATEMENT A: Distribution Approved for public release; distribution ... is performed under the Office of Naval Research program on Basic and Applied Research in Sea-Based Aviation (ONR BAA12-SN-0028). This project ... addresses the Sea Based Aviation (SBA) initiative in Advanced Handling Qualities for Rotorcraft. Landing a rotorcraft on a moving ship deck and under the

  11. A distributed parallel storage architecture and its potential application within EOSDIS

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Tierney, Brian; Feuquay, Jay; Butzer, Tony

    1994-01-01

    We describe the architecture, implementation, and use of a scalable, high performance, distributed-parallel data storage system developed in the ARPA-funded MAGIC gigabit testbed. A collection of wide area distributed disk servers operates in parallel to provide logical block-level access to large data sets. Operated primarily as a network-based cache, the architecture supports cooperation among independently owned resources to provide fast, large-scale, on-demand storage to support data handling, simulation, and computation.
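
    A minimal sketch of the kind of block-to-server striping such a system could use (class and server names are hypothetical, not from the paper):

    ```python
    # Hypothetical sketch: round-robin striping of logical blocks across
    # distributed disk servers, as in a network-based block-level cache.
    from dataclasses import dataclass, field

    @dataclass
    class DiskServer:
        name: str
        blocks: dict = field(default_factory=dict)  # block_id -> bytes

    class StripedStore:
        def __init__(self, servers):
            self.servers = servers

        def _server_for(self, block_id: int) -> DiskServer:
            # Round-robin placement: consecutive blocks land on different
            # servers, so large sequential reads proceed in parallel.
            return self.servers[block_id % len(self.servers)]

        def write_block(self, block_id: int, data: bytes) -> None:
            self._server_for(block_id).blocks[block_id] = data

        def read_block(self, block_id: int) -> bytes:
            return self._server_for(block_id).blocks[block_id]

    store = StripedStore([DiskServer(f"srv{i}") for i in range(4)])
    for b in range(8):
        store.write_block(b, bytes([b]))
    # Blocks 0 and 4 share a server; blocks 0 and 1 do not.
    ```

    The point of the round-robin mapping is that a client reading a large contiguous range fans its requests out over all servers at once, which is what gives the parallel aggregate bandwidth the record describes.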

  12. Development and Validation of a Low Cost, Flexible, Open Source Robot for Use as a Teaching and Research Tool across the Educational Spectrum

    ERIC Educational Resources Information Center

    Howell, Abraham L.

    2012-01-01

    In the high-tech factories of today, robots are used to perform tasks spanning a wide spectrum, from high-speed, automated assembly of cell phones, laptops and other electronic devices to the compounding, filling, packaging and distribution of life-saving pharmaceuticals. As robot usage continues to…

  13. Naval Research Laboratory Fact Book 2012

    DTIC Science & Technology

    2012-11-01

    Distributed network-based battle management ... High performance computing supporting uniform and nonuniform memory access with single and multithreaded ... hyperspectral systems ... VNIR, MWIR, and LWIR high-resolution systems ... Wideband SAR systems ... RF and laser data links ... High-speed, high-power ... hyperspectral imaging system ... Long-wave infrared (LWIR) quantum well IR photodetector (QWIP) imaging system ... Research and Development Services Division

  14. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar; ...

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.
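
    The three-stage time model in the record above (transfer time + queue wait + compute time per candidate resource) can be sketched as follows; the bandwidths, queue times, and resource names are illustrative assumptions, not values from the paper:

    ```python
    # Sketch of a three-stage execution-time model for a reconstruction
    # workflow, used to pick the fastest of several distributed resources.
    # All numbers below are invented for illustration.

    def estimated_time(data_gb, bandwidth_gbps, queue_s, compute_s):
        transfer_s = data_gb * 8 / bandwidth_gbps  # GB -> Gb, then Gb / (Gb/s)
        return transfer_s + queue_s + compute_s

    resources = {
        "local_cluster": dict(bandwidth_gbps=1.0, queue_s=60.0, compute_s=5400.0),
        "remote_hpc":    dict(bandwidth_gbps=10.0, queue_s=600.0, compute_s=900.0),
    }

    data_gb = 100.0
    times = {name: estimated_time(data_gb, **r) for name, r in resources.items()}
    best = min(times, key=times.get)                     # resource selection
    speedup = max(times.values()) / times[best]          # gain vs worst choice
    ```

    Note how the remote resource wins despite a much longer queue wait: the model captures exactly the transfer/queue/compute trade-off the paper evaluates.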

  15. Optimization of tomographic reconstruction workflows on geographically distributed resources

    PubMed Central

    Bicer, Tekin; Gürsoy, Doǧa; Kettimuthu, Rajkumar; De Carlo, Francesco; Foster, Ian T.

    2016-01-01

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. 
Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Moreover, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks. PMID:27359149

  16. Optimization of tomographic reconstruction workflows on geographically distributed resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bicer, Tekin; Gursoy, Doga; Kettimuthu, Rajkumar

    New technological advancements in synchrotron light sources enable data acquisitions at unprecedented levels. This emergent trend affects not only the size of the generated data but also the need for larger computational resources. Although beamline scientists and users have access to local computational resources, these are typically limited and can result in extended execution times. Applications that are based on iterative processing, as in tomographic reconstruction methods, require high-performance compute clusters for timely analysis of data. Here, time-sensitive analysis and processing of Advanced Photon Source data on geographically distributed resources are focused on. Two main challenges are considered: (i) modeling of the performance of tomographic reconstruction workflows and (ii) transparent execution of these workflows on distributed resources. For the former, three main stages are considered: (i) data transfer between storage and computational resources, (ii) wait/queue time of reconstruction jobs at compute resources, and (iii) computation of reconstruction tasks. These performance models allow evaluation and estimation of the execution time of any given iterative tomographic reconstruction workflow that runs on geographically distributed resources. For the latter challenge, a workflow management system is built, which can automate the execution of workflows and minimize the user interaction with the underlying infrastructure. The system utilizes Globus to perform secure and efficient data transfer operations. The proposed models and the workflow management system are evaluated by using three high-performance computing and two storage resources, all of which are geographically distributed. Workflows were created with different computational requirements using two compute-intensive tomographic reconstruction algorithms. Experimental evaluation shows that the proposed models and system can be used for selecting the optimum resources, which in turn can provide up to 3.13× speedup (on experimented resources). Furthermore, the error rates of the models range between 2.1 and 23.3% (considering workflow execution times), where the accuracy of the model estimations increases with higher computational demands in reconstruction tasks.

  17. Multi-kw dc power distribution system study program

    NASA Technical Reports Server (NTRS)

    Berkery, E. A.; Krausz, A.

    1974-01-01

    The first phase of the Multi-kw dc Power Distribution Technology Program is reported, involving the test and evaluation of a technology breadboard in a specially designed test facility, according to design concepts developed in a previous study on space vehicle electrical power processing, distribution, and control. The static and dynamic performance, fault isolation, reliability, electromagnetic interference characteristics, and operability factors of high voltage distribution systems were studied in order to gain a technology base for the use of high voltage dc systems in future aerospace vehicles. Detailed technical descriptions are presented and include data for the following: (1) dynamic interactions due to operation of solid state and electromechanical switchgear; (2) multiplexed and computer controlled supervision and checkout methods; (3) pulse width modulator design; and (4) cable design factors.

  18. AGARD Index of Publications 1983-1985

    DTIC Science & Technology

    1987-06-01

    a high performance high speed General Aviation propeller ... the advent of the highly loaded program ... distribution data at high speed and CLmax data at low speed are described. NS3-3036# Saab-Scania, Linkoping (Sweden). A flight wing pressure survey which ... also well with predictions based on wind tunnel data ... flight at high speed and wind tunnel measurements on a half ... Reynolds Number and transition

  19. Genome-wide survey of DNA-binding proteins in Arabidopsis thaliana: analysis of distribution and functions.

    PubMed

    Malhotra, Sony; Sowdhamini, Ramanathan

    2013-08-01

    The interaction of proteins with their respective DNA targets is known to control many high-fidelity cellular processes. Performing a comprehensive survey of sequenced genomes for DNA-binding proteins (DBPs) will help in understanding their distribution and associated functions in a particular genome. The availability of the fully sequenced genome of Arabidopsis thaliana enables a review of the distribution of DBPs in this model plant genome. We used profiles of both structure- and sequence-based DNA-binding families, derived from the PDB and PFam databases, to perform the survey. This resulted in 4471 proteins identified as DNA-binding in the Arabidopsis genome, distributed across 300 different PFam families. Apart from several plant-specific DNA-binding families, certain RING fingers and leucine zippers also had high representation. Our search protocol helped to assign DNA-binding properties to several proteins that were previously marked as unknown, putative or hypothetical in function. The distribution of Arabidopsis genes having a role in plant DNA repair was particularly studied, and these genes were noted for their functional mapping. The functions observed to be overrepresented in the plant genome include DNA-3-methyladenine glycosylase activity, alkylbase DNA N-glycosylase activity and DNA-(apurinic or apyrimidinic site) lyase activity, suggesting roles in specialized functions such as gene regulation and DNA repair.
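
    The family-distribution tally such a survey produces can be sketched as a simple count over per-protein family assignments; the gene IDs and family labels below are illustrative placeholders, not results from the study:

    ```python
    # Sketch: tallying how many proteins fall into each DNA-binding family,
    # given family assignments from profile searches. Data are invented.
    from collections import Counter

    assignments = {
        "AT1G01010": "NAC", "AT1G01060": "MYB", "AT1G01250": "AP2",
        "AT1G01720": "NAC", "AT2G46830": "MYB", "AT3G24650": "B3",
    }

    family_counts = Counter(assignments.values())   # family -> protein count
    top_family, top_count = family_counts.most_common(1)[0]
    ```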

  20. Alternative evaluation metrics for risk adjustment methods.

    PubMed

    Park, Sungchul; Basu, Anirban

    2018-06-01

    Risk adjustment is instituted to counter risk selection by accurately equating payments with expected expenditures. Traditional risk-adjustment methods are designed to estimate accurate payments at the group level. However, this generates residual risks at the individual level, especially for high-expenditure individuals, thereby inducing health plans to avoid those with high residual risks. To identify an optimal risk-adjustment method, we perform a comprehensive comparison of prediction accuracies at the group level, at the tails of the distribution, and at the individual level across 19 estimators: 9 parametric regression, 7 machine learning, and 3 distributional estimators. Using the 2013-2014 MarketScan database, we find that no one estimator performs best on all prediction accuracies. Generally, machine learning and distribution-based estimators achieve higher group-level prediction accuracy than parametric regression estimators. However, parametric regression estimators show higher tail-distribution and individual-level prediction accuracy, especially at the tails of the distribution. This suggests a trade-off in selecting an appropriate risk-adjustment method between estimating accurate payments at the group level and lowering residual risks at the individual level. Our results indicate that an optimal method cannot be determined solely on the basis of statistical metrics but rather needs to account for simulations of plans' risk-selection behaviors. Copyright © 2018 John Wiley & Sons, Ltd.
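
    The contrast between group-level and individual-level accuracy can be illustrated with a toy example (synthetic numbers, not MarketScan data): a predictor can match the group mean closely while badly missing the high-expenditure tail:

    ```python
    # Sketch of the two accuracy views contrasted above: group-level
    # (predicted vs actual mean payment) versus individual-level
    # (mean absolute residual, especially in the expenditure tail).
    from statistics import mean

    actual    = [100, 200, 300, 400, 5000]   # one high-cost individual
    predicted = [150, 250, 350, 450, 3800]   # under-predicts the tail

    group_error = abs(mean(predicted) - mean(actual))            # small
    individual_mae = mean(abs(p - a) for p, a in zip(predicted, actual))
    tail = max(range(len(actual)), key=actual.__getitem__)
    tail_residual = abs(predicted[tail] - actual[tail])          # large
    ```

    Here the group-level error is only 200 while the residual on the single high-cost individual is 1200, which is precisely the residual risk that can induce plans to avoid such enrollees.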

  1. Development and in vivo evaluation of self-microemulsion as delivery system for α-mangostin.

    PubMed

    Xu, Wen-Ke; Jiang, Hui; Yang, Kui; Wang, Ya-Qin; Zhang, Qian; Zuo, Jian

    2017-03-01

    α-Mangostin (MG) is a versatile bioactive compound isolated from mangosteen that suffers from significant pharmacokinetic shortcomings. To augment its potential clinical efficacy, MG-loaded self-microemulsion (MG-SME) was designed and prepared in this study, and its potential as a drug delivery system was evaluated based on pharmacokinetic performance and tissue distribution. The formula of MG-SME was optimized by an orthogonal test under the guidance of a ternary phase diagram, and the prepared MG-SME was characterized by encapsulation efficiency, size distribution, and morphology. An optimized high performance liquid chromatography method was employed to determine concentrations of MG and characterize its pharmacokinetic and tissue distribution features in rodents. Diluted MG-SME was characterized as spherical particles with a mean diameter of 24.6 nm and an encapsulation efficiency of 87.26%. The delivery system enhanced the area under the curve of MG by 4.75 times and increased its distribution in lymphatic organs. These findings suggest that SME, as a nano-sized delivery system, efficiently promoted the digestive-tract absorption of MG and modified its distribution in tissues. The targeting feature and high oral bioavailability of MG-SME promise good clinical efficacy, especially for immune diseases. Copyright © 2017. Published by Elsevier Taiwan.

  2. MultiPhyl: a high-throughput phylogenomics webserver using distributed computing

    PubMed Central

    Keane, Thomas M.; Naughton, Thomas J.; McInerney, James O.

    2007-01-01

    With the number of fully sequenced genomes increasing steadily, there is greater interest in performing large-scale phylogenomic analyses from large numbers of individual gene families. Maximum likelihood (ML) has been shown repeatedly to be one of the most accurate methods for phylogenetic construction. Recently, there have been a number of algorithmic improvements in maximum-likelihood-based tree search methods. However, it can still take a long time to analyse the evolutionary history of many gene families using a single computer. Distributed computing refers to a method of combining the computing power of multiple computers in order to perform some larger overall calculation. In this article, we present the first high-throughput implementation of a distributed phylogenetics platform, MultiPhyl, capable of using the idle computational resources of many heterogeneous non-dedicated machines to form a phylogenetics supercomputer. MultiPhyl allows a user to upload hundreds or thousands of amino acid or nucleotide alignments simultaneously and perform computationally intensive tasks such as model selection, tree searching and bootstrapping of each of the alignments using many desktop machines. The program implements a set of 88 amino acid models and 56 nucleotide maximum likelihood models and a variety of statistical methods for choosing between alternative models. A MultiPhyl webserver is available for public use at: http://www.cs.nuim.ie/distributed/multiphyl.php. PMID:17553837
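
    The master/worker distribution pattern such a system relies on can be sketched with a local thread pool standing in for remote desktop machines; the scoring function is a hypothetical placeholder for per-alignment model selection and tree search:

    ```python
    # Sketch: farming out independent per-alignment analyses to idle workers,
    # as a distributed phylogenetics platform does across many machines.
    # A thread pool stands in for the remote hosts here.
    from concurrent.futures import ThreadPoolExecutor

    def analyse(alignment_id: int) -> tuple:
        # Placeholder for model selection + ML tree search on one alignment;
        # returns a hypothetical log-likelihood score.
        score = -1000.0 - alignment_id
        return alignment_id, score

    alignments = list(range(100))
    with ThreadPoolExecutor(max_workers=8) as pool:
        results = dict(pool.map(analyse, alignments))
    ```

    Because each alignment is independent, the work scales out embarrassingly well, which is why idle, heterogeneous, non-dedicated machines can be combined into a useful "phylogenetics supercomputer".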

  3. Implementing High-Performance Geometric Multigrid Solver with Naturally Grained Messages

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shan, Hongzhang; Williams, Samuel; Zheng, Yili

    2015-10-26

    Structured-grid linear solvers often require manual packing and unpacking of communication data to achieve high performance. Orchestrating this process efficiently is challenging, labor-intensive, and potentially error-prone. In this paper, we explore an alternative approach that communicates the data with naturally grained message sizes without manual packing and unpacking. This approach is the distributed analogue of shared-memory programming, taking advantage of the global address space in PGAS languages to provide substantial programming ease. However, its performance may suffer from the large number of small messages. We investigate the runtime support required in the UPC++ library for this naturally grained version to close the performance gap between the two approaches and attain comparable performance at scale, using the High-Performance Geometric Multigrid (HPGMG-FV) benchmark as a driver.
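
    A toy contrast between the two communication styles the record describes, with plain Python lists standing in for remote memory (the real work uses UPC++ global addresses and remote puts):

    ```python
    # Sketch: exchanging a ghost-zone face of a structured grid.
    # Style 1 packs the face into one buffer (one bulk message);
    # style 2 moves each element individually ("naturally grained").

    N = 4
    grid = [[10 * i + j for j in range(N)] for i in range(N)]

    # Style 1: manual packing -- one contiguous message for the east face.
    packed = [grid[i][N - 1] for i in range(N)]   # pack step
    neighbor_ghost = list(packed)                 # single bulk transfer

    # Style 2: naturally grained -- one small transfer per element,
    # no pack/unpack code, at the cost of many small messages.
    neighbor_ghost2 = [None] * N
    for i in range(N):
        neighbor_ghost2[i] = grid[i][N - 1]       # fine-grained put
    ```

    Both styles deliver the same ghost data; the paper's question is whether runtime support can make the fine-grained version perform as well as the hand-packed one.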

  4. Efficiently passing messages in distributed spiking neural network simulation.

    PubMed

    Thibeault, Corey M; Minkovich, Kirill; O'Brien, Michael J; Harris, Frederick C; Srinivasa, Narayan

    2013-01-01

    Efficiently passing spiking messages in a neural model is an important aspect of high-performance simulation. As the scale of networks has increased so has the size of the computing systems required to simulate them. In addition, the information exchange of these resources has become more of an impediment to performance. In this paper we explore spike message passing using different mechanisms provided by the Message Passing Interface (MPI). A specific implementation, MVAPICH, designed for high-performance clusters with Infiniband hardware is employed. The focus is on providing information about these mechanisms for users of commodity high-performance spiking simulators. In addition, a novel hybrid method for spike exchange was implemented and benchmarked.
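
    The per-rank bookkeeping behind such spike exchanges can be sketched as follows; this models only the packing step that precedes an MPI all-to-all style exchange, and the neuron-to-rank mapping is a hypothetical round-robin placement:

    ```python
    # Sketch: grouping spiking neuron ids into one send buffer per MPI rank
    # before a collective exchange. The actual study uses MPI/MVAPICH over
    # Infiniband; this shows only the packing logic.

    def pack_spikes(spikes, neuron_to_rank, n_ranks):
        """Group spiking neuron ids into one send buffer per destination rank."""
        buffers = [[] for _ in range(n_ranks)]
        for neuron_id in spikes:
            buffers[neuron_to_rank[neuron_id]].append(neuron_id)
        return buffers

    neuron_to_rank = {n: n % 4 for n in range(16)}   # round-robin placement
    spikes = [0, 5, 6, 9, 14]                        # neurons that fired this step
    send_buffers = pack_spikes(spikes, neuron_to_rank, 4)
    ```

    Each inner list would then be handed to the chosen MPI exchange mechanism (point-to-point, Alltoall, or the paper's hybrid scheme).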

  5. Low latency network and distributed storage for next generation HPC systems: the ExaNeSt project

    NASA Astrophysics Data System (ADS)

    Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Paolucci, P. S.; Pastorelli, E.; Pisani, F.; Simula, F.; Vicini, P.; Navaridas, J.; Chaix, F.; Chrysos, N.; Katevenis, M.; Papaeustathiou, V.

    2017-10-01

    With processor architecture evolution, the HPC market has undergone a paradigm shift. The adoption of low-cost, Linux-based clusters extended the reach of HPC from its roots in modelling and simulation of complex physical systems to a broader range of industries, from biotechnology, cloud computing, computer analytics and big data challenges to manufacturing sectors. In this perspective, near-future HPC systems can be envisioned as composed of millions of low-power computing cores, densely packed (and hence cooled by appropriate technology), with a tightly interconnected, low latency and high performance network, and equipped with a distributed storage architecture. Each of these features (dense packing, distributed storage and high performance interconnect) represents a challenge, made all the harder by the need to solve them at the same time. These challenges lie as stumbling blocks along the road towards Exascale-class systems; the ExaNeSt project acknowledges them and tasks itself with investigating ways around them.

  6. Nanometer scale composition study of MBE grown BGaN performed by atom probe tomography

    NASA Astrophysics Data System (ADS)

    Bonef, Bastien; Cramer, Richard; Speck, James S.

    2017-06-01

    Laser-assisted atom probe tomography is used to characterize the alloy distribution in BGaN. The effect of the evaporation conditions applied to the atom probe specimens on the mass spectrum and on the quantification of the group-III site atoms is first evaluated. The evolution of the Ga++/Ga+ charge state ratio is used to monitor the strength of the applied field. Experiments revealed that applying high electric fields to the specimen results in the loss of gallium atoms, leading to over-estimation of the boron concentration. Moreover, spatial analysis of the surface field revealed a significant loss of atoms at the center of the specimen, where high fields are applied. Good agreement between X-ray diffraction and atom probe tomography concentration measurements is obtained when low fields are applied to the tip. A random distribution of boron in the BGaN layer grown by molecular beam epitaxy is obtained by performing accurate, site-specific statistical distribution analysis.

  7. Emotion-attention interactions in recognition memory for distractor faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.

  8. Modeling and experimental performance of an intermediate temperature reversible solid oxide cell for high-efficiency, distributed-scale electrical energy storage

    NASA Astrophysics Data System (ADS)

    Wendel, Christopher H.; Gao, Zhan; Barnett, Scott A.; Braun, Robert J.

    2015-06-01

    Electrical energy storage is expected to be a critical component of the future world energy system, performing load-leveling operations to enable increased penetration of renewable and distributed generation. Reversible solid oxide cells, operating sequentially between power-producing fuel cell mode and fuel-producing electrolysis mode, have the capability to provide highly efficient, scalable electricity storage. However, challenges ranging from cell performance and durability to system integration must be addressed before widespread adoption. One central challenge of the system design is establishing effective thermal management in the two distinct operating modes. This work leverages an operating strategy to use carbonaceous reactant species and operate at intermediate stack temperature (650 °C) to promote exothermic fuel-synthesis reactions that thermally self-sustain the electrolysis process. We present performance of a doped lanthanum-gallate (LSGM) electrolyte solid oxide cell that shows high efficiency in both operating modes at 650 °C. A physically based electrochemical model is calibrated to represent the cell performance and used to simulate roundtrip operation for conditions unique to these reversible systems. Design decisions related to system operation are evaluated using the cell model including current density, fuel and oxidant reactant compositions, and flow configuration. The analysis reveals tradeoffs between electrical efficiency, thermal management, energy density, and durability.
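
    The roundtrip storage efficiency at stake here reduces, for a fixed fuel inventory, to the ratio of discharge to charge voltage; the operating points below are illustrative assumptions, not the paper's measured values:

    ```python
    # Sketch: roundtrip electrical efficiency of a reversible cell as the
    # energy delivered in fuel-cell mode over the energy consumed in
    # electrolysis mode for the same amount of fuel. Voltages are invented.

    F = 96485.0              # Faraday constant, C per mol of electrons
    n = 2                    # electrons transferred per fuel-equivalent molecule

    v_electrolysis = 1.30    # V, charging (fuel-producing) mode
    v_fuel_cell = 0.85       # V, discharging (power-producing) mode

    energy_in = n * F * v_electrolysis   # J per mol of fuel stored
    energy_out = n * F * v_fuel_cell     # J per mol of fuel consumed
    roundtrip_eff = energy_out / energy_in   # reduces to the voltage ratio
    ```

    This is why operating strategies that lower the electrolysis voltage (e.g. via exothermic fuel-synthesis chemistry at intermediate temperature) directly raise the roundtrip efficiency.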

  9. Evolutionary Telemetry and Command Processor (TCP) architecture

    NASA Technical Reports Server (NTRS)

    Schneider, John R.

    1992-01-01

    A low cost, modular, high performance, and compact Telemetry and Command Processor (TCP) is being built as the foundation of command and data handling subsystems for the next generation of satellites. The TCP product line will support command and telemetry requirements for small to large spacecraft and from low to high rate data transmission. It is compatible with the latest TDRSS, STDN and SGLS transponders and provides CCSDS protocol communications in addition to standard TDM formats. Its high performance computer provides computing resources for hosted flight software. Layered and modular software provides common services using standardized interfaces to applications thereby enhancing software re-use, transportability, and interoperability. The TCP architecture is based on existing standards, distributed networking, distributed and open system computing, and packet technology. The first TCP application is planned for the 94 SDIO SPAS 3 mission. The architecture enhances rapid tailoring of functions thereby reducing costs and schedules developed for individual spacecraft missions.

  10. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture ... radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed ... distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures

  11. High Resolution Neutron Radiography and Tomography of Hydrided Zircaloy-4 Cladding Materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Tyler S; Bilheux, Hassina Z; Ray, Holly B

    2015-01-01

    Neutron radiography for hydrogen analysis was performed with several Zircaloy-4 cladding samples with controlled hydrogen concentrations up to 1100 ppm. Hydrogen charging was performed in a process tube that was heated to facilitate hydrogen absorption by the metal. A correlation between the hydrogen concentration in the hydrided tubes and the neutron intensity was established, by which hydrogen content can be determined precisely in a small area (55 μm × 55 μm). Radiography analysis was also performed to evaluate the heating rate and its correlation with the hydrogen distribution through hydrided materials. In addition to radiography analysis, tomography experiments were performed on Zircaloy-4 tube samples to study the local hydrogen distribution. Through tomography analysis, a 3D reconstruction of the tube was evaluated, in which an uneven hydrogen distribution in the circumferential direction can be observed.
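    The intensity-to-concentration correlation described above rests on Beer-Lambert-type neutron attenuation; the following minimal sketch shows how such a calibration might be applied (the linear calibration form and the function names are illustrative assumptions, not taken from the paper):

    ```python
    import math

    def attenuation(transmission):
        """Convert a measured neutron transmission ratio I/I0 into a
        total attenuation value (mu * thickness) via Beer-Lambert."""
        return -math.log(transmission)

    def hydrogen_ppm(transmission, slope, intercept=0.0):
        """Map attenuation to hydrogen concentration with a linear
        calibration; slope/intercept would come from reference samples
        of known hydrogen content (hypothetical values here)."""
        return slope * attenuation(transmission) + intercept
    ```

    With a calibration slope fitted to standards, each 55 μm pixel's transmission would then map directly to a local hydrogen concentration.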

  12. Interaction and Impact Studies for Distributed Energy Resource, Transactive Energy, and Electric Grid, using High Performance Computing-based Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelley, B. M.

    The electric utility industry is undergoing significant transformations in its operation model, including a greater emphasis on automation, monitoring technologies, and distributed energy resource management systems (DERMS). While these changes and new technologies drive greater efficiency and reliability, they may also introduce new vectors of cyber attack. The appropriate cybersecurity controls to address and mitigate these newly introduced attack vectors and potential vulnerabilities are still widely unknown, and the performance of such controls is difficult to vet. This proposal argues that modeling and simulation (M&S) is a necessary tool for addressing and better understanding the problems introduced by emerging technologies for the grid. M&S will provide electric utilities a platform to model their transmission and distribution systems and run various simulations against the model to better understand the operational impact and performance of cybersecurity controls.

  13. Exhaust emission reduction for intermittent combustion aircraft engines

    NASA Technical Reports Server (NTRS)

    Moffett, R. N.

    1979-01-01

    Three concepts for optimizing the performance, increasing the fuel economy, and reducing exhaust emissions of the piston aircraft engine were investigated. High-energy multiple-spark discharge with revised spark plug tip penetration, ultrasonic fuel vaporization, and variable valve timing were evaluated individually. Ultrasonic fuel vaporization did not demonstrate sufficient improvement in distribution to offset the performance loss caused by the additional manifold restriction. High-energy ignition and revised spark plug tip location provided no change in performance or emissions. Variable valve timing provided some performance benefit; however, even greater performance improvement was obtained through induction system tuning, which could be accomplished with far less complexity.

  14. Counterfactual entanglement distribution without transmitting any particles.

    PubMed

    Guo, Qi; Cheng, Liu-Yong; Chen, Li; Wang, Hong-Fu; Zhang, Shou

    2014-04-21

    To date, all schemes for entanglement distribution have needed to send entangled particles or a separable mediating particle among distant participants. Here, we propose a counterfactual protocol for entanglement distribution that departs from these traditional forms: two distant particles can be entangled with no physical particle traveling between the two remote participants. We also present an alternative scheme for realizing counterfactual photonic entangled-state distribution using a Michelson-type interferometer and a self-assembled GaAs/InAs quantum dot embedded in an optical microcavity. Numerical analysis of the effect of experimental imperfections on the performance of the scheme shows that the entanglement distribution may be implementable with high fidelity.

  15. High- and Reproducible-Performance Graphene/II-VI Semiconductor Film Hybrid Photodetectors

    PubMed Central

    Huang, Fan; Jia, Feixiang; Cai, Caoyuan; Xu, Zhihao; Wu, Congjun; Ma, Yang; Fei, Guangtao; Wang, Min

    2016-01-01

    High- and reproducible-performance photodetectors are critical to the development of many technologies; such devices mainly comprise one-dimensional (1D) nanostructure-based and film-based photodetectors. The former suffer from huge performance variation because their performance is quite sensitive to the synthesis microenvironment of the 1D nanostructure. Herein, we show that graphene/semiconductor film hybrid photodetectors not only possess high performance but also have reproducible performance. As a demonstration, the as-produced graphene/ZnS film hybrid photodetector shows a high responsivity of 1.7 × 10⁷ A/W and a fast response speed of 50 ms, together with highly reproducible performance in terms of the narrow distribution of photocurrent (38–65 μA) and response speed (40–60 ms) across 20 devices. Graphene/ZnSe film and graphene/CdSe film hybrid photodetectors fabricated by this method also show high and reproducible performance. The general method is compatible with the conventional planar process, would be easily standardized, and thus paves the way for photodetector applications. PMID:27349692

  16. LibIsopach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brunhart-Lupo, Nicholas

    2016-12-06

    LibIsopach is a toolkit for high performance distributed immersive visualization, leveraging modern OpenGL. It features a multi-process scenegraph, explicit instance rendering, mesh generation, and three-dimensional user interaction event processing.

  17. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    NASA Astrophysics Data System (ADS)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limit on the quality of available cameras and image control. ACSs using face recognition are thus required to handle much lower quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. It evaluates and utilizes only the reliable features among the trained ones during each authentication, achieving high recognition performance. A field evaluation using a pseudo access control system installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality: about four times lower EER (Equal Error Rate) under a variety of image conditions than the same system without prior probability distributions. By contrast, image-difference features without priors are sensitive to image quality. We also evaluated PCA, which has worse but consistent performance because of its general optimization over the full data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to performance no better than PCA under poor image conditions.

  18. Formulating the shear stress distribution in circular open channels based on the Renyi entropy

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein

    2018-01-01

    The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and of circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance improved with increasing flow depth (mean relative error (RE) of 0.0414) and deteriorated only slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depths. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise, predicting the shear stress distribution in a circular channel with an RE of 0.0480 and in a circular channel with a flat bed with an RE of 0.0488.
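    The mean relative error (RE) used above to score the entropy models is straightforward to compute; a minimal sketch (variable names are illustrative, not the authors' code):

    ```python
    def mean_relative_error(predicted, observed):
        """Mean of |predicted - observed| / observed over all points,
        as used to compare modeled and measured shear stress."""
        if len(predicted) != len(observed):
            raise ValueError("series must have equal length")
        return sum(abs(p - o) / o for p, o in zip(predicted, observed)) / len(observed)
    ```

    An RE of 0.0480, for instance, means the model's predictions deviate from the measurements by 4.8% on average.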

  19. Experiments in structural dynamics and control using a grid

    NASA Technical Reports Server (NTRS)

    Montgomery, R. C.

    1985-01-01

    Future spacecraft are being conceived that are highly flexible and of extreme size. These two features, flexibility and size, pose new problems in control system design. Since large-scale structures are not testable in ground-based facilities, decisions on component placement must be made prior to full-scale tests on the spacecraft. Control law research is directed at the problem that the modelling knowledge required to achieve peak performance is inadequate prior to operation. Another crucial problem addressed is accommodating failures in systems with smart components that are physically distributed on highly flexible structures. Parameter-adaptive control is a promising method that provides on-orbit tuning of the control system to improve performance by upgrading the mathematical model of the spacecraft during operation. Two specific questions are answered in this work: What limits does on-line parameter identification with realistic sensors and actuators place on the ultimate achievable performance of a system in the highly flexible environment? And how well must the mathematical model used in on-board analytic redundancy be known, and what are reasonable expectations for advanced redundancy management schemes in the highly flexible and distributed-component environment?

  20. Collaborative modeling: the missing piece of distributed simulation

    NASA Astrophysics Data System (ADS)

    Sarjoughian, Hessam S.; Zeigler, Bernard P.

    1999-06-01

    The Department of Defense's overarching goal of performing distributed simulation by overcoming geographic and time constraints has brought the problem of distributed modeling to the forefront. The High Level Architecture standard is primarily intended for simulation interoperability. However, the existence of a distributed modeling infrastructure plays a fundamental and central role in supporting the development of distributed simulations. In this paper, we describe some fundamental distributed modeling concepts and their implications for constructing successful distributed simulations. In addition, we discuss the Collaborative DEVS Modeling environment, which has been devised to enable geographically dispersed modelers to collaborate and synthesize modular and hierarchical models. We provide an actual example of the use of Collaborative DEVS Modeler in a project involving corporate partners developing an HLA-compliant distributed simulation exercise.

  1. Introducing high performance distributed logging service for ACS

    NASA Astrophysics Data System (ADS)

    Avarias, Jorge A.; López, Joao S.; Maureira, Cristián; Sommer, Heiko; Chiozzi, Gianluca

    2010-07-01

    The ALMA Common Software (ACS) is a software framework that provides the infrastructure for the Atacama Large Millimeter Array and other projects. ACS, based on CORBA, offers basic services and common design patterns for distributed software. Every properly built system needs to be able to log status and error information. Logging in a single-computer scenario can be as easy as using fprintf statements. In a distributed system, however, it must provide a way to centralize all logging data in a single place without overloading the network or complicating the applications. ACS provides a complete logging service infrastructure in which every log has an associated priority and timestamp, allowing filtering at different levels of the system (application, service, and clients). Currently the ACS logging service uses an implementation of the CORBA Telecom Log Service in a customized way, using only a minimal subset of the features provided by the standard. The most relevant feature used by ACS is the ability to treat logs as event data distributed over the network in a publisher-subscriber paradigm. For this purpose the CORBA Notification Service, which is resource intensive, is used. The Data Distribution Service (DDS), on the other hand, provides an alternative standard for publisher-subscriber communication in real-time systems, offering better performance and featuring decentralized message processing. This document describes how the new high performance logging service of ACS has been modeled and developed using DDS, replacing the Telecom Log Service. Benefits and drawbacks are analyzed, and a benchmark comparing the two implementations is presented.
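    The publisher-subscriber pattern at the heart of both the Telecom Log Service and DDS approaches, including priority-based filtering before delivery, can be illustrated with a minimal in-process sketch (a generic illustration only; the class and method names are invented and do not reflect the ACS or DDS APIs):

    ```python
    class LogBus:
        """Toy publisher-subscriber channel: publishers emit
        (priority, message) records, and each subscriber registers a
        minimum priority so that filtering happens before delivery."""

        def __init__(self):
            self._subscribers = []

        def subscribe(self, callback, min_priority=0):
            """Register a callback invoked for logs at or above min_priority."""
            self._subscribers.append((min_priority, callback))

        def publish(self, priority, message):
            """Deliver a log record to every subscriber whose filter admits it."""
            for min_priority, callback in self._subscribers:
                if priority >= min_priority:
                    callback(priority, message)


    bus = LogBus()
    received = []
    bus.subscribe(lambda priority, message: received.append(message), min_priority=2)
    bus.publish(1, "debug detail")    # below the filter, dropped
    bus.publish(3, "antenna fault")   # delivered
    ```

    In a real DDS deployment the bus spans the network and filtering is decentralized at each subscriber, which is precisely what reduces the load compared with a central Notification Service.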

  2. Charon Message-Passing Toolkit for Scientific Computations

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob F.; Yan, Jerry (Technical Monitor)

    2000-01-01

    Charon is a library, callable from C and Fortran, that aids the conversion of structured-grid legacy codes, such as those used in the numerical computation of fluid flows, into parallel, high-performance codes. Key are functions that define distributed arrays, that map between distributed and non-distributed arrays, and that allow easy specification of common communications on structured grids. The library is based on the widely accepted MPI message-passing standard. We present an overview of the functionality of Charon and some representative results.
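    The distributed-array bookkeeping that such a library must provide can be sketched as a simple block distribution mapping global indices to owning processes (an illustrative sketch only; Charon's actual interface differs):

    ```python
    def block_owner(global_index, n_elements, n_procs):
        """Map a global array index to (process rank, local index)
        under a block distribution of n_elements over n_procs."""
        block = (n_elements + n_procs - 1) // n_procs  # ceiling division
        rank = global_index // block
        return rank, global_index - rank * block

    # e.g. 10 elements over 4 processes gives blocks of size 3, 3, 3, 1
    ```

    Functions like this, plus gather/scatter operations between distributed and non-distributed forms, are the core of what the abstract calls mapping between array layouts.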

  3. Reducing Avoidable Deaths Among Veterans: Directing Private-Sector Surgical Care to High-Performance Hospitals

    PubMed Central

    Weeks, William B.; West, Alan N.; Wallace, Amy E.; Lee, Richard E.; Goodman, David C.; Dimick, Justin B.; Bagian, James P.

    2007-01-01

    Objectives. We quantified older (65 years and older) Veterans Health Administration (VHA) patients’ use of the private sector to obtain 14 surgical procedures and assessed the potential impact of directing that care to high-performance hospitals. Methods. Using a merged VHA–Medicare inpatient database for 2000 and 2001, we determined where older VHA enrollees obtained 6 cardiovascular surgeries and 8 cancer resections and whether private-sector care was obtained in high- or low-performance hospitals (based on historical performance and determined 2 years in advance of the service year). We then modeled the mortality and travel burden effect of directing private-sector care to high-performance hospitals. Results. Older veterans obtained most of their procedures in the private sector, but that care was equally distributed across high- and low-performance hospitals. Directing private-sector care to high-performance hospitals could have led to the avoidance of 376 to 584 deaths, most through improved cardiovascular care outcomes. Using historical mortality to define performance would produce better outcomes with lower travel time. Conclusions. Policy that directs older VHA enrollees’ private-sector care to high-performance hospitals promises to reduce mortality for VHA’s service population and warrants further exploration. PMID:17971543

  4. The study of aluminium anodes for high power density Al/air batteries with brine electrolytes

    NASA Astrophysics Data System (ADS)

    Nestoridi, Maria; Pletcher, Derek; Wood, Robert J. K.; Wang, Shuncai; Jones, Richard L.; Stokes, Keith R.; Wilcock, Ian

    Aluminium alloys containing small additions of both tin (∼0.1 wt%) and gallium (∼0.05 wt%) are shown to dissolve anodically at high rates in sodium chloride media at room temperature; current densities >0.2 A cm⁻² can be obtained at potentials close to the open circuit potential, ∼-1500 mV versus SCE. The tin exists in the alloys as a second phase, typically as ∼1 μm inclusions (precipitates) distributed throughout the aluminium structure, and anodic dissolution occurs to form pits around the tin inclusions. Although the distribution of the gallium in the alloy could not be established, it is also shown to be critical in the formation of these pits as well as in maintaining their activity. The stability of the alloys to open circuit corrosion and the overpotential for high rate dissolution, both critical to battery performance, are shown to depend on factors in addition to elemental composition; both heat treatment and mechanical working influence the performance of the alloy. The correlation between alloy performance and microstructure has been investigated.

  5. Drilling High Precision Holes in Ti6Al4V Using Rotary Ultrasonic Machining and Uncertainties Underlying Cutting Force, Tool Wear, and Production Inaccuracies.

    PubMed

    Chowdhury, M A K; Sharif Ullah, A M M; Anwar, Saqib

    2017-09-12

    Ti6Al4V alloys are difficult-to-cut materials that have extensive applications in the automotive and aerospace industries. A great deal of effort has been made to develop and improve machining operations for Ti6Al4V alloys. This paper presents an experimental study that systematically analyzes the effects of the machining conditions (ultrasonic power, feed rate, spindle speed, and tool diameter) on the performance parameters (cutting force, tool wear, overcut error, and cylindricity error) while drilling high precision holes in a Ti6Al4V workpiece using rotary ultrasonic machining (RUM). Numerical results were obtained by conducting experiments following a design-of-experiments procedure. The effects of the machining conditions on each performance parameter have been determined by constructing a set of possibility distributions (i.e., trapezoidal fuzzy numbers) from the experimental data. A possibility distribution is a probability-distribution-neutral representation of uncertainty, and is effective in quantifying the uncertainty underlying physical quantities when only a limited number of data points is available, as is the case here. Lastly, the optimal machining conditions have been identified using these possibility distributions.
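    A trapezoidal fuzzy number, as used above to represent the possibility distributions, is fully specified by four abscissae a ≤ b ≤ c ≤ d; a minimal membership-function sketch (illustrative only, not the authors' code):

    ```python
    def trapezoidal_membership(x, a, b, c, d):
        """Degree of possibility of x for the trapezoid (a, b, c, d):
        0 outside (a, d), 1 on the plateau [b, c], linear in between."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        if x < b:
            return (x - a) / (b - a)  # rising edge
        return (d - x) / (d - c)      # falling edge
    ```

    In practice the four abscissae would be estimated from the experimental data (e.g. from the observed range and a central interval), which is what makes the representation usable with only a handful of measurements.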

  6. Design of distributed PID-type dynamic matrix controller for fractional-order systems

    NASA Astrophysics Data System (ADS)

    Wang, Dawei; Zhang, Ridong

    2018-01-01

    Given continually rising requirements for product quality and safe operation in industrial production, it is difficult to describe complex large-scale processes with integer-order differential equations; fractional differential equations, however, may precisely represent the intrinsic characteristics of such systems. In this paper, a distributed PID-type dynamic matrix control method for fractional-order systems is proposed. First, a high-order integer-order approximate model is obtained by utilising the Oustaloup method. Then, the step response model vectors of the plant are obtained on the basis of the high-order model, and the online optimisation of the multivariable process is transformed into the optimisation of each small-scale subsystem, which is regarded as a sub-plant controlled in the distributed framework. Furthermore, the PID operator is introduced into the performance index of each subsystem, and the fractional-order PID-type dynamic matrix controller is designed based on a Nash optimisation strategy. Information exchange among the subsystems is realised through the distributed control structure so as to complete the optimisation task for the whole large-scale system. Finally, the control performance of the designed controller is verified by an example.

  7. Comparison of Aero-Propulsive Performance Predictions for Distributed Propulsion Configurations

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas K.; Derlaga, Joseph M.; Deere, Karen A.; Carter, Melissa B.; Viken, Sally A.; Patterson, Michael D.; Litherland, Brandon L.; Stoll, Alex M.

    2017-01-01

    NASA's X-57 "Maxwell" flight demonstrator incorporates distributed electric propulsion technologies in a design that will achieve a significant reduction in energy used in cruise flight. A substantial portion of these energy savings come from beneficial aerodynamic-propulsion interaction. Previous research has shown the benefits of particular instantiations of distributed propulsion, such as the use of wingtip-mounted cruise propellers and leading edge high-lift propellers. However, these benefits have not been reduced to a generalized design or analysis approach suitable for large-scale design exploration. This paper discusses the rapid, "design-order" toolchains developed to investigate the large, complex tradespace of candidate geometries for the X-57. Due to the lack of an appropriate, rigorous set of validation data, the results of these tools were compared to three different computational flow solvers for selected wing and propulsion geometries. The comparisons were conducted using a common input geometry, but otherwise different input grids and, when appropriate, different flow assumptions to bound the comparisons. The results of these studies showed that the X-57 distributed propulsion wing should be able to meet the as-designed performance in cruise flight, while also meeting or exceeding targets for high-lift generation in low-speed flight.

  8. Significantly reducing the processing times of high-speed photometry data sets using a distributed computing model

    NASA Astrophysics Data System (ADS)

    Doyle, Paul; Mtenzi, Fred; Smith, Niall; Collins, Adrian; O'Shea, Brendan

    2012-09-01

    The scientific community is in the midst of a data analysis crisis. The increasing capacity of scientific CCD instrumentation and its falling cost are contributing to an explosive generation of raw photometric data. This data must go through a process of cleaning and reduction before it can be used for high precision photometric analysis. Many existing data processing pipelines either assume a relatively small dataset or are batch processed by a High Performance Computing centre. A radical overhaul of these processing pipelines is required to allow reduction and cleaning of terabyte-sized datasets at near capture rates using an elastic processing architecture. The ability to access computing resources and to allow them to grow and shrink as demand fluctuates is essential, as is exploiting the parallel nature of the datasets. A distributed data processing pipeline is required: it should incorporate lossless data compression, allow for data segmentation, and support processing of data segments in parallel. Academic institutes can collaborate to provide an elastic computing model without requiring large centralized high performance computing data centers. This paper demonstrates how an order-of-magnitude (factor of ten) improvement in overall processing time has been achieved using the "ACN pipeline", a distributed pipeline spanning multiple academic institutes.

  9. High-performance size-exclusion chromatography studies on the formation and distribution of polar compounds in camellia seed oil during heating.

    PubMed

    Feng, Hong-Xia; Sam, Rokayya; Jiang, Lian-Zhou; Li, Yang; Cao, Wen-Ming

    Camellia seed oil (CSO) is rich in oleic acid and contains a large number of active components, which give the oil high nutritional value and a variety of biological activities. The aim of the present study was to determine the changes in the content and distribution of total polar compounds (TPC) in CSO during heating. TPC were isolated by means of preparative flash chromatography and further analyzed by high-performance size-exclusion chromatography (HPSEC). The TPC content of CSO increased from 4.74% to 25.29%, a significantly lower formation rate than that of extra virgin olive oil (EVOO) and soybean oil (SBO) during heating. Heating also resulted in significant differences (P<0.05) in the distribution of TPC among these oils. Though the contents of oxidized triacylglycerol dimers, oxidized triacylglycerol oligomers, and oxidized triacylglycerol monomers significantly increased in all these oils, the increases were much smaller in CSO than in EVOO, indicating that CSO has a greater ability to resist oxidation. This work may help the food oil industry and consumers choose the appropriate oil and decide on its useful lifetime.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer

    This is a set of slides from a guest lecture for a class at the University of Texas at El Paso on visualization and data analysis for high-performance computing. The topics covered are: trends in high-performance computing; scientific visualization, including OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, including in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data", followed by an analysis example.

  11. Hydraulic Tomography and High-Resolution Slug Testing to Determine Hydraulic Conductivity Distributions

    DTIC Science & Technology

    2011-02-01

    [Report documentation page (DTIC Form 298). Report date: February 2011. Title: Hydraulic Tomography and High-Resolution Slug Testing to Determine Hydraulic Conductivity Distributions. Performing organization: University of Kansas Center for Research.]

  12. Performance of high power S-band klystrons focused with permanent magnet

    NASA Astrophysics Data System (ADS)

    Fukuda, S.; Shidara, T.; Saito, Y.; Hanaki, H.; Nakao, K.; Homma, H.; Anami, S.; Tanaka, J.

    1987-02-01

    The performance of high power S-band klystrons focused with permanent magnets is presented. The axial magnetic field distribution and the transverse magnetic field play an important role in tube performance. Effects of the field reversal in the collector and the cathode-anode region are discussed in detail. It is also shown that tube efficiency is strongly affected by the residual transverse magnetic field. The allowable transverse field is less than 0.3 percent of the longitudinal field over the entire RF interaction region of the klystron.

  13. Performance of a 300 Mbps 1:16 serial/parallel optoelectronic receiver module

    NASA Technical Reports Server (NTRS)

    Richard, M. A.; Claspy, P. C.; Bhasin, K. B.; Bendett, M. B.

    1990-01-01

    Optical interconnects are being considered for the high speed distribution of multiplexed control signals in GaAs monolithic microwave integrated circuit (MMIC) based phased array antennas. The performance of a hybrid GaAs optoelectronic integrated circuit (OEIC) is described, as well as its design and fabrication. The OEIC converts a 16-bit serial optical input to a 16-line parallel electrical output using an on-board 1:16 demultiplexer and operates at data rates as high as 300 Mbps. The performance characteristics and potential applications of the device are presented.

  14. Efficient Use of Distributed Systems for Scientific Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Valerie; Chen, Jian; Canfield, Thomas; Richard, Jacques

    2000-01-01

    Distributed computing has been regarded as the future of high performance computing. Nationwide high speed networks such as vBNS are becoming widely available to interconnect high-speed computers, virtual environments, scientific instruments, and large data sets. One of the major issues to be addressed with distributed systems is the development of computational tools that facilitate the efficient execution of parallel applications on such systems. These tools must exploit the heterogeneous resources (networks and compute nodes) in distributed systems. This paper presents a tool, called PART, which addresses this issue for mesh partitioning. PART takes advantage of the following heterogeneous system features: (1) processor speed; (2) number of processors; (3) local network performance; and (4) wide area network performance. Further, different finite element applications under consideration may have different computational complexities, different communication patterns, and different element types, which also must be taken into consideration when partitioning. PART uses parallel simulated annealing to partition the domain, taking into consideration network and processor heterogeneity. The results of using PART for an explicit finite element application executing on two IBM SPs (located at Argonne National Laboratory and the San Diego Supercomputer Center) indicate an increase in efficiency of up to 36% as compared to METIS, a widely used mesh partitioning tool. The input to METIS was modified to take into consideration heterogeneous processor performance; METIS does not take into consideration heterogeneous networks. The execution times for these applications were reduced by up to 30% as compared to METIS. These results are given in Figure 1 for four irregular meshes with element counts ranging from 30,269 (Barth5 mesh) to 11,451 (Barth4 mesh).
    Future work with PART entails using the tool with an integrated application requiring distributed systems. In particular, this application, illustrated in the document, entails an integration of finite element and fluid-dynamic simulations to address the cooling of turbine blades in a gas turbine engine design. It is not uncommon to encounter high-temperature, film-cooled turbine airfoils with millions of degrees of freedom; this results from the complexity of the various components of the airfoils, which require fine-grain meshing for accuracy. Additional information is contained in the original.
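    PART's goal of heterogeneity-aware partitioning, giving faster processors proportionally more elements, can be sketched as follows (an illustrative sketch of the load-balancing objective only; PART itself uses parallel simulated annealing over the mesh connectivity):

    ```python
    def target_partition_sizes(n_elements, processor_speeds):
        """Split n_elements into partitions proportional to each
        processor's relative speed, so faster nodes get more work."""
        total = sum(processor_speeds)
        sizes = [int(n_elements * s / total) for s in processor_speeds]
        # hand the rounding remainder to the fastest processors
        remainder = n_elements - sum(sizes)
        by_speed = sorted(range(len(sizes)), key=lambda i: -processor_speeds[i])
        for i in by_speed[:remainder]:
            sizes[i] += 1
        return sizes
    ```

    A homogeneous tool such as METIS effectively assumes equal speeds; the modification mentioned above amounts to supplying unequal target sizes like these, while network heterogeneity still goes unmodeled.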

  15. High speed propeller performance and noise predictions at takeoff/landing conditions

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Woodward, R. P.; Groeneweg, J. F.

    1988-01-01

    The performance and noise of a high speed SR-7A model propeller under takeoff/landing conditions are considered. The blade loading distributions are obtained by solving the three-dimensional Euler equations and the sound pressure levels are computed using a time domain approach. At the nominal takeoff operating point, the blade sections near the hub are lightly or negatively loaded. The chordwise loading distributions are distinctly different from those of cruise conditions. The noise of the SR-7A model propeller at takeoff is dominated by the loading noise, similar to that at cruise conditions. The waveforms of the acoustic pressure signature are nearly sinusoidal in the plane of the propeller. The computed directivity of the blade passing frequency tone agrees fairly well with the data at nominal takeoff blade angle.

  16. High speed propeller performance and noise predictions at takeoff/landing conditions

    NASA Technical Reports Server (NTRS)

    Nallasamy, M.; Woodward, R. P.; Groeneweg, J. F.

    1987-01-01

    The performance and noise of a high speed SR-7A model propeller under takeoff/landing conditions are considered. The blade loading distributions are obtained by solving the three-dimensional Euler equations and the sound pressure levels are computed using a time domain approach. At the nominal takeoff operating point, the blade sections near the hub are lightly or negatively loaded. The chordwise loading distributions are distinctly different from those of cruise conditions. The noise of the SR-7A model propeller at takeoff is dominated by the loading noise, similar to that at cruise conditions. The waveforms of the acoustic pressure signature are nearly sinusoidal in the plane of the propeller. The computed directivity of the blade passing frequency tone agrees fairly well with the data at nominal takeoff blade angle.

  17. Energetic Particle Loss Estimates in W7-X

    NASA Astrophysics Data System (ADS)

    Lazerson, Samuel; Akaslompolo, Simppa; Drevlak, Michael; Wolf, Robert; Darrow, Douglass; Gates, David; W7-X Team

    2017-10-01

    The collisionless loss of high-energy H+ and D+ ions in the W7-X device is examined using the BEAMS3D code. Simulations of collisionless losses are performed for a large ensemble of particles distributed over various flux surfaces. A clear loss cone is present in the particle distributions. These simulations are compared against slowing-down simulations in which electron impact, ion impact, and pitch angle scattering are considered. Full-device simulations allow tracing of particle trajectories to the first-wall components and provide estimates for the placement of a novel set of energetic particle detectors. Recent performance upgrades to the code allow simulations on more than 1000 processors, providing high-fidelity results. Speedup and future work are discussed. DE-AC02-09CH11466.

  18. A novel liquid chromatography Orbitrap mass spectrometry method with full scan for simultaneous determination of multiple bioactive constituents of Shenkang injection in rat tissues: Application to tissue distribution and pharmacokinetic studies.

    PubMed

    Yang, Jie; Sun, Zhi; Li, Duolu; Duan, Fei; Li, Zhuolun; Lu, Jingli; Shi, Yingying; Xu, Tanye; Zhang, Xiaojian

    2018-06-07

    Shenkang injection is a traditional Chinese formula with good curative effect on chronic renal failure. In this paper, a novel, rapid, and sensitive ultra-high-performance liquid chromatography method coupled with Q Exactive hybrid quadrupole-Orbitrap high-resolution accurate mass spectrometry was developed and validated for the simultaneous determination of seven bioactive constituents of Shenkang injection in rat plasma and tissues after intravenous administration. Acetonitrile was used as the protein precipitation agent in biological sample preparation, with carbamazepine as the internal standard. Chromatographic separation was carried out on a C18 column with a gradient mobile phase consisting of acetonitrile and water (containing 0.1% formic acid). MS analysis was performed in full-scan positive and negative ion modes. The lower limits of quantification for the seven analytes in rat plasma and tissues were 0.1-10 ng/mL. The validated method was successfully applied to tissue distribution and pharmacokinetic studies of Shenkang injection after intravenous administration. The tissue distribution study showed that the seven constituents were concentrated primarily in the kidney. This is the first reported application of Q-Orbitrap full-scan mass spectrometry to tissue distribution and pharmacokinetic studies of Shenkang injection. This article is protected by copyright. All rights reserved.

  19. High performance network and channel-based storage

    NASA Technical Reports Server (NTRS)

    Katz, Randy H.

    1991-01-01

    In the traditional mainframe-centered view of a computer system, storage devices are coupled to the system through complex hardware subsystems called input/output (I/O) channels. With the dramatic shift towards workstation-based computing, and its associated client/server model of computation, storage facilities are now found attached to file servers and distributed throughout the network. We discuss the underlying technology trends that are leading to high performance network-based storage, namely advances in networks, storage devices, and I/O controller and server architectures. We review several commercial systems and research prototypes that are leading to a new approach to high performance computing based on network-attached storage.

  20. A high performance linear equation solver on the VPP500 parallel supercomputer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakanishi, Makoto; Ina, Hiroshi; Miura, Kenichi

    1994-12-31

    This paper describes the implementation of two high performance linear equation solvers developed for the Fujitsu VPP500, a distributed-memory parallel supercomputer system. The solvers take advantage of the key architectural features of the VPP500: (1) scalability to an arbitrary number of processors, up to 222 processors; (2) flexible data transfer among processors provided by a crossbar interconnection network; (3) vector processing capability on each processor; and (4) overlapped computation and transfer. The general linear equation solver based on the blocked LU decomposition method achieves 120.0 GFLOPS with 100 processors on the LINPACK Highly Parallel Computing benchmark.
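
    The blocked LU decomposition at the heart of such solvers can be sketched serially. The code below is a minimal right-looking blocked factorization (panel factorization, triangular solve of the block row, trailing-submatrix update) without pivoting; it illustrates only the blocking structure, not the VPP500's data distribution or computation/transfer overlap, which the abstract does not detail.

```python
def lu_blocked(A, nb):
    """In-place blocked LU without pivoting: A = L*U, L unit lower triangular."""
    n = len(A)
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # 1. Factor the diagonal panel (columns k..k+kb-1) with unblocked LU.
        for j in range(k, k + kb):
            for i in range(j + 1, n):
                A[i][j] /= A[j][j]
                for c in range(j + 1, k + kb):
                    A[i][c] -= A[i][j] * A[j][c]
        # 2. Triangular solve for the block row: U12 = L11^-1 * A12.
        for j in range(k + kb, n):
            for r in range(k, k + kb):
                for p in range(k, r):
                    A[r][j] -= A[r][p] * A[p][j]
        # 3. Rank-kb update of the trailing submatrix: A22 -= L21 * U12.
        for i in range(k + kb, n):
            for j in range(k + kb, n):
                for p in range(k, k + kb):
                    A[i][j] -= A[i][p] * A[p][j]
    return A

# Demo: factor a small diagonally dominant matrix (safe without pivoting).
A0 = [[1.0 if i == j else 1.0 / (1 + i + j) for j in range(6)] for i in range(6)]
for i in range(6):
    A0[i][i] += 6.0
LU = lu_blocked([row[:] for row in A0], 2)
```

    In a distributed solver the trailing update (step 3) dominates and parallelizes well, which is why the block structure is the natural unit for distributing work and overlapping communication.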

  1. Extending HPF for advanced data parallel applications

    NASA Technical Reports Server (NTRS)

    Chapman, Barbara; Mehrotra, Piyush; Zima, Hans

    1994-01-01

    The stated goal of High Performance Fortran (HPF) was to 'address the problems of writing data parallel programs where the distribution of data affects performance'. After examining the current version of the language we are led to the conclusion that HPF has not fully achieved this goal. While the basic distribution functions offered by the language - regular block, cyclic, and block cyclic distributions - can support regular numerical algorithms, advanced applications such as particle-in-cell codes or unstructured mesh solvers cannot be expressed adequately. We believe that this is a major weakness of HPF, significantly reducing its chances of becoming accepted in the numeric community. The paper discusses the data distribution and alignment issues in detail, points out some flaws in the basic language, and outlines possible future paths of development. Furthermore, we briefly deal with the issue of task parallelism and its integration with the data parallel paradigm of HPF.
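
    The regular distributions the abstract names have simple closed-form index-to-owner mappings, which is exactly why they suit regular algorithms and struggle with irregular ones. A sketch of those mappings (Python for illustration; in HPF itself these are expressed with DISTRIBUTE directives such as BLOCK, CYCLIC, and CYCLIC(b)):

```python
def owner_block(i, n, p):
    # BLOCK: contiguous chunks of ceil(n/p) elements per processor.
    b = -(-n // p)
    return i // b

def owner_cyclic(i, n, p):
    # CYCLIC: elements dealt round-robin across the p processors.
    return i % p

def owner_block_cyclic(i, n, p, b):
    # CYCLIC(b): blocks of size b dealt round-robin.
    return (i // b) % p
```

    An unstructured mesh has no such closed form: ownership must follow the mesh connectivity, which is precisely the gap the authors argue HPF's built-in distributions cannot express.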

  2. Aerosol size distribution at Nansen Ice Sheet Antarctica

    NASA Astrophysics Data System (ADS)

    Belosi, F.; Contini, D.; Donateo, A.; Santachiara, G.; Prodi, F.

    2012-04-01

    During austral summer 2006, in the framework of the XXII Italian Antarctic expedition of PNRA (Italian National Program for Research in Antarctica), aerosol particle number size distribution measurements were performed in the 10-500 nm range over the Nansen Ice Sheet glacier (NIS, 74°30' S, 163°27' E; 85 m a.s.l.), a permanently iced branch of the Ross Sea. Observed total particle number concentrations varied between 169 and 1385 cm-3. A monomodal number size distribution, peaking at about 70 nm with no variation during the day, was observed for continental air masses, high wind speed, and low relative humidity. Trimodal number size distributions were also observed, in agreement with measurements performed at the Aboa station, which is located on the opposite side of the Antarctic continent from the NIS. In this case, new particle formation, with subsequent particle growth up to about 30 nm, was observed even when not associated with maritime air masses.

  3. Efficient MgO-based mesoporous CO2 trapper and its performance at high temperature.

    PubMed

    Han, Kun Kun; Zhou, Yu; Chun, Yuan; Zhu, Jian Hua

    2012-02-15

    A novel MgO-based porous adsorbent has been synthesized by a facile co-precipitation method for the first time, to provide a candidate for trapping CO(2) from flue gas at high temperature. The resulting composite exhibits a mesoporous structure with a wide pore size distribution, owing to the even dispersion and distribution of microcrystalline MgO in the alumina framework to form a concrete-like structure. These sorbents capture CO(2) at high temperature (150-400°C) with high reactivity and stability over cyclic adsorption-desorption processes, providing competitive candidates for controlling CO(2) emissions. Copyright © 2011 Elsevier B.V. All rights reserved.

  4. Three-Dimensional Measurements of Fuel Distribution in High-Pressure, High- Temperature, Next-Generation Aviation Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Hicks, Yolanda R.; Locke, Randy J.; Anderson, Robert C.; Zaller, Michelle M.

    1998-01-01

    In our world-class, optically accessible combustion facility at the NASA Lewis Research Center, we have developed the unique capability of making three-dimensional fuel distribution measurements of aviation gas turbine fuel injectors at actual operating conditions. These measurements are made in situ at the actual operating temperatures and pressures using the JP-grade fuels of candidate next-generation advanced aircraft engines for the High Speed Research (HSR) and Advanced Subsonics Technology (AST) programs. The inlet temperature and pressure ranges used thus far are 300 to 1100 F and 80 to 250 psia. With these data, we can obtain the injector spray angles, the fuel mass distributions of liquid and vapor, the degree of fuel vaporization, and the degree to which fuel has been consumed. The data have been used to diagnose the performance of injectors designed both in-house and by major U.S. engine manufacturers and to design new fuel injectors with overall engine performance goals of increased efficiency and reduced environmental impact. Mie scattering is used to visualize the liquid fuel, and laser-induced fluorescence is used to visualize both liquid and fuel vapor.

  5. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    NASA Technical Reports Server (NTRS)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc. to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically-poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.
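
    The B92 protocol itself is simple enough to sketch. The following is an idealized simulation (no loss, noise, or eavesdropper) of the standard B92 sifting rule, with Alice encoding bit 0 as |0> and bit 1 as |+>; it illustrates the generic protocol, not the NASA experiment's implementation.

```python
import random

random.seed(7)

# B92: Alice sends one of two non-orthogonal states. Bob measures in a
# random basis; only a "1" outcome in Z (impossible for |0>) or a "-"
# outcome in X (impossible for |+>) is conclusive, because each rules
# out exactly one of Alice's two states.
def b92(n_bits):
    sifted_alice, sifted_bob = [], []
    for _ in range(n_bits):
        a = random.randrange(2)          # Alice's raw bit
        basis = random.choice("ZX")      # Bob's random measurement basis
        if basis == "Z":
            # |0> always yields 0 in Z; |+> yields 1 half the time.
            outcome = 0 if a == 0 else random.randrange(2)
            if outcome == 1:             # conclusive: state was |+>, bit 1
                sifted_alice.append(a)
                sifted_bob.append(1)
        else:
            # |+> always yields "+" in X; |0> yields "-" half the time.
            outcome = 0 if a == 1 else random.randrange(2)  # 1 means "-"
            if outcome == 1:             # conclusive: state was |0>, bit 0
                sifted_alice.append(a)
                sifted_bob.append(0)
    return sifted_alice, sifted_bob

alice_key, bob_key = b92(4000)
```

    In the ideal case every sifted bit matches and about one transmitted bit in four survives sifting.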

  6. Fabrication and Electrical Characterization of Correlated Oxide Field Effect Switching Devices for High Speed Electronics

    DTIC Science & Technology

    2015-11-19

    Final Report, 11/19/2015. Fabrication and electrical characterization of correlated oxide field effect switching devices for high speed electronics. PI: Shriram Ramanathan, Harvard University (President & Fellows of Harvard College), 29 Oxford St, Pierce Hall, Cambridge, MA 02138. AFOSR Grant FA9550-12-1. DISTRIBUTION A: Distribution approved for public release.

  7. Transforming Our Cities: High-Performance Green Infrastructure (WERF Report INFR1R11)

    EPA Science Inventory

    The objective of this project is to demonstrate that the highly distributed real-time control (DRTC) technologies for green infrastructure being developed by the research team can play a critical role in transforming our nation’s urban infrastructure. These technologies include a...

  8. Normalization of High Dimensional Genomics Data Where the Distribution of the Altered Variables Is Skewed

    PubMed Central

    Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per

    2011-01-01

    Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). 
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
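
    Step (2) of the work-flow asks whether the distribution of the truly altered variables is skewed. The DSE-test itself is not described in this summary, so as a stand-in the sketch below applies a plain moment-based skewness statistic to simulated log-ratios; the 30% altered fraction and the effect size are invented for the example.

```python
import random
import statistics

random.seed(3)

def sample_skewness(xs):
    # Third standardized moment: E[(x - mean)^3] / sd^3.
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

# Simulated log-ratios: in the "skewed" experiment, 30% of variables
# are truly up-regulated, shifting mass to the right of the null bulk.
null_data = [random.gauss(0, 1) for _ in range(4000)]
skewed_data = [random.gauss(0, 1) + (3.0 if random.random() < 0.3 else 0.0)
               for _ in range(4000)]
```

    A symmetric null sample gives skewness near zero, while the contaminated sample is visibly right-skewed, which is the situation where the abstract argues standard normalization becomes biased.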

  9. Design and performance investigation of a highly accurate apodized fiber Bragg grating-based strain sensor in single and quasi-distributed systems.

    PubMed

    Ali, Taha A; Shehata, Mohamed I; Mohamed, Nazmi A

    2015-06-01

    In this work, fiber Bragg grating (FBG) strain sensors in single and quasi-distributed systems are investigated, seeking high-accuracy measurement. Since FBG-based strain sensors of small lengths are preferred in medical applications, which causes the full width at half-maximum (FWHM) to be larger, a new apodization profile is introduced for the first time, to the best of our knowledge, with a remarkable FWHM at small sensor lengths compared to the Gaussian and Nuttall profiles, in addition to a higher mainlobe slope at these lengths. A careful selection of apodization profiles with detailed investigation is performed, using sidelobe analysis and the FWHM as primary judgment factors, especially in a quasi-distributed configuration. A comparison between the elite selection of apodization profiles (extracted from the related literature) and the proposed new profile is carried out covering the reflectivity peak, FWHM, and sidelobe analysis. The optimization process concludes that the proposed new profile with a chosen small length (L) of 10 mm and Δn_ac of 1.4×10⁻⁴ is the optimum choice for single-stage and quasi-distributed strain-sensor networks, even better than the Gaussian profile at small sensor lengths. The proposed profile achieves the smallest FWHM of 15 GHz (suitable for UDWDM) and the highest mainlobe slope of 130 dB/nm. For the quasi-distributed scenario, a noteworthy high isolation of 6.953 dB is achieved while applying a high strain value of 1500 μstrain (με) for a five-stage strain-sensing network. Further investigation proved that consistency in choosing the apodization profile across the quasi-distributed network is mandatory, by testing the inclusion of a uniform apodized sensor among other sensors apodized with the proposed profile in an FBG strain-sensor network.

  10. A multidisciplinary approach to the development of low-cost high-performance lightwave networks

    NASA Technical Reports Server (NTRS)

    Maitan, Jacek; Harwit, Alex

    1991-01-01

    Our research focuses on high-speed distributed systems. We anticipate that our results will allow the fabrication of low-cost networks employing multi-gigabit-per-second data links for space and military applications. The recent development of high-speed low-cost photonic components and new generations of microprocessors creates an opportunity to develop advanced large-scale distributed information systems. These systems currently involve hundreds of thousands of nodes and are made up of components and communications links that may fail during operation. In order to realize these systems, research is needed into technologies that foster adaptability and scaleability. Self-organizing mechanisms are needed to integrate a working fabric of large-scale distributed systems. The challenge is to fuse theory, technology, and development methodologies to construct a cost-effective, efficient, large-scale system.

  11. Contiguous metallic rings: an inductive mesh with high transmissivity, strong electromagnetic shielding, and uniformly distributed stray light.

    PubMed

    Tan, Jiubin; Lu, Zhengang

    2007-02-05

    This paper presents an experimental study of an inductive mesh composed of contiguous metallic rings fabricated by UV lithography on quartz glass. Experimental results indicate that, at the same period and linewidth as a square mesh, the ring mesh has better transmissivity owing to its lower obscuration ratio, stronger electromagnetic shielding performance owing to its smaller maximum aperture, and less degradation of imaging quality owing to its lower ratio and more uniform distribution of high-order diffraction energy. It is therefore concluded that this kind of ring mesh can be used as a high-pass filter to provide electromagnetic shielding for optically transparent elements.

  12. High-efficiency reconciliation for continuous variable quantum key distribution

    NASA Astrophysics Data System (ADS)

    Bai, Zengliang; Yang, Shenshen; Li, Yongmin

    2017-04-01

    Quantum key distribution (QKD) is the most mature application of quantum information technology. Information reconciliation is a crucial step in QKD and significantly affects the final secret key rate shared between the two legitimate parties. We analyze and compare various construction methods for low-density parity-check (LDPC) codes and design high-performance irregular LDPC codes with a block length of 10⁶. Starting from these good codes and exploiting the slice reconciliation technique based on multilevel coding and multistage decoding, we realize high-efficiency Gaussian key reconciliation with efficiency higher than 95% for signal-to-noise ratios above 1. Our demonstrated method can be readily applied in continuous variable QKD.
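
    For context on the 95% figure: reconciliation efficiency is conventionally defined as β = R/C, the fraction of the channel's mutual information that the code actually extracts. A minimal sketch, using the Shannon capacity of a real AWGN channel as the reference (the exact mutual-information expression for a given CV-QKD modulation can differ):

```python
import math

def capacity_awgn(snr):
    # Shannon capacity of a real AWGN channel, in bits per channel use.
    return 0.5 * math.log2(1 + snr)

def reconciliation_efficiency(rate, snr):
    # beta = R / C: fraction of the mutual information the code extracts.
    return rate / capacity_awgn(snr)
```

    At a signal-to-noise ratio of 1 the capacity is 0.5 bit per use, so extracting 0.475 bit per use corresponds to the roughly 95% efficiency quoted in the abstract.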

  13. Estimation of lifetime distributions on 1550-nm DFB laser diodes using Monte-Carlo statistic computations

    NASA Astrophysics Data System (ADS)

    Deshayes, Yannick; Verdier, Frederic; Bechou, Laurent; Tregon, Bernard; Danto, Yves; Laffitte, Dominique; Goudard, Jean Luc

    2004-09-01

    High performance and high reliability are two of the most important goals driving the penetration of optical transmission into telecommunication systems ranging from 880 nm to 1550 nm. Lifetime prediction, defined as the time at which a parameter reaches its maximum acceptable shift, remains the main result of reliability estimation for a technology. For optoelectronic emissive components, selection tests and life testing are specifically used for reliability evaluation according to Telcordia GR-468 CORE requirements. This approach is based on extrapolation of degradation laws, grounded in the physics of failure and electrical or optical parameters, allowing both a strong reduction in test time and long-term reliability prediction. Unfortunately, for mature technologies there is growing difficulty in calculating average lifetimes and failure rates (FITs) from ageing tests, in particular due to extremely low failure rates. For present laser diode technologies, times to failure tend to reach 10⁶ hours under typical conditions (Popt = 10 mW and T = 80°C). Ageing tests must therefore be performed on more than 100 components aged for 10,000 hours, mixing different temperature and drive current conditions, leading to acceleration factors above 300-400. Such conditions are costly and time consuming, and cannot give a complete distribution of times to failure. A new approach consists in using statistical computations to extrapolate lifetime distributions and failure rates under operating conditions from the physical parameters of experimental degradation laws. In this paper, distributed feedback single-mode laser diodes (DFB-LDs) used in 1550 nm telecommunication networks at a 2.5 Gbit/s transfer rate are studied. Electrical and optical parameters were measured before and after ageing tests performed at constant current, according to Telcordia GR-468 requirements. Cumulative failure rates and lifetime distributions are computed using statistical calculations and equations of drift mechanisms versus time fitted from experimental measurements.

  14. A parametric approach for simultaneous bias correction and high-resolution downscaling of climate model rainfall

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto; Marrocu, Marino

    2017-03-01

    Distribution mapping has been identified as the most efficient approach to bias-correct climate model rainfall, while reproducing its statistics at spatial and temporal resolutions suitable to run hydrologic models. Yet its implementation based on empirical distributions derived from control samples (referred to as nonparametric distribution mapping) makes the method's performance sensitive to sample length variations, the presence of outliers, and the spatial resolution of climate model results, and may lead to biases, especially in extreme rainfall estimation. To address these shortcomings, we propose a methodology for simultaneous bias correction and high-resolution downscaling of climate model rainfall products that uses: (a) a two-component theoretical distribution model (i.e., a generalized Pareto (GP) model for rainfall intensities above a specified threshold u*, and an exponential model for lower rain rates), and (b) proper interpolation of the corresponding distribution parameters on a user-defined high-resolution grid, using kriging for uncertain data. We assess the performance of the suggested parametric approach relative to the nonparametric one, using daily rain gauge measurements from a dense network on the island of Sardinia (Italy), and rainfall data from four GCM/RCM model chains of the ENSEMBLES project. The obtained results shed light on the competitive advantages of the parametric approach, which proves more accurate and considerably less sensitive to the characteristics of the calibration period, independent of the GCM/RCM combination used. This is especially the case for extreme rainfall estimation, where the GP assumption allows for more accurate and robust estimates, also beyond the range of the available data.
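
    The two-component model can be written down directly: an exponential body below the threshold u and a generalized Pareto tail above it, with bias correction performed by mapping a model value through its own CDF and inverting through the observed (or downscaled) distribution. A minimal sketch with invented parameter values (not the paper's fitted ones), assuming a positive GP shape parameter:

```python
import math

# Parameters: (u, lam, xi, sigma, p_u) = threshold, exponential rate,
# GP shape, GP scale, and non-exceedance probability at the threshold.
MOD = (5.0, 0.3, 0.20, 4.0, 0.80)   # "climate model" distribution (illustrative)
OBS = (4.0, 0.5, 0.15, 3.0, 0.85)   # "observed" distribution (illustrative)

def cdf(x, u, lam, xi, sigma, p_u):
    """Two-component CDF: truncated exponential below u, GP tail above."""
    if x <= u:
        return p_u * (1 - math.exp(-lam * x)) / (1 - math.exp(-lam * u))
    t = 1 + xi * (x - u) / sigma
    return p_u + (1 - p_u) * (1 - t ** (-1 / xi))

def quantile(p, u, lam, xi, sigma, p_u):
    """Inverse of cdf(), branch chosen by the threshold probability p_u."""
    if p <= p_u:
        return -math.log(1 - (p / p_u) * (1 - math.exp(-lam * u))) / lam
    q = (p - p_u) / (1 - p_u)
    return u + (sigma / xi) * ((1 - q) ** (-xi) - 1)

def bias_correct(x, model_params, obs_params):
    # Distribution mapping: push the model value through its own CDF,
    # then invert through the target (observed) distribution.
    return quantile(cdf(x, *model_params), *obs_params)
```

    The parametric form is what makes the interpolation step possible: instead of interpolating whole empirical distributions between stations, only the five parameters need to be kriged onto the high-resolution grid.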

  15. Supercomputing '91; Proceedings of the 4th Annual Conference on High Performance Computing, Albuquerque, NM, Nov. 18-22, 1991

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Various papers on supercomputing are presented. The general topics addressed include: program analysis/data dependence, memory access, distributed memory code generation, numerical algorithms, supercomputer benchmarks, latency tolerance, parallel programming, applications, processor design, networks, performance tools, mapping and scheduling, characterization affecting performance, parallelism packaging, computing climate change, combinatorial algorithms, hardware and software performance issues, system issues. (No individual items are abstracted in this volume)

  16. Size-exclusion chromatography for the determination of the boiling point distribution of high-boiling petroleum fractions.

    PubMed

    Boczkaj, Grzegorz; Przyjazny, Andrzej; Kamiński, Marian

    2015-03-01

    The paper describes a new procedure for the determination of the boiling point distribution of high-boiling petroleum fractions using size-exclusion chromatography with refractive index detection. Thus far, the determination of boiling range distribution by chromatography has been accomplished using simulated distillation by gas chromatography with flame ionization detection. This study revealed that, in spite of substantial differences in the separation mechanism and the detection mode, the size-exclusion chromatography technique yields results similar to those of simulated distillation and the novel empty-column gas chromatography for the determination of boiling point distribution. The developed procedure using size-exclusion chromatography has substantial applicability, especially for the determination of exact final boiling point values for high-boiling mixtures, for which standard high-temperature simulated distillation would otherwise have to be used. In that case, the precision of final boiling point determination is low due to the high final temperatures of the gas chromatograph oven and the insufficient thermal stability of both the gas chromatography stationary phase and the sample. Additionally, the use of high-performance liquid chromatography detectors more sensitive than refractive index detection allows a lower detection limit for high-molar-mass aromatic compounds, and thus increases the sensitivity of final boiling point determination. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Numerical Investigation of Fuel Distribution Effect on Flow and Temperature Field in a Heavy Duty Gas Turbine Combustor

    NASA Astrophysics Data System (ADS)

    Deng, Xiaowen; Xing, Li; Yin, Hong; Tian, Feng; Zhang, Qun

    2018-03-01

    A multiple-swirler structure is commonly adopted as a combustion design strategy in heavy duty gas turbines, as it can shorten the flame brush length and reduce emissions. In engineering applications, a small amount of gas fuel is distributed to non-premixed combustion as a pilot flame, while most fuel is supplied to the main burner for premixed combustion. The effect of this fuel distribution on the flow and temperature fields, and thus on combustor performance, is a significant issue. This paper investigates the effect of fuel distribution on combustor performance by adjusting the pilot/main burner fuel split. Five pilot fuel percentages are considered: 3%, 5%, 7%, 10%, and 13%. All five schemes are computed and examined in detail, and the flow and temperature fields are compared, particularly around the multiple swirlers. Computational results show that an optimum pilot fuel percentage exists at the base-load combustion condition, and a pilot fuel percentage curve is calculated to optimize combustion operation. With the present combustor structure and fuel distribution scheme, the combustion achieves high efficiency with an acceptable OTDF and low NOx emissions. The CO emissions are also presented.

  18. β-Cobalt sulfide nanoparticles decorated graphene composite electrodes for high capacity and power supercapacitors

    NASA Astrophysics Data System (ADS)

    Qu, Baihua; Chen, Yuejiao; Zhang, Ming; Hu, Lingling; Lei, Danni; Lu, Bingan; Li, Qiuhong; Wang, Yanguo; Chen, Libao; Wang, Taihong

    2012-11-01

    Electrochemical supercapacitors have drawn much attention because of their high power and reasonably high energy densities. However, their performance still does not meet the demands of energy storage. In this paper, β-cobalt sulfide nanoparticles were homogeneously distributed on highly conductive graphene to form a CS-G nanocomposite, as confirmed by transmission electron microscopy. The composite exhibits excellent electrochemical performance, including an extremely high specific capacitance (~1535 F g-1) at a current density of 2 A g-1, high power density (11.98 kW kg-1) at a discharge current density of 40 A g-1, and excellent cyclic stability. This excellent electrochemical performance can be attributed to the graphene nanosheets (GNSs), which maintain the mechanical integrity of the electrode, and to the high electrical conductivity of the CS-G nanocomposite electrodes. These results indicate that the high electronic conductivity of graphene nanocomposite materials is crucial to achieving high power and energy density in supercapacitors.

  19. β-Cobalt sulfide nanoparticles decorated graphene composite electrodes for high capacity and power supercapacitors.

    PubMed

    Qu, Baihua; Chen, Yuejiao; Zhang, Ming; Hu, Lingling; Lei, Danni; Lu, Bingan; Li, Qiuhong; Wang, Yanguo; Chen, Libao; Wang, Taihong

    2012-12-21

    Electrochemical supercapacitors have drawn much attention because of their high power and reasonably high energy densities. However, their performance still does not meet the demands of energy storage. In this paper, β-cobalt sulfide nanoparticles were homogeneously distributed on highly conductive graphene to form a CS-G nanocomposite, as confirmed by transmission electron microscopy. The composite exhibits excellent electrochemical performance, including an extremely high specific capacitance (~1535 F g(-1)) at a current density of 2 A g(-1), high power density (11.98 kW kg(-1)) at a discharge current density of 40 A g(-1), and excellent cyclic stability. This excellent electrochemical performance can be attributed to the graphene nanosheets (GNSs), which maintain the mechanical integrity of the electrode, and to the high electrical conductivity of the CS-G nanocomposite electrodes. These results indicate that the high electronic conductivity of graphene nanocomposite materials is crucial to achieving high power and energy density in supercapacitors.

  20. Room temperature solvent-free reduction of SiCl4 to nano-Si for high-performance Li-ion batteries.

    PubMed

    Liu, Zhiliang; Chang, Xinghua; Sun, Bingxue; Yang, Sungjin; Zheng, Jie; Li, Xingguo

    2017-06-06

    SiCl4 can be directly reduced to nano-Si with commercial Na metal under solvent-free conditions by mechanical milling. Crystalline nano-Si with an average size of 25 nm and a quite uniform size distribution can be obtained, which shows excellent lithium storage performance, with a high reversible capacity of 1600 mA h g-1 after 500 cycles at 2.1 A g-1.

  1. Stability Analysis of High-Speed Boundary-Layer Flow with Gas Injection

    DTIC Science & Technology

    2014-06-01

    Stability analysis of high-speed boundary-layer flow with gas injection, by Alexander V. Fedorov and Vitaly G. Soudakov (the report form also lists Ivett A. Leyva). Approved for public release; distribution unlimited. Excerpt: for cases of low injection rates in which the N-factors in the near-field region are below the critical level, shaping can produce a significant...

  2. Stability Analysis of High-Speed Boundary-Layer Flow with Gas Injection (Briefing Charts)

    DTIC Science & Technology

    2014-06-01

    Authors: Alexander Fedorov and Vitaly Soudakov (Moscow), with Ivett A. Leyva. ...for cases of low injection rates in which the N-factors in the near-field region are below the critical level, shaping can produce a significant... Approved for Public Release; Distribution Unlimited.

  3. Interactions Between Structure and Processing that Control Moisture Uptake in High-Performance Polycyanurates (Briefing Charts)

    DTIC Science & Technology

    2015-03-24

    Distribution is unlimited. Presenter: Dr... (Edwards AFB, CA; California State University, Long Beach, CA 90840). Outline: basic studies of moisture uptake in cyanate ester networks: background and motivation; state-of-the-art (SOTA) theories of moisture uptake in thermosetting networks; new tools and new discoveries; unresolved issues and ways to address them.

  4. Does greater thermal plasticity facilitate range expansion of an invasive terrestrial anuran into higher latitudes?

    PubMed

    Winwood-Smith, Hugh S; Alton, Lesley A; Franklin, Craig E; White, Craig R

    2015-01-01

    Temperature has pervasive effects on physiological processes and is critical in setting species distribution limits. Since invading Australia, cane toads have spread rapidly across low latitudes, but slowly into higher latitudes. Low temperature is the likely factor limiting high-latitude advancement. Several previous attempts have been made to predict future cane toad distributions in Australia, but understanding the potential contribution of phenotypic plasticity and adaptation to future range expansion remains challenging. Previous research demonstrates the considerable thermal metabolic plasticity of the cane toad, but suggests limited thermal plasticity of locomotor performance. Additionally, the oxygen-limited thermal tolerance hypothesis predicts that reduced aerobic scope sets thermal limits for ectotherm performance. Metabolic plasticity, locomotor performance and aerobic scope are therefore predicted targets of natural selection as cane toads invade colder regions. We measured these traits at temperatures of 10, 15, 22.5 and 30°C in low- and high-latitude toads acclimated to 15 and 30°C, to test the hypothesis that cane toads have adapted to cooler temperatures. High-latitude toads show increased metabolic plasticity and higher resting metabolic rates at lower temperatures. Burst locomotor performance was worse for high-latitude toads. Other traits showed no regional differences. We conclude that increased metabolic plasticity may facilitate invasion into higher latitudes by maintaining critical physiological functions at lower temperatures.

  5. An Ephemeral Burst-Buffer File System for Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Teng; Moody, Adam; Yu, Weikuan

    BurstFS is a distributed file system for node-local burst buffers on high performance computing systems. It presents a shared file system space across the burst buffers so that applications that use shared files can access the highly scalable burst buffers without modification.

  6. Assessment of a membrane drinking water filter in an emergency setting.

    PubMed

    Ensink, Jeroen H J; Bastable, Andy; Cairncross, Sandy

    2015-06-01

    The performance and acceptability of the Nerox(TM) membrane drinking water filter were evaluated among an internally displaced population in Pakistan. The membrane filter and a control ceramic candle filter were distributed to over 3,000 households. Following a 6-month period, 230 households were visited and filter performance and use were assessed. Only 6% of the visited households still had a functioning filter, and the removal performance ranged from 80 to 93%. High turbidity in source water (irrigation canals), together with high temperatures and large family size were likely to have contributed to poor performance and uptake of the filters.

  7. Building high-performance system for processing a daily large volume of Chinese satellites imagery

    NASA Astrophysics Data System (ADS)

    Deng, Huawu; Huang, Shicun; Wang, Qi; Pan, Zhiqiang; Xin, Yubin

    2014-10-01

    The number of Chinese Earth observation satellites has increased dramatically in recent years, and those satellites acquire a large volume of imagery daily. As the main portal for image processing and distribution from these satellites, the China Centre for Resources Satellite Data and Application (CRESDA) has been working with PCI Geomatics over the last three years to solve two problems: processing the large volume of data (about 1,500 scenes, or 1 TB, per day) in a timely manner, and generating geometrically accurate orthorectified products. After three years of research and development, a high performance system has been built and successfully delivered. The system has a service-oriented architecture and can be deployed to a cluster of computers configured with high-end computing power. The high performance is gained, first, by parallelizing image processing algorithms across high-performance graphics processing unit (GPU) cards and multiple CPU cores and, second, by distributing processing tasks across a cluster of computing nodes. While achieving up to thirty (or more) times faster performance than the traditional practice, a particular methodology was also developed to improve the geometric accuracy of images acquired from Chinese satellites (including HJ-1 A/B, ZY-1-02C, ZY-3, GF-1, etc.). The methodology consists of fully automatic collection of dense ground control points (GCPs) from various sources, followed by application of those points to refine the photogrammetric model of the images. The delivered system is up and running at CRESDA for pre-operational production and has generated a good return on investment by eliminating a great amount of manual labor and increasing daily data throughput more than tenfold with fewer operators. 
Future work, such as development of more performance-optimized algorithms, robust image matching methods, and application workflows, has been identified to improve the system in the coming years.
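    The task-distribution strategy described above (independent scenes fanned out to a pool of workers) can be sketched in miniature. The `orthorectify` stand-in and the worker count below are purely illustrative; on a real cluster the same fan-out pattern targets compute nodes rather than local threads:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def orthorectify(scene_id):
        # Stand-in for the per-scene work (GCP matching, model refinement,
        # resampling); it just tags the scene so the dispatch is visible.
        return scene_id + ":done"

    def process_batch(scene_ids, workers=4):
        # Independent scenes are dispatched to a pool of workers; map()
        # preserves input order when collecting results.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(orthorectify, scene_ids))

    results = process_batch(["scene-0001", "scene-0002", "scene-0003"])
    ```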

  8. Evaluation of the Performance of the Distributed Phased-MIMO Sonar.

    PubMed

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-11

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. Beamforming is used at the receiving end to enhance weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. In the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in both numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localizing a target in lake experiments.

  9. Evaluation of the Performance of the Distributed Phased-MIMO Sonar

    PubMed Central

    Pan, Xiang; Jiang, Jingning; Wang, Nan

    2017-01-01

    A broadband signal model is proposed for a distributed multiple-input multiple-output (MIMO) sonar system consisting of two transmitters and a receiving linear array. The transmitters are widely separated to illuminate different aspects of an extended target of interest. Beamforming is used at the receiving end to enhance weak target echoes. A MIMO detector is designed with the estimated target position parameters within the generalized likelihood ratio test (GLRT) framework. In the high signal-to-noise ratio case, the detection performance of the MIMO system is better than that of the phased-array system in both numerical simulations and tank experiments. The robustness of the distributed phased-MIMO sonar system is further demonstrated by localizing a target in lake experiments. PMID:28085071

  10. Pressure distribution on mattresses.

    PubMed

    Nicol, K; Rusteberg, D

    1993-12-01

    Measurements of pressure distribution are usually performed on a hard base, as in gait analysis or tire research; measurements on soft surfaces are avoided because of technical problems. A sensor mat was developed that consists of 512 pressure sensors glued to arbitrary locations on a fabric. The mat can be bent into spherical and saddle shapes so that it can be used on soft, flexible surfaces such as chairs and beds. The performance of eight hospital mattresses, with respect to decubitus (pressure-sore) prophylaxis and support in the supine and side positions, was studied in four subjects representing extreme body builds. One particular mattress served well for three of the subjects, whereas no mattress was suitable for the tall, heavy body type. It was concluded that measurement of pressure distribution is a valuable tool for designing and selecting mattresses.

  11. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experience in parallelizing the sequential implementation of the NAS benchmarks using compiler directives on an SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the performance gain is comparable to that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
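    Loop-level parallelism of the kind these directives expose amounts to splitting a loop's iteration space into disjoint chunks handled by separate threads. A minimal Python analogue of the idea (illustrative only; the names and chunking scheme are not the SGI directive syntax):

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def axpy_chunk(a, x, y, lo, hi):
        # Each worker updates a disjoint slice of the iteration space,
        # which is what a loop-level multiprocessing directive arranges.
        for i in range(lo, hi):
            y[i] += a * x[i]

    def parallel_axpy(a, x, y, workers=4):
        n = len(x)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for w in range(workers):
                lo, hi = w * n // workers, (w + 1) * n // workers
                pool.submit(axpy_chunk, a, x, y, lo, hi)
        # Exiting the with-block waits for all chunks to finish.
        return y

    result = parallel_axpy(2.0, [1.0, 2.0, 3.0, 4.0, 5.0], [0.0] * 5)
    ```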

  12. High Performance Data Transfer for Distributed Data Intensive Sciences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Chin; Cottrell, R 'Les' A.; Hanushevsky, Andrew B.

    We report on the development of ZX software providing high performance data transfer and encryption. The design scales in computation power, network interfaces, and IOPS while carefully balancing the available resources. Two U.S. patent-pending algorithms help tackle data sets containing lots of small files and very large files, and provide insensitivity to network latency. It has a cluster-oriented architecture, using peer-to-peer technologies to ease deployment, operation, usage, and resource discovery. Its unique optimizations enable effective use of flash memory. Using a pair of existing data transfer nodes at SLAC and NERSC, we compared its performance to that of bbcp and GridFTP and determined that they were comparable. With a proof of concept created using two four-node clusters with multiple distributed multi-core CPUs, network interfaces, and flash memory, we achieved 155 Gbps memory-to-memory over a 2x100 Gbps link-aggregated channel and 70 Gbps file-to-file with encryption over a 5000-mile 100 Gbps link.

  13. A Full Navier-Stokes Analysis of Subsonic Diffuser of a Bifurcated 70/30 Supersonic Inlet for High Speed Civil Transport Application

    NASA Technical Reports Server (NTRS)

    Kapoor, Kamlesh; Anderson, Bernhard H.; Shaw, Robert J.

    1994-01-01

    A full Navier-Stokes analysis was performed to evaluate the performance of the subsonic diffuser of a NASA Lewis Research Center 70/30 mixed-compression bifurcated supersonic inlet for high speed civil transport application. The PARC3D code was used in the present study. The computations were also performed when approximately 2.5 percent of the engine mass flow was allowed to bypass through the engine bypass doors. The computational results were compared with the available experimental data which consisted of detailed Mach number and total pressure distribution along the entire length of the subsonic diffuser. The total pressure recovery, flow distortion, and crossflow velocity at the engine face were also calculated. The computed surface ramp and cowl pressure distributions were compared with experiments. Overall, the computational results compared well with experimental data. The present CFD analysis demonstrated that the bypass flow improves the total pressure recovery and lessens flow distortions at the engine face.

  14. Six weeks of a polarized training-intensity distribution leads to greater physiological and performance adaptations than a threshold model in trained cyclists.

    PubMed

    Neal, Craig M; Hunter, Angus M; Brennan, Lorraine; O'Sullivan, Aifric; Hamilton, D Lee; De Vito, Giuseppe; Galloway, Stuart D R

    2013-02-15

    This study was undertaken to investigate physiological adaptation with two endurance-training periods differing in intensity distribution. In a randomized crossover fashion, separated by 4 wk of detraining, 12 male cyclists completed two 6-wk training periods: 1) a polarized model [6.4 (±1.4 SD) h/wk; 80%, 0%, and 20% of training time in low-, moderate-, and high-intensity zones, respectively]; and 2) a threshold model [7.5 (±2.0 SD) h/wk; 57%, 43%, and 0% training-intensity distribution]. Before and after each training period, following 2 days of diet and exercise control, fasted skeletal muscle biopsies were obtained for mitochondrial enzyme activity and monocarboxylate transporter (MCT) 1 and 4 expression, and morning first-void urine samples were collected for NMR spectroscopy-based metabolomics analysis. Endurance performance (40-km time trial), incremental exercise, peak power output (PPO), and high-intensity exercise capacity (95% maximal work rate to exhaustion) were also assessed. Endurance performance, PPOs, lactate threshold (LT), MCT4, and high-intensity exercise capacity all increased over both training periods. Improvements were greater following polarized rather than threshold for PPO [mean (±SE) change of 8 (±2)% vs. 3 (±1)%, P < 0.05], LT [9 (±3)% vs. 2 (±4)%, P < 0.05], and high-intensity exercise capacity [85 (±14)% vs. 37 (±14)%, P < 0.05]. No changes in mitochondrial enzyme activities or MCT1 were observed following training. A significant multilevel, partial least squares-discriminant analysis model was obtained for the threshold model but not the polarized model in the metabolomics analysis. A polarized training distribution results in greater systemic adaptation over 6 wk in already well-trained cyclists. Markers of muscle metabolic adaptation are largely unchanged, but metabolomics markers suggest different cellular metabolic stress that requires further investigation.
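    The two training-intensity distributions compared in the study are just percentage splits of weekly training time. A small sketch of that bookkeeping; the per-zone hours below are back-calculated from the reported totals and percentages, so they are illustrative:

    ```python
    def intensity_distribution(zone_hours):
        # zone_hours: weekly hours in (low, moderate, high) intensity zones;
        # returns the percentage of training time spent in each zone.
        total = sum(zone_hours)
        return tuple(round(100 * h / total) for h in zone_hours)

    # Polarized model: 6.4 h/wk split 80/0/20; threshold model: 7.5 h/wk
    # split 57/43/0 (hours per zone back-calculated for illustration).
    polarized = intensity_distribution((5.12, 0.0, 1.28))
    threshold = intensity_distribution((4.275, 3.225, 0.0))
    ```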

  15. Issues in ATM Support of High-Performance, Geographically Distributed Computing

    NASA Technical Reports Server (NTRS)

    Claus, Russell W.; Dowd, Patrick W.; Srinidhi, Saragur M.; Blade, Eric D. G.

    1995-01-01

    This report experimentally assesses the effect of the underlying network in a cluster-based computing environment. The assessment is quantified by application-level benchmarking, process-level communication, and network file input/output. Two testbeds were considered, one small cluster of Sun workstations and another large cluster composed of 32 high-end IBM RS/6000 platforms. The clusters had Ethernet, fiber distributed data interface (FDDI), Fibre Channel, and asynchronous transfer mode (ATM) network interface cards installed, providing the same processors and operating system for the entire suite of experiments. The primary goal of this report is to assess the suitability of an ATM-based, local-area network to support interprocess communication and remote file input/output systems for distributed computing.

  16. Computational Analysis of a Wing Designed for the X-57 Distributed Electric Propulsion Aircraft

    NASA Technical Reports Server (NTRS)

    Deere, Karen A.; Viken, Jeffrey K.; Viken, Sally A.; Carter, Melissa B.; Wiese, Michael R.; Farr, Norma L.

    2017-01-01

    A computational study of the wing for the distributed electric propulsion X-57 Maxwell airplane configuration at cruise and takeoff/landing conditions was completed. Two unstructured-mesh, Navier-Stokes computational fluid dynamics methods, FUN3D and USM3D, were used to predict the wing performance. The goal of the X-57 wing and distributed electric propulsion system design was to meet or exceed the required lift coefficient of 3.95 for a stall speed of 58 knots, with a cruise speed of 150 knots at an altitude of 8,000 ft. The X-57 Maxwell airplane was designed with a small, high-aspect-ratio cruise wing designed for a high cruise lift coefficient (0.75) at an angle of attack of 0°. The cruise propulsors at the wingtip rotate counter to the wingtip vortex and reduce induced drag by 7.5 percent at an angle of attack of 0.6°. The unblown maximum lift coefficient of the high-lift wing (with the 30° flap setting) is 2.439. The stall speed goal performance metric was confirmed with a blown-wing computed effective lift coefficient of 4.202. The lift augmentation from the high-lift, distributed electric propulsion system is 1.7. The predicted cruise wing drag coefficient of 0.02191 is 0.00076 above the drag allotted for the wing in the original estimate. However, the predicted drag overage for the wing would use only 10.1 percent of the original estimated drag margin, which is 0.00749.
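    The drag-margin arithmetic quoted in the abstract can be verified directly from the two coefficients it reports:

    ```python
    # Checking the drag-margin arithmetic reported in the abstract.
    overage = 0.00076   # predicted wing drag coefficient above the allotment
    margin = 0.00749    # original estimated drag margin
    fraction_used = 100 * overage / margin
    print(round(fraction_used, 1))  # -> 10.1, matching the quoted 10.1 percent
    ```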

  17. Distribution of biomolecules in porous nitrocellulose membrane pads using confocal laser scanning microscopy and high-speed cameras.

    PubMed

    Mujawar, Liyakat Hamid; Maan, Abid Aslam; Khan, Muhammad Kashif Iqbal; Norde, Willem; van Amerongen, Aart

    2013-04-02

    The main focus of our research was to study the distribution of inkjet printed biomolecules in porous nitrocellulose membrane pads of different brands. We produced microarrays of fluorophore-labeled IgG and bovine serum albumin (BSA) on FAST, Unisart, and Oncyte-Avid slides and compared the spot morphology of the inkjet printed biomolecules. The distribution of these biomolecules within the spot embedded in the nitrocellulose membrane was analyzed by confocal laser scanning microscopy in the "Z" stack mode. By applying a "concentric ring" format, the distribution profile of the fluorescence intensity in each horizontal slice was measured and represented in a graphical color-coded way. Furthermore, a one-step diagnostic antibody assay was performed with a primary antibody, double-labeled amplicons, and fluorophore-labeled streptavidin in order to study the functionality and distribution of the immune complex in the nitrocellulose membrane slides. Under the conditions applied, the spot morphology and distribution of the primary labeled biomolecules was nonhomogenous and doughnut-like on the FAST and Unisart nitrocellulose slides, whereas a better spot morphology with more homogeneously distributed biomolecules was observed on the Oncyte-Avid slide. Similar morphologies and distribution patterns were observed when the diagnostic one-step nucleic acid microarray immunoassay was performed on these nitrocellulose slides. We also investigated possible reasons for the differences in the observed spot morphology by monitoring the dynamic behavior of a liquid droplet on and in these nitrocellulose slides. Using high speed cameras, we analyzed the wettability and fluid flow dynamics of a droplet on the various nitrocellulose substrates. The spreading of the liquid droplet was comparable for the FAST and Unisart slides but different, i.e., slower, for the Oncyte-Avid slide. 
The results of the spreading of the droplet and the penetration behavior of the liquid in the nitrocellulose membrane may (partly) explain the distribution of the biomolecules in the different slides. To our knowledge, this is the first time that fluid dynamics in diagnostic membranes have been analyzed by the use of high-speed cameras.

  18. Distributed communication and psychosocial performance in simulated space dwelling groups

    NASA Astrophysics Data System (ADS)

    Hienz, R. D.; Brady, J. V.; Hursh, S. R.; Ragusa, L. C.; Rouse, C. O.; Gasior, E. D.

    2005-05-01

    The present report describes the development and application of a distributed interactive multi-person simulation in a computer-generated planetary environment as an experimental test bed for modeling the human performance effects of variations in the types of communication modes available, and in the types of stress and incentive conditions underlying the completion of mission goals. The results demonstrated a high degree of interchangeability between communication modes (audio, text) when one mode was not available. Additionally, the addition of time pressure stress to complete tasks resulted in a reduction in performance effectiveness, and these performance reductions were ameliorated via the introduction of positive incentives contingent upon improved performances. The results obtained confirmed that cooperative and productive psychosocial interactions can be maintained between individually isolated and dispersed members of simulated spaceflight crews communicating and problem-solving effectively over extended time intervals without the benefit of one another's physical presence.

  19. Sensitivity of fenestration solar gain to source spectrum and angle of incidence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCluney, W.R.

    1996-12-31

    The solar heat gain coefficient (SHGC) is the fraction of solar radiant flux incident on a fenestration system entering a building as heat gain. In general it depends on both the angle of incidence and the spectral distribution of the incident solar radiation. In attempts to improve energy performance and user acceptance of high-performance glazing systems, manufacturers are producing glazing systems with increasing spectral selectivity. This poses potential difficulties for calculations of solar heat gain through windows based upon the use of a single solar spectral weighting function. The sensitivity of modern high-performance glazing systems to both the angle of incidence and the shape of the incident solar spectrum is examined using a glazing performance simulation program. It is found that as the spectral selectivity of the glazing system increases, the SHGC can vary as the incident spectral distribution varies. The variations can be as great as 50% when using several different representative direct-beam spectra. These include spectra having low and high air masses and a standard spectrum having an air mass of 1.5. The variations can be even greater if clear blue diffuse skylight is considered. It is recommended that the current broad-band shading coefficient method of calculating solar gain be replaced by one that is spectrally based.
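    The spectral weighting at issue can be written as SHGC = ∫g(λ)E(λ)dλ / ∫E(λ)dλ, where g(λ) is the glazing's spectral solar gain factor and E(λ) the incident spectral irradiance. A minimal sketch with illustrative tabulated data (the wavelength grid and values are assumptions, not standard spectra):

    ```python
    def spectral_shgc(wavelengths, g, irradiance):
        # Spectrally weighted solar heat gain coefficient:
        #   SHGC = integral(g * E) / integral(E)
        # using trapezoidal integration over tabulated data.
        def trapz(y):
            return sum((y[i] + y[i + 1]) * (wavelengths[i + 1] - wavelengths[i]) / 2
                       for i in range(len(wavelengths) - 1))
        return trapz([gi * ei for gi, ei in zip(g, irradiance)]) / trapz(irradiance)

    # A spectrally flat glazing recovers its broadband value regardless of
    # the incident spectrum; a selective glazing would not.
    flat = spectral_shgc([300, 800, 1300, 2500], [0.5] * 4, [1.0, 1.5, 1.0, 0.2])
    ```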

  20. Open source acceleration of wave optics simulations on energy efficient high-performance computing platforms

    NASA Astrophysics Data System (ADS)

    Beck, Jeffrey; Bos, Jeremy P.

    2017-05-01

    We compare several modifications to the open-source wave optics package WavePy intended to improve execution time. Specifically, we compare the relative performance of the Intel MKL, a CPU-based OpenCV distribution, and a GPU-based version. Performance is compared between distributions both on the same compute platform and between a fully featured computing workstation and the NVIDIA Jetson TX1 platform. Comparisons are drawn in terms of both execution time and power consumption. We have found that substituting the Fast Fourier Transform operation from OpenCV provides a marked improvement on all platforms. In addition, we show that embedded platforms offer the possibility of extensive improvements in efficiency compared with a fully featured workstation.
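    A comparison like this (same operation, different FFT backends, same platform) reduces to a small timing harness. This sketch uses only the standard library; the backend names in the comment are assumptions, not WavePy's API:

    ```python
    import timeit

    def best_time(fn, arg, repeats=5):
        # Best-of-N wall-clock time for a single call; the minimum is the
        # usual choice because it is least contaminated by other activity.
        return min(timeit.repeat(lambda: fn(arg), number=1, repeat=repeats))

    # Usage (hypothetical backends): compare, e.g.,
    #   best_time(numpy.fft.fft2, field)  vs  best_time(cv2_fft2_wrapper, field)
    # on the same input array, on each platform of interest.
    ```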

  1. Climate suitability for European ticks: assessing species distribution models against null models and projection under AR5 climate.

    PubMed

    Williams, Hefin Wyn; Cross, Dónall Eoin; Crump, Heather Louise; Drost, Cornelis Jan; Thomas, Christopher James

    2015-08-28

    There is increasing evidence that the geographic distribution of tick species is changing. Whilst correlative Species Distribution Models (SDMs) have been used to predict areas that are potentially suitable for ticks, models have often been assessed without due consideration for spatial patterns in the data that may inflate the influence of predictor variables on species distributions. This study used null models to rigorously evaluate the role of climate and the potential for climate change to affect future climate suitability for eight European tick species, including several important disease vectors. We undertook a comparative assessment of the performance of Maxent and Mahalanobis Distance SDMs based on observed data against those of null models based on null species distributions or null climate data. This enabled the identification of species whose distributions demonstrate a significant association with climate variables. Latest generation (AR5) climate projections were subsequently used to project future climate suitability under four Representative Concentration Pathways (RCPs). Seven out of eight tick species exhibited strong climatic signals within their observed distributions. Future projections intimate varying degrees of northward shift in climate suitability for these tick species, with the greatest shifts forecasted under the most extreme RCPs. Despite the high performance measure obtained for the observed model of Hyalomma lusitanicum, it did not perform significantly better than null models; this may result from the effects of non-climatic factors on its distribution. By comparing observed SDMs with null models, our results allow confidence that we have identified climate signals in tick distributions that are not simply a consequence of spatial patterns in the data. 
Observed climate-driven SDMs for seven out of eight species performed significantly better than null models, demonstrating the vulnerability of these tick species to the effects of climate change in the future.
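    The null-model comparison the study performs can be sketched generically: score the observed model, rescore it over many randomized versions of the species data, and ask how often chance does as well. This is a schematic illustration of a permutation null, not the authors' Maxent/Mahalanobis pipeline; the scoring function and labels below are illustrative:

    ```python
    import random

    def null_model_p_value(score_fn, labels, n_null=999, seed=42):
        # Score the observed labels, then score many random relabelings (a
        # simple null); the p-value is the fraction of null scores that match
        # or beat the observed one, with the usual +1 correction.
        observed = score_fn(labels)
        rng = random.Random(seed)
        shuffled = list(labels)
        exceed = 0
        for _ in range(n_null):
            rng.shuffle(shuffled)
            if score_fn(shuffled) >= observed:
                exceed += 1
        return (exceed + 1) / (n_null + 1)

    # Example: a model that reproduces the observed presences/absences
    # perfectly should beat virtually every null relabeling.
    prediction = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
    p = null_model_p_value(
        lambda labs: sum(a == b for a, b in zip(prediction, labs)),
        prediction)
    ```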

  2. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally uses Monte-Carlo methods to calculate the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX can build a precise voxel model consisting of pixel-based voxel cells at a scale of 0.4×0.4×2.0 mm(3) per voxel in order to perform high-accuracy dose estimation, e.g. for calculating the dose distribution in a human body. However, reducing the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods that can perform Monte-Carlo calculations for human geometry efficiently. We therefore devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Effect of size distribution on magnetic properties in cobalt nanowires

    NASA Astrophysics Data System (ADS)

    Xu, Huanhuan; Wu, Qiong; Yue, Ming; Li, Chenglin; Li, Hongjian; Palaka, Subhashini

    2018-05-01

    Cobalt nanowires were synthesized by reduction of carboxylate salts of Co in 1,2-butanediol using a solvothermal chemical process. These nanowires crystallize in the hcp structure, and the growth axis is parallel to the crystallographic c-axis. Nanowires prepared with mechanical stirring during the earlier stage of the reaction exhibit a smaller average aspect ratio but a narrower size distribution. Assemblies of these stirred nanowires show almost the same coercivity and remanent magnetization, but a 59% increase in magnetic energy product. This remarkable improvement in energy product is further explained by micromagnetic simulations. The magnetic performance of the Co nanowires at various temperatures is also presented. Such ferromagnetic nanowires could be ideal new building blocks for permanent magnets with high performance and high thermal stability.

  4. Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2002-01-01

    The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique for solving sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications: ordering significantly improves overall performance on both distributed and distributed shared-memory systems; cache reuse may be more important than reducing communication; it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
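    For reference, the unpreconditioned CG iteration underlying the methods benchmarked above, sketched in plain Python for a small dense symmetric positive definite system (production codes use sparse storage and, as in the paper, a preconditioner):

    ```python
    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        # Solve A x = b for symmetric positive definite A (list-of-lists).
        n = len(b)
        x = [0.0] * n
        r = b[:]                     # residual r = b - A x (x = 0 initially)
        p = r[:]                     # initial search direction
        rs_old = sum(ri * ri for ri in r)
        for _ in range(max_iter):
            Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
            alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
            x = [xi + alpha * pi for xi, pi in zip(x, p)]
            r = [ri - alpha * api for ri, api in zip(r, Ap)]
            rs_new = sum(ri * ri for ri in r)
            if rs_new < tol * tol:   # converged: residual norm below tol
                break
            p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
            rs_old = rs_new
        return x

    # 2x2 SPD example: 4x + y = 1, x + 3y = 2  =>  x = 1/11, y = 7/11.
    x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
    ```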

  5. Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope.

    PubMed

    Harada, Ken; Akashi, Tetsuya; Niitsu, Kodai; Shimada, Keiko; Ono, Yoshimasa A; Shindo, Daisuke; Shinada, Hiroyuki; Mori, Shigeo

    2018-01-17

    Advanced electron microscopy technologies have made it possible to perform precise double-slit interference experiments. We used a 1.2-MV field emission electron microscope providing coherent electron waves and a direct-detection camera system enabling single-electron detection at sub-second exposure times. We developed a method to perform the interference experiment using an asymmetric double slit fabricated by a focused ion beam instrument and by operating the microscope under a "pre-Fraunhofer" condition, different from the Fraunhofer condition of conventional double-slit experiments. Here, the pre-Fraunhofer condition means that each single-slit observation was performed under the Fraunhofer condition, while the double-slit observations were performed under the Fresnel condition. The interference experiments with each single slit and with the asymmetric double slit were carried out under two different electron dose conditions: high dose for calculating the electron probability distribution, and low dose for the distribution of individual electrons. Finally, we visualized the distribution of single electrons as a composite image, color-coded according to the three types of experiments above.

  6. Ring-array processor distribution topology for optical interconnects

    NASA Technical Reports Server (NTRS)

    Li, Yao; Ha, Berlin; Wang, Ting; Wang, Sunyu; Katz, A.; Lu, X. J.; Kanterakis, E.

    1992-01-01

    The existing linear and rectangular processor distribution topologies for optical interconnects, although promising in many respects, cannot solve problems such as clock skews, the lack of supporting elements for efficient optical implementation, etc. The use of a ring-array processor distribution topology, however, can overcome these problems. Here, a study of the ring-array topology is conducted with an aim of implementing various fast clock rate, high-performance, compact optical networks for digital electronic multiprocessor computers. Practical design issues are addressed. Some proof-of-principle experimental results are included.

  7. Fission meter and neutron detection using Poisson distribution comparison

    DOEpatents

    Rowland, Mark S; Snyderman, Neal J

    2014-11-18

    A neutron detector system and method for discriminating fissile material from non-fissile material, wherein a digital data acquisition unit collects data at a high rate and, in real time, processes large volumes of data directly into information that a first responder can use to discriminate materials. The system counts neutrons from the unknown source and detects excess grouped neutrons to identify fission in the unknown source. The observed neutron count distribution is compared with a Poisson distribution to distinguish fissile material from non-fissile material.
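    The Poisson-comparison idea can be illustrated with the excess variance-to-mean ratio (often called Feynman-Y): a purely random source yields counts whose variance equals their mean (Y near 0), while fission chains emit correlated bursts that over-disperse the counts (Y > 0). The sketch below is a hypothetical Python illustration, not the patented system's actual processing; the simulated sources and multiplicities are assumptions.

```python
import random

def feynman_y(counts):
    """Excess variance-to-mean ratio: near 0 for Poisson-like counts,
    positive when neutrons arrive in correlated bursts (fission chains)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / n
    return var / mean - 1.0

random.seed(1)

# Random (non-fissile-like) source: binomial gate counts, which
# approximate a Poisson distribution for small per-trial probability.
poisson_counts = [sum(1 for _ in range(100) if random.random() < 0.05)
                  for _ in range(5000)]

# Burst-like (fissile-like) source: each event contributes a multiplet
# of neutrons (0 or 3 here), producing over-dispersed gate counts.
burst_counts = [sum(random.choice([0, 3]) for _ in range(5))
                for _ in range(5000)]

y_random = feynman_y(poisson_counts)   # expected near 0
y_burst = feynman_y(burst_counts)      # expected clearly positive
```

The discriminator is simply that y_burst exceeds y_random by a clear margin, flagging grouped (fission-like) neutron arrivals.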

  8. Environmental tolerances of rare and common mangroves along light and salinity gradients.

    PubMed

    Dangremond, Emily M; Feller, Ilka C; Sousa, Wayne P

    2015-12-01

    Although mangroves possess a variety of morphological and physiological adaptations for life in a stressful habitat, interspecific differences in survival and growth under different environmental conditions can shape their local and geographic distributions. Soil salinity and light are known to affect mangrove performance, often in an interactive fashion. It has also been hypothesized that mangroves are intrinsically shade intolerant due to the high physiological cost of coping with saline flooded soils. To evaluate the relationship between stress tolerance and species distributions, we compared responses of seedlings of three widespread mangrove species and one narrow endemic mangrove species in a factorial array of light levels and soil salinities in an outdoor laboratory experiment. The more narrowly distributed species was expected to exhibit a lower tolerance of potentially stressful conditions. Two of the widespread species, Avicennia germinans and Lumnitzera racemosa, survived and grew well at low-medium salinity, regardless of light level, but performed poorly at high salinity, particularly under high light. The third widespread species, Rhizophora mangle, responded less to variation in light and salinity. However, at high salinity, its relative growth rate was low at every light level and none of these plants flushed leaves. As predicted, the rare species, Pelliciera rhizophorae, was the most sensitive to environmental stressors, suffering especially high mortality and reduced growth and quantum yield under the combined conditions of high light and medium-high salinity. That it only thrives under shaded conditions represents an important exception to the prevailing belief that halophytes are intrinsically constrained to be shade intolerant.

  9. Performance analysis of a miniature Joule-Thomson cryocooler with and without the distributed J-T effect

    NASA Astrophysics Data System (ADS)

    Damle, Rashmin; Atrey, Milind

    2015-12-01

    Joule-Thomson (J-T) cryocoolers reach cryogenic temperatures more easily than other cooling techniques. Miniature J-T cryocoolers are often employed for cooling infrared sensors, cryoprobes, biological samples, etc. A typical miniature J-T cryocooler consists of a storage reservoir/compressor providing the high-pressure gas, a finned-tube recuperative heat exchanger, an expansion valve/orifice, and the cold end. The recuperative heat exchanger is indispensable for attaining cryogenic temperatures. The geometrical parameters and operating conditions of the heat exchanger drastically affect cryocooler performance in terms of cool-down time and cooling effect. In the literature, numerical models of the finned recuperative heat exchanger have neglected the distributed J-T effect, which accounts for the changes in enthalpy of the fluid due to changes of pressure in addition to those due to changes of temperature. The objective of this work is to explore the distributed J-T effect and study the performance of a miniature J-T cryocooler with and without it. A one-dimensional transient model is employed for the numerical analysis of the cryocooler. Cases with different operating conditions are worked out with argon and nitrogen as working fluids.

  10. LXtoo: an integrated live Linux distribution for the bioinformatics community

    PubMed Central

    2012-01-01

    Background Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists; this is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Findings Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. Conclusions LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo. PMID:22813356

  11. LXtoo: an integrated live Linux distribution for the bioinformatics community.

    PubMed

    Yu, Guangchuang; Wang, Li-Gen; Meng, Xiao-Hua; He, Qing-Yu

    2012-07-19

    Recent advances in high-throughput technologies have dramatically increased biological data generation. However, many research groups lack computing facilities and specialists; this is an obstacle that remains to be addressed. Here, we present a Linux distribution, LXtoo, to provide a flexible computing platform for bioinformatics analysis. Unlike most existing live Linux distributions for bioinformatics, which limit their usage to sequence analysis and protein structure prediction, LXtoo incorporates a comprehensive collection of bioinformatics software, including data mining tools for microarray and proteomics, protein-protein interaction analysis, and computationally complex tasks like molecular dynamics. Moreover, most of the programs have been configured and optimized for high performance computing. LXtoo aims to provide a well-supported computing environment tailored for bioinformatics research, reducing duplication of effort in building computing infrastructure. LXtoo is distributed as a Live DVD and freely available at http://bioinformatics.jnu.edu.cn/LXtoo.

  12. Using rank-order geostatistics for spatial interpolation of highly skewed data in a heavy-metal contaminated site.

    PubMed

    Juang, K W; Lee, D Y; Ellsworth, T R

    2001-01-01

    The spatial distribution of a pollutant in contaminated soils is usually highly skewed. As a result, the sample variogram often differs considerably from its regional counterpart and the geostatistical interpolation is hindered. In this study, rank-order geostatistics with standardized rank transformation was used for the spatial interpolation of pollutants with a highly skewed distribution in contaminated soils when commonly used nonlinear methods, such as logarithmic and normal-scored transformations, are not suitable. A real data set of soil Cd concentrations with great variation and high skewness in a contaminated site of Taiwan was used for illustration. The spatial dependence of ranks transformed from Cd concentrations was identified and kriging estimation was readily performed in the standardized-rank space. The estimated standardized rank was back-transformed into the concentration space using the middle point model within a standardized-rank interval of the empirical distribution function (EDF). The spatial distribution of Cd concentrations was then obtained. The probability of Cd concentration being higher than a given cutoff value also can be estimated by using the estimated distribution of standardized ranks. The contour maps of Cd concentrations and the probabilities of Cd concentrations being higher than the cutoff value can be simultaneously used for delineation of hazardous areas of contaminated soils.
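    The rank-transform and back-transform steps can be sketched as follows. For brevity this hypothetical Python example substitutes inverse-distance weighting for kriging in the standardized-rank space; the toy coordinates and Cd values are assumptions, while the back-transform follows the middle-point model on the empirical distribution function (EDF) as described.

```python
# Sketch of rank-order spatial interpolation for highly skewed data.
# Kriging is replaced by inverse-distance weighting (IDW) for brevity.

def standardized_ranks(values):
    """Map each value to its standardized rank in (0, 1]: rank / n."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r / len(values)
    return ranks

def idw_estimate(coords, ranks, target, power=2.0):
    """Inverse-distance-weighted estimate of the standardized rank
    at an unsampled target location (stand-in for kriging here)."""
    num = den = 0.0
    for (x, y), r in zip(coords, ranks):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return r
        w = 1.0 / d2 ** (power / 2)
        num += w * r
        den += w
    return num / den

def back_transform(rank_est, values):
    """Middle-point model on the EDF: return the midpoint of the
    data interval bracketing the estimated standardized rank."""
    srt = sorted(values)
    n = len(srt)
    k = min(int(rank_est * n), n - 1)
    if k == 0:
        return srt[0]
    return 0.5 * (srt[k - 1] + srt[k])

# Skewed toy "Cd concentration" data at four locations (assumed values)
coords = [(0, 0), (0, 1), (1, 0), (1, 1)]
cd = [0.2, 0.5, 1.1, 40.0]        # one extreme value drives the skew
ranks = standardized_ranks(cd)
r_hat = idw_estimate(coords, ranks, (0.5, 0.5))
cd_hat = back_transform(r_hat, cd)
```

Interpolating in rank space keeps the extreme value (40.0) from dominating the estimate, which is the point of the rank-order approach for skewed contamination data.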

  13. Clinical experience with a high-performance ATM-connected DICOM archive for cardiology

    NASA Astrophysics Data System (ADS)

    Solomon, Harry P.

    1997-05-01

    A system to archive large image sets, such as cardiac cine runs, with near-realtime response must address several functional and performance issues, including efficient use of a high-performance network connection with standard protocols, an architecture which effectively integrates both short- and long-term mass storage devices, and a flexible data management policy which allows optimization of image distribution and retrieval strategies based on modality and site-specific operational use. Clinical experience with such an archive has allowed evaluation of these system issues and refinement of a traffic model for cardiac angiography.

  14. Fast spatially resolved exhaust gas recirculation (EGR) distribution measurements in an internal combustion engine using absorption spectroscopy.

    PubMed

    Yoo, Jihyung; Prikhodko, Vitaly; Parks, James E; Perfetto, Anthony; Geckler, Sam; Partridge, William P

    2015-09-01

    Exhaust gas recirculation (EGR) in internal combustion engines is an effective method of reducing NOx emissions while improving efficiency. However, insufficient mixing between fresh air and exhaust gas can lead to cycle-to-cycle and cylinder-to-cylinder non-uniform charge gas mixtures of a multi-cylinder engine, which can in turn reduce engine performance and efficiency. A sensor packaged into a compact probe was designed, built and applied to measure spatiotemporal EGR distributions in the intake manifold of an operating engine. The probe promotes the development of more efficient and higher-performance engines by resolving high-speed in situ CO2 concentration at various locations in the intake manifold. The study employed mid-infrared light sources tuned to an absorption band of CO2 near 4.3 μm, an industry standard species for determining EGR fraction. The calibrated probe was used to map spatial EGR distributions in an intake manifold with high accuracy and monitor cycle-resolved cylinder-specific EGR fluctuations at a rate of up to 1 kHz.

  15. Economic Statistical Design of Integrated X-bar-S Control Chart with Preventive Maintenance and General Failure Distribution

    PubMed Central

    Caballero Morales, Santiago Omar

    2013-01-01

    The application of Preventive Maintenance (PM) and Statistical Process Control (SPC) are important practices to achieve high product quality, small frequency of failures, and cost reduction in a production process. However, some aspects of their joint application have not been explored in depth. First, most SPC is performed with the X-bar control chart, which does not fully consider the variability of the production process. Second, many studies of control chart design consider just the economic aspect, while statistical restrictions must be considered to achieve charts with low probabilities of false detection of failures. Third, the effect of PM on processes with different failure probability distributions has not been studied. Hence, this paper covers these points, presenting the Economic Statistical Design (ESD) of joint X-bar-S control charts with a cost model that integrates PM with a general failure distribution. Experiments showed statistically significant cost reductions when PM is performed on processes with high failure rates, along with reductions in the sampling frequency of units for testing under SPC. PMID:23527082

  16. Distributed Parallel Processing and Dynamic Load Balancing Techniques for Multidisciplinary High Speed Aircraft Design

    NASA Technical Reports Server (NTRS)

    Krasteva, Denitza T.

    1998-01-01

    Multidisciplinary design optimization (MDO) for large-scale engineering problems poses many challenges (e.g., the design of an efficient concurrent paradigm for global optimization based on disciplinary analyses, expensive computations over vast data sets, etc.). This work focuses on the application of distributed schemes for massively parallel architectures to MDO problems, as a tool for reducing computation time and solving larger problems. The specific problem considered here is configuration optimization of a high speed civil transport (HSCT), and the efficient parallelization of the embedded paradigm for reasonable design space identification. Two distributed dynamic load balancing techniques (random polling and global round robin with message combining) and two necessary termination detection schemes (global task count and token passing) were implemented and evaluated in terms of effectiveness and scalability to large problem sizes and a thousand processors. The effect of certain parameters on execution time was also inspected. Empirical results demonstrated stable performance and effectiveness for all schemes, and the parametric study showed that the selected algorithmic parameters have a negligible effect on performance.
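    A toy simulation of random polling with global-task-count termination, one of the scheme pairings evaluated, might look like the following. This is a hypothetical single-process Python sketch, not the distributed implementation used in the study; the worker count and initial task distribution are assumptions.

```python
import random

def random_polling(queues, seed=0):
    """Toy random-polling load balancer: an idle worker polls one random
    peer and steals half of its tasks. The run terminates when the global
    task count (sum of all queues) reaches zero."""
    rng = random.Random(seed)
    done = [0] * len(queues)          # tasks completed per worker
    while sum(queues) > 0:            # global-task-count termination test
        for i in range(len(queues)):
            if queues[i] > 0:
                queues[i] -= 1        # execute one local task
                done[i] += 1
            else:                     # idle: poll a random peer
                j = rng.randrange(len(queues))
                if j != i and queues[j] > 1:
                    steal = queues[j] // 2
                    queues[j] -= steal
                    queues[i] += steal
    return done

# Heavily imbalanced initial distribution across 4 workers (assumed)
done = random_polling([40, 0, 0, 0])
```

Because stealing only moves tasks and execution only consumes them, the global count decreases monotonically and termination detection is trivial here; in a real distributed setting the count must be maintained consistently across processors, which is what the evaluated termination schemes address.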

  17. Fast Spatially Resolved Exhaust Gas Recirculation (EGR) Distribution Measurements in an Internal Combustion Engine Using Absorption Spectroscopy

    DOE PAGES

    Yoo, Jihyung; Prikhodko, Vitaly; Parks, James E.; ...

    2015-09-01

    One effective method of reducing NOx emissions while improving efficiency is exhaust gas recirculation (EGR) in internal combustion engines. However, insufficient mixing between fresh air and exhaust gas can lead to cycle-to-cycle and cylinder-to-cylinder nonuniform charge gas mixtures of a multi-cylinder engine, which can in turn reduce engine performance and efficiency. A sensor packaged into a compact probe was designed, built and applied to measure spatiotemporal EGR distributions in the intake manifold of an operating engine. The probe promotes the development of more efficient and higher-performance engines by resolving high-speed in situ CO2 concentration at various locations in the intake manifold. Our study employed mid-infrared light sources tuned to an absorption band of CO2 near 4.3 μm, an industry standard species for determining EGR fraction. The calibrated probe was used to map spatial EGR distributions in an intake manifold with high accuracy and monitor cycle-resolved cylinder-specific EGR fluctuations at a rate of up to 1 kHz.

  18. Wave energy and swimming performance shape coral reef fish assemblages

    PubMed Central

    Fulton, C.J; Bellwood, D.R; Wainwright, P.C

    2005-01-01

    Physical factors often have an overriding influence on the distribution patterns of organisms, and can ultimately shape the long-term structure of communities. Although distribution patterns in sessile marine organisms have frequently been attributed to functional characteristics interacting with wave-induced water motion, similar evidence for mobile organisms is lacking. Links between fin morphology and swimming performance were examined in three diverse coral reef fish families from two major evolutionary lineages. Among-habitat variation in morphology and performance was directly compared with quantitative values of wave-induced water motion from seven coral reef habitats of different depth and wave exposure on the Great Barrier Reef. Fin morphology was strongly correlated with both field and experimental swimming speeds in all three families. The range of observed swimming speeds coincided closely with the magnitude of water velocities commonly found on coral reefs. Distribution patterns in all three families displayed highly congruent relationships between fin morphology and wave-induced water motion. Our findings indicate a general functional relationship between fin morphology and swimming performance in labriform-swimming fishes, and provide quantitative evidence that wave energy may directly influence the assemblage structure of coral reef fishes through interactions with morphology and swimming performance. PMID:15888415

  19. HPF Implementation of ARC3D

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Yan, Jerry

    1999-01-01

    We present an HPF (High Performance Fortran) implementation of the ARC3D code, along with profiling and performance data on the SGI Origin 2000. Advantages and limitations of HPF as a parallel programming language for CFD applications are discussed. To achieve good performance, we used data distributions optimized for the implementation of the solver's implicit and explicit operators and the boundary conditions. We compare the results with MPI and directive-based implementations.

  20. Comprehensive analysis of the T-cell receptor beta chain gene in rhesus monkey by high throughput sequencing

    PubMed Central

    Li, Zhoufang; Liu, Guangjie; Tong, Yin; Zhang, Meng; Xu, Ying; Qin, Li; Wang, Zhanhui; Chen, Xiaoping; He, Jiankui

    2015-01-01

    Profiling immune repertoires by high throughput sequencing enhances our understanding of immune system complexity and immune-related diseases in humans. Previously, cloning and Sanger sequencing identified limited numbers of T cell receptor (TCR) nucleotide sequences in rhesus monkeys, thus their full immune repertoire is unknown. We applied multiplex PCR and Illumina high throughput sequencing to study the TCRβ of rhesus monkeys. We identified 1.26 million TCRβ sequences corresponding to 643,570 unique TCRβ sequences and 270,557 unique complementarity-determining region 3 (CDR3) gene sequences. Precise measurements of CDR3 length distribution, CDR3 amino acid distribution, length distribution of N nucleotide of junctional region, and TCRV and TCRJ gene usage preferences were performed. A comprehensive profile of rhesus monkey immune repertoire might aid human infectious disease studies using rhesus monkeys. PMID:25961410

  1. Raising the stakes: How students' motivation for mathematics associates with high- and low-stakes test achievement.

    PubMed

    Simzar, Rahila M; Martinez, Marcela; Rutherford, Teomara; Domina, Thurston; Conley, AnneMarie M

    2015-04-01

    This study uses data from an urban school district to examine the relation between students' motivational beliefs about mathematics and high- versus low-stakes math test performance. We use ordinary least squares and quantile regression analyses and find that the association between students' motivation and test performance differs based on the stakes of the exam. Students' math self-efficacy and performance avoidance goal orientation were the strongest predictors for both exams; however, students' math self-efficacy was more strongly related to achievement on the low-stakes exam. Students' motivational beliefs had a stronger association at the low-stakes exam proficiency cutoff than they did at the high-stakes passing cutoff. Lastly, the negative association between performance avoidance goals and high-stakes performance showed a decreasing trend across the achievement distribution, suggesting that performance avoidance goals are more detrimental for lower achieving students. These findings help parse out the ways motivation influences achievement under different stakes.

  2. Raising the stakes: How students’ motivation for mathematics associates with high- and low-stakes test achievement

    PubMed Central

    Simzar, Rahila M.; Martinez, Marcela; Rutherford, Teomara; Domina, Thurston; Conley, AnneMarie M.

    2016-01-01

    This study uses data from an urban school district to examine the relation between students’ motivational beliefs about mathematics and high- versus low-stakes math test performance. We use ordinary least squares and quantile regression analyses and find that the association between students’ motivation and test performance differs based on the stakes of the exam. Students’ math self-efficacy and performance avoidance goal orientation were the strongest predictors for both exams; however, students’ math self-efficacy was more strongly related to achievement on the low-stakes exam. Students’ motivational beliefs had a stronger association at the low-stakes exam proficiency cutoff than they did at the high-stakes passing cutoff. Lastly, the negative association between performance avoidance goals and high-stakes performance showed a decreasing trend across the achievement distribution, suggesting that performance avoidance goals are more detrimental for lower achieving students. These findings help parse out the ways motivation influences achievement under different stakes. PMID:27840563

  3. Multipositional silica-coated silver nanoparticles for high-performance polymer solar cells.

    PubMed

    Choi, Hyosung; Lee, Jung-Pil; Ko, Seo-Jin; Jung, Jae-Woo; Park, Hyungmin; Yoo, Seungmin; Park, Okji; Jeong, Jong-Ryul; Park, Soojin; Kim, Jin Young

    2013-05-08

    We demonstrate high-performance polymer solar cells using the plasmonic effect of multipositional silica-coated silver nanoparticles. The location of the nanoparticles is critical for increasing light absorption and scattering via enhanced electric field distribution. The device incorporating nanoparticles between the hole transport layer and the active layer achieves a power conversion efficiency of 8.92% with an external quantum efficiency of 81.5%. These device efficiencies are the highest values reported to date for plasmonic polymer solar cells using metal nanoparticles.

  4. High-performance liquid chromatography analysis of plant saponins: An update 2005-2010

    PubMed Central

    Negi, Jagmohan S.; Singh, Pramod; Pant, Geeta Joshi Nee; Rawat, M. S. M.

    2011-01-01

    Saponins are widely distributed in the plant kingdom. In view of their wide range of biological activities and their occurrence as complex mixtures, saponins have been purified and separated by high-performance liquid chromatography using reverse-phase columns at low wavelengths. Most saponins are not detected by ultraviolet detectors due to a lack of chromophores. Electrospray ionization mass spectrometry, diode array detection, evaporative light scattering detection, and charged aerosol detection have been used to overcome this detection problem. PMID:22303089

  5. Strategic Staffing? How Performance Pressures Affect the Distribution of Teachers within Schools and Resulting Student Achievement

    ERIC Educational Resources Information Center

    Grissom, Jason A.; Kalogrides, Demetra; Loeb, Susanna

    2017-01-01

    School performance pressures apply disproportionately to tested grades and subjects. Using longitudinal administrative data--including achievement data from untested grades--and teacher survey data from a large urban district, we examine schools' responses to those pressures in assigning teachers to high-stakes and low-stakes classrooms. We find…

  6. Locomotion in labrid fishes: implications for habitat use and cross-shelf biogeography on the Great Barrier Reef

    NASA Astrophysics Data System (ADS)

    Bellwood, D.; Wainwright, P.

    2001-09-01

    Coral reefs exhibit marked zonation patterns within single reefs and across continental shelves. For sessile organisms these zones are often related to wave exposure. We examined the extent to which wave exposure may shape the distribution patterns of fishes. We documented the distribution of 98 species of wrasses and parrotfishes at 33 sites across the Great Barrier Reef. The greatest difference between labrid assemblages was at the habitat level, with exposed reef flats and crests on mid- and outer reefs possessing a distinct faunal assemblage. These exposed sites were dominated by individuals with high pectoral fin aspect ratios, i.e. fishes believed to be capable of lift-based swimming which often achieve high speeds. Overall, there was a strong correlation between estimated swimming performance, as indicated by fin aspect ratio, and degree of water movement. We propose that swimming performance in fishes limits access to high-energy locations and may be a significant factor influencing habitat use and regional biogeography of reef fishes.

  7. Performance comparison of single-stage mixed-refrigerant Joule-Thomson cycle and reverse Brayton cycle for cooling 80 to 120 K temperature-distributed heat loads

    NASA Astrophysics Data System (ADS)

    Wang, H. C.; Chen, G. F.; Gong, M. Q.; Li, X.

    2017-12-01

    Thermodynamic performance comparison of the single-stage mixed-refrigerant Joule-Thomson cycle (MJTR) and the pure-refrigerant reverse Brayton cycle (RBC) for cooling 80 to 120 K temperature-distributed heat loads was conducted in this paper. Nitrogen under various liquefaction pressures was employed as the heat load. The research was conducted under nonideal conditions by exergy analysis methods, with exergy efficiency and volumetric cooling capacity as the two main evaluation parameters. Exergy loss distribution in each process of the refrigeration cycle was also investigated. The exergy efficiency and volumetric cooling capacity of the MJTR were clearly superior to those of the RBC in the 90 to 120 K temperature zone, but still inferior at 80 K. The performance degradation of the MJTR had two main causes: the high fraction of neon resulted in large entropy generation and exergy loss in the throttling process, and the larger duty and WLMTD led to larger exergy losses in the recuperator.

  8. Research on droplet size measurement of impulse antiriots water cannon based on sheet laser

    NASA Astrophysics Data System (ADS)

    Fa-dong, Zhao; Hong-wei, Zhuang; Ren-jun, Zhan

    2014-04-01

    As a new-style counter-personnel non-lethal weapon, the impulse anti-riot water cannon produces a non-steady flow and a large water-mist field, which make it difficult to measure its droplet size distribution, the most important index for examining its tactical and technical performance. A method based on particle scattering, sheet-laser imaging and high-speed processing was proposed, and a universal droplet size measuring algorithm was designed and verified. Using this method, the droplet size distribution was measured. The size distributions measured at the same position with different timescales, at the same axial distance with different radial distances, and at the same radial distance with different axial distances were analyzed qualitatively, and plausible explanations were presented. The droplet size measuring method proposed in this article provides a scientific and effective experimental means to ascertain tactical and technical performance and to optimize the performance of related systems.

  9. Generalist genes and learning disabilities: a multivariate genetic analysis of low performance in reading, mathematics, language and general cognitive ability in a sample of 8000 12-year-old twins.

    PubMed

    Haworth, Claire M A; Kovas, Yulia; Harlaar, Nicole; Hayiou-Thomas, Marianna E; Petrill, Stephen A; Dale, Philip S; Plomin, Robert

    2009-10-01

    Our previous investigation found that the same genes influence poor reading and mathematics performance in 10-year-olds. Here we assess whether this finding extends to language and general cognitive disabilities, as well as replicating the earlier finding for reading and mathematics in an older and larger sample. Using a representative sample of 4000 pairs of 12-year-old twins from the UK Twins Early Development Study, we investigated the genetic and environmental overlap between internet-based batteries of language and general cognitive ability tests in addition to tests of reading and mathematics for the bottom 15% of the distribution using DeFries-Fulker extremes analysis. We compared these results to those for the entire distribution. All four traits were highly correlated at the low extreme (average group phenotypic correlation = .58) and in the entire distribution (average phenotypic correlation = .59). Genetic correlations for the low extreme were consistently high (average = .67), and non-shared environmental correlations were modest (average = .23). These results are similar to those seen across the entire distribution (.68 and .23, respectively). The 'Generalist Genes Hypothesis' holds for language and general cognitive disabilities, as well as reading and mathematics disabilities. Genetic correlations were high, indicating a strong degree of overlap in genetic influences on these diverse traits. In contrast, non-shared environmental influences were largely specific to each trait, causing phenotypic differentiation of traits.

  10. Active Damping Using Distributed Anisotropic Actuators

    NASA Technical Reports Server (NTRS)

    Schiller, Noah H.; Cabell, Randolph H.; Quinones, Juan D.; Wier, Nathan C.

    2010-01-01

    A helicopter structure experiences substantial high-frequency mechanical excitation from powertrain components such as gearboxes and drive shafts. The resulting structure-borne vibration excites the windows, which then radiate sound into the passenger cabin. In many cases the radiated sound power can be reduced by adding damping. This can be accomplished using passive or active approaches. Passive treatments such as constrained layer damping tend to reduce window transparency. Therefore this paper focuses on an active approach utilizing compact decentralized control units distributed around the perimeter of the window. Each control unit consists of a triangularly shaped piezoelectric actuator, a miniature accelerometer, and analog electronics. Earlier work has shown that this type of system can increase damping up to approximately 1 kHz. However, at higher frequencies the mismatch between the distributed actuator and the point sensor caused control spillover. This paper describes new anisotropic actuators that can be used to improve the bandwidth of the control system. The anisotropic actuators are composed of piezoelectric material sandwiched between interdigitated electrodes, which enables the application of the electric field in a preferred in-plane direction. When shaped correctly, the anisotropic actuators outperform traditional isotropic actuators by reducing the mismatch between the distributed actuator and point sensor at high frequencies. Testing performed on a Plexiglas panel, representative of a helicopter window, shows that the control units can increase damping at low frequencies. However, high-frequency performance was still limited due to the flexible boundary conditions present on the test structure.

  11. Distribution of rain height over subtropical region: Durban, South Africa for satellite communication systems

    NASA Astrophysics Data System (ADS)

    Olurotimi, E. O.; Sokoya, O.; Ojo, J. S.; Owolawi, P. A.

    2018-03-01

    Rain height is one of the significant parameters for predicting rain attenuation on Earth-space telecommunication links, especially those operating at frequencies above 10 GHz. This study examines the three-parameter Dagum distribution of the rain height over Durban, South Africa. Five years of data were used to study the monthly, seasonal, and annual variations using parameters estimated by maximum likelihood. The performance of the distribution was assessed using statistical goodness-of-fit tests. The three-parameter Dagum distribution proved appropriate for modeling rain height over Durban, with a root mean square error of 0.26, while the shape and scale parameters showed wide variation. The rain height exceeded for 0.01% of the time indicates a high probability of rain attenuation at higher frequencies.
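    As a sketch of the fitting setup, the three-parameter Dagum CDF is F(x) = (1 + (x/b)^(-a))^(-p) for x > 0, and goodness of fit can be scored as the RMSE between the model CDF and the empirical CDF. The Python example below uses a hypothetical rain-height sample and illustrative parameter values, not the Durban data or the paper's maximum-likelihood estimates.

```python
def dagum_cdf(x, a, b, p):
    """Three-parameter Dagum CDF: F(x) = (1 + (x/b)**-a)**-p, x > 0.
    a and p are shape parameters, b is the scale parameter."""
    return (1.0 + (x / b) ** (-a)) ** (-p)

def rmse_against_edf(sample, a, b, p):
    """Goodness of fit: RMSE between the model CDF and the
    empirical CDF (i/n at the i-th sorted sample point)."""
    srt = sorted(sample)
    n = len(srt)
    err2 = 0.0
    for i, x in enumerate(srt, start=1):
        edf = i / n
        err2 += (dagum_cdf(x, a, b, p) - edf) ** 2
    return (err2 / n) ** 0.5

# Hypothetical rain-height sample (km) and illustrative parameters;
# the paper's 5-year Durban data and ML estimates are not reproduced.
heights = [3.8, 4.1, 4.3, 4.5, 4.6, 4.8, 5.0, 5.3]
score = rmse_against_edf(heights, a=8.0, b=4.5, p=1.0)
```

Note that for p = 1 the CDF equals 0.5 exactly at x = b, a quick sanity check on an implementation.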

  12. Plasma Control of Separated Flows on Delta Wings at High Angles of Attack

    DTIC Science & Technology

    2009-03-18

    Report documentation (SF 298) excerpt: ISTC Registration No: 3646; author: Dr. Anatoly Alexandrovich...; report number ISTC 06-7002; approved for public release, distribution is unlimited. This work is supported financially by EOARD and performed under the agreement with the International Science and Technology Center (ISTC), Moscow.

  13. Selective AR Modulators that Distinguish Proliferative from Differentiative Gene Promoters

    DTIC Science & Technology

    2017-08-01

    Report documentation (SF 298) excerpt: approved for public release, distribution unlimited; the views, opinions and/or findings contained in this report are those of the author(s); sponsor: U.S. Army Medical Research and Materiel Command, Fort Detrick, Maryland 21702-5012. We performed a high-throughput screen for compounds eliciting differential AR activity on cARE vs. sARE reporters. Of 10,000 compounds...

  14. The Effect of Background Plasma Temperature on Growth and Damping of Whistler Mode Wave Power in the Earth's Magnetosphere

    NASA Astrophysics Data System (ADS)

    Maxworth, A. S.; Golkowski, M.; Malaspina, D.; Jaynes, A. N.

    2017-12-01

    Whistler mode waves play a dominant role in the energy dynamics of the Earth's magnetosphere. The trajectory of whistler mode waves can be predicted by raytracing, a numerical method that solves Haselgrove's equations at each time step, taking the background plasma parameters into account. The majority of previous raytracing work was conducted assuming a cold (0 K) background magnetospheric plasma. Here we perform raytracing in a finite-temperature plasma with background electron and ion temperatures of a few eV. When encountering a high-energy (>10 keV) electron distribution, whistler mode waves can undergo power attenuation and/or growth, depending on resonance conditions that are a function of wave frequency, wave normal angle, and particle energy. In this work we present the wave power attenuation and growth analysis of whistler mode waves during the interaction with a high-energy electron distribution. We have numerically modelled the high-energy electron distribution as an isotropic velocity distribution, as well as an anisotropic bi-Maxwellian distribution. Both cases were analyzed with and without the temperature effects for the background magnetospheric plasma. Finally, we compare our results with the whistler mode energy distribution obtained by the EMFISIS instrument aboard the Van Allen Probes spacecraft.
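    The anisotropic bi-Maxwellian mentioned above can be sketched as a phase-space density with distinct parallel and perpendicular thermal speeds; the temperature anisotropy T_perp/T_par - 1 is what drives whistler mode growth. The density and thermal speeds below are hypothetical illustration values, not those used in the study.

```python
# Sketch: an anisotropic bi-Maxwellian velocity distribution of the kind
# used to model the high-energy electron population. Parameters are
# hypothetical placeholders.
import numpy as np

def bi_maxwellian(v_par, v_perp, n, w_par, w_perp):
    """Phase-space density f(v_par, v_perp) of a bi-Maxwellian with
    number density n [m^-3] and thermal speeds w_par, w_perp [m/s]."""
    norm = n / (np.pi ** 1.5 * w_par * w_perp ** 2)
    return norm * np.exp(-(v_par / w_par) ** 2 - (v_perp / w_perp) ** 2)

# Hypothetical hot-electron population with T_perp > T_par,
# the anisotropy that feeds whistler mode wave growth.
n = 1e5        # m^-3
w_par = 5.0e7  # m/s
w_perp = 7.0e7 # m/s
anisotropy = (w_perp / w_par) ** 2 - 1.0   # A = T_perp/T_par - 1

# Sanity check: integrating f over velocity space recovers the density n.
v = np.linspace(-4 * w_perp, 4 * w_perp, 400)
vp = np.linspace(0.0, 4 * w_perp, 200)
V, VP = np.meshgrid(v, vp, indexing="ij")
f = bi_maxwellian(V, VP, n, w_par, w_perp)
dv, dvp = v[1] - v[0], vp[1] - vp[0]
density = np.sum(f * 2 * np.pi * VP) * dv * dvp   # cylindrical volume element
print(anisotropy, density / n)
```

Setting `w_par == w_perp` recovers the isotropic Maxwellian case that the study uses as its other test distribution.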

  15. Design and Performance of the NASA SCEPTOR Distributed Electric Propulsion Flight Demonstrator

    NASA Technical Reports Server (NTRS)

    Borer, Nicholas K.; Patterson, Michael D.; Viken, Jeffrey K.; Moore, Mark D.; Clarke, Sean; Redifer, Matthew E.; Christie, Robert J.; Stoll, Alex M.; Dubois, Arthur; Bevirt, JoeBen

    2016-01-01

    Distributed Electric Propulsion (DEP) technology uses multiple propulsors driven by electric motors distributed about the airframe to yield beneficial aerodynamic-propulsion interaction. The NASA SCEPTOR flight demonstration project will retrofit an existing internal combustion engine-powered light aircraft with two types of DEP: small "high-lift" propellers distributed along the leading edge of the wing which accelerate the flow over the wing at low speeds, and larger cruise propellers co-located with each wingtip for primary propulsive power. The updated high-lift system enables a 2.5x reduction in wing area as compared to the original aircraft, reducing drag at cruise and shifting the velocity for maximum lift-to-drag ratio to a higher speed, while maintaining low-speed performance. The wingtip-mounted cruise propellers interact with the wingtip vortex, enabling a further efficiency increase that can reduce propulsive power by 10%. A tradespace exploration approach is developed that enables rapid identification of salient trades, and subsequent creation of SCEPTOR demonstrator geometries. These candidates were scrutinized by subject matter experts to identify design preferences that were not modeled during configuration exploration. This exploration and design approach is used to create an aircraft that consumes an estimated 4.8x less energy at the selected cruise point when compared to the original aircraft.

  16. Pharmacokinetics and tissue distribution study of Praeruptorin D from Radix peucedani in rats by high-performance liquid chromatography (HPLC).

    PubMed

    Liang, Taigang; Yue, Wenyan; Du, Xue; Ren, Luhui; Li, Qingshan

    2012-01-01

    Praeruptorin D (PD), a major pyranocoumarin isolated from Radix Peucedani, exhibits antitumor and anti-inflammatory activities. The aim of this study was to investigate the pharmacokinetics and tissue distribution of PD in rats following intravenous (i.v.) administration. The levels of PD in plasma and tissues were measured by a simple and sensitive reversed-phase high-performance liquid chromatography (HPLC) method. The biosamples were treated by liquid-liquid extraction (LLE) with methyl tert-butyl ether (MTBE), and osthole was used as the internal standard (IS). The chromatographic separation was accomplished on a reversed-phase C(18) column using methanol-water (75:25, v/v) as the mobile phase at a flow rate of 0.8 mL/min, and the ultraviolet detection wavelength was set at 323 nm. The results demonstrate that this method has excellent specificity, linearity, precision, accuracy, and recovery. The pharmacokinetic study found that PD fitted well into a two-compartment model with a fast distribution phase and a relatively slow elimination phase. Tissue distribution showed that the highest concentration was observed in the lung, followed by the heart, liver, and kidney. Furthermore, PD could also be detected in the brain, which indicates that PD can cross the blood-brain barrier after i.v. administration.
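    The two-compartment model named above predicts, after an i.v. bolus, a biexponential plasma decay C(t) = A·exp(-αt) + B·exp(-βt), with the fast exponent α describing the distribution phase and the slow exponent β the elimination phase. A minimal fitting sketch follows; the concentrations and rate constants are synthetic, not the rat data from the study.

```python
# Sketch: fitting a two-compartment i.v. model to plasma-concentration data
# by nonlinear least squares. All numbers are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, A, alpha, B, beta):
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Synthetic plasma profile (concentration vs. time in hours)
# generated from assumed parameters.
t = np.array([0.083, 0.25, 0.5, 1, 2, 4, 6, 8, 12])
conc = two_compartment(t, 8.0, 2.5, 2.0, 0.25)

# Nonlinear least-squares fit; p0 is a rough initial guess.
popt, _ = curve_fit(two_compartment, t, conc, p0=[5, 1, 1, 0.1])
A, alpha, B, beta = popt
if alpha < beta:  # label the faster exponential as the distribution phase
    A, alpha, B, beta = B, beta, A, alpha

# Derived quantities: distribution and elimination half-lives.
t_half_dist = np.log(2) / alpha
t_half_elim = np.log(2) / beta
print(A, alpha, B, beta, t_half_dist, t_half_elim)
```

With noiseless synthetic data the fit recovers the generating parameters; on real assay data one would weight the residuals and validate against the HPLC method's quantitation limits.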

  17. Power Hardware-in-the-Loop-Based Anti-Islanding Evaluation and Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoder, Karl; Langston, James; Hauer, John

    2015-10-01

    The National Renewable Energy Laboratory (NREL) teamed with Southern California Edison (SCE), Clean Power Research (CPR), Quanta Technology (QT), and Electrical Distribution Design (EDD) to conduct a U.S. Department of Energy (DOE) and California Public Utility Commission (CPUC) California Solar Initiative (CSI)-funded research project investigating the impacts of integrating high-penetration levels of photovoltaics (PV) onto the California distribution grid. One topic researched in the context of high-penetration PV integration onto the distribution system is the ability of PV inverters to (1) detect islanding conditions (i.e., when the distribution system to which the PV inverter is connected becomes disconnected from the utility power connection) and (2) disconnect from the islanded system within the time specified in the performance specifications outlined in IEEE Standard 1547. This condition may cause damage to other connected equipment due to insufficient power quality (e.g., over- and under-voltages) and may also be a safety hazard to personnel who may be working on feeder sections to restore service. NREL teamed with the Florida State University (FSU) Center for Advanced Power Systems (CAPS) to investigate a new way of testing PV inverters for IEEE Standard 1547 unintentional islanding performance specifications using power hardware-in-the-loop (PHIL) laboratory testing techniques.
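    The detect-and-disconnect behavior being tested can be illustrated by a passive check of the kind inverters implement: trip when voltage or frequency stays outside its normal operating window longer than a clearing time. The window limits and clearing time below are illustrative placeholders, not the normative IEEE 1547 values, which depend on the standard edition and performance category.

```python
# Sketch: a passive anti-islanding / abnormal-condition check. Thresholds
# are assumed placeholders, not the actual IEEE 1547 settings.
V_MIN, V_MAX = 0.88, 1.10   # per-unit voltage window (assumed)
F_MIN, F_MAX = 59.3, 60.5   # Hz frequency window (assumed)
MAX_OUT_OF_WINDOW = 2.0     # seconds allowed before disconnect (assumed)

def should_trip(samples, dt):
    """samples: sequence of (voltage_pu, freq_hz) measured every dt seconds.
    Returns True once an out-of-window condition persists past the limit."""
    out_time = 0.0
    for v, f in samples:
        if V_MIN <= v <= V_MAX and F_MIN <= f <= F_MAX:
            out_time = 0.0            # condition cleared, reset the timer
        else:
            out_time += dt
            if out_time > MAX_OUT_OF_WINDOW:
                return True
    return False

# Normal grid-connected operation: inverter stays connected.
normal = [(1.0, 60.0)] * 100
# Islanded feeder drifts high in voltage and frequency and persists: trip.
island = [(1.0, 60.0)] * 10 + [(1.15, 60.8)] * 50
print(should_trip(normal, 0.1), should_trip(island, 0.1))
```

The hard case, which motivates PHIL testing, is when local generation closely balances local load and the island's voltage and frequency barely drift, so passive checks like this one may fail to trip.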

  18. Workload Characterization of CFD Applications Using Partial Differential Equation Solvers

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Workload characterization is used for modeling and evaluation of computing systems at different levels of detail. We present workload characterization for a class of Computational Fluid Dynamics (CFD) applications that solve Partial Differential Equations (PDEs). This workload characterization focuses on three high performance computing platforms: the SGI Origin2000, the IBM SP-2, and a cluster of Intel Pentium Pro-based PCs. We execute extensive measurement-based experiments on these platforms to gather statistics of system resource usage, which result in the workload characterization. Our workload characterization approach yields a coarse-grain resource utilization behavior that is being applied to performance modeling and evaluation of distributed high performance metacomputing systems. In addition, this study enhances our understanding of interactions between PDE solver workloads and high performance computing platforms, and is useful for tuning these applications.

  19. Comparison of modeling methods to predict the spatial distribution of deep-sea coral and sponge in the Gulf of Alaska

    NASA Astrophysics Data System (ADS)

    Rooper, Christopher N.; Zimmermann, Mark; Prescott, Megan M.

    2017-08-01

    Deep-sea coral and sponge ecosystems are widespread throughout most of Alaska's marine waters and are associated with many different species of fishes and invertebrates. These ecosystems are vulnerable to the effects of commercial fishing activities and climate change. We compared four commonly used species distribution models (general linear models, generalized additive models, boosted regression trees, and random forest models) and an ensemble model to predict the presence or absence and abundance of six groups of benthic invertebrate taxa in the Gulf of Alaska. All four model types performed adequately on training data for predicting presence and absence, with random forest models having the best overall performance as measured by the area under the receiver operating curve (AUC). The models also performed well on the test data for presence and absence, with average AUCs ranging from 0.66 to 0.82. For the test data, ensemble models performed the best. For abundance data, there was an obvious demarcation in performance between the two regression-based methods (general linear models and generalized additive models) and the tree-based models. The boosted regression tree and random forest models outperformed the other models by a wide margin on both the training and testing data. However, there was a significant drop-off in performance for all models of invertebrate abundance (~50%) when moving from the training data to the testing data. Ensemble model performance was between the tree-based and regression-based methods. The maps of predictions from the models for both presence and abundance agreed very well across model types, with an increase in variability in predictions for the abundance data.
    We conclude that where the data conform well to the modeled distribution (such as the presence-absence data and binomial distribution in this study), the four types of models will provide similar results, although the regression-type models may be more consistent with biological theory. For data with highly zero-inflated and non-normal distributions, such as the abundance data from this study, the tree-based methods performed better. Ensemble models that averaged predictions across the four model types performed better than the GLM or GAM models but slightly poorer than the tree-based methods, suggesting ensemble models might be more robust to overfitting than tree methods, while mitigating some of the disadvantages in predictive performance of regression methods.
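    The evaluation mechanics described above can be sketched with a rank-based AUC and an ensemble that averages rescaled model predictions. The two "models" below are deliberately crude stand-ins on synthetic data, meant only to show the comparison machinery, not the GLM/GAM/BRT/RF fits from the study.

```python
# Sketch: AUC evaluation of presence/absence predictions and a simple
# averaging ensemble, on synthetic data. The covariate, models, and
# noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)

def auc(y_true, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic:
    the probability a random presence outranks a random absence."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic habitat covariate (e.g. depth, m) driving true presence.
depth = rng.uniform(0, 1000, size=2000)
p_true = 1 / (1 + np.exp((depth - 400) / 80))   # shallower = more likely
presence = (rng.uniform(size=2000) < p_true).astype(int)

# Two toy "models": a noisy near-linear score and a noisy step score.
model_a = -depth + rng.normal(0, 300, size=2000)
model_b = (depth < 450).astype(float) + rng.normal(0, 0.5, size=2000)

# Ensemble: rescale each model's scores to [0, 1] and average them.
def rescale(s):
    return (s - s.min()) / (s.max() - s.min())

ensemble = (rescale(model_a) + rescale(model_b)) / 2
print(auc(presence, model_a), auc(presence, model_b), auc(presence, ensemble))
```

In practice the models would be fitted on a training split and the AUC reported on held-out test data, which is where the study observed the ~50% drop-off for abundance models.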

  20. A UNIX SVR4-OS 9 distributed data acquisition for high energy physics

    NASA Astrophysics Data System (ADS)

    Drouhin, F.; Schwaller, B.; Fontaine, J. C.; Charles, F.; Pallares, A.; Huss, D.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX-compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high-speed network. The UNIX system manages a list of OS9 front-end systems with a synchronisation protocol running over a TCP/IP layer.
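    A synchronisation protocol over TCP of the kind mentioned above typically needs message framing, since TCP is a byte stream. The sketch below shows a length-prefixed exchange between a master and one front-end over a local socket pair; the message names ("ARM", "ACK") and framing are hypothetical, not the GRPHE protocol.

```python
# Sketch: length-prefixed request/acknowledge framing over TCP-style
# sockets, standing in for a master <-> front-end synchronisation link.
# Protocol details are assumptions for illustration.
import socket
import struct

def send_msg(sock, payload: bytes):
    # 4-byte big-endian length prefix, then the payload.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed connection")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length)

# A local socket pair stands in for the master <-> front-end connection.
master, frontend = socket.socketpair()
send_msg(master, b"ARM spill=42")               # master arms the front-end
cmd = recv_msg(frontend)
send_msg(frontend, b"ACK " + cmd.split()[1])    # front-end acknowledges
reply = recv_msg(master)
print(cmd, reply)
master.close()
frontend.close()
```

The deterministic high-speed network carries the bulk event data separately; TCP here handles only the low-rate control and synchronisation traffic.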

  1. Optically controlled phased-array antenna technology for space communication systems

    NASA Technical Reports Server (NTRS)

    Kunath, Richard R.; Bhasin, Kul B.

    1988-01-01

    Using MMICs in phased-array applications above 20 GHz requires complex RF and control signal distribution systems. Conventional waveguide, coaxial cable, and microstrip methods are undesirable due to their high weight, high loss, limited mechanical flexibility and large volume. An attractive alternative to these transmission media, for RF and control signal distribution in MMIC phased-array antennas, is optical fiber. Presented are potential system architectures and their associated characteristics. The status of high frequency opto-electronic components needed to realize the potential system architectures is also discussed. It is concluded that an optical fiber network will reduce weight and complexity, and increase reliability and performance, but may require higher power.

  2. Successful Principalship of High-Performance Schools in High-Poverty Communities

    ERIC Educational Resources Information Center

    Mulford, Bill; Kendall, Diana; Ewington, John; Edmunds, Bill; Kendall, Lawrie; Silins, Halia

    2008-01-01

    Purpose--The purpose of this article is to review literature in certain areas and report on related results from a study of successful school principalship in the Australian state of Tasmania. Design/methodology/approach--Surveys on successful school principalship were distributed to a population of 195 government schools (excluding colleges and…

  3. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
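    The core single-machine primitive behind the workflow above is a k-way merge of already-sorted record streams keyed on genomic location; the paper's contribution is distributing that primitive bottleneck-free across Hadoop/HBase/Spark, which this stdlib sketch does not attempt. The record tuples below are minimal stand-ins for VCF data lines.

```python
# Sketch: lazy k-way sorted merge of VCF-like record streams keyed on
# (chromosome, position), using the standard library's heap-based merge.
import heapq

# Minimal stand-ins for VCF data lines: (chromosome index, position, sample).
vcf_a = [(1, 100, "a1"), (1, 500, "a2"), (2, 50, "a3")]
vcf_b = [(1, 250, "b1"), (2, 40, "b2"), (2, 900, "b3")]
vcf_c = [(1, 100, "c1"), (3, 10, "c2")]

def merge_sorted_vcfs(*streams):
    """Lazily merge already-sorted record streams by genomic location.
    heapq.merge never materializes the inputs, so it streams files of
    any size in O(k) memory for k inputs."""
    return heapq.merge(*streams, key=lambda rec: (rec[0], rec[1]))

merged = list(merge_sorted_vcfs(vcf_a, vcf_b, vcf_c))
print(merged)
```

A single-machine multiway merge like this is exactly the baseline the paper benchmarks its distributed schemas against; the divide-and-conquer schemas shard this operation by genomic region so the per-shard merges run in parallel.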

  4. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Abstract Background Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754

  5. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis.

    PubMed

    Simonyan, Vahan; Mazumder, Raja

    2014-09-30

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis.

  6. High-Performance Integrated Virtual Environment (HIVE) Tools and Applications for Big Data Analysis

    PubMed Central

    Simonyan, Vahan; Mazumder, Raja

    2014-01-01

    The High-performance Integrated Virtual Environment (HIVE) is a high-throughput cloud-based infrastructure developed for the storage and analysis of genomic and associated biological data. HIVE consists of a web-accessible interface for authorized users to deposit, retrieve, share, annotate, compute and visualize Next-generation Sequencing (NGS) data in a scalable and highly efficient fashion. The platform contains a distributed storage library and a distributed computational powerhouse linked seamlessly. Resources available through the interface include algorithms, tools and applications developed exclusively for the HIVE platform, as well as commonly used external tools adapted to operate within the parallel architecture of the system. HIVE is composed of a flexible infrastructure, which allows for simple implementation of new algorithms and tools. Currently, available HIVE tools include sequence alignment and nucleotide variation profiling tools, metagenomic analyzers, phylogenetic tree-building tools using NGS data, clone discovery algorithms, and recombination analysis algorithms. In addition to tools, HIVE also provides knowledgebases that can be used in conjunction with the tools for NGS sequence and metadata analysis. PMID:25271953

  7. Chemically Inhomogeneous RE-Fe-B Permanent Magnets with High Figure of Merit: Solution to Global Rare Earth Criticality

    PubMed Central

    Jin, Jiaying; Ma, Tianyu; Zhang, Yujing; Bai, Guohua; Yan, Mi

    2016-01-01

    The global rare earth (RE) criticality, especially for the heavily relied-upon Nd/Pr/Dy/Tb in the 2:14:1-type permanent magnets (PMs), has triggered tremendous attempts to develop new alternatives. The prospective candidates La/Ce, despite their high abundance, cannot provide equivalent performance, because the magnetic properties of (La/Ce)2Fe14B are inferior to those of Nd2Fe14B. Here we report high figure-of-merit La/Ce-rich RE-Fe-B PMs, where La/Ce are inhomogeneously distributed among the 2:14:1 phase. The resultant exchange coupling within individual grains and magnetostatic interactions across grains ensure performance much superior to that of a magnet with homogeneously distributed La/Ce. A maximum energy product (BH)max of 42.2 MGOe is achieved even with 36 wt.% La-Ce incorporation. The cost performance, (BH)max/cost, is raised by 27.1% compared to a 48.9 MGOe La/Ce-free commercial magnet. The construction of chemical heterogeneity offers recipes for developing commercial-grade PMs using the less risky La/Ce, and also provides a promising solution to the RE availability constraints. PMID:27553789

  8. Performance evaluation of distributed wavelength assignment in WDM optical networks

    NASA Astrophysics Data System (ADS)

    Hashiguchi, Tomohiro; Wang, Xi; Morikawa, Hiroyuki; Aoyama, Tomonori

    2004-04-01

    In WDM wavelength-routed networks, prior to a data transfer, a call setup procedure is required to reserve a wavelength path between the source-destination node pairs. A distributed approach to connection setup can achieve very high speed, while improving the reliability and reducing the implementation cost of the networks. However, along with many advantages, the distributed scheme poses several major challenges in how the management and allocation of wavelengths can be carried out efficiently. In this paper, we apply a distributed wavelength assignment algorithm named priority-based wavelength assignment (PWA), originally proposed for use in burst-switched optical networks, to the problem of reserving wavelengths in the path reservation protocols of distributed-control optical networks. Instead of assigning wavelengths randomly, this approach lets each node select the "safest" wavelengths based on information from the wavelength utilization history, so unnecessary future contention is prevented. The simulation results presented in this paper show that the proposed protocol can enhance the performance of the system without introducing any apparent drawbacks.
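    The idea of choosing the "safest" wavelength from local history can be sketched as follows. Each node tracks how often each wavelength has suffered reservation contention and, among the currently free wavelengths, reserves the one with the least contention history. The scoring below is a guess at the flavour of the scheme, not the paper's exact PWA algorithm.

```python
# Sketch: history-based "safest wavelength" selection, standing in for
# the priority-based wavelength assignment (PWA) idea. The scoring rule
# is an assumption for illustration.
import random

N_WAVELENGTHS = 8

class PwaNode:
    def __init__(self):
        # Contention count observed per wavelength; lower = safer.
        self.contention = [0] * N_WAVELENGTHS

    def pick_wavelength(self, available):
        """Among currently free wavelengths, choose the historically
        safest one instead of picking at random."""
        return min(available, key=lambda w: self.contention[w])

    def record_contention(self, w):
        self.contention[w] += 1

random.seed(0)
node = PwaNode()
# Simulated history: wavelengths 0-3 repeatedly suffered reservation
# conflicts; wavelengths 4-7 never did.
for _ in range(50):
    node.record_contention(random.randrange(0, 4))

choice = node.pick_wavelength(available=[1, 2, 5, 6])
print(choice, node.contention)
```

Because nodes steer new reservations away from historically contended wavelengths, concurrent setups are statistically less likely to collide, which is the mechanism behind the reported performance gain.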

  9. Experimental Study of Structure/Behavior Relationship for a Metallized Explosive

    NASA Astrophysics Data System (ADS)

    Bukovsky, Eric; Reeves, Robert; Gash, Alexander; Glumac, Nick

    2017-06-01

    Metal powders are commonly added to explosive formulations to modify the blast behavior. Although the detonation velocity is typically reduced compared to the neat explosive, the metal provides other benefits. Aluminum is a common additive to increase the overall energy output, and high-density metals can be useful for enhancing momentum transfer to a target. Typically, metal powder is homogeneously distributed throughout the material; in this study, controlled distributions of metal powder in explosive formulations were investigated. The powder structures were printed using powder bed printing, and the porous structures were filled with explosives to create bulk explosive composites. In all cases, the overall ratio between metal and explosive was maintained, but the powder distribution was varied. Samples utilizing uniform distributions to represent typical materials, discrete pockets of metal powder, and controlled, graded powder distributions were created. Detonation experiments were performed to evaluate the influence of the metal powder design on the output pressure/time and the overall impulse. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  10. A parallel-processing approach to computing for the geographic sciences; applications and systems enhancements

    USGS Publications Warehouse

    Crane, Michael; Steinwand, Dan; Beckmann, Tim; Krpan, Greg; Liu, Shu-Guang; Nichols, Erin; Haga, Jim; Maddox, Brian; Bilderback, Chris; Feller, Mark; Homer, George

    2001-01-01

    The overarching goal of this project is to build a spatially distributed infrastructure for information science research by forming a team of information science researchers and providing them with similar hardware and software tools to perform collaborative research. Four geographically distributed Centers of the U.S. Geological Survey (USGS) are developing their own clusters of low-cost personal computers into parallel computing environments that provide a cost-effective way for the USGS to increase participation in the high-performance computing community. Referred to as Beowulf clusters, these hybrid systems provide the robust computing power required for conducting information science research into parallel computing systems and applications.

  11. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

    Often an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95%-confidence upper limit, from a data sample. The generalized extreme value (GEV) distribution method can be profitably employed in many such situations. It is well known that, according to the Central Limit Theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized distribution. In Space Shuttle entry with 3-string GPS navigation, the Figure of Merit (FOM) value gives a measure of GPS navigated-state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney: a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of a state update. For an acceptable landing it is imperative that the state error remain low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
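    The procedure described above, fitting a GEV distribution to block maxima by maximum likelihood and reading off a high quantile as the limit estimate, can be sketched with SciPy. The "chimney duration" data below are synthetic stand-ins, not the Avionics Development Laboratory simulations.

```python
# Sketch: maximum-likelihood GEV fit to block maxima and a high-quantile
# limit estimate. Data are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic stand-in for per-run maximum "chimney" durations (seconds):
# the maximum of many i.i.d. samples tends toward a GEV law.
block_maxima = np.array([rng.exponential(20.0, size=500).max()
                         for _ in range(300)])

# Maximum-likelihood GEV fit. Note SciPy's shape c is the negated
# shape parameter xi of the usual GEV convention.
c_hat, loc_hat, scale_hat = stats.genextreme.fit(block_maxima)

# Limit-style statistic: the 99th-percentile duration under the fitted GEV.
q99 = stats.genextreme.ppf(0.99, c_hat, loc=loc_hat, scale=scale_hat)
print(c_hat, loc_hat, scale_hat, q99)
```

For exponential parents the fitted shape is near zero (the Gumbel limit); confidence bounds on the quantile would come from the likelihood surface or a bootstrap over the block maxima.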

  12. Reliable file sharing in distributed operating system using web RTC

    NASA Astrophysics Data System (ADS)

    Dukiya, Rajesh

    2017-12-01

    Since the evolution of the distributed operating system, the distributed file system has become an important part of operating systems. P2P is a reliable way of sharing files in a distributed operating system. Introduced in 1999, it later became a topic of high research interest. A peer-to-peer network is a type of network in which peers share the network workload and other related tasks. A P2P network can also be a temporary connection, in which a group of computers connected by USB (Universal Serial Bus) ports enables disk sharing, i.e., file sharing. Currently, P2P requires a special network designed in a P2P way. Nowadays, browsers have a big influence on our lives. In this project we study file-sharing mechanisms for distributed operating systems in web browsers, where we try to find performance bottlenecks; our research aims to improve file sharing in distributed file systems in terms of performance and scalability. Additionally, we discuss the scope of WebTorrent file sharing and free-riding in peer-to-peer networks.

  13. A distributed planning concept for Space Station payload operations

    NASA Technical Reports Server (NTRS)

    Hagopian, Jeff; Maxwell, Theresa; Reed, Tracey

    1994-01-01

    The complex and diverse nature of the payload operations to be performed on the Space Station requires a robust and flexible planning approach. The planning approach for Space Station payload operations must support the phased development of the Space Station, as well as the geographically distributed users of the Space Station. To date, the planning approach for manned operations in space has been one of centralized planning to the n-th degree of detail. This approach, while valid for short duration flights, incurs high operations costs and is not conducive to long duration Space Station operations. The Space Station payload operations planning concept must reduce operations costs, accommodate phased station development, support distributed users, and provide flexibility. One way to meet these objectives is to distribute the planning functions across a hierarchy of payload planning organizations based on their particular needs and expertise. This paper presents a planning concept which satisfies all phases of the development of the Space Station (manned Shuttle flights, unmanned Station operations, and permanent manned operations), and the migration from centralized to distributed planning functions. Identified in this paper are the payload planning functions which can be distributed and the process by which these functions are performed.

  14. A Secure Multicast Framework in Large and High-Mobility Network Groups

    NASA Astrophysics Data System (ADS)

    Lee, Jung-San; Chang, Chin-Chen

    With the widespread use of Internet applications such as teleconferencing, Pay-TV, collaborative tasks, and message services, how to construct and distribute the group session key to all group members securely is becoming more and more important. Instead of adopting point-to-point packet delivery, these emerging applications are based upon the mechanism of multicast communication, which allows group members to communicate with multiple parties efficiently. There are two main issues in the mechanism of multicast communication: key distribution and scalability. The first issue is how to distribute the group session key to all group members securely. The second is how to maintain high performance in large network groups. Group members in conventional multicast systems have to keep numerous secret keys in databases, which makes it very inconvenient for them. Furthermore, in case a member joins or leaves the communication group, many involved participants have to change their own secret keys to preserve the forward secrecy and the backward secrecy. We consequently propose a novel scheme for providing secure multicast communication in large network groups. Our proposed framework not only preserves the forward secrecy and the backward secrecy but also possesses better performance than existing alternatives. Specifically, simulation results demonstrate that our scheme is suitable for high-mobility environments.
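    To see why scalability of rekeying matters, consider the standard logical key hierarchy (LKH) technique, a generic illustration, not necessarily the scheme proposed above: each member holds the keys on its leaf-to-root path of a binary key tree, so a leave event refreshes only O(log n) keys instead of rekeying all n members.

```python
# Sketch: counting which key-tree nodes must be refreshed when one member
# leaves a logical key hierarchy (LKH). This is a standard technique shown
# for context, not the paper's proposed framework.
import math

def path_keys_to_refresh(n_members, leaving_index):
    """Key-node indices (heap numbering, root = 1) that must be refreshed
    when one member leaves a complete binary key tree over n_members
    leaves. Every other member shares at least one of these keys."""
    depth = math.ceil(math.log2(n_members))
    node = 2 ** depth + leaving_index    # leaf position in heap order
    path = []
    while node > 1:                      # walk leaf -> root
        node //= 2
        path.append(node)
    return path

n = 1024
refreshed = path_keys_to_refresh(n, leaving_index=517)
print(len(refreshed), refreshed)
```

For 1024 members only 10 keys change on a leave, which preserves forward secrecy (the departed member's path keys are replaced) at logarithmic rather than linear rekeying cost; high-mobility groups make this gap decisive.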

  15. The Materosion project, a sediment cascade modeling for torrential sediment transfers: final results and perspectives

    NASA Astrophysics Data System (ADS)

    Rudaz, Benjamin; Loye, Alexandre; Mazotti, Benoit; Bardou, Eric; Jaboyedoff, Michel

    2013-04-01

The Materosion project, conducted jointly by the Swiss canton of Valais (CREALP) and the University of Lausanne (CRET), aims at forecasting sediment transfer in alpine torrents using the sediment cascade concept. The study site is the upper Anniviers valley, around the village of Zinal (Valais). The torrents are divided into homogeneous reaches, to and from which sediments are transported by debris flows and bedload transport events. The model runs 100-year simulations with a 1-month time step, each month being assigned a random meteorological event ranging from no activity up to a high-magnitude debris flow. These events are calibrated using local rain data and the observed corresponding debris flow frequencies. The model is applied to ten torrent systems with variable geological contexts, watershed geometries, and sediment supplies. Given the high number of possible event scenarios, 10,000 simulations per torrent are performed, giving a statistical distribution of cumulated volumes and an event size distribution. A way to visualize the complex result data is proposed, and a back-analysis of the internal sediment cascade dynamics is performed. The back-analysis shows that the results' distribution stabilizes after ~5,000 simulations. The model results, especially the range of debris flow volumes, are crucial for maintaining mitigation measures such as retention dams, and give clues for future sediment cascade modeling.
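The simulation loop described (100-year runs with one random event class drawn per month, repeated thousands of times per torrent) can be sketched as follows; the event probabilities and volumes below are invented placeholders, not the values calibrated from the Anniviers rain data.

```python
import random
import statistics

def simulate_century(rng, months=1200):
    """One 100-year run with a 1-month step: each month draws a random
    event class (illustrative frequencies and volumes only)."""
    events = [(0.90, 0.0),       # no activity
              (0.07, 500.0),     # bedload transport event, m^3
              (0.025, 5_000.0),  # small debris flow
              (0.005, 50_000.0)] # high-magnitude debris flow
    total = 0.0
    for _ in range(months):
        u = rng.random()
        cum = 0.0
        for p, vol in events:
            cum += p
            if u < cum:
                total += vol
                break
    return total

rng = random.Random(42)
# Repeating many runs yields the statistical distribution of cumulated
# volumes whose stabilization the back-analysis checks.
volumes = [simulate_century(rng) for _ in range(2000)]
mean_vol = statistics.mean(volumes)
```

Plotting the running mean of `volumes` against the number of runs is the kind of check behind the paper's observation that the distribution stabilizes after roughly 5,000 simulations.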

  16. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases

    PubMed Central

    Nixon, Mark S.; Komogortsev, Oleg V.

    2017-01-01

We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1 Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant analogous relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before. PMID:28575030

  17. Method to assess the temporal persistence of potential biometric features: Application to oculomotor, gait, face and brain structure databases.

    PubMed

    Friedman, Lee; Nixon, Mark S; Komogortsev, Oleg V

    2017-01-01

We introduce the intraclass correlation coefficient (ICC) to the biometric community as an index of the temporal persistence, or stability, of a single biometric feature. It requires, as input, a feature on an interval or ratio scale that is reasonably normally distributed, and it can only be calculated if each subject is tested on 2 or more occasions. For a biometric system with multiple features available for selection, the ICC can be used to measure the relative stability of each feature. We show, for 14 distinct data sets (1 synthetic, 8 eye-movement-related, 2 gait-related, 2 face-recognition-related, and 1 brain-structure-related), that selecting the most stable features, based on the ICC, generally resulted in the best biometric performance. Analyses based on using only the most stable features produced superior Rank-1 Identification Rate (Rank-1-IR) performance in 12 of 14 databases (p = 0.0065, one-tailed), when compared to other sets of features, including the set of all features. For Equal Error Rate (EER), using a subset of only high-ICC features also produced superior performance in 12 of 14 databases (p = 0.0065, one-tailed). In general, then, for our databases, prescreening potential biometric features and choosing only highly reliable features yields better performance than choosing lower-ICC features or all features combined. We also determined that, as the ICC of a group of features increases, the median of the genuine similarity score distribution increases and the spread of this distribution decreases. There were no statistically significant analogous relationships for the impostor distributions. We believe that the ICC will find many uses in biometric research. In the case of eye-movement-driven biometrics, the use of reliable features, as measured by the ICC, allowed us to achieve authentication performance with EER = 2.01%, which was not possible before.
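For readers unfamiliar with the metric, a one-way random-effects ICC(1,1) (one common variant; the paper may use a different one) can be computed directly from an n-subjects-by-k-occasions score table:

```python
def icc_1_1(scores):
    """One-way random-effects ICC(1,1) for a feature measured on n subjects
    over k occasions. scores: list of per-subject lists, all of length k."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    subj_means = [sum(row) / k for row in scores]
    # Between-subjects and within-subjects mean squares from one-way ANOVA.
    bms = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    wms = sum((x - m) ** 2 for row, m in zip(scores, subj_means)
              for x in row) / (n * (k - 1))
    return (bms - wms) / (bms + (k - 1) * wms)

# A stable feature: repeat measurements stay close to each subject's level.
stable = [[10.0, 10.1], [20.0, 19.8], [30.0, 30.2], [40.0, 39.9]]
# An unstable feature: the two occasions disagree wildly within subjects.
unstable = [[10.0, 38.0], [20.0, 11.0], [30.0, 12.0], [40.0, 29.0]]
icc_stable = icc_1_1(stable)
icc_unstable = icc_1_1(unstable)
```

A temporally persistent feature scores near 1, while a feature whose occasions disagree scores near (or below) 0, flagging it for exclusion during feature prescreening.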

  18. Anisotropic Azimuthal Power and Temperature Distribution on Fuel Rod: Impact on Hydride Distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Motta, Arthur; Ivanov, Kostadin; Arramova, Maria

    2015-04-29

The degradation of the zirconium cladding may limit nuclear fuel performance. In the high temperature environment of a reactor, the zirconium in the cladding corrodes, releasing hydrogen in the process. Some of this hydrogen is absorbed by the cladding in a highly inhomogeneous manner. The distribution of the absorbed hydrogen is extremely sensitive to temperature and stress concentration gradients; the absorbed hydrogen tends to concentrate near lower temperatures. This hydrogen absorption and hydride formation can cause cladding failure. This project set out to improve the hydrogen distribution prediction capabilities of the BISON fuel performance code. The project was split into two primary sections: the first was the use of high-fidelity multi-physics coupling to accurately predict temperature gradients as a function of r, θ, and z, and the second was to use experimental data to create an analytical hydrogen precipitation model. The Penn State version of the thermal hydraulics code COBRA-TF (CTF) was successfully coupled to the DeCART neutronics code. This coupled system was verified by testing and validated by comparison to FRAPCON data. The hydrogen diffusion and precipitation experiments successfully determined the heat of transport and precipitation rate constant values to be used within the hydrogen model in BISON; these values can only be determined experimentally. They were successfully implemented in precipitation, diffusion, and dissolution kernels in the BISON code. The coupled output was fed into BISON models, and the hydrogen and hydride distributions behaved as expected. Simulations were conducted in the radial, axial, and azimuthal directions to showcase the full capabilities of the hydrogen model.
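The temperature sensitivity of the hydrogen distribution comes from thermodiffusion: the flux carries a Soret term driven by the heat of transport Q*. A minimal 1-D explicit finite-difference sketch (illustrative parameter values, not BISON's kernels) shows hydrogen drifting toward the colder side of the cladding wall:

```python
# Flux model: J = -D * (dC/dx + Q* * C * dT/dx / (R * T^2))
R = 8.314          # gas constant, J/(mol K)
Q_STAR = 25_000.0  # heat of transport, J/mol (illustrative)
D = 1e-10          # hydrogen diffusivity, m^2/s (illustrative)

nx, L = 50, 6e-4                                      # 0.6 mm wall, 50 nodes
dx = L / (nx - 1)
T = [600.0 - 50.0 * i / (nx - 1) for i in range(nx)]  # hot inside, cold outside
C = [1.0] * nx                                        # uniform initial hydrogen
dt = 0.2 * dx * dx / D                                # stable explicit step

for _ in range(2000):
    # Flux at each cell face, with midpoint values for C and T.
    J = []
    for i in range(nx - 1):
        cm = 0.5 * (C[i] + C[i + 1])
        tm = 0.5 * (T[i] + T[i + 1])
        J.append(-D * ((C[i + 1] - C[i]) / dx
                       + Q_STAR * cm * (T[i + 1] - T[i]) / (dx * R * tm * tm)))
    for i in range(1, nx - 1):
        C[i] -= dt * (J[i] - J[i - 1]) / dx
    # Zero-flux outer boundaries: one-sided divergence at the endpoints.
    C[0] -= dt * J[0] / dx
    C[-1] += dt * J[-1] / dx
```

With the hot side at index 0, the Soret term pushes hydrogen down the temperature gradient, so concentration builds up at the cold end, consistent with the abstract's statement that absorbed hydrogen concentrates near lower temperatures.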

  19. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    PubMed

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates with high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to its sparsity in the distribution. In this study, we define low-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in regions of conformational space far from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (moderately sparse distribution). To make the conformational search efficient, resampling is performed from the outliers of a given rank. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD greatly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
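A rough analogue of the rank idea (a sketch only; FlexDice's hierarchical clustering is more sophisticated) is to score each sampled conformation by its mean distance to its k nearest neighbours and restart simulations from the sparsest points:

```python
import random

def knn_distance(points, idx, k=5):
    """Mean distance to the k nearest neighbours: a simple sparsity score
    (a stand-in for FlexDice's hierarchical outlier ranks)."""
    px, py = points[idx]
    d = sorted(((px - x) ** 2 + (py - y) ** 2) ** 0.5
               for j, (x, y) in enumerate(points) if j != idx)
    return sum(d[:k]) / k

rng = random.Random(0)
# Dense cluster near the origin plus a few sparse points far away,
# mimicking well-sampled metastable states and rarely visited regions.
cluster = [(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(100)]
outliers = [(3.0, 3.0), (-4.0, 1.0), (2.0, -5.0)]
points = cluster + outliers

scores = [knn_distance(points, i) for i in range(len(points))]
# Restart conformational resampling from the sparsest points (highest scores).
seeds = sorted(range(len(points)), key=scores.__getitem__, reverse=True)[:3]
```

Points inside the dense cluster score low while the three isolated points score high and are selected as restart seeds, mirroring how first-ranked outliers seed exploration of the conformational-space edges.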

  20. Performance of two-stage fan with larger dampers on first-stage rotor

    NASA Technical Reports Server (NTRS)

    Urasek, D. C.; Cunnan, W. S.; Stevans, W.

    1979-01-01

The performance of a two-stage, high-pressure-ratio fan having large part-span vibration dampers on the first-stage rotor is presented and compared with that of an aerodynamically identical fan having smaller dampers. Comparisons of the data for the two damper configurations show that, with increased damper size: (1) very high losses in the damper region reduced the overall efficiency of the first-stage rotor by approximately 3 points; (2) the overall performance of each blade row downstream of the damper was not significantly altered, although appreciable differences in the radial distributions of various performance parameters were noted; and (3) the lower performance of the first-stage rotor decreased the overall fan efficiency by more than 1 percentage point.

  1. Simple arithmetic: not so simple for highly math anxious individuals.

    PubMed

    Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-12-01

Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance in high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven multivariate method, to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance in both high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low math anxious individuals, compared to high math anxious individuals, perform better when they activate this network less, a potential indication of more automatic problem solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.

  2. Performance evaluation of a novel high performance pinhole array detector module using NEMA NU-4 image quality phantom for four head SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2015-03-01

Radiolabeled tracer distribution imaging of gamma rays using pinhole collimation is considered promising for small animal imaging. The recent availability of various radiolabeled tracers has advanced the field of diagnostic study and is simultaneously creating demand for high-resolution imaging devices. This paper presents analyses of the optimized parameters of a high performance pinhole array detector module using two phantoms with different characteristics. Monte Carlo simulations using the Geant4 Application for Tomographic Emission (GATE) were executed to assess the performance of a four-head SPECT system incorporating pinhole array collimators. The system is based on a pixelated array of NaI(Tl) crystals coupled to an array of position sensitive photomultiplier tubes (PSPMTs). The detector module was simulated with a 48 mm by 48 mm active area and different pinhole apertures on a tungsten plate. The performance of this system was evaluated using a uniform cylindrical water phantom along with the NEMA NU-4 image quality (IQ) phantom, both filled with 99mTc-labeled radiotracers. SPECT images were reconstructed in which the activity distribution is expected to be well visualized. This system offers a combination of excellent intrinsic spatial resolution, good sensitivity, and signal-to-noise ratio, along with high detection efficiency over an energy range of 20-160 keV. Increasing the number of heads in a stationary system configuration offers increased sensitivity at a spatial resolution similar to that obtained with the current four-head SPECT system design.

  3. Measuring Treasury Bond Portfolio Risk and Portfolio Optimization with a Non-Gaussian Multivariate Model

    NASA Astrophysics Data System (ADS)

    Dong, Yijun

Research on measuring the risk of a bond portfolio and on bond portfolio optimization was previously relatively rare, because the risk factors of bond portfolios are not very volatile. However, this condition has changed recently. The 2008 financial crisis brought high volatility to the risk factors and the related bond securities, even for highly rated U.S. Treasury bonds. Moreover, the risk factors of bond portfolios show fat-tailedness and asymmetry like the risk factors of equity portfolios. Therefore, we need to use advanced techniques to measure and manage the risk of bond portfolios. In our paper, we first apply the autoregressive moving average generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) model with multivariate normal tempered stable (MNTS) distributed innovations to predict the risk factors of U.S. Treasury bonds, and we statistically demonstrate that the MNTS distribution has the ability to capture the properties of the risk factors based on goodness-of-fit tests. Then, based on empirical evidence, we find that the VaR and AVaR estimated by assuming a normal tempered stable distribution are more realistic and reliable than those estimated by assuming a normal distribution, especially for the financial crisis period. Finally, we use mean-risk portfolio optimization to minimize portfolios' potential risks. The empirical study indicates that the optimized bond portfolios have better risk-adjusted performance than the benchmark portfolios for some periods. Moreover, the optimized bond portfolios obtained by assuming a normal tempered stable distribution have improved performance compared to those obtained by assuming a normal distribution.
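The VaR/AVaR comparison can be illustrated with the standard empirical estimators on simulated losses; the heavy-tailed sample below is a crude stand-in for MNTS innovations, used only to show how fat tails inflate AVaR relative to a Gaussian model.

```python
import random

def var_avar(losses, alpha=0.99):
    """Empirical VaR and AVaR (expected shortfall) at level alpha from a
    sample of portfolio losses (positive values = losses)."""
    ordered = sorted(losses)
    cut = int(alpha * len(ordered))
    var = ordered[cut]
    tail = ordered[cut:]            # the worst (1 - alpha) fraction
    return var, sum(tail) / len(tail)

rng = random.Random(1)
# Normal sample vs. a heavier-tailed one (ratio of normals with a floored
# denominator), a crude proxy for fat-tailed innovations.
normal = [rng.gauss(0, 1) for _ in range(100_000)]
heavy = [rng.gauss(0, 1) / max(abs(rng.gauss(0, 1)), 0.3)
         for _ in range(100_000)]

var_n, avar_n = var_avar(normal)
var_h, avar_h = var_avar(heavy)
```

The heavy-tailed sample produces markedly larger VaR and AVaR at the same confidence level, which is the practical reason a Gaussian assumption understates crisis-period risk.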

  4. Implementation of Patient Decision Support Interventions in Primary Care: The Role of Relational Coordination.

    PubMed

    Tietbohl, Caroline K; Rendle, Katharine A S; Halley, Meghan C; May, Suepattra G; Lin, Grace A; Frosch, Dominick L

    2015-11-01

The benefits of patient decision support interventions (DESIs) have been well documented. However, DESIs remain difficult to incorporate into clinical practice. Relational coordination (RC) has been shown to improve performance and quality of care in health care settings. This study aims to demonstrate how applying RC theory to DESI implementation could elucidate underlying issues limiting widespread uptake. Five primary care clinics in Northern California participated in a DESI implementation project. We used a deductive thematic approach guided by behaviors outlined in RC theory to analyze qualitative data collected from ethnographic field notes documenting the implementation process and from focus groups with health care professionals. We then systematically compared the qualitative findings with quantitative DESI distribution data. Based on DESI distribution rates, clinics were placed into 3 performance categories: high, middle, and low. Qualitative data illustrated how each clinic's performance related to RC behaviors. Consistent with RC theory, the high-performing clinic exhibited frequent, timely, and accurate communication and positive working relationships. The 3 middle-performing clinics exhibited high-quality communication within physician-staff teams but limited communication regarding DESI implementation across the clinic. The lowest-performing clinic was characterized by contentious relationships and inadequate communication. Limitations of the study include nonrandom selection of clinics and limited geographic diversity. In addition, the ethnographic data collected documented only DESI implementation practices and not the larger staff interactions contributing to RC. These findings suggest that a high level of RC within clinical settings may be a key component and facilitator of successful DESI implementation. Future attempts to integrate DESIs into clinical practice should consider incorporating interventions designed to increase positive RC behaviors as a potential means to improve uptake. © The Author(s) 2015.

  5. Groundwater Remediation using Bayesian Information-Gap Decision Theory

    NASA Astrophysics Data System (ADS)

    O'Malley, D.; Vesselinov, V. V.

    2016-12-01

Probabilistic analyses of groundwater remediation scenarios frequently fail because the probability of an adverse, unanticipated event occurring is often high. In general, models of flow and transport in contaminated aquifers are always simpler than reality. Further, when a probabilistic analysis is performed, probability distributions are usually chosen more for convenience than correctness. Bayesian Information-Gap Decision Theory (BIGDT) was designed to mitigate these shortcomings of models and probabilistic decision analyses by leveraging a non-probabilistic decision theory: information-gap decision theory. BIGDT considers possible models that have not been explicitly enumerated and does not require us to commit to a particular probability distribution for model and remediation-design parameters. Both the set of possible models and the set of possible probability distributions grow as the degree of uncertainty increases. The fundamental question that BIGDT asks is "How large can these sets be before a particular decision results in an undesirable outcome?" The decision that allows these sets to be the largest is considered the best option. In this way, BIGDT enables robust decision support for groundwater remediation problems. Here we apply BIGDT to a representative groundwater remediation scenario in which different options for hydraulic containment and pump-and-treat are being considered. BIGDT requires many model runs, and for complex models high-performance computing resources are needed. These analyses are carried out on synthetic problems but are applicable to real-world problems such as LANL site contaminations. BIGDT is implemented in Julia (a high-level, high-performance dynamic programming language for technical computing) and is part of the MADS framework (http://mads.lanl.gov/ and https://github.com/madsjulia/Mads.jl).
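The central question ("how large can the uncertainty sets be before a decision fails?") reduces, in the simplest info-gap setting, to finding the largest uncertainty horizon h whose worst case still meets the performance requirement. A toy sketch with two hypothetical remediation options (all numbers invented, and much simpler than BIGDT's model-set formulation):

```python
def robustness(decision_cost, worst_case_loss, budget, h_max=10.0, steps=1000):
    """Info-gap robustness: the largest uncertainty horizon h for which the
    worst-case outcome still satisfies the performance requirement (budget).
    worst_case_loss(h) must be non-decreasing in h."""
    h_hat = 0.0
    for i in range(1, steps + 1):
        h = h_max * i / steps
        if decision_cost + worst_case_loss(h) > budget:
            break
        h_hat = h
    return h_hat

# Two hypothetical options: A is cheaper up front but its worst case
# degrades faster as the uncertainty horizon h grows.
opt_a = robustness(decision_cost=3.0, worst_case_loss=lambda h: 4.0 * h,
                   budget=10.0)
opt_b = robustness(decision_cost=5.0, worst_case_loss=lambda h: 1.0 * h,
                   budget=10.0)
best = "containment" if opt_b > opt_a else "pump-and-treat"
```

Option B is costlier up front but tolerates a much larger uncertainty horizon before violating the budget, so info-gap reasoning prefers it: robustness to unmodeled surprise, not expected cost, drives the choice.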

  6. Sludge accumulation and distribution impact the hydraulic performance in waste stabilisation ponds.

    PubMed

    Coggins, Liah X; Ghisalberti, Marco; Ghadouani, Anas

    2017-03-01

Waste stabilisation ponds (WSPs) are used worldwide for wastewater treatment and require periodic sludge surveys throughout their operation. Sludge accumulation in WSPs can impact performance by reducing the effective volume of the pond and altering the pond hydraulics and wastewater treatment efficiency. Traditionally, sludge heights, and thus sludge volume, have been measured using low-resolution and labour-intensive methods such as the 'sludge judge' and the 'white towel test'. A sonar device, a readily available technology, fitted to a remotely operated vehicle (ROV), was shown to improve the spatial resolution and accuracy of sludge height measurements, as well as to reduce labour and safety requirements. Coupled with a dedicated software package, the profiling of several WSPs has shown that the ROV with an autonomous sonar device can provide sludge bathymetry with greatly increased spatial resolution in a greatly reduced profiling time, leading to a better understanding of the role played by sludge accumulation in the hydraulic performance of WSPs. The high-resolution bathymetry collected was used to support a much more detailed hydrodynamic assessment of systems with low, medium and high accumulations of sludge. The modelling results show that hydraulic performance is influenced not only by the amount of sludge accumulated but also, critically, by its spatial distribution, which reduces the treatment capacity of these systems. In the range of ponds modelled, the reduction in residence time ranged from 33% in a pond with a uniform sludge distribution up to 60% in a pond with highly channelized flow.
The combination of high-resolution measurement of sludge accumulation and hydrodynamic modelling will help in the development of frameworks for wastewater sludge management, including the development of more reliable computer models, and could potentially have wider application in the monitoring of other small to medium water bodies, such as channels, recreational water bodies, and commercial ports. Copyright © 2016 Elsevier Ltd. All rights reserved.
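The first-order volume effect is easy to quantify: sludge displaces effective treatment volume, shrinking the nominal residence time V_eff / Q. The figures below are invented for illustration; the abstract's key point is that spatial distribution (channelized flow) can cut residence time well beyond this simple estimate.

```python
def nominal_residence_time(volume_m3, sludge_m3, inflow_m3_per_day):
    """Nominal hydraulic residence time (days) of a pond after sludge has
    displaced part of the effective treatment volume."""
    return (volume_m3 - sludge_m3) / inflow_m3_per_day

full = nominal_residence_time(20_000, 0, 500)        # clean pond
silted = nominal_residence_time(20_000, 6_000, 500)  # 30% of volume as sludge
reduction = 1 - silted / full                        # fractional loss
```

Here 30% sludge accumulation gives a 30% nominal reduction; the hydrodynamic modelling in the paper shows the actual reduction can reach 60% when the sludge channelizes the flow, which is why bathymetry, not just total sludge volume, matters.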

  7. Federated software defined network operations for LHC experiments

    NASA Astrophysics Data System (ADS)

    Kim, Dongkyun; Byeon, Okhwan; Cho, Kihyeon

    2013-09-01

The most well-known high-energy physics collaboration, the Large Hadron Collider (LHC), which is based on e-Science, has been facing several challenges presented by its extraordinary instruments in terms of the generation, distribution, and analysis of large amounts of scientific data. Currently, data distribution issues are being addressed by adopting an advanced Internet technology called software defined networking (SDN). Stable SDN operations and management are required to keep the federated LHC data distribution networks reliable. Therefore, in this paper, an SDN operation architecture based on the distributed virtual network operations center (DvNOC) is proposed to enable LHC researchers to assume full control of their own global end-to-end data dissemination. This may enhance data delivery performance through data traffic offloading that accounts for delay variation. The evaluation results indicate that the overall end-to-end data delivery performance can be improved over multi-domain SDN environments with the proposed federated SDN/DvNOC operation framework.

  8. Study of data I/O performance on distributed disk system in mask data preparation

    NASA Astrophysics Data System (ADS)

    Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu

    2010-09-01

Data volume is growing larger every day in Mask Data Preparation (MDP), while faster data handling is always required. MDP flows typically introduce a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, throughput may saturate because hard disk I/O and network speeds can become bottlenecks. MDP therefore needs to invest heavily not only in hundreds of CPUs but also in storage and network devices to make throughput faster. NCS introduces a new distributed processing system called "NDE". NDE is a distributed disk system that increases throughput without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance on NDE with the OASIS® data format, which contributes to realizing high throughput.
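The core idea of spreading I/O load across multiple conventional drives can be sketched as round-robin striping (a generic illustration of distributed-disk layout, not NDE's actual design):

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4):
    """Round-robin striping of a byte stream across n_disks: consecutive
    chunks go to different drives so reads/writes proceed in parallel."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks].extend(data[i:i + chunk])
    return disks

def unstripe(disks, chunk: int = 4) -> bytes:
    """Reassemble the original stream by reading the drives round-robin."""
    out = bytearray()
    blocks = [[d[i:i + chunk] for i in range(0, len(d), chunk)] for d in disks]
    for round_idx in range(max(len(b) for b in blocks)):
        for b in blocks:
            if round_idx < len(b):
                out.extend(b[round_idx])
    return bytes(out)

payload = bytes(range(256)) * 4        # 1 KiB of sample mask data
disks = stripe(payload, n_disks=4)
recovered = unstripe(disks)
```

Because each drive holds only 1/n of the stream, aggregate sequential bandwidth scales with the number of drives until the network interconnect becomes the next bottleneck, which is the trade-off the paper's I/O study measures.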

  9. Measurement of baseline and orientation between distributed aerospace platforms.

    PubMed

    Wang, Wen-Qin

    2013-01-01

Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting and receiving platforms and the platforms' orientation toward each other must be measured in real time during data recording. In this paper, we propose an improved pulsed duplex microwave ranging approach, which determines the spatial baseline and orientation between distributed aerospace platforms using the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors due to the separate oscillators used on the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.
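The clock-offset cancellation at the heart of two-way (pulsed duplex) ranging can be checked numerically: each platform timestamps transmit and receive events with its own clock, and a constant offset between the clocks drops out of the round-trip combination. The geometry and timing values below are illustrative.

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_baseline(t_tx_a, t_rx_a, t_rx_b, t_tx_b):
    """Two-way ranging: platform A timestamps its transmit/receive with its
    own clock, platform B with its clock. The unknown constant clock offset
    between A and B cancels in the combination below."""
    round_trip = (t_rx_a - t_tx_a) - (t_tx_b - t_rx_b)  # = 2 * time of flight
    return C * round_trip / 2.0

# Simulated scenario: 12.5 km baseline, 1 ms turnaround at B, and B's clock
# offset from A's by 37 microseconds (which should cancel out).
baseline = 12_500.0
tof = baseline / C
offset = 37e-6
t_tx_a = 0.0
t_rx_b = t_tx_a + tof + offset      # receive time as read on B's clock
t_tx_b = t_rx_b + 1e-3              # reply time, also on B's clock
t_rx_a = t_tx_a + tof + 1e-3 + tof  # echo arrival back on A's clock
est = two_way_baseline(t_tx_a, t_rx_a, t_rx_b, t_tx_b)
```

Note this cancels a constant offset only; residual frequency (drift) errors over the turnaround interval are what the paper's high-precision time-interval estimation addresses.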

  10. A Holarctic Biogeographical Analysis of the Collembola (Arthropoda, Hexapoda) Unravels Recent Post-Glacial Colonization Patterns

    PubMed Central

    Ávila-Jiménez, María Luisa; Coulson, Stephen James

    2011-01-01

We aimed to describe the main Arctic biogeographical patterns of the Collembola, and to analyze the historical factors and current climatic regimes determining Arctic collembolan species distribution. Furthermore, we aimed to identify possible dispersal routes, colonization sources, and glacial refugia for Arctic Collembola. We implemented a Gaussian mixture clustering method on species distribution ranges and applied a distance-based parametric bootstrap test on presence-absence collembolan species distribution data. Additionally, multivariate analysis was performed considering species distributions, biodiversity, cluster distribution, and environmental factors (temperature and precipitation). No clear relation was found between current climatic regimes and species distribution in the Arctic. Gaussian mixture clustering found common elements within Siberian areas, Atlantic areas, the Canadian Arctic, a mid-Siberian cluster, and specific Beringian elements, following the same pattern previously described, using a variety of molecular methods, for Arctic plants. Species distributions hence indicate the influence of recent glacial history, as LGM glacial refugia (mid-Siberia and Beringia) and major dispersal routes to high Arctic island groups can be identified. Endemic species are found in the high Arctic, but no specific biogeographical pattern can be clearly identified as a sign of high Arctic glacial refugia. Ocean current patterns are suggested as an important factor shaping the distribution of Arctic Collembola, which is consistent with Antarctic studies of collembolan biogeography. 
The clear relation between cluster distribution and geographical areas in light of their recent glacial history, the lack of relationship between species distribution and current climatic regimes, and the consistency with Arctic patterns previously described in a range of organisms using a variety of methods suggest that the historical phenomena shaping contemporary collembolan distribution can be inferred through biogeographical analysis. PMID:26467728

  11. Effect of vane twist on the performance of dome swirlers for gas turbine airblast atomizers

    NASA Technical Reports Server (NTRS)

    Micklow, Gerald J.; Dogra, Anju S.; Nguyen, H. Lee

    1990-01-01

    For advanced gas turbine engines, two combustor systems, the lean premixed/prevaporized (LPP) and the rich burn/quick quench/lean burn (RQL) offer great potential for reducing NO(x) emissions. An important consideration for either concept is the development of an advanced fuel injection system that will provide a stable, efficient, and very uniform combustion system over a wide operating range. High-shear airblast fuel injectors for gas turbine combustors have exhibited superior atomization and mixing compared with pressure-atomizing fuel injectors. This improved mixing has lowered NO(x) emissions and the pattern factor, and has enabled combustors to alternate fuels while maintaining a stable, efficient combustion system. The performance of high-shear airblast fuel injectors is highly dependent on the design of the dome swirl vanes. The type of swirl vanes most widely used in gas turbine combustors are usually flat for ease of manufacture, but vanes with curvature will, in general, give superior aerodynamic performance. The design and performance of high-turning, low-loss curved dome swirl vanes with twist along the span are investigated. The twist induces a secondary vortex flow pattern which will improve the atomization of the fuel, thereby producing a more uniform fuel-air distribution. This uniform distribution will increase combustion efficiency while lowering NO(x) emissions. A systematic swirl vane design system is presented based on one-, two-, and three-dimensional flowfield calculations, with variations in vane-turning angle, rate of turning, vane solidity, and vane twist as design parameters.

  12. Effect of vane twist on the performance of dome swirlers for gas turbine airblast atomizers

    NASA Astrophysics Data System (ADS)

    Micklow, Gerald J.; Dogra, Anju S.; Nguyen, H. Lee

    1990-07-01

    For advanced gas turbine engines, two combustor systems, the lean premixed/prevaporized (LPP) and the rich burn/quick quench/lean burn (RQL) offer great potential for reducing NO(x) emissions. An important consideration for either concept is the development of an advanced fuel injection system that will provide a stable, efficient, and very uniform combustion system over a wide operating range. High-shear airblast fuel injectors for gas turbine combustors have exhibited superior atomization and mixing compared with pressure-atomizing fuel injectors. This improved mixing has lowered NO(x) emissions and the pattern factor, and has enabled combustors to alternate fuels while maintaining a stable, efficient combustion system. The performance of high-shear airblast fuel injectors is highly dependent on the design of the dome swirl vanes. The type of swirl vanes most widely used in gas turbine combustors are usually flat for ease of manufacture, but vanes with curvature will, in general, give superior aerodynamic performance. The design and performance of high-turning, low-loss curved dome swirl vanes with twist along the span are investigated. The twist induces a secondary vortex flow pattern which will improve the atomization of the fuel, thereby producing a more uniform fuel-air distribution. This uniform distribution will increase combustion efficiency while lowering NO(x) emissions. A systematic swirl vane design system is presented based on one-, two-, and three-dimensional flowfield calculations, with variations in vane-turning angle, rate of turning, vane solidity, and vane twist as design parameters.

  14. Experiment and application of soft x-ray grazing incidence optical scattering phenomena

    NASA Astrophysics Data System (ADS)

    Chen, Shuyan; Li, Cheng; Zhang, Yang; Su, Liping; Geng, Tao; Li, Kun

    2017-08-01

    For short-wavelength imaging systems, surface scattering is one of the important factors degrading imaging performance. Studying the non-intuitive surface scatter effects that result from practical optical fabrication tolerances is necessary for evaluating the optical performance of high-resolution short-wavelength imaging systems. In this paper, the soft X-ray optical scattering distribution is measured with a soft X-ray reflectometer installed in our laboratory, for different sample mirrors, wavelengths, and grazing angles. Then, for the space solar telescope, these scattered-light distributions are combined with a surface-scattering numerical model of the grazing incidence imaging system to compute the PSF and encircled energy of the telescope's optical system. The analysis and computation show that surface scattering severely degrades the imaging performance of grazing incidence systems.

  15. A High Performance Piezoelectric Sensor for Dynamic Force Monitoring of Landslide.

    PubMed

    Li, Ming; Cheng, Wei; Chen, Jiangpan; Xie, Ruili; Li, Xiongfei

    2017-02-17

    Due to the increasing influence of human engineering activities, it is important to monitor transient disturbances during the evolution of a landslide. For this purpose, a high-performance piezoelectric sensor is presented in this paper. To adapt to the high static and dynamic stress environment in slope engineering, two key techniques, namely the self-structure pressure distribution method (SSPDM) and the capacitive circuit voltage distribution method (CCVDM), are employed in the design of the sensor. The SSPDM greatly improves the compressive capacity, and the CCVDM quantitatively decreases the high direct response voltage. Calibration experiments are then conducted via independently developed static and transient mechanisms, since conventional testing machines cannot meet the calibration requirements. The sensitivity coefficient is obtained, and the results reveal that the sensor has high compressive capacity, stable sensitivity under different static preload levels, and wide-range dynamic measuring linearity. Finally, to reduce the measuring error caused by charge leakage of the piezoelectric element, a low-frequency correction method is proposed and experimentally verified. With these satisfactory static and dynamic properties and improved low-frequency measuring reliability, the sensor can add dynamic monitoring capability to existing landslide monitoring and forecasting systems.

  16. Regulating Charge and Exciton Distribution in High-Performance Hybrid White Organic Light-Emitting Diodes with n-Type Interlayer Switch

    NASA Astrophysics Data System (ADS)

    Luo, Dongxiang; Yang, Yanfeng; Xiao, Ye; Zhao, Yu; Yang, Yibin; Liu, Baiquan

    2017-10-01

    The interlayer (IL) plays a vital role in hybrid white organic light-emitting diodes (WOLEDs); however, little attention has been given to n-type ILs. Herein, an n-type IL is demonstrated, for the first time, to achieve a trade-off among high efficiency, high color rendering index (CRI), and low voltage. The device exhibits a maximum total efficiency of 41.5 lm W-1, the highest among hybrid WOLEDs with n-type ILs. In addition, high CRIs (80-88) at practical luminances (≥1000 cd m-2) have been obtained, satisfying the demand for indoor lighting. Remarkably, a CRI of 88 is the highest among hybrid WOLEDs. Moreover, the device exhibits low voltages, with a turn-on voltage of only 2.5 V (>1 cd m-2), the lowest among hybrid WOLEDs. The intrinsic working mechanism of the device has also been explored; in particular, the role of n-type ILs in regulating the distribution of charges and excitons has been unveiled. The findings demonstrate that the introduction of n-type ILs is effective in developing high-performance hybrid WOLEDs.

  17. Subscale Test Program for the Orion Conical Ribbon Drogue Parachute

    NASA Technical Reports Server (NTRS)

    Sengupta, Anita; Stuart, Phil; Machin, Ricardo; Bourland, Gary; Schwing, Allen; Longmire, Ellen; Henning, Elsa; Sinclair, Rob

    2011-01-01

    A subscale wind tunnel test program for Orion's conical ribbon drogue parachute is under development. The goals of the program are to quantify the aerodynamic performance of the parachute in the wake of the entry vehicle, including the coupling of the parachute and command module dynamics, and to improve understanding of the load distribution within the textile elements of the parachute. The test program is conducted at ten percent of full scale in a 3 x 2.1 m (10 x 7 ft) closed-loop subsonic wind tunnel. The subscale test program is uniquely suited to probing the aerodynamic and structural environment in both a quantitative and qualitative manner. Non-intrusive diagnostics, including particle image velocimetry for wake velocity surveys, high-speed pressure transducers for canopy pressure distribution, and high-speed photogrammetric reconstruction, will be used to quantify the parachute's performance.

  18. Key Reconciliation for High Performance Quantum Key Distribution

    PubMed Central

    Martinez-Mateo, Jesus; Elkouss, David; Martin, Vicente

    2013-01-01

    Quantum Key Distribution (QKD) is carving its place among the tools used to secure communications. While a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has customarily been studied in terms of efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that throughput is the significant magnitude in practical QKD, especially in the case of high-speed devices, where the differences are more marked, and we give examples contrasting the usual postprocessing schemes with new ones from modern coding theory. A good understanding of its implications is very important for the design of modern QKD devices. PMID:23546440
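    The efficiency-versus-throughput distinction can be made concrete with a toy calculation. This is a sketch under simplifying assumptions (asymptotic BB84 secret fraction, a single reconciliation-efficiency figure `f_rec`, and a hypothetical decoder throughput limit); none of the numbers come from the paper:

```python
from math import log2

def h(p):
    # binary entropy function
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def secret_fraction(qber, f_rec):
    # Asymptotic BB84 secret fraction: 1 - f_rec*h(e) bits leaked during
    # reconciliation, h(e) removed by privacy amplification.
    return max(0.0, 1.0 - f_rec * h(qber) - h(qber))

def secret_key_throughput(sifted_rate_bps, qber, f_rec, decoder_rate_bps):
    # The deliverable secret-key rate is limited not only by the code's
    # efficiency f_rec but also by how fast the error-correction decoder
    # can actually process sifted bits -- the throughput argument above.
    processed = min(sifted_rate_bps, decoder_rate_bps)
    return processed * secret_fraction(qber, f_rec)
```

    A slightly less efficient code (larger `f_rec`) that decodes much faster can thus deliver more secret key per second than a near-optimal but slow one.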

  19. Development of a Temperature Sensor for Jet Engine and Space Mission Applications

    NASA Technical Reports Server (NTRS)

    Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik; Culley, Dennis

    2008-01-01

    Electronics for Distributed Turbine Engine Control and Space Exploration Missions are expected to encounter extreme temperatures and wide thermal swings. In particular, circuits deployed in a jet engine compartment are likely to be exposed to temperatures well exceeding 150 C. To meet this requirement, efforts exist at the NASA Glenn Research Center (GRC), in support of the Fundamental Aeronautics Program/Subsonic Fixed Wing Project, to develop temperature sensors geared for use in high-temperature environments. The sensor and associated circuitry need to be located in the engine compartment under a distributed control architecture to simplify system design, improve reliability, and ease signal multiplexing. Several circuits were designed using commercial-off-the-shelf as well as newly developed components to perform temperature sensing at high temperatures. The temperature-sensing circuits are described along with results pertaining to their performance under extreme temperatures.

  20. Research on retailer data clustering algorithm based on Spark

    NASA Astrophysics Data System (ADS)

    Huang, Qiuman; Zhou, Feng

    2017-03-01

    Big data analysis is currently a hot topic in the IT field. Spark is a highly reliable, high-performance distributed parallel computing framework for large data sets. The k-means algorithm is one of the classical partitioning methods in cluster analysis. In this paper, we study the k-means clustering algorithm on Spark. First, the principle of the algorithm is analyzed; clustering analysis is then carried out on supermarket customers through experiments to identify different shopping patterns. The paper also proposes a parallelization of the k-means algorithm on Spark's distributed computing framework and gives a concrete design and implementation scheme. Two years of sales data from a supermarket are used to validate the proposed clustering algorithm and to segment customers; the clustering results help the enterprise adopt different marketing strategies for different customer groups to improve sales performance.
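    The clustering step can be illustrated with a minimal NumPy sketch of Lloyd's k-means iteration. This is the serial algorithm only, not the paper's Spark parallelization (on Spark the assignment step would be a map over partitions and the update step a reduce); all names are illustrative:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means: alternate nearest-center assignment and
    center recomputation until iters is exhausted."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct data points
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assignment step: nearest center for every point
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update step: each center becomes the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels
```

    In the distributed version, each Spark partition computes partial sums and counts per cluster, and the driver combines them to update the centers, which is what makes the algorithm parallelize well.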

  1. NASA Langley Research Center's distributed mass storage system

    NASA Technical Reports Server (NTRS)

    Pao, Juliet Z.; Humes, D. Creig

    1993-01-01

    There is a trend in institutions with high performance computing and data management requirements to explore mass storage systems with peripherals directly attached to a high speed network. The Distributed Mass Storage System (DMSS) Project at NASA LaRC is building such a system and expects to put it into production use by the end of 1993. This paper presents the design of the DMSS, some experiences in its development and use, and a performance analysis of its capabilities. The special features of this system are: (1) workstation class file servers running UniTree software; (2) third party I/O; (3) HIPPI network; (4) HIPPI/IPI3 disk array systems; (5) Storage Technology Corporation (STK) ACS 4400 automatic cartridge system; (6) CRAY Research Incorporated (CRI) CRAY Y-MP and CRAY-2 clients; (7) file server redundancy provision; and (8) a transition mechanism from the existent mass storage system to the DMSS.

  2. Multilevel photonic modules for millimeter-wave phased-array antennas

    NASA Astrophysics Data System (ADS)

    Paolella, Arthur C.; Joshi, Abhay M.; Wright, James G.; Coryell, Louis A.

    1998-11-01

    Optical signal distribution for phased array antennas in communication systems is advantageous to designers. By distributing microwave and millimeter-wave signals through optical fiber, there is potential for improved performance and lower weight; when applied to communication satellites, this weight saving translates into substantially reduced launch costs. The goal of the Phase I Small Business Innovation Research (SBIR) Program is the development of multilevel photonic modules for phased array antennas. The proposed module will ultimately comprise a monolithic InGaAs/InP p-i-n photodetector/p-HEMT power amplifier opto-electronic integrated circuit, with 44 GHz bandwidth and 50 mW output power, integrated with a planar antenna. The photodetector will have high quantum efficiency and will be front-illuminated, thereby improving optical performance. Under Phase I, a module was developed using standard MIC technology with a high-frequency coaxial feed interconnect.

  3. Phase space manipulation in high-brightness electron beams

    NASA Astrophysics Data System (ADS)

    Rihaoui, Marwan M.

    Electron beams have a wide range of applications, including discovery science, medicine, and industry. Electron beams can also be used to power next-generation, high-gradient electron accelerators. The performance of some of these applications could be greatly enhanced by precisely tailoring the phase space distribution of the electron beam. The goal of this dissertation is to explore some of these phase space manipulations. We especially focus on transformations capable of tailoring the beam current distribution. Specifically, we investigate a beamline exchanging phase space coordinates between the horizontal and longitudinal degrees of freedom. The key components necessary for this beamline were constructed and tested. The preliminary beamline was used as a single-shot phase space diagnostic and to produce a train of picosecond electron bunches. We also investigate the use of multiple electron beams to control transverse focusing. Our numerical and analytical studies are supplemented with experiments performed at the Argonne Wakefield Accelerator.

  4. Assessment of Template-Based Modeling of Protein Structure in CASP11

    PubMed Central

    Modi, Vivek; Xu, Qifang; Adhikari, Sam; Dunbrack, Roland L.

    2016-01-01

    We present the assessment of predictions submitted in the template-based modeling (TBM) category of CASP11 (Critical Assessment of Protein Structure Prediction). Model quality was judged on the basis of global and local measures of accuracy on all atoms including side chains. The top groups on 39 human-server targets based on model 1 predictions were LEER, Zhang, LEE, MULTICOM, and Zhang-Server. The top groups on 81 targets by server groups based on model 1 predictions were Zhang-Server, nns, BAKER-ROSETTASERVER, QUARK, and myprotein-me. In CASP11, the best models for most targets were equal to or better than the best template available in the Protein Data Bank, even for targets with poor templates. The overall performance in CASP11 is similar to the performance of predictors in CASP10 with slightly better performance on the hardest targets. For most targets, assessment measures exhibited bimodal probability density distributions. Multi-dimensional scaling of an RMSD matrix for each target typically revealed a single cluster with models similar to the target structure, with a mode in the GDT-TS density between 40 and 90, and a wide distribution of models highly divergent from each other and from the experimental structure, with density mode at a GDT-TS value of ~20. The models in this peak in the density were either compact models with entirely the wrong fold, or highly non-compact models. The results argue for a density-driven approach in future CASP TBM assessments that accounts for the bimodal nature of these distributions instead of Z-scores, which assume a unimodal, Gaussian distribution. PMID:27081927
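    The argument against Z-scores can be illustrated numerically with hypothetical GDT-TS scores (not CASP11 data): for a bimodal score distribution, the grand mean falls between the two modes, so a single-Gaussian Z-score understates how far a wrong-fold model sits from the near-native cluster.

```python
import numpy as np

# Hypothetical GDT-TS scores for one target: a cluster of near-native
# models around ~70 plus a broad peak of wrong-fold models around ~20.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(70, 5, 60), rng.normal(20, 6, 40)])

# Z-scores assume one Gaussian: the grand mean (~50) lies between the two
# modes, so even a clearly wrong-fold model near GDT-TS 30 ends up within
# roughly one standard deviation of the mean.
z = (scores - scores.mean()) / scores.std()
```

    A density-driven assessment would instead locate the two modes and score models relative to the near-native cluster.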

  5. Imaging three-dimensional innervation zone distribution in muscles from M-wave recordings

    NASA Astrophysics Data System (ADS)

    Zhang, Chuan; Peng, Yun; Liu, Yang; Li, Sheng; Zhou, Ping; Zev Rymer, William; Zhang, Yingchun

    2017-06-01

    Objective. To localize neuromuscular junctions in skeletal muscles in vivo, which is of great importance in understanding, diagnosing, and managing neuromuscular disorders. Approach. A three-dimensional global innervation zone imaging technique was developed to characterize the global distribution of innervation zones, as an indication of the location and features of neuromuscular junctions, using electrically evoked high-density surface electromyogram recordings. Main results. The performance of the technique was evaluated in the biceps brachii of six intact human subjects. The geometric centers of the distributions of the reconstructed innervation zones were determined at a mean distance of 9.4  ±  1.4 cm from the reference plane, situated at the medial epicondyle of the humerus. A mean depth of 1.5  ±  0.3 cm was calculated from the geometric centers to the closest points on the skin. The results are consistent with those reported in previous histology studies. It was also found that the volumes and distributions of the reconstructed innervation zones changed as the stimulation intensity increased, until the supramaximal muscle response was achieved. Significance. The results demonstrate the high performance of the proposed imaging technique in noninvasively imaging the global distribution of innervation zones in the three-dimensional muscle space in vivo, and the feasibility of its clinical applications, such as guiding botulinum toxin injections in spasticity management or early diagnosis of the neurodegenerative progression of amyotrophic lateral sclerosis.

  6. Design of distributed JT (Joule-Thomson) effect heat exchanger for superfluid 2 K cooling device

    NASA Astrophysics Data System (ADS)

    Jeong, S.; Park, C.; Kim, K.

    2018-03-01

    Superfluid at 2 K or below is readily obtained from liquid helium at 4.2 K by reducing its vapour pressure. For better cooling performance, however, the cold energy of the helium vaporized in the 2 K chamber can be effectively utilized in a recuperator, which is specially designed in this paper to accomplish the so-called distributed Joule-Thomson (JT) expansion effect. This paper describes the design methodology of a distributed-JT-effect heat exchanger for a 2 K JT cooling device. The newly developed heat exchanger allows continuous, significant pressure drop in the high-pressure part of the recuperative heat exchanger by using a capillary tube. Unlike conventional recuperative heat exchangers, an efficient JT-effect heat exchanger must consider the pressure drop effect as well as the heat transfer characteristics. The heat exchanger for the distributed JT effect actively utilizes continuous pressure loss in the hot stream of the heat exchanger by using a capillary tube with an OD of 0.64 mm and an ID of 0.4 mm. The analysis is performed by dividing the heat exchanger into multiple sub-units, each consisting of a heat exchange part and a JT valve. For a more accurate estimation of the pressure drop of the spirally wound capillary tube, preliminary experiments were carried out to investigate the friction factor at high Reynolds numbers. By using the developed pressure drop correlation and the heat transfer correlation, the specification of the heat exchanger with distributed JT effect for a 2 K JT refrigerator is determined.
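    The segment-wise pressure-drop analysis can be sketched as follows. This is a simplified stand-in: incompressible flow with constant properties and the textbook Blasius friction factor, whereas the paper develops its own friction-factor correlation for a spirally wound capillary at high Reynolds number; all numbers are illustrative:

```python
import math

def capillary_pressure_drop(m_dot, length, d_inner, rho, mu, n_seg=100):
    """Segment-wise Darcy-Weisbach pressure drop in a straight capillary.
    Uses the Blasius correlation f = 0.316*Re**-0.25 (turbulent smooth
    pipe) as a stand-in for the paper's spiral-tube correlation."""
    area = math.pi * d_inner**2 / 4.0
    v = m_dot / (rho * area)              # mean velocity, incompressible
    re = rho * v * d_inner / mu           # Reynolds number
    f = 0.316 * re**-0.25                 # Blasius, roughly 4e3 < Re < 1e5
    # Darcy-Weisbach per segment, summed over n_seg sub-units
    dp_seg = f * (length / n_seg / d_inner) * 0.5 * rho * v**2
    return n_seg * dp_seg, re
```

    In the real design each sub-unit would also update the fluid state (density, viscosity, enthalpy) before the next segment, which is where the distributed JT effect enters.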

  7. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine -129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two order of magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited less than one order of magnitude range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
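    The complementary cumulative distribution functions used above to illustrate reasonable assurance can be sketched by Monte Carlo sampling. This is a generic illustration with a hypothetical lognormal dose distribution spanning roughly two orders of magnitude; the parameters are not taken from the study:

```python
import numpy as np

def ccdf(samples):
    # empirical complementary CDF: P(dose > x) evaluated at each sorted sample
    x = np.sort(samples)
    p = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x, p

rng = np.random.default_rng(1)
# hypothetical lognormal offsite dose (arbitrary units), sigma chosen so the
# sampled range covers about two orders of magnitude
dose = rng.lognormal(mean=np.log(1e-4), sigma=1.15, size=10_000)
x, p = ccdf(dose)
```

    Compliance is then read off as the probability mass lying above a regulatory dose limit.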

  8. Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds

    USGS Publications Warehouse

    O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite

    2012-01-01

    Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest, and the spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated the predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and the use of these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.
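    The ensemble prediction described above amounts to averaging per-cell occurrence probabilities across models. A minimal sketch (the paper does not state its combination rule, so the optional score-based weighting here is an assumption; names are illustrative):

```python
import numpy as np

def ensemble_prediction(preds, weights=None):
    """Combine per-cell occurrence probabilities from several distribution
    models (e.g. GLM, GAM, Random Forest, BRT, MaxEnt). If weights are
    given (e.g. each model's score on spatially segregated test data),
    compute a weighted average instead of a plain mean."""
    stack = np.stack(preds)
    if weights is None:
        return stack.mean(axis=0)
    w = np.asarray(weights, dtype=float)
    return (stack * w[:, None]).sum(axis=0) / w.sum()
```

    Averaging tends to damp the idiosyncratic errors of any single technique, which is the practical appeal of ensembles for delineating protected areas.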

  9. Control of Hydrogen Embrittlement in High Strength Steel Using Special Designed Welding Wire

    DTIC Science & Technology

    2016-03-01

    [Briefing charts; only fragments of the text are recoverable.] Hydrogen-induced cracking (HIC) occurs only when all four contributing factors are simultaneously present, among them a susceptible microstructure, a low (near-ambient) temperature, and hydrogen present in sufficient degree, derived for example from moisture. Welding of armor steels favors all of these conditions. The briefing addresses mitigating HIC and improving weld fatigue performance through weld residual stress control. UNCLASSIFIED. Distribution A: approved for public release; distribution unlimited.

  10. Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen Minh

    2002-03-31

    This report summarizes the work performed by Honeywell during the January 2002 to March 2002 reporting period under Cooperative Agreement DE-FC26-01NT40779 for the U.S. Department of Energy, National Energy Technology Laboratory (DOE/NETL), entitled ''Solid Oxide Fuel Cell Hybrid System for Distributed Power Generation''. The main objective of this project is to develop and demonstrate the feasibility of a highly efficient hybrid system integrating a planar Solid Oxide Fuel Cell (SOFC) and a turbogenerator. During this reporting period the following activities were carried out: conceptual system design trade studies were performed; a system-level performance model was created; dynamic control models are being developed; mechanical properties of candidate heat exchanger materials were investigated; and SOFC performance mapping as a function of flow rate and pressure was completed.

  11. Strategic Staffing? How Performance Pressures Affect the Distribution of Teachers within Schools and Resulting Student Achievement. CEPA Working Paper No. 15-15

    ERIC Educational Resources Information Center

    Grissom, Jason; Kalogrides, Demetra; Loeb, Susanna

    2015-01-01

    School performance pressures apply disproportionately to tested grades and subjects. Using longitudinal administrative data and teacher survey data from a large urban school district, we examine schools' responses to those pressures in assigning teachers to high-stakes and low-stakes classrooms. We find that teachers who produce greater student…

  12. Employment of High-Performance Thin-Layer Chromatography for the Quantification of Oleuropein in Olive Leaves and the Selection of a Suitable Solvent System for Its Isolation with Centrifugal Partition Chromatography.

    PubMed

    Boka, Vasiliki-Ioanna; Argyropoulou, Aikaterini; Gikas, Evangelos; Angelis, Apostolis; Aligiannis, Nektarios; Skaltsounis, Alexios-Leandros

    2015-11-01

    A high-performance thin-layer chromatographic methodology was developed and validated for the isolation and quantitative determination of oleuropein in two extracts of Olea europaea leaves. OLE_A was a crude acetone extract, while OLE_AA was its defatted residue. Initially, high-performance thin-layer chromatography was employed in the purification of oleuropein by fast centrifugal partition chromatography, replacing high-performance liquid chromatography at the stage of determining the distribution coefficient and the retention volume. A densitometric method was developed for the determination of the distribution coefficient, Kc = Cs/Cm. The total concentrations of the target compound in the stationary phase (Cs) and in the mobile phase (Cm) were calculated from the areas measured in the high-performance thin-layer chromatogram. The estimated Kc was also used to calculate the retention volume, VR, with a chromatographic retention equation. The resulting data were successfully applied to the purification of oleuropein, and the experimental results confirmed the theoretical predictions, indicating that high-performance thin-layer chromatography can be an important counterpart in the phytochemical study of natural products. The isolated oleuropein (purity > 95%) was subsequently used to estimate its content in each extract with a simple, sensitive, and accurate high-performance thin-layer chromatography method. The best-fit calibration curve from 1.0 µg/track to 6.0 µg/track of oleuropein was polynomial, and quantification was achieved by UV detection at λ 240 nm. The method was validated, giving rise to an efficient and high-throughput procedure, with the percent relative standard deviation of repeatability and intermediate precision not exceeding 4.9% and accuracy between 92% and 98% (recovery rates). Moreover, the method was validated for robustness, limit of quantitation, and limit of detection. The amount of oleuropein in OLE_A, OLE_AA, and an aqueous extract of olive leaves was estimated to be 35.5% ± 2.7, 51.5% ± 1.4, and 12.5% ± 0.12, respectively. Statistical analysis proved that the method is repeatable and selective, and can be effectively applied to the estimation of oleuropein in olive leaf extracts, potentially replacing the high-performance liquid chromatography methodologies developed so far. Thus, the phytochemical investigation of oleuropein can be based on high-performance thin-layer chromatography coupled with separation processes such as fast centrifugal partition chromatography, showing efficacy and credibility.
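    The densitometric determination described above reduces to two small relations: the distribution coefficient Kc = Cs/Cm (with concentrations taken proportional to measured peak areas) and the classical countercurrent-chromatography retention equation VR = VM + Kc * VS. A minimal sketch (function names and the numbers below are illustrative, not values from the paper):

```python
def distribution_coefficient(area_stationary, area_mobile):
    # Kc = Cs/Cm, with concentrations proportional to HPTLC peak areas
    return area_stationary / area_mobile

def retention_volume(v_mobile, v_stationary, kc):
    # classical CCC/CPC retention equation: VR = VM + Kc * VS
    return v_mobile + kc * v_stationary
```

    With Kc estimated once from a densitometric scan, the retention volume of the target peak on the centrifugal partition column can be predicted before the preparative run.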

  13. Steering Electromagnetic Fields in MRI: Investigating Radiofrequency Field Interactions with Endogenous and External Dielectric Materials for Improved Coil Performance at High Field

    NASA Astrophysics Data System (ADS)

    Vaidya, Manushka

    Although 1.5 and 3 Tesla (T) magnetic resonance (MR) systems remain the clinical standard, the number of 7 T MR systems has increased over the past decade because of the promise of higher signal-to-noise ratio (SNR), which can translate to images with higher resolution, improved image quality, and faster acquisition times. However, a number of technical challenges have prevented exploiting the full potential of ultra-high field (≥ 7 T) MR imaging (MRI), such as the inhomogeneous distribution of the radiofrequency (RF) electromagnetic field and specific energy absorption rate (SAR), which can compromise image quality and patient safety. To better understand the origin of these issues, we first investigated how the spatial distribution of the magnetic field associated with a surface RF coil depends on the operating frequency and the electrical properties of the sample. Our results demonstrated that the asymmetries between the transmit (B1+) and receive (B1-) circularly polarized components of the magnetic field, which are in part responsible for RF inhomogeneity, depend on the electric conductivity of the sample. On the other hand, when sample conductivity is low, a high relative permittivity can result in an inhomogeneous RF field distribution, due to significant constructive and destructive interference patterns between forward and reflected propagating magnetic fields within the sample. We then investigated the use of high permittivity materials (HPMs) as a method to alter the field distribution and improve transmit and receive coil performance in MRI. We showed that HPM placed at a distance from an RF loop coil can passively shape the field within the sample. Our results showed improvement in transmit and receive sensitivity overlap, extension of the coil field-of-view, and enhancement in transmit/receive efficiency.
We demonstrated the utility of this concept by employing HPM to improve performance of an existing commercial head coil for the inferior regions of the brain, where the specific coil's imaging efficiency was inherently poor. Results showed a gain in SNR, while the maximum local and head SAR values remained below the prescribed limits. We showed that increasing coil performance with HPM could improve detection of functional MR activation during a motor-based task for whole brain fMRI. Finally, to gain an intuitive understanding of how HPM improves coil performance, we investigated how HPM separately affects signal and noise sensitivity to improve SNR. For this purpose, we employed a theoretical model based on dyadic Green's functions to compare the characteristics of current patterns, i.e. the optimal spatial distribution of coil conductors, that would either maximize SNR (ideal current patterns), maximize signal reception (signal-only optimal current patterns), or minimize sample noise (dark mode current patterns). Our results demonstrated that the presence of a lossless HPM changed the relative balance of signal-only optimal and dark mode current patterns. For a given relative permittivity, increasing the thickness of the HPM altered the magnitude of the currents required to optimize signal sensitivity at the voxel of interest as well as decreased the net electric field in the sample, which is associated, via reciprocity, to the noise received from the sample. Our results also suggested that signal-only current patterns could be used to identify HPM configurations that lead to high SNR gain for RF coil arrays. We anticipate that physical insights from this work could be utilized to build the next generation of high performing RF coils integrated with HPM.

  14. Application Summary Report 22: LED MR16 Lamps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, Michael P.

    2014-07-23

    This report analyzes the independently tested photometric performance of 27 LED MR16 lamps. It describes initial performance based on light output, efficacy, distribution, color quality, electrical characteristics, and form factor, with comparisons to a selection of benchmark halogen MR16s and ENERGY STAR qualification thresholds. Three types of products were targeted. First, CALiPER sought 3000 K lamps with the highest rated lumen output (i.e., at least 500 lm) or a claim of equivalency to a 50 W halogen MR16 or higher. The test results indicate that while the initial performance of LED MR16s has improved across the board, market-available products still do not produce the lumen output and center beam intensity of typical 50 W halogen MR16 lamps. In fact, most of the 18 lamps in this category had lower lumen output and center beam intensity than a typical 35 W halogen MR16 lamp. Second, CALiPER sought lamps with a CRI of 90 or greater. Only four manufacturers were identified with a product in this category. CALiPER testing confirmed the performance of these lamps, which are a good option for applications where high color fidelity is needed. A vast majority of the LED MR16 lamps have a CRI in the low 80s; this is generally acceptable for ambient lighting, but may not always be acceptable for focal lighting. For typical LED packages, there is a fundamental tradeoff between CRI and efficacy, but the lamps in the high-CRI group in this report still offer comparable performance to the rest of the Series 22 products in other performance areas. Finally, CALiPER sought lamps with a narrow distribution, denoted as a beam angle less than 15°. Five such lamps were purchased. Notably, no lamp was identified as having high lumen output (500 lumens or greater), high CRI (90 or greater), a narrow distribution (15° or less), and an efficacy greater than 60 lm/W. 
This would be an important achievement for LED MR16s, especially if output could reach approximately 700-800 lumens, the approximate equivalent of a 50 W halogen MR16 lamp. Many factors beyond photometric performance should be considered during specification. For example, performance over time, transformer and dimmer compatibility, and total system performance are all critical to a successful installation. Subsequent CALiPER reports will investigate more complex issues.
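The report's combined performance targets can be expressed as a simple screening check. A sketch with hypothetical lamp data (the thresholds are the ones quoted above; the lamp records are invented):

```python
# Sketch (hypothetical data): screen LED MR16 spec sheets against the
# combined targets discussed above -- no tested lamp met all four at once.
THRESHOLDS = {
    "lumens": 500,      # high output (>= 500 lm, ~50 W halogen class)
    "cri": 90,          # high color fidelity (CRI >= 90)
    "beam_angle": 15,   # narrow distribution (<= 15 degrees)
    "efficacy": 60,     # efficacy strictly greater than 60 lm/W
}

def meets_all(lamp: dict) -> bool:
    return (lamp["lumens"] >= THRESHOLDS["lumens"]
            and lamp["cri"] >= THRESHOLDS["cri"]
            and lamp["beam_angle"] <= THRESHOLDS["beam_angle"]
            and lamp["efficacy"] > THRESHOLDS["efficacy"])

lamps = [  # invented records, each missing at least one target
    {"name": "A", "lumens": 520, "cri": 82, "beam_angle": 25, "efficacy": 65},
    {"name": "B", "lumens": 410, "cri": 92, "beam_angle": 36, "efficacy": 58},
    {"name": "C", "lumens": 300, "cri": 81, "beam_angle": 14, "efficacy": 61},
]
print([l["name"] for l in lamps if meets_all(l)])  # [] -- none pass
```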

  15. Control of dispatch dynamics for lowering the cost of distributed generation in the built environment

    NASA Astrophysics Data System (ADS)

    Flores, Robert Joseph

    Distributed generation can provide many benefits over traditional central generation such as increased reliability and efficiency while reducing emissions. Despite these potential benefits, distributed generation is generally not purchased unless it reduces energy costs. Economic dispatch strategies can be designed such that distributed generation technologies reduce overall facility energy costs. In this thesis, a microturbine generator is dispatched using different economic control strategies, reducing the cost of energy to the facility. Several industrial and commercial facilities are simulated using acquired electrical, heating, and cooling load data. Industrial and commercial utility rate structures are modeled after Southern California Edison and Southern California Gas Company tariffs and used to find energy costs for the simulated buildings and corresponding microturbine dispatch. Using these control strategies, building models, and utility rate models, a parametric study examining various generator characteristics is performed. An economic assessment of the distributed generation is then performed for both the microturbine generator and parametric study. Without the ability to export electricity to the grid, the economic value of distributed generation is limited to reducing the individual costs that make up the cost of energy for a building. Any economic dispatch strategy must be built to reduce these individual costs. While the ability of distributed generation to reduce cost depends on factors such as electrical efficiency and operations and maintenance cost, the building energy demand being serviced has a strong effect on cost reduction. Buildings with low load factors can accept distributed generation with higher operating costs (low electrical efficiency and/or high operations and maintenance cost) due to the value of demand reduction. 
As load factor increases, lower operating cost generators are desired due to a larger portion of the building load being met in an effort to reduce demand. In addition, buildings with large thermal demand have access to the least expensive natural gas, lowering the cost of operating distributed generation. Recovery of exhaust heat from DG reduces cost only if the building's thermal demand coincides with the electrical demand. Capacity limits exist where annual savings from operation of distributed generation decrease if further generation is installed. For low operating cost generators, the approximate limit is the average building load. This limit decreases as operating costs increase. In addition, a high capital cost of distributed generation can be accepted if generator operating costs are low. As generator operating costs increase, capital cost must decrease if a positive economic performance is desired.
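The load-factor argument above can be illustrated with a toy tariff model. All rates, profiles, and the flat-output dispatch below are assumptions for illustration, not the thesis's dispatch strategies or tariff models:

```python
# Toy sketch (all numbers assumed): why a building with a low load factor
# can justify distributed generation (DG) that is expensive to run.
# Monthly bill = energy charge + demand charge on the monthly peak.
ENERGY_RATE = 0.15   # $/kWh, assumed retail energy rate
DEMAND_RATE = 20.0   # $/kW of monthly peak, assumed demand charge

def monthly_bill(load_kw, hours_per_step=1.0):
    energy = sum(load_kw) * hours_per_step * ENERGY_RATE
    demand = max(load_kw) * DEMAND_RATE
    return energy + demand

def bill_with_dg(load_kw, dg_kw, dg_cost_per_kwh, hours_per_step=1.0):
    """Run the generator flat out (capped by load) and pay its fuel/O&M."""
    net = [max(l - dg_kw, 0.0) for l in load_kw]
    dg_energy = sum(min(l, dg_kw) for l in load_kw) * hours_per_step
    return monthly_bill(net, hours_per_step) + dg_energy * dg_cost_per_kwh

# Peaky (low load factor) vs. flat (high load factor) profile, same energy:
peaky = [10.0] * 20 + [100.0] * 4
flat = [sum(peaky) / len(peaky)] * len(peaky)

dg, dg_cost = 50.0, 0.20  # 50 kW generator costing more per kWh than retail
s_peaky = monthly_bill(peaky) - bill_with_dg(peaky, dg, dg_cost)
s_flat = monthly_bill(flat) - bill_with_dg(flat, dg, dg_cost)
print(round(s_peaky, 2), round(s_flat, 2))  # 980.0 470.0
```

Even though this generator's marginal cost exceeds the retail energy rate, both buildings save via demand reduction, and the peaky (low load factor) building saves roughly twice as much, mirroring the effect described above.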

  16. An analysis for high speed propeller-nacelle aerodynamic performance prediction. Volume 1: Theory and application

    NASA Technical Reports Server (NTRS)

    Egolf, T. Alan; Anderson, Olof L.; Edwards, David E.; Landgrebe, Anton J.

    1988-01-01

    A computer program, the Propeller Nacelle Aerodynamic Performance Prediction Analysis (PANPER), was developed for the prediction and analysis of the performance and airflow of propeller-nacelle configurations operating over a forward speed range inclusive of high speed flight typical of recent propfan designs. A propeller lifting line, wake program was combined with a compressible, viscous center body interaction program, originally developed for diffusers, to compute the propeller-nacelle flow field, blade loading distribution, propeller performance, and the nacelle forebody pressure and viscous drag distributions. The computer analysis is applicable to single and coaxial counterrotating propellers. The blade geometries can include spanwise variations in sweep, droop, taper, thickness, and airfoil section type. In the coaxial mode of operation the analysis can treat both equal and unequal blade number and rotational speeds on the propeller disks. The nacelle portion of the analysis can treat both free air and tunnel wall configurations including wall bleed. The analysis was applied to many different sets of flight conditions using selected aerodynamic modeling options. The influence of different propeller nacelle-tunnel wall configurations was studied. Comparisons with available test data for both single and coaxial propeller configurations are presented along with a discussion of the results.

  17. Proceedings: Sisal `93

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feo, J.T.

    1993-10-01

    This report contains papers on: Programmability and performance issues; The case of an iterative partial differential equation solver; Implementing the kernel of the Australian Region Weather Prediction Model in Sisal; Even and quarter-even prime length symmetric FFTs and their Sisal implementations; Top-down thread generation for Sisal; Overlapping communications and computations on NUMA architectures; Compiling technique based on dataflow analysis for the functional programming language Valid; Copy elimination for true multidimensional arrays in Sisal 2.0; Increasing parallelism for an optimization that reduces copying in IF2 graphs; Caching in on Sisal; Cache performance of Sisal vs. FORTRAN; FFT algorithms on a shared-memory multiprocessor; A parallel implementation of nonnumeric search problems in Sisal; Computer vision algorithms in Sisal; Compilation of Sisal for a high-performance data-driven vector processor; Sisal on distributed memory machines; A virtual shared addressing system for distributed memory Sisal; Developing a high-performance FFT algorithm in Sisal for a vector supercomputer; Implementation issues for IF2 on a static data-flow architecture; and Systematic control of parallelism in array-based data-flow computation. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  18. Feasibility of introducing ferromagnetic materials to onboard bulk high-Tc superconductors to enhance the performance of present maglev systems

    NASA Astrophysics Data System (ADS)

    Deng, Zigang; Wang, Jiasu; Zheng, Jun; Zhang, Ya; Wang, Suyu

    2013-02-01

    Performance improvement is a long-term research task in promoting the practical application of promising high-temperature superconducting (HTS) magnetic levitation (maglev) vehicle technologies. We studied the feasibility of enhancing the performance of present HTS maglev systems by introducing ferromagnetic materials to onboard bulk superconductors. The principle is to exploit the high magnetic permeability of ferromagnetic materials to alter the flux distribution of the permanent magnet guideway, enhancing the magnetic field density at the position of the bulk superconductors. Ferromagnetic iron plates were added to the upper surface of bulk superconductors, and their geometric and positioning effects on maglev performance were investigated experimentally. Results show that the guidance performance (stability) was greatly enhanced for a particular setup compared to the present maglev system, which is helpful in applications where large guidance forces are needed, such as maglev tracks with tight curves.

  19. Integration of Propulsion-Airframe-Aeroacoustic Technologies and Design Concepts for a Quiet Blended-Wing-Body Transport

    NASA Technical Reports Server (NTRS)

    Hill, G. A.; Brown, S. A.; Geiselhart, K. A.

    2004-01-01

    This paper summarizes the results of studies undertaken to investigate revolutionary propulsion-airframe configurations that have the potential to achieve significant noise reductions over present-day commercial transport aircraft. Using a 300 passenger Blended-Wing-Body (BWB) as a baseline, several alternative low-noise propulsion-airframe-aeroacoustic (PAA) technologies and design concepts were investigated both for their potential to reduce the overall BWB noise levels, and for their impact on the weight, performance, and cost of the vehicle. Two evaluation frameworks were implemented for the assessments. The first was a Multi-Attribute Decision Making (MADM) process that used a Pugh Evaluation Matrix coupled with the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). This process provided a qualitative evaluation of the PAA technologies and design concepts and ranked them based on how well they satisfied chosen design requirements. From the results of the evaluation, it was observed that almost all of the PAA concepts gave the BWB a noise benefit, but degraded its performance. The second evaluation framework involved both deterministic and probabilistic systems analyses that were performed on a down-selected number of BWB propulsion configurations incorporating the PAA technologies and design concepts. These configurations included embedded engines with Boundary Layer Ingesting Inlets, Distributed Exhaust Nozzles installed on podded engines, a High Aspect Ratio Rectangular Nozzle, Distributed Propulsion, and a fixed and retractable aft airframe extension. The systems analyses focused on the BWB performance impacts of each concept using the mission range as a measure of merit. Noise effects were also investigated when enough information was available for a tractable analysis. Some tentative conclusions were drawn from the results. 
One was that the Boundary Layer Ingesting Inlets provided improvements to the BWB's mission range, by increasing the propulsive efficiency at cruise, and therefore offered a means to offset performance penalties imposed by some of the advanced PAA configurations. It was also found that the podded Distributed Exhaust Nozzle configuration imposed high penalties on the mission range and the need for substantial synergistic performance enhancements from an advanced integration scheme was identified. The High Aspect Ratio Nozzle showed inconclusive noise results and posed significant integration difficulties. Distributed Propulsion, in general, imposed performance penalties but may offer some promise for noise reduction from jet-to-jet shielding effects. Finally, a retractable aft airframe extension provided excellent noise reduction for a modest decrease in range.

  20. Full State Feedback Control for Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived, and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.

  1. Distributed Grooming in Multi-Domain IP/MPLS-DWDM Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Qing

    2009-12-01

    This paper studies distributed multi-domain, multilayer provisioning (grooming) in IP/MPLS-DWDM networks. Although many multi-domain studies have emerged over the years, these have primarily considered 'homogeneous' network layers. Meanwhile, most grooming studies have assumed idealized settings with 'global' link state across all layers. Hence there is a critical need to develop practical distributed grooming schemes for real-world networks consisting of multiple domains and technology layers. Along these lines, a detailed hierarchical framework is proposed to implement inter-layer routing, distributed grooming, and setup signaling. The performance of this solution is analyzed in detail using simulation studies, and future work directions are also highlighted.

  2. Reducing ultrafine particle emissions using air injection in wood-burning cookstoves

    DOE PAGES

    Rapp, Vi H.; Caubel, Julien J.; Wilson, Daniel L.; ...

    2016-06-27

    In order to address the health risks and climate impacts associated with pollution from cooking on biomass fires, researchers have focused on designing new cookstoves that improve cooking performance and reduce harmful emissions, specifically particulate matter (PM). One method for improving cooking performance and reducing emissions is using air injection to increase turbulence of unburned gases in the combustion zone. Although air injection reduces total PM mass emissions, the effect on PM size distribution and number concentration has not been thoroughly investigated. Using two new wood-burning cookstove designs from Lawrence Berkeley National Laboratory, this research explores the effect of air injection on cooking performance, PM and gaseous emissions, and PM size distribution and number concentration. Both cookstoves were created using the Berkeley-Darfur Stove as the base platform to isolate the effects of air injection. The thermal performance, gaseous emissions, PM mass emissions, and particle concentrations (ranging from 5 nm to 10 μm in diameter) of the cookstoves were measured during multiple high-power cooking tests. The results indicate that air injection improves cookstove performance and reduces total PM mass but increases total ultrafine (less than 100 nm in diameter) PM concentration over the course of high-power cooking.

  4. Unstable density distribution associated with equatorial plasma bubble

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kherani, E. A., E-mail: esfhan.kherani@inpe.br; Meneses, F. Carlos de; Bharuthram, R.

    2016-04-15

    In this work, we present a simulation study of equatorial plasma bubbles (EPBs) in the evening-time ionosphere. The fluid simulation is performed with a high grid resolution, enabling us to probe the steepened updrafting density structures inside the EPB. Inside the density depletion that eventually evolves into the EPB, both density and updraft are functions of space, from which the density as an implicit function of updraft velocity, i.e., the density distribution function, is constructed. In the present study, this distribution function and the corresponding probability distribution function are found to evolve from Maxwellian to non-Maxwellian as the initial small depletion grows into the EPB. This non-Maxwellian distribution is of a gentle-bump type, consistent with the recently reported distribution within EPBs from space-borne measurements, which offers favorable conditions for small-scale kinetic instabilities.
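The construction described above, a density-weighted distribution over updraft velocity compared against a Maxwellian, can be sketched as follows (illustrative only; the "grid data" here are synthetic Gaussian samples, not simulation output):

```python
import math
import random

# Illustrative sketch (not the paper's simulation): given co-located
# density and updraft-velocity values from a grid, build the
# density-weighted velocity distribution f(v) and compare it against a
# Maxwellian (a Gaussian in one dimension) of the same mean and width.
def weighted_distribution(velocity, density, nbins=25):
    lo, hi = min(velocity), max(velocity)
    width = (hi - lo) / nbins
    counts = [0.0] * nbins
    for v, n in zip(velocity, density):
        k = min(int((v - lo) / width), nbins - 1)
        counts[k] += n          # weight each cell by its density
    total = sum(counts) * width  # normalize so that integral f(v) dv = 1
    centers = [lo + (k + 0.5) * width for k in range(nbins)]
    return centers, [c / total for c in counts]

def maxwellian(v, mean, sigma):
    return math.exp(-0.5 * ((v - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

rng = random.Random(1)
vel = [rng.gauss(0.0, 1.0) for _ in range(20_000)]  # synthetic updrafts
dens = [1.0] * len(vel)          # uniform weights -> Maxwellian expected
centers, f = weighted_distribution(vel, dens)

# Compare the empirical peak with the Maxwellian prediction at that bin:
k = min(range(len(centers)), key=lambda i: abs(centers[i]))
print(round(f[k], 3), round(maxwellian(centers[k], 0.0, 1.0), 3))
```

With non-uniform density weights (as inside a depletion), the same construction would reveal the departure from the Maxwellian shape that the paper reports.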

  5. A new statistical method for design and analyses of component tolerance

    NASA Astrophysics Data System (ADS)

    Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam

    2017-03-01

    Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method is needed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance. We use the percentile method (PM) to estimate the distribution parameters. The findings indicated that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerance. Moreover, in the case of assembled sets, a more extensive tolerance for each component with the same target performance can be utilized.
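A quick sketch of why a quantile-defined family like the generalized lambda distribution suits tolerance work: sampling reduces to pushing uniform draws through Q(u). The Ramberg-Schmeiser parameterization is assumed here, and the lambda values and two-part stack-up are illustrative, not taken from the paper:

```python
import random

# Sketch (illustrative): the Ramberg-Schmeiser generalized lambda
# distribution is defined directly by its quantile function,
#   Q(u) = l1 + (u**l3 - (1 - u)**l4) / l2,
# so Monte Carlo tolerance studies just push u ~ Uniform(0,1) through Q.
def gld_quantile(u, l1, l2, l3, l4):
    return l1 + (u ** l3 - (1.0 - u) ** l4) / l2

def sample_dimension(n, lambdas, rng=random.Random(0)):
    # shared default rng keeps the sketch reproducible
    return [gld_quantile(rng.random(), *lambdas) for _ in range(n)]

# Symmetric lambdas (l3 == l4) give a symmetric part dimension about l1:
lam = (10.0, 0.5, 0.14, 0.14)   # nominal 10 mm part, illustrative shape
parts = sample_dimension(10_000, lam)

# A two-part assembly stack-up, sampled the same way:
assembly = [a + b for a, b in zip(sample_dimension(10_000, lam),
                                  sample_dimension(10_000, lam))]
print(gld_quantile(0.5, *lam))  # 10.0 -- the median equals l1 when l3 == l4
```

In the paper's setting, the four lambdas would come from percentile-method estimates on measured component data rather than being chosen by hand.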

  6. A Distributed Platform for Global-Scale Agent-Based Models of Disease Transmission

    PubMed Central

    Parker, Jon; Epstein, Joshua M.

    2013-01-01

    The Global-Scale Agent Model (GSAM) is presented. The GSAM is a high-performance distributed platform for agent-based epidemic modeling capable of simulating a disease outbreak in a population of several billion agents. It is unprecedented in its scale, its speed, and its use of Java. Solutions to multiple challenges inherent in distributing massive agent-based models are presented. Communication, synchronization, and memory usage are among the topics covered in detail. The memory usage discussion is Java specific. However, the communication and synchronization discussions apply broadly. We provide benchmarks illustrating the GSAM’s speed and scalability. PMID:24465120
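As rough intuition for what such a platform scales up, here is a minimal single-process sketch of an agent-based SIR transmission step (illustrative only; the GSAM itself is a large distributed Java system, and none of the names below come from it):

```python
import random

# Minimal single-process sketch of the agent-based disease-transmission
# step that platforms like the GSAM distribute across machines.
S, I, R = "S", "I", "R"

def step(states, contacts_per_agent, p_transmit, p_recover, rng):
    """One synchronous tick: infectious agents contact random others."""
    n = len(states)
    new = list(states)
    infectious = [i for i, s in enumerate(states) if s == I]
    for src in infectious:
        for _ in range(contacts_per_agent):
            dst = rng.randrange(n)
            if states[dst] == S and rng.random() < p_transmit:
                new[dst] = I
        if rng.random() < p_recover:
            new[src] = R
    return new

rng = random.Random(42)
pop = [I] * 10 + [S] * 990
for _ in range(20):
    pop = step(pop, contacts_per_agent=3, p_transmit=0.1, p_recover=0.2, rng=rng)
print({s: pop.count(s) for s in (S, I, R)})
```

Distributing this across billions of agents is exactly where the communication and synchronization challenges discussed in the paper arise: each tick, infections crossing machine boundaries must be exchanged before any node can advance.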

  7. NRL Fact Book

    DTIC Science & Technology

    2008-01-01

    Distributed network-based battle management High performance computing supporting uniform and nonuniform memory access with single and multithreaded...pallet Airborne EO/IR and radar sensors VNIR through SWIR hyperspectral systems VNIR, MWIR, and LWIR high-resolution systems Wideband SAR systems...meteorological sensors Hyperspectral sensor systems (PHILLS) Mid-wave infrared (MWIR) Indium Antimonide (InSb) imaging system Long-wave infrared (LWIR

  8. Simple arithmetic: not so simple for highly math anxious individuals

    PubMed Central

    Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-01-01

    Abstract Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low—compared to high—math anxious individuals perform better when they activate this network less—a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. PMID:29140499

  9. Generic tags for Mn(ii) and Gd(iii) spin labels for distance measurements in proteins.

    PubMed

    Yang, Yin; Gong, Yan-Jun; Litvinov, Aleksei; Liu, Hong-Kai; Yang, Feng; Su, Xun-Cheng; Goldfarb, Daniella

    2017-10-11

    High-affinity chelating tags for Gd(iii) and Mn(ii) ions that provide valuable high-resolution distance restraints for biomolecules were used as spin labels for double electron-electron resonance (DEER) measurements. The availability of a generic tag that can bind both metal ions and provide a narrow and predictable distance distribution for both ions is attractive owing to their different EPR-related characteristics. Herein we introduced two paramagnetic tags, 4PSPyMTA and 4PSPyNPDA, which are conjugated to cysteine residues through a stable thioether bond, forming a short and, depending on the metal ion coordination mode, a rigid tether with the protein. These tags exhibit high affinity for both Mn(ii) and Gd(iii) ions. The DEER performance of the 4PSPyMTA and 4PSPyNPDA tags, in complex with Gd(iii) or Mn(ii), was evaluated for three double cysteine mutants of ubiquitin, and the Gd(iii)-Gd(iii) and Mn(ii)-Mn(ii) distance distributions they generated were compared. All three Gd(iii) complexes of the ubiquitin-PyMTA and ubiquitin-PyNPDA conjugates produced similar and expected distance distributions. In contrast, significant variations in the maxima and widths of the distance distributions were observed for the Mn(ii) analogs. Furthermore, whereas PyNPDA-Gd(iii) and PyNPDA-Mn(ii) delivered similar distance distributions, appreciable differences were observed for two mutants with PyMTA, with the Mn(ii) analog exhibiting a broader distance distribution and shorter distances. ELDOR (electron-electron double resonance)-detected NMR measurements revealed some distribution in the Mn(ii) coordination environment for the protein conjugates of both tags but not for the free tags. The broader distance distributions generated by 4PSPyMTA-Mn(ii), as compared with Gd(iii), were attributed to the distributed location of the Mn(ii) ion within the PyMTA chelate owing to its smaller size and lower coordination number that leave the pyridine nitrogen uncoordinated. 
Accordingly, in terms of distance resolution, 4PSPyNPDA can serve as an effective generic tag for Gd(iii) and Mn(ii), whereas 4PSPyMTA is efficient for Gd(iii) only. This comparison between Gd(iii) and Mn(ii) suggests that PyMTA model compounds may not predict sufficiently well the performance of PyMTA-Mn(ii) as a tag for high-resolution distance measurements in proteins because the protein environment can influence its coordination mode.

  10. Strategies to Achieve High-Performance White Organic Light-Emitting Diodes

    PubMed Central

    Zhang, Lirong; Li, Xiang-Long; Luo, Dongxiang; Xiao, Peng; Xiao, Wenping; Song, Yuhong; Ang, Qinshu; Liu, Baiquan

    2017-01-01

    As one of the most promising technologies for next-generation lighting and displays, white organic light-emitting diodes (WOLEDs) have received enormous worldwide interest due to their outstanding properties, including high efficiency, bright luminance, wide viewing angle, fast switching, lower power consumption, ultralight and ultrathin characteristics, and flexibility. In this invited review, the main parameters which are used to characterize the performance of WOLEDs are introduced. Subsequently, the state-of-the-art strategies to achieve high-performance WOLEDs in recent years are summarized. Specifically, the manipulation of charges and excitons distribution in the four types of WOLEDs (fluorescent WOLEDs, phosphorescent WOLEDs, thermally activated delayed fluorescent WOLEDs, and fluorescent/phosphorescent hybrid WOLEDs) are comprehensively highlighted. Moreover, doping-free WOLEDs are described. Finally, issues and ways to further enhance the performance of WOLEDs are briefly clarified. PMID:29194426

  11. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel high performance networking technologies, along with measurements that give insight into the networks of the future.

  12. Regional brain activity that determines successful and unsuccessful working memory formation.

    PubMed

    Teramoto, Shohei; Inaoka, Tsubasa; Ono, Yumie

    2016-08-01

    Using EEG source reconstruction with Multiple Sparse Priors (MSP), we investigated the regional brain activity that determines successful memory encoding in two participant groups with high and low accuracy rates. Eighteen healthy young adults performed a sequential visual Sternberg memory task. The 32-channel EEG was measured continuously while participants performed two 70-trial blocks of the memory task. The regional brain activity corresponding to oscillatory EEG activity in the alpha band (8-13 Hz) during the encoding period was analyzed by MSP implemented in SPM8. We divided the data of all participants into two groups (low- and high-performance groups) and analyzed differences in regional brain activity between trials in which participants answered correctly and incorrectly within each group. Participants in the low-performance group showed a significant activity increase in the visual cortices in their successful trials compared to unsuccessful ones. On the other hand, those in the high-performance group showed a significant activity increase in widely distributed cortical regions in the frontal, temporal, and parietal areas, including those proposed in Baddeley's working memory model. Further comparison of activated cortical volumes and mean current source intensities within the cortical regions of Baddeley's model during memory encoding demonstrated that participants in the high-performance group showed enhanced activity in the right premotor cortex, which plays an important role in maintaining visuospatial attention, compared to those in the low-performance group. Our results suggest that better memory-encoding ability is associated with distributed and stronger regional brain activity, including the premotor cortex, possibly indicating efficient allocation of cognitive load and maintenance of attention.

  13. Improvement of kink characteristic of proton-implanted VCSEL with ITO overcoating

    NASA Astrophysics Data System (ADS)

    Lai, Fang-I.; Chang, Ya-Hsien; Laih, Li-Hong; Kuo, Hao-chung; Wang, S. C.

    2004-06-01

    Proton-implanted VCSELs have been demonstrated with good reliability and decent modulation speed up to 1.25 Gb/s. However, kinks in the current versus light output (L-I) curve have always been an issue in gain-guided proton-implanted VCSELs. The kink-related jitter and noise performance made it difficult to meet 2.5 Gb/s (OC-48) requirements. The kinks in the L-I curve can be attributed to a non-uniform carrier distribution inducing a non-uniform gain distribution within the emission area. In this paper, the effects of a Ti/ITO transparent over-coating on proton-implanted AlGaAs/GaAs VCSELs (15 μm diameter aperture) are investigated. The distribution of kinks in L-I characteristics across a 2-inch wafer is greatly improved compared to the conventional process. These VCSELs exhibit nearly kink-free L-I output performance with threshold currents of ~3 mA and slope efficiencies of ~0.25 W/A. The near-field emission patterns suggest the Ti/ITO over-coating facilitates current spreading and uniform carrier distribution at the top VCSEL contact, thus enhancing laser performance. Finally, we performed high-speed modulation measurements. The eye diagram of proton-implanted VCSELs with Ti/ITO transparent over-coating operating at 2.125 Gb/s with 10 mA bias and 9 dB extinction ratio shows a very clean eye with jitter less than 35 ps.

  14. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters appears to be important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.

  15. Assessing deep and shallow learning methods for quantitative prediction of acute chemical toxicity.

    PubMed

    Liu, Ruifeng; Madore, Michael; Glover, Kyle P; Feasel, Michael G; Wallqvist, Anders

    2018-05-02

    Animal-based methods for assessing chemical toxicity are struggling to meet testing demands. In silico approaches, including machine-learning methods, are promising alternatives. Recently, deep neural networks (DNNs) were evaluated and reported to outperform other machine-learning methods for quantitative structure-activity relationship modeling of molecular properties. However, most of the reported performance evaluations relied on global performance metrics, such as the root mean squared error (RMSE) between the predicted and experimental values of all samples, without considering the impact of sample distribution across the activity spectrum. Here, we carried out an in-depth analysis of DNN performance for quantitative prediction of acute chemical toxicity using several datasets. We found that the overall performance of DNN models on datasets of up to 30,000 compounds was similar to that of random forest (RF) models, as measured by the RMSE and correlation coefficients between the predicted and experimental results. However, our detailed analyses demonstrated that global performance metrics are inappropriate for datasets with a highly uneven sample distribution, because they show a strong bias for the most populous compounds along the toxicity spectrum. For highly toxic compounds, DNN and RF models trained on all samples performed much worse than the global performance metrics indicated. Surprisingly, our variable nearest neighbor method, which utilizes only structurally similar compounds to make predictions, performed reasonably well, suggesting that information of close near neighbors in the training sets is a key determinant of acute toxicity predictions.
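
    The variable nearest neighbor idea described above, predicting a query compound's toxicity only from structurally similar training compounds, can be sketched as follows. The Tanimoto similarity on binary fingerprints, the 0.3 similarity threshold, and the similarity-weighted average are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def nn_predict(query_fp, train_fps, train_tox, sim_threshold=0.3):
    """Similarity-weighted toxicity estimate from near neighbors only.

    Returns None when no training compound clears the threshold,
    i.e. the model abstains instead of extrapolating.
    """
    sims = np.array([tanimoto(query_fp, fp) for fp in train_fps])
    keep = sims >= sim_threshold
    if not keep.any():
        return None
    w = sims[keep]
    return float(np.dot(w, np.asarray(train_tox)[keep]) / w.sum())

# Toy 8-bit fingerprints and toxicity values (hypothetical data).
train_fps = [[1,1,0,0,1,0,0,0], [1,1,1,0,1,0,0,0], [0,0,0,1,0,1,1,1]]
train_tox = [2.0, 2.2, 5.0]
query = [1,1,0,0,1,1,0,0]   # resembles the first two compounds
print(nn_predict(query, train_fps, train_tox))
```

    Only the first two compounds clear the threshold here, so the dissimilar, highly toxic third compound does not distort the estimate, which is the behavior the abstract credits for the method's robustness.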

  16. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

    Sparse matrix-vector multiplication (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large scale computations. In this paper, our target systems are high-end multi-core architectures and we use a message passing interface (MPI) + open multiprocessing (OpenMP) hybrid programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare it with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topologies. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
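
    The core trick of a symmetric SpMVM, applying each stored entry of the upper triangle to two rows of the result, can be sketched in serial form. The MPI+OpenMP distribution and communication overlap described in the paper are omitted, and the COO-style storage below is an illustrative assumption:

```python
import numpy as np

def spmvm_symmetric(n, rows, cols, vals, x):
    """y = A @ x for symmetric A stored as its upper triangle (COO).

    Each off-diagonal entry a_ij is stored once but applied twice:
    to y_i (as a_ij * x_j) and to y_j (as a_ij * x_i), roughly halving
    the matrix data that must be streamed from memory.
    """
    y = np.zeros(n)
    for i, j, v in zip(rows, cols, vals):
        y[i] += v * x[j]
        if i != j:               # mirror the off-diagonal entry
            y[j] += v * x[i]
    return y

# Upper triangle of the 3x3 symmetric matrix
# [[4, 1, 0],
#  [1, 3, 2],
#  [0, 2, 5]]
rows = [0, 0, 1, 1, 2]
cols = [0, 1, 1, 2, 2]
vals = [4.0, 1.0, 3.0, 2.0, 5.0]
x = np.array([1.0, 2.0, 3.0])
print(spmvm_symmetric(3, rows, cols, vals, x))   # matches the dense product
```

    In a distributed setting the second update (to y_j) is what generates the extra communication that the paper's implementation hides behind computation.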

  17. Using Java for distributed computing in the Gaia satellite data processing

    NASA Astrophysics Data System (ADS)

    O'Mullane, William; Luri, Xavier; Parsons, Paul; Lammers, Uwe; Hoar, John; Hernandez, Jose

    2011-10-01

    In recent years Java has matured into a stable, easy-to-use language with the flexibility of an interpreter (for reflection, etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999, they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java, as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1,000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution, which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. It has been running successfully since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.

  18. The Case for Distributed Engine Control in Turbo-Shaft Engine Systems

    NASA Technical Reports Server (NTRS)

    Culley, Dennis E.; Paluszewski, Paul J.; Storey, William; Smith, Bert J.

    2009-01-01

    The turbo-shaft engine is an important propulsion system used to power vehicles on land, sea, and in the air. As the power plant for many high performance helicopters, the characteristics of the engine and control are critical to proper vehicle operation as well as being the main determinant of overall vehicle performance. When applied to vertical flight, important distinctions exist in the turbo-shaft engine control system due to the high degree of dynamic coupling between the engine and airframe and the effect on vehicle handling characteristics. In this study, the impact of engine control system architecture is explored relative to engine performance, weight, reliability, safety, and overall cost. Comparison of the impact of architecture on these metrics is investigated as the control system is modified from a legacy centralized structure to a more distributed configuration. A composite strawman system, typical of turbo-shaft engines in the 1000 to 2000 hp class, is described and used for comparison. The overall benefits of these changes to control system architecture are assessed. The availability of supporting technologies to achieve this evolution is also discussed.

  19. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of a wind turbine blade. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population rapidly toward the Pareto front and maintain population diversity with high efficiency. As examples, two- and three-objective designs of a 1.5 MW wind turbine blade are carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining population diversity. In comparison with conventional evolution algorithms, VD-MOEA displays a dramatic improvement in both convergence and diversity preservation when handling complex problems with multiple variables, objectives, and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.

  20. Pattern dependence in high-speed Q-modulated distributed feedback laser.

    PubMed

    Zhu, Hongli; Xia, Yimin; He, Jian-Jun

    2015-05-04

    We investigate the pattern dependence in a high-speed Q-modulated distributed feedback (DFB) laser based on its complete physical structure and material properties. The structure parameters of the gain section as well as the modulation and phase sections are all taken into account in simulations based on an integrated traveling wave model. Using this model, we show that an example Q-modulated DFB laser can achieve an extinction ratio of 6.8 dB with a jitter of 4.7 ps and a peak intensity fluctuation of less than 15% for a 40 Gbps RZ modulation signal. The simulation method proves very useful for complex laser structure design and high-speed performance optimization, as well as for providing physical insight into the operation mechanism.

  1. A Unix SVR-4-OS9 distributed data acquisition for high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drouhin, F.; Schwaller, B.; Fontaine, J.C.

    1998-08-01

    The distributed data acquisition (DAQ) system developed by the GRPHE (Groupe de Recherche en Physique des Hautes Energies) group is a combination of hardware and software dedicated to high energy physics. The system described here is used in the beam tests of the CMS tracker. The central processor of the system is a RISC CPU hosted in a VME card, running a POSIX-compliant UNIX system. Specialized real-time OS9 VME cards perform the instrumentation control. The main data flow goes over a deterministic high speed network. The Unix system manages a list of OS9 front-end systems with a synchronization protocol running over a TCP/IP layer.

  2. High-speed polarization-encoded quantum key distribution based on silicon photonic integrated devices

    NASA Astrophysics Data System (ADS)

    Bunandar, Darius; Urayama, Junji; Boynton, Nicholas; Martinez, Nicholas; Derose, Christopher; Lentine, Anthony; Davids, Paul; Camacho, Ryan; Wong, Franco; Englund, Dirk

    We present a compact polarization-encoded quantum key distribution (QKD) transmitter near a 1550-nm wavelength implemented on a CMOS-compatible silicon-on-insulator photonics platform. The transmitter generates arbitrary polarization qubits at gigahertz bandwidth with an extinction ratio better than 30 dB using high-speed carrier-depletion phase modulators. We demonstrate the performance of this device by generating secret keys at a rate of 1 Mbps in a complete QKD field test. Our work shows the potential of using advanced photonic integrated circuits to enable high-speed quantum-secure communications. This work was supported by the SECANT QKD Grand Challenge, the Samsung Global Research Outreach Program, and the Air Force Office of Scientific Research.

  3. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than the state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for the applications involving big data.
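
    The consensus-plus-subgradient pattern described above, where each agent averages its neighbors' models and then steps along the subgradient of its own local loss, can be sketched with a linear least-squares model standing in for the SLFN. The three fully connected agents, uniform mixing weights, constant step size, and synthetic data are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])

# Three agents, each with its own local data (revealed only to itself).
local = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    y = X @ w_true + 0.01 * rng.normal(size=20)
    local.append((X, y))

# Fully connected network: uniform mixing weights of 1/3.
W = np.full((3, 3), 1.0 / 3.0)
models = np.zeros((3, 2))
step = 0.05

for _ in range(300):
    mixed = W @ models                    # consensus: average neighbors' models
    for a, (X, y) in enumerate(local):
        grad = 2.0 * X.T @ (X @ mixed[a] - y) / len(y)   # local (sub)gradient
        models[a] = mixed[a] - step * grad               # local update

print(models)   # every agent approaches the centralized least-squares solution
```

    Although no agent ever sees the others' data, the mixing step propagates information through the network, so each local model converges toward what a centralized batch learner trained on all the data would find.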

  4. Preliminary results with microchannel array plates employing curved microchannels to inhibit ion feedback. [for photon counters

    NASA Technical Reports Server (NTRS)

    Timothy, J. G.; Bybee, R. L.

    1977-01-01

    Up to now, microchannel array plates (MCPs) have been constructed with microchannels having a straight geometry and hence have been prone to ion-feedback instabilities at high operating potentials and high ambient pressures. This paper describes the performances of MCPs with curved (J and C configuration) microchannels to inhibit ion feedback. Plates with curved microchannels have demonstrated performances comparable to those of conventional channel electron multipliers with saturated output pulse-height distributions and modal gain values in excess of 10 to the 6th electrons/pulse.

  5. High-density fiber-optic DNA random microsphere array.

    PubMed

    Ferguson, J A; Steemers, F J; Walt, D R

    2000-11-15

    A high-density fiber-optic DNA microarray sensor was developed to monitor multiple DNA sequences in parallel. Microarrays were prepared by randomly distributing DNA probe-functionalized 3.1-microm-diameter microspheres in an array of wells etched in a 500-microm-diameter optical imaging fiber. Registration of the microspheres was performed using an optical encoding scheme and a custom-built imaging system. Hybridization was visualized using fluorescent-labeled DNA targets with a detection limit of 10 fM. Hybridization times of seconds are required for nanomolar target concentrations, and analysis is performed in minutes.

  6. Performance of low resistance microchannel plate stacks

    NASA Technical Reports Server (NTRS)

    Siegmund, O. H. W.; Stock, J.

    1991-01-01

    Results are presented from an evaluation of three sets of low resistance microchannel plate (MCP) stacks; the tests encompassed gain, pulse-height distribution, background rate, event rate capacity as a function of illuminated area, and performance changes due to high temperature bakeout and high flux UV scrub. The MCPs are found to heat up, requiring from minutes to hours to reach stabilization. The event rate is strongly dependent on the size of the area being illuminated, with larger areas experiencing a gain drop onset at lower rates than smaller areas.

  7. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens.

    PubMed

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D; Volz, Kerstin

    2017-06-01

    We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. High Capacity Single Table Performance Design Using Partitioning in Oracle or PostgreSQL

    DTIC Science & Technology

    2012-03-01

    (Abstract not extracted; only fragments of the report's front matter survive. The recoverable contents indicate that the report evaluates response time (e.g., the time to seek and return one record) and additional key performance indicators (KPIs) for high-capacity single-table designs using partitioning in Oracle or PostgreSQL, and defines acronyms including ASM (Automatic Storage Management), CPU, I/O, and OS.)

  9. An experimental investigation of the flow physics of high-lift systems

    NASA Technical Reports Server (NTRS)

    Thomas, Flint O.; Nelson, R. C.

    1995-01-01

    This progress report is a series of overviews outlining experiments on the flow physics of confluent boundary layers for high-lift systems. The research objectives include establishing the role of confluent boundary layer flow physics in high-lift production; contrasting confluent boundary layer structures for optimum and non-optimum C_L cases; forming a high quality, detailed archival database for CFD/modelling; and examining the role of relaminarization and streamline curvature. Goals of this research include completing an LDV study of an optimum C_L case; performing detailed LDV confluent boundary layer surveys for multiple non-optimum C_L cases; obtaining skin friction distributions for both optimum and non-optimum C_L cases for scaling purposes; data analysis and inner and outer variable scaling; setting up and performing relaminarization experiments; and a final report establishing the role of leading edge confluent boundary layer flow physics on high-lift performance.

  10. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.

  11. Determination of the absolute molecular weight averages and molecular weight distributions of alginates used as ice cream stabilizers by using multiangle laser light scattering measurements.

    PubMed

    Turquois, T; Gloria, H

    2000-11-01

    High-performance size exclusion chromatography with multiangle laser light scattering detection (HPSEC-MALLS) was used to characterize complete molecular weight distributions for a range of commercial alginates used as ice cream stabilizers. For the samples investigated, molecular weight averages were found to vary between 115,000 and 321,700 g/mol, and polydispersity indexes varied from 1.53 to 3.25. These samples displayed a high content of low-molecular-weight material: the weight percentage of material below 100,000 g/mol ranged between 6.9 and 54.4%.

  12. Design and Analyses of High Aspect Ratio Nozzles for Distributed Propulsion Acoustic Measurements

    NASA Technical Reports Server (NTRS)

    Dippold, Vance F., III

    2016-01-01

    A series of three convergent round-to-rectangular high-aspect ratio nozzles were designed for acoustics measurements. The nozzles have exit area aspect ratios of 8:1, 12:1, and 16:1. With septa inserts, these nozzles will mimic an array of distributed propulsion system nozzles, as found on hybrid wing-body aircraft concepts. Analyses were performed for the three nozzle designs and showed that the flow through the nozzles was free of separated flow and shocks. The exit flow was mostly uniform with the exception of a pair of vortices at each span-wise end of the nozzle.

  13. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    NASA Astrophysics Data System (ADS)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting multiple fault containment regions nearly simultaneously, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus is on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results to scenarios that could not be physically tested is also presented.

  14. Probabilistic cosmological mass mapping from weak lensing shear

    DOE PAGES

    Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...

    2017-04-10

    Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  15. Improvement of two-way continuous-variable quantum key distribution with virtual photon subtraction

    NASA Astrophysics Data System (ADS)

    Zhao, Yijia; Zhang, Yichen; Li, Zhengyu; Yu, Song; Guo, Hong

    2017-08-01

    We propose a method to improve the performance of two-way continuous-variable quantum key distribution protocol by virtual photon subtraction. The virtual photon subtraction implemented via non-Gaussian post-selection not only enhances the entanglement of two-mode squeezed vacuum state but also has advantages in simplifying physical operation and promoting efficiency. In two-way protocol, virtual photon subtraction could be applied on two sources independently. Numerical simulations show that the optimal performance of renovated two-way protocol is obtained with photon subtraction only used by Alice. The transmission distance and tolerable excess noise are improved by using the virtual photon subtraction with appropriate parameters. Moreover, the tolerable excess noise maintains a high value with the increase in distance so that the robustness of two-way continuous-variable quantum key distribution system is significantly improved, especially at long transmission distance.

  16. Pharmacy student absenteeism and academic performance.

    PubMed

    Hidayat, Levita; Vansal, Sandeep; Kim, Esther; Sullivan, Maureen; Salbu, Rebecca

    2012-02-10

    To assess the association of pharmacy students' personal characteristics with absenteeism and academic performance. A survey instrument was distributed to first- (P1) and second-year (P2) pharmacy students to gather characteristics including employment status, travel time to school, and primary source of educational funding. In addition, absences from specific courses and reasons for not attending classes were assessed. Participants were divided into "high" and "low" performers based on grade point average. One hundred sixty survey instruments were completed and 135 (84.3%) were included in the study analysis. Low performers were significantly more likely than high performers to have missed more than 8 hours in therapeutics courses. Low performers were significantly more likely than high performers to miss class when the class was held before or after an examination, and low performers were significantly more likely to believe that participating in class did not benefit them. There was a negative association between the number of hours students missed and their performance in specific courses. These findings provide further insight into the reasons for student absenteeism in a college or school of pharmacy setting.

  17. High-Energy Electron Shell in ECR Ion Source:

    NASA Astrophysics Data System (ADS)

    Niimura, M. G.; Goto, A.; Yano, Y.

    1997-05-01

    As an injector for cyclotrons and RFQ linacs, the ECR ion source (ECRIS) is expected to deliver highly charged ions (HCI) at high beam current (HBC). Injection of light gases and of supplementary electrons has been employed to enhance HCI and HBC, respectively. Further improvement of the performance may be feasible by investigating the hot-electron ring inside an ECRIS. Its existence has been taken for granted because of the MeV-range Te observable via X-ray diagnostics. However, its location, acceleration mechanism, and effects on performance are not well known. We found them by deriving the radially negative potential distribution for an ECRIS from measured endloss-current data. It was evidenced by a hole burned in the parabolic potential profile (by uniformly distributed warm-electron space charges of 9.5×10^5 cm^-3) and by a local minimum of the electrostatically trapped ion distribution. A high-energy electron shell (HEES) was located right on the ECR radius of 6 cm with a shell half-width of 1 cm. Such a thin shell around the core plasma can only be generated by the Sagdeev-Shapiro or v_ph × B_z acceleration mechanism, which can raise Te up to a relativistic value. Here, v_ph is the phase velocity of electrostatic (ES) Bernstein waves propagating backwards against the incident microwave and B_z is the axial mirror magnetic field. The HEES carries a diamagnetic current which reduces the core magnetic pressure, thereby stabilizing the ECR surface against driftwave instabilities similarly to gas mixing.

  18. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
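
    The decision rule underlying the maximum Pc, namely that in a two-alternative forced-choice (2AFC) trial the optimal observer selects the interval whose observation yields the larger likelihood ratio, can be checked by Monte Carlo simulation for arbitrary distributions. The equal-variance Gaussian example, sample size, and seed below are illustrative assumptions; the estimate should approach the classic analytic value Φ(d'/√2):

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def max_pc_2afc(sample_signal, sample_noise, pdf_signal, pdf_noise,
                n=200_000, seed=1):
    """Monte Carlo maximum proportion correct for 2AFC.

    The optimal observer chooses the interval whose observation has the
    larger likelihood ratio f_signal(x) / f_noise(x); ties count as 1/2.
    """
    rng = np.random.default_rng(seed)
    xs = sample_signal(rng, n)          # observations from signal intervals
    xn = sample_noise(rng, n)           # observations from noise intervals
    lr = lambda x: pdf_signal(x) / pdf_noise(x)
    ls, ln = lr(xs), lr(xn)
    return float(np.mean(ls > ln) + 0.5 * np.mean(ls == ln))

# Equal-variance Gaussian example with d' = 1:
pc = max_pc_2afc(
    lambda rng, n: rng.normal(1.0, 1.0, n),
    lambda rng, n: rng.normal(0.0, 1.0, n),
    lambda x: gauss_pdf(x, 1.0, 1.0),
    lambda x: gauss_pdf(x, 0.0, 1.0),
)
print(round(pc, 3))   # analytic value: Phi(1/sqrt(2)) ≈ 0.760
```

    For non-Gaussian or stimulus-induced noise, only the sampler and pdf arguments change, which reflects the flexibility that a general formula for arbitrary probability distributions provides.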

  19. From Carbon-Based Nanotubes to Nanocages for Advanced Energy Conversion and Storage.

    PubMed

    Wu, Qiang; Yang, Lijun; Wang, Xizhang; Hu, Zheng

    2017-02-21

    Carbon-based nanomaterials have been the focus of research interests in the past 30 years due to their abundant microstructures and morphologies, excellent properties, and wide potential applications, as landmarked by 0D fullerene, 1D nanotubes, and 2D graphene. With the availability of high specific surface area (SSA), well-balanced pore distribution, high conductivity, and tunable wettability, carbon-based nanomaterials are highly expected as advanced materials for energy conversion and storage to meet the increasing demands for clean and renewable energies. In this context, attention is usually attracted by the star material of graphene in recent years. In this Account, we overview our studies on carbon-based nanotubes to nanocages for energy conversion and storage, including their synthesis, performances, and related mechanisms. The two carbon nanostructures have the common features of interior cavity, high conductivity, and easy doping but much different SSAs and pore distributions, leading to different performances. We demonstrated a six-membered-ring-based growth mechanism of carbon nanotubes (CNTs) with benzene precursor based on the structural similarity of the benzene ring to the building unit of CNTs. By this mechanism, nitrogen-doped CNTs (NCNTs) with homogeneous N distribution and predominant pyridinic N were obtained with pyridine precursor, providing a new kind of support for convenient surface functionalization via N-participation. Accordingly, various transition-metal nanoparticles were directly immobilized onto NCNTs without premodification. The so-constructed catalysts featured high dispersion, narrow size distribution and tunable composition, which presented superior catalytic performances for energy conversions, for example, the oxygen reduction reaction (ORR) and methanol oxidation in fuel cells. 
With the advent of the new field of carbon-based metal-free electrocatalysts, we first extended ORR catalysts from electron-rich N-doped to electron-deficient B-doped sp2 carbon. A combined experimental and theoretical study indicated that the ORR activity originates from the activation of carbon π electrons by breaking the integrity of the π conjugation, regardless of the electron-rich or electron-deficient nature of the dopants. With this understanding, metal-free electrocatalysts were further extended to dopant-free defective carbon nanomaterials. Moreover, we developed novel 3D hierarchical carbon-based nanocages by an in situ MgO template method; these feature coexisting micro-, meso-, and macropores and a much larger SSA than the nanotubes. The unique 3D architecture avoids the restacking commonly suffered by 2D graphene due to its intrinsic π-π interaction. Consequently, the hierarchical nanocages showed superior performance not only as new catalyst supports and metal-free electrocatalysts but also as electrode materials for energy storage. State-of-the-art supercapacitive performance was achieved, with high energy and power densities as well as excellent rate capability and cycling stability. The large interior space of the nanocages enabled the encapsulation of high loadings of sulfur, alleviating polysulfide dissolution while greatly enhancing electron conduction and Li-ion diffusion, leading to top-level lithium-sulfur battery performance. These results not only provide unique carbon-based nanomaterials but also yield in-depth understanding of growth mechanisms, material design, and structure-performance relationships, which is significant for promoting their energy applications and for enriching the exciting field of carbon-based nanomaterials.

  20. Etching nano-holes in silicon carbide using catalytic platinum nano-particles

    NASA Astrophysics Data System (ADS)

    Moyen, E.; Wulfhekel, W.; Lee, W.; Leycuras, A.; Nielsch, K.; Gösele, U.; Hanbücken, M.

    2006-09-01

    The catalytic reaction of platinum during a hydrogen etching process has been used to perform controlled vertical nanopatterning of silicon carbide substrates. A first set of experiments was performed with platinum powder randomly distributed on the SiC surface. Subsequent hydrogen etching in a hot wall reactor caused local atomic hydrogen production at the catalyst resulting in local SiC etching and hole formation. Secondly, a highly regular and monosized distribution of Pt was obtained by sputter deposition of Pt through an Au membrane serving as a contact mask. After the lift-off of the mask, the hydrogen etching revealed the onset of well-controlled vertical patterned holes on the SiC surface.

  1. Low noise buffer amplifiers and buffered phase comparators for precise time and frequency measurement and distribution

    NASA Technical Reports Server (NTRS)

    Eichinger, R. A.; Dachel, P.; Miller, W. H.; Ingold, J. S.

    1982-01-01

    Extremely low noise, high-performance, wideband buffer amplifiers and buffered phase comparators were developed. The buffer amplifiers are designed to distribute reference frequencies from 30 kHz to 45 MHz from a hydrogen maser without degrading the maser's performance. The buffered phase comparators are designed to intercompare the phase of state-of-the-art hydrogen masers without adding any significant measurement-system noise. These devices have a 27 femtosecond phase-stability floor and are stable to better than one picosecond over long periods. Their temperature coefficient is less than one picosecond per degree Celsius, and they have shown virtually no voltage coefficient.

  2. Distributed Turboelectric Propulsion for Hybrid Wing Body Aircraft

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Brown, Gerald V.; Felder, James L.

    2008-01-01

    Meeting future goals for aircraft and air traffic system performance will require new airframes with more highly integrated propulsion. Previous studies have evaluated hybrid wing body (HWB) configurations with various numbers of engines and with increasing degrees of propulsion-airframe integration. A recently published configuration with 12 small engines partially embedded in a HWB aircraft, reviewed herein, serves as the airframe baseline for the new concept aircraft that is the subject of this paper. To achieve high cruise efficiency, a high lift-to-drag ratio HWB was adopted as the baseline airframe along with boundary layer ingestion inlets and distributed thrust nozzles to fill in the wakes generated by the vehicle. The distributed powered-lift propulsion concept for the baseline vehicle used a simple, high-lift-capable internally blown flap or jet flap system with a number of small high bypass ratio turbofan engines in the airframe. In that concept, the engine flow path from the inlet to the nozzle is direct and does not involve complicated internal ducts through the airframe to redistribute the engine flow. In addition, partially embedded engines, distributed along the upper surface of the HWB airframe, provide noise reduction through airframe shielding and promote jet flow mixing with the ambient airflow. To improve performance and to reduce noise and environmental impact even further, a drastic change in the propulsion system is proposed in this paper. The new concept adopts the previous baseline cruise-efficient short take-off and landing (CESTOL) airframe but employs a number of superconducting motors to drive the distributed fans rather than using many small conventional engines. The power to drive these electric fans is generated by two remotely located gas-turbine-driven superconducting generators. 
This arrangement allows many small partially embedded fans while retaining the superior efficiency of large core engines, which are physically separated but connected through electric power lines to the fans. This paper presents a brief description of the earlier CESTOL vehicle concept and the newly proposed electrically driven fan concept vehicle, using the previous CESTOL vehicle as a baseline.

  3. Online System for Faster Multipoint Linkage Analysis via Parallel Execution on Thousands of Personal Computers

    PubMed Central

    Silberstein, M.; Tzemach, A.; Dovgolevsky, N.; Fishelson, M.; Schuster, A.; Geiger, D.

    2006-01-01

    Computation of LOD scores is a valuable tool for mapping disease-susceptibility genes in the study of Mendelian and complex diseases. However, computation of exact multipoint likelihoods of large inbred pedigrees with extensive missing data is often beyond the capabilities of a single computer. We present a distributed system called “SUPERLINK-ONLINE,” for the computation of multipoint LOD scores of large inbred pedigrees. It achieves high performance via the efficient parallelization of the algorithms in SUPERLINK, a state-of-the-art serial program for these tasks, and through the use of the idle cycles of thousands of personal computers. The main algorithmic challenge has been to efficiently split a large task for distributed execution in a highly dynamic, nondedicated running environment. Notably, the system is available online, which allows computationally intensive analyses to be performed with no need for either the installation of software or the maintenance of a complicated distributed environment. As the system was being developed, it was extensively tested by collaborating medical centers worldwide on a variety of real data sets, some of which are presented in this article. PMID:16685644

  4. Sensitivity of Offshore Surface Fluxes and Sea Breezes to the Spatial Distribution of Sea-Surface Temperature

    NASA Astrophysics Data System (ADS)

    Lombardo, Kelly; Sinsky, Eric; Edson, James; Whitney, Michael M.; Jia, Yan

    2018-03-01

    A series of numerical sensitivity experiments is performed to quantify the impact of sea-surface temperature (SST) distribution on offshore surface fluxes and simulated sea-breeze dynamics. The SST simulations of two mid-latitude sea-breeze events over coastal New England are performed using a spatially-uniform SST, as well as spatially-varying SST datasets of 32- and 1-km horizontal resolutions. Offshore surface heat and buoyancy fluxes vary in response to the SST distribution. Local sea-breeze circulations are relatively insensitive, with minimal differences in vertical structure and propagation speed among the experiments. The largest thermal perturbations are confined to the lowest 10% of the sea-breeze column due to the relatively high stability of the mid-Atlantic marine atmospheric boundary layer (ABL) suppressing vertical mixing, resulting in the depth of the marine layer remaining unchanged. Minimal impacts on the column-averaged virtual potential temperature and sea-breeze depth translates to small changes in sea-breeze propagation speed. This indicates that the use of datasets with a fine-scale SST may not produce more accurate sea-breeze simulations in highly stable marine ABL regimes, though may prove more beneficial in less stable sub-tropical environments.

  5. Feature extraction and identification in distributed optical-fiber vibration sensing system for oil pipeline safety monitoring

    NASA Astrophysics Data System (ADS)

    Wu, Huijuan; Qian, Ya; Zhang, Wei; Tang, Chenghao

    2017-12-01

    The high sensitivity of distributed optical-fiber vibration sensing (DOVS) systems based on phase-sensitive optical time-domain reflectometry (Φ-OTDR) also brings high nuisance alarm rates (NARs) in real applications. In this paper, the feature-extraction methods of wavelet decomposition (WD) and wavelet packet decomposition (WPD) are compared for three typical field-test signals, and an artificial neural network (ANN) is built for event identification. The comparison shows that WPD performs slightly better than WD for DOVS signal analysis and identification in oil pipeline safety monitoring. With wavelet packet energy distribution features, the identification rate reaches 94.4%, and the nuisance alarm rate is held as low as 5.6%.
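    As an illustration of the wavelet-packet energy features mentioned above, the following sketch (a toy Haar-based decomposition, not the authors' implementation, whose wavelet family and depth are not specified here) computes the relative energy in each terminal packet node; a feature vector of this kind is what would be fed to the ANN classifier:

```python
import numpy as np

def haar_step(x):
    # one level of Haar analysis: approximation (low-pass) and detail (high-pass)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wp_energy_features(signal, levels=3):
    """Relative energy of each wavelet-packet node at the deepest level.
    Signal length must be divisible by 2**levels."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nxt = []
        for n in nodes:
            a, d = haar_step(n)
            nxt += [a, d]          # split every node, unlike plain WD
        nodes = nxt
    energy = np.array([np.sum(n ** 2) for n in nodes])
    return energy / energy.sum()
```

A constant signal puts all its energy in the all-approximation node, while an alternating signal concentrates it in a detail branch; it is this redistribution of energy across packet nodes that lets the classifier separate event types.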

  6. Investigation of Hydrogen Embrittlement Susceptibility of X80 Weld Joints by Thermal Simulation

    NASA Astrophysics Data System (ADS)

    Peng, Huangtao; An, Teng; Zheng, Shuqi; Luo, Bingwei; Wang, Siyu; Zhang, Shuai

    2018-05-01

    The objective of this study was to investigate the hydrogen embrittlement (HE) susceptibility of X80 weld joints and the underlying mechanism. Slow strain rate testing (SSRT) under in situ H-charging, combined with microstructure and fracture analysis, was performed on the base metal (BM), weld metal (WM), and the thermally simulated fine-grained and coarse-grained heat-affected zones (FGHAZ and CGHAZ). The WM and simulated HAZs showed more highly localized strain than the BM; compared with the CGHAZ, the FGHAZ had lower microhardness and more uniformly distributed stress. SSRT showed that the weld joint was highly sensitive to HE, with the HE index decreasing in the sequence FGHAZ, WM, CGHAZ, BM. The effect of the microstructure on HE was mainly reflected in the local stress distribution and microhardness.

  7. High-Performance Clock Synchronization Algorithms for Distributed Wireless Airborne Computer Networks with Applications to Localization and Tracking of Targets

    DTIC Science & Technology

    2010-06-01

    GMKPF represents a better and more flexible alternative to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) ... accurate results relative to GML and EML when the network delays are modeled in terms of a single non-Gaussian/non-exponential distribution or as a ... to the Gaussian Maximum Likelihood (GML) and Exponential Maximum Likelihood (EML) estimators for clock offset estimation in non-Gaussian or non

  8. Thinking Systemically: Steps for States to Improve Equity in the Distribution of Teachers-- An Action-Planning Workbook to Help Guide Regional Comprehensive Center and State Education Agency Conversation to Address the Inequitable Distribution of Teachers

    ERIC Educational Resources Information Center

    National Comprehensive Center for Teacher Quality, 2009

    2009-01-01

    The National Comprehensive Center for Teacher Quality (TQ Center) is a resource to which the regional comprehensive centers, states, and other education stakeholders turn for strengthening the quality of teaching--especially in high-poverty, low-performing, and hard-to-staff schools--and for finding guidance in addressing specific needs, thereby…

  9. Epitrochoid Power-Law Nozzle Rapid Prototype Build/Test Project (Briefing Charts)

    DTIC Science & Technology

    2015-02-01

    Production. Approved for public release; distribution is unlimited. PA clearance # 15122. Epitrochoid Power-Law Nozzle Build/Test ... Build on SpaceX multiengine approach ... Engines: Merlin 1D on Falcon 9 v1.1 (Photo ... to utilize advances in high-performance engines and the economies of scale of the multi-engine approach of the SpaceX Falcon 9 – Rapid Prototype

  10. [Series: Medical Applications of the PHITS Code (2): Acceleration by Parallel Computing].

    PubMed

    Furuta, Takuya; Sato, Tatsuhiko

    2015-01-01

    Time-consuming Monte Carlo dose calculations have become feasible owing to advances in computer technology. The recent gains, however, stem from the emergence of multi-core high-performance computers, so parallel computing is key to achieving good software performance. The Monte Carlo simulation code PHITS contains two parallel computing functions: distributed-memory parallelization using the message passing interface (MPI) protocol, and shared-memory parallelization using open multi-processing (OpenMP) directives. Users can choose between the two according to their needs. This paper explains both functions, with their advantages and disadvantages, and provides test applications that show their performance on a typical multi-core high-performance workstation.
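    The parallel Monte Carlo pattern described above can be illustrated in miniature with a map-reduce sketch in Python (a hypothetical stand-in for the pattern, not PHITS code): independent workers draw their own random samples, and the partial tallies are reduced at the end, just as MPI ranks or OpenMP threads would do in a dose calculation:

```python
import multiprocessing as mp
import random

def count_hits(args):
    """One worker: count samples falling inside the unit quarter-circle."""
    seed, n = args
    rng = random.Random(seed)   # independent stream per worker
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

def parallel_pi(n_workers=4, samples_per_worker=100_000):
    """Map the sampling across processes, then reduce the tallies."""
    tasks = [(seed, samples_per_worker) for seed in range(n_workers)]
    with mp.Pool(n_workers) as pool:
        hits = pool.map(count_hits, tasks)
    return 4.0 * sum(hits) / (n_workers * samples_per_worker)
```

The same decomposition applies whether the workers are processes on one node (shared memory) or ranks on separate nodes (distributed memory); only the communication layer changes.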

  11. Many-junction photovoltaic device performance under non-uniform high-concentration illumination

    NASA Astrophysics Data System (ADS)

    Valdivia, Christopher E.; Wilkins, Matthew M.; Chahal, Sanmeet S.; Proulx, Francine; Provost, Philippe-Olivier; Masson, Denis P.; Fafard, Simon; Hinzer, Karin

    2017-09-01

    A parameterized 3D distributed circuit model was developed to calculate the performance of III-V solar cells and photonic power converters (PPC) with a variable number of epitaxial vertically-stacked pn junctions. PPC devices are designed with many pn junctions to realize higher voltages and to operate under non-uniform illumination profiles from a laser or LED. Performance impacts of non-uniform illumination were greatly reduced with increasing number of junctions, with simulations comparing PPC devices with 3 to 20 junctions. Experimental results using Azastra Opto's 12- and 20-junction PPC illuminated by an 845 nm diode laser show high performance even with a small gap between the PPC and optical fiber output, until the local tunnel junction limit is reached.

  12. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  13. Performance of the K+ ion diode in the 2 MV injector for heavy ion fusion

    NASA Astrophysics Data System (ADS)

    Bieniosek, F. M.; Henestroza, E.; Kwan, J. W.

    2002-02-01

    Heavy ion beam inertial fusion driver concepts depend on the availability and performance of high-brightness high-current ion sources. Surface ionization sources have relatively low current density but high brightness because of the low temperature of the emitted ions. We have measured the beam profiles at the exit of the injector diode, and compared the measured profiles with EGUN and WARP-3D predictions. Spherical aberrations are significant in this large aspect ratio diode. We discuss the measured and calculated beam size and beam profiles, the effect of aberrations, quality of vacuum, and secondary electron distributions on the beam profile.

  14. Effects of oxygen and water content on microbial distribution in the polyurethane foam cubes of a biofilter for SO2 removal.

    PubMed

    Zhang, Jingying; Li, Lin; Liu, Junxin; Wang, Yanjie

    2018-01-01

    The performance of a biofilter for off-gas treatment relies on the activity of microorganisms and an adequate supply of O2 and H2O. In the present study, a microelectrode was used to analyze O2 in polyurethane foam cubes (PUFCs) packed in a biofilter for SO2 removal. The O2 distribution varied with the density and water-containing rate (WCR) of the PUFCs. The O2 concentration dropped sharply from 10.2 to 0.8 mg/L from the surface to the center of a PUFC with a WCR of 97.20%. PUFCs with high WCR presented aerobic-anoxic-aerobic areas. Three-dimensional simulated images demonstrated that the structure of high-WCR PUFCs consisted of an aerobic "shell" and an anoxic "core", with high-density PUFCs featuring a larger anoxic area than low-density PUFCs. Moreover, the H2O distribution in the PUFC was uneven and affected the O2 concentration. Whereas aerobic bacteria were observed on the PUFC surface, facultative anaerobic microorganisms were found at the PUFC core, where the O2 concentration was relatively low. The O2 and H2O distributions differed among the PUFCs, and the distribution of microorganisms varied accordingly. Copyright © 2017. Published by Elsevier B.V.

  15. Revolutionary Aeropropulsion Concept for Sustainable Aviation: Turboelectric Distributed Propulsion

    NASA Technical Reports Server (NTRS)

    Kim, Hyun Dae; Felder, James L.; Tong, Michael T.; Armstrong, Michael

    2013-01-01

    In response to growing aviation demands and concerns about the environment and energy usage, a team at NASA proposed and examined a revolutionary aeropropulsion concept, a turboelectric distributed propulsion system, which employs multiple electric motor-driven propulsors that are distributed on a large transport vehicle. The power to drive these electric propulsors is generated by separately located gas-turbine-driven electric generators on the airframe. This arrangement enables the use of many small-distributed propulsors, allowing a very high effective bypass ratio, while retaining the superior efficiency of large core engines, which are physically separated but connected to the propulsors through electric power lines. Because of the physical separation of propulsors from power generating devices, a new class of vehicles with unprecedented performance employing such revolutionary propulsion system is possible in vehicle design. One such vehicle currently being investigated by NASA is called the "N3-X" that uses a hybrid-wing-body for an airframe and superconducting generators, motors, and transmission lines for its propulsion system. On the N3-X these new degrees of design freedom are used (1) to place two large turboshaft engines driving generators in freestream conditions to minimize total pressure losses and (2) to embed a broad continuous array of 14 motor-driven fans on the upper surface of the aircraft near the trailing edge of the hybrid-wing-body airframe to maximize propulsive efficiency by ingesting thick airframe boundary layer flow. Through a system analysis in engine cycle and weight estimation, it was determined that the N3-X would be able to achieve a reduction of 70% or 72% (depending on the cooling system) in energy usage relative to the reference aircraft, a Boeing 777-200LR. 
Since the high-power electric system is used in its propulsion system, a study of the electric power distribution system was performed to identify critical dynamic and safety issues. This paper presents some of the features and issues associated with the turboelectric distributed propulsion system and summarizes the recent study results, including the high electric power distribution, in the analysis of the N3-X vehicle.

  16. Distributed Multiple Access Control for the Wireless Mesh Personal Area Networks

    NASA Astrophysics Data System (ADS)

    Park, Moo Sung; Lee, Byungjoo; Rhee, Seung Hyong

    Mesh networking technologies for both high-rate and low-rate wireless personal area networks (WPANs) are under development by several standardization bodies, which are considering adopting distributed TDMA MAC protocols to provide seamless user mobility as well as good peer-to-peer QoS in WPAN mesh. It has been pointed out, however, that the absence of a central controller in a wireless TDMA MAC may cause severe performance degradation: e.g., fair allocation, service differentiation, and admission control may be hard to achieve or impossible to provide. In this paper, we suggest a new resource-allocation framework for distributed MAC protocols in WPANs. Simulation results show that our algorithm achieves both fair resource allocation and flexible service differentiation in a fully distributed way for mesh WPANs in which devices have high mobility and varied requirements. We also provide an analytical model to discuss its unique equilibrium and to compute the lengths of the reserved time slots at the stable point.

  17. Thermal analysis of disc brakes using finite element method

    NASA Astrophysics Data System (ADS)

    Jaenudin, Jamari, J.; Tauviqirrahman, M.

    2017-01-01

    Disc brakes are vehicle components that serve to slow or stop the rotation of the wheel. This paper discusses the heat distribution on the brake disc during braking, which arises as kinetic energy is converted into heat by friction between the surface of the disc and the disc pad. This frictional heating raises the disc temperature considerably. The thermal analysis of the brake disc is aimed at evaluating the braking performance of an electric car. The aim of this study is to analyze the thermal behavior of brake discs using the Finite Element Method (FEM) by examining the heat distribution on the brake disc with 3-D modeling. The FEM results reflect the effects of the high heat generated by friction between the disc pad and the disc rotor, and the simulation results are used to identify the effect of the heat distribution that occurs during braking.
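    To make the diffusion mechanism concrete, a 1-D explicit finite-difference scheme (an illustrative toy with made-up material values, far simpler than the paper's 3-D FEM) shows how a frictional hot spot spreads through a disc cross-section:

```python
import numpy as np

def heat_1d(T0, alpha, dx, dt, steps):
    """Explicit FTCS scheme for 1-D heat conduction with fixed-temperature ends.
    Stability requires r = alpha*dt/dx**2 <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "unstable time step"
    T = np.asarray(T0, dtype=float).copy()
    for _ in range(steps):
        # interior update; boundary nodes stay fixed (Dirichlet conditions)
        T[1:-1] = T[1:-1] + r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    return T

# hypothetical example: a 100-degree hot spot in the middle of a cold bar
T0 = np.zeros(21)
T0[10] = 100.0
T = heat_1d(T0, alpha=1e-4, dx=0.01, dt=0.4, steps=200)
```

After the time steps the peak has dropped and the heat has spread toward the cooled ends, the same qualitative behavior the 3-D FEM resolves over the full disc geometry.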

  18. Morphology evolution in high-performance polymer solar cells processed from nonhalogenated solvent

    DOE PAGES

    Cai, Wanzhu; Liu, Peng; Jin, Yaocheng; ...

    2015-05-26

    A new processing protocol based on non-halogenated solvent and additive is developed to produce polymer solar cells with power conversion efficiencies better than those processed from commonly used halogenated solvent-additive pair. Morphology studies show that good performance correlates with a finely distributed nanomorphology with a well-defined polymer fibril network structure, which leads to balanced charge transport in device operation.

  19. Modeling Operator Performance in Low Task Load Supervisory Domains

    DTIC Science & Technology

    2011-06-01

    PDF Probability Distribution Function SAFE System for Aircrew Fatigue Evaluation SAFTE Sleep, Activity, Fatigue, and Task Effectiveness SCT ... attentional capacity due to high mental workload. In low task load settings, fatigue is mainly caused by lack of sleep and the boredom experienced by ... performance decrements. Also, psychological fatigue is strongly correlated with lack of sleep. Not surprisingly, operators of the morning shift reported the

  20. NAS Parallel Benchmark. Results 11-96: Performance Comparison of HPF and MPI Based NAS Parallel Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    Saini, Subash; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    High Performance Fortran (HPF), the high-level language for parallel Fortran programming, is based on Fortran 90. HPF was defined by an informal standards committee known as the High Performance Fortran Forum (HPFF) in 1993 and modeled on TMC's CM Fortran language. Several HPF features have since been incorporated into the draft ANSI/ISO Fortran 95, the next formal revision of the Fortran standard. HPF allows users to write a single parallel program that can execute on a serial machine, a shared-memory parallel machine, or a distributed-memory parallel machine. HPF eliminates the complex, error-prone task of explicitly specifying how, where, and when to pass messages between processors on distributed-memory machines, or when to synchronize processors on shared-memory machines. HPF is designed so that the programmer can code an application at a high level and then selectively optimize portions of the code by dropping into message passing or calling tuned library routines as 'extrinsics'. Compilers supporting High Performance Fortran features first appeared in late 1994 and early 1995 from Applied Parallel Research (APR), Digital Equipment Corporation, and The Portland Group (PGI). IBM introduced an HPF compiler for the IBM RS/6000 SP/2 in April 1996. Over the past two years, these implementations have shown steady improvement in both features and performance. The performance of various hardware/programming-model (HPF and MPI (message passing interface)) combinations is compared, based on the latest NAS (NASA Advanced Supercomputing) Parallel Benchmark (NPB) results, providing a cross-machine and cross-model comparison. Specifically, HPF-based NPB results are compared with MPI-based NPB results to provide perspective on the performance currently obtainable using HPF versus MPI, or versus hand-tuned implementations such as those supplied by the hardware vendors.
In addition we would also present NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz) NEC SX-4/32, SGI/CRAY T3E, SGI Origin2000.

  1. Discriminating nutritional quality of foods using the 5-Color nutrition label in the French food market: consistency with nutritional recommendations.

    PubMed

    Julia, Chantal; Ducrot, Pauline; Péneau, Sandrine; Deschamps, Valérie; Méjean, Caroline; Fézeu, Léopold; Touvier, Mathilde; Hercberg, Serge; Kesse-Guyot, Emmanuelle

    2015-09-28

    Our objectives were to assess the ability of the 5-Colour nutrition label (5-CNL), a front-of-pack nutrition label based on the Food Standards Agency (FSA) nutrient profiling system, to discriminate the nutritional quality of foods currently on the French market, and to assess its consistency with French nutritional recommendations. The nutritional composition of 7777 foods available on the French market was retrieved from the web-based collaborative project Open Food Facts. The distribution of products across the 5-CNL categories was assessed by food group, as arranged on supermarket shelves; the distribution of similar products from different brands across the 5-CNL categories was also assessed. Discriminating performance was measured as the number of color categories present within each food group. In cases of discrepancy between category allocation and French nutritional recommendations, adaptations of the original score were proposed. Overall, the distribution of foodstuffs across the 5-CNL categories was consistent with French recommendations: 95.4% of 'Fruits and vegetables' and 72.5% of 'Cereals and potatoes' were classified as 'Green' or 'Yellow', whereas 86.0% of 'Sugary snacks' were classified as 'Pink' or 'Red'. Adaptations of the original FSA score computation model were necessary for beverages, added fats, and cheese in order to remain consistent with official French nutritional recommendations. The 5-CNL label displays high performance in discriminating the nutritional quality of foods across food groups, within a food group, and among similar products from different brands. Adaptations of the original model were necessary to maintain both consistency with French recommendations and the high performance of the system.

  2. A distributed microcomputer-controlled system for data acquisition and power spectral analysis of EEG.

    PubMed

    Vo, T D; Dwyer, G; Szeto, H H

    1986-04-01

    A relatively powerful and inexpensive microcomputer-based system for spectral analysis of the EEG is presented. High resolution and speed are achieved through recently available large-scale integrated circuit technology with enhanced functionality (the Intel 8087 math coprocessor), which can perform transcendental functions rapidly. The system's versatility comes from a hardware organization with distributed data-acquisition capability, provided by a microprocessor-based analog-to-digital converter with a large resident memory (Cyborg ISAAC-2000). Compiled BASIC programs and assembly-language subroutines perform, on-line or off-line, the fast Fourier transform and spectral analysis of the EEG, which is stored as both soft and hard copy. Some results obtained from test application of the entire system in animal studies are presented.
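    The FFT/power-spectrum step of such a system can be sketched with present-day tools (a minimal windowed periodogram, standing in for the compiled BASIC and 8087 assembly pipeline described above):

```python
import numpy as np

def power_spectrum(x, fs):
    """One-sided power spectral estimate of a real signal via the FFT.
    Returns (frequencies in Hz, periodogram values)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    windowed = x * np.hanning(n)           # taper to reduce spectral leakage
    X = np.fft.rfft(windowed)
    psd = (np.abs(X) ** 2) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```

For EEG work the spectrum would typically be averaged over successive epochs and summed into the clinical bands (delta, theta, alpha, beta); the single-epoch periodogram above is the building block.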

  3. Tensorial Basis Spline Collocation Method for Poisson's Equation

    NASA Astrophysics Data System (ADS)

    Plagne, Laurent; Berthou, Jean-Yves

    2000-01-01

    This paper describes the tensorial basis spline collocation method (TBSCM) applied to Poisson's equation. For a localized 3D charge distribution in vacuum, this direct method, based on a tensorial decomposition of the differential operator, is shown to be competitive with both iterative BSCM and FFT-based methods. We emphasize the O(h^4) and O(h^6) convergence of TBSCM for cubic and quintic splines, respectively. We describe the implementation of this method on a distributed-memory parallel machine and report performance measurements on a Cray T3E. Our code exhibits high performance and good scalability: as an example, 27 Gflops is obtained when solving Poisson's equation on a 256^3 non-uniform 3D Cartesian mesh using 128 T3E-750 processors, i.e., 215 Mflops per processor.

  4. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE PAGES

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte; ...

    2017-05-17

    This paper determines optimum aggregation areas for a given distribution network, considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate demand within each area. The aggregation reduces large distribution networks with thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. It not only helps distribution system operators make faster operational decisions by revealing at which time of day flexibility will be needed, from which specific area, and in which amount, but also enables the flexibility stemming from small distributed resources to be traded in various power/energy markets. A combination of central and local aggregation, in which a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. For a given network configuration, the steady-state performance of the aggregated network is highly accurate (≈ ±1.5% error), in contrast to the very high errors associated with forecasts of individual consumer demand.
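    The core aggregation idea, grouping geographically close load nodes into areas each represented by one aggregate demand, can be sketched with plain k-means clustering (a deliberate simplification: the paper combines an elitist genetic algorithm, hierarchical clustering, and Thevenin reduction, none of which appear here):

```python
import numpy as np

def aggregate_loads(coords, demands, k, iters=50):
    """Group load nodes into k areas by location (plain k-means),
    then sum the demand inside each area."""
    coords = np.asarray(coords, dtype=float)
    demands = np.asarray(demands, dtype=float)
    centers = coords[:k].copy()            # deterministic seeding: first k nodes
    labels = np.zeros(len(coords), dtype=int)
    for _ in range(iters):
        # assign every node to its nearest area centre
        dist = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = coords[labels == j].mean(axis=0)
    agg = np.array([demands[labels == j].sum() for j in range(k)])
    return labels, agg
```

Each area's summed demand is the equivalent load the reduced network would carry at that strategic location; the paper's method additionally weighs aggregation costs and electrical distance rather than plain geometric distance.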

  5. Optimum aggregation of geographically distributed flexible resources in strategic smart-grid/microgrid locations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattarai, Bishnu P.; Myers, Kurt S.; Bak-Jensen, Brigitte

This paper determines optimum aggregation areas for a given distribution network, considering the spatial distribution of loads and the costs of aggregation. An elitist genetic algorithm combined with hierarchical clustering and a Thevenin network reduction is implemented to compute strategic locations and aggregate the demand within each area. The aggregation reduces large distribution networks having thousands of nodes to an equivalent network with a few aggregated loads, thereby significantly reducing the computational burden. Furthermore, it not only helps distribution system operators make faster operational decisions by identifying at which time of day flexibility will be needed, from which specific area, and in what amount, but also enables the flexibility stemming from small distributed resources to be traded in various power/energy markets. A combined central and local aggregation scheme, in which a central aggregator enables market participation while local aggregators materialize the accepted bids, is implemented to realize this concept. The effectiveness of the proposed method is evaluated by comparing network performance with and without aggregation. Finally, for a given network configuration, the steady-state performance of the aggregated network is accurate (≈ ±1.5% error), in contrast to the very high errors associated with forecasts of individual consumer demand.

  6. Predicting species richness and distribution ranges of centipedes at the northern edge of Europe

    NASA Astrophysics Data System (ADS)

    Georgopoulou, Elisavet; Djursvoll, Per; Simaiakis, Stylianos M.

    2016-07-01

    In recent decades, interest in understanding species distributions and exploring processes that shape species diversity has increased, leading to the development of advanced methods for the exploitation of occurrence data for analytical and ecological purposes. Here, with the use of georeferenced centipede data, we explore the importance and contribution of bioclimatic variables and land cover, and predict distribution ranges and potential hotspots in Norway. We used a maximum entropy analysis (Maxent) to model species' distributions, aiming at exploring centres of distribution, latitudinal spans and northern range boundaries of centipedes in Norway. The performance of all Maxent models was better than random with average test area under the curve (AUC) values above 0.893 and True Skill Statistic (TSS) values above 0.593. Our results showed a highly significant latitudinal gradient of increased species richness in southern grid-cells. Mean temperatures of warmest and coldest quarters explained much of the potential distribution of species. Predictive modelling analyses revealed that south-eastern Norway and the Atlantic coast in the west (inclusive of the major fjord system of Sognefjord), are local biodiversity hotspots with regard to high predictive species co-occurrence. We conclude that our predicted northward shifts of centipedes' distributions in Norway are likely a result of post-glacial recolonization patterns, species' ecological requirements and dispersal abilities.
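    The test AUC figure quoted above has a simple rank interpretation: the probability that a randomly chosen presence cell receives a higher model score than a randomly chosen background cell, with ties counting one half. A minimal sketch of that computation (the function name `auc` is illustrative, not Maxent's API):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via its rank interpretation:
    P(score of random presence cell > score of random background cell),
    counting ties as 0.5. O(n*m), fine for small evaluation sets."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

    An AUC of 0.5 corresponds to a random model, so the paper's values above 0.893 indicate strong discrimination between presence and background cells.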

  7. Selective Iterative Waterfilling for Digital Subscriber Lines

    NASA Astrophysics Data System (ADS)

    Xu, Yang; Le-Ngoc, Tho; Panigrahi, Saswat

    2007-12-01

    This paper presents a high-performance, low-complexity, quasi-distributed dynamic spectrum management (DSM) algorithm suitable for DSL systems. We analytically demonstrate that the rate degradation of the distributed iterative waterfilling (IW) algorithm in near-far scenarios is caused by insufficient utilization of the available frequency and power resources, a consequence of its noncooperative game-theoretic formulation. Inspired by this observation, we propose the selective IW (SIW) algorithm, which considerably alleviates the performance degradation of IW by applying IW selectively to different groups of users over different frequency bands so that all the available resources can be fully utilized. For [InlineEquation not available: see fulltext.] users, the proposed SIW algorithm needs at most [InlineEquation not available: see fulltext.] times the complexity of the IW algorithm, and is much simpler than the centralized optimal spectrum balancing (OSB), while it offers a rate performance much better than that of IW and close to the maximum possible rate region computed by OSB in realistic near-far DSL scenarios. Furthermore, its predominantly distributed structure makes it suitable for DSL implementation.
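    The per-user primitive that both IW and SIW iterate is single-user waterfilling: pour a power budget over the tones so that power goes preferentially to tones with low noise-to-gain ratio. The sketch below is the generic textbook version, solved by bisection on the water level; it is not the authors' code, and the names are illustrative.

```python
def waterfill(inv_gains, budget, iters=60):
    """Single-user waterfilling. inv_gains[i] is the noise-to-gain
    ratio N_i/g_i of tone i; the optimal power on tone i is
    max(0, mu - inv_gains[i]), with the water level mu found by
    bisection so that the powers sum to the budget."""
    lo, hi = 0.0, budget + max(inv_gains)  # used(lo)=0 <= budget <= used(hi)
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - c) for c in inv_gains)
        if used > budget:
            hi = mu
        else:
            lo = mu
    return [max(0.0, lo - c) for c in inv_gains]
```

    In distributed IW, each user repeatedly runs this step treating the other users' crosstalk as noise; SIW's contribution is deciding which user groups waterfill over which bands.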

  8. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling as well as climate modeling issues in terms of object-oriented design.

  9. Application of high performance asynchronous socket communication in power distribution automation

    NASA Astrophysics Data System (ADS)

    Wang, Ziyu

    2017-05-01

    With the development of information technology and Internet technology, and the growing demand for electricity, stable and reliable operation of the power system has become the goal of power grid workers. With the advent of the era of big data, power data will gradually become an important means of guaranteeing safe and reliable grid operation. A central question for the grid is therefore how to receive the data transmitted by acquisition devices efficiently and robustly, so that the power distribution automation system can make sound decisions quickly. In this paper, some existing problems in power system communication are analysed, and a solution based on asynchronous socket technology is proposed for network communication that must sustain high concurrency and high throughput. The paper also looks ahead to the development of power distribution automation in the era of big data and artificial intelligence.
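    A minimal sketch of the asynchronous-socket pattern the paper advocates, written with Python's asyncio (an assumption for illustration; the paper does not specify a language or API): a single event loop multiplexes many concurrent acquisition-device connections without dedicating a thread per socket, which is how high concurrency and throughput are achieved.

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    """Serve one acquisition device: echo each received frame back
    as an acknowledgement. Thousands of these coroutines can run
    concurrently on one event-loop thread."""
    while data := await reader.read(1024):
        writer.write(data)
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def serve(host: str = "127.0.0.1", port: int = 8888):
    """Accept connections forever; each one is handled asynchronously."""
    server = await asyncio.start_server(handle, host, port)
    async with server:
        await server.serve_forever()
```

    Running `asyncio.run(serve())` would start the collector; the host, port, and echo-as-ack protocol here are all hypothetical stand-ins for a real SCADA framing protocol.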

  10. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector regression model. The mathematical model of DCSRM is established and the probabilistic design idea behind it is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is carried out to verify the proposed DCSRM. The analysis yields the optimal static blade-tip clearance of the HPT for BTRRC design, improving the performance and reliability of the aeroengine. A comparison of methods shows that DCSRM achieves both high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way to perform reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  11. Ex vivo validation of photo-magnetic imaging.

    PubMed

    Luk, Alex; Nouizi, Farouk; Erkol, Hakan; Unlu, Mehmet B; Gulsen, Gultekin

    2017-10-15

    We recently introduced a new high-resolution diffuse optical imaging technique termed photo-magnetic imaging (PMI), which utilizes magnetic resonance thermometry (MRT) to monitor the 3D temperature distribution induced in a medium illuminated with a near-infrared light. The spatiotemporal temperature distribution due to light absorption can be accurately estimated using a combined photon propagation and heat diffusion model. High-resolution optical absorption images are then obtained by iteratively minimizing the error between the measured and modeled temperature distributions. We have previously demonstrated the feasibility of PMI with experimental studies using tissue simulating agarose phantoms. In this Letter, we present the preliminary ex vivo PMI results obtained with a chicken breast sample. Similarly to the results obtained on phantoms, the reconstructed images reveal that PMI can quantitatively resolve an inclusion with a 3 mm diameter embedded deep in a biological tissue sample with only 10% error. These encouraging results demonstrate the high performance of PMI in ex vivo biological tissue and its potential for in vivo imaging.

  12. Macromolecule mapping of the brain using ultrashort-TE acquisition and reference-based metabolite removal.

    PubMed

    Lam, Fan; Li, Yudu; Clifford, Bryan; Liang, Zhi-Pei

    2018-05-01

    To develop a practical method for mapping macromolecule distribution in the brain using ultrashort-TE MRSI data. An FID-based chemical shift imaging acquisition without metabolite-nulling pulses was used to acquire ultrashort-TE MRSI data that capture the macromolecule signals with high signal-to-noise-ratio (SNR) efficiency. To remove the metabolite signals from the ultrashort-TE data, single voxel spectroscopy data were obtained to determine a set of high-quality metabolite reference spectra. These spectra were then incorporated into a generalized series (GS) model to represent general metabolite spatiospectral distributions. A time-segmented algorithm was developed to back-extrapolate the GS model-based metabolite distribution from truncated FIDs and remove it from the MRSI data. Numerical simulations and in vivo experiments have been performed to evaluate the proposed method. Simulation results demonstrate accurate metabolite signal extrapolation by the proposed method given a high-quality reference. For in vivo experiments, the proposed method is able to produce spatiospectral distributions of macromolecules in the brain with high SNR from data acquired in about 10 minutes. We further demonstrate that the high-dimensional macromolecule spatiospectral distribution resides in a low-dimensional subspace. This finding provides a new opportunity to use subspace models for quantification and accelerated macromolecule mapping. Robustness of the proposed method is also demonstrated using multiple data sets from the same and different subjects. The proposed method is able to obtain macromolecule distributions in the brain from ultrashort-TE acquisitions. It can also be used for acquiring training data to determine a low-dimensional subspace to represent the macromolecule signals for subspace-based MRSI. Magn Reson Med 79:2460-2469, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. Investigating Whistler Mode Wave Diffusion Coefficients at Mars

    NASA Astrophysics Data System (ADS)

    Shane, A. D.; Liemohn, M. W.; Xu, S.; Florie, C.

    2017-12-01

    Observations of electron pitch angle distributions have suggested that collisions are not the only pitch angle scattering process occurring in the Martian ionosphere: an unknown scattering process is causing high energy electrons (>100 eV) to become isotropized. Whistler mode waves are one pitch angle scattering mechanism known to preferentially scatter high energy electrons in certain plasma regimes. The distribution of whistler mode wave diffusion coefficients depends on the background magnetic field strength and thermal electron density, as well as on the frequency and wave normal angle of the wave. We have solved for the whistler mode wave diffusion coefficients using the quasi-linear diffusion equations and have integrated them into a superthermal electron transport (STET) model. Preliminary runs have produced results that qualitatively match the observed electron pitch angle distributions at Mars. We performed parametric sweeps over magnetic field, thermal electron density, wave frequency, and wave normal angle, not only to understand the relationship between the plasma parameters and the diffusion coefficient distributions, but also to investigate in which regimes whistler mode waves scatter only high energy electrons. Increasing the magnetic field strength and lowering the thermal electron density shifts the distribution of diffusion coefficients toward higher energies and lower pitch angles. We have created an algorithm to identify Mars Atmosphere and Volatile EvolutioN (MAVEN) observations of high energy isotropic pitch angle distributions in the Martian ionosphere. We are able to map these distributions at Mars and compare the conditions under which they are observed with the results of our parametric sweeps. Lastly, we will also examine each term in the kinetic diffusion equation to determine whether the energy and mixed diffusion coefficients are important enough to incorporate into STET as well.

  14. Analysis of nanopore arrangement of porous alumina layers formed by anodizing in oxalic acid at relatively high temperatures

    NASA Astrophysics Data System (ADS)

    Zaraska, Leszek; Stępniowski, Wojciech J.; Jaskuła, Marian; Sulka, Grzegorz D.

    2014-06-01

    Anodic aluminum oxide (AAO) layers were formed by a simple two-step anodization in 0.3 M oxalic acid at relatively high temperatures (20-30 °C) and various anodizing potentials (30-65 V). The effect of the anodizing conditions on the structural features of the as-obtained oxides was carefully investigated. Linear and exponential dependencies of the cell diameter and pore density, respectively, on the anodizing potential were confirmed. On the other hand, no effect of the temperature or duration of anodization on pore spacing or pore density was found. Detailed quantitative and qualitative analyses of the hexagonal arrangement of the nanopore arrays were performed for all studied samples. The nanopore arrangement was evaluated using various methods based on fast Fourier transform (FFT) images, Delaunay triangulations (defect maps), pair distribution functions (PDF), and angular distribution functions (ADF). It was found that for short anodizations performed at relatively high temperatures, the optimal anodizing potential, resulting in the formation of nanostructures with the highest degree of pore order, is 45 V. No direct effect of the temperature or time of anodization on the nanopore arrangement was observed.

  15. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System.

    PubMed

    Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. Firstly, we deduce the high-frequency transient mathematical model of the VSC-HVDC system. Then we review the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller to improve the system's dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct-current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method in terms of energy distribution in the time-frequency plane.

  16. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System

    PubMed Central

    Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. Firstly, we deduce the high-frequency transient mathematical model of the VSC-HVDC system. Then we review the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller to improve the system's dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct-current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method in terms of energy distribution in the time-frequency plane. PMID:26098556

  17. In-Situ Measurement of High-Temperature Proton Exchange Membrane Fuel Cell Stack Using Flexible Five-in-One Micro-Sensor

    PubMed Central

    Lee, Chi-Yuan; Weng, Fang-Bor; Kuo, Yzu-Wei; Tsai, Chao-Hsuan; Cheng, Yen-Ting; Cheng, Chih-Kai; Lin, Jyun-Ting

    2016-01-01

    In the chemical reaction that proceeds in a high-temperature proton exchange membrane fuel cell stack (HT-PEMFC stack), nonuniformity of the internal local temperature, voltage, pressure, flow and current may cause poor membrane durability and nonuniform fuel distribution, thus influencing the performance and lifetime of the fuel cell stack. In this paper micro-electro-mechanical systems (MEMS) are utilized to develop a five-in-one micro-sensor, resistant to the high-temperature electrochemical environment, embedded in the cathode channel plate of an HT-PEMFC stack; materials and process parameters are appropriately selected to protect the micro-sensor against failure or destruction during long-term operation. In-situ measurement of the local temperature, voltage, pressure, flow and current distributions in the HT-PEMFC stack is carried out. The integrated micro-sensor performs five functions and is characterized by small size, good acid and temperature resistance, quick response, and real-time measurement; the goal is for it to be placeable anywhere for measurement without affecting the performance of the fuel cell. PMID:27763559

  18. In-Situ Measurement of High-Temperature Proton Exchange Membrane Fuel Cell Stack Using Flexible Five-in-One Micro-Sensor.

    PubMed

    Lee, Chi-Yuan; Weng, Fang-Bor; Kuo, Yzu-Wei; Tsai, Chao-Hsuan; Cheng, Yen-Ting; Cheng, Chih-Kai; Lin, Jyun-Ting

    2016-10-18

    In the chemical reaction that proceeds in a high-temperature proton exchange membrane fuel cell stack (HT-PEMFC stack), nonuniformity of the internal local temperature, voltage, pressure, flow and current may cause poor membrane durability and nonuniform fuel distribution, thus influencing the performance and lifetime of the fuel cell stack. In this paper micro-electro-mechanical systems (MEMS) are utilized to develop a five-in-one micro-sensor, resistant to the high-temperature electrochemical environment, embedded in the cathode channel plate of an HT-PEMFC stack; materials and process parameters are appropriately selected to protect the micro-sensor against failure or destruction during long-term operation. In-situ measurement of the local temperature, voltage, pressure, flow and current distributions in the HT-PEMFC stack is carried out. The integrated micro-sensor performs five functions and is characterized by small size, good acid and temperature resistance, quick response, and real-time measurement; the goal is for it to be placeable anywhere for measurement without affecting the performance of the fuel cell.

  19. Monolithic subwavelength high refractive-index-contrast grating VCSELs

    NASA Astrophysics Data System (ADS)

    Gebski, Marcin; Dems, Maciej; Lott, James A.; Czyszanowski, Tomasz

    2016-03-01

    In this paper we present optical design and simulation results of vertical-cavity surface-emitting lasers (VCSELs) that incorporate monolithic subwavelength high refractive-index-contrast grating (MHCG) mirrors - a new variety of HCG mirror that is composed of high index material surrounded only on one side by low index material. We show the impact of an MHCG mirror on the performance of 980 nm VCSELs designed for high bit rate and energy-efficient optical data communications. In our design, all or part of the all-semiconductor top coupling distributed Bragg reflector mirror is replaced by an undoped gallium-arsenide MHCG. We show how the optical field intensity distribution of the VCSEL's fundamental mode is controlled by the combination of the number of residual distributed Bragg reflector (DBR) mirror periods and the physical design of the topmost gallium-arsenide MHCG. Additionally, we numerically investigate the confinement factors of our VCSELs and show that this parameter for the MHCG DBR VCSELs may only be properly determined in two or three dimensions due to the periodic nature of the grating mirror.

  20. Design and Implementation of a Distributed Version of the NASA Engine Performance Program

    NASA Technical Reports Server (NTRS)

    Cours, Jeffrey T.

    1994-01-01

    Distributed NEPP is a new version of the NASA Engine Performance Program that runs in parallel on a collection of Unix workstations connected through a network. The program is fault-tolerant, efficient, and shows significant speed-up in a multi-user, heterogeneous environment. This report describes the issues involved in designing distributed NEPP, the algorithms the program uses, and the performance distributed NEPP achieves. It develops an analytical model to predict and measure the performance of the simple distribution, multiple distribution, and fault-tolerant distribution algorithms that distributed NEPP incorporates. Finally, the appendices explain how to use distributed NEPP and document the organization of the program's source code.
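    The report's fault-tolerant distribution algorithm is described only at a high level. The sketch below shows one generic way to farm independent engine cases out to a worker pool and re-submit any case whose worker fails, which is the essence of surviving node loss in a multi-user workstation network; the function names and retry policy are hypothetical, not taken from the NEPP source code.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_cases(cases, evaluate, workers=4, retries=2):
    """Distribute independent case evaluations across a worker pool.
    A case whose evaluation raises (a crashed 'node') is re-submitted,
    up to `retries` extra attempts, so one failure does not lose the run."""
    results = {}
    attempts = {c: 0 for c in cases}
    pending = list(cases)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while pending:
            futures = {pool.submit(evaluate, c): c for c in pending}
            pending = []
            for fut in as_completed(futures):
                case = futures[fut]
                try:
                    results[case] = fut.result()
                except Exception:
                    attempts[case] += 1
                    if attempts[case] <= retries:
                        pending.append(case)  # reassign to another worker
    return results
```

    In the real program the workers are remote Unix workstations rather than threads, but the bookkeeping (track attempts, reassign on failure, collect results out of order) is the same idea.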

  1. Distributed metadata in a high performance computing environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua

    A computer-executable method, system, and computer program product for managing meta-data in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising: receiving a request for meta-data associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the meta-data is associated with a key-value; determining which of the one or more burst buffers stores the requested meta-data; and, upon determining that a first burst buffer of the one or more burst buffers stores the requested meta-data, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
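    The claim above hinges on determining which burst buffer stores a given key. One common way to make that determination purely local, with no lookup broadcast, is hashed placement; the toy sketch below assumes such a scheme purely for illustration (the patent's actual mechanism is not specified here, and all names are hypothetical).

```python
import hashlib

def owner(key, buffers):
    """Deterministically map a meta-data key to one burst buffer,
    so every node computes the same answer without coordination."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return buffers[h % len(buffers)]

class BurstBufferKV:
    """Toy distributed key-value store spread over named burst buffers."""
    def __init__(self, names):
        self.names = list(names)
        self.stores = {n: {} for n in self.names}  # one dict per buffer

    def put(self, key, value):
        self.stores[owner(key, self.names)][key] = value

    def get(self, key):
        return self.stores[owner(key, self.names)].get(key)
```

    Note that simple modulo placement reshuffles most keys when a buffer is added or removed; production systems typically use consistent hashing for that reason.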

  2. General Purpose Graphics Processing Unit Based High-Rate Rice Decompression and Reed-Solomon Decoding

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loughry, Thomas A.

    As the volume of data acquired by space-based sensors increases, mission data compression/decompression and forward error correction code processing performance must likewise scale. This competency development effort explored using the General Purpose Graphics Processing Unit (GPGPU) to accomplish high-rate Rice Decompression and high-rate Reed-Solomon (RS) decoding at the satellite mission ground station. Each algorithm was implemented and benchmarked on a single GPGPU. Distributed processing across one to four GPGPUs was also investigated. The results show that the GPGPU has considerable potential for performing satellite communication Data Signal Processing, with three times or better performance improvements and up to ten times reduction in cost over custom hardware, at least in the case of Rice Decompression and Reed-Solomon Decoding.
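    Rice coding itself is simple enough to sketch: each nonnegative sample is split into a quotient, coded in unary, and a k-bit remainder. The pure-Python round-trip below illustrates the algorithm being accelerated (a bit-list representation is chosen for clarity, not speed, and is not the GPGPU implementation from the report):

```python
def rice_encode(values, k):
    """Rice code each nonnegative sample: unary quotient
    (q ones then a terminating 0) followed by the k-bit remainder."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits += [1] * q + [0]                          # unary quotient
        bits += [(r >> i) & 1 for i in range(k - 1, -1, -1)]  # remainder, MSB first
    return bits

def rice_decode(bits, k, count):
    """Inverse of rice_encode: recover `count` samples from the bitstream."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:    # count the unary quotient
            q += 1
            i += 1
        i += 1                 # skip the terminating 0
        r = 0
        for _ in range(k):     # read the k-bit remainder
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out
```

    The decoder's data-dependent, variable-length parsing is exactly what makes high-rate decompression awkward on GPUs and motivates careful partitioning of the stream across GPGPU threads.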

  3. High-performance solid-state supercapacitors based on graphene-ZnO hybrid nanocomposites.

    PubMed

    Li, Zijiong; Zhou, Zhihua; Yun, Gaoqian; Shi, Kai; Lv, Xiaowei; Yang, Baocheng

    2013-11-12

    In this paper, we report a facile low-cost synthesis of the graphene-ZnO hybrid nanocomposites for solid-state supercapacitors. Structural analysis revealed a homogeneous distribution of ZnO nanorods that are inserted in graphene nanosheets, forming a sandwiched architecture. The material exhibited a high specific capacitance of 156 F g−1 at a scan rate of 5 mV s−1. The fabricated solid-state supercapacitor device using these graphene-ZnO hybrid nanocomposites exhibits good supercapacitive performance and long-term cycle stability. The improved supercapacitance property of these materials could be ascribed to the increased conductivity of ZnO and better utilization of graphene. These results demonstrate the potential of the graphene-ZnO hybrid nanocomposites as an electrode in high-performance supercapacitors.

  4. High-performance solid-state supercapacitors based on graphene-ZnO hybrid nanocomposites

    NASA Astrophysics Data System (ADS)

    Li, Zijiong; Zhou, Zhihua; Yun, Gaoqian; Shi, Kai; Lv, Xiaowei; Yang, Baocheng

    2013-11-01

    In this paper, we report a facile low-cost synthesis of the graphene-ZnO hybrid nanocomposites for solid-state supercapacitors. Structural analysis revealed a homogeneous distribution of ZnO nanorods that are inserted in graphene nanosheets, forming a sandwiched architecture. The material exhibited a high specific capacitance of 156 F g−1 at a scan rate of 5 mV s−1. The fabricated solid-state supercapacitor device using these graphene-ZnO hybrid nanocomposites exhibits good supercapacitive performance and long-term cycle stability. The improved supercapacitance property of these materials could be ascribed to the increased conductivity of ZnO and better utilization of graphene. These results demonstrate the potential of the graphene-ZnO hybrid nanocomposites as an electrode in high-performance supercapacitors.

  5. High-performance solid-state supercapacitors based on graphene-ZnO hybrid nanocomposites

    PubMed Central

    2013-01-01

    In this paper, we report a facile low-cost synthesis of the graphene-ZnO hybrid nanocomposites for solid-state supercapacitors. Structural analysis revealed a homogeneous distribution of ZnO nanorods that are inserted in graphene nanosheets, forming a sandwiched architecture. The material exhibited a high specific capacitance of 156 F g−1 at a scan rate of 5 mV s−1. The fabricated solid-state supercapacitor device using these graphene-ZnO hybrid nanocomposites exhibits good supercapacitive performance and long-term cycle stability. The improved supercapacitance property of these materials could be ascribed to the increased conductivity of ZnO and better utilization of graphene. These results demonstrate the potential of the graphene-ZnO hybrid nanocomposites as an electrode in high-performance supercapacitors. PMID:24215772

  6. Resource selection models are useful in predicting fine-scale distributions of black-footed ferrets in prairie dog colonies

    USGS Publications Warehouse

    Eads, David A.; Jachowski, David S.; Biggins, Dean E.; Livieri, Travis M.; Matchett, Marc R.; Millspaugh, Joshua J.

    2012-01-01

    Wildlife-habitat relationships are often conceptualized as resource selection functions (RSFs)—models increasingly used to estimate species distributions and prioritize habitat conservation. We evaluated the predictive capabilities of 2 black-footed ferret (Mustela nigripes) RSFs developed on a 452-ha colony of black-tailed prairie dogs (Cynomys ludovicianus) in the Conata Basin, South Dakota. We used the RSFs to project the relative probability of occurrence of ferrets throughout an adjacent 227-ha colony. We evaluated performance of the RSFs using ferret space use data collected via postbreeding spotlight surveys June–October 2005–2006. In home ranges and core areas, ferrets selected the predicted "very high" and "high" occurrence categories of both RSFs. Count metrics also suggested selection of these categories; for each model in each year, approximately 81% of ferret locations occurred in areas of very high or high predicted occurrence. These results suggest usefulness of the RSFs in estimating the distribution of ferrets throughout a black-tailed prairie dog colony. The RSFs provide a fine-scale habitat assessment for ferrets that can be used to prioritize releases of ferrets and habitat restoration for prairie dogs and ferrets. A method to quickly inventory the distribution of prairie dog burrow openings would greatly facilitate application of the RSFs.

  7. Experimental investigation on pressurization performance of cryogenic tank during high-temperature helium pressurization process

    NASA Astrophysics Data System (ADS)

    Lei, Wang; Yanzhong, Li; Yonghua, Jin; Yuan, Ma

    2015-03-01

    Sufficient knowledge of the thermal performance and pressurization behavior of cryogenic tanks during the rocket launch period is important for the design and optimization of a pressurization system. In this paper, ground experiments were performed with liquid oxygen (LO2) as the cryogenic propellant, high-temperature helium exceeding 600 K as the pressurant gas, and either a radial diffuser or an anti-cone diffuser at the tank inlet. The pressurant gas requirements, axial and radial temperature distributions, and energy distributions inside the propellant tank were obtained and analyzed to evaluate the comprehensive performance of the pressurization system. The pressurization system with high-temperature helium as the pressurant gas worked well: the tank pressure was controlled within a specified range and a stable liquid discharge rate was achieved. For the radial diffuser case, the injected gas impinged directly on the tank inner wall; the severe gas-wall heat transfer resulted in about 59% of the total input energy being absorbed by the tank wall. For the anti-cone diffuser case, the direct impingement of high-temperature gas on the liquid surface transferred a larger share of the energy to the liquid propellant, with the percentage reaching up to 38%. Moreover, both cases showed that the proportion of energy left in the ullage was quite small, only about 22-24% of the total input energy. This may indicate that a more efficient diffuser should be developed to improve the pressurization effect. Overall, the present experimental results are beneficial to the design and optimization of pressurization systems that use high-temperature gas.

  8. Aerodynamics of High-Lift Configuration Civil Aircraft Model in JAXA

    NASA Astrophysics Data System (ADS)

    Yokokawa, Yuzuru; Murayama, Mitsuhiro; Ito, Takeshi; Yamamoto, Kazuomi

This paper presents the basic aerodynamics and stall characteristics of the high-lift configuration aircraft model JSM (JAXA Standard Model). During the development of a high-lift system design method, wind tunnel testing in the JAXA 6.5 m by 5.5 m low-speed wind tunnel and Navier-Stokes computations on unstructured hybrid meshes were performed for a realistic aircraft model equipped with high-lift devices, fuselage, nacelle-pylon, slat tracks, and Flap Track Fairings (FTF), representative of a modern 100-passenger-class commercial transport. The testing and the computations aimed to understand the flow physics and thereby obtain guidelines for designing a high-performance high-lift system. The testing revealed Reynolds number effects in both the linear and stall regions. Analysis of static pressure distributions and flow visualization provided the knowledge needed to understand the aerodynamic performance. CFD captured the overall characteristics of the basic aerodynamics and clarified the flow mechanisms governing the stall characteristics, even for this complicated geometry and its flow field. This collaborative work between wind tunnel testing and CFD has proved advantageous for improving the aerodynamic performance.

  9. A Distributed Ambient Intelligence Based Multi-Agent System for Alzheimer Health Care

    NASA Astrophysics Data System (ADS)

    Tapia, Dante I.; Rodríguez, Sara; Corchado, Juan M.

    This chapter presents ALZ-MAS (Alzheimer multi-agent system), an ambient intelligence (AmI)-based multi-agent system aimed at enhancing assistance and health care for Alzheimer patients. The system makes use of several context-aware technologies that allow it to automatically obtain information from users and the environment in an evenly distributed way, focusing on the characteristics of ubiquity, awareness, intelligence, and mobility, all concepts defined by AmI. ALZ-MAS makes use of a service-oriented multi-agent architecture, called the flexible user and services oriented multi-agent architecture, to distribute resources and enhance its performance. It is demonstrated that a SOA approach is adequate for building distributed and highly dynamic AmI-based multi-agent systems.

  10. Antipodal hotspot pairs on the earth

    NASA Technical Reports Server (NTRS)

    Rampino, Michael R.; Caldeira, Ken

    1992-01-01

    The results of statistical analyses performed on three published hotspot distributions suggest that significantly more hotspots occur as nearly antipodal pairs than is anticipated from a random distribution, or from their association with geoid highs and divergent plate margins. The observed number of antipodal hotspot pairs depends on the maximum allowable deviation from exact antipodality. At a maximum deviation of no more than 700 km, 26 to 37 percent of hotspots form antipodal pairs in the published lists examined here, significantly more than would be expected from the general hotspot distribution. Two possible mechanisms that might create such a distribution are: (1) symmetry in the generation of mantle plumes, and (2) melting related to antipodal focusing of seismic energy from large-body impacts.
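The antipodality criterion in this record reduces to a great-circle computation: a hotspot pair counts as antipodal when one member lies within the allowed deviation (here 700 km) of the other's antipode. A minimal sketch in Python, with illustrative coordinates rather than any published hotspot list:

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two (lat, lon) points in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

def is_antipodal_pair(h1, h2, max_dev_km=700.0):
    """True if h2 lies within max_dev_km of the antipode of h1."""
    lat1, lon1 = h1
    anti_lat = -lat1
    anti_lon = lon1 + 180.0 if lon1 <= 0 else lon1 - 180.0
    return great_circle_km(anti_lat, anti_lon, h2[0], h2[1]) <= max_dev_km

# Illustrative: a hotspot near Hawaii vs. a point close to its exact antipode
print(is_antipodal_pair((19.4, -155.3), (-19.4, 24.7)))
```

Looping this test over all pairs in a hotspot catalogue and counting matches would give the numerator behind the 26 to 37 percent figures quoted above.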

  11. Microstructure, Hardness, and Residual Stress Distributions in T-Joint Weld of HSLA S500MC Steel

    NASA Astrophysics Data System (ADS)

    Frih, Intissar; Montay, Guillaume; Adragna, Pierre-Antoine

    2017-03-01

    This paper investigates the microstructure, hardness, and residual stress distributions of a MIG-welded high-strength low-alloy S500MC steel. The T-joint weld of 10-mm-thick plates was produced using a two-pass MIG welding process. The contour method was used to measure the longitudinal welding residual stress. The results highlighted a good correlation between the metallurgical phase constituents and the hardness distribution within the weld zones. In particular, the presence of bainite and a smaller ferrite grain size in the weld fusion zone may explain the highest hardness being measured in this region. A similar trend in the residual stress and hardness distributions was also obtained.

  12. NASA's Participation in the National Computational Grid

    NASA Technical Reports Server (NTRS)

    Feiereisen, William J.; Zornetzer, Steve F. (Technical Monitor)

    1998-01-01

    Over the last several years it has become evident that the character of NASA's supercomputing needs has changed. One of the major missions of the agency is to support the design and manufacture of aero- and space-vehicles with technologies that will significantly reduce their cost. It is becoming clear that improvements in the process of aerospace design and manufacturing will require a high performance information infrastructure that allows geographically dispersed teams to draw upon resources that are broader than traditional supercomputing. A computational grid draws together our information resources into one system. We can foresee the time when a Grid will allow engineers and scientists to use the tools of supercomputers, databases, and online experimental devices in a virtual environment to collaborate with distant colleagues. The concept of a computational grid has been spoken of for many years, but several recent events are conspiring to allow us to actually build one. In late 1997 the National Science Foundation initiated the Partnerships for Advanced Computational Infrastructure (PACI) program, which is built around the idea of distributed high performance computing. The two partnerships, the National Computational Science Alliance (NCSA) and the National Partnership for Advanced Computational Infrastructure (NPACI), led by the San Diego Supercomputing Center, have been instrumental in drawing together the "Grid Community" to identify the technology bottlenecks and propose a research agenda to address them. During the same period NASA has begun to reformulate parts of two major high performance computing research programs to concentrate on distributed high performance computing and has banded together with the PACI centers to address the research agenda in common.

  13. Learning Curves of Virtual Mastoidectomy in Distributed and Massed Practice.

    PubMed

    Andersen, Steven Arild Wuyts; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2015-10-01

    Repeated and deliberate practice is crucial in surgical skills training, and virtual reality (VR) simulation can provide self-directed training of basic surgical skills to meet the individual needs of the trainee. Assessment of the learning curves of surgical procedures is pivotal in understanding skills acquisition and best-practice implementation and organization of training. To explore the learning curves of VR simulation training of mastoidectomy and the effects of different practice sequences with the aim of proposing the optimal organization of training. A prospective trial with a 2 × 2 design was conducted at an academic teaching hospital. Participants included 43 novice medical students. Of these, 21 students completed time-distributed practice from October 14 to November 29, 2013, and a separate group of 19 students completed massed practice on May 16, 17, or 18, 2014. Data analysis was performed from June 6, 2014, to March 3, 2015. Participants performed 12 repeated virtual mastoidectomies using a temporal bone surgical simulator in either a distributed (practice blocks spaced in time) or massed (all practice in 1 day) training program with randomization for simulator-integrated tutoring during the first 5 sessions. Performance was assessed using a modified Welling Scale for final product analysis by 2 blinded senior otologists. Compared with the 19 students in the massed practice group, the 21 students in the distributed practice group were older (mean age, 25.1 years), more often male (15 [62%]), and had slightly higher mean gaming frequency (2.3 on a 1-5 Likert scale). Learning curves were established and distributed practice was found to be superior to massed practice, reported as mean end score (95% CI) of 15.7 (14.4-17.0) in distributed practice vs. 13.0 (11.9-14.1) with massed practice (P = .002). Simulator-integrated tutoring accelerated the initial performance, with mean score for tutored sessions of 14.6 (13.9-15.2) vs. 
13.4 (12.8-14.0) for the corresponding nontutored sessions (P < .01), but at the cost of a drop in performance once tutoring ceased. The performance drop was smaller with distributed practice, suggesting a protective effect when acquired skills were consolidated over time. The mean performance of the nontutored participants in the distributed practice group plateaued at a score of 16.0 (15.3-16.7) at approximately the ninth repetition, but the individual learning curves were highly variable. Novices can acquire basic mastoidectomy competencies with self-directed VR simulation training. Training should be organized with distributed practice, and simulator-integrated tutoring can be useful to accelerate the initial learning curve. Practice should be deliberate and aimed at a set standard of proficiency, which remains to be defined, rather than at the mean learning-curve plateau.

  14. 64Cu-ATSM and 18FDG PET uptake and 64Cu-ATSM autoradiography in spontaneous canine tumors: comparison with pimonidazole hypoxia immunohistochemistry

    PubMed Central

    2012-01-01

    Background The aim of this study was to compare 64Cu-diacetyl-bis(N4-methylthiosemicarbazone) (64Cu-ATSM) and 18FDG PET uptake characteristics and 64Cu-ATSM autoradiography to pimonidazole immunohistochemistry in spontaneous canine sarcomas and carcinomas. Methods Biopsies were collected from individual tumors between approximately 3 and 25 hours after the intravenous injection of 64Cu-ATSM and pimonidazole. 64Cu-ATSM autoradiography and pimonidazole immunostaining were performed on sectioned biopsies. Acquired 64Cu-ATSM autoradiography and pimonidazole images were rescaled and aligned, and their distribution patterns compared. 64Cu-ATSM and 18FDG PET/CT scans were performed in a concurrent study, and uptake characteristics were obtained for tumors where available. Results The maximum pimonidazole pixel value and the mean pimonidazole-labeled fraction were found to be strongly correlated with 18FDG PET uptake levels, whereas more variable results were obtained for the comparison with 64Cu-ATSM. For the latter, uptake at scans performed 3 h post injection (pi) generally showed a strong positive correlation with pimonidazole uptake. Comparison of the distribution patterns of pimonidazole immunohistochemistry and 64Cu-ATSM autoradiography yielded varying results; significant positive correlations were mainly found in sections displaying a heterogeneous distribution of tracers. Conclusions Tumors with high levels of pimonidazole staining generally displayed high uptake of 18FDG and 64Cu-ATSM (3 h pi). Similar regional distributions of 64Cu-ATSM and pimonidazole were observed in most heterogeneous tumor regions. However, tumor- and hypoxia-level-dependent differences may exist with regard to the hypoxia specificity of 64Cu-ATSM in canine tumors. PMID:22704363

  15. Programming with BIG data in R: Scaling analytics from one to thousands of nodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Drew; Chen, Wei -Chen; Matheson, Michael A.

    Here, we present a tutorial overview showing how one can achieve scalable performance with R. We do so by utilizing several package extensions, including those from the pbdR project. These packages consist of high performance, high-level interfaces to and extensions of MPI, PBLAS, ScaLAPACK, I/O libraries, profiling libraries, and more. While these libraries shine brightest on large distributed platforms, they also work rather well on small clusters and often, surprisingly, even on a laptop with only two cores. Our tutorial begins with recommendations on how to get more performance out of your R code before considering parallel implementations. Because R is a high-level language, a function can have a deep hierarchy of operations. For big data, this can easily lead to inefficiency. Profiling is an important tool to understand the performance of an R code for both serial and parallel improvements.

  16. Effect of high negative incidence on the performance of a centrifugal compressor stage with conventional vaned diffusers

    NASA Astrophysics Data System (ADS)

    Jaatinen, Ahti; Grönman, Aki; Turunen-Saaresti, Teemu; Backman, Jari

    2011-06-01

    Three vaned diffusers, designed to have high negative incidence (-8°) at the design operating point, are studied experimentally. The overall performance (efficiency and pressure ratio) is measured at three rotational speeds, and flow angles before and after the diffuser are measured at the design rotational speed with three mass flow rates. The results are compared to corresponding results for the original vaneless diffuser design. Attention is paid to the performance at mass flows lower than the design mass flow. The results show that it is possible to improve the performance at mass flows lower than the design mass flow with a vaned diffuser designed with high negative incidence. However, with the vaned diffusers, the compressor still stalls at higher mass flow rates than with the vaneless one. The flow angle distributions after the diffuser are more uniform with the vaned diffusers.

  17. Programming with BIG data in R: Scaling analytics from one to thousands of nodes

    DOE PAGES

    Schmidt, Drew; Chen, Wei -Chen; Matheson, Michael A.; ...

    2016-11-09

    Here, we present a tutorial overview showing how one can achieve scalable performance with R. We do so by utilizing several package extensions, including those from the pbdR project. These packages consist of high performance, high-level interfaces to and extensions of MPI, PBLAS, ScaLAPACK, I/O libraries, profiling libraries, and more. While these libraries shine brightest on large distributed platforms, they also work rather well on small clusters and often, surprisingly, even on a laptop with only two cores. Our tutorial begins with recommendations on how to get more performance out of your R code before considering parallel implementations. Because R is a high-level language, a function can have a deep hierarchy of operations. For big data, this can easily lead to inefficiency. Profiling is an important tool to understand the performance of an R code for both serial and parallel improvements.
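Records 15 and 17 recommend profiling before attempting parallel implementations. The pbdR tools themselves are R packages; as a language-neutral illustration of the same profile-first workflow, here is a sketch using Python's standard cProfile module. The function below is a hypothetical hot spot, not code from the tutorial:

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """Deliberately scalar loop; a profile should show it dominating runtime."""
    total = 0
    for i in range(n):
        total += i * i
    return total

pr = cProfile.Profile()
pr.enable()
result = slow_sum(200000)
pr.disable()

# Print the three most expensive entries by cumulative time
buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(3)
print(result)
```

Only once a profile like this has identified the deep, expensive call paths does it make sense to reach for parallel backends, which is exactly the ordering the tutorial advocates.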

  18. 1310nm VCSELs in 1-10Gb/s commercial applications

    NASA Astrophysics Data System (ADS)

    Jewell, Jack; Graham, Luke; Crom, Max; Maranowski, Kevin; Smith, Joseph; Fanning, Tom

    2006-02-01

    Beginning with 4 Gigabit/s Fibre Channel, 1310 nm vertical-cavity surface-emitting lasers (VCSELs) are now entering the marketplace. Such VCSELs perform like distributed feedback lasers but have drive currents and heat dissipation like 850 nm VCSELs, making them ideal for today's high-performance interconnects and the only choice for the next step in increased interconnection density. Transceiver performance at 4 and 10 Gigabits/s over fiber lengths of 10-40 km is presented. The active material is extremely robust, resulting in excellent reliability.

  19. Practical dose point-based methods to characterize dose distribution in a stationary elliptical body phantom for a cone-beam C-arm CT system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Jang-Hwan, E-mail: jhchoi21@stanford.edu; Constantin, Dragos; Ganguly, Arundhuti

    2015-08-15

    Purpose: To propose new dose point measurement-based metrics to characterize the dose distributions and the mean dose from a single partial rotation of an automatic exposure control-enabled, C-arm-based, wide cone angle computed tomography system over a stationary, large, body-shaped phantom. Methods: A small 0.6 cm³ ion chamber (IC) was used to measure the radiation dose in an elliptical body-shaped phantom made of tissue-equivalent material. The IC was placed at 23 well-distributed holes in the central and peripheral regions of the phantom, and dose was recorded for six acquisition protocols with different combinations of minimum kVp (109 and 125 kVp) and z-collimator aperture (full: 22.2 cm; medium: 14.0 cm; small: 8.4 cm). Monte Carlo (MC) simulations were carried out to generate complete 2D dose distributions in the central plane (z = 0). The MC model was validated at the 23 dose points against IC experimental data. The planar dose distributions were then estimated using subsets of the point dose measurements using two proposed methods: (1) the proximity-based weighting method (method 1) and (2) the dose point surface fitting method (method 2). Twenty-eight different dose point distributions with six different point number cases (4, 5, 6, 7, 14, and 23 dose points) were evaluated to determine the optimal number of dose points and their placement in the phantom. The performances of the methods were determined by comparing their results with those of the validated MC simulations. The performances of the methods in the presence of measurement uncertainties were also evaluated. Results: The 5-, 6-, and 7-point cases had differences below 2%, ranging from 1.0% to 1.7% for both methods, a performance comparable to that of the methods with a relatively large number of points, i.e., the 14- and 23-point cases. However, with the 4-point case, the performances of the two methods decreased sharply.
Among the 4-, 5-, 6-, and 7-point cases, the 7-point case (1.0% [±0.6%] difference) and the 6-point case (0.7% [±0.6%] difference) performed best for method 1 and method 2, respectively. Moreover, method 2 demonstrated high-fidelity surface reconstruction with as few as 5 points, showing pixelwise absolute differences of 3.80 mGy (±0.32 mGy). Although the performance was shown to be sensitive to phantom displacement from the isocenter, the performance changed by less than 2% for shifts up to 2 cm in the x- and y-axes in the central phantom plane. Conclusions: With as few as five points, method 1 and method 2 were able to compute the mean dose with reasonable accuracy, demonstrating differences of 1.7% (±1.2%) and 1.3% (±1.0%), respectively. A larger number of points does not necessarily guarantee better performance; an optimal choice of point placement is necessary. The performance of the methods is sensitive to the alignment of the center of the body phantom relative to the isocenter. In body applications where dose distributions are important, method 2 is a better choice than method 1, as it reconstructs the dose surface with high fidelity using as few as five points.
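The abstract names method 1 only as "proximity-based weighting" without stating the exact weights, so the sketch below uses generic inverse-distance weighting as a stand-in; the dose points and values are hypothetical, not measurements from the paper:

```python
def idw_estimate(points, target, power=2.0):
    """Inverse-distance-weighted dose at `target` from (x, y, dose) samples.

    A generic proximity-based weighting; the paper's exact weighting
    scheme is not specified in the abstract.
    """
    num = den = 0.0
    for x, y, dose in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return dose  # target coincides with a measured point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * dose
        den += w
    return num / den

# Hypothetical dose points (cm, cm, mGy) in the central phantom plane
pts = [(-5.0, 0.0, 12.0), (5.0, 0.0, 10.0), (0.0, 5.0, 11.0), (0.0, -5.0, 9.0)]
print(round(idw_estimate(pts, (0.0, 0.0)), 2))
```

Averaging such estimates over a grid of target positions would give a planar dose surface comparable, in spirit, to the Monte Carlo reference distributions.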

  20. Initial evaluation of commercially available InGaAsP DFB laser diodes for use in high-speed digital fiber optic transceivers

    NASA Technical Reports Server (NTRS)

    Cook, Anthony L.; Hendricks, Herbert D.

    1990-01-01

    NASA has been pursuing the development of high-speed fiber-optic transceivers for use in a number of space data system applications. Current efforts are directed toward a high-performance all-integrated-circuit transceiver operating in the 3-5 Gb/s range. Details of the evaluation and selection of candidate high-speed optical sources for the space-qualified high-performance transceiver are presented. Data on the performance of commercially available DFB (distributed feedback) lasers are presented, and their performance relative to each other and to their structural design is discussed with regard to their use in high-performance fiber-optic transceivers. The DFB lasers were obtained from seven commercial manufacturers. The data taken on each laser included threshold current, differential quantum efficiency, CW side mode suppression ratio, wavelength temperature coefficient, threshold temperature coefficient, natural linewidth, and far field pattern. It was found that laser diodes with buried heterostructures and first-order gratings had, in general, the best CW operating characteristics. The modulated characteristics of the DFB laser diodes are emphasized: modulated linewidth, modulated side mode suppression ratio, and frequency response are discussed.

  1. Data-Driven Residential Load Modeling and Validation in GridLAB-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gotseff, Peter; Lundstrom, Blake

    Accurately characterizing the impacts of high penetrations of distributed energy resources (DER) on the electric distribution system has driven modeling methods from traditional static snapshots, often representing a critical point in time (e.g., summer peak load), to quasi-static time series (QSTS) simulations capturing all the effects of variable DER, associated controls, and hence impacts on the distribution system over a given time period. Unfortunately, the high-time-resolution DER source and load data required for model inputs are often scarce or non-existent. This paper presents work performed within the GridLAB-D model environment to synthesize, calibrate, and validate 1-second residential load models based on measured transformer loads and physics-based models suitable for QSTS electric distribution system modeling. The modeling and validation approach taken was to create a typical GridLAB-D model home that, when replicated to represent multiple diverse houses on a single transformer, creates a statistically similar load to a measured load for a given weather input. The model homes are constructed to represent the range of actual homes on an instrumented transformer: square footage, thermal integrity, heating and cooling system definition, as well as realistic occupancy schedules. House model calibration and validation were performed using the distribution transformer load data and corresponding weather. The modeled loads were found to be similar to the measured loads for four evaluation metrics: 1) daily average energy, 2) daily average and standard deviation of power, 3) power spectral density, and 4) load shape.
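The first two of the four evaluation metrics (daily average energy; daily average and standard deviation of power) are straightforward to compute from a sampled load series. A minimal sketch, using a hypothetical constant 1 kW load rather than the paper's measured transformer data:

```python
import math

def power_stats(samples_w, dt_s=1.0):
    """Daily energy (kWh) plus mean and standard deviation of power (W)
    from a uniformly sampled load time series."""
    n = len(samples_w)
    mean_w = sum(samples_w) / n
    var = sum((p - mean_w) ** 2 for p in samples_w) / n
    energy_kwh = sum(samples_w) * dt_s / 3.6e6  # W*s -> kWh
    return energy_kwh, mean_w, math.sqrt(var)

# Hypothetical day of 1-second samples: a constant 1 kW load
day = [1000.0] * 86400
energy, mean_w, std_w = power_stats(day)
print(energy, mean_w, std_w)
```

Comparing these statistics between the synthesized GridLAB-D load and the measured transformer load, alongside power spectral density and load shape, is the spirit of the validation described above.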

  2. A Comparative Distributed Evaluation of the NWS-RDHM using Shape Matching and Traditional Measures with In Situ and Remotely Sensed Information

    NASA Astrophysics Data System (ADS)

    KIM, J.; Bastidas, L. A.

    2011-12-01

    We evaluate, calibrate, and diagnose the performance of the National Weather Service RDHM distributed model over the Durango River Basin in Colorado using, simultaneously, in situ and remotely sensed information: discharge from several gaging stations (USGS), in situ snow cover (SCV) and snow water equivalent (SWE) from several SNOTEL sites, and snow information distributed over the catchment from remote sensing (NOAA-NASA). In the process of evaluation we attempt to establish, by calibration, the optimal degree of parameter distribution over the catchment. A multi-criteria approach based on traditional measures (RMSE) and similarity-based pattern comparisons using the Hausdorff and Earth Mover's Distance (EMD) approaches is used for the overall evaluation of the model performance. These pattern-based (shape matching) approaches are found to be extremely relevant for accounting for the relatively large inaccuracy of the remotely sensed SWE (judged inaccurate in value but reliable in distribution pattern) and the high reliability of the SCV (a yes/no situation), while at the same time allowing for an evaluation that quantifies the accuracy of the model over the entire catchment considering the different types of observations. The Hausdorff norm, due to its intrinsically multi-dimensional nature, allows for the incorporation of variables such as terrain elevation as one of the evaluation variables. The EMD, because of its extremely high computational burden, requires mapping the set of evaluation variables into a two-dimensional matrix for computation.
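The Hausdorff measure used for shape matching here is, in its basic form, the larger of the two directed maximum-of-minimum distances between point sets. A toy sketch with illustrative 2-D points (the paper applies the norm to distributed snow fields, optionally adding terrain elevation as an extra dimension):

```python
def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite 2-D point sets."""
    def d(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    def directed(X, Y):
        # Farthest any point of X lies from its nearest neighbor in Y
        return max(min(d(x, y) for y in Y) for x in X)

    return max(directed(A, B), directed(B, A))

# Illustrative point sets, e.g. observed vs. simulated snow-covered cells
A = [(0, 0), (1, 0)]
B = [(0, 0), (0, 3)]
print(hausdorff(A, B))
```

Because the metric is driven by the worst-matched point rather than a cell-by-cell difference, it rewards a simulation that reproduces the observed pattern even when individual cell values disagree, which is why it suits the pattern-reliable but value-inaccurate SWE imagery.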

  3. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978

  4. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
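The fusion step in records 3 and 4 relies on Dempster-Shafer evidence theory, whose core operation is Dempster's rule of combination. A minimal sketch follows; the two-bin travel-time frame and the mass values are hypothetical, chosen only to illustrate how point-detector and interval-detector evidence would combine:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule: combine two mass functions whose focal elements
    are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Renormalize by the non-conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

SHORT, LONG = frozenset({"short"}), frozenset({"long"})
EITHER = SHORT | LONG  # ignorance: mass on the whole frame

m_point = {SHORT: 0.6, EITHER: 0.4}                 # point-detector evidence
m_interval = {SHORT: 0.5, LONG: 0.3, EITHER: 0.2}   # interval-detector evidence
fused = dempster_combine(m_point, m_interval)
print(round(fused[SHORT], 3))
```

Agreeing evidence reinforces the "short" hypothesis beyond either source alone, while the conflicting mass (point says short, interval says long) is discarded and renormalized away.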

  5. Design and comparison of laser windows for high-power lasers

    NASA Astrophysics Data System (ADS)

    Niu, Yanxiong; Liu, Wenwen; Liu, Haixia; Wang, Caili; Niu, Haisha; Man, Da

    2014-11-01

    High-power laser systems are increasingly used in industrial and military applications. It is necessary to develop high-power laser systems that can operate over long periods of time without appreciable degradation in performance. When a high-energy laser beam transmits through a laser window, permanent damage can be caused to the window by energy absorption in the window material. Therefore, when designing a high-power laser system, a suitable window material must be selected and the laser damage threshold of the window must be known. In this paper, a thermal analysis model of a high-power laser window is established, and the relationship between the laser intensity and the thermal-stress field distribution is studied by deriving the governing formulas with the integral-transform method. The influence of window radius, thickness, and laser intensity on the temperature and stress field distributions is analyzed. The performance of K9 glass and fused silica glass is then compared, and the laser-induced damage mechanism is analyzed. Finally, the damage thresholds of the laser windows are calculated. The results show that, compared with K9 glass, fused silica glass has a higher damage threshold due to its better thermodynamic properties. The presented theoretical analysis and simulation results are helpful for the design and selection of high-power laser windows.

  6. Comparison of sampling techniques for Bayesian parameter estimation

    NASA Astrophysics Data System (ADS)

    Allison, Rupert; Dunkley, Joanna

    2014-02-01

    The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling, and affine-invariant ensemble Markov chain Monte Carlo (MCMC) sampling. We focus on their performance on toy-model Gaussian likelihoods and on a real-world cosmological data set. We outline the sampling algorithms themselves and elaborate on performance diagnostics such as convergence time, scope for parallelization, dimensional scaling, requisite tunings, and suitability for non-Gaussian distributions. We find that nested sampling delivers high-fidelity estimates for posterior statistics at low computational cost, and should be adopted in favour of Metropolis-Hastings in many cases. Affine-invariant MCMC is competitive when computing clusters can be utilized for massive parallelization. Affine-invariant MCMC and existing extensions to nested sampling naturally probe multimodal and curving distributions.
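Of the three samplers compared, Metropolis-Hastings is the simplest to write down. A minimal random-walk sketch on a 1-D toy Gaussian likelihood (the step size, seed, and chain length are arbitrary illustrative choices, not the paper's settings):

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)        # symmetric proposal
        lp_prop = log_post(proposal)
        if math.log(rng.random()) < lp_prop - lp:  # accept with prob min(1, ratio)
            x, lp = proposal, lp_prop
        samples.append(x)                          # rejected moves repeat x
    return samples

# Toy Gaussian log-likelihood centered on 3.0 (illustrative only)
def log_post(x):
    return -0.5 * (x - 3.0) ** 2

chain = metropolis_hastings(log_post, x0=0.0, n_steps=20000)
burned = chain[5000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)
print(posterior_mean)
```

The acceptance step above is the piece the other two methods replace: nested sampling evolves a shrinking set of live points under a likelihood constraint, and affine-invariant MCMC proposes scale-free stretch moves within a walker ensemble, which is what removes the step-size tuning burden noted in the abstract.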

  7. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, solving the metadata bottleneck of traditional storage models, and offers parallel data access, data sharing across platforms, intelligent storage devices, and secure data access. We apply object-based storage to the storage management of remote sensing images and construct an object-based storage model for distributed remote sensing images. In this model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of the traditional network storage model and the object-based storage model.

  8. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring

    PubMed Central

    Gharavi, Hamid; Hu, Bin

    2018-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network. PMID:29503505

  9. Relationships Between Divided Attention and Working Memory Impairment in People With Schizophrenia

    PubMed Central

    Gray, Bradley E.; Hahn, Britta; Robinson, Benjamin; Harvey, Alex; Leonard, Carly J.; Luck, Steven J.; Gold, James M.

    2014-01-01

    Recent studies suggest that people with schizophrenia (PSZ) have difficulty distributing their attention broadly. Other research suggests that PSZ have reduced working memory (WM) capacity. This study tested whether these findings reflect a common underlying deficit. We measured the ability to distribute attention by means of the Useful Field of View (UFOV) task, in which participants must distribute attention so that they can discriminate a foveal target and simultaneously localize a peripheral target. Participants included 50 PSZ and 52 healthy control subjects. We found that PSZ exhibited severe impairments in UFOV performance, that UFOV performance was highly correlated with WM capacity in PSZ (r = −.61), and that UFOV impairments could not be explained by either impaired low-level processing or a generalized deficit. These results suggest that a common mechanism explains deficits in the ability to distribute attention broadly, reduced WM capacity, and other aspects of impaired cognition in schizophrenia. We hypothesize that this mechanism may involve abnormal local circuit dynamics that cause a hyperfocusing of resources onto a small number of internal representations. PMID:24748559
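The reported relationship (UFOV performance correlated with WM capacity at r = −.61) is a standard Pearson correlation. The sketch below shows how that statistic is computed; the data are synthetic toy values, not the study's, and chosen to be perfectly anti-correlated for a clean check.

```python
# Illustrative Pearson correlation, as used for the UFOV-WM relationship
# reported above. The toy data are invented: higher UFOV threshold (worse
# performance) paired with lower WM capacity.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ufov_threshold = [100, 200, 300, 400]   # ms, synthetic
wm_capacity = [3.5, 3.0, 2.5, 2.0]      # items, synthetic
print(round(pearson_r(ufov_threshold, wm_capacity), 2))  # -1.0
```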

  10. Modeling electronic trap state distributions in nanocrystalline anatase

    NASA Astrophysics Data System (ADS)

    Le, Nam; Schweigert, Igor

    The charge transport properties of nanocrystalline TiO2 films, and thus the catalytic performance of devices that incorporate them, are affected strongly by the spatial and energetic distribution of localized electronic trap states. Such traps may arise from a variety of defects: Ti interstitials, O vacancies, step edges at surfaces, and grain boundaries. We have developed a procedure for applying density functional theory (DFT) and density functional tight binding (DFTB) calculations to characterize distributions of localized states arising from multiple types of defects. We have applied the procedure to investigate how the morphologies of interfaces between pairs of attached anatase nanoparticles determine the energies of trap states therein. Our results complement recent experimental findings that subtle changes in the morphology of highly porous TiO2 aerogel networks can have a dramatic effect on catalytic performance, which was attributed to changes in the distribution of trap states. This work was supported by the U.S. Naval Research Laboratory via the National Research Council and by the Office of Naval Research through the U.S. Naval Research Laboratory.

  11. Wireless Infrastructure M2M Network For Distributed Power Grid Monitoring.

    PubMed

    Gharavi, Hamid; Hu, Bin

    2017-01-01

    With the massive integration of distributed renewable energy sources (RESs) into the power system, the demand for timely and reliable network quality monitoring, control, and fault analysis is rapidly growing. Following the successful deployment of Phasor Measurement Units (PMUs) in transmission systems for power monitoring, a new opportunity to utilize PMU measurement data for power quality assessment in distribution grid systems is emerging. The main problem, however, is that a distribution grid system does not normally have the support of an infrastructure network. Therefore, the main objective in this paper is to develop a Machine-to-Machine (M2M) communication network that can support wide-ranging sensory data, including high-rate synchrophasor data for real-time communication. In particular, we evaluate the suitability of the emerging IEEE 802.11ah standard by exploiting its important features, such as classifying the power grid sensory data into different categories according to their traffic characteristics. For performance evaluation we use our hardware-in-the-loop grid communication network testbed to assess the performance of the network.

  12. Composition and Molecular Weight Distribution of Carob Germ Proteins Fractions

    USDA-ARS?s Scientific Manuscript database

    Biochemical properties of carob germ proteins were analyzed using a combination of selective extraction, reversed-phase high performance liquid chromatography (RP-HPLC), size exclusion chromatography coupled with multi-angle laser light scattering (SEC-MALS) and electrophoretic analysis. Using a mo...

  13. Greenhouse Evaluation of Air-Assist Delivery Parameters for Mature Poinsettias

    USDA-ARS?s Scientific Manuscript database

    Understanding the performance characteristics of application equipment is important for helping make the most efficacious applications. While handguns making high volume applications are common in greenhouse production, it is difficult to achieve uniform distribution of product in a timely manner. ...

  14. Modeling of Fuel Film Cooling Using Steady State RANS and Unsteady DES Approaches

    DTIC Science & Technology

    2016-07-27

    Briefing charts. Dates covered: 21 July 2016 – 31 August 2016. Title: Modeling of Fuel Film Cooling Using Steady State RANS and Unsteady DES Approaches. Distribution A: Approved for Public Release; Distribution Unlimited. PA# 16391. Introduction: Fuel film cooling is critical for high-performing boost engines using the Oxygen Rich Staged

  15. Qualitative Beam Profiling of Light Curing Units for Resin Based Composites.

    PubMed

    Haenel, Thomas; Hausnerová, Berenika; Steinhaus, Johannes; Moeginger, Ing Bernhard

    2016-12-01

    This study investigates two technically simple methods to determine the irradiance distribution of light curing units (LCUs), which governs the performance of visible-light-cured resin-based composites. Insufficient light irradiation leads to under-cured composites with poor mechanical properties and elution of residual monomers. The unknown irradiance distribution and its effect on the final restoration are the main critical issues, normally requiring highly sophisticated experimental equipment. The study shows that irradiance distributions of LCUs can easily be determined qualitatively with generally available equipment. This significantly helps dentists in practice stay informed about the homogeneity of their curing lights. Copyright© 2016 Dennis Barber Ltd.

  16. High-resolution mapping of molecules in an ionic liquid via scanning transmission electron microscopy.

    PubMed

    Miyata, Tomohiro; Mizoguchi, Teruyasu

    2018-03-01

    Understanding structures and spatial distributions of molecules in liquid phases is crucial for the control of liquid properties and to develop efficient liquid-phase processes. Here, real-space mapping of molecular distributions in a liquid was performed. Specifically, the ionic liquid 1-Ethyl-3-methylimidazolium bis(trifluoromethanesulfonyl)imide (C2mimTFSI) was imaged using atomic-resolution scanning transmission electron microscopy. Simulations revealed network-like bright regions in the images that were attributed to the TFSI- anion, with minimal contributions from the C2mim+ cation. Simple visualization of the TFSI- distribution in the liquid sample was achieved by binarizing the experimental image.

  17. A High Performance Piezoelectric Sensor for Dynamic Force Monitoring of Landslide

    PubMed Central

    Li, Ming; Cheng, Wei; Chen, Jiangpan; Xie, Ruili; Li, Xiongfei

    2017-01-01

    Due to the increasing influence of human engineering activities, it is important to monitor transient disturbances during the evolution of a landslide. For this purpose, a high-performance piezoelectric sensor is presented in this paper. To adapt to the high static and dynamic stress environment of slope engineering, two key techniques, namely the self-structure pressure distribution method (SSPDM) and the capacitive circuit voltage distribution method (CCVDM), are employed in the design of the sensor. The SSPDM greatly improves the compressive capacity, and the CCVDM quantitatively decreases the high direct response voltage. Calibration experiments are then conducted with independently developed static and transient mechanisms, since conventional testing machines cannot meet the calibration requirements. The sensitivity coefficient is obtained, and the results reveal that the sensor has high compressive capacity, stable sensitivity under different static preload levels, and wide-range dynamic measuring linearity. Finally, to reduce the measuring error caused by charge leakage of the piezoelectric element, a low-frequency correction method is proposed and experimentally verified. With these satisfactory static and dynamic properties and improved low-frequency measuring reliability, the sensor can complement the dynamic monitoring capability of existing landslide monitoring and forecasting systems. PMID:28218673

  18. Comparative Genome Analysis of Ciprofloxacin-Resistant Pseudomonas aeruginosa Reveals Genes Within Newly Identified High Variability Regions Associated With Drug Resistance Development

    PubMed Central

    Su, Hsun-Cheng; Khatun, Jainab; Kanavy, Dona M.

    2013-01-01

    The alarming rise of ciprofloxacin-resistant Pseudomonas aeruginosa has been reported in several clinical studies. Though the mutation of resistance genes and their role in drug resistance have been researched, the process by which the bacterium acquires high-level resistance is still not well understood. How does the genomic evolution of P. aeruginosa affect resistance development? Could the exposure of the bacteria to antibiotics enrich genomic variants that lead to the development of resistance, and if so, how are these variants distributed through the genome? To answer these questions, we performed 454 pyrosequencing and a whole genome analysis both before and after exposure to ciprofloxacin. The comparative sequence data revealed 93 unique resistant-strain variation sites, which included a mutation in the DNA gyrase subunit A gene. We generated variation-distribution maps comparing the wild and resistant types, and isolated 19 candidates with available transposon mutants from three discrete resistance-associated high variability regions to perform a ciprofloxacin exposure assay. Of these candidates with transposon disruptions, 79% (15/19) showed a reduced ability to gain high-level resistance, suggesting that genes within these high variability regions might be enriched for functions associated with resistance development. PMID:23808957

  19. Random-access scanning microscopy for 3D imaging in awake behaving animals

    PubMed Central

    Nadella, K. M. Naga Srinivas; Roš, Hana; Baragli, Chiara; Griffiths, Victoria A.; Konstantinou, George; Koimtzis, Theo; Evans, Geoffrey J.; Kirkby, Paul A.; Silver, R. Angus

    2018-01-01

    Understanding how neural circuits process information requires rapid measurements from identified neurons distributed in 3D space. Here we describe an acousto-optic lens two-photon microscope that performs high-speed focussing and line-scanning within a volume spanning hundreds of micrometres. We demonstrate its random access functionality by selectively imaging cerebellar interneurons sparsely distributed in 3D and by simultaneously recording from the soma, proximal and distal dendrites of neocortical pyramidal cells in behaving mice. PMID:27749836

  20. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    PubMed

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.
