Science.gov

Sample records for parallel forms reliability

  1. Parallelized reliability estimation of reconfigurable computer networks

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Das, Subhendu; Palumbo, Dan

    1990-01-01

    A parallelized system, ASSURE, for computing the reliability of embedded avionics flight control systems which are able to reconfigure themselves in the event of failure is described. ASSURE accepts a grammar that describes a reliability semi-Markov state-space. From this it creates a parallel program that simultaneously generates and analyzes the state-space, placing upper and lower bounds on the probability of system failure. ASSURE is implemented on a 32-node Intel iPSC/860, and has achieved high processor efficiencies on real problems. Through a combination of improved algorithms, exploitation of parallelism, and use of an advanced microprocessor architecture, ASSURE has reduced the execution time on substantial problems by a factor of one thousand over previous workstation implementations. Furthermore, ASSURE's parallel execution rate on the iPSC/860 is an order of magnitude faster than its serial execution rate on a Cray-2 supercomputer. While dynamic load balancing is necessary for ASSURE's good performance, it is needed only infrequently; the particular method of load balancing used does not substantially affect performance.
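
    As an aside for readers new to this technique, the sketch below (not ASSURE's actual algorithm; the unit count, failure rate, and pruning threshold are invented) illustrates how generating a failure state-space while pruning low-probability branches yields simultaneous upper and lower bounds on the probability of system failure:

```python
# Minimal sketch (not ASSURE's actual algorithm): generate a discrete
# failure state-space breadth-first, pruning improbable branches and
# folding their mass into upper/lower bounds on system failure.
from collections import deque

def failure_bounds(n_units, k_required, p_fail, depth, prune=1e-12):
    """Bounds on P(system failure): the system fails when fewer than
    k_required of n_units remain working within `depth` mission steps."""
    lower = 0.0      # probability proven to reach a failed state
    pruned = 0.0     # probability mass we stopped exploring
    queue = deque([(n_units, 1.0, 0)])   # (working units, path prob, step)
    while queue:
        working, prob, step = queue.popleft()
        if working < k_required:
            lower += prob                # definite system failure
            continue
        if step == depth:
            continue                     # survived the mission
        if prob < prune:
            pruned += prob               # unresolved mass -> bound gap
            continue
        # at most one unit fails per step (small-probability approximation)
        queue.append((working - 1, prob * working * p_fail, step + 1))
        queue.append((working, prob * (1 - working * p_fail), step + 1))
    return lower, lower + pruned         # (lower bound, upper bound)

print(failure_bounds(n_units=4, k_required=3, p_fail=1e-4, depth=100))
```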

  2. Essay Reliability: Form and Meaning.

    ERIC Educational Resources Information Center

    Shale, Doug

    This study is an attempt at a cohesive characterization of the concept of essay reliability. As such, it takes as a basic premise that previous and current practices in reporting reliability estimates for essay tests have certain shortcomings. The study provides an analysis of these shortcomings--partly to encourage a fuller understanding of the…

  3. Armed Services Vocational Aptitude Battery (ASVAB): Alternate Forms Reliability (Forms 8, 9, 10, and 11). Technical Paper for Period October 1980-April 1985.

    ERIC Educational Resources Information Center

    Palmer, Pamla; And Others

    A study investigated the alternate forms reliability of the Armed Services Vocational Aptitude Battery (ASVAB) Forms 8, 9, 10, and 11. Usable data were obtained from 62,938 armed services applicants who took the ASVAB in January and February 1983. Results showed that the parallel forms reliability coefficients between ASVAB Form 8a and the…
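
    For context, the parallel (alternate) forms reliability coefficients such studies report are conventionally computed as the Pearson correlation between the same examinees' scores on the two forms; a minimal sketch with fabricated scores:

```python
# Illustrative only: parallel-forms reliability estimated as the Pearson
# correlation between scores on two forms. Scores below are made up.
import numpy as np

form_a = np.array([23, 31, 27, 35, 29, 24, 33, 30])  # hypothetical scores
form_b = np.array([25, 30, 26, 36, 28, 22, 34, 31])  # same examinees, form B

r_ab = np.corrcoef(form_a, form_b)[0, 1]
print(f"parallel-forms reliability estimate: {r_ab:.3f}")
```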

  4. Construction of Parallel Test Forms Using Optimal Test Designs.

    ERIC Educational Resources Information Center

    Dirir, Mohamed A.

    The effectiveness of an optimal item selection method in designing parallel test forms was studied during the development of two forms that were parallel to an existing form for each of three language arts tests for fourth graders used in the Connecticut Mastery Test. Two listening comprehension forms, two reading comprehension forms, and two…

  5. Reliability and mass analysis of dynamic power conversion systems with parallel or standby redundancy

    NASA Technical Reports Server (NTRS)

    Juhasz, A. J.; Bloomfield, H. S.

    1985-01-01

    A combinatorial reliability approach is used to identify potential dynamic power conversion systems for space mission applications. A reliability and mass analysis is also performed, specifically for a 100 kWe nuclear Brayton power conversion system with parallel redundancy. Although this study is done for a reactor outlet temperature of 1100 K, preliminary system mass estimates are also included for reactor outlet temperatures ranging up to 1500 K.
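
    The combinatorial formulas behind such parallel-versus-standby comparisons are compact; a minimal sketch with illustrative failure data (not values from the study):

```python
# Sketch of the combinatorial approach (illustrative numbers): with active
# parallel redundancy the system works if any unit works; with cold standby
# and perfect switching, exponential unit lifetimes give a Poisson sum.
import math

def parallel_reliability(r_unit, n):
    """n independent, active units in parallel."""
    return 1.0 - (1.0 - r_unit) ** n

def standby_reliability(lam, t, n):
    """1 active unit + (n-1) cold spares, perfect switching:
    R(t) = sum_{k=0}^{n-1} (lam*t)^k e^{-lam*t} / k!"""
    return sum((lam * t) ** k * math.exp(-lam * t) / math.factorial(k)
               for k in range(n))

lam, t = 1e-5, 8760.0                    # failure rate per hour, 1-year mission
r_unit = math.exp(-lam * t)
print(parallel_reliability(r_unit, 2))   # two active units
print(standby_reliability(lam, t, 2))    # one active + one cold spare
```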

  6. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on long duration observation (5-minute), has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  7. The ijk forms of factorization methods. II - Parallel systems

    NASA Technical Reports Server (NTRS)

    Ortega, J. M.; Romine, C. H.

    1988-01-01

    The paper considers the 'ijk forms' of LU and Cholesky factorization on certain parallel computers. This extends an earlier analysis for vector computers. Attention is restricted to local memory systems with processors that may or may not have vector capability. Special attention is given to bus architectures but qualitative analyses are given for other interconnection systems.
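
    For readers who have not seen them, the "ijk forms" are the orderings of the three nested loops of the same elimination; a minimal kij example (illustrative only, no pivoting):

```python
# The "ijk forms" are the loop orderings of one and the same LU
# elimination; the kij variant below (Doolittle style, no pivoting) is a
# minimal illustration of the loop the paper maps onto parallel machines.
import numpy as np

def lu_kij(a):
    """In-place LU (L unit lower, U upper) via the kij loop ordering."""
    a = a.astype(float)
    n = a.shape[0]
    for k in range(n - 1):              # eliminate below pivot k
        for i in range(k + 1, n):
            a[i, k] /= a[k, k]          # multiplier, stored in L's slot
            for j in range(k + 1, n):
                a[i, j] -= a[i, k] * a[k, j]
    return a

m = np.array([[4.0, 3.0], [6.0, 3.0]])
packed = lu_kij(m)
l = np.tril(packed, -1) + np.eye(2)
u = np.triu(packed)
print(np.allclose(l @ u, m))            # True
```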

  8. MINT: A Reliability Modeling Framework for Energy-Efficient Parallel Disk Systems

    E-print Network

    Qin, Xiao

    MINT: A Reliability Modeling Framework for Energy-Efficient Parallel Disk Systems … we develop a mathematical modeling framework called MINT. We first model the behaviors … real-world trace to validate our MINT model. Validation results show that the behaviors of PDC and MAID…

  9. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems but, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
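
    The parity scheme's single-failure recovery is easy to demonstrate; the bytes below are arbitrary demonstration data, not from the thesis:

```python
# Sketch of the parity idea the thesis analyzes: with one parity disk,
# any single, self-identifying disk failure is recovered by XOR-ing the
# surviving disks together with the parity disk.
from functools import reduce

data_disks = [b"\x10\x22\x35", b"\x0f\x81\x40", b"\x55\x55\x55"]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data_disks))

failed = 1                                     # disk 1 is lost (identified)
survivors = [d for i, d in enumerate(data_disks) if i != failed] + [parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))

assert rebuilt == data_disks[failed]           # contents fully recovered
```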

  10. COMPANION FORMS IN PARALLEL WEIGHT ONE

    E-print Network

    Kassaei, Payman L.

    Abstract: Let p > 2 be prime. … a result of Gross [Gro90], and prove a companion forms theorem for Hilbert modular forms of parallel weight one … that there is a weight one form; this was proved by Gross [Gro90], under the further hypothesis that the eigenvalues…

  11. An Examination of the Effect of Multidimensionality on Parallel Forms Construction.

    ERIC Educational Resources Information Center

    Ackerman, Terry A.

    This paper examines the effect of using unidimensional item response theory (IRT) item parameter estimates of multidimensional items to create weakly parallel test forms using target information curves. To date, all computer-based algorithms that have been devised to create parallel test forms assume that the items are unidimensional. This paper…

  12. How Reliable are Parallel Disk Systems When Energy-Saving Schemes are Involved?

    E-print Network

    Qin, Xiao

    Growing evidence shows that energy-saving schemes in disk drives usually have negative impacts on storage reliability. … a framework, called MINT, to evaluate the reliability of a parallel disk system where energy-saving mechanisms…

  13. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, the genetic algorithm (GA), to construct parallel…
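
    A toy rendering of the idea, assuming a 2PL IRT model with randomly generated item parameters (this is not the authors' implementation, and crossover is omitted, leaving a mutation-only evolutionary search):

```python
# Toy GA-style search: pick a fixed-length subset of items whose summed
# item information, evaluated at a few ability points, matches a target
# test information function (TIF). Item parameters are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_items, form_len, thetas = 100, 20, np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
a = rng.uniform(0.5, 2.0, n_items)            # 2PL discriminations
b = rng.uniform(-2.0, 2.0, n_items)           # 2PL difficulties

def item_info(theta):                          # 2PL Fisher information
    p = 1.0 / (1.0 + np.exp(-a[:, None] * (theta - b[:, None])))
    return a[:, None] ** 2 * p * (1.0 - p)     # shape: (items, thetas)

info = item_info(thetas)
target = info[rng.choice(n_items, form_len, replace=False)].sum(axis=0)

def fitness(form):                             # negative TIF mismatch
    return -np.abs(info[form].sum(axis=0) - target).sum()

def mutate(form):                              # swap one item for an unused one
    form = form.copy()
    pool = np.setdiff1d(np.arange(n_items), form)
    form[rng.integers(form_len)] = rng.choice(pool)
    return form

pop = [rng.choice(n_items, form_len, replace=False) for _ in range(40)]
for _ in range(200):                           # evolve: keep best, mutate
    pop.sort(key=fitness, reverse=True)
    pop = pop[:20] + [mutate(p) for p in pop[:20]]
pop.sort(key=fitness, reverse=True)
print("TIF mismatch of best form:", -fitness(pop[0]))
```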

  14. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

  15. A Parallel Independent Component Implement Based on Learning Updating with Forms of Matrix Transformations

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Hui; Kong, Guang-Qian; Liu, Cai-Hong

    The PVM (Parallel Virtual Machine) library is a tool for processing large data sets. This paper aims at a high-performance solution that exploits the PVM library and parallel computers to solve the ICA (Independent Component Analysis) problem. The paper presents parallel power ICA implementations for decomposing data sets. Power iteration (PI) is an algorithm for independent component analysis with some desirable features; it offers higher performance and data capacity than current sequential implementations. In this paper, we show the power iteration algorithm whose learning update takes the form of a matrix transformation. From the power iteration algorithm, we develop a parallel power iteration algorithm and implement a parallel component decomposition solution. Finally, experimental results, analysis, and future plans are presented.
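
    The serial kernel that such implementations distribute is ordinary power iteration; a minimal single-process sketch (the PVM-based parallelization of the matrix-vector product is omitted):

```python
# Power iteration on a symmetric matrix: repeatedly multiply and
# normalize until the dominant eigenvector converges. In a PVM-style
# setting, the matrix-vector product would be split across processes.
import numpy as np

def power_iteration(m, iters=1000, tol=1e-10):
    v = np.random.default_rng(1).normal(size=m.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = m @ v                       # the parallelizable step
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v, v @ m @ v                 # eigenvector, Rayleigh quotient

m = np.array([[4.0, 1.0], [1.0, 3.0]])
vec, val = power_iteration(m)
print(val)                              # ~4.618, the dominant eigenvalue
```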

  16. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report, An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  17. Comparison of heuristic methods for reliability optimization of series-parallel systems 

    E-print Network

    Lee, Hsiang

    2003-01-01

    …is superior to the Nakagawa and Nakashima method and the Kim and Yum method for the series-parallel problem with multiple component choices in terms of solution quality, but an analysis of computational complexity shows that the max-min approach is inferior…

  18. Determinism Is Not Enough: Making Parallel Programs Reliable with Stable Multithreading

    E-print Network

    …and researchers have recently dedicated much effort to bringing determinism into multithreading. … include Coverity's source code analyzer [6], Microsoft's Static Driver Verifier [3], Valgrind … memory constraints, forcing processors into multicore designs. Thus, developers must resort to parallel code for best…

  19. Validity and Reliability of International Physical Activity Questionnaire-Short Form in Chinese Youth

    ERIC Educational Resources Information Center

    Wang, Chao; Chen, Peijie; Zhuang, Jie

    2013-01-01

    Purpose: The psychometric profiles of the widely used International Physical Activity Questionnaire-Short Form (IPAQ-SF) in Chinese youth have not been reported. The purpose of this study was to examine the validity and reliability of the IPAQ-SF using a sample of Chinese youth. Method: One thousand and twenty-one youth (M[subscript age] = 14.26 ±…

  20. An Investigation into Reliability, Availability, and Serviceability (RAS) Features for Massively Parallel Processor Systems

    SciTech Connect

    KELLY, SUZANNE M.; OGDEN, JEFFREY BRANDON

    2002-10-01

    A study has been completed into the RAS features necessary for Massively Parallel Processor (MPP) systems. As part of this research, a use case model was built of how RAS features would be employed in an operational MPP system. Use cases are an effective way to specify requirements so that all involved parties can easily understand them. This technique is in contrast to laundry lists of requirements that are subject to misunderstanding as they are without context. As documented in the use case model, the study included a look at incorporating system software and end-user applications, as well as hardware, into the RAS system.

  1. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.
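
    As a conceptual stand-in (not the dissertation's systolic implementation), the sketch below shows a QR-based least-squares solve together with a residual-orthogonality check of the kind usable for algorithm-based error detection:

```python
# Sketch of the least-squares kernel mapped to systolic arrays: QR-based
# solve, plus a residual check; for a true LS solution the residual is
# orthogonal to the columns of A, so a large check value signals a fault.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 8))            # data matrix
x_true = rng.normal(size=8)
b = A @ x_true + 0.01 * rng.normal(size=60)

Q, R = np.linalg.qr(A)                  # in hardware: Givens/Householder
x = np.linalg.solve(R, Q.T @ b)         # back-substitution

residual = b - A @ x
check = np.abs(A.T @ residual).max()    # ~0 when the solve is error-free
print(x[:3], check)
```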

  2. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  3. Magnetosheath filamentary structures formed by ion acceleration at the quasi-parallel bow shock

    NASA Astrophysics Data System (ADS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-04-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  4. Parallel G-Quadruplexes Formed by Guanine-Rich Microsatellite Repeats Inhibit Human Topoisomerase I.

    PubMed

    Ogloblina, A M; Bannikova, V A; Khristich, A N; Oretskaya, T S; Yakubovskaya, M G; Dolinnaya, N G

    2015-08-01

    Using UV and CD spectroscopy, we studied the thermodynamic stability and folding topology of G-quadruplexes (G4), formed by G-rich fragments in human microsatellites that differ in the number of guanosines within the repeating unit. The oligonucleotides d(GGGT)4 and d(GGT)4 were shown to form propeller-type parallel-stranded intramolecular G-quadruplexes. The G4 melting temperature is dramatically decreased (by more than 45°C) in the transition from the tri-G-tetrad to the bi-G-tetrad structure. d(GT)n-repeats do not form perfect G-quadruplexes (one-G-tetrad); folded G4-like conformation is not stable at room temperature and is not stabilized by monovalent metal ions. The minimum concentration of K+ that promotes quadruplex folding of d(GGT)4 was found to depend on the supporting Na+ concentration. It was demonstrated for the first time that the complementary regions flanking G4-motifs (as in d(CACTGG-CC-(GGGT)4-TA-CCAGTG)) cannot form a double helix in the case of a parallel G4 due to the steric remoteness, but instead destabilize the structure. Additionally, we investigated the effect of the described oligonucleotides on the activity of topoisomerase I, one of the key cell enzymes, with a focus on the relationship between the stability of the formed quadruplexes and the inhibition degree of the enzyme. The most active inhibitor with IC50 = 0.08 µM was the oligonucleotide d(CACTGG-CC-(GGGT)4-TA-CCAGTG), whose flanking G4-motif sequences reduced the extreme stability of G-quadruplex formed by d(GGGT)4. PMID:26547071

  5. Reliability and Validity of the Korean Young Schema Questionnaire-Short Form-3 in Medical Students

    PubMed Central

    Lee, Seung Jae; Choi, Young Hee; Rim, Hyo Deog; Won, Seung Hee

    2015-01-01

    Objective The Young Schema Questionnaire (YSQ) is a self-report measure of early maladaptive schemas and is currently in its third revision; it is available in both long (YSQ-L3) and short (YSQ-S3) forms. The goal of this study was to develop a Korean version of the YSQ-S3 and establish its psychometric properties in a Korean sample. Methods A total of 542 graduate medical students completed the Korean version of the YSQ-S3 and several other psychological scales. A subsample of 308 subjects completed the Korean YSQ-S3 both before and after a 2-year test-retest interval. Correlation, regression, and confirmatory factor analyses were performed on the data. Results The internal consistency of the 90-item Korean YSQ-S3 was 0.97 and that of each schema was acceptable, with Cronbach's alphas ranging from 0.59 to 0.90. The test-retest reliability ranged from 0.46 to 0.65. Every schema showed robust positive correlations with most psychological measures. The confirmatory factor analysis for the 18-factor structure originally proposed by Young, Klosko, and Weishaar (2003) showed that most goodness-of-fit statistics were indicative of a satisfactory fit. Conclusion These findings support the reliability and validity of the Korean version of the YSQ-S3. PMID:26207121
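
    For reference, the two reliability coefficients reported above are computed as follows; the response matrix is fabricated purely to show the formulas:

```python
# Cronbach's alpha (internal consistency) and test-retest reliability,
# on fabricated data (5 respondents x 4 items) for illustration only.
import numpy as np

items = np.array([[3, 4, 3, 4],
                  [2, 2, 3, 2],
                  [4, 5, 4, 4],
                  [1, 2, 1, 2],
                  [3, 3, 4, 3]], dtype=float)   # rows: people, cols: items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")

# Test-retest reliability: correlation of total scores across occasions.
retest = items + np.random.default_rng(2).normal(0, 0.5, items.shape)
r_tt = np.corrcoef(items.sum(axis=1), retest.sum(axis=1))[0, 1]
print(f"test-retest r = {r_tt:.3f}")
```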

  6. Alternating d(G-A) sequences form a parallel-stranded DNA homoduplex.

    PubMed Central

    Rippe, K; Fritsch, V; Westhof, E; Jovin, T M

    1992-01-01

    The oligonucleotides d[(G-A)7G] and d[(G-A)12G] self-associate under physiological conditions (10 mM MgCl2, neutral pH) into a stable double-helical structure (psRR-DNA) in which the two polypurine strands are in a parallel orientation in contrast to the antiparallel disposition of conventional B-DNA. We have characterized psRR-DNA by gel electrophoresis, UV absorption, vacuum UV circular dichroism, monomer-excimer fluorescence of oligonucleotides end-labelled with pyrene, and chemical probing with diethyl pyrocarbonate and dimethyl sulfate. The duplex is stable at pH 4-9, suggesting that the structure is compatible with, but does not require, protonation of the A residues. The data support a model derived from force-field analysis in which the parallel-stranded d(G-A)n helix is right-handed and constituted of alternating, symmetrical Gsyn.Gsyn and Aanti.Aanti base pairs with N1H...O6 and N6H...N7 hydrogen bonds, respectively. This dinucleotide structure may be the source of a negative peak observed at 190 nm in the vacuum UV CD spectrum, a feature previously reported only for left-handed Z-DNA. The related sequence d[(GAAGGA)4G] also forms a parallel-stranded duplex but one that is less stable and probably involves a slightly different secondary structure. We discuss the potential intervention of psRR-DNA in recombination, gene expression and the stabilization of genomic structure. PMID:1396571

  7. Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators

    SciTech Connect

    Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

    2013-07-30

    A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators including single input, single output resonators, differential resonators, balun resonators, and ring resonators can be used in MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

  8. Closed-form massively-parallel range-from-image-flow algorithm

    SciTech Connect

    Raviv, D.; Albus, J.S.

    1990-10-01

    The authors provide a closed-form solution for obtaining the 3D structure of a scene for a given six-degree-of-freedom motion of a camera. The solution is massively parallel, i.e., the range that corresponds to each pixel depends only on the spatial and temporal changes in the intensities of that pixel and on the motion parameters of the camera. The measurements of the intensities are done in a priori known directions. The solution is for the general case of camera motion. The derivation is based upon representing the image in the spherical coordinate system, although a similar approach could be taken for other image domains, e.g., the planar coordinate system. They comment on the amount of computation, errors, and singular points of the solutions. They also suggest a practical way to significantly reduce the computations and implement them.

  9. A cross-sectional audit of student health insurance waiver forms: an assessment of reliability and compliance.

    PubMed

    Molnar, JoAnn

    2002-01-01

    To assess the reliability of using a waiver process to ensure compliance with health insurance requirements established by a university, the author conducted a cross-sectional verification and compliance audit of insurance waiver forms received for the 1999/2000 academic year. This study revealed that a waiver form process could not be relied upon to enforce compliance. PMID:11910953

  10. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required, and what must be done to ensure that prompts are in fact parallel, is not widely known. To date, evidence of…

  11. Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.

    SciTech Connect

    Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

    2014-09-01

    Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
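
    The core IFORM recipe can be sketched in a few lines. All distribution families and parameters below are invented stand-ins for the models a real study would fit to hindcast data:

```python
# Sketch of the IFORM recipe: 1) reliability index beta from the
# recurrence interval, 2) a circle of radius beta in standard-normal
# (u1, u2) space, 3) inverse-transform through fitted marginal and
# conditional models back to (Hs, Te) space. Parameters are illustrative.
import numpy as np
from scipy import stats

n_years, states_per_year = 50, 2922          # 3-hour sea states
p_exceed = 1.0 / (n_years * states_per_year)
beta = stats.norm.ppf(1.0 - p_exceed)        # reliability index

theta = np.linspace(0, 2 * np.pi, 360)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

# Hypothetical fitted models: Hs ~ Weibull; Te | Hs ~ lognormal whose
# median grows with Hs (stand-ins for distributions fitted to data).
hs = stats.weibull_min(c=1.5, scale=2.0).ppf(stats.norm.cdf(u1))
te = stats.lognorm(s=0.15, scale=5.0 + 1.5 * np.sqrt(hs)).ppf(stats.norm.cdf(u2))
print(hs.max(), te.max())                    # extreme corner of the contour
```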

  12. G-quadruplexes form ultrastable parallel structures in deep eutectic solvent.

    PubMed

    Zhao, Chuanqi; Ren, Jinsong; Qu, Xiaogang

    2013-01-29

    G-quadruplex DNA is highly polymorphic. Its conformational transitions are involved in a series of important life events. These controllable diverse structures also make G-quadruplex DNA a promising candidate as a catalyst, biosensor, and DNA-based architecture. So far, G-quadruplex DNA-based applications have been restricted to aqueous media. Since many chemical reactions and devices are required to be performed under strictly anhydrous conditions, even at high temperature, it is challenging and meaningful to study G-quadruplex DNA in a water-free medium. In this report, we systematically studied 10 representative G-quadruplexes in anhydrous room-temperature deep eutectic solvents (DESs). The results indicate that intramolecular, intermolecular, and even higher-order G-quadruplex structures can be formed in DES. Intriguingly, in DES, the parallel structure becomes the preferred G-quadruplex DNA conformation. More importantly, compared to aqueous media, G-quadruplexes are ultrastable in DES and, surprisingly, some G-quadruplex DNA can survive even beyond 110 °C. Our work sheds light on the applications of G-quadruplex DNA in chemical reactions and DNA-based devices performed in an anhydrous environment, even at high temperature. PMID:23282194

  13. Measuring Teacher Self-Report on Classroom Practices: Construct Validity and Reliability of the Classroom Strategies Scale-Teacher Form

    ERIC Educational Resources Information Center

    Reddy, Linda A.; Dudek, Christopher M.; Fabiano, Gregory A.; Peters, Stephanie

    2015-01-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented.…

  14. An index-based short-form of the WISC-IV with accompanying analysis of the reliability and

    E-print Network

    Crawford, John R.

    An index-based short-form of the WISC-IV with accompanying analysis of the reliability… School of Psychology, University of Aberdeen, UK; Royal Children's Hospital, Melbourne, Australia; College of Life Sciences and Medicine, King's College, University of Aberdeen, Aberdeen AB24 3HN, UK.

  15. A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity

    ERIC Educational Resources Information Center

    Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

    2009-01-01

    Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

  16. Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form

    ERIC Educational Resources Information Center

    Levy, Susan S.; Readdy, R. Tucker

    2009-01-01

    The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…

  17. Structural Aspects of the Antiparallel and Parallel Duplexes Formed by DNA, 2’-O-Methyl RNA and RNA Oligonucleotides

    PubMed Central

    Szabat, Marta; Pedzinski, Tomasz; Czapik, Tomasz; Kierzek, Elzbieta; Kierzek, Ryszard

    2015-01-01

    This study investigated the influence of the nature of oligonucleotides on their ability to form antiparallel and parallel duplexes. Base pairing of homopurine DNA, 2’-O-MeRNA and RNA oligonucleotides with the respective homopyrimidine DNA, 2’-O-MeRNA and RNA, as well as chimeric oligonucleotides containing LNA, resulted in the formation of 18 different duplexes. UV melting, circular dichroism and fluorescence studies revealed the influence of nucleotide composition on duplex structure and thermal stability, depending on the buffer pH value. Most duplexes simultaneously adopted both orientations. However, at pH 5.0, parallel duplexes were more favorable. Moreover, the presence of LNA nucleotides within a homopyrimidine strand favored the formation of parallel duplexes. PMID:26579720

  18. Re-forming supercritical quasi-parallel shocks. I - One- and two-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Thomas, V. A.; Winske, D.; Omidi, N.

    1990-01-01

    The process of re-forming supercritical quasi-parallel shocks is investigated using one-dimensional and two-dimensional hybrid (particle ion, massless fluid electron) simulations, both of shocks and of simpler two-stream interactions. It is found that the supercritical quasi-parallel shock is not steady. Instead of a well-defined shock ramp between upstream and downstream states that remains at a fixed position in the flow, the ramp periodically steepens, broadens, and then reforms upstream of its former position. It is concluded that the wave generation process is localized at the shock ramp and that the reformation process proceeds in the absence of upstream perturbations intersecting the shock.

  19. Measuring teacher self-report on classroom practices: Construct validity and reliability of the Classroom Strategies Scale - Teacher Form.

    PubMed

    Reddy, Linda A; Dudek, Christopher M; Fabiano, Gregory A; Peters, Stephanie

    2015-12-01

    This article presents information about the construct validity and reliability of a new teacher self-report measure of classroom instructional and behavioral practices (the Classroom Strategies Scales-Teacher Form; CSS-T). The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. Information is provided about the construct validity, internal consistency, test-retest reliability, and freedom from item-bias of the scales. Given previous investigations with the CSS Observer Form, it was hypothesized that internal consistency would be adequate and that confirmatory factor analyses (CFA) of CSS-T data from 293 classrooms would offer empirical support for the CSS-T's Total, Composite and subscales, and yield a similar factor structure to that of the CSS Observer Form. Goodness-of-fit indices of χ²/df, Root Mean Square Error of Approximation, Goodness of Fit Index, and Adjusted Goodness of Fit Index suggested satisfactory fit of proposed CFA models, whereas the Comparative Fit Index did not. Internal consistency estimates of .93 and .94 were obtained for the Instructional Strategies and Behavioral Strategies Total scales, respectively. Adequate test-retest reliability was found for the instructional and behavioral total scales (r = .79, r = .84, percent agreement 93% and 93%). The CSS-T evidences freedom from item bias on important teacher demographics (age, educational degree, and years of teaching experience). Implications of results are discussed. PMID:25622226

  20. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and the Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-10), across 5 time points. The alternate form reliabilities ranged from…

  1. The utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to in this paper are specific to the Cray X-MP line of computers and its associated SSD (Solid-state Storage Device). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  2. Utilization of parallel processing in solving the inviscid form of the average-passage equation system for multistage turbomachinery

    NASA Technical Reports Server (NTRS)

    Mulac, Richard A.; Celestina, Mark L.; Adamczyk, John J.; Misegades, Kent P.; Dawson, Jef M.

    1987-01-01

    A procedure is outlined which utilizes parallel processing to solve the inviscid form of the average-passage equation system for multistage turbomachinery along with a description of its implementation in a FORTRAN computer code, MSTAGE. A scheme to reduce the central memory requirements of the program is also detailed. Both the multitasking and I/O routines referred to are specific to the Cray X-MP line of computers and its associated SSD (Solid-State Disk). Results are presented for a simulation of a two-stage rocket engine fuel pump turbine.

  3. Reliability and Validity of a Spanish Version of the Social Skills Rating System--Teacher Form

    ERIC Educational Resources Information Center

    Jurado, Michelle; Cumba-Aviles, Eduardo; Collazo, Luis C.; Matos, Maribel

    2006-01-01

    The aim of this study was to examine the psychometric properties of a Spanish version of the Social Skills Scale of the Social Skills Rating System-Teacher Form (SSRS-T) with a sample of children attending elementary schools in Puerto Rico (N = 357). The SSRS-T was developed for use with English-speaking children. Although translated, adapted, and…

  4. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  5. Development and reliability testing of a food store observation form. — Measures of the Food Environment

    Cancer.gov


  6. Reliability of equivalent sphere model in blood-forming organ dose estimation

    NASA Technical Reports Server (NTRS)

    Shinn, Judy L.; Wilson, John W.; Nealy, John E.

    1990-01-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  7. Reliability of equivalent sphere model in blood-forming organ dose estimation

    SciTech Connect

    Shinn, J.L.; Wilson, J.W.; Nealy, J.E.

    1990-04-01

    The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

  8. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design-space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
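
    A compact sketch of this surrogate workflow, with a cheap analytic function standing in for the CFD-computed aerodynamic force (the specific sampler and Kriging implementation are illustrative choices, not necessarily the author's tooling):

```python
# Surrogate workflow sketch: LHS fills the design space, a Gaussian
# process (Kriging) interpolates the expensive responses, and the fitted
# model then predicts new points cheaply, with uncertainty estimates.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_model(x):            # stand-in for a converged CFD run
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 2] * x[:, 3]

sampler = qmc.LatinHypercube(d=4, seed=0)     # 4 design DOF, as in the text
x_train = sampler.random(n=40)                # 40 mapped test sites
y_train = expensive_model(x_train)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                   normalize_y=True).fit(x_train, y_train)

x_new = sampler.random(n=5)                   # cheap surrogate predictions
y_pred, y_std = kriging.predict(x_new, return_std=True)
print(np.c_[y_pred, y_std])                   # estimate and uncertainty
```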

  9. Concurrent Validity and Reliability of the Kaufman Version of the McCarthy Scales Short Form for a Sample of Mexican-American Children.

    ERIC Educational Resources Information Center

    Valencia, Richard R.; Rankin, Richard J.

    1983-01-01

    The concurrent validity and reliability of Kaufman's short-form version of the McCarthy Scales of Children's Abilities were examined for a sample of 342 Mexican-American preschool and kindergarten age children. The results showed that generally the positive psychometric properties of the Kaufman short form were also noted for the children in this…

  10. Parallel-plate submicron gap formed by micromachined low-density pillars for near-field radiative heat transfer

    SciTech Connect

    Ito, Kota; Miura, Atsushi; Iizuka, Hideo; Toshiyoshi, Hiroshi

    2015-02-23

    Near-field radiative heat transfer has been a subject of great interest due to its applicability to thermal management and energy conversion. In this letter, a submicron gap between a pair of diced fused quartz substrates is formed by using micromachined low-density pillars to obtain both parallelism and small parasitic heat conduction. The gap uniformity is validated by optical interferometry at the four corners of the substrates. The heat flux across the gap is measured in a steady state and is no greater than twice the theoretically predicted radiative heat flux, which indicates that the parasitic heat conduction is suppressed to the level of the radiative heat transfer or less. The heat conduction through the pillars is modeled, and it is found to be limited by the thermal contact resistance between the pillar top and the opposing substrate surface. The methodology to form and evaluate the gap opens near-field radiative heat transfer to applications such as thermal rectification, thermal modulation, and thermophotovoltaics.

  11. Two-Repeat Human Telomeric d(TAGGGTTAGGGT) Sequence Forms Interconverting Parallel and Antiparallel G-Quadruplexes in Solution: Distinct Topologies, Thermodynamic Properties, and Folding/Unfolding Kinetics

    PubMed Central

    Patel, Dinshaw J.

    2015-01-01

    We demonstrate by NMR that the two-repeat human telomeric sequence d(TAGGGTTAGGGT) can form both parallel and antiparallel G-quadruplex structures in K+-containing solution. Both structures are dimeric G-quadruplexes involving three stacked G-tetrads. The sequence d(TAGGGUTAGGGT), containing a single thymine-to-uracil substitution at position 6, formed a predominantly parallel dimeric G-quadruplex with double-chain-reversal loops; the structure was symmetric, and all guanines were anti. Another modified sequence, d(UAGGGTBrUAGGGT), formed a predominantly antiparallel dimeric G-quadruplex with edgewise loops; the structure was asymmetric with six syn guanines and six anti guanines. The two structures can coexist and interconvert in solution. For the latter sequence, the antiparallel form is more favorable at low temperatures (<50 °C), while the parallel form is more favorable at higher temperatures; at temperatures lower than 40 °C, the antiparallel G-quadruplex folds faster but unfolds slower than the parallel G-quadruplex. PMID:14653736

  12. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216

    ERIC Educational Resources Information Center

    Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

  13. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  14. Validity and Reliability of the Turkish Form of Technology-Rich Outcome-Focused Learning Environment Inventory

    ERIC Educational Resources Information Center

    Cakir, Mustafa

    2011-01-01

    The purpose of the study was to investigate the reliability and validity of a Turkish adaptation of Technology-Rich Outcomes-Focused Learning Environment Inventory (TROFLEI) which was developed by Aldridge, Dorman, and Fraser. A sample of 985 students from 16 high schools (Grades 9-12) participated in the study. Translation process followed…

  15. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  16. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217

    ERIC Educational Resources Information Center

    Anderson, Daniel; Lai, Cheg-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

  17. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  18. An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

    2013-01-01

    The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. A total of 275 undergraduate students (114 female and 74 male) were administered the scale in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation was…

  19. The major G-quadruplex formed in the human BCL-2 proximal promoter adopts a parallel structure with a 13-nt loop in K+ solution.

    PubMed

    Agrawal, Prashansa; Lin, Clement; Mathad, Raveendra I; Carver, Megan; Yang, Danzhou

    2014-02-01

    The human BCL-2 gene contains a 39-bp GC-rich region upstream of the P1 promoter that has been shown to be critically involved in the regulation of BCL-2 gene expression. Inhibition of BCL-2 expression can decrease cellular proliferation and enhance the efficacy of chemotherapy. Here we report the major G-quadruplex formed in the Pu39 G-rich strand in this BCL-2 promoter region. The 1245G4 quadruplex adopts a parallel structure with one 13-nt and two 1-nt chain-reversal loops. The 1245G4 quadruplex involves four nonsuccessive G-runs, I, II, IV, V, unlike the previously reported bcl2 MidG4 quadruplex formed on the central four G-runs. The parallel 1245G4 quadruplex with the 13-nt loop, unexpectedly, appears to be more stable than the mixed parallel/antiparallel MidG4. Parallel-stranded structures with two 1-nt loops and one variable-length middle loop are found to be prevalent in the promoter G-quadruplexes; the variable middle loop is suggested to determine the specific overall structure and potential ligand recognition site. A limit of 7 nt in loop length is used in all quadruplex-predicting software. Thus, the formation and high stability of the 1245G4 quadruplex with a 13-nt loop is significant. The presence of two distinct interchangeable G-quadruplexes in the overlapping region of the BCL-2 promoter is intriguing, suggesting a novel mechanism for gene transcriptional regulation and ligand modulation. PMID:24450880

  20. Balancing the Need for Reliability and Time Efficiency: Short Forms of the Wechsler Adult Intelligence Scale-III

    ERIC Educational Resources Information Center

    Jeyakumar, Sharon L. E.; Warriner, Erin M.; Raval, Vaishali V.; Ahmad, Saadia A.

    2004-01-01

    Tables permitting the conversion of short-form composite scores to full-scale IQ estimates have been published for previous editions of the Wechsler Adult Intelligence Scale (WAIS). Equivalent tables are now needed for selected subtests of the WAIS-III. This article used Tellegen and Briggs's formulae to convert the sum of scaled scores for four…

  1. Reliability and structural integrity

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1976-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
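
    The inspection step of such a model is, at its core, a Bayes'-theorem update; below is a minimal discrete illustration with invented prior and probability-of-detection values (the paper's formulation is differential; this shows only the idea):

```python
# Discrete stand-in for the model's Bayesian step (numbers invented):
# an inspection with a given probability of detection (POD) updates the
# chance that a crack is present, given that nothing was found.
def post_inspection_crack_prob(prior, pod):
    """P(crack | no detection) via Bayes' theorem.
    prior: P(crack present) before inspection
    pod:   P(inspection finds the crack | crack present)"""
    missed = prior * (1.0 - pod)          # crack present but not found
    clean = 1.0 - prior                   # no crack to find
    return missed / (missed + clean)

prior, pod = 0.02, 0.90
p_after = post_inspection_crack_prob(prior, pod)
print(f"P(crack) drops from {prior:.3f} to {p_after:.4f} after inspection")
# Between inspections, crack growth would raise this prior again before
# the next inspection is applied, as in the model described above.
```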

  2. Female Genital Mutilation in Sierra Leone: Forms, Reliability of Reported Status, and Accuracy of Related Demographic and Health Survey Questions

    PubMed Central

    Grant, Donald S.; Berggren, Vanja

    2013-01-01

    Objective. To determine forms of female genital mutilation (FGM), assess consistency between self-reported and observed FGM status, and assess the accuracy of Demographic and Health Surveys (DHS) FGM questions in Sierra Leone. Methods. This cross-sectional study, conducted between October 2010 and April 2012, enrolled 558 females aged 12–47 from eleven antenatal clinics in northeast Sierra Leone. Data on demography, FGM status, and self-reported anatomical descriptions were collected. Genital inspection confirmed the occurrence and extent of cutting. Results. All participants reported FGM status; 4 refused genital inspection. Using the WHO classification of FGM, 31.7% had type Ib; 64.1% type IIb; and 4.2% type IIc. There was a high level of agreement between reported and observed FGM prevalence (81.2% and 81.4%, resp.). There was no correlation between DHS FGM responses and anatomic extent of cutting, as 2.7% reported pricking; 87.1% flesh removal; and 1.1% that genitalia was sewn closed. Conclusion. Types I and II are the main forms of FGM, with labia majora alterations in almost 5% of cases. Self-reports on FGM status could serve as a proxy measurement for FGM prevalence but not for FGM type. The DHS FGM questions are inaccurate for determining cutting extent. PMID:24204384

  3. Closed-form solution of mid-potential between two parallel charged plates with more extensive application

    NASA Astrophysics Data System (ADS)

    Shang, Xiang-Yu; Yang, Chen; Zhou, Guo-Qing

    2015-10-01

    Efficient calculation of the electrostatic interactions, including the repulsive force, between charged molecules in a biomolecule system or charged particles in a colloidal system is necessary for molecular-scale or particle-scale mechanical analyses of these systems. The electrostatic repulsive force depends on the mid-plane potential between two charged particles. Previous analytical solutions of the mid-plane potential, including those based on simplified assumptions and modern mathematical methods, are reviewed. It is shown that none of these solutions applies to wide ranges of inter-particle distance from 0 to 10 and surface potential from 1 to 10. Three previous analytical solutions are chosen to develop a semi-analytical solution which is proven to have more extensive applications. Furthermore, an empirical closed-form expression of the mid-plane potential is proposed based on plenty of numerical solutions. This empirical solution has extensive applications as well as high computational efficiency. Project supported by the National Key Basic Research Program of China (Grant No. 2012CB026103), the National Natural Science Foundation of China (Grant No. 51009136), and the Natural Science Foundation of Jiangsu Province, China (Grant No. BK2011212).
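
    A direct numerical solution of the one-dimensional nondimensional Poisson-Boltzmann equation provides the kind of benchmark such a closed-form expression is fitted against; the boundary values below are illustrative:

```python
# Numerical benchmark sketch: solve the scaled 1-D Poisson-Boltzmann
# equation y'' = sinh(y) between two identical plates and read off the
# mid-plane potential. Surface potential and half-gap are illustrative.
import numpy as np
from scipy.integrate import solve_bvp

y_surface, half_gap = 4.0, 1.0        # scaled surface potential, kappa*d/2

def ode(x, y):                        # y[0] = potential, y[1] = derivative
    return np.vstack([y[1], np.sinh(y[0])])

def bc(ya, yb):                       # symmetry at mid-plane, fixed at plate
    return np.array([ya[1], yb[0] - y_surface])

x = np.linspace(0.0, half_gap, 100)   # x = 0 is the mid-plane
y0 = np.vstack([y_surface * (x / half_gap) ** 2,          # initial guess
                2 * y_surface * x / half_gap ** 2])
sol = solve_bvp(ode, bc, x, y0)
print("converged:", sol.success, "mid-plane potential:", sol.y[0][0])
```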

  4. A new model of in vitro fungal biofilms formed on human nail fragments allows reliable testing of laser and light therapies against onychomycosis.

    PubMed

    Vila, Taissa Vieira Machado; Rozental, Sonia; de Sá Guimarães, Claudia Maria Duarte

    2015-04-01

    Onychomycoses represent approximately 50% of all nail diseases worldwide. In warmer and more humid countries like Brazil, the incidence of onychomycoses caused by non-dermatophyte molds (NDM, including Fusarium spp.) or yeasts (including Candida albicans) has been increasing. Traditional antifungal treatments used for the dermatophyte-borne disease are less effective against onychomycoses caused by NDM. Although some laser and light treatments have demonstrated clinical efficacy against onychomycosis, their US Food and Drug Administration (FDA) approval as "first-line" therapy is pending, partly due to the lack of well-demonstrated fungicidal activity in a reliable in vitro model. Here, we describe a reliable new in vitro model to determine the fungicidal activity of laser and light therapies against onychomycosis caused by Fusarium oxysporum and C. albicans. Biofilms formed in vitro on sterile human nail fragments were treated with a 1064-nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, 420-nm intense pulsed light (IPL) followed by Nd:YAG, or near-infrared light (NIR, 700-1400 nm). Laser and light antibiofilm effects were evaluated using a cell viability assay and scanning electron microscopy (SEM). All treatments were highly effective against C. albicans and F. oxysporum biofilms, resulting in decreases in cell viability of 45-60% for C. albicans and 92-100% for F. oxysporum. The model described here yielded fungicidal activities that matched more closely those observed in the clinic, when compared to published in vitro models for laser and light therapies. Thus, our model might represent an important tool for the initial testing, validation, and "fine-tuning" of laser and light therapies against onychomycosis. PMID:25471266

  5. The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version

    PubMed Central

    Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

    2015-01-01

    Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in the Iranian population. Methods: After translation and cultural adaptation of the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury living in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of the ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test the validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. To assess the reliability of the Persian ZBI-12, we re-administered it to 48 randomly selected caregiver spouses 3 days later. Results: Overall, the internal consistency of the questionnaire was strong (Cronbach's alpha 0.77). The intercorrelation between the different domains of the ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions in the Persian ZBI-12 correlate significantly with each other. In terms of validity, our results showed significant correlations between some domains of the Persian version of the Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P=0.03), General Health (P=0.034), Social Functioning (P=0.037), and Mental Health (P=0.023), and Q3 with Physical Functioning (P=0.001), Vitality (P=0.002), and Social Functioning (P=0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171
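
    For reference, the internal-consistency figure reported above (Cronbach's alpha) has a simple closed form; a minimal sketch with simulated item scores (not the study's data):

        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, k_items) matrix of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
            total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        rng = np.random.default_rng(0)
        trait = rng.normal(size=(100, 1))                      # one latent trait
        items = trait + rng.normal(scale=1.0, size=(100, 12))  # 12 noisy items (ZBI-12-like)
        print(cronbach_alpha(items))                           # around 0.9 for this setup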

  6. A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates

    ERIC Educational Resources Information Center

    Kim, Seonghoon

    2012-01-01

    Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…
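
    Definition (a) is the classical parallel-forms coefficient and is easy to estimate when both forms are administered; a minimal simulated sketch (true abilities plus independent form errors):

        import numpy as np

        rng = np.random.default_rng(1)
        theta = rng.normal(size=5000)                              # true abilities
        form_a = theta + rng.normal(scale=0.5, size=theta.size)    # estimate from form A
        form_b = theta + rng.normal(scale=0.5, size=theta.size)    # estimate from form B

        # Parallel-forms reliability: correlation of the two parallel estimates.
        r_parallel = np.corrcoef(form_a, form_b)[0, 1]
        print(r_parallel)   # ~ var(theta) / (var(theta) + 0.25) = 0.8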

  7. Improved techniques of parallel gap welding and monitoring

    NASA Technical Reports Server (NTRS)

    Mardesich, N.; Gillanders, M. S.

    1984-01-01

    Welding programs which show that parallel gap welding is a reliable process are discussed. When monitoring controls and nondestructive tests are incorporated into the process, parallel gap welding becomes more reliable and cost effective. The panel fabrication techniques and the HAC thermal cycling test indicate reliable product integrity. The design and building of automated tooling and fixturing for welding are discussed.

  8. Parallel rendering

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1995-01-01

    This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.
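
    As a concrete taste of the image-assembly problem discussed in the survey, a sort-last style depth composite of two partial renderings can be written in a few lines (illustrative sketch, not from the article):

        import numpy as np

        def depth_composite(color_a, z_a, color_b, z_b):
            """Merge two partial renderings: keep the pixel closer to the viewer."""
            nearer_a = (z_a < z_b)[..., None]   # broadcast the depth test over RGB channels
            return np.where(nearer_a, color_a, color_b)

        h, w = 4, 4
        rng = np.random.default_rng(2)
        img1, img2 = rng.random((h, w, 3)), rng.random((h, w, 3))
        z1, z2 = rng.random((h, w)), rng.random((h, w))
        print(depth_composite(img1, z1, img2, z2).shape)   # (4, 4, 3)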

  9. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384-processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.
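
    The vector-quantization component of such a scheme maps each image block to its nearest codeword; a minimal serial sketch (hypothetical codebook; the MPP version distributes blocks across processors):

        import numpy as np

        def vq_encode(blocks, codebook):
            """blocks: (n, d) flattened image blocks; codebook: (k, d) codewords.
            Returns the index of the nearest codeword for each block."""
            # squared Euclidean distance from every block to every codeword
            d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            return d2.argmin(axis=1)

        rng = np.random.default_rng(3)
        blocks = rng.random((1000, 16))        # 4x4 pixel blocks, flattened
        codebook = rng.random((64, 16))        # 64-entry codebook
        indices = vq_encode(blocks, codebook)  # transmit 6 bits per block instead of 16 pixels
        print(indices[:10])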

  10. Reliability training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (editor); Malec, Henry A. (editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

    1992-01-01

    Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

  11. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulations studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

  12. Parallel and Multilevel Algorithms for Computational Partial Differential Equations

    E-print Network

    Jimack, Peter

    The efficient and reliable solution of partial differential equations (PDEs) plays … that arises when the finite element mesh is adapted. Keywords: Partial Differential Equations, Parallel…

  13. Electricity Reliability

    E-print Network

    Post, Wilfred M.

    Electricity Delivery and Energy Reliability: High Temperature Superconductivity (HTS). … because they have virtually no resistance to electric current, offering the possibility of new electric power equipment with more energy efficiency and higher capacity than today's systems.

  14. Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach

    ERIC Educational Resources Information Center

    Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

    2012-01-01

    Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…
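
    A toy illustration of the general sampling-and-classification idea (not the paper's Cell Only or Cell and Cube procedures): classify bank items into cells, then draw one item per cell for each form so the forms share the same blueprint:

        import random
        from collections import defaultdict

        def assemble_parallel_forms(items, n_forms, seed=0):
            """items: list of (item_id, content_area, difficulty_bin) tuples.
            Draws one item per (content, difficulty) cell for each form, without reuse."""
            rng = random.Random(seed)
            cells = defaultdict(list)
            for item_id, content, diff in items:
                cells[(content, diff)].append(item_id)
            forms = [[] for _ in range(n_forms)]
            for pool in cells.values():
                picks = rng.sample(pool, n_forms)   # assumes each cell has enough items
                for form, item_id in zip(forms, picks):
                    form.append(item_id)
            return forms

        bank = [(i, i % 3, (i // 3) % 4) for i in range(540)]   # 540-item bank, 12 cells
        forms = assemble_parallel_forms(bank, n_forms=2)
        print(len(forms[0]), len(forms[1]))   # 12 12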

  15. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  16. Examining the reliability and validity of a modified version of the International Physical Activity Questionnaire, long form (IPAQ-LF) in Nigeria: a cross-sectional study

    PubMed Central

    Oyeyemi, Adewale L; Bello, Umar M; Philemon, Saratu T; Aliyu, Habeeb N; Majidadi, Rebecca W; Oyeyemi, Adetoyeje Y

    2014-01-01

    Objectives To investigate the reliability and an aspect of validity of a modified version of the long International Physical Activity Questionnaire (Hausa IPAQ-LF) in Nigeria. Design Cross-sectional study, examining the reliability and construct validity of the Hausa IPAQ-LF compared with anthropometric and biological variables. Setting Metropolitan Maiduguri, the capital city of Borno State in Nigeria. Participants 180 Nigerian adults (50% women) with a mean age of 35.6 (SD=10.3) years, recruited from neighbourhoods with diverse socioeconomic status and walkability. Outcome measures Domains (domestic physical activity (PA), occupational PA, leisure-time PA, active transportation and sitting time) and intensities of PA (vigorous, moderate and walking) were measured with the Hausa IPAQ-LF on two different occasions, 8 days apart. Outcomes for construct validity were measured body mass index (BMI), systolic blood pressure (SBP) and diastolic blood pressure (DBP). Results The Hausa IPAQ-LF demonstrated good test–retest reliability (intraclass correlation coefficient, ICC>0.75) for total PA (ICC=0.79, 95% CI 0.65 to 0.82), occupational PA (ICC=0.77, 95% CI 0.68 to 0.82), active transportation (ICC=0.82, 95% CI 0.75 to 0.87) and vigorous-intensity activities (ICC=0.82, 95% CI 0.76 to 0.87). Reliability was substantially higher for total PA (ICC=0.80), occupational PA (ICC=0.78), leisure-time PA (ICC=0.75) and active transportation (ICC=0.80) in men than in women, but domestic PA (ICC=0.38) and sitting time (ICC=0.71) demonstrated more substantial reliability coefficients in women than in men. For construct validity, domestic PA was significantly related mainly with SBP (r=-0.27) and DBP (r=-0.17), and leisure-time PA and total PA were significantly related only with SBP (r=-0.16) and BMI (r=-0.29), respectively. Similarly, moderate-intensity PA was mainly related with SBP (r=-0.16, p<0.05) and DBP (r=-0.21, p<0.01), but vigorous-intensity PA was only related with BMI (r=-0.11, p<0.05). Conclusions The modified Hausa IPAQ-LF demonstrated sufficient evidence of test–retest reliability and may be valid for assessing context-specific PA behaviours of adults in Nigeria. PMID:25448626

  17. Massively parallel visualization: Parallel rendering

    SciTech Connect

    Hansen, C.D.; Krogh, M.; White, W.

    1995-12-01

    This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  18. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPNs) was studied. It was recognized that complex system development tools often transform system descriptions into TPNs or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPNs be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of automatically parallelizing TPNs for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold. First, it was shown that Monte Carlo simulation with importance sampling offers promise of joint analysis in the context of a single tool. Second, methods were developed for the parallel simulation of general Continuous-Time Markov Chains, a model framework within which joint performance/reliability models can be cast. However, much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  19. Parallel pipelining

    SciTech Connect

    Joseph, D.D.; Bai, R.; Liao, T.Y.; Huang, A.; Hu, H.H.

    1995-09-01

    In this paper the authors introduce the idea of parallel pipelining for water-lubricated transportation of oil (or other viscous material). A parallel system can have major advantages over a single pipe with respect to the cost of maintenance and continuous operation of the system, to the pressure gradients required to restart a stopped system, and to the reduction and even elimination of the fouling of pipe walls in continuous operation. The authors show that the action of capillarity in small pipes is more favorable for restart than in large pipes. In a parallel pipeline system, they estimate the number of small pipes needed to deliver the same oil flux as in one larger pipe as N = (R/r)^α, where r and R are the radii of the small and large pipes, respectively, and α = 4 or 19/7 when the lubricating water flow is laminar or turbulent.
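
    The pipe-count estimate is a one-line computation; a sketch with example radii:

        def n_small_pipes(R, r, laminar=True):
            """Number of small pipes of radius r delivering the flux of one pipe of radius R."""
            alpha = 4 if laminar else 19 / 7   # exponent for laminar vs. turbulent lubrication
            return (R / r) ** alpha

        print(n_small_pipes(R=0.5, r=0.1))                  # laminar: 5**4 = 625 pipes
        print(n_small_pipes(R=0.5, r=0.1, laminar=False))   # turbulent: 5**(19/7) ~ 79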

  20. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  1. Parallelism in System Tools

    SciTech Connect

    Matney, Sr., Kenneth D; Shipman, Galen M

    2010-01-01

    The Cray XT, when employed in conjunction with the Lustre filesystem, has provided the ability to generate huge amounts of data in the form of many files. Typically, this is accommodated by satisfying the requests of large numbers of Lustre clients in parallel. In contrast, a single service node (Lustre client) cannot adequately service such datasets. This means that the use of traditional UNIX tools like cp, tar, et al. (which have no parallel capability) can result in a substantial impact to user productivity. For example, to copy a 10 TB dataset from the service node using cp would take about 24 hours under more or less ideal conditions. During production operation, this could easily extend to 36 hours. In this paper, we introduce the Lustre User Toolkit for Cray XT, developed at the Oak Ridge Leadership Computing Facility (OLCF). We will show that Linux commands implementing highly parallel I/O algorithms provide orders of magnitude greater performance, greatly reducing the impact to productivity.
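
    The OLCF toolkit itself is not shown here; as a generic illustration of the idea, copying many independent files with a pool of workers rather than a single process can be sketched with the standard library (hypothetical paths, not the toolkit's commands):

        import shutil
        from concurrent.futures import ProcessPoolExecutor
        from pathlib import Path

        def parallel_copy(src_dir, dst_dir, workers=8):
            """Copy every file under src_dir to dst_dir using a pool of workers."""
            src_dir, dst_dir = Path(src_dir), Path(dst_dir)
            files = [p for p in src_dir.rglob("*") if p.is_file()]
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for f in files:
                    target = dst_dir / f.relative_to(src_dir)
                    target.parent.mkdir(parents=True, exist_ok=True)
                    pool.submit(shutil.copy2, f, target)   # workers copy concurrently

        # parallel_copy("/lustre/scratch/dataset", "/lustre/scratch/copy")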

  2. Assessing the Discriminant Ability, Reliability, and Comparability of Multiple Short Forms of the Boston Naming Test in an Alzheimer’s Disease Center Cohort

    PubMed Central

    Katsumata, Yuriko; Mathews, Melissa; Abner, Erin L.; Jicha, Gregory A.; Caban-Holt, Allison; Smith, Charles D.; Nelson, Peter T.; Kryscio, Richard J.; Schmitt, Frederick A.; Fardo, David W.

    2015-01-01

    Background The Boston Naming Test (BNT) is a commonly used neuropsychological test of confrontation naming that aids in determining the presence and severity of dysnomia. Many short versions of the original 60-item test have been developed and are routinely administered in clinical/research settings. Because of the common need to translate similar measures within and across studies, it is important to evaluate the operating characteristics and agreement of different BNT versions. Methods We analyzed longitudinal data of research volunteers (n = 681) from the University of Kentucky Alzheimer’s Disease Center longitudinal cohort. Conclusions With the notable exception of the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) 15-item BNT, short forms were internally consistent and highly correlated with the full version; these measures varied by diagnosis and generally improved from normal to mild cognitive impairment (MCI) to dementia. All short forms retained the ability to discriminate between normal subjects and those with dementia. The ability to discriminate between normal and MCI subjects was less strong for the short forms than the full BNT, but they exhibited similar patterns. These results have important implications for researchers designing longitudinal studies, who must consider that the statistical properties of even closely related test forms may be quite different. PMID:25613081

  3. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L.

    1990-01-01

    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems, along with the initial cost and efficiency of modules, is exceedingly important if PV technology is to make a major impact in the power generation market and compete with conventional electricity-producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties on commercial modules of as long as 12 years. However, substantial research and testing are still required to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  4. Parallel Anisotropic Tetrahedral Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Darmofal, David L.

    2008-01-01

    An adaptive method that robustly produces high-aspect-ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1,000:1 stretching on a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

  5. Results from the translation and adaptation of the Iranian Short-Form McGill Pain Questionnaire (I-SF-MPQ): preliminary evidence of its reliability, construct validity and sensitivity in an Iranian pain population

    PubMed Central

    2011-01-01

    Background The Short Form McGill Pain Questionnaire (SF-MPQ) is one of the most widely used instruments to assess pain. The aim of this study was to translate and culturally adapt the questionnaire for Farsi (the official language of Iran) speakers in order to test its reliability and sensitivity. Methods We followed Guillemin's guidelines for cross-cultural adaptation of health-related measures, which include forward-backward translations, expert committee meetings, and face validity testing in a pilot group. Subsequently, the questionnaire was administered to a sample of 100 diverse chronic pain patients attending a tertiary pain and rehabilitation clinic. In order to evaluate test-retest reliability, patients completed the questionnaire in the morning and early evening of their first visit. Finally, patients were asked to complete the questionnaire for the third time after completing a standardized treatment protocol three weeks later. The intraclass correlation coefficient (ICC) was used to evaluate reliability. We used principal component analysis to assess construct validity. Results Ninety-two subjects completed the questionnaire both in the morning and in the evening of the first visit (test-retest reliability), and after three weeks (sensitivity to change). Eight patients who did not finish the treatment protocol were excluded from the study. Internal consistency was found by Cronbach's alpha to be 0.951, 0.832, and 0.840 for sensory, affective, and total scores, respectively. The ICC was 0.906 for sensory, 0.712 for affective, and 0.912 for total pain scores. Item-to-subscale score correlations supported the convergent validity of each item to its hypothesized subscale. Correlations were observed to range from r² = 0.202 to r² = 0.739. Sensitivity or responsiveness was evaluated by a paired t-test, which exhibited a significant difference between pre- and post-treatment scores (p < 0.001). Conclusion The results of this study indicate that the Iranian version of the SF-MPQ is a reliable questionnaire and responsive to changes in the subscale and total pain scores in Persian chronic pain patients over time. PMID:22074591

  6. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S; Sundar, Hari; Veerapaneni, Shravan

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N{sup 2}) time. The parallel time complexity estimates for our algorithms are O(N/n{sub p}) for uniform point distributions and O( (N/n{sub p}) log (N/n{sub p}) + n{sub p}log n{sub p}) for non-uniform distributions using n{sub p} CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
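
    The O(N²) direct sum that the fast transform accelerates is a handy correctness reference; a minimal serial sketch (illustrative point counts):

        import numpy as np

        def direct_gauss_transform(sources, targets, weights, delta):
            """Sum of weighted Gaussians: G(t_j) = sum_i w_i * exp(-|t_j - s_i|^2 / delta)."""
            d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-d2 / delta) @ weights

        rng = np.random.default_rng(4)
        pts = rng.random((500, 3))
        w = rng.random(500)
        print(direct_gauss_transform(pts, pts, w, delta=0.1)[:3])   # O(N^2) time and memory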

  7. Parallel hierarchical radiosity rendering

    SciTech Connect

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.

  8. The short form of the fear survey schedule for children-revised (FSSC-R-SF): an efficient, reliable, and valid scale for measuring fear in children and adolescents.

    PubMed

    Muris, Peter; Ollendick, Thomas H; Roelofs, Jeffrey; Austin, Kristin

    2014-12-01

    The present study examined the psychometric properties of the Short Form of the Fear Survey Schedule for Children-Revised (FSSC-R-SF) in non-clinical and clinically referred children and adolescents from the Netherlands and the United States. Exploratory as well as confirmatory factor analyses of the FSSC-R-SF yielded support for the hypothesized five-factor structure representing fears in the domains of (1) failure and criticism, (2) the unknown, (3) animals, (4) danger and death, and (5) medical affairs. The FSSC-R-SF showed satisfactory reliability and was capable of assessing gender and age differences in youths' fears and fearfulness that have been documented in previous research. Further, the convergent validity of the scale was good as shown by substantial and meaningful correlations with the full-length FSSC-R and alternative childhood anxiety measures. Finally, support was found for the discriminant validity of the scale. That is, clinically referred children and adolescents exhibited higher scores on the FSSC-R-SF total scale and most subscales as compared to their non-clinical counterparts. Moreover, within the clinical sample, children and adolescents with a major anxiety disorder generally displayed higher FSSC-R-SF scores than youths without such a diagnosis. Altogether, these findings indicate that the FSSC-R-SF is a brief, reliable, and valid scale for assessing fear sensitivities in children and adolescents. PMID:25445086

  9. Reliability of an Adapted Version of the Modified Six Elements Test as a Measure of Executive Function.

    PubMed

    Bertens, Dirk; Fasotti, Luciano; Egger, Jos I M; Boelen, Danielle H E; Kessels, Roy P C

    2016-01-01

    The Modified Six Elements Test (MSET) is used to examine executive deficits, more specifically planning deficits. This study investigates the reliability of an adapted version of the MSET and proposes a novel scoring method. Two parallel versions of the adapted MSET were administered to 60 healthy participants in a counterbalanced order. Test-retest and parallel-form reliability were examined using intraclass correlation coefficients, Bland-Altman analyses, standard errors of measurement, and smallest real differences, representing clinically relevant changes over time. Moreover, the ecological validity of the adapted MSET was evaluated using the Executive Function Index, a self-rating questionnaire measuring everyday executive performance. No systematic differences between the test occasions were present, and the adapted MSET, including the proposed scoring method, was capable of detecting real clinical changes. Intraclass correlations for test-retest and parallel-form reliability were modest, and the variability between the test scores was high. The nonsignificant correlations with the Executive Function Index did not confirm the previously established ecological validity of the MSET. We show that both parallel versions of the test are clinically equivalent and can be used to measure executive function over time without task-specific learning effects. PMID:26111243
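
    Intraclass correlations of the kind reported here can be computed from two-way ANOVA mean squares; a minimal ICC(2,1) sketch on simulated test-retest data (not the study's):

        import numpy as np

        def icc_2_1(x):
            """ICC(2,1) for an (n_subjects, k_occasions) score matrix (two-way random effects)."""
            x = np.asarray(x, dtype=float)
            n, k = x.shape
            grand = x.mean()
            ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
            ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # occasions
            resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
            ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

        rng = np.random.default_rng(5)
        true = rng.normal(10, 2, size=(60, 1))             # stable individual differences
        scores = true + rng.normal(0, 1, size=(60, 2))     # two occasions with noise
        print(icc_2_1(scores))                             # around 0.8 for this setup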

  10. Short-term reliability of a brief hazard perception test.

    PubMed

    Scialfa, Charles T; Pereverseff, Rosemary S; Borkenhagen, David

    2014-12-01

    Hazard perception tests (HPTs) have been successfully implemented in some countries as a part of the driver licensing process and, while their validity has been evaluated, their short-term stability is unknown. This study examined the short-term reliability of a brief, dynamic version of the HPT. Fifty-five young adults (mean age 21 years) with at least two years of post-licensing driving experience completed parallel, 21-scene HPTs with a one-month interval separating each test. Minimal practice effects (≈0.1 s) were manifested. Internal consistency (Cronbach's alpha) averaged 0.73 for the two forms. The correlation between the two tests was 0.55 (p<0.001), and correcting for lack of reliability increased the correlation to 0.72. Thus, a brief form of the HPT demonstrates acceptable short-term reliability in drivers whose hazard perception should be stable, an important feature for implementation and consumer acceptance. One implication of these results is that valid HPT scores should predict future crash risk, a desirable property for user acceptance of such tests. However, short-term stability should be assessed over longer periods and in other driver groups, particularly novices and older adults, in whom inter-individual differences in the development of hazard perception skill may render HPT tests unstable, even over short intervals. PMID:25173997
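
    The "correcting for lack of reliability" step is the classical correction for attenuation; a sketch (the per-form reliabilities below are illustrative values near the reported average of 0.73, not the study's exact figures):

        def disattenuate(r_xy, rel_x, rel_y):
            """Correlation corrected for the unreliability of both measures."""
            return r_xy / (rel_x * rel_y) ** 0.5

        # With an observed between-forms correlation of 0.55 and form reliabilities
        # near the reported average, the corrected estimate lands near 0.72.
        print(disattenuate(0.55, 0.73, 0.79))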

  11. Parallel expression of alternate forms of psbA2 gene provides evidence for the existence of a targeted D1 repair mechanism in Synechocystis sp. PCC 6803.

    PubMed

    Nagarajan, Aparna; Burnap, Robert L

    2014-09-01

    The D1 protein of Photosystem II (PSII) is recognized as the main target of photoinhibitory damage and exhibits a high turnover rate due to its degradation and replacement during the PSII repair cycle. Damaged D1 is replaced by newly synthesized D1; although selective replacement is a reasonable assumption, there is no direct evidence for it. Instead, it remains possible that increased turnover of D1 subunits occurs in a non-selective manner due, for example, to a general up-regulation of proteolytic activity triggered during damaging environmental conditions, such as high light. To determine if D1 degradation is targeted to damaged D1 or generalized to all D1, we developed a genetic system involving simultaneous dual expression of wild-type and mutant versions of the D1 protein. Dual D1 strains (nS345P:eWT and nD170A:eWT) expressed a wild-type (WT) D1 from an ectopic locus and a damage-prone mutant (D1-S345P, D1-D170A) from the native locus on the chromosome. Characterization showed that all dual D1 strains restore a WT-like phenotype with high PSII activity. Higher PSII activity indicates an increased population of PSII reaction centers with WT D1. Analysis of steady-state levels of D1 in nS345P:eWT by immunoblot showed an accumulation of WT D1 only. However, in vivo pulse labeling confirmed the synthesis of both S345P (which exists as iD1) and WT D1 in the dual strain. Expression of nS345P:eWT in an FtsH2 knockout background showed accumulation of both iD1 and D1 proteins. This demonstrates that dual D1 strains express both forms of D1, yet only damage-prone PSII complexes are selected for repair, providing evidence that the D1 degradation process is targeted towards damaged PSII complexes. Since the N-terminus has been previously shown to be important for the degradation of damaged D1, the possibility that the highly conserved cysteine 18 residue situated in the N-terminal domain of D1 is involved in the targeted repair process was tested by examining site-directed mutants of this and the other cysteines of the D1 protein. This article is part of a special issue entitled: photosynthesis research for sustainability: keys to produce clean energy. PMID:24582662

  12. Parallelization for reaction

    E-print Network

    Louvet, Violaine

    Parallelization strategies for multi-scale reaction waves with complex chemistry (presentation outline: Context; Application; Background; Numerical Results; Conclusions and Perspectives). … for Engineering - Paraguay 2010.

  13. Calculating system reliability with SRFYDO

    SciTech Connect

    Morzinski, Jerome; Anderson - Cook, Christine M; Klamann, Richard M

    2010-01-01

    SRFYDO is a process for estimating reliability of complex systems. Using information from all applicable sources, including full-system (flight) data, component test data, and expert (engineering) judgment, SRFYDO produces reliability estimates and predictions. It is appropriate for series systems with possibly several versions of the system which share some common components. It models reliability as a function of age and up to 2 other lifecycle (usage) covariates. Initial output from its Exploratory Data Analysis mode consists of plots and numerical summaries so that the user can check data entry and model assumptions, and help determine a final form for the system model. The System Reliability mode runs a complete reliability calculation using Bayesian methodology. This mode produces results that estimate reliability at the component, sub-system, and system level. The results include estimates of uncertainty, and can predict reliability at some not-too-distant time in the future. This paper presents an overview of the underlying statistical model for the analysis, discusses model assumptions, and demonstrates usage of SRFYDO.
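
    For the series systems SRFYDO targets, the elementary combination rule is multiplicative; a minimal sketch with illustrative component reliabilities (SRFYDO's Bayesian treatment of uncertainty, age, and usage covariates is much richer):

        import numpy as np

        def series_reliability(component_rels):
            """A series system works only if all components work (independent components)."""
            return float(np.prod(component_rels))

        print(series_reliability([0.99, 0.995, 0.97, 0.999]))   # ~0.95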

  14. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (inventor); Sayah, Hoshyar R. (inventor)

    1988-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.
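
    The acceptance logic described can be sketched directly (hypothetical resistance and length values):

        def serpentine_check(r_measured, r_per_unit_len, design_length, max_excess_pct):
            """Infer conductor length from its end-to-end resistance and flag step-coverage
            problems when it exceeds the design length by more than the allowed percentage."""
            measured_length = r_measured / r_per_unit_len   # ohms / (ohms per unit length)
            excess_pct = 100.0 * (measured_length - design_length) / design_length
            return excess_pct, excess_pct <= max_excess_pct

        # r_per_unit_len would come from the flat test conductors on the same wafer.
        print(serpentine_check(r_measured=5400.0, r_per_unit_len=0.5,
                               design_length=10000.0, max_excess_pct=5.0))  # (8.0, False)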

  15. Integrated circuit reliability testing

    NASA Technical Reports Server (NTRS)

    Buehler, Martin G. (Inventor); Sayah, Hoshyar R. (Inventor)

    1990-01-01

    A technique is described for use in determining the reliability of microscopic conductors deposited on an uneven surface of an integrated circuit device. A wafer containing integrated circuit chips is formed with a test area having regions of different heights. At the time the conductors are formed on the chip areas of the wafer, an elongated serpentine assay conductor is deposited on the test area so the assay conductor extends over multiple steps between regions of different heights. Also, a first test conductor is deposited in the test area upon a uniform region of first height, and a second test conductor is deposited in the test area upon a uniform region of second height. The occurrence of high resistances at the steps between regions of different height is indicated by deriving the measured length of the serpentine conductor using the resistance measured between the ends of the serpentine conductor, and comparing that to the design length of the serpentine conductor. The percentage by which the measured length exceeds the design length, at which the integrated circuit will be discarded, depends on the required reliability of the integrated circuit.

  16. Algorithmically Specialized Parallel Architecture For Robotics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Computing system called Robot Mathematics Processor (RMP) contains large number of processor elements (PE's) connected in various parallel and serial combinations reconfigurable via software. Special-purpose architecture designed for solving diverse computational problems in robot control, simulation, trajectory generation, workspace analysis, and like. System an MIMD-SIMD parallel architecture capable of exploiting parallelism in different forms and at several computational levels. Major advantage lies in design of cells, which provides flexibility and reconfigurability superior to previous SIMD processors.

  17. Parallel processing of natural language

    SciTech Connect

    Chang, H.O.

    1986-01-01

    Two types of parallel natural language processing are studied in this work: (1) the parallelism between syntactic and nonsyntactic processing and (2) the parallelism within syntactic processing. It is recognized that a syntactic category can potentially be attached to more than one node in the syntactic tree of a sentence. Even if all the attachments are syntactically well-formed, nonsyntactic factors such as semantic and pragmatic considerations may require one particular attachment. Syntactic processing must synchronize and communicate with nonsyntactic processing. Two syntactic processing algorithms are proposed for use in a parallel environment: Earley's algorithm and the LR(k) algorithm. Conditions are identified to detect syntactic ambiguity, and the algorithms are augmented accordingly. It is shown that by using nonsyntactic information during syntactic processing, backtracking can be reduced and the performance of the syntactic processor improved. For the second type of parallelism, it is recognized that one portion of a grammar can be isolated from the rest of the grammar and processed by a separate processor. A partial grammar of a larger grammar is defined. Parallel syntactic processing is achieved by using two processors concurrently: the main processor (mp) and the auxiliary processor (ap).

  18. Wind turbine reliability database update.

    SciTech Connect

    Peters, Valerie A.; Hill, Roger Ray; Stinebaugh, Jennifer A.; Veers, Paul S.

    2009-03-01

    This report documents the status of the Sandia National Laboratories' Wind Plant Reliability Database. Included in this report are updates on the form and contents of the Database, which stems from a five-step process of data partnerships, data definition and transfer, data formatting and normalization, analysis, and reporting. Selected observations are also reported.

  19. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  20. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  1. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference with lower implementation complexity than the maximum-likelihood technique. The scheme exploits the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel, in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The one-stage interference cancellation is analyzed for three types of tentative decision devices, namely hard, null-zone, and soft decision, and two types of user power distribution, namely equal and unequal powers. Simulation results are given for a multitude of different situations, in particular those cases for which the analysis is too complex.
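
    A one-stage, hard-decision version of the cancellation idea can be sketched for a toy synchronous CDMA system (hypothetical spreading codes and noise level; the report also analyzes null-zone and soft tentative decisions):

        import numpy as np

        rng = np.random.default_rng(6)
        K, L = 4, 32                                         # users, spreading-code length
        codes = rng.choice([-1.0, 1.0], size=(K, L)) / np.sqrt(L)   # unit-energy codes
        bits = rng.choice([-1.0, 1.0], size=K)
        r = bits @ codes + rng.normal(scale=0.3, size=L)     # received chip vector

        y = codes @ r                                        # matched-filter outputs
        tentative = np.sign(y)                               # hard tentative decisions
        final = np.empty(K)
        for k in range(K):
            # subtract, for each user in parallel, the interference reconstructed
            # from all other users' tentative decisions
            interference = sum(tentative[j] * codes[j] for j in range(K) if j != k)
            final[k] = np.sign(codes[k] @ (r - interference))
        print(bits, final)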

  2. A Parallel Symbolic-Numerical Approach to Algebraic Curve Plotting

    E-print Network

    Christian Mittermaier (http://www.risc.uni-linz.ac.at). Abstract: We describe a parallel hybrid symbolic-numerical solution to the problem of reliably plotting … modern computer algebra systems provide functions for plotting and visualizing the real affine part …

  3. Parallel rendering techniques for massively parallel visualization

    SciTech Connect

    Hansen, C.; Krogh, M.; Painter, J.

    1995-07-01

    As the resolution of simulation models increases, scientific visualization algorithms which take advantage of the large memory and parallelism of Massively Parallel Processors (MPPs) are becoming increasingly important. For large applications, rendering on the MPP tends to be preferable to rendering on a graphics workstation due to the MPP's abundant resources: memory, disk, and numerous processors. The challenge becomes developing algorithms that can exploit these resources while minimizing overhead, typically communication costs. This paper will describe recent efforts in parallel rendering for polygonal primitives as well as parallel volumetric techniques. This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data-parallel approach whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.

  4. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  5. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation mechanism for distributed-memory parallel computer architectures; previously, only shared-memory parallel support for this technique had been developed. Attribute evaluation is part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed-memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large-scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify how data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  6. Design considerations for parallel graphics libraries

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1994-01-01

    Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

  7. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed. PMID:20509047

  8. Low-power approaches for parallel, free-space photonic interconnects

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; WSarren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

    1995-12-31

    Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3 V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

  9. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

    SciTech Connect

    Dominguez, A.; Siana, B.; Masters, D.; Henry, A. L.; Martin, C. L.; Scarlata, C.; Bedregal, A. G.; Malkan, M.; Ross, N. R.; Atek, H.; Colbert, J. W.; Teplitz, H. I.; Rafelski, M.; McCarthy, P.; Hathi, N. P.; Dressler, A.; Bunker, A.

    2013-02-15

    Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_Sun), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.

  10. Parallel integrated frame synchronizer chip

    NASA Technical Reports Server (NTRS)

    Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

    2000-01-01

    A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

  11. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
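
    A toy two-dimensional analogue of this block hierarchy (the real package is Fortran 90; all names below are illustrative) shows how refinement grows a quad-tree whose leaf blocks carry the logically Cartesian meshes:

```python
from dataclasses import dataclass, field

# Illustrative 2D stand-in for the PARAMESH block hierarchy.
@dataclass
class Block:
    xmin: float; xmax: float; ymin: float; ymax: float
    level: int = 0
    children: list = field(default_factory=list)

    def refine(self):
        """Split this block into a 2x2 set of child blocks (quad-tree)."""
        xm, ym = (self.xmin + self.xmax) / 2, (self.ymin + self.ymax) / 2
        self.children = [
            Block(self.xmin, xm, self.ymin, ym, self.level + 1),
            Block(xm, self.xmax, self.ymin, ym, self.level + 1),
            Block(self.xmin, xm, ym, self.ymax, self.level + 1),
            Block(xm, self.xmax, ym, self.ymax, self.level + 1),
        ]

    def leaves(self):
        """Leaf blocks are the ones that carry logically Cartesian meshes."""
        if not self.children:
            return [self]
        return [leaf for c in self.children for leaf in c.leaves()]

# Refine the whole domain once, then refine one corner further.
root = Block(0.0, 1.0, 0.0, 1.0)
root.refine()
root.children[0].refine()
print(len(root.leaves()), "leaf blocks")  # 3 coarse + 4 fine = 7
```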

  12. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  13. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
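
    A minimal sketch of the fault-tree approach described above, assuming independent components; the gate structure and failure rates below are invented for illustration.

```python
import math

# Deriving system reliability from component reliability via a small
# fault tree (illustrative structure: redundant fans plus a controller).
def p_fail(rate_per_hour, hours):
    """Failure probability under a constant-failure-rate (exponential) model."""
    return 1.0 - math.exp(-rate_per_hour * hours)

def or_gate(*p):   # top event occurs if ANY input event occurs
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):  # event occurs only if ALL inputs occur (redundancy)
    q = 1.0
    for pi in p:
        q *= pi
    return q

hours = 8760  # one year of operation
fan_a = p_fail(2e-5, hours)
fan_b = p_fail(2e-5, hours)
controller = p_fail(5e-6, hours)

# Top event: the controller fails, OR both redundant fans fail.
top = or_gate(controller, and_gate(fan_a, fan_b))
print(f"System unreliability over one year: {top:.4f}")
```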

  14. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  15. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  16. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  17. Parallel Discrete Event Simulation

    NASA Astrophysics Data System (ADS)

    Kunz, Georg

    Ever since discrete event simulation was adopted by a large research community, simulation developers have attempted to draw benefits from executing a simulation on multiple processing units in parallel. Hence, a wide range of research has been conducted on Parallel Discrete Event Simulation (PDES). In this chapter we give an overview of the challenges and approaches of parallel simulation. Furthermore, we present a survey of the parallelization capabilities of the network simulators OMNeT++, ns-2, DSIM and JiST.

  18. Reliability model generator specification

    NASA Technical Reports Server (NTRS)

    Cohen, Gerald C.; Mccann, Catherine

    1990-01-01

    The Reliability Model Generator (RMG) is described: a program that produces reliability models from block diagrams for ASSIST, the interface to the reliability evaluation tool SURE. An account is given of the motivation for RMG, and the implemented algorithms are discussed. The appendices contain the algorithms and two detailed example traces.

  19. Reliability in aposematic signaling

    PubMed Central

    2010-01-01

    In light of recent work, we will expand on the role and variability of aposematic signals. The focus of this review will be the concepts of reliability and honesty in aposematic signaling. We claim that reliable signaling can solve the problem of aposematic evolution, and that variability in reliability can shed light on the complexity of aposematic systems. PMID:20539774

  20. Boolean Circuit Programming: A New Paradigm to Design Parallel Algorithms

    E-print Network

    Ha, Soonhoi

    Abstract fragment: The Boolean circuit has been an important model of parallel computation [18, 19]. Uniform Boolean circuits have

  1. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, and the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics, and simulation methods for diamondoid structures. Inasmuch as it seems clear that the application of such methods in nanotechnology will require highly powerful computing systems, this talk discusses techniques and issues for performing these types of computations on parallel systems. We describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided design (NanoCAD) techniques; visualization, using virtual reality techniques, of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; and scalable numerical algorithms for reliability, verification, and testability. There appears to be no fundamental obstacle to simulating molecular compilers and molecular computers on high-performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  2. Research in parallel computing

    NASA Technical Reports Server (NTRS)

    Ortega, James M.; Henderson, Charles

    1994-01-01

    This report summarizes work on parallel computations for NASA Grant NAG-1-1529 for the period 1 Jan. - 30 June 1994. Short summaries on highly parallel preconditioners, target-specific parallel reductions, and simulation of delta-cache protocols are provided.

  3. Combinatorial Parallel and Scientific

    E-print Network

    Pinar, Ali

    Abstract fragment: Combinatorial algorithms have long played a pivotal enabling role in many applications of parallel computing, amid evolving computational techniques and rapidly changing computational platforms. This work examines the relationship between discrete algorithms and parallel computing. In addition to their traditional role

  4. Parallel simulation today

    NASA Technical Reports Server (NTRS)

    Nicol, David; Fujimoto, Richard

    1992-01-01

    This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

  5. User's guide to the Reliability Estimation System Testbed (REST)

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

    1992-01-01

    The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

  6. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  7. System and Software Reliability (C103)

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores

    2003-01-01

    Within the last decade, better reliability models (hardware, software, system) than those currently used have been theorized and developed, but not implemented in practice. Previous research on software reliability has shown that while some existing software reliability models are practical, they are not accurate enough. New paradigms of development (e.g., OO) have appeared, and associated reliability models have been proposed but not investigated. Hardware models have been extensively investigated but not integrated into a system framework. System reliability modeling is the weakest of the three. NASA engineers need better methods and tools to demonstrate that products meet NASA requirements for reliability measurement. The new software reliability models of the last decade need to be brought into a form in which they can be used on software-intensive systems. The Statistical Modeling and Estimation of Reliability Functions for Systems (SMERFS'3) tool is an existing vehicle that may be used to incorporate these new modeling advances. Adapting some existing software reliability modeling changes to accommodate major changes in software development technology may also show substantial improvement in prediction accuracy. With some additional research, the next step is to identify and investigate system reliability models, which could then be incorporated in a tool such as SMERFS'3. Such a tool, with better models, would greatly add value in assessing GSFC projects.

  8. Low-power, parallel photonic interconnections for Multi-Chip Module applications

    SciTech Connect

    Carson, R.F.; Lovejoy, M.L.; Lear, K.L.

    1994-12-31

    New applications of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs). Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. MCM-based applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated to date in photonic and optoelectronic technologies. The work described is a parallel link array designed for vertical (Z-axis) interconnection of the layers in an MCM-based signal processor stack, operating at a data rate of 100 Mb/s. This interconnect is based upon high-efficiency VCSELs, HBT photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques.

  9. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.
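
    As a hedged illustration of strategy (1), explicit message passing built into the source code, the sketch below uses the modern mpi4py bindings (which postdate this report); the work decomposition is invented.

```python
# Run with, e.g.: mpiexec -n 4 python sum_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Each rank computes a strided partial sum of 0..999; rank 0 combines them.
partial = sum(range(rank, 1000, size))
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)  # 499500, independent of the process count
```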

  10. Reliability quantification and visualization for electric microgrids

    NASA Astrophysics Data System (ADS)

    Panwar, Mayank

    The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, the Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects, located around the nation, were competitively selected. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop, and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system which has the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets; thus, its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid, and two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool to the operator for informative decision making and planning. Electronic appendices I and II contain data and MATLAB program codes for analysis and visualization for this work.
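
    As a hedged illustration of the kind of index calculation involved, the sketch below computes the two widely used distribution reliability indices SAIFI and SAIDI (defined in IEEE Std 1366, related in spirit to but distinct from the thesis's NERC metrics); the outage records are invented.

```python
# Illustrative distribution reliability indices from invented outage records.
outages = [
    # (customers_interrupted, minutes_out)
    (120, 45),
    (40, 180),
    (300, 10),
]
customers_served = 1000

# SAIFI: average number of sustained interruptions per customer served.
saifi = sum(c for c, _ in outages) / customers_served
# SAIDI: average outage duration (customer-minutes) per customer served.
saidi = sum(c * m for c, m in outages) / customers_served

print(f"SAIFI = {saifi:.2f} interruptions/customer/yr")
print(f"SAIDI = {saidi:.1f} minutes/customer/yr")
```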

  11. The STAPL Parallel Container Framework 

    E-print Network

    Tanase, Ilie Gabriel

    2012-02-14

    The Standard Template Adaptive Parallel Library (STAPL) is a parallel programming infrastructure that extends C++ with support for parallelism. STAPL provides a run-time system, a collection of distributed data structures (pContainers) and parallel...

  12. Making programmable BMS safe and reliable

    SciTech Connect

    Cusimano, J.A.

    1995-12-01

    Burner management systems ensure safe admission of fuel to the furnace and prevent explosions. This article describes how programmable control systems can be every bit as safe and reliable as hardwired or standard programmable logic controller-based designs. High-pressure boilers are required by regulatory agencies and insurance companies alike to be equipped with a burner management system (BMS) to ensure safe admission of fuel to the furnace and to prevent explosions. These systems work in parallel with, but independently of, the combustion and feedwater control systems that start up, monitor, and shut down burners and furnaces. Safety and reliability are the fundamental requirements of a BMS. Programmable control systems for BMS applications are now available that incorporate high safety and reliability into traditional microprocessor-based designs. With one of these control systems, a qualified systems engineer applying relevant standards, such as the National Fire Protection Assn (NFPA) 85 series, can design and implement a superior BMS.

  13. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations, treating the subject within the framework established for general systems theory. The treatment draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. It provides a history of human reliability analysis and includes examples of the application of the systems approach.

  14. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.
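
    A small sketch of the u-plot idea under stated assumptions: one-step-ahead predictive CDFs evaluated at the observed inter-failure times should look uniform on [0, 1] if the model is well calibrated. The exponential predictor below is deliberately biased, and all numbers are invented; the real u-plot and recalibration machinery is more refined than this.

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 10.0
observed = rng.exponential(true_mean, 200)     # "true" inter-failure times
predicted_mean = 15.0                          # model systematically optimistic

u = 1.0 - np.exp(-observed / predicted_mean)   # u_i = F_hat(t_i)

# Maximum vertical distance between the empirical CDF of the u's and the
# unit diagonal: a Kolmogorov-style summary of the u-plot's deviation.
u_sorted = np.sort(u)
ecdf = np.arange(1, len(u) + 1) / len(u)
print(f"u-plot distance: {np.max(np.abs(ecdf - u_sorted)):.3f}")

# Recalibration in spirit: map raw predictive probabilities through the
# empirical transform estimated from past u's (piecewise-linear here).
def recalibrate(p):
    return np.interp(p, u_sorted, ecdf)

raw = 1 - np.exp(-5 / predicted_mean)          # raw predicted P(T < 5)
print(f"raw = {raw:.3f}, recalibrated = {recalibrate(raw):.3f}")
```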

  15. Parallel digital forensics infrastructure.

    SciTech Connect

    Liebrock, Lorie M.; Duggan, David Patrick

    2009-10-01

    This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics.

  16. On mesh rezoning algorithms for parallel platforms

    SciTech Connect

    Plaskacz, E.J.

    1995-07-01

    A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms on the element and subdomain level, the exchange of the individual subdomain norms to form a subdomain distortion vector, the classification of subdomains and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.
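
    A schematic of the classification-and-response step, with an invented threshold and a simple ring standing in for the real subdomain neighbor graph; the list below stands in for the exchange of individual subdomain norms.

```python
# Toy version of the subdomain classification and rezoning response.
def classify(norms, threshold=1.0):
    """Classify each subdomain by its distortion norm."""
    return ["rezone" if n > threshold else "hold" for n in norms]

distortion_vector = [0.2, 1.4, 0.9, 1.1]   # stand-in for an all-gather exchange
classes = classify(distortion_vector)

for rank, cls in enumerate(classes):
    left = classes[rank - 1]
    right = classes[(rank + 1) % len(classes)]
    # A subdomain that holds still adjusts its boundary if a neighbor rezones.
    action = cls if cls == "rezone" else (
        "smooth-boundary" if "rezone" in (left, right) else "hold")
    print(f"subdomain {rank}: {action}")
```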

  17. Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples

    ERIC Educational Resources Information Center

    Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

    2005-01-01

    This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

  18. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
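
    A minimal numeric sketch of the constant-rate special case (uniform prior, invented test counts); the estimator discussed above, which handles time-varying failure rates, is more general than this.

```python
import numpy as np

# With a uniform Beta(1,1) prior on a component's reliability R and
# s successes observed in n go/no-go trials, the posterior is
# Beta(1+s, 1+n-s). Counts below are invented for illustration.
n, s = 20, 19
rng = np.random.default_rng(0)
draws = rng.beta(1 + s, 1 + n - s, 200_000)

print(f"posterior mean: {(1 + s) / (2 + n):.3f}")            # closed form
print(f"90% lower credible bound: {np.quantile(draws, 0.10):.3f}")
```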

  19. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
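
    A toy SENSE unfolding for acceleration factor R = 2 shows the core linear-algebra step: each aliased pixel is a coil-sensitivity-weighted sum of two true pixels, solved per coil set by least squares. The sensitivities and pixel values below are invented.

```python
import numpy as np

# Per-coil sensitivities at the two image locations that fold together.
S = np.array([[0.9, 0.2],    # coil 1
              [0.3, 0.8],    # coil 2
              [0.5, 0.5]])   # coil 3
x_true = np.array([1.0, 0.4])   # the two underlying pixel values
y = S @ x_true                   # aliased measurement seen by each coil

# SENSE-type unfolding: solve the small least-squares system per pixel pair.
x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
print(np.round(x_hat, 3))        # recovers [1.0, 0.4]
```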

  20. PCLIPS: Parallel CLIPS

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

    1994-01-01

    A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS, with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C^3I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases, such as one to set up a cafeteria. Results from running Parallel CLIPS with partitioned knowledge bases indicate that significant speed increases, including superlinear speedup in some cases, are possible.

  1. Hawaii electric system reliability.

    SciTech Connect

    Silva Monroy, Cesar Augusto; Loose, Verne William

    2012-09-01

    This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability "worth" and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

  2. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  3. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to achieving the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small, even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  4. Reliability measurement during software development

    NASA Technical Reports Server (NTRS)

    Hecht, H.; Sturm, W. A.; Trattner, S.

    1977-01-01

    Measurement of software reliability was carried out during the development of data base software for a multi-sensor tracking system. Every run made during this project was scored as success or failure, and supporting data were collected on forms for further analysis. The failure ratio (number of failures per calendar interval divided by total number of runs) and failure rate (number of failures divided by CPU time for the interval) were found to be consistent measures, on a month-to-month basis as well as from module to module, and therefore considered valid indicators of reliability in this environment. Trend lines could be established from these measurements that provide good visualization of the progress on the job as a whole as well as on individual modules. Over one-half of the observed failures were due to factors associated with the specific run submission rather than with the code proper.

  5. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (ESTSC)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that provides only minimal functionality for parallel tool integration, support for a small number of parallel architectures, and basic Fortran integration. Future versions will extend the functionality substantially, provide a number of core parallel tools, and provide support across a wide range of parallel architectures and languages.

  6. Totally parallel multilevel algorithms

    NASA Technical Reports Server (NTRS)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Parallel computing works

    SciTech Connect

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C^3P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C^3P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high-performance computing facility based exclusively on parallel computers. While the initial focus of C^3P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  8. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
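
    A small segmented Sieve of Eratosthenes with segments sieved by parallel worker processes gives the flavor of the approach; this is a toy analogue of the hypercube version, with arbitrary segment sizes and worker count.

```python
from multiprocessing import Pool
from math import isqrt

def base_primes(limit):
    """Serial sieve for the small primes up to sqrt(N)."""
    flags = bytearray([1]) * (limit + 1)
    flags[0:2] = b"\x00\x00"
    for p in range(2, isqrt(limit) + 1):
        if flags[p]:
            flags[p * p::p] = bytearray(len(flags[p * p::p]))
    return [i for i, f in enumerate(flags) if f]

def sieve_segment(args):
    """Mark composites in [lo, hi) using the shared base primes."""
    lo, hi, primes = args
    flags = bytearray([1]) * (hi - lo)
    for p in primes:
        start = max(p * p, (lo + p - 1) // p * p)
        flags[start - lo:hi - lo:p] = bytearray(len(range(start, hi, p)))
    return [lo + i for i, f in enumerate(flags) if f and lo + i > 1]

if __name__ == "__main__":
    N, workers = 10_000, 4
    primes = base_primes(isqrt(N))
    step = (N - 1) // workers + 1
    tasks = [(lo, min(lo + step, N + 1), primes)
             for lo in range(2, N + 1, step)]
    with Pool(workers) as pool:
        found = [p for chunk in pool.map(sieve_segment, tasks) for p in chunk]
    print(len(found), "primes up to", N)   # 1229
```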

  9. Non-Cartesian Parallel Imaging Reconstruction

    PubMed Central

    Wright, Katherine L.; Hamilton, Jesse I.; Griswold, Mark A.; Gulani, Vikas; Seiberlich, Nicole

    2014-01-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be employed to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the non-homogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian GRAPPA, and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  10. Science Grade 7, Long Form.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

    The Grade 7 Science course of study was prepared in two parallel forms: a short form, designed for students who have achieved a high measure of success in previous science courses, and the long form for those who have not been able to maintain the pace. Both forms contain similar content. The Grade 7 guide is the first in a three-year sequence for…

  11. Parallel nearest neighbor calculations

    NASA Astrophysics Data System (ADS)

    Trease, Harold

    We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.

  12. Ultra Reliability Workshop Introduction

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2006-01-01

    This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

  13. A New Approach to Parallel Interference Cancellation for CDMA

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Simon, Martin

    1996-01-01

    This paper introduces an improved nonlinear parallel interference cancellation scheme that significantly reduces the degrading effect of user interference, with implementation complexity linear in the number of users. The scheme exploits the fact that parallel processing simultaneously removes from each user a part of the interference produced by the remaining users accessing the channel, the amount being proportional to their reliability. The parallel processing can be done in multiple stages. Simulation results are given for a multitude of different situations, in particular those cases for which the analysis is too complex.
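
    A toy synchronous CDMA model with multistage parallel interference cancellation sketches the idea; the spreading codes and the tanh soft-decision weighting below are illustrative stand-ins for the paper's reliability-proportional cancellation, not its exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 4, 16                                    # users, spreading-code length
codes = rng.choice([-1.0, 1.0], (K, N)) / np.sqrt(N)   # unit-energy codes
bits = rng.choice([-1.0, 1.0], K)               # transmitted bits
r = codes.T @ bits + 0.1 * rng.standard_normal(N)      # received chip vector

mf = codes @ r                                  # stage 0: matched filters
R = codes @ codes.T                             # code cross-correlations
est = mf.copy()
for stage in range(3):
    soft = np.tanh(2.0 * est)                   # soft, reliability-scaled bits
    # All users cancel the reconstructed interference of all OTHERS at once:
    est = mf - R @ soft + np.diag(R) * soft
print("sent:   ", bits)
print("decoded:", np.sign(est))
```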

  14. NAS parallel benchmark results

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barszcz, E.; Dagum, L.; Simon, H. D.

    1992-01-01

    The NAS (Numerical Aerodynamic Simulation) parallel benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a 'pencil and paper' fashion. The performance results of various systems using the NAS parallel benchmarks are presented. These results represent the best results that have been reported to the authors for the specific systems listed. They represent implementation efforts performed by personnel in both the NAS Applied Research Branch of NASA Ames Research Center and in other organizations.

  15. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (editor); Barton, John (editor); Lasinski, Thomas (editor); Simon, Horst (editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  16. Statistical modelling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1991-01-01

    During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

  17. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated through a numerical example of a heat exchanger system involving failure modes in the structural, heat transfer, and fluid flow disciplines.

  18. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  19. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincare's model for a non-Euclidean geometry is defined and analyzed. (LS)

  20. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material, presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  1. UCLA Parallel PIC Framework

    NASA Astrophysics Data System (ADS)

    Decyk, Viktor K.; Norton, Charles D.

    2004-12-01

    The UCLA Parallel PIC Framework (UPIC) has been developed to provide trusted components for the rapid construction of new, parallel Particle-in-Cell (PIC) codes. The Framework uses object-based ideas in Fortran95, and is designed to provide support for various kinds of PIC codes on various kinds of hardware. The focus is on student programmers. The Framework supports multiple numerical methods, different physics approximations, different numerical optimizations and implementations for different hardware. It is designed with "defensive" programming in mind, meaning that it contains many error checks and debugging helps. Above all, it is designed to hide the complexity of parallel processing. It is currently being used in a number of new Parallel PIC codes.

  2. Introduction to Parallel Processing

    E-print Network

    Evans, Hal

    Slide outline; only fragments are recoverable: Message Passing Interface (MPI), the industry standard for interprocess communication in parallel and cluster computing; optimization strategy (understanding where the software spends the most time); Graphics Processing Units (GPU).

  3. Artificial intelligence in parallel

    SciTech Connect

    Waldrop, M.M.

    1984-08-10

    The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

  4. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  5. Understanding biological computation: reliable learning and recognition.

    PubMed Central

    Hogg, T; Huberman, B A

    1984-01-01

    We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation. PMID:6593731
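
    A tiny attractor-based demonstration in this spirit, assuming a Hopfield-style network with a single stored pattern; the network size and corruption level are invented. The corrupted state relaxes back to the stored fixed point, illustrating self-repair by collective dynamics.

```python
import numpy as np

rng = np.random.default_rng(4)
pattern = rng.choice([-1, 1], 32)              # the stored memory

W = np.outer(pattern, pattern).astype(float)   # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                       # no self-connections

state = pattern.copy()
flip = rng.choice(32, 8, replace=False)        # corrupt 8 of 32 units
state[flip] *= -1

for _ in range(5):                             # relax toward the attractor
    state = np.sign(W @ state).astype(int)
    state[state == 0] = 1                      # break ties deterministically

print("recovered:", np.array_equal(state, pattern))  # True
```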

  6. Electronic logic for enhanced switch reliability

    DOEpatents

    Cooper, J.A.

    1984-01-20

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state in which both switches simultaneously agreed and producing the logical complement of that state. Thus, the logic circuit allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A fail-safe system having maximum reliability is thereby produced.
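
    A behavioral model of the described logic (the class and method names are invented):

```python
class RedundantSwitchLogic:
    """Software model of the patented conflict-resolving switch logic."""

    def __init__(self, initial=False):
        self.last_agreed = initial      # last state both switches shared

    def output(self, a: bool, b: bool) -> bool:
        if a == b:                      # agreement: follow the switches
            self.last_agreed = a
            return a
        # Conflict: fail toward the complement of the remembered state,
        # so a single stuck switch cannot hold the output where it was.
        return not self.last_agreed

logic = RedundantSwitchLogic()
print(logic.output(True, True))    # True  (both closed)
print(logic.output(True, False))   # False (conflict: complement of True)
print(logic.output(False, False))  # False (both open)
print(logic.output(True, False))   # True  (conflict: complement of False)
```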

  7. Reliable aluminum contact formation by electrostatic bonding

    NASA Astrophysics Data System (ADS)

    Kárpáti, T.; Pap, A. E.; Radnóczi, Gy; Beke, B.; Bársony, I.; Fürjes, P.

    2015-07-01

    The paper presents a detailed study of a reliable method developed for aluminum fusion wafer bonding assisted by the electrostatic force evolving during the anodic bonding process. The IC-compatible procedure described allows the parallel formation of electrical and mechanical contacts, facilitating reliable packaging of electromechanical systems with backside electrical contacts. This fusion bonding method supports the fabrication of complex microelectromechanical systems (MEMS) and micro-opto-electromechanical systems (MOEMS) structures with enhanced temperature stability, which is crucial in mechanical sensor applications such as pressure or force sensors. Due to the applied electrical potential of -1000 V, the Al metal layers are compressed by electrostatic force, and at the bonding temperature of 450 °C intermetallic diffusion causes aluminum ions to migrate between metal layers.

  8. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
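
    For the Bayesian side with go/no-go data, a minimal sketch (with illustrative Beta priors and a hypothetical series/parallel layout, not the paper's system) is to sample each component's reliability from its posterior and push the samples through the system structure; the wide interval contributed by a zero-failure, few-test component mirrors the sensitivity noted above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    def posterior_reliability(successes, trials, a=1.0, b=1.0):
        """Beta(a, b) prior updated with go/no-go data -> posterior samples of R."""
        return rng.beta(a + successes, b + trials - successes, size=N)

    # Hypothetical mixed system: C1 in series with (C2 parallel C3).
    r1 = posterior_reliability(48, 50)
    r2 = posterior_reliability(18, 20)
    r3 = posterior_reliability(10, 10)   # no failures in few tests -> prior-sensitive

    r_parallel = 1 - (1 - r2) * (1 - r3)
    r_system = r1 * r_parallel

    print(f"median R = {np.median(r_system):.3f}, "
          f"90% interval = ({np.quantile(r_system, 0.05):.3f}, "
          f"{np.quantile(r_system, 0.95):.3f})")
    ```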

  9. Orbiter Autoland reliability analysis

    NASA Technical Reports Server (NTRS)

    Welch, D. Phillip

    1993-01-01

    The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability for the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.

  10. Proposed reliability cost model

    NASA Technical Reports Server (NTRS)

    Delionback, L. M.

    1973-01-01

    The research investigations involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reinforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach depends upon the use of a series of subsystem-oriented CERs and, where possible, CTRs in devising a suitable cost-effective policy.

  11. Precision and reliability in animal navigation.

    PubMed

    Pfuhl, G; Tjelmeland, H; Biegler, R

    2011-05-01

    Uncertainty plays an important role in several navigational computations. Navigation typically depends on multiple sources of information, and different navigational systems may operate both in parallel and in combination. The optimal combination of information from different sources must take into account the uncertainty of that information. We distinguish between two types of spatial uncertainty: precision and reliability. Precision is the inverse variance of the probability distribution that describes the information a cue contributes to an organism's knowledge of its location. Reliability is the probability of the cue being correctly identified, or the probability of a cue being related to a target location. We argue that in most environments, precision and reliability are negatively correlated. In case of cue conflict, precision and reliability must be traded off against each other. We offer a quantitative description of optimal behaviour. Knowledge of uncertainty is also needed to optimally determine the point where a search should start when an organism has more precise spatial information in one of the spatial dimensions. We show that if there is any cost to travel, it is advantageous to head off to one side of the most likely target location and then search toward the target. The magnitude of the optimal offset depends on both travel cost and search cost. PMID:20496009
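
    The optimal-combination rule invoked here is, for Gaussian cues, inverse-variance (precision) weighting. A sketch with made-up numbers, treating reliability as the probability that a cue is correctly identified and folding it in as a mixture weight:

    ```python
    import numpy as np

    def combine(mus, sigmas):
        """Precision-weighted fusion of independent Gaussian position cues."""
        taus = 1.0 / np.asarray(sigmas, dtype=float) ** 2      # precisions
        mu = np.sum(taus * mus) / np.sum(taus)
        sigma = np.sqrt(1.0 / np.sum(taus))
        return mu, sigma

    # A precise landmark cue vs. an imprecise path-integration cue.
    mu, sigma = combine(mus=[10.0, 14.0], sigmas=[1.0, 3.0])
    print(f"fused estimate: {mu:.2f} +/- {sigma:.2f}")   # pulled toward the precise cue

    # Reliability p that the landmark is correctly identified -> mixture of
    # "use both cues" and "fall back on path integration alone".
    p = 0.8
    fallback = 14.0
    expected_target = p * mu + (1 - p) * fallback
    print(f"reliability-weighted target: {expected_target:.2f}")
    ```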

  12. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  13. Inspection criteria ensure quality control of parallel gap soldering

    NASA Technical Reports Server (NTRS)

    Burka, J. A.

    1968-01-01

    Investigation of parallel gap soldering of electrical leads resulted in recommendations on material preparation, equipment, process control, and visual inspection criteria to ensure reliable solder joints. The recommendations will minimize problems in heat-dwell time, amount of solder, bridging conductors, and damage of circuitry.

  14. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  15. Reliable Shapelet Image Analysis

    E-print Network

    P. Melchior; M. Meneghetti; M. Bartelmann

    2006-12-13

    Aims: We discuss the applicability and reliability of the shapelet technique for scientific image analysis. Methods: We quantify the effects of non-orthogonality of sampled shapelet basis functions and misestimation of shapelet parameters. We perform the shapelet decomposition on artificial galaxy images with underlying shapelet models and galaxy images from the GOODS survey, comparing the publicly available IDL implementation with our new C++ implementation. Results: Non-orthogonality of the sampled basis functions and misestimation of the shapelet parameters can cause substantial misinterpretation of the physical properties of the decomposed objects. Additional constraints, image preprocessing and enhanced precision have to be incorporated in order to achieve reliable decomposition results.

  16. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2013-07-31

    This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data were performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it may not be well suited to real-world data because of the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm achieves a 5-20 times speedup compared with the commercial EMS tool. The developed PSE shows promise for solving the SE problem for large power systems at the SCADA rate, thereby improving grid reliability.
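
    One SE iteration reduces to the weighted-least-squares normal equations G x̂ = HᵀWz, and the condition-number issue arises because widely spread measurement weights enter G quadratically. A serial numpy sketch (hypothetical sizes and weights, not the paper's PETSc code) with a Jacobi-preconditioned conjugate-gradient loop:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical linearized measurement model: z = H x + noise, weights W = 1/sigma^2.
    n, m = 20, 60
    H = rng.normal(size=(m, n))
    x_true = rng.normal(size=n)
    sigma = rng.uniform(0.01, 1.0, size=m)        # wide weight range inflates cond(G)
    z = H @ x_true + rng.normal(scale=sigma)
    W = np.diag(1.0 / sigma**2)

    G = H.T @ W @ H                                # gain matrix
    b = H.T @ W @ z

    def pcg(A, rhs, tol=1e-10, max_iter=500):
        """Conjugate gradient with a Jacobi (diagonal) preconditioner."""
        Minv = 1.0 / np.diag(A)
        x = np.zeros_like(rhs)
        r = rhs - A @ x
        zv = Minv * r
        p = zv.copy()
        for _ in range(max_iter):
            Ap = A @ p
            alpha = (r @ zv) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol * np.linalg.norm(rhs):
                break
            z_new = Minv * r_new
            beta = (r_new @ z_new) / (r @ zv)
            p = z_new + beta * p
            r, zv = r_new, z_new
        return x

    x_hat = pcg(G, b)
    print("estimation error:", np.linalg.norm(x_hat - x_true))
    print("cond(G):", np.linalg.cond(G))
    ```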

  17. Parallel time integration software

    Energy Science and Technology Software Center (ESTSC)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.
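
    The two-level core of this multigrid-reduction idea is the parareal iteration: a cheap coarse propagator sweeps sequentially while the expensive fine propagations of all time slices can run in parallel, and the correction converges to the sequential fine solution. A toy sketch for du/dt = λu (assumed propagators, not the package's implementation):

    ```python
    import numpy as np

    lam, T, N = -1.0, 2.0, 10          # du/dt = lam*u on [0, T], N coarse time slices
    dt = T / N

    def G(u, h):                       # coarse propagator: one backward-Euler step
        return u / (1.0 - lam * h)

    def F(u, h, sub=20):               # fine propagator: many small substeps
        for _ in range(sub):
            u = G(u, h / sub)
        return u

    u0 = 1.0
    U = np.empty(N + 1)
    U[0] = u0
    for n in range(N):                 # initial guess: coarse sweep only
        U[n + 1] = G(U[n], dt)

    for k in range(5):                 # parareal iterations
        Fu = np.array([F(U[n], dt) for n in range(N)])   # parallel-in-time part
        Unew = np.empty_like(U)
        Unew[0] = u0
        for n in range(N):             # sequential coarse correction
            Unew[n + 1] = G(Unew[n], dt) + Fu[n] - G(U[n], dt)
        U = Unew

    print("parareal end value:", U[-1], " exact:", np.exp(lam * T))
    ```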

  18. Progress in parallelizing XOOPIC

    SciTech Connect

    Mardahl, P.J.; Verboncoeur, J.P.

    1998-12-31

    XOOPIC (Object Oriented Particle in Cell code for X11-based Unix workstations) is presently a serial 2d 3v particle-in-cell plasma simulation. The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems, and improved performance for processor-intensive problems. The MPI library enables the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by virtual boundaries, objects which contain the local data and algorithms to perform the local field solve and particle communication between regions. This implementation will reduce the changes required in the rest of the program by parallelization. Specific implementation details such as the hiding of communication latency behind local computation will also be discussed. The initial implementation includes manual partitioning in one spatial coordinate, electromagnetic models, diagnostics by computational region, and effective transmission of both fields and particles across virtual boundaries. This version was able to perform greater than 600,000 particle-pushes-per-second using 8 200-MHz UltraSPARC CPUs. In this work the authors extend parallel XOOPIC to have 2-d partitioning, automated partitioning, and global diagnostics.

  19. Parallel time integration software

    SciTech Connect

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading toward systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  20. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.
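
    The Basic and Log-Poisson models mentioned are standard NHPP reliability-growth models. A sketch with assumed parameter values (not the report's experiments) that writes down both mean-value functions and generates replicated failure histories by thinning, showing the run-to-run variance the report is concerned with:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    a, b = 100.0, 0.05            # Basic model: mu(t) = a*(1 - exp(-b*t))
    lam0, theta = 5.0, 0.02       # Log-Poisson model: mu(t) = ln(1 + lam0*theta*t)/theta

    basic_rate = lambda t: a * b * np.exp(-b * t)
    logpois_rate = lambda t: lam0 / (1.0 + lam0 * theta * t)

    def simulate_nhpp(rate, t_max, rate_max):
        """One replicated failure history via thinning (Lewis-Shedler)."""
        t, times = 0.0, []
        while True:
            t += rng.exponential(1.0 / rate_max)
            if t > t_max:
                return np.array(times)
            if rng.random() < rate(t) / rate_max:
                times.append(t)

    # Replicated debugging histories of the same "program": the counts vary run
    # to run, which is the variance a single-run model fit cannot absorb.
    for rep in range(3):
        times = simulate_nhpp(basic_rate, t_max=50.0, rate_max=basic_rate(0.0))
        print(f"rep {rep}: {len(times)} failures by t=50 "
              f"(model mean {a * (1 - np.exp(-b * 50.0)):.1f})")
    ```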

  1. Reliable Energy Integration of

    E-print Network

    Zeng, Ning

    Physics-of-failure models are integrated to predict reliability and assess life for comparison of key technologies. Offshore locations introduce additional physical and electrical stresses that can cause failures, including highly variable temperatures, powerful storms, and lightning strikes.

  2. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  3. Reliable solar cookers

    SciTech Connect

    Magney, G.K.

    1992-12-31

    The author describes the activities of SERVE, a Christian relief and development agency, to introduce solar ovens to the Afghan refugees in Pakistan. It has provided 5,000 solar cookers since 1984. The experience has demonstrated the potential of the technology and the need for a durable and reliable product. Common complaints about the cookers are discussed and the ideal cooker is described.

  4. Quantifying Human Performance Reliability.

    ERIC Educational Resources Information Center

    Askren, William B.; Regulinski, Thaddeus L.

    Human performance reliability for tasks in the time-space continuous domain is defined and a general mathematical model presented. The human performance measurement terms time-to-error and time-to-error-correction are defined. The model and measurement terms are tested using laboratory vigilance and manual control tasks. Error and error-correction…

  5. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

  6. Designing reliability into accelerators

    SciTech Connect

    Hutton, A.

    1992-08-01

    For the next generation of high performance, high average luminosity colliders, the "factories," reliability engineering must be introduced right at the inception of the project and maintained as a central theme throughout the project. There are several aspects which will be addressed separately: Concept; design; motivation; management techniques; and fault diagnosis.

  7. Space Shuttle Propulsion System Reliability

    NASA Technical Reports Server (NTRS)

    Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

    2011-01-01

    This session includes the following sessions: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

  8. LUNAR MASS SPECTROMETER RELIABILITY PREDICTION

    E-print Network

    Rathbun, Julie A.

    Lunar Mass Spectrometer (LMS) Reliability Prediction, Contract No. NAS 9-5829, ATM-965, 9 June 1971. Presented in this ATM are the Lunar Mass Spectrometer (LMS) reliability predictions. S. J. Ellison, Manager, ALSEP Reliability.

  9. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…
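
    The reliability statistics these easyCBM reports compute are simple to reproduce. A sketch with simulated item responses (made-up data, not easyCBM scores): alternate-form reliability as the correlation of total scores across parallel forms, and split-half reliability stepped up with the Spearman-Brown formula.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical item responses: 50 students x 20 items, plus a parallel form.
    ability = rng.normal(size=50)
    items_a = (ability[:, None] + rng.normal(scale=1.0, size=(50, 20))) > 0
    items_b = (ability[:, None] + rng.normal(scale=1.0, size=(50, 20))) > 0

    # Alternate-form reliability: correlation of total scores across the two forms.
    form_a, form_b = items_a.sum(axis=1), items_b.sum(axis=1)
    r_alt = np.corrcoef(form_a, form_b)[0, 1]

    # Split-half: correlate odd vs. even halves of one form, then step up with
    # Spearman-Brown, r_full = 2*r_half / (1 + r_half), since each half is shorter.
    odd, even = items_a[:, 0::2].sum(axis=1), items_a[:, 1::2].sum(axis=1)
    r_half = np.corrcoef(odd, even)[0, 1]
    r_split = 2 * r_half / (1 + r_half)

    print(f"alternate-form r = {r_alt:.2f}, split-half (Spearman-Brown) r = {r_split:.2f}")
    ```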

  10. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  11. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  12. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  13. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  14. Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

    2012-01-01

    In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

  15. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.

  16. Parallelizing Quantum Circuits

    E-print Network

    Anne Broadbent; Elham Kashefi

    2007-04-13

    We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade off in terms of depth and space complexity. As a result we distinguish a class of polynomial depth circuits that can be parallelized to logarithmic depth while adding only polynomial many auxiliary qubits. In particular, we provide for the first time a full characterization of patterns with flow of arbitrary depth, based on the notion of influencing paths and a simple rewriting system on the angles of the measurement. Our method leads to insightful knowledge for constructing parallel circuits and as applications, we demonstrate several constant and logarithmic depth circuits. Furthermore, we prove a logarithmic separation in terms of quantum depth between the quantum circuit model and the measurement-based model.

  17. Parallel Magnetic Resonance Imaging

    E-print Network

    Uecker, Martin

    2015-01-01

    The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence, it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.

  18. Parallel optical sampler

    DOEpatents

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder modulator. A method of sampling the optical analog input signal is disclosed.

  19. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.

  20. "Parallel" transport - revisited

    E-print Network

    Stasheff, Jim

    2011-01-01

    Parallel transport in a fibre bundle with respect to smooth paths in the base space B has recently been extended to representations of the smooth singular simplicial set Sing_{smooth}(B). Inspired by these extensions, I revisit the development of a notion of 'parallel' transport in the topological setting of fibrations with the homotopy lifting property and then extend it to representations of Sing(B) on such fibrations. Closely related is the notion of (strong or 'infinity') homotopy action, which has variants under a variety of names.

  1. A Parallel Tree Code

    E-print Network

    John Dubinski

    1996-03-18

    We describe a new implementation of a parallel N-body tree code. The code is load-balanced using the method of orthogonal recursive bisection to subdivide the N-body system into independent rectangular volumes, each of which is mapped to a processor on a parallel computer. On the Cray T3D, the load balance is in the range of 70-90%, depending on the problem size and number of processors. The code can handle simulations with more than 10 million particles, roughly a factor of 10 greater than allowed in vectorized tree codes.
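
    Orthogonal recursive bisection itself is compact: split the particle set at the median along the widest axis, recurse on the halves, and map each resulting rectangular volume to a processor. A toy 2-D version (illustrative, not the T3D implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def orb(points, n_proc):
        """Recursively bisect points at the median of the widest axis
        until there is one rectangular region per processor."""
        if n_proc == 1:
            return [points]
        axis = int(np.argmax(points.max(axis=0) - points.min(axis=0)))
        order = np.argsort(points[:, axis])
        # Split proportionally so odd processor counts still balance.
        left_n = len(points) * (n_proc // 2) // n_proc
        left, right = points[order[:left_n]], points[order[left_n:]]
        return orb(left, n_proc // 2) + orb(right, n_proc - n_proc // 2)

    particles = rng.normal(size=(10_000, 2))
    domains = orb(particles, n_proc=8)
    print([len(d) for d in domains])   # near-equal particle counts per processor
    ```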

  2. Keldysh formalism for multiple parallel worlds

    E-print Network

    Mohammad Ansari; Yuli V. Nazarov

    2015-09-14

    We present here a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh technique in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  3. Keldysh formalism for multiple parallel worlds

    E-print Network

    Ansari, Mohammad

    2015-01-01

    We present here a compact and self-contained review of the recently developed Keldysh formalism for multiple parallel worlds. The formalism has been applied to consistent quantum evaluation of the flows of informational quantities, in particular, to evaluation of Renyi and Shannon entropy flows. We start with the formulation of the standard and extended Keldysh technique in a single world in a form convenient for our presentation. We explain the use of Keldysh contours encompassing multiple parallel worlds. In the end, we briefly summarize the concrete results obtained with the method.

  4. Selectivity of small molecule ligands for parallel and anti-parallel DNA G-quadruplex structures.

    PubMed

    Garner, Thomas P; Williams, Huw E L; Gluszyk, Katarzyna I; Roe, Stephen; Oldham, Neil J; Stevens, Malcolm F G; Moses, John E; Searle, Mark S

    2009-10-21

    We report CD, ESI-MS and molecular modelling studies of ligand binding interactions with DNA quadruplex structures derived from the human telomeric repeat sequence (h-Tel) and the proto-oncogenic c-kit promoter sequence. These sequences form anti-parallel (both 2 + 2 and 3 + 1) and parallel conformations, respectively, and demonstrate distinctively different degrees of structural plasticity in binding ligands. With h-Tel, we show that an extended heteroaromatic 1,4-triazole (TRZ), designed to exploit pi-stacking interactions and groove-specific contacts, shows some selectivity for parallel folds; however, the polycyclic fluorinated acridinium cation (RHPS4), which is a similarly potent telomerase inhibitor, shows selectivity for anti-parallel conformations, implicating favourable interactions with lateral and diagonal loops. In contrast, the unique c-kit parallel-stranded quadruplex shows none of the structural plasticity of h-Tel with either ligand. We show by quantitative ESI-MS analysis that both sequences are able to bind a ligand on either end of the quadruplex. In the case of h-Tel the two sites have similar affinities; however, in the case of the c-kit quadruplex the affinities of the two sites are different and ligand-dependent. We demonstrate that two different small molecule architectures result in significant differences in selectivity for parallel and anti-parallel quadruplex structures that may guide quadruplex-targeted drug design. PMID:19795057

  5. General Aviation Aircraft Reliability Study

    NASA Technical Reports Server (NTRS)

    Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

    2001-01-01

    This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

  6. Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2010-01-01

    This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

  7. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  8. NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Subhash, Saini; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and, except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for Class B LU, SP and BT benchmarks. We also mention NAS future plans for the NPB.

  9. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  10. Parallel Total Energy

    Energy Science and Technology Software Center (ESTSC)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  11. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  12. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (ESTSC)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  13. High performance parallel architectures

    SciTech Connect

    Anderson, R.E.

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  14. Progress in parallelizing XOOPIC

    NASA Astrophysics Data System (ADS)

    Mardahl, Peter; Verboncoeur, J. P.

    1997-11-01

    XOOPIC (Object Oriented Particle in Cell code for X11-based Unix workstations) is presently a serial 2-D 3v particle-in-cell plasma simulation (J.P. Verboncoeur, A.B. Langdon, and N.T. Gladd, "An object-oriented electromagnetic PIC code," Computer Physics Communications 87 (1995) 199-211). The present effort focuses on using parallel and distributed processing to optimize the simulation for large problems. The benefits include increased capacity for memory-intensive problems, and improved performance for processor-intensive problems. The MPI library is used to enable the parallel version to be easily ported to massively parallel, SMP, and distributed computers. The philosophy employed here is to spatially decompose the system into computational regions separated by 'virtual boundaries', objects which contain the local data and algorithms to perform the local field solve and particle communication between regions. This implementation will reduce the changes required in the rest of the program by parallelization. Specific implementation details such as the hiding of communication latency behind local computation will also be discussed.

  15. A massively asynchronous, parallel brain.

    PubMed

    Zeki, Semir

    2015-05-19

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously--with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871

  16. A massively asynchronous, parallel brain

    PubMed Central

    Zeki, Semir

    2015-01-01

    Whether the visual brain uses a parallel or a serial, hierarchical, strategy to process visual signals, the end result appears to be that different attributes of the visual scene are perceived asynchronously—with colour leading form (orientation) by 40 ms and direction of motion by about 80 ms. Whatever the neural root of this asynchrony, it creates a problem that has not been properly addressed, namely how visual attributes that are perceived asynchronously over brief time windows after stimulus onset are bound together in the longer term to give us a unified experience of the visual world, in which all attributes are apparently seen in perfect registration. In this review, I suggest that there is no central neural clock in the (visual) brain that synchronizes the activity of different processing systems. More likely, activity in each of the parallel processing-perceptual systems of the visual brain is reset independently, making of the brain a massively asynchronous organ, just like the new generation of more efficient computers promise to be. Given the asynchronous operations of the brain, it is likely that the results of activities in the different processing-perceptual systems are not bound by physiological interactions between cells in the specialized visual areas, but post-perceptually, outside the visual brain. PMID:25823871

  17. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

  18. Power electronics reliability.

    SciTech Connect

    Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley

    2010-10-01

    The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

  19. Human Reliability Program Workshop

    SciTech Connect

    Landers, John; Rogers, Erin; Gerke, Gretchen

    2014-05-18

    A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

  20. Improving Parallel I/O Performance with Data Layout Awareness

    SciTech Connect

    Chen, Yong; Sun, Xian-He; Thakur, Dr. Rajeev; Song, Huaiming; Jin, Hui

    2010-01-01

    Parallel applications can benefit greatly from massive computational capability, but their performance suffers from the large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel computing systems. In this study, we propose a data layout-aware optimization strategy to promote a better integration of the parallel I/O middleware and parallel file systems, two major components of current parallel I/O systems, and to improve data access performance. We explore the layout-aware optimization in both independent I/O and collective I/O, the two primary forms of I/O in parallel applications. We illustrate that layout-aware I/O optimization can effectively improve the performance of current parallel I/O strategies. The experimental results verify that the proposed strategy improves parallel I/O performance by nearly 40% on average. The proposed layout-aware parallel I/O has promising potential for improving the I/O performance of parallel systems.
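
    Much of the layout-aware gain comes from aligning requests to the file system's stripe boundaries. A toy calculation under assumed striping parameters (round-robin striping; the stripe size and server count below are placeholders, not the paper's configuration) of how many I/O servers a contiguous request touches:

    ```python
    STRIPE = 64 * 1024        # assumed stripe size in bytes
    SERVERS = 4               # assumed number of I/O servers (round-robin striping)

    def servers_touched(offset: int, length: int) -> set[int]:
        """Which servers a contiguous request hits under round-robin striping."""
        first = offset // STRIPE
        last = (offset + length - 1) // STRIPE
        return {s % SERVERS for s in range(first, last + 1)}

    # A 64 KiB request: aligned, it hits 1 server; shifted by 1 KiB, it hits 2,
    # doubling the network round trips needed for the same amount of data.
    print(servers_touched(0, STRIPE))          # {0}
    print(servers_touched(1024, STRIPE))       # {0, 1}
    ```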

  1. Parallel Harness for Informatic Stream Hashing

    Energy Science and Technology Software Center (ESTSC)

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  2. Parallel Harness for Informatic Stream Hashing

    SciTech Connect

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  3. Compact, Reliable EEPROM Controller

    NASA Technical Reports Server (NTRS)

    Katz, Richard; Kleyner, Igor

    2010-01-01

    A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

  4. ATLAS reliability analysis

    SciTech Connect

    Bartsch, R.R.

    1995-09-01

    Key elements of the 36 MJ ATLAS capacitor bank have been evaluated for individual probabilities of failure. These have been combined to estimate system reliability which is to be greater than 95% on each experimental shot. This analysis utilizes Weibull or Weibull-like distributions with increasing probability of failure with the number of shots. For transmission line insulation, a minimum thickness is obtained and for the railgaps, a method for obtaining a maintenance interval from forthcoming life tests is suggested.
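
    The shot-count bookkeeping described can be sketched with a Weibull model: each component's conditional probability of surviving shot n grows less favorable as shots accumulate, and series components multiply. The parameters below are invented for illustration, not the ATLAS values:

    ```python
    import numpy as np

    # Hypothetical Weibull life parameters (eta = characteristic life in shots,
    # beta = shape; beta > 1 gives an increasing failure probability with shots).
    components = {
        "capacitors": (2000.0, 2.0),
        "railgaps":   (800.0, 1.5),
        "insulation": (3000.0, 3.0),
    }

    def weibull_survival(n, eta, beta):
        return np.exp(-((n / eta) ** beta))

    def shot_reliability(n):
        """Probability the whole bank survives shot n given it survived shot n-1;
        components are in series, so per-shot reliabilities multiply."""
        r = 1.0
        for eta, beta in components.values():
            r *= weibull_survival(n, eta, beta) / weibull_survival(n - 1, eta, beta)
        return r

    for n in (1, 100, 500):
        print(f"shot {n}: system per-shot reliability = {shot_reliability(n):.4f}")
    ```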

  5. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  6. Fatigue and Reliability of Wind Turbines

    Energy Science and Technology Software Center (ESTSC)

    1995-08-17

    FAROW is a computer program that assists in the probabilistic analysis of the fatigue and reliability of wind turbines. The fatigue lifetime of wind turbine components is calculated using functional forms for important input quantities. Parameters of these functions are defined in an input file as either constants or random variables. The user can select from a library of random variable distribution functions. FAROW uses structural reliability techniques to calculate the mean time to failure, probability of failure before a target lifetime, relative importance of each of the random inputs, and the sensitivity of the reliability to all input parameters. Monte Carlo simulation is also available.
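
    FAROW-style outputs can be mimicked by brute-force Monte Carlo under a toy lifetime model (the functional form and distributions below are placeholders, not FAROW's library): sample the random inputs, evaluate the lifetime function, and report mean time to failure, probability of failure before a target lifetime, and a crude importance ranking.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 200_000

    # Toy fatigue model: life (years) = C / stress**m, with a lognormal material
    # constant C and a normally distributed stress amplitude (MPa).
    C = rng.lognormal(mean=np.log(1e6), sigma=0.4, size=N)
    stress = np.maximum(rng.normal(loc=30.0, scale=4.0, size=N), 1.0)
    m = 3.0                                   # assumed S-N curve slope

    life = C / stress**m
    target = 20.0                             # target lifetime in years

    print(f"mean time to failure: {life.mean():.1f} years")
    print(f"P(failure before {target:.0f} y): {np.mean(life < target):.4f}")

    # Crude importance ranking: rank-correlate each random input with life.
    for name, x in (("C", C), ("stress", stress)):
        ranks_x = np.argsort(np.argsort(x))
        ranks_life = np.argsort(np.argsort(life))
        rho = np.corrcoef(ranks_x, ranks_life)[0, 1]
        print(f"rank correlation with life: {name} = {rho:+.2f}")
    ```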

  7. Message based event specification for debugging nondeterministic parallel programs

    SciTech Connect

    Damohdaran-Kamal, S.K.; Francioni, J.M.

    1995-02-01

    Portability and reliability of parallel programs can be severely impaired by their nondeterministic behavior. Therefore, an effective means to precisely and accurately specify unacceptable nondeterministic behavior is necessary for testing and debugging parallel programs. In this paper we describe a class of expressions, called Message Expressions, that can be used to specify the nondeterministic behavior of message-passing parallel programs. Specification of program behavior with Message Expressions is easier than with pattern-based specification techniques, in that the former does not require knowledge of run-time event order, whereas the latter depends on the user's knowledge of the run-time event order for correct specification. We also discuss our adaptation of Message Expressions for use in a dynamic distributed testing and debugging tool, called mdb, for programs written for PVM (Parallel Virtual Machine).

  8. PARALLEL ELECTRIC FIELD SPECTRUM OF SOLAR WIND TURBULENCE

    SciTech Connect

    Mozer, F. S.; Chen, C. H. K.

    2013-05-01

    By searching through more than 10 satellite years of THEMIS and Cluster data, 3 reliable examples of parallel electric field turbulence in the undisturbed solar wind have been found. The perpendicular and parallel electric field spectra in these examples have similar shapes and amplitudes, even at large scales (frequencies below the ion gyroscale), where Alfvenic turbulence with no parallel electric field component is thought to dominate. The spectra of the parallel electric field fluctuations are power laws with exponents near -5/3 below the ion scales (~0.1 Hz), and with a flattening of the spectrum in the vicinity of this frequency. At small scales (above a few Hz), the spectra are steeper than -5/3 with values in the range of -2.1 to -2.8. These steeper slopes are consistent with expectations for kinetic Alfven turbulence, although their amplitude relative to the perpendicular fluctuations is larger than expected.

  9. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S. (Storrs, CT)

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.
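    As a rough numerical illustration of the imbalance such a device corrects (not part of the patent), the sketch below solves Kirchhoff's current law at the common bus for three parallel strings with slightly mismatched EMFs and internal resistances; all component values are hypothetical.

```python
import numpy as np

# Three nominally identical battery strings in parallel (values hypothetical).
emf = np.array([48.0, 47.8, 48.2])       # string EMFs, volts
r = np.array([0.050, 0.055, 0.048])      # string internal resistances, ohms
i_load = 60.0                            # load current drawn from the bus, amperes

# Kirchhoff's current law at the bus: sum((emf - v) / r) = i_load.
v = (np.sum(emf / r) - i_load) / np.sum(1.0 / r)
i_string = (emf - v) / r                 # unequal currents the balancer evens out
print(f"bus voltage {v:.2f} V, string currents {np.round(i_string, 1)} A")
```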

  10. Fault Tree Reliability Analysis and Design-for-reliability

    Energy Science and Technology Software Center (ESTSC)

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.

  11. Reliability analysis of hybrid ceramic/steel gun barrels

    E-print Network

    Grujicic, Mica

    Reliability analysis of hybrid ceramic/steel gun barrels, M. Grujicic et al. Received in final form 25 February 2002. Abstract: failure of the ceramic gun-barrel lining ... probability for the lining is also discussed. Keywords: failure; gun-barrel lining; reliability; thermo...

  12. On Component Reliability and System Reliability for Space Missions

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

    2012-01-01

    This paper addresses the basics of, the limitations of, and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for future NASA missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

  13. Interconnecting computers with the high-speed parallel interface

    SciTech Connect

    Tolmie, D.E.; Dornhoff, A.G.; Tenbrink, S.C.

    1982-08-01

    A variety of local computer network architectures have been evaluated for use in the Los Alamos National Laboratory Central Computing Facility in support of scientific research at the Laboratory. A store-and-forward switched message system built around hardware products known as High-Speed Parallel Interfaces (HSPIs) was determined to be superior in this application to the more conventional contention bus systems. The HSPI interfaces, along with some dedicated computers used as intermediate switching nodes, implement this store-and-forward architecture and form the backbone of the Integrated Computer Network (ICN). The HSPI channel standard was developed at Los Alamos for the interconnection of computers of different manufacture. This standard intercomputer interface is used for passing data and messages via the I/O channels of the different computers. Reliable full-duplex point-to-point data transfers at speeds up to 50 million bits per second are accommodated. Extensive error detection and error correction capabilities are included in the HSPI hardware. HSPIs are currently in around-the-clock use at Los Alamos, interconnecting the computers of one of the world's most powerful computer networks. This report discusses the HSPI specifications and use of the HSPIs in the network. The Los Alamos network architecture and comparisons to a contention bus architecture are also discussed.

  14. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)
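    The idea behind such tables is easy to reproduce computationally. A minimal sketch (hypothetical, not from the article) that enumerates integer resistor pairs whose parallel combination is also a whole number:

```python
def parallel(r1: int, r2: int) -> float:
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

# Enumerate pairs up to 100 ohms whose parallel total is a whole number:
# r1*r2/(r1+r2) is an integer exactly when (r1 + r2) divides r1*r2.
pairs = [(r1, r2, (r1 * r2) // (r1 + r2))
         for r1 in range(1, 101)
         for r2 in range(r1, 101)
         if (r1 * r2) % (r1 + r2) == 0]

for r1, r2, total in pairs[:5]:
    print(f"{r1} || {r2} = {total}")   # e.g. 3 || 6 = 2, 4 || 12 = 3
```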

  15. Descriptive Simplicity in Parallel Computing 

    E-print Network

    Marr, Marcus

    The programming of parallel computers is recognised as being a difficult task and there exist a wide selection of parallel programming languages and environments. This thesis presents and examines the Hierarchical ...

  16. Tinkertoy Parallel Programming: Complicated Applications

    E-print Network

    Plimpton, Steve

    ... parallel algorithms for particle modeling, crash simulations and transferring data between two independent ... of activity within the parallel computing community. This work was funded by the Applied ...

  17. On parallel machine scheduling 1

    E-print Network

    Magdeburg, Universität

    ... parallel machines with setup times. The setup has to be performed by a single server. The objective is to minimize ... even for the case of two identical parallel machines. This paper presents a pseudopolynomial ...

  18. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

  19. Ultrascalable petaflop parallel supercomputer

    DOEpatents

    Blumrich, Matthias A. (Ridgefield, CT); Chen, Dong (Croton On Hudson, NY); Chiu, George (Cross River, NY); Cipolla, Thomas M. (Katonah, NY); Coteus, Paul W. (Yorktown Heights, NY); Gara, Alan G. (Mount Kisco, NY); Giampapa, Mark E. (Irvington, NY); Hall, Shawn (Pleasantville, NY); Haring, Rudolf A. (Cortlandt Manor, NY); Heidelberger, Philip (Cortlandt Manor, NY); Kopcsay, Gerard V. (Yorktown Heights, NY); Ohmacht, Martin (Yorktown Heights, NY); Salapura, Valentina (Chappaqua, NY); Sugavanam, Krishnan (Mahopac, NY); Takken, Todd (Brewster, NY)

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing, including a Torus, a collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. A DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  20. Homology, convergence and parallelism.

    PubMed

    Ghiselin, Michael T

    2016-01-01

    Homology is a relation of correspondence between parts of parts of larger wholes. It is used when tracking objects of interest through space and time and in the context of explanatory historical narratives. Homologues can be traced through a genealogical nexus back to a common ancestral precursor. Homology being a transitive relation, homologues remain homologous however much they may come to differ. Analogy is a relationship of correspondence between parts of members of classes having no relationship of common ancestry. Although homology is often treated as an alternative to convergence, the latter is not a kind of correspondence: rather, it is one of a class of processes that also includes divergence and parallelism. These often give rise to misleading appearances (homoplasies). Parallelism can be particularly hard to detect, especially when not accompanied by divergences in some parts of the body. PMID:26598721

  1. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
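    The overlap-and-save building block that these architectures parallelize can be sketched in a few lines. Below is a plain serial reference version for illustration only; the block size and names are arbitrary choices, and the report's designs distribute the subfilter work across hardware rather than looping serially.

```python
import numpy as np

def overlap_save(x, h, nfft=128):
    """Filter x with FIR taps h via the DFT-IDFT overlap-and-save method."""
    m = len(h)
    step = nfft - (m - 1)                        # new samples consumed per block
    H = np.fft.fft(h, nfft)                      # one DFT of the zero-padded filter
    xp = np.concatenate([np.zeros(m - 1), x])    # history for the first block
    out = []
    for start in range(0, len(x), step):
        block = xp[start:start + nfft]
        if len(block) < nfft:
            block = np.pad(block, (0, nfft - len(block)))
        y = np.fft.ifft(np.fft.fft(block) * H).real
        out.append(y[m - 1:])                    # drop the m-1 aliased samples
    return np.concatenate(out)[:len(x)]

x, h = np.random.randn(1000), np.random.randn(32)
assert np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)])
```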

  2. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

  3. Parallel grid population

    DOEpatents

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
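    A toy sketch of the two-phase method described in the claim, with a Python process pool standing in for the n processors (the one-task-per-object mapping and the shared tagged list below are simplifications for illustration, not part of the patent):

```python
from multiprocessing import Pool

N_PORTIONS = 4                                 # n: processors / grid slabs
GRID_MIN, GRID_MAX = 0.0, 100.0
WIDTH = (GRID_MAX - GRID_MIN) / N_PORTIONS

def portions_for(obj):
    """Phase 1: find which grid slabs an object (xmin, xmax) touches."""
    xmin, xmax = obj
    first = max(int((xmin - GRID_MIN) // WIDTH), 0)
    last = min(int((xmax - GRID_MIN) // WIDTH), N_PORTIONS - 1)
    return [(p, obj) for p in range(first, last + 1)]

def populate(args):
    """Phase 2: one worker gathers every object bound to its slab."""
    portion, tagged = args
    return portion, [obj for p, obj in tagged if p == portion]

if __name__ == "__main__":
    objects = [(5.0, 30.0), (20.0, 95.0), (60.0, 61.0)]
    with Pool(N_PORTIONS) as pool:
        # Phase 1 in parallel: objects are mapped to the slabs that bound them.
        tagged = [t for sub in pool.map(portions_for, objects) for t in sub]
        # Phase 2 in parallel: each slab is populated independently.
        grid = dict(pool.map(populate, [(p, tagged) for p in range(N_PORTIONS)]))
    print(grid)
```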

  4. Ultimately Reliable Pyrotechnic Systems

    NASA Technical Reports Server (NTRS)

    Scott, John H.; Hinkel, Todd

    2015-01-01

    This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Opportunity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing safety and operational availability.

  5. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide. The focus of this document is, to the extent possible, to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  6. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  7. Ferrite logic reliability study

    NASA Technical Reports Server (NTRS)

    Baer, J. A.; Clark, C. B.

    1973-01-01

    Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

  8. Nuclear performance and reliability

    SciTech Connect

    Rothwell, G.

    1993-07-01

    There has been a significant improvement in nuclear power plant performance, due largely to a decline in the forced outage rate and a dramatic drop in the average number of forced outages per fuel cycle. If fewer forced outages are a sign of improved safety, nuclear power plants have become safer and more productive over time. To encourage further increases in performance, regulatory incentive schemes should reward reactor operators for improved reliability and safety, as well as for improved performance.

  9. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  10. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  11. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  12. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  13. 1 CFR 21.23 - Parallel citations of Code and Federal Register.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    For parallel reference, the Code of Federal Regulations and the Federal Register may be cited in the following forms, as appropriate: ___ CFR ___ (___ FR ___).

  14. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture can be implemented, as well as how efficiently the Ada environment can be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  15. Xyce™ Parallel Electronic Simulator

    Energy Science and Technology Software Center (ESTSC)

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE and transient mode using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel. Lastly, it uses a variety of modern solution algorithms, such as dynamic parallel load-balancing and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over a network using modified nodal analysis. This results in a set of differential algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.
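    As a toy illustration of the modified nodal analysis step described above (this is not Xyce code, just the textbook idea applied to a linear resistive network with hypothetical values), Kirchhoff's current law at each node yields a conductance matrix and a linear solve:

```python
import numpy as np

# Two-node resistive network: R1 between nodes 1 and 2, R2 and R3 to ground,
# and a 1 mA current source into node 1 (all values hypothetical).
R1, R2, R3 = 1e3, 2e3, 2e3
g1, g2, g3 = 1 / R1, 1 / R2, 1 / R3

# Conductance matrix stamped from Kirchhoff's current law at each node.
G = np.array([[g1 + g2, -g1],
              [-g1, g1 + g3]])
i = np.array([1e-3, 0.0])          # injected currents

v = np.linalg.solve(G, i)          # node voltages: [1.2, 0.8] volts
print(v)
```

For nonlinear devices this linear solve sits inside a Newton iteration, as the abstract notes.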

  17. Parallel Pascal - An extended Pascal for parallel computers

    NASA Technical Reports Server (NTRS)

    Reeves, A. P.

    1984-01-01

    Parallel Pascal is an extended version of the conventional serial Pascal programming language which includes a convenient syntax for specifying array operations. It is upward compatible with standard Pascal and involves only a small number of carefully chosen new features. Parallel Pascal was developed to reduce the semantic gap between standard Pascal and a large range of highly parallel computers. Two important design goals of Parallel Pascal were efficiency and portability. Portability is particularly difficult to achieve since different parallel computers frequently have very different capabilities.

  18. Product reliability and thin-film photovoltaics

    NASA Astrophysics Data System (ADS)

    Gaston, Ryan; Feist, Rebekah; Yeung, Simon; Hus, Mike; Bernius, Mark; Langlois, Marc; Bury, Scott; Granata, Jennifer; Quintana, Michael; Carlson, Carl; Sarakakis, Georgios; Ogden, Douglas; Mettas, Adamantios

    2009-08-01

    Despite significant growth in photovoltaics (PV) over the last few years, only approximately 1.07 billion kWhr of electricity is estimated to have been generated from PV in the US during 2008, or 0.27% of total electrical generation. PV market penetration is set for a paradigm shift, as fluctuating hydrocarbon prices and an acknowledgement of the environmental impacts associated with their use, combined with breakthrough new PV technologies, such as thin-film and BIPV, are driving the cost of energy generated with PV to parity or cost advantage versus more traditional forms of energy generation. In addition to reaching cost parity with grid supplied power, a key to the long-term success of PV as a viable energy alternative is the reliability of systems in the field. New technologies may or may not have the same failure modes as previous technologies. Reliability testing and product lifetime issues continue to be one of the key bottlenecks in the rapid commercialization of PV technologies today. In this paper, we highlight the critical need for moving away from relying on traditional qualification and safety tests as a measure of reliability and focus instead on designing for reliability and its integration into the product development process. A drive towards quantitative predictive accelerated testing is emphasized and an industrial collaboration model addressing reliability challenges is proposed.

  19. Testing for PV Reliability (Presentation)

    SciTech Connect

    Kurtz, S.; Bansal, S.

    2014-09-01

    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  20. Doing Research I Reliable Sources

    E-print Network

    Chen, Deming

    The public depends on museums to give accurate, dependable information. To gather the details needed for proper records and labels, staff members look for reliable sources. An object illustrated and described in a reliable written source is invaluable in the research process.

  1. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  2. HPC Infrastructure for Solid Earth Simulation on Parallel Computers

    NASA Astrophysics Data System (ADS)

    Nakajima, K.; Chen, L.; Okuda, H.

    2004-12-01

    Recently, various types of parallel computers with various architectures and processing elements (PEs) have emerged, including PC clusters and the Earth Simulator. Moreover, users can easily access these computing resources through networks in a Grid environment. It is well known that thorough tuning is required to achieve excellent performance on each computer, and the tuning method strongly depends on the type of PE and architecture. Optimization by tuning is very tough work, especially for application developers. Moreover, parallel programming with a message-passing library such as MPI is another big task for application programmers. In the GeoFEM project (http://gefeom.tokyo.rist.or.jp), the authors have developed a parallel FEM platform for solid earth simulation on the Earth Simulator, which supports parallel I/O, parallel linear solvers, and parallel visualization. This platform efficiently hides the complicated procedures for parallel programming and optimization on vector processors from application programmers. This type of infrastructure is very useful: source code developed on a single-processor PC is easily optimized for a massively parallel computer by linking it to the parallel platform installed on the target machine. This parallel platform, called HPC Infrastructure, will provide dramatic efficiency, portability, and reliability in the development of scientific simulation codes. For example, the source code is expected to run to fewer than 10,000 lines, and porting legacy codes to a parallel computer takes 2 or 3 weeks. The original GeoFEM platform supports only I/O, linear solvers, and visualization; in the present work, further development for adaptive mesh refinement (AMR) and dynamic load balancing (DLB) has been carried out. In this presentation, examples of large-scale solid earth simulation using the Earth Simulator will be demonstrated. Moreover, recent results from a parallel computational steering tool using an MxN communication model will be shown. In an MxN communication model, the large-scale computation modules run on M PEs while high-performance parallel visualization modules run concurrently on N PEs. This allows computation and visualization to select suitable parallel hardware environments separately. Meanwhile, real-time steering can be achieved during computation so that users can check and adjust the computation process in real time. Furthermore, different numbers of PEs allow a better balance between computation and visualization in a Grid environment.

  3. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
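    Under the constant-failure-rate assumption used in the analysis, unit reliability and parallel redundancy reduce to simple closed forms. A minimal sketch follows; the MTBF value and unit counts are illustrative placeholders, not numbers from the study.

```python
import math

def reliability(mtbf_hours, mission_hours):
    """Survival probability of one unit under a constant failure rate."""
    return math.exp(-mission_hours / mtbf_hours)

def parallel_redundant(r_unit, n_units):
    """Probability that at least one of n identical parallel units survives."""
    return 1.0 - (1.0 - r_unit) ** n_units

mission = 8760.0                                  # 1-year mission, hours
r1 = reliability(mtbf_hours=20_000.0, mission_hours=mission)  # placeholder MTBF
print(f"single unit:      {r1:.3f}")
for n in (2, 3):
    print(f"{n} parallel units: {parallel_redundant(r1, n):.3f}")
```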

  4. The Reliability of Neurons

    PubMed Central

    Bullock, Theodore Holmes

    1970-01-01

    The prevalent probabilistic view is virtually untestable; it remains a plausible belief. The cases usually cited cannot be taken as evidence for it. Several grounds for this conclusion are developed. Three issues are distinguished in an attempt to clarify a murky debate: (a) the utility of probabilistic methods in data reduction, (b) the value of models that assume indeterminacy, and (c) the validity of the inference that the nervous system is largely indeterministic at the neuronal level. No exception is taken to the first two; the second is a private heuristic question. The third is the issue to which the assertion in the first two sentences is addressed. Of the two kinds of uncertainty, statistical mechanical (= practical unpredictability) as in a gas, and Heisenbergian indeterminacy, the first certainly exists; the second is moot at the neuronal level. It would contribute to discussion to recognize that neurons perform with a degree of reliability. Although unreliability is difficult to establish, to say nothing of measure, evidence that some neurons have a high degree of reliability, in both connections and activity, is increasing greatly. An example is given from sternarchine electric fish. PMID:5462670

  5. Photon detection with parallel asynchronous processing

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1990-01-01

    An approach to photon detection with a parallel asynchronous signal processor is described. The visible or IR photon-detection capability of the silicon p(+)-n-n(+) detectors and the parallel asynchronous processing are addressed separately. This approach would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the devices would form a 2D array processor with a 2D array of inputs located directly behind a focal-plane detector array. A 2D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems can integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The possibility of multispectral image processing is addressed.

  6. Rocket engine propulsion 'system reliability'

    NASA Technical Reports Server (NTRS)

    O'Hara, K. J.

    1992-01-01

    A system reliability approach is discussed which is based on a quantitative assessment of both the engine system and hardware reliability during the design process. The approach makes it possible to evaluate design trades as the design matures. The system reliability assessment approach offers the following benefits: the uncertainty of each design variable is explicitly considered in the analysis; the most significant design variables are ranked in order of their effect on reliability; and design trades can be assessed for reliability impact. Finally, the approach facilitates communication between disciplines and thus aids concurrent engineering.

  7. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1989-01-01

    A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The authors focus on tensor product array computations, a simple but important class of numerical algorithms. They consider first the problem of programming one-dimensional kernel routines, such as parallel tridiagonal solvers, and then look at how such parallel kernels can be combined to form parallel tensor product algorithms.
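    A tensor product array computation of the kind these primitives target can be expressed by applying each one-dimensional operator along its own axis. The NumPy sketch below (serial, for illustration only, not the paper's language primitives) applies a second-difference kernel along both axes of a 2-D grid and checks the result against the explicit Kronecker-product matrix:

```python
import numpy as np

def apply_tensor_product(ops, u):
    """Apply (A1 ⊗ A2 ⊗ ... ⊗ Ad) to a d-dimensional array u,
    one 1-D operator per axis."""
    for axis, a in enumerate(ops):
        u = np.moveaxis(np.tensordot(a, u, axes=(1, axis)), 0, axis)
    return u

n = 5
d2 = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # 1-D second difference
u = np.random.rand(n, n)
v = apply_tensor_product([d2, d2], u)

# Same answer as the full Kronecker-product matrix acting on the flattened grid.
assert np.allclose(v.ravel(), np.kron(d2, d2) @ u.ravel())
```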

  8. Computer Assisted Parallel Program Generation

    E-print Network

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities, and product development. Parallel program writing itself is not always a simple task, depending on the problem to be solved. Large-scale scientific computing, huge data analyses, and precise visualizations, for example, would require parallel computation, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and it contributes to enriching our society and our lives toward a programming-free environment in computing science. Research activities on problem solving environments (PSEs) started in the 1970s to enhance programming power. P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  9. Extended Parallelism Models for Optimization on Massively Parallel Computers

    SciTech Connect

    Eldred, M.S.; Schimel, B.D.

    1999-05-24

    Single-level parallel optimization approaches, those in which either the simulation code executes in parallel or the optimization algorithm invokes multiple simultaneous single-processor analyses, have been investigated previously and been shown to be effective in reducing the time required to compute optimal solutions. However, these approaches have clear performance limitations that prevent effective scaling with the thousands of processors available in massively parallel supercomputers. In more recent work, a capability has been developed for multilevel parallelism in which multiple instances of multiprocessor simulations are coordinated simultaneously. This implementation employs a master-slave approach using the Message Passing Interface (MPI) within the DAKOTA software toolkit. Mathematical analysis on achieving peak efficiency in multilevel parallelism has shown that the most effective processor partitioning scheme is the one that limits the size of multiprocessor simulations in favor of concurrent execution of multiple simulations. That is, if both coarse-grained and fine-grained parallelism can be exploited, then preference should be given to the coarse-grained parallelism. This analysis was verified in multilevel parallel computational experiments on networks of workstations (NOWs) and on the Intel TeraFLOPS massively parallel supercomputer. In current work, methods for exploiting additional coarse-grained parallelism in optimization are being investigated so that fine-grained efficiency losses can be further minimized. These activities focus both on algorithmic coarse-grained parallelism (multiple independent function evaluations) through the development of speculative gradient methods and concurrent iterator strategies, and on function evaluation coarse-grained parallelism (multiple separable simulations within a function evaluation) through the development of general partitioning and nested synchronization facilities. The net result is a total of four separate levels of parallelism which can minimize efficiency losses and achieve near-linear scaling on massively parallel computers.

  10. Highly parallel computation

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.; Tichy, Walter F.

    1990-01-01

    Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on their architectures. The two classes designated multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

  11. The Parallel Principle

    E-print Network

    Richard A Mould

    2001-11-19

    Von Neumann's psycho-physical parallelism requires the existence of an interaction between subjective experiences and material systems. A hypothesis is proposed that amends physics in a way that connects subjective states with physical states, and a general model of the interaction is provided. A specific example shows how the theory applies to pain consciousness. The implications concerning quantum mechanical state creation and reduction are discussed, and some mechanisms are suggested to seed the process. An experiment that tests the hypothesis is described elsewhere. Key Words: von Neumann, psycho-physical, consciousness, state reduction, state collapse, macroscopic superpositions, conscious observer.

  12. Learning in Parallel

    E-print Network

    Vitter, Jeffrey Scott; Lin, Jyh-Han

    1992-01-01

    ... no restrictions on the representation of the hypothesis returned by the parallel learning algorithm other than that it is NC-evaluatable. Definition 2.4. A concept class C_n is NC-evaluatable if the problem of determining whether a given hypothesis c ∈ C_n is consistent ... If C_n is NC-evaluatable, then C_n is NC-learnable. (Section 3, NC-learnable Concept Classes.) Proof. The proof is a straightforward adaptation of the proof for the sequential case given in [Blumer, Ehrenfeucht, Haussler, and Warmuth 1989]. Suppose...

  13. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J.; de Verdiere, G.C.

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  14. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

    Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request for each plug-in in the feature is inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
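    The core mechanism, many checkout requests drained by a fixed-size thread pool, can be sketched in a few lines. The plug-in names, repository URL, and the use of svn below are hypothetical stand-ins for illustration, not PEPC internals:

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

REPO = "https://example.org/repo"                   # placeholder URL
plugins = ["org.example.core", "org.example.ui",    # as if parsed from feature.xml
           "org.example.net"]

def checkout(plugin):
    """One checkout request; many of these run concurrently."""
    cmd = ["svn", "checkout", f"{REPO}/{plugin}", plugin]
    return subprocess.run(cmd, capture_output=True).returncode

# A small thread pool plays the role of PEPC's configurable pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    for plugin, rc in zip(plugins, pool.map(checkout, plugins)):
        print(plugin, "ok" if rc == 0 else "failed")
```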

  15. Task Parallel Skeletons for Irregularly Structured Problems

    E-print Network

    Hofstedt, Petra

    Task Parallel Skeletons for Irregularly Structured Problems Petra Hofstedt Department of Computer parallel skeleton into a functional programming language is presented. Task parallel skeletons, as other al- gorithmic skeletons, represent general parallelization patterns. They are introduced into otherwise

  16. Spectrophotometric Assay of Mebendazole in Dosage Forms Using Sodium Hypochlorite

    NASA Astrophysics Data System (ADS)

    Swamy, N.; Prashanth, K. N.; Basavaiah, K.

    2014-07-01

    A simple, selective and sensitive spectrophotometric method is described for the determination of mebendazole (MBD) in bulk drug and dosage forms. The method is based on the reaction of MBD with hypochlorite in the presence of sodium bicarbonate to form the chloro derivative of MBD, followed by the destruction of the excess hypochlorite by nitrite ion. The color was formed by the oxidation of iodide by the chloro derivative of MBD to iodine in the presence of starch, forming the blue colored product, which was measured at 570 nm. The optimum conditions that affect the reaction were ascertained and, under these conditions, a linear relationship was obtained in the concentration range of 1.25-25.0 µg/ml MBD. The calculated molar absorptivity and Sandell sensitivity values are 9.56×10³ L·mol⁻¹·cm⁻¹ and 0.031 µg/cm², respectively. The limits of detection and quantification are 0.11 and 0.33 µg/ml, respectively. The proposed method was applied successfully to the determination of MBD in bulk drug and dosage forms, and no interference was observed from excipients present in the dosage forms. The reliability of the proposed method was further checked by parallel determination by the reference method and also by recovery studies.
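    As a quick plausibility check on the reported figures (not a procedure from the paper), the Beer-Lambert law A = εbc relates the stated molar absorptivity to the absorbance expected across the linear range. In the sketch below the molar mass is an assumed approximate value and a 1 cm path length is assumed:

```python
MW_MBD = 295.3        # approximate molar mass of mebendazole, g/mol (assumed)
EPSILON = 9.56e3      # reported molar absorptivity, L·mol^-1·cm^-1

def absorbance(conc_ug_per_ml, path_cm=1.0):
    """Beer-Lambert law A = epsilon * b * c."""
    conc_mol_per_l = conc_ug_per_ml * 1e-3 / MW_MBD   # µg/mL -> g/L -> mol/L
    return EPSILON * path_cm * conc_mol_per_l

for c in (1.25, 12.5, 25.0):     # span of the reported linear range, µg/mL
    print(f"{c:5.2f} µg/mL -> A = {absorbance(c):.3f}")
```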

  17. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
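    The reweighting step at the heart of importance sampling can be demonstrated on a one-dimensional limit state with a known answer. The sketch below is deliberately non-adaptive, so it shows only the basic idea that the paper's AIS method refines, not the adaptive scheme itself:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
beta = 3.0                                     # failure when x > beta, x ~ N(0, 1)
exact = 0.5 * math.erfc(beta / math.sqrt(2))   # exact failure probability

n = 10_000
# Crude Monte Carlo: almost every sample lands in the safe region.
pf_mc = np.mean(rng.standard_normal(n) > beta)

# Importance sampling: draw from a density centered on the failure boundary
# and reweight by the ratio of the true PDF to the sampling PDF.
y = rng.normal(loc=beta, scale=1.0, size=n)
w = np.exp(-0.5 * y**2) / np.exp(-0.5 * (y - beta) ** 2)
pf_is = np.mean((y > beta) * w)

print(f"exact {exact:.2e}, crude MC {pf_mc:.2e}, IS {pf_is:.2e}")
```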

  18. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods has been hampered by their computationally intensive nature. Solution of PSM problems requires repeated analyses of structures that are often large and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large-scale PSM problems practical.

  19. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

    In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  20. Parallel ptychographic reconstruction

    PubMed Central

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-01-01

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

  1. Benchmarking massively parallel architectures

    SciTech Connect

    Lubeck, O.; Moore, J.; Simmons, M.; Wasserman, H.

    1993-07-01

    The purpose of this paper is to summarize some initial experiences related to measuring the performance of massively parallel processors (MPPs) at Los Alamos National Laboratory (LANL). Actually, the range of MPP architectures the authors have used is rather limited, being confined mostly to the Thinking Machines Corporation (TMC) Connection Machine CM-2 and CM-5. Some very preliminary work has been carried out on the Kendall Square KSR-1, and efforts related to other machines, such as the Intel Paragon and the soon-to-be-released CRAY T3D, are planned. This paper will concentrate more on methodology rather than discuss specific architectural strengths and weaknesses; the latter is expected to be the subject of future reports. MPP benchmarking is a field in critical need of structure and definition. As the authors have stated previously, such machines have enormous potential, and there is certainly a dire need for orders of magnitude computational power over current supercomputers. However, performance reports for MPPs must emphasize actual sustainable performance from real applications in a careful, responsible manner. Such has not always been the case. A recent paper has described in some detail the problem of potentially misleading performance reporting in the parallel scientific computing field. Thus, in this paper, the authors briefly offer a few general ideas on MPP performance analysis.

  2. Massively Parallel QCD

    SciTech Connect

    Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  3. Parallelization of a treecode

    E-print Network

    R. Valdarnini

    2003-03-18

    I describe here the performance of a parallel treecode with individual particle timesteps. The code is based on the Barnes-Hut algorithm and runs cosmological N-body simulations on parallel machines with a distributed memory architecture using the MPI message-passing library. For a configuration with a constant number of particles per processor, the scalability of the code was tested up to P=128 processors on an IBM SP4 machine. In the large-P limit the average CPU time per processor necessary for solving the gravitational interactions is ~10% higher than that expected from the ideal scaling relation. The processor domains are determined every large timestep according to a recursive orthogonal bisection, using a weighting scheme which takes into account the total particle computational load within the timestep. The results of the numerical tests show that the load balancing efficiency L of the code is high (>=90%) up to P=32, and decreases to L ~ 80% when P=128. In the latter case it is found that some aspects of the code performance are affected by machine hardware, while the proposed weighting scheme can achieve a load balance as high as L ~ 90% even in the large-P limit.
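    For orientation, the weighted recursive orthogonal bisection used for the domain decomposition can be sketched in a few lines. The following is a minimal serial sketch, assuming 2-D particles carrying an explicit load weight; all names and numbers are illustrative and are not taken from the paper's code.

        # Minimal sketch of weighted recursive orthogonal bisection (ORB).
        # Each particle is (x, y, weight); 'weight' stands in for the
        # per-particle computational load the paper's scheme accounts for.
        import random

        def orb(particles, n_domains, axis=0):
            if n_domains == 1:
                return [particles]
            particles = sorted(particles, key=lambda p: p[axis])
            half_load = sum(p[2] for p in particles) / 2.0
            acc, cut = 0.0, 0
            for i, p in enumerate(particles):
                acc += p[2]
                if acc >= half_load:
                    cut = i + 1
                    break
            next_axis = (axis + 1) % 2          # alternate the split direction
            left = orb(particles[:cut], n_domains // 2, next_axis)
            right = orb(particles[cut:], n_domains - n_domains // 2, next_axis)
            return left + right

        pts = [(random.random(), random.random(), random.uniform(0.5, 2.0))
               for _ in range(1000)]
        domains = orb(pts, 8)
        print([round(sum(p[2] for p in d), 1) for d in domains])  # per-domain load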

  4. Applied Parallel Metadata Indexing

    SciTech Connect

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, I implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
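    The backend described above can be pictured with a short, hedged sketch: metadata records imported into MongoDB, indexed per attribute, and queried by tag. The database, collection, and field names below are hypothetical, and the snippet assumes the pymongo driver and a local mongod instance.

        # Hedged sketch of the metadata-search backend idea: one collection
        # per user, one index per searchable attribute.
        from pymongo import MongoClient, ASCENDING

        client = MongoClient("mongodb://localhost:27017")
        coll = client["archive_md"]["user_jdoe"]      # hypothetical names

        coll.insert_many([
            {"path": "/archive/run42/out.h5", "size": 123456,
             "tags": ["simulation", "run42"]},
            {"path": "/archive/run43/out.h5", "size": 654321,
             "tags": ["simulation", "run43"]},
        ])
        coll.create_index([("tags", ASCENDING)])      # index the attribute

        for doc in coll.find({"tags": "run42"}):      # search by user tag
            print(doc["path"])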

  5. Nuclear weapon reliability evaluation methodology

    SciTech Connect

    Wright, D.L.

    1993-06-01

    This document provides an overview of those activities that are normally performed by Sandia National Laboratories to provide nuclear weapon reliability evaluations for the Department of Energy. These reliability evaluations are first provided as a prediction of the attainable stockpile reliability of a proposed weapon design. Stockpile reliability assessments are provided for each weapon type as the weapon is fielded and are continuously updated throughout the weapon stockpile life. The reliability predictions and assessments depend heavily on data from both laboratory simulation and actual flight tests. An important part of the methodology is the set of review opportunities that occur throughout the entire process, which assure a consistent approach and appropriate use of the data for reliability evaluation purposes.

  6. Reliability of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

    Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (thereby increasing the network lifetime) and to increase the reliability of the network (thereby improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (a multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs that considers the battery level as a key factor. Moreover, this model is based on the routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of power consumption on the reliability of WSNs. PMID:25157553
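    The reliability/power trade-off the abstract describes can be made concrete with a toy calculation: delivery over several independent paths succeeds if any one path succeeds, but every extra path consumes energy. The scaling of hop reliability by battery level below is an illustrative assumption, not the paper's model.

        # Toy multipath reliability model with battery-dependent hops.
        def path_reliability(hops):
            # hops: list of (base_link_reliability, battery_level in [0, 1])
            r = 1.0
            for base, battery in hops:
                r *= base * battery
            return r

        def multipath_reliability(paths):
            # Delivered if at least one path succeeds (independent paths).
            fail = 1.0
            for p in paths:
                fail *= 1.0 - path_reliability(p)
            return 1.0 - fail

        path_a = [(0.99, 0.90), (0.99, 0.80)]
        path_b = [(0.98, 0.70), (0.98, 0.95), (0.98, 0.90)]
        print(path_reliability(path_a))                  # one path alone
        print(multipath_reliability([path_a, path_b]))   # higher, but costs power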

  7. Human reliability assessment: tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas G.; Overlin, Trudy K.

    1997-01-01

    This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process for identifying, describing, quantifying, and interpreting the state of human performance, and for developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear trivial in other venues can make the difference between a successful and an unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and the adjudication of criminal proceedings. This paper concludes that, because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and significant payoff.

  8. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, T.

    1986-01-01

    A nonlinear structural dynamics program with an element library that exploits parallel processing is under development. The aim is to exploit scheduling-allocation so that parallel processing and vectorization can effectively be treated in a general purpose program. As a byproduct, an automatic scheme for assigning time steps was devised. A rudimentary form of the program is complete and has been tested; it shows that substantial advantage can be taken of parallelism. In addition, a stability proof for the subcycling algorithm has been developed.

  9. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... false Mission Critical Space System Personnel Reliability...Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...246-70 Mission Critical Space System Personnel...

  10. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... false Mission Critical Space System Personnel Reliability...Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...246-70 Mission Critical Space System Personnel...

  11. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... false Mission Critical Space System Personnel Reliability...Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...246-70 Mission Critical Space System Personnel...

  12. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... false Mission Critical Space System Personnel Reliability...Acquisition Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...246-70 Mission Critical Space System Personnel...

  13. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D; Williams, Mark L; Bowman, Stephen M

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

  14. Parallel computation of seismic analysis of high arch dam

    NASA Astrophysics Data System (ADS)

    Chen, Houqun; Ma, Huaifa; Tu, Jin; Cheng, Guangqing; Tang, Juzhen

    2008-03-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms of the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and configuration of the cracks can be directly simulated. In the seismic response analysis of ADs, all the following factors are involved, such as the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combining effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  15. Interrelation Between Safety Factors and Reliability

    NASA Technical Reports Server (NTRS)

    Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    An evaluation was performed to establish the relationship between safety factors and reliability. Results obtained show that the use of safety factors is not contradictory to the employment of probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
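    For the Part 1 case (random actual stress, deterministic yield stress), the link between the two concepts can be shown with a short worked sketch: if the stress is normal with a known coefficient of variation, the central safety factor follows directly from the required reliability. The normality assumption and the numbers are illustrative, not taken from the report.

        # Safety factor implied by a required reliability, Part 1 case:
        # R = P(stress < yield) with stress ~ Normal(mu, sigma).
        # Then yield = mu + z_R * sigma, so n = yield/mu = 1 + z_R * COV.
        from statistics import NormalDist

        cov = 0.10                       # coefficient of variation of stress
        for R in (0.99, 0.999, 0.9999):
            z = NormalDist().inv_cdf(R)  # standard normal quantile
            n = 1.0 + z * cov            # central safety factor
            print(f"reliability {R}: safety factor {n:.3f}")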

  16. Parallel Monte Carlo reactor neutronics

    SciTech Connect

    Blomquist, R.N.; Brown, F.B.

    1994-03-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved.

  17. Toward Parallel Document Clustering

    SciTech Connect

    Mogill, Jace A.; Haglin, David J.

    2011-09-01

    A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion-dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program, and initial performance results of an end-to-end document-processing workflow are reported.

  18. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  19. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  20. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

    By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  1. Parallel TreeSPH

    NASA Astrophysics Data System (ADS)

    Davé, Romeel; Dubinski, John; Hernquist, Lars

    1997-08-01

    We describe PTreeSPH, a gravity treecode combined with an SPH hydrodynamics code designed for parallel supercomputers having distributed memory. Our computational algorithm is based on the popular TreeSPH code of Hernquist & Katz (1989) [ApJS, 70, 419]. PTreeSPH utilizes a domain decomposition procedure and a synchronous hypercube communication paradigm to build self-contained subvolumes of the simulation on each processor at every timestep. Computations then proceed in a manner analogous to a serial code. We use the Message Passing Interface (MPI) communications package, making our code easily portable to a variety of parallel systems. PTreeSPH uses individual smoothing lengths and timesteps, with a communication algorithm designed to minimize exchange of information while still providing all information required to accurately perform SPH computations. We have incorporated periodic boundary conditions with forces calculated using a quadrupole Ewald summation method, and comoving integration under a variety of cosmologies. Following algorithms presented in Katz et al. (1996) [ApJS, 105, 19], we have also included radiative cooling, heating from a parameterized ionizing background, and star formation. A cosmological simulation from z = 49 to z = 2 with 64^3 gas particles and 64^3 dark matter particles requires ~1800 node-hours on a Cray T3D, with a communications overhead of ~8%, load balanced to the ~95% level. When used on the new Cray T3E, this code will be capable of performing cosmological hydrodynamical simulations down to z = 0 with ~2 x 10^6 particles, or to z = 2 with ~10^7 particles, in a reasonable amount of time. Even larger simulations will be practical in situations where the matter is not highly clustered or when periodic boundaries are not required.

  2. Parallel computing: One opportunity, four challenges

    SciTech Connect

    Gaudiot, J.-L.

    1989-12-31

    The author briefly reviews the area of parallel computer processing, which has been expanding at a great rate in the past decade. Great strides have been made in the hardware area and in chip performance. However, the hardware area is beginning to run into basic physical speed limits, which will slow its rate of advance. The author looks at ways that computer architecture and software applications can work to continue the rate of increase in computing power which has occurred over the past decade. Four particular areas are mentioned: programmability; communication network design; reliable operation; and performance evaluation and benchmarking.

  3. Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again

    ERIC Educational Resources Information Center

    Morley, Donald D.

    2012-01-01

    The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

  4. Reliability and structural integrity. [analytical model for calculating crack detection probability

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1973-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.
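    The flavor of the Bayesian step can be shown with a one-line update: the probability that a crack is present after an inspection that found nothing. The prior p and the detection probability d below are illustrative inputs, not values from the report, and the sketch ignores crack growth between inspections.

        # P(crack present | inspection found nothing), by Bayes' theorem.
        def crack_prob_after_clean_inspection(p, d):
            # p: prior probability a crack is present
            # d: probability an existing crack is detected
            return p * (1 - d) / (p * (1 - d) + (1 - p))

        print(crack_prob_after_clean_inspection(p=0.01, d=0.9))  # ~0.001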

  5. The Parallel Java 2 Library Parallel Programming in 100% Java

    E-print Network

    Kaminsky, Alan

    ... written using Nvidia's CUDA. PJ2 also includes a lightweight map-reduce framework for big data parallel ... the chosen language ... should support all the paradigms--multicore, cluster, GPU, big data, and so on ... Nvidia's CUDA for GPU parallel programming, Apache's Hadoop for map-reduce big data programming. None ...

  6. Cerebro : forming parallel internets and enabling ultra-local economies

    E-print Network

    Ypodimatopoulos, Polychronis Panagiotis

    2008-01-01

    Internet-based mobile communications have been increasing rapidly [5], yet there is little or no progress in platforms that enable applications for discovery, context-awareness and sharing of data and services in a peer-wise ...

  7. Revised, final form, July 1994 Domain Decomposition, Parallel Computing and

    E-print Network

    Bjørstad, Petter E.

    ... purposes, for reservoir management and for prediction of the reservoir performance [3]. Another field ... hydrocarbons and other chemicals trapped in tiny pores in the rock. If the rock permits and if the fluid ... By injection of additional fluids and the release of pressure through the production of fluids at wells ...

  8. Computation and parallel implementation for early vision

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    1990-01-01

    The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher-level vision tasks, including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.

  9. Evaluation of fault-tolerant parallel-processor architectures over long space missions

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1989-01-01

    The impact of a five-year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10^-7. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.
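    The kind of sizing such a requirement drives can be illustrated with a back-of-envelope binomial calculation: how many processors n must be flown so that at least 256 still work after five years. The per-processor failure rate and the independence assumption below are illustrative, not taken from the paper.

        # How many processors are needed so that >= 256 of n survive a
        # 5-year mission with probability >= 0.99, assuming independent
        # exponential failures (illustrative rate).
        from math import comb, exp

        def p_at_least(n, k, p):
            return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                       for i in range(k, n + 1))

        lam = 1e-6                             # failures per processor-hour (assumed)
        p_survive = exp(-lam * 5 * 365 * 24)   # ~0.957 over five years
        n = 256
        while p_at_least(n, 256, p_survive) < 0.99:
            n += 1
        print(n, p_at_least(n, 256, p_survive))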

  10. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Reliability Standards... RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file each Reliability Standard or modification to a Reliability Standard that it proposes to be made effective...

  11. Parallel Smoothed Aggregation Multigrid: Aggregation Strategies on Massively Parallel Machines

    SciTech Connect

    Ray S. Tuminaro

    2000-11-09

    Algebraic multigrid methods offer the hope that multigrid convergence can be achieved (for at least some important applications) without a great deal of effort from engineers and scientists wishing to solve linear systems. In this paper the authors consider parallelization of the smoothed aggregation multi-grid method. Smoothed aggregation is one of the most promising algebraic multigrid methods. Therefore, developing parallel variants with both good convergence and efficiency properties is of great importance. However, parallelization is nontrivial due to the somewhat sequential aggregation (or grid coarsening) phase. In this paper, they discuss three different parallel aggregation algorithms and illustrate the advantages and disadvantages of each variant in terms of parallelism and convergence. Numerical results will be shown on the Intel Teraflop computer for some large problems coming from nontrivial codes: quasi-static electric potential simulation and a fluid flow calculation.

  12. J. Parallel Distrib. Comput. 73 (2013) 371-382

    E-print Network

    Wu, Jie

    2013-01-01

    J. Parallel Distrib. Comput., journal homepage: www.elsevier.com/locate/jpdc ... Building a reliable and high ... Department of Computer and Information Sciences, Temple University, Philadelphia, USA ... Keywords: distributed system; high-performance system design ... Abstract: Provisioning reliability in a high ...

  13. Hydrologic Terrain Processing Using Parallel Computing

    NASA Astrophysics Data System (ADS)

    Tarboton, D. G.; Watson, D. W.; Wallace, R. M.; Schreuders, K.; Tesfa, T. K.

    2009-12-01

    Topography, in the form of Digital Elevation Models (DEMs), is widely used to derive information for the modeling of hydrologic processes. Hydrologic terrain analysis augments the information content of digital elevation data by removing spurious pits, deriving a structured flow field, and calculating surfaces of hydrologic information derived from the flow field. The increasing availability of high-resolution terrain datasets for large areas poses a challenge for existing algorithms that process terrain data to extract this hydrologic information. This paper describes parallel algorithms that have been developed to enhance hydrologic terrain pre-processing so that larger datasets can be computed more efficiently. Message Passing Interface (MPI) parallel implementations have been developed for pit removal, flow direction, and generalized flow accumulation methods within the Terrain Analysis Using Digital Elevation Models (TauDEM) package. The parallel algorithm works by decomposing the domain into striped or tiled data partitions, where each tile is processed by a separate processor. This method also reduces the memory requirements of each processor so that larger grids can be processed. The parallel pit removal algorithm is adapted from the method of Planchon and Darboux, which starts from a high elevation and then progressively scans the grid, lowering each grid cell to the maximum of the original elevation or the lowest neighbor. The MPI implementation reconciles elevations along process domain edges after each scan. Generalized flow accumulation extends flow accumulation approaches commonly available in GIS through the integration of multiple inputs and a broad class of algebraic rules into the calculation of flow-related quantities. It is based on establishing a flow field through DEM grid cells, which is then used to evaluate any mathematical function that incorporates dependence on values of the quantity being evaluated at upslope (or downslope) grid cells as well as other input quantities. The parallel generalized flow accumulation implementation relies on a dependency grid initialized with the number of upslope grid cells, which is reduced as each upslope cell is evaluated so as to track, via a ready queue, when each grid cell is ready for computation. The parallel implementations of these terrain analysis methods have enabled the processing of grids larger than were possible using the memory-based single-processor implementation, as well as reducing computation times when run on multi-core desktop workstations and parallel computing clusters.
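    As a concrete reference point for the pit removal step, here is a minimal serial sketch of the Planchon-Darboux idea the parallel algorithm builds on: start from a very high surface, pin the boundary to its true elevation, and repeatedly lower each interior cell toward max(elevation, lowest neighbor + epsilon). The grid and epsilon are illustrative; the MPI version additionally reconciles tile edges after each scan.

        # Serial sketch of Planchon-Darboux pit filling (4-neighbor version).
        def fill_pits(z, eps=1e-3):
            rows, cols = len(z), len(z[0])
            INF = float("inf")
            w = [[INF] * cols for _ in range(rows)]
            for r in range(rows):
                for c in range(cols):
                    if r in (0, rows - 1) or c in (0, cols - 1):
                        w[r][c] = z[r][c]        # boundary cells drain freely
            changed = True
            while changed:
                changed = False
                for r in range(1, rows - 1):
                    for c in range(1, cols - 1):
                        lowest = min(w[r-1][c], w[r+1][c], w[r][c-1], w[r][c+1])
                        new = max(z[r][c], lowest + eps)
                        if new < w[r][c]:
                            w[r][c] = new
                            changed = True
            return w

        dem = [[5, 5, 5, 5],
               [5, 1, 2, 5],
               [5, 2, 1, 5],
               [5, 5, 5, 5]]
        print(fill_pits(dem))   # the interior pit is raised until it can drain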

  14. Stirling Convertor Fasteners Reliability Quantification

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin R.; Korovaichuk, Igor; Kovacevich, Tiodor; Schreiber, Jeffrey G.

    2006-01-01

    Onboard Radioisotope Power Systems (RPS) being developed for NASA's deep-space science and exploration missions require reliable operation for up to 14 years and beyond. Stirling power conversion is a candidate for use in an RPS because it offers a multifold increase in the conversion efficiency of heat to electric power and a reduced inventory of radioactive material. Structural fasteners are responsible for maintaining the structural integrity of the Stirling power convertor, which is critical to ensure reliable performance during the entire mission. The design of fasteners involves variables related to fabrication, manufacturing, the material behavior of the fasteners and joined parts, the structural geometry of the joined components, the size and spacing of fasteners, mission loads, boundary conditions, etc. These variables have inherent uncertainties, which need to be accounted for in the reliability assessment. This paper describes these uncertainties along with a methodology to quantify the reliability, and provides results of the analysis in terms of quantified reliability and the sensitivity of Stirling power conversion reliability to the design variables. Quantification of the reliability includes both structural and functional aspects of the joined components. Based on the results, the paper also describes guidelines to improve the reliability and verification testing.

  15. Computer-Aided Reliability Estimation

    NASA Technical Reports Server (NTRS)

    Bavuso, S. J.; Stiffler, J. J.; Bryant, L. A.; Petersen, P. L.

    1986-01-01

    CARE III (Computer-Aided Reliability Estimation, Third Generation) helps estimate the reliability of complex, redundant, fault-tolerant systems. The program is specifically designed for the evaluation of fault-tolerant avionics systems. However, CARE III is general enough for use in the evaluation of other systems as well.

  16. The Reliability of Density Measurements.

    ERIC Educational Resources Information Center

    Crothers, Charles

    1978-01-01

    Data from a land-use study of small- and medium-sized towns in New Zealand are used to ascertain the relationship between official and effective density measures. It was found that the reliability of official measures of density is very low overall, although reliability increases with community size. (Author/RLV)

  17. Development of a Parallel Redundant STATCOM System

    NASA Astrophysics Data System (ADS)

    Takeda, Masatoshi; Yasuda, Satoshi; Tamai, Shinzo; Morishima, Naoki

    This paper presents a new concept for a parallel redundant STATCOM system. The system consists of a number of medium-capacity STATCOM units connected in parallel, which can achieve high operational reliability and functional flexibility. The proposed STATCOM system has redundant operating characteristics such that the remaining STATCOM units can maintain operation even when some of the units are out of service. It also has flexible convertibility, so that it can easily be converted to a BTB or a UPFC system according to the diversified and changing needs of power systems. In order to realize this concept, the authors developed several important key technologies for the STATCOM, such as a novel PWM scheme that enables effective cancellation of lower-order harmonics, GCT inverter technologies with low losses, and a coordination control scheme with capacitor banks to ensure effective dynamic performance with minimum losses. The proposed STATCOM system was put into practical applications, exhibiting excellent performance characteristics at each site.

  18. Statistical modeling of software reliability

    NASA Technical Reports Server (NTRS)

    Miller, Douglas R.

    1992-01-01

    This working paper discusses the statistical simulation part of a controlled software development experiment being conducted under the direction of the System Validation Methods Branch, Information Systems Division, NASA Langley Research Center. The experiment uses guidance and control software (GCS) aboard a fictitious planetary landing spacecraft: real-time control software operating on a transient mission. Software execution is simulated to study the statistical aspects of reliability and other failure characteristics of the software during development, testing, and random usage. Quantification of software reliability is a major goal. Various reliability concepts are discussed. Experiments are described for performing simulations and collecting appropriate simulated software performance and failure data. This data is then used to make statistical inferences about the quality of the software development and verification processes as well as inferences about the reliability of software versions and reliability growth under random testing and debugging.

  19. Photovoltaic performance and reliability workshop

    SciTech Connect

    Mrig, L.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more so. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange technical knowledge and field experience related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop, held in September 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  20. Probability interpretations of intraclass reliabilities.

    PubMed

    Ellis, Jules L

    2013-11-20

    Research where many organizations are rated by different samples of individuals such as clients, patients, or employees frequently uses reliabilities computed from intraclass correlations. Consumers of statistical information, such as patients and policy makers, may not have sufficient background for deciding which levels of reliability are acceptable. It is shown that the reliability is related to various probabilities that may be easier to understand, for example, the proportion of organizations that will be classed significantly above (or below) the mean and the probability that an organization is classed correctly given that it is classed significantly above (or below) the mean. One can view these probabilities as the amount of information of the classification and the correctness of the classification. These probabilities have an inverse relationship: given a reliability, one can 'buy' correctness at the cost of informativeness and conversely. This article discusses how this can be used to make judgments about the required level of reliabilities. PMID:23703932

  1. Reliability-based design optimization using efficient global reliability analysis.

    SciTech Connect

    Bichon, Barron J.; Mahadevan, Sankaran; Eldred, Michael Scott

    2010-05-01

    Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.

  2. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

    A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.
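    For orientation, the serial ancestor of such methods is easy to sketch. Below is a cyclic Jacobi sweep for a real symmetric matrix -- a simpler relative of the norm-reducing Eberlein method the paper parallelizes for general complex matrices -- with an illustrative test matrix.

        # Cyclic Jacobi sweeps for a real symmetric matrix (serial sketch).
        import numpy as np

        def jacobi_eigenvalues(A, sweeps=10):
            A = A.astype(float).copy()
            n = A.shape[0]
            for _ in range(sweeps):
                for p in range(n - 1):
                    for q in range(p + 1, n):
                        if abs(A[p, q]) < 1e-12:
                            continue
                        # Rotation angle chosen to zero A[p, q].
                        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                        c, s = np.cos(theta), np.sin(theta)
                        J = np.eye(n)       # full matrix for clarity, not speed
                        J[p, p] = J[q, q] = c
                        J[p, q], J[q, p] = s, -s
                        A = J.T @ A @ J
            return np.sort(np.diag(A))

        M = np.array([[4.0, 1.0, 0.5],
                      [1.0, 3.0, 0.2],
                      [0.5, 0.2, 2.0]])
        print(jacobi_eigenvalues(M))        # matches np.linalg.eigvalsh(M)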

  3. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

    Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  4. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

    This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.

  5. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  6. Sub-Second Parallel State Estimation

    SciTech Connect

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of its fast computational speed for power system applications. The test data were provided by BPA: two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, more than 10 times faster than today's commercial tools. This improved computational performance can increase the reliability value of state estimation in many respects: (1) the shorter the time required to execute state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, which increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so its robustness can be enhanced by repeating the execution with adaptive adjustments, including removing bad data and/or adjusting initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits as well: sub-second PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will perform better than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
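    The kernel a state estimator solves over and over is a weighted least-squares fit of states to redundant measurements; this is the computation whose speed the parallel solvers improve. The linear measurement model, weights, and values below are invented for illustration only, not taken from the BPA data.

        # Toy weighted least-squares state estimation kernel.
        import numpy as np

        H = np.array([[1.0,  0.0],          # measurement model (illustrative)
                      [0.0,  1.0],
                      [1.0, -1.0],
                      [1.0,  1.0]])
        sigma = np.array([0.01, 0.01, 0.02, 0.02])
        W = np.diag(1.0 / sigma**2)          # weights = 1 / variance

        x_true = np.array([1.02, 0.97])
        z = H @ x_true + np.random.default_rng(0).normal(0.0, sigma)

        # Normal equations: x_hat = (H^T W H)^-1 H^T W z
        x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
        print(x_hat)                         # close to x_true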

  7. Highly reliable PLC systems

    SciTech Connect

    Beckman, L.V.

    1995-03-01

    Today's control engineers are afforded many options when designing microprocessor-based systems for safety applications. The use of some form of redundancy is typical, but the final selection must match the requirements of the application. Should the system be fail-safe or fault-tolerant? Is safety the overriding consideration, or is production a concern as well? Are redundant PLCs (Programmable Logic Controllers) adequate, or should a system specifically designed for safety applications be utilized? There is a considerable effort in progress, both in the USA and in Europe, to establish guidelines and standards which match the safety integrity of the system with the degree of risk inherent in the application. This paper is intended to provide an introduction to the subject and explore some of the microprocessor-based alternatives available to the control or safety engineer.

  8. A reliable multicast for XTP

    NASA Technical Reports Server (NTRS)

    Dempsey, Bert J.; Weaver, Alfred C.

    1990-01-01

    Multicast services needed for current distributed applications on LAN's fall generally into one of three categories: datagram, semi-reliable, and reliable. Transport layer multicast datagrams represent unreliable service in which the transmitting context 'fires and forgets'. XTP executes these semantics when the MULTI and NOERR mode bits are both set. Distributing sensor data and other applications in which application-level error recovery strategies are appropriate benefit from the efficiency in multidestination delivery offered by datagram service. Semi-reliable service refers to multicasting in which the control algorithms of the transport layer--error, flow, and rate control--are used in transferring the multicast distribution to the set of receiving contexts, the multicast group. The multicast defined in XTP provides semi-reliable service. Since, under a semi-reliable service, joining a multicast group means listening on the group address and entails no coordination with other members, a semi-reliable facility can be used for communication between a client and a server group as well as true peer-to-peer group communication. Resource location in a LAN is an important application domain. The term 'semi-reliable' refers to the fact that group membership changes go undetected. No attempt is made to assess the current membership of the group at any time--before, during, or after--the data transfer.

  9. Asymptotically reliable transport of multimedia/graphics over wireless channels

    NASA Astrophysics Data System (ADS)

    Han, Richard Y.; Messerschmitt, David G.

    1996-03-01

    We propose a multiple-delivery transport service tailored for graphics and video transported over connections with wireless access. This service operates at the interface between the transport and application layers, balancing the subjective delay and image quality objectives of the application with the low reliability and limited bandwidth of the wireless link. While techniques like forward error correction, interleaving, and retransmission improve reliability over wireless links, they also increase latency substantially when bandwidth is limited. Certain forms of interactive multimedia datatypes can benefit from an initial delivery of a corrupt packet to lower the perceptual latency, as long as reliable delivery occurs eventually. Multiple delivery of successively refined versions of the received packet, terminating when a sufficiently reliable version arrives, exploits the redundancy inherently required to improve reliability without a traffic penalty. Modifications to automatic repeat request (ARQ) methods to implement this transport service are proposed, which we term 'leaky ARQ'. For the specific case of pixel-coded window-based text/graphics, we describe additional functions needed to more effectively support urgent delivery and asymptotic reliability. X server emulation suggests that users will accept a multi-second delay between a (possibly corrupt) packet and the ultimate reliably-delivered version. The relaxed delay for reliable delivery can be exploited for traffic capacity improvement using scheduling of retransmissions.

  10. Aerospace reliability applied to biomedicine.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  11. Integrating reliability analysis and design

    SciTech Connect

    Rasmuson, D. M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems.

  12. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Lau, Sonie; Yan, Jerry C.

    1991-01-01

    Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

  13. Is Monte Carlo embarrassingly parallel?

    SciTech Connect

    Hoogenboom, J. E.

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle of fission source generation, in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
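    The cycle-wise rendezvous the paper analyzes has a simple shared-nothing analogue: workers simulate independently within a cycle, then all of them must synchronize before the next cycle starts. The pi estimator below is only a stand-in for fission source generation; the structure, not the physics, is the point.

        # Parallel Monte Carlo with a per-cycle rendezvous (sketch).
        import random
        from multiprocessing import Pool

        def cycle_chunk(args):
            seed, n = args
            rng = random.Random(seed)
            return sum(rng.random()**2 + rng.random()**2 <= 1.0
                       for _ in range(n))

        if __name__ == "__main__":
            workers, per_worker, cycles = 4, 50_000, 10
            hits = total = 0
            with Pool(workers) as pool:
                for cycle in range(cycles):
                    args = [(cycle * workers + w, per_worker)
                            for w in range(workers)]
                    # map() blocks until every worker finishes: the rendezvous.
                    for h in pool.map(cycle_chunk, args):
                        hits += h
                        total += per_worker
            print(4.0 * hits / total)        # estimate of pi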

  14. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.

  15. VCSEL-based parallel optical transmission module

    NASA Astrophysics Data System (ADS)

    Shen, Rongxuan; Chen, Hongda; Zuo, Chao; Pei, Weihua; Zhou, Yi; Tang, Jun

    2005-02-01

    This paper describes the design process and performance of an optimized parallel optical transmission module. Based on a 1×12 VCSEL (Vertical Cavity Surface Emitting Laser) array, we designed and fabricated high speed parallel optical modules. Our parallel optical module contains a 1×12 VCSEL array, a 12 channel CMOS laser driver circuit, a high speed PCB (Printed Circuit Board), an MT fiber connector, and a packaging housing. The L-I-V characteristics of the 850 nm VCSEL were measured: at an operating current of 8 mA, the 3 dB frequency bandwidth exceeds 3 GHz and the optical output is 1 mW. The aggregate transmission rate over all 12 channels is 30 Gbit/s, with 2.5 Gbit/s per channel. By integrating the 1×12 VCSEL array with the driver array, we built a high speed PCB to supply the optoelectronic chip with its operating voltage and high speed signal currents. LVDS (Low-Voltage Differential Signaling) was chosen for the input signals to achieve better high frequency performance. Active coupling was adopted with an MT connector (8° slanted fiber array). We used Small Form Factor Pluggable (SFP) packaging. With the edge connector, the module can be inserted into the system without a bonding process.

  16. Parallel search of strongly ordered game trees

    SciTech Connect

    Marsland, T.A.; Campbell, M.

    1982-12-01

    The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha-beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha-beta algorithm have been developed by people building game-playing programs, and many of these methods are surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
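
    As a concrete reference point, here is a minimal negamax-style alpha-beta search (a textbook sketch, not code from the paper); the toy tree and evaluation function are illustrative. The earlier a subtree's best move is visited, the sooner the beta cutoff fires, which is the strong-ordering property the surveyed enhancements exploit.

      def alpha_beta(node, alpha, beta, evaluate, children):
          kids = children(node)
          if not kids:                      # leaf: static evaluation
              return evaluate(node)
          for child in kids:
              score = -alpha_beta(child, -beta, -alpha, evaluate, children)
              if score >= beta:
                  return score              # beta cutoff: remaining siblings pruned
              alpha = max(alpha, score)
          return alpha

      # Toy game tree as nested lists; integer leaves are evaluations.
      tree = [[3, 17], [2, 12], [15, [25, 0]], [2, 5]]
      value = alpha_beta(
          tree, float("-inf"), float("inf"),
          evaluate=lambda n: n,
          children=lambda n: n if isinstance(n, list) else [],
      )
      print("minimax value:", value)        # root is the maximizing player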

  17. Instrumentation for parallel magnetic resonance imaging 

    E-print Network

    Brown, David Gerald

    2007-04-25

    This image is formed by the application of a series of RF and static magnetic field gradient pulses (a pulse sequence) which interact with the nuclear magnetic dipoles, or spins, contained within the sample. Pulse sequences are used to scan... the process of assembling the prototype 64-channel parallel receiver system. IGC, Inc. is also gratefully acknowledged for its generous donation of a 0.16 T, whole-body permanent magnet. The IGC magnet was used as a testbed for the development of RF coils...

  18. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
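
    A hedged sketch of the template idea (not the patented implementation): split a node's checkpoint data into fixed-size blocks, checksum each block, and save only the compressed blocks whose checksums differ from the previously stored template checkpoint. The block size, hash choice, and names are illustrative assumptions.

      import hashlib
      import zlib

      BLOCK = 4096

      def checksums(data: bytes):
          """Per-block digests of the checkpoint data."""
          return [hashlib.md5(data[i:i + BLOCK]).digest()
                  for i in range(0, len(data), BLOCK)]

      def delta_checkpoint(current: bytes, template_sums):
          """Return {block_index: compressed_block} for blocks not in the template."""
          delta = {}
          for i, csum in enumerate(checksums(current)):
              if i >= len(template_sums) or csum != template_sums[i]:
                  delta[i] = zlib.compress(current[i * BLOCK:(i + 1) * BLOCK])
          return delta

      template = bytes(64 * BLOCK)                 # previously saved checkpoint
      node_state = bytearray(template)
      node_state[10 * BLOCK + 7] = 0xFF            # only one block has changed

      delta = delta_checkpoint(bytes(node_state), checksums(template))
      print(f"{len(delta)} of 64 blocks need saving")   # -> 1 of 64 blocks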

  19. EFFICIENT SCHEDULING OF PARALLEL JOBS ON MASSIVELY PARALLEL SYSTEMS

    SciTech Connect

    F. PETRINI; W. FENG

    1999-09-01

    We present buffered coscheduling, a new methodology to multitask parallel jobs in a message-passing environment and to develop parallel programs that can pave the way to the efficient implementation of a distributed operating system. Buffered coscheduling is based on three innovative techniques: communication buffering, strobing, and non-blocking communication. By leveraging these techniques, we can perform effective optimizations based on the global status of the parallel machine rather than on the limited knowledge available locally to each processor. The advantages of buffered coscheduling include higher resource utilization, reduced communication overhead, efficient implementation of flow-control strategies and fault-tolerant protocols, accurate performance modeling, and a simplified yet still expressive parallel programming model. Preliminary experimental results show that buffered coscheduling is very effective in increasing the overall performance in the presence of load imbalance and communication-intensive workloads.

  20. Parallel Architecture For Robotics Computation

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1990-01-01

    Universal Real-Time Robotic Controller and Simulator (URRCS) is highly parallel computing architecture for control and simulation of robot motion. Result of extensive algorithmic study of different kinematic and dynamic computational problems arising in control and simulation of robot motion. Study led to development of class of efficient parallel algorithms for these problems. Represents algorithmically specialized architecture, in sense capable of exploiting common properties of this class of parallel algorithms. System with both MIMD and SIMD capabilities. Regarded as processor attached to bus of external host processor, as part of bus memory.

  1. Parallel inverse iteration with reorthogonalization

    SciTech Connect

    Fann, G.I.; Littlefield, R.J.

    1993-03-01

    A parallel method for finding orthogonal eigenvectors of real symmetric tridiagonal matrices is described. The method uses inverse iteration with repeated Modified Gram-Schmidt (MGS) reorthogonalization of the unconverged iterates for clustered eigenvalues. This approach is more parallelizable than reorthogonalizing against fully converged eigenvectors, as is done by LAPACK's current DSTEIN routine. The new method is found to provide accuracy and speed comparable to DSTEIN's and to have good parallel scalability even for matrices with large clusters of eigenvalues. We present results for residual and orthogonality tests, plus timings on IBM RS/6000 (sequential) and Intel Touchstone DELTA (parallel) computers.
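
    A hedged numpy sketch of the idea (not the authors' implementation): inverse iteration for a cluster of eigenvalues, reorthogonalizing the whole set of unconverged iterates by MGS after every sweep. The test matrix, iteration count, and shift offset are illustrative.

      import numpy as np

      def mgs(V):
          """Orthonormalize the columns of V by Modified Gram-Schmidt."""
          Q = V.copy()
          for j in range(Q.shape[1]):
              for k in range(j):
                  Q[:, j] -= (Q[:, k] @ Q[:, j]) * Q[:, k]
              Q[:, j] /= np.linalg.norm(Q[:, j])
          return Q

      def inverse_iteration_cluster(T, shifts, iters=5):
          """Inverse iteration for a cluster, reorthogonalizing all iterates."""
          n = T.shape[0]
          V = mgs(np.random.default_rng(0).standard_normal((n, len(shifts))))
          for _ in range(iters):
              for j, mu in enumerate(shifts):
                  V[:, j] = np.linalg.solve(T - mu * np.eye(n), V[:, j])
              V = mgs(V)        # MGS over the unconverged cluster iterates
          return V

      # Symmetric tridiagonal test matrix with a tight 3-eigenvalue cluster.
      d = np.array([1.0, 1.0 + 1e-8, 1.0 + 2e-8, 5.0, 9.0])
      T = np.diag(d) + np.diag(np.full(4, 1e-6), 1) + np.diag(np.full(4, 1e-6), -1)
      lam = np.linalg.eigvalsh(T)[:3]       # the clustered eigenvalues
      V = inverse_iteration_cluster(T, lam + 1e-10)   # shifts nudged off exact
      print("orthogonality error:", np.abs(V.T @ V - np.eye(3)).max())
      print("max residual:", np.abs(T @ V - V * lam).max())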

  3. Reliability consideration for erratic loadings

    SciTech Connect

    Walrond, S.P.; Sharma, C.

    1995-12-31

    Traditionally, power system reliability studies have been concerned with the modelling and evaluation of various systems, whether transmission and distribution lines or generating components. The concept of reliability considers a component as either nonrepairable or repairable. In the latter case, the reliability is a measure of the component's availability for a specified period of time. This paper examines the impact of a steel plant's demand on the availability of the generating units of an island utility. The load is quite erratic and is speculated to have a deleterious effect on the life of the generating machines committed to meeting the customer's requirements. These machines are normally on speed (frequency) control and track the ramping rate of the client. The paper attempts to quantify the increased maintenance due to the load, as well as changes in outage rates, and thereby determine the impact on the reliability and availability of these sets.

  4. Reliability and Maintainability (RAM) Training

    NASA Technical Reports Server (NTRS)

    Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

    2000-01-01

    The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

  5. Parallelizing Sequential Programs With Statistical Accuracy Tests

    E-print Network

    Misailovic, Sasa

    2010-08-05

    We present QuickStep, a novel system for parallelizing sequential programs. QuickStep deploys a set of parallelization transformations that together induce a search space of candidate parallel programs. Given a sequential ...

  6. Parallel Marker Based Image Segmentation with Watershed

    E-print Network

    Parallel Marker Based Image Segmentation with Watershed Transformation. Alina N. Moga. Abstract: The parallel watershed transformation is combined with region homogeneity measures... Boundary-based region merging is then effected to condense non...

  7. ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
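
    An illustrative sketch (not ETARA itself) of the kind of Monte Carlo RAM simulation described: a 1-of-2 parallel block of repairable units with exponential failure and repair times, with availability estimated as the fraction of mission time during which at least one unit is up. All rates, durations, and trial counts are arbitrary stand-ins.

      import random

      def unit_timeline(mtbf, mttr, mission, dt, rng):
          """Sample the up/down state of one repairable unit on a time grid."""
          states, up = [], True
          t, t_next = 0.0, rng.expovariate(1.0 / mtbf)
          while t < mission:
              while t >= t_next:            # apply any pending failure/repair
                  up = not up
                  t_next += rng.expovariate(1.0 / (mtbf if up else mttr))
              states.append(up)
              t += dt
          return states

      rng = random.Random(42)
      mission, dt, trials = 1000.0, 1.0, 200
      avail = 0.0
      for _ in range(trials):
          u1 = unit_timeline(100.0, 5.0, mission, dt, rng)
          u2 = unit_timeline(100.0, 5.0, mission, dt, rng)
          # 1-of-2 parallel block: the system is up if either unit is up.
          avail += sum(a or b for a, b in zip(u1, u2)) / len(u1)
      print(f"estimated availability of the parallel pair: {avail / trials:.4f}")
      print(f"analytic single-unit availability: {100.0 / 105.0:.4f}")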

  8. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Reliability Standards... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file...

  9. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reliability reports. 39... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11 Reliability reports. (a) The Electric Reliability Organization shall...

  10. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Reliability Standards... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability Organization shall file...

  11. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Reliability reports. 39... RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE ESTABLISHMENT, APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11 Reliability reports. (a) The Electric Reliability Organization shall...

  12. Reliable Measures for Aligning Japanese-English News Articles and Sentences

    E-print Network

    Reliable Measures for Aligning Japanese-English News Articles and Sentences. Masao Utiyama and Hitoshi Isahara. ...articles and sentences to make a large parallel corpus. We first used a method based on cross-language information retrieval (CLIR) to align the Japanese and English articles and then used a method based...

  13. Continuation of research in the statistical aspects of reliability, availability, and maintainability

    NASA Astrophysics Data System (ADS)

    Hollander, Myles

    1994-11-01

    Research areas included standby redundancy policies, redundancy allocations in series and parallel systems, goodness-of-fit tests for censored data, autopsy models, nonparametric methods for imperfect repair, inference for systems operating in different environments, and dynamic reliability models. Twenty-nine technical reports were written and twenty-six papers were published during the period.

  14. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

  15. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

  16. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
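
    A hedged sketch of the task-system half of the model: a series-parallel task graph evaluated bottom-up, summing expected times along series chains and taking the longest branch across parallel sections. The queuing-network treatment of resource contention, which the method couples to this, is omitted, so this is only the contention-free skeleton; all task times are invented.

      from dataclasses import dataclass
      from typing import List, Union

      @dataclass
      class Task:
          time: float

      @dataclass
      class Series:
          parts: List["Node"]

      @dataclass
      class Parallel:
          branches: List["Node"]

      Node = Union[Task, Series, Parallel]

      def expected_time(node: Node) -> float:
          """Bottom-up evaluation of a series-parallel DAG."""
          if isinstance(node, Task):
              return node.time
          if isinstance(node, Series):
              return sum(expected_time(p) for p in node.parts)
          return max(expected_time(b) for b in node.branches)

      # Setup phase, then three parallel workers, then a reduction step.
      graph = Series([Task(2.0),
                      Parallel([Task(5.0), Task(7.0),
                                Series([Task(3.0), Task(3.0)])]),
                      Task(1.0)])
      print("predicted execution time:", expected_time(graph))  # 2 + 7 + 1 = 10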

  17. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  18. Parallel Networks for Machine Vision

    E-print Network

    Horn, Berthold K.P.

    1988-12-01

    The amount of computation required to solve many early vision problems is prodigious, and so it has long been thought that systems that operate in a reasonable amount of time will only become feasible when parallel ...

  19. Turbomachinery CFD on parallel computers

    NASA Technical Reports Server (NTRS)

    Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

    1992-01-01

    The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

  20. Parallel execution for conflicting transactions

    E-print Network

    Narula, Neha

    2015-01-01

    Multicore main-memory databases only obtain parallel performance when transactions do not conflict. Conflicting transactions are executed one at a time in order to ensure that they have serializable effects. Sequential ...

  1. Forms of matter and forms of radiation

    E-print Network

    Maurice Kleman

    2011-04-08

    The theory of defects in ordered and ill-ordered media is a well-advanced part of condensed matter physics. Concepts developed in this field also occur in the study of spacetime singularities, namely: (i) the topological theory of quantized defects (Kibble's cosmic strings) and (ii) the Volterra process for continuous defects, used to classify the Poincaré symmetry breakings. We reassess the classification of Minkowski spacetime defects in the same theoretical frame, starting from the conjecture that these defects fall into two classes, according as they relate to massive particles or to radiation. This we justify on the empirical evidence of the Hubble expansion. We introduce timelike and null congruences of geodesics treated as ordered media, viz. 'm'-crystals of massive particles and 'r'-crystals of massless particles, with parallel 4-momenta in M^4. Classifying their defects (or 'forms') we find (i) 'm'- and 'r'-type Volterra continuous line defects and (ii) quantized topologically stable 'r'-defects, these latter forms being of various dimensionalities. Besides these 'perfect' forms, there are 'imperfect' disclinations that bound misorientation walls in three dimensions. We also speculate on the possible relation of these forms with the large-scale structure of the Universe.

  2. Fatigue Reliability of Gas Turbine Engine Structures

    NASA Technical Reports Server (NTRS)

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

    1997-01-01

    The results of an investigation of fatigue reliability in engine structures are described. The description consists of two parts: Part 1 covers method development; Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these approaches is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure modes and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.

  3. Superfast robust digital image correlation analysis with parallel computing

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Tian, Long

    2015-03-01

    Existing digital image correlation (DIC) using the robust reliability-guided displacement tracking (RGDT) strategy for full-field displacement measurement is a path-dependent process that can only be executed sequentially. This path-dependent tracking strategy not only limits the potential of DIC for further improvement of its computational efficiency but also wastes the parallel computing power of modern computers with multicore processors. To maintain the robustness of the existing RGDT strategy and to overcome its deficiency, an improved RGDT strategy using a two-section tracking scheme is proposed. In the improved RGDT strategy, the calculated points with correlation coefficients higher than a preset threshold are all taken as reliably computed points and given the same priority to extend the correlation analysis to their neighbors. Thus, DIC calculation is first executed in parallel at multiple points by separate independent threads. Then, for the few calculated points with correlation coefficients smaller than the threshold, DIC analysis using the existing RGDT strategy is adopted. Benefiting from the improved RGDT strategy and multithreaded computing, superfast DIC analysis can be accomplished without sacrificing robustness or accuracy. Experimental results show that the presented parallel DIC method, performed on a common eight-core laptop, can achieve about a sevenfold speedup.
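
    A schematic of the two-section scheduling idea only (not real DIC code): a thread pool analyzes all points of interest concurrently, points clearing the correlation threshold are accepted, and the remainder fall back to a sequential pass seeded by reliable neighbors. The correlate stand-in, grid, and threshold are invented for illustration.

      from concurrent.futures import ThreadPoolExecutor
      import random

      random.seed(1)
      points = [(i, j) for i in range(20) for j in range(20)]
      THRESH = 0.8

      def correlate(p):
          """Stand-in for subset correlation; returns (point, coefficient)."""
          return p, random.betavariate(8, 1)    # most points correlate well

      # Section 1: analyze every point in parallel.
      with ThreadPoolExecutor(max_workers=8) as pool:
          results = list(pool.map(correlate, points))

      reliable = {p: c for p, c in results if c >= THRESH}
      remainder = [p for p, c in results if c < THRESH]

      # Section 2: classical reliability-guided pass for the leftovers,
      # each seeded from its best already-reliable neighbor (sequential).
      def best_neighbor(p):
          cands = [q for q in reliable
                   if abs(q[0] - p[0]) + abs(q[1] - p[1]) == 1]
          return max(cands, key=reliable.get, default=None)

      seeds = {p: best_neighbor(p) for p in remainder}
      print(f"parallel section accepted {len(reliable)} of {len(points)} points;")
      print(f"{len(remainder)} go to the sequential pass, "
            f"{sum(s is not None for s in seeds.values())} with a reliable seed")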

  4. Nuclear Fragmentation and Its Parallels

    E-print Network

    K. Chase; A. Mekjian

    1993-09-13

    A model for the fragmentation of a nucleus is developed. Parallels of the description of this process with other areas are shown, which include Feynman's theory of the $\lambda$ transition in liquid helium, Bose condensation, and Markov process models used in stochastic networks and polymer physics. These parallels are used to generalize and further develop a previous exactly solvable model of nuclear fragmentation. An analysis of some experimental data is given.

  5. Graphics applications utilizing parallel processing

    NASA Technical Reports Server (NTRS)

    Rice, John R.

    1990-01-01

    The results of research conducted to develop a parallel graphics application algorithm to depict the numerical solution of the 1-D wave equation, the vibrating string, are presented. The research was conducted on a Flexible Flex/32 multiprocessor and a Sequent Balance 21000 multiprocessor. The wave equation is implemented using the finite difference method. The synchronization issues that arose from the parallel implementation and the strategies used to alleviate the effects of the synchronization overhead are discussed.
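
    A minimal serial sketch of the computation being parallelized: the vibrating string advanced with the standard explicit finite-difference stencil. Grid sizes and the initial pluck are arbitrary choices; the cited work distributes the spatial update loop across processors, which is where the synchronization issues arise.

      import numpy as np

      nx, nt = 101, 400
      c, dx = 1.0, 1.0 / (nx - 1)
      dt = 0.5 * dx / c                       # satisfies the CFL condition
      r2 = (c * dt / dx) ** 2

      x = np.linspace(0.0, 1.0, nx)
      u_prev = np.sin(np.pi * x)              # plucked-string initial shape
      u = u_prev.copy()                       # zero initial velocity
      for _ in range(nt):
          u_next = np.empty_like(u)
          u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                          + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
          u_next[0] = u_next[-1] = 0.0        # fixed string ends
          u_prev, u = u, u_next
      print("max displacement after", nt, "steps:", float(np.abs(u).max()))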

  6. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of such systems were investigated, along with the best computer architectures for the developed algorithms. Both the forward and backward chained control paradigms were investigated in the course of this work. Two experimental vehicles were developed to facilitate this research: Backpac, a parallel backward chained rule-based reasoning system, and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct future. Applying future to a function call causes that call to be evaluated as a task running in parallel with the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors: an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines; the Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and on the Concert with partitions of 32 or fewer processors. Additionally, experiments have been run with a stripped-down version of EMYCIN.
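
    A loose analog (in Python, not Multilisp) of the future construct just described: submitting a call returns immediately with a placeholder whose value is computed by a parallel task and claimed later. The rule-matching stand-in and the partitions are invented for illustration.

      from concurrent.futures import ThreadPoolExecutor
      import time

      def match_rules(partition):
          """Stand-in for matching a partition of the rule base."""
          time.sleep(0.1)                      # simulated matching work
          return [f"fired:{r}" for r in partition]

      rule_partitions = [["r1", "r2"], ["r3"], ["r4", "r5", "r6"]]
      with ThreadPoolExecutor() as pool:
          # Like wrapping each call in future: tasks run parallel to the spawner.
          futures = [pool.submit(match_rules, part) for part in rule_partitions]
          agenda = [fired for f in futures for fired in f.result()]
      print(agenda)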

  7. Efficiency of parallel direct optimization.

    PubMed

    Janies, D A; Wheeler, W C

    2001-03-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. PMID:12240679

  8. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

  9. Parallel architectures for problem solving

    SciTech Connect

    Kale, L.V.

    1985-01-01

    The problem of exploiting a large amount of hardware in parallel is one of the biggest challenges facing computer science today. The problem of designing parallel architectures and execution methods for solving large combinatorially explosive problems is studied here. Such problems typically do not have a regular structure that can be readily exploited for parallel execution. Prolog is chosen as a language to specify computation because it is seen as a language that is conceptually simple as well as amenable to parallel interpretation. A tree representation of Prolog computation called the REDUCE-OR tree is described as an alternative to the AND-OR tree representation. A process model based on this representation is developed; it captures more parallelism than most other proposed models. A class of bus architectures is proposed to implement the process model. A general model of parallel Prolog systems is developed and the proposed architectures examined in its framework. One of the important features of the proposed architectures is that they limit contracting of work to a close neighborhood. Various interconnection networks are analyzed, and a new one called the lattice-mesh is proposed. The lattice-mesh improves on the square grid of buses, while retaining its linear-area property. An extensive simulation framework was built. Results of some of the experiments conducted on the simulation system are given.

  10. First reliability test of a surface micromachined microengine using SHiMMeR

    SciTech Connect

    Tanner, D.M.; Smith, N.F.; Bowman, D.J.

    1997-08-01

    The first-ever reliability stress test on surface micromachined microengines developed at Sandia National Laboratories (SNL) has been completed. We stressed 41 microengines at 36,000 RPM and inspected their functionality at 60 RPM. We have observed an infant mortality region, a region of low failure rate (useful life), and no signs of wearout in the data. The reliability data are presented and interpreted using standard reliability methods. Failure analysis results on the stressed microengines are presented. In our effort to study the reliability of MEMS, we need to observe the failures of large numbers of parts to determine the failure modes. To facilitate testing of large numbers of micromachines, the Sandia High Volume Measurement of Micromachine Reliability (SHiMMeR) system provides computer controlled positioning and the capability to inspect moving parts. The development of this parallel testing system is discussed in detail.

  11. Reliability and Functional Availability of HVAC Systems 

    E-print Network

    Myrefelt, S.

    2004-01-01

    This paper presents a model to calculate the reliability and availability of heating, ventilation and air conditioning systems. The reliability is expressed in terms of reliability, maintainability and decision capability. These terms are a...

  12. 76 FR 71011 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ...Mosher, Senior Director of Policy Analysis and Reliability, American Public Power Association...Vice President and Director of Reliability Assessment and Performance Analysis, North American Electric Reliability Corporation Michael...

  13. Ultra precision and reliable bonding method

    NASA Technical Reports Server (NTRS)

    Gwo, Dz-Hung (Inventor)

    2001-01-01

    The bonding of two materials through hydroxide-catalyzed hydration/dehydration is achieved at room temperature by applying hydroxide ions to at least one of the two bonding surfaces and by placing the surfaces sufficiently close to each other to form a chemical bond between them. The surfaces may be placed sufficiently close to each other by simply placing one surface on top of the other. A silicate material may also be used as a filling material to help fill gaps between the surfaces caused by surface figure mismatches. A powder of a silica-based or silica-containing material may also be used as an additional filling material. The hydroxide-catalyzed bonding method forms bonds which are not only as precise and transparent as optical contact bonds, but also as strong and reliable as high-temperature frit bonds. The hydroxide-catalyzed bonding method is also simple and inexpensive.

  14. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ... Energy Regulatory Commission 18 CFR Part 40 Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System Restoration Reliability Standards AGENCY: Federal Energy Regulatory... and 749, which approved new and revised Reliability Standards, including IRO-004-2 and EOP-001....

  15. Robust Design of Reliability Test Plans Using Degradation Measures.

    SciTech Connect

    Lane, Jonathan Wesley; Crowder, Stephen V.

    2014-10-01

    With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.
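
    A hedged sketch of the degradation-based approach described: fit each unit's degradation path, extrapolate its crossing of a failure threshold (a pseudo-failure time), and compare the projections against a mission goal without ever observing a failure. All units, slopes, thresholds, and limits below are simulated stand-ins.

      import numpy as np

      rng = np.random.default_rng(7)
      times = np.array([0.0, 100.0, 200.0, 300.0, 400.0])   # test hours
      THRESHOLD, MISSION = 10.0, 2000.0

      pseudo_failures = []
      for unit in range(8):
          rate = rng.uniform(0.003, 0.005)                  # true degradation slope
          y = rate * times + rng.normal(0.0, 0.05, times.size)
          slope, intercept = np.polyfit(times, y, 1)        # linear path model
          pseudo_failures.append((THRESHOLD - intercept) / slope)

      pseudo_failures = np.array(pseudo_failures)
      frac = np.mean(pseudo_failures > MISSION)
      print("pseudo-failure times (h):", np.round(pseudo_failures, 0))
      print(f"fraction of units projected past {MISSION:.0f} h: {frac:.2f}")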

  16. Parallel asynchronous systems and image processing algorithms

    NASA Technical Reports Server (NTRS)

    Coon, D. D.; Perera, A. G. U.

    1989-01-01

    A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision system type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.

  17. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  18. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  19. Reliability analysis based on a direct ship hull strength assessment

    NASA Astrophysics Data System (ADS)

    Feng, Guoqing; Wang, Dongsheng; Garbatov, Yordan; Guedes Soares, C.

    2015-12-01

    A method of reliability analysis based on a direct strength calculation employing the von Mises stress failure criterion is presented here. The short term strain distributions of ship hull structural components are identified through statistical analysis of the wave-induced strain history, and the long term distributions by the weighted summation of the short term strain distributions. The wave-induced long term strain distribution is combined with the still water strain. The extreme strain distribution of the response strain is obtained by statistical analysis of the combined strains. The limit state function of the reliability analysis is based on the von Mises stress failure criterion, including the related uncertainties due to the quality of the material and model uncertainty. The reliability index is calculated using FORM, and a sensitivity analysis of each variable that affects the reliability is also discussed.
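
    A hedged sketch of the FORM step only: the Hasofer-Lind fixed-point iteration locates the most probable failure point of a limit-state function in standard normal space, and the reliability index is its distance from the origin. The strength-minus-load limit state below is illustrative, not the paper's von Mises-based one.

      import math
      import numpy as np

      def g(u):
          """Illustrative limit state: strength R ~ N(250, 25) minus load S ~ N(150, 30)."""
          return (250.0 + 25.0 * u[0]) - (150.0 + 30.0 * u[1])

      def grad_g(u, h=1e-6):
          """Central-difference gradient of g."""
          return np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                           for e in np.eye(len(u))])

      u = np.zeros(2)                       # start at the origin (mean point)
      for _ in range(50):                   # Hasofer-Lind-Rackwitz-Fiessler
          grad = grad_g(u)
          u_new = grad * (grad @ u - g(u)) / (grad @ grad)
          if np.linalg.norm(u_new - u) < 1e-10:
              u = u_new
              break
          u = u_new

      beta = np.linalg.norm(u)              # reliability index
      pf = 0.5 * math.erfc(beta / math.sqrt(2.0))
      print(f"beta = {beta:.3f}, probability of failure ~ {pf:.3e}")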

  20. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  1. CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED

    EPA Science Inventory

    This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

  2. Low complexity bit-parallel GF (2m ) multiplier

    E-print Network

    International Association for Cryptologic Research (IACR)

    The most used irreducible polynomials are the all-one polynomials (AOP) [2, 3] and the sparse polynomials... Based on the special form of the AOP, efficient bit-parallel multipliers have been proposed using the polynomial basis (PB) [2]... Multipliers based on an AOP have been proposed in [11-14]. In a recent paper, Chang et al. [15] proposed a low...

  4. A Coordination Layer for Exploiting Task Parallelism with HPF

    E-print Network

    Orlando, Salvatore

    ...has recently received much attention [6, 5]. Depending on the application, HPF tasks can be organized... Replication entails using a processor farm structure [7], where incoming jobs are dispatched... forms of task parallelism like pipelines and processor farms [11, 7]. We present templates which...

  5. Hierarchical parallel computer architecture defined by computational multidisciplinary mechanics

    NASA Technical Reports Server (NTRS)

    Padovan, Joe; Gute, Doug; Johnson, Keith

    1989-01-01

    The goal is to develop an architecture for parallel processors enabling optimal handling of multi-disciplinary computation of fluid-solid simulations employing finite element and difference schemes. The goals, philosophical and modeling directions, static and dynamic poly trees, example problems, interpolative reduction, and the impact on solvers are shown in viewgraph form.

  6. Scheduling Asymmetric Parallelism on a PlayStation3 Cluster Filip Blagojevic 1

    E-print Network

    Nikolopoulos, Dimitris

    ...and is reliable in predicting optimal mappings of nested parallelism in MPI programs on the PS3 cluster. The presented co-scheduling heuristics reduce slack time on the accelerator cores of the PS3 and improve... of Sony PlayStation3 (PS3) nodes. Our analysis reveals the sensitivity of computation and communication...

  7. Reverse engineering a gene network using an asynchronous parallel evolution strategy

    E-print Network

    Jostins, Luke; Jaeger, Johannes

    2010-03-02

    ...reliable results. However, no parallel island Evolutionary Strategy (piES) has yet been demonstrated to be effective for this task. Results: Here, we present synchronous and asynchronous versions of the piES algorithm, and apply them to a real reverse...

  8. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
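
    A small sketch of the binomial probability-of-detection calculation mentioned: given x detections in n inspection trials, the lower-bound detection probability at a chosen confidence level follows from the binomial tail, found here by bisection. The 28-of-29 bolt-hole example is invented for illustration.

      from math import comb

      def tail_prob(p, n, x):
          """P(X >= x) for X ~ Binomial(n, p)."""
          return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                     for k in range(x, n + 1))

      def pod_lower_bound(x, n, confidence=0.95):
          """Lower one-sided confidence bound on the detection probability."""
          lo, hi = 0.0, 1.0
          for _ in range(60):               # bisection on the binomial tail
              mid = 0.5 * (lo + hi)
              if tail_prob(mid, n, x) < 1 - confidence:
                  lo = mid
              else:
                  hi = mid
          return lo

      # e.g. 28 cracks detected out of 29 inspected bolt holes
      print(f"POD >= {pod_lower_bound(28, 29):.3f} at 95% confidence")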

  9. Reliability model for planetary gear

    NASA Technical Reports Server (NTRS)

    Savage, M.; Paridon, C. A.; Coy, J. J.

    1982-01-01

    A reliability model is presented for planetary gear trains in which the ring gear is fixed, the sun gear is the input, and the planet arm is the output. The input and output shafts are coaxial and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. This type of gear train is commonly used in main rotor transmissions for helicopters and in other applications which require high reductions in speed. The reliability model is based on the Weibull distribution of the individual reliabilities of the transmission components. The transmission's basic dynamic capacity is defined as the input torque which may be applied for one million input rotations of the sun gear. Load and life are related by a power law. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities.
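
    A hedged sketch of the model's two ingredients as described: component reliabilities follow two-parameter Weibull distributions and multiply together for the system, while the load-life power law rescales life with applied torque. Every parameter value below is an invented placeholder, not data from the model.

      import math

      def weibull_R(life, eta, beta):
          """Two-parameter Weibull reliability at `life` (millions of rotations)."""
          return math.exp(-((life / eta) ** beta))

      def life_at_load(capacity, torque, exponent):
          """Load-life power law: life shrinks as applied torque grows."""
          return (capacity / torque) ** exponent

      # Illustrative components: sun, three planets, and bearings.
      components = [(150.0, 1.5), (120.0, 2.0), (120.0, 2.0), (120.0, 2.0),
                    (200.0, 1.3)]                      # (eta, beta) pairs

      life = life_at_load(capacity=500.0, torque=350.0, exponent=3.0)
      R_sys = 1.0
      for eta, beta in components:
          R_sys *= weibull_R(life, eta, beta)          # series combination
      print(f"life = {life:.2f} million rotations, "
            f"system reliability = {R_sys:.3f}")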

  10. Electronics reliability and measurement technology

    NASA Technical Reports Server (NTRS)

    Heyman, Joseph S. (editor)

    1987-01-01

    A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.

  11. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
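
    As a sketch of the unequal-probability case, the probability that at least k of n independent components work can be computed exactly by a standard O(nk) dynamic program, in the spirit of the Barlow & Heidtmann algorithm RELAV cites; the 2-of-3 example values are invented.

      def k_of_n_reliability(k, probs):
          """P(at least k of the independent components work); assumes 1 <= k <= len(probs)."""
          # dp[j] = P(exactly j successes so far) for j < k;
          # dp[k] absorbs "k or more successes".
          dp = [1.0] + [0.0] * k
          for p in probs:
              dp[k] += dp[k - 1] * p
              for j in range(k - 1, 0, -1):
                  dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
              dp[0] *= 1 - p
          return dp[k]

      # A 2-out-of-3 group with unequal component reliabilities.
      print(f"R(2-of-3) = {k_of_n_reliability(2, [0.95, 0.90, 0.85]):.4f}")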

  12. Human reliability analysis application guide

    SciTech Connect

    Not Available

    1993-05-01

    Martin Marietta Energy Systems, Inc. (MMES) is engaged in a phased program to update the MMES safety documentation for the existing DOE facilities it manages. This guidance for Human Reliability Analysis (HRA) is intended for use in probabilistic safety analysis of DOE facilities. It is intended to supplement the Facility Safety Evaluation Application Guide, CSET-10 (July 1991), and subsequent revisions. The primary intended use of the guidance is to support accident analysis in Phase II of the Energy Systems Safety Analysis Report Update Program (SARUP). However, the guidance can be used for any safety analysis that needs human reliability analysis. The human reliability analysis methods covered by the "how-to" sections of the guide are direct applications of, or adaptations of, methods developed for Probabilistic Risk Assessments (PRA) of nuclear power plants. The adaptations of these methods were designed to make the methods more suitable to DOE non-reactor applications. The methods have been adapted for use by the personnel who will perform the accident analysis task. This has been done by furnishing guidance intended for accident analysts who are experienced in human reliability analysis and by providing various methods of several levels of complexity and sophistication that lend themselves to a graded approach. The methods also lend themselves to progressive, iterative refinement so that the more burdensome versions of the methods may be reserved for use only when the importance to safety of the human reliability issues clearly warrants the time and effort. The recommended techniques build upon logic models of the human interactions with the facility that contribute to accident sequences. The recommended techniques employ assessments of a catalog of Performance Shaping Factors (PSFs) that influence the reliability of operating crews in carrying out their activities.

  13. Designing magnetic systems for reliability

    SciTech Connect

    Heitzenroeder, P.J.

    1991-01-01

    Designing a magnetic system is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that magnet failures tend not to occur in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase where there are fewer, but very costly, devices with the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and magnet design and fabrication practices which have been found to contribute to magnet reliability.

  14. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  15. JPARSS: A Java Parallel Network Package for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance because of the need to tune the TCP window size to improve bandwidth and to reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning TCP window size. In addition a simple architecture using Web services
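
    To make the parallel-stream idea concrete, here is a toy Python loopback sketch (not the JPARSS API; the port, stream count, and payload are arbitrary): the payload is partitioned, each partition travels over its own TCP connection concurrently, and the receiver reassembles the pieces by a one-byte stream index:

        import socket, threading

        NSTREAMS, PORT = 4, 50007
        data = bytes(range(256)) * 4096                  # ~1 MB test payload
        chunk = len(data) // NSTREAMS
        parts = [data[i * chunk: (i + 1) * chunk if i < NSTREAMS - 1 else None]
                 for i in range(NSTREAMS)]
        received = [None] * NSTREAMS

        def serve(listener):
            for _ in range(NSTREAMS):
                conn, _ = listener.accept()
                idx = conn.recv(1)[0]                    # one-byte stream index
                buf = b""
                while (b := conn.recv(65536)):
                    buf += b
                received[idx] = buf
                conn.close()
            listener.close()

        listener = socket.create_server(("127.0.0.1", PORT))
        server = threading.Thread(target=serve, args=(listener,))
        server.start()

        def send(idx):
            with socket.create_connection(("127.0.0.1", PORT)) as s:
                s.sendall(bytes([idx]) + parts[idx])

        senders = [threading.Thread(target=send, args=(i,)) for i in range(NSTREAMS)]
        for t in senders:
            t.start()
        for t in senders:
            t.join()
        server.join()
        assert b"".join(received) == data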

  16. Performance and Scalability Evaluation of the Ceph Parallel File System

    SciTech Connect

    Wang, Feiyi; Nelson, Mark; Oral, H Sarp; Settlemyer, Bradley W; Atchley, Scott; Caldwell, Blake A; Hill, Jason J

    2013-01-01

    Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes it is running on unreliable, commodity storage and network hardware, and it provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

  17. Reliability in the design phase

    SciTech Connect

    Siahpush, A.S.; Hills, S.W.; Pham, H.; Majumdar, D.

    1991-12-01

    A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  18. Reliability in the design phase

    SciTech Connect

    Siahpush, A.S.; Hills, S.W.; Pham, H.; Majumdar, D.

    1991-12-01

    A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

  19. Metrological Reliability of Medical Devices

    NASA Astrophysics Data System (ADS)

    Costa Monteiro, E.; Leon, L. F.

    2015-02-01

    The prominent development of health technologies in the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of biomeasurement results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with an analysis of their contributions to guaranteeing that innovative health technologies comply with the main ethical pillars of Bioethics.

  20. A parallel Jacobson-Oksman optimization algorithm. [parallel processing (computers)

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.; Markos, A. T.

    1975-01-01

    A gradient-dependent optimization technique which exploits the vector-streaming or parallel-computing capabilities of some modern computers is presented. The algorithm, derived by assuming that the function to be minimized is homogeneous, is a modification of the Jacobson-Oksman serial minimization method. In addition to describing the algorithm, conditions ensuring the convergence of the iterates of the algorithm and the results of numerical experiments on a group of sample test functions are presented. The results of these experiments indicate that this algorithm will solve optimization problems in less computing time than conventional serial methods on machines having vector-streaming or parallel-computing capabilities.

  1. Computing contingency statistics in parallel.

    SciTech Connect

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
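
    The map-reduce pattern described here is easy to sketch. The following illustrative Python fragment (the names are ours, not the paper's implementation) tabulates per-block contingency tables, merges them by addition, and derives pointwise mutual information from the merged table:

        from collections import Counter
        from functools import reduce
        from math import log

        def tabulate(block):
            """Map step: joint (x, y) counts for one block of data."""
            return Counter(block)

        def merge(t1, t2):
            """Reduce step: tables merge by addition; this is the
            communication that grows with table size."""
            return t1 + t2

        blocks = [[("a", 0), ("a", 1), ("b", 0)],
                  [("a", 0), ("b", 1), ("b", 1)]]
        table = reduce(merge, map(tabulate, blocks))  # map() could run on a pool
        n = sum(table.values())

        px, py = Counter(), Counter()                 # marginal counts
        for (x, y), c in table.items():
            px[x] += c
            py[y] += c
        pmi = {(x, y): log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in table.items()}
        print(table, pmi)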

  2. Parallel plasma fluid turbulence calculations

    NASA Astrophysics Data System (ADS)

    Leboeuf, J. N.; Carreras, B. A.; Charlton, L. A.; Drake, J. B.; Lynch, V. E.; Newman, D. E.; Sidikman, K. L.; Spong, D. A.

    The study of plasma turbulence and transport is a complex problem of critical importance for fusion-relevant plasmas. To this day, the fluid treatment of plasma dynamics is the best approach to realistic physics at the high resolution required for certain experimentally relevant calculations. Core and edge turbulence in a magnetic fusion device have been modeled using state-of-the-art, nonlinear, three-dimensional, initial-value fluid and gyrofluid codes. Parallel implementation of these models on diverse platforms--vector parallel (National Energy Research Supercomputer Center's CRAY Y-MP C90), massively parallel (Intel Paragon XP/S 35), and serial parallel (clusters of high-performance workstations using the Parallel Virtual Machine protocol)--offers a variety of paths to high resolution and significant improvements in real-time efficiency, each with its own advantages. The largest and most efficient calculations have been performed at the 200 Mword memory limit on the C90 in dedicated mode, where an overlap of 12 to 13 out of a maximum of 16 processors has been achieved with a gyrofluid model of core fluctuations. The richness of the physics captured by these calculations is commensurate with the increased resolution and efficiency and is limited only by the ingenuity brought to the analysis of the massive amounts of data generated.

  3. The parallel approach

    E-print Network

    Loss, Daniel

    Commentary by Massimiliano Di Ventra and Yuriy V. Pershin (Nature Physics 9, 200, April 2013): a class of two-terminal passive circuit elements that can also act as memories could be the building blocks of a form of massively parallel computation.

  4. Asynchronous parallel status comparator

    DOEpatents

    Arnold, Jeffrey W. (828 Hickory Ridge Rd., Aiken, SC 29801); Hart, Mark M. (223 Limerick Dr., Aiken, SC 29803)

    1992-01-01

    Apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals correspond to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition.

  5. Asynchronous parallel status comparator

    DOEpatents

    Arnold, J.W.; Hart, M.M.

    1992-12-15

    Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals correspond to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.

  6. File concepts for parallel I/O

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas W.

    1989-01-01

    The subject of input/output (I/O) was often neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

  7. Optimization Based Efficiencies in First Order Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Peck, Jeffrey A.; Mahadevan, Sankaran

    2003-01-01

    This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's Quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results compared to standard FORM results. It is found that BFORM typically requires fewer functional evaluations than FORM to converge to the same answer.
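
    The update itself is a one-liner. The sketch below is an illustration of the technique, not the authors' code: it shows Broyden's rank-one secant update applied to the gradient estimate of a toy limit state function g(x):

        import numpy as np

        def broyden_update(grad, x_old, x_new, g_old, g_new):
            """Rank-one update enforcing the secant condition
            grad_new . s = g_new - g_old, with s = x_new - x_old."""
            s = x_new - x_old
            dg = g_new - g_old
            return grad + ((dg - grad @ s) / (s @ s)) * s

        g = lambda x: 6.0 - x[0] - 2.0 * x[1]        # toy limit state function
        x0, x1 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
        eps = 1e-6                                    # one finite-difference gradient up front
        grad0 = np.array([(g(x0 + eps * e) - g(x0)) / eps for e in np.eye(2)])
        grad1 = broyden_update(grad0, x0, x1, g(x0), g(x1))
        print(grad1)   # stays close to the true gradient (-1, -2) for this linear g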

  8. Visualizing Parallel Computer System Performance

    NASA Technical Reports Server (NTRS)

    Malony, Allen D.; Reed, Daniel A.

    1988-01-01

    Parallel computer systems are among the most complex of man's creations, making satisfactory performance characterization difficult. Despite this complexity, there are strong, indeed, almost irresistible, incentives to quantify parallel system performance using a single metric. The fallacy lies in succumbing to such temptations. A complete performance characterization requires not only an analysis of the system's constituent levels, it also requires both static and dynamic characterizations. Static or average behavior analysis may mask transients that dramatically alter system performance. Although the human visual system is remarkably adept at interpreting and identifying anomalies in false color data, the importance of dynamic, visual scientific data presentation has only recently been recognized. Large, complex parallel systems pose equally vexing performance interpretation problems. Data from hardware and software performance monitors must be presented in ways that emphasize important events while eliding irrelevant details. Design approaches and tools for performance visualization are the subject of this paper.

  9. Massively Parallel MRI Detector Arrays

    PubMed Central

    Keil, Boris; Wald, Lawrence L

    2013-01-01

    Originally proposed as a method to increase sensitivity by extending the locally high-sensitivity of small surface coil elements to larger areas, the term parallel imaging now includes the use of array coils to perform image encoding. This methodology has impacted clinical imaging to the point where many examinations are performed with an array comprising multiple smaller surface coil elements as the detector of the MR signal. This article reviews the theoretical and experimental basis for the trend towards higher channel counts relying on insights gained from modeling and experimental studies as well as the theoretical analysis of the so-called “ultimate” SNR and g-factor. We also review the methods for optimally combining array data and changes in RF methodology needed to construct massively parallel MRI detector arrays and show some examples of state-of-the-art for highly accelerated imaging with the resulting highly parallel arrays. PMID:23453758

  10. Fast data parallel polygon rendering

    SciTech Connect

    Ortega, F.A.; Hansen, C.D.

    1993-09-01

    This paper describes a parallel method for polygonal rendering on a massively parallel SIMD machine. This method, based on a simple shading model, is targeted for applications which require very fast polygon rendering for extremely large sets of polygons such as is found in many scientific visualization applications. The algorithms described in this paper are incorporated into a library of 3D graphics routines written for the Connection Machine. The routines are implemented on both the CM-200 and the CM-5. This library enables scientists to display 3D shaded polygons directly from a parallel machine without the need to transmit huge amounts of data to a post-processing rendering system.

  11. Master/slave clock arrangement for providing reliable clock signal

    NASA Technical Reports Server (NTRS)

    Abbey, Duane L. (Inventor)

    1977-01-01

    The outputs of two like frequency oscillators are combined to form a single reliable clock signal, with one oscillator functioning as a slave under the control of the other to achieve phase coincidence when the master is operative and in a free-running mode when the master is inoperative so that failure of either oscillator produces no effect on the clock signal.

  12. Reliability and Validity of the Learning Styles Questionnaire.

    ERIC Educational Resources Information Center

    Fung, Y. H.; And Others

    1993-01-01

    Describes a study of Chinese undergraduate students at the Hong Kong Polytechnic that was conducted to examine the reliability and predictive validity of a short form of Honey and Mumford's Learning Styles Questionnaire. Correlations between learning style scores and preferences for different types of learning activities are discussed. (16…

  13. A Semantic Wiki Alerting Environment Incorporating Credibility and Reliability Evaluation

    E-print Network

    Kokar, Mieczyslaw M.

    In this paper, we describe a prototype we are developing, called the Semantic Wiki Alerting Environment, in which reports are maintained in the form of a semantic wiki; a gang ontology and semantic inferencing are used to annotate the reports, incorporating credibility and reliability evaluation.

  14. Long life high reliability thermal control systems study data handbook

    NASA Technical Reports Server (NTRS)

    Scollon, T. R., Jr.; Carpitella, M. J.

    1971-01-01

    The development of thermal control systems with high reliability and long service life is discussed. Various passive and semi-active thermal control systems which have been installed on space vehicles are described. The properties of the various coatings are presented in tabular form.

  15. Reliability, Availability, and Serviceability (RAS) for High-Performance Computing

    E-print Network

    Slide presentation on Reliability, Availability, and Serviceability (RAS) for high-performance computing (managed by UT-Battelle for the U.S. Department of Energy), including a system setup for VM-level migration using Xen.

  16. Impact and Future of Reliability

    E-print Network

    Bernstein, Joseph B.

    ... and processes, and include topics on predictive reliability modeling and simulation and the physics of failure ... systems such as space missions, civil aviation, nuclear power plants, and petro-chemical installations ... is also home to numerous research laboratories with extensive state-of-the-art equipment and high

  17. How Reliable Is Laboratory Testing?

    MedlinePLUS

    What are the indicators of test reliability? Four indicators are most commonly used to ...

  18. Reliability of Perceptual Voice Assessment.

    ERIC Educational Resources Information Center

    Blaustein, Steven; Bar, Asher

    1983-01-01

    The perceptual assessment of voice disorders by speech pathologists and interjudge reliability were investigated through screening of 160 children (4-14 years old). Results demonstrated poor interjudge agreement between the listeners and indicate a need for more objective measures in the identification of voice disorders. (Author/SEW)

  19. RELIABILITY OF CAPACITOR CHARGING UNITS

    E-print Network

    Sprott, Julien Clinton

    RELIABILITY OF CAPACITOR CHARGING UNITS, Clint Sprott, July 30, 1965, University of Wisconsin Thermonuclear Plasma Studies, PLP 51. Recent tests have been made on the accuracy of the voltage to which various capacitor banks

  20. Reliability Analysis of Money Habitudes

    ERIC Educational Resources Information Center

    Delgadillo, Lucy M.; Bushman, Brittani S.

    2015-01-01

    Use of the Money Habitudes exercise has gained popularity among various financial professionals. This article reports on the reliability of this resource. A survey administered to young adults at a western state university was conducted, and each Habitude or "domain" was analyzed using Cronbach's alpha procedures. Results showed all six…

  1. Photovoltaic performance and reliability workshop

    SciTech Connect

    Kroposki, B.

    1996-10-01

    This proceedings is the compilation of papers presented at the ninth PV Performance and Reliability Workshop held at the Sheraton Denver West Hotel on September 4--6, 1996. This year's workshop included presentations from 25 speakers and had over 100 attendees. All of the presentations that were given are included in this proceedings. Topics of the papers included: defining service lifetime and developing models for PV module lifetime; examining and determining failure and degradation mechanisms in PV modules; combining IEEE/IEC/UL testing procedures; AC module performance and reliability testing; inverter reliability/qualification testing; standardization of utility interconnect requirements for PV systems; the need for activities that separate variables by testing individual components of PV systems (e.g., cells, modules, batteries, inverters, charge controllers) for individual reliability and then testing them in actual system configurations; more results reported from field experience on modules, inverters, batteries, and charge controllers from field-deployed PV systems; and system certification and standardized testing for stand-alone and grid-tied systems.

  2. Wanted: A Solid, Reliable PC

    ERIC Educational Resources Information Center

    Goldsborough, Reid

    2004-01-01

    This article discusses PC reliability, one of the most pressing issues regarding computers. Nearly a quarter century after the introduction of the first IBM PC and the outset of the personal computer revolution, PCs have largely become commodities, with little differentiating one brand from another in terms of capability and performance. Most of…

  3. Reliability calculation under planned maintenance

    SciTech Connect

    Elmakis, D.; Levy, P.

    1987-02-01

    This paper provides an extension to methods for computing reliability indices in the case of planned maintenance, in particular, a new formula for the frequency of system failure due to overlapping of two outages, forced and planned, is derived for two components. A numerical example illustrates the possibilities of the proposed approach.

  4. Parallel algorithms for mapping pipelined and parallel computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1988-01-01

    Many computational problems in image processing, signal processing, and scientific computing are naturally structured for either pipelined or parallel computation. When mapping such problems onto a parallel architecture it is often necessary to aggregate an obvious problem decomposition. Even in this context the general mapping problem is known to be computationally intractable, but recent advances have been made in identifying classes of problems and architectures for which optimal solutions can be found in polynomial time. Among these, the mapping of pipelined or parallel computations onto linear array, shared memory, and host-satellite systems figures prominently. This paper extends that work first by showing how to improve existing serial mapping algorithms. These improvements have significantly lower time and space complexities: in one case a published O(nm³) time algorithm for mapping m modules onto n processors is reduced to an O(nm log m) time complexity, and its space requirements reduced from O(nm²) to O(m). Run time complexity is further reduced with parallel mapping algorithms based on these improvements, which run on the architecture for which they create the mappings.

  5. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-03-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processors. User programs and their gangs of processors are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

  6. Gang scheduling a parallel machine

    SciTech Connect

    Gorda, B.C.; Brooks, E.D. III.

    1991-12-01

    Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

  7. Parallelization of the SIR code

    NASA Astrophysics Data System (ADS)

    Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčák, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

    A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g., magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code with the capability of directly using large data sets and inverting the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly to extensive data sets. In addition, we included the possibility of using different initial model atmospheres for the inversion, which enhances the quality of the results.

  8. Some Reliability Estimates for Computerized Adaptive Tests.

    ERIC Educational Resources Information Center

    Nicewander, W. Alan; Thomasson, Gary L.

    1999-01-01

    Derives three reliability estimates for the Bayes modal estimate (BME) and the maximum-likelihood estimate (MLE) of theta in computerized adaptive tests (CATs). Computes the three reliability estimates and the true reliabilities of both BME and MLE for seven simulated CATs. Results show the true reliabilities for BME and MLE to be nearly identical…

  9. 76 FR 71011 - Reliability Technical Conference Agenda

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-16

    ... Federal Energy Regulatory Commission: Reliability Technical Conference Agenda. Reliability Technical Conference, Docket No. AD12-1-000; North American Electric Reliability Corporation, Docket No. RC11-6-000; Public ... for addressing risks to reliability that were identified in earlier Commission technical ...

  10. Reliability Assessment Using Discriminative Sampling and Metamodeling

    E-print Network

    Wang, Gaofeng Gary

    Reliability assessment is the foundation for reliability engineering and reliability-based design. ... The failure probability of a system or product, P_f, is defined as the rate ... efficiently assess the reliability for problems of a single failure region and has a good performance ...

  11. parallel_dp: The Parallel Dynamic Programming Design Pattern as an Intel®

    E-print Network

    Tang, Peiyi

    parallel_dp: The Parallel Dynamic Programming Design Pattern as an Intel® Threading Building Blocks ... and participant collaboration of this design pattern. We propose the parallel_dp algorithm template ... by parallel_dp. We analyze the performance of our solution by applying parallel_dp to create four TBB

  12. Mapping Pixel Windows To Vectors For Parallel Processing

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.

    1996-01-01

    Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from a square subarray (window) of n x n pixels within an electronic imaging device containing an np x np array of pixels to a linear array of n² input terminals of an electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes the timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.
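
    In software the same mapping is a reshape. This hypothetical NumPy fragment illustrates the window-to-vector correspondence that the transistor-switch arrays implement in hardware:

        import numpy as np
        from numpy.lib.stride_tricks import sliding_window_view

        image = np.arange(36).reshape(6, 6)            # toy 6 x 6 pixel array
        n = 3
        windows = sliding_window_view(image, (n, n))   # every n x n window
        row, col = 2, 1                                # chosen window position
        vector = windows[row, col].reshape(n * n)      # the n^2-element input vector
        print(vector)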

  13. Parallel Monte Carlo Simulation for control system design

    NASA Technical Reports Server (NTRS)

    Schubert, Wolfgang M.

    1995-01-01

    The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.
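
    As a rough illustration of the Monte Carlo stage (the stability test and distribution here are stand-ins, not the study's models), workers can draw parameter samples independently, and their violation counts reduce to the failure probability that a genetic algorithm would use as a cost:

        import numpy as np
        from multiprocessing import Pool

        def count_unstable(args):
            seed, n = args
            rng = np.random.default_rng(seed)
            unstable = 0
            for _ in range(n):
                a = rng.normal(-1.0, 0.6)   # uncertain closed-loop pole (illustrative)
                unstable += a >= 0.0        # design-metric violation
            return unstable

        if __name__ == "__main__":
            workers, per_worker = 4, 25_000
            jobs = [(seed, per_worker) for seed in range(workers)]
            with Pool(workers) as pool:
                counts = pool.map(count_unstable, jobs)
            print("P(instability) ~", sum(counts) / (workers * per_worker))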

  14. Parallel language constructs for tensor product computations on loosely coupled architectures

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Vanrosendale, John

    1989-01-01

    Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is considered first; it is then examined how such parallel kernels can be combined to form parallel tensor product algorithms.

  15. Power Quality and Reliability Project

    NASA Technical Reports Server (NTRS)

    Attia, John O.

    2001-01-01

    One area where universities and industry can link is in the area of power systems reliability and quality - key concepts in the commercial, industrial and public sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power/Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty-seven students have benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load they were carrying, (iv) vibration tests to assess the status of the engines or chillers and water pumps, (v) infrared testing to test for arcing or misfiring of electrical or mechanical systems.

  16. Defining Requirements for Improved Photovoltaic System Reliability

    SciTech Connect

    Maish, A.B.

    1998-12-21

    Reliable systems are an essential ingredient of any technology progressing toward commercial maturity and large-scale deployment. This paper defines reliability as meeting system functional requirements, and then develops a framework to understand and quantify photovoltaic system reliability based on initial and ongoing costs and system value. The core elements necessary to achieve reliable PV systems are reviewed. These include appropriate system design, satisfactory component reliability, and proper installation and servicing. Reliability status, key issues, and present needs in system reliability are summarized for four application sectors.

  17. Complete classification of parallel Lorentz surfaces in four-dimensional neutral pseudosphere

    SciTech Connect

    Chen, Bang-Yen

    2010-08-15

    A Lorentz surface of an indefinite space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. Such surfaces are locally invariant under the reflection with respect to the normal space at each point. Parallel surfaces are important in geometry as well as in general relativity since extrinsic invariants of such surfaces do not change from point to point. Parallel Lorentz surfaces in four-dimensional (4D) Lorentzian space forms are classified by Chen and Van der Veken ["Complete classification of parallel surfaces in 4-dimensional Lorentz space forms," Tohoku Math. J. 61, 1 (2009)]. Recently, explicit classifications of parallel Lorentz surfaces in the pseudo-Euclidean 4-space E_2^4 and in the pseudohyperbolic 4-space H_2^4(-1) were obtained by Chen et al. ["Complete classification of parallel Lorentzian surfaces in Lorentzian complex space forms," Int. J. Math. 21, 665 (2010); "Complete classification of parallel Lorentz surfaces in neutral pseudo hyperbolic 4-space," Cent. Eur. J. Math. 8, 706 (2010)], respectively. In this article, we completely classify the remaining case, namely, parallel Lorentz surfaces in the 4D neutral pseudosphere S_2^4(1). Our result states that there are 24 families of such surfaces in S_2^4(1). Conversely, every parallel Lorentz surface in S_2^4(1) is obtained from one of the 24 families. The main result indicates that there are major differences between Lorentz surfaces in the de Sitter 4-space dS_4 and in the neutral pseudo 4-sphere S_2^4.

  18. Ultimate DWDM format in fiber-true bit-parallel solitons on WDM beams

    NASA Technical Reports Server (NTRS)

    Yeh, C.; Bergman, L. A.

    2000-01-01

    Whether true solitons can exist on WDM beams (and in what form) has generally been an open question. This paper will discuss an answer to this question and a demonstration of bit-parallel WDM transmission.

  19. Parallelized event chain algorithm for dense hard sphere and polymer systems

    SciTech Connect

    Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan

    2015-01-15

    We combine parallelization and cluster Monte Carlo for hard sphere systems and present a parallelized event chain algorithm for the hard disk system in two dimensions. For parallelization we use a spatial partitioning approach into simulation cells. We find that it is crucial for correctness to ensure detailed balance on the level of Monte Carlo sweeps by drawing the starting sphere of event chains within each simulation cell with replacement. We analyze the performance gains for the parallelized event chain and find a criterion for an optimal degree of parallelization. Because of the cluster nature of event chain moves, massive parallelization will not be optimal. Finally, we discuss first applications of the event chain algorithm to dense polymer systems, i.e., bundle-forming solutions of attractive semiflexible polymers.

  20. Standard Templates Adaptive Parallel Library 

    E-print Network

    Arzu, Francisco Jose

    2000-01-01

    of parallelism in areas such as geometric modeling or graph algorithms, which use dynamic linked data structures. STAPL is intended to replace STL in a user transparent manner and run on a small to medium scale shared memory multiprocessor machine, which supports...

  1. Parallel, Distributed Scripting with Python

    SciTech Connect

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadm tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter). But these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
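
    Here is a modern sketch of that motivating example, using the standard multiprocessing module rather than the MPI extensions the report discusses (the hash function, word list, and pool size are illustrative):

        import hashlib
        from multiprocessing import Pool

        TARGET = hashlib.sha256(b"secret").hexdigest()   # the "encrypted" password

        def check_chunk(words):
            """Each worker hashes its share of the dictionary against the target."""
            return [w for w in words if hashlib.sha256(w.encode()).hexdigest() == TARGET]

        if __name__ == "__main__":
            dictionary = ["hunter2", "password", "secret", "letmein"] * 6250  # ~25,000 words
            nproc = 4
            chunks = [dictionary[i::nproc] for i in range(nproc)]
            with Pool(nproc) as pool:
                hits = [w for part in pool.map(check_chunk, chunks) for w in part]
            print("matches:", sorted(set(hits)))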

  2. Matpar: Parallel Extensions for MATLAB

    NASA Technical Reports Server (NTRS)

    Springer, P. L.

    1998-01-01

    Matpar is a set of client/server software that allows a MATLAB user to take advantage of a parallel computer for very large problems. The user can replace calls to certain built-in MATLAB functions with calls to Matpar functions.

  3. Towards Pervasive Parallelism Kunle Olukotun

    E-print Network

    John, Lizy Kurian

    Slide fragments on pervasive parallelism: historical single-thread performance growth rates (25%/year, then 52%/year, vs. the VAX-11/780); the Stanford Hydra project (CMP + TLS); Afara Websystems and Sun Niagara 1; heterogeneous cores (CPU+GPUs), app-specific accelerators, and deep memory hierarchies; and the challenge of harnessing these given the requirements of emerging applications and the challenges of future parallel architectures.

  4. Hybrid Parallel Part I. Preliminaries

    E-print Network

    Kaminsky, Alan

    The bitcoin mining program in Chapter 13 doesn't necessarily take full advantage of the cluster's parallel ... have to mine 40 or more bitcoins to take full advantage of the cluster. If I mine fewer than 40 bitcoins, some of the cores will be idle. That's not good. I want to put those idle cores to use. ... I can

  5. Method of forming oriented block copolymer line patterns, block copolymer line patterns formed thereby, and their use to form patterned articles

    DOEpatents

    Russell, Thomas P.; Hong, Sung Woo; Lee, Doug Hyun; Park, Soojin; Xu, Ting

    2015-10-13

    A block copolymer film having a line pattern with a high degree of long-range order is formed by a method that includes forming a block copolymer film on a substrate surface with parallel facets, and annealing the block copolymer film to form an annealed block copolymer film having linear microdomains parallel to the substrate surface and orthogonal to the parallel facets of the substrate. The line-patterned block copolymer films are useful for the fabrication of magnetic storage media, polarizing devices, and arrays of nanowires.

  6. Improve reliability of liquid film shaft seals

    SciTech Connect

    Godse, A.G.

    1995-08-01

    A centrifugal compressor shaft seal and its supporting system form an important part of a turbocompressor train. API-617 references different types of seals. The notable ones are shown with applicable remarks in Table 1. Of all the seals described in Table 1, process applications favor liquid film shaft seals. Since the design permits positive clearance between the shaft/shaft sleeve and the inner bore of the seal rings, with an oil film supplied by the system, theoretically long life is expected. There are instances of unforeseen shutdowns due more to seal failure than to the bearings. An API requirement of uninterrupted compressor runs for three years is infeasible in some installations, in spite of the standby concept applied to the seal system pumps, filters and coolers that are prone to maintenance and changeover during operation. To pinpoint areas of concern, a seal system should be divided into two parts: the seal itself, and the seal auxiliary support system. The latter can be reliable in most installations while the former may not, which causes early shutdowns. Since it is essential to maintain uninterrupted oil supply under varying operating conditions, process upsets, orderly shutdown, and blocking-in in case of trips, the system designer is required to carefully size the components as well as the instrumentation. The following summary on component sizing and other considerations according to API-614 will help to explain the factors that make the system reliable on the supply side. It also shows how the system can become too large.

  7. Debugging and analysis of large-scale parallel programs. Doctoral thesis

    SciTech Connect

    Mellor-Crummey, J.M.

    1989-09-01

    One of the most serious problems in the development cycle of large-scale parallel programs is the lack of tools for debugging and performance analysis. Parallel programs are more difficult to analyze than their sequential counterparts for several reasons. First, race conditions in parallel programs can cause non-deterministic behavior, which reduces the effectiveness of traditional cyclic debugging techniques. Second, invasive, interactive analysis can distort a parallel program's execution beyond recognition. Finally, comprehensive analysis of a parallel program's execution requires collection, management, and presentation of an enormous amount of information. This dissertation addresses the problem of debugging and analysis of large-scale parallel programs executing on shared-memory multiprocessors. It proposes a methodology for top-down analysis of parallel program executions that replaces previous ad-hoc approaches. To support this methodology, a formal model for shared-memory communication among processes in a parallel program is developed. It is shown how synchronization traces based on this abstract model can be used to create indistinguishable executions that form the basis for debugging. This result is used to develop a practical technique for tracing parallel program executions on shared-memory parallel processors so that their executions can be repeated deterministically on demand.

  8. A parallelization of the row-searching algorithm

    NASA Astrophysics Data System (ADS)

    Yaici, Malika; Khaled, Hayet; Khaled, Zakia; Bentahar, Athmane

    2012-11-01

    The problem dealt with in this paper concerns the parallelization of the row-searching algorithm, which allows the search for linearly dependent rows in a given matrix, and its implementation in an MPI (Message Passing Interface) environment. This algorithm is largely used in control theory and more specifically in solving the famous Diophantine equation. An introduction to the Diophantine equation is presented, then two parallelization approaches of the algorithm are detailed. The first distributes a set of rows over processes (processors) and the second makes a distribution by blocks. The sequential algorithm and its two parallel forms are implemented using MPI routines, then modelled using UML (Unified Modelling Language) and finally evaluated using algorithmic complexity.
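
    A hedged mpi4py sketch of the first distribution scheme (our illustration, not the authors' implementation): the matrix is broadcast, each rank tests its share of rows for dependence on the preceding rows via a rank comparison, and the results are gathered at rank 0; run with, e.g., mpiexec -n 4 python rowsearch.py:

        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        A = None
        if rank == 0:
            A = np.array([[1., 0., 1.],
                          [0., 1., 1.],
                          [1., 1., 2.],    # row 0 + row 1: dependent
                          [2., 0., 2.]])   # 2 * row 0: dependent
        A = comm.bcast(A, root=0)

        def is_dependent(i):
            """Row i depends on rows 0..i-1 iff appending it leaves the rank unchanged."""
            if i == 0:
                return not A[0].any()
            return np.linalg.matrix_rank(A[:i + 1]) == np.linalg.matrix_rank(A[:i])

        mine = range(rank, A.shape[0], size)          # round-robin row distribution
        found = [i for i in mine if is_dependent(i)]

        all_found = comm.gather(found, root=0)
        if rank == 0:
            print("dependent rows:", sorted(i for part in all_found for i in part))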

  9. Mirror versus parallel bimanual reaching

    PubMed Central

    2013-01-01

    Background: In spite of their importance to everyday function, tasks that require both hands to work together, such as lifting and carrying large objects, have not been well studied, and the full potential of how new technology might facilitate recovery remains unknown. Methods: To help identify the best modes for self-teleoperated bimanual training, we used an advanced haptic/graphic environment to compare several modes of practice. In a 2-by-2 study, we compared mirror vs. parallel reaching movements, and also compared a veridical display to one that transforms the right hand's cursor to the opposite side, reducing the area that the visual system has to monitor. Twenty healthy, right-handed subjects (5 in each group) practiced 200 movements. We hypothesized that parallel reaching movements would be the best performing, and attending to one visual area would reduce the task difficulty. Results: The two-way comparison revealed that mirror movement times took an average 1.24 s longer to complete than parallel. Surprisingly, subjects' movement times moving to one target (attending to one visual area) also took an average of 1.66 s longer than subjects moving to two targets. For both hands, there was also a significant interaction effect, revealing the lowest errors for parallel movements moving to two targets, i.e., parallel movements with a veridical display (moving to two separate targets). These results point to the expected levels of challenge for these bimanual training modes, which could be used to advise therapy choices in self-neurorehabilitation. PMID:23837908

  10. Assessment of NDE Reliability Data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

    1976-01-01

    Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.
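
    The binomial calculation at the heart of that program is easy to sketch. The fragment below (illustrative, not the study's code) computes the point estimate and the one-sided Clopper-Pearson lower confidence bound on the probability of detection for x detections in n inspections:

        from scipy.stats import beta

        def pod_lower_bound(detections, trials, confidence):
            """One-sided Clopper-Pearson lower bound on detection probability."""
            if detections == 0:
                return 0.0
            return beta.ppf(1.0 - confidence, detections, trials - detections + 1)

        n, x = 60, 57                      # hypothetical inspections of flawed specimens
        print("point estimate:", x / n)
        for conf in (0.50, 0.95):          # the 50 and 95 percent levels used in the study
            print(f"lower bound at {conf:.0%} confidence:",
                  round(pod_lower_bound(x, n, conf), 3))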

  11. Gearbox Reliability Collaborative Bearing Calibration

    SciTech Connect

    van Dam, J.

    2011-10-01

    NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root cause of the low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

  12. Improving equipment reliability through thermography

    SciTech Connect

    Bates, D.E.

    1990-06-01

    Failure of electrical or mechanical equipment is costly to repair and also results in loss of revenue. Industries spend millions on protective devices to ensure the reliability of their equipment. Alarm systems and protective relaying safeguard against most problems. However, often permanent damage has already occurred when these systems operate. As technology develops so does the ability to suppress unexpected outages and equipment damage. In many cases an equipment problem manifests itself by the radiation of excess heat energy. The detection and analysis of this heating is becoming a major factor in predictive maintenance programs. This paper describes how thermography is a cost-effective tool to provide reliable service. This technology can work for the pipe line industry just as well as the utility industry.

  13. The Utility of Reliability and Survival

    E-print Network

    Singpurwalla, Nozer D

    2009-01-01

    Reliability (survival analysis, to biostatisticians) is a key ingredient for making decisions that mitigate the risk of failure. The other key ingredient is utility. A decision theoretic framework harnesses the two, but to invoke this framework we must distinguish between chance and probability. We describe a functional form for the utility of chance that incorporates all dispositions to risk, and propose a probability of choice model for eliciting this utility. To implement the model a subject is asked to make a series of binary choices between gambles and certainty. These choices endow a statistical character to the problem of utility elicitation. The workings of our approach are illustrated via a live example involving a military planner. The material is general because it is germane to any situation involving the valuation of chance.

  14. An integrated approach for designing reliable coatings

    SciTech Connect

    Shaffer, E.O. II

    1996-12-31

    In its simplest form, adhesive failure is predicted when some applied energy exceeds a critical property of the joint. The challenge in designing reliability is to establish the details of both the applied energies and the critical performance properties. Complications arise in determining performance properties, which include both adhesive and cohesive strengths, since they are strong functions of processing and environmental conditions. Thus, any test used to measure these must be able to mimic the correct conditions. Another complication that arises is the dependence of the applied debond energies on the mechanical properties of the coating and substrate. The available debond energy is also a function of the geometry and any external loads applied. In this presentation, the author shows how computational mechanics can be used to determine the role of mechanical properties on the applied energy. In doing so, key properties are identified that allow the coating manufacturer to optimize their material for specific applications. Examples are given for several microelectronic applications.

  15. Reliability/redundancy trade-off evaluation for multiplexed architectures used to implement quantum dot based computing

    SciTech Connect

    Bhaduri, D.; Shukla, S. K.; Graham, P. S.; Gokhale, M.

    2004-01-01

    With the advent of nanocomputing, researchers have proposed Quantum Dot Cellular Automata (QCA) as one of the implementation technologies. The majority gate is one of the fundamental gates implementable with QCAs. Moreover, majority gates play an important role in defect-tolerant circuit implementations for nanotechnologies due to their use in redundancy mechanisms such as TMR, CTMR, etc. Therefore, providing reliable implementation of majority logic using some redundancy mechanism is extremely important. This problem was addressed by von Neumann in 1956, in the form of 'majority multiplexing', and since then several analytical probabilistic models have been proposed to analyze majority multiplexing circuits. However, such analytical approaches are combinatorially challenging and error prone. Also, the previous analyses did not distinguish between permanent faults at the gates and transient faults due to noisy interconnects or noise effects on gates. In this paper, we provide explicit fault models for transient and permanent errors at the gates and noise effects at the interconnects. We model majority multiplexing in a probabilistic system description language, and use probabilistic model checking to analyze the effects of our fault models on the different reliability/redundancy trade-offs for majority multiplexing configurations. We also draw parallels with another fundamental logic gate multiplexing technique, namely NAND multiplexing. Tools and methodologies for analyzing redundant architectures that use majority gates will help logic designers to quickly evaluate the amount of redundancy needed to achieve a given level of reliability. VLSI designs at the nanoscale will utilize implementation fabrics prone to faults of a permanent and transient nature, and the interconnects will be extensively affected by noise, hence the need for tools that can capture probabilistically quantified fault models and provide quick evaluation of the trade-offs. A comparative study of NAND multiplexing vs. majority multiplexing is also needed in case designers are confronted with the choice to implement redundancy in either way. This paper provides models, methodologies and tools for these much needed analyses.

  16. Reliability tests of ultrasonic and thermosonic wire bonds

    NASA Astrophysics Data System (ADS)

    Lizak, T.; Kociubiński, A.

    2015-09-01

    This paper presents an analysis of the mechanical strength and reliability of wire bonds in the context of the bonding technique applied, the wire material and substrate type used, and the bonding parameters. The investigation covers the selection of parameters affecting the formation of effective wire bonds on a 53XX F&K Delvotec bonder and the implementation of wire bonds with ultrasonic and thermosonic techniques, using various substrates combined with gold and aluminum wires of 25 µm diameter. Furthermore, reliability and quality tests made by the bond pull technique are presented and discussed.

  17. FAROW: A tool for fatigue and reliability of wind turbines

    SciTech Connect

    Veers, P.S.; Lange, C.H.; Winterstein, S.R.

    1993-07-01

    FAROW is a computer program that evaluates the fatigue and reliability of wind turbine components using structural reliability methods. A deterministic fatigue life formulation is based on functional forms of three basic parts of wind turbine fatigue calculation: (1) the loading environment, (2) the gross level of structural response given the load environment, and (3) the local failure criterion given both load environment and gross stress response. The calculated lifetime is compared with a user-specified target lifetime to assess probabilities of premature failure. The parameters of the functional forms can be defined as either constants or random variables. The reliability analysis uses the deterministic lifetime calculation as the limit state function of a FORM/SORM (first- and second-order reliability methods) calculation based on techniques developed by Rackwitz. Besides the probability of premature failure, FAROW calculates the mean lifetime, the relative importance of each of the random variables, and the sensitivity of the results to all of the input parameters, both constant inputs and the parameters that define the random variable inputs. The ability to check the probability of failure with Monte Carlo simulation is included as an option.
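
    FAROW's limit-state idea reduces, in the Monte Carlo option mentioned above, to sampling the random inputs of the deterministic lifetime calculation and counting how often the result falls short of the target. The closed-form lifetime model and the distributions below are invented placeholders, not FAROW's actual functional forms, which FORM/SORM evaluates far more cheaply than raw sampling.

        import random

        def lifetime_years(wind_scale, stress_factor, sn_exponent):
            """Stand-in for the deterministic fatigue-life calculation:
            life shrinks as a power of the load level (assumed form)."""
            return 100.0 / (wind_scale * stress_factor) ** sn_exponent

        def premature_failure_probability(target_years=20.0, trials=100_000):
            failures = 0
            for _ in range(trials):
                wind = random.lognormvariate(0.0, 0.15)    # loading environment
                stress = random.lognormvariate(0.0, 0.20)  # gross response
                m = random.gauss(3.0, 0.2)                 # local S-N criterion
                if lifetime_years(wind, stress, m) < target_years:
                    failures += 1
            return failures / trials

        print(f"P(lifetime < target) ~ {premature_failure_probability():.4f}")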

  18. Permission Forms

    ERIC Educational Resources Information Center

    Zirkel, Perry A.

    2005-01-01

    The prevailing practice in public schools is to routinely require permission or release forms for field trips and other activities that pose potential for liability. The legal status of such forms varies, but they are generally considered to be neither rock-solid protection nor legally valueless in terms of immunity. The following case and the…

  19. Reliable and robust entanglement witness

    E-print Network

    Xiao Yuan; Quanxin Mei; Shan Zhou; Xiongfeng Ma

    2015-12-08

    Entanglement, a critical resource for quantum information processing, needs to be witnessed in many practical scenarios. Theoretically, witnessing entanglement is done by measuring a special Hermitian observable, called an entanglement witness (EW), which has non-negative expected outcomes for all separable states but can have negative expectations for certain entangled states. In practice, an EW implementation may suffer from two problems. The first is reliability: due to unreliable realization devices, a separable state could be falsely identified as an entangled one. The second relates to robustness: a witness may not be optimal for a target state and may fail to identify its entanglement. To overcome the reliability problem, we employ a recently proposed measurement-device-independent entanglement witness, in which the correctness of the conclusion is independent of the implemented measurement devices. To overcome the robustness problem, we optimize the EW to draw a better conclusion given certain experimental data. With the proposed EW scheme, where only the data postprocessing needs to be modified compared with the original measurement-device-independent scheme, one can efficiently take advantage of the measurement results to draw maximally reliable conclusions.
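
    For context, a standard textbook witness (not the measurement-device-independent construction of this paper): W = I/2 - |Phi+><Phi+| has a non-negative expectation on every separable state but a negative one on the Bell state itself.

        import numpy as np

        # Bell state |Phi+> = (|00> + |11>)/sqrt(2) and the witness W.
        phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
        W = np.eye(4) / 2 - np.outer(phi_plus, phi_plus)

        rho_entangled = np.outer(phi_plus, phi_plus)  # the Bell state
        rho_separable = np.eye(4) / 4                 # maximally mixed state

        print("Tr(W rho_Bell)      =", np.trace(W @ rho_entangled))  # -0.5
        print("Tr(W rho_separable) =", np.trace(W @ rho_separable))  # +0.25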

  20. Reliability in individual monitoring service.

    PubMed

    Mod Ali, N

    2011-03-01

    As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading the reporting program to the web-based e-SSDL marks a major improvement in the overall reliability of Nuclear Malaysia's IMS. The system is a vital step in providing a user-friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country. PMID:21147789

  1. Interrater reliability of Risk Matrix 2000/s.

    PubMed

    Wakeling, Helen C; Mann, Ruth E; Milner, Rebecca J

    2011-12-01

    Actuarial risk assessment instruments for sexual offenders are often used in high-stakes decision making and therefore should be subject to stringent reliability and validity testing. Furthermore, those involved in the risk assessment of sexual offenders should be aware of the factors that may affect the reliability of these instruments. The present study examined the interrater reliability of the Risk Matrix 2000/s between one field rater and one independent rater with a sample of more than 100 sexual offenders. The results indicated good interrater reliability of the tool, although reliability varies from item to item. A number of factors were identified that seem to reduce the reliability of scoring. The present findings are strengthened by examining interrater reliability of the tool in the usual practitioner context and by calculating a range of reliability statistics. Strategies are suggested to increase reliability in the use of actuarial tools in routine practice. PMID:22114173
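
    One of the "range of reliability statistics" such studies report is Cohen's kappa, the chance-corrected agreement between the field rater and the independent rater. A self-contained sketch with hypothetical item scores (the study's actual data are not reproduced here):

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            """Chance-corrected agreement between two raters on the same cases."""
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            freq_a, freq_b = Counter(rater_a), Counter(rater_b)
            expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
            return (observed - expected) / (1 - expected)

        # Hypothetical item scores from a field rater and an independent rater.
        field = [0, 1, 2, 1, 0, 2, 1, 1, 0, 2, 2, 1]
        indep = [0, 1, 2, 0, 0, 2, 1, 1, 1, 2, 2, 1]
        print(f"kappa = {cohens_kappa(field, indep):.2f}")  # ~0.74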

  2. Interrater Reliability of Risk Matrix 2000/s.

    PubMed

    Wakeling, Helen C; Mann, Ruth E; Milner, Rebecca J

    2011-01-01

    Actuarial risk assessment instruments for sexual offenders are often used in high-stakes decision making and therefore should be subject to stringent reliability and validity testing. Furthermore, those involved in the risk assessment of sexual offenders should be aware of the factors that may affect the reliability of these instruments. The present study examined the interrater reliability of the Risk Matrix 2000/s between one field rater and one independent rater with a sample of more than 100 sexual offenders. The results indicated good interrater reliability of the tool, although reliability varies from item to item. A number of factors were identified that seem to reduce the reliability of scoring. The present findings are strengthened by examining interrater reliability of the tool in the usual practitioner context and by calculating a range of reliability statistics. Strategies are suggested to increase reliability in the use of actuarial tools in routine practice. PMID:21216783

  3. A parallel algorithm for implicit depletant simulations

    NASA Astrophysics Data System (ADS)

    Glaser, Jens; Karas, Andrew S.; Glotzer, Sharon C.

    2015-11-01

    We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method that tracks depletants explicitly and lacks cluster moves, for systems of colloid packing fraction φc < 0.50, and additionally enables simulation of the fluid-solid transition.

  4. Catalytic Parallel Kinetic Resolution under Homogeneous Conditions

    PubMed Central

    Duffey, Trisha A.; MacKay, James A.; Vedejs, Edwin

    2010-01-01

    Two complementary chiral catalysts, the phosphine 8d and the DMAP-derived ent-23b, are used simultaneously to selectively activate one of a mixture of two different achiral anhydrides as acyl donors under homogeneous conditions. The resulting activated intermediates 25 and 26 react with the racemic benzylic alcohol 5 to form enantioenriched esters (R)-24 and (S)-17 by fully catalytic parallel kinetic resolution (PKR). The aroyl ester (R)-24 is obtained with near-ideal enantioselectivity for the PKR process, but (S)-17 is contaminated by ca. 8% of the minor enantiomer (R)-17 resulting from a second pathway via formation of mixed anhydride 24 and its activation by 8d. PMID:20557113

  5. The reliability of environmental measures of the college alcohol environment.

    PubMed

    Clapp, John D; Whitney, Mike; Shillington, Audrey M

    2002-01-01

    Much of what we know about students' drinking patterns and problems related to alcohol use is based on survey research. Although local and national survey data are important to alcohol-prevention projects, they do not sufficiently capture the complexity of the alcohol environment. Environmental prevention approaches to alcohol-related problems have been shown to be effective in community settings and researchers have begun to study and adapt such approaches for use on college campuses. Many environmental approaches require systematic scanning of the campus alcohol environment. This study assessed the inter-rater reliability of two environmental scanning tools (a newspaper content analysis form and a bulletin analysis form) designed to identify alcohol-related advertisements targeting college students. Inter-rater reliability for these forms varied across different rating categories and ranged from poor to excellent. Suggestions for future research are addressed. PMID:12556134

  6. Stability, Nonlinearity and Reliability of Electrostatically Actuated MEMS Devices

    PubMed Central

    Zhang, Wen-Ming; Meng, Guang; Chen, Di

    2007-01-01

    Electrostatic actuation is a branch of micro-electro-mechanical systems (MEMS) with a wide range of applications in sensing and actuating devices. This paper provides a detailed survey and analysis of the electrostatic forces of importance in MEMS: their physical model, scaling effects, stability, nonlinearity and reliability. Once the effects of electrostatic forces in MEMS are understood, many phenomena of practical importance, such as pull-in instability; the effects of effective stiffness, dielectric charging, stress gradient and temperature on the pull-in voltage; nonlinear dynamic effects; and reliability, can be explained scientifically, and consequently the great potential of MEMS technology can be explored and utilized effectively. A simplified parallel-plate capacitor model is proposed to investigate the resonance response, inherent nonlinearity, stiffness-softening effect and coupled nonlinear effects of typical electrostatically actuated MEMS devices. Many failure modes and mechanisms are discussed, along with methods and techniques for analyzing and reducing failures, including materials selection, sound design and extension of the controllable travel range. Numerical simulations and discussions indicate that the effects of instability, nonlinearity and reliability under electrostatic forces cannot be ignored and need further investigation.
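
    In its standard lumped form, the simplified parallel-plate capacitor model mentioned above gives the classic pull-in result: the movable plate becomes unstable once electrostatic attraction has closed one third of the initial gap. A sketch with assumed (illustrative) device parameters:

        import math

        EPS0 = 8.854e-12  # vacuum permittivity, F/m

        def pull_in_voltage(k, gap, area):
            """Lumped parallel-plate pull-in voltage:
            V_pi = sqrt(8 * k * g0^3 / (27 * eps0 * A))."""
            return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

        k = 10.0             # suspension stiffness, N/m (assumed)
        gap = 2e-6           # initial gap, m (assumed)
        area = (100e-6)**2   # 100 um x 100 um plate (assumed)
        print(f"Pull-in voltage ~ {pull_in_voltage(k, gap, area):.1f} V")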

  7. The Effect of a Looker's Past Reliability on Infants' Reasoning about Beliefs

    ERIC Educational Resources Information Center

    Poulin-Dubois, Diane; Chow, Virginia

    2009-01-01

    We investigated whether 16-month-old infants' past experience with a person's gaze reliability influences their expectation about the person's ability to form beliefs. Infants were first administered a search task in which they observed an experimenter show excitement while looking inside a box that either contained a toy (reliable looker…

  8. 76 FR 73608 - Reliability Technical Conference, North American Electric Reliability Corporation, Public Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ...Mosher, Senior Director of Policy Analysis and Reliability, American Public Power Association...Vice President and Director of Reliability Assessment and Performance Analysis, North American Electric Reliability Corporation Michael...

  9. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ...interconnection by ensuring that the reliability coordinator has the data...and monitor Interconnection Reliability Operating Limits (IROL...Operational Planning Analysis'' and ``Real Time Assessment...1\\ Mandatory Reliability Standards for...

  10. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the Cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code but, since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the developer of parallelized software faces.

  11. An intercalation-locked parallel-stranded DNA tetraplex

    DOE PAGESBeta

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    2015-01-27

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. 1H–1H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.

  12. An intercalation-locked parallel-stranded DNA tetraplex

    SciTech Connect

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    2015-01-27

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. 1H–1H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.

  13. An intercalation-locked parallel-stranded DNA tetraplex

    PubMed Central

    Tripathi, Shailesh; Zhang, Daoning; Paukstelis, Paul J.

    2015-01-01

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5′-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. 1H–1H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures. PMID:25628357

  14. An intercalation-locked parallel-stranded DNA tetraplex.

    PubMed

    Tripathi, Shailesh; Zhang, Daoning; Paukstelis, Paul J

    2015-02-18

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson-Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(AC(Br)UCGGA(Br)UGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A-A base pairs between adjacent G-G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. (1)H-(1)H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures. PMID:25628357

  15. Parallel VLSI Circuit Analysis and Optimization 

    E-print Network

    Ye, Xiaoji

    2012-02-14

    ...circuit simulation techniques and achieves superlinear speedup in practice. The second part of the dissertation discusses parallel circuit optimization: a modified asynchronous parallel pattern search (APPS) based method which utilizes the efficient...

  16. Adaptively Parallel Processor Allocation for Cilk Jobs

    E-print Network

    Sen, Siddhartha

    The problem of allocating processor resources fairly and efficiently to parallel jobs has been studied extensively in the past. Most of this work, however, assumes that the instantaneous parallelism of the jobs is known ...

  17. Parallelizing alternating direction implicit solver on GPUs

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a parallel Alternating Direction Implicit (ADI) solver on GPUs. Our implementation significantly improves existing implementations in two aspects. First, we address the scalability issue of existing Parallel Cyclic Reduction (PCR) implementations by eliminating their hardware resource con...
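
    For reference, the Parallel Cyclic Reduction recurrence itself (a serial restatement, not the authors' GPU implementation): at each step every row eliminates its neighbours at distance s simultaneously, so all rows can be updated independently, and after ceil(log2 n) steps the system is diagonal.

        import numpy as np

        def pcr_solve(a, b, c, d):
            """Solve a tridiagonal system by PCR. Row i reads only rows
            i-s and i+s of the previous step, so the inner loop maps
            naturally onto one GPU thread per row (run serially here)."""
            a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
            n, s = len(b), 1
            while s < n:
                na, nb, nc, nd = a.copy(), b.copy(), c.copy(), d.copy()
                for i in range(n):
                    al = -a[i] / b[i - s] if i - s >= 0 else 0.0
                    ga = -c[i] / b[i + s] if i + s < n else 0.0
                    na[i] = al * a[i - s] if i - s >= 0 else 0.0
                    nc[i] = ga * c[i + s] if i + s < n else 0.0
                    nb[i] = b[i] + (al * c[i - s] if i - s >= 0 else 0.0) \
                                 + (ga * a[i + s] if i + s < n else 0.0)
                    nd[i] = d[i] + (al * d[i - s] if i - s >= 0 else 0.0) \
                                 + (ga * d[i + s] if i + s < n else 0.0)
                a, b, c, d = na, nb, nc, nd
                s *= 2
            return d / b  # fully decoupled: b[i] * x[i] = d[i]

        # Check against a dense solve on a small diagonally dominant system.
        rng = np.random.default_rng(0)
        n = 16
        a = rng.random(n); a[0] = 0.0
        c = rng.random(n); c[-1] = 0.0
        b = 4.0 + rng.random(n)
        d = rng.random(n)
        A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        print(np.allclose(pcr_solve(a, b, c, d), np.linalg.solve(A, d)))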

  18. Implementing clips on a parallel computer

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1987-01-01

    The C Language Integrated Production System (CLIPS) is a forward-chaining rule-based language providing training and delivery for expert systems. Conceptually, rule-based languages have great potential for benefiting from the inherent parallelism of the algorithms they employ. During each cycle of execution, a knowledge base of information is compared against a set of rules to determine whether any rules are applicable. Parallelism can also be employed with multiple cooperating expert systems. To investigate the potential benefits of using a parallel computer to speed up the comparison of facts to rules in expert systems, a parallel version of CLIPS was developed for the FLEX/32, a large-grain parallel computer. The FLEX implementation takes a macroscopic approach to achieving parallelism by splitting whole sets of rules among several processors rather than by splitting the components of an individual rule among processors. The parallel CLIPS prototype demonstrates the potential advantages of integrating expert system tools with parallel computers.
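
    A toy sketch of the macroscopic strategy described above: split whole rule sets among workers rather than parallelizing within a rule. Real CLIPS rules involve full pattern matching with variables; here a "rule" is simply a set of condition facts, and every name is hypothetical.

        from concurrent.futures import ThreadPoolExecutor

        FACTS = {"duck", "quacks", "feathers"}
        RULES = {f"rule-{i}": {"duck", f"cond-{i % 7}"} for i in range(20)}
        RULES["rule-duck"] = {"duck", "quacks"}

        def match_chunk(rule_items):
            """One processor's share: test each rule's conditions
            against the global fact set."""
            return [name for name, conds in rule_items if conds <= FACTS]

        def parallel_match(rules, workers=4):
            items = list(rules.items())
            chunks = [items[i::workers] for i in range(workers)]
            with ThreadPoolExecutor(max_workers=workers) as pool:
                return [r for part in pool.map(match_chunk, chunks)
                        for r in part]

        print(parallel_match(RULES))  # -> ['rule-duck']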

  19. Parallelizing Sequential Programs with Statistical Accuracy Tests

    E-print Network

    Misailovic, Sasa

    We present QuickStep, a novel system for parallelizing sequential programs. Unlike standard parallelizing compilers (which are designed to preserve the semantics of the original sequential computation), QuickStep is instead ...

  20. Speculative parallelism in Intel Cilk Plus

    E-print Network

    Perez, Ruben (Ruben M.)

    2012-01-01

    Certain algorithms can be effectively parallelized at the cost of performing some redundant work. One example is searching an unordered tree graph for a particular node. Each subtree can be searched in parallel by a separate ...

  1. Parallel Coupled Micro-Macro Actuators

    E-print Network

    Morrell, John Bryant

    1996-01-01

    This thesis presents a new actuator system consisting of a micro-actuator and a macro-actuator coupled in parallel via a compliant transmission. The system is called the Parallel Coupled Micro-Macro Actuator, or PaCMMA. ...

  2. Parallel information and computation with restitution for noise-tolerant nanoscale logic networks

    NASA Astrophysics Data System (ADS)

    Sadek, Akram S.; Nikolic, Konstantin; Forshaw, Michael

    2004-01-01

    Nanoelectronic devices are anticipated to become exceedingly noisy as they are scaled towards thermodynamic limits. Hence the development of nanoscale classical information systems will require optimal schemes for reliable information processing in the presence of noise. We present a novel, highly noise-tolerant computer architecture based on the work of von Neumann that may enable the construction of reliable nanocomputers comprised of noisy gates. The fundamental principles of this technique of parallel restitution are parallel processing by redundant logic gates, parallelism in the interconnects between gate resources, and intermittent signal restitution performed in parallel. The results of our mathematical model, verified by Monte Carlo simulations, show that nanoprocessors consisting of ~10^12 gates incorporating this technique can be made 90% reliable over 10 years of continuous operation with a gate error probability per actuation of ε ~ 10^-4 and a redundancy of R ~ 50. This compares very favourably with corresponding results for modular redundant architectures, which require ε ~ 5 × 10^-17 with R ~ 50, and ε ~ 10^-31 with no noise tolerance. Arbitrary reliability is possible within a noise limit of ε ≈ 0.01077, with massive redundancy. We show parallel restitution to be a general paradigm applicable to different kinds of information processing, including neural communication. Significantly, we show how our treatment of para-restituted computation as a statistical ensemble coupled to a heat bath allows consideration of the computation entropy of logic gates, and we tentatively sketch a thermodynamic theory of noisy computation that might set fundamental physical limits on scaling classical computation to the nanoscale. Our preliminary work indicates that classical computation may be confined to the macroscale by noise, quantum computation possibly being the only information processing possible at the extreme nanoscale.

  3. Parallel supercomputing with commodity components

    NASA Technical Reports Server (NTRS)

    Warren, M. S.; Goda, M. P.; Becker, D. J.

    1997-01-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  4. Hyper-Systolic Parallel Computing

    E-print Network

    Th. Lippert; A. Seyfried; A. Bode; K. Schilling

    1995-07-25

    A new class of parallel algorithms is introduced that can achieve a complexity of O(n^3/2), with respect to interprocessor communication, in the exact computation of systems with pairwise mutual interactions of all elements. Hitherto, conventional methods exhibit a communication complexity of O(n^2). The amount of computation is not altered for the new algorithm, which can be formulated as a kind of h-range problem, known from the mathematical field of additive number theory. We demonstrate the reduction in communication expense by comparing the standard-systolic algorithm and the new algorithm on the Connection Machine CM5 and the CRAY T3D. The parallel method can be useful in various scientific and engineering fields such as exact n-body dynamics with long-range forces, polymer chains, protein folding or signal processing.

  5. Parallel supercomputing with commodity components

    SciTech Connect

    Warren, M.S.; Goda, M.P.; Becker, D.J.

    1997-09-01

    We have implemented a parallel computer architecture based entirely upon commodity personal computer components. Using 16 Intel Pentium Pro microprocessors and switched fast ethernet as a communication fabric, we have obtained sustained performance on scientific applications in excess of one Gigaflop. During one production astrophysics treecode simulation, we performed 1.2 x 10^15 floating point operations (1.2 Petaflops) over a three week period, with one phase of that simulation running continuously for two weeks without interruption. We report on a variety of disk, memory and network benchmarks. We also present results from the NAS parallel benchmark suite, which indicate that this architecture is competitive with current commercial architectures. In addition, we describe some software written to support efficient message passing, as well as a Linux device driver interface to the Pentium hardware performance monitoring registers.

  6. Parallel processing spacecraft communication system

    NASA Technical Reports Server (NTRS)

    Bolotin, Gary S. (Inventor); Donaldson, James A. (Inventor); Luong, Huy H. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    An uplink controlling assembly speeds data processing using a special parallel codeblock technique. A correct start sequence initiates processing of a frame. Two possible start sequences can be used, and the one which is used determines whether data polarity is inverted or non-inverted. Processing continues until uncorrectable errors are found. The frame ends by intentionally sending a block with an uncorrectable error. Each of the codeblocks in the frame has a channel ID. Each channel ID can be separately processed in parallel. This obviates the problem of waiting for error correction processing. If that channel number is zero, however, it indicates that the frame of data represents a critical command only. That data is handled in a special way, independent of the software. Otherwise, the processed data is further handled using special double-buffering techniques to avoid problems from overrun. When overrun does occur, the system takes action to lose only the oldest data.
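
    The overrun policy described ("lose only the oldest data"), combined with double buffering, can be sketched with two swappable queues. The capacities and names below are illustrative, not those of the patented system.

        from collections import deque

        class DoubleBuffer:
            def __init__(self, capacity=4):
                self.capacity = capacity
                self.fill = deque(maxlen=capacity)  # receiver writes here
                self.drain = deque()                # consumer reads here

            def receive(self, codeblock):
                # A bounded deque discards from the head when full, so an
                # overrun loses only the oldest pending codeblock.
                self.fill.append(codeblock)

            def swap_and_drain(self):
                self.fill, self.drain = deque(maxlen=self.capacity), self.fill
                while self.drain:
                    yield self.drain.popleft()

        buf = DoubleBuffer(capacity=4)
        for blk in range(6):  # six arrivals overrun a capacity of four
            buf.receive(blk)
        print(list(buf.swap_and_drain()))  # -> [2, 3, 4, 5]; 0 and 1 lost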

  7. Good form.

    PubMed

    Sorrel, Amy Lynn

    2015-03-01

    New standardized prior authorization forms for health care services and prescription drugs released by the Texas Department of Insurance promise to alleviate administrative busy work and its related costs. PMID:25761070

  8. Minisymposium MS19 Parallel Crypto

    E-print Network

    Kaminsky, Alan

    Rochester Institute of Technology, February 24, 2010. 2010 SIAM Conference on Parallel Processing. [Slide chart: "Crypto Attack Times on a Top Supercomputer", plotting attack times from 1 minute to 1 year against the years 2009-2019.]

  9. Parallel Processing in Combustion Analysis

    NASA Technical Reports Server (NTRS)

    Schunk, Richard Gregory; Chung, T. J.

    2000-01-01

    The objective of this research is to demonstrate the application of the Flow-field Dependent Variation (FDV) method to a problem of current interest in supersonic chemical combustion. Due in part to the stiffness of the chemical reactions, the solution of such problems on unstructured three dimensional grids often dictates the use of parallel computers. Preliminary results for the injection of a supersonic hydrogen stream into vitiated air are presented.

  10. Efficient, massively parallel eigenvalue computation

    NASA Technical Reports Server (NTRS)

    Huo, Yan; Schreiber, Robert

    1993-01-01

    In numerical simulations of disordered electronic systems, one of the most common approaches is to diagonalize random Hamiltonian matrices and to study the eigenvalues and eigenfunctions of a single electron in the presence of a random potential. An effort to implement a matrix diagonalization routine for real symmetric dense matrices on massively parallel SIMD computers, the MasPar MP-1 and MP-2 systems, is described. Results of numerical tests and timings are also presented.
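
    The single-node kernel of that computation is easy to state. The sketch below draws a generic Gaussian real symmetric matrix (the paper's actual disorder model is not specified in this abstract) and diagonalizes it with the dense symmetric solver:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 500
        m = rng.normal(size=(n, n))
        H = (m + m.T) / 2                     # symmetrize: real symmetric
        energies, states = np.linalg.eigh(H)  # all eigenvalues/eigenvectors
        print("lowest five eigenvalues:", np.round(energies[:5], 3))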

  11. Force user's manual: A portable, parallel FORTRAN

    NASA Technical Reports Server (NTRS)

    Jordan, Harry F.; Benten, Muhammad S.; Arenstorf, Norbert S.; Ramanan, Aruna V.

    1990-01-01

    The use of Force, a parallel, portable FORTRAN, on shared-memory parallel computers is described. Force simplifies writing code for parallel computers and, once the parallel code is written, it is easily ported to other computers on which Force is installed. Although Force is nearly the same on all computers, specific details are included for the Cray-2, Cray Y-MP, Convex 220, Flex/32, Encore, Sequent and Alliant computers on which it is installed.

  12. Parallel machine architecture and compiler design facilities

    NASA Technical Reports Server (NTRS)

    Kuck, David J.; Yew, Pen-Chung; Padua, David; Sameh, Ahmed; Veidenbaum, Alex

    1990-01-01

    The objective is to provide an integrated simulation environment for studying and evaluating various issues in designing parallel systems, including machine architectures, parallelizing compiler techniques, and parallel algorithms. The status of the Delta project (whose objective is to provide a facility for rapid prototyping of parallelizing compilers that can target different machine architectures) is summarized. Included are surveys of the program manipulation tools developed, the environmental software supporting Delta, and the compiler research projects in which Delta has played a role.

  13. Automatic Multilevel Parallelization Using OpenMP

    NASA Technical Reports Server (NTRS)

    Jin, Hao-Qiang; Jost, Gabriele; Yan, Jerry; Ayguade, Eduard; Gonzalez, Marc; Martorell, Xavier; Biegel, Bryan (Technical Monitor)

    2002-01-01

    In this paper we describe the extension of the CAPO (CAPtools (Computer Aided Parallelization Toolkit) OpenMP) parallelization support tool to support multilevel parallelism based on OpenMP directives. CAPO generates OpenMP directives with extensions supported by the NanosCompiler to allow for directive nesting and definition of thread groups. We report some results for several benchmark codes and one full application that have been parallelized using our system.

  14. Numerical simulation of supersonic wake flow with parallel computers

    SciTech Connect

    Wong, C.C.; Soetrisno, M.

    1995-07-01

    Simulating the supersonic wake flow field behind a conical body is a computationally intensive task. It requires a large number of computational cells to capture the dominant flow physics and a robust numerical algorithm to obtain a reliable solution. High-performance parallel computers, with their distributed processing and data storage capability, can meet this need: they have larger computational memory and faster computing times than conventional vector computers. We apply the PINCA Navier-Stokes code to simulate a wind-tunnel supersonic wake experiment on Intel Gamma, Intel Paragon, and IBM SP2 parallel computers. These simulations are performed to study the mean flow in the near-wake region of a sharp, 7-degree half-angle, adiabatic cone at Mach number 4.3 and a freestream Reynolds number of 40,600. Overall, the numerical solutions capture the general features of the hypersonic laminar wake flow and compare favorably with the wind tunnel data. With a refined, clustered grid distribution in the recirculation zone, the calculated location of the rear stagnation point is consistent with the 2D axisymmetric and 3D experiments. In this study, we also demonstrate the importance of having a large local memory capacity within a computer node and of effectively utilizing the number of computer nodes to achieve good parallel performance when simulating a complex, large-scale wake flow problem.

  15. WISC-IV short-form 1 Running head: WISC-IV Short-Form

    E-print Network

    Crawford, John R.

    An Index-Based Short-Form of the WISC-IV with Accompanying Analysis of the Reliability… British Journal of Clinical Psychology, in press. Running head: WISC-IV Short-Form. Abstract. Objectives: To develop an Index…

  16. Highly parallel sparse Cholesky factorization

    NASA Technical Reports Server (NTRS)

    Gilbert, John R.; Schreiber, Robert

    1990-01-01

    Several fine grained parallel algorithms were developed and compared to compute the Cholesky factorization of a sparse matrix. The experimental implementations are on the Connection Machine, a distributed memory SIMD machine whose programming model conceptually supplies one processor per data element. In contrast to special purpose algorithms in which the matrix structure conforms to the connection structure of the machine, the focus is on matrices with arbitrary sparsity structure. The most promising algorithm is one whose inner loop performs several dense factorizations simultaneously on a 2-D grid of processors. Virtually any massively parallel dense factorization algorithm can be used as the key subroutine. The sparse code attains execution rates comparable to those of the dense subroutine. Although at present architectural limitations prevent the dense factorization from realizing its potential efficiency, it is concluded that a regular data parallel architecture can be used efficiently to solve arbitrarily structured sparse problems. A performance model is also presented and it is used to analyze the algorithms.

  17. Meeting Human Reliability Requirements through Human Factors Design, Testing, and Modeling

    SciTech Connect

    R. L. Boring

    2007-06-01

    In the design of novel systems, it is important for the human factors engineer to work in parallel with the human reliability analyst to arrive at the safest achievable design that meets design team safety goals and certification or regulatory requirements. This paper introduces the System Development Safety Triptych, a checklist of considerations for the interplay of human factors and human reliability through design, testing, and modeling in product development. This paper also explores three phases of safe system development, corresponding to the conception, design, and implementation of a system.

  18. Parallelizing Time With Polynomial Circuits

    E-print Network

    Boneh, Dan

    Parallelizing Time With Polynomial Circuits. Ryan Williams, Institute for Advanced Study, Princeton. …computations with circuits of polynomial size. We give an algorithmic size-depth tradeoff for parallelizing… parallel simulation yields logspace-uniform t^O(1)-size, O(t / log t)-depth Boolean circuits having semi…

  19. Structured Hardware Compilation of Parallel Programs

    E-print Network

    Luk, Wayne

    …implementation. A circuit module is developed for each control structure, such as sequential or parallel… The potential of this approach is evaluated. INTRODUCTION: Recent work has shown how parallel programs…

  20. Scheduling Computationally Intensive Data Parallel Programs

    E-print Network

    Scherson, Isaac D.

    Raghu Subramanian (Information and Computer Science, University of California, Irvine). Abstract: We consider the problem of how to run a workload of multiple parallel jobs on a single parallel machine. Jobs are assumed to be data…