Sample records for parallel forms reliability

  1. Reliability of a Parallel Pipe Network

    NASA Technical Reports Server (NTRS)

    Herrera, Edgar; Chamis, Christopher (Technical Monitor)

    2001-01-01

    The goal of this NASA-funded research is to advance research and education objectives in theoretical and computational probabilistic structural analysis, reliability, and life prediction methods for improved aerospace and aircraft propulsion system components. Reliability methods are used to quantify response uncertainties due to inherent uncertainties in design variables. In this report, several reliability methods are applied to a parallel pipe network. The observed responses are the head delivered by a main pump and the head values of two parallel lines at certain flow rates. The probability that the flow rates in the lines will be less than their specified minimums will be discussed.
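
    A sketch of the kind of computation this record describes: Monte Carlo estimation of the probability that a line's flow rate falls below its specified minimum. The normal distribution and all numbers below are illustrative assumptions, not values from the report.

      # Monte Carlo sketch: P(flow < specified minimum) for one line.
      # Distribution choice, mean, spread, and minimum are invented.
      import random

      def failure_probability(mean_flow, sd_flow, min_flow, trials=100_000):
          below = sum(1 for _ in range(trials)
                      if random.gauss(mean_flow, sd_flow) < min_flow)
          return below / trials

      print(failure_probability(mean_flow=50.0, sd_flow=5.0, min_flow=40.0))
      # ~0.023: about a 2.3% chance this line delivers less than its minimum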

  2. OPTIMAL RELIABILITY ALLOCATION IN SERIES-PARALLEL SYSTEMS FROM COMPONENTS' DISCRETE COST-RELIABILITY DATA SETS

    E-print Network

    Smith, Alice E.

    OPTIMAL RELIABILITY ALLOCATION IN SERIES-PARALLEL SYSTEMS FROM COMPONENTS' DISCRETE COST-RELIABILITY DATA SETS: A NESTED SIMULATED ANNEALING APPROACH. Subba Rao V. Majety, Srikanth… It is generally accepted that a component's cost is an increasing function of its reliability. Most researchers
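
    For reference, the objective such allocation methods optimize has a simple closed form: a series-parallel system works when every subsystem works, and a subsystem works when at least one of its parallel components does. A minimal sketch with invented component reliabilities:

      # System reliability of a series-parallel structure.
      # subsystems: list of lists of component reliabilities (invented).
      from math import prod

      def series_parallel_reliability(subsystems):
          # Parallel: a subsystem fails only if all of its components fail.
          # Series: the system works only if every subsystem works.
          return prod(1 - prod(1 - r for r in comps) for comps in subsystems)

      print(series_parallel_reliability([[0.9, 0.9], [0.8, 0.8, 0.8], [0.95]]))
      # ~0.933: redundancy lifts each subsystem above its individual parts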

  3. Parallel versions of FORM and more

    E-print Network

    Matthias Steinhauser; Takahiro Ueda; Jos A. M. Vermaseren

    2015-01-28

    We review the status of the parallel versions of the computer algebra system FORM. In particular, we provide a brief overview of the historical developments, discuss the strengths of ParFORM and TFORM, and mention typical applications. Furthermore, we briefly discuss the programs FIRE and FIESTA, which have also been developed within the Collaborative Research Center/TR 9 (CRC/TR 9).

  4. DR-nets: data-reconstruction networks for highly reliable parallel-disk systems

    Microsoft Academic Search

    Haruo Yokota

    1994-01-01

    We propose DR-nets, Data-Reconstruction networks, to construct massively parallel disk systems with large capacity, wide bandwidth, and high reliability. Each node of a DR-net has disks and is connected by links to form an interconnection network. To realize high reliability, nodes in a sub-network of the interconnection network organize a group for the parity calculation proposed for RAIDs. Inter-node communication

  5. Parallel algorithms for matrix normal forms

    Microsoft Academic Search

    Erich Kaltofen; M. S. Krishnamoorthy; B. David Saunders

    1990-01-01

    Here we offer a new randomized parallel algorithm that determines the Smith normal form of a matrix with entries being univariate polynomials with coefficients in an arbitrary field. The algorithm has two important advantages over our previous one: the multipliers relating the Smith form to the input matrix are computed, and the algorithm is probabilistic of Las Vegas type, i.e., always finds the correct
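
    The object this algorithm computes can be reproduced sequentially with standard computer algebra tools; for example, SymPy ships a Smith normal form routine. A small sketch over the integers (the paper's setting is matrices of univariate polynomials, and this is of course not the paper's parallel algorithm):

      # Sequential Smith normal form with SymPy, to illustrate the object
      # being computed; not the paper's randomized parallel algorithm.
      from sympy import Matrix, ZZ
      from sympy.matrices.normalforms import smith_normal_form

      M = Matrix([[ 2,  4,   4],
                  [-6,  6,  12],
                  [10, -4, -16]])
      print(smith_normal_form(M, domain=ZZ))   # invariant factors 2, 6, 12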

  6. Improving Reliability of Energy-Efficient Parallel Storage Systems by Disk Swapping

    E-print Network

    Qin, Xiao

    be transitioned to low power states to conserve energy. I/O load skewing techniques like PDC and MAID inherently… we first present a reliability model to quantitatively study the reliability of energy-efficient parallel disk systems. Keywords: parallel disk system, energy conservation, reliability, load balancing. I. INTRODUCTION Parallel

  7. Angles Formed by Parallel Lines and a Transversal

    NSDL National Science Digital Library

    Mrs. Brown

    2007-10-19

    In this lesson you will learn how to classify angles formed by parallel lines and a transversal as well as how to find the measures of these angles. You have probably heard of parallel lines but you probably don't know about all the special angles that are formed when a line intersects a set of parallel lines. Click on the lecture below to learn about these special angles. The lecture has sound so make sure your ...

  8. Modified surface plasmonic waveguide formed by nanometric parallel lines

    Microsoft Academic Search

    Wen-Rui Xue; Ya-Nan Guo; Wen-Mei Zhang

    2010-01-01

    In this paper, two kinds of modified surface plasmonic waveguides formed by nanometric parallel lines are proposed. The finite-difference frequency-domain method is used to study propagation properties of the fundamental mode supported by these surface plasmonic waveguide structures. Results show that the transverse magnetic field of the fundamental mode is mainly distributed in the face-to-face region formed by

  9. The Reliable Router: A Reliable and High-Performance Communication Substrate for Parallel Computers

    Microsoft Academic Search

    William J. Dally; Larry R. Dennison; David Harris; Kinhong Kan; Thucydides Xanthopoulos

    1994-01-01

    The Reliable Router (RR) is a network switching element targeted to two-dimensional mesh interconnection network topologies. It is designed to run at 100 MHz and reach a useful link bandwidth of 3.2 Gbit/sec. The Reliable Router uses adaptive routing coupled with link-level retransmission and a unique-token protocol to increase both performance and reliability. The RR can handle a single node or link failure anywhere in

  10. Alternate Forms Reliability of the Behavioral Relaxation Scale: Preliminary Results

    ERIC Educational Resources Information Center

    Lundervold, Duane A.; Dunlap, Angel L.

    2006-01-01

    Alternate forms reliability of the Behavioral Relaxation Scale (BRS; Poppen, 1998), a direct observation measure of relaxed behavior, was examined. A single BRS score, based on long duration observation (5-minute), has been found to be a valid measure of relaxation and is correlated with self-report and some physiological measures. Recently,…

  11. A Reliable Processor-Allocation Strategy for Mesh-Connected Parallel Systems

    Microsoft Academic Search

    Kyung-hee Seo; Sung-chun Kim

    2001-01-01

    Efficient utilization of processing resources in a large, multi-user parallel computer system depends on reliable processor-allocation algorithms. The paper presents an LSSA (L-shaped submesh allocation) strategy to reduce external fragmentation and job response time simultaneously. LSSA manipulates the shape of the required submesh to fit into the fragmented mesh system and accommodates incoming jobs faster than other strategies.

  12. Alternate-form reliability of the Dementia Rating Scale-2.

    PubMed

    Schmidt, Kara S; Mattis, Paul J; Adams, Jane; Nestor, Paul

    2005-06-01

    The Dementia Rating Scale-2 (DRS-2) is a frequently used assessment of cognitive status among older adults in both research and clinical practice. Despite its well-established psychometric properties, its use in serial assessments has posed limitations with regard to practice effects. The primary purpose of the present study is to provide preliminary evidence of alternate-form reliability for the DRS-2. A heterogeneous sample of 52 community-dwelling adults over age 60 with no reported diagnosis of dementia was administered the DRS-2 as well as a newly developed alternate form [DRS-2: AF; Schmidt, K. S. (2004). Dementia Rating Scale-2 Alternate Form: Manual supplement. Lutz, FL: Psychological Assessment Resources]. Our results reveal strong correlations between the two forms; further, no significant differences were found between total scale and subscale scores obtained from the two forms. Therefore, the DRS-2: AF may be a valuable assessment tool in both research and clinical arenas. PMID:15896558
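
    Alternate-form reliability of the kind reported here is usually estimated as the Pearson correlation between the two forms' total scores in the same examinees. A minimal sketch with invented scores:

      # Alternate-form reliability as the correlation between two forms
      # taken by the same people (all scores invented for illustration).
      from statistics import correlation   # Python 3.10+

      form_a = [130, 125, 138, 120, 142, 128, 135, 118]
      form_b = [128, 126, 136, 122, 140, 130, 133, 121]
      print(round(correlation(form_a, form_b), 3))   # r near 1: forms agree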

  13. Parameter Interval Estimation of System Reliability for Repairable Multistate Series-Parallel System with Fuzzy Data

    PubMed Central

    2014-01-01

    The purpose of this paper is to create an interval estimation of the fuzzy system reliability for the repairable multistate series-parallel system (RMSS). A two-sided fuzzy confidence interval for the fuzzy system reliability is constructed. The performance of the fuzzy confidence interval is considered based on the coverage probability and the expected length. In order to obtain the fuzzy system reliability, fuzzy set theory is applied to the system reliability problem when dealing with uncertainties in the RMSS. A fuzzy number with a triangular membership function is used for constructing the fuzzy failure rate and the fuzzy repair rate in the fuzzy reliability for the RMSS. The results show that a good interval estimator is one whose coverage probability attains the expected confidence coefficient with the narrowest expected length. The model presented herein is an effective estimation method when the sample size is n ≥ 100. In addition, the optimal α-cuts for the narrowest lower expected length and the narrowest upper expected length are considered. PMID:24987728
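
    The triangular fuzzy numbers used for the failure and repair rates have a simple membership function: it rises linearly from the left endpoint a to 1 at the peak b, then falls linearly to the right endpoint c. A sketch with invented parameters:

      # Membership function of a triangular fuzzy number (a, b, c), the form
      # used here for fuzzy failure and repair rates; numbers are invented.
      def triangular_membership(x, a, b, c):
          return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      print(triangular_membership(0.02, a=0.01, b=0.02, c=0.04))   # 1.0 at the peak
      print(triangular_membership(0.03, a=0.01, b=0.02, c=0.04))   # 0.5 on the way down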

  14. Reliability Optimization of Series-Parallel Systems Using a Genetic Algorithm

    E-print Network

    Smith, Alice E.

    Reliability Optimization of Series-Parallel Systems Using a Genetic Algorithm. David W. Coit, IEEE Student Member; Alice E. Smith, IEEE Member. University of Pittsburgh, Pittsburgh, PA 15261. Key words: genetic algorithm, combinatorial

  15. A New Reliable Approach for Two-Dimensional and Axisymmetric Unsteady Flows Between Parallel Plates

    NASA Astrophysics Data System (ADS)

    Sushila; Singh, Jagdev; Shishodia, Yadvendra S.

    2013-11-01

    The main aim of this work is to present a new reliable approach to compute an approximate solution of the system of nonlinear differential equations governing the problem of two-dimensional and axisymmetric unsteady flows due to normally expanding or contracting parallel plates by the homotopy perturbation method, and the Sumudu transform is adopted in the solution procedure. The method finds the solution without any discretization or restrictive assumptions and avoids the roundoff errors. The numerical solutions obtained by the proposed technique indicate that the approach is easy to implement and computationally very attractive.

  16. Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Gibson, Garth Alan

    1990-01-01

    During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems but, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.
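
    The parity scheme examined in the thesis is bitwise XOR across the data disks: a parity disk stores the XOR of all data blocks, so any single self-identifying failure can be rebuilt from the survivors. A minimal sketch:

      # Parity-based reconstruction of a single self-identifying disk
      # failure, as in RAID: parity = XOR of all data blocks.
      def xor_blocks(blocks):
          out = bytearray(len(blocks[0]))
          for block in blocks:
              for i, b in enumerate(block):
                  out[i] ^= b
          return bytes(out)

      disks = [b"\x01\x02", b"\x0f\x10", b"\xa0\x0b"]
      parity = xor_blocks(disks)
      lost = 1                                    # disk 1 fails, and says so
      survivors = [d for i, d in enumerate(disks) if i != lost]
      rebuilt = xor_blocks(survivors + [parity])  # XOR survivors with parity
      assert rebuilt == disks[lost]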

  17. Reliability of MRI-derived cortical and subcortical morphometric measures: Effects of pulse sequence, voxel geometry, and parallel imaging

    Microsoft Academic Search

    J. S. Wonderlick; D. A. Ziegler; P. Hosseini-Varnamkhasti; J. J. Locascio; A. Bakkour; A. van der Kouwe; C. Triantafyllou; S. Corkin; B. C. Dickerson

    2009-01-01

    Advances in magnetic resonance imaging (MRI) have contributed greatly to the study of neurodegenerative processes, psychiatric disorders, and normal human development, but the effect of such improvements on the reliability of downstream morphometric measures has not been extensively studied. We examined how MRI-derived neurostructural measures are affected by three technological advancements: parallel acceleration, increased spatial resolution, and the use of

  18. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel

  19. Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test

    ERIC Educational Resources Information Center

    Benton, Tom

    2013-01-01

    This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

  20. Porting an industrial sheet metal forming code to a distributed memory parallel computer

    Microsoft Academic Search

    G. P. Nikishkov; M. Kawka; A. Makinouchi; G. Yagawa; S. Yoshimura

    1998-01-01

    The parallel version of the sheet metal forming semi-implicit finite element code ITAS3D has been developed using the domain decomposition method and direct solution methods at both subdomain and interface levels. IBM Message Passing Library is used for data communication between tasks of the parallel code. Solutions of some sheet metal forming problems on IBM SP2 computer show that the

  1. Cyclic AMP Mediates a Presynaptic Form of LTP at Cerebellar Parallel Fiber Synapses

    Microsoft Academic Search

    Paul A Salin; Robert C Malenka; Roger A Nicoll

    1996-01-01

    The N-methyl-D-aspartate receptor–independent form of long-term potentiation (LTP) at hippocampal mossy fiber synapses requires presynaptic Ca2+–dependent activation of adenylyl cyclase. To determine whether this form of LTP might occur at other synapses, we examined cerebellar parallel fibers that, like hippocampal mossy fiber synapses, express high levels of the Ca2+/calmodulin-sensitive adenylyl cyclase I. Repetitive stimulation of parallel fibers caused a long-lasting

  2. Generating Random Parallel Test Forms Using CTT in a Computer-Based Environment.

    ERIC Educational Resources Information Center

    Weiner, John A.; Gibson, Wade M.

    1998-01-01

    Describes a procedure for automated-test-forms assembly based on Classical Test Theory (CTT). The procedure uses stratified random-content sampling and test-form preequating to ensure both content and psychometric equivalence in generating virtually unlimited parallel forms. Extends the usefulness of CTT in automated test construction. (Author/SLD)
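
    A sketch of the stratified random-content sampling idea (item bank, strata, and counts all invented): each form draws the same number of items from every content stratum, so the forms are content-parallel by construction.

      # Stratified random item sampling for CTT-based parallel form assembly.
      # A real assembler would also enforce item non-overlap and preequating.
      import random

      def assemble_form(bank, per_stratum, rng):
          # bank: dict mapping content stratum -> list of item ids
          return [item for stratum in sorted(bank)
                  for item in rng.sample(bank[stratum], per_stratum)]

      rng = random.Random(7)
      bank = {"algebra": list(range(0, 20)),
              "geometry": list(range(20, 40)),
              "data": list(range(40, 60))}
      form_a = assemble_form(bank, per_stratum=5, rng=rng)
      form_b = assemble_form(bank, per_stratum=5, rng=rng)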

  3. Reliability

    NSDL National Science Digital Library

    Edwin P. Christmann

    2008-11-01

    In essence, reliability is the consistency of test results. To understand the meaning of reliability and how it relates to validity, imagine going to an airport to take flight #007 from Pittsburgh to San Diego. If, every time the airplane makes the flight

  4. The test-retest reliability of the Form 90-DWI: an instrument for assessing intoxicated driving.

    PubMed

    Hettema, Jennifer E; Miller, William R; Tonigan, J Scott; Delaney, Harold D

    2008-03-01

    Although driving while intoxicated (DWI) is a pervasive problem, reliable measures of this behavior have been elusive. In the present study, the Form 90, a widely utilized alcohol and substance use instrument, was adapted for measurement of DWI and related behaviors. Levels of reliability for the adapted instrument, the Form 90-DWI, were tested among a university sample of 60 undergraduate students who had consumed alcohol during the past 90 days. The authors administered the instrument once during an intake interview and again, 7-30 days later, to determine levels of test-retest reliability. Overall, the Form 90-DWI demonstrated high levels of reliability for many general drinking and DWI behaviors. Levels of reliability were lower for riding with an intoxicated driver and for variables involving several behavioral conjunctions, such as seat belt use and the presence of passengers when driving with a blood alcohol concentration above .08. Overall, the Form 90-DWI shows promise as a reliable measure of DWI behavior in research on treatment outcome and prevention. PMID:18298237

  5. Test-retest reliability of the dementia rating scale-2: alternate form.

    PubMed

    Schmidt, Kara S; Mattis, Paul J; Adams, Jane; Nestor, Paul

    2005-01-01

    The purpose of this study was to examine the test-retest reliability of the newly developed Dementia Rating Scale-2: Alternate Form (DRS-2:AF) in a community-dwelling sample of older adults. Participants were administered the DRS-2:AF during two separate testing sessions; the interval between sessions was between 12 and 28 days. The stability coefficient for the Total Score was quite high (0.93), and reliability coefficients for the subscale scores ranged from adequate to high. This project provides evidence for the test-retest reliability of the DRS-2:AF. Given the need for cognitive status measures with equivalent forms, the DRS-2:AF is recommended as a reliable tool in the assessment of dementia. PMID:15832035

  6. CONSTRUCTING PARALLEL SIMULATION EXERCISES FOR ASSESSMENT CENTERS AND OTHER FORMS OF BEHAVIORAL ASSESSMENT

    Microsoft Academic Search

    BRADLEY J. BRUMMEL; DEBORAH E. RUPP; SETH M. SPAIN

    2009-01-01

    Assessment centers rely on multiple, carefully constructed behavioral simulation exercises to measure individuals on multiple performance dimensions. Although methods for establishing parallelism among alternate forms of paper-and-pencil tests have been well researched (i.e., to equate tests on difficulty such that the scores can be compared), little research has considered the why and how of parallel simulation exercises. This

  7. Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis

    NASA Technical Reports Server (NTRS)

    Babcock, P.; Schor, A.; Rosch, G.

    1998-01-01

    This document is an adjunct to the final report An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach that we have taken in performing the safety analysis for the IAPR concept.

  8. Comparison of heuristic methods for reliability optimization of series-parallel systems

    E-print Network

    Lee, Hsiang

    2003-01-01

    Three heuristics, the max-min approach, Nakagawa and Nakashima method, and Kim and Yum method, are considered for the redundancy allocation problem with series-parallel structures. The max-min approach can formulate the problem as an integer linear...

  9. Using the ASSIST Short Form for Evaluating an Information Technology Application: Validity and Reliability Issues

    Microsoft Academic Search

    Carol A. Speth; Deana M. Namuth; Donald J. Lee

    2007-01-01

    In this study, the Approaches and Study Skills Inventory for Students (ASSIST) short form was used to gain insight about learning style characteristics that might influence students' use of an online library of plant science learning objects. This study provides evidence concerning the internal consistency reliability and construct validity of the Deep, Strategic and Surface scale scores when used

  10. Secure Internet Banking with Privacy Enhanced Mail - A Protocol for Reliable Exchange of Secured Order Forms

    Microsoft Academic Search

    Stephan Kolletzki

    1996-01-01

    The Protocol for Reliable Exchange of Secured Order Forms is a model for securing today's favourite Internet service for business, the World-Wide Web, and its capability for exchanging order forms. Based on the PEM Internet standards (RFC 1421–1424) the protocol includes integrity of communication contents and authenticity of its origin, which allows for non-repudiation services, as well as confidentiality. It

  11. A Prediction Interval for a Score on a Parallel Test Form.

    ERIC Educational Resources Information Center

    Lord, Frederic M.

    1981-01-01

    Given any observed number-right score on a test, a method is described for obtaining a prediction interval for the corresponding number-right score on a randomly parallel form of the same test. The interval can be written down directly from published tables of the hypergeometric distribution. (Author)
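
    A hedged sketch of one way to compute such an interval with modern tools rather than printed tables: given x right out of n items, a candidate score y on the parallel form is kept when x is not in either tail of the hypergeometric distribution of x given the combined total x + y. The construction below is illustrative, not a reproduction of Lord's exact procedure.

      # Hypergeometric prediction interval for the number-right score y on
      # a randomly parallel n-item form, given x right on the first form.
      from scipy.stats import hypergeom

      def prediction_interval(x, n, alpha=0.05):
          keep = [y for y in range(n + 1)
                  if hypergeom.cdf(x, 2 * n, x + y, n) > alpha / 2      # x not in lower tail
                  and hypergeom.sf(x - 1, 2 * n, x + y, n) > alpha / 2] # x not in upper tail
          return min(keep), max(keep)

      print(prediction_interval(x=30, n=40))   # roughly (21, 37) for these inputs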

  12. Low power consumption and highly reliable 1060 nm VCSELs for parallel optical interconnection

    Microsoft Academic Search

    Naoki Tsukiji; Suguru Imai; Keishi Takaki; Hitoshi Shimizu; Yasumasa Kawakita; Tomohiro Takagi; Koji Hiraiwa; Junji Yoshida; Hiroshi Shimizu; Akihiko Kasukawa

    2010-01-01

    1060 nm VCSELs with InGaAs/GaAs strained quantum wells have been reviewed in terms of power consumption and reliability. Clear eye opening was confirmed at 10 Gbps with a bias current as low as 1.4 mA.

  13. A comprehensive parallel study on the board level reliability of SAC, SACX and SCN solders

    Microsoft Academic Search

    Fubin Song; Jeffery C. C. Lo; Jimmy K. S. Lam; Tong Jiang; S. W. Ricky Lee

    2008-01-01

    Legislation that mandates the banning of lead (Pb) in electronics due to environmental and health concerns has been actively pursued in many countries during the past fifteen years. Lead-free electronics will be deployed in many products that serve markets where the reliability is a critical requirement. Although a large number of research studies have been performed and are currently under

  14. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays with efficient fault-tolerant schemes are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

  15. Magnetosheath Filamentary Structures Formed by Ion Acceleration at the Quasi-Parallel Bow Shock

    NASA Technical Reports Server (NTRS)

    Omidi, N.; Sibeck, D.; Gutynska, O.; Trattner, K. J.

    2014-01-01

    Results from 2.5-D electromagnetic hybrid simulations show the formation of field-aligned, filamentary plasma structures in the magnetosheath. They begin at the quasi-parallel bow shock and extend far into the magnetosheath. These structures exhibit anticorrelated, spatial oscillations in plasma density and ion temperature. Closer to the bow shock, magnetic field variations associated with density and temperature oscillations may also be present. Magnetosheath filamentary structures (MFS) form primarily in the quasi-parallel sheath; however, they may extend to the quasi-perpendicular magnetosheath. They occur over a wide range of solar wind Alfvénic Mach numbers and interplanetary magnetic field directions. At lower Mach numbers with lower levels of magnetosheath turbulence, MFS remain highly coherent over large distances. At higher Mach numbers, magnetosheath turbulence decreases the level of coherence. Magnetosheath filamentary structures result from localized ion acceleration at the quasi-parallel bow shock and the injection of energetic ions into the magnetosheath. The localized nature of ion acceleration is tied to the generation of fast magnetosonic waves at and upstream of the quasi-parallel shock. The increased pressure in flux tubes containing the shock accelerated ions results in the depletion of the thermal plasma in these flux tubes and the enhancement of density in flux tubes void of energetic ions. This results in the observed anticorrelation between ion temperature and plasma density.

  16. Searching for globally optimal functional forms for interatomic potentials using genetic programming with parallel tempering.

    PubMed

    Slepoy, A; Peters, M D; Thompson, A P

    2007-11-30

    Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. PMID:17565499
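
    For context, the known global optimum in that test is the Lennard-Jones pair potential, V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6); a sketch of evaluating it, with eps and sigma set to 1 for illustration:

      # Lennard-Jones pair potential: the known functional form that the
      # genetic-programming search was asked to rediscover.
      def lennard_jones(r, epsilon=1.0, sigma=1.0):
          sr6 = (sigma / r) ** 6
          return 4.0 * epsilon * (sr6 * sr6 - sr6)

      print(lennard_jones(2 ** (1 / 6)))   # bottom of the well: -epsilon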

  17. Validity, Reliability, and Potential Bias of Short Forms of Students' Evaluation of Teaching: The Case of UAE University

    ERIC Educational Resources Information Center

    Dodeen, Hamzeh

    2013-01-01

    Students' opinions continue to be a significant factor in the evaluation of teaching in higher education institutions. The purpose of this study was to psychometrically assess short students' evaluation of teaching (SET) forms, using the UAE University form as a model. The study evaluated the form's validity, reliability, the overall question,…

  18. De novo design of orthogonal peptide pairs forming parallel coiled-coil heterodimers.

    PubMed

    Gradišar, Helena; Jerala, Roman

    2011-02-01

    We used the principles governing the selectivity and stability of coiled-coil segments to design and experimentally test a set of four pairs of parallel coiled-coil-forming peptides composed of four heptad repeats. The design was based on maximizing the difference in stability between desired pairs and the most stable unwanted combinations using N-terminal helix initiator residues, favorable combinations of the electrostatic and hydrophobic interaction motifs, and a negative design motif based on burial of asparagine residues. Experimental analysis of all 36 pair combinations among the eight peptides was performed by circular dichroism (CD). On the basis of CD spectra, each peptide formed a high level of α-helical structure exclusively in combination with its designed peptide partner, which demonstrates the orthogonality of the designed peptide pair set. PMID:21234981

  19. Self-stigma of mental illness scale--short form: reliability and validity.

    PubMed

    Corrigan, Patrick W; Michaels, Patrick J; Vega, Eduardo; Gause, Michael; Watson, Amy C; Rüsch, Nicolas

    2012-08-30

    The internalization of public stigma by persons with serious mental illnesses may lead to self-stigma, which harms self-esteem, self-efficacy, and empowerment. Previous research has evaluated a hierarchical model that distinguishes among stereotype awareness, agreement, application to self, and harm to self with the 40-item Self-Stigma of Mental Illness Scale (SSMIS). This study addressed SSMIS critiques (too long, contains offensive items that discourage test completion) by strategically omitting half of the original scale's items. Here we report reliability and validity of the 20-item short form (SSMIS-SF) based on data from three previous studies. Retained items were rated less offensive by a sample of consumers. Results indicated adequate internal consistencies for each subscale. Repeated measures ANOVAs showed subscale means progressively diminished from awareness to harm. In support of its validity, the harm subscale was found to be inversely and significantly related to self-esteem, self-efficacy, empowerment, and hope. After controlling for level of depression, these relationships remained significant with the exception of the relation between empowerment and the harm SSMIS-SF subscale. Future research with the SSMIS-SF should evaluate its sensitivity to change and its stability through test-retest reliability. PMID:22578819

  20. Parallel processing in the brain's visual form system: an fMRI study

    PubMed Central

    Shigihara, Yoshihito; Zeki, Semir

    2014-01-01

    We here extend and complement our earlier time-based, magneto-encephalographic (MEG), study of the processing of forms by the visual brain (Shigihara and Zeki, 2013) with a functional magnetic resonance imaging (fMRI) study, in order to better localize the activity produced in early visual areas when subjects view simple geometric stimuli of increasing perceptual complexity (lines, angles, rhombuses) constituted from the same elements (lines). Our results show that all three categories of form activate all three visual areas with which we were principally concerned (V1–V3), with angles producing the strongest and rhombuses the weakest activity in all three. The difference between the activity produced by angles and rhombuses was significant, that between lines and rhombuses was trend significant while that between lines and angles was not. Taken together with our earlier MEG results, the present ones suggest that a parallel strategy is used in processing forms, in addition to the well-documented hierarchical strategy. PMID:25126064

  1. The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills

    ERIC Educational Resources Information Center

    Bae, Jungok; Lee, Yae-Sheik

    2011-01-01

    Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using picture prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

  2. Test-Retest Reliability of a Tutor Evaluation Form Used in a Problem-Based Curriculum.

    ERIC Educational Resources Information Center

    Hay, John A.

    1997-01-01

    A study examined the test-retest reliability of 30 student evaluations of tutors in a problem-based learning curriculum at McMaster University in Hamilton, Ontario. Results were used for the improvement of reliability of the instrument. (JOW)

  3. G-quadruplexes form ultrastable parallel structures in deep eutectic solvent.

    PubMed

    Zhao, Chuanqi; Ren, Jinsong; Qu, Xiaogang

    2013-01-29

    G-quadruplex DNA is highly polymorphic. Its conformation transition is involved in a series of important life events. These controllable diverse structures also make G-quadruplex DNA a promising candidate as catalyst, biosensor, and DNA-based architecture. So far, G-quadruplex DNA-based applications have been restricted to aqueous media. Since many chemical reactions and devices are required to be performed under strictly anhydrous conditions, even at high temperature, it is challenging and meaningful to work with G-quadruplex DNA in a water-free medium. In this report, we systematically studied 10 representative G-quadruplexes in anhydrous room-temperature deep eutectic solvents (DESs). The results indicate that intramolecular, intermolecular, and even higher-order G-quadruplex structures can be formed in DES. Intriguingly, in DES, the parallel structure becomes the preferred G-quadruplex DNA conformation. More importantly, compared to aqueous media, G-quadruplex has ultrastability in DES and, surprisingly, some G-quadruplex DNA can survive even beyond 110 °C. Our work would shed light on the applications of G-quadruplex DNA to chemical reactions and DNA-based devices performed in an anhydrous environment, even at high temperature. PMID:23282194

  4. Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms

    PubMed Central

    MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

    2014-01-01

    The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

  5. An Investigation of Angle Relationships Formed by Parallel Lines Cut by a Transversal Using GeoGebra

    NSDL National Science Digital Library

    2013-01-08

    In this lesson, students will discover angle relationships formed (corresponding, alternate interior, alternate exterior, same-side interior, same-side exterior) when two parallel lines are cut by a transversal. They will establish definitions and identify whether these angle pairs are supplementary or congruent.

  6. In search of parsimony: reliability and validity of the Functional Performance Inventory-Short Form

    PubMed Central

    Leidy, Nancy Kline; Knebel, Ann

    2010-01-01

    Purpose: The 65-item Functional Performance Inventory (FPI), developed to quantify functional performance in patients with chronic obstructive pulmonary disease (COPD), has been shown to be reliable and valid. The purpose of this study was to create a shorter version of the FPI while preserving the integrity and psychometric properties of the original. Patients and methods: Secondary analyses were performed on qualitative and quantitative data used to develop and validate the FPI long form. Seventeen men and women with COPD participated in the qualitative work, while 154 took part in the mail survey; 54 completed 2-week reproducibility assessment, and 40 relatives contributed validation data. Following a systematic process of item reduction, performance properties of the 32-item short form (FPI-SF) were examined. Results: The FPI-SF was internally consistent (total scale α = 0.93; subscales: 0.76–0.89) and reproducible (r = 0.88; subscales: 0.69–0.86). Validity was maintained, with significant (P < 0.001) correlations between the FPI-SF and the Functional Status Questionnaire (activities of daily living, r = 0.71; instrumental activities of daily living, r = 0.73), Duke Activity Status Index (r = 0.65), Bronchitis-Emphysema Symptom Checklist (r = -0.61), Basic Need Satisfaction Inventory (r = 0.61) and Cantril's Ladder of Life Satisfaction (r = 0.63), and Katz Adjustment Scale for Relatives (socially expected activities, r = 0.51; free-time activities, r = -0.49, P < 0.01). The FPI-SF differentiated patients with an FEV1% predicted greater than and less than 50% (t = 4.26, P < 0.001), and those with severe and moderate levels of perceived severity and activity limitation (t = 9.91, P < 0.001). Conclusion: Results suggest the FPI-SF is a viable alternative to the FPI for situations in which a shorter instrument is desired. Further assessment of the instrument's performance properties in new samples of patients with COPD is warranted. PMID:21191436

  7. Reliability of MRI-derived cortical and subcortical morphometric measures: Effects of pulse sequence, voxel geometry, and parallel imaging

    E-print Network

    Corkin, Suzanne

    Reliability of MRI-derived cortical and subcortical morphometric measures: Effects of pulse… of downstream morphometric measures has not been extensively studied. We examined how MRI… methods could have a considerable impact on the reproducibility of morphometric measures. In addition

  8. Normative data for measuring performance change on parallel forms of a 15-word list recall test.

    PubMed

    Carlesimo, Giovanni A; De Risi, Marco; Monaco, Marco; Costa, Alberto; Fadda, Lucia; Picardi, Angelo; Di Gennaro, Giancarlo; Caltagirone, Carlo; Grammaldo, Liliana

    2014-05-01

    Declarative memory evaluation is an essential step in the clinical and neuropsychological assessment of a variety of neurological disorders. It typically addresses the issue of normality/abnormality of an individual's performance. Another clinical application of the neuropsychological assessment of declarative memory is the longitudinal evaluation of an individual's performance change. In fact, in a variety of neurological conditions repeated assessments are needed to evaluate the modifications of a memory disorder as a function of time or in response to a pharmacological or rehabilitation treatment. This study was aimed at collecting data for measuring and interpreting performance change on a memory test for verbal material. For this purpose, we administered to 100 healthy subjects (age range 20-80 years; years of formal education range 8-17 years) three parallel forms of a test requiring the immediate and delayed recall of a 15-word list. The subjects performed the recall test three times (each time with a different list) at least 1 week apart. The order of the lists was randomized across subjects. Results revealed that performance on the three lists was highly correlated and did not vary as a function of the order of presentation. However, accuracy of recall was slightly better on a list compared to the others. Based on a method devised by Payne and Jones (J Clin Psychol 13:115-121, 1957), we provide normative data for establishing whether a discrepancy in recall accuracy on two versions of the test exceeds the discrepancy expected based on the performance of normal controls. PMID:24218156
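
    The Payne and Jones (1957) approach cited here asks whether an observed discrepancy between two scores exceeds what the correlation between forms in normals would predict; one common formulation divides the difference of z-scores by its standard deviation, sqrt(2 - 2r). A sketch with invented numbers:

      # Payne & Jones (1957)-style discrepancy z: how unusual is the gap
      # between two standardized scores, given their correlation r in the
      # normative sample?  All values below are invented for illustration.
      from math import sqrt

      def discrepancy_z(score1, score2, mean, sd, r):
          z1 = (score1 - mean) / sd
          z2 = (score2 - mean) / sd
          return (z1 - z2) / sqrt(2 - 2 * r)   # SD of a difference of z-scores

      z = discrepancy_z(score1=12, score2=8, mean=10, sd=2.5, r=0.80)
      print(z)   # |z| > 1.96 flags a change larger than chance at the 5% level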

  9. Novel methods of powder preparation and ceramic forming for improving reliability of multilayer ceramic actuators

    Microsoft Academic Search

    William J. Dawson; Scott L. Swartz; Jean P. Issartel

    1993-01-01

    Critical components of many smart systems employ multilayer piezoelectric actuators based on lead zirconate titanate (PZT) ceramics. Applications include active vibration systems, noise suppression, acoustic camouflage, actuated structures, reconfigurable surfaces, and structural health monitoring. Two strategies involving novel materials processing techniques are discussed for improving the performance and reliability of PZT ceramic components. The first is the use of an

  10. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-10), across 5 time points. The alternate form reliabilities ranged from…

  11. Wannier-Mott exciton formed by electron and hole separated in parallel quantum wires

    NASA Astrophysics Data System (ADS)

    del Castillo-Mussot, M.; Reyes, J. A.

    1997-03-01

    We analyze a one-dimensional Wannier-Mott exciton in which electron and hole are constrained to move in two separated and parallel quantum wires. We expand the electron-hole interaction potential in terms of multipoles by assuming that both electron and hole experience harmonic oscillator transverse confinements and calculate eigenenergies and eigenfunctions for the ground and first excited states as a function of the wire separation distance.

  12. Wannier-Mott exciton formed by electron and hole separated in parallel quantum wires

    Microsoft Academic Search

    M. del Castillo-Mussot; J. A. Reyes

    1997-01-01

    We analyze a one-dimensional Wannier-Mott exciton in which electron and hole are constrained to move in two separated and parallel quantum wires. We expand the electron-hole interaction potential in terms of multipoles by assuming that both electron and hole experience harmonic oscillator transverse confinements and calculate eigenenergies and eigenfunctions for the ground and first excited states as a function of

  13. A New Form Error Compensation Technique for Mould Insert Machining Utilizing Parallel Grinding Method

    Microsoft Academic Search

    W. K. Chen; T. Kuriyagawa; H. Huang; K. Syoji

    Mould inserts with high form accuracy can be produced with ease using modern grinding technologies. However, several grinding cycles are often required to reduce the form error to an acceptable value, significantly dependent on the tool path compensation technique used. This paper reports on a novel form error compensation technique for tungsten carbide mould insert machining utilizing

  14. Reliability and Validity of a Spanish Version of the Social Skills Rating System--Teacher Form

    ERIC Educational Resources Information Center

    Jurado, Michelle; Cumba-Aviles, Eduardo; Collazo, Luis C.; Matos, Maribel

    2006-01-01

    The aim of this study was to examine the psychometric properties of a Spanish version of the Social Skills Scale of the Social Skills Rating System-Teacher Form (SSRS-T) with a sample of children attending elementary schools in Puerto Rico (N = 357). The SSRS-T was developed for use with English-speaking children. Although translated, adapted, and…

  15. Assessment of the Reliability and Validity of the Discrete-Trials Teaching Evaluation Form

    ERIC Educational Resources Information Center

    Babel, Danielle A.; Martin, Garry L.; Fazzio, Daniela; Arnal, Lindsay; Thomson, Kendra

    2008-01-01

    Discrete-trials teaching (DTT) is a frequently used method for implementing Applied Behavior Analysis treatment with children with autism. Fazzio, Arnal, and Martin (2007) developed a 21-component checklist, the Discrete-Trials Teaching Evaluation Form (DTTEF), for assessing instructors conducting DTT. In Phase 1 of this research, three experts on…

  16. Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments

    ERIC Educational Resources Information Center

    Satern, Miriam N.

    2011-01-01

    Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

  17. Fast Parallel Computation of Hermite and Smith Forms of Polynomial Matrices

    Microsoft Academic Search

    Erich Kaltofen; M. S. Krishnamoorthy; B. David Saunders

    1987-01-01

    Boolean circuits of polynomial size and poly-logarithmic depth are given for computing the Hermite and Smith normal forms of polynomial matrices over finite fields and the field of rational numbers. The circuits for the Smith normal form computation are probabilistic ones and also determine very efficient sequential algorithms. Furthermore, we give a polynomial-time deterministic sequential algorithm for the

  18. Comparisons between Classical Test Theory and Item Response Theory in Automated Assembly of Parallel Test Forms

    ERIC Educational Resources Information Center

    Lin, Chuan-Ju

    2008-01-01

    The automated assembly of alternate test forms for online delivery provides an alternative to computer-administered fixed test forms or computerized-adaptive tests when a testing program migrates from paper/pencil testing to computer-based testing. The weighted deviations model (WDM) heuristic is particularly promising for automated test assembly…

  19. An Examination of the Assumption that the Equating of Parallel Forms is Population-Independent.

    ERIC Educational Resources Information Center

    Angoff, William H.; Cowell, William R.

    Linear and equipercentile equating conversions were developed for two forms of the Graduate Record Examinations (GRE) quantitative test and the verbal-plus-quantitative test. From a very large sample of students taking the GRE in October 1981, subpopulations were selected with respect to race, sex, field of study, and level of performance (defined…

  20. A POSITIVE-STRAND RNA VIRUS REPLICATION COMPLEX PARALLELS FORM AND FUNCTION OF RETROVIRUS CAPSIDS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We show that brome mosaic virus (BMV) RNA replication protein 1a, 2a polymerase, and a cis-acting replication signal recapitulate the functions of Gag, Pol, and RNA packaging signals in conventional retrovirus and foamy virus cores. Prior to RNA replication, 1a forms spherules budding into the endop...

  1. NPC 2011 REGISTRATION FORM (RMB) The 8th IFIP International Conference on Network and Parallel Computing

    E-print Network

    Shi, Weisong

    and inquiries concerning registration and payment should be addressed to: npc2011.nudt@gmail.com (cc to mingchelai@gmail.com) Please complete this form and email to NPC 2011 Registration Office, Dr. Lai Mingche: +86-0731-84573640-120 Fax: +86-0731-84575992 E-mail: npc2011.nudt@gmail.com (cc to mingchelai@gmail

  2. NPC 2011 REGISTRATION FORM (USD) The 8th IFIP International Conference on Network and Parallel Computing

    E-print Network

    Shi, Weisong

    and inquiries concerning registration and payment should be addressed to: npc2011.nudt@gmail.com (cc to mingchelai@gmail.com) Please complete this form and email to NPC 2011 Registration Office, Dr. Lai Mingche: +86-0731-84573640-120 Fax: +86-0731-84575992 E-mail: npc2011.nudt@gmail.com (cc to mingchelai@gmail

  3. Hypo-activity screening in school setting; examining reliability and validity of the Teacher Estimation of Activity Form (TEAF).

    PubMed

    Rosenblum, Sara; Engel-Yeger, Batya

    2015-06-01

    It is well established that physical activity during childhood contributes to children's physical and psychological health. The aim of this study was to test the reliability and validity of the Hebrew version of the Teacher Estimation of Activity Form (TEAF) questionnaire as a screening tool among school-aged children in Israel. Six physical education teachers completed TEAF questionnaires for 123 children aged 5-12 years, 68 children (55%) with Typical Development (TD) and 55 children (45%) diagnosed with Developmental Coordination Disorder (DCD). The Hebrew version of the TEAF indicates a very high level of internal consistency (α = .97). There were no significant gender differences. Significant differences were found between children with and without DCD, attesting to the test's construct validity. Concurrent validity was established by finding a significant high correlation (r = .76, p…)… reliability and validity estimates. It appears to be a promising standardized practical tool in both research and practice for describing information about school-aged children's involvement in physical activity. Further research is indicated with larger samples to establish cut-off scores determining what point identifies hypo-activity in stratified age groups. Furthermore, the majority of the participants in this study were boys, and further research is needed to include more girls for a better understanding of the phenomenon of hypo-activity. PMID:25665095
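
    The internal-consistency figure quoted above is Cronbach's alpha, computable from an items-by-respondents score matrix as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with invented scores:

      # Cronbach's alpha for internal consistency (sample data invented).
      from statistics import variance

      def cronbach_alpha(item_scores):   # item_scores[i][j]: item i, person j
          k = len(item_scores)
          totals = [sum(person) for person in zip(*item_scores)]
          item_var = sum(variance(item) for item in item_scores)
          return k / (k - 1) * (1 - item_var / variance(totals))

      items = [[3, 4, 2, 5, 4],
               [3, 5, 2, 4, 4],
               [2, 4, 3, 5, 5]]
      print(round(cronbach_alpha(items), 2))   # 0.89 for these invented data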

  4. The Bruininks-Oseretsky Test of Motor Proficiency-Short Form is reliable in children living in remote Australian Aboriginal communities

    PubMed Central

    2013-01-01

    Background: The Lililwan Project is the first population-based study to determine Fetal Alcohol Spectrum Disorders (FASD) prevalence in Australia and was conducted in the remote Fitzroy Valley in North Western Australia. The diagnostic process for FASD requires accurate assessment of gross and fine motor functioning using standardised cut-offs for impairment. The Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2) is a norm-referenced assessment of motor function used worldwide and in FASD clinics in North America. It is available in a Complete Form with 53 items or a Short Form with 14 items. Its reliability in measuring motor performance in children exposed to alcohol in utero or living in remote Australian Aboriginal communities is unknown. Methods: A prospective inter-rater and test-retest reliability study was conducted using the BOT-2 Short Form. A convenience sample of children (n = 30) aged 7 to 9 years participating in the Lililwan Project cohort (n = 108) study completed the reliability study. Over 50% of mothers of Lililwan Project children drank alcohol during pregnancy. Two raters simultaneously scoring each child determined inter-rater reliability. Test-retest reliability was determined by assessing each child on a second occasion using predominantly the same rater. Reliability was analysed by calculating Intra-Class Correlation Coefficients ICC(2,1), Percentage Exact Agreement (PEA), and Percentage Close Agreement (PCA), and measures of Minimal Detectable Change (MDC) were calculated. Results: Thirty Aboriginal children (18 male, 12 female: mean age 8.8 years) were assessed at eight remote Fitzroy Valley communities. The inter-rater reliability for the BOT-2 Short Form score sheet outcomes ranged from 0.88 (95%CI, 0.77 – 0.94) to 0.92 (95%CI, 0.84 – 0.96), indicating excellent reliability. The test-retest reliability (median interval between tests being 45.5 days) for the BOT-2 Short Form score sheet outcomes ranged from 0.62 (95%CI, 0.34 – 0.80) to 0.73 (95%CI, 0.50 – 0.86), indicating fair to good reliability. The raw score MDC was 6.12. Conclusion: The BOT-2 Short Form has acceptable reliability for use in remote Australian Aboriginal communities and will be useful in determining motor deficits in children exposed to alcohol prenatally. This is the first known study evaluating the reliability of the BOT-2 Short Form, either in the context of assessment for FASD or in Aboriginal children. PMID:24010634

  5. Parameter optimization of the sheet metal forming process using an iterative parallel Kriging algorithm

    Microsoft Academic Search

    J. Jakumeit; M. Herdy; M. Nitsche

    2005-01-01

    Different numerical optimization strategies were used to find an optimized parameter setting for the sheet metal forming process. A parameterization of a time-dependent blank-holder force was used to control the deep-drawing simulation. Besides the already well-established gradient and direct search algorithms and the response surface method the novel Kriging approach was used as an optimization strategy. Results for two analytical

  6. The affect system has parallel and integrative processing components: Form follows function

    Microsoft Academic Search

    John T. Cacioppo; Wendi L. Gardner; Gary G. Berntson

    1999-01-01

    ABSTRACT The affect system has been shaped by the hammer,and chisel of adaptation and natural selection such that form follows function. The characteristics of the system thus differ across the nervous system as a function of the unique constraints existent at each level. For instance, although physical limitations constrain behavioral expressions and incline behavioral predispositions toward a bipolar (good—bad, approach—withdraw)

  7. The design of a parallel, dense linear algebra software library: Reduction to Hessenberg, tridiagonal, and bidiagonal form

    SciTech Connect

    Choi, J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Oak Ridge National Lab., TN (United States). Mathematical Sciences Section]; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section]

    1994-09-01

    This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. These routines are important in the solution of eigenproblems. The paper focuses on how building blocks are used to create higher-level library routines. Results are presented that demonstrate the scalability of the reduction routines. The most commonly used building blocks in ScaLAPACK are the sequential BLAS, the Parallel Block BLAS (PB-BLAS), and the Basic Linear Algebra Communication Subprograms (BLACS). Each of the matrix reduction algorithms consists of a series of steps in each of which one block column (or panel), and/or block row, of the matrix is reduced, followed by an update of the portion of the matrix that has not been factorized so far. The latter phase is performed using distributed Level 3 BLAS routines and contains the bulk of the computation. However, the panel reduction phase involves a significant amount of communication, and is important in determining the scalability of the algorithm. The simplest way to parallelize the panel reduction phase is to replace the appropriate Level 2 and Level 3 BLAS routines appearing in the LAPACK routine (mostly matrix-vector and matrix-matrix multiplications) with PB-BLAS routines.
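
    The sequential kernel that such a routine distributes is available directly in standard libraries; for instance, SciPy wraps LAPACK's Hessenberg reduction. A small sketch of the reduction the paper parallelizes:

      # Sequential Hessenberg reduction (the LAPACK computation ScaLAPACK
      # distributes): A = Q @ H @ Q.T with H upper Hessenberg.
      import numpy as np
      from scipy.linalg import hessenberg

      A = np.random.default_rng(0).standard_normal((5, 5))
      H, Q = hessenberg(A, calc_q=True)
      assert np.allclose(Q @ H @ Q.T, A)      # similarity transform holds
      assert np.allclose(np.tril(H, -2), 0)   # zeros below first subdiagonal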

  8. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. These forces were to be determined using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+, with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
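
    The surrogate pipeline described here (space-filling LHS design, expensive solver runs at the mapped sites, then Kriging interpolation) can be sketched in a few lines of Python. This is a minimal illustration assuming scipy and scikit-learn; expensive_solver is a hypothetical stand-in for the converged STAR-CCM+ runs:

      import numpy as np
      from scipy.stats import qmc
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_solver(x):
          # hypothetical stand-in for a converged CFD run returning a force
          return np.sin(3 * x[0]) + x[1] ** 2 + 0.1 * x[2] * x[3]

      lo, hi = np.zeros(4), np.ones(4)                 # 4 design-variable DOF
      X = qmc.scale(qmc.LatinHypercube(d=4, seed=0).random(n=40), lo, hi)
      y = np.array([expensive_solver(x) for x in X])   # the costly offline phase

      kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.5),
                                         normalize_y=True).fit(X, y)
      force_hat, force_std = kriging.predict(X[:1], return_std=True)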

  9. Farsi Version of Social Skills Rating System-Secondary Student Form: Cultural Adaptation, Reliability and Construct Validity

    PubMed Central

    Eslami, Ahmad Ali; Amidi Mazaheri, Maryam; Mostafavi, Firoozeh; Abbasi, Mohamad Hadi; Noroozi, Ensieh

    2014-01-01

    Objective: Assessment of social skills is a necessary requirement to develop and evaluate the effectiveness of cognitive and behavioral interventions. This paper reports the cultural adaptation and psychometric properties of the Farsi version of the social skills rating system-secondary students form (SSRS-SS) questionnaire (Gresham and Elliot, 1990) in a normative sample of secondary school students. Methods: A two-phase design was used: phase 1 consisted of the linguistic adaptation, and in phase 2, using cross-sectional sample survey data, the construct validity and reliability of the Farsi version of the SSRS-SS were examined in a sample of 724 adolescents aged from 13 to 19 years. Results: The content validity index was excellent, and the floor/ceiling effects were low. After deleting five of the original SSRS-SS items, the findings gave support for item convergent and divergent validity. Factor analysis revealed four subscales. Results showed good internal consistency (0.89) and temporal stability (0.91) for the total scale score. Conclusion: Findings demonstrated support for the use of the 27-item Farsi version in the school setting. Directions for future research regarding the applicability of the scale in other settings and populations of adolescents are discussed. PMID:25053964

  10. Subjective Well-Being Under Neuroleptics Scale short form (SWN-K): reliability and validity in an Estonian speaking sample

    PubMed Central

    2013-01-01

    Background The Subjective Well-Being Under Neuroleptic Treatment Scale short form (SWN-K) is a self-rating scale developed to measure mentally ill patients' well-being under antipsychotic drug treatment. This paper reports on the adaptation and psychometric properties of the instrument in an Estonian psychiatric sample. Methods In a naturalistic study design, 124 inpatients or outpatients suffering from a first psychotic episode or chronic psychotic illness completed the translated SWN-K instrument. Item content analysis, internal consistency analysis, exploratory principal components analysis, and confirmatory factor analysis were used to construct the Estonian version of the SWN-K (SWN-K-E). Additionally, socio-demographic and clinical data, observer-rated psychopathology, medication side effects, daily antipsychotic drug dosages, and general functioning were assessed at two time points, at baseline and after a 29-week period; the associations of the SWN-K-E scores with these variables were explored. Results After having selected 20 items for the Estonian adaptation, the internal consistency of the total SWN-K-E was 0.93 and the subscale consistencies ranged from 0.70 to 0.80. Good test–retest reliabilities were observed for the adapted scale scores, with the correlation of the total score over about 6 months being r = 0.70. Confirmatory factor analysis replicated the presence of a higher-order factor (general well-being) and five first-order factors (mental functioning, physical functioning, social integration, emotional regulation, and self-control); the model fitted the data well. The results indicated a moderately high correlation (r = 0.54) between the SWN-K-E total score and patients' evaluation of how satisfied they were with their lives in general. No significant correlations were found between the overall subjective well-being score and age, severity of the psychopathology, drug adverse effects, or prescribed drug dosage. Conclusion Taken together, the results demonstrated that the Estonian version of the SWN-K is a reliable and valid instrument with psychometric properties similar to the original English version. The potential uses of the scale in both research and clinical settings are considered. PMID:24025191

  11. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216

    ERIC Educational Resources Information Center

    Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

  12. Parallel Test Construction Using Classical Item Parameters.

    ERIC Educational Resources Information Center

    Sanders, Piet F.; Verschoor, Alfred J.

    1998-01-01

    Presents minimization and maximization models for parallel test construction under constraints. The minimization model constructs weakly and strongly parallel tests of minimum length, while the maximization model constructs weakly and strongly parallel tests with maximum test reliability. (Author/SLD)

  13. Verbal and Visual Parallelism

    ERIC Educational Resources Information Center

    Fahnestock, Jeanne

    2003-01-01

    This study investigates the practice of presenting multiple supporting examples in parallel form. The elements of parallelism and its use in argument were first illustrated by Aristotle. Although real texts may depart from the ideal form for presenting multiple examples, rhetorical theory offers a rationale for minimal, parallel presentation. The…

  14. An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220

    ERIC Educational Resources Information Center

    Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2012-01-01

    This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

  15. Parallel and antiparallel G*G.C base triplets in pur*pur.pyr triple helices formed with (GA) third strands

    Microsoft Academic Search

    J. Liquier; F. Geinguenaud; T. Huynh-Dinh; C. Gouyette; E. Khomyakova; E. Taillandier

    2001-01-01

    Triple helices with G*G.C and A*A.T base triplets with third GA strands either parallel or antiparallel with respect to the homologous duplex strand have been formed in the presence of Na or Mg counterions. Antiparallel triplexes are more stable and can be obtained even in the presence of only monovalent Na counterions. A biphasic melting has been observed, reflecting third strand separation

  16. Parallel Universes

    Microsoft Academic Search

    Max Tegmark

    2003-01-01

    I survey physics theories involving parallel universes, which form a natural four-level hierarchy of multiverses allowing progressively greater diversity. Level I: A generic prediction of inflation is an infinite ergodic universe, which contains Hubble volumes realizing all initial conditions - including an identical copy of you about 10^{10^29} meters away. Level II: In chaotic inflation, other thermalized regions may have

  17. Comparability, Reliability, and Practice Effects on Alternate Forms of the Digit Symbol Substitution and Symbol Digit Modalities Tests

    ERIC Educational Resources Information Center

    Hinton-Bayre, Anton; Geffen, Gina

    2005-01-01

    The present study examined the comparability of 4 alternate forms of the Digit Symbol Substitution test and the Symbol Digit Modalities (written) test, including the original versions. Male contact-sport athletes (N=112) were assessed on 1 of the 4 forms of each test. Reasonable alternate form comparability was demonstrated through establishing…

  18. Parallel symbolic computation in ACE

    Microsoft Academic Search

    Enrico Pontelli; Gopal Gupta

    1997-01-01

    We present an overview of the ACE system, a sound and complete parallel implementation of Prolog that exploits parallelism transparently (i.e., without any user intervention) from AI programs and symbolic applications coded in Prolog. ACE simultaneously exploits all the major forms of parallelism – Or-parallelism, Independent And-parallelism, and Dependent And-parallelism – found in Prolog programs. These three varieties of parallelism

  19. An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

    2013-01-01

    The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. The scale was administered to a total of 275 undergraduate students (114 female and 74 male) in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation was…

  20. American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form, patient self-report section: Reliability, validity, and responsiveness

    Microsoft Academic Search

    Lori A Michener; Philip W McClure; Brian J Sennett

    2002-01-01

    The purpose of this study was to examine the psychometric properties of the American Shoulder and Elbow Surgeons Standardized Shoulder Assessment Form (ASES), patient self-report section. Patients with shoulder dysfunction (n = 63) completed the ASES, The University of Pennsylvania Shoulder Score, and the Short Form–36 during the initial evaluation, 24 to 72 hours after the initial visit, and after

  1. Balancing the Need for Reliability and Time Efficiency: Short Forms of the Wechsler Adult Intelligence Scale-III

    ERIC Educational Resources Information Center

    Jeyakumar, Sharon L. E.; Warriner, Erin M.; Raval, Vaishali V.; Ahmad, Saadia A.

    2004-01-01

    Tables permitting the conversion of short-form composite scores to full-scale IQ estimates have been published for previous editions of the Wechsler Adult Intelligence Scale (WAIS). Equivalent tables are now needed for selected subtests of the WAIS-III. This article used Tellegen and Briggs's formulae to convert the sum of scaled scores for four…

  2. Japanese Version of Home Form of the ADHD-RS: An Evaluation of Its Reliability and Validity

    ERIC Educational Resources Information Center

    Tani, Iori; Okada, Ryo; Ohnishi, Masafumi; Nakajima, Shunji; Tsujii, Masatsugu

    2010-01-01

    Using the Japanese version of the home form of the ADHD-RS, this survey attempted to compare scores between the US and Japan and examined the correlates of the ADHD-RS. We collected responses from parents or rearers of 5977 children (3119 males and 2858 females) in nursery, elementary, and lower-secondary schools. A confirmatory factor analysis of…

  3. Reliability and validity of the Spanish version of the Child Health and Illness Profile (CHIP) Child-Edition, Parent Report Form (CHIP-CE/PRF)

    PubMed Central

    2010-01-01

    Background The objectives of the study were to assess the reliability, and the content, construct, and convergent validity of the Spanish version of the CHIP-CE/PRF, to analyze parent-child agreement, and compare the results with those of the original U.S. version. Methods Parents from a representative sample of children aged 6-12 years were selected from 9 primary schools in Barcelona. Test-retest reliability was assessed in a convenience subsample of parents from 2 schools. Parents completed the Spanish version of the CHIP-CE/PRF. The Achenbach Child Behavioural Checklist (CBCL) was administered to a convenience subsample. Results The overall response rate was 67% (n = 871). There was no floor effect. A ceiling effect was found in 4 subdomains. Reliability was acceptable at the domain level (internal consistency = 0.68-0.86; test-retest intraclass correlation coefficients = 0.69-0.85). Younger girls had better scores on Satisfaction and Achievement than older girls. Comfort domain score was lower (worse) in children with a probable mental health problem, with high effect size (ES = 1.45). The level of parent-child agreement was low (0.22-0.37). Conclusions The results of this study suggest that the parent version of the Spanish CHIP-CE has acceptable psychometric properties although further research is needed to check reliability at sub-domain level. The CHIP-CE parent report form provides a comprehensive, psychometrically sound measure of health for Spanish children 6 to 12 years old. It can be a complementary perspective to the self-reported measure or an alternative when the child is unable to complete the questionnaire. In general, the results are similar to the original U.S. version. PMID:20678198

  4. Reliability and structural integrity

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1976-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

  5. PARALLEL STRINGS - PARALLEL UNIVERSES

    Microsoft Academic Search

    Jim McDowall; Saft America

    Sometimes different parts of the battery community just don't seem to operate on the same level, and attitudes towards parallel battery strings are a prime example of this. Engineers at telephone company central offices are quite happy operating 20 or more parallel strings on the same dc bus, while many manufacturers warn against connecting more than four or five strings

  6. VCSEL reliability research at Gore Photonics

    Microsoft Academic Search

    Ted D. Lowes

    2002-01-01

    Reliability of the oxide confined VCSEL used in the Gore nLIGHTENTM parallel optic interconnect is discussed. The Gore reliability program for oxide confined devices has been active for approximately five years. The excellent long term reliability results have been obtained through an approach centered upon fundamental reliability research. The details of the device lifetime measurements and projections are presented along

  7. The feasibility, reliability and validity of the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF) in palliative care population.

    PubMed

    Lua, Pei Lin; Salek, Sam; Finlay, Ilora; Lloyd-Richards, Chris

    2005-09-01

    In terminally-ill patients, effective measurement of health-related quality of life (HRQoL) needs to be done while imposing minimal burden. In an attempt to ensure that routine HRQoL assessment is simple but capable of eliciting adequate information, the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF: 8 items) was developed from its original version, the McGill Quality of Life Questionnaire (MQOL: 17 items). Psychometric properties of the MQOL-CSF were then tested in palliative care patients consisting of 55 out-patients, 48 hospice patients and 86 in-patients. The MQOL-CSF had little respondent burden (mean completion time = 3.3 min) and was evaluated as 'very clear' or 'clear' (98.2%), comprehensive (74.5%) and acceptable (96.4%). The internal consistency reliability was moderate to high (Cronbach's alpha = 0.462-0.858) and test-retest reliability (Spearman's r(s)) ranged from 0.512-0.861. Correlation was moderate to strong (0.478-0.725) between items in the short form and their analogous domains in the MQOL. Most MQOL-CSF items showed strong associations with their own domain (r(s) > or = 0.40). Scores from the MQOL-CSF significantly differentiated between patients with differing haemoglobin levels (p < 0.05). Construct validity was overall supported by principal component analysis. It is concluded that the MQOL-CSF is a feasible tool with favourable psychometric properties for routine HRQoL assessment in the palliative care population. PMID:16119179

  8. Parallel MATLAB: Parallel For Loops

    E-print Network

    Crawford, T. Daniel

    Lecture slides (69 pages) by John Burkardt (FSU) and Gene Cliff (AOE/ICAM), Virginia Tech Advanced Research Computing / Interdisciplinary Center for Applied Mathematics. Contents: an introduction to parallel MATLAB and parallel for loops, with ODE SWEEP and FMINCON examples.

  9. The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version

    PubMed Central

    Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

    2015-01-01

    Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in the Iranian population. Methods: After translation and cultural adaptation of the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury living in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of the ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test the validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. In order to assess the reliability of the Persian ZBI-12, we re-administered it to a random subsample of 48 caregiver spouses 3 days later. Results: Overall, the internal consistency of the questionnaire was found to be strong (Cronbach's alpha 0.77). The intercorrelation between the different domains of the ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions in the Persian ZBI-12 correlate significantly with each other. In terms of validity, our results showed significant correlations between some domains of the Persian Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P=0.03), General Health (P=0.034), Social Functioning (0.037), and Mental Health (0.023), and Q3 with Physical Function (P=0.001), Vitality (0.002), and Social Function (0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171

  10. Peer and Teacher Sociometrics for Preschool Children: Cross-Informant Concordance, Temporal Stability, and Reliability.

    ERIC Educational Resources Information Center

    Hart, Craig H.; Draper, Thomas W.; Olsen, Joseph A.

    2001-01-01

    Examined cross-informant concordance, temporal stability, and reliability of sociometrics in 84 preschoolers. Found that parallel forms of teacher and peer sociometrics measured overlapping and unique aspects of popularity. Teacher-measured popularity was highly stable over 8 weeks; peer-measured popularity showed lower stability. Both teacher and…

  11. Parallel Universes

    E-print Network

    Max Tegmark

    2003-02-07

    I survey physics theories involving parallel universes, which form a natural four-level hierarchy of multiverses allowing progressively greater diversity. Level I: A generic prediction of inflation is an infinite ergodic universe, which contains Hubble volumes realizing all initial conditions - including an identical copy of you about 10^{10^29} meters away. Level II: In chaotic inflation, other thermalized regions may have different effective physical constants, dimensionality and particle content. Level III: In unitary quantum mechanics, other branches of the wavefunction add nothing qualitatively new, which is ironic given that this level has historically been the most controversial. Level IV: Other mathematical structures give different fundamental equations of physics. The key question is not whether parallel universes exist (Level I is the uncontroversial cosmological concordance model), but how many levels there are. I discuss how multiverse models can be falsified and argue that there is a severe "measure problem" that must be solved to make testable predictions at levels II-IV.

  12. Estimating the Reliability of a Test Containing Multiple Item Formats.

    ERIC Educational Resources Information Center

    Qualls, Audrey L.

    1995-01-01

    Classically parallel, tau-equivalently parallel, and congenerically parallel models representing various degrees of part-test parallelism and their appropriateness for tests composed of multiple item formats are discussed. An appropriate reliability estimate for a test with multiple item formats is presented and illustrated. (SLD)
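
    A standard estimate for such part-stratified tests is stratified alpha, which weights each stratum (here, each item format) by its score variance. As a sketch, the classical Rajaratnam-Cronbach-Gleser formula is given below for orientation; it is not necessarily the exact estimate the digest presents:

      \alpha_{strat} = 1 - \frac{\sum_{i=1}^{m} \sigma_i^2 \, (1 - \alpha_i)}{\sigma_X^2}

    where \sigma_i^2 is the score variance of stratum i, \alpha_i its coefficient alpha, and \sigma_X^2 the variance of the total score.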

  13. Parallel Programming and Parallel Abstractions in Fortress

    Microsoft Academic Search

    Guy L. Steele Jr.

    2005-01-01

    Summary form only given. The Programming Language Research Group at Sun Microsystems Laboratories seeks to apply lessons learned from the Java (TM) programming language to the next generation of programming languages. The Java language supports platform-independent parallel programming with explicit multithreading and explicit locks. As part of the DARPA program for High Productivity Computing Systems, we are developing Fortress, a

  14. Parallel image compression

    NASA Technical Reports Server (NTRS)

    Reif, John H.

    1987-01-01

    A parallel compression algorithm for the 16,384 processor MPP machine was developed. The serial version of the algorithm can be viewed as a combination of on-line dynamic lossless text compression techniques (which employ simple learning strategies) and vector quantization. These concepts are described. How these concepts are combined to form a new strategy for performing dynamic on-line lossy compression is discussed. Finally, the implementation of this algorithm in a massively parallel fashion on the MPP is discussed.

  15. An efficient reliable broadcast protocol

    Microsoft Academic Search

    M. Frans Kaashoek; Andrew S. Tanenbaum; Susan Flynn Hummel; Henri E. Bal

    1989-01-01

    Many distributed and parallel applications can make good use of broadcast communication. In this paper we present a (software) protocol that simulates reliable broadcast, even on an unreliable network. Using this protocol, application programs need not worry about lost messages. Recovery of communication failures is handled automatically and transparently by the protocol. In normal operation, our protocol is more efficient

  16. Parallel Operators

    Microsoft Academic Search

    Jean-marc Jézéquel; Jean-lin Pacherie

    1996-01-01

    Encapsulating parallelism and synchronization code within object-oriented software components is a promising avenue towards mastering the complexity of distributed-memory supercomputer programming. However, in trying to give application programmers the benefit of supercomputer power, the library designer generally resorts to low-level parallel constructs, a time consuming and error prone process. To solve this problem we introduce a new abstraction called Parallel Operators. A

  17. Parallel Optimisation

    NSDL National Science Digital Library

    An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming including basic MPI and OpenMP. Scaling is a measurement of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data and work distribution but also to the choice of algorithms. Since the choice of algorithm is too broad a subject and too particular to the application domain to include in this brief guide, we concentrate on general good practices towards parallel optimisation on HECToR.

  18. The Ohio Scales Youth Form: Expansion and Validation of a Self-Report Outcome Measure for Young Children

    ERIC Educational Resources Information Center

    Dowell, Kathy A.; Ogles, Benjamin M.

    2008-01-01

    We examined the validity and reliability of a self-report outcome measure for children between the ages of 8 and 11. The Ohio Scales Problem Severity scale is a brief, practical outcome measure available in three parallel forms: Parent, Youth, and Agency Worker. The Youth Self-Report form is currently validated for children ages 12 and older. The…

  19. The Development and Validation of a Preliminary Research Form of an Academic Self-Concept Measure for College Students.

    ERIC Educational Resources Information Center

    Michael, William B.; And Others

    1984-01-01

    The development and construct validation of the Dimensions of Self-Concept (DOSC), Form H, are described. The 20-item subscales of the preliminary research form furnished parallel estimates of reliability ranging from .83 to .91. The five subscales show promising construct validity, as evidenced by their factor structure. (Author/BW)

  20. Parallel LOCFES 

    E-print Network

    Shah, Ronak C.

    1991-01-01

    [Front matter only: table of contents covering a review, the MasPar system architecture, MasPar FORTRAN, the MasPar programming environment, and the MLOCFES parallel version (potential regions of parallelism in LOCFES, FORTRAN 90 adaptations).] LOCFES is a computationally intense program written in FORTRAN 77. It exhibits some regions of fine-grained (i.e., at the cell level) parallelism. The program and the algorithm can be modified...

  1. Service reliability and performance in grid system with star topology

    Microsoft Academic Search

    Gregory Levitin; Yuan-Shun Dai

    2007-01-01

    The paper considers grid computing systems in which the resource management systems (RMS) can divide service tasks into subtasks and send the subtasks to different resources for parallel execution. In order to provide a desired level of service reliability, the RMS can assign the same subtasks to several independent resources for parallel execution. The service reliability and performance indices are introduced and
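
    Under the usual independence assumptions, replicating a subtask across resources raises its success probability multiplicatively, and the service succeeds only if every subtask succeeds. A minimal sketch of this redundancy calculation (hypothetical numbers, and a simplification of the paper's performance-aware indices):

      from math import prod

      def subtask_reliability(rs):
          # a subtask succeeds if at least one of its replicas succeeds
          return 1.0 - prod(1.0 - r for r in rs)

      def service_reliability(assignment):
          # the service succeeds only if every subtask succeeds
          return prod(subtask_reliability(rs) for rs in assignment)

      # two subtasks; the first replicated on two independent resources
      print(service_reliability([[0.90, 0.85], [0.95]]))  # 0.93575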

  2. Parallelizing Timed Petri Net simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1993-01-01

    The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.

  3. Assessing the Reliability of NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    1987-01-01

    Versatile FORTRAN computer algorithm developed for calculating and plotting reliability of nondestructive evaluation (NDE) technique for inspection of flaws. Developed specifically to determine reliability of radiographic and ultrasonic methods for detection of critical flaws in structural ceramic materials. Reliability displayed in form of plot of probability of detection versus flaw size. NDE methods used in such applications as diagnostic medicine, quality control in industrial production, and prediction of failure in structural components.

  4. Massively parallel sequencing of exons on the X chromosome identifies RBM10 as the gene that causes a syndromic form of cleft palate.

    PubMed

    Johnston, Jennifer J; Teer, Jamie K; Cherukuri, Praveen F; Hansen, Nancy F; Loftus, Stacie K; Chong, Karen; Mullikin, James C; Biesecker, Leslie G

    2010-05-14

    Micrognathia, glossoptosis, and cleft palate comprise one of the most common malformation sequences, Robin sequence. It is a component of the TARP syndrome, talipes equinovarus, atrial septal defect, Robin sequence, and persistent left superior vena cava. This disorder is X-linked and severe, with apparently 100% pre- or postnatal lethality in affected males. Here we characterize a second family with TARP syndrome, confirm linkage to Xp11.23-q13.3, perform massively parallel sequencing of X chromosome exons, filter the results via a number of criteria including the linkage region, use a unique algorithm to characterize sequence changes, and show that TARP syndrome is caused by mutations in the RBM10 gene, which encodes RNA binding motif 10. We further show that this previously uncharacterized gene is expressed in midgestation mouse embryos in the branchial arches and limbs, consistent with the human phenotype. We conclude that massively parallel sequencing is useful to characterize large candidate linkage intervals and that it can be used successfully to allow identification of disease-causing gene mutations. PMID:20451169

  5. Adaptive parallel logic networks

    NASA Technical Reports Server (NTRS)

    Martinez, Tony R.; Vidal, Jacques J.

    1988-01-01

    Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

  6. Parallel LOCFES

    E-print Network

    Shah, Ronak C.

    1991-01-01

    [Front matter only: table of contents (MasPar system architecture, MasPar FORTRAN, MasPar programming environment, MLOCFES parallel version), a MasPar MP-1 system diagram adapted from the MasPar System Overview, a flow-of-control figure, and a list of tables comparing analytical vs. computational (MLOCFES) solutions for I = 1, K = 1, and L = 1.]

  7. Photovoltaic module reliability workshop

    SciTech Connect

    Mrig, L. (ed.)

    1990-01-01

    The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986-1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with the conventional electricity producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years, as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange technical knowledge and field experience related to current information in this important field. The papers presented here reflect this effort.

  8. Parallel Resistors

    NSDL National Science Digital Library

    Michael Horton

    2009-05-30

    Students will measure the resistance of resistors that they have drawn on paper with a graphite pencil. They will then connect two resistors in parallel and measure the resistance of the combination. In this activity, it is important that students color v
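
    The expected result follows the reciprocal-sum rule for parallel resistance, 1/R_total = 1/R1 + 1/R2 + ...; a one-function Python check (resistance values hypothetical):

      def parallel_resistance(*resistances):
          # reciprocal of the sum of reciprocals
          return 1.0 / sum(1.0 / r for r in resistances)

      print(parallel_resistance(100e3, 100e3))  # two equal resistors halve: 50 kOhm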

  9. Numerical evaluation of Chandrasekhar's H-function, its first and second differential coefficients, its pole and moments from the new form for plane parallel scattering atmosphere in radiative transfer

    E-print Network

    Rabindra Nath Das; Rasajit Bera

    2007-11-21

    In this paper, the new forms obtained for Chandrasekhar's H-function in radiative transfer by one of the authors, both for non-conservative and conservative cases for isotropic scattering in a semi-infinite plane-parallel atmosphere, are used to obtain exclusively new forms for the first and second derivatives of the H-function. The numerics for evaluation of the zero of the dispersion function, for evaluation of the H-function and its derivatives, and of its zeroth, first, and second moments are outlined. Those are used to get ready and accurate extensive tables of the H-function and its derivatives, pole, and moments for different albedos for scattering, by iteration and Simpson's one-third rule. The schemes for interpolation of the H-function for any arbitrary value of the direction parameter for a given albedo are also outlined. Good agreement has been observed in checks with the available results within one unit of the ninth decimal.
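
    For the isotropic, non-conservative case the H-function satisfies the classical integral equation 1/H(mu) = sqrt(1 - w0) + (w0/2) * integral_0^1 mu' H(mu') / (mu + mu') dmu', which can be solved by exactly the kind of fixed-point iteration plus Simpson's one-third rule the abstract mentions. A minimal sketch of that standard scheme (not the authors' new forms):

      import numpy as np
      from scipy.integrate import simpson

      def h_function(w0, n=201, tol=1e-10):
          # iterate 1/H = sqrt(1 - w0) + (w0/2) * int_0^1 mu' H(mu')/(mu + mu') dmu'
          # convergence slows as the albedo w0 approaches 1 (conservative case)
          mu = np.linspace(0.0, 1.0, n)
          H = np.ones(n)
          for _ in range(10000):
              integrals = np.array([simpson(mu * H / (m + mu), x=mu) if m > 0
                                    else simpson(H, x=mu)            # limit at mu = 0
                                    for m in mu])
              H_new = 1.0 / (np.sqrt(1.0 - w0) + 0.5 * w0 * integrals)
              if np.max(np.abs(H_new - H)) < tol:
                  return mu, H_new
              H = H_new
          return mu, H

      mu, H = h_function(w0=0.9)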

  10. Perfect Pipelining: A New Loop Parallelization Technique

    Microsoft Academic Search

    Alexander Aiken; Alexandru Nicolau

    1988-01-01

    Parallelizing compilers do not handle loops in a satisfactory manner. Fine-grain transformations capture irregular parallelism inside a loop body not amenable to coarser approaches but have limited ability to exploit parallelism across iterations. Coarse methods sacrifice irregular forms of parallelism in favor of pipelining (overlapping) iterations. In this paper we present a new transformation, Perfect Pipelining, that bridges the gap between these fine- and

  11. Parallel PIC plasma simulation through particle decomposition techniques

    E-print Network

    Vlad, Gregorio

    Applications Parallel PIC plasma simulation through particle decomposition techniques B. Di Martino 2000 Abstract Parallelization of a particle-in-cell (PIC) code has been accomplished through technique requires a moderate eort in porting the code in parallel form and results in intrinsic load

  12. Parallel MATLAB at VT: Parallel For Loops

    E-print Network

    Crawford, T. Daniel

    Lecture slides (72 pages) by John Burkardt (FSU) and Gene Cliff (AOE/ICAM), Virginia Tech Advanced Research Computing / Interdisciplinary Center for Applied Mathematics. Contents: an introduction to parallel MATLAB and parallel for loops, with ODE SWEEP and FMINCON examples.

  14. Results from the translation and adaptation of the Iranian Short-Form McGill Pain Questionnaire (I-SF-MPQ): preliminary evidence of its reliability, construct validity and sensitivity in an Iranian pain population

    PubMed Central

    2011-01-01

    Background The Short Form McGill Pain Questionnaire (SF-MPQ) is one of the most widely used instruments to assess pain. The aim of this study was to translate and culturally adapt the questionnaire for Farsi (the official language of Iran) speakers in order to test its reliability and sensitivity. Methods We followed Guillemin's guidelines for cross-cultural adaptation of health-related measures, which include forward-backward translations, expert committee meetings, and face validity testing in a pilot group. Subsequently, the questionnaire was administered to a sample of 100 diverse chronic pain patients attending a tertiary pain and rehabilitation clinic. In order to evaluate test-retest reliability, patients completed the questionnaire in the morning and early evening of their first visit. Finally, patients were asked to complete the questionnaire for the third time after completing a standardized treatment protocol three weeks later. The intraclass correlation coefficient (ICC) was used to evaluate reliability. We used principal component analysis to assess construct validity. Results Ninety-two subjects completed the questionnaire both in the morning and in the evening of the first visit (test-retest reliability), and after three weeks (sensitivity to change). Eight patients who did not finish the treatment protocol were excluded from the study. Internal consistency was found by Cronbach's alpha to be 0.951, 0.832 and 0.840 for sensory, affective and total scores respectively. ICC resulted in 0.906 for sensory, 0.712 for affective and 0.912 for total pain score. Item to subscale score correlations supported the convergent validity of each item to its hypothesized subscale. Correlations were observed to range from r2 = 0.202 to r2 = 0.739. Sensitivity or responsiveness was evaluated by paired t-test, which exhibited a significant difference between pre- and post-treatment scores (p < 0.001). Conclusion The results of this study indicate that the Iranian version of the SF-MPQ is a reliable questionnaire and responsive to changes in the subscale and total pain scores in Persian chronic pain patients over time. PMID:22074591
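
    Cronbach's alpha, the internal-consistency index reported above, is quick to compute from a respondents-by-items score matrix; a minimal NumPy sketch (the data below are simulated, not the study's):

      import numpy as np

      def cronbach_alpha(items):
          # items: respondents (rows) by items (columns)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
          total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
          return (k / (k - 1)) * (1.0 - item_vars / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(100, 1))
      items = latent + 0.8 * rng.normal(size=(100, 5))  # 5 correlated items
      print(cronbach_alpha(items))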

  15. High-performance parallel computing using residue number systems (RNS) and parallel decomposition

    Microsoft Academic Search

    Salman Mohammed Talahmeh

    1996-01-01

    Parallel computing plays a crucial role in many applications where speed is essential. Parallel computing requires well-designed algorithms. Such algorithms should provide the highest performance in terms of speed, cost, hardware implementation, reliability, and accuracy. VLSI is the technology that provides the most efficient and most economical implementation of such algorithms. The main objective of this thesis is the development

  16. Parallel Hardware and Parallel Software: a Reconciliation

    E-print Network

    Zanibbi, Richard

    Peter Welch (Computing Laboratory, University of Kent, Canterbury, UK). Abstract: Parallel hardware is commercially marketed today at all levels of granularity - from

  17. Parallel transports in webs

    Microsoft Academic Search

    Christian Fleischhack

    2004-01-01

    For connected reductive linear algebraic structure groups it is proven that every web is holonomically isolated. The possible tuples of parallel transports in a web form a Lie subgroup of the corresponding power of the structure group. This Lie subgroup is explicitly calculated and turns out to be independent of the chosen local trivializations. Moreover, explicit necessary and sufficient criteria

  18. The short form of the fear survey schedule for children-revised (FSSC-R-SF): an efficient, reliable, and valid scale for measuring fear in children and adolescents.

    PubMed

    Muris, Peter; Ollendick, Thomas H; Roelofs, Jeffrey; Austin, Kristin

    2014-12-01

    The present study examined the psychometric properties of the Short Form of the Fear Survey Schedule for Children-Revised (FSSC-R-SF) in non-clinical and clinically referred children and adolescents from the Netherlands and the United States. Exploratory as well as confirmatory factor analyses of the FSSC-R-SF yielded support for the hypothesized five-factor structure representing fears in the domains of (1) failure and criticism, (2) the unknown, (3) animals, (4) danger and death, and (5) medical affairs. The FSSC-R-SF showed satisfactory reliability and was capable of assessing gender and age differences in youths' fears and fearfulness that have been documented in previous research. Further, the convergent validity of the scale was good as shown by substantial and meaningful correlations with the full-length FSSC-R and alternative childhood anxiety measures. Finally, support was found for the discriminant validity of the scale. That is, clinically referred children and adolescents exhibited higher scores on the FSSC-R-SF total scale and most subscales as compared to their non-clinical counterparts. Moreover, within the clinical sample, children and adolescents with a major anxiety disorder generally displayed higher FSSC-R-SF scores than youths without such a diagnosis. Altogether, these findings indicate that the FSSC-R-SF is a brief, reliable, and valid scale for assessing fear sensitivities in children and adolescents. PMID:25445086

  19. Reliability Analysis of Tube Hydroforming Process

    E-print Network

    Boyer, Edmond

    Bouchaib Radi (Dept. Technics, FST Settat, BP 577). We propose a combined reliability and mechanical study to treat the tube hydroforming process (THP)... the proposed approach. Keywords: reliability, tube hydroforming process, forming limit curve, failure mode

  20. Parallel Computing Explained

    NSDL National Science Digital Library

    NCSA

    Several tutorials on parallel computing. Overview of parallel computing. Porting and code parallelization. Scalar, cache, and parallel code tuning. Timing, profiling and performance analysis. Overview of IBM Regatta P690.

  1. Parallel fast gauss transform

    SciTech Connect

    Sampath, Rahul S [ORNL]; Sundar, Hari [Siemens Corporate Research]; Veerapaneni, Shravan [New York University]

    2010-01-01

    We present fast adaptive parallel algorithms to compute the sum of N Gaussians at N points. Direct sequential computation of this sum would take O(N^2) time. The parallel time complexity estimates for our algorithms are O(N/n_p) for uniform point distributions and O((N/n_p) log(N/n_p) + n_p log n_p) for non-uniform distributions using n_p CPUs. We incorporate a plane-wave representation of the Gaussian kernel which permits 'diagonal translation'. We use parallel octrees and a new scheme for translating the plane-waves to efficiently handle non-uniform distributions. Computing the transform to six-digit accuracy at 120 billion points took approximately 140 seconds using 4096 cores on the Jaguar supercomputer. Our implementation is 'kernel-independent' and can handle other 'Gaussian-type' kernels even when explicit analytic expression for the kernel is not known. These algorithms form a new class of core computational machinery for solving parabolic PDEs on massively parallel architectures.
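
    The O(N^2) baseline these algorithms accelerate is the dense sum G(y_j) = sum_i q_i exp(-||y_j - x_i||^2 / delta); a minimal NumPy version, useful as a correctness check at small N (the bandwidth delta and the data are hypothetical):

      import numpy as np

      def direct_gauss_transform(sources, targets, weights, delta):
          # O(N*M) dense evaluation of the discrete Gauss transform
          d2 = ((targets[:, None, :] - sources[None, :, :]) ** 2).sum(axis=-1)
          return np.exp(-d2 / delta) @ weights

      rng = np.random.default_rng(0)
      x = rng.random((500, 3))   # source points
      q = rng.random(500)        # source weights
      print(direct_gauss_transform(x, x, q, delta=0.1)[:3])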

  2. Parallel MATLAB at VT: Parallel For Loops

    E-print Network

    Crawford, T. Daniel

    Lecture slides (56 pages) by John Burkardt (FSU) and Gene Cliff (AOE/ICAM), Virginia Tech Advanced Research Computing / Interdisciplinary Center for Applied Mathematics. Contents: an introduction to parallel MATLAB and parallel for loops, with ODE SWEEP and MD examples.

  3. PHACT: Parallel HOG and Correlation Tracking

    NASA Astrophysics Data System (ADS)

    Hassan, Waqas; Birch, Philip; Young, Rupert; Chatwin, Chris

    2014-03-01

    Histogram of Oriented Gradients (HOG) based methods for the detection of humans have become one of the most reliable methods of detecting pedestrians with a single passive imaging camera. However, they are not 100 percent reliable. This paper presents an improved tracker for the monitoring of pedestrians within images. The Parallel HOG and Correlation Tracking (PHACT) algorithm utilises self-learning to overcome the drifting problem. A detection algorithm that utilises HOG features runs in parallel to an adaptive and stateful correlator. The combination of both acting in a cascade provides a much more robust tracker than the two components could produce separately.

  4. Comparison of Reliability Measures under Factor Analysis and Item Response Theory

    ERIC Educational Resources Information Center

    Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

    2012-01-01

    Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…
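
    For orientation, the sum score-based omega mentioned here has, under a one-factor model with loadings \lambda_i and unique variances \theta_i, the standard closed form below (quoted from the general psychometric literature, not from the article itself):

      \omega = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \theta_i}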

  5. Reliability-aware resource allocation in HPC systems

    Microsoft Academic Search

    Narasimha Raju Gottumukkala; Chokchai Leangsuksun; Narate Taerat; Raja Nassar; Stephen L. Scott

    2007-01-01

    Failures and downtimes have a severe impact on the performance of parallel programs in a large scale High Performance Computing (HPC) environment. There have been several research efforts to understand the failure behavior of computing systems. However, the presence of a multitude of hardware and software components required for uninterrupted operation of parallel programs makes failure and reliability prediction a challenging problem. HPC

  6. Parallel Lines and Transversals

    NSDL National Science Digital Library

    Mrs. Sonntag

    2010-10-07

    In this lab you will review the names of angles formed by transversals. In addition you will discover the unique relationship that these pairs of angles have when the transversal cuts through two parallel lines. We have already discussed many angle relationships in class. For example, we have learned to identify vertical angles and linear pairs. Each of these angle pairs has a special relationship: vertical angles are congruent, and linear pairs are supplementary. In the following lesson you will review the names of angle pairs ...

  7. Optimal reliability of systems subject to imperfect fault-coverage

    Microsoft Academic Search

    Suprasad V. Amari; Joanne Bechta Dugan; Ravindra B. Misra

    1999-01-01

    This paper maximizes the reliability of systems subjected to imperfect fault-coverage. The results include the effect of common-cause failures and 'maximum allowable spare limit'. The generalized results are presented and then the policies for some specific systems are given. The systems considered include parallel, parallel-series, series-parallel, k-out-of-n, and NMR (k-out-of-(2k-1)) systems. The results are generalized for the non s-identical
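
    As a toy version of the effect being optimized: with i.i.d. units of reliability r and per-failure coverage c, a parallel system survives only if at least one unit works and every failure that occurred was covered, so adding spares eventually hurts. A minimal sketch under these simplifying assumptions (not the paper's exact model):

      from math import comb

      def parallel_reliability(n, r, c):
          # sum over k failed units: all k failures covered, at least one survivor
          return sum(comb(n, k) * (1 - r) ** k * r ** (n - k) * c ** k
                     for k in range(n))

      # reliability first rises, then falls: a "maximum allowable spare limit"
      for n in range(1, 8):
          print(n, round(parallel_reliability(n, r=0.9, c=0.95), 6))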

  8. A Parallel Differential Evolution Algorithm A Parallel Differential Evolution Algorithm

    Microsoft Academic Search

    Wojciech Kwedlo; Krzysztof Bandurski

    2006-01-01

    In the paper the problem of using a differential evolution algorithm for feed-forward neural network training is considered. A new parallelization scheme for the computation of the fitness function is proposed. This scheme is based on data decomposition. Both the learning set and the population of the evolutionary algorithm are distributed among processors. The processors form a pipeline using the

  9. Algorithmically Specialized Parallel Architecture For Robotics

    NASA Technical Reports Server (NTRS)

    Fijany, Amir; Bejczy, Antal K.

    1991-01-01

    Computing system called Robot Mathematics Processor (RMP) contains large number of processor elements (PE's) connected in various parallel and serial combinations reconfigurable via software. Special-purpose architecture designed for solving diverse computational problems in robot control, simulation, trajectory generation, workspace analysis, and like. System an MIMD-SIMD parallel architecture capable of exploiting parallelism in different forms and at several computational levels. Major advantage lies in design of cells, which provides flexibility and reconfigurability superior to previous SIMD processors.

  10. Stochastic Methods in Reliability and Risk Management

    E-print Network

    Li, Haijun

    From the preface: This volume focuses on stochastic methods developed for reliability modeling and risk analysis. Reliability theory and risk analysis have... This special volume highlights this convergence of reliability modeling and risk analysis, that forms the core

  11. Parallel operation of voltage source inverters with minimal intermodule reactors

    Microsoft Academic Search

    Bin Shi; Giri Venkataramanan

    2004-01-01

    Realization of large horsepower motor drives using parallel-connected voltage source inverters rated at smaller power levels would be highly desirable. A robust technique for such a realization would result in several benefits including modularity, ease of maintenance, n+1 redundancy, reliability, etc. Techniques for parallel operation of voltage source inverters with relatively large load inductance have been well established in the

  12. Parallel operation control technique of voltage source inverters in UPS

    Microsoft Academic Search

    Duan Shanxu; Meng Yu; Xiong Jian; Kang Yong; Chen Jian

    1999-01-01

    The control technique of a parallel operation system of voltage source inverters with other inverters or with utility source has been applied in many fields, especially in uninterruptible power supply (UPS). The multi-module UPS can flexibly implement expansion of power system capacities. Furthermore, it can be used to build up a parallel redundant system in order to improve the reliability

  13. A characterization of parallel computers for object-oriented applications

    Microsoft Academic Search

    Michael Orlovsky; Kamal Jabbour

    1996-01-01

    We present results of the characterization of parallel architectures for the execution of algorithms developed using object-oriented languages. Object-oriented analysis and design are applied to software to yield reusable, reliable, and portable code. Care must be taken to structure the software for efficient execution while avoiding implementation dependencies that bind the software to a specific parallel architecture. Characterization results for

  14. Parallel Activation in Bilingual Phonological Processing

    ERIC Educational Resources Information Center

    Lee, Su-Yeon

    2011-01-01

    In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

  15. Implementation of an efficient parallel BDD package

    Microsoft Academic Search

    Tony Stornetta; Forrest Brewer

    1996-01-01

    Large BDD applications push computing resources to their limits. One solution to overcoming resource limitations is to distribute the BDD data structure across multiple networked workstations. This paper presents an efficient parallel BDD package for a distributed environment such as a network of workstations (NOW) or a distributed memory parallel computer. The implementation exploits a number of different forms of

  16. A Parallel Algorithm for Relational Database Normalization

    Microsoft Academic Search

    Edward R. Omiecinski

    1990-01-01

    The problem of database normalization in a parallel environment is examined. Generating relation schemes in third normal form is straightforward when given a set of functional dependencies that is a reduced cover. It is shown that a reduced cover for a set of functional dependencies can be produced in parallel. The correctness of the algorithm is based on two important
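
    The sequential kernel being parallelized here, computing a reduced cover, rests on attribute closure; a compact Python sketch of the closure and redundant-dependency-removal steps (illustrative only: the paper's contribution is doing this in parallel, and removal of extraneous left-hand-side attributes is omitted):

      def closure(attrs, fds):
          # attribute closure of attrs under functional dependencies (lhs, rhs)
          c = set(attrs)
          changed = True
          while changed:
              changed = False
              for lhs, rhs in fds:
                  if set(lhs) <= c and not set(rhs) <= c:
                      c |= set(rhs)
                      changed = True
          return c

      def remove_redundant(fds):
          # drop each FD that is derivable from the remaining ones
          result = list(fds)
          for fd in list(result):
              rest = [f for f in result if f != fd]
              if set(fd[1]) <= closure(fd[0], rest):
                  result = rest
          return result

      # A -> B, B -> C, A -> C: the last FD is transitively implied, hence dropped
      print(remove_redundant([("A", "B"), ("B", "C"), ("A", "C")]))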

  17. Improved CDMA Performance Using Parallel Interference Cancellation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin; Divsalar, Dariush

    1995-01-01

    This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.
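
    The core idea, regenerating and subtracting every other user's interference at once and then re-deciding, fits in a few lines for a synchronous CDMA toy model. A one-stage sketch with hard tentative decisions (spreading codes, powers, and noise level are hypothetical, and the report's null-zone/soft devices and multi-stage variants are omitted):

      import numpy as np

      rng = np.random.default_rng(1)
      K, N = 8, 32                                            # users, spreading length
      S = rng.choice([-1.0, 1.0], size=(N, K)) / np.sqrt(N)   # spreading codes
      A = np.ones(K)                                          # equal user amplitudes
      b = rng.choice([-1.0, 1.0], size=K)                     # transmitted bits
      y = S @ (A * b) + 0.1 * rng.normal(size=N)              # received chip vector

      r = S.T @ y                                             # matched-filter bank
      b_tent = np.sign(r)                                     # hard tentative decisions

      # stage 1: each user subtracts interference regenerated from all other users
      b_hat = np.empty(K)
      for k in range(K):
          others = np.delete(np.arange(K), k)
          y_clean = y - S[:, others] @ (A[others] * b_tent[others])
          b_hat[k] = np.sign(S[:, k] @ y_clean)

      print("errors before:", int((b_tent != b).sum()),
            "after:", int((b_hat != b).sum()))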

  18. Reliability Generalization: "Lapsus Linguae"

    ERIC Educational Resources Information Center

    Smith, Julie M.

    2011-01-01

    This study examines the proposed Reliability Generalization (RG) method for studying reliability. RG employs the application of meta-analytic techniques similar to those used in validity generalization studies to examine reliability coefficients. This study explains why RG does not provide a proper research method for the study of reliability,…

  19. Reliability. ERIC Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.; Schafer, William D.

    This digest discusses sources of error in testing, several approaches to estimating reliability, and several ways to increase test reliability. Reliability has been defined in different ways by different authors, but the best way to look at reliability may be the extent to which measurements resulting from a test are characteristics of those being…

  20. Reviewing Traffic Reliability Research

    Microsoft Academic Search

    Dianhai WANG; Hongsheng QI; Cheng XU

    2010-01-01

    Multi-dimensionality, stochasticity, and dynamics are essential characteristics of urban traffic operation. Traffic reliability introduces the idea of reliability into traffic research and is an important field for the causal analysis of traffic problems. Considerable research has been conducted on traffic reliability, covering theory to practice and model to algorithm. There already exists a framework for reliability analysis. However, few…

  1. Using multivariate generalizability theory to assess the effect of content stratification on the reliability of a performance assessment.

    PubMed

    Keller, Lisa A; Clauser, Brian E; Swanson, David B

    2010-12-01

    In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for lower reliability, and in particular, low reliability resulting from task specificity. Since reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are randomly sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring this stratification in the reliability analysis results in an underestimate of "parallel forms" reliability, and an overestimate of the person-by-task component. This research explores the effect of representing and misrepresenting the stratification appropriately in estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that the proper specification of the analytic design is essential in yielding the proper information both about the generalizability of the assessment and the standard error of measurement. Further, illustrative D studies present the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed. PMID:20509047
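
    A minimal numerical sketch of the D-study arithmetic at issue: the generalizability coefficient E(rho^2) = var_p / (var_p + var_pt / n_t) shrinks when person-by-task variance that fixed content strata would absorb is instead charged entirely to error. All variance components below are hypothetical illustrations, not values from the study.

        # Hypothetical variance components for a persons x tasks D study.
        def g_coefficient(var_p, var_pt, n_tasks):
            """E(rho^2) = var_p / (var_p + var_pt / n_tasks)."""
            return var_p / (var_p + var_pt / n_tasks)

        var_p = 0.40           # universe-score (person) variance
        var_pt_random = 0.90   # person-by-task variance, tasks treated as random
        var_pt_fixed = 0.60    # residual person-by-task variance within fixed strata

        n_tasks = 8
        print("tasks fully random:", round(g_coefficient(var_p, var_pt_random, n_tasks), 3))
        print("strata held fixed: ", round(g_coefficient(var_p, var_pt_fixed, n_tasks), 3))

    With these made-up components, ignoring the fixed strata reports a coefficient of about 0.78 where the stratified analysis yields about 0.84, the direction of underestimation the abstract describes.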

  2. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion systems mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  3. Parallel Composition Communication and Allow Hiding Parallel Processes

    E-print Network

    Groote, Jan Friso

    Parallel Composition, Communication, and Allow/Hiding. Mohammad Mousavi, Eindhoven University of Technology, The Netherlands. Requirement Analysis and Design Verification, 2008-2009.

  4. Data parallel algorithms

    Microsoft Academic Search

    W. Daniel Hillis; Guy L. Steele Jr.

    1986-01-01

    Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.
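
    A classic example of the data parallel style is the logarithmic-step prefix sum. The Python sketch below simulates it sequentially; on a data parallel machine, each step would apply the same update to every element at once.

        # Data-parallel inclusive prefix sum: O(log n) steps, each applying
        # one uniform operation across the whole array. Simulated serially
        # here; each comprehension stands for one fully parallel step.
        def data_parallel_scan(x):
            x = list(x)
            d = 1
            while d < len(x):
                # every element updates "in parallel" from a snapshot of x
                x = [x[i] if i < d else x[i] + x[i - d] for i in range(len(x))]
                d *= 2
            return x

        print(data_parallel_scan([1, 2, 3, 4, 5, 6, 7, 8]))
        # [1, 3, 6, 10, 15, 21, 28, 36]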

  5. Introduction to Parallel Programming

    E-print Network

    Parallel programming allows the user to use multiple CPUs concurrently. Reasons for parallel execution include shortening execution time. The speedup one can expect is a function of the number of processors (N) used and the code fraction that is parallel (p).
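
    The speedup expression referred to above is Amdahl's law; a small Python sketch, using the standard form T(N) = T(1)((1 - p) + p/N):

        # Amdahl's law: speedup S(N) = T(1) / T(N) = 1 / ((1 - p) + p / N),
        # where p is the parallel fraction and N the processor count.
        def speedup(p, n):
            return 1.0 / ((1.0 - p) + p / n)

        for n in (2, 4, 16, 1024):
            print(n, round(speedup(0.9, n), 2))  # p = 0.9 caps speedup at 10x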

  6. Introduction to Parallel Programming

    E-print Network

    Alvarado, Alejandro Sánchez

    Introduction to Parallel Programming. Martin Cuma, Center for High Performance Computing, University of Utah, mcuma@chpc.utah.edu. Overview: types of parallel computers; parallel programming options; how to write parallel applications; how to compile; how to debug/profile; summary, future…

  7. Towards Distributed Memory Parallel Program Analysis

    SciTech Connect

    Quinlan, D; Barany, G; Panas, T

    2008-06-17

    This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

  8. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  9. Energy Efficient Redundant Configurations for Reliable Parallel Servers

    E-print Network

    Texas at San Antonio, University of

    …requests and pessimistic approaches favor higher levels of modular redundancy. Key words: energy management. Applies to embedded systems that are generally battery powered and have a limited energy budget; dynamic voltage scaling (DVS), a popular and widely used energy management technique, is used to scale down system processing…

  10. Speculative parallelization of partially parallel loops

    E-print Network

    Dang, Francis Hoai Dinh

    2009-05-15

    A fully parallel data dependence test was applied to determine if the loop had any cross-processor dependences. If the test failed, then the loop was re-executed serially. While this method exploits doall parallelism well, it can cause slowdowns for loops…

  11. Boolean Circuit Programming: A New Paradigm to Design Parallel Algorithms

    E-print Network

    Ha, Soonhoi

    Kunsoo Park, Heejin Park, Woo-Chul Jeun, Soonhoi Ha. Abstract: The Boolean circuit has been an important model of parallel computation [18, 19]. Uniform Boolean circuits have…

  12. Power electronics reliability analysis.

    SciTech Connect

    Smith, Mark A.; Atcitty, Stanley

    2009-12-01

    This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.
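
    As a rough sketch of the second approach (deriving system reliability from component reliability through a fault tree), the Python fragment below evaluates a two-gate tree for a fictitious device; the gate structure and failure probabilities are invented for illustration, and components are assumed independent.

        # Fault-tree evaluation with independent components: AND gates
        # multiply failure probabilities, OR gates combine complements.
        def and_gate(probs):   # all inputs must fail
            out = 1.0
            for p in probs:
                out *= p
            return out

        def or_gate(probs):    # any single input failing fails the gate
            out = 1.0
            for p in probs:
                out *= (1.0 - p)
            return 1.0 - out

        pump_a, pump_b, controller = 0.02, 0.02, 0.005  # hypothetical numbers
        # top event: both pumps fail, or the controller fails
        top = or_gate([and_gate([pump_a, pump_b]), controller])
        print(f"system failure probability: {top:.6f}")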

  13. Human Reliability Program Overview

    SciTech Connect

    Bodin, Michael

    2012-09-25

    This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

  14. Integrated avionics reliability

    NASA Technical Reports Server (NTRS)

    Alikiotis, Dimitri

    1988-01-01

    The integrated avionics reliability task is an effort to build credible reliability and/or performability models for multisensor integrated navigation and flight control. The research was initiated by the reliability analysis of a multisensor navigation system consisting of the Global Positioning System (GPS), the Long Range Navigation system (Loran C), and an inertial measurement unit (IMU). Markov reliability models were developed based on system failure rates and mission time.

  15. Theory of reliable systems

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1975-01-01

    An attempt was made to refine the current notion of system reliability by identifying and investigating attributes of a system which are important to reliability considerations. Techniques which facilitate analysis of system reliability are included. Special attention was given to fault tolerance, diagnosability, and reconfigurability characteristics of systems.

  16. Software component reliability analysis

    Microsoft Academic Search

    William W. Everett

    1999-01-01

    This paper describes an approach to analyzing software reliability using component analysis. It walks through a 6-step procedure for performing software component reliability analysis. The analysis can begin prior to testing the software and can help in selecting testing strategies. It uses the Extended Execution Time (EET) reliability growth model at the software component level. The paper describes how to…

  17. Parallel I/O Systems

    NSDL National Science Digital Library

    Amy Apon

    Redundant disk array architectures; fault tolerance issues in parallel I/O systems; caching and prefetching; parallel file systems; parallel I/O systems; parallel I/O programming paradigms; parallel I/O applications and environments; parallel programming with parallel I/O.

  18. Parallel methods for dynamic simulation of multiple manipulator systems

    NASA Technical Reports Server (NTRS)

    Mcmillan, Scott; Sadayappan, P.; Orin, David E.

    1993-01-01

    In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of a CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four-manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four-processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

  19. Reliability computation from reliability block diagrams

    NASA Technical Reports Server (NTRS)

    Chelson, P. O.; Eckstein, R. E.

    1971-01-01

    A method and a computer program are presented to calculate probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled include any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.
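
    A minimal Python sketch of the arithmetic such a program performs on series, active-parallel, and standby blocks, assuming independent components, constant failure rates, and perfect switching (the program described above additionally models dormancy and imperfect switching):

        import math

        # Reliability-block-diagram building blocks for independent components.
        def series(rs):
            out = 1.0
            for r in rs:
                out *= r
            return out

        def active_parallel(rs):
            out = 1.0
            for r in rs:
                out *= (1.0 - r)
            return 1.0 - out

        def cold_standby(rate, t, spares=1):
            # constant failure rate, perfect switching: Poisson partial sum
            return math.exp(-rate * t) * sum((rate * t) ** k / math.factorial(k)
                                             for k in range(spares + 1))

        # Hypothetical system: one series unit, a 2-unit active-parallel block,
        # and a unit backed by one cold spare, over a 1000-hour mission.
        r = series([0.99, active_parallel([0.90, 0.90]), cold_standby(1e-4, 1000)])
        print(round(r, 4))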

  20. High Performance Parallel Computational Nanotechnology

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    At a recent press conference, NASA Administrator Dan Goldin encouraged NASA Ames Research Center to take a lead role in promoting research and development of advanced, high-performance computer technology, including nanotechnology. Manufacturers of leading-edge microprocessors currently perform large-scale simulations in the design and verification of semiconductor devices and microprocessors. Recently, the need for this intensive simulation and modeling analysis has greatly increased, due in part to the ever-increasing complexity of these devices, as well as the lessons of experiences such as the Pentium fiasco. Simulation, modeling, testing, and validation will be even more important for designing molecular computers because of the complex specification of millions of atoms, thousands of assembly steps, as well as the simulation and modeling needed to ensure reliable, robust and efficient fabrication of the molecular devices. The software for this capacity does not exist today, but it can be extrapolated from the software currently used in molecular modeling for other applications: semi-empirical methods, ab initio methods, self-consistent field methods, Hartree-Fock methods, molecular mechanics; and simulation methods for diamondoid structures. In as much as it seems clear that the application of such methods in nanotechnology will require powerful, highly powerful systems, this talk will discuss techniques and issues for performing these types of computations on parallel systems. We will describe system design issues (memory, I/O, mass storage, operating system requirements, special user interface issues, interconnects, bandwidths, and programming languages) involved in parallel methods for scalable classical, semiclassical, quantum, molecular mechanics, and continuum models; molecular nanotechnology computer-aided designs (NanoCAD) techniques; visualization using virtual reality techniques of structural models and assembly sequences; software required to control mini robotic manipulators for positional control; scalable numerical algorithms for reliability, verifications and testability. There appears no fundamental obstacle to simulating molecular compilers and molecular computers on high performance parallel computers, just as the Boeing 777 was simulated on a computer before manufacturing it.

  1. Parallel flow diffusion battery

    DOEpatents

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  2. Parallel flow diffusion battery

    DOEpatents

    Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  3. DC Circuits: Parallel Resistances

    NSDL National Science Digital Library

    In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.
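
    The computation the activity practices is the reciprocal-sum rule for parallel resistors, 1/R_eq = sum of 1/R_i; a one-function Python sketch:

        # Equivalent resistance of resistors in parallel.
        def parallel_resistance(resistances):
            return 1.0 / sum(1.0 / r for r in resistances)

        print(parallel_resistance([100.0, 100.0]))               # 50.0 ohms
        print(round(parallel_resistance([220.0, 330.0, 470.0]), 1))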

  4. Learning in Parallel Universes

    Microsoft Academic Search

    Michael R. Berthold; Bernd Wiswedel

    2007-01-01

    This abstract summarizes a brief, preliminary formalization of learning in parallel universes. It also attempts to highlight a few neighboring learning paradigms to illustrate how parallel learning fits into the greater picture.

  5. Reliability models for dataflow computer systems

    NASA Technical Reports Server (NTRS)

    Kavi, K. M.; Buckles, B. P.

    1985-01-01

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

  6. Introduction to parallel programming

    SciTech Connect

    Brawer, S. (Encore Computer Corp., Marlborough, MA (US))

    1989-01-01

    This book describes parallel programming and all the basic concepts illustrated by examples in a simplified FORTRAN. Concepts covered include: The parallel programming model; The creation of multiple processes; Memory sharing; Scheduling; Data dependencies. In addition, a number of parallelized applications are presented, including a discrete-time, discrete-event simulator, numerical integration, Gaussian elimination, and parallelized versions of the traveling salesman problem and the exploration of a maze.

  7. Parallel processing ITS

    SciTech Connect

    Fan, W.C.; Halbleib, J.A. Sr.

    1996-09-01

    This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor, or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

  8. An introduction to compilation issues for parallel machines

    Microsoft Academic Search

    Maya Gokhale; William Carlson

    1992-01-01

    The exploitation of today's high-performance computer systems requires the effective use of parallelism in many forms and at numerous levels. This survey article discusses program analysis and restructuring techniques that target parallel architectures. We first describe various categories of architectures that are oriented toward parallel computation models: vector architectures, shared-memory multiprocessors, massively parallel machines, message-passing architectures, VLIWs, and multithreaded architectures.

  9. Summary of Research on Reliability Criteria-Based Flight System Control

    NASA Technical Reports Server (NTRS)

    Wu, N. Eva; Belcastro, Christine (Technical Monitor)

    2002-01-01

    This paper presents research on the reliability assessment of adaptive flight control systems. The topics include: 1) Overview of Project Focuses; 2) Reliability Analysis; and 3) Design for Reliability. This paper is presented in viewgraph form.

  10. Pthreads for Dynamic Parallelism

    Microsoft Academic Search

    Girija J. Narlikar; Guy E. Blelloch

    1998-01-01

    Expressing a large number of lightweight, parallel threads in a shared address space significantly eases the task of writing a parallel program. Threads can be dynamically created to execute individual parallel tasks; the implementation schedules these threads onto the processors and effectively balances the load. However, unless the threads scheduler is designed carefully, such a parallel program may suffer…

  11. Special issue on parallelism

    Microsoft Academic Search

    Karen A. Frenkel

    1986-01-01

    The articles presented in our Special Issue on parallel processing on the supercomputing scale reflect, to some extent, splits in the community developing these machines. There are several schools of thought on how best to implement parallel processing at both the hard- and software levels. Controversy exists over the wisdom of aiming for general- or special-purpose parallel machines, and what…

  12. Reliability quantification and visualization for electric microgrids

    NASA Astrophysics Data System (ADS)

    Panwar, Mayank

    The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This has provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects were competitively selected, located around the nation. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop, and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system with the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area electric power system (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. The reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus, its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and a group of assets in a microgrid. Two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informative decision making and planning. Electronic Appendices I and II contain data and MATLAB program codes for the analysis and visualization in this work.

  13. Parallel algorithm development

    SciTech Connect

    Adams, T.F.

    1996-06-01

    Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

  14. Banking on NUG reliability

    SciTech Connect

    Kolbe, L.; Johnson, S.; Pfeifenberger, J.

    1994-05-15

    Plant reliability is an important issue that has been raised frequently in the context of purchased power. Most recently, section 712 of the 1992 Energy Policy Act asked state regulators to explore whether the use of highly leveraged capital structures by exempt wholesale generators (EWGs) threatens reliability. This article shows that the relative reliability of nonutility generators (NUGs) and utility-owned generation varies from case to case. Therefore, absolute statements about NUG reliability as a function of financial leverage are not only difficult to make but also highly suspect. NUG reliability is a contentious issue. Some parties strongly support the view that NUG reliability generally exceeds that of utility-owned generation. Others question this view. Indeed, studies that showed NUG plants to be more reliable than utility-owned generation may have overstated NUG reliability for several reasons. First, NUG plants are, on average, newer than utility plants; newer plants tend to be more reliable. Second, the relatively continuous operation of NUGs causes less wear and tear. And third, NUG reliability data may suffer from a self-selection bias: comprehensive data on utility plant performance exist, but NUG plant performance data have been compiled from survey responses and successful NUGs may have been more likely to respond.

  15. Parallel Atomistic Simulations

    SciTech Connect

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

  16. Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples

    ERIC Educational Resources Information Center

    Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

    2005-01-01

    This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

  17. A reliability and comparative analysis of two standby system configurations.

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.

    1973-01-01

    Equations are derived which enable one to calculate the system reliability for parallel or triple modular redundant systems with standby spares. Software error detection is introduced into the TMR/Spares system configuration in order to utilize fully all of the units. An indication of the sensitivity of the system reliability to an increase in the number of spares, partitioning, switching, variations in the powered and unpowered failures rates, and time is presented. A comparison of the parallel and the TMR/Spares system configurations, under similar conditions, is given.
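
    For independent units of reliability R with ideal voting and switching, the textbook baselines being compared are 1 - (1 - R)^2 for a two-unit parallel system and 3R^2 - 2R^3 for TMR. A small Python sketch (the report's standby-spare variants add powered/unpowered failure rates and switching terms not modeled here):

        # Baseline redundancy formulas for independent units of reliability R.
        def parallel2(r):
            return 1.0 - (1.0 - r) ** 2      # two-unit active parallel

        def tmr(r):
            return 3.0 * r**2 - 2.0 * r**3   # 2-of-3 majority voting

        for r in (0.90, 0.99, 0.999):
            print(r, round(parallel2(r), 6), round(tmr(r), 6))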

  18. Human reliability analysis

    SciTech Connect

    Dougherty, E.M.; Fragola, J.R.

    1988-01-01

    The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory. Draws upon reliability analysis, psychology, human factors engineering, and statistics, integrating elements of these fields within a systems framework. Provides a history of human reliability analysis, and includes examples of the application of the systems approach.

  19. A Bayesian approach to reliability and confidence

    NASA Technical Reports Server (NTRS)

    Barnes, Ron

    1989-01-01

    The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.
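
    The simplest closed-form instance of such a Bayesian estimate, for a constant-reliability component with the uniform (worst-case) prior mentioned above and pass/fail test data, is the conjugate Beta-Binomial update sketched below; the counts are hypothetical, and the time-varying failure-rate case requires the more general machinery discussed in the abstract.

        # Beta-Binomial update: uniform Beta(1, 1) prior, s successes in n
        # trials gives a Beta(1 + s, 1 + n - s) posterior on reliability.
        from scipy.stats import beta

        n, s = 20, 19                      # hypothetical test data
        posterior = beta(1 + s, 1 + n - s)
        print("posterior mean reliability:", round(posterior.mean(), 3))
        print("90% credible lower bound:  ", round(posterior.ppf(0.10), 3))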

  20. Recalibrating software reliability models

    NASA Technical Reports Server (NTRS)

    Brocklehurst, Sarah; Chan, P. Y.; Littlewood, Bev; Snell, John

    1989-01-01

    In spite of much research effort, there is no universally applicable software reliability growth model which can be trusted to give accurate predictions of reliability in all circumstances. Further, it is not even possible to decide a priori which of the many models is most suitable in a particular context. In an attempt to resolve this problem, techniques were developed whereby, for each program, the accuracy of various models can be analyzed. A user is thus enabled to select that model which is giving the most accurate reliability predictions for the particular program under examination. One of these ways of analyzing predictive accuracy, called the u-plot, in fact allows a user to estimate the relationship between the predicted reliability and the true reliability. It is shown how this can be used to improve reliability predictions in a completely general way by a process of recalibration. Simulation results show that the technique gives improved reliability predictions in a large proportion of cases. However, a user does not need to trust the efficacy of recalibration, since the new reliability estimates produced by the technique are truly predictive and so their accuracy in a particular application can be judged using the earlier methods. The generality of this approach would therefore suggest that it be applied as a matter of course whenever a software reliability model is used.

  1. Design reliability engineering

    SciTech Connect

    Niall, R.; Hunt, N.M. (EG and G Idaho, Inc., Idaho Falls, (USA)); Buden, D.

    1989-01-01

    Improved design techniques are needed to achieve high reliability at minimum cost. This is especially true of space systems where lifetimes of many years without maintenance are needed and severe mass limitations exist. Reliability must be designed into these systems from the start. Techniques are now being explored to structure a formal design processes that will be more complete and less expensive. The intent is to integrate the best features of design, reliability analysis, and expert systems to design highly reliable systems to meet stressing needs. Taken into account are the large uncertainties that exist in materials, design models, and fabrication techniques. Expert systems are a convenient method of integrating into the design process a complete definition of all elements that should be considered and an opportunity to integrate the design process with reliability, safety, test engineering, maintenance, and operator training. The approach being pursued to accomplish the design reliability engineering process is described. Potential benefits using the design reliability engineering approach are discussed. Design reliability engineering can become a very powerful and cost-effective design tool. The methodology takes advantage of recent developments in computers, expert systems, and reliability engineering to cost-effectively enhance the design process.

  2. Operational safety reliability research

    SciTech Connect

    Hall, R.E.; Boccio, J.L.

    1986-01-01

    Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime at the plant.

  3. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based on hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  4. Non-Cartesian parallel imaging reconstruction.

    PubMed

    Wright, Katherine L; Hamilton, Jesse I; Griswold, Mark A; Gulani, Vikas; Seiberlich, Nicole

    2014-11-01

    Non-Cartesian parallel imaging has played an important role in reducing data acquisition time in MRI. The use of non-Cartesian trajectories can enable more efficient coverage of k-space, which can be leveraged to reduce scan times. These trajectories can be undersampled to achieve even faster scan times, but the resulting images may contain aliasing artifacts. Just as Cartesian parallel imaging can be used to reconstruct images from undersampled Cartesian data, non-Cartesian parallel imaging methods can mitigate aliasing artifacts by using additional spatial encoding information in the form of the nonhomogeneous sensitivities of multi-coil phased arrays. This review will begin with an overview of non-Cartesian k-space trajectories and their sampling properties, followed by an in-depth discussion of several selected non-Cartesian parallel imaging algorithms. Three representative non-Cartesian parallel imaging methods will be described, including Conjugate Gradient SENSE (CG SENSE), non-Cartesian generalized autocalibrating partially parallel acquisition (GRAPPA), and Iterative Self-Consistent Parallel Imaging Reconstruction (SPIRiT). After a discussion of these three techniques, several potential promising clinical applications of non-Cartesian parallel imaging will be covered. PMID:24408499

  5. Component Specification for Parallel Coupling Infrastructure

    Microsoft Academic Search

    Jay Walter Larson; Boyana Norris

    2007-01-01

    Coupled systems comprise multiple mutually interacting subsystems, and are an increasingly common computational science application, most notably as multiscale and multiphysics models. Parallel computing, and in particular message-passing programming, have spurred the development of these models, but also present a parallel coupling problem (PCP) in the form of intermodel data dependencies. The PCP complicates model coupling through requirements for the…

  6. On mesh rezoning algorithms for parallel platforms

    SciTech Connect

    Plaskacz, E.J.

    1995-07-01

    A mesh rezoning algorithm for finite element simulations in a parallel-distributed environment is described. The cornerstones of the algorithm are: the parallel computation of distortion norms at the element and subdomain level; the exchange of the individual subdomain norms to form a subdomain distortion vector; the classification of subdomains; and the rezoning behavior prescribed within each subdomain as a response to its own classification and the classification of neighboring subdomains.

  7. Java Parallel Secure Stream for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2001-09-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulties achieving optimal TCP performance, due to network tuning of the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition, an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally a few applications using this package will be discussed.

  8. A high-speed linear algebra library with automatic parallelism

    NASA Technical Reports Server (NTRS)

    Boucher, Michael L.

    1994-01-01

    Parallel or distributed processing is key to getting highest performance workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

  9. Statistical criteria for parallel tests: a comparison of accuracy and power.

    PubMed

    García-Pérez, Miguel A

    2013-12-01

    Parallel tests are needed so that alternate forms can be applied to different groups or on different occasions, but also in the context of split-half reliability estimation for a given test. Statistically, parallelism holds beyond reasonable doubt when the null hypotheses of equality of observed means and variances across the two forms (or halves) are not rejected. Several statistical tests have been proposed for this purpose, but their performance has never been compared. This study assessed the relative performance (type I error rate and power) of the Student-Pitman-Morgan, Bradley-Blackwood, and Wilks tests of equality of means and variances in the typical conditions surrounding studies of parallelism-namely, integer-valued and bounded test scores with distributions that may not be bivariate normal. The results advise against the use of the Wilks test and support the use of the Bradley-Blackwood test because of its simplicity and its minimally better performance in comparison with the more cumbersome Student-Pitman-Morgan test. PMID:23413034
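
    A Python sketch of the Bradley-Blackwood procedure recommended above: regress D = X - Y on S = X + Y, then jointly F-test that intercept and slope are both zero (F on 2 and n - 2 degrees of freedom). The scores here are synthetic, generated only to exercise the code.

        import numpy as np
        from scipy import stats

        def bradley_blackwood(x, y):
            # Joint test of equal means and variances for paired forms.
            x, y = np.asarray(x, float), np.asarray(y, float)
            d, s = x - y, x + y
            n = len(d)
            X = np.column_stack([np.ones(n), s])
            coef, res, *_ = np.linalg.lstsq(X, d, rcond=None)
            sse = float(res[0]) if res.size else float(((d - X @ coef) ** 2).sum())
            f = ((d @ d - sse) / 2.0) / (sse / (n - 2))
            p = stats.f.sf(f, 2, n - 2)
            return f, p

        rng = np.random.default_rng(0)
        x = rng.normal(50, 10, 200).round()    # integer-valued scores
        y = x + rng.normal(0, 4, 200).round()  # a (nearly) parallel form
        print(bradley_blackwood(x, y))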

  10. Reliability of headache diagnosis

    Microsoft Academic Search

    Ira Daniel Turkat; Phillip J. Brantley; Keith Orton; Henry E. Adams

    1981-01-01

    The literature on diagnosis of head pain associated with psychological factors indicates that these diagnoses rely almost exclusively on self-report criteria. The reliability of self-report criteria for diagnosis of headache has not been previously reported. The present study investigated the reliability of headache diagnosis based on the criteria suggested by the Ad Hoc Committee on Classification of Headache. The results…

  11. Bayesian networks in reliability

    Microsoft Academic Search

    Helge Langseth; Luigi Portinale

    2007-01-01

    Over the last decade, Bayesian networks (BNs) have become a popular tool for modelling many kinds of statistical problems. We have also seen a growing interest in using BNs in the reliability analysis community. In this paper we will discuss the properties of the modelling framework that make BNs particularly well suited for reliability applications, and point to ongoing research…

  12. Electronic circuit reliability modeling

    Microsoft Academic Search

    Joseph B. Bernstein; Moshe Gurfinkel; Xiaojun Li; Jörg Walters; Yoram Shapira; Michael Talmor

    2006-01-01

    The intrinsic failure mechanisms and reliability models of state-of-the-art MOSFETs are reviewed. The simulation tools and failure equivalent circuits are described. The review includes historical background, as well as a new approach for accurately predicting circuit reliability and failure rate from the system point of view. © 2006 Elsevier Ltd. All rights reserved.

  13. Quantifying Transmission Reliability Margin

    Microsoft Academic Search

    Jianfeng Zhang; Ian Dobson; Fernando L. Alvarado

    2002-01-01

    In bulk electric power transfer capability computations, the transmission reliability margin accounts for uncertainties related to the transmission system conditions, contingencies, and parameter values. We propose a formula which quantifies transmission reliability margin based on transfer capability sensitivities and a probabilistic characterization of the various uncertainties. The formula is verified by comparison with results from two systems small…

  14. Reliability of imaging CCD's

    NASA Technical Reports Server (NTRS)

    Beal, J. R.; Borenstein, M. D.; Homan, R. A.; Johnson, D. L.; Wilson, D. D.; Young, V. F.

    1979-01-01

    This report on the reliability of imaging charge-coupled devices (CCDs) is intended to augment the rather meager existing information on CCD reliability. The study focuses on electrical and optical performance tests, packaging constraints, and failure modes of one commercially available device (Fairchild CCD121H).

  15. Parallel MR Imaging

    PubMed Central

    Deshmane, Anagha; Gulani, Vikas; Griswold, Mark A.; Seiberlich, Nicole

    2015-01-01

    Parallel imaging is a robust method for accelerating the acquisition of magnetic resonance imaging (MRI) data, and has made possible many new applications of MR imaging. Parallel imaging works by acquiring a reduced amount of k-space data with an array of receiver coils. These undersampled data can be acquired more quickly, but the undersampling leads to aliased images. One of several parallel imaging algorithms can then be used to reconstruct artifact-free images from either the aliased images (SENSE-type reconstruction) or from the under-sampled data (GRAPPA-type reconstruction). The advantages of parallel imaging in a clinical setting include faster image acquisition, which can be used, for instance, to shorten breath-hold times resulting in fewer motion-corrupted examinations. In this article the basic concepts behind parallel imaging are introduced. The relationship between undersampling and aliasing is discussed and two commonly used parallel imaging methods, SENSE and GRAPPA, are explained in detail. Examples of artifacts arising from parallel imaging are shown and ways to detect and mitigate these artifacts are described. Finally, several current applications of parallel imaging are presented and recent advancements and promising research in parallel imaging are briefly reviewed. PMID:22696125
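
    A toy Cartesian SENSE unfolding, shown because it is the simplest case of the sensitivity-based reconstruction described above: with twofold undersampling, each aliased pixel is a coil-weighted sum of two true pixels half a field of view apart, recovered by per-pixel least squares. The coil maps and object below are synthetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_coils, fov = 4, 128
        C = rng.random((n_coils, fov)) + 1j * rng.random((n_coils, fov))  # coil maps
        rho = np.zeros(fov, complex)
        rho[30:90] = 1.0                                                  # object

        half = fov // 2
        # R = 2 undersampling folds pixel y onto pixel y + fov/2 in every coil
        aliased = np.stack([C[c, :half] * rho[:half] + C[c, half:] * rho[half:]
                            for c in range(n_coils)])

        recon = np.zeros(fov, complex)
        for y in range(half):
            A = np.stack([C[:, y], C[:, y + half]], axis=1)  # n_coils x 2 system
            sol, *_ = np.linalg.lstsq(A, aliased[:, y], rcond=None)
            recon[y], recon[y + half] = sol

        print(np.allclose(recon, rho, atol=1e-8))  # True: unfolding succeeded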

  16. Merlin - Massively parallel heterogeneous computing

    NASA Technical Reports Server (NTRS)

    Wittie, Larry; Maples, Creve

    1989-01-01

    Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

  17. Highly scalable parallel sorting

    Microsoft Academic Search

    Edgar Solomonik; Laxmikant V. Kalé

    2010-01-01

    Sorting is a commonly used process with a wide breadth of applications in the high performance computing field. Early research in parallel processing has provided us with comprehensive analysis and theory for parallel sorting algorithms. However, modern supercomputers have advanced rapidly in size and changed significantly in architecture, forcing new adaptations to these algorithms. To fully utilize the potential…

  18. Parallel Lisp simulator

    SciTech Connect

    Weening, J.S.

    1988-05-01

    CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

  19. The Nas Parallel Benchmarks

    Microsoft Academic Search

    D. Bailey; E. Barszcz; J. Barton; D. Browning; R. Carter; L. Dagum

    1994-01-01

    A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of five parallel kernels and three simulated application benchmarks. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their pencil-and-paper specification: all details of these benchmarks are…

  20. Parallelizing quantum circuits

    Microsoft Academic Search

    Anne Broadbent; Elham Kashefi

    2009-01-01

    We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns, and analyze the trade-off in terms of depth and space complexity. As a result we distinguish a class of polynomial-depth circuits that can be parallelized to logarithmic depth while adding only polynomially many auxiliary qubits. In particular, we…

  1. Parallelization of thermochemical nanolithography.

    PubMed

    Carroll, Keith M; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P; Curtis, Jennifer E; Riedo, Elisa

    2014-01-01

    One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise, and robust fabrication of nanostructures. Here, we demonstrate the possibility of parallelizing thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons. PMID:24337109

  2. Component specification for parallel coupling infrastructure.

    SciTech Connect

    Larson, J. W.; Norris, B.; Mathematics and Computer Science; Australian National Univ.

    2007-01-01

    Coupled systems comprise multiple mutually interacting subsystems, and are an increasingly common computational science application, most notably as multiscale and multiphysics models. Parallel computing, and in particular message-passing programming, have spurred the development of these models, but also present a parallel coupling problem (PCP) in the form of intermodel data dependencies. The PCP complicates model coupling through requirements for the description, transfer, and transformation of the distributed data that models in a parallel coupled system exchange. Component-based software engineering has been proposed as one means of conquering software complexity in scientific applications, and given the compound nature of coupled models, it is a natural approach to addressing the parallel coupling problem. We define a software component specification for solving the parallel coupling problem. This design draws from the already successful Common Component Architecture (CCA). We abstract the parallel coupling problem's elements and map them onto a set of CCA components, defining a parallel coupling infrastructure toolkit. We discuss a reference implementation based on the Model Coupling Toolkit. We demonstrate how these components might be deployed to solve relevant coupling problems in climate modeling.

  3. Massively parallel mathematical sieves

    SciTech Connect

    Montry, G.R.

    1989-01-01

    The Sieve of Eratosthenes is a well-known algorithm for finding all prime numbers in a given subset of integers. A parallel version of the Sieve is described that produces computational speedups over 800 on a hypercube with 1,024 processing elements for problems of fixed size. Computational speedups as high as 980 are achieved when the problem size per processor is fixed. The method of parallelization generalizes to other sieves and will be efficient on any ensemble architecture. We investigate two highly parallel sieves using scattered decomposition and compare their performance on a hypercube multiprocessor. A comparison of different parallelization techniques for the sieve illustrates the trade-offs necessary in the design and implementation of massively parallel algorithms for large ensemble computers.
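
    A block-decomposed parallel sieve in the same spirit, sketched with Python multiprocessing rather than the hypercube ensemble described above: each worker sieves its own subrange using the base primes up to sqrt(n).

        from math import isqrt
        from multiprocessing import Pool

        def sieve_small(limit):
            # ordinary serial sieve for the base primes up to sqrt(n)
            flags = bytearray([1]) * (limit + 1)
            flags[0:2] = b"\x00\x00"
            for p in range(2, isqrt(limit) + 1):
                if flags[p]:
                    flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
            return [i for i, f in enumerate(flags) if f]

        def sieve_block(args):
            # mark multiples of each base prime within [lo, hi)
            lo, hi, base_primes = args
            flags = bytearray([1]) * (hi - lo)
            for p in base_primes:
                start = max(p * p, ((lo + p - 1) // p) * p)
                flags[start - lo :: p] = bytearray(len(range(start, hi, p)))
            return [lo + i for i, f in enumerate(flags) if f]

        if __name__ == "__main__":
            n, workers = 1_000_000, 4
            base = sieve_small(isqrt(n))
            step = (n - 1) // workers + 1
            blocks = [(lo, min(lo + step, n + 1), base)
                      for lo in range(2, n + 1, step)]
            with Pool(workers) as pool:
                counts = [len(b) for b in pool.map(sieve_block, blocks)]
            print(sum(counts), "primes up to", n)   # expect 78498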

  4. Reliability (and Fault Tree) Analysis Using Expert Opinions

    Microsoft Academic Search

    Dennis V. Lindley; Nozer D. Singpurwalla

    1986-01-01

    In this article we introduce a formal procedure for the use of expert opinions in reliability (and fault tree) analysis. We consider the case of multicomponent parallel redundant systems for which there could be a single expert or a group of experts giving us opinions about each component. Inherent in our approach are a procedure for reflecting our judgment of…

  5. Sequential cumulative fatigue reliability

    NASA Technical Reports Server (NTRS)

    Kececioglu, D.; Chester, L. B.; Gardner, E. O.

    1974-01-01

    A component is assumed to be subjected to a sequence of several groups of sinusoidal stresses. Each group consists of a specific number of cycles having the same maximum alternating stress level and the same mean stress level, the maximum alternating stress level being different from group to group. A method for predicting the reliability of components subjected to such loads is proposed, given their distributional alternating stress versus cycles-to-failure (S-N) diagram. It is called the 'conditional reliability-equivalent life' method. It is applied to four cases using distributional fatigue data generated in the Reliability Research Laboratory of The University of Arizona, and the predicted reliabilities are compared and discussed.

  6. Reliability Analysis Model

    NASA Technical Reports Server (NTRS)

    1970-01-01

    RAM program determines probability of success for one or more given objectives in any complex system. Program includes failure mode and effects, criticality and reliability analyses, and some aspects of operations, safety, flight technology, systems design engineering, and configuration analyses.

  7. JSD: Parallel Job Accounting on the IBM SP2

    NASA Technical Reports Server (NTRS)

    Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

    1995-01-01

    The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

  8. The Journey Toward Reliability

    NSDL National Science Digital Library

    Brockway, Kathy Vratil

    Kansas State University faculty members have partnered with industry to assist in the implementation of a reliability centered manufacturing (RCM) program. This paper highlights faculty members' experiences, benefits to industry of implementing a reliability centered manufacturing program, and faculty members' roles in the RCM program implementation. The paper includes lessons learned by faculty members, short-term extensions of the faculty-industry partnership, and a long-term vision for an RCM institute at the university level.

  9. Reliability and Regression Analysis

    NSDL National Science Digital Library

    Lane, David M.

    This applet, by David M. Lane of Rice University, demonstrates how the reliability of X and Y affects various aspects of the regression of Y on X. Java 1.1 is required and a full set of instructions is given in order to get the full value from the applet. Exercises and definitions of key terms are also given to help students understand reliability and regression analysis.
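
    The effect the applet demonstrates can also be checked numerically. The sketch below (illustrative values, not the applet's code) simulates scores whose reliabilities are rxx and ryy and compares the observed correlation with Spearman's attenuation formula, r_observed = rho * sqrt(rxx * ryy).

    ```python
    # Attenuation of correlation by unreliable measurement (illustration).
    import numpy as np

    rng = np.random.default_rng(0)
    n, rho = 100_000, 0.8                  # true correlation of latent scores
    t = rng.normal(size=n)                 # latent trait
    x_true = t
    y_true = rho * t + np.sqrt(1 - rho**2) * rng.normal(size=n)

    rxx, ryy = 0.7, 0.9                    # score reliabilities of X and Y
    x = np.sqrt(rxx) * x_true + np.sqrt(1 - rxx) * rng.normal(size=n)
    y = np.sqrt(ryy) * y_true + np.sqrt(1 - ryy) * rng.normal(size=n)

    observed = np.corrcoef(x, y)[0, 1]
    predicted = rho * np.sqrt(rxx * ryy)   # Spearman's attenuation formula
    print(f"observed r = {observed:.3f}, predicted r = {predicted:.3f}")
    ```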

  10. Reliability of photovoltaic modules

    Microsoft Academic Search

    R. G. Ross Jr.

    1986-01-01

    In order to assess the reliability of photovoltaic modules, four categories of known array failure and degradation mechanisms are discussed, and target reliability allocations have been developed within each category based on the available technology and the life-cycle-cost requirements of future large-scale terrestrial applications. Cell-level failure mechanisms associated with open-circuiting or short-circuiting of individual solar cells generally arise from cell

  11. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Wilson, Larry W.

    1989-01-01

    The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they have correctly simulated and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
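
    The experiment described, giving a model data from a process it correctly simulates, can be reproduced in miniature. The sketch below is an illustration, not the report's code: it draws interfailure times from the Jelinski-Moranda model, in which the i-th interfailure time is exponential with rate phi*(N - i), and then recovers N by maximizing the profile likelihood; N0, phi, and the search grid are assumed values.

    ```python
    # Simulate Jelinski-Moranda data and refit the model (illustration).
    import numpy as np

    rng = np.random.default_rng(1)
    N0, phi = 30, 0.05                     # true fault count, per-fault rate

    # JM: the i-th interfailure time (0-based) has rate phi * (N0 - i).
    times = np.array([rng.exponential(1.0 / (phi * (N0 - i)))
                      for i in range(20)])  # observe the first 20 failures

    def jm_profile_loglik(N, t):
        """JM log-likelihood with fault count N and phi profiled out."""
        n, T = len(t), t.sum()
        w = (np.arange(n) * t).sum()
        phi_hat = n / (N * T - w)          # MLE of phi for this N
        return n * np.log(phi_hat) + sum(np.log(N - i) for i in range(n)) - n

    candidates = range(len(times), 400)    # N can be no smaller than n
    N_hat = max(candidates, key=lambda N: jm_profile_loglik(N, times))
    print(f"true N = {N0}, JM estimate from 20 failures = {N_hat}")
    ```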

  12. Multidisciplinary System Reliability Analysis

    NASA Technical Reports Server (NTRS)

    Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

    2001-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines, such as heat transfer, fluid mechanics, and electrical circuits, without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines is investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

  13. Bilingual parallel programming

    SciTech Connect

    Foster, I.; Overbeek, R.

    1990-01-01

    Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.

  14. Parallelization for geophysical waveform analysis 

    E-print Network

    Kurth, Derek Edward

    2013-02-22

    STAPL is a library developed at Texas A&M University to aid the parallel programmer by providing standard implementations of common parallel programming tasks. Our research involves using STAPL to apply parallel methods to a problem that has already been solved sequentially: seismic ray tracing...

  15. Electronic logic for enhanced switch reliability

    DOEpatents

    Cooper, J.A.

    1984-01-20

    A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output of the logic circuit is a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A fail-safe system having maximum reliability is thereby produced.
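
    The described behavior is a small state machine; modeled in software it looks like the sketch below (an illustration of the logic, not the patented circuit).

    ```python
    # Fail-safe arbitration of two redundant switches (illustration).
    class RedundantSwitchLogic:
        def __init__(self, initial_state=False):
            self.last_agreed = initial_state   # memorized common state

        def output(self, a: bool, b: bool) -> bool:
            if a == b:                         # agreement: pass through
                self.last_agreed = a
                return a
            return not self.last_agreed        # conflict: complement memory

    logic = RedundantSwitchLogic()
    print(logic.output(True, True))    # True:  both closed
    print(logic.output(True, False))   # False: conflict after agreeing on True
    print(logic.output(False, False))  # False: both open
    print(logic.output(False, True))   # True:  conflict after agreeing on False
    ```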

  16. Reliable aluminum contact formation by electrostatic bonding

    NASA Astrophysics Data System (ADS)

    Kárpáti, T.; Pap, A. E.; Radnóczi, Gy; Beke, B.; Bársony, I.; Fürjes, P.

    2015-07-01

    The paper presents a detailed study of a reliable method developed for aluminum fusion wafer bonding assisted by the electrostatic force evolving during the anodic bonding process. The IC-compatible procedure described allows the parallel formation of electrical and mechanical contacts, facilitating a reliable packaging of electromechanical systems with backside electrical contacts. This fusion bonding method supports the fabrication of complex microelectromechanical systems (MEMS) and micro-opto-electromechanical systems (MOEMS) structures with enhanced temperature stability, which is crucial in mechanical sensor applications such as pressure or force sensors. Due to the applied electrical potential of ~1000 V the Al metal layers are compressed by electrostatic force, and at the bonding temperature of 450 °C intermetallic diffusion causes aluminum ions to migrate between metal layers.

  17. Parametric probability distributions in reliability

    E-print Network

    Coolen, Frank

    This paper gives an overview of parametric probability distributions which are frequently used in reliability. We present some of their main properties and their use as models for specific reliability scenarios. Keywords: Binomial distribution, Exponential distribution

  18. Gearbox Reliability Collaborative Update (Presentation)

    SciTech Connect

    Sheng, S.

    2013-10-01

    This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

  19. Parallel Programming and Parallel Abstractions in Fortress

    Microsoft Academic Search

    Guy L. Steele Jr.

    2006-01-01

    The Programming Language Research Group at Sun Microsystems Laboratories seeks to apply lessons learned from the Java (TM) Programming Language to the next generation of programming languages. The Java language supports platform-independent parallel programming with explicit multithreading and explicit locks. As part of the DARPA program for High Productivity Computing Systems, we are developing Fortress, a language intended to support

  20. Reliability simulator for improving IC manufacturability

    NASA Astrophysics Data System (ADS)

    Moosa, Mohamod S.; Poole, Kelvin F.

    1994-09-01

    A Monte-Carlo reliability simulator for integrated circuits that incorporates the effects of process-flaws, material properties, the mask layout and use-conditions for interconnects is presented. The mask layout is decomposed into distinct objects, such as contiguous metal runs, vias and contacts, for which user-defined cumulative distribution functions (cdfs) are used for determining the probability of failure. These cdfs are represented using a mixture of defect-related and wearout-related distributions. The failure distributions for nets, which are sets of interconnected layout objects, are obtained by combining the distributions of their component objects. System reliability is obtained by applying control variate sampling to the reliability network which is comprised of all nets. The effects of series, parallel and k-out-of-n substructures within the reliability network are accounted for. A Bayesian approach to incorporating burn-in data with simulated estimates is also presented. A program that interfaces directly with commercially used CAD software has been implemented. Results provide a qualitative verification of the methodology and show that predictions which incorporate failures due to process flaws are significantly more pessimistic than those obtained by following current practice.
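
    For independent components, the series, parallel, and k-out-of-n substructures mentioned combine by the textbook rules sketched below (an illustration of the rules, not the simulator's implementation).

    ```python
    # Classic reliability combination rules for independent components.
    from math import comb

    def series(rs):
        """All components must survive."""
        out = 1.0
        for r in rs:
            out *= r
        return out

    def parallel(rs):
        """At least one component must survive."""
        fail = 1.0
        for r in rs:
            fail *= (1.0 - r)
        return 1.0 - fail

    def k_out_of_n(k, n, r):
        """At least k of n identical components must survive."""
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    print(series([0.99, 0.98, 0.97]))   # ~0.941
    print(parallel([0.9, 0.9]))         # 0.99
    print(k_out_of_n(2, 3, 0.9))        # 0.972
    ```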

  1. Series and Parallel Circuits

    NSDL National Science Digital Library

    2013-08-30

    In this activity, learners demonstrate and discuss simple circuits as well as the differences between parallel and serial circuit design and functions. Learners test two different circuit designs through the use of low voltage light bulbs.

  2. Parallelization of thermochemical nanolithography

    NASA Astrophysics Data System (ADS)

    Carroll, Keith M.; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William P.; Curtis, Jennifer E.; Riedo, Elisa

    2014-01-01

    One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. Here, we demonstrate the possibility to parallelize thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of conjugated polymer nanostructures and graphene-based nanoribbons. Electronic supplementary information (ESI) available: Details on the cantilevers array, on the sample preparation, and on the GO AFM experiments. See DOI: 10.1039/c3nr05696a

  3. The Parallel Axiom

    ERIC Educational Resources Information Center

    Rogers, Pat

    1972-01-01

    Criteria for a reasonable axiomatic system are discussed. A discussion of the historical attempts to prove the independence of Euclid's parallel postulate introduces non-Euclidean geometries. Poincaré's model for a non-Euclidean geometry is defined and analyzed. (LS)

  4. Reliability Degradation Due to Stockpile Aging

    SciTech Connect

    Robinson, David G.

    1999-04-01

    The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data has been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism since the relationship between the life predicted and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.
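
    The report's central criticism, that a constant failure rate cannot represent aging, can be made concrete with a toy comparison: under the exponential model the conditional reliability of surviving another decade is the same at every age, while a Weibull model with shape beta > 1 degrades with age. All parameter values below are illustrative.

    ```python
    # Constant failure rate (exponential) versus wear-out (Weibull).
    import math

    def r_exponential(t, lam):
        return math.exp(-lam * t)                # memoryless: no aging

    def r_weibull(t, beta, eta):
        return math.exp(-((t / eta) ** beta))    # beta > 1 models wear-out

    lam, beta, eta = 0.01, 2.5, 100.0
    for age in (10, 50, 100):
        # Conditional reliability of surviving 10 more units, given the age:
        cond_exp = r_exponential(age + 10, lam) / r_exponential(age, lam)
        cond_wbl = r_weibull(age + 10, beta, eta) / r_weibull(age, beta, eta)
        print(f"age {age:3d}: exponential {cond_exp:.3f}, Weibull {cond_wbl:.3f}")
    ```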

  5. Two applications of parallel processing in power system computation

    Microsoft Academic Search

    C. Lemaitre; B. Thomas

    1995-01-01

    This paper discusses performance improvements achieved in two power system software modules through the use of parallel processing techniques. The first software module, EVARISTE, outputs a voltage stability indicator for various power system situations. This module was designed for extended real-time use and is therefore required to give guaranteed response times. The second module, MEXICO, assesses power system reliability and

  6. Inspection criteria ensure quality control of parallel gap soldering

    NASA Technical Reports Server (NTRS)

    Burka, J. A.

    1968-01-01

    Investigation of parallel gap soldering of electrical leads resulted in recommendations on material preparation, equipment, process control, and visual inspection criteria to ensure reliable solder joints. The recommendations will minimize problems in heat-dwell time, amount of solder, bridging conductors, and damage of circuitry.

  7. Reconfigurable Planar Three-Legged Parallel Manipulators

    E-print Network

    Hayes, John

    A class of reconfigurable planar three-legged platforms is introduced. Kinematic mapping techniques are applied to solve the associated kinematics problems. There has been great interest in reconfigurable mechanisms in recent years (see Yim et al.).

  8. High Level Synthesis of Synchronous Parallel Controllers

    Microsoft Academic Search

    Krzysztof Biliński; Erik L. Dagless

    1996-01-01

    In this paper the application of Petri nets to high level synthesis of synchronous parallel controllers is presented. A formal specification of a design is given in a form of an interpreted synchronous Petri net. Behavioral properties of the controller are verified using symbolic traversal of its Petri net model. The net state-space explosion problem is managed using binary decision

  9. Genetic Algorithms for Reliability-Based Optimization of Water Distribution Systems

    Microsoft Academic Search

    Bryan A. Tolson; Holger R. Maier; Angus R. Simpson; Barbara J. Lence

    2004-01-01

    A new approach for reliability-based optimization of water distribution networks is presented. The approach links a genetic algorithm (GA) as the optimization tool with the first-order reliability method (FORM) for estimating network capacity reliability. Network capacity reliability in this case study refers to the probability of meeting minimum allowable pressure constraints across the network under uncertain nodal demands and uncertain

  10. Automated Verification of Dynamic Reliability Block Diagrams Using Colored Petri Nets1

    E-print Network

    Xu, Haiping

    Abbreviations used: BNF, Backus-Naur form; CPN, colored Petri nets; DRBD, dynamic reliability block diagram; DFTA, dynamic fault tree analysis; FTA, fault tree analysis; RBD, reliability block diagram; RML, reliability markup language; SDEP. The work builds on techniques such as fault tree analysis (FTA) and reliability block diagrams (RBD), which provide static analysis.

  11. Reliability Generalization (RG) Analysis: The Test Is Not Reliable

    ERIC Educational Resources Information Center

    Warne, Russell

    2008-01-01

    Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

  12. Increasing the Reliability of Reliability Diagrams

    E-print Network

    Stevenson, Paul

    Jochen Bröcker and Leonard A. Smith, University of Oxford, UK. Corresponding author: cats@lse.ac.uk. November 9, 2006. From the abstract: ... whether probability forecasts are reliable. Further, an alternative presentation of the same information on probability paper eases ...

  13. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  14. Software reliability perspectives

    NASA Technical Reports Server (NTRS)

    Wilson, Larry; Shen, Wenhui

    1987-01-01

    Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault tolerant methods can guarantee perfection. Prior to the final testing, software goes through a debugging period and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

  15. Parallel Composition Communication and Allow Hiding Parallel Processes

    E-print Network

    Mousavi, Mohammad

    Lecture slides on parallel composition. The motivating challenge: does (Dish1 + Dish2) || Coke equal (Dish1 || Coke) + (Dish2 || Coke)? The slides cover Faron Moller's result and the raisons d'être of parallel composition and the communication merge operator |.

  16. Weakly parallel tests in latent trait theory with some criticisms of classical test theory

    Microsoft Academic Search

    Fumiko Samejima

    1977-01-01

    A new concept of weakly parallel tests, in contrast to strongly parallel tests in latent trait theory, is proposed. Some criticisms of the fundamental concepts in classical test theory, such as the reliability of a test and the standard error of estimation, are given.

  17. Weakly Parallel Tests In Latent Trait Theory With Some Criticisms of Classical Test Theory

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    1977-01-01

    A new concept of weakly parallel tests, in contrast to strongly parallel tests in latent trait theory, is proposed. Some criticisms of the fundamental concepts in classical test theory, such as the reliability of a test and the standard error of estimation, are given. (Author)

  18. Square form factorization

    NASA Astrophysics Data System (ADS)

    Gower, Jason E.; Wagstaff, Samuel S., Jr.

    2008-03-01

    We present a detailed analysis of SQUFOF, Daniel Shanks' Square Form Factorization algorithm. We give the average time and space requirements for SQUFOF. We analyze the effect of multipliers, either used for a single factorization or when racing the algorithm in parallel.

  19. Measuring agreement in medical informatics reliability studies

    Microsoft Academic Search

    George Hripcsak; Daniel F. Heitjan

    2002-01-01

    Agreement measures are used frequently in reliability studies that involve categorical data. Simple measures like observed agreement and specific agreement can reveal a good deal about the sample. Chance-corrected agreement in the form of the kappa statistic is used frequently based on its correspondence to an intraclass correlation coefficient and the ease of calculating it, but its magnitude depends on
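
    Both measures named in the abstract are short computations over paired categorical ratings: observed agreement p_o, chance agreement p_e from the raters' marginals, and kappa = (p_o - p_e) / (1 - p_e). The sketch below uses made-up ratings for illustration.

    ```python
    # Observed agreement and Cohen's kappa for two raters (illustration).
    from collections import Counter

    def observed_and_kappa(pairs):
        n = len(pairs)
        po = sum(a == b for a, b in pairs) / n        # observed agreement
        ca = Counter(a for a, _ in pairs)             # rater A marginals
        cb = Counter(b for _, b in pairs)             # rater B marginals
        pe = sum(ca[c] * cb[c] for c in ca) / n**2    # chance agreement
        return po, (po - pe) / (1 - pe)

    ratings = [("yes", "yes"), ("yes", "no"), ("no", "no"),
               ("no", "no"), ("yes", "yes"), ("no", "yes")]
    po, kappa = observed_and_kappa(ratings)
    print(f"observed agreement = {po:.2f}, kappa = {kappa:.2f}")
    ```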

  20. Thermally optimum spacing of vertical, natural convection cooled, parallel plates

    Microsoft Academic Search

    A. Bar-Cohen; W. M. Rohsenow

    1981-01-01

    Vertical two-dimensional channels formed by parallel plates or fins are a frequently encountered configuration in natural convection cooling in air of electronic equipment. Despite the complexity of heat dissipation in vertical parallel plate arrays, little theoretical effort has been devoted to thermal optimization of the relevant packaging configurations. The present investigation is concerned with the establishment of an analytical

  1. Parallel scripting for applications at the petascale and beyond.

    SciTech Connect

    Wilde, M.; Zhang, Z.; Clifford, B.; Hategan, M.; Iskra, K.; Beckman, P.; Foster, I.; Raicu, I.; Espinosa, A.; Univ. of Chicago

    2009-11-01

    Scripting accelerates and simplifies the composition of existing codes to form more powerful applications. Parallel scripting extends this technique to allow for the rapid development of highly parallel applications that can run efficiently on platforms ranging from multicore workstations to petascale supercomputers.

  2. Reliable Shapelet Image Analysis

    E-print Network

    P. Melchior; M. Meneghetti; M. Bartelmann

    2006-12-13

    Aims: We discuss the applicability and reliability of the shapelet technique for scientific image analysis. Methods: We quantify the effects of non-orthogonality of sampled shapelet basis functions and misestimation of shapelet parameters. We perform the shapelet decomposition on artificial galaxy images with underlying shapelet models and galaxy images from the GOODS survey, comparing the publicly available IDL implementation with our new C++ implementation. Results: Non-orthogonality of the sampled basis functions and misestimation of the shapelet parameters can cause substantial misinterpretation of the physical properties of the decomposed objects. Additional constraints, image preprocessing and enhanced precision have to be incorporated in order to achieve reliable decomposition results.

  3. Parallel initial value algorithms for singularly perturbed boundary-value problems

    SciTech Connect

    Gasparo, M.G.; Macconi, M. (Firenze, Universita, Florence (Italy))

    1992-06-01

    Parallel algorithms for computing asymptotic approximations to the solutions of many singularly perturbed boundary-value problems are considered. Numerical techniques, called parallel-initial-value methods, are proposed which permit several natural computation decompositions into independent tasks. The number of disposable independent processors is taken into account to design suitable parallel schemes. The reliability and performance of the proposed schemes are demonstrated using a CRAY Y-MP 8/432 multiprocessor. 6 refs.

  4. Software Reliability Engineering: A Roadmap

    Microsoft Academic Search

    Michael R. Lyu

    2007-01-01

    Software reliability engineering is focused on engineering techniques for developing and maintaining software systems whose reliability can be quantitatively evaluated. In order to estimate as well as to predict the reliability of software systems, failure data need to be properly measured by various means during software development and operational phases. Moreover, credible software reliability models are required to track underlying

  5. Software Reliability Modeling James LEDOUX

    E-print Network

    Paris-Sud XI, Université de

    James Ledoux, Centre de Mathématiques, INSA & IRMAR. This chapter gives an overview of some aspects of Software Reliability (SR) engineering. Most systems are now driven by software ... in reliability engineering, particularly in terms of cost. But predicting software reliability is not easy

  6. Reliable framework for RFID devices

    Microsoft Academic Search

    Nova Ahmed; Umakishore Ramachandran

    2008-01-01

    RFID technology requires deployment that ensures reliability. We have developed the middleware RF2ID, a reliable framework for RFID, to improve reliability at the software level. Two key abstractions, (1) the Virtual Reader, a distributed computational element, and (2) the Virtual Path, the communication channel among virtual readers, are used to improve the system reliability. The key idea is to use the notion of path

  7. The Parallel and Distributed Algorithms

    E-print Network

    Goddard III, William A.

    This chapter contains a brief review of both parallel and distributed computing, emphasizing the important aspects of each form of concurrent computing. Based on the resonance method and program design described in Chapter V, an algorithm for efficiently computing resonance matrix integrals is presented for both parallel computers and distributed

  8. Dynamic graphics using quasi parallelism

    Microsoft Academic Search

    Kenneth M. Kahn; Carl Hewitt

    1978-01-01

    Dynamic computer graphics is best represented as several processes operating in parallel. Full parallel processing, however, entails much complex mechanism making it difficult to write simple, intuitive programs for generating computer animation. What is presented in this paper is a simple means of attaining the appearance of parallelism and the ability to program the graphics in a conceptually parallel fashion

  9. Parallel Processing in Amplitude Analysis

    E-print Network

    Evans, Hal

    Lecture 2 of 2 on parallel processing, Physics 411/610, March 31, 2011, by Matt Shepherd. Outline: theoretical background; experimental technique; application of parallel computing; method of maximum ...

  10. Reliable VLSI sequential controllers

    Microsoft Academic Search

    Sterling R. Whitaker; Gary K. Maki; Manjunath Shamanna

    1991-01-01

    A VLSI architecture for synchronous sequential controllers is presented that has attractive qualities for producing reliable circuits. In these circuits, one hardware implementation can realize any flow table with a maximum of 2n internal states and m inputs. A real time fault detection means is presented along with a strategy for verifying the correctness of the checking hardware. This self-check

  11. Distribution feeder reliability assessment

    Microsoft Academic Search

    A. A. Chowdhury

    2005-01-01

    Historical distribution feeder reliability assessment generally summarizes discrete interruption events occurring at specific locations over specific time periods; whereas, predictive assessment estimates the long-run behavior of systems by combining component failure rates and repair (restoration) times that describe the central tendency of an entire distribution of possible values with feeder configurations. The outage time due to component failures can substantially

  12. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.

  13. Parametric Mass Reliability Study

    NASA Technical Reports Server (NTRS)

    Holt, James P.

    2014-01-01

    The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass-such as computer housings, pump casings, and the silicon board of PCBs-typically are the most reliable. Meanwhile components that tend to fail the earliest-such as seals or gaskets-typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs to reliability, as well as the mass of ORU subcomponents to reliability.

  14. Reliable broadcast protocols

    Microsoft Academic Search

    Jo-Mei Chang; Nicholas F. Maxemchuk

    1984-01-01

    A reliable broadcast protocol for an unreliable broadcast network is described. The protocol operates between the application programs and the broadcast network. It isolates the application programs from the unreliable characteristics of the communication network. The protocol guarantees that all of the broadcast messages are received at all of the operational receivers in a broadcast group. In addition, the sequence

  15. Grid reliability management tools

    SciTech Connect

    Eto, J.; Martinez, C.; Dyer, J.; Budhraja, V.

    2000-10-01

    To summarize, the Consortium for Electric Reliability Technology Solutions (CERTS) is engaged in a multi-year program of public interest R&D to develop and prototype software tools that will enhance system reliability during the transition to competitive markets. The core philosophy embedded in the design of these tools is the recognition that in the future reliability will be provided through market operations, not the decisions of central planners. Embracing this philosophy calls for tools that: (1) Recognize that the game has moved from modeling machine and engineering analysis to simulating markets to understand the impacts on reliability (and vice versa); (2) Provide real-time data and support information transparency toward enhancing the ability of operators and market participants to quickly grasp, analyze, and act effectively on information; (3) Allow operators, in particular, to measure, monitor, assess, and predict both system performance as well as the performance of market participants; and (4) Allow rapid incorporation of the latest sensing, data communication, computing, visualization, and algorithmic techniques and technologies.

  16. MMIC capacitor dielectric reliability

    Microsoft Academic Search

    H. Cramer; J. Oliver; G. Dix

    1998-01-01

    Field strength and leakage are intrinsic dielectric properties which alone are not sufficient to ensure high reliability. To analyze lifetime, one also needs to consider “charge to breakdown” failure caused by defects and their effect on the dielectric thinning. The linear field model was used to explain the aging effects of electric field and temperature on capacitors. This model shows

  17. Valuing Water Supply Reliability

    Microsoft Academic Search

    Ronald C. Griffin; James W. Mjelde

    2000-01-01

    Instead of creating water supply systems that fully insulate mankind from climate-imposed water deficiencies, it is possible that for municipal water systems a nonzero probability of water supply shortfall is efficient. Perfect water supply reliability, meaning no chance of future shortfall, is not optimal when water development costs are high. Designing an efficient strategy requires an assessment of consumer preferences

  18. Quantifying Human Performance Reliability.

    ERIC Educational Resources Information Center

    Askren, William B.; Regulinski, Thaddeus L.

    Human performance reliability for tasks in the time-space continuous domain is defined and a general mathematical model presented. The human performance measurement terms time-to-error and time-to-error-correction are defined. The model and measurement terms are tested using laboratory vigilance and manual control tasks. Error and error-correction…

  19. Reliability points for discussion

    E-print Network

    Wood, Lloyd

    Reliability points prepared for discussion at the IRTF Delay-Tolerant Networking research group. A 'lite' bundle format with no header or payload checking permits application-level ECC, error-tolerant codecs, etc. Additional topics: draft-irtf-dtnrg-bundle-checksum, and the control loops involved in security and custody transfer.

  20. Parallel State Estimation Assessment with Practical Data

    SciTech Connect

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large power system state estimation problems within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This speed allows operators to learn the system status much faster, helping to improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.
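
    The linear-algebra core of such a solver is compact: each Gauss-Newton step of weighted-least-squares state estimation solves the gain system (H^T W H) dx = H^T W r. The toy below is illustrative only, not the implementation the paper describes; the random Jacobian H, the weights W, and the simple Jacobi preconditioner are all assumptions.

    ```python
    # One gain-system solve with preconditioned conjugate gradient (toy).
    import numpy as np
    from scipy.sparse import random as sprandom, diags, eye
    from scipy.sparse.linalg import cg, LinearOperator

    rng = np.random.default_rng(0)
    m, n = 500, 200                        # measurements x states
    H = sprandom(m, n, density=0.02, random_state=0) + 0.1 * eye(m, n)
    W = diags(rng.uniform(0.5, 2.0, m))    # measurement weights
    G = (H.T @ W @ H).tocsr()              # gain matrix (SPD if observable)
    rhs = H.T @ W @ rng.normal(size=m)     # H^T W r for some residual r

    # Jacobi (diagonal) preconditioner, a simple stand-in.
    inv_diag = 1.0 / G.diagonal()
    M = LinearOperator(G.shape, matvec=lambda v: inv_diag * v)

    dx, info = cg(G, rhs, M=M)
    print("converged" if info == 0 else f"cg returned {info}",
          "| residual norm:", np.linalg.norm(G @ dx - rhs))
    ```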

  1. Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability

    NASA Technical Reports Server (NTRS)

    Safie, Fayssal M.

    2010-01-01

    This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

  2. Sublattice parallel replica dynamics.

    PubMed

    Martínez, Enrique; Uberuaga, Blas P; Voter, Arthur F

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998)] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005)], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers. PMID:25019913

  3. Sublattice parallel replica dynamics

    NASA Astrophysics Data System (ADS)

    Martínez, Enrique; Uberuaga, Blas P.; Voter, Arthur F.

    2014-06-01

    Exascale computing presents a challenge for the scientific community as new algorithms must be developed to take full advantage of the new computing paradigm. Atomistic simulation methods that offer full fidelity to the underlying potential, i.e., molecular dynamics (MD) and parallel replica dynamics, fail to use the whole machine speedup, leaving a region in time and sample size space that is unattainable with current algorithms. In this paper, we present an extension of the parallel replica dynamics algorithm [A. F. Voter, Phys. Rev. B 57, R13985 (1998), 10.1103/PhysRevB.57.R13985] by combining it with the synchronous sublattice approach of Shim and Amar [Y. Shim and J. G. Amar, Phys. Rev. B 71, 125432 (2005), 10.1103/PhysRevB.71.125432], thereby exploiting event locality to improve the algorithm scalability. This algorithm is based on a domain decomposition in which events happen independently in different regions in the sample. We develop an analytical expression for the speedup given by this sublattice parallel replica dynamics algorithm and compare it with parallel MD and traditional parallel replica dynamics. We demonstrate how this algorithm, which introduces a slight additional approximation of event locality, enables the study of physical systems unreachable with traditional methodologies and promises to better utilize the resources of current high performance and future exascale computers.

  4. Modification of risk assessment value to test industry reliability

    NASA Astrophysics Data System (ADS)

    Sorooshian, Shahryar

    2015-05-01

    This paper is in the form of a technical note reviewing a modified method of industry reliability analysis. It introduces a multi-expert, group decision-making analysis for the Risk Assessment Value (RAV), which directly tests the reliability of the system (industry). A case from the construction industry has been studied for verification of the method. The reviewed modified RAV technique helps to assess the value of industry reliability and contributes to the body of risk management knowledge.

  5. Reliable inverter systems

    NASA Technical Reports Server (NTRS)

    Nagano, S.

    1979-01-01

    Base driver with common-load-current feedback protects paralleled inverter systems from open or short circuits. Circuit eliminates total system oscillation that can occur in conventional inverters because of open circuit in primary transformer winding. Common feedback signal produced by functioning modules forces operating frequency of failed module to coincide with clock drive so module resumes normal operating frequency in spite of open circuit.

  6. Releasing the Grays: In Support of Legalizing Parallel Imports

    E-print Network

    Ruff, Andrew

    1992-01-01

    ... strong incentive to take action against distributors engaged ... incentive for parallel importers to undercut domestic distributors. ... distributor the opportunity to realize monopoly profits. ... Colluding suppliers, rather than forming supplier cartels, will have an incentive ...

  7. Parallel channel flow excursions

    SciTech Connect

    Johnston, B.S.

    1990-01-01

    Among the many known types of vapor-liquid flow instability is the excursion which may occur in heated parallel channels. Under certain conditions, the pressure drop requirement in a heated channel may increase with decreases in flow rate. This leads to an excursive reduction in flow. For channels heated by electricity or nuclear fission, this can result in overheating and damage to the channel. In the design of any parallel channel device, flow excursion limits should be established. After a review of parallel channel behavior and analysis, a conservative criterion will be proposed for avoiding excursions. In support of this criterion, recent experimental work on boiling in downward flow will be described. 5 figs.

  8. Parallelizing Quantum Circuits

    E-print Network

    Anne Broadbent; Elham Kashefi

    2007-04-13

    We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade off in terms of depth and space complexity. As a result we distinguish a class of polynomial depth circuits that can be parallelized to logarithmic depth while adding only polynomial many auxiliary qubits. In particular, we provide for the first time a full characterization of patterns with flow of arbitrary depth, based on the notion of influencing paths and a simple rewriting system on the angles of the measurement. Our method leads to insightful knowledge for constructing parallel circuits and as applications, we demonstrate several constant and logarithmic depth circuits. Furthermore, we prove a logarithmic separation in terms of quantum depth between the quantum circuit model and the measurement-based model.

  9. Automatic generation of synchronization instructions for parallel processors

    SciTech Connect

    Midkiff, S.P.

    1986-05-01

    The development of high speed parallel multi-processors, capable of parallel execution of doacross and forall loops, has stimulated the development of compilers to transform serial FORTRAN programs to parallel forms. One of the duties of such a compiler must be to place synchronization instructions in the parallel version of the program to insure the legal execution order of doacross and forall loops. This thesis gives strategies usable by a compiler to generate these synchronization instructions. It presents algorithms for reducing the parallelism in FORTRAN programs to match a target architecture, recovering some of the parallelism so discarded, and reducing the number of synchronization instructions that must be added to a FORTRAN program, as well as basic strategies for placing synchronization instructions. These algorithms are developed for two synchronization instruction sets. 20 refs., 56 figs.
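
    One classic mechanism for such synchronization is a post/wait pair around each cross-iteration dependence: the compiler places a post after the write in iteration i and a wait before the read in iteration i+1. The sketch below models this with threading events; it illustrates the idea only and is not the thesis's instruction sets.

    ```python
    # Doacross loop a[i] = a[i-1] + i with post/wait synchronization.
    import threading

    N = 8
    a = [0] * (N + 1)
    posted = [threading.Event() for _ in range(N + 1)]
    posted[0].set()                     # iteration 0 has no predecessor

    def iteration(i):
        posted[i - 1].wait()            # WAIT: dependence on iteration i-1
        a[i] = a[i - 1] + i             # loop body with the dependence
        posted[i].set()                 # POST: release iteration i+1

    threads = [threading.Thread(target=iteration, args=(i,))
               for i in range(1, N + 1)]
    for t in threads:                   # start in arbitrary order; the events
        t.start()                       # impose the legal execution order
    for t in threads:
        t.join()

    print(a)                            # [0, 1, 3, 6, 10, 15, 21, 28, 36]
    ```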

  10. Improving Parallel I/O Performance with Data Layout Awareness

    SciTech Connect

    Chen, Yong [ORNL] [ORNL; Sun, Xian-He [Illinois Institute of Technology] [Illinois Institute of Technology; Thakur, Dr. Rajeev [Argonne National Laboratory (ANL)] [Argonne National Laboratory (ANL); Song, Huaiming [Illinois Institute of Technology] [Illinois Institute of Technology; Jin, Hui [Illinois Institute of Technology] [Illinois Institute of Technology

    2010-01-01

    Parallel applications can benefit greatly from massive computational capability, but their performance suffers from the large latency of I/O accesses. Poor I/O performance has been identified as a critical cause of the low sustained performance of parallel computing systems. In this study, we propose a data layout-aware optimization strategy to promote a better integration of the parallel I/O middleware and parallel file systems, two major components of current parallel I/O systems, and to improve the data access performance. We explore the layout-aware optimization in both independent I/O and collective I/O, two primary forms of I/O in parallel applications. We illustrate that the layout-aware I/O optimization could improve the performance of the current parallel I/O strategy effectively. The experimental results verify that the proposed strategy could improve parallel I/O performance by nearly 40% on average. The proposed layout-aware parallel I/O has a promising potential in improving the I/O performance of parallel systems.

  11. A Parallel Tree Code

    E-print Network

    John Dubinski

    1996-03-18

    We describe a new implementation of a parallel N-body tree code. The code is load-balanced using the method of orthogonal recursive bisection to subdivide the N-body system into independent rectangular volumes, each of which is mapped to a processor on a parallel computer. On the Cray T3D, the load balance is in the range of 70-90% depending on the problem size and number of processors. The code can handle simulations with more than 10 million particles, roughly a factor of 10 greater than allowed in vectorized tree codes.

  12. Parallel signal processing

    NASA Astrophysics Data System (ADS)

    McWhirter, John G.

    1989-12-01

    The potential application of parallel computing techniques to digital signal processing for radar is discussed, and two types of regular array processor are considered. The first type of processor is the systolic or wavefront processor. The application of this type of processor to adaptive beamforming is discussed and the joint STL-RSRE adaptive antenna processor test-bed is reviewed. The second type of regular array processor is the SIMD parallel computer. One such processor, the Mil-DAP, is described, and its application to a varied range of radar signal processing tasks is discussed.

  13. Speeding up parallel processing

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1988-01-01

    In 1967 Amdahl expressed doubts about the ultimate utility of multiprocessors. The formulation, now called Amdahl's law, became part of the computing folklore and has inspired much skepticism about the ability of the current generation of massively parallel processors to efficiently deliver all their computing power to programs. The widely publicized recent results of a group at Sandia National Laboratory, which showed speedup on a 1024 node hypercube of over 500 for three fixed size problems and over 1000 for three scalable problems, have convincingly challenged this bit of folklore and have given new impetus to parallel scientific computing.
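
    The two regimes at issue reduce to two one-line formulas: Amdahl's fixed-size law S = 1 / ((1 - p) + p/n) and Gustafson's scaled-size law S = (1 - p) + p*n, where p is the parallel fraction. The numbers below are illustrative, not the Sandia measurements; they show why a 1024-node machine can exceed a speedup of 1000 only in the scaled regime.

    ```python
    # Amdahl's fixed-size law versus Gustafson's scaled-size law.
    def amdahl(p, n):
        """Speedup on n processors when fraction p of the work is parallel."""
        return 1.0 / ((1.0 - p) + p / n)

    def gustafson(p, n):
        """Scaled speedup when the parallel part grows with machine size."""
        return (1.0 - p) + p * n

    n = 1024
    for p in (0.99, 0.999):
        print(f"p={p}: Amdahl {amdahl(p, n):7.1f}, "
              f"Gustafson {gustafson(p, n):7.1f}")
    ```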

  14. Parallelization of Thermochemical Nanolithography

    NASA Astrophysics Data System (ADS)

    Curtis, Jennifer E.; Carroll, Keith; Lu, Xi; Kim, Suenne; Gao, Yang; Kim, Hoe-Joon; Somnath, Suhas; Polloni, Laura; Sordan, Roman; King, William; Riedo, Elisa

    2014-03-01

    One of the most pressing technological challenges in the development of next generation nanoscale devices is the rapid, parallel, precise and robust fabrication of nanostructures. We demonstrate the possibility to parallelize thermochemical nanolithography (TCNL) by employing five nano-tips for the fabrication of luminescent polymer nanostructures and graphene-based nanoribbons. This work has been supported by the National Science Foundation PHYS 0848797 (J.E.C.), CMMI 1100290 (E.R., W.P.K), MRSEC program DMR 0820382 (E.R., J.E.C.), and the Office of Basic Energy Sciences DOE DE-FG02-06ER46293 (E.R.).

  15. Are Stromatolites Reliable Biosignatures?

    NASA Astrophysics Data System (ADS)

    Corsetti, F. A.; Berelson, W. M.; Spear, J. R.; Pepe-Raney, C.; Marshall, C.; Olcott-Marshall, A.

    2010-04-01

    On the one hand, there is no doubt that some (perhaps most) stromatolites on Earth were formed with biologic influence. On the other, recent work has suggested that stromatolite-like structures have formed without biologic input.

  16. PARALLEL ELECTRIC FIELD SPECTRUM OF SOLAR WIND TURBULENCE

    SciTech Connect

    Mozer, F. S.; Chen, C. H. K., E-mail: fmozer@ssl.berkeley.edu [Space Sciences Laboratory, University of California, Berkeley, CA 94720 (United States)

    2013-05-01

    By searching through more than 10 satellite years of THEMIS and Cluster data, 3 reliable examples of parallel electric field turbulence in the undisturbed solar wind have been found. The perpendicular and parallel electric field spectra in these examples have similar shapes and amplitudes, even at large scales (frequencies below the ion gyroscale), where Alfvenic turbulence with no parallel electric field component is thought to dominate. The spectra of the parallel electric field fluctuations are power laws with exponents near -5/3 below the ion scales (~0.1 Hz), and with a flattening of the spectrum in the vicinity of this frequency. At small scales (above a few Hz), the spectra are steeper than -5/3 with values in the range of -2.1 to -2.8. These steeper slopes are consistent with expectations for kinetic Alfven turbulence, although their amplitude relative to the perpendicular fluctuations is larger than expected.

  17. Reliability analysis of hybrid ceramic/steel gun barrels

    E-print Network

    Grujicic, Mica

    M. Grujicic et al. Received in final form 25 February 2002. Abstract: Failure of the ceramic gun-barrel lining ... probability for the lining is also discussed. Keywords: failure; gun-barrel lining; reliability; thermo...

  18. Unpowered to powered failure rate ratio - A key reliability parameter

    NASA Technical Reports Server (NTRS)

    Taylor, D. S.

    1974-01-01

    It is shown that the initial assumption of the ratio of unpowered to powered failure rates can have a strong influence on the design of a modular system intended for space missions. The analysis is performed for parallel systems and for triple modular redundant systems. Parallel systems are shown to be much more sensitive to the unpowered to powered failure rate ratio than the TMR/Spares systems; however, regardless of which standby redundancy technique is considered, the dependence of the system reliability on this ratio increases as the number of standby spares increases.
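
    The sensitivity being described is easy to reproduce by simulation. The sketch below is a toy warm-standby model, not the paper's analysis: one powered unit fails at rate lam, each dormant spare fails at k*lam, and mission reliability is estimated by Monte Carlo as the ratio k varies.

    ```python
    # Mission reliability of a standby system versus the dormancy ratio k.
    import random

    def standby_reliability(lam, k, spares, t_mission, trials=100_000, seed=42):
        """One powered unit (rate lam) plus warm spares (rate k*lam each)."""
        rng = random.Random(seed)
        ok = 0
        for _ in range(trials):
            t, n = 0.0, spares
            while True:
                total = lam + n * k * lam        # powered + dormant hazards
                t += rng.expovariate(total)
                if t >= t_mission:
                    ok += 1
                    break
                if rng.random() < lam / total:   # the powered unit failed
                    if n == 0:
                        break                    # nothing left to switch in
                    n -= 1                       # a spare powers up
                else:
                    n -= 1                       # a dormant spare died quietly
        return ok / trials

    for k in (0.0, 0.1, 1.0):
        r = standby_reliability(lam=0.001, k=k, spares=2, t_mission=1000)
        print(f"k = {k}: mission reliability ~ {r:.3f}")
    ```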

  19. Parallel computers and parallel algorithms for CFD: An introduction

    NASA Astrophysics Data System (ADS)

    Roose, Dirk; Vandriessche, Rafael

    1995-10-01

    This text presents a tutorial on those aspects of parallel computing that are important for the development of efficient parallel algorithms and software for computational fluid dynamics. We first review the main architectural features of parallel computers and we briefly describe some parallel systems on the market today. We introduce some important concepts concerning the development and the performance evaluation of parallel algorithms. We discuss how work load imbalance and communication costs on distributed memory parallel computers can be minimized. We present performance results for some CFD test cases. We focus on applications using structured and block structured grids, but the concepts and techniques are also valid for unstructured grids.

  20. Power electronics reliability.

    SciTech Connect

    Kaplar, Robert James; Brock, Reinhard C.; Marinella, Matthew; King, Michael Patrick; Stanley, James K.; Smith, Mark A.; Atcitty, Stanley

    2010-10-01

    The project's goals are: (1) use experiments and modeling to investigate and characterize stress-related failure modes of post-silicon power electronic (PE) devices such as silicon carbide (SiC) and gallium nitride (GaN) switches; and (2) seek opportunities for condition monitoring (CM) and prognostics and health management (PHM) to further enhance the reliability of power electronics devices and equipment. CM - detect anomalies and diagnose problems that require maintenance. PHM - track damage growth, predict time to failure, and manage subsequent maintenance and operations in such a way to optimize overall system utility against cost. The benefits of CM/PHM are: (1) operate power conversion systems in ways that will preclude predicted failures; (2) reduce unscheduled downtime and thereby reduce costs; and (3) pioneering reliability in SiC and GaN.

  1. Reliable broadcast protocols

    NASA Technical Reports Server (NTRS)

    Joseph, T. A.; Birman, Kenneth P.

    1989-01-01

    A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

  2. Spacecraft transmitter reliability

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

  3. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  4. IU parallel processing benchmark

    Microsoft Academic Search

    Charles Weems; Edward Riseman; Allen Hanson; Azriel Rosenfeld

    1988-01-01

    A benchmark is presented that was designed to evaluate the merits of various parallel architectures as applied to image understanding (IU). This benchmark exercise addresses the issue of system performance on an integrated set of tasks, where the task interactions that are typical of complex vision applications are present. The goal of this exercise is to gain a better understanding

  5. Parallel Traveling Salesman Problem

    NSDL National Science Digital Library

    David Joiner

    The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.

  6. Parallel Circuits Lab

    NSDL National Science Digital Library

    This in-class lab exercise will give students a familiarity with basic series and parallel circuits as well as measuring voltage, current and resistance. The worksheet provided leads students through the experiment step by step. Spaces for student measurements and conclusions are provided on the sheet. This document may be downloaded in PDF file format.
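
    For reference, the series and parallel resistance rules exercised in such a lab are easy to check numerically. A minimal sketch with arbitrary component values:

    ```python
    def series(*rs):
        """Resistances in series simply add."""
        return sum(rs)

    def parallel(*rs):
        """Parallel resistances combine by the reciprocal rule."""
        return 1.0 / sum(1.0 / r for r in rs)

    # Two 100-ohm resistors in parallel give 50 ohms; Ohm's law then gives
    # the current drawn from a 9 V source.
    r_total = parallel(100.0, 100.0)
    print(r_total, 9.0 / r_total)   # 50.0 ohms, 0.18 A
    ```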

  7. Parallel programming with PCN

    SciTech Connect

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  8. Parallel and distributed computation

    Microsoft Academic Search

    Dimitri P. Bertsekas; John N. Tsitsiklis

    1989-01-01

    This book focuses on numerical algorithms suited for parallelization for solving systems of equations and optimization problems. Emphasis is on relaxation methods of the Jacobi and Gauss-Seidel type, and on issues of communication and synchronization. Topics covered include: algorithms for systems of linear equations and matrix inversion; iterative methods for nonlinear problems; and shortest paths and dynamic programming.

  9. Parallel Spectral Numerical Methods

    NSDL National Science Digital Library

    Gong Chen

    This module teaches the principles of Fourier spectral methods, their utility in solving partial differential equations, and how to implement them in code. Performance considerations for several Fourier spectral implementations are discussed, and methods for effective scaling on parallel computers are explained.
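
    As a concrete illustration of the idea, the periodic 1-D heat equation can be advanced exactly in Fourier space, since each mode decays independently. A minimal NumPy sketch; the grid size, viscosity, and initial condition are arbitrary choices, not taken from the module.

    ```python
    import numpy as np

    # Periodic 1-D heat equation u_t = nu * u_xx: each Fourier mode evolves as
    # u_hat(k, t) = u_hat(k, 0) * exp(-nu * k**2 * t).
    n, L, nu, t = 256, 2*np.pi, 0.1, 1.0
    x = np.linspace(0, L, n, endpoint=False)
    u0 = np.sin(x) + 0.5*np.sin(3*x)

    k = 2*np.pi * np.fft.fftfreq(n, d=L/n)        # angular wavenumbers
    u = np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-nu * k**2 * t)))

    # Check against the exact solution for this initial condition.
    exact = np.exp(-nu*t)*np.sin(x) + 0.5*np.exp(-nu*9*t)*np.sin(3*x)
    print(np.max(np.abs(u - exact)))              # machine precision, ~1e-15
    ```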

  10. Note on parallel universes

    Microsoft Academic Search

    Niall M. Adams; David J. Hand

    2007-01-01

    The parallel universes idea is an attempt to integrate several aspects of learning which share some common aspects. This is an interesting idea: if successful, insights could cross-fertilise, leading to advances in each area. The 'multi-view' perspective seems to us to have particular potential. We have investigated several aspects of this, including the following:

  11. High performance parallel architectures

    SciTech Connect

    Anderson, R.E. (Lawrence Livermore National Lab., CA (USA))

    1989-09-01

    In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user/programmer's point of view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

  12. Parallel-plate viscometer

    NASA Technical Reports Server (NTRS)

    Fearnehough, H. T.; Fedors, R. F.; Landel, R. F.; Sauer, T. H.

    1972-01-01

    The viscometer consists of a movable vertical rod with one optical flat fixed to its lower end and centered over a second optical flat held rigidly parallel to the movable flat. Two perforated diaphragms of thin metal permit a limited amount of vertical movement of the rod carrying the movable flat, but resist lateral movement.

  13. Learning in Parallel

    E-print Network

    Vitter, Jeffrey Scott; Lin, Jyh-Han

    1992-01-01

    ...us an alternate characterization of NC-learnable concept classes. It implies, for example, that all NC^k-learnable problems are also AC^k-learnable, since NC^k-learning requires that the circuits have bounded fanin, and AC^k...

  14. Parallel hierarchical global illumination

    SciTech Connect

    Snell, Q.O.

    1997-10-08

    Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

  15. Optimizing parallel reduction operations

    SciTech Connect

    Denton, S.M.

    1995-06-01

    A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
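
    The associativity requirement discussed above is what makes such reductions parallelizable: each worker can fold a contiguous chunk, and the partial results are then folded in order. A hedged Python sketch of a user-defined reduction (the chunking scheme and worker count are illustrative, and unrelated to the Sisal implementation):

    ```python
    from functools import reduce
    from multiprocessing import Pool

    def chunk_reduce(args):
        op, chunk = args
        return reduce(op, chunk)

    def parallel_reduce(op, data, workers=4):
        """Reduce `data` with an *associative* binary op: workers fold
        contiguous chunks, then the partials are folded in order."""
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with Pool(len(chunks)) as pool:
            partials = pool.map(chunk_reduce, [(op, c) for c in chunks])
        return reduce(op, partials)

    if __name__ == "__main__":
        import operator
        print(parallel_reduce(operator.add, list(range(1_000_000))))  # 499999500000
    ```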

  16. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

  17. Program for computer aided reliability estimation

    NASA Technical Reports Server (NTRS)

    Mathur, F. P. (inventor)

    1972-01-01

    A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
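
    A typical entry in such a repository of redundancy-scheme equations is the k-out-of-n reliability function for identical, independent components. A minimal sketch (illustrative only, not code from the program described):

    ```python
    from math import comb

    def k_of_n(k, n, r):
        """Probability that at least k of n independent components,
        each of reliability r, are working."""
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    print(k_of_n(1, 2, 0.9))   # simple parallel pair: 0.99
    print(k_of_n(2, 3, 0.9))   # triple modular redundancy (2-of-3 voting): 0.972
    ```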

  18. A Reliable Affine Relaxation Method for Global Optimization

    E-print Network

    Ninin, Messine, Hansen

    2012-10-23

    Oct 23, 2012 ... For example, the multiplication between two affine forms of type AF1 is performed as follows: ... two decimals. The aim is to ... of a reliable affine arithmetic, the difficulties of computing the division and nonlinear functions.

  19. Mathematical models for the reliability research [telecommunications systems]

    Microsoft Academic Search

    N. Kazakova

    2003-01-01

    Summary form only given. The analysis of the mathematical models carried out has allowed the generation of calculated expressions for the determination of reliability indexes of computer network routes from known values of the element indexes.

  20. Multinomial-exponential reliability function: a software reliability model

    Microsoft Academic Search

    Amalio Saiz de Bustamante; Barbara Saiz de Bustamante

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson

  1. On Component Reliability and System Reliability for Space Missions

    NASA Technical Reports Server (NTRS)

    Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

    2012-01-01

    This paper addresses the basics, the limitations, and the relationship between component reliability and system reliability through a study of flight computing architectures and related avionics components for future NASA missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

  2. Device for balancing parallel strings

    DOEpatents

    Mashikian, Matthew S. (Storrs, CT)

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  3. Mapping Unstructured Parallelism to Series-Parallel DAGs

    E-print Network

    Pan, Yan

    Many parallel programming languages allow programmers to describe parallelism by using constructs such as fork/join. When executed, such programs can be modeled as directed graphs, with nodes representing a computation and ...

  4. Using 'Parallel Automaton' as a Single Notation to Specify, Design and Control Small Computer Based Systems

    Microsoft Academic Search

    H. G. Mendelbaum; Raphael B. Yehezkael

    2001-01-01

    We present in this paper a methodology for using a 'Parallel Automaton' to set up the requirements, to specify, and to execute small Computer Based Systems (CBS). A 'Parallel Automaton' is an extended form of the Mealy machine. It handles a finite set of events (or variable conditions or clock conditions) which can occur in parallel, and performs a finite

  5. Parallel processing techniques for finite element analysis of nonlinear large truss structures

    NASA Technical Reports Server (NTRS)

    Chien, L. S.; Sun, C. T.

    1989-01-01

    Methods were developed for parallel processing of finite element solutions of large truss structures. The parallel processing techniques were implemented in two stages, i.e., the repeated forming of the nonlinear global stiffness matrix and the solving of the global system of equations. The Sequent Balance 21000 parallel computer was employed to demonstrate the procedures and the speed-up.

  6. Parallel Vegetation Stripe Formation Through Hydrologic Interactions

    NASA Astrophysics Data System (ADS)

    Cheng, Yiwei; Stieglitz, Marc; Turk, Greg; Engel, Victor

    2010-05-01

    It has long been a challenge to theoretical ecologists to describe vegetation pattern formations such as the "tiger bush" stripes and "leopard bush" spots in Niger, and the regular maze patterns often observed in bogs in North America and Eurasia. To date, most simulation models focus on reproducing the spot and labyrinthine patterns, and on the vegetation bands which form perpendicular to surface and groundwater flow directions. Various hypotheses have been invoked to explain the formation of vegetation patterns: selective grazing by herbivores, fire, and anisotropic environmental conditions such as slope. Recently, short-distance facilitation and long-distance competition between vegetation (scale-dependent feedback) has been proposed as a generic mechanism for vegetation pattern formation. In this paper, we test the generality of this mechanism by employing an existing, spatially explicit, advection-reaction-diffusion type model to describe the formation of regularly spaced vegetation bands, including those that are parallel to the flow direction. Such vegetation patterns are, for example, characteristic of the ridge and slough habitat in the Florida Everglades and are thought to have formed parallel to the prevailing surface water flow direction. To our knowledge, this is the first time that a simple model encompassing a nutrient accumulation mechanism along with biomass development and flow is used to demonstrate the formation of parallel stripes. We also explore the interactive effects of plant transpiration, slope, and anisotropic hydraulic conductivity on the resulting vegetation pattern. Our results highlight the ability of the short-distance facilitation and long-distance competition mechanism to explain the formation of the different vegetation patterns beyond semi-arid regions. We therefore propose that the parallel stripes, like the other periodic patterns observed in both isotropic and anisotropic environments, are self-organized and form as a result of scale-dependent feedback. Results from this study improve upon the current understanding of the formation of parallel stripes and provide a more general theoretical framework for future empirical and modeling efforts.
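
    The advection-reaction-diffusion machinery referred to above can be sketched numerically in a few lines. The toy single-species equation below shows only the numerical ingredients (the published model couples biomass to nutrient transport to obtain the scale-dependent feedback); all coefficients are arbitrary.

    ```python
    import numpy as np

    # Explicit finite-difference step for a generic 1-D advection-reaction-
    # diffusion equation b_t = D*b_xx - v*b_x + r*b*(1 - b), periodic boundaries
    # (logistic growth stands in for the biomass dynamics).
    n, dx, dt = 200, 1.0, 0.01
    D, v, r = 1.0, 0.5, 1.0
    b = 0.1 + 0.01 * np.random.rand(n)

    def step(b):
        b_xx = (np.roll(b, -1) - 2*b + np.roll(b, 1)) / dx**2
        b_x = (np.roll(b, -1) - np.roll(b, 1)) / (2*dx)
        return b + dt * (D*b_xx - v*b_x + r*b*(1 - b))

    for _ in range(10_000):
        b = step(b)
    print(b.min(), b.max())   # the logistic term drives b toward 1
    ```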

  7. Reliability Impacts in Life Support Architecture and Technology Selection

    NASA Technical Reports Server (NTRS)

    Lange, Kevin E.; Anderson, Molly S.

    2011-01-01

    Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
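
    Under the constant-failure-rate assumption used in the analysis, the probability that a stock of spares suffices over a mission is a Poisson tail sum, and a serial system multiplies the subsystem probabilities together. A hedged sketch; the MTBF values, mission length, and spare counts below are invented for illustration.

    ```python
    from math import exp, factorial

    def spares_confidence(mtbf, mission_hours, n_spares):
        """P(at most n_spares failures during the mission), assuming a constant
        failure rate (Poisson failure counts) and instant replacement."""
        lam_t = mission_hours / mtbf
        return sum(lam_t**i * exp(-lam_t) / factorial(i) for i in range(n_spares + 1))

    def system_reliability(units, mission_hours=8760.0):
        """Serial system: every function must survive, so probabilities multiply."""
        p = 1.0
        for mtbf, spares in units:
            p *= spares_confidence(mtbf, mission_hours, spares)
        return p

    # Three units, 20,000-hour MTBF each, with 0, 1, and 2 spares (1-year mission):
    print(system_reliability([(20_000, 0), (20_000, 1), (20_000, 2)]))
    ```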

  8. Reliability analysis of continuous fiber composite laminates

    NASA Technical Reports Server (NTRS)

    Thomas, David J.; Wetherhold, Robert C.

    1990-01-01

    A composite lamina may be viewed as a homogeneous solid whose directional strengths are random variables. Calculation of the lamina reliability under a multi-axial stress state can be approached by either assuming that the strengths act separately (modal or independent action), or that they interact through a quadratic interaction criterion. The independent action reliability may be calculated in closed form, while interactive criteria require simulations; there is currently insufficient data to make a final determination of preference between them. Using independent action for illustration purposes, the lamina reliability may be plotted either in stress space or in a non-dimensional representation. For the typical laminated plate structure, the individual lamina reliabilities may be combined in order to produce formal upper and lower bounds of reliability for the laminate, similar in nature to the bounds on properties produced from variational elastic methods. These bounds are illustrated for a (0/±15)s graphite/epoxy (GR/EP) laminate. In addition, simple, physically plausible phenomenological rules are proposed for the redistribution of load after a lamina has failed. These rules are illustrated by application to (0/±15)s and (90/±45/0)s GR/EP laminates, and the results are compared with respect to the proposed bounds.

  9. Data Parallelism and Matrix Multiplication 1 Data Parallelism

    E-print Network

    Verschelde, Jan

    Lecture outline: (1) data parallelism: matrix-matrix multiplication; CUDA program structure. (2) Code for matrix-matrix multiplication: linear address system for a 2-dimensional array. (Jan Verschelde, 31 March 2014, Introduction to Supercomputing, MCS 572.)
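
    The "linear address system" in the outline refers to storing a 2-D array row-major and addressing element (i, j) as i*n + j, which is how a CUDA kernel typically indexes matrices. A plain-Python sketch of that addressing (in an actual kernel the two outer loops become the thread grid):

    ```python
    def matmul_flat(A, B, n):
        """Multiply two n-by-n matrices stored as flat row-major lists, using
        the linear address a[i*n + j] for element (i, j)."""
        C = [0.0] * (n * n)
        for i in range(n):            # in CUDA, one (i, j) pair per thread
            for j in range(n):
                s = 0.0
                for k in range(n):
                    s += A[i*n + k] * B[k*n + j]
                C[i*n + j] = s
        return C

    A = [1, 2, 3, 4]                  # [[1, 2], [3, 4]] in row-major order
    B = [5, 6, 7, 8]
    print(matmul_flat(A, B, 2))       # [19.0, 22.0, 43.0, 50.0]
    ```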

  10. ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator

    Microsoft Academic Search

    Minesh B. Amin; Bapiraju Vinnakota

    1996-01-01

    Sequential circuit fault simulators use the multiple bits in a computer data word to accelerate simulation. We introduce and implement a new sequential circuit fault simulator, a parallel pattern parallel fault simulator, ZAMBEZI, which simultaneously simulates multiple faults with multiple vectors in one data word. ZAMBEZI is developed by enhancing the control flow of existing parallel pattern algorithms. For a

  11. Hybrid Parallel Programming with MPI and Unified Parallel C

    E-print Network

    Balaji, Pavan

    Hybrid Parallel Programming with MPI and Unified Parallel C James Dinan Dept. Comp. Sci. and Eng (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node

  12. Parallel Private-Cache Algorithms Parallel Private-Cache Algorithms

    E-print Network

    Arge, Lars

    Lecture outline: multicores; the Parallel External Memory (PEM) model; the PRAM model (a main memory shared by CPUs 1 through P); the external memory (EM) model; PEM as a simple model for combining parallelism...

  13. Resistor Combinations for Parallel Circuits.

    ERIC Educational Resources Information Center

    McTernan, James P.

    1978-01-01

    To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)

  14. Standard Templates Adaptive Parallel Library

    E-print Network

    Arzu, Francisco Jose

    2000-01-01

    STAPL (Standard Templates Adaptive Parallel Library) is a parallel C++ library designed as a superset of the C++ Standard Template Library (STL), sequentially consistent for functions with the same name, and executed on uni- or multiprocessor...

  15. Reliability and medical device manufacturing

    Microsoft Academic Search

    L. J. Beasley

    1995-01-01

    Reliability is like quality: it has many definitions, and means different things to different people. To effect reliability, we must establish clear communications, use a common language, and learn to set expectations. We must work toward a language upon which all agree. Reliability must be defined and specified in measurable terms. Realistic, achievable goals (defined in terms set by the customer)

  16. Trend testing in reliability engineering

    Microsoft Academic Search

    George A. Bohoris

    1996-01-01

    Considers trend testing in the context of reliability/survival applications. Suggests that the very common tendency in reliability testing to fit lifetime distributions to reliability/maintenance data might occasionally be invalid. Details the appropriate methods to assess the validity, or otherwise, of such a procedure. More specifically, discusses ROCOF curves and the Laplace test for trend, and demonstrates their use by means
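
    The Laplace test mentioned here has a simple closed form for event times observed on an interval (0, T]. A minimal sketch; the sample failure times are invented.

    ```python
    from math import sqrt

    def laplace_test(failure_times, t_end):
        """Laplace trend statistic for event times on (0, t_end]. Under a
        homogeneous Poisson process U is approximately N(0, 1); a large
        positive U suggests an increasing ROCOF, a large negative U a
        decreasing one."""
        n = len(failure_times)
        return (sum(failure_times)/n - t_end/2) / (t_end * sqrt(1.0/(12*n)))

    times = [50, 210, 340, 420, 470, 490]    # failures clustering late
    print(laplace_test(times, t_end=500))    # positive: ROCOF appears to increase
    ```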

  17. Factor reliability into load management

    SciTech Connect

    Feight, G.R.

    1983-07-01

    Hardware reliability is a major factor to consider when selecting a direct-load-control system. The author outlines a method of estimating present-value costs associated with system reliability. He points out that small differences in receiver reliability make a significant difference in owning cost. 4 figures.

  18. Testing for PV Reliability (Presentation)

    SciTech Connect

    Kurtz, S.; Bansal, S.

    2014-09-01

    The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

  19. Reliability on Multilayer Models

    E-print Network

    Jereb, László

    Network Reliability Analysis Based on Multilayer Models. László Jereb, Péter Bajor, Attila Kiss. ...a reliability analysis approach which is based on the multilayer model of the telecommunication network. Simple two-state reliability models are assigned to the network elements, making it possible to describe

  20. ARMY VEHICLE DURABILITY OPTIMIZATION & RELIABILITY

    E-print Network

    Kusiak, Andrew

    Poster outline: How to optimize the vehicle design to minimize/reduce the weight? Under these uncertainties, how to achieve component-level reliability? Under these uncertainties, how to achieve system-level reliability? (Dynamics analysis; FE model; system model; dynamic stress.)

  1. Design of reliable control systems

    Microsoft Academic Search

    Robert J. Veillette; J. B. Medanic; William R. Perkins

    1992-01-01

    A methodology for the design of reliable centralized and decentralized control systems is developed. The resulting control systems are reliable in that they provide guaranteed stability and H∞ performance not only when all control components are operational, but also for sensor or actuator outages in the centralized case, or for control-channel outages in the decentralized case. Reliability is guaranteed

  2. Spectrophotometric Assay of Mebendazole in Dosage Forms Using Sodium Hypochlorite

    NASA Astrophysics Data System (ADS)

    Swamy, N.; Prashanth, K. N.; Basavaiah, K.

    2014-07-01

    A simple, selective, and sensitive spectrophotometric method is described for the determination of mebendazole (MBD) in bulk drug and dosage forms. The method is based on the reaction of MBD with hypochlorite in the presence of sodium bicarbonate to form the chloro derivative of MBD, followed by the destruction of the excess hypochlorite by nitrite ion. The color was formed by the oxidation of iodide with the chloro derivative of MBD to iodine in the presence of starch, forming the blue colored product, which was measured at 570 nm. The optimum conditions that affect the reaction were ascertained and, under these conditions, a linear relationship was obtained in the concentration range of 1.25-25.0 µg/ml MBD. The calculated molar absorptivity and Sandell sensitivity values are 9.56·10³ l·mol⁻¹·cm⁻¹ and 0.031 µg/cm², respectively. The limits of detection and quantification are 0.11 and 0.33 µg/ml, respectively. The proposed method was applied successfully to the determination of MBD in bulk drug and dosage forms, and no interference was observed from excipients present in the dosage forms. The reliability of the proposed method was further checked by parallel determination by the reference method and also by recovery studies.

  3. Further discussion on reliability: the art of reliability estimation.

    PubMed

    Yang, Yanyun; Green, Samuel B

    2015-01-01

    Sijtsma and van der Ark (2015) focused in their lead article on three frameworks for reliability estimation in nursing research: classical test theory (CTT), factor analysis (FA), and generalizability theory. We extend their presentation with particular attention to CTT and FA methods. We first consider the potential of yielding an overly negative or an overly positive assessment of reliability based on coefficient alpha. Next, we discuss other CTT methods for estimating reliability and how the choice of methods affects the interpretation of the reliability coefficient. Finally, we describe FA methods, which not only permit an understanding of a measure's underlying structure but also yield a variety of reliability coefficients with different interpretations. On a more general note, we discourage reporting reliability as a two-choice outcome--unsatisfactory or satisfactory; rather, we recommend that nursing researchers make a conceptual and empirical argument about when a measure might be more or less reliable, depending on its use. PMID:25738627

  4. Parallel multilevel preconditioners

    SciTech Connect

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1989-01-01

    In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.
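
    The preconditioned iteration referred to above can be illustrated with conjugate gradients. The sketch below uses a simple Jacobi (diagonal) preconditioner rather than the paper's multilevel construction, purely to show where a preconditioner plugs into the iteration; the model problem is an arbitrary 1-D Laplacian.

    ```python
    import numpy as np

    def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
        """Preconditioned conjugate gradients for SPD A; M_inv applies the
        inverse of the preconditioner to a vector."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # 1-D Laplacian model problem with a Jacobi (diagonal) preconditioner.
    n = 100
    A = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = pcg(A, b, lambda res: res / np.diag(A))
    print(np.linalg.norm(A @ x - b))   # small residual
    ```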

  5. Parallel Subconvolution Filtering Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Andrew A.

    2003-01-01

    These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete- Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than on the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
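
    A hedged NumPy sketch of the DFT-IDFT overlap-and-save scheme underlying these architectures: each FFT block shares its last M-1 input samples with the next block, and the first M-1 (circularly wrapped) outputs of each block are discarded. The block size and test filter are arbitrary choices, not design values from the report.

    ```python
    import numpy as np

    def overlap_save(x, h, n_fft=256):
        """FIR-filter x with taps h using the overlap-and-save method."""
        m = len(h)
        step = n_fft - (m - 1)                      # valid outputs per block
        H = np.fft.fft(h, n_fft)
        xp = np.concatenate([np.zeros(m - 1), x])   # prime the first block
        out = []
        for start in range(0, len(x), step):
            block = xp[start:start + n_fft]
            if len(block) < n_fft:
                block = np.pad(block, (0, n_fft - len(block)))
            yb = np.real(np.fft.ifft(np.fft.fft(block) * H))
            out.append(yb[m - 1:])                  # drop the wrapped samples
        return np.concatenate(out)[:len(x)]

    x = np.random.randn(1000)
    h = np.random.randn(31)
    print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:1000]))   # True
    ```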

  6. The massively parallel processor

    NASA Technical Reports Server (NTRS)

    Schaefer, D. H.; Fischer, J. R.; Wallgren, K. R.

    1980-01-01

    Future sensor systems will utilize massively parallel computing systems for rapid analysis of two-dimensional data. The Goddard Space Flight Center has an ongoing program to develop these systems. A single-instruction multiple data computer known as the Massively Parallel Processor (MPP) is being fabricated for NASA by the Goodyear Aerospace Corporation. This processor contains 16,384 processing elements arranged in a 128 x 128 array. The MPP will be capable of adding more than 6 billion 8-bit numbers per second. Multiplication of eight-bit numbers can occur at a rate of 2 billion per second. Delivery of the MPP to Goddard Space Flight Center is scheduled for 1983.

  7. Predicting Performance of Parallel Computations

    Microsoft Academic Search

    Victor Wing-kit Mak; Stephen F. Lundstrom

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models

  8. Parallel Sparse Solvers, Preconditioners, and

    E-print Network

    Geddes, Cameron Guy Robinson

    Chapter 1: Parallel Sparse Solvers, Preconditioners, and Their Applications. 1.1 Introduction. ...it is essential to exploit sparsity. Moreover, parallel computing is an essential tool to reduce... This chapter will sample some of the most recent work on the parallel solution of large sparse linear systems

  9. Scha's Parallel Lines Bernhard Nickel

    E-print Network

    Nickel, Bernhard

    1 Introduction. Scha (1981) has a famous example: sentence (1) is about figure 1, sentence (2) is about figure 2. (1) The sides of R1 run parallel to the sides of R2. (2) The single lines run parallel to the double lines.

  10. Loadflow analysis on parallel computers

    Microsoft Academic Search

    Chi-Pui Ng; Kamal Jabbour; Walter Meyer

    1989-01-01

    The speed performance of parallel computers in power system loadflow analysis is evaluated. Three commercial parallel computers, each of which has a fundamentally different architecture, are evaluated. A methodology for determining a suitable architecture for a given problem is developed. Major issues that cause performance degradation in parallel processing are discussed

  11. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Painter, J.; Hansen, C.

    1996-10-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  12. Parallelization: Infectious Disease

    NSDL National Science Digital Library

    Aaron Weeden

    Epidemiology is the study of infectious disease. Infectious diseases are said to be "contagious" among people if they are transmittable from one person to another. Epidemiologists can use models to assist them in predicting the behavior of infectious diseases. This module will develop a simple agent-based infectious disease model, develop a parallel algorithm based on the model, provide a coded implementation for the algorithm, and explore the scaling of the coded implementation on high performance cluster resources.
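
    A minimal agent-based version of the model such a module builds might look as follows; within a time step each agent's update depends only on the previous state, which is what makes the computation parallelizable. All parameter values and names here are invented for illustration.

    ```python
    import random

    def sir_step(states, beta, gamma, contacts=10):
        """One day of a toy SIR model with random mixing: each infectious agent
        meets `contacts` random agents, infecting susceptibles with probability
        beta, then recovers with probability gamma."""
        n = len(states)
        new = states[:]                 # updates read the old state only
        for i, s in enumerate(states):
            if s == "I":
                for _ in range(contacts):
                    j = random.randrange(n)
                    if states[j] == "S" and random.random() < beta:
                        new[j] = "I"
                if random.random() < gamma:
                    new[i] = "R"
        return new

    states = ["I"] * 5 + ["S"] * 995
    for _ in range(60):
        states = sir_step(states, beta=0.03, gamma=0.1)
    print(states.count("S"), states.count("I"), states.count("R"))
    ```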

  13. Xyce parallel electronic simulator.

    SciTech Connect

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) to exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  14. The Reliability of Neurons

    PubMed Central

    Bullock, Theodore Holmes

    1970-01-01

    The prevalent probabilistic view is virtually untestable; it remains a plausible belief. The cases usually cited cannot be taken as evidence for it. Several grounds for this conclusion are developed. Three issues are distinguished in an attempt to clarify a murky debate: (a) the utility of probabilistic methods in data reduction, (b) the value of models that assume indeterminacy, and (c) the validity of the inference that the nervous system is largely indeterministic at the neuronal level. No exception is taken to the first two; the second is a private heuristic question. The third is the issue to which the assertion in the first two sentences is addressed. Of the two kinds of uncertainty, statistical mechanical (= practical unpredictability) as in a gas, and Heisenbergian indeterminacy, the first certainly exists, the second is moot at the neuronal level. It would contribute to discussion to recognize that neurons perform with a degree of reliability. Although unreliability is difficult to establish, to say nothing of measure, evidence that some neurons have a high degree of reliability, in both connections and activity, is increasing greatly. An example is given from sternarchine electric fish. PMID:5462670

  15. Reliability analysis on PVT correlations

    SciTech Connect

    De Ghetto, G.; Paone, F.; Villa, M.

    1994-12-31

    This paper evaluates the reliability of the most common empirical correlations used for determining reservoir fluid properties whenever laboratory PVT data are not available: bubblepoint pressure, solution GOR, bubblepoint OFVF, isothermal compressibility, dead-oil viscosity, gas-saturated oil viscosity, and undersaturated oil viscosity. The reliability of the correlations has been evaluated against a set of 195 crude oil samples collected from the Mediterranean Basin, Africa, the Persian Gulf, and the North Sea. About 3700 measured data points have been collected and investigated. All measured data points are reported in the paper. For all the correlations, the following statistical parameters have been calculated: (a) relative deviation between estimated and experimental values, (b) average absolute percent error, (c) standard deviation. Oil samples have been divided into the following four API gravity classes: extra-heavy oils for °API ≤ 10, heavy oils for 10 < °API ≤ 22.3, medium oils for 22.3 < °API ≤ 31.1, light oils for °API > 31.1. The best correlations both for each class and for the whole range of API gravity have been evaluated for each oil property. The functional forms of the correlations that gave the best results for each oil property have been used for finding a better correlation with average errors reduced by 5-10%. In particular, for extra-heavy oils, since no correlations are available in the literature (except for viscosity), a special investigation has been performed and new equations are proposed.

  16. The Assumption of a Reliable Instrument and Other Pitfalls to Avoid When Considering the Reliability of Data

    PubMed Central

    Nimon, Kim; Zientek, Linda Reichwein; Henson, Robin K.

    2012-01-01

    The purpose of this article is to help researchers avoid common pitfalls associated with reliability including incorrectly assuming that (a) measurement error always attenuates observed score correlations, (b) different sources of measurement error originate from the same source, and (c) reliability is a function of instrumentation. To accomplish our purpose, we first describe what reliability is and why researchers should care about it with focus on its impact on effect sizes. Second, we review how reliability is assessed with comment on the consequences of cumulative measurement error. Third, we consider how researchers can use reliability generalization as a prescriptive method when designing their research studies to form hypotheses about whether or not reliability estimates will be acceptable given their sample and testing conditions. Finally, we discuss options that researchers may consider when faced with analyzing unreliable data. PMID:22518107

  17. Parallelized direct execution simulation of message-passing parallel programs

    NASA Technical Reports Server (NTRS)

    Dickens, Phillip M.; Heidelberger, Philip; Nicol, David M.

    1994-01-01

    As massively parallel computers proliferate, there is growing interest in finding ways by which the performance of massively parallel codes can be efficiently predicted. This problem arises in diverse contexts such as parallelizing compilers, parallel performance monitoring, and parallel algorithm development. In this paper we describe one solution where one directly executes the application code, but uses a discrete-event simulator to model details of the presumed parallel machine, such as operating system and communication network behavior. Because this approach is computationally expensive, we are interested in its own parallelization, specifically the parallelization of the discrete-event simulator. We describe methods suitable for parallelized direct execution simulation of message-passing parallel programs, and report on the performance of such a system, the Large Application Parallel Simulation Environment (LAPSE), which we have built on the Intel Paragon. On all codes measured to date, LAPSE predicts performance well, typically within 10 percent relative error. Depending on the nature of the application code, we have observed low slowdowns (relative to natively executing code) and high relative speedups using up to 64 processors.

  18. Reliability of perception of fever by touch

    Microsoft Academic Search

    Deepti Chaturvedi; K. Y. Vilhekar; Pushpa Chaturvedi; M. S. Bharambe

    2003-01-01

    Objective: To assess the reliability of touch to predict fever in children. Methods: 200 children who presented with fever formed the study group. Group I consisted of 100 children between 0–1 year of age and Group II consisted of 100 children between 6–12 years of age. Preterm infants, neonates under a warming device, and tachypnoeic or hypothermic children were excluded from the study.

  19. Reliable communication in the presence of failures

    Microsoft Academic Search

    Kenneth P. Birman; Thomas A. Joseph

    1987-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

  20. Reliable Communication in the Presence of Failures

    Microsoft Academic Search

    KENNETH P. BIRMAN; THOMAS A. JOSEPH

    1985-01-01

    The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

  1. Multidimensional Resource Scheduling for Parallel Queries

    Microsoft Academic Search

    Minos N. Garofalakis; Yannis E. Ioannidis

    1996-01-01

    Scheduling query execution plans is an important component of query optimization in parallel database systems. The problem is particularly complex in a shared-nothing execution environment, where each system node represents a collection of time-shareable resources (e.g., CPU(s), disk(s), etc.) and communicates with other nodes only by message-passing. Significant research effort has concentrated on only a subset of the various forms

  2. Killing Forms on Symmetric Spaces

    E-print Network

    Killing Forms on Symmetric Spaces. Florin Belgun, Andrei Moroianu, and Uwe Semmelmann. Abstract: Killing forms on Riemannian manifolds are differential forms whose... space carries a non-parallel Killing p-form (p ≥ 2) if and only if it is isometric to a Riemannian...

  3. DPL: a data parallel language for the expression and execution of general parallel algorithm

    Microsoft Academic Search

    Robert Gordon Willhoft

    1995-01-01

    The need for a powerful, easy to use, parallel language continues despite very significant advances in the area of parallel processing. Many parallel languages are simply old sequential languages with parallel constructs added. This research describes the Data Parallel Language (DPL), a parallel language built from its foundations on parallel concepts. DPL bases much of its expression on data parallelism

  5. Computer Assisted Parallel Program Generation

    E-print Network

    Kawata, Shigeo

    2015-01-01

    Parallel computation is widely employed in scientific research, engineering activities, and product development. Parallel program writing itself is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses, and precise visualizations, for example, would require parallel computations, and parallel computing needs parallelization techniques. In this chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our life toward a programming-free environment in computing science. Problem solving environment (PSE) research activities started to enhance programming power in the 1970's. P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

  6. Reliability Analysis of An Energy-Aware RAID System Shu Yin, Yun Tian, Jiong Xie, and Xiao Qin

    E-print Network

    Qin, Xiao

    ...by data locality. Keywords: parallel storage system, RAID, energy efficiency, reliability. ...the reliability trend of energy-aware storage systems. However, it is challenging to validate the MREED model... (Department of Computer Science and Software Engineering, Auburn University, Auburn, AL 36849.)

  7. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Mission Critical Space System Personnel Reliability ... Acquisition Regulations System; NATIONAL AERONAUTICS AND SPACE ADMINISTRATION; CLAUSES AND FORMS; SOLICITATION ... 246-70 Mission Critical Space System Personnel ...

  8. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Mission Critical Space System Personnel Reliability ... Acquisition Regulations System; NATIONAL AERONAUTICS AND SPACE ADMINISTRATION; CLAUSES AND FORMS; SOLICITATION ... 246-70 Mission Critical Space System Personnel ...

  9. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Mission Critical Space System Personnel Reliability ... Acquisition Regulations System; NATIONAL AERONAUTICS AND SPACE ADMINISTRATION; CLAUSES AND FORMS; SOLICITATION ... 246-70 Mission Critical Space System Personnel ...

  10. 48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Mission Critical Space System Personnel Reliability ... Acquisition Regulations System; NATIONAL AERONAUTICS AND SPACE ADMINISTRATION; CLAUSES AND FORMS; SOLICITATION ... 246-70 Mission Critical Space System Personnel ...

  11. The Verification-based Analysis of Reliable Multicast Protocol

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1996-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  12. Portable Parallel Adaptation of Unstructured 3D Meshes

    Microsoft Academic Search

    Paul M. Selwood; Martin Berzins; Jonathan M. Nash; Peter M. Dew

    1998-01-01

    The need to solve ever-larger transient CFD problems more efficiently and reliably has led to the use of mesh adaptation on distributed memory parallel computers. PTETRAD is a portable parallelisation of a general-purpose, unstructured, tetrahedral adaptation code. The variation of the tetrahedral mesh density both in space and time gives rise to dynamic load balancing problems that are time-varying in

  13. Parallel sphere rendering

    SciTech Connect

    Krogh, M.; Hansen, C.; Painter, J. [Los Alamos National Lab., NM (United States); de Verdiere, G.C. [CEA Centre d`Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)

    1995-05-01

    Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

  14. Parallel Kinematic Machines (PKM)

    SciTech Connect

    Henry, R.S.

    2000-03-17

    The purpose of this 3-year cooperative research project was to develop a parallel kinematic machining (PKM) capability for complex parts that normally require expensive multiple setups on conventional orthogonal machine tools. This non-conventional, non-orthogonal machining approach is based on a 6-axis positioning system commonly referred to as a hexapod. Sandia National Laboratories/New Mexico (SNL/NM) was the lead site responsible for a multitude of projects that defined the machining parameters and detailed the metrology of the hexapod. The role of the Kansas City Plant (KCP) in this project was limited to evaluating the application of this unique technology to production applications.

  15. Parallel Eclipse Project Checkout

    NASA Technical Reports Server (NTRS)

    Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

    2011-01-01

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to check out the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a checkout request for each plug-in in the feature is inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to check out now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
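The checkout pattern this record describes (parse a feature's plug-in list, then fetch every plug-in through a bounded thread pool) can be sketched in a few lines. The sketch below is illustrative Python, not the PEPC source; checkout_plugin and the plug-in names are hypothetical stand-ins for the real SCM calls.

```python
# Illustrative sketch of the parallel-checkout pattern described above
# (not the PEPC source). checkout_plugin() is a hypothetical stand-in
# for an SCM checkout call.
from concurrent.futures import ThreadPoolExecutor, as_completed

def checkout_plugin(name):
    # Placeholder: a real implementation would invoke the SCM client here.
    print(f"checking out {name}")
    return name

def checkout_feature(plugin_names, max_threads=8):
    # A bounded thread pool keeps the number of concurrent SCM
    # connections configurable, as the description above suggests.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        futures = [pool.submit(checkout_plugin, p) for p in plugin_names]
        return [f.result() for f in as_completed(futures)]

if __name__ == "__main__":
    checkout_feature(["org.example.core", "org.example.ui", "org.example.net"])
```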

  16. Human reliability assessment: tools for law enforcement

    NASA Astrophysics Data System (ADS)

    Ryan, Thomas G.; Overlin, Trudy K.

    1997-01-01

This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process for identifying, describing, quantifying, and interpreting the state of human performance, and for developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear to be trivial in other venues can make the difference between a successful and an unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation, and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing actions and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted with problem solving and decision making to a greater degree than are the diverse individuals and teams responsible for arrest and adjudication in criminal proceedings. This paper concludes that, because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and significant payoff.

  17. Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again

    ERIC Educational Resources Information Center

    Morley, Donald D.

    2012-01-01

    The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

  18. Reliability and structural integrity. [analytical model for calculating crack detection probability

    NASA Technical Reports Server (NTRS)

    Davidson, J. R.

    1973-01-01

    An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

  19. Reliability of wireless sensor networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2014-01-01

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies for reducing the power consumption of WSN nodes (thereby increasing the network lifetime) and for increasing the reliability of the network (thereby improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs that considers the battery level as a key factor. Moreover, this model is based on the routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of power consumption on the reliability of WSNs. PMID:25157553
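The reliability/power trade-off the abstract describes can be made concrete with a minimal sketch: sending a packet over more disjoint paths raises delivery probability but multiplies the energy spent. The per-hop success probability and per-transmission energy cost below are assumed for illustration, not taken from the paper's model.

```python
# Minimal sketch of the reliability/power trade-off noted above.
# The numbers are assumed, not from the paper's model.
def path_reliability(per_hop_success, hops):
    # A single path delivers only if every hop succeeds.
    return per_hop_success ** hops

def multipath_reliability(per_hop_success, hops, n_paths):
    # Delivery succeeds if at least one disjoint path delivers the packet.
    p_path = path_reliability(per_hop_success, hops)
    return 1.0 - (1.0 - p_path) ** n_paths

def energy_cost(hops, n_paths, per_tx_energy=1.0):
    # Sending the same packet down every path multiplies the energy spent.
    return hops * n_paths * per_tx_energy

for n in (1, 2, 3):
    r = multipath_reliability(0.9, hops=5, n_paths=n)
    e = energy_cost(hops=5, n_paths=n)
    print(f"{n} path(s): reliability {r:.3f}, relative energy {e:.0f}")
```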

  20. Parallel flows with Soret effect in tilted cylinders

    NASA Technical Reports Server (NTRS)

    Jacqmin, David

    1990-01-01

    Henry and Roux (1986, 1987, 1988) have conducted extensive numerical studies on the interaction of Soret separation with convection in cylindrical geometry. Many of their solutions exhibit parallel flow away from end walls. Their parallel flow results can be matched by closed-form solutions. Solutions are nonunique in some parameter regions. Disappearance of one branch of solutions correlates with a sudden transition of Henry and Roux's results from a separated to a well-mixed flow.

  1. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...2012-04-01 2012-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

  2. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 2014-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

  3. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...2013-04-01 2013-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

  4. 18 CFR 39.5 - Reliability Standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...2011-04-01 2011-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

  5. Massively Parallel QCD

    SciTech Connect

Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampapa, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

    2007-04-11

    The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

  6. Tolerant (parallel) Programming

    NASA Technical Reports Server (NTRS)

    DiNucci, David C.; Bailey, David H. (Technical Monitor)

    1997-01-01

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

  7. US electric power system reliability

    NASA Astrophysics Data System (ADS)

    Electric energy supply, transmission and distribution systems are investigated in order to determine priorities for legislation. The status and the outlook for electric power reliability are discussed.

  8. A fourth generation reliability predictor

    NASA Technical Reports Server (NTRS)

    Bavuso, Salvatore J.; Martensen, Anna L.

    1988-01-01

    A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

  9. Fatigue reliability method with in-service inspections

    NASA Technical Reports Server (NTRS)

    Harkness, H. H.; Fleming, M.; Moran, B.; Belytschko, T.

    1994-01-01

The first-order reliability method (FORM) has traditionally been applied in a fatigue reliability setting to one inspection interval at a time, so that the random distribution of crack lengths must be recharacterized following each inspection. The approach presented here allows each analysis to span several inspection periods without explicit characterization of the crack length distribution upon each inspection. The method thereby preserves the attractive feature of FORM in that relatively few realizations in the random variable space need to be considered. Examples are given which show that the present methodology gives estimates in good agreement with Monte Carlo simulations and is efficient even for complex components.
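For readers unfamiliar with FORM, the core computation can be shown for the simplest case: a linear limit state in standard normal space, where the reliability index is the distance from the origin to the failure surface. The sketch below is a generic textbook FORM calculation, not the paper's inspection-spanning method; the coefficients are assumed.

```python
# A minimal first-order reliability method (FORM) sketch: for a linear
# limit state g(u) = b + a.u in standard normal space (failure when
# g < 0), the reliability index is beta = b/||a|| and Pf ~ Phi(-beta).
# The coefficients are assumed; a fatigue application would derive g
# from a crack-growth model, as the abstract describes.
import math

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def form_linear(a, b):
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = b / norm_a        # Hasofer-Lind reliability index
    return beta, phi(-beta)  # first-order failure probability

beta, pf = form_linear(a=[-1.0, -0.5], b=3.0)
print(f"beta = {beta:.3f}, Pf ~ {pf:.2e}")
```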

  10. Reliability of corroded pipelines

    SciTech Connect

    Jones, D.G. [British Gas plc, Northumberland (United Kingdom). On Line Inspection Centre; Dawson, S.J.; Brown, M. [British Gas plc, Newcastle upon Tyne (United Kingdom). Engineering Research Station

    1994-12-31

Corrosion in onshore and offshore pipelines is an increasing problem world-wide. The pipeline operator requires a strategy for future safe operation of a corroded pipeline. A pressure test cannot guarantee the future integrity of a corroded pipeline. Pipeline operators are increasingly using internal inspection by high resolution magnetic pigs to detect and size corrosion as the basis for defining a future safe operating strategy. Using inspection results, British Gas has successfully developed and applied a "realistic" deterministic analysis for corroded pipelines. It involves the identification of, and the calculation of the failure pressure and time to failure of, those corroded pipe spools with the highest risk of failure. This paper describes the subsequent development of a reliability based methodology which allows the failure probability with time to be determined for corroded pipelines. Examples are presented of the application to actual corroded pipelines. It is highlighted that the methodology allows the pipeline operator to evaluate appropriate future safe operating strategies including de-rating, re-inspection and/or pipeline replacement and the necessary scheduling.

  11. Safety and reliability considerations for lithium batteries

    Microsoft Academic Search

    Samuel C. Levy

    1997-01-01

    Battery safety and reliability are closely related and, in some instances, safety may be considered a subset of reliability. However, safety is a concern from manufacture through disposal. Reliability can be approached through three different perspectives: lot reliability, individual cell reliability, and root cause analysis of failed cells. To ensure a quality product, a good reliability management program must be

  12. Parallel Computing in SCALE

    SciTech Connect

    DeHart, Mark D [ORNL] [ORNL; Williams, Mark L [ORNL] [ORNL; Bowman, Stephen M [ORNL] [ORNL

    2010-01-01

    The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

  13. Parallel processing and expert systems

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Lau, Sonie

    1991-01-01

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert systems. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

  14. Parallel algorithms in linear algebra

    E-print Network

    Brent, Richard P

    2010-01-01

    This report provides an introduction to algorithms for fundamental linear algebra problems on various parallel computer architectures, with the emphasis on distributed-memory MIMD machines. To illustrate the basic concepts and key issues, we consider the problem of parallel solution of a nonsingular linear system by Gaussian elimination with partial pivoting. This problem has come to be regarded as a benchmark for the performance of parallel machines. We consider its appropriateness as a benchmark, its communication requirements, and schemes for data distribution to facilitate communication and load balancing. In addition, we describe some parallel algorithms for orthogonal (QR) factorization and the singular value decomposition (SVD).
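One of the data-distribution schemes alluded to above can be made concrete: a block-cyclic column layout keeps load balanced in Gaussian elimination because the active submatrix shrinks from the left, yet every process keeps owning columns until the end of the factorization. A minimal sketch of the ownership map only; the block size and process count are assumed.

```python
# Block-cyclic column ownership, the classic distribution for parallel
# Gaussian elimination: columns are dealt out in blocks, round-robin,
# so work stays balanced as the active submatrix shrinks.
def owner(col, n_procs, block=2):
    return (col // block) % n_procs

for p in range(3):
    print("process", p, "owns columns",
          [c for c in range(12) if owner(c, 3) == p])
```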

  15. Parallel processor engine model program

    NASA Technical Reports Server (NTRS)

    Mclaughlin, P.

    1984-01-01

    The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

  16. The complexity of parallel search

    SciTech Connect

    Karp, R.M.; Upfal, E.; Wigderson, A.

    1987-01-01

    This paper studies parallel search algorithms within the framework of independence systems. It is motivated by earlier work on parallel algorithms for concrete problems such as determining a maximal independent set of vertices or a maximum matching in a graph, and by the general question of determining the parallel complexity of a search problem when an oracle is available to solve the associated decision problem. The results provide a parallel analogue of the self-reducibility process that is so useful in sequential computation.

  17. Computation and parallel implementation for early vision

    NASA Technical Reports Server (NTRS)

    Gualtieri, J. Anthony

    1990-01-01

The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of such primitive visual features as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.
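The reason these algorithms map well to SIMD arrays is that each output pixel depends only on a small, fixed stencil of its neighbors, so all pixels can be updated in lockstep. Below is a small NumPy sketch of an edge-strength map in that style; it uses a generic Sobel stencil, not the paper's scale-space edge finder, and plain whole-array arithmetic stands in for the per-processor lockstep operations.

```python
# Sketch of the data-parallel character of early vision: an edge-strength
# map where each output pixel depends only on a 3x3 stencil, so all
# pixels can be computed in lockstep on a SIMD array.
import numpy as np

def sobel_edges(img):
    # Shifted copies express the stencil as whole-array (SIMD-style) ops.
    p = np.pad(img.astype(float), 1, mode="edge")
    gx = (p[:-2, 2:] + 2 * p[1:-1, 2:] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[1:-1, :-2] - p[2:, :-2])
    gy = (p[2:, :-2] + 2 * p[2:, 1:-1] + p[2:, 2:]
          - p[:-2, :-2] - 2 * p[:-2, 1:-1] - p[:-2, 2:])
    return np.hypot(gx, gy)

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # a vertical step edge
print(sobel_edges(img))   # large values along the step, zero elsewhere
```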

  18. Large amplitude parallel propagating electromagnetic oscillitons

    SciTech Connect

    Cattaert, Tom; Verheest, Frank [Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281, B-9000 Gent (Belgium); Sterrenkundig Observatorium, Universiteit Gent, Krijgslaan 281, B-9000 Gent (Belgium); School of Physics (Howard College Campus), University of KwaZulu-Natal, Durban 4041 (South Africa)

    2005-01-01

Earlier systematic nonlinear treatments of parallel propagating electromagnetic waves have been given within a fluid dynamic approach, in a frame where the nonlinear structures are stationary and various constraining first integrals can be obtained. This has led to the concept of oscillitons that has found application in various space plasmas. The present paper differs in three main aspects from the previous studies: first, the invariants are derived in the plasma frame, as customary in the Sagdeev method, thus retaining in Maxwell's equations all possible effects. Second, a single differential equation is obtained for the parallel fluid velocity, in a form reminiscent of the Sagdeev integrals, hence allowing a fully nonlinear discussion of the oscilliton properties, at such amplitudes as the underlying Mach number restrictions allow. Third, the transition to weakly nonlinear whistler oscillitons is done in an analytical rather than a numerical fashion.

  19. A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix

    NASA Technical Reports Server (NTRS)

    Shroff, Gautam

    1989-01-01

A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist, and they display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm, and certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

  20. Cerebro : forming parallel internets and enabling ultra-local economies

    E-print Network

    Ypodimatopoulos, Polychronis Panagiotis

    2008-01-01

    Internet-based mobile communications have been increasing rapidly [5], yet there is little or no progress in platforms that enable applications for discovery, context-awareness and sharing of data and services in a peer-wise ...

  1. Series and Parallel Circuits

    NSDL National Science Digital Library

    Kuphaldt, Tony R.

Tony R. Kuphaldt is the creator of All About Circuits, a collection of online textbooks about circuits and electricity. The site is split into volumes, chapters, and topics to make finding and learning about these subjects convenient. Volume 1, Chapter 5: Series and Parallel Circuits begins by explaining the basic differences between the two types of circuits. The topics then progress to more difficult subject matter such as conductance and Ohm's law, with a section on building circuits for a more hands-on component. This website would be a great jumping-off point for educators who want to teach circuits or a fantastic supplemental resource for students who want or need to learn more.
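The two rules the chapter builds on reduce to one-line formulas: series resistances add, while parallel resistances combine through reciprocals. A small worked example:

```python
# Worked example of the series/parallel rules: series resistances add,
# parallel resistances combine by reciprocals.
def series(*rs):
    return sum(rs)

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Two 100-ohm resistors in parallel, in series with a 50-ohm resistor:
r_total = series(parallel(100.0, 100.0), 50.0)
print(r_total)  # 100.0 ohms (100 || 100 = 50, then 50 + 50)
```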

  2. Failure analysis of the electrostatic parallel-plate micro-actuator

    Microsoft Academic Search

    Fengli Liu; Yongping Hao

    2008-01-01

Non-parallel electrode plates result in further tilting of the upper electrode until it reaches another equilibrium position. The relationship between the obliquity beta in the final equilibrium position and the initial obliquity alpha is constructed. Efficient failure analysis is becoming essential for electrostatic parallel-plate actuators, especially in high-reliability and safety-critical applications. The simulation was carried out in CoventorWare.

  3. Improving substation reliability and availability

    Microsoft Academic Search

    R. M. Spiewak; D. Pieniazek; J. Pittman; F. Weisse; D. Wilson

    2009-01-01

    Reliable electric power has become a critical operating component requirement for petroleum and chemical plants. As a result, transmission, substation, and distribution systems both inside and outside the fence line have developed into essential elements of the plant processes. This paper presents how available technologies, design philosophies, and good engineering practices can be applied to improve reliability and availability of

  4. The Reliability of Density Measurements.

    ERIC Educational Resources Information Center

    Crothers, Charles

    1978-01-01

    Data from a land-use study of small- and medium-sized towns in New Zealand are used to ascertain the relationship between official and effective density measures. It was found that the reliability of official measures of density is very low overall, although reliability increases with community size. (Author/RLV)

  5. Concerning reliability modeling of connectors

    Microsoft Academic Search

    Robert S. Mroczkowski

    1998-01-01

    A Physics of Failure approach provides a basis for modeling of connector degradation mechanisms. Such a modeling capability can be realized, with differing levels of complexity, for many important connector degradation mechanisms. However, extension of this modeling capability to modeling of connector reliability is far more complicated and, in fact, questionable. Reliability modeling of connectors requires knowledge of the relationship

  6. Electric Reliability & Hurricane Preparedness Plan

    E-print Network

Electric Reliability & Hurricane Preparedness Plan. Joe Bosco, Account Executive, October 17, 2012. Slide topics: NASA's 99+% reliability requirement; Hurricane Katrina; MPC infrastructure damage (9,000 broken poles, 100% of customers without power, power restored in 11 days, 11,000 restoration workers); MPC's hurricane preparedness.

  7. Imprecise reliability by evidential networks

    Microsoft Academic Search

    Christophe SIMON; Philippe WEBER

    2009-01-01

This article deals with an implementation of probist reliability problems in evidential networks to propagate imprecise probabilities expressed as fuzzy numbers. First, the problem of imprecise knowledge in reliability problems is described concerning system and data representation. Then, the basics of evidence theory and its use in a directed acyclic graph approach are given. The imprecise probist
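As a much-simplified sketch of what propagating imprecise probabilities involves, interval bounds (the crudest stand-in for the fuzzy numbers the article uses) can be pushed through series and parallel structures by monotonicity. The component intervals below are assumed; the article's evidential-network machinery is far more general.

```python
# Interval propagation through series/parallel reliability structures,
# a simplified stand-in for the fuzzy/evidential propagation above.
# Component reliabilities are given as (lower, upper) bounds.
def series_interval(components):
    lo, hi = 1.0, 1.0
    for (l, h) in components:          # series: all components must work
        lo *= l
        hi *= h
    return lo, hi

def parallel_interval(components):
    q_lo, q_hi = 1.0, 1.0
    for (l, h) in components:          # parallel: at least one must work
        q_lo *= (1.0 - h)              # smallest possible unreliability
        q_hi *= (1.0 - l)              # largest possible unreliability
    return 1.0 - q_hi, 1.0 - q_lo

comps = [(0.90, 0.95), (0.85, 0.92)]   # assumed component intervals
print("series  :", series_interval(comps))
print("parallel:", parallel_interval(comps))
```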

  8. Parallel Bifold: Large-Scale Parallel Pattern Mining with Constraints

    E-print Network

    Zaiane, Osmar R.

Parallel Bifold: Large-Scale Parallel Pattern Mining with Constraints. Mohammad El-Hajj, Osmar R. Zaïane. The difficulty stems not only from the extent of the existing patterns, but mainly from the magnitude of the search space while discovering hidden knowledge in the available repositories of data

  9. Parallel Imaging Microfluidic Cytometer

    PubMed Central

    Ehrlich, Daniel J.; McKenna, Brian K.; Evans, James G.; Belkina, Anna C.; Denis, Gerald V.; Sherr, David; Cheung, Man Ching

    2011-01-01

By adding an additional degree of freedom from multichannel flow, the parallel microfluidic cytometer (PMC) combines some of the best features of flow cytometry (FACS) and microscope-based high-content screening (HCS). The PMC (i) lends itself to fast processing of large numbers of samples, (ii) adds a 1-D imaging capability for intracellular localization assays (HCS), (iii) has a high rare-cell sensitivity, and (iv) has an unusual capability for time-synchronized sampling. An inability to practically handle large sample numbers has restricted applications of conventional flow cytometers and microscopes in combinatorial cell assays, network biology, and drug discovery. The PMC promises to relieve a bottleneck in these previously constrained applications. The PMC may also be a powerful tool for finding rare primary cells in the clinic. The multichannel architecture of current PMC prototypes allows 384 unique samples for a cell-based screen to be read out in approximately 6–10 minutes, about 30 times the speed of most current FACS systems. In 1-D intracellular imaging, the PMC can obtain protein localization using HCS marker strategies at many times the sample throughput of CCD-based microscopes or CCD-based single-channel flow cytometers. The PMC also permits the signal integration time to be varied over a larger range than is practical in conventional flow cytometers. The signal-to-noise advantages are useful, for example, in counting rare positive cells in the most difficult early stages of genome-wide screening. We review the status of parallel microfluidic cytometry and discuss some of the directions the new technology may take. PMID:21704835

  10. The process group approach to reliable distributed computing

    Microsoft Academic Search

    Kenneth P. Birman

    1993-01-01

The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the Isis system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. This paper reviews six years of research on Isis, describing the model, its implementation challenges,

  11. Implementing and practicing reliability engineering

    SciTech Connect

    Bloch, H.P. [Process Machinery Consulting, Montgomery, TX (United States)

    1996-11-01

Few, if any, of the many re-organization and re-engineering efforts are guided by a thorough understanding of the key ingredients of Best-of-Class reliability organizations. To these Best-of-Class performers, reliability improvement is not an afterthought; instead, it's their going-in position. They know to what extent, in which degree of detail, and at what levels of the organization their operators, mechanical-technical and project-technical work force members must communicate, plan, act, challenge or implement reliability concepts in sometimes proactive and, at other times, reactive fashion. Best-of-Class maintenance and reliability organizations will inevitably display a number of attributes which are often lacking in the less profitable, or less efficiently run companies. Listed here are highlights of the reliability engineering concepts, work practices and functional interfaces pursued, practiced and implemented by the leaders of the pack.

  12. Photovoltaic performance and reliability workshop

    SciTech Connect

    Mrig, L. [ed.

    1993-12-01

    This workshop was the sixth in a series of workshops sponsored by NREL/DOE under the general subject of photovoltaic testing and reliability during the period 1986--1993. PV performance and PV reliability are at least as important as PV cost, if not more. In the US, PV manufacturers, DOE laboratories, electric utilities, and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in the field were brought together to exchange the technical knowledge and field experience as related to current information in this evolving field of PV reliability. The papers presented here reflect this effort since the last workshop held in September, 1992. The topics covered include: cell and module characterization, module and system testing, durability and reliability, system field experience, and standards and codes.

  13. Actigraph data are reliable, with functional reliability increasing with aggregation

    PubMed Central

    Wood, Alexis C.; Kuntsi, Jonna; Asherson, Philip; Saudino, Kimberly J.

    2009-01-01

    Motion sensor devices such as actigraphs are increasingly used in studies that seek to obtain an objective assessment of activity level. They have many advantages, and are useful additions to research in fields such as sleep assessment, drug efficacy, behavior genetics, and obesity. However, questions still remain over the reliability of data collected using actigraphic assessment. We aimed to apply generalizability theory to actigraph data collected on a large, general-population sample in middle childhood, during 8 cognitive tasks across two body loci, and to examine reliability coefficients on actigraph data aggregated across different numbers of tasks and different numbers of attachment loci. Our analyses show that aggregation greatly increases actigraph data reliability, with reliability coefficients on data collected at one body locus during 1 task (.29) being much lower than that aggregated across data collected on two body loci and during 8 tasks (.66). Further increases in reliability coefficients by aggregating across four loci and 12 tasks were estimated to be modest in prospective analyses, indicating an optimum trade-off between data collection and reliability estimates. We also examined possible instrumental effects on actigraph data and found these to be nonsignificant, further supporting the reliability and validity of actigraph data as a method of activity level assessment. PMID:18697683
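The aggregation effect reported above can be illustrated with the Spearman-Brown prophecy formula, the classical-test-theory rule for how reliability grows when k parallel measurements are combined. The paper itself uses generalizability theory; this is only the simpler classical analogue, seeded with the paper's single-task, single-locus coefficient of .29.

```python
# Spearman-Brown prophecy formula: reliability of an average of k
# parallel measurements, each with single-measurement reliability r.
def spearman_brown(r_single, k):
    return k * r_single / (1.0 + (k - 1.0) * r_single)

r1 = 0.29                       # single-task, single-locus coefficient
for k in (1, 4, 8, 16):
    print(k, round(spearman_brown(r1, k), 2))
```

The formula overshoots the paper's reported aggregate of .66 for 16 observations, as expected when measurements are not strictly parallel, which is precisely why the authors turn to generalizability theory rather than the classical model.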

  14. Reliability-based design optimization using efficient global reliability analysis.

    SciTech Connect

    Bichon, Barron J. (Southwest Research Institute, San Antonio, TX); Mahadevan, Sankaran (Vanderbilt University, Nashville, TN); Eldred, Michael Scott

    2010-05-01

    Finding the optimal (lightest, least expensive, etc.) design for an engineered component that meets or exceeds a specified level of reliability is a problem of obvious interest across a wide spectrum of engineering fields. Various methods for this reliability-based design optimization problem have been proposed. Unfortunately, this problem is rarely solved in practice because, regardless of the method used, solving the problem is too expensive or the final solution is too inaccurate to ensure that the reliability constraint is actually satisfied. This is especially true for engineering applications involving expensive, implicit, and possibly nonlinear performance functions (such as large finite element models). The Efficient Global Reliability Analysis method was recently introduced to improve both the accuracy and efficiency of reliability analysis for this type of performance function. This paper explores how this new reliability analysis method can be used in a design optimization context to create a method of sufficient accuracy and efficiency to enable the use of reliability-based design optimization as a practical design tool.

  15. Parallel adaptive mobile web clipping

    Microsoft Academic Search

    Alexander Vrenios

    2003-01-01

    We describe a unique approach to improving the performance of a Web clipping portal by exploiting inherent parallelism in the syntax of widely used markup languages, and by employing a parallel computing platform as an in-line proxy between the handheld mobile device and a Web server on the Internet.

  16. Parallelizing Monte Carlo with PMC

    SciTech Connect

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
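The "independent, reproducible sequences of random numbers" requirement mentioned above is the crux of any parallel Monte Carlo harness: each worker needs its own stream so that results do not depend on scheduling. Below is a generic Python sketch of that idea, not the PMC interface itself; the pi-estimation payload stands in for a physics tally, and production codes would use dedicated stream libraries rather than simple per-task seeds.

```python
# Sketch of independent, reproducible random streams per worker: each
# task carries its own seed, so the combined tally is deterministic
# regardless of how tasks are scheduled across processes.
import random
from concurrent.futures import ProcessPoolExecutor

def worker(args):
    stream_id, n = args
    rng = random.Random(stream_id)   # this task's private, seeded stream
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits

if __name__ == "__main__":
    tasks = [(seed, 100_000) for seed in range(8)]
    with ProcessPoolExecutor() as pool:
        total_hits = sum(pool.map(worker, tasks))
    n_total = sum(n for _, n in tasks)
    print("pi ~", 4.0 * total_hits / n_total)
```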

  17. Parallel Refinement of Unstructured Meshes

    Microsoft Academic Search

    John E. Savage

    1999-01-01

    In this paper we describe a parallel -refinement al- gorithm for unstructured finite element meshes based on the longest-edge bisection of triangles and tetrahedrons. This algorithm is implemented inPARED, a system that supports the parallel adaptive solution of PDEs. We dis- cuss the design of such an algorithm for distributed mem- ory machines including the problem of propagating refine- ment

  18. Patterns for Parallel Application Programs

    Microsoft Academic Search

    Berna L. Massingill

    1999-01-01

    We are involved in an ongoing effort to design a pattern language for parallel application programs. The pattern language consists of a set of patterns that guide the programmer through the entire process of developing a parallel program, including patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency

  19. Limited width parallel prefix circuits

    Microsoft Academic Search

    David A. Carlson; Binay Sugla

    1990-01-01

    In this paper, we present lower and upper bounds on the size of limited width, bounded and unbounded fan-out parallel prefix circuits. The lower bounds on the sizes of such circuits are a function of the depth, width, and number of inputs. The size requirement of an N input bounded fan-out parallel prefix circuit having limited width W and extra
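For orientation, the object these bounds concern, a parallel prefix computation, can be simulated round by round: at depth d each position combines with the one 2^d places earlier, so n inputs finish in about log2(n) levels. The width-limited circuits in the paper additionally cap how many combines may occur per level; the sketch below ignores that cap and only illustrates the depth structure.

```python
# Round-by-round simulation of a simple parallel prefix (inclusive
# scan). All combines within one round are mutually independent, so a
# circuit can perform them in a single level of depth.
def parallel_prefix(xs, op=lambda a, b: a + b):
    xs = list(xs)
    d = 1
    while d < len(xs):
        xs = [op(xs[i - d], xs[i]) if i >= d else xs[i]
              for i in range(len(xs))]
        d *= 2                      # double the combine distance each round
    return xs

print(parallel_prefix([1, 2, 3, 4, 5, 6, 7, 8]))  # running sums
```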

  20. Fast data parallel polygon rendering

    Microsoft Academic Search

    Frank A. Ortega; Charles D. Hansen; James P. Ahrens

    1993-01-01

    This paper describes a data parallel method for polygon rendering on a massively parallel machine. This method, based on a simple shading model, is targeted for applications which require very fast rendering for extremely large sets of polygons. Such sets are found in many scientific visualization applications. The renderer can handle arbitrarily complex polygons which need not be meshed. Issues

  1. Parallelism in random access machines

    Microsoft Academic Search

    Steven Fortune; James Wyllie

    1978-01-01

    A model of computation based on random access machines operating in parallel and sharing a common memory is presented. The computational power of this model is related to that of traditional models. In particular, deterministic parallel RAM's can accept in polynomial time exactly the sets accepted by polynomial tape bounded Turing machines; nondeterministic RAM's can accept in polynomial time exactly

  2. Formal verification of parallel programs

    Microsoft Academic Search

    Robert M. Keller

    1976-01-01

    Two formal models for parallel computation are presented: an abstract conceptual model and a parallel-program model. The former model does not distinguish between control and data states. The latter model includes the capability for the representation of an infinite set of control states by allowing there to be arbitrarily many instruction pointers (or processes) executing the program. An induction principle

  3. Parallel execution of logic programs

    SciTech Connect

    Conery, J.S.

    1987-01-01

    This work is about the AND/OR Process Model, an abstract model for parallel execution of logic programs. This book defines a framework for implementing parallel interpreters. The research presented here provides an intermediate level of abstraction between hardware and semantics, a set of requirements for a parallel interpreter running on a multiprocessor architecture. Contents. LIST OF FIGURES. 1. INTRODUCTION. 2. LOGIC PROGRAMMING. 2.1 Syntax. 2.2 Semantics. 2.3 Control. 2.4 Prolog. 2.5 Alternate Control Strategies. 2.6 Chapter Summary. 3. PARALLELISM IN LOGIC PROGRAMS. 3.1 Models for OR Parallelism. 3.2 Models for AND Parallelism. 3.3 Low Level Parallelism 3.4 Chapter Summary. 4. THE AND/OR PROCESS MODEL. 4.1 Oracle. 4.2 Messages. 4.3 OR Processes. 4.4 AND Processes. 4.5 Interpreter. 4.6 Programming Language. 4.7 Chapter Summary. 5. PARALLEL OR PROCESSES. 5.1 Operating Modes. 5.2 Execution. 5.3 Example. 5.4 Chapter Summary.

  4. The Galley Parallel File System

    NASA Technical Reports Server (NTRS)

    Nieuwejaar, Nils; Kotz, David

    1996-01-01

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

  5. Parallel contingency statistics with Titan.

    SciTech Connect

    Thompson, David C.; Pebay, Philippe Pierre

    2009-09-01

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.

  6. Optimal parallel quantum query algorithms

    E-print Network

    Stacey Jeffery; Frederic Magniez; Ronald de Wolf

    2015-02-20

We study the complexity of quantum query algorithms that make p queries in parallel in each timestep. This model is in part motivated by the fact that decoherence times of qubits are typically small, so it makes sense to parallelize quantum algorithms as much as possible. We show tight bounds for a number of problems, specifically Theta((n/p)^{2/3}) p-parallel queries for element distinctness and Theta((n/p)^{k/(k+1)}) for k-sum. Our upper bounds are obtained by parallelized quantum walk algorithms, and our lower bounds are based on a relatively small modification of the adversary lower bound method, combined with recent results of Belovs et al. on learning graphs. We also prove some general bounds, in particular that quantum and classical p-parallel complexity are polynomially related for all total functions f when p is small compared to f's block sensitivity.

  7. Sub-Second Parallel State Estimation

    SciTech Connect

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA), and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions, and/or to apply automatic or manual corrective control actions; this increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance; therefore, the robustness of SE can be enhanced by repeating the execution of the SE with adaptive adjustments, including removing bad data and/or adjusting different initial conditions to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits to the sub-second SE: the PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance the power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects of severe events on the grid. The power grid continues to grow and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
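The computation being parallelized is, at its core, a weighted least-squares fit of redundant measurements to a state vector. A toy linear (DC) version is sketched below; the measurement matrix, readings, and weights are assumed numbers, and the production tool solves far larger nonlinear systems iteratively.

```python
# Toy weighted least-squares state estimation (linear/DC case).
# Each row of H describes how one measurement sees the state;
# redundancy (more rows than states) is what lets SE reject noise.
import numpy as np

H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0, -1.0]])
z = np.array([1.02, 0.97, 0.06])     # redundant, noisy measurements
W = np.diag([1.0 / 0.01**2] * 3)     # weights = 1 / measurement variance

# Normal equations: x_hat = (H^T W H)^{-1} H^T W z
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
print("estimated state:", x_hat)
```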

  8. Morphing Polyhedra with Parallel Faces: Counterexamples

    E-print Network

    Biedl, Therese

Morphing Polyhedra with Parallel Faces: Counterexamples. Therese Biedl, Anna Lubiw, and Michael J. Spriggs. Polyhedra P and Q are parallel if each face of P has the same outward-facing unit normal as the corresponding face in Q. The paper presents examples of parallel polyhedra that do not admit a parallel morph. Key words: morphing, parallel polyhedra, computational geometry

  9. Efficient parallel string comparison Peter Krusche

    E-print Network

    Rand, David

Efficient parallel string comparison. Peter Krusche, Department of Computer Science, University of Warwick. AFM Seminar, 23rd of April, 2007. Outline: (1) introduction (parallel computation, basic BSP model); (2) divide-and-conquer semi-local LCS; (3) the parallel algorithm (parallel score-matrix multiplication, parallel LCS computation).

  10. Integrating reliability analysis and design

    SciTech Connect

    Rasmuson, D. M.

    1980-10-01

This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require representations of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG and G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems.

  11. Supercomputing on massively parallel bit-serial architectures

    NASA Technical Reports Server (NTRS)

    Iobst, Ken

    1985-01-01

    Research on the Goodyear Massively Parallel Processor (MPP) suggests that high-level parallel languages are practical and can be designed with powerful new semantics that allow algorithms to be efficiently mapped to the real machines. For the MPP these semantics include parallel/associative array selection for both dense and sparse matrices, variable precision arithmetic to trade accuracy for speed, micro-pipelined train broadcast, and conditional branching at the processing element (PE) control unit level. The preliminary design of a FORTRAN-like parallel language for the MPP has been completed and is being used to write programs to perform sparse matrix array selection, min/max search, matrix multiplication, Gaussian elimination on single bit arrays and other generic algorithms. A description is given of the MPP design. Features of the system and its operation are illustrated in the form of charts and diagrams.

  12. Effects of different combinations of environmental tests on the reliability of UHF RFID tags

    Microsoft Academic Search

    Kirsi Saarinen; Laura Frisk; Leena Ukkonen

    2011-01-01

Accelerated environmental tests can be used to study the effects of environmental stresses on reliability. Typically, environmental tests are run in parallel, so that each set of test samples undergoes only one test and new samples are used for the next test. However, running different tests one after another on the same test samples may describe the operational environment better and give

  13. Estimating Test Score Reliability When No Examinee Has Taken the Complete Test.

    ERIC Educational Resources Information Center

    Feldt, Leonard S.

    2003-01-01

    Develops formulas to cope with the situation in which the reliability of test scores must be approximated even though no examinee has taken the complete instrument. Develops different estimators for part tests that are judged to be classically parallel, tau-equivalent, or congeneric. Proposes standards for differentiating among these three models.…

  14. Reliability and Performance of Star Topology Grid Service With Precedence Constraints on Subtask Execution

    Microsoft Academic Search

    Gregory Levitin; Yuan-shun Dai; Hanoch Ben-Haim

    2006-01-01

    The paper considers grid computing systems with star architectures in which the resource management system (RMS) divides service tasks into subtasks, and sends the subtasks to different specialized resources for execution. To provide the desired level of service reliability, the RMS can assign the same subtasks to several independent resources for parallel execution. Some subtasks cannot be executed until they

  15. The system reliability analysis based on the relation of fuzzy infection

    Microsoft Academic Search

    Chuan Sun; Yongji Wang; Xiangou Zhu

    2005-01-01

In this paper, using fuzzy theory and gray theory, a concept of the degree of fuzzy infection and a new reliability analysis method based on the relation of fuzzy infection were proposed, the mathematical model of series-connection and parallel-connection systems was first set up, and its rationality was proven. An illustrative example was given to demonstrate

  16. Parallel search of strongly ordered game trees

    SciTech Connect

    Marsland, T.A.; Campbell, M.

    1982-12-01

    The alpha-beta algorithm forms the basis of many programs that search game trees. A number of methods have been designed to improve the utility of the sequential version of this algorithm, especially for use in game-playing programs. These enhancements are based on the observation that alpha beta is most effective when the best move in each position is considered early in the search. Trees that have this so-called strong ordering property are not only of practical importance but possess characteristics that can be exploited in both sequential and parallel environments. This paper draws upon experiences gained during the development of programs which search chess game trees. Over the past decade major enhancements of the alpha beta algorithm have been developed by people building game-playing programs, and many of these methods will be surveyed and compared here. The balance of the paper contains a study of contemporary methods for searching chess game trees in parallel, using an arbitrary number of independent processors. To make efficient use of these processors, one must have a clear understanding of the basic properties of the trees actually traversed when alpha-beta cutoffs occur. This paper provides such insights and concludes with a brief description of a refinement to a standard parallel search algorithm for this problem. 33 references.
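The "strong ordering" effect described above is easy to see in a minimal alpha-beta search: when the best move is examined first, sibling subtrees are cut off cheaply. The sketch below is a generic textbook alpha-beta over an explicit toy tree (the tree and its leaf scores are assumed), not any of the chess programs surveyed in the paper.

```python
# Minimal alpha-beta over an explicit game tree: inner nodes are lists
# of children, leaves are static evaluations. Examining the strongest
# move first tightens (alpha, beta) early and prunes later siblings.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):       # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                # cutoff: opponent avoids this line
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]   # minimizing layer under the root
print(alphabeta(tree, float("-inf"), float("inf"), True))  # 3
```

Because the first subtree already yields the true value 3, the second subtree is abandoned after its first leaf; with a poor ordering, far more leaves would be evaluated.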

  17. Parallel retreat of rock slopes underlain by alternation of strata

    NASA Astrophysics Data System (ADS)

    Imaizumi, Fumitoshi; Nishii, Ryoko; Murakami, Wataru; Daimaru, Hiromu

    2015-06-01

Characteristic landscapes (e.g., cuesta, cliff and overhang of caprock, or stepped terrain) formed by differential erosion can be found in areas composed of variable geology exhibiting different resistances to weathering. Parallel retreat of slopes, defined as recession of slopes without changes in their topography, is sometimes observed on slopes composed of multiple strata. However, the conditions needed for such parallel retreat have not yet been sufficiently clarified. In this study, we elucidated the conditions for parallel retreat of rock slopes composed of alternating layers using a geometric method. In addition, to evaluate whether various rock slopes fulfilled the conditions for parallel retreat, we analyzed topographic data obtained from periodic measurement of rock slopes in the Aka-kuzure landslide, central Japan. Our geometric analysis of the two-dimensional slopes indicates that dip angle, slope gradient, and erosion rate are the factors that determine parallel retreat conditions. However, dip angle does not significantly affect parallel retreat conditions in the case of steep back slopes (slope gradient > 40°). In contrast, dip angle is an important factor when we consider the parallel retreat conditions in dip slopes and gentler back slopes (slope gradient < 40°). Geology in the Aka-kuzure landslide is complex because of faulting, folding, and toppling, but the spatial distribution of the erosion rate measured by airborne LiDAR scanning and terrestrial laser scanning (TLS) roughly fulfills the parallel retreat conditions. The Aka-kuzure landslide is characterized by repetition of steep sandstone cliffs and gentle shale slopes that form a stepped topography. The inherent resistance of sandstone to weathering is greater than that of shale. However, the vertical erosion rate within the sandstone was higher than that within the shale, due to the direct relationship between slope gradient and vertical erosion rate in the Aka-kuzure landslide.

  18. Bayesian Interpretation of Test Reliability.

    ERIC Educational Resources Information Center

    Jones, W. Paul

    1991-01-01

    A Bayesian alternative to interpretations based on classical reliability theory is presented. Procedures are detailed for calculation of a posterior score and credible interval with joint consideration of item sample and occasion error. (Author/SLD)

  19. How Reliable Is Laboratory Testing?

    MedlinePLUS

    ... for reliability through comprehensive quality control and quality assurance procedures. Therefore, when your blood is tested more ... to ensure that it reflects the most current science. A review may not require any modifications to ...

  20. An experiment in software reliability

    NASA Technical Reports Server (NTRS)

    Dunham, J. R.; Pierce, J. L.

    1986-01-01

    The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

  1. Reliability Assessment Using Discriminative Sampling and Metamodeling

    E-print Network

    Wang, Gaofeng Gary

    Reliability Assessment Using Discriminative Sampling and Metamodeling. G. Gary Wang. ABSTRACT: Reliability assessment is the foundation for reliability engineering and reliability-based design optimization. It has been a difficult task, however, to perform both accurate and efficient reliability assessment.

  2. ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS

    NASA Technical Reports Server (NTRS)

    Viterna, L. A.

    1994-01-01

    The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
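    ETARA itself is an APL2 program; purely as an illustration of the simulation approach it embodies, the following Python sketch estimates the equivalent availability of a simple series block diagram with exponential/Weibull failure times (the repair distribution and all parameter values here are assumptions for the example):

```python
import random

def time_to_failure(block):
    # exponential or Weibull life, per the block's distribution parameters
    if block["dist"] == "exp":
        return random.expovariate(1.0 / block["mtbf"])
    return random.weibullvariate(block["scale"], block["shape"])

def equivalent_availability(blocks, horizon, runs=5000):
    """Crude Monte Carlo availability of a series diagram: the system
    is down whenever any one block is in repair."""
    up_fraction = 0.0
    for _ in range(runs):
        t = up = 0.0
        while t < horizon:
            life = min(time_to_failure(b) for b in blocks)  # first failure
            up += min(life, horizon - t)
            t += life + random.expovariate(1.0 / 8.0)       # assumed MTTR = 8 h
        up_fraction += up / horizon
    return up_fraction / runs

blocks = [{"dist": "exp", "mtbf": 1000.0},
          {"dist": "weibull", "scale": 1500.0, "shape": 1.5}]
print(equivalent_availability(blocks, horizon=8760.0))
```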

  3. Is Monte Carlo embarrassingly parallel?

    SciTech Connect

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
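    To make the rendezvous bottleneck concrete, here is a hedged mpi4py sketch of a criticality-style cycle loop; `track_histories` and the bank sizes are placeholders rather than anything from the paper:

```python
from mpi4py import MPI
import random

comm = MPI.COMM_WORLD

def track_histories(source):
    # placeholder transport: each history yields 0-2 new fission sites
    return [random.random() for _ in source for _ in range(random.randint(0, 2))]

source = [random.random() for _ in range(1000 // comm.size)]
for cycle in range(10):
    new_sites = track_histories(source)       # independent, parallel work
    # rendezvous: every rank must stop here so the global fission source
    # and the cycle estimate of k-eff can be assembled for the next
    # cycle -- the synchronization cost analyzed in the paper
    produced = comm.allreduce(len(new_sites), op=MPI.SUM)
    started = comm.allreduce(len(source), op=MPI.SUM)
    k_eff = produced / started
    source = random.sample(new_sites, min(len(new_sites), 1000 // comm.size))
if comm.rank == 0:
    print("k-eff estimate:", k_eff)
```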

  4. Water Distribution Reliability: Analytical Methods

    Microsoft Academic Search

    Janet M. Wagner; Uri Shamir; David H. Marks

    1988-01-01

    ABSTRACT: Probabilistic reliability measures for the performance of water distribution networks are developed, and analytical methods for their computation explained. The paper begins with a review of reliability considerations and measures for water supply systems, making use of similar notions in other fields. It classifies reliability analyses according to the level of detail with which the water system is modeled, and then concentrates

  5. Reliability evaluation of CMOS RAMs

    Microsoft Academic Search

    C. J. Salvo; A. T. Sasaki

    1982-01-01

    The results of an evaluation of the reliability of a 1K x 1 bit CMOS RAM and a 4K x 1 bit CMOS RAM for the USAF are reported. The tests consisted of temperature cycling, thermal shock, electrical overstress-static discharge and accelerated life test cells. The study indicates that the devices have high reliability potential for military applications. Use-temperature failure

  6. Parallel Marker Based Image Segmentation with Watershed

    E-print Network

    Parallel Marker Based Image Segmentation with Watershed Transformation. Alina N. Moga. Abstract: The parallel watershed transformation combines homogeneity criteria with the watershed transformation. Boundary-based region merging is then effected to condense non

  7. Automatic Generation of Parallel CRC Circuits

    Microsoft Academic Search

    Michael Sprachmann

    2001-01-01

    A parallel CRC circuit simultaneously processes multiple data bits. A generic VHDL description of parallel CRC circuits lets designers synthesize CRC circuits for any generator polynomial or required amount of parallelism.
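    The paper targets synthesized hardware; as a software analogue of the same idea (consuming eight input bits per step instead of one), a table-driven CRC sketch looks like this:

```python
def crc_table(poly, width=16):
    """Next-state table for consuming 8 bits per step -- the software
    analogue of a combinational parallel CRC circuit."""
    top, mask = 1 << (width - 1), (1 << width) - 1
    table = []
    for byte in range(256):
        reg = byte << (width - 8)
        for _ in range(8):
            reg = ((reg << 1) ^ poly) if reg & top else reg << 1
        table.append(reg & mask)
    return table

def crc(data, table, width=16):
    reg, mask = 0, (1 << width) - 1
    for b in data:
        reg = ((reg << 8) ^ table[((reg >> (width - 8)) ^ b) & 0xFF]) & mask
    return reg

table = crc_table(0x1021)               # CRC-16-CCITT/XMODEM polynomial
print(hex(crc(b"123456789", table)))    # 0x31c3 for this convention
```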

  8. Benchmarking Parallel Java Master's Project Report

    E-print Network

    Kaminsky, Alan

    Benchmarking Parallel Java: Master's Project Report by Asma'u Sani Mohammed. Benchmarks the Parallel Java API by implementing the OpenMP version of the NAS Parallel Benchmark (NPB

  9. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 2014-04-01 false Parallel proceedings. 12.24 Section...Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall...

  10. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...2013-04-01 2013-04-01 false Parallel proceedings. 12.24 Section...Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall...

  11. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...2011-04-01 2011-04-01 false Parallel proceedings. 12.24 Section...Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall...

  12. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...2010-04-01 2010-04-01 false Parallel proceedings. 12.24 Section...Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall...

  13. 17 CFR 12.24 - Parallel proceedings.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...2012-04-01 2012-04-01 false Parallel proceedings. 12.24 Section...Consideration of Pleadings § 12.24 Parallel proceedings. (a) Definition. For purposes of this section, a parallel proceeding shall...

  14. Uhlmann's parallelism and Nagaoka's quantum information geometry

    E-print Network

    Yamamoto, Hirosuke

    Uhlmann's parallelism and Nagaoka's quantum information geometry. Keiji Matsumoto, METR 97-09, October 1997. Abstract: In this paper, the intrinsic relation

  15. Instrumentation for parallel magnetic resonance imaging

    E-print Network

    Brown, David Gerald

    2007-04-25

    Parallel magnetic resonance (MR) imaging may be used to increase either the throughput or the speed of the MR imaging experiment. As such, parallel imaging may be accomplished either through a "parallelization" of the MR experiment, or by the use...

  16. Reliability measure for segmenting algorithms

    NASA Astrophysics Data System (ADS)

    Alvarez, Robert E.

    2004-05-01

    Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of residuals is transformed to a zero-mean, unit standard deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images using "leave-one-out" testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter with automated segment results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
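    A minimal numpy sketch of the residual computation described above (names are illustrative, and the standardization constants from the training set are omitted):

```python
import numpy as np

def residual_statistic(x, mean_shape, modes):
    """Size of the part of a segmented shape that the retained PCA
    modes cannot explain; `modes` holds one orthonormal eigenvector
    per column."""
    d = x - mean_shape
    coeffs = modes.T @ d               # coordinates within the shape model
    residual = d - modes @ coeffs      # component outside the model space
    return float(residual @ residual)  # sum of squared residuals
```

    As the abstract describes, this sum of squares would then be standardized to a zero-mean, unit-standard-deviation Gaussian variable and converted, together with the scale parameter, into a tail probability.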

  17. Robust fusion with reliabilities weights

    NASA Astrophysics Data System (ADS)

    Grandin, Jean-Francois; Marques, Miguel

    2002-03-01

    The reliability is a value of the degree of trust in a given measurement. We analyze and compare: ML (classical Maximum Likelihood), MLE (Maximum Likelihood weighted by Entropy), MLR (Maximum Likelihood weighted by Reliability), MLRE (Maximum Likelihood weighted by Reliability and Entropy), DS (Credibility Plausibility), and DSR (DS weighted by reliabilities). The analysis is based on a model of a dynamical fusion process composed of three sensors, each with its own discriminatory capacity, reliability rate, unknown bias, and measurement noise. The knowledge of the uncertainties is also severely corrupted, in order to analyze the robustness of the different fusion operators. Two sensor models are used: the first type of sensor is able to estimate the probability of each elementary hypothesis (probabilistic masses); the second type delivers masses on unions of elementary hypotheses (DS masses). In the second case, probabilistic reasoning leads to the mass being shared incorrectly among elementary hypotheses. Compared to the classical ML or DS, which achieve just 50% correct classification in some experiments, DSR, MLE, MLR, and MLRE show very good performance in all experiments (more than an 80% correct classification rate). The experiments were performed with large variations of the reliability coefficients for each sensor (from 0 to 1) and with large variations in the knowledge of these coefficients (from 0 to 0.8). All four operators show good robustness, but MLR proves to be uniformly dominant across the experiments in the Bayesian case and achieves the best mean performance under incomplete a priori information.

  18. Fatigue Reliability of Gas Turbine Engine Structures

    NASA Technical Reports Server (NTRS)

    Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

    1997-01-01

    The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.
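    FORM itself is standard; as a sketch of the method named in the abstract (not the authors' implementation), the Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space can be written as:

```python
import numpy as np
from scipy.stats import norm

def form_probability(g, grad_g, n_vars, iters=30):
    """FORM via the HL-RF iteration; failure is the event g(u) < 0,
    with u in standard normal space."""
    u = np.zeros(n_vars)
    for _ in range(iters):
        grad = grad_g(u)
        alpha = -grad / np.linalg.norm(grad)           # unit normal toward failure
        beta = alpha @ u + g(u) / np.linalg.norm(grad) # reliability index
        u = beta * alpha                               # updated design point
    return norm.cdf(-beta), beta

# toy linear limit state: capacity minus demand, g = 3 + u1 - u2
pf, beta = form_probability(lambda u: 3.0 + u[0] - u[1],
                            lambda u: np.array([1.0, -1.0]), 2)
print(pf, beta)    # beta = 3/sqrt(2) ~ 2.12, pf ~ 0.017
```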

  19. Parallelizing the spectral transform method, part 2

    NASA Astrophysics Data System (ADS)

    Walker, D. W.; Worley, P. H.; Drake, J. B.

    1991-07-01

    This paper describes the parallelization and performance of the spectral method for solving the shallow water equations on the surface of a sphere using a 128-node Intel iPSC/860 hypercube. The shallow water equations form a computational kernel of more complex climate models. This work is part of a research program to develop climate models that are capable of much longer simulations at a significantly finer resolution than current models. Such models are important in understanding the effects of the increasing atmospheric concentrations of greenhouse gases, and the computational requirements are so large that massively parallel multiprocessors will be necessary to run climate models simulations in a reasonable amount of time. The spectral method involves the transformation of data between the physical, Fourier, and spectral domains. Each of these domains is two-dimensional. The spectral method performs Fourier transforms in the longitude direction followed by summation in the latitude direction to evaluate the discrete spectral transform. A simple way of parallelizing the spectral code is to decompose the physical problem domain in just the latitude direction. This allows an optimized sequential FFT algorithm to be used in the longitude direction. However, this approach limits the number of processors that can be brought to bear on the problem. Decomposing the problem over both directions allows the parallelism inherent in the problem to be exploited more effectively - the grain size is reduced and more processors can be used. Results are presented that show that decomposing over both directions does result in a more rapid solution of the problem. The importance of minimizing communication latency and overlapping communication with calculation is stressed. General methods for doing this, that may be applied to many other problems, are discussed.

  20. Parallel Adaptive Mesh Refinement

    SciTech Connect

    Diachin, L; Hornung, R; Plassmann, P; WIssink, A

    2005-03-04

    As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.

  1. Copperhead: compiling an embedded data parallel language

    Microsoft Academic Search

    Bryan C. Catanzaro; Michael Garland; Kurt Keutzer

    2011-01-01

    Modern parallel microprocessors deliver high performance on applications that expose substantial fine-grained data parallelism. Although data parallelism is widely available in many computations, implementing data parallel algorithms in low-level languages is often an unnecessarily difficult task. The characteristics of parallel microprocessors and the limitations of current programming methodologies motivate our design of Copperhead, a high-level data parallel language embedded in Python.

  2. Robust Design of Reliability Test Plans Using Degradation Measures.

    SciTech Connect

    Lane, Jonathan Wesley; Lane, Jonathan Wesley; Crowder, Stephen V.; Crowder, Stephen V.

    2014-10-01

    With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.

  3. Three-dimensional parallel vortex rings in Bose-Einstein condensates

    SciTech Connect

    Crasovan, Lucian-Cornel [ICFO-Institut de Ciencies Fotoniques, and Department of Signal Theory and Communications, Universitat Politecnica de Catalunya, ES 08034 Barcelona (Spain); Department of Theoretical Physics, Institute of Atomic Physics, P.O. Box MG-6, Bucharest (Romania); Perez-Garcia, Victor M. [Departamento de Matematicas, ETSI Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Danaila, Ionut [Laboratoire Jacques-Louis Lions, Universite Paris 6, 175 Rue du Chevaleret, 75013 Paris (France); Mihalache, Dumitru [Department of Theoretical Physics, Institute of Atomic Physics, P.O. Box MG-6, Bucharest (Romania); Torner, Lluis [ICFO-Institut de Ciencies Fotoniques, and Department of Signal Theory and Communications, Universitat Politecnica de Catalunya, ES 08034 Barcelona (Spain)

    2004-09-01

    We construct three-dimensional structures of topological defects hosted in trapped wave fields, in the form of vortex stars, vortex cages, parallel vortex lines, perpendicular vortex rings, and parallel vortex rings, and we show that the latter exist as robust stationary, collective states of nonrotating Bose-Einstein condensates. We discuss the stability properties of excited states containing several parallel vortex rings hosted by the condensate, including their dynamical and structural stability.

  4. SIERRA - A 3-D device simulator for reliability modeling

    NASA Astrophysics Data System (ADS)

    Chern, Jue-Hsien; Arledge, Lawrence A., Jr.; Yang, Ping; Maeda, John T.

    1989-05-01

    SIERRA is a three-dimensional general-purpose semiconductor-device simulation program which serves as a foundation for investigating integrated-circuit (IC) device and reliability issues. This program solves the Poisson and continuity equations in silicon under dc, transient, and small-signal conditions. Executing on a vector/parallel minisupercomputer, SIERRA utilizes a matrix solver which uses an incomplete LU (ILU) preconditioned conjugate gradient squared (CGS, BCG) method. The ILU-CGS method provides a good compromise between memory size and convergence rate. The authors have observed a 5x to 7x speedup over standard direct methods in simulations of transient problems containing highly coupled Poisson and continuity equations such as those found in reliability-oriented simulations. The application of SIERRA to parasitic CMOS latchup and dynamic random-access memory single-event-upset studies is described.
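    The ILU-CGS pairing is available off the shelf; a small SciPy sketch on a stand-in sparse system (not SIERRA's actual device matrices) illustrates it:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# stand-in sparse system; in SIERRA the matrix comes from the
# discretized Poisson and continuity equations
n = 500
A = (sp.random(n, n, density=0.01, random_state=0) + 10 * sp.eye(n)).tocsc()
b = np.ones(n)

ilu = spla.spilu(A)                             # incomplete LU factors
M = spla.LinearOperator(A.shape, ilu.solve)     # preconditioner M ~ A^-1
x, info = spla.cgs(A, b, M=M)
print("converged" if info == 0 else f"info={info}")
```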

  5. Reliability and Functional Availability of HVAC Systems 

    E-print Network

    Myrefelt, S.

    2004-01-01

    This paper presents a model to calculate the reliability and availability of heating, ventilation and air conditioning systems. The reliability is expressed in terms of reliability, maintainability and decision capability. These terms are a...

  6. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...2014-04-01 2014-04-01 false Reliability reports. 39.11 Section 39.11...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11...

  7. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...2013-04-01 2013-04-01 false Reliability reports. 39.11 Section 39.11...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11...

  8. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...2011-04-01 2011-04-01 false Reliability reports. 39.11 Section 39.11...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11...

  9. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...2014-07-01 2014-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

  10. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...2011-07-01 2011-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

  11. 18 CFR 39.11 - Reliability reports.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...2012-04-01 2012-04-01 false Reliability reports. 39.11 Section 39.11...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.11...

  12. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...2010-07-01 2010-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

  13. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...2013-07-01 2013-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

  14. REAL-TIME GRID RELIABILITY MANAGEMENT

    E-print Network

    Real-Time Grid Reliability Management. California ISO / Consortium for Electric Reliability Technology Solutions, Appendix C, October 2008 (CEC-500). A VSA prototype to monitor system voltage conditions and provide real-time dispatchers with reliability

  15. 40 CFR 75.42 - Reliability criteria.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...2012-07-01 2012-07-01 false Reliability criteria. 75.42 Section 75.42...Alternative Monitoring Systems § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

  16. 76 FR 42534 - Mandatory Reliability Standards for Interconnection Reliability Operating Limits; System...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    ...Reliability Operating Limits; System Restoration Reliability...Interconnection Reliability Operating Limits (IROL) within...Analysis'' and ``Real Time Assessment...Interconnection Reliability Operating Limits, Order No...determined by detailed system studies to allow...

  17. Cost/benefit assessment of power system reliability

    NASA Astrophysics Data System (ADS)

    Jonnavithula, Satish

    1997-12-01

    Value based planning in power systems is becoming increasingly important due to required system investment costs and the necessity to quantify and justify system reliability levels. Utilities, faced with increasingly limited resources, strive to maintain high levels of reliability by adopting improved methodologies in planning, operation, construction, and maintenance. Many utilities now recognize that the total system cost used in the decision making process must include the value to the customers in the form of power interruption costs, in addition to investment and maintenance costs. Performing a reliability cost/benefit analysis requires an assessment of the costs of providing reliable service and quantification of the worth of having it. The annual expected values of the customer cost of interruption and curtailments can be added to the predicted annual investment and maintenance costs to form a total cost index for the specified project. Comparisons of various alternatives can then be made on the basis of the total cost. The main objective of this research was to develop a range of techniques to perform power system reliability cost/benefit assessment. An important aspect of the work was the development of techniques to conduct reliability evaluation for an overall power system. This required the utilization of a practical system configuration which includes generation, transmission, switching station and distribution facilities. This research extends an existing test system by developing the necessary distribution and subtransmission networks. The extended test system has all the main facilities that are found in a practical system. This thesis extends the available techniques and illustrates procedures for performing overall power system reliability evaluation. This thesis illustrates how system planners and operators can incorporate reliability cost/benefit assessment in a range of power system applications. A new approach to determine the optimum reserve in generation planning is presented. A new formulation for selecting the optimum number, locations and timing of line reinforcements in transmission planning is proposed taking into consideration investment, maintenance, resistive loss and unreliability costs. Station configuration design in a power system involves both reliability and economic considerations. This thesis presents a technique for optimum station configuration design selection utilizing reliability cost/benefit assessment. This thesis also illustrates the application of reliability cost/benefit assessment to determine the optimal routes and to obtain optimum switching device placement in distribution planning. The basic concepts associated with reliability cost/benefit assessment for an overall power system are illustrated by quantitative application to practical power system configurations.

  18. Predicting performance of parallel computations

    NASA Technical Reports Server (NTRS)

    Mak, Victor W.; Lundstrom, Stephen F.

    1990-01-01

    An accurate and computationally efficient method for predicting the performance of a class of parallel computations running on concurrent systems is described. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queuing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method is validated against both detailed simulation and actual execution on a commercial multiprocessor. Using 100 test cases, the average error of the prediction when compared to simulation statistics is 1.7 percent, with a standard deviation of 1.5 percent; the maximum error is about 10 percent.
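    As a toy illustration of the series-parallel structure (ignoring the queueing-network resource model the method couples it with, and treating task times as fixed means), expected execution time reduces bottom-up:

```python
# Node forms: ("task", mean_time) | ("series", [children]) | ("parallel", [children])
def expected_time(node):
    kind, parts = node
    if kind == "task":
        return parts
    times = [expected_time(p) for p in parts]
    return sum(times) if kind == "series" else max(times)

graph = ("series", [("task", 2.0),
                    ("parallel", [("task", 4.0), ("task", 3.0)]),
                    ("task", 1.0)])
print(expected_time(graph))   # 2 + max(4, 3) + 1 = 7.0
```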

  19. Predicting performance of parallel computations

    SciTech Connect

    Mak, V.W. (Distributed Software Research Group, Bell Communications Research, Morristown, NJ (US)); Lundstrom, S.F. (Stanford Univ., CA (USA). Computer Systems Lab.)

    1990-07-01

    This paper describes an accurate and computationally efficient method for predicting performance of a class of parallel computations running on concurrent systems. Earlier work either dealt with very restricted computation structures or used methods with exponential complexity. A parallel computation is modeled as a task system with precedence relationships expressed as a series-parallel directed acyclic graph. Resources in a concurrent system are modeled as service centers in a queueing network model. Using these two models as inputs, the method outputs predictions of expected execution time of the parallel computation and the concurrent system utilization. The method has been validated against both detailed simulation and actual execution on a commercial multiprocessor. Using one hundred test cases, the average error of the prediction when compared to simulation statistics was 1.7% with a standard deviation of 1.5%, and the maximum error was about 10%.

  20. Demonstrating Forces between Parallel Wires.

    ERIC Educational Resources Information Center

    Baker, Blane

    2000-01-01

    Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)
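    The quantitative side of the demonstration is the standard force-per-length relation for parallel currents; a few lines make the magnitudes concrete:

```python
import math

def force_per_meter(i1, i2, d):
    """|F|/L = mu0 * i1 * i2 / (2 * pi * d) between two long parallel
    wires carrying currents i1, i2 (amperes) at separation d (meters)."""
    mu0 = 4e-7 * math.pi
    return mu0 * i1 * i2 / (2 * math.pi * d)

# 10 A in each wire, 1 cm apart: about 2 mN per meter of wire
print(force_per_meter(10, 10, 0.01))   # 0.002 N/m
```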

  1. "Feeling" Series and Parallel Resistances.

    ERIC Educational Resources Information Center

    Morse, Robert A.

    1993-01-01

    Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)
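    The combination rules the activity teaches reduce to two formulas; a tiny sketch:

```python
def series(*rs):
    """Series resistances add."""
    return sum(rs)

def parallel(*rs):
    """Parallel resistances add as reciprocals."""
    return 1.0 / sum(1.0 / r for r in rs)

# two 100-ohm resistors: 200 ohms in series, 50 ohms in parallel
print(series(100, 100), parallel(100, 100))
```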

  2. Designing and Building Parallel Programs

    NSDL National Science Digital Library

    Designing and Building Parallel Programs [Online] is an innovative traditional print and online resource publishing project. It incorporates the content of a textbook published by Addison-Wesley into an evolving online resource.

  3. Parallel algorithms for inductance extraction

    E-print Network

    Mahawar, Hemant

    2007-09-17

    of the iterative method becomes a challenging task. This work presents a class of parallel algorithms for fast and accurate inductance extraction of VLSI circuits. We use the solenoidal basis approach that converts the linear system into a reduced system...

  4. PARALLEL DATABASE MACHINES Kjell Bratbergsengen

    E-print Network

    Covers database servers for "new" data types, notably film and video; the traumatic history of database computers, including later, European Community-supported developments; and the massively parallel search system based

  5. The Gaussian parallel relay network

    Microsoft Academic Search

    B. Schein; R. Gallager

    2000-01-01

    We introduce the real, discrete-time Gaussian parallel relay network. This simple network is theoretically important in the context of network information theory. We present upper and lower bounds to capacity and explain where they coincide.

  6. Master/slave speculative parallelization

    Microsoft Academic Search

    Craig B. Zilles; Gurindar S. Sohi

    2002-01-01

    Master/Slave Speculative Parallelization (MSSP) is an execution paradigm for improving the execution rate of sequential programs by parallelizing them speculatively for execution on a multiprocessor. In MSSP, one processor---the master---executes an approximate version of the program to compute selected values that the full program's execution is expected to compute. The master's results are checked by slave processors that execute the

  7. Address tracing for parallel machines

    NASA Technical Reports Server (NTRS)

    Stunkel, Craig B.; Janssens, Bob; Fuchs, W. Kent

    1991-01-01

    Recently implemented parallel system address-tracing methods based on several metrics are surveyed. The issues specific to collection of traces for both shared and distributed memory parallel computers are highlighted. Five general categories of address-trace collection methods are examined: hardware-captured, interrupt-based, simulation-based, altered microcode-based, and instrumented program-based traces. The problems unique to shared memory and distributed memory multiprocessors are examined separately.

  8. Parallel operation of single phase inverter modules with no control interconnections

    Microsoft Academic Search

    A. Tuladhar; H. Jin; T. Unger; K. Mauch

    1997-01-01

    To provide reliable power under scheduled and unscheduled outages requires an uninterruptible power supply (UPS) which can be easily expanded to meet the needs of a growing demand. A system such as this should also be fault tolerant and include the capability for redundancy. These goals can be met by paralleling smaller inverters if a control scheme can be

  9. Tight Bounds for Parallel Randomized Load Balancing (TIK Report Number 324)

    E-print Network

    algorithm terminating within O(1) rounds. All these bounds hold with high probability. The report studies the fundamental limits of distributed balls-into-bins algorithms, i.e., algorithms where balls act in parallel; such algorithms cannot reliably perform better than a maximum bin load of Θ(log log n / log log log n) within

  10. TRENDS IN OPTICS AND PHOTONICS SERIES Vol.90 Can Optical Interconnects be Sufficiently Parallel to Support

    E-print Network

    Esener, Sadik C.

    Trends in Optics and Photonics Series, Vol. 90. Examines the optical interconnect in terms of reliability, integration, power management and cost. The interconnect density achievable on a layer of a printed circuit board (PCB) is determined by the effective cross-section area and the center

  11. TimeNET-Sim-a parallel simulator for stochastic Petri nets

    Microsoft Academic Search

    Christian Kelling; E R. Germany

    1995-01-01

    TimeNET is a software package for modeling and performance evaluation with non-Markovian Petri nets. Concepts and implementation of the simulation component of this tool are introduced. The paper focuses on a reliable statistical analysis and the application of variance reduction techniques in a parallel, distributed simulation framework. It examines the application of variance reduction with control variates and shows an
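    Control variates themselves are standard; a self-contained sketch of the idea, with a toy integrand rather than a Petri-net estimator and the control coefficient fixed at 1 for brevity:

```python
import math
import random

def cv_estimate(n=100_000, seed=1):
    """Estimate E[exp(U)], U ~ Uniform(0,1), using U as a control
    variate with known mean 1/2."""
    rng = random.Random(seed)
    sum_x = sum_c = 0.0
    for _ in range(n):
        u = rng.random()
        sum_x += math.exp(u)
        sum_c += u
    return sum_x / n - (sum_c / n - 0.5)   # corrected, still unbiased

print(cv_estimate(), math.e - 1)           # both close to 1.71828
```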

  12. PARALLEL IDENTIFICATION OF STRUCTURAL DAMAGES USING VIBRATION MODES AND SENSOR CHARACTERISTICS

    Microsoft Academic Search

    Reiki YOSHIMOTO; Akira MITA; Koichi MORITA

    The knowledge of modal parameters is used to enhance a parallel system identification technique that is aimed at estimating the story stiffness and damping of a building structure. The modal parameters are used to decide the pass band for band-pass filters to improve the quality of data by selecting the reliable signals confined in the vicinity of modal frequencies. The

  13. CEFT: A cost-effective, fault-tolerant parallel virtual file system

    Microsoft Academic Search

    Yifeng Zhu; Hong Jiang

    The vulnerability of computer nodes due to component failures is a critical issue for cluster-based file systems. This paper studies the development and deployment of mirroring in cluster-based parallel virtual file systems to provide fault tolerance and analyzes the tradeoffs between the performance and the reliability in the mirroring scheme. It presents the design and implementation of CEFT, a scalable

  14. A parallel stereo algorithm that produces dense depth maps and preserves image features

    Microsoft Academic Search

    Pascal Fua

    1992-01-01

    To compute reliable dense depth maps, a stereo algorithm must preserve depth discontinuities and avoid gross errors. In this paper, we show how simple and parallel techniques can be combined to achieve this goal and deal with complex real world scenes. Our algorithm relies on correlation followed by interpolation. During the correlation phase the two images play a symmetric role

  15. Architectures for reasoning in parallel

    NASA Technical Reports Server (NTRS)

    Hall, Lawrence O.

    1989-01-01

    The research conducted has dealt with rule-based expert systems. The algorithms that may lead to effective parallelization of them were investigated. Both the forward and backward chained control paradigms were investigated in the course of this work. The best computer architecture for the developed and investigated algorithms has been researched. Two experimental vehicles were developed to facilitate this research. They are Backpac, a parallel backward chained rule-based reasoning system and Datapac, a parallel forward chained rule-based reasoning system. Both systems have been written in Multilisp, a version of Lisp which contains the parallel construct, future. Applying the future function to a function causes the function to become a task parallel to the spawning task. Additionally, Backpac and Datapac have been run on several disparate parallel processors. The machines are an Encore Multimax with 10 processors, the Concert Multiprocessor with 64 processors, and a 32 processor BBN GP1000. Both the Concert and the GP1000 are switch-based machines. The Multimax has all its processors hung off a common bus. All are shared memory machines, but have different schemes for sharing the memory and different locales for the shared memory. The main results of the investigations come from experiments on the 10 processor Encore and the Concert with partitions of 32 or less processors. Additionally, experiments have been run with a stripped down version of EMYCIN.

  16. Efficiency of parallel direct optimization

    NASA Technical Reports Server (NTRS)

    Janies, D. A.; Wheeler, W. C.

    2001-01-01

    Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. ©2001 The Willi Hennig Society.

  17. Detection of faults and software reliability analysis

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1987-01-01

    Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.
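    A minimal sketch of the N-version voting loop described above, with lambdas standing in for the independently developed versions:

```python
from collections import Counter

def vote(outputs):
    """Majority voter: returns the majority output, or None when no
    majority exists (a detected, unmaskable disagreement)."""
    value, count = Counter(outputs).most_common(1)[0]
    return value if count > len(outputs) / 2 else None

# three independently written "versions" of the same function (one faulty)
versions = [lambda x: x * x, lambda x: x ** 2, lambda x: x * x + 1]
print(vote([v(3) for v in versions]))   # 9 -- the majority masks the fault
```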

  18. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

  19. Parallel electronic circuit simulation on the iPSC system

    Microsoft Academic Search

    C.-P. Yuan; R. Lucas; P. Chan; R. Dutton

    1988-01-01

    A parallel circuit simulator was implemented on the iPSC system. Concurrent model evaluation, hierarchical BBDF (bordered block diagonal form) reordering, and distributed multifrontal decomposition are used to solve the sparse matrix. A speedup of six times has been achieved on an eight-processor iPSC hypercube system.

  20. A Generic Grid Interface for Parallel and Adaptive Scientific Computing.

    E-print Network

    Kornhuber, Ralf

    Presents the grid component in the form of C++ classes and illustrates its functionality and efficiency with some examples. From: A Generic Grid Interface for Parallel and Adaptive Scientific Computing, Part I: Abstract Framework (Abteilung für Angewandte Mathematik, Universität Freiburg; Stuttgart).

  1. Coarse Grain Parallel Finite Element Simulations for Incompressible Flows

    E-print Network

    Grant, P. W.

    M.F. Webster, Institute of Non-Newtonian Fluid Mechanics, Department of Computer Science, University of Wales. A solver has previously been developed in a sequential form for the simulation of incompressible Newtonian and non-Newtonian flows [1, 2]. Here, parallel simulation of incompressible fluid flows is considered

  2. Paralleling the stator coils in permanent magnet machines

    Microsoft Academic Search

    Mohammad S. Islam; Sayeed Mir; Tomy Sebastian

    2005-01-01

    As low-voltage machines require a smaller number of turns per phase compared to higher-voltage machines, it is normal to connect the various stator phase coils in parallel to form the phase winding. The placement of various coil sides in the slot and the difference in the field produced by different poles of the rotor magnet can influence the induced

  3. A Parallel-Line Detection Algorithm Based on HMM Decoding

    Microsoft Academic Search

    Yefeng Zheng; Huiping Li; David S. Doermann

    2005-01-01

    The detection of groups of parallel lines is important in applications such as form processing and text (handwriting) extraction from rule lined paper. These tasks can be very challenging in degraded documents where the lines are severely broken. In this paper, we propose a novel model-based method which incorporates high-level context to detect these lines. After preprocessing (such as skew

  4. PARALLEL ALGORITHMS FOR A MULTI-LEVEL NETWORK OPTIMIZATION PROBLEM

    E-print Network

    Cruz, Frederico

    Parallel Algorithms for a Multi-Level Network Optimization Problem. F.R.B. Cruz and G.R. Mateus, Brazil. (Received 15 February 2000; in final form 12 March 2001.) Classified under integer programming, graph theory (network problems), and mathematics of computing. Multi-level network optimization (MLNO

  5. The Design And Implementation Of A Parallel Document Retrieval Engine

    E-print Network

    Hawking, David

    Document retrieval software can potentially exploit the power and capacity of a large-scale parallel machine to search very large collections of text (provided it exists in electronic form), without reliance on manual retrieval methods. Other related operations on collections of text arise in linguistic and lexicographic research

  6. Applications of reliability degradation analysis

    SciTech Connect

    Vesely, W.E. [Science Applications International Corp., Dublin, OH (United States); Samanta, P.K. [Brookhaven National Lab., Upton, NY (United States)

    1996-02-01

    Reliability degradation analysis is the analysis of the occurrences of degradations and the times of maintenance to determine their reliability and risk implications. A program is presented for applying reliability degradation analyses to maintenance data collected at nuclear power plants. As a specific part of the program, time trending of maintenance data is illustrated. Maintenance data on residual heat removal (RHR) pumps and service water (SW) pumps at selected boiling water reactor (BWR) plants are evaluated to show how trends in maintenance data, which generally do not involve failures, can be used to understand effectiveness of maintenance. These trends also are translated to specific impacts on pump unavailability and on core-damage frequency (assuming that the trends in failure rate are the same as those observed for degradation rate). The second application shows the use of reliability degradation analysis to quantitatively evaluate the effect of maintenance, i.e., the quantitative change in component unavailability when no maintenance is performed. Assessment of these impacts are important since they measure the reliability and risk impacts of maintenance and can be fed back to the maintenance program to improve its effectiveness.

  7. RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM

    NASA Technical Reports Server (NTRS)

    Bowerman, P. N.

    1994-01-01

    RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
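    For a group whose items have equal probabilities, the cumulative-binomial computation can be sketched in a few lines, including the folding of a group result into its parent group (values illustrative):

```python
from math import comb

def k_out_of_n(k, n, p):
    """Success probability of a k-out-of-n group of items that each
    succeed independently with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

group = k_out_of_n(2, 3, 0.9)       # 2-of-3 group of 0.9 items -> 0.972
system = k_out_of_n(1, 2, group)    # fold: two such groups, 1-of-2 redundant
print(group, system)
```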

  8. High frequency switched capacitor IIR filters using parallel cyclic type circuits

    Microsoft Academic Search

    Yoshinori HIRATA; Kyoko KATO; Nobuaki TAKAHASHI; Tsuyoshi TAKEBE

    1992-01-01

    In order to reduce the performance deterioration due to the finite gain bandwidth (GB) product of op-amps in switched capacitor (SC) transversal filters, parallel cyclic type circuits have been proposed. The authors consider how to implement direct form I SC IIR (infinite impulse response) filters using the parallel cyclic type circuit. The effects of finite GB products of op-amps and
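    Independently of the switched-capacitor realization, the direct form I recurrence being implemented is easy to state; a small floating-point sketch (ignoring the op-amp GB effects the paper analyzes):

```python
def direct_form_1(b, a, x):
    """y[n] = sum_k b[k] x[n-k] - sum_{k>=1} a[k] y[n-k], with a[0] = 1."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# first-order low-pass: H(z) = 0.5 (1 + z^-1) / (1 - 0.3 z^-1), impulse response
print(direct_form_1([0.5, 0.5], [1.0, -0.3], [1, 0, 0, 0]))
```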

  9. Improving system performance in contiguous processor allocation for mesh-connected parallel systems

    Microsoft Academic Search

    Kyung-hee Seo; Sung-chun Kim

    2003-01-01

    Fragmentation is the main performance bottleneck of large, multiuser parallel computer systems. Current contiguous processor allocation techniques for mesh-connected parallel systems are restricted to rectangular submesh allocation strategies causing significant fragmentation problems. This paper presents an L-shaped submesh allocation (LSSA) strategy, which lifts the restriction on the rectangular shape formed by allocated processors in order to address the problem of

  10. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  11. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOEpatents

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
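
    Both patents turn on the same mechanism: dividing one collective operation's data movement among a plurality of endpoints belonging to the same task. A toy, PAMI-agnostic sketch of that division follows; Python threads stand in for endpoints, and every name here is hypothetical, not the patented interface.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def divide_among_endpoints(data: bytes, n_endpoints: int) -> list[bytes]:
        """Split a collective's payload into one contiguous slice per endpoint,
        so each endpoint (e.g., one per thread of the task) moves its share."""
        chunk = -(-len(data) // n_endpoints)  # ceiling division
        return [data[i * chunk:(i + 1) * chunk] for i in range(n_endpoints)]

    def send_slice(endpoint_id: int, payload: bytes) -> int:
        # Stand-in for the real transfer; here we just report bytes "sent".
        return len(payload)

    data = bytes(10_000)
    slices = divide_among_endpoints(data, 4)
    with ThreadPoolExecutor(max_workers=4) as pool:
        sent = list(pool.map(send_slice, range(4), slices))
    assert sum(sent) == len(data)
    ```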

  12. Semantic knowledge representation and its parallel reasoning applications

    SciTech Connect

    Brown, S.N.

    1989-01-01

    This dissertation presents a new way of thinking about machine reasoning as applied to the domain of knowledge as it appears in nature, along with its taxonomy. The new way of thinking involves a new approach for the representation of this natural knowledge taxonomy: the Semantic Overlapped Tree (S.O.Tree). This structure differs from previous semantic structures in that order is provided in the form of an overlapped tree while maintaining its semantic integrity. To facilitate machine reasoning, algorithms are presented for efficient operation on the S.O.Tree, including sequential search, insert, and delete, and parallel search, insert and delete. The speedup performance of the parallel algorithms is provided as a function of the number of processors, as well as processor efficiency. The speedup asymptote for the parallel algorithms for search, insert and delete is 20.21, 13.93 and 11.80, respectively. Also presented is a hybrid memory architecture which permits the hardware to map to the topology of the problem domain. The hybrid nature of this memory configuration consists of a combination of globally shared memory and local/private memory. This new method for knowledge representation, the machine's capacity to carry out parallel reasoning, and the hybrid memory architecture combine to form a parallel-reasoning system which well exemplifies the domain-driven concept.

  13. Reliability model for planetary gear

    NASA Technical Reports Server (NTRS)

    Savage, M.; Paridon, C. A.; Coy, J. J.

    1982-01-01

    A reliability model is presented for planetary gear trains in which the ring gear is fixed, the sun gear is the input, and the planet arm is the output. The input and output shafts are coaxial, and the input and output torques are assumed to be coaxial with these shafts. Thrust and side loading are neglected. This type of gear train is commonly used in main rotor transmissions for helicopters and in other applications which require large speed reductions. The reliability model is based on the Weibull distribution of the individual reliabilities of the transmission components. The transmission's basic dynamic capacity is defined as the input torque which may be applied for one million input rotations of the sun gear. Load and life are related by a power law. The load-life exponent and basic dynamic capacity are developed as functions of the component capacities.
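
    The two ingredients of the model, Weibull component reliabilities combined into a system reliability and a load-life power law anchored at one million input rotations, can be sketched numerically. The sketch below assumes two-parameter Weibull lives, independent components in series, and purely illustrative parameter values; it is not the paper's derivation.

    ```python
    import math

    def weibull_rel(life: float, theta: float, beta: float) -> float:
        """Two-parameter Weibull reliability at `life` (same units as theta)."""
        return math.exp(-((life / theta) ** beta))

    def system_rel(life: float, components: list[tuple[float, float]]) -> float:
        """Series system: product of independent component reliabilities."""
        r = 1.0
        for theta, beta in components:
            r *= weibull_rel(life, theta, beta)
        return r

    def capacity_at_life(basic_capacity: float, life_millions: float, p: float) -> float:
        """Load-life power law load**p * life = const, with the basic dynamic
        capacity defined at a life of one million input rotations."""
        return basic_capacity * life_millions ** (-1.0 / p)

    # Illustrative numbers only: three meshes of a planetary train,
    # (theta in millions of revolutions, Weibull slope beta).
    components = [(5.0, 1.5), (8.0, 1.5), (12.0, 2.0)]
    print(system_rel(2.0, components))           # system reliability at 2 Mrev
    print(capacity_at_life(1000.0, 10.0, 3.0))   # torque for 10 Mrev, exponent p=3
    ```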

  14. Assessment of NDE reliability data

    NASA Technical Reports Server (NTRS)

    Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

    1975-01-01

    Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for selecting data for statistical analysis was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits using the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.
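
    The binomial detection-probability calculation can be illustrated with one standard construction, the Clopper-Pearson one-sided lower bound; the report's exact method may differ, so treat this as a sketch with assumed inputs.

    ```python
    from scipy.stats import beta

    def pod_lower_bound(detected: int, trials: int, confidence: float) -> float:
        """One-sided lower confidence bound on the probability of detection (POD)
        for `detected` hits out of `trials` independent inspections, via the
        Clopper-Pearson (exact binomial) construction."""
        if detected == 0:
            return 0.0
        return beta.ppf(1.0 - confidence, detected, trials - detected + 1)

    # Example: 28 of 30 cracks found by eddy current inspection.
    print(pod_lower_bound(28, 30, 0.95))   # one-sided 95% lower bound on POD
    ```

    Under this construction the familiar "29 of 29" demonstration gives a lower bound of 0.90 at 95% confidence, which is the classic 90/95 POD requirement.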

  15. JPARSS: A Java Parallel Network Package for Grid Computing

    SciTech Connect

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, because the TCP window size must be tuned to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without tuning the TCP window size. The package enables single sign-on, certificate delegation, and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition a simple architecture using Web services
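
    The core idea behind JPARSS (partition the payload, move the slices over several streams at once, reassemble in order) can be sketched without the package itself. In the toy Python sketch below, threads stand in for the parallel Java socket streams, and nothing here is the JPARSS API.

    ```python
    import threading

    def split(data: bytes, n: int) -> list[bytes]:
        """Partition the payload into n nearly equal contiguous slices."""
        step = -(-len(data) // n)  # ceiling division
        return [data[i * step:(i + 1) * step] for i in range(n)]

    def transfer(slice_id: int, payload: bytes, out: list) -> None:
        # Stand-in for writing one slice to its own TCP stream; each stream
        # keeps less data in flight, which is what sidesteps window tuning.
        out[slice_id] = payload

    data = b"x" * 1_000_000
    n_streams = 4
    received: list = [None] * n_streams
    threads = [threading.Thread(target=transfer, args=(i, s, received))
               for i, s in enumerate(split(data, n_streams))]
    for t in threads: t.start()
    for t in threads: t.join()
    assert b"".join(received) == data   # receiver reassembles slices in order
    ```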

  16. Scalable Parallel Random Number Generators Library, The (SPRNG)

    NSDL National Science Digital Library

    Michael Mascagni, Ashok Srinivasan

    Computational stochastic approaches (Monte Carlo methods) based on random sampling are becoming extremely important research tools, not only in their "traditional" fields such as physics, chemistry, and applied mathematics, but also in the social sciences and, recently, in various branches of industry. One indication of their importance is that Monte Carlo calculations consume about half of all supercomputer cycles. An indispensable ingredient of reliable and statistically sound calculations is the source of pseudorandom numbers. The goal of our project is to develop, implement, and test a scalable package for parallel pseudorandom number generation that is easy to use on a variety of architectures, especially in large-scale parallel Monte Carlo applications.
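
    The property SPRNG provides, independent non-overlapping streams for each parallel task, can be illustrated in miniature with NumPy's seed-spawning mechanism. This is not SPRNG itself, just the same idea with a different generator.

    ```python
    import numpy as np

    # Spawn statistically independent child streams from one root seed,
    # one per worker, so parallel Monte Carlo samples do not overlap.
    root = np.random.SeedSequence(20240517)
    child_seeds = root.spawn(4)                      # one per parallel task
    rngs = [np.random.default_rng(s) for s in child_seeds]

    # Each worker estimates pi from its own stream; results are combined.
    def estimate_pi(rng: np.random.Generator, n: int = 100_000) -> float:
        xy = rng.random((n, 2))
        return 4.0 * np.mean(np.sum(xy * xy, axis=1) <= 1.0)

    print(np.mean([estimate_pi(rng) for rng in rngs]))   # ~3.14
    ```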

  17. Performance and Scalability Evaluation of the Ceph Parallel File System

    SciTech Connect

    Wang, Feiyi [ORNL]; Nelson, Mark [Inktank Storage, Inc.]; Oral, H Sarp [ORNL]; Settlemyer, Bradley W [ORNL]; Atchley, Scott [ORNL]; Caldwell, Blake A [ORNL]; Hill, Jason J [ORNL]

    2013-01-01

    Ceph is an open-source, emerging parallel distributed file and storage system. By design, Ceph assumes it is running on unreliable commodity storage and network hardware, and it provides reliability and fault tolerance through controlled object placement and data replication. We evaluated Ceph for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results, and observations, mostly from parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation was performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved its code quality, scalability, and performance. These changes should benefit both the Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development and showing great promise.

  18. Using parallel banded linear system solvers in generalized eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Zhang, Hong; Moss, William F.

    1993-01-01

    Subspace iteration is a reliable and cost effective method for solving positive definite banded symmetric generalized eigenproblems, especially in the case of large scale problems. This paper discusses an algorithm that makes use of two parallel banded solvers in subspace iteration. A shift is introduced to decompose the banded linear systems into relatively independent subsystems and to accelerate the iterations. With this shift, an eigenproblem is mapped efficiently into the memories of a multiprocessor and a high speed-up is obtained for parallel implementations. An optimal shift is a shift that balances total computation and communication costs. Under certain conditions, we show how to estimate an optimal shift analytically using the decay rate for the inverse of a banded matrix, and how to improve this estimate. Computational results on iPSC/2 and iPSC/860 multiprocessors are presented.
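
    A dense NumPy/SciPy sketch of the shifted subspace iteration described above follows. The paper's point is that the inner linear solves use parallel banded solvers; here ordinary dense solves stand in for them, and the shift, problem sizes, and tolerance are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import eigh, solve

    def subspace_iteration(A, B, sigma, m, iters=60, tol=1e-10):
        """Approximate the m eigenpairs of A x = lam B x nearest the shift
        sigma by subspace iteration on (A - sigma B)^{-1} B with
        Rayleigh-Ritz extraction. Dense stand-in for banded solvers."""
        n = A.shape[0]
        K = A - sigma * B                  # factor once in a real code
        X = np.linalg.qr(np.random.default_rng(0).standard_normal((n, m)))[0]
        lam = np.full(m, np.inf)
        for _ in range(iters):
            X = np.linalg.qr(solve(K, B @ X))[0]         # inverse iteration
            lam_new, V = eigh(X.T @ A @ X, X.T @ B @ X)  # small Ritz problem
            X = X @ V
            if np.max(np.abs(lam_new - lam)) < tol * np.max(np.abs(lam_new)):
                return lam_new, X
            lam = lam_new
        return lam, X

    # Toy banded symmetric positive definite pair: tridiagonal A, diagonal B.
    n = 200
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    B = np.diag(np.linspace(1.0, 2.0, n))
    print(subspace_iteration(A, B, sigma=-0.5, m=4)[0])  # 4 smallest eigenvalues
    ```

    The negative shift keeps A - sigma*B positive definite here; in the paper the shift is additionally chosen to balance computation against interprocessor communication.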

  19. Designing magnetic systems for reliability

    SciTech Connect

    Heitzenroeder, P.J.

    1991-01-01

    Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utility connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that magnet failures tend not to occur in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of losses of magnet reliability at PPPL have occurred not in the sophisticated areas of the design but in difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next-step devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase with fewer, but very costly, devices and the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and the magnet design and fabrication practices which have been found to contribute to magnet reliability.

  20. Reliability Education Opportunity: "Reliability Analysis of Field Data"

    E-print Network

    Bernstein, Joseph B.

    Nonparametric inferences: approximately 1.4% failing at 9 MIS (months in service); concavity is an indication of an IFR (increasing failure rate). Note: F(t) ≈ H(t) for small F. Topics include root-cause analysis and future failure avoidance through statistical engineering inferences on field reliability data, and cash-flow optimization through prediction of the required warranty reserve.
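
    The snippet's note that F(t) ≈ H(t) for small F follows from H(t) = -ln(1 - F(t)). A minimal sketch of the nonparametric (Nelson-Aalen) cumulative hazard on right-censored field data makes both points concrete; the data values below are purely illustrative, not from the course.

    ```python
    import numpy as np

    def nelson_aalen(times, events):
        """Nelson-Aalen cumulative hazard H(t) from right-censored field data.
        times: time on test for each unit; events: 1 = failure, 0 = censored.
        Ties are broken sequentially, which is adequate for a sketch."""
        order = np.argsort(times)
        t, e = np.asarray(times)[order], np.asarray(events)[order]
        at_risk = len(t)
        H, out = 0.0, []
        for ti, ei in zip(t, e):
            if ei:                       # a failure adds its hazard increment
                H += 1.0 / at_risk
                out.append((ti, H))
            at_risk -= 1                 # unit leaves the risk set either way
        return out

    # Illustrative warranty-style data: months in service, failure flags.
    times  = [3, 5, 6, 6, 7, 8, 9, 9, 9, 9]
    events = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
    for ti, H in nelson_aalen(times, events):
        F = 1.0 - np.exp(-H)             # exact relation; F ~= H for small F
        print(f"t={ti}: H={H:.4f}  F={F:.4f}")
    ```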