These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Reliable Parallel File System Using RAID Technology  

Microsoft Academic Search

Providing data availability in a cluster environment is very important. Most clusters use either RAID technology or redundant nodes to ensure reliability. High-performance clusters usually use parallel file systems to increase throughput. However, when a parallel file system is involved in a cluster system, some mechanism must be provided to overcome the side-effects of using striping. PVFS is a…

Sheng-Kai Hung; Yarsun Hsu

2004-01-01

2

Overview of ICLASS research: Reliable and parallel computing  

NASA Technical Reports Server (NTRS)

An overview of Illinois Computer Laboratory for Aerospace Systems and Software (ICLASS) Research: Reliable and Parallel Computing is presented. Topics covered include: reliable and fault tolerant computing; fault tolerant multiprocessor architectures; fault tolerant matrix computation; and parallel processing.

Iyer, Ravi K.

1987-01-01

3

Parallel versions of the symbolic manipulation system FORM  

E-print Network

The symbolic manipulation program FORM is specialized to handle very large algebraic expressions. Some specific features of its internal structure make FORM very well suited for parallelization. We now have two parallel versions of FORM: one is based on POSIX threads and is optimal for modern multicore computers, while the other uses MPI and can be used to parallelize FORM on clusters and Massively Parallel Processing systems. Most existing FORM programs will be able to take advantage of the parallel execution without the need for modifications.

M. Tentyukov; J. A. M. Vermaseren; J. Vollinga

2010-06-10

4

Optimistic Parallel Simulation of Reliable Multicast Protocols

E-print Network

Optimistic Parallel Simulation of Reliable Multicast Protocols. Department of Computer Science, University of Massachusetts, Amherst, MA 01003, USA.

Massachusetts at Amherst, University of

5

Accuracy analysis of the parallel composition for the block diagram based reliability assessment of quantum circuits  

Microsoft Academic Search

Simulation cannot be applied to the reliability analysis of complex quantum circuits. Therefore, mixed simulation-analytical techniques should be applied in order to evaluate the reliability of quantum devices. One of the most used analytical methods is represented by reliability block diagrams (RBD). This paper presents an analysis of the accuracy estimates for the parallel composition in RBD-based reliability estimation…

O. Boncalo; M. Vladutiu; A. Amaricai

2010-01-01
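
As background for the parallel composition analyzed in this record: in a reliability block diagram, a series block works only if every component works, and a parallel block fails only if every component fails. The sketch below is a minimal generic illustration of these composition rules, not the paper's quantum-circuit estimator.

    # Reliability block diagram (RBD) composition rules, generic sketch.
    from functools import reduce

    def series(rs):
        # Series block: works only if every component works.
        return reduce(lambda acc, r: acc * r, rs, 1.0)

    def parallel(rs):
        # Parallel block: fails only if every component fails.
        return 1.0 - reduce(lambda acc, r: acc * (1.0 - r), rs, 1.0)

    print(parallel([0.99, 0.99]))                # two redundant units -> 0.9999
    print(series([0.99, parallel([0.9, 0.9])]))  # nested composition -> 0.9801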

6

Angles Formed by Parallel Lines and a Transversal  

NSDL National Science Digital Library

In this lesson you will learn how to classify angles formed by parallel lines and a transversal, as well as how to find the measures of these angles. You have probably heard of parallel lines, but you probably don't know about all the special angles that are formed when a line intersects a set of parallel lines. Click on the lecture below to learn about these special angles. The lecture has sound so make sure your ...

Mrs. Brown

2007-10-19

7

Generating Parallel Test Forms for Minimum Competency Exams.  

ERIC Educational Resources Information Center

A procedure which employs a method of item substitution based on item difficulty is recommended for developing parallel criterion referenced test forms. This procedure is currently being used in the Florida functional literacy testing program and the Georgia teacher certification testing program. Reasons for developing parallel test forms involve…

Nassif, Paula M.; And Others

8

PRAND: GPU accelerated parallel random number generation library: Using most reliable algorithms and applying parallelism of modern GPUs and CPUs  

NASA Astrophysics Data System (ADS)

The library PRAND for pseudorandom number generation for modern CPUs and GPUs is presented. It contains both single-threaded and multi-threaded realizations of a number of modern and most reliable generators recently proposed and studied in Barash (2011), Matsumoto and Nishimura (1998), L'Ecuyer (1999, 1999), Barash and Shchur (2006), and the efficient SIMD realizations proposed in Barash and Shchur (2011). One of the useful features of PRAND for parallel simulations is the ability to initialize up to 10^19 independent streams. Using the massive parallelism of modern GPUs and the SIMD parallelism of modern CPUs substantially improves performance of the generators.

Barash, L. Yu.; Shchur, L. N.

2014-04-01
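
PRAND itself is a C/CUDA library, so the following sketch only illustrates the independent-streams idea highlighted in the abstract, using NumPy's SeedSequence spawning as an analogy; it is not PRAND's API.

    # Independent RNG streams for parallel workers (NumPy analogy to
    # PRAND's independent-stream initialization; not PRAND itself).
    import numpy as np

    root = np.random.SeedSequence(20140401)        # one master seed
    children = root.spawn(8)                       # 8 independent child seeds
    rngs = [np.random.default_rng(c) for c in children]

    # Each worker draws from its own stream; draws are reproducible and
    # statistically independent regardless of scheduling order.
    samples = [rng.standard_normal(3) for rng in rngs]
    print(samples[0])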

9

The Reliable Router: A Reliable and High-Performance Communication Substrate for Parallel Computers  

Microsoft Academic Search

The Reliable Router (RR) is a network switching element targeted to two-dimensional mesh interconnection network topologies. It is designed to run at 100 MHz and reach a useful link bandwidth of 3.2 Gbit/sec. The Reliable Router uses adaptive routing coupled with link-level retransmission and a unique-token protocol to increase both performance and reliability. The RR can handle a single node or link failure anywhere in…

William J. Dally; Larry R. Dennison; David Harris; Kinhong Kan; Thucydides Xanthopoulos

1994-01-01

10

Masking reveals parallel form systems in the visual brain  

PubMed Central

It is generally supposed that there is a single, hierarchically organized pathway dedicated to form processing, in which complex forms are elaborated from simpler ones, beginning with the orientation-selective cells of V1. In this psychophysical study, we undertook to test another hypothesis, namely that the brain’s visual form system consists of multiple parallel systems and that complex forms are other than the sum of their parts. Inspired by imaging experiments which show that forms of increasing perceptual complexity (lines, angles, and rhombuses) constituted from the same elements (lines) activate the same visual areas (V1, V2, and V3) with the same intensity and latency (Shigihara and Zeki, 2013, 2014), we used backward masking to test the supposition that these forms are processed in parallel. We presented subjects with lines, angles, and rhombuses as different target-mask pairs. Evidence in favor of our supposition would be if masking is the most effective when target and mask are processed by the same system and least effective when they are processed in different systems. Our results showed that rhombuses were strongly masked by rhombuses but only weakly masked by lines or angles, but angles and lines were well masked by each other. The relative resistance of rhombuses to masking by low-level forms like lines and angles suggests that complex forms like rhombuses may be processed in a separate parallel system, whereas lines and angles are processed in the same one. PMID:25120460

Lo, Yu Tung; Zeki, Semir

2014-01-01

11

Redundant disk arrays: Reliable, parallel secondary storage. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

During the past decade, advances in processor and memory technology have given rise to increases in computational performance that far outstrip increases in the performance of secondary storage technology. Coupled with emerging small-disk technology, disk arrays provide the cost, volume, and capacity of current disk subsystems and, by leveraging parallelism, many times their performance. Unfortunately, arrays of small disks may have much higher failure rates than the single large disks they replace. Redundant arrays of inexpensive disks (RAID) use simple redundancy schemes to provide high data reliability. The data encoding, performance, and reliability of redundant disk arrays are investigated. Organizing redundant data into a disk array is treated as a coding problem. Among the alternatives examined, codes as simple as parity are shown to effectively correct single, self-identifying disk failures.

Gibson, Garth Alan

1990-01-01
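
The parity idea analyzed in this dissertation, in miniature: store one parity block equal to the XOR of the data blocks, and any single self-identifying disk failure can be rebuilt from the survivors. A toy sketch, not a real RAID implementation.

    # Single-parity (RAID-4/5 style) reconstruction sketch.
    def xor_blocks(blocks):
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"disk0dat", b"disk1dat", b"disk2dat"]   # equal-sized blocks
    parity = xor_blocks(data)                        # kept on the parity disk

    # Disk 1 fails (self-identifying): rebuild it from parity + survivors.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]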

12

The Experiences in Close Relationship Scale (ECR) Short Form: Reliability, Validity, and Factor Structure

Microsoft Academic Search

We developed a 12-item, short form of the Experiences in Close Relationship Scale (ECR; Brennan, Clark, & Shaver, 1998) across 6 studies. In Study 1, we examined the reliability and factor structure of the measure. In Studies 2 and 3, we cross-validated the reliability, factor structure, and validity of the short form measure; whereas in Study 4, we examined test-retest

Meifen Wei; Daniel W. Russell; Brent Mallinckrodt; David L. Vogel

2007-01-01

13

Separation of image parts using 2-D parallel form recursive filters  

Microsoft Academic Search

This correspondence deals with a new technique to separate objects or image parts in a composite image. A parallel form extension of a 2-D Steiglitz-McBride method is applied to the discrete cosine transform (DCT) of the image containing the objects that are to be separated. The obtained parallel form is the sum of several filters or systems, where the impulse

Radhika Sivaramakrishna

1996-01-01

14

Reliable and Fast Estimation of Recombination Rates by Convergence Diagnosis and Parallel Markov Chain Monte Carlo.  

PubMed

Genetic recombination is an essential event during the process of meiosis, resulting in an exchange of segments between paired chromosomes. Estimating the recombination rate is crucial for understanding evolution. Experimental methods are normally difficult and limited to small-scale estimations, so statistical methods using population genetic data are important for large-scale analysis. LDhat is an extensively used statistical method that uses an rjMCMC algorithm to predict recombination rates. Due to the complexity of the rjMCMC scheme, LDhat may take a long time to generate results for large SNP data. In addition, rjMCMC parameters that directly impact results must be manually defined in the original program. To address these issues, we designed an improved algorithm based on LDhat that implements MCMC convergence diagnostics to automatically predict parameter values and monitor the mixing process. Parallel computation methods were then employed to further accelerate the new program. The new algorithms have been tested on ten samples from HapMap phase 2 datasets. The results were compared with the previous code and showed nearly identical outputs; however, our new methods achieved significant acceleration, proving that they are more efficient and reliable for the estimation of recombination rates. The stand-alone package is freely available for download at http://www.ntu.edu.sg/home/zhengjie/software/CPLDhat/. PMID:24166655

Guo, Jing; Jain, Ritika; Yang, Peng; Fan, Rui; Kwoh, Chee Keong; Zheng, Jie

2013-10-23
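
Convergence diagnosis for parallel MCMC chains of the kind described above is commonly done with the Gelman-Rubin statistic; the sketch below is a generic illustration of that diagnostic, not the authors' LDhat-based code.

    # Gelman-Rubin potential scale reduction factor (R-hat) for m chains
    # of length n; values near 1.0 suggest the chains have mixed.
    import numpy as np

    def gelman_rubin(chains):                    # chains: shape (m, n)
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        B = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
        W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
        var_hat = (n - 1) / n * W + B / n        # pooled variance estimate
        return np.sqrt(var_hat / W)

    rng = np.random.default_rng(0)
    print(gelman_rubin(rng.standard_normal((4, 1000))))  # ~1.0 when mixed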

15

The Reliability and Validity of the Instructional Climate Inventory-Student Form.  

ERIC Educational Resources Information Center

Study examines the reliability and validity of the Instructional Climate Survey-Form S (ICI-S), a 20-item instrument that measures school climate, administered to students (N=328) in three programs. Analysis indicates that the ICI-S was best explained by one factor. Reliability coefficients of the total score were within the acceptable range for all…

Worrell, Frank C.

2000-01-01

16

Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method  

ERIC Educational Resources Information Center

In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel

Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

2008-01-01
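
A stripped-down sketch of the idea described above: candidate forms are item subsets, and fitness penalizes the gap between a form's test information function (TIF) and a target TIF. The 2PL item pool is synthetic, and for brevity the evolutionary loop uses only elitist selection and mutation (no crossover); the paper's operators and constraints are richer.

    # Toy evolutionary search for a parallel test form matching a target TIF.
    import numpy as np

    rng = np.random.default_rng(2008)
    POOL, FORM_LEN = 100, 20
    a = rng.uniform(0.5, 2.0, POOL)              # 2PL discrimination
    b = rng.normal(0.0, 1.0, POOL)               # 2PL difficulty
    thetas = np.linspace(-2, 2, 9)               # ability grid

    def tif(items, theta):                       # test information at theta
        p = 1.0 / (1.0 + np.exp(-a[items] * (theta - b[items])))
        return np.sum(a[items] ** 2 * p * (1.0 - p))

    reference = np.arange(FORM_LEN)              # pretend this is "Form A"
    target = np.array([tif(reference, t) for t in thetas])

    def fitness(form):                           # smaller TIF gap = fitter
        return -sum(abs(tif(form, t) - target[i]) for i, t in enumerate(thetas))

    def mutate(form):                            # swap one item for a fresh one
        child = form.copy()
        child[rng.integers(FORM_LEN)] = rng.choice(np.setdiff1d(np.arange(POOL), child))
        return child

    pop = [rng.choice(POOL, FORM_LEN, replace=False) for _ in range(30)]
    for _ in range(200):
        pop.sort(key=fitness, reverse=True)      # elitist selection
        pop = pop[:15] + [mutate(p) for p in pop[:15]]
    print("best TIF mismatch:", -fitness(pop[0]))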

17

Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test  

ERIC Educational Resources Information Center

This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…

Benton, Tom

2013-01-01

18

Reliability  

NSDL National Science Digital Library

In essence, reliability is the consistency of test results. To understand the meaning of reliability and how it relates to validity, imagine going to an airport to take flight #007 from Pittsburgh to San Diego. If, every time the airplane makes the flight

Edwin P. Christmann

2008-11-01

19

Reliability of Three Benton Judgment of Line Orientation Short Forms in Idiopathic Parkinson’s Disease  

PubMed Central

Individuals with Parkinson’s disease (PD) often exhibit deficits in visuospatial functioning throughout the course of their disease. These deficits should be carefully assessed as they may have implications for patient safety and disease severity. One of the most commonly administered tests of visuospatial ability, the Benton Judgment of Line Orientation (JLO), consists of 30 pairs of lines requiring the patient to match the orientation of two lines to an array of 11 lines on a separate page. Reliable short forms have been constructed out of the full JLO form, but the reliability of these forms in PD has yet to be examined. Recent functional MRI studies examining the JLO demonstrate right parietal and occipital activation, as well as bilateral frontal activation and PD is known to adversely affect these pathways. We compared the reliability of the original full form to three unique short forms in a sample of 141 non-demented, idiopathic PD patients and 56 age and education matched controls. Results indicated that a two-thirds length short form can be used with high reliability and classification accuracy in patients with idiopathic PD. The other short forms performed in a similar, though slightly less reliable manner. PMID:23957375

Gullett, Joseph M.; Price, Catherine C.; Nguyen, Peter; Okun, Michael S.; Bauer, Russell M.; Bowers, Dawn

2013-01-01

20

Reliability of the McKenzie spinal pain classification using patient assessment forms  

Microsoft Academic Search

Objectives: To determine whether case studies presented on a completed McKenzie assessment form, commonly used in the McKenzie educational training programme, contain sufficient information to permit a therapist to reach a classification of the patient. Design: An inter-rater reliability study of patient classifications made based upon inspection of McKenzie assessment forms. The assessment forms of 50 patients with spinal pain (25…

Helen A. Clare; Roger Adams; Christopher G. Maher

2004-01-01

21

Reliability Modeling Methodology for Independent Approaches on Parallel Runways Safety Analysis  

NASA Technical Reports Server (NTRS)

This document is an adjunct to the final report, An Integrated Safety Analysis Methodology for Emerging Air Transport Technologies. That report presents the results of our analysis of the problem of simultaneous but independent approaches of two aircraft on parallel runways (independent approaches on parallel runways, or IAPR). This introductory chapter presents a brief overview and perspective of approaches and methodologies for performing safety analyses for complex systems. Ensuing chapters provide the technical details that underlie the approach we have taken in performing the safety analysis for the IAPR concept.

Babcock, P.; Schor, A.; Rosch, G.

1998-01-01

22

Separation of image parts using 2-D parallel form recursive filters.  

PubMed

This correspondence deals with a new technique to separate objects or image parts in a composite image. A parallel form extension of a 2-D Steiglitz-McBride method is applied to the discrete cosine transform (DCT) of the image containing the objects that are to be separated. The obtained parallel form is the sum of several filters or systems, where the impulse response of each filter corresponds to the DCT of one object in the original image. Preliminary results on an image with two objects show that the algorithm works well, even in the case where one object occludes another as well as in the case of moderate noise. PMID:18285105

Sivaramakrishna, R

1996-01-01

23

A comprehensive parallel study on the board level reliability of SAC, SACX and SCN solders  

Microsoft Academic Search

Legislation that mandates the banning of lead (Pb) in electronics due to environmental and health concerns has been actively pursued in many countries during the past fifteen years. Lead-free electronics will be deployed in many products that serve markets where the reliability is a critical requirement. Although a large number of research studies have been performed and are currently under

Fubin Song; Jeffery C. C. Lo; Jimmy K. S. Lam; Tong Jiang; S. W. Ricky Lee

2008-01-01

24

Parallel adaptive feedback enhances reliability of the Ca2+ signaling system.  

PubMed

Despite large cell-to-cell variations in the concentrations of individual signaling proteins, cells transmit signals correctly. This phenomenon raises the question of what signaling systems do to prevent a predicted high failure rate. Here we combine quantitative modeling, RNA interference, and targeted selective reaction monitoring (SRM) mass spectrometry, and we show for the ubiquitous and fundamental calcium signaling system that cells monitor cytosolic and endoplasmic reticulum (ER) Ca(2+) levels and adjust in parallel the concentrations of the store-operated Ca(2+) influx mediator stromal interaction molecule (STIM), the plasma membrane Ca(2+) pump plasma membrane Ca-ATPase (PMCA), and the ER Ca(2+) pump sarco/ER Ca(2+)-ATPase (SERCA). Model calculations show that this combined parallel regulation in protein expression levels effectively stabilizes basal cytosolic and ER Ca(2+) levels and preserves receptor signaling. Our results demonstrate that, rather than directly controlling the relative level of signaling proteins in a forward regulation strategy, cells prevent transmission failure by sensing the state of the signaling pathway and using multiple parallel adaptive feedbacks. PMID:21844332

Abell, Ellen; Ahrends, Robert; Bandara, Samuel; Park, Byung Ouk; Teruel, Mary N

2011-08-30

25

Parallel adaptive feedback enhances reliability of the Ca2+ signaling system  

PubMed Central

Despite large cell-to-cell variations in the concentrations of individual signaling proteins, cells transmit signals correctly. This phenomenon raises the question of what signaling systems do to prevent a predicted high failure rate. Here we combine quantitative modeling, RNA interference, and targeted selective reaction monitoring (SRM) mass spectrometry, and we show for the ubiquitous and fundamental calcium signaling system that cells monitor cytosolic and endoplasmic reticulum (ER) Ca2+ levels and adjust in parallel the concentrations of the store-operated Ca2+ influx mediator stromal interaction molecule (STIM), the plasma membrane Ca2+ pump plasma membrane Ca–ATPase (PMCA), and the ER Ca2+ pump sarco/ER Ca2+–ATPase (SERCA). Model calculations show that this combined parallel regulation in protein expression levels effectively stabilizes basal cytosolic and ER Ca2+ levels and preserves receptor signaling. Our results demonstrate that, rather than directly controlling the relative level of signaling proteins in a forward regulation strategy, cells prevent transmission failure by sensing the state of the signaling pathway and using multiple parallel adaptive feedbacks. PMID:21844332

Abell, Ellen; Ahrends, Robert; Bandara, Samuel; Park, Byung Ouk; Teruel, Mary N.

2011-01-01

26

Validity and reliability of the Food-Life Questionnaire. Short form.  

PubMed

Measures of beliefs and attitudes towards food need to be valid and easy to use and interpret. The present study aimed to establish the validity and reliability of a short form of the Food-Life Questionnaire (FLQ). Participants (247 females; 118 males), recruited in South Australia, completed a questionnaire in 2012 incorporating the original FLQ, a revised short form (FLQ-SF), and measures of food choice and consumption. Validity (construct, criterion-related, and incremental) and reliability (internal consistency and short-form) were assessed. Factor analysis established that short-form items loaded onto five factors consistent with the original FLQ and explained 60% of variance. Moderate correlations were observed between the FLQ-SF and a measure of food choices (r=.32-.64), and the FLQ-SF predicted unhealthy food consumption over and above the full FLQ, demonstrating criterion-related and incremental validity, respectively. The final FLQ-SF included 21 items and had a Cronbach's alpha of .75. Short-form reliability was established with correlations between corresponding subscales of the FLQ and FLQ-SF ranging from r=.64-.84. Overall, the FLQ-SF is brief, psychometrically robust, and easy to administer. It should be considered an important tool in research informing public policies and programs that aim to improve food choices. PMID:23856433

Sharp, Gemma; Hutchinson, Amanda D; Prichard, Ivanka; Wilson, Carlene

2013-11-01
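
For reference, the internal-consistency index reported throughout this record, Cronbach's alpha, is straightforward to compute. A minimal sketch on simulated item scores (not the study's data):

    # Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / var(total score)).
    import numpy as np

    def cronbach_alpha(scores):                  # scores: (respondents, items)
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(200, 1))                       # shared trait
    items = latent + rng.normal(scale=0.8, size=(200, 21))   # 21 noisy items
    print(round(cronbach_alpha(items), 2))                   # high alpha, ~0.97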

27

Reliability of MRI-derived cortical and subcortical morphometric measures: Effects of pulse sequence, voxel geometry, and parallel imaging  

PubMed Central

Advances in magnetic resonance imaging (MRI) have contributed greatly to the study of neurodegenerative processes, psychiatric disorders, and normal human development, but the effect of such improvements on the reliability of downstream morphometric measures has not been extensively studied. We examined how MRI-derived neurostructural measures are affected by three technological advancements: parallel acceleration, increased spatial resolution, and the use of a high bandwidth multiecho sequence. Test-retest data were collected from 11 healthy participants during 2 imaging sessions occurring approximately 2 weeks apart. We acquired 4 T1-weighted MP-RAGE sequences during each session: a non-accelerated anisotropic sequence (MPR), a non-accelerated isotropic sequence (ISO), an accelerated isotropic sequence (ISH), and an accelerated isotropic high bandwidth multiecho sequence (MEM). Cortical thickness and volumetric measures were computed for each sequence to assess test-retest reliability and measurement bias. Reliability was extremely high for most measures and similar across imaging parameters. Significant measurement bias was observed, however, between MPR and all isotropic sequences for all cortical regions and some subcortical structures. These results suggest that these improvements in MRI acquisition technology do not compromise data reproducibility, but that consistency should be maintained in choosing imaging parameters for structural MRI studies. PMID:19038349

Wonderlick, J.S.; Ziegler, D.A.; Hosseini-Varnamkhasti, P.; Locascio, J.J.; Bakkour, A.; van der Kouwe, A.; Triantafyllou, C.; Corkin, S.; Dickerson, B.C.

2009-01-01

28

Comparison of heuristic methods for reliability optimization of series-parallel systems  

E-print Network

[Table residue: OR and SR results for the NNK, KY, and max-min heuristics on problems with various resource limitations (Part I); the column layout is garbled in this extract and the figures are not reliably recoverable.] ... for small-scale problems. However, the max-min approach cannot be applied to systems with other structures, while the NNK method and KY method can be applied to general non-series-parallel systems [6, 6]. 5. CONCLUSIONS: A recently proposed heuristic method...

Lee, Hsiang

2003-01-01

29

Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis  

NASA Technical Reports Server (NTRS)

Least-squares (LS) estimations and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms onto parallel processing architectures such as systolic arrays, with efficient fault-tolerant schemes, are the major concerns of this dissertation. There are four major results in this dissertation. First, we propose the systolic block Householder transformation with application to the recursive least-squares minimization. It is successfully implemented on a systolic array with a two-level pipelined implementation at the vector level as well as at the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array. The fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty situations for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and another based on a rectangular array, are presented for the multi-phase operations with fault-tolerant considerations. Eigenvectors and singular vectors can be easily obtained by using the multi-phase operations. Performance issues are also considered.

Liu, Kuojuey Ray

1990-01-01

30

Computer simulation machining a 3D free form surface by using a 3UPU parallel manipulator and a milling machine  

Microsoft Academic Search

A novel CAD approach is proposed for machining a 3D free form surface using a milling machine and a 3-UPU parallel manipulator. Based on the CAD variation geometry technique, first, a simulation mechanism of the 3-UPU parallel manipulator is created. Next, a 3D free form surface and a guiding plane of the tool path are constituted, and are…

Tatu Leinonen

2007-01-01

31

Estimation of Interrater and Parallel Forms Reliability for the MCAT Essay.  

ERIC Educational Resources Information Center

The Association of American Medical Colleges is conducting research to develop, implement, and evaluate a Medical College Admission Test (MCAT) essay testing program. Essay administration in the spring and fall of 1985 and 1986 suggested that additional research was needed on the development of topics which elicit similar skills and meet standard…

Mitchell, Karen J.; Anderson, Judith A.

32

Reliability and Validity of a Short Form of the Marijuana Craving Questionnaire  

PubMed Central

Background The Marijuana Craving Questionnaire (MCQ) is a valid and reliable, 47-item self-report instrument that assesses marijuana craving along four dimensions: compulsivity, emotionality, expectancy, and purposefulness. For use in research and clinical settings, we constructed a 12-item version of the MCQ by selecting three items from each of the four factors that exhibited the greatest within-factor internal consistency (Cronbach's alpha coefficient). Methods Adult marijuana users (n = 490), who had made at least one serious attempt to quit marijuana use but were not seeking treatment, completed the MCQ-Short Form (MCQ-SF) in a single session. Results Confirmatory factor analysis of the MCQ-SF indicated good fit with the 4-factor MCQ model, and the coefficient of congruence indicated moderate similarity in factor patterns and loadings between the MCQ and MCQ-SF. Homogeneity (unidimensionality and internal consistency) of MCQ-SF factors was also consistent with reliability values obtained in the initial validation of the MCQ. Conclusions Findings of psychometric fidelity indicate that the MCQ-SF is a reliable and valid measure of the same multidimensional aspects of marijuana craving as the MCQ in marijuana users not seeking treatment. PMID:19217724

Heishman, Stephen J.; Evans, Rebecca J.; Singleton, Edward G.; Levin, Kenneth H.; Copersino, Marc L.; Gorelick, David A.

2009-01-01

33

An Examination of the Reliability of Scores from Zuckerman's Sensation Seeking Scales, Form V.  

ERIC Educational Resources Information Center

Conducted a reliability generalization study on Zuckerman's Sensation Seeking Scale (M. Zuckerman and others, 1964) using 113 reliability coefficients from 21 published studies. The reliability of scores was marginal for four of the five scales, and low for the other. Mean age of subjects has a significant relationship with score reliability. (SLD)

Deditius-Island, Heide K.; Caruso, John C.

2002-01-01

34

Manifolds with parallel differential forms and Kähler identities for G2-manifolds  

NASA Astrophysics Data System (ADS)

Let M be a compact Riemannian manifold equipped with a parallel differential form ω. We prove a version of the Kähler identities in this setting. This is used to show that the de Rham algebra of M is weakly equivalent to its subquotient (H^•_c(M), d), called the pseudo-cohomology of M. When M is compact and Kähler, and ω is its Kähler form, (H^•_c(M), d) is isomorphic to the cohomology algebra of M. This gives another proof of homotopy formality for Kähler manifolds, originally shown by Deligne, Griffiths, Morgan and Sullivan. We compute H^•_c(M) for a compact G2-manifold, showing that H^i_c(M) ≅ H^i(M) unless i = 3, 4. For i = 3, 4, we compute H^•_c(M) explicitly in terms of the first-order differential operator ⋆d: Λ^3(M) → Λ^3(M).

Verbitsky, Misha

2011-06-01

35

Microelectromechanical filter formed from parallel-connected lattice networks of contour-mode resonators  

SciTech Connect

A microelectromechanical (MEM) filter is disclosed which has a plurality of lattice networks formed on a substrate and electrically connected together in parallel. Each lattice network has a series resonant frequency and a shunt resonant frequency provided by one or more contour-mode resonators in the lattice network. Different types of contour-mode resonators including single input, single output resonators, differential resonators, balun resonators, and ring resonators can be used in MEM filter. The MEM filter can have a center frequency in the range of 10 MHz-10 GHz, with a filter bandwidth of up to about 1% when all of the lattice networks have the same series resonant frequency and the same shunt resonant frequency. The filter bandwidth can be increased up to about 5% by using unique series and shunt resonant frequencies for the lattice networks.

Wojciechowski, Kenneth E; Olsson, III, Roy H; Ziaei-Moayyed, Maryam

2013-07-30

36

The Validation of Parallel Test Forms: "Mountain" and "Beach" Picture Series for Assessment of Language Skills  

ERIC Educational Resources Information Center

Pictures are widely used to elicit expressive language skills, and pictures must be established as parallel before changes in ability can be demonstrated by assessment using pictures prompts. Why parallel prompts are required and what it is necessary to do to ensure that prompts are in fact parallel is not widely known. To date, evidence of…

Bae, Jungok; Lee, Yae-Sheik

2011-01-01

37

Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.  

SciTech Connect

Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as a part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height (Hs) and energy period (Te) values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters.

Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee; Neary, Vincent Sinclair

2014-09-01
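
A bare-bones sketch of the IFORM contour construction described above: points on a circle of radius beta in standard-normal space are mapped back to physical (Hs, Te) space. Real practice uses a Rosenblatt transform with a fitted conditional Te-given-Hs model; the independent marginal distributions below are invented placeholders.

    # Inverse-FORM environmental contour sketch (simplified: independent,
    # made-up marginals instead of a Rosenblatt transform).
    import numpy as np
    from scipy import stats

    T_return_yrs, states_per_yr = 50, 365.25 * 8      # 3-hour sea states
    beta = stats.norm.ppf(1.0 - 1.0 / (T_return_yrs * states_per_yr))

    angles = np.linspace(0.0, 2.0 * np.pi, 200)
    u1, u2 = beta * np.cos(angles), beta * np.sin(angles)

    # Map standard-normal coordinates through inverse marginal CDFs.
    Hs = stats.weibull_min(c=1.5, scale=2.0).ppf(stats.norm.cdf(u1))
    Te = stats.lognorm(s=0.25, scale=8.0).ppf(stats.norm.cdf(u2))
    contour = np.column_stack([Hs, Te])               # 50-yr contour points
    print(contour.shape)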

38

A short form of the Maximization Scale: Factor structure, reliability and validity studies  

Microsoft Academic Search

We conducted an analysis of the 13-item Maximization Scale (Schwartz et al., 2002) with the goal of establishing its factor structure, reliability and validity. We also investigated the psychometric properties of several proposed refined versions of the scale. Four sets of analyses are reported. The first analysis confirms the 3-part factor structure of the scale and assesses its reliability. The

Gergana Y. Nenkov; Maureen Morrin; Andrew Ward; Barry Schwartz; John Hulland

2008-01-01

39

A reliability study of springback on the sheet metal forming process under probabilistic variation of prestrain and blank holder force  

NASA Astrophysics Data System (ADS)

This work deals with a reliability assessment of the springback problem during the sheet metal forming process. The effects of operative parameters and material properties, blank holder force and plastic prestrain, on springback are investigated. A generic reliability approach was developed to control springback. Subsequently, the Monte Carlo simulation technique in conjunction with the Latin hypercube sampling method was adopted to study probabilistic springback. The finite element method based on implicit/explicit algorithms was used to model the springback problem. The proposed constitutive law for sheet metal takes into account the adaptation of plastic parameters of the hardening law for each prestrain level considered. The Rackwitz-Fiessler algorithm is used to find reliability properties from response surfaces of chosen springback geometrical parameters. The obtained results were analyzed using multi-state limit reliability functions based on geometry compensations.

Mrad, Hatem; Bouazara, Mohamed; Aryanpour, Gholamreza

2013-08-01
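
To illustrate the Monte Carlo / Latin hypercube technique named above: random prestrain and blank holder force samples are pushed through a springback response, and an exceedance probability is estimated. The response function, distributions, and 3-degree limit below are invented placeholders, not the paper's finite element model.

    # Probabilistic springback sketch with Latin hypercube sampling.
    import numpy as np
    from scipy.stats import norm, qmc

    n = 10_000
    u = qmc.LatinHypercube(d=2, seed=42).random(n)        # uniform design
    prestrain = norm(loc=0.10, scale=0.01).ppf(u[:, 0])   # plastic prestrain
    bhf = norm(loc=50.0, scale=5.0).ppf(u[:, 1])          # holder force, kN

    springback = 6.0 - 20.0 * prestrain - 0.03 * bhf      # toy response, deg
    p_exceed = np.mean(springback > 3.0)                  # 3-deg tolerance
    print(f"estimated exceedance probability: {p_exceed:.3f}")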

40

In search of parsimony: reliability and validity of the Functional Performance Inventory-Short Form  

PubMed Central

Purpose: The 65-item Functional Performance Inventory (FPI), developed to quantify functional performance in patients with chronic obstructive pulmonary disease (COPD), has been shown to be reliable and valid. The purpose of this study was to create a shorter version of the FPI while preserving the integrity and psychometric properties of the original. Patients and methods: Secondary analyses were performed on qualitative and quantitative data used to develop and validate the FPI long form. Seventeen men and women with COPD participated in the qualitative work, while 154 took part in the mail survey; 54 completed the 2-week reproducibility assessment, and 40 relatives contributed validation data. Following a systematic process of item reduction, performance properties of the 32-item short form (FPI-SF) were examined. Results: The FPI-SF was internally consistent (total scale α = 0.93; subscales: 0.76–0.89) and reproducible (r = 0.88; subscales: 0.69–0.86). Validity was maintained, with significant (P < 0.001) correlations between the FPI-SF and the Functional Status Questionnaire (activities of daily living, r = 0.71; instrumental activities of daily living, r = 0.73), Duke Activity Status Index (r = 0.65), Bronchitis-Emphysema Symptom Checklist (r = −0.61), Basic Need Satisfaction Inventory (r = 0.61) and Cantril's Ladder of Life Satisfaction (r = 0.63), and Katz Adjustment Scale for Relatives (socially expected activities, r = 0.51; free-time activities, r = −0.49, P < 0.01). The FPI-SF differentiated patients with an FEV1% predicted greater than and less than 50% (t = 4.26, P < 0.001), and those with severe and moderate levels of perceived severity and activity limitation (t = 9.91, P < 0.001). Conclusion: Results suggest the FPI-SF is a viable alternative to the FPI for situations in which a shorter instrument is desired. Further assessment of the instrument's performance properties in new samples of patients with COPD is warranted. PMID:21191436

Leidy, Nancy Kline; Knebel, Ann

2010-01-01

41

A Validation Study of the Dutch Childhood Trauma Questionnaire-Short Form: Factor Structure, Reliability, and Known-Groups Validity  

ERIC Educational Resources Information Center

Objective: The 28-item Childhood Trauma Questionnaire-Short Form (CTQ-SF) has been translated into at least 10 different languages. The validity of translated versions of the CTQ-SF, however, has generally not been examined. The objective of this study was to investigate the factor structure, internal consistency reliability, and known-groups…

Thombs, Brett D.; Bernstein, David P.; Lobbestael, Jill; Arntz, Arnoud

2009-01-01

42

Reliability of the International Physical Activity Questionnaire in Research Settings: Last 7-Day Self-Administered Long Form  

ERIC Educational Resources Information Center

The purpose of this study was to examine the test-retest reliability of the last 7-day long form International Physical Activity Questionnaire (Craig et al., 2003) and to examine the construct validity for the measure in a research setting. Participants were 151 male (n = 52) and female (n = 99) university students (M age = 24.15 years, SD = 5.01)…

Levy, Susan S.; Readdy, R. Tucker

2009-01-01

43

Bringing the Cognitive Estimation Task into the 21st Century: Normative Data on Two New Parallel Forms  

PubMed Central

The Cognitive Estimation Test (CET) is widely used by clinicians and researchers to assess the ability to produce reasonable cognitive estimates. Although several studies have published normative data for versions of the CET, many of the items are now outdated and parallel forms of the test do not exist to allow cognitive estimation abilities to be assessed on more than one occasion. In the present study, we devised two new 9-item parallel forms of the CET. These versions were administered to 184 healthy male and female participants aged 18–79 years with 9–22 years of education. Increasing age and years of education were found to be associated with successful CET performance as well as gender, intellect, naming, arithmetic and semantic memory abilities. To validate that the parallel forms of the CET were sensitive to frontal lobe damage, both versions were administered to 24 patients with frontal lobe lesions and 48 age-, gender- and education-matched controls. The frontal patients’ error scores were significantly higher than the healthy controls on both versions of the task. This study provides normative data for parallel forms of the CET for adults which are also suitable for assessing frontal lobe dysfunction on more than one occasion without practice effects. PMID:24671170

MacPherson, Sarah E.; Wagner, Gabriela Peretti; Murphy, Patrick; Bozzali, Marco; Cipolotti, Lisa; Shallice, Tim

2014-01-01

44

Measurement of impulsive choice in rats: same- and alternate-form test-retest reliability and temporal tracking.  

PubMed

Impulsive choice is typically measured by presenting smaller-sooner (SS) versus larger-later (LL) rewards, with biases towards the SS indicating impulsivity. The current study tested rats on different impulsive choice procedures with LL delay manipulations to assess same-form and alternate-form test-retest reliability. In the systematic-GE procedure (Green & Estle, 2003), the LL delay increased after several sessions of training; in the systematic-ER procedure (Evenden & Ryan, 1996), the delay increased within each session; and in the adjusting-M procedure (Mazur, 1987), the delay changed after each block of trials within a session based on each rat's choices in the previous block. In addition to measuring choice behavior, we also assessed temporal tracking of the LL delays using the median times of responding during LL trials. The two systematic procedures yielded similar results in both choice and temporal tracking measures following extensive training, whereas the adjusting procedure resulted in relatively more impulsive choices and poorer temporal tracking. Overall, the three procedures produced acceptable same-form test-retest reliability over time, but the adjusting procedure did not show significant alternate-form test-retest reliability with the other two procedures. The results suggest that systematic procedures may supply better measurements of impulsive choice in rats. PMID:25490901

Peterson, Jennifer R; Hill, Catherine C; Kirkpatrick, Kimberly

2015-01-01

45

Development and Reliability Testing of a Fast-Food Restaurant Observation Form.  

PubMed

Purpose: To develop a reliable observational data collection instrument to measure characteristics of the fast-food restaurant environment likely to influence consumer behaviors, including product availability, pricing, and promotion. Design: The study used observational data collection. Setting: Restaurants were in the Chicago Metropolitan Statistical Area. Subjects: A total of 131 chain fast-food restaurant outlets were included. Measures: Interrater reliability was measured for product availability, pricing, and promotion measures on a fast-food restaurant observational data collection instrument. Analysis: Analysis was done with Cohen's κ coefficient and proportion of overall agreement for categorical variables and intraclass correlation coefficient (ICC) for continuous variables. Results: Interrater reliability, as measured by average κ coefficient, was .79 for menu characteristics, .84 for kids' menu characteristics, .92 for food availability and sizes, .85 for beverage availability and sizes, .78 for measures on the availability of nutrition information, .75 for characteristics of exterior advertisements, and .62 and .90 for exterior and interior characteristics measures, respectively. For continuous measures, average ICC was .88 for food pricing measures, .83 for beverage prices, and .65 for counts of exterior advertisements. Conclusion: Over 85% of measures demonstrated substantial or almost perfect agreement. Although some measures required revision or protocol clarification, results from this study suggest that the instrument may be used to reliably measure the fast-food restaurant environment. PMID:24819996

Rimkus, Leah; Ohri-Vachaspati, Punam; Powell, Lisa M; Zenk, Shannon N; Quinn, Christopher M; Barker, Dianne C; Pugach, Oksana; Resnick, Elissa A; Chaloupka, Frank J

2014-05-12
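
For reference, the chance-corrected interrater statistic used throughout this record, Cohen's kappa, can be computed directly; a minimal sketch with made-up ratings:

    # Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
    import numpy as np

    def cohens_kappa(r1, r2):
        r1, r2 = np.asarray(r1), np.asarray(r2)
        p_o = np.mean(r1 == r2)                       # observed agreement
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c)
                  for c in np.union1d(r1, r2))        # chance agreement
        return (p_o - p_e) / (1.0 - p_e)

    rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
    rater2 = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
    print(round(cohens_kappa(rater1, rater2), 2))     # 0.57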

46

Reliability and Validity of the Sensation-Seeking Scale: Psychometric Problems in Form V.  

ERIC Educational Resources Information Center

Psychometric properties of Zuckerman's Sensation Seeking Scale were examined. Evidence supported the theoretical notion of an individual difference variable in arousal-seeking. Other evidence, however, suggested that measurement problems continue to hamper research: the total score was moderately reliable, but the subscales were only marginally…

Ridgeway, Doreen; Russell, James A.

1980-01-01

47

Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data  

ERIC Educational Resources Information Center

Horn's parallel analysis (PA) is the method of consensus in the literature on empirical methods for deciding how many components/factors to retain. Different authors have proposed various implementations of PA. Horn's seminal 1965 article, a 1996 article by Thompson and Daniel, and a 2004 article by Hayton, Allen, and Scarpello all make assertions…

Dinno, Alexis

2009-01-01

48

Defining the "Correct Form": Using Biomechanics to Develop Reliable and Valid Assessment Instruments  

ERIC Educational Resources Information Center

Physical educators should be able to define the "correct form" they expect to see each student performing in their classes. Moreover, they should be able to go beyond assessing students' skill levels by measuring the outcomes (products) of movements (i.e., how far they throw the ball or how many successful attempts are completed) or counting the…

Satern, Miriam N.

2011-01-01

49

Development and reliability testing of a food store observation form. — Measures of the Food Environment  

Cancer.gov


50

Assessment of the Reliability and Validity of the Discrete-Trials Teaching Evaluation Form  

ERIC Educational Resources Information Center

Discrete-trials teaching (DTT) is a frequently used method for implementing Applied Behavior Analysis treatment with children with autism. Fazzio, Arnal, and Martin (2007) developed a 21-component checklist, the Discrete-Trials Teaching Evaluation Form (DTTEF), for assessing instructors conducting DTT. In Phase 1 of this research, three experts on…

Babel, Danielle A.; Martin, Garry L.; Fazzio, Daniela; Arnal, Lindsay; Thomson, Kendra

2008-01-01

51

Accuracy, reliability and validity of finite element analysis in metal forming: a user's perspective  

Microsoft Academic Search

Purpose – The purpose of this paper is to provide industrial, educational and academic users of computer programs with a basic overview of finite elements in metal forming that will enable them to recognize the pitfalls of the existing formulations, identify the possible sources of errors and understand the routes for validating their numerical results. Design/methodology/approach – The methodology draws from…

A. E. Tekkaya; P. A. F. Martins

2009-01-01

52

Reliability and Validity of the Short Form of the Literacy-Independent Cognitive Assessment in the Elderly  

PubMed Central

Background and Purpose: The Literacy-Independent Cognitive Assessment (LICA) has been developed for the diagnosis of dementia and is a useful neuropsychological test battery for illiterate as well as literate populations. The objective of this study was to develop the short form of the LICA (S-LICA) and to evaluate the reliability and validity of the S-LICA. Methods: The subtests of the S-LICA were selected based on the factor analysis and validation study results of the LICA. Patients with dementia (n=101) and normal elderly controls (n=185) participated in this study. Results: Cronbach's coefficient alpha of the S-LICA was 0.92 for illiterate subjects and 0.94 for literate subjects, and the item-total correlation ranged from 0.63 to 0.81 (p<.01). The test-retest reliability of the S-LICA total score was high (r=0.94, p<.001), and the subtests had high test-retest reliabilities (r=0.68-0.87, p<.01). The correlation between the K-MMSE and S-LICA total scores was substantial in both the illiterate subjects (r=0.837, p<.001) and the literate subjects (r=0.802, p<.001). The correlation between the S-LICA and LICA was very high (r=0.989, p<.001). The area under the curve of the receiver operating characteristic was 0.999 for the literate subjects and 0.985 for the illiterate subjects. The sensitivity and specificity of the S-LICA for a diagnosis of dementia were 97% and 96% at the cutoff point of 72 for the literate subjects, and 96% and 93% at the cutoff point of 68 for the illiterate subjects, respectively. Conclusions: Our results indicate that the S-LICA is a reliable and valid instrument for quick evaluation of patients with dementia in both illiterate and literate elderly populations. PMID:23626649

Kim, Jungeun; Jeong, Jee H.; Han, Seol-Heui; Ryu, Hui Jin; Lee, Jun-Young; Ryu, Seung-Ho; Lee, Dong Woo; Shim, Yong S.

2013-01-01
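
A small sketch of how sensitivity and specificity at a screening cutoff, as reported above, are computed. The score distributions and cutoff below are simulated placeholders, not the S-LICA data.

    # Sensitivity/specificity at a cutoff (low score = positive screen).
    import numpy as np

    def sens_spec(scores, has_dementia, cutoff):
        scores = np.asarray(scores, dtype=float)
        has_dementia = np.asarray(has_dementia, dtype=bool)
        flagged = scores <= cutoff
        sensitivity = np.mean(flagged[has_dementia])    # true positive rate
        specificity = np.mean(~flagged[~has_dementia])  # true negative rate
        return sensitivity, specificity

    rng = np.random.default_rng(3)
    scores = np.concatenate([rng.normal(60, 8, 100),    # simulated patients
                             rng.normal(85, 6, 180)])   # simulated controls
    labels = np.array([True] * 100 + [False] * 180)
    print(sens_spec(scores, labels, cutoff=72))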

53

Reliability of equivalent sphere model in blood-forming organ dose estimation  

SciTech Connect

The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

Shinn, J.L.; Wilson, J.W.; Nealy, J.E.

1990-04-01

54

Reliability of equivalent sphere model in blood-forming organ dose estimation  

NASA Technical Reports Server (NTRS)

The radiation dose equivalents to blood-forming organs (BFO's) of the astronauts at the Martian surface due to major solar flare events are calculated using the detailed body geometry of Langley and Billings. The solar flare spectra of February 1956, November 1960, and August 1972 events are employed instead of the idealized Webber form. The detailed geometry results are compared with those based on the 5-cm sphere model which was used often in the past to approximate BFO dose or dose equivalent. Larger discrepancies are found for the later two events possibly due to the lower numbers of highly penetrating protons. It is concluded that the 5-cm sphere model is not suitable for quantitative use in connection with future NASA deep-space, long-duration mission shield design studies.

Shinn, Judy L.; Wilson, John W.; Nealy, John E.

1990-01-01

55

Embedded Parallel Distributed Artificial Intelligent Processors for Adaptive Beam Forming in WCDMA System  

Microsoft Academic Search

Genetic algorithms (GAs) are powerful search techniques that are used successfully to solve problems in many different disciplines. One application is the WCDMA adaptive beam forming technique. An adaptive antenna has a dynamic beam to cater for users' needs and provides better capacity for mobile communication, but requires a more intelligent and advanced beam forming algorithm such as a genetic algorithm. Compared to…

Prajindra Sankar Krishnan; T. S. Kiong; J. Koh; D. Yap

2008-01-01

56

The relative noise levels of parallel axis gear sets with various contact ratios and gear tooth forms  

NASA Technical Reports Server (NTRS)

The real noise reduction benefits which may be obtained through the use of one gear tooth form as compared to another are an important design parameter for any geared system, especially for helicopters, in which both weight and reliability are very important factors. This paper describes the design and testing of nine sets of gears which are as identical as possible except for their basic tooth geometry. Noise measurements were made at various combinations of load and speed for each gear set so that direct comparisons could be made. The resultant data was analyzed so that valid conclusions could be drawn and interpreted for design use.

Drago, Raymond J.; Lenski, Joseph W., Jr.; Spencer, Robert H.; Valco, Mark; Oswald, Fred B.

1993-01-01

57

The Bruininks-Oseretsky Test of Motor Proficiency-Short Form is reliable in children living in remote Australian Aboriginal communities  

PubMed Central

Background: The Lililwan Project is the first population-based study to determine Fetal Alcohol Spectrum Disorders (FASD) prevalence in Australia and was conducted in the remote Fitzroy Valley in North Western Australia. The diagnostic process for FASD requires accurate assessment of gross and fine motor functioning using standardised cut-offs for impairment. The Bruininks-Oseretsky Test of Motor Proficiency, Second Edition (BOT-2) is a norm-referenced assessment of motor function used worldwide and in FASD clinics in North America. It is available in a Complete Form with 53 items or a Short Form with 14 items. Its reliability in measuring motor performance in children exposed to alcohol in utero or living in remote Australian Aboriginal communities is unknown. Methods: A prospective inter-rater and test-retest reliability study was conducted using the BOT-2 Short Form. A convenience sample of children (n = 30) aged 7 to 9 years participating in the Lililwan Project cohort (n = 108) study completed the reliability study. Over 50% of mothers of Lililwan Project children drank alcohol during pregnancy. Two raters simultaneously scoring each child determined inter-rater reliability. Test-retest reliability was determined by assessing each child on a second occasion, using predominantly the same rater. Reliability was analysed by calculating intra-class correlation coefficients, ICC(2,1), percentage exact agreement (PEA) and percentage close agreement (PCA), and measures of minimal detectable change (MDC) were calculated. Results: Thirty Aboriginal children (18 male, 12 female; mean age 8.8 years) were assessed at eight remote Fitzroy Valley communities. The inter-rater reliability for the BOT-2 Short Form score sheet outcomes ranged from 0.88 (95% CI, 0.77-0.94) to 0.92 (95% CI, 0.84-0.96), indicating excellent reliability. The test-retest reliability (median interval between tests being 45.5 days) for the BOT-2 Short Form score sheet outcomes ranged from 0.62 (95% CI, 0.34-0.80) to 0.73 (95% CI, 0.50-0.86), indicating fair to good reliability. The raw score MDC was 6.12. Conclusion: The BOT-2 Short Form has acceptable reliability for use in remote Australian Aboriginal communities and will be useful in determining motor deficits in children exposed to alcohol prenatally. This is the first known study evaluating the reliability of the BOT-2 Short Form, either in the context of assessment for FASD or in Aboriginal children. PMID:24010634

2013-01-01

58

A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows  

NASA Astrophysics Data System (ADS)

Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.

Allphin, Devin

59

Parallel-plate submicron gap formed by micromachined low-density pillars for near-field radiative heat transfer  

NASA Astrophysics Data System (ADS)

Near-field radiative heat transfer has been a subject of great interest due to the applicability to thermal management and energy conversion. In this letter, a submicron gap between a pair of diced fused quartz substrates is formed by using micromachined low-density pillars to obtain both the parallelism and small parasitic heat conduction. The gap uniformity is validated by optical interferometry at four corners of the substrates. The heat flux across the gap is measured in a steady-state and is no greater than twice the theoretically predicted radiative heat flux, which indicates that the parasitic heat conduction is suppressed to the level of the radiative heat transfer or less. The heat conduction through the pillars is modeled, and it is found to be limited by the thermal contact resistance between the pillar top and the opposing substrate surface. The methodology to form and evaluate the gap promotes the near-field radiative heat transfer to various applications such as thermal rectification, thermal modulation, and thermophotovoltaics.

Ito, Kota; Miura, Atsushi; Iizuka, Hideo; Toshiyoshi, Hiroshi

2015-02-01

60

Short-Forms of the Schedule for Nonadaptive and Adaptive Personality (SNAP) for Self- and Collateral Ratings: Development, Reliability, and Validity.  

ERIC Educational Resources Information Center

Reports the development of a paragraph-descriptor short form of the Schedule for Nonadaptive and Adaptive Personality (SNAP; L. Clark, 1993) with self- and other versions. Data from 294 college students, with parental ratings for 94 students, support the reliability and validity of the measure. (SLD)

Harlan, Elena; Clark, Lee Anna

1999-01-01

61

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 1. Technical Report #1216  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due…

Anderson, Daniel; Park, Jasmine, Bitnara; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

62

The French-Canadian Version of the Self-Report Coping Scale: Estimates of the Reliability, Validity, and Development of a Short Form  

ERIC Educational Resources Information Center

This investigation was conducted to explore the reliability and validity of scores on the French Canadian version of the Self-Report Coping Scale (SRCS; D. L. Causey & E. F. Dubow, 1992) and that of a short form of the SRCS. Evidence provides initial support for construct validity by replication of the factor structure and correlations with…

Hebert, Martine; Parent, Nathalie; Daignault, Isabelle V.

2007-01-01

63

Reliability Modeling and Analysis of Safety-Critical Manufacture System  

Microsoft Academic Search

There are working, fail-safe, and fail-dangerous states in safety-critical manufacture systems. This paper presents three typical safety-critical manufacture system architecture models: series, parallel, and series-parallel systems, whose components' lifetime distributions take general forms. The related reliability indices, such as the probabilities that the system is in these states and the mean times to the fail-safe and fail-dangerous states, are derived

Qing Sun; Lirong Cui; Gong Chen; Rong Pan

2009-01-01

64

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 2. Technical Report #1217  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest. Due to…

Anderson, Daniel; Lai, Cheg-Fei; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

65

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Passage Reading Fluency Assessments: Grade 4. Technical Report #1219  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

66

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Reading Assessments: Grade 5. Technical Report #1220  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Lai, Cheng-Fei; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

2012-01-01

67

Parallel Universes  

Microsoft Academic Search

I survey physics theories involving parallel universes, which form a natural four-level hierarchy of multiverses allowing progressively greater diversity. Level I: A generic prediction of inflation is an infinite ergodic universe, which contains Hubble volumes realizing all initial conditions - including an identical copy of you about 10^{10^29} meters away. Level II: In chaotic inflation, other thermalized regions may have

Max Tegmark

2003-01-01

68

An Investigation of Psychometric Properties of Coping Styles Scale Brief Form: A Study of Validity and Reliability  

ERIC Educational Resources Information Center

The aim of the current study was to develop a short form of the Coping Styles Scale based on the COPE Inventory. A total of 275 undergraduate students (114 female and 74 male) participated in the first study. In order to test the factor structure of the Coping Styles Scale Brief Form, principal components factor analysis and direct oblique rotation were…

Bacanli, Hasan; Surucu, Mustafa; Ilhan, Tahsin

2013-01-01

69

Improved Reliability and ESD Characteristics of Flip-Chip GaN-Based LEDs With Internal Inverse-Parallel Protection Diodes  

Microsoft Academic Search

In this letter, a GaN/sapphire light-emitting diode (LED) structure was designed with improved electrostatic discharge (ESD) performance through the use of a shunt GaN ESD diode connected in inverse-parallel to the GaN LED. Thus, electrostatic charge can be discharged from the GaN LED through the shunt diode. We found that the ESD withstanding capability of GaN/sapphire LEDs incorporating this ESD-protection

Shih-Chang Shei; Jinn-Kong Sheu; Chien-Fu Shen

2007-01-01

70

Use and Reliability of the World Wide Web Version of the Block Health Habits and History Questionnaire with Older Rural Women  

Microsoft Academic Search

Objective: To estimate the parallel forms reliability of the paper and pencil and World Wide Web versions of the 1998 Block Health Habits and History Questionnaire (HHHQ) and to examine the feasibility of older women using the Web version.

Linda S. Boeckner; Carol H. Pullen; Susan Noble Walker; Gerald W. Abbott; Torin Block

2002-01-01

71

A 12-Item Short-Form Health Survey: construction of scales and preliminary tests of reliability and validity

Microsoft Academic Search

Regression methods were used to select and score 12 items from the Medical Outcomes Study 36-Item Short-Form Health Survey (SF-36) to reproduce the Physical Component Summary and Mental Component Summary scales in the general US population (n=2,333). The resulting 12-item short-form (SF-12) achieved multiple R squares of 0.911 and 0.918 in predictions of the SF-36 Physical Component Summary and SF-36

Ware John E. Jr; Mark Kosinski; Susan D. Keller

1996-01-01

72

The Behavior Problems Inventory-Short Form for Individuals with Intellectual Disabilities: Part II--Reliability and Validity  

ERIC Educational Resources Information Center

Background: The Behavior Problems Inventory-01 (BPI-01) is an informant-based behaviour rating instrument for intellectual disabilities (ID) with 49 items and three sub-scales: "Self-injurious Behavior," "Stereotyped Behavior" and "Aggressive/Destructive Behavior." The Behavior Problems Inventory-Short Form (BPI-S) is a BPI-01 spin-off with 30…

Rojahn, J.; Rowe, E. W.; Sharber, A. C.; Hastings, R.; Matson, J. L.; Didden, R.; Kroes, D. B. H.; Dumont, E. L. M.

2012-01-01

73

Japanese Version of Home Form of the ADHD-RS: An Evaluation of Its Reliability and Validity  

ERIC Educational Resources Information Center

Using the Japanese version of the home form of the ADHD-RS, this survey attempted to compare the scores between the US and Japan and examined the correlates of ADHD-RS. We collected responses from parents or rearers of 5977 children (3119 males and 2858 females) in nursery, elementary, and lower-secondary schools. A confirmatory factor analysis of…

Tani, Iori; Okada, Ryo; Ohnishi, Masafumi; Nakajima, Shunji; Tsujii, Masatsugu

2010-01-01

74

Reliability and structural integrity  

NASA Technical Reports Server (NTRS)

An analytic model is developed to calculate the reliability of a structure after it is inspected for cracks. The model accounts for the growth of undiscovered cracks between inspections and their effect upon the reliability after subsequent inspections. The model is based upon a differential form of Bayes' Theorem for reliability, and upon fracture mechanics for crack growth.

Davidson, J. R.

1976-01-01

75

The core of Ure2p prion fibrils is formed by the N-terminal segment in a parallel cross-β structure: Evidence from solid state NMR

PubMed Central

Intracellular fibril formation by Ure2p produces the non-Mendelian genetic element [URE3] in S. cerevisiae, making Ure2p a prion protein. We show that solid state NMR spectra of full-length Ure2p fibrils, seeded with infectious prions from a specific [URE3] strain and labeled with uniformly 15N,13C-enriched Ile, include strong, sharp signals from Ile residues in the globular C-terminal domain (CTD), with both helical and non-helical 13C chemical shifts. Treatment with proteinase K (PK) eliminates these CTD signals, leaving only non-helical signals from the Gln- and Asn-rich N-terminal segment, which are also observed in solid state NMR spectra of Ile-labeled fibrils formed by residues 1-89 of Ure2p. Thus, the N-terminal segment, or “prion domain” (PD), forms the fibril core, while CTD units are located outside the core. We additionally show that, after PK treatment, Ile-labeled Ure2p fibrils formed without prion seeding exhibit a broader set of solid state NMR signals than do the prion-seeded fibrils, consistent with the idea that structural variations within the PD core account for prion strains. Measurements of 13C-13C magnetic dipole-dipole couplings among 13C-labeled Ile carbonyl sites in full-length Ure2p fibrils support an in-register parallel β-sheet structure for the PD core of Ure2p fibrils. Finally, we show that a model in which CTD units are attached rigidly to the parallel β-sheet core is consistent with steric constraints. PMID:21497604

Kryndushkin, Dmitry S.; Wickner, Reed B.; Tycko, Robert

2011-01-01

76

The Zarit Caregiver Burden Interview Short Form (ZBI-12) in spouses of Veterans with Chronic Spinal Cord Injury, Validity and Reliability of the Persian Version  

PubMed Central

Background: To test the psychometric properties of the Persian version of the Zarit Burden Interview (ZBI-12) in the Iranian population. Methods: After translating and culturally adapting the questionnaire into Persian, 100 caregiver spouses of Iran-Iraq war (1980-88) veterans with chronic spinal cord injury who live in the city of Mashhad, Iran, were invited to participate in the study. The Persian version of ZBI-12, accompanied by the Persian SF-36, was completed by the caregivers to test validity of the Persian ZBI-12. A Pearson's correlation coefficient was calculated for validity testing. In order to assess reliability of the Persian ZBI-12, we administered the ZBI-12 randomly in 48 caregiver spouses again 3 days later. Results: Generally, the internal consistency of the questionnaire was found to be strong (Cronbach's alpha 0.77). The intercorrelation matrix between the different domains of ZBI-12 at test-retest was 0.78. The results revealed that the majority of questions of the Persian ZBI-12 have a significant correlation to each other. In terms of validity, our results showed that there are significant correlations between some domains of the Persian version of the Short Form Health Survey-36 and the Persian Zarit Burden Interview, such as Q1 with Role Physical (P=0.03), General Health (P=0.034), Social Function (P=0.037), and Mental Health (P=0.023), and Q3 with Physical Function (P=0.001), Vitality (P=0.002), and Social Function (P=0.001). Conclusions: Our findings suggest that the Persian version of the Zarit Burden Interview is both a valid and reliable instrument for measuring the burden of caregivers of individuals with chronic spinal cord injury. PMID:25692171

Rajabi-Mashhadi, Mohammad T; Mashhadinejad, Hosein; Ebrahimzadeh, Mohammad H; Golhasani-Keshtan, Farideh; Ebrahimi, Hanieh; Zarei, Zahra

2015-01-01

77

A new model of in vitro fungal biofilms formed on human nail fragments allows reliable testing of laser and light therapies against onychomycosis.  

PubMed

Onychomycoses represent approximately 50 % of all nail diseases worldwide. In warmer and more humid countries like Brazil, the incidence of onychomycoses caused by non-dermatophyte molds (NDM, including Fusarium spp.) or yeasts (including Candida albicans) has been increasing. Traditional antifungal treatments used for the dermatophyte-borne disease are less effective against onychomycoses caused by NDM. Although some laser and light treatments have demonstrated clinical efficacy against onychomycosis, their US Food and Drug Administration (FDA) approval as "first-line" therapy is pending, partly due to the lack of well-demonstrated fungicidal activity in a reliable in vitro model. Here, we describe a reliable new in vitro model to determine the fungicidal activity of laser and light therapies against onychomycosis caused by Fusarium oxysporum and C. albicans. Biofilms formed in vitro on sterile human nail fragments were treated with a 1064 nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, 420 nm intense pulsed light (IPL) followed by Nd:YAG, or near-infrared light (NIR, 700-1400 nm). Light and laser antibiofilm effects were evaluated using cell viability assay and scanning electron microscopy (SEM). All treatments were highly effective against C. albicans and F. oxysporum biofilms, resulting in decreases in cell viability of 45-60 % for C. albicans and 92-100 % for F. oxysporum. The model described here yielded fungicidal activities that matched more closely to those observed in the clinic, when compared to published in vitro models for laser and light therapies. Thus, our model might represent an important tool for the initial testing, validation, and "fine-tuning" of laser and light therapies against onychomycosis. PMID:25471266

Vila, Taissa Vieira Machado; Rozental, Sonia; de Sá Guimarães, Claudia Maria Duarte

2015-04-01

78

Ultra Reliability  

NASA Technical Reports Server (NTRS)

This viewgraph presentation gives a general overview of NASA's ultra reliability areas. The contents include: 1) Objectives; 2) Approach; 3) Ultra Reliability Areas; 4) Plan Overview; 5) Work Flows; and 6) Customers.

Napala, Phil; Barnes, Charles; Shapiro, Andrew A.

2004-01-01

79

A Note on the Reliability Coefficients for Item Response Model-Based Ability Estimates  

ERIC Educational Resources Information Center

Assuming item parameters on a test are known constants, the reliability coefficient for item response theory (IRT) ability estimates is defined for a population of examinees in two different ways: as (a) the product-moment correlation between ability estimates on two parallel forms of a test and (b) the squared correlation between the true…

Kim, Seonghoon

2012-01-01

80

Parallel processing for control applications  

SciTech Connect

Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one single computer to control a process has many advantages that compensate for the additional cost. Initially multiple computers were used to attain higher speeds. A single CPU could not perform all of the operations necessary for real time operation. As technology progressed and CPUs became faster, the speed issue became less significant. The additional processing capabilities however continue to make high speeds an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing include visions of hundreds of single computers networked to provide 'computing power'. Indeed, our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations, and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single chip computers available. If one backs away from the PC-based parallel computing model and considers the possibilities of a parallel control device based on multiple single chip computers, a new area of possibilities becomes apparent. This study will look at the use of multiple single chip computers in a parallel configuration with emphasis placed on maximum reliability.

Telford, J. W. (John W.)

2001-01-01

81

Reliability training  

NASA Technical Reports Server (NTRS)

Discussed here is failure physics, the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low cost reliable products. A review of reliability for the years 1940 to 2000 is given. Next, a review of mathematics is given as well as a description of what elements contribute to product failures. Basic reliability theory and the disciplines that allow us to control and eliminate failures are elucidated.

Lalli, Vincent R. (editor); Malec, Henry A. (editor); Dillard, Richard B.; Wong, Kam L.; Barber, Frank J.; Barina, Frank J.

1992-01-01

82

The MOS 36-item Short-Form Health Survey (SF36): III. Tests of data quality, scaling assumptions, and reliability across diverse patient groups  

Microsoft Academic Search

The widespread use of standardized health surveys is predicated on the largely untested assumption that scales constructed from those surveys will satisfy minimum psychometric requirements across diverse population groups. Data from the Medical Outcomes Study (MOS) were used to evaluate data completeness and quality, test scaling assumptions, and estimate internal-consistency reliability for the eight scales constructed from the MOS SF-36

Colleen A. McHorney; Ware John E. Jr; J. F. Rachel Lu; Cathy Donald Sherbourne

1994-01-01

83

Parallel rendering  

NASA Technical Reports Server (NTRS)

This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering problem. We also explore concepts from computer graphics, such as coherence and projection, which have a significant impact on the structure of parallel rendering algorithms. Our survey covers a number of practical considerations as well, including the choice of architectural platform, communication and memory requirements, and the problem of image assembly and display. We illustrate the discussion with numerous examples from the parallel rendering literature, representing most of the principal rendering methods currently used in computer graphics.

Crockett, Thomas W.

1995-01-01

84

Person Reliability  

ERIC Educational Resources Information Center

Person changes can be of three kinds: developmental trends, swells, and tremors. Person unreliability in the tremor sense (momentary fluctuations) can be estimated from person characteristic curves. Average person reliability for groups can be compared from item characteristic curves. (Author)

Lumsden, James

1977-01-01

85

Electricity Reliability  

E-print Network

Electricity Delivery and Energy Reliability: High Temperature Superconductivity (HTS). HTS materials are promising for future electricity delivery because they have virtually no resistance to electric current, offering the possibility of new electric power equipment with more energy efficiency and higher capacity than today's systems

86

Parallel Algorithms  

NSDL National Science Digital Library

Content prepared for the Supercomputing 2002 session on "Using Clustering Technologies in the Classroom". Contains a series of exercises for teaching parallel computing concepts through kinesthetic activities.

Paul Gray

87

Examining the reliability and validity of a modified version of the International Physical Activity Questionnaire, long form (IPAQ-LF) in Nigeria: a cross-sectional study  

PubMed Central

Objectives To investigate the reliability and an aspect of validity of a modified version of the long International Physical Activity Questionnaire (Hausa IPAQ-LF) in Nigeria. Design Cross-sectional study, examining the reliability and construct validity of the Hausa IPAQ-LF compared with anthropometric and biological variables. Setting Metropolitan Maiduguri, the capital city of Borno State in Nigeria. Participants 180 Nigerian adults (50% women) with a mean age of 35.6 (SD=10.3) years, recruited from neighbourhoods with diverse socioeconomic status and walkability. Outcome measures Domains (domestic physical activity (PA), occupational PA, leisure-time PA, active transportation and sitting time) and intensities of PA (vigorous, moderate and walking) were measured with the Hausa IPAQ-LF on two different occasions, 8 days apart. Outcomes for construct validity were measured body mass index (BMI), systolic blood pressure (SBP) and diastolic blood pressure (DBP). Results The Hausa IPAQ-LF demonstrated good test–retest reliability (intraclass correlation coefficient, ICC>0.75) for total PA (ICC=0.79, 95% CI 0.65 to 0.82), occupational PA (ICC=0.77, 95% CI 0.68 to 0.82), active transportation (ICC=0.82, 95% CI 0.75 to 0.87) and vigorous intensity activities (ICC=0.82, 95% CI 0.76 to 0.87). Reliability was substantially higher for total PA (ICC=0.80), occupational PA (ICC=0.78), leisure-time PA (ICC=0.75) and active transportation (ICC=0.80) in men than in women, but domestic PA (ICC=0.38) and sitting time (ICC=0.71) demonstrated more substantial reliability coefficients in women than in men. For the construct validity, domestic PA was significantly related mainly with SBP (r=-0.27) and DBP (r=-0.17), and leisure-time PA and total PA were significantly related only with SBP (r=-0.16) and BMI (r=-0.29), respectively. Similarly, moderate-intensity PA was mainly related with SBP (r=-0.16, p<0.05) and DBP (r=-0.21, p<0.01), but vigorous-intensity PA was only related with BMI (r=-0.11, p<0.05). Conclusions The modified Hausa IPAQ-LF demonstrated sufficient evidence of test–retest reliability and may be valid for assessing context specific PA behaviours of adults in Nigeria. PMID:25448626

Oyeyemi, Adewale L; Bello, Umar M; Philemon, Saratu T; Aliyu, Habeeb N; Majidadi, Rebecca W; Oyeyemi, Adetoyeje Y

2014-01-01

88

An Examination of Test-Retest, Alternate Form Reliability, and Generalizability Theory Study of the easyCBM Word and Passage Reading Fluency Assessments: Grade 3. Technical Report #1218  

ERIC Educational Resources Information Center

This technical report is one in a series of five describing the reliability (test/retest and alternate form) and G-Theory/D-Study research on the easyCBM reading measures, grades 1-5. Data were gathered in the spring of 2011 from a convenience sample of students nested within classrooms at a medium-sized school district in the Pacific Northwest.…

Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

89

Item Selection for the Development of Parallel Forms from an IRT-Based Seed Test Using a Sampling and Classification Approach  

ERIC Educational Resources Information Center

Two sampling-and-classification-based procedures were developed for automated test assembly: the Cell Only and the Cell and Cube methods. A simulation study based on a 540-item bank was conducted to compare the performance of the procedures with the performance of a mixed-integer programming (MIP) method for assembling multiple parallel test…

Chen, Pei-Hua; Chang, Hua-Hua; Wu, Haiyan

2012-01-01

90

Scalable parallel communications  

NASA Technical Reports Server (NTRS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real-world issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-01-01

91

Scalable parallel communications  

NASA Astrophysics Data System (ADS)

Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real-world issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors, and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups.

Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

1992-06-01

92

Parallel Optimisation  

NSDL National Science Digital Library

An introduction to optimisation techniques that may improve parallel performance and scaling on HECToR. It assumes that the reader has some experience of parallel programming including basic MPI and OpenMP. Scaling is a measurement of the ability of a parallel code to use increasing numbers of cores efficiently. A scalable application is one that, when the number of processors is increased, performs better by a factor which justifies the additional resource employed. Making a parallel application scale to many thousands of processes requires not only careful attention to the communication, data, and work distribution but also to the choice of algorithms. Since the choice of algorithm is too broad a subject, and too particular to the application domain, to cover in this brief guide, we concentrate on general good practices towards parallel optimisation on HECToR.

93

Multithreading and Parallel Microprocessors  

E-print Network

Multithreading and Parallel Microprocessors. Stephen Jenks, Electrical Engineering and Computer Science; Scalable Parallel and Distributed Systems Lab. Outline: parallelism in microprocessors; multicore processor parallelism; parallel programming for shared memory (OpenMP, POSIX Threads, Java Threads); parallel microprocessor

Shinozuka, Masanobu

94

Redefining reliability  

SciTech Connect

Want to buy some reliability? The question would have been unthinkable in some markets served by the natural gas business even a few years ago, but in the new gas marketplace, industrial, commercial and even some residential customers have the opportunity to choose from among an array of options about the kind of natural gas service they need--and are willing to pay for. The complexities of this brave new world of restructuring and competition have sent the industry scrambling to find ways to educate and inform its customers about the increased responsibility they will have in determining the level of gas reliability they choose. This article discusses the new options and the new responsibilities of customers, the need for continuous education, and MidAmerican Energy Company's experiment in direct marketing of natural gas.

Paulson, S.L.

1995-11-01

95

SSD Reliability  

NASA Astrophysics Data System (ADS)

SSDs are complex electronic systems prone to wear-out and failure mechanisms mainly related to their basic component: the Flash memory. The reliability of a Flash memory depends on many technological and architectural aspects, from the physical concepts on which the storage paradigm is built to the interaction among cells, from possible new physical mechanisms arising as the technology scales down to the countermeasures adopted within the memory controller to face erroneous behaviors.

Zambelli, C.; Olivo, P.

96

Parallelizing Timed Petri Net simulations  

NASA Technical Reports Server (NTRS)

The possibility of using parallel processing to accelerate the simulation of Timed Petri Nets (TPN's) was studied. It was recognized that complex system development tools often transform system descriptions into TPN's or TPN-like models, which are then simulated to obtain information about system behavior. Viewed this way, it was important that the parallelization of TPN's be as automatic as possible, to admit the possibility of the parallelization being embedded in the system design tool. Later years of the grant were devoted to examining the problem of joint performance and reliability analysis, to explore whether both types of analysis could be accomplished within a single framework. In this final report, the results of our studies are summarized. We believe that the problem of parallelizing TPN's automatically for MIMD architectures has been almost completely solved for a large and important class of problems. Our initial investigations into joint performance/reliability analysis are two-fold; it was shown that Monte Carlo simulation, with importance sampling, offers promise of joint analysis in the context of a single tool, and methods for the parallel simulation of general Continuous Time Markov Chains, a model framework within which joint performance/reliability models can be cast, were developed. However, very much more work is needed to determine the scope and generality of these approaches. The results obtained in our two studies, future directions for this type of work, and a list of publications are included.
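
The report's second thread concerns joint performance/reliability analysis of Continuous Time Markov Chains. As a point of reference, a serial jump-chain simulation of a CTMC (a toy sketch of the model class, not the report's parallel algorithm; the generator matrix below is invented) looks like this in Python:

    import numpy as np

    def simulate_ctmc(Q, x0, t_end, rng):
        # Jump-chain simulation of a CTMC with generator matrix Q:
        # hold in state x for an Exp(-Q[x, x]) time, then jump with
        # probabilities Q[x, j] / -Q[x, x] for j != x.
        x, t, path = x0, 0.0, [(0.0, x0)]
        while True:
            rate = -Q[x, x]
            if rate <= 0:              # absorbing state
                break
            t += rng.exponential(1.0 / rate)
            if t >= t_end:
                break
            probs = Q[x].copy()
            probs[x] = 0.0
            x = rng.choice(len(probs), p=probs / rate)
            path.append((t, x))
        return path

    rng = np.random.default_rng(0)
    Q = np.array([[-1.0, 1.0],         # invented two-state generator
                  [0.5, -0.5]])
    trace = simulate_ctmc(Q, 0, 10.0, rng)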

Nicol, David M.

1993-01-01

97

Assessing the Discriminant Ability, Reliability, and Comparability of Multiple Short Forms of the Boston Naming Test in an Alzheimer’s Disease Center Cohort  

PubMed Central

Background The Boston Naming Test (BNT) is a commonly used neuropsychological test of confrontation naming that aids in determining the presence and severity of dysnomia. Many short versions of the original 60-item test have been developed and are routinely administered in clinical/research settings. Because of the common need to translate similar measures within and across studies, it is important to evaluate the operating characteristics and agreement of different BNT versions. Methods We analyzed longitudinal data of research volunteers (n = 681) from the University of Kentucky Alzheimer’s Disease Center longitudinal cohort. Conclusions With the notable exception of the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) 15-item BNT, short forms were internally consistent and highly correlated with the full version; these measures varied by diagnosis and generally improved from normal to mild cognitive impairment (MCI) to dementia. All short forms retained the ability to discriminate between normal subjects and those with dementia. The ability to discriminate between normal and MCI subjects was less strong for the short forms than the full BNT, but they exhibited similar patterns. These results have important implications for researchers designing longitudinal studies, who must consider that the statistical properties of even closely related test forms may be quite different. PMID:25613081

Katsumata, Yuriko; Mathews, Melissa; Abner, Erin L.; Jicha, Gregory A.; Caban-Holt, Allison; Smith, Charles D.; Nelson, Peter T.; Kryscio, Richard J.; Schmitt, Frederick A.; Fardo, David W.

2015-01-01

98

Parallel LOCFES  

E-print Network

Thesis front-matter excerpt. Contents include: III.A Review; III.B MasPar System Architecture; III.C MasPar FORTRAN; III.D MasPar Programming Environment; IV MLOCFES: Parallel Version (IV.A Potential Regions of Parallelism in LOCFES; IV.B FORTRAN 90 Adaptations; IV.C Program ...). Figures include the MasPar MP-1 System Diagram (adapted from MasPar System Overview) and the flow of control. Tables compare analytical vs computational (MLOCFES) solutions for I = 1, K = 1, and L = 1.

Shah, Ronak C.

1991-01-01

99

The feasibility, reliability and validity of the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF) in palliative care population  

Microsoft Academic Search

In terminally-ill patients, effective measurement of health-related quality of life (HRQoL) needs to be done while imposing minimal burden. In an attempt to ensure that routine HRQoL assessment is simple but capable of eliciting adequate information, the McGill Quality of Life Questionnaire-Cardiff Short Form (MQOL-CSF: 8 items) was developed from its original version, the McGill Quality of Life Questionnaire (MQOL:

Pei Lin Lua; Sam Salek; Ilora Finlay; Chris Lloyd-Richards

2005-01-01

100

Adaptive parallel logic networks  

NASA Technical Reports Server (NTRS)

Adaptive, self-organizing concurrent systems (ASOCS) that combine self-organization with massive parallelism for such applications as adaptive logic devices, robotics, process control, and system malfunction management, are presently discussed. In ASOCS, an adaptive network composed of many simple computing elements operating in combinational and asynchronous fashion is used and problems are specified by presenting if-then rules to the system in the form of Boolean conjunctions. During data processing, which is a different operational phase from adaptation, the network acts as a parallel hardware circuit.

Martinez, Tony R.; Vidal, Jacques J.

1988-01-01

101

Photovoltaic module reliability workshop  

NASA Astrophysics Data System (ADS)

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986 to 1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market and compete with the conventional electricity producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the U.S., PV manufacturers, DOE laboratories, electric utilities and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange the technical knowledge and field experience as related to current information in this important field. The papers presented here reflect this effort.

Mrig, L.

102

Photovoltaic module reliability workshop  

SciTech Connect

The papers and presentations compiled in this volume form the Proceedings of the fourth in a series of Workshops sponsored by the Solar Energy Research Institute (SERI/DOE) under the general theme of photovoltaic module reliability during the period 1986--1990. The reliability of photovoltaic (PV) modules/systems is exceedingly important, along with the initial cost and efficiency of modules, if the PV technology is to make a major impact in the power generation market, and for it to compete with the conventional electricity producing technologies. The reliability of photovoltaic modules has progressed significantly in the last few years as evidenced by warranties available on commercial modules of as long as 12 years. However, there is still a need for substantial research and testing to improve module field reliability to levels of 30 years or more. Several small groups of researchers are involved in this research, development, and monitoring activity around the world. In the US, PV manufacturers, DOE laboratories, electric utilities and others are engaged in the photovoltaic reliability research and testing. This group of researchers and others interested in this field were brought together under SERI/DOE sponsorship to exchange the technical knowledge and field experience as related to current information in this important field. The papers presented here reflect this effort.

Mrig, L. (ed.)

1990-01-01

103

Reliability and validity of the Dutch Dimensional Assessment of Personality Pathology-Short Form (DAPP-SF), a shortened version of the DAPP-Basic Questionnaire.  

PubMed

The Dimensional Assessment of Personality Pathology-Basic Questionnaire (DAPP-BQ) appears to be a good choice for the assessment of personality pathology. However, due to its length, administration of the instrument is rather time-consuming, hindering standard inclusion of the DAPP-BQ in a battery of assessment instruments at intake. We developed the 136-item DAPP-SF (Short Form), and investigated its psychometric characteristics in various samples, i.e., a community-based sample (n = 487), patients with mood-, anxiety-, and somatoform disorders (n = 1,329), and patients with personality disorders (n = 1,393). Results revealed high internal consistency for almost all dimensions. The factor structure appeared almost identical as compared to the factor structure of the original DAPP-BQ, and was shown to be invariant across the various patient and community samples. Indices for convergent, discriminant and criterion related validity were satisfactory. It is concluded that the good psychometric characteristics of the original DAPP-BQ were preserved in the shortened version of the instrument. PMID:19538084

de Beurs, Edwin; Rinne, Thomas; van Kampen, Dirk; Verheul, Roel; Andrea, Helen

2009-06-01

104

Perfect Pipelining: A New Loop Parallelization Technique  

Microsoft Academic Search

Parallelizing compilers do not handle loops in a satisfactory manner. Fine-grain transformations capture irregular parallelism inside a loop body not amenable to coarser approaches but have limited ability to exploit parallelism across iterations. Coarse methods sacrifice irregular forms of parallelism in favor of pipelining (overlapping) iterations. In this paper we present a new transformation, Perfect Pipelining, that bridges the gap between these fine- and

Alexander Aiken; Alexandru Nicolau

1988-01-01

105

Manufacturing & Reliability  

E-print Network

Metallurgy Building, http://ammrc.case.edu. The Center is capable of mechanical evaluation and deformation testing from cryogenic temperatures (i.e., liquid nitrogen) up to 1400 C. Monotonic as well as cyclic fatigue testing is possible via remote operation; tests with superimposed pressures up to 2 GPa are possible. Deformation processing is conducted on novel forging and forming

Rollins, Andrew M.

106

d(CGGTGGT) forms an octameric parallel G-quadruplex via stacking of unusual G(:C):G(:C):G(:C):G(:C) octads  

PubMed Central

Among non-canonical DNA secondary structures, G-quadruplexes are currently widely studied because of their probable involvement in many pivotal biological roles, and for their potential use in nanotechnology. The overall quadruplex scaffold can exhibit several morphologies through intramolecular or intermolecular organization of G-rich oligodeoxyribonucleic acid strands. In particular, several G-rich strands can form higher order assemblies by multimerization between several G-quadruplex units. Here, we report on the identification of a novel dimerization pathway. Our nuclear magnetic resonance, circular dichroism, UV, gel electrophoresis and mass spectrometry studies on the DNA sequence dCGGTGGT demonstrate that this sequence forms an octamer when annealed in presence of K+ or NH4+ ions, through the 5'-5' stacking of two tetramolecular G-quadruplex subunits via unusual G(:C):G(:C):G(:C):G(:C) octads. PMID:21715378

Borbone, Nicola; Amato, Jussara; Oliviero, Giorgia; D’Atri, Valentina; Gabelica, Valérie; De Pauw, Edwin; Piccialli, Gennaro; Mayol, Luciano

2011-01-01

107

Parallelization: Sieve of Eratosthenes  

NSDL National Science Digital Library

This module presents the Sieve of Eratosthenes, a method for finding the prime numbers below a certain integer. One can model the sieve for small integers by hand. For bigger integers, it becomes necessary to use a coded implementation. This code can be either serial (sequential) or parallel. Students will explore the various forms of parallelism (shared memory, distributed memory, and hybrid) as well as the scaling of the algorithm on multiple cores in its various forms, observing the relationship between run time of the program and number of cores devoted to the program. An assessment rubric, two exercises, and two student project ideas allow the student to consolidate her/his understanding of the material presented in the module.
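
For readers who want a concrete starting point, here is a minimal serial sketch of the sieve in Python (illustrative only; the module's shared-memory, distributed-memory, and hybrid versions parallelize the marking work across cores):

    import numpy as np

    def sieve(n):
        # Sieve of Eratosthenes: return all primes <= n.
        is_prime = np.ones(n + 1, dtype=bool)
        is_prime[:2] = False                  # 0 and 1 are not prime
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                is_prime[p * p :: p] = False  # cross off multiples of p
        return np.flatnonzero(is_prime)

    print(sieve(50))  # [ 2  3  5  7 11 13 17 19 23 29 31 37 41 43 47]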

Aaron Weeden

108

Parallel Anisotropic Tetrahedral Adaptation  

NASA Technical Reports Server (NTRS)

An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error calculation without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.

Park, Michael A.; Darmofal, David L.

2008-01-01

109

The short form of the fear survey schedule for children-revised (FSSC-R-SF): an efficient, reliable, and valid scale for measuring fear in children and adolescents.  

PubMed

The present study examined the psychometric properties of the Short Form of the Fear Survey Schedule for Children-Revised (FSSC-R-SF) in non-clinical and clinically referred children and adolescents from the Netherlands and the United States. Exploratory as well as confirmatory factor analyses of the FSSC-R-SF yielded support for the hypothesized five-factor structure representing fears in the domains of (1) failure and criticism, (2) the unknown, (3) animals, (4) danger and death, and (5) medical affairs. The FSSC-R-SF showed satisfactory reliability and was capable of assessing gender and age differences in youths' fears and fearfulness that have been documented in previous research. Further, the convergent validity of the scale was good as shown by substantial and meaningful correlations with the full-length FSSC-R and alternative childhood anxiety measures. Finally, support was found for the discriminant validity of the scale. That is, clinically referred children and adolescents exhibited higher scores on the FSSC-R-SF total scale and most subscales as compared to their non-clinical counterparts. Moreover, within the clinical sample, children and adolescents with a major anxiety disorder generally displayed higher FSSC-R-SF scores than youths without such a diagnosis. Altogether, these findings indicate that the FSSC-R-SF is a brief, reliable, and valid scale for assessing fear sensitivities in children and adolescents. PMID:25445086

Muris, Peter; Ollendick, Thomas H; Roelofs, Jeffrey; Austin, Kristin

2014-12-01

110

Parallel Computing Explained  

NSDL National Science Digital Library

Several tutorials on parallel computing. Overview of parallel computing. Porting and code parallelization. Scalar, cache, and parallel code tuning. Timing, profiling and performance analysis. Overview of IBM Regatta P690.

NCSA

111

Parallel Programming in the Age of Ubiquitous Parallelism  

NASA Astrophysics Data System (ADS)

Multicore and manycore processors are now ubiquitous, but parallel programming remains as difficult as it was 30-40 years ago. During this time, our community has explored many promising approaches including functional and dataflow languages, logic programming, and automatic parallelization using program analysis and restructuring, but none of these approaches has succeeded except in a few niche application areas. In this talk, I will argue that these problems arise largely from the computation-centric foundations and abstractions that we currently use to think about parallelism. In their place, I will propose a novel data-centric foundation for parallel programming called the operator formulation in which algorithms are described in terms of actions on data. The operator formulation shows that a generalized form of data-parallelism called amorphous data-parallelism is ubiquitous even in complex, irregular graph applications such as mesh generation/refinement/partitioning and SAT solvers. Regular algorithms emerge as a special case of irregular ones, and many application-specific optimization techniques can be generalized to a broader context. The operator formulation also leads to a structural analysis of algorithms called TAO-analysis that provides implementation guidelines for exploiting parallelism efficiently. Finally, I will describe a system called Galois based on these ideas for exploiting amorphous data-parallelism on multicores and GPUs

Pingali, Keshav

2014-04-01

112

Parallel hierarchical radiosity rendering  

SciTech Connect

In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
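
The symmetrization step mentioned above admits a compact statement. One plausible reading (the notation and the use of the form-factor reciprocity relation are assumptions, not taken from the dissertation), written in LaTeX:

    % Per-patch radiosity balance and its matrix form:
    B_i = E_i + \rho_i \sum_j F_{ij} B_j
    \quad\Longleftrightarrow\quad
    (I - R F)\, B = E, \qquad R = \mathrm{diag}(\rho_i).

    % Scaling row i by A_i / \rho_i and using reciprocity A_i F_{ij} = A_j F_{ji}
    % gives the coefficient matrix
    M_{ij} = \frac{A_i}{\rho_i}\, \delta_{ij} - A_i F_{ij},
    % which is symmetric, since -A_i F_{ij} = -A_j F_{ji} off the diagonal.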

Carter, M.

1993-07-01

113

Microlenses focal length measurement using Z-scan and parallel moiré deflectometry  

NASA Astrophysics Data System (ADS)

In this paper, a simple and accurate method based on Z-scan and parallel moiré deflectometry for measuring the focal length of microlenses is reported. A laser beam is focused by one lens and is re-collimated by another lens, and then strikes a parallel moiré deflectometer. In the presence of a microlens near the focal point of the first lens, the radius of curvature of the beam is changed; the parallel moiré fringes are formed only due to the beam divergence or convergence. The focal length of the microlens is obtained from the moiré fringe period graph without the need to know the position of the principal planes. This method is simple, more reliable, and completely automated. The implementation of the method is straightforward. Since a focused laser beam and Z-scan in free space are used, it can be employed for determining small focal lengths of small size microlenses without serious limitation on their size.

Rasouli, Saifollah; Rajabi, Y.; Sarabi, H.

2013-12-01

114

Parallel pivoting combined with parallel reduction  

NASA Technical Reports Server (NTRS)

Parallel algorithms for triangularization of large, sparse, and unsymmetric matrices are presented. The method combines the parallel reduction with a new parallel pivoting technique, control over generations of fill-ins and a check for numerical stability, all done in parallel with the work being distributed over the active processes. The parallel technique uses the compatibility relation between pivots to identify parallel pivot candidates and uses the Markowitz number of pivots to minimize fill-in. This technique is not a preordering of the sparse matrix and is applied dynamically as the decomposition proceeds.
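
To make the pivot-selection ingredients concrete, here is a toy Python sketch (an illustration under assumed definitions, not the paper's algorithm): the Markowitz count of a candidate pivot a[i, j] is (r_i - 1)(c_j - 1), and two pivots are treated as compatible when they share no row or column and their cross entries are zero, so eliminating one does not disturb the other:

    import numpy as np

    def markowitz_costs(a, tol=1e-12):
        # Markowitz count (r_i - 1) * (c_j - 1) for each nonzero candidate
        # pivot a[i, j]; r_i and c_j count nonzeros in row i and column j.
        nz = np.abs(a) > tol
        r = nz.sum(axis=1)
        c = nz.sum(axis=0)
        cost = np.full(a.shape, np.inf)
        cost[nz] = ((r[:, None] - 1) * (c[None, :] - 1))[nz]
        return cost

    def compatible(a, p, q, tol=1e-12):
        # Simplified compatibility test for pivots p = (i, j), q = (k, l):
        # distinct rows/columns and zero cross entries, so eliminating one
        # pivot leaves the other's row and column untouched (an assumption
        # standing in for the paper's compatibility relation).
        (i, j), (k, l) = p, q
        return i != k and j != l and abs(a[i, l]) <= tol and abs(a[k, j]) <= tol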

Alaghband, Gita

1987-01-01

115

Comparison of Reliability Measures under Factor Analysis and Item Response Theory  

ERIC Educational Resources Information Center

Reliability of test scores is one of the most pervasive psychometric concepts in measurement. Reliability coefficients based on a unifactor model for continuous indicators include maximal reliability rho and an unweighted sum score-based omega, among many others. With increasing popularity of item response theory, a parallel reliability measure pi…
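
For reference, the sum-score omega named above has a standard closed form under a unifactor model; in the notation assumed here (not the abstract's), lambda_j are factor loadings and theta_j unique variances:

    \omega = \frac{\left(\sum_{j} \lambda_j\right)^{2}}
                  {\left(\sum_{j} \lambda_j\right)^{2} + \sum_{j} \theta_j}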

Cheng, Ying; Yuan, Ke-Hai; Liu, Cheng

2012-01-01

116

Parallel Lines and Transversals  

NSDL National Science Digital Library

In this lab you will review the names of angles formed by transversals. In addition you will discover the unique relationship that these pairs of angles have when the transversal cuts through two parallel lines. We have already discussed many angle relationships in class. For example, we have learned to identify vertical angles and linear pairs. Each of these angle pairs has a special relationship: vertical angles are congruent, and linear pairs are supplementary. In the following lesson you will review the names of angle pairs ...

Mrs. Sonntag

2010-10-07

117

Special parallel processing workshop  

SciTech Connect

This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

NONE

1994-12-01

118

Parallel local search  

Microsoft Academic Search

We present a survey of parallel local search algorithms in which we review the concepts that can be used to incorporate parallelism into local search. For this purpose we distinguish between single-walk and multiple-walk parallel local search and between asynchronous and synchronous parallelism. Within the class of single-walk algorithms we differentiate between multiple-step and single-step parallelism. To describe parallel local

M. G. A. Verhoeven; E. H. L. Aarts

1995-01-01

119

A trial for a reliable shape measurement using interferometry and deflectometry  

NASA Astrophysics Data System (ADS)

Phase measuring deflectometry is an emerging technique for measuring specular complex surfaces, such as aspherical and free-form surfaces. It is very attractive for its wide dynamic range of vertical scale and its broad application range. Because it is a gradient-based surface profilometry, the measured data must be integrated to obtain the surface shape, which can be a source of inaccuracy. On the other hand, interferometry is an accurate and well-known method for precision shape measurement. In interferometry, the original measured data is the phase of the interference signal, which directly shows the surface shape of the target. However, interferometry is too precise to measure aspherical surfaces, free-form surfaces, and the ordinary surfaces common in industry. To assure accuracy in ultra-precision measurement, reliability is the most important requirement, and it can be maintained by cross-checking. I therefore propose a measuring method that uses both interferometry and deflectometry for reliable shape measurement. In this concept, the global shape is measured using deflectometry and the local shape around flat areas is measured using interferometry. The result of deflectometry is global and precise, but it includes ambiguity due to slope integration. In interferometry, only a small area can be measured, one that is almost parallel to the reference surface, but the measurement is accurate and reliable. Combining both results should yield a global, precise, and reliable measurement. I will present the concept of combining interferometry and deflectometry and some preliminary experimental results.

Hanayama, Ryohei

2014-07-01

120

Simulating Concurrent Intrusions for Testing Intrusion Detection Systems: Parallelizing Intrusions  

Microsoft Academic Search

For testing Intrusion Detection Systems (IDS), it is essential that we be able to simulate intrusions in different forms (both sequential and parallelized) in order to comprehensively test and evaluate the detection capability of an IDS. This paper presents an algorithm for automatically transforming a sequential intrusive script into a set of parallel intrusive scripts (formed by a group of parallel threads) which simulate a concurrent intrusion.

1995-01-01

121

Delivery point reliability measurement  

Microsoft Academic Search

Reliability is one of the most important criteria which must be taken into consideration during the planning, designing and operating phases of any industrial, commercial or utility power system. Two distinct reliability assessment methodologies are frequently used to evaluate the reliability performance of power systems: 1) historical reliability assessment, i.e. the collection and analysis of system outage data; and 2)

A. A. Chowdhury; D. O. Koval

1996-01-01

122

Transmission reliability performance assessment  

Microsoft Academic Search

Transmission reliability performance assessment is undergoing a basic change due to the changes in the electric transmission industry. While fundamental concepts of transmission reliability performance assessment have not changed, key trends in the industry signal the need for industry stakeholders to assess transmission reliability data collection capabilities and performance metrics to ensure continued grid reliability and economic benefits of an

E. A. Kram

2006-01-01

123

Delivery point reliability measurement  

Microsoft Academic Search

Reliability is one of the most important criteria which must be taken into consideration during the planning, designing and operating phases of any industrial, commercial or electric utility power system. Two distinct reliability assessment methodologies are frequently used to evaluate the reliability performance of power systems. They are: (1) historical reliability assessment, i.e., the collection and analysis of system outage

Ali A. Chowdhury; D. O. Koval

1995-01-01

124

Method of Administration of PROMIS Scales Did Not Significantly Impact Score Level, Reliability or Validity  

PubMed Central

Objective: To test the impact of method of administration (MOA) on score level, reliability, and validity of scales developed in the Patient Reported Outcomes Measurement Information System (PROMIS). Study Design and Setting: Two non-overlapping parallel forms, each containing 8 items from each of three PROMIS item banks (Physical Function, Fatigue, and Depression), were completed by 923 adults with COPD, depression, or rheumatoid arthritis. In a randomized cross-over design, subjects answered one form by interactive voice response (IVR) technology, paper questionnaire (PQ), personal digital assistant (PDA), or personal computer (PC) and a second form by PC, in the same administration. Method equivalence was evaluated through analyses of difference scores, intraclass correlations (ICC), and convergent/discriminant validity. Results: In difference-score analyses, no significant mode differences were found and all confidence intervals were within the pre-specified MID of 0.2 SD. Parallel-forms reliabilities were very high (ICC=0.85-0.93). Only one across-mode ICC was significantly lower than the corresponding same-mode ICC. Tests of validity showed no differential effect by MOA. Participants preferred the screen interface over PQ and IVR. Conclusion: We found no statistically or clinically significant differences in score levels or psychometric properties of IVR, PQ, or PDA administration as compared to PC. PMID:24262772

Bjorner, Jakob B.; Rose, Matthias; Gandek, Barbara; Stone, Arthur A.; Junghaenel, Doerte U.; Ware, John E.

2014-01-01
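
The equivalence criterion described above is easy to make concrete. Below is a minimal sketch (simulated scores, not PROMIS data; only the 923-subject sample size and the 0.2 SD minimal important difference are taken from the abstract) of a difference-score analysis comparing two administration modes:

    # Mode-equivalence check on difference scores; arrays are simulated
    # placeholders standing in for two administrations of parallel forms.
    import numpy as np

    rng = np.random.default_rng(0)
    score_pc = rng.normal(50, 10, size=923)            # scores from PC mode
    score_ivr = score_pc + rng.normal(0, 3, size=923)  # same subjects, IVR mode

    diff = score_ivr - score_pc
    mean_diff = diff.mean()
    se = diff.std(ddof=1) / np.sqrt(len(diff))
    ci_low, ci_high = mean_diff - 1.96 * se, mean_diff + 1.96 * se

    mid = 0.2 * score_pc.std(ddof=1)              # minimal important difference
    equivalent = -mid < ci_low and ci_high < mid  # CI entirely inside +/- MID
    print(f"mean diff {mean_diff:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], "
          f"MID {mid:.2f}, equivalent: {equivalent}")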

125

Parallel application experience with replicated method invocation  

Microsoft Academic Search

We describe and evaluate a new approach to object replication in Java, aimed at improving the performance of parallel programs. Our programming model allows the programmer to define groups of objects that can be replicated and updated as a whole, using reliable, totally-ordered broadcast to send update methods to all machines containing a copy. The model has been implemented

Jason Maassen; Thilo Kielmann; Henri E. Bal

2001-01-01

126

Spectral analysis in parallel structures  

Microsoft Academic Search

Experimental studies of a large class of applications of real-time spectral analysis are needed. The possibility is shown of forming a computer algorithm for the model of signal representation in a stochastic realization. A possible variant of a parallel-structure algorithm for estimating the cross-spectral and spectral density functions of a signal is considered

V. Zagursky; V. Zorin

2001-01-01

127

Can There Be Reliability without "Reliability?"  

ERIC Educational Resources Information Center

An "Educational Researcher" article by Pamela Moss (1994) asks the title question, "Can there be validity without reliability?" Yes, she answers, if by reliability one means "consistency among independent observations intended as interchangeable" (Moss, 1994, p. 7), quantified by internal consistency indices such as KR-20 coefficients and…

Mislevy, Robert J.

2004-01-01

128

Parallel processing of natural language  

SciTech Connect

Two types of parallel natural language processing are studied in this work: (1) the parallelism between syntactic and nonsyntactic processing and (2) the parallelism within syntactic processing. It is recognized that a syntactic category can potentially be attached to more than one node in the syntactic tree of a sentence. Even if all the attachments are syntactically well-formed, nonsyntactic factors such as semantic and pragmatic considerations may require one particular attachment. Syntactic processing must synchronize and communicate with nonsyntactic processing. Two syntactic processing algorithms are proposed for use in a parallel environment: Earley's algorithm and the LR(k) algorithm. Conditions are identified to detect syntactic ambiguity and the algorithms are augmented accordingly. It is shown that by using nonsyntactic information during syntactic processing, backtracking can be reduced, and the performance of the syntactic processor is improved. For the second type of parallelism, it is recognized that one portion of a grammar can be isolated from the rest of the grammar and be processed by a separate processor. A partial grammar of a larger grammar is defined. Parallel syntactic processing is achieved by using two processors concurrently: the main processor (mp) and the auxiliary processor (ap).

Chang, H.O.

1986-01-01

129

Data parallel algorithms  

Microsoft Academic Search

Parallel computers with tens of thousands of processors are typically programmed in a data parallel style, as opposed to the control parallel style used in multiprocessing. The success of data parallel algorithms—even on problems that at first glance seem inherently serial—suggests that this style of programming has much wider applicability than was previously thought.

W. Daniel Hillis; Guy L. Steele Jr.

1986-01-01
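
A classic example of this programming style is the logarithmic-step inclusive prefix sum, in which every element is updated simultaneously at each doubling step. The sketch below is a serial NumPy stand-in for that data-parallel operation (my illustration, not code from the paper):

    import numpy as np

    def data_parallel_scan(x):
        """Inclusive prefix sum in O(log n) data-parallel steps.

        Each iteration shifts the array by a doubling offset and adds it
        elementwise -- on a data-parallel machine every element would be
        updated simultaneously by its own processor.
        """
        x = np.asarray(x).copy()
        offset = 1
        while offset < len(x):
            shifted = np.concatenate([np.zeros(offset, dtype=x.dtype), x[:-offset]])
            x = x + shifted
            offset *= 2
        return x

    print(data_parallel_scan([1, 2, 3, 4, 5]))  # [ 1  3  6 10 15]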

130

Improved CDMA Performance Using Parallel Interference Cancellation  

NASA Technical Reports Server (NTRS)

This report considers a general parallel interference cancellation scheme that significantly reduces the degradation effect of user interference but with a lesser implementation complexity than the maximum-likelihood technique. The scheme operates on the fact that parallel processing simultaneously removes from each user the interference produced by the remaining users accessing the channel in an amount proportional to their reliability. The parallel processing can be done in multiple stages. The proposed scheme uses tentative decision devices with different optimum thresholds at the multiple stages to produce the most reliably received data for generation and cancellation of user interference. The 1-stage interference cancellation is analyzed for three types of tentative decision devices, namely, hard, null zone, and soft decision, and two types of user power distribution, namely, equal and unequal powers. Simulation results are given for a multitude of different situations, in particular, those cases for which the analysis is too complex.

Simon, Marvin; Divsalar, Dariush

1995-01-01
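
As a rough illustration of the one-stage scheme with hard tentative decisions, the sketch below simulates a small synchronous CDMA system (synthetic random spreading codes and an AWGN channel, both invented for the example): each user's receiver subtracts the interference reconstructed from all other users' tentative decisions before re-deciding.

    import numpy as np

    rng = np.random.default_rng(1)
    n_users, n_chips = 4, 64
    codes = rng.choice([-1.0, 1.0], size=(n_users, n_chips)) / np.sqrt(n_chips)
    bits = rng.choice([-1.0, 1.0], size=n_users)

    received = bits @ codes + 0.1 * rng.standard_normal(n_chips)  # AWGN channel

    # Stage 0: conventional matched-filter detection (hard tentative decisions).
    tentative = np.sign(codes @ received)

    # Stage 1: each user subtracts, in parallel, the interference reconstructed
    # from every *other* user's tentative decision, then re-detects.
    final = np.empty(n_users)
    for k in range(n_users):
        others = np.delete(np.arange(n_users), k)
        interference = tentative[others] @ codes[others]
        final[k] = np.sign(codes[k] @ (received - interference))

    print("bit errors after matched filter:", int(np.sum(tentative != bits)))
    print("bit errors after 1-stage PIC   :", int(np.sum(final != bits)))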

131

Parallel operation control technique of voltage source inverters in UPS  

Microsoft Academic Search

The control technique of a parallel operation system of voltage source inverters with other inverters or with utility source has been applied in many fields, especially in uninterruptible power supply (UPS). The multi-module UPS can flexibly implement expansion of power system capacities. Furthermore, it can be used to build up a parallel redundant system in order to improve the reliability

Duan Shanxu; Meng Yu; Xiong Jian; Kang Yong; Chen Jian

1999-01-01

132

Parallel operation of voltage source inverters with minimal intermodule reactors  

Microsoft Academic Search

Realization of large horsepower motor drives using parallel-connected voltage source inverters rated at smaller power levels would be highly desirable. A robust technique for such a realization would result in several benefits including modularity, ease of maintenance, n+1 redundancy, reliability, etc. Techniques for parallel operation of voltage source inverters with relatively large load inductance have been well established in the

Bin Shi; Giri Venkataramanan

2004-01-01

133

Parallel Activation in Bilingual Phonological Processing  

ERIC Educational Resources Information Center

In bilingual language processing, the parallel activation hypothesis suggests that bilinguals activate their two languages simultaneously during language processing. Support for the parallel activation mainly comes from studies of lexical (word-form) processing, with relatively less attention to phonological (sound) processing. According to…

Lee, Su-Yeon

2011-01-01

134

DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY

SciTech Connect

Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ). We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M☉), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity, where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.

Dominguez, A.; Siana, B.; Masters, D. [Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521 (United States)]; Henry, A. L.; Martin, C. L. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States)]; Scarlata, C.; Bedregal, A. G. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States)]; Malkan, M.; Ross, N. R. [Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095 (United States)]; Atek, H.; Colbert, J. W. [Spitzer Science Center, Caltech, Pasadena, CA 91125 (United States)]; Teplitz, H. I.; Rafelski, M. [Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125 (United States)]; McCarthy, P.; Hathi, N. P.; Dressler, A. [Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States)]; Bunker, A., E-mail: albertod@ucr.edu [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom)]

2013-02-15

135

Human Reliability Program Overview  

SciTech Connect

This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.

Bodin, Michael

2012-09-25

136

Power electronics reliability analysis.  

SciTech Connect

This report provides the DOE and industry with a general process for analyzing power electronics reliability. The analysis can help with understanding the main causes of failures, downtime, and cost and how to reduce them. One approach is to collect field maintenance data and use it directly to calculate reliability metrics related to each cause. Another approach is to model the functional structure of the equipment using a fault tree to derive system reliability from component reliability. Analysis of a fictitious device demonstrates the latter process. Optimization can use the resulting baseline model to decide how to improve reliability and/or lower costs. It is recommended that both electric utilities and equipment manufacturers make provisions to collect and share data in order to lay the groundwork for improving reliability into the future. Reliability analysis helps guide reliability improvements in hardware and software technology including condition monitoring and prognostics and health management.

Smith, Mark A.; Atcitty, Stanley

2009-12-01

137

Revisiting parallel catadioptric goniophotometers  

NASA Astrophysics Data System (ADS)

A thorough knowledge of the angular distribution of light scattered by an illuminated surface under different angles is essential in numerous industrial and research applications. Traditionally, the angular distribution of a reflected or transmitted light flux as function of the illumination angle, described by the Bidirectional Scattering Distribution Function (BSDF), is measured with a point-by-point scanning goniophotometer yielding impractically long acquisition times. Significantly faster measurements can be achieved by a device capable of simultaneously imaging the far-field distribution of light scattered by a sample onto a two-dimensional sensor array. Such an angular-to-spatial mapping function can be realized with a parallel catadioptric mapping goniophotometer (CMG). In this contribution, we formally establish the design requirement for a reliable CMG. Based on heuristic considerations we show that, to avoid degrading the angular-to-spatial function, the acceptance angle of the lens system inherent to a CMG must be smaller than 60°. By means of a parametric study, we investigate the practical design limitations of a CMG caused by the constraints imposed by the properties of a real lens system. Our study reveals that the values of the key design parameters of a CMG fall within a relatively small range. This imposes the shape of the ellipsoidal reflector and drastically restricts the room for a design trade-off between the sample size and the angular resolution. We provide a quantitative analysis for the key parameters of a CMG for two relevant cases.

Karamata, Boris; Andersen, Marilyne

2013-04-01

138

Reliability as Argument  

ERIC Educational Resources Information Center

Reliability consists of both important social and scientific values and methods for evidencing those values, though in practice methods are often conflated with the values. With the two distinctly understood, a reliability argument can be made that articulates the particular reliability values most relevant to the particular measurement situation…

Parkes, Jay

2007-01-01

139

Reliability computation from reliability block diagrams  

NASA Technical Reports Server (NTRS)

A method and a computer program are presented to calculate probability of system success from an arbitrary reliability block diagram. The class of reliability block diagrams that can be handled include any active/standby combination of redundancy, and the computations include the effects of dormancy and switching in any standby redundancy. The mechanics of the program are based on an extension of the probability tree method of computing system probabilities.

Chelson, P. O.; Eckstein, R. E.

1971-01-01
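
The probability-tree machinery in the report handles standby redundancy, dormancy, and switching; the active-redundancy core it generalizes is compact enough to sketch (a minimal illustration, not the report's program):

    def series(*r):
        """All blocks must work: multiply reliabilities."""
        p = 1.0
        for ri in r:
            p *= ri
        return p

    def parallel(*r):
        """At least one block must work: complement of all blocks failing."""
        q = 1.0
        for ri in r:
            q *= (1.0 - ri)
        return 1.0 - q

    # Two redundant controllers in series with a sensor and an actuator.
    system = series(0.99, parallel(0.95, 0.95), 0.98)
    print(f"system reliability: {system:.4f}")  # 0.9678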

140

Design considerations for parallel graphics libraries  

NASA Technical Reports Server (NTRS)

Applications which run on parallel supercomputers are often characterized by massive datasets. Converting these vast collections of numbers to visual form has proven to be a powerful aid to comprehension. For a variety of reasons, it may be desirable to provide this visual feedback at runtime. One way to accomplish this is to exploit the available parallelism to perform graphics operations in place. In order to do this, we need appropriate parallel rendering algorithms and library interfaces. This paper provides a tutorial introduction to some of the issues which arise in designing parallel graphics libraries and their underlying rendering algorithms. The focus is on polygon rendering for distributed memory message-passing systems. We illustrate our discussion with examples from PGL, a parallel graphics library which has been developed on the Intel family of parallel systems.

Crockett, Thomas W.

1994-01-01

141

Towards Distributed Memory Parallel Program Analysis  

SciTech Connect

This paper presents a parallel attribute evaluation for distributed memory parallel computer architectures, where previously only shared memory parallel support for this technique had been developed. Attribute evaluation is a part of how attribute grammars are used for program analysis within modern compilers. Within this work, we have extended ROSE, an open compiler infrastructure, with a distributed memory parallel attribute evaluation mechanism to support user-defined global program analysis required for some forms of security analysis which cannot be addressed by a file-by-file view of large scale applications. As a result, user-defined security analyses may now run in parallel without the user having to specify the way data is communicated between processors. The automation of communication enables an extensible open-source parallel program analysis infrastructure.

Quinlan, D; Barany, G; Panas, T

2008-06-17

142

Low-power approaches for parallel, free-space photonic interconnects  

SciTech Connect

Future advances in the application of photonic interconnects will involve the insertion of parallel-channel links into Multi-Chip Modules (MCMs) and board-level parallel connections. Such applications will drive photonic link components into more compact forms that consume far less power than traditional telecommunication data links. These will make use of new device-level technologies such as vertical cavity surface-emitting lasers and special low-power parallel photoreceiver circuits. Depending on the application, these device technologies will often be monolithically integrated to reduce the amount of board or module real estate required by the photonics. Highly parallel MCM and board-level applications will also require simplified drive circuitry, lower cost, and higher reliability than has been demonstrated in photonic and optoelectronic technologies. An example is found in two-dimensional point-to-point array interconnects for MCM stacking. These interconnects are based on high-efficiency Vertical Cavity Surface Emitting Lasers (VCSELs), Heterojunction Bipolar Transistor (HBT) photoreceivers, integrated micro-optics, and MCM-compatible packaging techniques. Individual channels have been demonstrated at 100 Mb/s, operating with a direct 3.3V CMOS electronic interface while using 45 mW of electrical power. These results demonstrate how optoelectronic device technologies can be optimized for low-power parallel link applications.

Carson, R.F.; Lovejoy, M.L.; Lear, K.L.; Warren, M.E.; Seigal, P.K.; Craft, D.C.; Kilcoyne, S.P.; Patrizi, G.A.; Blum, O.

1995-12-31

143

Parallel Adaptive Mesh Refinement Library  

NASA Technical Reports Server (NTRS)

Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.

Mac-Neice, Peter; Olson, Kevin

2005-01-01
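
The tree-of-blocks structure is straightforward to sketch. The toy quad-tree below (a hypothetical Python illustration, not PARAMESH's Fortran 90 internals) shows how refining a block creates four children covering its quadrants:

    class Block:
        """A node of the 2-D refinement quad-tree; each block owns an
        nx-by-ny logically Cartesian mesh over its bounding box."""
        def __init__(self, x0, y0, x1, y1, level=0, nx=8, ny=8):
            self.bounds = (x0, y0, x1, y1)
            self.level = level
            self.nx, self.ny = nx, ny
            self.children = []

        def refine(self):
            """Split into four child blocks at twice the spatial resolution."""
            x0, y0, x1, y1 = self.bounds
            xm, ym = 0.5 * (x0 + x1), 0.5 * (y0 + y1)
            quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
                     (x0, ym, xm, y1), (xm, ym, x1, y1)]
            self.children = [Block(*q, level=self.level + 1) for q in quads]

    def leaves(block):
        if not block.children:
            return [block]
        return [leaf for child in block.children for leaf in leaves(child)]

    root = Block(0.0, 0.0, 1.0, 1.0)
    root.refine()              # refine the whole domain once
    root.children[0].refine()  # refine only the lower-left quadrant again
    print("leaf blocks:", len(leaves(root)))  # 3 coarse + 4 fine = 7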

144

Parallel integrated frame synchronizer chip  

NASA Technical Reports Server (NTRS)

A parallel integrated frame synchronizer which implements a sequential pipeline process wherein serial data in the form of telemetry data or weather satellite data enters the synchronizer by means of a front-end subsystem and passes to a parallel correlator subsystem or a weather satellite data processing subsystem. When in a CCSDS mode, data from the parallel correlator subsystem passes through a window subsystem, then to a data alignment subsystem and then to a bit transition density (BTD)/cyclical redundancy check (CRC) decoding subsystem. Data from the BTD/CRC decoding subsystem or data from the weather satellite data processing subsystem is then fed to an output subsystem where it is output from a data output port.

Ghuman, Parminder Singh (Inventor); Solomon, Jeffrey Michael (Inventor); Bennett, Toby Dennis (Inventor)

2000-01-01
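
At the heart of any frame synchronizer, serial or parallel, is correlation of the incoming bit stream against a known sync marker. A toy sketch of marker detection (using the standard 32-bit CCSDS attached sync marker 0x1ACFFC1D; the surrounding data are random placeholders, not telemetry):

    import numpy as np

    marker = np.array([int(b) for b in f"{0x1ACFFC1D:032b}"])  # CCSDS ASM
    frame = np.random.default_rng(4).integers(0, 2, 1000)
    frame[321:321 + 32] = marker              # embed the marker in the stream

    bip = lambda x: 2 * x - 1                 # map bits 0/1 to -1/+1
    corr = np.correlate(bip(frame), bip(marker))
    print("marker found at bit", int(np.argmax(corr)))  # 321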

145

Business of reliability  

NASA Astrophysics Data System (ADS)

The presentation is organized around three themes: (1) The decrease of reception equipment costs allows non-remote-sensing organizations to access a technology until recently reserved for a scientific elite. What this means is the rise of 'operational' executive agencies considering space-based technology and operations as a viable input to their daily tasks. This is possible thanks to totally dedicated ground receiving entities focusing on one application for themselves, rather than serving a vast community of users. (2) The multiplication of earth observation platforms will form the base for reliable technical and financial solutions. One obstacle to the growth of the earth observation industry is the variety of policies (commercial versus non-commercial) ruling the distribution of the data and value-added products. In particular, the high volume of data sales required for the return on investment conflicts with the traditionally low-volume data use of most applications. Constant access to data sources presupposes monitoring needs as well as technical proficiency. (3) Large-volume use of data coupled with low-cost equipment is only possible when the technology has proven reliable, in terms of application results, financial risks and data supply. Each of these factors is reviewed. The expectation is that international cooperation between agencies and private ventures will pave the way for future business models. As an illustration, the presentation proposes to use some recent non-traditional monitoring applications that may lead to significant use of earth observation data, value-added products and services: flood monitoring, ship detection, marine oil pollution deterrent systems and rice acreage monitoring.

Engel, Pierre

1999-12-01

146

User's guide to the Reliability Estimation System Testbed (REST)  

NASA Technical Reports Server (NTRS)

The Reliability Estimation System Testbed is an X-window based reliability modeling tool that was created to explore the use of the Reliability Modeling Language (RML). RML was defined to support several reliability analysis techniques including modularization, graphical representation, Failure Mode Effects Simulation (FMES), and parallel processing. These techniques are most useful in modeling large systems. Using modularization, an analyst can create reliability models for individual system components. The modules can be tested separately and then combined to compute the total system reliability. Because a one-to-one relationship can be established between system components and the reliability modules, a graphical user interface may be used to describe the system model. RML was designed to permit message passing between modules. This feature enables reliability modeling based on a run time simulation of the system wide effects of a component's failure modes. The use of failure modes effects simulation enhances the analyst's ability to correctly express system behavior when using the modularization approach to reliability modeling. To alleviate the computation bottleneck often found in large reliability models, REST was designed to take advantage of parallel processing on hypercube processors.

Nicol, David M.; Palumbo, Daniel L.; Rifkin, Adam

1992-01-01

147

DC Circuits: Parallel Resistances  

NSDL National Science Digital Library

In this interactive learning activity, students will learn about parallel circuits. They will measure and calculate the resistance of parallel circuits and answer several questions about the example circuit shown.

148

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, H.C.; Cheng, Y.S.

1984-01-01

149

Parallel flow diffusion battery  

DOEpatents

A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

Yeh, Hsu-Chi (Albuquerque, NM); Cheng, Yung-Sung (Albuquerque, NM)

1984-08-07

150

Parallel I/O Systems  

NSDL National Science Digital Library

* Redundant disk array architectures
* Fault tolerance issues in parallel I/O systems
* Caching and prefetching
* Parallel file systems
* Parallel I/O systems
* Parallel I/O programming paradigms
* Parallel I/O applications and environments
* Parallel programming with parallel I/O

Amy Apon

151

Reliability models for dataflow computer systems  

NASA Technical Reports Server (NTRS)

The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freeness in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures including data flow computers.

Kavi, K. M.; Buckles, B. P.

1985-01-01

152

Parallel processing ITS  

SciTech Connect

This report provides a users' guide for parallel processing ITS on a UNIX workstation network, a shared-memory multiprocessor or a massively-parallel processor. The parallelized version of ITS is based on a master/slave model with message passing. Parallel issues such as random number generation, load balancing, and communication software are briefly discussed. Timing results for example problems are presented for demonstration purposes.

Fan, W.C.; Halbleib, J.A. Sr.

1996-09-01
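
The master/slave pattern with message passing can be sketched generically (Python's multiprocessing standing in for the report's communication software, and a trivial tally standing in for a Monte Carlo history batch; none of this is the ITS source):

    # Minimal master/slave sketch: the master farms out batches of
    # histories to slave processes and combines the partial tallies.
    import multiprocessing as mp
    import random

    def run_batch(args):
        seed, n_histories = args
        rng = random.Random(seed)      # independent random stream per slave
        return sum(rng.random() for _ in range(n_histories))

    if __name__ == "__main__":
        n_workers, n_histories = 4, 100_000
        batches = [(seed, n_histories // n_workers) for seed in range(n_workers)]
        with mp.Pool(n_workers) as pool:
            partials = pool.map(run_batch, batches)  # message passing underneath
        print("combined tally:", sum(partials))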

153

Boolean Circuit Programming: A New Paradigm to Design Parallel Algorithms  

E-print Network

Boolean Circuit Programming: A New Paradigm to Design Parallel Algorithms. Kunsoo Park, Heejin Park, Woo-Chul Jeun, Soonhoi Ha. Abstract: The Boolean circuit has been an important model of parallel computation [18, 19]. Uniform Boolean circuits have

Ha, Soonhoi

154

Test-Retest Reliability and Minimal Detectable Change on Balance and Ambulation Tests, the 36Item Short Form Health Survey, and the Unified Parkinson Disease Rating Scale in People With Parkinsonism  

Microsoft Academic Search

Background and Purpose. Distinguishing between a clinically significant change and change due to measurement error can be difficult. The purpose of this study was to determine test-retest reliability and minimal detectable change for the Berg Balance Scale (BBS), forward and backward functional reach, the Romberg Test and the Sharpened Romberg Test (SRT) with eyes open and closed, the Activities-specific

Teresa Steffen; Megan Seney

155

Reliability quantification and visualization for electric microgrids  

NASA Astrophysics Data System (ADS)

The electric grid in the United States is undergoing modernization from the state of an aging infrastructure of the past to a more robust and reliable power system of the future. The primary efforts in this direction have come from the federal government through the American Recovery and Reinvestment Act of 2009 (Recovery Act). This has provided the U.S. Department of Energy (DOE) with $4.5 billion to develop and implement programs through DOE's Office of Electricity Delivery and Energy Reliability (OE) over a period of 5 years (2008-2012). This was initially a part of Title XIII of the Energy Independence and Security Act of 2007 (EISA), which was later modified by the Recovery Act. As a part of DOE's Smart Grid Programs, Smart Grid Investment Grants (SGIG) and Smart Grid Demonstration Projects (SGDP) were developed as two of the largest programs, with federal grants of $3.4 billion and $600 million respectively. The Renewable and Distributed Systems Integration (RDSI) demonstration projects were launched in 2008 with the aim of reducing peak electricity demand by 15 percent at distribution feeders. Nine such projects, located around the nation, were competitively selected. The City of Fort Collins, in cooperative partnership with other federal and commercial entities, was identified to research, develop and demonstrate a 3.5 MW integrated mix of heterogeneous distributed energy resources (DER) to reduce peak load on two feeders by 20-30 percent. This project was called FortZED RDSI and provided an opportunity to demonstrate integrated operation of a group of assets, including demand response (DR), as a single controllable entity, which is often called a microgrid. As per IEEE Standard 1547.4-2011 (IEEE Guide for Design, Operation, and Integration of Distributed Resource Island Systems with Electric Power Systems), a microgrid can be defined as an electric power system which has the following characteristics: (1) DR and load are present, (2) it has the ability to disconnect from and parallel with the area Electric Power System (EPS), (3) it includes the local EPS and may include portions of the area EPS, and (4) it is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various metrics for performance measurement; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus its performance cannot be quantified by the same metrics as used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid, and two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator to support informed decision making and planning. Electronic Appendices I and II contain data and MATLAB program codes for the analysis and visualization in this work.

Panwar, Mayank
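
For a flavor of the reliability indices involved, the sketch below computes two standard distribution indices, SAIFI and SAIDI (IEEE 1366-style definitions, used here as stand-ins for the NERC metrics discussed; the outage records are invented):

    # SAIFI = total customer interruptions / customers served
    # SAIDI = total customer interruption minutes / customers served
    customers_served = 10_000
    outages = [            # (customers_interrupted, duration_minutes), invented
        (1_200, 45),
        (300, 120),
        (4_000, 10),
    ]

    saifi = sum(c for c, _ in outages) / customers_served
    saidi = sum(c * d for c, d in outages) / customers_served
    print(f"SAIFI = {saifi:.3f} interruptions/customer")
    print(f"SAIDI = {saidi:.1f} minutes/customer")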

156

CFD on parallel computers  

NASA Astrophysics Data System (ADS)

CFD or Computational Fluid Dynamics is one of the scientific disciplines that has always posed new challenges to the capabilities of the modern, ultra-fast supercomputers, and now to the even faster parallel computers. For applications where number crunching is of primary importance, there is perhaps no escaping parallel computers since sequential computers can only be (as projected) as fast as a few gigaflops and no more, unless, of course, some altogether new technology appears in the future. For parallel computers, on the other hand, there is no such limit since any number of processors can be made to work in parallel. Computationally demanding CFD codes and parallel computers are therefore soul-mates, and will remain so for the foreseeable future. So much so that there is a separate and fast-emerging discipline that tackles problems specific to CFD as applied to parallel computers. For some years now, there has been an international conference on parallel CFD. So, one can indeed say that parallel CFD has arrived. To understand how CFD codes are parallelized, one must understand a little about how parallel computers function. Therefore, in what follows we will first deal with parallel computers, what a typical CFD code (if there is one such) looks like, and then the strategies of parallelization.

Basu, A. J.

1994-10-01

157

Adaptive Parallelism and Piranha  

Microsoft Academic Search

. Under "adaptive parallelism," the set of processors executing a parallel programmay grow or shrink as the program runs. Potential gains include the capacity to runa parallel program on the idle workstations in a conventional LAN---processors join thecomputation when they become idle, and withdraw when their owners need them---and tomanage the nodes of a dedicated multiprocessor efficiency. Experience to date

Nicholas Carriero; Eric Freeman; David Gelernter; David Kaminsky

1995-01-01

158

Parallel simulation today  

NASA Technical Reports Server (NTRS)

This paper surveys topics that presently define the state of the art in parallel simulation. Included in the tutorial are discussions on new protocols, mathematical performance analysis, time parallelism, hardware support for parallel simulation, load balancing algorithms, and dynamic memory management for optimistic synchronization.

Nicol, David; Fujimoto, Richard

1992-01-01

159

Decomposing the Potentially Parallel  

NSDL National Science Digital Library

This course provides an introduction to the issues involved in decomposing problems onto parallel machines, and to the types of architectures and programming styles commonly found in parallel computers. The list of topics discussed includes types of decomposition, task farming, regular domain decomposition, unbalanced grids, and parallel molecular dynamics.

Elspeth Minty, Robert Davey, Alan Simpson, David Henty

160

Parallel methods for dynamic simulation of multiple manipulator systems  

NASA Technical Reports Server (NTRS)

In this paper, efficient dynamic simulation algorithms for a system of m manipulators, cooperating to manipulate a large load, are developed; their performance, using two possible forms of parallelism on a general-purpose parallel computer, is investigated. One form, temporal parallelism, is obtained with the use of parallel numerical integration methods. A speedup of 3.78 on four processors of CRAY Y-MP8 was achieved with a parallel four-point block predictor-corrector method for the simulation of a four manipulator system. These multi-point methods suffer from reduced accuracy, and when comparing these runs with a serial integration method, the speedup can be as low as 1.83 for simulations with the same accuracy. To regain the performance lost due to accuracy problems, a second form of parallelism is employed. Spatial parallelism allows most of the dynamics of each manipulator chain to be computed simultaneously. Used exclusively in the four processor case, this form of parallelism in conjunction with a serial integration method results in a speedup of 3.1 on four processors over the best serial method. In cases where there are either more processors available or fewer chains in the system, the multi-point parallel integration methods are still advantageous despite the reduced accuracy because both forms of parallelism can then combine to generate more parallel tasks and achieve greater effective speedups. This paper also includes results for these cases.

Mcmillan, Scott; Sadayappan, P.; Orin, David E.

1993-01-01

161

Reliable Testable Secure Systems  

Microsoft Academic Search

Although reliability has been extensively studied for decades in the space industry, it is now becoming evident that even ground-based embedded systems are facing similar reliability issues. This chapter will briefly discuss the single-event-upset (SEU) phenomena, also known as soft errors, and provide several examples of how reliability can be designed into secure embedded systems. The chapter will also discuss

Catherine H. Gebotys

162

Human reliability analysis  

SciTech Connect

The authors present a treatment of human reliability analysis incorporating an introduction to probabilistic risk assessment for nuclear power generating stations. They treat the subject according to the framework established for general systems theory, drawing upon reliability analysis, psychology, human factors engineering, and statistics, and integrating elements of these fields within a systems framework. The book provides a history of human reliability analysis and includes examples of the application of the systems approach.

Dougherty, E.M.; Fragola, J.R.

1988-01-01

163

ParPEST: a pipeline for EST data analysis based on parallel computing  

PubMed Central

Background: Expressed Sequence Tags (ESTs) are short and error-prone DNA sequences generated from the 5' and 3' ends of randomly selected cDNA clones. They provide an important resource for comparative and functional genomic studies and, moreover, represent reliable information for the annotation of genomic sequences. Because of advances in biotechnologies, ESTs are determined daily in the form of large datasets. Therefore, suitable and efficient bioinformatic approaches are necessary to organize the related information content for further investigations. Results: We implemented ParPEST (Parallel Processing of ESTs), a pipeline based on parallel computing for EST analysis. The results are organized in a suitable data warehouse to provide a starting point to mine expressed sequence datasets. The collected information is useful for investigations of data quality and information content, enriched also by a preliminary functional annotation. Conclusion: The pipeline presented here has been developed to perform an exhaustive and reliable analysis of EST data and to provide a curated set of information based on a relational database. Moreover, it is designed to reduce the execution time of the specific steps required for a complete analysis using distributed processes and parallelized software. It is conceived to run on modest hardware, to meet the increasing demands typical of the data used, and to scale at affordable costs. PMID:16351758

D'Agostino, Nunzio; Aversano, Mario; Chiusano, Maria Luisa

2005-01-01

164

Parallel algorithm development  

SciTech Connect

Rapid changes in parallel computing technology are causing significant changes in the strategies being used for parallel algorithm development. One approach is simply to write computer code in a standard language like FORTRAN 77, with the expectation that the compiler will produce executable code that will run in parallel. The alternatives are: (1) to build explicit message passing directly into the source code; or (2) to write source code without explicit reference to message passing or parallelism, but use a general communications library to provide efficient parallel execution. Application of these strategies is illustrated with examples of codes currently under development.

Adams, T.F.

1996-06-01
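
The explicit message-passing strategy can be illustrated in a few lines of mpi4py (a common Python MPI binding, chosen here for brevity; the report itself predates it): each rank computes a partial sum over its slice of a distributed range, and the results are reduced to rank 0.

    # Run with: mpirun -n 4 python partial_sums.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1_000_000
    lo = rank * n // size          # each rank owns a contiguous slice
    hi = (rank + 1) * n // size
    partial = sum(range(lo, hi))

    total = comm.reduce(partial, op=MPI.SUM, root=0)  # explicit message passing
    if rank == 0:
        print("sum =", total)      # n*(n-1)/2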

165

Operational safety reliability research  

SciTech Connect

Operating reactor events such as the TMI accident and the Salem automatic-trip failures raised the concern that during a plant's operating lifetime the reliability of systems could degrade from the design level that was considered in the licensing process. To address this concern, NRC is sponsoring the Operational Safety Reliability Research project. The objectives of this project are to identify the essential tasks of a reliability program and to evaluate the effectiveness and attributes of such a reliability program applicable to maintaining an acceptable level of safety during the operating lifetime of the plant.

Hall, R.E.; Boccio, J.L.

1986-01-01

166

A Design Methodology for Data-Parallel Applications  

Microsoft Academic Search

A methodology for the design and development of data-parallel applications and components is presented. Data-parallelism is a well understood form of parallel computation, yet developing simple applications can involve substantial efforts to express the problem in low-level notations. We describe a process of software development for data-parallel applications starting from high-level specifications, generating repeated refinements of designs to

Lars S. Nyland; Jan F. Prins; Allen Goldberg; Peter H. Mills

2000-01-01

167

Parallel Atomistic Simulations  

SciTech Connect

Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed, the replicated data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories, those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains are discussed.

HEFFELFINGER,GRANT S.

2000-01-18
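
Of the three molecular dynamics decompositions named, replicated data is the simplest to sketch: every rank holds all positions but computes forces only for its own slice of atoms, and the partial force arrays are then summed. Below, a serial loop over "ranks" stands in for the parallel processes (a toy Lennard-Jones system, invented for the example):

    import numpy as np

    def lj_forces_slice(pos, lo, hi, eps=1.0, sigma=1.0):
        """Forces on atoms lo..hi-1 from *all* atoms (replicated data)."""
        f = np.zeros_like(pos)
        for i in range(lo, hi):
            for j in range(len(pos)):
                if i == j:
                    continue
                rij = pos[i] - pos[j]
                r2 = rij @ rij
                s6 = (sigma**2 / r2) ** 3
                f[i] += 24 * eps * (2 * s6**2 - s6) / r2 * rij
        return f

    rng = np.random.default_rng(2)
    pos = rng.uniform(0, 5, size=(32, 3))
    n_ranks = 4
    bounds = np.linspace(0, len(pos), n_ranks + 1, dtype=int)

    # Each "rank" computes its slice; the sum plays the role of the final
    # gather that a real replicated-data MD code performs every step.
    forces = sum(lj_forces_slice(pos, bounds[r], bounds[r + 1])
                 for r in range(n_ranks))
    print("net force (should be ~0):", forces.sum(axis=0))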

168

A Bayesian approach to reliability and confidence  

NASA Technical Reports Server (NTRS)

The historical evolution of NASA's interest in quantitative measures of reliability assessment is outlined. The introduction of some quantitative methodologies into the Vehicle Reliability Branch of the Safety, Reliability and Quality Assurance (SR and QA) Division at Johnson Space Center (JSC) was noted along with the development of the Extended Orbiter Duration--Weakest Link study which will utilize quantitative tools for a Bayesian statistical analysis. Extending the earlier work of NASA sponsor, Richard Heydorn, researchers were able to produce a consistent Bayesian estimate for the reliability of a component and hence by a simple extension for a system of components in some cases where the rate of failure is not constant but varies over time. Mechanical systems in general have this property since the reliability usually decreases markedly as the parts degrade over time. While they have been able to reduce the Bayesian estimator to a simple closed form for a large class of such systems, the form for the most general case needs to be attacked by the computer. Once a table is generated for this form, researchers will have a numerical form for the general solution. With this, the corresponding probability statements about the reliability of a system can be made in the most general setting. Note that the utilization of uniform Bayesian priors represents a worst case scenario in the sense that as researchers incorporate more expert opinion into the model, they will be able to improve the strength of the probability calculations.

Barnes, Ron

1989-01-01
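
The flavor of the closed-form case can be shown with the standard conjugate setup (my example under a Beta-Binomial assumption, not the study's time-varying model): starting from the uniform prior mentioned in the abstract, s successes in n trials give a Beta(1 + s, 1 + n - s) posterior on reliability.

    # Conjugate Beta-Binomial sketch: posterior reliability of a component
    # after s successes in n demonstration trials (invented numbers),
    # starting from the worst-case uniform prior.
    from scipy.stats import beta

    n, s = 50, 48
    a, b = 1 + s, 1 + (n - s)          # posterior Beta parameters

    posterior_mean = a / (a + b)
    lower_90 = beta.ppf(0.10, a, b)    # 90% lower credible bound on reliability
    print(f"posterior mean reliability: {posterior_mean:.3f}")
    print(f"90% lower bound: {lower_90:.3f}")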

169

Reliability Generalization of the Psychopathy Checklist Applied in Youthful Samples  

ERIC Educational Resources Information Center

This study examines the average reliability of Hare Psychopathy Checklists (PCLs) adapted for use in samples of youthful offenders (aged 12 to 21 years). Two forms of reliability are examined: 18 alpha estimates of internal consistency and 18 intraclass correlation (two or more raters) estimates of interrater reliability. The results, an average…

Campbell, Justin S.; Pulos, Steven; Hogan, Mike; Murry, Francie

2005-01-01

170

Hawaii electric system reliability.  

SciTech Connect

This report addresses Hawaii electric system reliability issues; greater emphasis is placed on short-term reliability but resource adequacy is reviewed in reference to electric consumers' views of reliability “worth” and the reserve capacity required to deliver that value. The report begins with a description of the Hawaii electric system to the extent permitted by publicly available data. Electrical engineering literature in the area of electric reliability is researched and briefly reviewed. North American Electric Reliability Corporation standards and measures for generation and transmission are reviewed and identified as to their appropriateness for various portions of the electric grid and for application in Hawaii. Analysis of frequency data supplied by the State of Hawaii Public Utilities Commission is presented together with comparison and contrast of performance of each of the systems for two years, 2010 and 2011. Literature tracing the development of reliability economics is reviewed and referenced. A method is explained for integrating system cost with outage cost to determine the optimal resource adequacy given customers' views of the value contributed by reliable electric supply. The report concludes with findings and recommendations for reliability in the State of Hawaii.

Silva Monroy, Cesar Augusto; Loose, Verne William

2012-09-01

171

Software Reliability Perspectives  

Microsoft Academic Search

Software which is used in life critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering nor fault tolerant methods can guarantee perfection. Prior to the final testing, software goes through a debugging period and many models have been developed to try

Larry Wilson; Wenhui Shen

172

Parallel digital forensics infrastructure.  

SciTech Connect

This report documents the architecture and implementation of a Parallel Digital Forensics infrastructure. This infrastructure is necessary for supporting the design, implementation, and testing of new classes of parallel digital forensics tools. Digital forensics has become extremely difficult with data sets of one terabyte and larger. The only way to overcome the processing time of these large sets is to identify and develop new parallel algorithms for performing the analysis. To support algorithm research, a flexible base infrastructure is required. A candidate architecture for this base infrastructure was designed, instantiated, and tested by this project, in collaboration with New Mexico Tech. Previous infrastructures were not designed and built specifically for the development and testing of parallel algorithms. With the size of forensics data sets only expected to increase significantly, this type of infrastructure support is necessary for continued research in parallel digital forensics. This report documents the architecture and implementation of the parallel digital forensics (PDF) infrastructure.

Liebrock, Lorie M. (New Mexico Tech, Socorro, NM); Duggan, David Patrick

2009-10-01

173

Java Parallel Secure Stream for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However grid applications that need reliable data transfer still have difficulties achieving optimal TCP performance, due to the need to tune the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally a few applications using this package will be discussed.

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2001-09-01
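
The partitioning idea is easy to sketch: split the payload into N chunks, push them concurrently, and reassemble by partition index on the far side. Here threads stand in for the parallel TCP streams (a schematic illustration, not JPARSS's Java API):

    # A payload is split into N chunks that are "sent" concurrently and
    # reassembled in order by partition index on the receiving side.
    from concurrent.futures import ThreadPoolExecutor

    payload = bytes(range(256)) * 1000
    n_streams = 4
    chunk = len(payload) // n_streams
    parts = [payload[i * chunk:(i + 1) * chunk] for i in range(n_streams - 1)]
    parts.append(payload[(n_streams - 1) * chunk:])   # last chunk takes the rest

    received = [None] * n_streams
    def send(i):
        received[i] = parts[i]        # a real stream would socket.sendall here

    with ThreadPoolExecutor(n_streams) as ex:
        list(ex.map(send, range(n_streams)))

    print(b"".join(received) == payload)  # True: order restored by index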

174

Realistic analytical phantoms for parallel magnetic resonance imaging.  

PubMed

The quantitative validation of reconstruction algorithms requires reliable data. Rasterized simulations are popular but they are tainted by an aliasing component that impacts the assessment of the performance of reconstruction. We introduce analytical simulation tools that are suited to parallel magnetic resonance imaging and allow one to build realistic phantoms. The proposed phantoms are composed of ellipses and regions with piecewise-polynomial boundaries, including spline contours, Bézier contours, and polygons. In addition, they take the channel sensitivity into account, for which we investigate two possible models. Our analytical formulations provide well-defined data in both the spatial and k-space domains. Our main contribution is the closed-form determination of the Fourier transforms that are involved. Experiments validate the proposed implementation. In a typical parallel magnetic resonance imaging reconstruction experiment, we quantify the bias in the overly optimistic results obtained with rasterized simulations-the inverse-crime situation. We provide a package that implements the different simulations and provide tools to guide the design of realistic phantoms. PMID:22049364

Guerquin-Kern, M; Lejeune, L; Pruessmann, K P; Unser, M

2012-03-01
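
For the simplest primitive, the required closed form is classical: the 2-D Fourier transform of an ellipse indicator is a scaled J1 (jinc-like) kernel with a phase ramp for the center offset. The sketch below transcribes that standard result (convention F(k) = integral of f(x) e^{-2πi k·x} dx; the coil-sensitivity weighting treated in the paper is omitted):

    import numpy as np
    from scipy.special import j1

    def ellipse_kspace(kx, ky, a, b, x0=0.0, y0=0.0):
        """Analytical k-space signal of a unit-amplitude ellipse indicator
        with semi-axes (a, b) centered at (x0, y0)."""
        rho = np.hypot(a * kx, b * ky)
        safe = np.where(rho > 1e-12, rho, 1.0)             # avoid 0/0 at DC
        amp = np.where(rho > 1e-12, j1(2 * np.pi * safe) / safe, np.pi)
        phase = np.exp(-2j * np.pi * (kx * x0 + ky * y0))  # center offset
        return a * b * amp * phase

    kx, ky = np.meshgrid(np.linspace(-4, 4, 129), np.linspace(-4, 4, 129))
    F = ellipse_kspace(kx, ky, a=0.6, b=0.4, x0=0.1, y0=-0.2)
    print(F[64, 64].real, "should equal the ellipse area", np.pi * 0.6 * 0.4)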

175

Adaptive Parallelism with Piranha  

Microsoft Academic Search

"Adaptive parallelism" refers to parallel computations on a dynamically changingset of processors: processors may join or withdraw from the computation as it proceeds.Networks of fast workstations are the most important setting for adaptive parallelism atpresent. Workstations at most sites are typically idle for significant fractions of the day,and those idle cycles may constitute in the aggregate a powerful computing resource.For

Nicholas Carriero; David Gelernter; David Kaminsky; Jeffery Westbrook

176

PCLIPS: Parallel CLIPS  

NASA Technical Reports Server (NTRS)

A parallel version of CLIPS 5.1 has been developed to run on Intel Hypercubes. The user interface is the same as that for CLIPS with some added commands to allow for parallel calls. A complete version of CLIPS runs on each node of the hypercube. The system has been instrumented to display the time spent in the match, recognize, and act cycles on each node. Only rule-level parallelism is supported. Parallel commands enable the assertion and retraction of facts to/from remote nodes' working memory. Parallel CLIPS was used to implement a knowledge-based command, control, communications, and intelligence (C³I) system to demonstrate the fusion of high-level, disparate sources. We discuss the nature of the information fusion problem, our approach, and implementation. Parallel CLIPS has also been used to run several benchmark parallel knowledge bases such as one to set up a cafeteria. Results from running Parallel CLIPS with parallel knowledge-base partitions indicate that significant speed increases, including superlinear in some cases, are possible.

Hall, Lawrence O.; Bennett, Bonnie H.; Tello, Ivan

1994-01-01

177

Merlin - Massively parallel heterogeneous computing  

NASA Technical Reports Server (NTRS)

Hardware and software for Merlin, a new kind of massively parallel computing system, are described. Eight computers are linked as a 300-MIPS prototype to develop system software for a larger Merlin network with 16 to 64 nodes, totaling 600 to 3000 MIPS. These working prototypes help refine a mapped reflective memory technique that offers a new, very general way of linking many types of computer to form supercomputers. Processors share data selectively and rapidly on a word-by-word basis. Fast firmware virtual circuits are reconfigured to match topological needs of individual application programs. Merlin's low-latency memory-sharing interfaces solve many problems in the design of high-performance computing systems. The Merlin prototypes are intended to run parallel programs for scientific applications and to determine hardware and software needs for a future Teraflops Merlin network.

Wittie, Larry; Maples, Creve

1989-01-01

178

A high-speed linear algebra library with automatic parallelism  

NASA Technical Reports Server (NTRS)

Parallel or distributed processing is key to getting the highest performance from workstations. However, designing and implementing efficient parallel algorithms is difficult and error-prone. It is even more difficult to write code that is both portable to and efficient on many different computers. Finally, it is harder still to satisfy the above requirements and include the reliability and ease of use required of commercial software intended for use in a production environment. As a result, the application of parallel processing technology to commercial software has been extremely small even though there are numerous computationally demanding programs that would significantly benefit from application of parallel processing. This paper describes DSSLIB, which is a library of subroutines that perform many of the time-consuming computations in engineering and scientific software. DSSLIB combines the high efficiency and speed of parallel computation with a serial programming model that eliminates many undesirable side-effects of typical parallel code. The result is a simple way to incorporate the power of parallel processing into commercial software without compromising maintainability, reliability, or ease of use. This gives significant advantages over less powerful non-parallel entries in the market.

Boucher, Michael L.

1994-01-01

179

Sequential cumulative fatigue reliability  

NASA Technical Reports Server (NTRS)

A component is assumed to be subjected to a sequence of several groups of sinusoidal stresses. Each group consists of a specific number of cycles having the same maximum alternating stress level and the same mean stress level, the maximum alternating stress level being different from group to group. A method for predicting the reliability of components subjected to such loads is proposed, given their distributional alternating stress versus cycles-to-failure (S-N) diagram. It is called the 'conditional reliability-equivalent life' method. It is applied to four cases using distributional fatigue data generated in the Reliability Research Laboratory of The University of Arizona, and the predicted reliabilities are compared and discussed.

Kececioglu, D.; Chester, L. B.; Gardner, E. O.

1974-01-01

180

Compositional C++: Compositional Parallel Programming  

Microsoft Academic Search

A compositional parallel program is a program constructed by composing component programs in parallel, where the composed program inherits properties of its components. In this paper, we describe a small extension of C++ called Compositional C++ or CC++ which is an object-oriented notation that supports compositional parallel programming. CC++ integrates different paradigms of parallel programming: data-parallel, task-parallel and object-parallel paradigms;

K. Mani Chandy; Carl Kesselman

1992-01-01

181

Extracting task-level parallelism  

Microsoft Academic Search

Automatic detection of task-level parallelism (also referred to as functional, DAG, unstructured, or thread parallelism) at various levels of program granularity is becoming increasingly important for parallelizing and back-end compilers. Parallelizing compilers detect iteration-level or coarser-granularity parallelism, which is suitable for parallel computers; detection of parallelism at the statement or operation level is essential for most modern microprocessors, including superscalar and

Milind Girkar; Constantine D. Polychronopoulos

1995-01-01

182

Reliability and Regression Analysis  

NSDL National Science Digital Library

This applet, by David M. Lane of Rice University, demonstrates how the reliability of X and Y affect various aspects of the regression of Y on X. Java 1.1 is required and a full set of instructions is given in order to get the full value from the applet. Exercises and definitions to key terms are also given to help students understand reliability and regression analysis.

Lane, David M.

183

The Journey Toward Reliability  

NSDL National Science Digital Library

Kansas State University faculty members have partnered with industry to assist in the implementation of a reliability centered manufacturing (RCM) program. This paper highlights faculty members' experiences, benefits to industry of implementing a reliability centered manufacturing program, and faculty members' roles in the RCM program implementation. The paper includes lessons learned by faculty members, short-term extensions of the faculty-industry partnership, and a long-term vision for an RCM institute at the university level.

Brockway, Kathy Vratil

184

Ultra Reliability Workshop Introduction  

NASA Technical Reports Server (NTRS)

This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

Shapiro, Andrew A.

2006-01-01

185

Software reliability studies  

NASA Technical Reports Server (NTRS)

The long-term goal of this research is to identify or create a model for use in analyzing the reliability of flight control software. The immediate tasks addressed are the creation of data useful to the study of software reliability and the production of results pertinent to software reliability through the analysis of existing reliability models and data. The completed data creation portion of this research consists of a Generic Checkout System (GCS) design document created in cooperation with NASA and Research Triangle Institute (RTI) experimenters. This will lead to design and code reviews, with the resulting product being one of the versions used in the Terminal Descent Experiment being conducted by the Systems Validation Methods Branch (SVMB) of NASA/Langley. An appended paper details an investigation of the Jelinski-Moranda and Geometric models for software reliability. The models were given data from a process that they correctly simulate and asked to make predictions about the reliability of that process. It was found that either model will usually fail to make good predictions. These problems were attributed to randomness in the data, and replication of data was recommended.
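To make the appended paper's setting concrete, here is a minimal sketch (not the authors' code) of the Jelinski-Moranda model it investigates: N initial faults, each fix perfect, and the hazard after the (i-1)th fix equal to phi*(N-i+1). The sketch simulates inter-failure times and recovers the parameters by a crude grid search over the log-likelihood; the noisy estimates it tends to produce are consistent with the paper's finding that predictions from such models are fragile.

```python
import math, random

def simulate(N=30, phi=0.02, observed=20, seed=1):
    """Inter-failure times: the i-th gap is exponential with rate phi*(N-i)."""
    rng = random.Random(seed)
    return [rng.expovariate(phi * (N - i)) for i in range(observed)]

def loglik(times, N, phi):
    """Jelinski-Moranda log-likelihood for the observed inter-failure times."""
    if N < len(times):
        return -math.inf
    return sum(math.log(phi * (N - i)) - phi * (N - i) * t
               for i, t in enumerate(times))

times = simulate()
best = max(((N, phi) for N in range(len(times), 200)
            for phi in (k / 1000 for k in range(1, 100))),
           key=lambda p: loglik(times, *p))
print("estimated (N, phi):", best)  # typically noisy, as the study suggests
```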

Wilson, Larry W.

1989-01-01

186

Multidisciplinary System Reliability Analysis  

NASA Technical Reports Server (NTRS)

The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code, developed under the leadership of NASA Glenn Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multidisciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

Mahadevan, Sankaran; Han, Song; Chamis, Christos C. (Technical Monitor)

2001-01-01

187

Parallel FFT & Isoefficiency 1 The Fast Fourier Transform in Parallel  

E-print Network

Lecture slides on computing the Fast Fourier Transform in parallel and on isoefficiency analysis, from Introduction to Supercomputing (MCS 572), lecture L-14, 14 February 2014.

Verschelde, Jan

188

Parallel Lisp simulator  

SciTech Connect

CSIM is a simulator for parallel Lisp, based on a continuation passing interpreter. It models a shared-memory multiprocessor executing programs written in Common Lisp, extended with several primitives for creating and controlling processes. This paper describes the structure of the simulator, measures its performance, and gives an example of its use with a parallel Lisp program.

Weening, J.S.

1988-05-01

189

Parallel Programming Workshop  

NSDL National Science Digital Library

This is an online course for parallel programming. Topics include MPI basics, point-to-point communication, derived datatypes, virtual topologies, collective communication, parallel I/O, and performance analysis and profiling. Other languages will be discussed such as OpenMP and High Performance Fortran (HPF). A Computational Fluid Dynamics section includes flux functions, Riemann solver, Euler equations, and Navier-Stokes equations.
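As a minimal, hedged taste of two of the topics listed above (point-to-point and collective communication), the following sketch uses the mpi4py Python bindings; the workshop itself covers MPI more broadly, so treat this only as an illustration of the message-passing pattern.

```python
# Minimal mpi4py sketch. Run with: mpiexec -n 4 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Point-to-point: rank 0 sends a greeting to rank 1.
if rank == 0 and size > 1:
    comm.send("hello from rank 0", dest=1, tag=11)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    print(msg)

# Collective: every rank contributes its rank number; rank 0 gets the sum.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of ranks:", total)
```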

190

Grid Aware Parallelizing Algorithms  

Microsoft Academic Search

Running tightly coupled parallel MPI applications in a real grid environment using distributed MPI implementations (1, 2) can, in principle, make better and more flexible use of computational resources, but for most parallel applications it has a major downside: the performance of such codes tends to be very poor. Most often the natural characteristics of real-world grids are responsible

Thomas Dramlitsch; Gabrielle Allen; Ed Seidel

191

Parallelizing quantum circuits  

Microsoft Academic Search

We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade off in terms of depth and space complexity. As a result we distinguish a class of polynomial depth circuits that can be parallelized to logarithmic depth while adding only polynomial many auxiliary qubits. In particular, we

Anne Broadbent; Elham Kashefi

2009-01-01

192

Statistical modelling of software reliability  

NASA Technical Reports Server (NTRS)

During the six-month period from 1 April 1991 to 30 September 1991 the following research papers in statistical modeling of software reliability appeared: (1) A Nonparametric Software Reliability Growth Model; (2) On the Use and the Performance of Software Reliability Growth Models; (3) Research and Development Issues in Software Reliability Engineering; (4) Special Issues on Software; and (5) Software Reliability and Safety.

Miller, Douglas R.

1991-01-01

193

Influence of Hydrogen Incorporation on the Reliability of Gate Oxide Formed by Using Low-Temperature Plasma Selective Oxidation Applicable to Sub-50-nm W-Polymetal Gate Devices

Microsoft Academic Search

This letter reveals the physical and electrical properties of silicon dioxide (SiO2) formed by plasma selective oxidation (plasma selox) using an O2 and H2 gas mixture, which is applicable to sub-50-nm tungsten-polymetal gate memory devices without a capping nitride film. Metal-oxide-semiconductor capacitors with gate oxide formed by the plasma selox at process temperatures in the range of 400 °C to 700 °C showed much

Kwan-Yong Lim; Min-Gyu Sung; Heung-Jae Cho; Yong Soo Kim; Se-Aug Jang; Seung Ryong Lee; Kwangok Kim; Hong-Seon Yang; Hyun-Chul Sohn; Seung-Ho Pyi; Ja-Chun Ku; Jin-Woong Kim

2008-01-01

194

Parallel computing works  

SciTech Connect

An account of the Caltech Concurrent Computation Program (C{sup 3}P), a five-year project that focused on answering the question: Can parallel computers be used to do large-scale scientific computations? As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C{sup 3}P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C{sup 3}P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

Not Available

1991-10-23

195

Totally parallel multilevel algorithms  

NASA Technical Reports Server (NTRS)

Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT-based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

Frederickson, Paul O.

1988-01-01

196

Reliability Assessment for Two Versions of Vocabulary Levels Tests  

ERIC Educational Resources Information Center

This article reports a reliability study of two versions of the Vocabulary Levels Test at the 5000 word level. This study was motivated by a finding from an ongoing longitudinal study of vocabulary acquisition that Version A and Version B of Vocabulary Levels Test at the 5000 word level were not parallel. In order to investigate this issue,…

Xing, Peiling; Fulcher, Glenn

2007-01-01

197

Towards quantitative software reliability assessment in incremental development processes  

Microsoft Academic Search

The iterative and incremental development process is becoming a major development model in industry, and allows for a good deal of parallelism between development and testing. In this paper we develop a quantitative software reliability assessment method for incremental development processes, based on the familiar non-homogeneous Poisson processes. More specifically, we utilize the software metrics observed in each incremental
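The paper's NHPP foundation can be illustrated with the classic Goel-Okumoto mean value function m(t) = a(1 - exp(-bt)); note that this specific form and the data below are illustrative assumptions, not taken from the paper.

```python
# Fit a Goel-Okumoto NHPP mean value function to cumulative failure counts.
import numpy as np
from scipy.optimize import curve_fit

def mvf(t, a, b):
    """Expected cumulative failures by time t: a*(1 - exp(-b*t))."""
    return a * (1.0 - np.exp(-b * t))

t = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)            # test weeks
cum_failures = np.array([12, 20, 26, 30, 33, 35, 36, 37], dtype=float)

(a, b), _ = curve_fit(mvf, t, cum_failures, p0=(40.0, 0.5))
print(f"expected total faults a = {a:.1f}, detection rate b = {b:.2f}")
print(f"predicted failures by week 10: {mvf(10, a, b):.1f}")
```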

Toshiya Fujii; Tadashi Dohi; Takaji Fujiwara

2011-01-01

198

A case study of the splithalf reliability coefficient  

Microsoft Academic Search

Different values for split-half reliability will be found for a single test if the items comprising the contrasted halves of the test are selected in different ways. The author presents evidence based on 4 arbitrary splits, such as the odd-even item split, on 30 random splits, and on 14 parallel splits in which the division was determined by item analysis
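Cronbach's comparison of splits presumes the standard computation, which is easy to state in code: correlate two half-test scores, then apply the Spearman-Brown correction to estimate full-length reliability. The sketch below uses the odd-even split on simulated data; nothing here is from the 1946 study itself.

```python
import numpy as np

def split_half_reliability(items):
    """items: 2-D array, rows = examinees, columns = scored test items."""
    odd_half = items[:, 0::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]   # half-test correlation
    return 2 * r / (1 + r)                       # Spearman-Brown correction

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))              # shared ability factor
items = (rng.normal(size=(200, 40)) + ability > 0).astype(int)
print("estimated reliability:", round(split_half_reliability(items), 3))
```

A different split (random or matched by item analysis) simply changes which columns land in each half, which is exactly the source of the varying coefficients the abstract describes.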

L. J. Cronbach

1946-01-01

199

Gearbox Reliability Collaborative Update (Presentation)  

SciTech Connect

This presentation was given at the Sandia Reliability Workshop in August 2013 and provides information on current statistics, a status update, next steps, and other reliability research and development activities related to the Gearbox Reliability Collaborative.

Sheng, S.

2013-10-01

200

Bilingual parallel programming  

SciTech Connect

Numerous experiments have demonstrated that computationally intensive algorithms support adequate parallelism to exploit the potential of large parallel machines. Yet successful parallel implementations of serious applications are rare. The limiting factor is clearly programming technology. None of the approaches to parallel programming that have been proposed to date -- whether parallelizing compilers, language extensions, or new concurrent languages -- seem to adequately address the central problems of portability, expressiveness, efficiency, and compatibility with existing software. In this paper, we advocate an alternative approach to parallel programming based on what we call bilingual programming. We present evidence that this approach provides an effective solution to parallel programming problems. The key idea in bilingual programming is to construct the upper levels of applications in a high-level language while coding selected low-level components in low-level languages. This approach permits the advantages of a high-level notation (expressiveness, elegance, conciseness) to be obtained without the cost in performance normally associated with high-level approaches. In addition, it provides a natural framework for reusing existing code.
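The bilingual idea can be sketched in miniature with Python standing in for the high-level layer and a precompiled C routine as the low-level component; in the paper's own setting the layers are a high-level notation over Fortran or C, so everything below is an analogy, and a POSIX C math library is assumed.

```python
# High-level structure in Python, numeric kernel in compiled C (POSIX assumed).
import ctypes, ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))  # the C math library
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

def rms(values):
    """Data handling and composition stay in the high-level language..."""
    mean_sq = sum(v * v for v in values) / len(values)
    return libm.sqrt(mean_sq)      # ...the numeric kernel is compiled C.

print(rms([3.0, 4.0]))  # 3.5355...
```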

Foster, I.; Overbeek, R.

1990-01-01

201

Proposed reliability cost model  

NASA Technical Reports Server (NTRS)

The research investigations which were involved in the study include: cost analysis/allocation, reliability and product assurance, forecasting methodology, systems analysis, and model-building. This is a classic example of an interdisciplinary problem, since the model-building requirements include the need for understanding and communication between technical disciplines on one hand, and the financial/accounting skill categories on the other. The systems approach is utilized within this context to establish a clearer and more objective relationship between reliability assurance and the subcategories (or subelements) that provide, or reenforce, the reliability assurance for a system. Subcategories are further subdivided as illustrated by a tree diagram. The reliability assurance elements can be seen to be potential alternative strategies, or approaches, depending on the specific goals/objectives of the trade studies. The scope was limited to the establishment of a proposed reliability cost-model format. The model format/approach is dependent upon the use of a series of subsystem-oriented CER's and sometimes possible CTR's, in devising a suitable cost-effective policy.

Delionback, L. M.

1973-01-01

202

Orbiter Autoland reliability analysis  

NASA Technical Reports Server (NTRS)

The Space Shuttle Orbiter is the only space reentry vehicle in which the crew is seated upright. This position presents some physiological effects requiring countermeasures to prevent a crewmember from becoming incapacitated. This also introduces a potential need for automated vehicle landing capability. Autoland is a primary procedure that was identified as a requirement for landing following an extended-duration orbiter mission. This report documents the results of the reliability analysis performed on the hardware required for an automated landing. A reliability block diagram was used to evaluate system reliability. The analysis considers the manual and automated landing modes currently available on the Orbiter. (Autoland is presently a backup system only.) Results of this study indicate a +/- 36 percent probability of successfully extending a nominal mission to 30 days. Enough variations were evaluated to verify that the reliability could be altered with mission planning and procedures. If the crew is modeled as being fully capable after 30 days, the probability of a successful manual landing is comparable to that of Autoland, because much of the hardware is used for both manual and automated landing modes. The analysis indicates that the reliability of the manual mode is limited by the hardware and depends greatly on crew capability. Crew capability for a successful landing after 30 days has not yet been determined.

Welch, D. Phillip

1993-01-01

203

Ultra reliability at NASA  

NASA Technical Reports Server (NTRS)

Ultra-reliable systems are critical to NASA, particularly as consideration is being given to extended lunar missions and manned missions to Mars. NASA has formulated a program designed to improve the reliability of NASA systems. The long-term goal of the NASA ultra-reliability program is to ultimately improve NASA systems by an order of magnitude. The approach outlined in this presentation involves the steps used in developing a strategic plan to achieve the long-term objective of ultra-reliability. Consideration is given to: complex systems, hardware (including aircraft, aerospace craft and launch vehicles), software, human interactions, long-life missions, infrastructure development, and cross-cutting technologies. Several NASA-wide workshops have been held, identifying issues for reliability improvement and providing mitigation strategies for these issues. In addition to representation from all of the NASA centers, experts from government (NASA and non-NASA), universities and industry participated. Highlights of a strategic plan, which is being developed using the results from these workshops, will be presented.

Shapiro, Andrew A.

2006-01-01

204

ImPact Test-Retest Reliability: Reliably Unreliable?  

PubMed Central

Context: Computerized neuropsychological testing is commonly used in the assessment and management of sport-related concussion. Even though computerized testing is widespread, psychometric evidence for test-retest reliability is somewhat limited. Additional evidence for test-retest reliability is needed to optimize clinical decision making after concussion. Objective: To document test-retest reliability for a commercially available computerized neuropsychological test battery (ImPACT) using 2 different clinically relevant time intervals. Design: Cross-sectional study. Setting: Two research laboratories. Patients or Other Participants: Group 1 (n = 46) consisted of 25 men and 21 women (age = 22.4 ± 1.89 years). Group 2 (n = 45) consisted of 17 men and 28 women (age = 20.9 ± 1.72 years). Intervention(s): Both groups completed ImPACT forms 1, 2, and 3, which were delivered sequentially either at 1-week intervals (group 1) or at baseline, day 45, and day 50 (group 2). Group 2 also completed the Green Word Memory Test (WMT) as a measure of effort. Main Outcome Measures: Intraclass correlation coefficients (ICCs) were calculated for the composite scores of ImPACT between time points. Repeated-measures analysis of variance was used to evaluate changes in ImPACT and WMT results over time. Results: The ICC values for group 1 ranged from 0.26 to 0.88 for the 4 ImPACT composite scores. The ICC values for group 2 ranged from 0.37 to 0.76. In group 1, ImPACT classified 37.0% and 46.0% of healthy participants as impaired at time points 2 and 3, respectively. In group 2, ImPACT classified 22.2% and 28.9% of healthy participants as impaired at time points 2 and 3, respectively. Conclusions: We found variable test-retest reliability for ImPACT metrics. Visual motor speed and reaction time demonstrated greater reliability than verbal and visual memory. Our current data support a multifaceted approach to concussion assessment using clinical examinations, symptom reports, cognitive testing, and balance assessment. PMID:23724770

Resch, Jacob; Driscoll, Aoife; McCaffrey, Noel; Brown, Cathleen; Ferrara, Michael S.; Macciocchi, Stephen; Baumgartner, Ted; Walpert, Kimberly

2013-01-01

205

Electronic logic for enhanced switch reliability  

DOEpatents

A logic circuit is used to enhance redundant switch reliability. Two or more switches are monitored for logical high or low output. The output for the logic circuit produces a redundant and fail-safe representation of the switch outputs. When both switch outputs are high, the output is high. Similarly, when both switch outputs are low, the logic circuit's output is low. When the output states of the two switches do not agree, the circuit resolves the conflict by memorizing the last output state which both switches were simultaneously in and produces the logical complement of this output state. Thus, the logic circuit of the present invention allows the redundant switches to be treated as if they were in parallel when the switches are open and as if they were in series when the switches are closed. A failsafe system having maximum reliability is thereby produced.
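The patent's logic is small enough to capture directly. The sketch below is a behavioral model in software (the patent describes a hardware circuit), using the stated rule: pass the common value on agreement, and on disagreement emit the complement of the last state both switches shared.

```python
class RedundantSwitchLogic:
    """Behavioral model of the patented redundant-switch logic."""
    def __init__(self):
        self.last_agreed = False   # memorized state, assumed low at power-up

    def output(self, sw_a: bool, sw_b: bool) -> bool:
        if sw_a == sw_b:
            self.last_agreed = sw_a    # both agree: remember and pass it on
            return sw_a
        return not self.last_agreed    # conflict: complement the memory

logic = RedundantSwitchLogic()
print(logic.output(True, True))    # True  (both high)
print(logic.output(True, False))   # False (disagree: complement of True)
print(logic.output(False, False))  # False (both low)
```

The conflict rule is what makes open switches behave as if wired in parallel and closed switches as if in series, which is the fail-safe property the patent claims.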

Cooper, J.A.

1984-01-20

206

Understanding biological computation: reliable learning and recognition.  

PubMed Central

We experimentally examine the consequences of the hypothesis that the brain operates reliably, even though individual components may intermittently fail, by computing with dynamical attractors. Specifically, such a mechanism exploits dynamic collective behavior of a system with attractive fixed points in its phase space. In contrast to the usual methods of reliable computation involving a large number of redundant elements, this technique of self-repair only requires collective computation with a few units, and it is amenable to quantitative investigation. Experiments on parallel computing arrays show that this mechanism leads naturally to rapid self-repair, adaptation to the environment, recognition and discrimination of fuzzy inputs, and conditional learning, properties that are commonly associated with biological computation. PMID:6593731

Hogg, T; Huberman, B A

1984-01-01

207

Quality enhancement of parallel MDP flows with mask suppliers  

NASA Astrophysics Data System (ADS)

For many maskshops, parallel mask data preparation (MDP) flows that end with a final data comparison are viewed as a reliable method to reduce quality risks caused by mis-operation. However, in recent years, more and more mask data mistakes have shown that present parallel MDP flows cannot yet capture all mask data errors. In this paper, we show the major failure modes of parallel MDP flows, drawn from analyzing MDP quality accidents, and share the approaches we are pursuing together with mask suppliers to achieve further improvement.

Deng, Erwin; Lee, Rachel; Lee, Chun Der

2013-06-01

208

Reliability Centered Maintenance - Methodologies  

NASA Technical Reports Server (NTRS)

Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

Kammerer, Catherine C.

2009-01-01

209

Software reliability perspectives  

NASA Technical Reports Server (NTRS)

Software which is used in life-critical functions must be known to be highly reliable before installation. This requires a strong testing program to estimate the reliability, since neither formal methods, software engineering, nor fault-tolerant methods can guarantee perfection. Prior to final testing, software goes through a debugging period, and many models have been developed to try to estimate reliability from the debugging data. However, the existing models are poorly validated and often give poor performance. This paper emphasizes the fact that part of their failures can be attributed to the random nature of the debugging data given to these models as input, and it poses the problem of correcting this defect as an area of future research.

Wilson, Larry; Shen, Wenhui

1987-01-01

210

JSD: Parallel Job Accounting on the IBM SP2  

NASA Technical Reports Server (NTRS)

The IBM SP2 is one of the most promising parallel computers for scientific supercomputing - it is fast and usually reliable. One of its biggest problems is a lack of robust and comprehensive system software. Among other things, this software allows a collection of Unix processes to be treated as a single parallel application. It does not, however, provide accounting for parallel jobs other than what is provided by AIX for the individual process components. Without parallel job accounting, it is not possible to monitor system use, measure the effectiveness of system administration strategies, or identify system bottlenecks. To address this problem, we have written jsd, a daemon that collects accounting data for parallel jobs. jsd records information in a format that is easily machine- and human-readable, allowing us to extract the most important accounting information with very little effort. jsd also notifies system administrators in certain cases of system failure.

Saphir, William; Jones, James Patton; Walter, Howard (Technical Monitor)

1995-01-01

211

Reliable Shapelet Image Analysis  

E-print Network

Aims: We discuss the applicability and reliability of the shapelet technique for scientific image analysis. Methods: We quantify the effects of non-orthogonality of sampled shapelet basis functions and misestimation of shapelet parameters. We perform the shapelet decomposition on artificial galaxy images with underlying shapelet models and galaxy images from the GOODS survey, comparing the publicly available IDL implementation with our new C++ implementation. Results: Non-orthogonality of the sampled basis functions and misestimation of the shapelet parameters can cause substantial misinterpretation of the physical properties of the decomposed objects. Additional constraints, image preprocessing and enhanced precision have to be incorporated in order to achieve reliable decomposition results.

P. Melchior; M. Meneghetti; M. Bartelmann

2006-12-13

212

Reliability Generalization (RG) Analysis: The Test Is Not Reliable  

ERIC Educational Resources Information Center

Literature shows that most researchers are unaware of some of the characteristics of reliability. This paper clarifies some misconceptions by describing the procedures, benefits, and limitations of reliability generalization while using it to illustrate the nature of score reliability. Reliability generalization (RG) is a meta-analytic method…

Warne, Russell

2008-01-01

213

DPL : Data Parallel Library Manual  

Microsoft Academic Search

In [PP93] we described a transformational approach to realizing architecture-independent parallel execution of a high-level parallel language. Detailed in that document is a series of steps that, when applied to high-level, data-parallel programs written in the Proteus programming language, yield parallel execution on a variety of different parallel architectures. The Data Parallel Library (DPL) directly supports Proteus by supplying a vital link in

Daniel W. Palmer

1994-01-01

214

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

Foster, I.; Tuecke, S.

1991-12-01

215

Parallels with nature  

NASA Astrophysics Data System (ADS)

Adam Nelson and Stuart Warriner, from the University of Leeds, talk with Nature Chemistry about their work to develop viable synthetic strategies for preparing new chemical structures in parallel with the identification of desirable biological activity.

2014-10-01

216

Series/Parallel Batteries  

NSDL National Science Digital Library

It is important for students to understand how resistors, capacitors, and batteries combine in series and parallel. The combination of batteries has a lot of practical applications in science competitions. This lab also reinforces how to use a voltmeter t

Michael Horton

2009-05-30

217

Methodological Approach Parallel Computation  

E-print Network

Slide fragments on parallel computation for statistics of extreme weather events (hurricanes, tornados, storm surges, etc.); a 100-year flood is defined as the size of flood expected to occur once every 100 years on average.

Paciorek, Chris

218

Reliability Degradation Due to Stockpile Aging  

SciTech Connect

The objective of this research is the investigation of alternative methods for characterizing the reliability of systems with time-dependent failure modes associated with stockpile aging. Reference to 'reliability degradation' has, unfortunately, come to be associated with all types of aging analyses: both deterministic and stochastic. In this research, in keeping with the true theoretical definition, reliability is defined as a probabilistic description of system performance as a function of time. Traditional reliability methods used to characterize stockpile reliability depend on the collection of a large number of samples or observations. Clearly, after the experiments have been performed and the data has been collected, critical performance problems can be identified. A major goal of this research is to identify existing methods and/or develop new mathematical techniques and computer analysis tools to anticipate stockpile problems before they become critical issues. One of the most popular methods for characterizing the reliability of components, particularly electronic components, assumes that failures occur in a completely random fashion, i.e. uniformly across time. This method is based primarily on the use of constant failure rates for the various elements that constitute the weapon system, i.e. the systems do not degrade while in storage. Experience has shown that predictions based upon this approach should be regarded with great skepticism since the relationship between the life predicted and the observed life has been difficult to validate. In addition to this fundamental problem, the approach does not recognize that there are time-dependent material properties and variations associated with the manufacturing process and the operational environment. To appreciate the uncertainties in predicting system reliability, a number of alternative methods are explored in this report. All of the methods are very different from those currently used to assess stockpile reliability, but have been used extensively in various forms outside Sandia National Laboratories. It is hoped that this report will encourage the use of 'nontraditional' reliability and uncertainty techniques in gaining insight into stockpile reliability issues.
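The constant-failure-rate assumption criticized above contrasts with age-dependent models in a way a few lines make vivid; the sketch is illustrative only, with arbitrary parameter values not drawn from the report.

```python
import math

def r_exponential(t, lam=1e-3):
    return math.exp(-lam * t)                 # no aging: memoryless

def r_weibull(t, eta=1000.0, beta=3.0):
    return math.exp(-((t / eta) ** beta))     # beta > 1 models wear-out

for label, t in [("early", 100.0), ("mid", 500.0), ("late", 1000.0)]:
    print(label, round(r_exponential(t), 3), round(r_weibull(t), 3))
# The exponential model degrades steadily from time zero, while the Weibull
# model holds near 1.0 early and then drops quickly: the qualitative
# difference that motivates looking beyond constant failure rates for aging.
```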

Robinson, David G.

1999-04-01

219

Parallelization: Binary Tree Traversal  

NSDL National Science Digital Library

This module teaches the use of binary trees to sort through large data sets, different traversal methods for binary trees, including parallel methods, and how to scale a binary tree traversal on multiple compute cores. Upon completion of this module, students should be able to recognize the structure of a binary tree, employ different methods for traversing a binary tree, understand how to parallelize a binary tree traversal, and how to scale a binary tree traversal over multiple compute cores.
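A minimal sketch of the module's core idea, under the assumption (mine, for brevity) that the tree is split at its top level so each child subtree becomes an independent unit of work for a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def count(node):
    """Plain serial traversal of one subtree."""
    return 0 if node is None else 1 + count(node.left) + count(node.right)

def parallel_count(root, workers=4):
    # Level-1 split: each child subtree is an independent unit of work.
    subtrees = [t for t in (root.left, root.right) if t]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return 1 + sum(pool.map(count, subtrees))

tree = Node(1, Node(2, Node(4), Node(5)), Node(3, None, Node(6)))
assert parallel_count(tree) == count(tree) == 6
```

Splitting deeper in the tree yields more independent subtrees, which is how the traversal scales over more compute cores.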

Aaron Weeden

220

Artificial intelligence in parallel  

SciTech Connect

The current rage in the Artificial Intelligence (AI) community is parallelism: the idea is to build machines with many independent processors doing many things at once. The upshot is that about a dozen parallel machines are now under development for AI alone. As might be expected, the approaches are diverse yet there are a number of fundamental issues in common: granularity, topology, control, and algorithms.

Waldrop, M.M.

1984-08-10

221

FLIP CHIP ENCAPSULATION RELIABILITY  

Microsoft Academic Search

The use of flip chip technology by the electronic industry dates back to the early sixties. The concept of interconnection arrays, in contrast to the peripheral, offers a very attractive packaging technology from numerous aspects, including thermal, electrical and mechanical performance and size reduction. Although a very reliable technology, (more than a billion controlled collapse chip connections, C4, have been

Horatio Quinones; Alec Babiarz; Alan Lewis

1998-01-01

222

Parametric Mass Reliability Study  

NASA Technical Reports Server (NTRS)

The International Space Station (ISS) systems are designed based upon having redundant systems with replaceable orbital replacement units (ORUs). These ORUs are designed to be swapped out fairly quickly, but some are very large, and some are made up of many components. When an ORU fails, it is replaced on orbit with a spare; the failed unit is sometimes returned to Earth to be serviced and re-launched. Such a system is not feasible for a 500+ day long-duration mission beyond low Earth orbit. The components that make up these ORUs have mixed reliabilities. Components that make up the most mass, such as computer housings, pump casings, and the silicon boards of PCBs, typically are the most reliable. Meanwhile, components that tend to fail the earliest, such as seals or gaskets, typically have a small mass. To better understand the problem, my project is to create a parametric model that relates both the mass of ORUs and the mass of ORU subcomponents to reliability.

Holt, James P.

2014-01-01

223

Wood Durability Service & Reliability  

E-print Network

Wood Durability Laboratory: service and reliability, equipment and facilities. Topics include horizontal lap joints, accelerated wood decay, wood preservatives in soil contact, and mold. Field sites support the AWPA E-7 stake test (Formosan termites and decay), the AWPA E-16 horizontal lap-joint test, and other AWPA standard tests.

224

Quantifying Human Performance Reliability.  

ERIC Educational Resources Information Center

Human performance reliability for tasks in the time-space continuous domain is defined and a general mathematical model presented. The human performance measurement terms time-to-error and time-to-error-correction are defined. The model and measurement terms are tested using laboratory vigilance and manual control tasks. Error and error-correction…

Askren, William B.; Regulinski, Thaddeus L.

225

Reliable broadcast protocols  

Microsoft Academic Search

A reliable broadcast protocol for an unreliable broadcast network is described. The protocol operates between the application programs and the broadcast network. It isolates the application programs from the unreliable characteristics of the communication network. The protocol guarantees that all of the broadcast messages are received at all of the operational receivers in a broadcast group. In addition, the sequence

Jo-Mei Chang; Nicholas F. Maxemchuk

1984-01-01

226

Software reliability report  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world, thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which is then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.

Wilson, Larry

1991-01-01

227

A reliability analysis for girder bridges  

Microsoft Academic Search

A reliability analysis is performed for composite steel girders, reinforced concrete T-beams and prestressed concrete girders. The results are presented in the form of sensitivity functions. The live load is more important than the dead load or the dynamic load. For composite steel girders, the most important parameters are strength of steel and girder depth. For reinforced concrete T-beams and

Andrzej S. Nowak; Ahmed S. Yamani

1995-01-01

228

Space Shuttle Propulsion System Reliability  

NASA Technical Reports Server (NTRS)

This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons, (2) Space Shuttle Main Engine (SSME), Reliability Validated by a Million Seconds of Testing, (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control, and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.

Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David

2011-01-01

229

Reliability data analysis software development  

Microsoft Academic Search

Reliability is one of the major keys in product development. While reliability tests are conducted in almost every manufacturing plant, the analysis of reliability test data is hardly rigorous, and engineers mainly rely on the software that comes with the reliability test equipment to perform the test data analysis. Although this is usually sufficient, the underlying assumptions of the analysis

Zhang Guan; Cher Ming Tan

2000-01-01

230

Parallelizing simulated annealing algorithms based on high-performance computer  

Microsoft Academic Search

We implemented five conversions of the simulated annealing (SA) algorithm from sequential to parallel form on high-performance computers and applied them to a set of standard function optimization problems in order to test their performance. According to the experimental results, we eventually found that the traditional approach to parallelizing simulated annealing, namely parallelizing moves in sequential SA, handled very difficult problem instances poorly.
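A compact, single-machine sketch of the "parallelizing moves" conversion mentioned above: several candidate moves are scored concurrently at each step, and one is accepted by the usual Metropolis rule. The toy objective and cooling schedule are assumptions for illustration, not the paper's benchmark set.

```python
import math, random
from concurrent.futures import ThreadPoolExecutor

def energy(x):
    return x * x + 10 * math.sin(3 * x)   # toy objective to minimize

def propose_and_score(x):
    cand = x + random.uniform(-0.5, 0.5)  # one candidate move from x
    return cand, energy(cand)

x, temp = 5.0, 2.0
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(200):
        # Evaluate 4 independent moves in parallel; keep the best proposal.
        cand, e_cand = min(pool.map(propose_and_score, [x] * 4),
                           key=lambda p: p[1])
        # Metropolis acceptance of the winning move.
        if e_cand < energy(x) or random.random() < math.exp((energy(x) - e_cand) / temp):
            x = cand
        temp *= 0.98                       # geometric cooling schedule
print("approximate minimizer:", round(x, 3))
```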

Ding-Jun Chen; Chung-Yeol Lee; Cheol-Hoon Park; Pedro Mendes

2007-01-01

231

Efficient Reliable Internet Storage  

Microsoft Academic Search

This position paper presents a new design for an Internet-wide peer-to-peer storage facility. The design is intended to reduce the required replication significantly without loss of availability. Two techniques are proposed. First, aggressive use of parallel recovery, made possible by placing blocks randomly rather than in a DHT-based fashion. Second, tracking of individual nodes' availabilities, so that

Robbert van Renesse

232

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 3. Technical Report #1202  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the third-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Park, Bitnara Jasmine; Alonzo, Julie; Tindal, Gerald

2012-01-01

233

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 5. Technical Report #1204  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fifth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Lai, Cheng-Fei; Alonzo, Julie; Tindal, Gerald

2012-01-01

234

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 4. Technical Report #1203  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the fourth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Park, Bitnara Jasmine; Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

235

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 7. Technical Report #1206  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the seventh-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Lai, Cheng-Fei; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

236

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 2. Technical Report #1201  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the second-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Lai, Cheng-Fei; Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Tindal, Gerald

2012-01-01

237

Analyzing the Reliability of the easyCBM Reading Comprehension Measures: Grade 6. Technical Report #1205  

ERIC Educational Resources Information Center

In this technical report, we present the results of a reliability study of the sixth-grade multiple choice reading comprehension measures available on the easyCBM learning system conducted in the spring of 2011. Analyses include split-half reliability, alternate form reliability, person and item reliability as derived from Rasch analysis,…

Irvin, P. Shawn; Alonzo, Julie; Park, Bitnara Jasmine; Lai, Cheng-Fei; Tindal, Gerald

2012-01-01

238

Parallel Implementation of a Lagrangian Stochastic Model for Pollutant Dispersion  

Microsoft Academic Search

Lagrangian dispersion models have been shown to be effective and reliable tools for simulating airborne pollutant dispersion. However, the main drawback to their use as regulatory models is the associated high computational cost. Consequently, in this paper a parallel version of a Lagrangian particle model—LAMBDA—is developed using the MPI message-passing communication library. Performance tests were executed in a distributed

Debora R. Roberti; Roberto P. Souto; Haroldo F. De Campos Velho; Gervásio Annes Degrazia; Domenico Anfossi

2005-01-01

239

Parallel and Multilevel Algorithms for Computational Partial Differential Equations  

E-print Network

The efficient and reliable solution of partial differential equations (PDEs) plays an essential role in a very large number of applications in business, engineering and science, ranging from

Jimack, Peter

240

Parallel implementation of sparse matrix solvers  

E-print Network

: Academic Press Inc., 1973. [10] K. Hwang and F. A. Briggs, Computer Architecture and Parallel Processing, New York, NY: McGraw-Hill, 1984. [11] H. M. Markowitz, "The Elimination Form of the Inverse and its Application to Linear Programming

Pujari, Sushant Kumar

1992-01-01

241

General Aviation Aircraft Reliability Study  

NASA Technical Reports Server (NTRS)

This reliability study was performed in order to provide the aviation community with an estimate of Complex General Aviation (GA) Aircraft System reliability. To successfully improve the safety and reliability for the next generation of GA aircraft, a study of current GA aircraft attributes was prudent. This was accomplished by benchmarking the reliability of operational Complex GA Aircraft Systems. Specifically, Complex GA Aircraft System reliability was estimated using data obtained from the logbooks of a random sample of the Complex GA Aircraft population.

Pettit, Duane; Turnbull, Andrew; Roelant, Henk A. (Technical Monitor)

2001-01-01

242

Understanding the Elements of Operational Reliability: A Key for Achieving High Reliability  

NASA Technical Reports Server (NTRS)

This viewgraph presentation reviews operational reliability and its role in achieving high reliability through design and process reliability. The topics include: 1) Reliability Engineering Major Areas and interfaces; 2) Design Reliability; 3) Process Reliability; and 4) Reliability Applications.

Safie, Fayssal M.

2010-01-01

243

Camera calibration based on parallel lines  

NASA Astrophysics Data System (ADS)

Nowadays, computer vision is widely used in our daily life. In order to obtain reliable information, camera calibration cannot be neglected. Traditional camera calibration is often impractical in real settings because accurate coordinates of the reference control points are not available. In this article, we present a camera calibration algorithm that can determine both the intrinsic and the extrinsic parameters. The algorithm is based on parallel lines, which are common in everyday photographs, so both parameter sets can be recovered from information extracted from ordinary photos. In more detail, we use two pairs of parallel lines to compute the vanishing points. If the two pairs of lines are mutually perpendicular, the corresponding vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters are then obtained by a Cholesky factorization of the IAC matrix. Connecting a vanishing point with the camera optical center yields a line parallel to the original lines in the scene plane; from this, the extrinsic parameters R and T are recovered. Both the simulation and the experimental results meet our expectations.
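The vanishing-point step lends itself to a short sketch: in homogeneous image coordinates, the line through two points is their cross product, and the intersection of two lines is again a cross product, so two imaged parallel scene lines meet at the vanishing point. The pixel coordinates below are made up for illustration.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (as cross product)."""
    return np.cross(np.append(p, 1.0), np.append(q, 1.0))

def vanishing_point(line1_pts, line2_pts):
    l1 = line_through(*line1_pts)
    l2 = line_through(*line2_pts)
    v = np.cross(l1, l2)           # intersection of the two image lines
    return v[:2] / v[2]            # back to inhomogeneous pixel coordinates

# Two images of parallel scene lines (illustrative pixel coordinates):
v = vanishing_point(([0, 0], [100, 10]), ([0, 50], [100, 55]))
print("vanishing point:", v)       # [1000., 100.]
```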

Li, Weimin; Zhang, Yuhai; Zhao, Yu

2015-01-01

244

Parallel State Estimation Assessment with Practical Data  

SciTech Connect

This paper presents a parallel state estimation (PSE) implementation using a preconditioned gradient algorithm and an orthogonal decomposition-based algorithm. Preliminary tests against a commercial Energy Management System (EMS) State Estimation (SE) tool using real-world data are performed. The results show that while the preconditioned gradient algorithm can solve the SE problem more quickly with the help of parallel computing techniques, it might not be suitable for real-world data due to the large condition number of the gain matrix introduced by the wide range of measurement weights. With the help of the PETSc package, and considering one iteration of the SE process, the orthogonal decomposition-based PSE algorithm can achieve a 5-20 times speedup compared against the commercial EMS tool. It is very promising that the developed PSE can solve the SE problem for large power systems at the SCADA rate, to improve grid reliability.
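The contrast the authors draw can be sketched on a toy weighted least-squares problem: factor W^(1/2)H with a QR decomposition instead of forming the gain matrix H^T W H, whose conditioning is squared and further degraded by widely spread weights. The measurement model below is a made-up example, not the paper's test system.

```python
import numpy as np

H = np.array([[1.0, 0.0],
              [1.0, -1.0],
              [0.0, 1.0]])            # toy linear measurement model z = Hx
z = np.array([1.02, 0.51, 0.49])      # measurements
w = np.array([1e4, 1e2, 1e4])         # widely spread weights

# Orthogonal decomposition: QR of the weighted Jacobian, no gain matrix.
Wsqrt = np.sqrt(w)[:, None]
Q, R = np.linalg.qr(Wsqrt * H)
x = np.linalg.solve(R, Q.T @ (Wsqrt[:, 0] * z))
print("estimated state:", x)
```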

Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

2013-07-31

245

Human Reliability Program Workshop  

SciTech Connect

A Human Reliability Program (HRP) is designed to protect national security as well as worker and public safety by continuously evaluating the reliability of those who have access to sensitive materials, facilities, and programs. Some elements of a site HRP include systematic (1) supervisory reviews, (2) medical and psychological assessments, (3) management evaluations, (4) personnel security reviews, and (5) training of HRP staff and critical positions. Over the years of implementing an HRP, the Department of Energy (DOE) has faced various challenges and overcome obstacles. During this 4-day activity, participants will examine programs that mitigate threats to nuclear security and the insider threat, including HRP, Nuclear Security Culture (NSC) Enhancement, and Employee Assistance Programs. The focus will be to develop an understanding of the need for a systematic HRP and to discuss challenges and best practices associated with mitigating the insider threat.

Landers, John; Rogers, Erin; Gerke, Gretchen

2014-05-18

246

Reliability and durability problems  

NASA Astrophysics Data System (ADS)

The papers presented in this volume focus on methods for determining the stress-strain state of structures and machines and evaluating their reliability and service life. Specific topics discussed include a method for estimating the service life of thin-sheet automotive structures, stressed state at the tip of small cracks in anisotropic plates under biaxial tension, evaluation of the elastic-dissipative characteristics of joints by vibrational diagnostics methods, and calculation of the reliability of ceramic structures for arbitrary long-term loading programs. Papers are also presented on the effect of prior plastic deformation on fatigue damage kinetics, axisymmetric and local deformation of cylindrical parts during finishing-hardening treatments, and adhesion of polymers to diffusion coatings on steels.

Bojtsov, B. V.; Kondrashov, V. Z.

247

Reliable broadcast protocols  

NASA Technical Reports Server (NTRS)

A number of broadcast protocols that are reliable subject to a variety of ordering and delivery guarantees are considered. Developing applications that are distributed over a number of sites and/or must tolerate the failures of some of them becomes a considerably simpler task when such protocols are available for communication. Without such protocols the kinds of distributed applications that can reasonably be built will have a very limited scope. As the trend towards distribution and decentralization continues, it will not be surprising if reliable broadcast protocols have the same role in distributed operating systems of the future that message passing mechanisms have in the operating systems of today. On the other hand, the problems of engineering such a system remain large. For example, deciding which protocol is the most appropriate to use in a certain situation or how to balance the latency-communication-storage costs is not an easy question.

Joseph, T. A.; Birman, Kenneth P.

1989-01-01

248

Photovoltaic array reliability optimization  

NASA Technical Reports Server (NTRS)

An overview of the photovoltaic array reliability problem is presented, and a high reliability/minimum cost approach to this problem is presented. Design areas covered are cell failure, interconnect fatigue, and electrical insulation breakdown, and three solution strategies are discussed. The first involves controlling component failures in the solar cell (cell cracking, cell interconnects) and at the module level (must be statistically treated). Second, a fault tolerant circuit is designed which reduces array degradation, improves module yield losses, and controls hot-spot heating. Third, cost optimum module replacement strategies are also effective in reducing array degradation. This can be achieved by minimizing the life-cycle energy cost of the photovoltaic system. The integration of these solutions is aimed at reducing the 0.01% failure rate.

Ross, R. G., Jr.

1982-01-01

249

Factoring out the Parallelism Effect in VP-Ellipsis: English vs. Dutch Contrasts  

ERIC Educational Resources Information Center

Previous studies, including Duffield and Matsuo (2001; 2002; 2009), have demonstrated second language learners' overall sensitivity to a parallelism constraint governing English VP-ellipsis constructions: like native speakers (NS), advanced Dutch, Spanish and Japanese learners of English reliably prefer ellipsis clauses with structurally parallel

Duffield, Nigel; Matsuo, Ayumi; Roberts, Leah

2009-01-01

250

A Library Hierarchy for Implementing Scalable Parallel Search Algorithms  

E-print Network

libraries forming a hierarchy built on top of ALPS. The first is the Branch, Constrain, and Price Software for performing large-scale parallel search in distributed-memory computing environments. To support the devel

Ralphs, Ted

251

Silt: A distributed bit-parallel architecture for early vision  

Microsoft Academic Search

A new form of parallelism, distributed bit-parallelism (DBP), is introduced. A DBP organization distributes each bit of a data item to a different processor. DBP allows computation that is sublinear with word size for such operations as integer addition, arithmetic shifts, and data moves. The implications of DBP for system architecture are analyzed. An implementation of a DBP architecture based on a

Michael Bolotski; Rod Barman; James J. Little; Daniel Camporese

1993-01-01

252

Parallel Identification: A Shield Against the Assault of Traumatic Jealousy  

Microsoft Academic Search

This paper draws a connection between the clinical emergence of a primitive form of identification, termed parallel identification, and a temporary stasis in the transference. Parallel identification is defined as a manic defence that blocks the acute suffering brought on by consciously experienced jealousy arising from the loss of a beloved yet sadistic object. It occurs as follows: the identifying

Stephanie Lewin

2011-01-01

253

Compact, Reliable EEPROM Controller  

NASA Technical Reports Server (NTRS)

A compact, reliable controller for an electrically erasable, programmable read-only memory (EEPROM) has been developed specifically for a space-flight application. The design may be adaptable to other applications in which there are requirements for reliability in general and, in particular, for prevention of inadvertent writing of data in EEPROM cells. Inadvertent writes pose risks of loss of reliability in the original space-flight application and could pose such risks in other applications. Prior EEPROM controllers are large and complex and do not provide all reasonable protections (in many cases, few or no protections) against inadvertent writes. In contrast, the present controller provides several layers of protection against inadvertent writes. The controller also incorporates a write-time monitor, enabling determination of trends in the performance of an EEPROM through all phases of testing. The controller has been designed as an integral subsystem of a system that includes not only the controller and the controlled EEPROM aboard a spacecraft but also computers in a ground control station, relatively simple onboard support circuitry, and an onboard communication subsystem that utilizes the MIL-STD-1553B protocol. (MIL-STD-1553B is a military standard that encompasses a method of communication and electrical-interface requirements for digital electronic subsystems connected to a data bus. MIL-STD-1553B is commonly used in defense and space applications.) The intent was to maximize reliability while minimizing the size and complexity of onboard circuitry. In operation, control of the EEPROM is effected via the ground computers, the MIL-STD-1553B communication subsystem, and the onboard support circuitry, all of which, in combination, provide the multiple layers of protection against inadvertent writes. There is no controller software, unlike in many prior EEPROM controllers; software can be a major contributor to unreliability, particularly in fault situations such as the loss of power or brownouts. Protection is also provided by a power-monitoring circuit.

Katz, Richard; Kleyner, Igor

2010-01-01

254

Spacecraft transmitter reliability  

NASA Technical Reports Server (NTRS)

A workshop on spacecraft transmitter reliability was held at the NASA Lewis Research Center on September 25 and 26, 1979, to discuss present knowledge and to plan future research areas. Since formal papers were not submitted, this synopsis was derived from audio tapes of the workshop. The following subjects were covered: users' experience with space transmitters; cathodes; power supplies and interfaces; and specifications and quality assurance. A panel discussion ended the workshop.

1980-01-01

255

Software reliability studies  

NASA Technical Reports Server (NTRS)

There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.
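
As a concrete illustration of the kind of model whose predictions are being assessed (a sketch with invented inter-failure times, not the paper's code or data), the classic Jelinski-Moranda model can be fitted by profile likelihood over the unknown total fault count:

```python
# Hedged sketch: Jelinski-Moranda reliability-growth model fitted by
# profile likelihood. The inter-failure times below are made up.
import math

def jm_fit(times, n_max=200):
    n = len(times)
    best = None
    for N in range(n, n_max + 1):                 # candidate total fault count
        denom = sum((N - i) * t for i, t in enumerate(times))
        phi = n / denom                           # MLE of per-fault hazard, given N
        ll = sum(math.log(phi * (N - i)) - phi * (N - i) * t
                 for i, t in enumerate(times))
        if best is None or ll > best[0]:
            best = (ll, N, phi)
    return best  # (log-likelihood, estimated total faults N, hazard phi)

times = [1.2, 1.5, 2.1, 3.0, 4.8, 7.5]            # hypothetical inter-failure times
ll, N, phi = jm_fit(times)
print(f"estimated residual faults: {N - len(times)}, per-fault hazard: {phi:.4f}")
```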

Hoppa, Mary Ann; Wilson, Larry W.

1994-01-01

256

Parallel State Estimation Assessment with Practical Data  

SciTech Connect

This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.
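
The preconditioned conjugate gradient kernel behind such a solver can be sketched as follows; this serial toy version with a Jacobi (diagonal) preconditioner is illustrative only and is not the BPA implementation.

```python
# Illustrative preconditioned conjugate gradient for a symmetric
# positive-definite system, the kind of kernel a parallel state
# estimator distributes across processors.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])    # toy SPD gain-style matrix
b = np.array([1.0, 2.0])
print(pcg(A, b))                          # ~ [0.0909, 0.6364]
```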

Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

2014-10-31

257

Parallelizing Quantum Circuits  

E-print Network

We present a novel automated technique for parallelizing quantum circuits via forward and backward translation to measurement-based quantum computing patterns and analyze the trade-off in terms of depth and space complexity. As a result we distinguish a class of polynomial-depth circuits that can be parallelized to logarithmic depth while adding only polynomially many auxiliary qubits. In particular, we provide for the first time a full characterization of patterns with flow of arbitrary depth, based on the notion of influencing paths and a simple rewriting system on the angles of the measurement. Our method leads to insightful knowledge for constructing parallel circuits and as applications, we demonstrate several constant and logarithmic depth circuits. Furthermore, we prove a logarithmic separation in terms of quantum depth between the quantum circuit model and the measurement-based model.

Anne Broadbent; Elham Kashefi

2007-04-13

258

Parallel optical sampler  

SciTech Connect

An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

2014-05-20

259

Is quantum parallelism real?  

NASA Astrophysics Data System (ADS)

In this paper we raise questions about the reality of computational quantum parallelism. Such questions are important because while quantum theory is rigorously established, the hypothesis that it supports a more powerful model of computation remains speculative. More specifically, we suggest the possibility that the seeming computational parallelism offered by quantum superpositions is actually effected by gate-level parallelism in the reversible implementation of the quantum operator. In other words, when the total number of logic operations is analyzed, quantum computing may not be more powerful than classical. This fact has significant public policy implications with regard to the relative levels of effort that are appropriate for the development of quantum-parallel algorithms and associated hardware (i.e., qubit-based) versus quantum-scale classical hardware.

Lanzagorta, Marco; Uhlmann, Jeffrey

2008-04-01

260

Parallel Magnetic Resonance Imaging  

E-print Network

The main disadvantages of Magnetic Resonance Imaging (MRI) are its long scan times and, in consequence, its sensitivity to motion. Exploiting the complementary information from multiple receive coils, parallel imaging is able to recover images from under-sampled k-space data and to accelerate the measurement. Because parallel magnetic resonance imaging can be used to accelerate basically any imaging sequence it has many important applications. Parallel imaging brought a fundamental shift in image reconstruction: Image reconstruction changed from a simple direct Fourier transform to the solution of an ill-conditioned inverse problem. This work gives an overview of image reconstruction from the perspective of inverse problems. After introducing basic concepts such as regularization, discretization, and iterative reconstruction, advanced topics are discussed including algorithms for auto-calibration, the connection to approximation theory, and the combination with compressed sensing.

Uecker, Martin

2015-01-01

261

Module 12: Parallel Circuits  

NSDL National Science Digital Library

This module on parallel circuits contains a set of notes and a link to allaboutcircuits.com. The module was created by the California Regional Consortium for Engineering Advances in Technological Education (CREATE) which is “a joint effort between seven community colleges and over 30 large high tech engineering/technology employers.” This collection of study modules encourages students to learn about the basics of DC electronics and circuits. Module 12 teaches students about parallel circuits through a study guide and worked annotated sample problems.
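
The module's central fact, that in a parallel circuit the reciprocal of the total resistance is the sum of the reciprocals of the branch resistances, is easy to verify numerically; the resistor and voltage values below are assumed for illustration.

```python
# Quick check of the parallel-circuit rules on assumed values:
# 1/R_total = sum(1/R_i), and branch currents sum to the total current.
def parallel_resistance(*branches):
    return 1.0 / sum(1.0 / r for r in branches)

r_total = parallel_resistance(10.0, 20.0, 20.0)   # ohms
v = 12.0                                          # volts across every branch
print(r_total)                                    # 5.0 ohms
print(v / r_total)                                # total current: 2.4 A
print([v / r for r in (10.0, 20.0, 20.0)])        # branch currents: 1.2, 0.6, 0.6 A
```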

262

Development of a Short Form of the Roommate Rapport Scale.  

ERIC Educational Resources Information Center

Evaluated a short form of the Roommate Rapport Scale that would maintain the scale's reliability and eliminate potentially objectionable items using students (N=320) who resided in dormitories. Results showed the short form to be reliable and unidimensional. (ABL)

Carey, John C.; And Others

1988-01-01

263

On Component Reliability and System Reliability for Space Missions  

NASA Technical Reports Server (NTRS)

This paper addresses the basics of, the limitations of, and the relationship between component reliability and system reliability, through a study of flight computing architectures and related avionics components for future NASA missions. Component reliability analysis and system reliability analysis need to be evaluated at the same time, and the limitations of each analysis and the relationship between the two analyses need to be understood.

Chen, Yuan; Gillespie, Amanda M.; Monaghan, Mark W.; Sampson, Michael J.; Hodson, Robert F.

2012-01-01

264

Reliability of Substation Configurations

E-print Network

A substation still contains what could be described as weak points or points of failure that would lead to loss of load. By knowing how to calculate the reliability of different substation configurations

McCalley, James D.

265

Program for computer aided reliability estimation  

NASA Technical Reports Server (NTRS)

A computer program for estimating the reliability of self-repair and fault-tolerant systems with respect to selected system and mission parameters is presented. The computer program is capable of operation in an interactive conversational mode as well as in a batch mode and is characterized by maintenance of several general equations representative of basic redundancy schemes in an equation repository. Selected reliability functions applicable to any mathematical model formulated with the general equations, used singly or in combination with each other, are separately stored. One or more system and/or mission parameters may be designated as a variable. Data in the form of values for selected reliability functions is generated in a tabular or graphic format for each formulated model.
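
The basic redundancy-scheme equations such a repository maintains are textbook formulas; a minimal sketch of the series and parallel cases (not the program itself) follows.

```python
# Textbook redundancy formulas of the kind stored in such an equation
# repository (standard relations; this is not the patented program's code).
def series(*r):                 # all units must survive
    out = 1.0
    for ri in r:
        out *= ri
    return out

def parallel(*r):               # at least one unit must survive
    fail = 1.0
    for ri in r:
        fail *= 1.0 - ri
    return 1.0 - fail

# Example: two redundant strings, each a series chain of three components.
string = series(0.99, 0.98, 0.97)
print(round(string, 4))                    # single-string reliability
print(round(parallel(string, string), 4))  # dual redundant system
```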

Mathur, F. P. (inventor)

1972-01-01

266

Parallel programming with PCN  

SciTech Connect

PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

Foster, I.; Tuecke, S.

1993-01-01

267

Parallel Spectral Numerical Methods  

NSDL National Science Digital Library

This module teaches the principles of Fourier spectral methods, their utility in solving partial differential equations, and how to implement them in code. Performance considerations for several Fourier spectral implementations are discussed and methods for effective scaling on parallel computers are explained.
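
A minimal serial example of the module's subject, solving the 1-D heat equation by exact integration of each Fourier mode, is sketched below with assumed parameters.

```python
# Fourier spectral solution of u_t = nu * u_xx on a periodic domain:
# transform, decay each mode exactly, transform back (assumed parameters).
import numpy as np

n, nu, t = 128, 0.1, 0.5
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3 * x)

k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
u_hat = np.fft.fft(u0) * np.exp(-nu * k**2 * t)   # each mode decays independently
u = np.real(np.fft.ifft(u_hat))

exact = np.exp(-nu * t) * np.sin(x) + 0.5 * np.exp(-9 * nu * t) * np.sin(3 * x)
print(np.max(np.abs(u - exact)))                  # ~1e-15: spectral accuracy
```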

Gong Chen

268

Massively parallel processor computer  

NASA Technical Reports Server (NTRS)

An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.

Fung, L. W. (inventor)

1983-01-01

269

High performance parallel architectures  

SciTech Connect

In this paper the author describes current high performance parallel computer architectures. A taxonomy is presented to show computer architecture from the user programmer's point-of-view. The effects of the taxonomy upon the programming model are described. Some current architectures are described with respect to the taxonomy. Finally, some predictions about future systems are presented. 5 refs., 1 fig.

Anderson, R.E. (Lawrence Livermore National Lab., CA (USA))

1989-09-01

270

Optimizing parallel reduction operations  

SciTech Connect

A parallel program consists of sets of concurrent and sequential tasks. Often, a reduction (such as array sum) sequentially combines values produced by a parallel computation. Because reductions occur so frequently in otherwise parallel programs, they are good candidates for optimization. Since reductions may introduce dependencies, most languages separate computation and reduction. The Sisal functional language is unique in that reduction is a natural consequence of loop expressions; the parallelism is implicit in the language. Unfortunately, the original language supports only seven reduction operations. To generalize these expressions, the Sisal 90 definition adds user-defined reductions at the language level. Applicable optimizations depend upon the mathematical properties of the reduction. Compilation and execution speed, synchronization overhead, memory use and maximum size influence the final implementation. This paper (1) Defines reduction syntax and compares with traditional concurrent methods; (2) Defines classes of reduction operations; (3) Develops analysis of classes for optimized concurrency; (4) Incorporates reductions into Sisal 1.2 and Sisal 90; (5) Evaluates performance and size of the implementations.
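
The pattern being optimized can be shown in a few lines (in Python rather than Sisal, and not code from the paper): partial reductions are computed concurrently and then combined, which is valid exactly because the reduction operation is associative.

```python
# Concurrent partial reductions followed by a sequential combine;
# associativity of "+" is the mathematical property that licenses
# reordering the sum across workers.
from multiprocessing import Pool

def partial_sum(chunk):
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i::4] for i in range(4)]       # 4-way decomposition
    with Pool(4) as pool:
        partials = pool.map(partial_sum, chunks)  # concurrent partial reductions
    print(sum(partials) == sum(data))             # final combine: True
```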

Denton, S.M.

1995-06-01

271

Parallel hierarchical global illumination  

SciTech Connect

Solving the global illumination problem is equivalent to determining the intensity of every wavelength of light in all directions at every point in a given scene. The complexity of the problem has led researchers to use approximation methods for solving the problem on serial computers. Rather than using an approximation method, such as backward ray tracing or radiosity, the authors have chosen to solve the Rendering Equation by direct simulation of light transport from the light sources. This paper presents an algorithm that solves the Rendering Equation to any desired accuracy, and can be run in parallel on distributed memory or shared memory computer systems with excellent scaling properties. It appears superior in both speed and physical correctness to recent published methods involving bidirectional ray tracing or hybrid treatments of diffuse and specular surfaces. Like progressive radiosity methods, it dynamically refines the geometry decomposition where required, but does so without the excessive storage requirements for ray histories. The algorithm, called Photon, produces a scene which converges to the global illumination solution. This amounts to a huge task for a 1997-vintage serial computer, but using the power of a parallel supercomputer significantly reduces the time required to generate a solution. Currently, Photon can be run on most parallel environments from a shared memory multiprocessor to a parallel supercomputer, as well as on clusters of heterogeneous workstations.

Snell, Q.O.

1997-10-08

272

Parallel Circuits Lab  

NSDL National Science Digital Library

This in-class lab exercise will give students a familiarity with basic series and parallel circuits as well as measuring voltage, current and resistance. The worksheet provided leads students through the experiment step by step. Spaces for student measurements and conclusions are provided on the sheet. This document may be downloaded in PDF file format.

273

Parallel Traveling Salesman Problem  

NSDL National Science Digital Library

The traveling salesman problem is a classic optimization problem in which one seeks to minimize the path taken by a salesman in traveling between N cities, where the salesman stops at each city one and only one time, never retracing his/her route. This implementation is designed to run on UNIX systems with X-Windows, and includes parallelization using MPI.
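
A toy version of the same decomposition idea, using Python's multiprocessing in place of MPI and an exhaustive search over a handful of invented cities, might look like this: each worker fixes the first city visited and searches the remaining permutations.

```python
# Not the package's MPI code: a toy parallel decomposition of exhaustive
# TSP search. Worker i explores all tours whose first stop is city i.
from itertools import permutations
from multiprocessing import Pool
import math

CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3)]   # invented coordinates

def tour_length(order):
    pts = [CITIES[0]] + [CITIES[i] for i in order] + [CITIES[0]]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def best_with_first(first):
    rest = [i for i in range(1, len(CITIES)) if i != first]
    return min(((tour_length((first,) + p), (first,) + p)
                for p in permutations(rest)), key=lambda t: t[0])

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(best_with_first, range(1, len(CITIES)))
    print(min(results))   # (length, city order after the starting city)
```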

David Joiner

274

NAS Parallel Benchmarks Results  

NASA Technical Reports Server (NTRS)

The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a pencil-and-paper fashion, i.e., the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: Cray C90, Cray T90 and Fujitsu VPP500; (b) Highly Parallel Processors: Cray T3D, IBM SP2 and IBM SP-TN2 (Thin Nodes 2); (c) Symmetric Multiprocessing Processors: Convex Exemplar SPP1000, Cray J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL. We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and outline NAS's future plans for the NPB.

Saini, Subhash; Bailey, David H.; Lasinski, T. A. (Technical Monitor)

1995-01-01

275

Parallelism for imaging applications  

Microsoft Academic Search

Numerous image processing functions involve repetitive operations and therefore can benefit from parallel processing, where performance may be significantly improved as a function of the number of processors applied to the task. One such application that requires processing to be as near to real-time as possible is vision processing and, in particular, low level vision processing. A system developed by

M. P. Battaglia

1993-01-01

276

Ultimately Reliable Pyrotechnic Systems  

NASA Technical Reports Server (NTRS)

This paper presents the methods by which NASA has designed, built, tested, and certified pyrotechnic devices for high-reliability operation in extreme environments and illustrates the potential applications in the oil and gas industry. NASA's extremely successful application of pyrotechnics is built upon documented procedures and test methods that have been maintained and developed since the Apollo Program. Standards are managed and rigorously enforced for performance margins, redundancy, lot sampling, and personnel safety. The pyrotechnics utilized in spacecraft include such devices as small initiators and detonators with the power of a shotgun shell, detonating cord systems for explosive energy transfer across many feet, precision linear shaped charges for breaking structural membranes, and booster charges to actuate valves and pistons. NASA's pyrotechnics program is one of the more successful in the history of Human Spaceflight. No pyrotechnic device developed in accordance with NASA's Human Spaceflight standards has ever failed in flight use. NASA's pyrotechnic initiators work reliably in temperatures as low as -420 F. Each of the 135 Space Shuttle flights fired 102 of these initiators, some setting off multiple pyrotechnic devices, with never a failure. The recent landing on Mars of the Curiosity rover fired 174 of NASA's pyrotechnic initiators to complete the famous '7 minutes of terror.' Even after traveling through extreme radiation and thermal environments on the way to Mars, every one of them worked. These initiators have fired on the surface of Titan. NASA's design controls, procedures, and processes produce the most reliable pyrotechnics in the world. Application of pyrotechnics designed and procured in this manner could enable the energy industry's emergency equipment, such as shutoff valves and deep-sea blowout preventers, to be left in place for years in extreme environments and still be relied upon to function when needed, thus greatly enhancing safety and operational availability.

Scott, John H.; Hinkel, Todd

2015-01-01

277

Ferrite logic reliability study  

NASA Technical Reports Server (NTRS)

Development and use of digital circuits called all-magnetic logic are reported. In these circuits the magnetic elements and their windings comprise the active circuit devices in the logic portion of a system. The ferrite logic device belongs to the all-magnetic class of logic circuits. The FLO device is novel in that it makes use of a dual or bimaterial ferrite composition in one physical ceramic body. This bimaterial feature, coupled with its potential for relatively high speed operation, makes it attractive for high reliability applications. (Maximum speed of operation approximately 50 kHz.)

Baer, J. A.; Clark, C. B.

1973-01-01

278

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2011 CFR

...2011-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2011-10-01

279

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2010 CFR

...2010-10-01 true Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2010-10-01

280

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2012 CFR

...2012-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2012-10-01

281

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2014 CFR

...2014-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2014-10-01

282

48 CFR 1852.246-70 - Mission Critical Space System Personnel Reliability Program.  

Code of Federal Regulations, 2013 CFR

...2013-10-01 false Mission Critical Space System Personnel Reliability Program...Regulations System NATIONAL AERONAUTICS AND SPACE ADMINISTRATION CLAUSES AND FORMS SOLICITATION...Clauses 1852.246-70 Mission Critical Space System Personnel Reliability...

2013-10-01

283

Message based event specification for debugging nondeterministic parallel programs  

SciTech Connect

Portability and reliability of parallel programs can be severely impaired by their nondeterministic behavior. Therefore, an effective means to precisely and accurately specify unacceptable nondeterministic behavior is necessary for testing and debugging parallel programs. In this paper we describe a class of expressions, called Message Expressions, that can be used to specify nondeterministic behavior of message-passing parallel programs. Specification of program behavior with Message Expressions is easier than with pattern-based specification techniques, in that the former does not require knowledge of run-time event order, whereas the latter depends on the user's knowledge of the run-time event order for correct specification. We also discuss our adaptation of Message Expressions for use in a dynamic distributed testing and debugging tool, called mdb, for programs written for PVM (Parallel Virtual Machine).

Damohdaran-Kamal, S.K. [Los Alamos National Lab., NM (United States); Francioni, J.M. [University of Southwestern Louisiana, Lafayette, LA (United States)

1995-02-01

284

Automated grading of venous beading: an algorithm and parallel implementation  

NASA Astrophysics Data System (ADS)

A consistent, reliable method of quantifying diabetic retinopathy is required, both for patient assessment and eventually for use in screening tests for diabetes. To this end, an algorithm for determining the degree of venous beading in digitized ocular fundus images has been developed. A parallel implementation of the algorithm has also been investigated. The algorithm thresholds the fundus image to extract vein silhouettes. Morphological closing is used to fill any anomalous holes. Thinning is used to determine vein centerlines. Vein diameters are measured normal to the centerlines. A frequency analysis of vein diameter with distance along the centerline is then performed to permit estimation of venous beading. For the parallel implementation, the binary vein silhouette and the vein centerline are rotated so that vein diameter may be estimated in one direction only. The time complexity of the parallel algorithm is O(N). Algorithm performance is demonstrated with real fundus images. A simulation of the parallel algorithm is used with actual fundus images.
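
The early stages of such a pipeline (thresholding, morphological closing, and a diameter profile) can be sketched on a synthetic image as follows; this uses scipy rather than the authors' implementation, and the thinning and frequency-analysis stages are omitted.

```python
# Sketch of the first stages of the algorithm on a synthetic "vein".
import numpy as np
from scipy import ndimage

img = np.zeros((32, 64))
img[14:18, 4:60] = 1.0                    # a horizontal vein
img[15, 30] = 0.0                         # an anomalous hole

silhouette = img > 0.5                    # threshold to extract the silhouette
closed = ndimage.binary_closing(silhouette, structure=np.ones((3, 3)))
diameters = closed.sum(axis=0)            # width normal to the (horizontal) axis
print(diameters[4:60].min(), diameters[4:60].max())   # hole filled: 4 4
# Beading would show up as a periodic modulation of this diameter profile.
```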

Shen, Zhijiang; Gregson, Peter H.; Cheng, Heng-Da; Kozousek, V.

1991-11-01

285

Recent Developments in Reliability Analysis.  

ERIC Educational Resources Information Center

When one wants to set data reliability standards for a class of scientific inquiries or when one needs to compare and select among many different kinds of data with reliabilities that are crucial to a particular research undertaking, then one needs a single reliability coefficient that is adaptable to all or most situations. Work toward this goal…

Krippendorff, Klaus

286

FIELD RELIABILITY OF ELECTRONIC SYSTEMS  

E-print Network

An analytical study (report Risø-M-2418) that investigates, through several examples from the field, including isotope irradiation control electronics, the reliability of electronic units in a broader sense.

287

Testing for PV Reliability (Presentation)  

SciTech Connect

The DOE SUNSHOT workshop is seeking input from the community about PV reliability and how the DOE might address gaps in understanding. This presentation describes the types of testing that are needed for PV reliability and introduces a discussion to identify gaps in our understanding of PV reliability testing.

Kurtz, S.; Bansal, S.

2014-09-01

288

Further discussion on reliability: the art of reliability estimation.  

PubMed

Sijtsma and van der Ark (2015) focused in their lead article on three frameworks for reliability estimation in nursing research: classical test theory (CTT), factor analysis (FA), and generalizability theory. We extend their presentation with particular attention to CTT and FA methods. We first consider the potential of yielding an overly negative or an overly positive assessment of reliability based on coefficient alpha. Next, we discuss other CTT methods for estimating reliability and how the choice of methods affects the interpretation of the reliability coefficient. Finally, we describe FA methods, which not only permit an understanding of a measure's underlying structure but also yield a variety of reliability coefficients with different interpretations. On a more general note, we discourage reporting reliability as a two-choice outcome (unsatisfactory or satisfactory); rather, we recommend that nursing researchers make a conceptual and empirical argument about when a measure might be more or less reliable, depending on its use. PMID:25738627
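
For reference, the most discussed CTT coefficient can be computed directly from an item-score matrix; the formula is the standard one and the scores below are invented, not data from the article.

```python
# Coefficient alpha from an item-score matrix (rows: respondents,
# columns: items): alpha = k/(k-1) * (1 - sum of item variances /
# variance of the total score). Scores are made up for illustration.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

scores = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(scores), 3))
```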

Yang, Yanyun; Green, Samuel B

2015-01-01

289

Information hiding in parallel programs  

SciTech Connect

A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

Foster, I.

1992-01-30

290

Reliability Impacts in Life Support Architecture and Technology Selection  

NASA Technical Reports Server (NTRS)

Equivalent System Mass (ESM) and reliability estimates were performed for different life support architectures based primarily on International Space Station (ISS) technologies. The analysis was applied to a hypothetical 1-year deep-space mission. High-level fault trees were initially developed relating loss of life support functionality to the Loss of Crew (LOC) top event. System reliability was then expressed as the complement (nonoccurrence) of this event and was increased through the addition of redundancy and spares, which added to the ESM. The reliability analysis assumed constant failure rates and used current projected values of the Mean Time Between Failures (MTBF) from an ISS database where available. Results were obtained showing the dependence of ESM on system reliability for each architecture. Although the analysis employed numerous simplifications and many of the input parameters are considered to have high uncertainty, the results strongly suggest that achieving the necessary reliabilities for deep-space missions will add substantially to the life support system mass. As a point of reference, the reliability for a single-string architecture using the most regenerative combination of ISS technologies without unscheduled replacement spares was estimated to be less than 1%. The results also demonstrate how adding technologies in a serial manner to increase system closure forces the reliability of other life support technologies to increase in order to meet the system reliability requirement. This increase in reliability results in increased mass for multiple technologies through the need for additional spares. Alternative parallel architecture approaches and approaches with the potential to do more with less are discussed. The tall poles in life support ESM are also reexamined in light of estimated reliability impacts.
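
The spares logic described above can be sketched with a constant-failure-rate model, under which the number of failures over the mission is Poisson and a function survives if failures do not exceed its stock of spares; the MTBF values below are placeholders, not figures from the ISS database.

```python
# Constant-failure-rate spares model: failures over the mission are
# Poisson with mean (mission time / MTBF); a unit survives if failures
# do not exceed its spares. MTBFs here are placeholders.
import math

def prob_enough_spares(mtbf_hours, mission_hours, spares):
    lam = mission_hours / mtbf_hours             # expected number of failures
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(spares + 1))       # P(failures <= spares)

mission_hours = 365 * 24.0                       # hypothetical 1-year mission
for spares in range(4):
    # Serial system of three assemblies, each with its own stock of spares.
    r = 1.0
    for mtbf in (8000.0, 12000.0, 20000.0):      # placeholder MTBFs, hours
        r *= prob_enough_spares(mtbf, mission_hours, spares)
    print(spares, round(r, 4))                   # reliability rises with spares (mass)
```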

Lange, Kevin E.; Anderson, Molly S.

2011-01-01

291

Darwinian Evolution in Parallel Universes: A Parallel Genetic Algorithm for  

E-print Network

Variable selection for an outcome of interest commonly arises in various industrial engineering applications. This paper proposes a genetic algorithm (GA) modification: the idea is to run a number of GAs in parallel without allowing each GA to fully converge

Zhu, Mu

292

ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator  

Microsoft Academic Search

Sequential circuit fault simulators use the multiple bits in a computer data word to accelerate simulation. We introduce, and implement, a new sequential circuit fault simulator, a parallel pattern parallel fault simulator, ZAMBEZI, which simultaneously simulates multiple faults with multiple vectors in one data word. ZAMBEZI is developed by enhancing the control flow of existing parallel pattern algorithms. For a

Minesh B. Amin; Bapiraju Vinnakota

1996-01-01

293

Parallel Vegetation Stripe Formation Through Hydrologic Interactions  

NASA Astrophysics Data System (ADS)

It has long been a challenge to theoretical ecologists to describe vegetation pattern formations such as the "tiger bush" stripes and "leopard bush" spots in Niger, and the regular maze patterns often observed in bogs in North America and Eurasia. To date, most simulation models focus on reproducing the spot and labyrinthine patterns, and on the vegetation bands which form perpendicular to surface and groundwater flow directions. Various hypotheses have been invoked to explain the formation of vegetation patterns: selective grazing by herbivores, fire, and anisotropic environmental conditions such as slope. Recently, short-distance facilitation and long-distance competition between vegetation (a.k.a. scale-dependent feedback) has been proposed as a generic mechanism for vegetation pattern formation. In this paper, we test the generality of this mechanism by employing an existing, spatially explicit, advection-reaction-diffusion type model to describe the formation of regularly spaced vegetation bands, including those that are parallel to the flow direction. Such vegetation patterns are, for example, characteristic of the ridge and slough habitat in the Florida Everglades and are thought to have formed parallel to the prevailing surface water flow direction. To our knowledge, this is the first time that a simple model encompassing a nutrient accumulation mechanism along with biomass development and flow is used to demonstrate the formation of parallel stripes. We also explore the interactive effects of plant transpiration, slope and anisotropic hydraulic conductivity on the resulting vegetation pattern. Our results highlight the ability of the short-distance facilitation and long-distance competition mechanism to explain the formation of the different vegetation patterns beyond semi-arid regions. Therefore, we propose that the parallel stripes, like the other periodic patterns observed in both isotropic and anisotropic environments, are self-organized and form as a result of scale-dependent feedback. Results from this study improve upon the current understanding of the formation of parallel stripes and provide a more general theoretical framework for future empirical and modeling efforts.

Cheng, Yiwei; Stieglitz, Marc; Turk, Greg; Engel, Victor

2010-05-01

294

Parallel Subconvolution Filtering Architectures  

NASA Technical Reports Server (NTRS)

These architectures are based on methods of vector processing and the discrete-Fourier-transform/inverse-discrete-Fourier-transform (DFT-IDFT) overlap-and-save method, combined with time-block separation of digital filters into frequency-domain subfilters implemented by use of sub-convolutions. The parallel-processing method implemented in these architectures enables the use of relatively small DFT-IDFT pairs, while filter tap lengths are theoretically unlimited. The size of a DFT-IDFT pair is determined by the desired reduction in processing rate, rather than by the order of the filter that one seeks to implement. The emphasis in this report is on those aspects of the underlying theory and design rules that promote computational efficiency, parallel processing at reduced data rates, and simplification of the designs of very-large-scale integrated (VLSI) circuits needed to implement high-order filters and correlators.
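
The DFT-IDFT overlap-and-save building block works as in the following serial sketch (illustrative only, not the VLSI design); each block's circularly aliased leading samples are discarded.

```python
# Overlap-save FFT filtering: process the input in overlapping blocks,
# multiply in the frequency domain, and keep only the samples where the
# circular convolution equals the linear convolution.
import numpy as np

def overlap_save(x, h, n_fft=64):
    m = len(h)
    hop = n_fft - (m - 1)                       # new samples consumed per block
    H = np.fft.fft(h, n_fft)
    xp = np.concatenate([np.zeros(m - 1), x])   # prime the history with zeros
    y = []
    for start in range(0, len(x), hop):
        block = xp[start:start + n_fft]
        if len(block) < n_fft:
            block = np.pad(block, (0, n_fft - len(block)))
        yb = np.real(np.fft.ifft(np.fft.fft(block) * H))
        y.append(yb[m - 1:])                    # discard the aliased samples
    return np.concatenate(y)[:len(x)]

x = np.random.randn(1000)
h = np.ones(8) / 8.0                            # simple moving-average filter
print(np.allclose(overlap_save(x, h), np.convolve(x, h)[:len(x)]))  # True
```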

Gray, Andrew A.

2003-01-01

295

Parallel multilevel preconditioners  

SciTech Connect

In this paper, we shall report on some techniques for the development of preconditioners for the discrete systems which arise in the approximation of solutions to elliptic boundary value problems. Here we shall only state the resulting theorems. It has been demonstrated that preconditioned iteration techniques often lead to the most computationally effective algorithms for the solution of the large algebraic systems corresponding to boundary value problems in two and three dimensional Euclidean space. The use of preconditioned iteration will become even more important on computers with parallel architecture. This paper discusses an approach for developing completely parallel multilevel preconditioners. In order to illustrate the resulting algorithms, we shall describe the simplest application of the technique to a model elliptic problem.

Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

1989-01-01

296

Parallel Consensual Neural Networks  

NASA Technical Reports Server (NTRS)

A new neural network architecture is proposed and applied in classification of remote sensing/geographic data from multiple sources. The new architecture is called the parallel consensual neural network and its relation to hierarchical and ensemble neural networks is discussed. The parallel consensual neural network architecture is based on statistical consensus theory. The input data are transformed several times and the different transformed data are applied as if they were independent inputs and are classified using stage neural networks. Finally, the outputs from the stage networks are then weighted and combined to make a decision. Experimental results based on remote sensing data and geographic data are given. The performance of the consensual neural network architecture is compared to that of a two-layer (one hidden layer) conjugate-gradient backpropagation neural network. The results with the proposed neural network architecture compare favorably in terms of classification accuracy to the backpropagation method.

Benediktsson, J. A.; Sveinsson, J. R.; Ersoy, O. K.; Swain, P. H.

1993-01-01

297

Resistor Combinations for Parallel Circuits.  

ERIC Educational Resources Information Center

To help simplify both teaching and learning of parallel circuits, a high school electricity/electronics teacher presents and illustrates the use of tables of values for parallel resistive circuits in which total resistances are whole numbers. (MF)

McTernan, James P.

1978-01-01

298

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

As the I/O needs of parallel scientific applications increase, file systems for multiprocessors are being designed to provide applications with parallel access to multiple disks. Many parallel file systems present applications with a conventional Unix-like interface that allows the application to access multiple disks transparently. The interface conceals the parallelism within the file system, which increases the ease of programmability, but makes it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. Furthermore, most current parallel file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic parallel workloads. We discuss Galley's file structure and application interface, as well as an application that has been implemented using that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

299

Laws for Communicating Parallel Processes  

E-print Network

This paper presents some laws that must be satisfied by computations involving communicating parallel processes. The laws are stated in the context of the actor theory, a model for distributed parallel computation, and ...

Baker, Henry

1977-05-10

300

Parallelization: Infectious Disease  

NSDL National Science Digital Library

Epidemiology is the study of infectious disease. Infectious diseases are said to be "contagious" among people if they are transmittable from one person to another. Epidemiologists can use models to assist them in predicting the behavior of infectious diseases. This module will develop a simple agent-based infectious disease model, develop a parallel algorithm based on the model, provide a coded implementation for the algorithm, and explore the scaling of the coded implementation on high performance cluster resources.
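
A minimal serial version of such an agent-based model, with assumed parameters, is sketched below; it is only the kernel that a parallel version would partition across processes.

```python
# Toy agent-based infection model (assumed parameters): each step, every
# infectious agent contacts one random agent and may infect it, and
# recovers after a fixed infectious period.
import random

def simulate(n=1000, seed_cases=5, p_transmit=0.3, days_infectious=5, steps=60):
    # state: -1 susceptible, >=0 days of infectiousness remaining, -2 recovered
    state = [-1] * n
    for i in range(seed_cases):
        state[i] = days_infectious
    history = []
    for _ in range(steps):
        infectious = [i for i, s in enumerate(state) if s >= 0]
        for i in infectious:
            j = random.randrange(n)                 # random contact
            if state[j] == -1 and random.random() < p_transmit:
                state[j] = days_infectious
            state[i] -= 1
            if state[i] < 0:
                state[i] = -2                       # recovered, immune
        history.append(sum(1 for s in state if s >= 0))
    return history

random.seed(1)
print(simulate()[:10])    # infectious counts over the first ten steps
```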

Aaron Weeden

301

Xyce parallel electronic simulator.  

SciTech Connect

This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users' Guide. The focus of this document is to list, as exhaustively as possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

2010-05-01

302

Parallel sphere rendering  

SciTech Connect

Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the M.

Krogh, M.; Painter, J.; Hansen, C.

1996-10-01

303

Consider insulation reliability  

SciTech Connect

This paper reports that when calcium silicate and two brands of mineral wool were compared in a series of laboratory tests, calcium silicate was more reliable. And in-service experience with mineral wool at a Canadian heavy crude refinery provided examples of many of the lab's findings. Lab tests, conducted under controlled conditions following industry accepted practices, showed calcium silicate insulation was stronger, tougher and more durable than the mineral wools to which it was compared. For instance, the calcium silicate insulation exhibited only some minor surface cracking when heated to 1,200°F (649°C), while the mineral wools suffered binder burnout resulting in sagging, delamination and a general loss of dimensional stability.

Gamboa (Manville Mechanical Insulations, a Div. of Schuller International Inc., Denver, CO (United States))

1993-01-01

304

The Vesta parallel file system  

Microsoft Academic Search

The Vesta parallel file system is designed to provide parallel file access to application programs running on multicomputers with parallel I\\/O subsystems. Vesta uses a new abstraction of files: a file is not a sequence of bytes, but rather it can be partitioned into multiple disjoint sequences that are accessed in parallel. The partitioning—which can also be changed dynamically—reduces the

Peter F. Corbett; Dror G. Feitelson

1996-01-01

305

Reliable communication in the presence of failures  

Microsoft Academic Search

The design and correctness of a communication facility for a distributed computer system are reported on. The facility provides support for fault-tolerant process groups in the form of a family of reliable multicast protocols that can be used in both local- and wide-area networks. These protocols attain high levels of concurrency, while respecting application-specific delivery ordering constraints, and have varying

Kenneth P. Birman; Thomas A. Joseph

1987-01-01

306

Parallelism in gene transcription among sympatric lake whitefish ( Coregonus clupeaformis Mitchill) ecotypes  

Microsoft Academic Search

We tested the hypothesis that phenotypic parallelism between dwarf and normal whitefish ecotypes (Coregonus clupeaformis, Salmonidae) is accompanied by parallelism in gene transcription. The most striking phenotypic differences between these forms involve energetic metabolism and swimming activity. Therefore, we predicted that genes showing parallel expression should mainly belong to functional groups associated with these phenotypes. Transcriptome profiles were obtained

N. DEROME; P. DUCHESNE; L. BERNATCHEZ

2006-01-01

307

On the analysis of a new spatial three-degrees-of-freedom parallel manipulator  

Microsoft Academic Search

In this paper, a new spatial three-degrees-of-freedom (two degrees of translational freedom and one degree of rotational freedom) parallel manipulator is proposed. The parallel manipulator consists of a base plate, a movable platform, and three connecting legs. The inverse and forward kinematics problems are described in closed form, and the velocity equation of the new parallel manipulator is given. Three

Xin-Jun Liu; Jinsong Wang; Feng Gao; Li-Ping Wang

2001-01-01

308

Roo: A parallel theorem prover  

SciTech Connect

We describe a parallel theorem prover based on the Argonne theorem-proving system OTTER. The parallel system, called Roo, runs on shared-memory multiprocessors such as the Sequent Symmetry. We explain the parallel algorithm used and give performance results that demonstrate near-linear speedups on large problems.

Lusk, E.L.; McCune, W.W.; Slaney, J.K.

1991-11-01

309

A taxonomy of parallel sorting  

Microsoft Academic Search

We propose a taxonomy of parallel sorting that encompasses a broad range of array- and file-sorting algorithms. We analyze how research on parallel sorting has evolved, from the earliest sorting networks to shared memory algorithms and VLSI sorters. In the context of sorting networks, we describe two fundamental parallel merging schemes: the odd-even and the bitonic merge. We discuss sorting

Dina Bitton; David J. DeWitt; David K. Hsaio; Jaishankar Menon

1984-01-01

310

Synchronous Parallel Kinetic Monte Carlo  

SciTech Connect

A novel parallel kinetic Monte Carlo (kMC) algorithm formulated on the basis of perfect time synchronicity is presented. The algorithm provides an exact generalization of any standard serial kMC model and is trivially implemented in parallel architectures. We demonstrate the mathematical validity and parallel performance of the method by solving several well-understood problems in diffusion.
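
For context, the serial residence-time kMC kernel that the synchronous parallel algorithm generalizes is sketched below with arbitrary rates; this is the standard algorithm, not the paper's parallel scheme.

```python
# Standard serial kinetic Monte Carlo (residence-time) step: choose an
# event with probability proportional to its rate, then advance time by
# an exponential waiting time with the total rate.
import math
import random

def kmc_step(rates, t):
    total = sum(rates)
    r = random.random() * total
    acc, chosen = 0.0, 0
    for i, rate in enumerate(rates):          # pick event i with prob rate/total
        acc += rate
        if r < acc:
            chosen = i
            break
    dt = -math.log(random.random()) / total   # exponential waiting time
    return chosen, t + dt

random.seed(0)
t, rates = 0.0, [1.0, 0.5, 0.1]               # arbitrary event rates
for _ in range(3):
    event, t = kmc_step(rates, t)
    print(event, round(t, 3))
```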

Martínez, E; Marian, J; Kalos, M H

2006-12-14

311

Parallel Job Scheduling and Workloads  

E-print Network

Parallel job scheduling: each job is a rectangle in processors×time space; given many jobs, we must schedule them to run on available processors. This is like packing

Segall, Adrian

312

Factorial Validity and Reliability of the Malaysian Simplified Chinese Version of Multidimensional Scale of Perceived Social Support (MSPSS-SCV) Among a Group of University Students.  

PubMed

This study was aimed at validating the simplified Chinese version of the Multidimensional Scale of Perceived Social Support (MSPSS-SCV) among a group of medical and dental students in University Malaya. Two hundred and two students who took part in this study were given the MSPSS-SCV, the Medical Outcomes Study Social Support Survey, the Malay version of the Beck Depression Inventory, the Malay version of the General Health Questionnaire, and the English version of the MSPSS. After 1 week, these students were again required to complete the MSPSS-SCV but with the item sequences shuffled. This scale displayed excellent internal consistency (Cronbach's α = .924), high test-retest reliability (.71), parallel form reliability (.92; Spearman's ρ, P < .01), and validity. In conclusion, the MSPSS-SCV demonstrated sound psychometric properties in measuring social support among a group of medical and dental students. It could therefore be used as a simple screening tool among young educated Malaysian adolescents. PMID:23449622

Guan, Ng Chong; Seng, Loh Huai; Hway Ann, Anne Yee; Hui, Koh Ong

2015-03-01

313

Spectrophotometric Assay of Mebendazole in Dosage Forms Using Sodium Hypochlorite  

NASA Astrophysics Data System (ADS)

A simple, selective and sensitive spectrophotometric method is described for the determination of mebendazole (MBD) in bulk drug and dosage forms. The method is based on the reaction of MBD with hypochlorite in the presence of sodium bicarbonate to form the chloro derivative of MBD, followed by the destruction of the excess hypochlorite by nitrite ion. The color was formed by the oxidation of iodide with the chloro derivative of MBD to iodine in the presence of starch, forming the blue colored product, which was measured at 570 nm. The optimum conditions that affect the reaction were ascertained and, under these conditions, a linear relationship was obtained in the concentration range of 1.25-25.0 µg/ml MBD. The calculated molar absorptivity and Sandell sensitivity values are 9.56·10³ l·mol⁻¹·cm⁻¹ and 0.031 µg/cm², respectively. The limits of detection and quantification are 0.11 and 0.33 µg/ml, respectively. The proposed method was applied successfully to the determination of MBD in bulk drug and dosage forms, and no interference was observed from excipients present in the dosage forms. The reliability of the proposed method was further checked by parallel determination by the reference method and also by recovery studies.

Swamy, N.; Prashanth, K. N.; Basavaiah, K.

2014-07-01

314

Reliability of wireless sensor networks.  

PubMed

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553
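
One simplified reading of the trade-off described above (a sketch, not the paper's actual formulation) treats a route's reliability as the product of per-node terms, with multipath routing as redundancy bought at extra energy cost.

```python
# Simplified multipath trade-off: a path succeeds only if every node on
# it does; a multipath transmission succeeds if at least one copy does.
def path_reliability(node_reliabilities):
    r = 1.0
    for ri in node_reliabilities:
        r *= ri
    return r

def multipath_reliability(paths):
    fail = 1.0
    for p in paths:
        fail *= 1.0 - path_reliability(p)
    return 1.0 - fail

primary = [0.95, 0.90, 0.92]     # per-hop node reliabilities (illustrative)
backup = [0.90, 0.90, 0.90]
print(round(path_reliability(primary), 4))                 # single path
print(round(multipath_reliability([primary, backup]), 4))  # reliability up,
                                                           # energy roughly doubled
```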

Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

2014-01-01

315

Reliability of Wireless Sensor Networks  

PubMed Central

Wireless Sensor Networks (WSNs) consist of hundreds or thousands of sensor nodes with limited processing, storage, and battery capabilities. There are several strategies to reduce the power consumption of WSN nodes (by increasing the network lifetime) and increase the reliability of the network (by improving the WSN Quality of Service). However, there is an inherent conflict between power consumption and reliability: an increase in reliability usually leads to an increase in power consumption. For example, routing algorithms can send the same packet through different paths (multipath strategy), which is important for reliability but significantly increases the WSN power consumption. In this context, this paper proposes a model for evaluating the reliability of WSNs considering the battery level as a key factor. Moreover, this model is based on routing algorithms used by WSNs. In order to evaluate the proposed models, three scenarios were considered to show the impact of the power consumption on the reliability of WSNs. PMID:25157553

Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

2014-01-01

316

Computer Assisted Parallel Program Generation  

E-print Network

Parallel computation is widely employed in scientific research, engineering activities and product development. Parallel program writing itself is not always a simple task, depending on the problem being solved. Large-scale scientific computing, huge data analyses and precise visualizations, for example, would require parallel computation, and parallel computing needs parallelization techniques. In this Chapter a parallel program generation support is discussed, and a computer-assisted parallel program generation system, P-NCAS, is introduced. Computer-assisted problem solving is one of the key methods to promote innovations in science and engineering, and contributes to enriching our society and our lives, moving toward a programming-free environment in computing science. Problem solving environment (PSE) research activities started in the 1970's to enhance programming power. P-NCAS is one of the PSEs; the PSE concept provides an integrated human-friendly computational software and hardware system to solve a target ...

Kawata, Shigeo

2015-01-01

317

Parallelized nested sampling  

NASA Astrophysics Data System (ADS)

One of the important advantages of nested sampling as an MCMC technique is its ability to draw representative samples from multimodal distributions and distributions with other degeneracies. This coverage is accomplished by maintaining a number of so-called live samples within a likelihood constraint. In usual practice, at each step, only the sample with the least likelihood is discarded from this set of live samples and replaced. In [1], Skilling shows that for a given number of live samples, discarding only one sample yields the highest precision in estimation of the log-evidence. However, if we increase the number of live samples, more samples can be discarded at once while still maintaining the same precision. For computer code running only serially, this modification would considerably increase the wall clock time necessary to reach convergence. However, if we use a computer with parallel processing capabilities, and we write our code to take advantage of this parallelism to replace multiple samples concurrently, the performance penalty can be eliminated entirely and possibly reversed. In this case, we must use the more general equation in [1] for computing the expectation of the shrinkage distribution: $E[-\log t] = (N_r-r+1)^{-1} + (N_r-r+2)^{-1} + \cdots + N_r^{-1}$, for shrinkage $t$ with $N_r$ live samples and $r$ samples discarded at each iteration. The equation for the variance, $\mathrm{Var}(-\log t) = (N_r-r+1)^{-2} + (N_r-r+2)^{-2} + \cdots + N_r^{-2}$, is used to find the appropriate number of live samples $N_r$ to use with $r > 1$ to match the variance achieved with $N_1$ live samples and $r = 1$. In this paper, we show that by replacing multiple discarded samples in parallel, we are able to achieve a more thorough sampling of the constrained prior distribution, reduce runtime, and increase precision.
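
The two closed-form expressions above make it straightforward to check how many live samples are needed to keep the same per-step precision when discarding r samples at a time. A minimal sketch (the helper names and the example values N1 = 100, r = 8 are illustrative, not from the paper):

```python
def shrinkage_var(n_live: int, r: int) -> float:
    """Var(-log t) = (N_r - r + 1)^{-2} + ... + N_r^{-2}, as in the abstract."""
    return sum(1.0 / k**2 for k in range(n_live - r + 1, n_live + 1))

def matching_live_samples(n1: int, r: int) -> int:
    """Smallest N_r whose per-step shrinkage variance (discarding r samples
    per iteration) does not exceed that of n1 live samples with r = 1."""
    target = shrinkage_var(n1, 1)  # equals 1 / n1**2
    n = r                          # need at least r live samples
    while shrinkage_var(n, r) > target:
        n += 1
    return n

# With 100 live samples and r = 1, how many are needed for r = 8?
print(matching_live_samples(100, 8))  # 287: roughly sqrt(r) * N1
```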

Henderson, R. Wesley; Goggans, Paul M.

2014-12-01

318

DPL: a data parallel language for the expression and execution of general parallel algorithm  

Microsoft Academic Search

The need for a powerful, easy to use, parallel language continues despite very significant advances in the area of parallel processing. Many parallel languages are simply old sequential languages with parallel constructs added. This research describes the Data Parallel Language (DPL), a parallel language built from its foundations on parallel concepts. DPL bases much of its expression on data parallelism found in

Robert Gordon Willhoft

1995-01-01

319

DPL: a data parallel language for the expression and execution of general parallel algorithm  

Microsoft Academic Search

THE NEED FOR a powerful, easy to use, parallel language continues despite very significant advances in the area of parallel processing. Many parallel languages are simply old sequential languages with parallel constructs added. This research describes the Data Parallel Language (DPL), a parallel language built from its foundations on parallel concepts. DPL bases much of its expression on data parallelism

Robert Gordon Willhoft

1995-01-01

320

Fatigue reliability of deck structures subjected to correlated crack growth  

NASA Astrophysics Data System (ADS)

The objective of this work is to analyse the fatigue reliability of deck structures subjected to correlated crack growth. The stress intensity factors of the correlated cracks are obtained by finite element analysis, based on which the geometry correction functions are derived. Monte Carlo simulations are applied to predict the statistical descriptors of correlated cracks based on the Paris-Erdogan equation. A probabilistic model of crack growth as a function of time is used to analyse the fatigue reliability of deck structures accounting for the crack propagation correlation. A deck structure is modelled as a series system of stiffened panels, where a stiffened panel is regarded as a parallel system composed of plates and longitudinal stiffeners. It has been proven that the method developed here can be conveniently applied to perform the fatigue reliability assessment of structures subjected to correlated crack growth.
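
As a toy illustration of the Monte Carlo step described above, the sketch below samples scatter in the Paris-Erdogan constant C and integrates crack growth to a critical size; all parameter values, the lognormal scatter model, and the constant geometry function Y = 1 are illustrative assumptions, not the paper's data:

```python
import math
import random

def cycles_to_failure(a0, a_crit, C, m, dsigma, Y=1.0, da=1e-5):
    """Integrate the Paris-Erdogan law da/dN = C * (dK)^m, with
    dK = Y * dsigma * sqrt(pi * a), by simple forward stepping in a."""
    a, n = a0, 0.0
    while a < a_crit:
        dK = Y * dsigma * math.sqrt(math.pi * a)
        n += da / (C * dK**m)   # dN = da / (da/dN)
        a += da
    return n

random.seed(1)
# Monte Carlo over the material constant C (lognormal scatter, illustrative).
samples = [
    cycles_to_failure(a0=1e-3, a_crit=2e-2,
                      C=random.lognormvariate(math.log(1e-11), 0.3),
                      m=3.0, dsigma=80.0)
    for _ in range(1000)
]
samples.sort()
print("median cycles:", samples[len(samples) // 2])
print("5th percentile:", samples[len(samples) // 20])
```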

Feng, G. Q.; Garbatov, Y.; Guedes Soares, C.

2013-12-01

321

The Verification-based Analysis of Reliable Multicast Protocol  

NASA Technical Reports Server (NTRS)

Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP Multicasting. In this paper, we develop formal models for RMP using existing automatic verification systems, and perform verification-based analysis on the formal RMP specifications. We also use the formal models of the RMP specifications to generate a test suite for conformance testing of the RMP implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress between the implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

Wu, Yunqing

1996-01-01

322

Parallel Eclipse Project Checkout  

NASA Technical Reports Server (NTRS)

Parallel Eclipse Project Checkout (PEPC) is a program written to leverage parallelism and to automate the checkout process of plug-ins created in Eclipse RCP (Rich Client Platform). Eclipse plug-ins can be aggregated in a feature project. This innovation digests a feature description (xml file) and automatically checks out all of the plug-ins listed in the feature. This resolves the issue of manually checking out each plug-in required to work on the project. To minimize the amount of time necessary to checkout the plug-ins, this program makes the plug-in checkouts parallel. After parsing the feature, a request to checkout for each plug-in in the feature has been inserted. These requests are handled by a thread pool with a configurable number of threads. By checking out the plug-ins in parallel, the checkout process is streamlined before getting started on the project. For instance, projects that took 30 minutes to checkout now take less than 5 minutes. The effect is especially clear on a Mac, which has a network monitor displaying the bandwidth use. When running the client from a developer's home, the checkout process now saturates the bandwidth in order to get all the plug-ins checked out as fast as possible. For comparison, a checkout process that ranged from 8-200 Kbps from a developer's home is now able to saturate a pipe of 1.3 Mbps, resulting in significantly faster checkouts. Eclipse IDE (integrated development environment) tries to build a project as soon as it is downloaded. As part of another optimization, this innovation programmatically tells Eclipse to stop building while checkouts are happening, which dramatically reduces lock contention and enables plug-ins to continue downloading until all of them finish. Furthermore, the software re-enables automatic building, and forces Eclipse to do a clean build once it finishes checking out all of the plug-ins. This software is fully generic and does not contain any NASA-specific code. It can be applied to any Eclipse-based repository with a similar structure. It also can apply build parameters and preferences automatically at the end of the checkout.
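
PEPC itself is Eclipse/Java code that is not reproduced in this record; the sketch below shows the same pattern (parse the feature description, then hand each plug-in checkout to a fixed-size thread pool) in Python, with the feature-file layout, checkout command, and repository URL all assumed for illustration:

```python
import subprocess
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

def plugin_ids(feature_xml: str):
    """Extract plug-in ids from a feature description (layout assumed to
    resemble Eclipse's feature.xml: <feature><plugin id="..."/>...)."""
    root = ET.parse(feature_xml).getroot()
    return [p.get("id") for p in root.iter("plugin")]

def checkout(plugin_id: str, repo_base: str):
    # Hypothetical checkout command; substitute the real VCS invocation.
    subprocess.run(["svn", "checkout", f"{repo_base}/{plugin_id}"], check=True)

def parallel_checkout(feature_xml, repo_base, threads=8):
    # A thread pool with a configurable number of threads handles the
    # checkout requests, saturating the network instead of the CPU.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        futures = [pool.submit(checkout, pid, repo_base)
                   for pid in plugin_ids(feature_xml)]
        for f in futures:
            f.result()  # re-raise any checkout failure

parallel_checkout("feature.xml", "https://example.org/repo")
```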

Crockett, Thomas M.; Joswig, Joseph C.; Shams, Khawaja S.; Powell, Mark W.; Bachmann, Andrew G.

2011-01-01

323

Highly parallel computation  

NASA Technical Reports Server (NTRS)

Highly parallel computing architectures are the only means to achieve the computation rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited to particular classes of problems. The architectures designated as multiple instruction multiple datastream (MIMD) and single instruction multiple datastream (SIMD) have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

Denning, Peter J.; Tichy, Walter F.

1990-01-01

324

Parallel sphere rendering  

SciTech Connect

Sphere rendering is an important method for visualizing molecular dynamics data. This paper presents a parallel divide-and-conquer algorithm that is almost 90 times faster than current graphics workstations. To render extremely large data sets and large images, the algorithm uses the MIMD features of the supercomputers to divide up the data, render independent partial images, and then finally composite the multiple partial images using an optimal method. The algorithm and performance results are presented for the CM-5 and the T3D.

Krogh, M.; Hansen, C.; Painter, J. [Los Alamos National Lab., NM (United States)]; de Verdiere, G.C. [CEA Centre d'Etudes de Limeil, 94 - Villeneuve-Saint-Georges (France)]

1995-05-01

325

A fourth generation reliability predictor  

NASA Technical Reports Server (NTRS)

A reliability/availability predictor computer program has been developed and is currently being beta-tested by over 30 US companies. The computer program is called the Hybrid Automated Reliability Predictor (HARP). HARP was developed to fill an important gap in reliability assessment capabilities. This gap was manifested through the use of its third-generation cousin, the Computer-Aided Reliability Estimation (CARE III) program, over a six-year development period and an additional three-year period during which CARE III has been in the public domain. The accumulated experience of the over 30 establishments now using CARE III was used in the development of the HARP program.

Bavuso, Salvatore J.; Martensen, Anna L.

1988-01-01

326

US electric power system reliability  

NASA Astrophysics Data System (ADS)

Electric energy supply, transmission and distribution systems are investigated in order to determine priorities for legislation. The status and the outlook for electric power reliability are discussed.

327

Human reliability assessment: tools for law enforcement  

NASA Astrophysics Data System (ADS)

This paper suggests ways in which human reliability analysis (HRA) can assist the United States Justice System, and more specifically law enforcement, in enhancing the reliability of the process from evidence gathering through adjudication. HRA is an analytic process identifying, describing, quantifying, and interpreting the state of human performance, and developing and recommending enhancements based on the results of individual HRAs. It also draws on lessons learned from compilations of several HRAs. Given the high legal standards the Justice System is bound to, human errors that might appear to be trivial in other venues can make the difference between a successful and unsuccessful prosecution. HRA has made a major contribution to the efficiency, favorable cost-benefit ratio, and overall success of many enterprises where humans interface with sophisticated technologies, such as the military, ground transportation, chemical and oil production, nuclear power generation, commercial aviation and space flight. Each of these enterprises presents similar challenges to the humans responsible for executing action and action sequences, especially where problem solving and decision making are concerned. Nowhere are humans confronted, to a greater degree, with problem solving and decision making than are the diverse individuals and teams responsible for arrest and adjudication of criminal proceedings. This paper concludes that because of the parallels between the aforementioned technologies and the adjudication process, especially crime scene evidence gathering, there is reason to believe that the HRA technology, developed and enhanced in other applications, can be transferred to the Justice System with minimal cost and with significant payoff.

Ryan, Thomas G.; Overlin, Trudy K.

1997-01-01

328

Interrelation Between Safety Factors and Reliability  

NASA Technical Reports Server (NTRS)

An evaluation was performed to establish relationships between safety factors and reliability. Results obtained show that the use of the safety factor is not contradictory to the employment of the probabilistic methods. In many cases the safety factors can be directly expressed by the required reliability levels. However, there is a major difference that must be emphasized: whereas the safety factors are allocated in an ad hoc manner, the probabilistic approach offers a unified mathematical framework. The establishment of the interrelation between the concepts opens an avenue to specify safety factors based on reliability. In cases where there are several forms of failure, the allocation of safety factors should be based on having the same reliability associated with each failure mode. This immediately suggests that by the probabilistic methods the existing over-design or under-design can be eliminated. The report includes three parts: Part 1-Random Actual Stress and Deterministic Yield Stress; Part 2-Deterministic Actual Stress and Random Yield Stress; Part 3-Both Actual Stress and Yield Stress Are Random.
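
For the report's Part 1 case (random actual stress, deterministic yield stress), the correspondence between a safety factor and a reliability level can be made concrete. A minimal sketch assuming a normally distributed stress and a central safety factor defined as yield stress over mean stress (the numbers are illustrative):

```python
import math

def reliability(yield_stress: float, mean_stress: float, std_stress: float) -> float:
    """Part 1 case: actual stress ~ N(mean, std), deterministic yield.
    Reliability = P(stress < yield) = Phi((yield - mean) / std)."""
    z = (yield_stress - mean_stress) / std_stress
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Central safety factor SF = yield stress / mean stress.
mean, std = 200.0, 20.0
print(reliability(1.5 * mean, mean, std))  # SF 1.5 -> reliability ~ 0.9999997
print(reliability(1.2 * mean, mean, std))  # SF 1.2 -> reliability ~ 0.977
```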

Elishakoff, Isaac; Chamis, Christos C. (Technical Monitor)

2001-01-01

329

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2010 CFR

...2010-04-01 2010-04-01 false Reliability Standards. 39.5 Section 39...CONCERNING CERTIFICATION OF THE ELECTRIC RELIABILITY ORGANIZATION; AND PROCEDURES FOR THE...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability...

2010-04-01

330

18 CFR 39.5 - Reliability Standards.  

Code of Federal Regulations, 2011 CFR

...2011-04-01 2011-04-01 false Reliability Standards. 39.5 Section 39.5 Conservation...APPROVAL, AND ENFORCEMENT OF ELECTRIC RELIABILITY STANDARDS § 39.5 Reliability Standards. (a) The Electric Reliability...

2011-04-01

331

76 FR 16277 - System Restoration Reliability Standards  

Federal Register 2010, 2011, 2012, 2013, 2014

...Reliability Standards and One New Glossary Term and for Retirement of Five Existing Reliability Standards and One Glossary Term. The three Reliability standards...EOP Reliability Standards and the glossary term filed by NERC in this...

2011-03-23

332

Parallelization of a treecode  

E-print Network

I describe here the performance of a parallel treecode with individual particle timesteps. The code is based on the Barnes-Hut algorithm and runs cosmological N-body simulations on parallel machines with a distributed memory architecture using the MPI message-passing library. For a configuration with a constant number of particles per processor the scalability of the code was tested up to $P=128$ processors on an IBM SP4 machine. In the large $P$ limit the average CPU time per processor necessary for solving the gravitational interactions is $\sim 10\%$ higher than that expected from the ideal scaling relation. The processor domains are determined every large timestep according to a recursive orthogonal bisection, using a weighting scheme which takes into account the total particle computational load within the timestep. The results of the numerical tests show that the load balancing efficiency $L$ of the code is high ($\geq 90\%$) up to $P=32$, and decreases to $L \sim 80\%$ when $P=128$. In the latter case it is found that some aspects of the code performance are affected by machine hardware, while the proposed weighting scheme can achieve a load balance as high as $L \sim 90\%$ even in the large $P$ limit.
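
The recursive orthogonal bisection with per-particle load weights described above can be sketched compactly. A serial toy version (the particle layout, the weight field, and a power-of-two domain count are assumptions):

```python
import numpy as np

def orb(pos, weight, n_domains):
    """Recursive orthogonal bisection: split along the widest axis at the
    weighted median, so each side carries half the computational load.
    n_domains is assumed to be a power of two."""
    if n_domains == 1:
        return [np.arange(len(pos))]
    axis = np.ptp(pos, axis=0).argmax()          # widest coordinate axis
    order = pos[:, axis].argsort()
    cut = np.searchsorted(np.cumsum(weight[order]), weight.sum() / 2.0)
    left, right = order[:cut], order[cut:]
    doms = [left[d] for d in orb(pos[left], weight[left], n_domains // 2)]
    doms += [right[d] for d in orb(pos[right], weight[right], n_domains // 2)]
    return doms

rng = np.random.default_rng(0)
pos = rng.random((10_000, 3))
weight = rng.random(10_000)                      # per-particle load estimate
domains = orb(pos, weight, 8)
print([round(weight[d].sum(), 1) for d in domains])  # near-equal loads
```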

R. Valdarnini

2003-03-18

333

Tolerant (parallel) Programming  

NASA Technical Reports Server (NTRS)

In order to be truly portable, a program must be tolerant of a wide range of development and execution environments, and a parallel program is just one which must be tolerant of a very wide range. This paper first defines the term "tolerant programming", then describes many layers of tools to accomplish it. The primary focus is on F-Nets, a formal model for expressing computation as a folded partial-ordering of operations, thereby providing an architecture-independent expression of tolerant parallel algorithms. For implementing F-Nets, Cooperative Data Sharing (CDS) is a subroutine package for implementing communication efficiently in a large number of environments (e.g. shared memory and message passing). Software Cabling (SC), a very-high-level graphical programming language for building large F-Nets, possesses many of the features normally expected from today's computer languages (e.g. data abstraction, array operations). Finally, L2³ is a CASE tool which facilitates the construction, compilation, execution, and debugging of SC programs.

DiNucci, David C.; Bailey, David H. (Technical Monitor)

1997-01-01

334

Making parallel lines meet  

PubMed Central

The extracellular matrix is constructed beyond the plasma membrane, challenging mechanisms for its control by the cell. In plants, the cell wall is highly ordered, with cellulose microfibrils aligned coherently over a scale spanning hundreds of cells. To a considerable extent, deploying aligned microfibrils determines mechanical properties of the cell wall, including strength and compliance. Cellulose microfibrils have long been seen to be aligned in parallel with an array of microtubules in the cell cortex. How do these cortical microtubules affect the cellulose synthase complex? This question has stood for as many years as the parallelism between the elements has been observed, but now an answer is emerging. Here, we review recent work establishing that the link between microtubules and microfibrils is mediated by a protein named cellulose synthase-interacting protein 1 (CSI1). The protein binds both microtubules and components of the cellulose synthase complex. In the absence of CSI1, microfibrils are synthesized but their alignment becomes uncoupled from the microtubules, an effect that is phenocopied in the wild type by depolymerizing the microtubules. The characterization of CSI1 significantly enhances knowledge of how cellulose is aligned, a process that serves as a paradigmatic example of how cells dictate the construction of their extracellular environment. PMID:22902763

Baskin, Tobias I.; Gu, Ying

2012-01-01

335

Massively Parallel QCD  

SciTech Connect

The theory of the strong nuclear force, Quantum Chromodynamics (QCD), can be numerically simulated from first principles on massively-parallel supercomputers using the method of Lattice Gauge Theory. We describe the special programming requirements of lattice QCD (LQCD) as well as the optimal supercomputer hardware architectures that it suggests. We demonstrate these methods on the BlueGene massively-parallel supercomputer and argue that LQCD and the BlueGene architecture are a natural match. This can be traced to the simple fact that LQCD is a regular lattice discretization of space into lattice sites while the BlueGene supercomputer is a discretization of space into compute nodes, and that both are constrained by requirements of locality. This simple relation is both technologically important and theoretically intriguing. The main result of this paper is the speedup of LQCD using up to 131,072 CPUs on the largest BlueGene/L supercomputer. The speedup is perfect with sustained performance of about 20% of peak. This corresponds to a maximum of 70.5 sustained TFlop/s. At these speeds LQCD and BlueGene are poised to produce the next generation of strong interaction physics theoretical results.

Soltz, R; Vranas, P; Blumrich, M; Chen, D; Gara, A; Giampap, M; Heidelberger, P; Salapura, V; Sexton, J; Bhanot, G

2007-04-11

336

Applied Parallel Metadata Indexing  

SciTech Connect

The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, stores only records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.
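
The backend described above is not public, but its general shape is easy to sketch with pymongo; the collection names, record fields, and connection address below are hypothetical:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # hypothetical cluster address
table = client["archive"]["user_alice"]            # one table per user, as described

# Import file-system metadata records (normally harvested from GPFS).
table.insert_many([
    {"path": "/archive/run1/out.h5", "size": 123456,
     "mtime": "2012-07-01", "tags": ["simulation", "run1"]},
    {"path": "/archive/run2/out.h5", "size": 654321,
     "mtime": "2012-07-02", "tags": ["simulation", "run2"]},
])

# Index every attribute so searches need not scan the whole table.
for field in ("path", "size", "mtime", "tags"):
    table.create_index(field)

# Search by user-defined metadata instead of remembering file paths.
for doc in table.find({"tags": "run2"}):
    print(doc["path"])
```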

Jacobi, Michael R [Los Alamos National Laboratory

2012-08-01

337

Parallel ptychographic reconstruction.  

PubMed

Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source. PMID:25607174

Nashed, Youssef S G; Vine, David J; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

2014-12-29

338

Claims about the Reliability of Student Evaluations of Instruction: The Ecological Fallacy Rides Again  

ERIC Educational Resources Information Center

The vast majority of the research on student evaluation of instruction has assessed the reliability of groups of courses and yielded either a single reliability coefficient for the entire group, or grouped reliability coefficients for each student evaluation of teaching (SET) item. This manuscript argues that these practices constitute a form of…

Morley, Donald D.

2012-01-01

339

Ultrahigh reliability estimates through simulation  

Microsoft Academic Search

A statistical variance reduction technique called importance sampling is described, and its effectiveness in estimating ultrahigh reliability of life-critical electronics systems is compared with that of the widely used HARP and SURE analytic tools. Importance sampling is seen to provide more accurate reliability estimates with relatively little computational expense for the models studied. The technique is also seen to provide
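
As a self-contained illustration of the technique (the failure model and the biasing distribution are invented for this example, not taken from the paper), importance sampling estimates a rare failure probability by sampling from a distribution shifted toward the failure region and reweighting by the likelihood ratio:

```python
import math
import random

# Estimate p = P(X > 6) for X ~ N(0, 1): true value ~ 9.87e-10, far too
# rare for plain Monte Carlo with a modest sample size.
random.seed(0)
THRESHOLD, N = 6.0, 100_000

def normal_pdf(x: float, mu: float = 0.0) -> float:
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Sample from an importance density shifted to the failure region, N(6, 1),
# and reweight each sample by the likelihood ratio f(x) / g(x).
total = 0.0
for _ in range(N):
    x = random.gauss(THRESHOLD, 1.0)
    if x > THRESHOLD:
        total += normal_pdf(x) / normal_pdf(x, mu=THRESHOLD)

print("importance-sampling estimate:", total / N)  # ~ 9.9e-10
```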

Robert M. Geist; Mark K. Smotherman

1989-01-01

340

Avionics design for reliability bibliography  

NASA Technical Reports Server (NTRS)

A bibliography with abstracts was presented in support of AGARD lecture series No. 81. The following areas were covered: (1) program management, (2) design for high reliability, (3) selection of components and parts, (4) environment consideration, (5) reliable packaging, (6) life cycle cost, and (7) case histories.

1976-01-01

341

Electric Reliability & Hurricane Preparedness Plan  

E-print Network

Presentation slides (fragmentary extraction): Joe Bosco, Account Executive, October 17, 2012. Recoverable fragments: NASA: 99+% reliability; Hurricane Katrina; MPC infrastructure damage: 9,000 broken poles; customers without power (100%); power restored in 11 days; manpower: 11,000; MPC's Hurricane Preparedness.

342

Validity and Reliability of Provided Constructs in Assessing Death Threat.  

ERIC Educational Resources Information Center

In an attempt to assess a person's orientation toward death, a self-administered form of the Threat Index (TI) was introduced and compared to the original interview form along dimensions of validity, reliability, internal consistency, and independence from social desirability response sets. Results supported theoretical and psychometric soundness…

Krieger, Seth R.; And Others

1979-01-01

343

Scalable multi-path discovery technique for parallel data transmission in next generation wide area layer-2 network  

Microsoft Academic Search

This paper focuses on parallel data transmission, which is one of the solutions for achieving high-speed transmission in next generation wide area layer-2 networks. To realize parallel data transmission, reliable delay and route measurement for every possible path between two nodes, and the selection of multiple paths with small delay differences, are necessary. This paper proposes a new multipath discovery and

Jumpei Marukawa; Yuki Nomura; Shota Yamada; Midori Terasawa; Satoru Okamoto; Naoaki Yamanaka

2011-01-01

344

Component fragility data base for reliability and probability studies  

SciTech Connect

Safety-related equipment in a nuclear plant plays a vital role in its proper operation and control, and failure of such equipment due to an earthquake may pose a risk to the safe operation of the plant. Therefore, in order to assess the overall reliability of a plant, the reliability of performance of the equipment should be studied first. The success of a reliability or a probability study depends to a great extent on the data base. To meet this demand, Brookhaven National Laboratory (BNL), under a sponsorship of the United States Nuclear Regulatory Commission (USNRC), has formed a test data base relating the seismic capacity of equipment specimens to the earthquake levels. Subsequently, the test data have been analyzed for use in reliability and probability studies. This paper describes the data base and discusses the analysis methods. The final results that can be directly used in plant reliability and probability studies are also presented in this paper. 2 refs., 2 tabs.

Bandyopadhyay, K.; Hofmayer, C.; Kassir, M.; Pepper, S.

1989-01-01

345

Parallel Computing in SCALE  

SciTech Connect

The SCALE computational architecture has remained basically the same since its inception 30 years ago, although constituent modules and capabilities have changed significantly. This SCALE concept was intended to provide a framework whereby independent codes can be linked to provide a more comprehensive capability than possible with the individual programs - allowing flexibility to address a wide variety of applications. However, the current system was designed originally for mainframe computers with a single CPU and with significantly less memory than today's personal computers. It has been recognized that the present SCALE computation system could be restructured to take advantage of modern hardware and software capabilities, while retaining many of the modular features of the present system. Preliminary work is being done to define specifications and capabilities for a more advanced computational architecture. This paper describes the state of current SCALE development activities and plans for future development. With the release of SCALE 6.1 in 2010, a new phase of evolutionary development will be available to SCALE users within the TRITON and NEWT modules. The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system developed by Oak Ridge National Laboratory (ORNL) provides a comprehensive and integrated package of codes and nuclear data for a wide range of applications in criticality safety, reactor physics, shielding, isotopic depletion and decay, and sensitivity/uncertainty (S/U) analysis. Over the last three years, since the release of version 5.1 in 2006, several important new codes have been introduced within SCALE, and significant advances applied to existing codes. Many of these new features became available with the release of SCALE 6.0 in early 2009. However, beginning with SCALE 6.1, a first generation of parallel computing is being introduced. In addition to near-term improvements, a plan for longer term SCALE enhancement activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

DeHart, Mark D. [ORNL]; Williams, Mark L. [ORNL]; Bowman, Stephen M. [ORNL]

2010-01-01

346

Parallel computation of seismic analysis of high arch dam  

NASA Astrophysics Data System (ADS)

Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms of the numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and configuration of the cracks can be directly simulated. In the seismic response analysis of ADs, all the following factors are involved, such as the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combining effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

Chen, Houqun; Ma, Huaifa; Tu, Jin; Cheng, Guangqing; Tang, Juzhen

2008-03-01

347

Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux  

NASA Astrophysics Data System (ADS)

In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

Guo, Zehua; Tang, Xian-Zhu

2012-06-01

348

Parallel transport of long mean-free-path plasma along open magnetic field lines: Parallel heat flux  

SciTech Connect

In a long mean-free-path plasma where temperature anisotropy can be sustained, the parallel heat flux has two components with one associated with the parallel thermal energy and the other the perpendicular thermal energy. Due to the large deviation of the distribution function from local Maxwellian in an open field line plasma with low collisionality, the conventional perturbative calculation of the parallel heat flux closure in its local or non-local form is no longer applicable. Here, a non-perturbative calculation is presented for a collisionless plasma in a two-dimensional flux expander bounded by absorbing walls. Specifically, closures of previously unfamiliar form are obtained for ions and electrons, which relate two distinct components of the species parallel heat flux to the lower order fluid moments such as density, parallel flow, parallel and perpendicular temperatures, and the field quantities such as the magnetic field strength and the electrostatic potential. The plasma source and boundary condition at the absorbing wall enter explicitly in the closure calculation. Although the closure calculation does not take into account wave-particle interactions, the results based on passing orbits from steady-state collisionless drift-kinetic equation show remarkable agreement with fully kinetic-Maxwell simulations. As an example of the physical implications of the theory, the parallel heat flux closures are found to predict a surprising observation in the kinetic-Maxwell simulation of the 2D magnetic flux expander problem, where the parallel heat flux of the parallel thermal energy flows from low to high parallel temperature region.

Guo Zehua; Tang Xianzhu [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States)

2012-06-15

349

Parallel processors and nonlinear structural dynamics algorithms and software  

NASA Technical Reports Server (NTRS)

A nonlinear structural dynamics program with an element library that exploits parallel processing is under development. The aim is to exploit scheduling-allocation so that parallel processing and vectorization can effectively be treated in a general purpose program. As a byproduct an automatic scheme for assigning time steps was devised. A rudimentary form of the program is complete and has been tested; it shows substantial advantage can be taken of parallelism. In addition, a stability proof for the subcycling algorithm has been developed.

Belytschko, T.

1986-01-01

350

Parallel coding schemes of whisker velocity in the rat's somatosensory system.  

PubMed

The function of rodents' whisker somatosensory system is to transform tactile cues, in the form of vibrissa vibrations, into neuronal responses. It is well established that rodents can detect numerous tactile stimuli and tell them apart. However, the transformation of tactile stimuli obtained through whisker movements to neuronal responses is not well-understood. Here we examine the role of whisker velocity in tactile information transmission and its coding mechanisms. We show that in anaesthetized rats, whisker velocity is related to the radial distance of the object contacted and its own velocity. Whisker velocity is accurately and reliably coded in first-order neurons in parallel, by both the relative time interval between velocity-independent first spike latency of rapidly adapting neurons and velocity-dependent first spike latency of slowly adapting neurons. At the same time, whisker velocity is also coded, although less robustly, by the firing rates of slowly adapting neurons. Comparing first- and second-order neurons, we find similar decoding efficiencies for whisker velocity using either temporal or rate-based methods. Both coding schemes are sufficiently robust and hardly affected by neuronal noise. Our results suggest that whisker kinematic variables are coded by two parallel coding schemes and are disseminated in a similar way through various brain stem nuclei to multiple brain areas. PMID:25552637

Lottem, Eran; Gugig, Erez; Azouz, Rony

2015-03-15

351

Cerebro : forming parallel internets and enabling ultra-local economies  

E-print Network

Internet-based mobile communications have been increasing rapidly [5], yet there is little or no progress in platforms that enable applications for discovery, context-awareness and sharing of data and services in a peer-wise ...

Ypodimatopoulos, Polychronis Panagiotis

2008-01-01

352

Domain Decomposition, Parallel Computing and … (revised, final form, July 1994)

E-print Network

Petter E. Bjørstad, Terje Kårstad. Abstract (fragment): A prototype black oil simulator is described. One area of application is the study of ground water flow, in particular, the study of ground contamination by way

Bjørstad, Petter E.

353

Parallel flows with Soret effect in tilted cylinders  

NASA Technical Reports Server (NTRS)

Henry and Roux (1986, 1987, 1988) have conducted extensive numerical studies on the interaction of Soret separation with convection in cylindrical geometry. Many of their solutions exhibit parallel flow away from end walls. Their parallel flow results can be matched by closed-form solutions. Solutions are nonunique in some parameter regions. Disappearance of one branch of solutions correlates with a sudden transition of Henry and Roux's results from a separated to a well-mixed flow.

Jacqmin, David

1990-01-01

354

A Probabilistic Approach to Symbolic Performance Modeling of Parallel Systems  

Microsoft Academic Search

Performance modeling plays a significant role in predicting the effects of a particular design choice or in diagnosing the cause of some observed performance behavior. Especially for complex systems such as parallel computers, an intended performance typically cannot be achieved without recourse to some form of predictive model. In performance prediction of parallel programs we distinguish static and dynamic prediction

Hasyim GAUTAMA

2004-01-01

355

PARLOG: parallel programming in logic  

Microsoft Academic Search

PARLOG is a logic programming language in the sense that nearly every definition and query can be read as a sentence of predicate logic. It differs from PROLOG in incorporating parallel modes of evaluation. For reasons of efficient implementation, it distinguishes and separates and-parallel and or-parallel evaluation. PARLOG relations are divided into two types: single-solution relations and all-solutions relations. A conjunction

Keith L. Clark; Steve Gregory

1986-01-01

356

Toward Parallel Document Clustering  

SciTech Connect

A key challenge to automated clustering of documents in large text corpora is the high cost of comparing documents in a multimillion dimensional document space. The Anchors Hierarchy is a fast data structure and algorithm for localizing data based on a triangle-inequality-obeying distance metric; the algorithm strives to minimize the number of distance calculations needed to cluster the documents into “anchors” around reference documents called “pivots”. We extend the original algorithm to increase the amount of available parallelism and consider two implementations: a complex data structure which affords efficient searching, and a simple data structure which requires repeated sorting. The sorting implementation is integrated with a text corpora “Bag of Words” program and initial performance results of an end-to-end document processing workflow are reported.
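
The savings in distance calculations come from the triangle inequality; the simplified serial sketch below (synthetic vectors, Euclidean distance, not the implementation from the paper) skips a full distance computation whenever a lower bound already exceeds the best pivot found so far:

```python
import numpy as np

def assign_to_pivots(docs, pivots):
    """Assign each vector to its nearest pivot, pruning with the triangle
    inequality: d(x, p_j) >= |d(x, p_i) - d(p_i, p_j)| for any known p_i."""
    pp = np.linalg.norm(pivots[:, None] - pivots[None, :], axis=2)
    assignments, computed = [], 0
    for x in docs:
        known = {}                       # pivot index -> exact distance
        best, best_d = None, np.inf
        for j in range(len(pivots)):
            # Lower-bound d(x, p_j) using every pivot already measured.
            bound = max((abs(d_i - pp[i, j]) for i, d_i in known.items()),
                        default=0.0)
            if bound >= best_d:
                continue                 # cannot beat the current best
            d = np.linalg.norm(x - pivots[j]); computed += 1
            known[j] = d
            if d < best_d:
                best, best_d = j, d
        assignments.append(best)
    print(f"{computed} of {len(docs) * len(pivots)} distances computed")
    return assignments

rng = np.random.default_rng(0)
assign_to_pivots(rng.random((500, 64)), rng.random((20, 64)))
```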

Mogill, Jace A.; Haglin, David J.

2011-09-01

357

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 90's cannot enjoy an increased level of autonomy without the efficient use of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real time demands are met for large expert systems. Speed-up via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial labs in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems was surveyed. The survey is divided into three major sections: (1) multiprocessors for parallel expert systems; (2) parallel languages for symbolic computations; and (3) measurements of parallelism of expert system. Results to date indicate that the parallelism achieved for these systems is small. In order to obtain greater speed-ups, data parallelism and application parallelism must be exploited.

Yan, Jerry C.; Lau, Sonie

1991-01-01

358

Parallel processor engine model program  

NASA Technical Reports Server (NTRS)

The Parallel Processor Engine Model Program is a generalized engineering tool intended to aid in the design of parallel processing real-time simulations of turbofan engines. It is written in the FORTRAN programming language and executes as a subset of the SOAPP simulation system. Input/output and execution control are provided by SOAPP; however, the analysis, emulation and simulation functions are completely self-contained. A framework in which a wide variety of parallel processing architectures could be evaluated and tools with which the parallel implementation of a real-time simulation technique could be assessed are provided.

Mclaughlin, P.

1984-01-01

359

Innovatory structure design for parallel manipulators: Essential change of design methods  

Microsoft Academic Search

By analyzing the constraint characteristics of each limb connecting the fixed platform to the moving platform of a parallel manipulator, the form of the limbs can be designed to correspond to a given kinematic characteristic form of the parallel manipulator. In the conventional design method, one or more actuated joints are included in each limb while the other joints are passive; then the coefficients of these

Zhu Dachang; Feng Yanping; Cai Jinbao; Xiao Guifang

2009-01-01

360

Evaluation of fault-tolerant parallel-processor architectures over long space missions  

NASA Technical Reports Server (NTRS)

The impact of a five year space mission environment on fault-tolerant parallel processor architectures is examined. The target application is a Strategic Defense Initiative (SDI) satellite requiring 256 parallel processors to provide the computation throughput. The reliability requirements are that the system still be operational after five years with 0.99 probability and that the probability of system failure during one-half hour of full operation be less than 10⁻⁷. The fault tolerance features an architecture must possess to meet these reliability requirements are presented, many potential architectures are briefly evaluated, and one candidate architecture, the Charles Stark Draper Laboratory's Fault-Tolerant Parallel Processor (FTPP), is evaluated in detail. A methodology for designing a preliminary system configuration to meet the reliability and performance requirements of the mission is then presented and demonstrated by designing an FTPP configuration.

Johnson, Sally C.

1989-01-01

361

Int J Parallel Prog (2009) 37:417–431 DOI 10.1007/s10766-009-0104-y

E-print Network

Abstract (fragment): existing approaches will no longer be adequate for the lifetime reliability requirements of future devices [11]. There is growing concern about device failure due to NBTI. The work brings together a functional simulator, gate-level simulator, analog simulator, and state-of-the-art device models

Xie, Yuan

362

Computation and parallel implementation for early vision  

NASA Technical Reports Server (NTRS)

The problem of early vision is to transform one or more retinal illuminance images (pixel arrays) into image representations built out of primitive visual features such as edges, regions, disparities, and clusters. These transformed representations form the input to later vision stages that perform higher level vision tasks including matching and recognition. Researchers developed algorithms for: (1) edge finding in the scale space formulation; (2) correlation methods for computing matches between pairs of images; and (3) clustering of data by neural networks. These algorithms are formulated for parallel implementation on SIMD machines, such as the Massively Parallel Processor, a 128 x 128 array processor with 1024 bits of local memory per processor. For some cases, researchers can show speedups of three orders of magnitude over serial implementations.

Gualtieri, J. Anthony

1990-01-01

363

Parallel execution of logic programs  

SciTech Connect

This work is about the AND/OR Process Model, an abstract model for parallel execution of logic programs. This book defines a framework for implementing parallel interpreters. The research presented here provides an intermediate level of abstraction between hardware and semantics, a set of requirements for a parallel interpreter running on a multiprocessor architecture.

Conery, J.S.

1987-01-01

364

Applications Parallel computing for chromosome  

E-print Network

Design and implementation of a suite of parallel algorithms, called PARODS, for chromosome reconstruction via ordering of DNA sequences. Keywords: Clone ordering; DNA sequencing; Chromosome reconstruction; Simulated

Bhandarkar, Suchendra "Suchi" M.

365

Limited width parallel prefix circuits  

Microsoft Academic Search

In this paper, we present lower and upper bounds on the size of limited width, bounded and unbounded fan-out parallel prefix circuits. The lower bounds on the sizes of such circuits are a function of the depth, width, and number of inputs. The size requirement of an N input bounded fan-out parallel prefix circuit having limited width W and extra

David A. Carlson; Binay Sugla

1990-01-01

366

Parallel Matlab MIT Lincoln Laboratory  

E-print Network

Presentation slides (fragmentary extraction): work sponsored by the Department of Defense under Air Force Contract F19628-00-C-0002 (opinions/interpretations/conclusions disclaimer truncated). Key point, repeated on the slides: MatlabMPI allows any Matlab program to become a high performance parallel program.

Kepner, Jeremy

367

Patterns for Parallel Application Programs  

Microsoft Academic Search

We are involved in an ongoing effort to design a pattern language for parallel application programs. The pattern language consists of a set of patterns that guide the programmer through the entire process of developing a parallel program, including patterns that help find the concurrency in the problem, patterns that help find the appropriate algorithm structure to exploit the concurrency

Berna L. Massingill

1999-01-01

368

Reliability and Maintainability (RAM) Training  

NASA Technical Reports Server (NTRS)

The theme of this manual is failure physics: the study of how products, hardware, software, and systems fail and what can be done about it. The intent is to impart useful information, to extend the limits of production capability, and to assist in achieving low-cost reliable products. In a broader sense the manual should do more. It should underscore the urgent need for mature attitudes toward reliability. Five of the chapters were originally presented as a classroom course to over 1000 Martin Marietta engineers and technicians. Another four chapters and three appendixes have been added. We begin with a view of reliability from the years 1940 to 2000. Chapter 2 starts the training material with a review of mathematics and a description of what elements contribute to product failures. The remaining chapters elucidate basic reliability theory and the disciplines that allow us to control and eliminate failures.

Lalli, Vincent R. (Editor); Malec, Henry A. (Editor); Packard, Michael H. (Editor)

2000-01-01

369

An experiment in software reliability  

NASA Technical Reports Server (NTRS)

The results of a software reliability experiment conducted in a controlled laboratory setting are reported. The experiment was undertaken to gather data on software failures and is one in a series of experiments being pursued by the Fault Tolerant Systems Branch of NASA Langley Research Center to find a means of credibly performing reliability evaluations of flight control software. The experiment tests a small sample of implementations of radar tracking software having ultra-reliability requirements and uses n-version programming for error detection, and repetitive run modeling for failure and fault rate estimation. The experiment results agree with those of Nagel and Skrivan in that the program error rates suggest an approximate log-linear pattern and the individual faults occurred with significantly different error rates. Additional analysis of the experimental data raises new questions concerning the phenomenon of interacting faults. This phenomenon may provide one explanation for software reliability decay.

Dunham, J. R.; Pierce, J. L.

1986-01-01

370

Parallel contingency statistics with Titan.  

SciTech Connect

This report summarizes existing statistical engines in VTK/Titan and presents the recently parallelized contingency statistics engine. It is a sequel to [PT08] and [BPRT09], which studied the parallel descriptive, correlative, multi-correlative, and principal component analysis engines. The ease of use of this new parallel engine is illustrated by means of C++ code snippets. Furthermore, this report justifies the design of these engines with parallel scalability in mind; however, the very nature of contingency tables prevents this new engine from exhibiting the optimal parallel speed-up that the aforementioned engines do. This report therefore discusses the design trade-offs we made and studies performance with up to 200 processors.
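
The reason contingency tables parallelize differently from fixed-size statistics is visible in a map-reduce sketch: per-chunk tables merge by addition, but the merged table grows with the number of distinct value pairs. A minimal Python illustration of that pattern (not the VTK/Titan C++ API):

```python
from collections import Counter
from multiprocessing import Pool

def chunk_table(pairs):
    """Map step: contingency counts for one chunk of (x, y) observations."""
    return Counter(pairs)

def contingency(pairs, workers=4):
    chunks = [pairs[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        tables = pool.map(chunk_table, chunks)
    # Reduce step: tables merge by simple addition of counts. Unlike the
    # fixed-size results of descriptive or correlative statistics, the
    # table itself grows with the number of distinct (x, y) pairs, which
    # is what limits the parallel speed-up.
    total = Counter()
    for t in tables:
        total += t
    return total

if __name__ == "__main__":
    data = [("a", 0), ("a", 1), ("b", 0), ("a", 0)] * 1000
    print(contingency(data).most_common(2))
```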

Thompson, David C.; Pebay, Philippe Pierre

2009-09-01

371

Optimal parallel quantum query algorithms  

E-print Network

We study the complexity of quantum query algorithms that make p queries in parallel in each timestep. This model is in part motivated by the fact that decoherence times of qubits are typically small, so it makes sense to parallelize quantum algorithms as much as possible. We show tight bounds for a number of problems, specifically Theta((n/p)^{2/3}) p-parallel queries for element distinctness and Theta((n/p)^{k/(k+1)}) for k-sum. Our upper bounds are obtained by parallelized quantum walk algorithms, and our lower bounds are based on a relatively small modification of the adversary lower bound method, combined with recent results of Belovs et al. on learning graphs. We also prove some general bounds, in particular that quantum and classical p-parallel complexity are polynomially related for all total functions f when p is small compared to f's block sensitivity.

Stacey Jeffery; Frederic Magniez; Ronald de Wolf

2015-02-20

372

The Galley Parallel File System  

NASA Technical Reports Server (NTRS)

Most current multiprocessor file systems are designed to use multiple disks in parallel, using the high aggregate bandwidth to meet the growing I/O requirements of parallel scientific applications. Many multiprocessor file systems provide applications with a conventional Unix-like interface, allowing the application to access multiple disks transparently. This interface conceals the parallelism within the file system, increasing the ease of programmability, but making it difficult or impossible for sophisticated programmers and libraries to use knowledge about their I/O needs to exploit that parallelism. In addition to providing an insufficient interface, most current multiprocessor file systems are optimized for a different workload than they are being asked to support. We introduce Galley, a new parallel file system that is intended to efficiently support realistic scientific multiprocessor workloads. We discuss Galley's file structure and application interface, as well as the performance advantages offered by that interface.

Nieuwejaar, Nils; Kotz, David

1996-01-01

373

Transfer form  

Cancer.gov

Transfer Investigational Agent Form (10/02). This form is to be used for an intra-institutional transfer, one transfer per form. Division of Cancer Prevention, National Cancer Institute, National Institutes of Health.

374

A parallel algorithm for the eigenvalues and eigenvectors for a general complex matrix  

NASA Technical Reports Server (NTRS)

A new parallel Jacobi-like algorithm is developed for computing the eigenvalues of a general complex matrix. Most parallel methods for this problem typically display only linear convergence. Sequential norm-reducing algorithms also exist and display quadratic convergence in most cases. The new algorithm is a parallel form of the norm-reducing algorithm due to Eberlein. It is proven that the asymptotic convergence rate of this algorithm is quadratic. Numerical experiments are presented which demonstrate the quadratic convergence of the algorithm; certain situations where the convergence is slow are also identified. The algorithm promises to be very competitive on a variety of parallel architectures.

Shroff, Gautam

1989-01-01

375

Photovoltaics performance and reliability workshop  

SciTech Connect

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

Mrig, L. (ed.)

1992-11-01

376

Photovoltaics performance and reliability workshop  

SciTech Connect

This document consists of papers and viewgraphs compiled from the proceedings of a workshop held in September 1992. This workshop was the fifth in a series sponsored by NREL/DOE under the general subject areas of photovoltaic module testing and reliability. PV manufacturers, DOE laboratories, electric utilities and others exchanged technical knowledge and field experience. The topics of cell and module characterization, module and system performance, materials and module durability/reliability research, solar radiation, and applications are discussed.

Mrig, L. (ed.)

1992-01-01

377

Trajectories in parallel optics.  

PubMed

In our previous work we showed the ability to improve the optical system's matrix condition by optical design, thereby improving its robustness to noise. It was shown that by using singular value decomposition, a target point-spread function (PSF) matrix can be defined for an auxiliary optical system, which works parallel to the original system to achieve such an improvement. In this paper, after briefly introducing the all-optics implementation of the auxiliary system, we show a method to decompose the target PSF matrix. This is done through a series of shifted responses of the auxiliary optics (named trajectories), where a complicated hardware filter is replaced by postprocessing. This process manipulates the pixel-confined PSF response of simple auxiliary optics, which in turn creates an auxiliary system with the required PSF matrix. This method is simulated on two space-variant systems and reduces their system condition numbers from 18,598 to 197 and from 87,640 to 5.75, respectively. We perform a study of the latter result and show significant improvement in image restoration performance, in comparison to a system without auxiliary optics and to other previously suggested hybrid solutions. Image restoration results show that in a range of low signal-to-noise ratio values, the trajectories method gives a significant advantage over alternative approaches. A third space-invariant study case is explored only briefly, and we present a significant improvement in the matrix condition number from 1.9160e+013 to 34,526. PMID:21979506

Klapp, Iftach; Sochen, Nir; Mendlovic, David

2011-10-01

378

High Performance Parallel Architectures  

NASA Technical Reports Server (NTRS)

Traditional remote sensing instruments are multispectral, where observations are collected at a few different spectral bands. Recently, many hyperspectral instruments, which can collect observations at hundreds of bands, have become operational. Furthermore, there have been ongoing research efforts on ultraspectral instruments that can produce observations at thousands of spectral bands. While these remote sensing technology developments hold great promise for new findings in the area of Earth and space science, they present many challenges. These include the need for faster processing of such increased data volumes, and methods for data reduction. Dimension Reduction is a spectral transformation, aimed at concentrating the vital information and discarding redundant data. One such transformation, which is widely used in remote sensing, is the Principal Components Analysis (PCA). This report summarizes our progress on the development of a parallel PCA and its implementation on two Beowulf cluster configurations: one with a fast Ethernet switch and the other with a Myrinet interconnection. Details of the implementation and performance results, for typical sets of multispectral and hyperspectral NASA remote sensing data, are presented and analyzed based on the algorithm requirements and the underlying machine configuration. It will be shown that the PCA application is quite challenging and hard to scale on Ethernet-based clusters. However, the measurements also show that a high-performance interconnection network, such as Myrinet, better matches the high communication demand of PCA and can lead to a more efficient PCA execution.

El-Ghazawi, Tarek; Kaewpijit, Sinthop

1998-01-01

379

Reliability measure for segmenting algorithms  

NASA Astrophysics Data System (ADS)

Segmenting is a key initial step in many computer-aided detection (CAD) systems. Our purpose is to develop a method to estimate the reliability of segmenting algorithm results. We use a statistical shape model computed using principal component analysis. The model retains a small number of eigenvectors, or modes, that represent a large fraction of the variance. The residuals between the segmenting result and its projection into the space of retained modes are computed. The sum of the squares of residuals is transformed to a zero-mean, unit standard deviation Gaussian random variable. We also use the standardized scale parameter. The reliability measure is the probability that the transformed residuals and scale parameter are greater than the absolute value of the observed values. We tested the reliability measure with thirty chest x-ray images with leave-one-out testing. The Gaussian assumption was verified using normal probability plots. For each image, a statistical shape model was computed from the hand-digitized data of the rest of the images in the training set. The residuals and scale parameter with automated segment results for the image were used to compute the reliability measure in each case. The reliability measure was significantly lower for two images in the training set with unusual lung fields or processing errors. The data and Matlab scripts for reproducing the figures are at http://www.aprendtech.com/papers/relmsr.zip. Errors detected by the new reliability measure can be used to adjust processing or warn the user.
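The pipeline described is concrete enough to sketch. Below is a minimal, hedged NumPy version of the residual computation; the function names, the use of SVD for the PCA, and the two-sided tail probability are assumptions, not the authors' exact code.

```python
import numpy as np
from scipy.stats import norm

def reliability_measure(shapes_train, shape_test, n_modes=5):
    """shapes_train: (n_samples, n_points) hand-digitized training shapes;
    shape_test: (n_points,) automated segmentation result.
    Returns an approximate tail probability for the observed residuals."""
    mu = shapes_train.mean(axis=0)
    X = shapes_train - mu
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]                      # retained PCA modes

    def rss(d):
        proj = modes.T @ (modes @ d)          # projection onto retained modes
        return np.sum((d - proj) ** 2)        # sum of squared residuals

    r = rss(shape_test - mu)
    r_train = np.array([rss(row) for row in X])
    z = (r - r_train.mean()) / r_train.std()  # standardize residual sum
    return 2 * norm.sf(abs(z))                # two-sided Gaussian tail prob
```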

Alvarez, Robert E.

2004-05-01

380

Sub-Second Parallel State Estimation  

SciTech Connect

This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tool. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, which increases the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so its robustness can be enhanced by repeating the execution with adaptive adjustments, including removing bad data and/or adjusting different initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. The sub-second SE offers other benefits as well: PSE results can potentially be used in local and/or wide-area automatic corrective control actions that currently depend on raw measurements, minimizing the impact of bad measurements and providing opportunities to enhance power grid reliability and efficiency. PSE can also enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate the effects of severe events on the grid. The power grid continues to grow, and the number of measurements is increasing at an accelerated rate due to the variety of smart grid devices being introduced. A parallel state estimation implementation will have better performance than traditional, sequential state estimation by utilizing the power of high performance computing (HPC). This increased performance positions parallel state estimators as valuable tools for operating the increasingly complex power grid.
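For context, the state estimation being accelerated is conventionally formulated as a weighted least-squares (WLS) problem. The minimal linear sketch below illustrates the formulation only, under the assumption of a linearized measurement model, and says nothing about PNNL's actual parallel solvers.

```python
import numpy as np

def wls_state_estimate(H, z, R):
    """Minimal linear WLS state estimator (illustrative only).
    H: (m, n) measurement Jacobian; z: (m,) measurements;
    R: (m,) measurement error variances.
    Solves min_x (z - Hx)^T R^{-1} (z - Hx)."""
    W = np.diag(1.0 / R)             # inverse covariance weights
    G = H.T @ W @ H                  # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)
```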

Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

2014-10-31

381

Supporting data intensive applications with medium grained parallelism  

SciTech Connect

ADAMS is an ambitious effort to provide new database access paradigms for the kinds of scientific applications that require massively parallel access to very large data sets in order to be effective. Many of the Grand Challenge Problems fall into this category, as well as those kinds of scientific research which depend on widely distributed shared sets of disparate data. The essence of the ADAMS approach is to view data purely in functional terms, rather than the more traditional structural view in which multiple data items are aggregated into records or tuples of flat files. Further, ADAMS has been implemented as an embedded interface so that scientists can develop applications in the host programming language of their choice, often Fortran, Pascal, or C, and still access shared data generated in other environments. The syntax and semantics of ADAMS is essentially complete. The functional nature of the ADAMS data interface paradigm simplifies its implementation in a distributed environment, e.g., the Mentat run-time system, because one must only distribute functional servers, not pieces of data structures. However, this only opens up the possibility of effective parallel database processing; to realize this potential far more work must be done in the areas of data dependence, intra-statement parallelism, parallel query optimization, and maintaining consistency and reliability in concurrent systems. Discovering how to make effective parallel data access an actuality in real scientific applications is the point of this research.

Pfaltz, J.L.; French, J.C.; Grimshaw, A.S.; Son, S.H.

1992-04-01

382

Measurement, estimation, and prediction of software reliability  

NASA Technical Reports Server (NTRS)

Quantitative indices of software reliability are defined, and application of three important indices is indicated: (1) reliability measurement, (2) reliability estimation, and (3) reliability prediction. State of the art techniques for each of these procedures are presented together with considerations of data acquisition. Failure classifications and other documentation for comprehensive software reliability evaluation are described.

Hecht, H.

1977-01-01

383

ETARA - EVENT TIME AVAILABILITY, RELIABILITY ANALYSIS  

NASA Technical Reports Server (NTRS)

The ETARA system was written to evaluate the performance of the Space Station Freedom Electrical Power System, but the methodology and software can be modified to simulate any system that can be represented by a block diagram. ETARA is an interactive, menu-driven reliability, availability, and maintainability (RAM) simulation program. Given a Reliability Block Diagram representation of a system, the program simulates the behavior of the system over a specified period of time using Monte Carlo methods to generate block failure and repair times as a function of exponential and/or Weibull distributions. ETARA can calculate availability parameters such as equivalent availability, state availability (percentage of time at a particular output state capability), continuous state duration and number of state occurrences. The program can simulate initial spares allotment and spares replenishment for a resupply cycle. The number of block failures is tabulated both individually and by block type. ETARA also records total downtime, repair time, and time waiting for spares. Maintenance man-hours per year and system reliability, with or without repair, at or above a particular output capability can also be calculated. The key to using ETARA is the development of a reliability or availability block diagram. The block diagram is a logical graphical illustration depicting the block configuration necessary for a function to be successfully accomplished. Each block can represent a component, a subsystem, or a system. The function attributed to each block is considered for modeling purposes to be either available or unavailable; there are no degraded modes of block performance. A block does not have to represent physically connected hardware in the actual system to be connected in the block diagram. The block needs only to have a role in contributing to an available system function. ETARA can model the RAM characteristics of systems represented by multilayered, nesting block diagrams. There are no restrictions on the number of total blocks or on the number of blocks in a series, parallel, or M-of-N parallel subsystem. In addition, the same block can appear in more than one subsystem if such an arrangement is necessary for an accurate model. ETARA 3.3 is written in APL2 for IBM PC series computers or compatibles running MS-DOS and the APL2 interpreter. Hardware requirements for the APL2 system include 640K of RAM, 2Mb of extended memory, and an 80386 or 80486 processor with an 80x87 math co-processor. The standard distribution medium for this package is a set of two 5.25 inch 360K MS-DOS format diskettes. A sample executable is included. The executable contains licensed material from the APL2 for the IBM PC product which is program property of IBM; Copyright IBM Corporation 1988 - All rights reserved. It is distributed with IBM's permission. The contents of the diskettes are compressed using the PKWARE archiving tools. The utility to unarchive the files, PKUNZIP.EXE, is included. ETARA was developed in 1990 and last updated in 1991.
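As a hedged, single-block illustration of the Monte Carlo procedure ETARA applies to whole block diagrams, the following estimates availability from exponential failure and repair distributions (function and parameter names are illustrative):

```python
import random

def simulate_availability(mtbf, mttr, horizon, trials=1000):
    """Monte Carlo availability of one block with exponential failure
    and repair times; a toy version of what ETARA does for full
    reliability block diagrams."""
    up_total = 0.0
    for _ in range(trials):
        t, up = 0.0, 0.0
        while t < horizon:
            ttf = random.expovariate(1.0 / mtbf)   # time to failure
            up += min(ttf, horizon - t)
            t += ttf
            if t >= horizon:
                break
            t += random.expovariate(1.0 / mttr)    # repair time
        up_total += up / horizon
    return up_total / trials
```

Series, parallel, and M-of-N compositions are then evaluated by combining the simulated up/down timelines of the constituent blocks.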

Viterna, L. A.

1994-01-01

384

Overview of the Vesta parallel file system  

Microsoft Academic Search

The Vesta parallel file system provides parallel access from compute nodes to files distributed across I/O nodes in a massively parallel computer. Vesta is intended to solve the I/O problems of massively parallel computers executing numerically intensive scientific applications. Vesta has three interesting characteristics: First, it provides a user defined parallel view of file data, and allows user defined partitioning

Peter F. Corbett; Sandra Johnson Baylor; Dror G. Feitelson

1993-01-01

385

Fatigue Reliability of Gas Turbine Engine Structures  

NASA Technical Reports Server (NTRS)

The results of an investigation are described for fatigue reliability in engine structures. The description consists of two parts. Part 1 is for method development. Part 2 is a specific case study. In Part 1, the essential concepts and practical approaches to damage tolerance design in the gas turbine industry are summarized. These have evolved over the years in response to flight safety certification requirements. The effect of Non-Destructive Evaluation (NDE) methods on these methods is also reviewed. Assessment methods based on probabilistic fracture mechanics, with regard to both crack initiation and crack growth, are outlined. Limit state modeling techniques from structural reliability theory are shown to be appropriate for application to this problem, for both individual failure mode and system-level assessment. In Part 2, the results of a case study for the high pressure turbine of a turboprop engine are described. The response surface approach is used to construct a fatigue performance function. This performance function is used with the First Order Reliability Method (FORM) to determine the probability of failure and the sensitivity of the fatigue life to the engine parameters for the first stage disk rim of the two stage turbine. A hybrid combination of regression and Monte Carlo simulation is used to incorporate time-dependent random variables. System reliability is used to determine the system probability of failure, and the sensitivity of the system fatigue life to the engine parameters of the high pressure turbine. The variation in the primary hot gas and secondary cooling air, the uncertainty of the complex mission loading, and the scatter in the material data are considered.

Cruse, Thomas A.; Mahadevan, Sankaran; Tryon, Robert G.

1997-01-01

386

Parallel processing and expert systems  

NASA Technical Reports Server (NTRS)

Whether it be monitoring the thermal subsystem of Space Station Freedom, or controlling the navigation of the autonomous rover on Mars, NASA missions in the 1990s cannot enjoy an increased level of autonomy without the efficient implementation of expert systems. Merely increasing the computational speed of uniprocessors may not be able to guarantee that real-time demands are met for larger systems. Speedup via parallel processing must be pursued alongside the optimization of sequential implementations. Prototypes of parallel expert systems have been built at universities and industrial laboratories in the U.S. and Japan. The state-of-the-art research in progress related to parallel execution of expert systems is surveyed. The survey discusses multiprocessors for expert systems, parallel languages for symbolic computations, and mapping expert systems to multiprocessors. Results to date indicate that the parallelism achieved for these systems is small. The main reasons are (1) the body of knowledge applicable in any given situation and the amount of computation executed by each rule firing are small, (2) dividing the problem solving process into relatively independent partitions is difficult, and (3) implementation decisions that enable expert systems to be incrementally refined hamper compile-time optimization. In order to obtain greater speedups, data parallelism and application parallelism must be exploited.

Lau, Sonie; Yan, Jerry C.

1991-01-01

387

Forms of matter and forms of radiation  

E-print Network

The theory of defects in ordered and ill-ordered media is a well-advanced part of condensed matter physics. Concepts developed in this field also occur in the study of spacetime singularities, namely: i)- the topological theory of quantized defects (Kibble's cosmic strings) and ii)- the Volterra process for continuous defects, used to classify the Poincaré symmetry breakings. We reassess the classification of Minkowski spacetime defects in the same theoretical frame, starting from the conjecture that these defects fall into two classes, according to whether they relate to massive particles or to radiation. This we justify on the empirical evidence of the Hubble expansion. We introduce timelike and null congruences of geodesics treated as ordered media, viz. 'm'-crystals of massive particles and 'r'-crystals of massless particles, with parallel 4-momenta in M^4. Classifying their defects (or 'forms') we find (i) 'm'- and 'r'- Volterra continuous line defects and (ii) quantized topologically stable 'r'-defects, these latter forms being of various dimensionalities. Besides these 'perfect' forms, there are 'imperfect' disclinations that bound misorientation walls in three dimensions. We also speculate on the possible relation of these forms with the large-scale structure of the Universe.

Maurice Kleman

2011-04-08

388

Parallel Adaptive Mesh Refinement  

SciTech Connect

As large-scale, parallel computers have become more widely available and numerical models and algorithms have advanced, the range of physical phenomena that can be simulated has expanded dramatically. Many important science and engineering problems exhibit solutions with localized behavior where highly-detailed salient features or large gradients appear in certain regions which are separated by much larger regions where the solution is smooth. Examples include chemically-reacting flows with radiative heat transfer, high Reynolds number flows interacting with solid objects, and combustion problems where the flame front is essentially a two-dimensional sheet occupying a small part of a three-dimensional domain. Modeling such problems numerically requires approximating the governing partial differential equations on a discrete domain, or grid. Grid spacing is an important factor in determining the accuracy and cost of a computation. A fine grid may be needed to resolve key local features while a much coarser grid may suffice elsewhere. Employing a fine grid everywhere may be inefficient at best and, at worst, may make an adequately resolved simulation impractical. Moreover, the location and resolution of fine grid required for an accurate solution is a dynamic property of a problem's transient features and may not be known a priori. Adaptive mesh refinement (AMR) is a technique that can be used with both structured and unstructured meshes to adjust local grid spacing dynamically to capture solution features with an appropriate degree of resolution. Thus, computational resources can be focused where and when they are needed most to efficiently achieve an accurate solution without incurring the cost of a globally-fine grid. Figure 1.1 shows two example computations using AMR; on the left is a structured mesh calculation of an impulsively-sheared contact surface and on the right is the fuselage and volume discretization of an RAH-66 Comanche helicopter [35]. Note the ability of both meshing methods to resolve simulation details by varying the local grid spacing.
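A minimal sketch of the core AMR decision, assuming a 1-D structured grid and a simple gradient criterion (production AMR codes use richer error estimators and nested patch generation):

```python
import numpy as np

def flag_cells_for_refinement(u, threshold):
    """Flag cells of a 1-D structured grid whose local gradient exceeds
    a threshold; the simplest form of an AMR refinement criterion."""
    grad = np.abs(np.diff(u))
    flags = np.zeros(u.shape, dtype=bool)
    flags[:-1] |= grad > threshold   # flag both cells sharing a steep face
    flags[1:] |= grad > threshold
    return flags
```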

Diachin, L; Hornung, R; Plassmann, P; WIssink, A

2005-03-04

389

Parallel integer sorting with medium and fine-scale parallelism  

NASA Technical Reports Server (NTRS)

Two new parallel integer sorting algorithms, queue-sort and barrel-sort, are presented and analyzed in detail. These algorithms do not have optimal parallel complexity, yet they show very good performance in practice. Queue-sort is designed for fine-scale parallel architectures which allow the queueing of multiple messages to the same destination. Barrel-sort is designed for medium-scale parallel architectures with a high message passing overhead. The performance results from the implementation of queue-sort on a Connection Machine CM-2 and barrel-sort on a 128 processor iPSC/860 are given. The two implementations are found to be comparable in performance but not as good as a fully vectorized bucket sort on the Cray YMP.
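Both algorithms share a key-distribution step in which each key is routed to the processor that owns its key range. A serial, hedged stand-in for that step (the function name and the even range partition are assumptions):

```python
def barrel_distribute(keys, n_procs, key_max):
    """Route each key to the processor owning its key range, then sort
    locally; a serial stand-in for the message-passing version."""
    width = (key_max + n_procs) // n_procs
    buckets = [[] for _ in range(n_procs)]
    for k in keys:
        buckets[k // width].append(k)     # route key to owning processor
    return [sorted(b) for b in buckets]
```

Concatenating the per-processor results yields the sorted sequence, since the bucket ranges are disjoint and ordered.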

Dagum, Leonardo

1993-01-01

390

Template based parallel checkpointing in a massively parallel computer system  

DOEpatents

A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
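A hedged sketch of the template idea: checksum fixed-size blocks of a node's checkpoint against the shared template and keep only the blocks that differ. This mirrors the rsync-style comparison described above but is not the patented protocol itself; block size and hash choice are illustrative.

```python
import hashlib

def delta_against_template(checkpoint: bytes, template: bytes, block_size=4096):
    """Return only the checkpoint blocks whose checksums differ from the
    corresponding template blocks (rsync-like delta)."""
    delta = {}
    for i in range(0, len(checkpoint), block_size):
        blk = checkpoint[i:i + block_size]
        tmpl = template[i:i + block_size]
        if hashlib.md5(blk).digest() != hashlib.md5(tmpl).digest():
            delta[i] = blk            # only changed blocks are stored/sent
    return delta
```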

Archer, Charles Jens (Rochester, MN); Inglett, Todd Alan (Rochester, MN)

2009-01-13

391

Estimating Test Score Reliability When No Examinee Has Taken the Complete Test.  

ERIC Educational Resources Information Center

Develops formulas to cope with the situation in which the reliability of test scores must be approximated even though no examinee has taken the complete instrument. Develops different estimators for part tests that are judged to be classically parallel, tau-equivalent, or congeneric. Proposes standards for differentiating among these three models.…
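For reference, the three part-test models named in the abstract are conventionally written as follows in classical test theory (standard definitions, not Feldt's specific estimators):

```latex
\begin{aligned}
\text{parallel:}       &\quad X_i = T + e_i, \quad \operatorname{Var}(e_i) = \sigma_e^2 \ \text{for all } i,\\
\text{tau-equivalent:} &\quad X_i = T + e_i, \quad \operatorname{Var}(e_i)\ \text{free to differ},\\
\text{congeneric:}     &\quad X_i = a_i + b_i T + e_i .
\end{aligned}
```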

Feldt, Leonard S.

2003-01-01

392

Parallel computation using boundary elements in solid mechanics  

NASA Technical Reports Server (NTRS)

The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming the linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, the parallel Gaussian elimination solution scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.

Chien, L. S.; Sun, C. T.

1990-01-01

393

40 CFR 75.42 - Reliability criteria.  

Code of Federal Regulations, 2010 CFR

Alternative Monitoring Systems, § 75.42 Reliability criteria. To demonstrate reliability equal to or better than the continuous...

2010-07-01

394

Implementing a parallel C++ runtime system for scalable parallel systems  

Microsoft Academic Search

pC++ is a language extension to C++ designed to allow programmers to compose "concurrent aggregate" collection classes which can be aligned and distributed over the memory hierarchy of a parallel machine in a manner modeled on the High Performance Fortran Forum (HPFF) directives for Fortran 90. pC++ allows the user to write portable and efficient code which will run on a wide range of scalable parallel computer systems.

A. Malony; B. Mohr; P. Beckman; D. Gannon; S. Yang; F. Bodin; S. Kesavan

1993-01-01

395

Software for parallel processing applications  

SciTech Connect

Parallel computing has been used to solve large computing problems in high-energy physics. Typical problems include offline event reconstruction, Monte Carlo event generation and reconstruction, and lattice QCD calculations. Fermilab has extensive experience in parallel computing using CPS (cooperative processes software) and networked UNIX workstations for the loosely-coupled problems of event reconstruction and Monte Carlo generation, and CANOPY and ACPMAPS for lattice QCD. Both systems will be discussed. Parallel software has been developed by many other groups, both commercial and research-oriented. Examples include PVM, Express and network-Linda for workstation clusters and PCN and STRAND88 for more tightly-coupled machines.

Wolbers, S.

1992-10-01

396

Ultra precision and reliable bonding method  

NASA Technical Reports Server (NTRS)

The bonding of two materials through hydroxide-catalyzed hydration/dehydration is achieved at room temperature by applying hydroxide ions to at least one of the two bonding surfaces and by placing the surfaces sufficiently close to each other to form a chemical bond between them. The surfaces may be placed sufficiently close to each other by simply placing one surface on top of the other. A silicate material may also be used as a filling material to help fill gaps between the surfaces caused by surface figure mismatches. A powder of a silica-based or silica-containing material may also be used as an additional filling material. The hydroxide-catalyzed bonding method forms bonds which are not only as precise and transparent as optical contact bonds, but also as strong and reliable as high-temperature frit bonds. The hydroxide-catalyzed bonding method is also simple and inexpensive.

Gwo, Dz-Hung (Inventor)

2001-01-01

397

Robust Design of Reliability Test Plans Using Degradation Measures.  

SciTech Connect

With short production development times, there is an increased need to demonstrate product reliability relatively quickly with minimal testing. In such cases there may be few if any observed failures. Thus, it may be difficult to assess reliability using the traditional reliability test plans that measure only time (or cycles) to failure. For many components, degradation measures will contain important information about performance and reliability. These measures can be used to design a minimal test plan, in terms of number of units placed on test and duration of the test, necessary to demonstrate a reliability goal. Generally, the assumption is made that the error associated with a degradation measure follows a known distribution, usually normal, although in practice cases may arise where that assumption is not valid. In this paper, we examine such degradation measures, both simulated and real, and present non-parametric methods to demonstrate reliability and to develop reliability test plans for the future production of components with this form of degradation.

Lane, Jonathan Wesley; Lane, Jonathan Wesley; Crowder, Stephen V.; Crowder, Stephen V.

2014-10-01

398

PARALLEL IMPLEMENTATION OF VLSI HED CIRCUIT SIMULATION  

E-print Network

PARALLEL IMPLEMENTATION OF VLSI HED CIRCUIT SIMULATION. Informatica 2/91. Keywords: circuit simulation, direct method, waveform relaxation, parallel algorithm, parallel computer architecture. Jurij Silc, Marjan Spegel, Jozef Stefan Institute, Ljubljana, Slovenia. The importance of circuit

Silc, Jurij

399

Automatic Generation of Parallel CRC Circuits  

Microsoft Academic Search

A parallel CRC circuit simultaneously processes multiple data bits. A generic VHDL description of parallel CRC circuits lets designers synthesize CRC circuits for any generator polynomial or required amount of parallelism.
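In software, the same trick of consuming several input bits per step appears as table-driven, byte-at-a-time CRC. A hedged Python sketch for the reflected CRC-32 polynomial follows; the paper itself targets generic VHDL, which this does not reproduce.

```python
def _crc32_table():
    # Precompute 256 entries for the reflected polynomial 0xEDB88320.
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
        table.append(crc)
    return table

_TABLE = _crc32_table()

def crc32(data: bytes) -> int:
    """Byte-at-a-time CRC-32: 8 input bits consumed per table lookup,
    the software analogue of a parallel CRC circuit."""
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ _TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# crc32(b"123456789") == 0xCBF43926, the standard check value.
```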

Michael Sprachmann

2001-01-01

400

A Parallel Logic Programming Language for PEPSys  

Microsoft Academic Search

This paper describes a new parallel logic programming language designed to exploit OR-parallelism and independent AND-parallelism. The language is based on conventional Prolog but with natural extensions to support the handling of multiple solutions and the expression of parallelism.

Michael Rate

401

Uhlmann's parallelism and Nagaoka's quantum information geometry

E-print Network

Uhlmann's parallelism and Nagaoka's quantum information geometry. Keiji Matsumoto, METR 97-09, October 1997. Abstract: In this paper, intrinsic relation

Yamamoto, Hirosuke

402

Load Balancing of Parallelized Information Filters  

E-print Network

Load Balancing of Parallelized Information Filters Neil C. Rowe, Member, IEEE Computer Society develop an analytic model for the costs and advantages of load rebalancing the parallel filtering, data parallelism, load balancing, information retrieval, conjunctions, optimality, and Monte Carlo

Rowe, Neil C.

403

Employing machine learning for reliable miRNA target identification in plants  

PubMed Central

Background: miRNAs are ~21 nucleotide long small noncoding RNA molecules, formed endogenously in most eukaryotes, which mainly control their target genes post-transcriptionally by interacting with and silencing them. While many tools have been developed for animal miRNA target systems, plant miRNA target identification has witnessed limited development. Most tools have centered on exact complementarity matching; very few consider other factors like multiple target sites and the role of flanking regions. Result: In the present work, a Support Vector Regression (SVR) approach has been implemented for plant miRNA target identification, utilizing position-specific dinucleotide density variation information around the target sites to yield highly reliable results. It has been named p-TAREF (plant-Target Refiner). Performance comparison of p-TAREF with other plant prediction tools was carried out with utmost rigor, and p-TAREF was found to perform better in several aspects. Further, p-TAREF was run over experimentally validated miRNA targets from species like Arabidopsis, Medicago, Rice and Tomato, and detected them accurately, suggesting broad usability of p-TAREF across plant species. Using p-TAREF, target identification was done for the complete Rice transcriptome, supported by expression and degradome based data. miR156 was found to be an important component of the Rice regulatory system, where control of genes associated with growth and transcription looked predominant. The entire methodology has been implemented in a multi-threaded parallel architecture in Java, to enable fast processing for the web-server version as well as the standalone version. This also allows it to run even on a simple desktop computer in concurrent mode. The web-server version also provides a facility to gather experimental support for predictions made, through on-the-spot expression data analysis. Conclusion: A machine learning multivariate feature tool has been implemented in parallel and locally installable form for plant miRNA target identification. The performance was assessed and compared through comprehensive testing and benchmarking, suggesting reliable performance and broad usability for transcriptome-wide plant miRNA target identification. PMID:22206472
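A hedged sketch of the regression setup described above, using scikit-learn's SVR in place of the paper's Java implementation; the dinucleotide-density feature definition here is an assumption, not p-TAREF's exact encoding.

```python
import numpy as np
from sklearn.svm import SVR

DINUCS = [a + b for a in "ACGU" for b in "ACGU"]

def dinucleotide_density(seq):
    """Overlapping dinucleotide frequencies of a flanking-region
    sequence; a rough stand-in for p-TAREF's features (assumption)."""
    n = max(len(seq) - 1, 1)
    counts = np.array([sum(seq[i:i + 2] == d for i in range(len(seq) - 1))
                       for d in DINUCS], dtype=float)
    return counts / n

def train_target_scorer(seqs, scores):
    # Rows are putative target sites with flanking context; scores come
    # from experimentally validated examples (illustrative pipeline).
    X = np.array([dinucleotide_density(s) for s in seqs])
    return SVR(kernel="rbf").fit(X, np.array(scores))
```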

2011-01-01

404

Detection of faults and software reliability analysis  

NASA Technical Reports Server (NTRS)

Multi-version or N-version programming is proposed as a method of providing fault tolerance in software. The approach requires the separate, independent preparation of multiple versions of a piece of software for some application. These versions are executed in parallel in the application environment; each receives identical inputs and each produces its version of the required outputs. The outputs are collected by a voter and, in principle, they should all be the same. In practice there may be some disagreement. If this occurs, the results of the majority are taken to be the correct output, and that is the output used by the system. A total of 27 programs were produced. Each of these programs was then subjected to one million randomly-generated test cases. The experiment yielded a number of programs containing faults that are useful for general studies of software reliability as well as studies of N-version programming. Fault tolerance through data diversity and analytic models of comparison testing are discussed.

Knight, John C.

1987-01-01

405

Computational Thermochemistry and Benchmarking of Reliable Methods  

SciTech Connect

During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats of formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.

Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

2006-06-20

406

Assessment of NDE reliability data  

NASA Technical Reports Server (NTRS)

Twenty sets of relevant nondestructive test (NDT) reliability data were identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations was formulated, and a model to grade the quality and validity of the data sets was developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, were formulated for each NDE method. A comprehensive computer program was written and debugged to calculate the probability of flaw detection at several confidence limits by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. An example of the calculated reliability of crack detection in bolt holes by an automatic eddy current method is presented.

Yee, B. G. W.; Couchman, J. C.; Chang, F. H.; Packman, D. F.

1975-01-01

407

Designing and Building Parallel Programs  

NSDL National Science Digital Library

Designing and Building Parallel Programs [Online] is an innovative publishing project that pairs a traditional print textbook with an evolving online resource, incorporating the content of a textbook published by Addison-Wesley.

408

Debugging Serial and Parallel Codes  

NSDL National Science Digital Library

Introduction to debugger software. Serial debugging of array indexing, argument mismatches, infinite loops, pointer misuse, and memory allocation. Parallel debugging of process count, shared memory, MPI I/O, collective communications, and OpenMP scope.

NCSA

409

Demonstrating Forces between Parallel Wires.  

ERIC Educational Resources Information Center

Describes a physics demonstration that dramatically illustrates the mutual repulsion (attraction) between parallel conductors using insulated copper wire, wooden dowels, a high direct current power supply, electrical tape, and an overhead projector. (WRM)

Baker, Blane

2000-01-01

410

Parallel algorithms for message decomposition  

SciTech Connect

The authors consider the deterministic and random parallel complexity (time and processor) of message decoding: an essential problem in communications systems and translation systems. They present an optimal parallel algorithm to decompose prefix-coded messages and uniquely decipherable-coded messages in O(n/P) time, using O(P) processors (for all P with 1 ≤ P ≤ n/log n) deterministically as well as randomly on the weakest version of parallel random access machines, in which concurrent read and concurrent write to a cell in the common memory are not allowed. This is done by reducing decoding to parallel finite-state automata simulation and the prefix sums.
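A hedged sketch of the reduction: for every position i, compute the position just past the codeword that would start at i; these per-position computations are independent, and the parallel algorithm then applies pointer jumping (a prefix computation) to the resulting array. The serial version below illustrates the idea only; the codeword set and names are illustrative.

```python
def decode_positions(bits, codewords):
    """Find codeword boundaries of a prefix-coded bit string.
    nxt[i] = index just past the codeword that would start at i."""
    n = len(bits)
    nxt = [None] * n
    for i in range(n):                 # independent per i -> parallelizable
        for cw in codewords:
            if bits[i:i + len(cw)] == cw:
                nxt[i] = i + len(cw)   # prefix codes: at most one match
                break
    starts, i = [], 0                  # follow pointers from position 0;
    while i is not None and i < n:     # the parallel version pointer-jumps
        starts.append(i)
        i = nxt[i]
    return starts

# Example: decode_positions("0101110", ["0", "10", "111"]) -> [0, 1, 3, 6]
```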

Teng, S.H.; Wang, B.

1987-06-01

411

Electronics reliability and measurement technology  

NASA Technical Reports Server (NTRS)

A summary is presented of the Electronics Reliability and Measurement Technology Workshop. The meeting examined the U.S. electronics industry with particular focus on reliability and state-of-the-art technology. A general consensus of the approximately 75 attendees was that "the U.S. electronics industries are facing a crisis that may threaten their existence". The workshop had specific objectives to discuss mechanisms to improve areas such as reliability, yield, and performance while reducing failure rates, delivery times, and cost. The findings of the workshop addressed various aspects of the industry from wafers to parts to assemblies. Key problem areas that were singled out for attention are identified, and action items necessary to accomplish their resolution are recommended.

Heyman, Joseph S. (editor)

1987-01-01

412

Turbomachinery CFD on parallel computers  

NASA Technical Reports Server (NTRS)

The role of multistage turbomachinery simulation in the development of propulsion system models is discussed. Particularly, the need for simulations with higher fidelity and faster turnaround time is highlighted. It is shown how such fast simulations can be used in engineering-oriented environments. The use of parallel processing to achieve the required turnaround times is discussed. Current work by several researchers in this area is summarized. Parallel turbomachinery CFD research at the NASA Lewis Research Center is then highlighted. These efforts are focused on implementing the average-passage turbomachinery model on MIMD, distributed memory parallel computers. Performance results are given for inviscid, single blade row and viscous, multistage applications on several parallel computers, including networked workstations.

Blech, Richard A.; Milner, Edward J.; Quealy, Angela; Townsend, Scott E.

1992-01-01

413

Interpreting Quantum Parallelism by Sequents  

NASA Astrophysics Data System (ADS)

We introduce an interpretation of quantum superposition in predicative sequent calculus, in the framework of basic logic. Then we introduce a new predicative connective for the entanglement. Our aim is to represent quantum parallelism in terms of logical proofs.

Battilotti, Giulia

2010-12-01

414

New NAS Parallel Benchmarks Results  

NASA Technical Reports Server (NTRS)

NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

1997-01-01

415

"Feeling" Series and Parallel Resistances.  

ERIC Educational Resources Information Center

Equipped with drinking straws and stirring straws, a teacher can help students understand how resistances in electric circuits combine in series and in parallel. Follow-up suggestions are provided. (ZWH)

Morse, Robert A.

1993-01-01

416

Parallel computation and computers for artificial intelligence  

SciTech Connect

This book discusses Parallel Processing in Artificial Intelligence; Parallel Computing using Multilisp; Execution of Common Lisp in a Parallel Environment; Qlisp; Restricted AND-Parallel Execution of Logic Programs; PARLOG: Parallel Programming in Logic; and Data-driven Processing of Semantic Nets. Attention is also given to: Application of the Butterfly Parallel Processor in Artificial Intelligence; On the Range of Applicability of an Artificial Intelligence Machine; Low-level Vision on Warp and the Apply Programming Mode; AHR: A Parallel Computer for Pure Lisp; FAIM-1: An Architecture for Symbolic Multi-processing; and Overview of Al Application Oriented Parallel Processing Research in Japan.

Kowalik, J.S. (Boeing Computer Services, Bellevue, WA (US))

1988-01-01

417

Massively parallel neural computation  

E-print Network

Our quantitative understanding of neuron behaviour goes back to Hodgkin and Huxley (1952), who studied the behaviour of giant squid neurons. We also have a good understanding of the high-level functional structure of the human brain based on studies using brain imaging technologies such as MRI scans. However, we currently know very little about how neurons in the human brain are connected to form neural systems capable of exhibiting the functions that we observe. One way to explore the connectivity of neurons in the brain is to infer candidate neural networks based...

Fox, Paul James

2013-03-12

418

Photovoltaic power system reliability considerations  

NASA Technical Reports Server (NTRS)

An example of how modern engineering and safety techniques can be used to assure the reliable and safe operation of photovoltaic power systems is presented. This particular application is for a solar cell power system demonstration project designed to provide electric power requirements for remote villages. The techniques utilized involve a definition of the power system natural and operating environment, use of design criteria and analysis techniques, an awareness of potential problems via the inherent reliability and FMEA methods, and use of fail-safe and planned spare parts engineering philosophy.

Lalli, V. R.

1980-01-01

419

Metrological Reliability of Medical Devices  

NASA Astrophysics Data System (ADS)

The prominent development of health technologies of the 20th century triggered demands for metrological reliability of physiological measurements comprising physical, chemical and biological quantities, essential to ensure accurate and comparable results of clinical measurements. In the present work, aspects concerning metrological reliability in premarket and postmarket assessments of medical devices are discussed, pointing out challenges to be overcome. In addition, considering the social relevance of the biomeasurements results, Biometrological Principles to be pursued by research and innovation aimed at biomedical applications are proposed, along with the analysis of their contributions to guarantee the innovative health technologies compliance with the main ethical pillars of Bioethics.

Costa Monteiro, E.; Leon, L. F.

2015-02-01

420

Reliability in the design phase  

SciTech Connect

A study was performed to determine the common methods and tools that are available to calculate or predict a system's reliability. A literature review and software survey are included. The desired product of this developmental work is a tool for the system designer to use in the early design phase so that the final design will achieve the desired system reliability without lengthy testing and rework. Three computer programs were written which provide the first attempt at fulfilling this need. The programs are described and a case study is presented for each one. This is a continuing effort which will be furthered in FY-1992. 10 refs.

Siahpush, A.S.; Hills, S.W.; Pham, H. (EG and G Idaho, Inc., Idaho Falls, ID (United States)); Majumdar, D. (USDOE Idaho Field Office, Idaho Falls, ID (United States))

1991-12-01

422

Efficiency of parallel direct optimization  

NASA Technical Reports Server (NTRS)

Tremendous progress has been made at the level of sequential computation in phylogenetics. However, little attention has been paid to parallel computation. Parallel computing is particularly suited to phylogenetics because of the many ways large computational problems can be broken into parts that can be analyzed concurrently. In this paper, we investigate the scaling factors and efficiency of random addition and tree refinement strategies using the direct optimization software, POY, on a small (10 slave processors) and a large (256 slave processors) cluster of networked PCs running LINUX. These algorithms were tested on several data sets composed of DNA and morphology ranging from 40 to 500 taxa. Various algorithms in POY show fundamentally different properties within and between clusters. All algorithms are efficient on the small cluster for the 40-taxon data set. On the large cluster, multibuilding exhibits excellent parallel efficiency, whereas parallel building is inefficient. These results are independent of data set size. Branch swapping in parallel shows excellent speed-up for 16 slave processors on the large cluster. However, there is no appreciable speed-up for branch swapping with the further addition of slave processors (>16). This result is independent of data set size. Ratcheting in parallel is efficient with the addition of up to 32 processors in the large cluster. This result is independent of data set size. © 2001 The Willi Hennig Society.

Janies, D. A.; Wheeler, W. C.

2001-01-01

423

Parallel asynchronous particle swarm optimization  

PubMed Central

SUMMARY The high computational cost of complex engineering optimization problems has motivated the development of parallel optimization algorithms. A recent example is the parallel particle swarm optimization (PSO) algorithm, which is valuable due to its global search capabilities. Unfortunately, because existing parallel implementations are synchronous (PSPSO), they do not make efficient use of computational resources when a load imbalance exists. In this study, we introduce a parallel asynchronous PSO (PAPSO) algorithm to enhance computational efficiency. The performance of the PAPSO algorithm was compared to that of a PSPSO algorithm in homogeneous and heterogeneous computing environments for small- to medium-scale analytical test problems and a medium-scale biomechanical test problem. For all problems, the robustness and convergence rate of PAPSO were comparable to those of PSPSO. However, the parallel performance of PAPSO was significantly better than that of PSPSO for heterogeneous computing environments or heterogeneous computational tasks. For example, PAPSO was 3.5 times faster than was PSPSO for the biomechanical test problem executed on a heterogeneous cluster with 20 processors. Overall, PAPSO exhibits excellent parallel performance when a large number of processors (more than about 15) is utilized and either (1) heterogeneity exists in the computational task or environment, or (2) the computation-to-communication time ratio is relatively small. PMID:17224972
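For reference, the update rule that both PSPSO and PAPSO parallelize is the standard PSO iteration; a minimal serial sketch follows (bounds and constants are illustrative, and the per-particle objective evaluations are what get farmed out to processors, synchronously or asynchronously).

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal serial particle swarm optimizer; returns (best_x, best_f)."""
    x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    gval = min(pval)
    g = pbest[pval.index(gval)][:]
    for _ in range(iters):
        for i in range(n_particles):   # these f evaluations are the parallel unit
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (g[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pval[i]:
                pval[i], pbest[i] = val, x[i][:]
                if val < gval:
                    gval, g = val, x[i][:]
    return g, gval

# Example: pso(lambda p: sum(t * t for t in p), dim=3)
```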

Koh, Byung-Il; George, Alan D.; Haftka, Raphael T.; Fregly, Benjamin J.

2006-01-01

424

Tax Forms  

NSDL National Science Digital Library

As thoughts in the US turn to taxes (April 15 is just around the corner), Mary Jane Ledvina of the Louisiana State University regional government depository library has provided a simple, effective pointers page to downloadable tax forms. Included are federal tax forms and those for 43 states. Of course, available forms vary by state. Most forms are in Adobe Acrobat (.pdf) format. This is a simple, crisply designed page that should save time, although probably not headaches.

Ledvina, Mary Jane.

425

Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer  

DOEpatents

Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

2014-08-12

426

Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer  

DOEpatents

Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

2014-02-11

427

Heat exchanger header with parallel edges  

SciTech Connect

A heat exchanger is described, comprising: a pair of fluid tank units supporting a plurality of parallel tube passes extending in fluid communication therebetween; each of the tank units having a separate tank member and header secured to one another to form the tank unit, the tank member having parallel walls including grooves at the base of each of which is formed a protrusion that is curved in cross section and which runs the length of each respective groove, the header having an arcuate interior surface including perimeter side flanges that fit into the grooves and against the curved protrusions, the header also having a plurality of slots and surrounding wells through which the ends of the tube passes are receivable, the edges of the wells terminating short of the side flanges and having a curvature that blends into the arcuate shape of the side flanges as well as complementing the cross sectional curvature of the protrusions, whereby, when the tank member and header are secured, the protrusions, side flange interior surfaces, and well edges mate together so as to provide complete sealing contact and reinforcement for the securement.

Smith, D.M.

1993-08-24

428

Reliability of Liquid Crystal Display  

Microsoft Academic Search

This paper reports on the reliability of twisted nematic liquid-crystal displays for basic applications such as watches and calculators. We have studied significant stress factors such as voltage, temperature and humidity, and their corresponding failure modes. The main failure mode is LCD misalignment; many different modes appear corresponding to different stress conditions as well as to the LCD's materials and fabrication process.

Kenji Kitagawa; Kazuhisa Toriyama; Yoji Kanuma

1984-01-01

429

Reliable Multicast Transport Protocol (RMTP)  

Microsoft Academic Search

This paper presents the design, implementation, and performance of a reliable multicast transport protocol (RMTP). RMTP is based on a hierarchical structure in which receivers are grouped into local regions or domains and in each domain there is a special receiver called a designated receiver (DR) which is responsible for sending acknowledgments periodically to the sender, for processing acknowledgment from

Sanjoy Paul; Krishan K. Sabnani; John C.-H. Lin; Supratik Bhattacharyya

1997-01-01

430

Becoming a high reliability organization  

PubMed Central

Aircraft carriers, electrical power grids, and wildland firefighting, though seemingly different, are exemplars of high reliability organizations (HROs) - organizations that have the potential for catastrophic failure yet engage in nearly error-free performance. HROs commit to safety at the highest level and adopt a special approach to its pursuit. High reliability organizing has been studied and discussed for some time in other industries and is receiving increasing attention in health care, particularly in high-risk settings like the intensive care unit (ICU). The essence of high reliability organizing is a set of principles that enable organizations to focus attention on emergent problems and to deploy the right set of resources to address those problems. HROs behave in ways that sometimes seem counterintuitive - they do not try to hide failures but rather celebrate them as windows into the health of the system, they seek out problems, they avoid focusing on just one aspect of work and are able to see how all the parts of work fit together, they expect unexpected events and develop the capability to manage them, and they defer decision making to local frontline experts who are empowered to solve problems. Given the complexity of patient care in the ICU, the potential for medical error, and the particular sensitivity of critically ill patients to harm, high reliability organizing principles hold promise for improving ICU patient care. PMID:22188677

2011-01-01

431

Wanted: A Solid, Reliable PC  

ERIC Educational Resources Information Center

This article discusses PC reliability, one of the most pressing issues regarding computers. Nearly a quarter century after the introduction of the first IBM PC and the outset of the personal computer revolution, PCs have largely become commodities, with little differentiating one brand from another in terms of capability and performance. Most of…

Goldsborough, Reid

2004-01-01

432

Quantitative transmission system reliability assessment  

Microsoft Academic Search

Quantitative transmission system reliability assessment is a prerequisite in planning and operating transmission systems that supply industrial and commercial customer loads. The utility industry has traditionally relied on a set of deterministic criteria, such as the widely used N-1 criterion, to guide transmission planning for all customer types, e.g., residential, agricultural, commercial, industrial and sensitive high-tech electronic customers. The

A. A. Chowdhury; D. O. Koval

2009-01-01

433

Fast estimation of reboiler reliability  

Microsoft Academic Search

The problems one faces in evaluating the reliability of a reboiler design, or in judging the effect of modifications of process conditions on reboiler operation, can be complex. To carry out such evaluations, it is necessary for engineers to perform some calculations to determine: heat transfer coefficients in convection boiling; the temperature difference for the onset of nucleate boiling; heat transfer

A. A. Durand; M. A. O. Bonilla

1995-01-01

434

Proposed Reliability/Cost Model  

NASA Technical Reports Server (NTRS)

New technique estimates cost of improvement in reliability for complex system. Model format/approach is dependent upon use of subsystem cost-estimating relationships (CER's) in devising cost-effective policy. Proposed methodology should have application in broad range of engineering management decisions.

Delionback, L. M.

1982-01-01

435

Reliability of dermatology teleconsultations with the use of teleconferencing technology  

Microsoft Academic Search

Background: Recent advances in telecommunications technology allow physicians to consult on patients at a distance via an interactive video format. Few data exist as to the reliability of this form of consultation. Objective: Our purpose was to measure the degree of concordance between a dermatologist seeing a patient in a clinic and another dermatologist seeing the same patient over a commercially

Charles M. Phillips; William A. Burke; Aaron Shechter; Debbie Stone; David Balch; Susan Gustke

1997-01-01

436

Can Contact Potentials Reliably Predict Stability of Proteins?  

E-print Network

Can Contact Potentials Reliably Predict Stability of Proteins? Jainab Khatun, Sagar D. Khare residues in proteins is the contact potential, which defines the effective free energy of a protein conformation by a set of amino acid contacts formed in this conformation. Finding a contact potential capable

Khatun, Jainab

437

Magnetically operated limit switch has improved reliability, minimizes arcing  

NASA Technical Reports Server (NTRS)

Limit switch for reliable, low-travel, snap action with negligible arcing uses an electrically nonconductive permanent magnet consisting of a ferrimagnetic ceramic and ferromagnetic pole shoes which form a magnetic and electrically conductive circuit with a ferrous-metal armature.

Steiner, R.

1966-01-01

438

A Semantic Wiki Alerting Environment Incorporating Credibility and Reliability Evaluation  

E-print Network

transnational criminal gangs in order to automatically produce models of the gangs' membership and activities in the form of a semantic wiki. A gang ontology and semantic inferencing are used to annotate the reports. Keywords: semantic analysis; entity/relation extraction; event tracking; gangs; reliability; credibility

Kokar, Mieczyslaw M.

439

IDENTIFICATION OF RELIABLE PREDICTOR MODELS FOR UNKNOWN SYSTEMS: A  

E-print Network

and time series are constructed for different purposes. Among others, for the analysis of the system, forming a reliable prediction of the system output is important. Typical examples are the forecast of future events, in which case

Campi, Marco

440

Reliable telemetry in white spaces using remote attestation  

Microsoft Academic Search

We consider reliable telemetry in white spaces in the form of protecting the integrity of distributed spectrum measurements against coordinated misreporting attacks. Our focus is on the case where a subset of the sensors can be remotely attested. We propose a practical framework for using statistical sequential estimation coupled with machine learning classifiers to deter attacks and achieve quantifiably precise

Omid Fatemieh; Michael LeMay; Carl A. Gunter

2011-01-01

441

Master/slave clock arrangement for providing reliable clock signal  

NASA Technical Reports Server (NTRS)

The outputs of two like frequency oscillators are combined to form a single reliable clock signal, with one oscillator functioning as a slave under the control of the other to achieve phase coincidence when the master is operative and in a free-running mode when the master is inoperative so that failure of either oscillator produces no effect on the clock signal.

Abbey, Duane L. (Inventor)

1977-01-01

442

Use of an Eductor to Reliably Dilute a Plutonium Solution  

Microsoft Academic Search

Savannah River Site (SRS) in South Carolina is dissolving Pu239 scrap, which is a legacy from the production of nuclear weapons materials, and will later convert it into oxide form to stabilize it. An eductor has been used to both dilute and transfer a plutonium containing solution between tanks. Eductors have the advantages of simplicity and no moving parts. Reliable

Steimke

1999-01-01

443

Long life high reliability thermal control systems study data handbook  

NASA Technical Reports Server (NTRS)

The development of thermal control systems with high reliability and long service life is discussed. Various passive and semi-active thermal control systems which have been installed on space vehicles are described. The properties of the various coatings are presented in tabular form.

Scollon, T. R., Jr.; Carpitella, M. J.

1971-01-01

444

Analysis of shunt active power filters using PSCAD for parallel operation  

Microsoft Academic Search

The modularity of shunt active power filters (APF) is considered to be the most advantageous feature that allows parallel operation of a number of modules. From the viewpoint of reliability, flexibility, and efficiency, the modular filtering approach is quite appropriate for high power applications. This configuration allows various control schemes to be employed, namely power and frequency splitting and capacity limitation

Tolga Sürgevil; Kadir Vardar; Eyüp Akpınar

2009-01-01

445

Quantum Memory Hierarchies: Efficient Designs to Match Available Parallelism in Quantum Computing  

Microsoft Academic Search

The assumption of maximum parallelism support for the successful realization of scalable quantum computers has led to homogeneous, ``sea-of-qubits'' architectures. The resulting architectures overcome the primary challenges of reliability and scalability at the cost of physically unacceptable system area. We find that by exploiting the natural serialization at both the application and the physical microarchitecture level of a quantum computer,

Darshan D. Thaker; Tzvetan S. Metodi; Andrew W. Cross; Isaac L. Chuang; Frederic T. Chong

2006-01-01

446

Native Speakers' versus L2 Learners' Sensitivity to Parallelism in VP-Ellipsis  

ERIC Educational Resources Information Center

This article examines sensitivity to structural parallelism in verb phrase ellipsis constructions in English native speakers as well as in three groups of advanced second language (L2) learners. The results of a set of experiments, based on those of Tanenhaus and Carlson (1990), reveal subtle but reliable differences among the various learner…

Duffield, Nigel G.; Matsuo, Ayumi

2009-01-01

447

CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED  

EPA Science Inventory

This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship between...

448

CONTAMINANT TRANSPORT IN PARALLEL FRACTURED MEDIA: SUDICKY AND FRIND REVISITED  

EPA Science Inventory

This paper is concerned with a modified, nondimensional form of the parallel fracture, contaminant transport model of Sudicky and Frind (1982). The modifications include the boundary condition at the fracture wall, expressed by a parameter, and the power-law relationship betwe...

449

Adaptive Memory Paging for Efficient Gang Scheduling of Parallel Applications  

Microsoft Academic Search

Summary form only given. The gang scheduling paradigm allows timesharing of computing nodes by multiple parallel applications and supports the coordinated context switches of these applications. It can improve system responsiveness and resource utilization. However, the memory paging overhead incurred during context switches can be expensive and may diminish the positive effects of gang scheduling. We investigate the reduction of

Kyung Dong Ryu; Nimish Pachapurkar; Liana L. Fong

2004-01-01

450

An Analysis of Gang Scheduling for Multiprogrammed Parallel Computing Environments  

E-print Network

An Analysis of Gang Scheduling for Multiprogrammed Parallel Computing Environments. {wang-fang, papaefthymiou-marios}@cs.yale.edu. Abstract: Gang scheduling is a resource management scheme ... and analyze a queueing theoretic model for a general gang scheduling scheme that forms the basis of a multipro

Papaefthymiou, Marios

451

Parallel electronic circuit simulation on the iPSC system  

Microsoft Academic Search

A parallel circuit simulator was implemented on the iPSC system. Concurrent model evaluation, hierarchical BBDF (bordered block diagonal form) reordering, and distributed multifrontal decomposition to solve the sparse matrix are used. A speedup of six times has been achieved on an eight-processor iPSC hypercube system.

C.-P. Yuan; R. Lucas; P. Chan; R. Dutton

1988-01-01

452

Automatic Computation of Sensitivities for a Parallel Aerodynamic Simulation  

Microsoft Academic Search

Derivatives of functions given in the form of large-scale simulation codes are frequently used in computational science and engineering. Examples include design optimization, parameter estimation, solution of nonlinear systems, and inverse problems. In this note we address the computation of derivatives of a parallel computational fluid dynamics code by automatic differentiation. More precisely, we are interested in the derivatives

Arno Rasch; H. Martin Bücker; Christian H. Bischof

2007-01-01

453

Parallel Biomolecular Computation on Surfaces with Advanced Finite Automata  

E-print Network

Parallel Biomolecular Computation on Surfaces with Advanced Finite Automata. Michal Soreni, Sivan ...@techunix.technion.ac.il. Abstract: A biomolecular, programmable 3-symbol-3-state finite automaton is reported. This automaton ... in the form of biomolecular structures and functions [4]. Our previously reported 2-symbol-2-state finite

Keinan, Ehud

454

Reliability data collection and analysis system  

Microsoft Academic Search

This paper presents ReDCAS, the reliability data collection and analysis system. ReDCAS is a software tool for reliability data collection and analysis developed for Ford Motor Company. The software employs Bayesian data analysis techniques to estimate reliability measures based on warranty data, test data, and engineering judgments regarding the impact of design changes on the reliability. The software was developed

G. J. Groen; Siyuan Jiang; A. Mosleh; E. L. Droguett

2004-01-01

455

Power Quality and Reliability Project  

NASA Technical Reports Server (NTRS)

One area where universities and industry can link is in the area of power systems reliability and quality - key concepts in the commercial, industrial and public sector engineering environments. Prairie View A&M University (PVAMU) has established a collaborative relationship with the University of Texas at Arlington (UTA), NASA/Johnson Space Center (JSC), and EP&C Engineering and Technology Group (EP&C), a small disadvantaged business that specializes in power quality and engineering services. The primary goal of this collaboration is to facilitate the development and implementation of a Strategic Integrated Power/Systems Reliability and Curriculum Enhancement Program. The objectives of the first phase of this work are: (a) to develop a course in power quality and reliability, (b) to use the campus of Prairie View A&M University as a laboratory for the study of systems reliability and quality issues, (c) to provide students with NASA/EP&C shadowing and internship experience. In this work, a course titled "Reliability Analysis of Electrical Facilities" was developed and taught for two semesters. About thirty-seven students have benefited directly from this course. A laboratory accompanying the course was also developed. Four facilities at Prairie View A&M University were surveyed. Some tests that were performed are (i) earth-ground testing, (ii) voltage, amperage and harmonics of various panels in the buildings, (iii) checking the wire sizes to see if they were the right size for the load that they were carrying, (iv) vibration tests to check the status of the engines or chillers and water pumps, and (v) infrared testing to detect arcing or misfiring of electrical or mechanical systems.

Attia, John O.

2001-01-01

456

Performance and Scalability Evaluation of the Ceph Parallel File System  

SciTech Connect

Ceph is an open-source and emerging parallel distributed file and storage system technology. By design, Ceph assumes running on unreliable, commodity storage and network hardware and provides reliability and fault-tolerance through controlled object placement and data replication. We evaluated the Ceph technology for scientific high-performance computing (HPC) environments. This paper presents our evaluation methodology, experiments, results and observations from mostly parallel I/O performance and scalability perspectives. Our work made two unique contributions. First, our evaluation is performed under a realistic setup for a large-scale capability HPC environment using a commercial high-end storage system. Second, our path of investigation, tuning efforts, and findings made direct contributions to Ceph's development and improved code quality, scalability, and performance. These changes should also benefit both Ceph and HPC communities at large. Throughout the evaluation, we observed that Ceph is still an evolving technology under fast-paced development, showing great promise.

Wang, Feiyi [ORNL] [ORNL; Nelson, Mark [Inktank Storage, Inc.] [Inktank Storage, Inc.; Oral, H Sarp [ORNL] [ORNL; Settlemyer, Bradley W [ORNL] [ORNL; Atchley, Scott [ORNL] [ORNL; Caldwell, Blake A [ORNL] [ORNL; Hill, Jason J [ORNL] [ORNL

2013-01-01

457

JPARSS: A Java Parallel Network Package for Grid Computing  

SciTech Connect

The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, due to the need to tune the TCP window size to improve bandwidth and reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments will be presented to show that using Java parallel streams is more effective than tuning the TCP window size. In addition a simple architecture using Web services

Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

2002-03-01
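
The core idea, partitioning a buffer across several concurrent streams and reassembling by index, can be sketched as follows. This is not the JPARSS API; the in-memory transport below is an illustrative stand-in for the package's parallel TCP sockets.

```python
# Sketch of parallel-stream transfer: split a buffer into partitions,
# send them concurrently, and reassemble by index. The dict "transport"
# stands in for real sockets; names are illustrative, not JPARSS's.
from concurrent.futures import ThreadPoolExecutor

def split(data, n):
    step = -(-len(data) // n)            # ceiling division
    return [data[i * step:(i + 1) * step] for i in range(n)]

def send_partition(index, chunk, received):
    received[index] = chunk              # stand-in for one TCP stream

data = b"x" * 10_000
received = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    for i, chunk in enumerate(split(data, 4)):
        pool.submit(send_partition, i, chunk, received)
assert b"".join(received[i] for i in range(4)) == data
```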

458

Scalable Parallel Random Number Generators Library, The (SPRNG)  

NSDL National Science Digital Library

Computational stochasitc approaches (Monte Carlo methods) based on the random sampling are becoming extremely important research tools not only in their "traditional" fields such as physics, chemistry or applied mathematics but also in social sciences and, recently, in various branches of industry. An indication of importance is, for example, the fact that Monte Carlo calculations consume about one half of the supercomputer cycles. One of the indispensable and important ingredients for reliable and statistically sound calculations is the source of pseudo random numbers. The goal of our project is to develop, implement and test a scalable package for parallel pseudo random number generation which will be easy to use on a variety of architectures, especially in large-scale parallel Monte Carlo applications.

Michael Mascagni, Ashok Srinivasan
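
The concept SPRNG implements, statistically independent per-process random streams, can be sketched in a few lines with NumPy's seed-sequence spawning. NumPy is a different implementation than SPRNG; this only illustrates the idea of giving each parallel worker its own uncorrelated stream.

```python
# Independent per-worker random streams via seed spawning (NumPy,
# illustrating the concept behind SPRNG, not SPRNG's own API).
import numpy as np

root = np.random.SeedSequence(20240101)
child_seeds = root.spawn(4)                       # one child seed per worker
rngs = [np.random.default_rng(s) for s in child_seeds]

# Each worker draws from its own stream, e.g. a toy Monte Carlo
# estimate of pi per stream:
for rank, rng in enumerate(rngs):
    pts = rng.random((100_000, 2))
    pi_hat = 4.0 * np.mean((pts ** 2).sum(axis=1) < 1.0)
    print(f"stream {rank}: pi ~ {pi_hat:.4f}")
```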

459

A Recovery Algorithm for Reliable Multicasting in Reliable Networks  

Microsoft Academic Search

Any reliable multicast protocol requires some recovery mechanism. A generic description of a recovery mechanism consists of a prioritized list of recovery servers/receivers (clients), hierarchically and/or geographically and/or randomly organized. Recovery requests are sent to the recovery clients on the list one-by-one until the recovery effort is successful. There are many recovery strategies available in literature fitting the generic

Danyang Zhang; Sibabrata Ray; Rajgopal Kannan; S. Sitharama Iyengar

2003-01-01

460

Reliability Impact of Stockpile Aging: Stress Voiding  

SciTech Connect

The objective of this research is to statistically characterize the aging of integrated circuit interconnects. This report supersedes the stress void aging characterization presented in SAND99-0975, ''Reliability Degradation Due to Stockpile Aging,'' by the same author. The physics of stress voiding, before and after wafer processing, has recently been characterized by F. G. Yost in SAND99-0601, ''Stress Voiding during Wafer Processing''. The current effort extends this research to account for uncertainties in grain size, storage temperature, void spacing and initial residual stress and their impact on interconnect failure after wafer processing. The sensitivity of the life estimates to these uncertainties is also investigated. Various methods for characterizing the probability of failure of a conductor line were investigated, including Latin hypercube sampling (LHS) and quasi-Monte Carlo sampling (qMC), as well as various analytical methods such as the advanced mean value (AMV) method. The comparison was aided by the use of the Cassandra uncertainty analysis library. It was found that the only viable uncertainty analysis methods were those based on either LHS or quasi-Monte Carlo sampling. Analytical methods such as AMV could not be applied due to the nature of the stress voiding problem. The qMC method was chosen since it provided smaller estimation error for a given number of samples. The preliminary results indicate that the reliability of integrated circuits due to stress voiding is very sensitive to the underlying uncertainties associated with grain size and void spacing. In particular, accurate characterization of IC reliability depends heavily not only on the first and second moments of the uncertainty distribution, but more specifically on the unique form of the underlying distribution.

ROBINSON,DAVID G.

1999-10-01
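
The Latin hypercube sampling the report compares can be sketched with SciPy's qmc module, a modern stand-in for the Cassandra library named above. The input names, ranges, and limit state below are invented for illustration and are not the report's stress-voiding physics.

```python
# Sketch of LHS-based failure-probability estimation. Inputs and the
# limit state are hypothetical placeholders, not the SAND report's model.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=1000)                      # stratified samples in [0,1)^2
# Map to assumed ranges: grain size [um] and void spacing [um].
samples = qmc.scale(unit, l_bounds=[0.2, 1.0], u_bounds=[2.0, 20.0])

def fails(grain_size, void_spacing):
    # Placeholder limit state, NOT the report's stress-voiding criterion.
    return void_spacing / grain_size < 4.0

p_fail = np.mean([fails(g, s) for g, s in samples])
print(f"estimated failure probability: {p_fail:.3f}")
```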

461

Asynchronous parallel status comparator  

DOEpatents

Apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition.

Arnold, Jeffrey W. (828 Hickory Ridge Rd., Aiken, SC 29801); Hart, Mark M. (223 Limerick Dr., Aiken, SC 29803)

1992-01-01
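
The final comparison stage of the patent above, deciding whether at least m of the asynchronously received location reports agree, has a one-line software analogue. The patent describes hardware; the function below is only a toy illustration.

```python
# Toy m-of-N agreement check over decoded location reports.
from collections import Counter

def match_alarm(locations, m=2):
    """True if any location was reported by at least m channels."""
    counts = Counter(locations)
    return any(n >= m for n in counts.values())

print(match_alarm(["A3", "B7", "A3"], m=2))   # True: two channels agree on A3
```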

462

Asynchronous parallel status comparator  

DOEpatents

Disclosed is an apparatus for matching asynchronously received signals and determining whether two or more out of a total number of possible signals match. The apparatus comprises, in one embodiment, an array of sensors positioned in discrete locations and in communication with one or more processors. The processors will receive signals if the sensors detect a change in the variable sensed from a nominal to a special condition and will transmit location information in the form of a digital data set to two or more receivers. The receivers collect, read, latch and acknowledge the data sets and forward them to decoders that produce an output signal for each data set received. The receivers also periodically reset the system following each scan of the sensor array. A comparator then determines if any two or more, as specified by the user, of the output signals corresponds to the same location. A sufficient number of matches produces a system output signal that activates a system to restore the array to its nominal condition. 4 figs.

Arnold, J.W.; Hart, M.M.

1992-12-15

463

Parallel object-oriented programming in SYMPAL  

Microsoft Academic Search

An object-oriented programming model in a parallel system is presented. It is designed for modelling, describing and solving a wide variety of AI problems. AI applications, in particular, deal with knowledge-bases and a lot of problems have parallel solutions. Therefore, while parallel computers are becoming widespread, there is a need for a rich language that enables the exploitation of parallelism

I. Danieli; S. Cohen

1988-01-01

464

Dynamic parallel complexity of computational circuits  

Microsoft Academic Search

The dynamic parallel complexity of general computational circuits (defined in the introduction) is discussed. We exhibit some relationships between parallel circuit evaluation and some uniform closure properties of a certain class of unary functions and present a systematic method for the design of processor-efficient parallel algorithms for circuit evaluation. Using this method: (1) we improve the algorithm for parallel Boolean

Gary L. Miller; Shang-Hua Teng

1987-01-01

465

Refinement Transformation Using Abstract Parallel Machines  

E-print Network

for circuit specification ('concrete parallel machines'). The ability to define Abstract Parallel ... Refinement Transformation Using Abstract Parallel Machines. Joy Goodman, John O'Donnell (University of Glasgow) and Gudula Rünger (Universität Leipzig). Abstract: Abstract Parallel Machines

Goodman, Joy

466

Study on activity float parallel properties in Activity-on-Arc Network  

Microsoft Academic Search

The study of activity float properties is the basis of scientific project planning and control. Aiming at the problem that an arbitrary activity that has consumed float will affect the float of parallel activities and of both preceding and succeeding activities that are not on float characteristic transfer chains, i.e., activity float parallel properties, by the analysis of three forms and the affection

Zhixiong Su; Jianxun Qi

2010-01-01

467

Parallel Hybrid Clustering using Genetic Programming and Multi-Objective Fitness with Density (PYRAMID)  

E-print Network

(PYRAMID). Samir Tout, William Sverdlik, Junping Sun. Graduate School of Computer and Information ... parallel hybrid clustering using genetic programming and multi-objective fitness with density (PYRAMID ... on user-supplied parameters, PYRAMID employs a combination of data parallelism, a form of genetic

Fernandez, Thomas

468

A Three Degree of Freedom Parallel Manipulator with Only Translational Degrees of Freedom  

Microsoft Academic Search

In this dissertation, a novel parallel manipulator is investigated. The manipulator has three degrees of freedom and the moving platform is constrained to only translational motion. The main advantages of this parallel manipulator are that all of the actuators can be attached directly to the base, closed-form solutions are available for the forward kinematics, the moving platform maintains the same

R. E. Stamper

1997-01-01

469

High frequency switched capacitor IIR filters using parallel cyclic type circuits  

Microsoft Academic Search

In order to reduce the performance deterioration due to the finite gain bandwidth (GB) product of op-amps in switched capacitor (SC) transversal filters, parallel cyclic type circuits have been proposed. The authors consider how to implement direct form I SC IIR (infinite impulse response) filters using the parallel cyclic type circuit. The effects of finite GB products of op-amps and

Yoshinori Hirata; Kyoko Kato; Nobuaki Takahashi; Tsuyoshi Takebe

1992-01-01

470

The parallel approach  

E-print Network

Commentary: The parallel approach. Massimiliano Di Ventra and Yuriy V. Pershin. A class of two-terminal passive circuit elements that can also act as memories could be the building blocks of a form of massively parallel computation

Loss, Daniel

471

Tutorial: Performance and reliability in redundant disk arrays  

NASA Technical Reports Server (NTRS)

A disk array is a collection of physically small magnetic disks that is packaged as a single unit but operates in parallel. Disk arrays capitalize on the availability of small-diameter disks from a price-competitive market to provide the cost, volume, and capacity of current disk systems but many times their performance. Unfortunately, relative to current disk systems, the larger number of components in disk arrays leads to higher rates of failure. To tolerate failures, redundant disk arrays devote a fraction of their capacity to an encoding of their information. This redundant information enables the contents of a failed disk to be recovered from the contents of non-failed disks. The simplest and least expensive encoding for this redundancy, known as N+1 parity, is highlighted. In addition to compensating for the higher failure rates of disk arrays, redundancy allows highly reliable secondary storage systems to be built much more cost-effectively than is now achieved in conventional duplicated disks. Disk arrays that combine redundancy with the parallelism of many small-diameter disks are often called Redundant Arrays of Inexpensive Disks (RAID). This combination promises improvements to both the performance and the reliability of secondary storage. For example, IBM's premier disk product, the IBM 3390, is compared to a redundant disk array constructed of 84 IBM 0661 3 1/2-inch disks. The redundant disk array has comparable or superior values for each of the metrics given and appears likely to cost less. In the first section of this tutorial, I explain how disk arrays exploit the emergence of high performance, small magnetic disks to provide cost-effective disk parallelism that combats the access and transfer gap problems. The flexibility of disk-array configurations benefits manufacturer and consumer alike. In contrast, I describe in this tutorial's second half how parallelism, achieved through increasing numbers of components, causes overall failure rates to rise. Redundant disk arrays overcome this threat to data reliability by ensuring that data remains available during and after component failures.

Gibson, Garth A.

1993-01-01
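
The N+1 parity encoding highlighted in this record reduces to bytewise XOR: the parity block is the XOR of the data blocks, so any single failed block can be rebuilt from the survivors. A minimal illustration (not from the tutorial itself):

```python
# N+1 parity in miniature: parity = XOR of data blocks; rebuild a
# failed block by XOR-ing parity with the surviving blocks.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

data = [b"disk0...", b"disk1...", b"disk2..."]   # equal-size blocks
parity = xor_blocks(data)                        # stored on the +1 disk

# Disk 1 fails; rebuild its contents from parity and the other disks.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```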

472

File concepts for parallel I/O  

NASA Technical Reports Server (NTRS)

The subject of input/output (I/O) has often been neglected in the design of parallel computer systems, although for many problems I/O rates will limit the speedup attainable. The I/O problem is addressed by considering the role of files in parallel systems. The notion of parallel files is introduced. Parallel files provide for concurrent access by multiple processes, and utilize parallelism in the I/O system to improve performance. Parallel files can also be used conventionally by sequential programs. A set of standard parallel file organizations is proposed, and implementations using multiple storage devices are suggested. Problem areas are also identified and discussed.

Crockett, Thomas W.

1989-01-01

473

Performance Issues in Parallelized Network Protocols  

Microsoft Academic Search

Parallel processing has been proposed as a means of improving network protocol throughput. Several different strategies have been taken towards parallelizing protocols. A relatively popular approach is packet-level parallelism, where packets are distributed across processors. This paper provides an experimental performance study of packet-level parallelism on a contemporary shared-memory multiprocessor. We examine several unexplored areas in packet-level parallelism

Erich M. Nahum; David J. Yates; James F. Kurose; Donald F. Towsley

1994-01-01

474

Parallels plane projection and its geometric features  

Microsoft Academic Search

A new equivalent map projection called the parallels plane projection is proposed in this paper. The transverse axis of the parallels plane projection is the expansion of the equator and its vertical axis equals half the length of the central meridian. On the parallels plane projection, meridians are projected as sine curves and parallels are a series of straight, parallel

ChengHu Zhou; Ting Ma; Liao Yang; Biao Qin

2007-01-01

475

On-orbit spacecraft reliability  

NASA Technical Reports Server (NTRS)

Operational and historic data for 350 spacecraft from 52 U.S. space programs were analyzed for on-orbit reliability. Failure rate estimates are made for on-orbit operation of spacecraft subsystems, components, and piece parts, as well as estimates of failure probability for the same elements during launch. Confidence intervals for both parameters are also given. The results indicate that: (1) the success of spacecraft operation is only slightly affected by most reported incidents of anomalous behavior; (2) the occurrence of the majority of anomalous incidents could have been prevented prior to launch; (3) no detrimental effect of spacecraft dormancy is evident; (4) cycled components in general are not demonstrably less reliable than uncycled components; and (5) application of product assurance elements is conducive to spacecraft success.

Bloomquist, C.; Demars, D.; Graham, W.; Henmi, P.

1978-01-01

476

Assessment of NDE Reliability Data  

NASA Technical Reports Server (NTRS)

Twenty sets of relevant Nondestructive Evaluation (NDE) reliability data have been identified, collected, compiled, and categorized. A criterion for the selection of data for statistical analysis considerations has been formulated. A model to grade the quality and validity of the data sets has been developed. Data input formats, which record the pertinent parameters of the defect/specimen and inspection procedures, have been formulated for each NDE method. A comprehensive computer program has been written to calculate the probability of flaw detection at several confidence levels by the binomial distribution. This program also selects the desired data sets for pooling and tests the statistical pooling criteria before calculating the composite detection reliability. Probability of detection curves at 95 and 50 percent confidence levels have been plotted for individual sets of relevant data as well as for several sets of merged data with common sets of NDE parameters.

Yee, B. G. W.; Chang, F. H.; Couchman, J. C.; Lemon, G. H.; Packman, P. F.

1976-01-01
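
The binomial detection-reliability calculation this record describes can be sketched with a standard Clopper-Pearson lower confidence bound on probability of detection (POD); the report's own program, data pooling, and grading criteria are not reproduced here.

```python
# One-sided lower confidence bound on POD from k detections in n trials,
# via the standard Clopper-Pearson bound (a sketch, not the report's code).
from scipy.stats import beta

def pod_lower_bound(k, n, confidence=0.95):
    if k == 0:
        return 0.0
    return beta.ppf(1.0 - confidence, k, n - k + 1)

# e.g. 29 detections of 29 flaws at 95% confidence gives POD >= 0.90,
# the classic "29 of 29" demonstration:
print(f"{pod_lower_bound(29, 29, 0.95):.3f}")
```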

477

Gearbox Reliability Collaborative Bearing Calibration  

SciTech Connect

NREL has initiated the Gearbox Reliability Collaborative (GRC) to investigate the root causes of low wind turbine gearbox reliability. The GRC follows a multi-pronged approach based on a collaborative of manufacturers, owners, researchers and consultants. The project combines analysis, field testing, dynamometer testing, condition monitoring, and the development and population of a gearbox failure database. At the core of the project are two 750 kW gearboxes that have been redesigned and rebuilt so that they are representative of the multi-megawatt gearbox topology currently used in the industry. These gearboxes are heavily instrumented and are tested in the field and on the dynamometer. This report discusses the bearing calibrations of the gearboxes.

van Dam, J.

2011-10-01

478

Reliable modeling of complex behavior  

SciTech Connect

The status of modeling for large-strain plasticity is assessed, and this overview is used to emphasize some general points concerning modeling in Materials Science. While a physical foundation is essential in order to achieve generality and some measure of confidence in extrapolations, phenomenological constraint is equally crucial to achieve reliability and predictive value in descriptions of the macroscopic behavior despite the enormous complexity of the underlying physics. Many details that may be of interest in modeling the physical foundation lose importance in the integration to an overall materials response, which depends on few parameters and is quite reproducible. From this point of view, the current understanding of large-strain plasticity is adequate in many respects. However, some problems are highlighted in which more quantitative modeling results would impact the reliability and generality of macroscopic properties descriptions, and which seem amenable to treatment with current techniques and resources. 21 refs., 6 figs., 1 tab.

Kocks, U.F.

1991-01-01

479

Model-Based Reliability Analysis  

SciTech Connect

Modeling, in conjunction with testing, is a rich source of insight. Model parameters are easily controlled and monitoring can be done unobtrusively. The ability to inject faults without otherwise affecting performance is particularly critical. Many iterations can be done quickly with a model while varying parameters and conditions based on a small number of validation tests. The objective of Model-Based Reliability Analysis (MBRA) is to identify ways to capitalize on the insights gained from modeling to make both qualitative and quantitative statements about product reliability. MBRA will be developed and exercised in the realm of weapon system development and maintenance, where the challenges of severe environmental requirements, limited production quantities, and use of one-shot devices can make testing prohibitively expensive. However, the general principles will also be applicable to other product types.

Rene L. Bierbaum; Thomas d. Brown; Thomas J. Kerschen

2001-01-22

480

Integrated CNI avionics maximizes reliability  

NASA Astrophysics Data System (ADS)

An integrated architecture is presented for communications, navigation, and cooperative identification (CNI) functions in the avionics of fighter aircraft. Attention is given to the development of fault tolerant, gracefully degrading systems, where inputs may be rerouted if failure occurs in any prime circuit. A block diagram is provided for the recommended architecture. Major partitioning is noted in the L-band transmit/receive and HF/VHF/UHF-band transmit/receive sections, involving provision of alternative paths for up- and down-link circuits. Functional applications of the proposed circuitry in various geographic regions and hostile environments are discussed, particularly the substitution of function reliability for component/module reliability, producing a continuum of availability. Numerical modeling to define the capability of a system to meet a given mission is demonstrated.

Camana, P. C.; Cambell, M. E.

481

What makes a family reliable?  

NASA Technical Reports Server (NTRS)

Asteroid families are clusters of asteroids in proper element space which are thought to be fragments from former collisions. Studies of families promise to improve understanding of large collision events and a large event can open up the interior of a former parent body to view. While a variety of searches for families have found the same heavily populated families, and some searches have found the same families of lower population, there is much apparent disagreement between proposed families of lower population of different investigations. Indicators of reliability, factors compromising reliability, an illustration of the influence of different data samples, and a discussion of how several investigations perceived families in the same region of proper element space are given.

Williams, James G.

1992-01-01

482

Ternary SNARE complexes in parallel versus anti-parallel orientation: examination of their disassembly using single-molecule force spectroscopy.  

PubMed

Interactions between the proteins of the ternary soluble N-ethyl maleimide-sensitive fusion protein attachment protein receptor (SNARE) complex, synaptobrevin 2 (Sb2), syntaxin 1A (Sx1A) and synaptosome-associated protein of 25 kDa (SNAP25), can be readily assessed using force spectroscopy single-molecule measurements. We studied interactions during the disassembly of the ternary SNARE complex pre-formed by binding Sb2 in parallel or anti-parallel orientations to the binary Sx1A-SNAP25B acceptor complex. We determined the spontaneous dissociation lifetimes and found that the stability of the anti-parallel ternary SNARE complex is ~1/3 less than that of the parallel complex. While the free energies were very similar, within 0.5 kBT, for both orientations, the enthalpy changes (42.1 kBT and 39.8 kBT for parallel and anti-parallel orientations, respectively) indicate that the parallel ternary complex is energetically advantageous by 2.3 kBT. Indeed, both ternary SNARE complex orientations were much more stable (by ~4-13 times) and energetically favorable (by ~9-13 kBT) than selected binary complexes, constituents of the ternary complex, in both orientations. We propose a model which considers the geometry of the vesicle approach to the plasma membrane, with favorable energies and stability, as the basis for preferential usage of the parallel ternary SNARE complex in exocytosis. PMID:22525946

Liu, Wei; Stout, Randy F; Parpura, Vladimir

2012-01-01

483

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory.

Gorda, B.C.; Brooks, E.D. III.

1991-12-01
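
A toy model of the mechanism this record (and the following duplicate entry) describes: the scheduler wakes all processes of one gang for a time quantum, then puts them to sleep and wakes the next gang. Priority queues and fair-share accounting are reduced to a simple counter here; this is not LLNL's implementation.

```python
# Toy round-robin gang scheduler: one work unit per time quantum,
# whole gangs run together. Illustrative only.
from collections import deque

def gang_schedule(gangs, quanta):
    """gangs: {name: remaining work units}."""
    queue = deque(gangs.items())
    for t in range(quanta):
        if not queue:
            break
        name, work = queue.popleft()
        print(f"t={t}: gang '{name}' runs (all its processes awake)")
        if work > 1:
            queue.append((name, work - 1))   # back of the queue

gang_schedule({"solver": 3, "render": 2, "stats": 1}, quanta=10)
```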

484

Gang scheduling a parallel machine  

SciTech Connect

Program development on parallel machines can be a nightmare of scheduling headaches. We have developed a portable time sharing mechanism to handle the problem of scheduling gangs of processes. User programs and their gangs of processes are put to sleep and awakened by the gang scheduler to provide a time sharing environment. Time quanta are adjusted according to priority queues and a system of fair share accounting. The initial platform for this software is the 128 processor BBN TC2000 in use in the Massively Parallel Computing Initiative at the Lawrence Livermore National Laboratory. 2 refs., 1 fig.

Gorda, B.C.; Brooks, E.D. III.

1991-03-01

485

Parallelization of the SIR code  

NASA Astrophysics Data System (ADS)

A high-resolution 3-dimensional model of the photospheric magnetic field is essential for the investigation of small-scale solar magnetic phenomena. The SIR code is an advanced Stokes-inversion code that deduces physical quantities, e.g. magnetic field vector, temperature, and LOS velocity, from spectropolarimetric data. We extended this code by the capability of directly using large data sets and inverting the pixels in parallel. Due to this parallelization it is now feasible to apply the code directly on extensive data sets. Besides, we included the possibility to use different initial model atmospheres for the inversion, which enhances the quality of the results.

Thonhofer, S.; Bellot Rubio, L. R.; Utz, D.; Jurčák, J.; Hanslmeier, A.; Piantschitsch, I.; Pauritsch, J.; Lemmerer, B.; Guttenbrunner, S.

486

Reliability in individual monitoring service.  

PubMed

As a laboratory certified to ISO 9001:2008 and accredited to ISO/IEC 17025, the Secondary Standard Dosimetry Laboratory (SSDL)-Nuclear Malaysia has incorporated an overall comprehensive system for technical and quality management in promoting a reliable individual monitoring service (IMS). Faster identification and resolution of issues regarding dosemeter preparation and issuing of reports, personnel enhancement, improved customer satisfaction and overall efficiency of laboratory activities are all results of the implementation of an effective quality system. Review of these measures and responses to observed trends provide continuous improvement of the system. By having these mechanisms, reliability of the IMS can be assured in the promotion of safe behaviour at all levels of the workforce utilising ionising radiation facilities. Upgrading of the reporting program through a web-based e-SSDL marks a major improvement in Nuclear Malaysia's IMS reliability on the whole. The system is a vital step in providing a user friendly and effective occupational exposure evaluation program in the country. It provides a higher level of confidence in the results generated for occupational dose monitoring of the IMS, thus enhancing the status of the radiation protection framework of the country. PMID:21147789

Mod Ali, N

2011-03-01

487

New solvable sigma models in plane-parallel wave background  

E-print Network

We explicitly solve the classical equations of motion for strings in backgrounds obtained as non-abelian T-duals of a homogeneous isotropic plane-parallel wave. To construct the dual backgrounds, semi-abelian Drinfeld doubles are used which contain the isometry group of the homogeneous plane wave metric. The dual solutions are then found by the Poisson-Lie transformation of the explicit solution of the original homogeneous plane wave background. Investigating their Killing vectors, we have found that the dual backgrounds can be transformed to the form of more general plane-parallel waves.

Ladislav Hlavaty; Ivo Petr

2014-03-19

488

Mapping Pixel Windows To Vectors For Parallel Processing  

NASA Technical Reports Server (NTRS)

Mapping performed by matrices of transistor switches. Arrays of transistor switches devised for use in forming simultaneous connections from square subarray (window) of n x n pixels within electronic imaging device containing np x np array of pixels to linear array of n^2 input terminals of electronic neural network or other parallel-processing circuit. Method helps to realize potential for rapidity in parallel processing for such applications as enhancement of images and recognition of patterns. In providing simultaneous connections, overcomes timing bottleneck of older multiplexing, serial-switching, and sample-and-hold methods.

Duong, Tuan A.

1996-01-01
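
As a software analogue of the mapping described above (the record itself describes hardware switch matrices), the window-to-vector reshaping is a one-liner; the array sizes below are illustrative. The hardware contribution is making all n^2 connections simultaneously.

```python
# Extract an n x n pixel window and present it as a length-n^2 vector,
# mirroring in software what the switch matrix does in hardware.
import numpy as np

image = np.arange(64).reshape(8, 8)     # toy 8 x 8 pixel array
n, row, col = 3, 2, 4                   # window size and top-left corner
window = image[row:row + n, col:col + n]
vector = window.reshape(n * n)          # n^2 inputs for the neural network
print(vector)
```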

489

Parallel Monte Carlo Simulation for control system design  

NASA Technical Reports Server (NTRS)

The research during the 1993/94 academic year addressed the design of parallel algorithms for stochastic robustness synthesis (SRS). SRS uses Monte Carlo simulation to compute probabilities of system instability and other design-metric violations. The probabilities form a cost function which is used by a genetic algorithm (GA). The GA searches for the stochastic optimal controller. The existing sequential algorithm was analyzed and modified to execute in a distributed environment. For this, parallel approaches to Monte Carlo simulation and genetic algorithms were investigated. Initial empirical results are available for the KSR1.

Schubert, Wolfgang M.

1995-01-01
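
A minimal sketch of the Monte Carlo step this record describes: estimating a probability of closed-loop instability over sampled plant parameters, with samples evaluated in parallel. The second-order plant, parameter distributions, and names below are assumptions for illustration, not taken from the report.

```python
# Parallel Monte Carlo estimate of P(instability) for an uncertain
# discrete-time system z^2 + a1*z + a0 (illustrative model only).
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def unstable(seed):
    rng = np.random.default_rng(seed)
    a1 = rng.normal(0.8, 0.3)            # uncertain coefficients
    a0 = rng.normal(0.5, 0.2)
    return bool(np.any(np.abs(np.roots([1.0, a1, a0])) >= 1.0))

if __name__ == "__main__":
    n = 10_000
    with ProcessPoolExecutor() as pool:
        failures = sum(pool.map(unstable, range(n), chunksize=500))
    print(f"P(instability) ~ {failures / n:.3f}")
```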

490

Parallel language constructs for tensor product computations on loosely coupled architectures  

NASA Technical Reports Server (NTRS)

Distributed memory architectures offer high levels of performance and flexibility, but have proven awkward to program. Current languages for nonshared memory architectures provide a relatively low level programming environment, and are poorly suited to modular programming and to the construction of libraries. A set of language primitives designed to allow the specification of parallel numerical algorithms at a higher level is described. The focus is on tensor product array computations, a simple but important class of numerical algorithms. The problem of programming 1-D kernel routines, such as parallel tridiagonal solvers, is addressed first, and then it is examined how such parallel kernels can be combined to form parallel tensor product algorithms.

Mehrotra, Piyush; Vanrosendale, John

1989-01-01
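
The tensor-product composition the abstract describes, building a multidimensional algorithm by applying a 1-D kernel along each axis in turn, can be sketched as follows. The toy tridiagonal smoothing kernel and NumPy framing are assumptions for illustration; the paper's actual contribution is language primitives, not this code.

```python
# Tensor-product composition: apply a 1-D kernel along each axis of a
# 2-D grid. The kernel here is a toy tridiagonal smoothing line solve.
import numpy as np

def kernel_1d(v):
    out = v.copy()
    out[1:-1] = 0.25 * v[:-2] + 0.5 * v[1:-1] + 0.25 * v[2:]
    return out

def tensor_product_apply(grid):
    grid = np.apply_along_axis(kernel_1d, 0, grid)   # along columns
    return np.apply_along_axis(kernel_1d, 1, grid)   # then along rows

print(tensor_product_apply(np.random.default_rng(0).random((6, 6))))
```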

491

FAROW: A tool for fatigue and reliability of wind turbines  

SciTech Connect

FAROW is a computer program that evaluates the fatigue and reliability of wind turbine components using structural reliability methods. A deterministic fatigue life formulation is based on functional forms of three basic parts of the wind turbine fatigue calculation: (1) the loading environment, (2) the gross level of structural response given the load environment, and (3) the local failure criterion given both load environment and gross stress response. The calculated lifetime is compared with a user-specified target lifetime to assess probabilities of premature failure. The parameters of the functional forms can be defined as either constants or random variables. The reliability analysis uses the deterministic lifetime calculation as the limit state function of a FORM/SORM (first- and second-order reliability methods) calculation based on techniques developed by Rackwitz. Besides the probability of premature failure, FAROW calculates the mean lifetime, the relative importance of each of the random variables, and the sensitivity of the results to all of the input parameters, both constant inputs and the parameters that define the random variable inputs. The ability to check the probability of failure with Monte Carlo simulation is included as an option.

Veers, P.S. [Sandia National Labs., Albuquerque, NM (US); Lange, C.H.; Winterstein, S.R. [Stanford Univ., CA (US). Civil Engineering Dept.

1993-07-01
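
A sketch of the Monte Carlo option mentioned at the end of this record: sample the random inputs, evaluate the deterministic lifetime, and compare with the target. The generic S-N lifetime model and distributions below are stand-ins, not FAROW's actual functional forms.

```python
# Monte Carlo check of P(lifetime < target) under an assumed generic
# S-N fatigue model. All distributions/constants are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
target_years = 20.0

C = rng.lognormal(mean=12.0, sigma=0.5, size=n)    # material capacity
S = rng.weibull(2.0, size=n) * 10.0 + 1.0          # load amplitude
m = 3.0                                            # assumed S-N slope

lifetime = C / S**m                                # deterministic model
p_fail = np.mean(lifetime < target_years)
print(f"P(lifetime < {target_years} yr) ~ {p_fail:.4f}")
```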

492

Complete classification of parallel Lorentz surfaces in four-dimensional neutral pseudosphere  

SciTech Connect

A Lorentz surface of an indefinite space form is called parallel if its second fundamental form is parallel with respect to the Van der Waerden-Bortolotti connection. Such surfaces are locally invariant under the reflection with respect to the normal space at each point. Parallel surfaces are important in geometry as well as in ge